\section{Introduction and background} A signed multidigraph $G_\sigma$ is a pair that consists of a multidigraph $G$ (a digraph possibly with multiple arcs) and a function $\sigma$, called the sign, from the arcs of $G$ into the set $\{1,-1\}$. Throughout this paper, all digraphs are allowed to have multiple signed arcs; when digraphs have neither multiple nor signed arcs, we refer to them as graphs. Given a set of variables $X_G=\{x_u\, :\, u\in V(G)\}$ indexed by the vertices of $G$ and a principal ideal domain (PID) $\mathcal{P}$, the generalized Laplacian matrix $L(G_\sigma,X_G)$ of $G_\sigma$ is the matrix whose entries are given by \[ L(G_\sigma,X_G)_{uv}=\begin{cases} x_u& \text{ if } u=v,\\ -\sigma(uv) m_{uv} 1_{\mathcal{P}}& \text{ otherwise}, \end{cases} \] where $m_{uv}$ is the number of arcs leaving $u$ and entering $v$, and $1_{\mathcal{P}}$ is the identity of $\mathcal{P}$. Moreover, if $\mathcal{P}[X_G]$ is the polynomial ring over $\mathcal{P}$ in the variables $X_G$, then the critical ideals of $G_\sigma$ are the determinantal ideals given by \[ I_i(G_\sigma,X_G)=\langle \{ {\rm det} (m) \, : \, m \text{ is an }i\times i \text{ submatrix of }L(G_\sigma,X_G)\}\rangle\subseteq \mathcal{P}[X_G], \] for all $1\leq i\leq |V(G)|$. We say that a critical ideal is trivial when it is equal to $\langle1_{\mathcal{P}}\rangle$. For simplicity, we write $I_i(G_\sigma,X)$ instead of $I_i(G_\sigma,X_G)$. \begin{Definition} The algebraic co-rank $\gamma_\mathcal{P}(G_\sigma)$ of $G_\sigma$ is the maximum integer $i$ such that $I_i(G_\sigma,X)$ is trivial. \end{Definition} Since $I_n(G_\sigma,X)=\langle {\rm det} (L(G_\sigma,X))\rangle\neq \langle 1\rangle$, where $n=|V(G)|$, we have $\gamma_\mathcal{P}(G_\sigma)\leq n-1$. The algebraic co-rank of a graph is closely related to combinatorial properties of the graph. 
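For small examples, the matrix $L(G_\sigma,X_G)$ and a generating set of $I_i(G_\sigma,X)$ are straightforward to compute. The following is a minimal SymPy sketch; the function names and the two-vertex example below are ours, for illustration only.

```python
import itertools
import sympy as sp

def generalized_laplacian(n, arcs):
    """Generalized Laplacian L(G_sigma, X): the variable x_u sits at
    position (u, u), and entry (u, v) is -sigma(uv) * m_uv.

    `arcs` maps an ordered pair (u, v) to (m_uv, sigma_uv), the number
    of arcs from u to v and their common sign.
    """
    X = sp.symbols(f"x1:{n + 1}")
    L = sp.diag(*X)
    for (u, v), (m, sigma) in arcs.items():
        L[u, v] = -sigma * m
    return L, X

def critical_ideal_generators(L, i):
    """Determinants of all i x i submatrices of L, i.e. a generating
    set of the i-th critical ideal I_i."""
    idx = list(range(L.rows))
    return [sp.expand(L.extract(list(r), list(c)).det())
            for r in itertools.combinations(idx, i)
            for c in itertools.combinations(idx, i)]
```

For instance, on two vertices with two negative arcs from the first vertex to the second and one positive arc back, the $1\times 1$ minors include the unit $-1$, so $I_1$ is trivial, in line with the definition above.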
For instance, if $H_\sigma$ is an induced subgraph of $G_\sigma$, then $I_i(H_\sigma,X)\subseteq I_i(G_\sigma,X)$ for all $1\leq i\leq |V(H)|$ (see~\cite[Proposition 3.3]{critical}). Therefore, $\gamma_\mathcal{P}(H_\sigma)\leq\gamma_\mathcal{P}(G_\sigma)$. Also, if $\alpha(G)$ and $\omega(G)$ denote the stability number and the clique number of $G$, respectively, then \[ \gamma_\mathcal{P}(G)\leq 2(n-\omega(G))+1\text{ and } \gamma_\mathcal{P}(G)\leq 2(n-\alpha(G)), \] see \cite[Theorem 3.13]{critical}. We now introduce the operations of duplication and replication of vertices. Given a multidigraph $G$ and a vertex $v\in V(G)$, duplicating the vertex $v$ consists of adding a new vertex $v^1$ to $V(G)$ and making it adjacent to each neighbor of $v$, respecting the multiplicities and signs of arcs. Let $d(G, v)$ denote the multidigraph obtained from $G$ after duplicating the vertex $v$. Similarly, replicating the vertex $v$ consists of duplicating $v$ and adding the arcs $vv^1$ and $v^1v$. Let $r(G, v)$ denote the multidigraph obtained from $G$ by replicating the vertex $v$. Two vertices $u$ and $v$ are called twins if they have the same neighborhood. In the literature, duplicated vertices are also known as false twins, and replicated vertices are also named true twins. Let $d^k(G,v)$ denote the multidigraph obtained from $G$ by duplicating the vertex $v$ a total of $k$ times, and similarly for $r^k(G,v)$. Given ${\bf d}\in \mathbb{Z}^{|V|}$, let $G^{\bf d}$ be the graph obtained from $G$ by duplicating the vertex $v$ ${\bf d}_v$ times if ${\bf d}_v>0$, and replicating $v$ $-{\bf d}_v$ times if ${\bf d}_v<0$, for each $v\in V(G)$. Note that $G=G^{\bf 0}$. Let $V(G^{\bf d},v)$ denote the vertex set $\{v,v^1,\ldots,v^{|{\bf d}_v|}\}$ created by either duplicating or replicating the vertex $v$. To simplify the notation, the vertex $v$ will also be denoted by $v^0$. The following example illustrates these concepts. 
\begin{Example} Let $C_4$ be the cycle with four vertices and ${\bf d}=(-1,1,1,1)$. Thus $C_4^{\bf d}$ is the graph with eight vertices shown in Figure~\ref{fig:01}.b. \begin{figure}[h] \begin{center} \begin{tabular}{c@{\extracolsep{20mm}}c} \begin{tikzpicture}[line width=1pt, scale=0.9] \tikzstyle{every node}=[inner sep=0pt, minimum width=4.5pt] \draw (225:0.8) node (v1) [draw, circle, fill=gray] {}; \draw (315:0.8) node (v2) [draw, circle, fill=gray] {}; \draw (45:0.8) node (v3) [draw, circle, fill=gray] {}; \draw (135:0.8) node (v4) [draw, circle, fill=gray] {}; \draw (225:1.1) node () {\small $a$}; \draw (315:1.1) node () {\small $b$}; \draw (45:1.1) node () {\small $c$}; \draw (135:1.1) node () {\small $d$}; \draw (v1) edge (v2); \draw (v2) edge (v3); \draw (v3) edge (v4); \draw (v4) -- (v1); \end{tikzpicture} & \begin{tikzpicture}[line width=1pt, scale=0.9] \tikzstyle{every node}=[inner sep=0pt, minimum width=4pt] \draw (225:0.8) node (v1) [draw, circle, fill=gray] {}; \draw (315:0.8) node (v2) [draw, circle, fill=gray] {}; \draw (45:0.8) node (v3) [draw, circle, fill=gray] {}; \draw (135:0.8) node (v4) [draw, circle, fill=gray] {}; \draw (225:1.2) node (v1p) [draw, circle, fill=gray] {}; \draw (315:1.2) node (v2p) [draw, circle, fill=gray] {}; \draw (45:1.2) node (v3p) [draw, circle, fill=gray] {}; \draw (135:1.2) node (v4p) [draw, circle, fill=gray] {}; \draw (225:1.9) node () {\small $V(C_4^{\bf d}, a)$}; \draw (315:1.9) node () {\small $V(C_4^{\bf d}, b)$}; \draw (45:1.9) node () {\small $V(C_4^{\bf d}, c)$}; \draw (135:1.9) node () {\small $V(C_4^{\bf d}, d)$}; \draw (225:0.55) node () {\small $a$}; \draw (315:0.55) node () {\small $b$}; \draw (45:0.55) node () {\small $c$}; \draw (135:0.55) node () {\small $d$}; \draw (215:1.35) node () {\small $a^1$}; \draw (325:1.4) node () {\small $b^1$}; \draw (35:1.4) node () {\small $c^1$}; \draw (145:1.35) node () {\small $d^1$}; \draw (v1) edge (v2); \draw (v2) edge (v3); \draw (v3) edge (v4); \draw (v4) -- 
(v1); \draw (v1p) edge (v2p); \draw (v1p) edge (v2); \draw (v1) edge (v2p); \draw (v2p) edge (v3p); \draw (v2p) edge (v3); \draw (v2) edge (v3p); \draw (v3p) edge (v4p); \draw (v3p) edge (v4); \draw (v3) edge (v4p); \draw (v4p) -- (v1p); \draw (v4p) -- (v1); \draw (v4) -- (v1p); \draw (v1p) edge (v1); \end{tikzpicture} \\ $(a)$ & $(b)$ \end{tabular} \end{center} \caption{The cycle with four vertices and $C_4^{(-1,1,1,1)}$.} \label{fig:01} \end{figure} \end{Example} Critical ideals were defined in \cite{critical} as a refinement of the critical group of a graph. We now introduce the critical group of a multidigraph. The Laplacian matrix $L(G_\sigma)$ of $G_\sigma$ is the evaluation of $L(G_\sigma,X)$ at $X=D_G$, where $D_G$ is the out-degree vector of $G$. By considering $L(G_\sigma)$ as a linear map $L(G_\sigma):\mathbb{Z}^V\rightarrow \mathbb{Z}^V$, the cokernel of $L(G_\sigma)$ is the quotient module $\mathbb{Z}^{V}/{\rm Im}\, L(G_\sigma)$. The torsion part of this module is the critical group $K(G_\sigma)$ of $G_\sigma$. The critical group has been studied intensively in several contexts over the last 30 years, under various names: the {\it group of components} \cite{lorenzini1991,lorenzini2008}, the {\it Picard group} \cite{bhn,biggs1999}, the {\it Jacobian group} \cite{bhn,biggs1999}, the {\it sandpile group} \cite{cone,cori}, the {\it chip-firing game} \cite{biggs1999,merino}, or {\it Laplacian unimodular equivalence} \cite{gmw,merris}. Recently, the critical ideals have played an important role in understanding and classifying the graphs whose critical group has $i$ invariant factors equal to one, see~\cite{g2,g3}. In general, the relations between the critical group and other parameters of a graph $G$ remain unknown. There are few natural constructions of graphs which behave well with respect to the critical group. For example, the critical group $K(G\sqcup H)$ of a disjoint union $G\sqcup H$ of two graphs $G$ and $H$ is isomorphic to $K(G)\oplus K(H)$. 
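This disjoint-union behaviour is easy to verify computationally: the invariant factors of the cokernel can be read off the Smith normal form of the Laplacian matrix. Below is a minimal SymPy sketch; it assumes `smith_normal_form` from `sympy.matrices.normalforms` is available, and the triangle $K_3$ is our illustrative example, not one from the paper.

```python
import sympy as sp
from sympy.matrices.normalforms import smith_normal_form

def critical_group(L):
    """Invariant factors (> 1) of the torsion part of coker(L),
    read off the Smith normal form of an integer Laplacian L."""
    D = smith_normal_form(sp.Matrix(L), domain=sp.ZZ)
    diag = [abs(D[i, i]) for i in range(min(D.rows, D.cols))]
    return [d for d in diag if d not in (0, 1)]

# Laplacian of the triangle K_3 (degree 2 on the diagonal)
L3 = sp.Matrix([[2, -1, -1], [-1, 2, -1], [-1, -1, 2]])
# The Laplacian of the disjoint union of two copies of K_3 is block diagonal
L33 = sp.diag(L3, L3)
```

Here `critical_group(L3)` gives the single invariant factor $3$, so $K(K_3)\cong\mathbb{Z}_3$, while the block-diagonal Laplacian of $K_3\sqcup K_3$ yields invariant factors $3,3$, matching $K(G\sqcup H)\cong K(G)\oplus K(H)$.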
Moreover, in \cite{wagner} it was proved that if the graphic matroids of $G$ and $H$ are isomorphic, then their critical groups are isomorphic. This was proved by studying the operations of {\it splittings} or {\it mergings of one-vertex cuts} and {\it twistings of two-vertex cuts}. Other operations on graphs have been explored, such as the {\it cone of a graph} \cite{cone}, the {\it line graph} \cite{bmmpr, levine}, and the {\it clique-inserted graph} \cite{cz}. The main goal of this article is to give a description of the critical ideals of signed multidigraphs with twin vertices. More precisely, given a graph $G$ and $\delta\in \{0,1,-1\}^{|V|}$, let \[ \mathcal{T}_{\delta}(G)=\{G^{\bf d}: {\bf d}\in \mathbb{Z}^{|V|} \text{ such that } {\rm supp}({\bf d})=\delta\}, \] where \[ {\rm supp}({\bf d})_v= \begin{cases} -1 & \text{ if } {\bf d}_v < 0,\\ 1 & \text{ if } {\bf d}_v > 0,\\ 0 & \text{ otherwise.} \end{cases} \] We prove that more than one half of the critical ideals of the graphs in $\mathcal{T}_{\delta}(G)$ are determined by the critical ideals of $G$, see Theorems~\ref{teo:deq} and \ref{teo:req}. Moreover, the algebraic co-rank of any graph in $\mathcal{T}_{\delta}(G)$ is equal to the algebraic co-rank of $G^{\delta}$ (see Corollary~\ref{coro:bound}), which is less than or equal to the number of vertices of $G$ and is determined by a simple evaluation of the critical ideals of $G$. We illustrate these results by presenting a simple example. Consider the path $P_3$ with three vertices. Then $\gamma_{\mathcal{P}}(P_3)=2$ and \[ I_3(P_3, X)=\langle x_1x_2x_3-x_1-x_3\rangle=\langle p\rangle. \] Our goal is to describe the critical ideals of the graphs obtained by duplicating or replicating some of the vertices of $P_3$; in particular, we are interested in their algebraic co-ranks. 
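These claims about $P_3$ can be checked directly. The following short SymPy verification (ours, for illustration) confirms the generator $p$, exhibits a unit $2\times 2$ minor witnessing that $I_2(P_3,X)$ is trivial, and evaluates $p$ at the sign vectors that appear in the discussion of the families $\mathcal{T}_{\delta}(P_3)$.

```python
import sympy as sp

x1, x2, x3 = sp.symbols("x1 x2 x3")
# Generalized Laplacian of the path P_3, with variables on the diagonal
L = sp.Matrix([[x1, -1,  0],
               [-1, x2, -1],
               [ 0, -1, x3]])

p = sp.expand(L.det())                        # generator of I_3(P_3, X)
unit_minor = L.extract([0, 1], [1, 2]).det()  # a 2x2 minor, equal to 1

# Evaluations of p at sign vectors delta in {1, -1}^3
evals = {d: p.subs(list(zip((x1, x2, x3), d)))
         for d in [(-1, -1, -1), (-1, -1, 1), (-1, 1, 1), (-1, 1, -1)]}
```

Since the minor on rows $\{1,2\}$ and columns $\{2,3\}$ is the unit $1$, the ideal $I_2(P_3,X)$ is trivial, while $I_3(P_3,X)=\langle p\rangle$; the evaluations of $p$ are units except at $(-1,1,-1)$, where the value is $3$.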
For our example, we want to calculate the algebraic co-rank of the graphs in one of the following families: \[ \mathcal{T}_{(-1,-1,-1)}(P_3), \mathcal{T}_{(-1,-1,1)}(P_3), \mathcal{T}_{(-1,1,1)}(P_3), \text{ and }\mathcal{T}_{(-1,1,-1)}(P_3). \] Since any graph in one of these families contains $P_3$ as an induced subgraph, its algebraic co-rank is greater than or equal to two. Theorem~\ref{teo:rd} and Corollary~\ref{coro:bound} imply that the algebraic co-rank of any of these graphs is less than or equal to three, the number of vertices of $P_3$. Moreover, all the graphs in each of the families have the same algebraic co-rank, and this can be computed by evaluating the third critical ideal of $P_3$. For instance, the algebraic co-rank of any graph in $\mathcal{T}_{(-1,-1,-1)}(P_3)$ is equal to three because \[ p(-1,-1,-1)=(-1)(-1)(-1)-(-1)-(-1)=-1+1+1=1. \] A similar argument applies to $\mathcal{T}_{(-1,-1,1)}(P_3)$ and $\mathcal{T}_{(-1,1,1)}(P_3)$. The case of $\mathcal{T}_{(-1,1,-1)}(P_3)$ is more interesting. Since $p(-1,1,-1)=3$, the algebraic co-rank depends on the base ring $\mathcal{P}$. For instance, if $\mathcal{P}=\mathbb{Z}$, then the algebraic co-rank of any graph in $\mathcal{T}_{(-1,1,-1)}(P_3)$ is two. However, if $\mathcal{P}$ is a finite field of characteristic different from three, then the algebraic co-rank of any graph in $\mathcal{T}_{(-1,1,-1)}(P_3)$ is three. Obtaining the description of the critical ideals of the graphs in a family $\mathcal{T}_{\delta}(G)$ is a difficult task. However, we can obtain information about more than one half of the critical ideals of the graphs in $\mathcal{T}_{\delta}(G)$, see Remark~\ref{half}. In Section~\ref{Sbipartite}, the reader will find a description of some of the critical ideals of $\mathcal{T}_{(1,1)}(P_2)$ computed by using results contained in this article. 
More precisely, as the vertices are duplicated or replicated several times, the initial critical ideals behave similarly to the critical ideals of the disjoint union of complete and trivial graphs. These results are important in the study of critical ideals of graphs, in particular, in the understanding of the algebraic co-rank of a graph. For instance, in the classification of the graphs with algebraic co-rank less than or equal to $k$, these results allow us to get some insights into the minimal $k$-forbidden graphs, which help in defining the $k$-basic signed graphs. It is important to note that there are several important families of graphs in $\bigcup_{(G, \delta)\in \mathcal{G}} \mathcal{T}_{\delta}(G)$ for some set $\mathcal{G}$ of pairs $(G, \delta)$. For instance, the complete multipartite graphs are equal to $\bigcup_{i=1}^{\infty} \mathcal{T}_{{\bf 1}_i}(K_i)$, where $K_i$ is the complete graph with $i$ vertices and ${{\bf 1}_i}$ is the vector of size $i$ all of whose entries are equal to $1$. Threshold graphs and quasi-threshold graphs can be described in a similar way. Moreover, cographs and distance-hereditary graphs are families of graphs with twin vertices. The article is structured as follows. In Section~\ref{rd}, we obtain relations between evaluations of the critical ideals of a signed multidigraph $G$ and the critical ideals of the graphs obtained by duplicating or replicating a number of vertices. Then, we get a partial description of the critical ideals of the graph $G^{\bf d}$ for some ${\bf d}\in \mathbb{Z}^{V(G)}$. As a consequence, we get an upper bound for the algebraic co-rank of graphs with twins. To finish this section, we pose conjectures which lead to a wide and interesting outlook on the algebraic co-rank of graphs. In Section~\ref{sec:description}, we give precise descriptions of the critical ideals of the $k$-{\it th} duplication and $k$-{\it th} replication of a vertex $v$ in terms of the critical ideals of $G$. 
Finally, we present some applications of our results. \section{An upper bound for the algebraic co-rank of graphs with twins}\label{rd} The objective of this section is to study critical ideals of graphs with twin vertices. We begin this section by calculating the minors (which are almost always equal to zero) of the join of matrices in Lemma~\ref{lema:det1}. By using this lemma, we get a first description for the critical ideals of the graph obtained by duplicating or replicating vertices (see Lemma~\ref{lema:d} and Theorem~\ref{teo:rd}). Then, we get an upper bound for the algebraic co-rank of a graph with twins (see Corollary~\ref{coro:bound}). In fact, this bound is tight since the equality holds for complete graphs (see Example~\ref{example:completa}). This upper bound can be used in the classification of the graphs that have algebraic co-rank less than or equal to an integer $k$, see~\cite{g2} and \cite{g3}. Let $\mathcal P$ be a commutative ring with identity, and let $M_n (\mathcal{P})$ denote the set of $n\times n$ matrices with entries in $\mathcal{P}$; similarly, $M_{p\times q}(\mathcal{P})$ denotes the set of $p\times q$ matrices. Given two vectors ${\bf a}\in \mathcal{P}^{q_1}$ and ${\bf b}\in \mathcal{P}^{q_2}$ and two matrices $P\in M_{p_1\times p_2}(\mathcal{P})$ and $Q\in M_{q_1\times q_2}(\mathcal{P})$ such that $p_1+q_1=p_2+q_2$, the join $J(P,{\bf a};Q,{\bf b})$ is the matrix \[ \left[\begin{array}{cc} P & {\bf 1}_{p_1}^T{\bf b}\\ {\bf a}^T {\bf 1}_{p_2} & Q\\ \end{array}\right] \in M_{p_1+q_1}(\mathcal{P}). \] Note that if $G\boxtimes H$ denotes the join of two graphs $G$ and $H$, then \[ L(G\boxtimes H, X)=J(L(G,X), -{\bf 1}; L(H,X), -{\bf 1}). \] \begin{figure}[h!] 
\begin{tabular}{c@{\extracolsep{2cm}}c@{\extracolsep{2cm}}c} \begin{tikzpicture}[scale=1, line width=0.7pt] \tikzstyle{every node}=[minimum width=4.5pt, inner sep=0pt, circle] \draw (0,1) node (v1) [draw, fill=gray, label=above:{\small $v_1$}] {}; \draw (0,-1) node (v2) [draw, fill=gray, label=below:{\small $v_2$}] {}; \draw (v1) edge (v2); \end{tikzpicture} & \begin{tikzpicture}[scale=1, line width=0.7pt] \tikzstyle{every node}=[minimum width=4.5pt, inner sep=0pt, circle] \draw (0,1) node (u1) [draw, fill=gray, label=above:{\small $u_1$}] {}; \draw (0,0) node (u2) [draw, fill=gray, label=right:{\small $u_2$}] {}; \draw (0,-1) node (u3) [draw, fill=gray, label=below:{\small $u_3$}] {}; \draw (u1) -- (u2) -- (u3); \end{tikzpicture} & \begin{tikzpicture}[scale=1, line width=0.7pt] \tikzstyle{every node}=[minimum width=4.5pt, inner sep=0pt, circle] \draw (-1,1) node (v1) [draw, fill=gray, label=above:{\small $v_1$}] {}; \draw (-1,-1) node (v2) [draw, fill=gray, label=below:{\small $v_2$}] {}; \draw (v1) edge (v2); \draw (1,1) node (u1) [draw, fill=gray, label=above:{\small $u_1$}] {}; \draw (1,0) node (u2) [draw, fill=gray, label=right:{\small $u_2$}] {}; \draw (1,-1) node (u3) [draw, fill=gray, label=below:{\small $u_3$}] {}; \draw (u1) -- (u2) -- (u3); \draw (v1) -- (u1) -- (v2) -- (u2) -- (v1) -- (u3) -- (v2) -- (u3) -- (v1); \end{tikzpicture} \\ $P_2$ & $P_3$ & $P_2\boxtimes P_3$ \end{tabular} \caption{The join of two paths.} \label{fig:JoinTwoPaths} \end{figure} \begin{Example} Consider the join of a path $P_2$ with $2$ vertices and a path $P_3$ with $3$ vertices, see Figure~\ref{fig:JoinTwoPaths}. Then, \begin{eqnarray*} L(P_2\boxtimes P_3,X_{P_2\boxtimes P_3})&=&J(L(P_2,X_{P_2}),{\bf -1};L(P_3,X_{P_3}),{\bf -1})\\ &=& \left[ \begin{array}{cc|ccc} x_{v_1} & -1 & -1 & -1 & -1 \\ -1 & x_{v_2} & -1 & -1 & -1 \\ \hline -1 & -1 & x_{u_1} &-1 & 0 \\ -1 & -1 & -1 & x_{u_2} &-1 \\ -1 & -1 & 0 & -1 & x_{u_3} \\ \end{array} \right]. 
\end{eqnarray*} \end{Example} The following lemma describes the determinant of the join $J(P,{\bf a}; Q,{\bf b})$. \begin{Lemma}\label{lema:det1} If $P\in M_{p_1\times p_2}(\mathcal{P})$, $Q\in M_{q_1\times q_2}(\mathcal{P})$ with $p_1+q_1=p_2+q_2$, ${\bf a}\in \mathcal{P}^{q_1}$, and ${\bf b}\in \mathcal{P}^{q_2}$, then \[ {\rm det}(J(P,{\bf a}; Q,{\bf b}))= \begin{cases} {\rm det}(P)\cdot {\rm det}(Q)- {\rm det} \left[\begin{array}{cc} P&{\bf 1}^T\\ {\bf 1}&0 \end{array}\right] \cdot {\rm det} \left[\begin{array}{cc} 0&{\bf b}\\ {\bf a}^T&Q \end{array}\right] & \text{ if } p_1=p_2,\\ \\ {\rm det} \left[\begin{array}{cc} P & {\bf 1}^T \end{array}\right] \cdot {\rm det} \left[\begin{array}{c} {\bf b}\\ Q \end{array}\right] & \text{ if } p_1=p_2+1,\\ \\ {\rm det} \left[\begin{array}{c} P\\ {\bf 1} \end{array}\right] \cdot {\rm det} \left[\begin{array}{cc} {\bf a}^T &Q \end{array}\right] & \text{ if } p_2=p_1+1,\\ 0 & \text{ otherwise.} \end{cases} \] \end{Lemma} \begin{proof} The proof follows by induction on $p_1+p_2$. Note that, if $P\in M_{1\times 0}(\mathcal{P})$, then $[\begin{array}{cc} P & {\bf 1}\end{array}]=[1]$. Also, if $P\in M_{0\times 1}(\mathcal{P})$, then $[\begin{array}{cc} P & {\bf 1}\end{array}]^T=[1]$. \end{proof} Note that all square submatrices of a join of matrices are, in fact, joins of matrices. Hence, almost all minors of a join of matrices are equal to zero. This fact will be useful in obtaining a description of the critical ideals of a graph with twin vertices (see Lemmas~\ref{lema:d}, \ref{lema:r}, \ref{lema:gend} and~\ref{lema:genr}). Given ${\bf a}\in \mathcal{P}^n$, $L\in M_n(\mathcal{P})$ and $1\leq j\leq n$, let ${\rm minors}_j(L, {\bf a})$ denote the set \[ \left\{\mathrm{det}(M): M\in M_j(\mathcal{P}) \text{ and } M= \left[\begin{array}{c} {\bf a}'\\L'\end{array}\right] \text{ for a submatrix } {\bf a}'\neq \emptyset \text{ of } {\bf a} \text{ and } L'\text{ of } L, \text{ resp.}\right\}. 
\] In a similar way, let ${\rm minors}_j({\bf a}, L)$ be the set of determinants of some submatrices of $\left[\begin{array}{cc}{\bf a}^T& L\end{array}\right]$ of size $j$. Note that ${\rm minors}_{0}({\bf a}, L)={\rm minors}_{0}(L,{\bf b})=\emptyset$, ${\rm minors}_{1}({\bf a}, L) =\{{\bf a}_i\}_{1\leq i\leq n}$, and ${\rm minors}_{1}(L,{\bf b})=\{{\bf b}_i\}_{1\leq i\leq n}$. Let $M_j(L)$ denote the set of submatrices of $L$ of size $j$. Let $G$ be a signed multidigraph with $n\geq 2$ vertices and $v$ be a vertex of $G$. It is not difficult to see that \[ L(G,X)=\left[\begin{array}{cc}x_v& {\bf b} \\ {\bf a}^T & L(G-v,X) \end{array}\right]=J(x_v, {\bf a}; L(G-v,X), {\bf b}) \] for some ${\bf a}, {\bf b}\in \mathcal{P}^{n-1}$. The following proposition tells us that the $j$-{\it th} critical ideal of $G$ is generated by four types of minors of $L(G,X)$. \begin{Proposition}\label{eq:g} If $G$ is a signed multidigraph with $n\geq 2$ vertices and $v$ is a vertex of $G$, then the critical ideal $I_j(G,X)$ of $G$ is equal to \begin{eqnarray*} \left\langle{\rm minors}_{j}(L(G-v,X)), {\rm minors}_{j}({\bf a}, L(G-v,X)),{\rm minors}_{j}(L(G-v,X),{\bf b}),\right.\\ \left.\left\{x_v\cdot {\rm det}(M)+{\rm det}( J(0,{\bf a'};M,{\bf b'})) \, :\, J(x_v,{\bf a'};M,{\bf b'}) \in M_{j}(L(G,X)) \text{ with } {\bf a}',{\bf b}' \text{ subvectors of } {\bf a},{\bf b}, \text{ resp.}\right\}\right\rangle \end{eqnarray*} for all $1\leq j\leq n-1$, and equal to $\left\langle x_v\cdot {\rm det}(L(G-v,X))+{\rm det}(J(0,{\bf a};L(G-v,X),{\bf b}))\right\rangle$ when $j=n$. \end{Proposition} \begin{proof} The proof is simple and is similar to the one given in \cite[Claim 3.12]{critical}. \end{proof} We now give a description of the critical ideals of $d(G,v)$ in terms of the critical ideals of $G$. Let $Y$ be a subset of the variables associated to the vertices of $G$ and ${\bf a}\in\mathcal{P}^{|Y|}$. 
Throughout the paper, $I(G,X)|_{Y={\bf a}}$ will denote the evaluation of $I(G,X)$ at $Y={\bf a}$, and ${\rm minors}_{j}({\bf a}, L,{\bf b})$ will be the set \[ \left\{ {\rm det}(M)\, : \, M=J(0,{\bf a}';L',{\bf b}') \in M_j\left(J(0,{\bf a};L,{\bf b})\right)\text{ with } L' \text{ a submatrix of } L \text{ and } {\bf a}',{\bf b}'\neq \emptyset \text{ subvectors of } {\bf a},{\bf b}, \text{ respectively}\right\}. \] Note that ${\rm minors}_{1}({\bf a}, L(G- v, X),{\bf b})=\emptyset$. \begin{Lemma}\label{lema:d} Let $G$ be a signed multidigraph with $n\geq 2$ vertices, $v\in V(G)$ and $v^1$ a duplication of $v$. Then \[ I_{j}(d(G,v),X)\subseteq \langle x_{v},x_{v^1}, I_{j}(G,X)|_{x_v=0}\rangle, \] for all $1\leq j\leq n$. Moreover, $I_{j}(d(G,v),X)$ is trivial if and only if $I_{j}(G,X)|_{x_v=0}$ is trivial. \end{Lemma} \begin{proof} The main idea is to give a description of the $j$-{\it th} critical ideal of $d(G,v)$ in terms of some types of minors, similar to the one given in Proposition~\ref{eq:g}, and then use this description to prove the containment. First, it is not difficult to see that $I_1(d(G, v),X)=\langle x_{v},x_{v^1}, I_1(G,X)|_{x_v=0}\rangle$. Now, let $\mathcal{I, I'}\subseteq [n+1]$ be two sets of size $j$, and $\mathcal I_{\{1,2\}}=\{1,2\}\cap \mathcal I$ and $\mathcal I'_{\{1,2\}}=\{1,2\}\cap \mathcal I'$. Without loss of generality, we may order the vertices such that $x_{v^1}$ is in the entry $(1,1)$ and $x_v$ is in the entry $(2,2)$ of $L(d(G,v),X)$. Clearly, \[ L(d(G,v),X)=J({\rm diag}(x_{v^1}, x_{v}), {\bf a}; L(G- v, X), {\bf b}), \] where $L(G,X)=J(x_v, {\bf a}; L(G-v,X), {\bf b})$ for some ${\bf a}, {\bf b}\in \mathcal{P}^{n-1}$. Let $m_{\mathcal{ I,I'}}={\rm det}(L(d(G,v), X)[\mathcal{I,I'}])\in I_j(d(G,v),X)$. 
If $\mathcal I_{\{1,2\}}= \mathcal I'_{\{1,2\}}=\{a\}$, then Lemma~\ref{lema:det1} implies that for some matrix $J(x_{v},{\bf a'};M,{\bf b'})\in M_{j}(L(G,X))$ with $M\in M_{j-1}(L(G-v,X))$, \[ m_{\mathcal{I,I'}}={\rm det}(J(x_{v^a},{\bf a'};M,{\bf b'})) = x_{v^a}\cdot {\rm det}(M)+{\rm det}(J(0,{\bf a'};M,{\bf b'})). \] If $|\mathcal I_{\{1,2\}}|=|\mathcal I'_{\{1,2\}}|=1$ and $\mathcal I_{\{1,2\}}\cap \mathcal I'_{\{1,2\}}=\emptyset$, then $m_{\mathcal {I,I'}}={\rm det}(J(0,{\bf a'};M,{\bf b'}))$ for some $J(0,{\bf a'};M,{\bf b'}) \in M_{j}(L(G,X))$. On the other hand, since ${\rm det}(J(x,1;1,0))={\rm det}(J(x,0;1,1))=x$, \[ m_{\mathcal{I,I'}}\in \begin{cases} \left\{ x_{v^i}\cdot {\rm minors}_{j-1}({\bf a}, L(G- v, X))\right\}_{i=0}^1 & \text{ if } |\mathcal I_{\{1,2\}}|=2, |\mathcal I'_{\{1,2\}}|=1,\\ \left\{ x_{v^i}\cdot {\rm minors}_{j-1}(L(G- v, X), {\bf b})\right\}_{i=0}^1 & \text{ if } |\mathcal I_{\{1,2\}}|=1, |\mathcal I'_{\{1,2\}}|=2. \end{cases} \] Finally, since ${\rm det}(J({\rm diag}(x_{v^1},x_{v}),(1,1);0,(1,1)))=-(x_{v^1}+x_{v})$, Lemma~\ref{lema:det1} implies that $m_{\mathcal {I,I'}}$ belongs to {\small \[ S_{j}(G,v)\,=\, \left\{x_{v}x_{v^1}\cdot {\rm det}(M)\,+\,(x_{v}+x_{v^1})\cdot {\rm det}(J(0,{\bf a'};M,{\bf b'})) \, :\, J(x_v,{\bf a}';M,{\bf b}') \in M_{j-1}(L(G,X)) \text{ with } {\bf a}',{\bf b}'\neq \emptyset \right\}, \] } when $\mathcal I_{\{1,2\}}$ and $\mathcal I'_{\{1,2\}}$ are equal to $\{1,2\}$. By convention $S_{1}(G,v)=\{x_v\}$ and $S_{2}(G,v)=\{x_{v}x_{v^1}\}$. 
Therefore, for $1\leq j\leq n-1$, the $j$-{\it th} critical ideal of $d(G,v)$ has the following expression: \begin{eqnarray}\label{eq:d} \nonumber I_{j}(d(G,v),X)&=&\langle {\rm minors}_{j}(L(G- v, X)), \left\{ x_{v^i}\cdot {\rm minors}_{j-1}(L(G- v, X))\right\}_{i=0}^1,\\ \nonumber && {\rm minors}_{j}({\bf a}, L(G- v, X)),\left\{ x_{v^i}\cdot {\rm minors}_{j-1}({\bf a}, L(G- v, X))\right\}_{i=0}^1,\\ && {\rm minors}_{j}(L(G- v, X),{\bf b}), \left\{ x_{v^i}\cdot {\rm minors}_{j-1}(L(G- v, X), {\bf b})\right\}_{i=0}^1,\\ \nonumber && {\rm minors}_{j}({\bf a}, L(G- v, X),{\bf b}), S_{j}(G,v) \rangle. \end{eqnarray} Thus, $I_2(d(G, v),X)\subseteq \langle x_{v},x_{v^1}, I_2(G,X)|_{x_v=0}\rangle$. Also, in a similar way, $I_{n}(d(G,v),X)$ is equal to \begin{eqnarray*} \langle \left\{ x_{v^i}\cdot {\rm det}(L(G-v,X))\right\}_{i=0}^1, {\rm det}( J(0,{\bf a};L(G-v,X),{\bf b})), \left\{ x_{v^i}\cdot {\rm minors}_{n-1}({\bf a}, L(G-v,X))\right\}_{i=0}^1,\\ \left\{ x_{v^i}\cdot {\rm minors}_{n-1}(L(G-v,X), {\bf b})\right\}_{i=0}^1, S_{n}(G,v) \rangle. \end{eqnarray*} On the other hand, by Proposition~\ref{eq:g} we have that $I_{j}(G, X)|_{x_v=0}$ is equal to { \[ \langle {\rm minors}_{j}(L(G-v,X)), {\rm minors}_{j}({\bf a}, L(G-v,X)), {\rm minors}_{j}(L(G-v,X), {\bf b}), {\rm minors}_{j}({\bf a},L(G-v,X),{\bf b}) \rangle, \] } \noindent for $1\leq j\leq n-1$, and $I_{n}(G, X)|_{x_v=0}=\langle {\rm det }(L(G,X)|_{x_v=0})\rangle= \langle {\rm det} (J(0,{\bf a}; L(G-v,X_{G-v}),{\bf b})) \rangle$. By using the previous equalities, we get that \[ I_{j}(d(G,v),X) \subseteq \langle x_{v}, x_{v^1}, I_{j}(G, X)|_{x_v=0} \rangle \] for $1\leq j\leq n$. Therefore $I_{j}(d(G,v),X)$ is trivial if and only if $I_{j}(G,X)|_{x_v=0}$ is trivial. \end{proof} Now, we give a description of the critical ideals of the replication of a vertex of a signed multidigraph. \begin{Lemma}\label{lema:r} Let $G$ be a signed multidigraph with $n\geq 2$ vertices and $v$ be a vertex of $G$. 
Then \[ I_{j}(r(G,v),X)\subseteq \langle x_{v}+1,x_{v^1}+1, I_{j}(G,X)|_{x_v=-1}\rangle, \] for all $1\leq j \leq n$. Moreover, $I_{j}(r(G,v),X)$ is trivial if and only if $I_{j}(G,X)|_{x_v=-1}$ is trivial. \end{Lemma} \begin{proof} The proof is analogous to that of Lemma~\ref{lema:d}; the significant difference is the use of the identity ${\rm det}(J(-1,{\bf a};M,{\bf b})) = -{\rm det}(M)+{\rm det}(J(0,{\bf a};M,{\bf b}))$. First, for all $1\leq j\leq n-1$, the $j$-{\it th} critical ideal of the graph obtained by replicating vertex $v$ in $G$ has the following expression: \begin{eqnarray}\label{eq:r} \nonumber I_{j}(r(G,v),X)&=&\langle {\rm minors}_{j}(L(G-v,X)), \left\{ (x_{v^i}+1)\cdot {\rm minors}_{j-1}(L(G-v,X))\right\}_{i=0}^1,\\ \nonumber && {\rm minors}_{j}({\bf a}, L(G-v,X)),\left\{ (x_{v^i}+1)\cdot {\rm minors}_{j-1}({\bf a},L(G-v,X))\right\}_{i=0}^1,\\ && {\rm minors}_{j}(L(G-v,X), {\bf b}), \left\{ (x_{v^i}+1)\cdot {\rm minors}_{j-1}(L(G-v,X), {\bf b})\right\}_{i=0}^1,\\ \nonumber && R_j(G,v), \widetilde{S}_{j}(G,v) \rangle, \end{eqnarray} \noindent where $R_j(G,v)=\left\{ {\rm det}(J(-1,{\bf a'};M,{\bf b'})) = -{\rm det}(M)+{\rm det}(J(0,{\bf a'};M,{\bf b'})) \, :\, J(x_v,{\bf a'};M,{\bf b'}) \in M_{j}(L(G,X))\right\}$ and { $\widetilde{S}_{j}(G,v)=\{(x_{v}+1)(x_{v^1}+1)\cdot {\rm det}(M)+((x_{v}+1)+(x_{v^1}+1))\cdot {\rm det}(J(-1,{\bf a'};M,{\bf b'})) \, :\, J(x_v,{\bf a'};M,{\bf b'}) \in M_{j-1}(L(G,X))\}$. } Besides, the $n$-{\it th} critical ideal of $r(G,v)$ has the following expression: \begin{eqnarray*} I_{n}(r(G,v),X)&=& \langle \left\{ (x_{v^i}+1)\cdot {\rm det}(L(G-v,X))\right\}_{i=0}^1, {\rm det}(J(-1,{\bf a};L(G-v,X),{\bf b}) ),\\ & & \left\{ (x_{v^i}+1)\cdot {\rm minors}_{n-1}({\bf a},L(G-v,X))\right\}_{i=0}^1, \\ & & \left\{ (x_{v^i}+1)\cdot {\rm minors}_{n-1}(L(G-v,X), {\bf b})\right\}_{i=0}^1, \widetilde{S}_{n}(G,v)\rangle. 
\end{eqnarray*} On the other hand, by Proposition \ref{eq:g} we have that $I_{j}(G, X)|_{x_v=-1}$ is equal to \[ \langle {\rm minors}_{j}(L(G-v,X)), {\rm minors}_{j}({\bf a}, L(G-v,X)), {\rm minors}_{j}(L(G-v,X), {\bf b}), R_j(G,v) \rangle, \] for all $1\leq j\leq n-1$, and $I_{n}(G, X)|_{x_v=-1}=\langle {\rm det }(L(G,X))|_{x_v=-1}\rangle= \langle {\rm det}(J(-1,{\bf a};L(G-v,X),{\bf b}) ) \rangle$. Therefore, \[ I_{j}(r(G,v),X) \subseteq \langle x_{v}+1, x_{v^1}+1, I_{j}(G, X)|_{x_v=-1} \rangle \] for all $1\leq j\leq n$. Finally, it is clear that $I_{j}(r(G,v),X)$ is trivial if and only if $I_{j}(G,X)|_{x_v=-1}$ is trivial. \end{proof} The next example shows a signed multidigraph satisfying the equality in the inclusions given in Lemmas~\ref{lema:d} and~\ref{lema:r}. \begin{Example} Let $G$ be the cycle with five vertices, where the arcs $v_2v_1$ and $v_1v_5$ have negative signs, see Figure~\ref{fig:00}. \begin{figure}[h] \begin{center} \begin{tabular}{c@{\extracolsep{2cm}}c} \multirow{9}{40mm}{ \vspace{40mm} \begin{tikzpicture}[scale=1, line width=0.7pt] \tikzstyle{every node}=[minimum width=4.5pt, inner sep=0pt, circle] \draw (72+18:1) node (v1) [draw, fill=gray, label=above:{\small $v_1$}] {}; \draw (144+18:1) node (v2) [draw, fill=gray, label=left:{\small $v_2$}] {}; \draw (216+18:1) node (v3) [draw, fill=gray, label=below left:{\small $v_3$}] {}; \draw (288+18:1) node (v4) [draw, fill=gray, label=below right:{\small $v_4$}] {}; \draw (18:1) node (v5) [draw, fill=gray, label=right:{\small $v_5$}] {}; \draw(v2) edge (v3); \draw (v3) edge (v4); \draw (v4) edge (v5); \path[->,bend right, red] (v2) edge node[below] {\small $-$} (v1) (v1) edge node[below] {\small $-$} (v5); \path[->,bend right] (v5) edge (v1) (v1) edge (v2); \end{tikzpicture} } & \\ & $ L(G, X_G)= \left[\begin{array}{ccccc} x_1 & -1 & 0 & 0 & 1 \\ 1 & x_2 & -1 & 0 & 0 \\ 0 & -1 & x_3 & -1 & 0 \\ 0 & 0 & -1 & x_4 & -1 \\ -1 & 0 & 0 & -1 & x_5 \\ \end{array}\right] $ \end{tabular} \end{center} 
\caption{A signed multidigraph $G$ with five vertices and its generalized Laplacian matrix.} \label{fig:00} \end{figure} It can be checked that the algebraic co-rank of the graph $G$ is equal to $3$ when $\mathcal P = \mathbb Z$. Since $I_4(G,X)$ is given by $\langle x_1x_2+x_4+1, x_2x_3-x_5-1, x_3x_4+x_1-1, x_4x_5-x_2-1, x_1x_5+x_3+1 \rangle$, we have $I_4(G,X)|_{x_{1}=0} = \langle x_3+1, x_4+1, x_3x_4-1, x_2x_3-x_5-1,x_4x_5-x_2-1 \rangle = \langle x_3+1, x_4+1, x_2+x_5+1 \rangle$ and \begin{eqnarray*} I_4(G,X)|_{x_{1}=-1}&=& \langle -x_5+x_3+1, -x_2+x_4+1, x_4x_5-x_2-1, x_3x_4-2, x_2x_3-x_5-1 \rangle\\ &=&\langle x_3-x_5+1, x_2-x_4-1, x_4x_5-x_4-2 \rangle. \end{eqnarray*} On the other hand, the $4$-{\it th} critical ideal $I_4(d(G,v_1),X)$ is equal to $\langle x_{1}, x_{1^1}, x_3+1, x_4+1, x_2+x_5+1 \rangle$, and the $4$-{\it th} critical ideal $I_4(r(G,v_1),X)$ is equal to \[ \langle x_{1}+1, x_{1^1}+1, x_3-x_5+1, x_2-x_4-1, x_4x_5-x_4-2 \rangle. \] \end{Example} Successive applications of Lemmas~\ref{lema:d} and~\ref{lema:r} lead to the following result: \begin{Theorem}\label{teo:rd} Let $G$ be a signed multidigraph with $n\geq 2$ vertices, ${\bf d}\in \mathbb{Z}^{n}$, and \[ \phi({\bf d})_v = \begin{cases} 0 & \text{ if }{\bf d}_v>0,\\ -1 & \text{ if }{\bf d}_v<0,\\ x_v & \text{ if }{\bf d}_v=0. \end{cases} \] Then the $j$-{\it th} critical ideal $I_{j}(G^{\bf d},X)$ is included in the ideal \[ \left\langle \{\{x_{v^i}\}_{i=0}^{{\bf d}_v}\, : {\bf d}_v\geq 1\},\{\{x_{v^i}+1\}_{i=0}^{-{\bf d}_v}\, : \, {\bf d}_v\leq -1\}, I_{j}(G,X)|_{X=\phi({\bf d})} \right\rangle \text{ for all }1\leq j \leq n. \] Moreover, $I_{j}(G^{\bf d},X)$ is trivial if and only if $I_{j}(G,X)|_{X=\phi({\bf d})}$ is trivial. \end{Theorem} This theorem shows that the algebraic co-rank of $G^{\bf d}$ is determined by an evaluation of the critical ideals of $G$. It is well known \cite{critical} that the evaluation of the critical ideals of $G$ determines the critical group of $G$. 
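As a sanity check of this kind of evaluation, the equality $I_4(G,X)|_{x_1=0}=\langle x_3+1, x_4+1, x_2+x_5+1\rangle$ for the signed $5$-cycle of Figure~\ref{fig:00} can be verified mechanically. The SymPy sketch below compares the two generating sets via Gr\"obner bases over $\mathbb{Q}$ (a slight weakening of the computation over $\mathbb{Z}$, but the two generating sets agree over $\mathbb{Z}$ as well here).

```python
import sympy as sp

x1, x2, x3, x4, x5 = sp.symbols("x1:6")
# Generators of I_4(G, X) for the signed 5-cycle example
gens = [x1*x2 + x4 + 1, x2*x3 - x5 - 1, x3*x4 + x1 - 1,
        x4*x5 - x2 - 1, x1*x5 + x3 + 1]

evaluated = [g.subs(x1, 0) for g in gens]   # generators of I_4(G, X)|_{x1 = 0}
reduced = [x3 + 1, x4 + 1, x2 + x5 + 1]     # claimed simpler generating set

# Equal reduced Groebner bases (over Q, lex order) witness equal ideals
gb_eval = sp.groebner(evaluated, x2, x3, x4, x5, order="lex")
gb_red = sp.groebner(reduced, x2, x3, x4, x5, order="lex")
```

The two reduced Gr\"obner bases coincide, confirming the simplification of the evaluated ideal used in the example.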
These facts open the question about the meaning of other evaluations of the critical ideals of a graph. The next example illustrates Lemma~\ref{lema:d}, Lemma~\ref{lema:r} and Theorem~\ref{teo:rd}. \begin{Example} Let $G$ be the graph given by Figure~\ref{fig:0}. \begin{figure}[h] \begin{center} \begin{tabular}{c@{\extracolsep{2cm}}c} \multirow{9}{3cm}{ \begin{tikzpicture}[line width=1pt, scale=1] \tikzstyle{every node}=[inner sep=0pt, minimum width=4pt] \draw (-1,1) node (v1) [draw, circle, fill=gray] {}; \draw (1,1) node (v2) [draw, circle, fill=gray] {}; \draw (-1,0) node (v3) [draw, circle, fill=gray] {}; \draw (1,0) node (v4) [draw, circle, fill=gray] {}; \draw (-1,-1) node (v5) [draw, circle, fill=gray] {}; \draw (1,-1) node (v6) [draw, circle, fill=gray] {}; \draw (v1)+(-0.3,0) node () {\small $v_1$}; \draw (v2)+(0.3,0) node () {\small $v_2$}; \draw (v3)+(-0.3,0) node () {\small $v_3$}; \draw (v4)+(0.3,0) node () {\small $v_4$}; \draw (v5)+(-0.3,0) node () {\small $v_5$}; \draw (v6)+(0.3,0) node () {\small $v_6$}; \draw (v1) -- (v3) -- (v5) -- (v6) -- (v3) -- (v2) -- (v4) -- (v6) -- (v1) -- (v4) -- (v5) -- (v2); \end{tikzpicture} } & \\ & $ L(G, X_G)= \left[\begin{array}{cccccc} x_1 & 0 & -1 & -1 & 0 & -1 \\ 0 & x_2 & -1 & -1 & -1 & 0 \\ -1 & -1 & x_3 & 0 & -1 & -1 \\ -1 & -1 & 0 & x_4 & -1 & -1 \\ 0 & -1 & -1 & -1 & x_5 & -1 \\ -1 & 0 & -1 & -1 & -1 & x_6 \end{array}\right] $\\ & \\ \end{tabular} \end{center} \caption{A graph $G$ with six vertices and its generalized Laplacian matrix.} \label{fig:0} \end{figure} Using a computer algebra system, we can see that $\gamma_{\mathbb{Z}}(G)=3$ and its non-trivial critical ideals are the following: {\small \begin{eqnarray*} I_{4}(G,X) &\!\!\!\!\!\!=\!\!\!\!\!\!& \langle x_3, x_4, x_1x_2+1, (x_1-1)x_6-2, (x_2-1)x_5-2, x_1x_5+x_5+2x_1, x_2x_6+x_6+2x_2, x_5x_6+x_6+x_5+2 \rangle,\\ I_5(G,X) &\!\!\!\!\!\!=\!\!\!\!\!\!& \langle x_2x_4x_5(x_6+1)-x_4x_6, x_2x_3x_4+x_2x_3x_6+x_2x_4x_6+2x_2x_3+2x_2x_4+x_3x_6+x_4x_6,\\ && 
x_1x_3x_4+x_1x_3x_5+x_1x_4x_5+2x_1x_3+2x_1x_4+x_3x_5+x_4x_5, x_1x_4x_6(x_5+1)-x_4x_5, \\ && x_1x_4(x_2x_6+x_2+x_6)-x_4, x_1x_3(x_2x_6+x_2+x_6)-x_3, (x_3+x_4)(x_5x_6+x_5+x_6+2)+x_3x_4,\\ && x_1x_2 (x_6+x_5)+x_5 x_6(x_1+x_2)+2(x_1x_2+x_1x_6+x_2x_5)- x_5-x_6-2 \rangle,\\ I_6(G, X)&\!\!\!\!\!\!=\!\!\!\!\!\!& \langle {\rm det}(L(G,X))\rangle. \end{eqnarray*} } From these equalities and applying Theorem~\ref{teo:rd}, we can easily obtain that the critical ideals $I_4(d(G,v_i),X)$ and $I_4(r(G,v_j),X)$ are trivial for all $i\in\{1,2\}$ and $j\in \{3,4\}$. Furthermore, the ideals $I_4(G^{{\bf e}_1-{\bf e}_6},X)$, $I_4(G^{{\bf e}_1-{\bf e}_5},X)$, $I_4(G^{{\bf e}_2-{\bf e}_5},X)$, $ I_4(G^{{\bf e}_2-{\bf e}_6},X)$, $I_4(G^{{\bf e}_5-{\bf e}_6},X)$, $I_4(G^{{\bf e}_6-{\bf e}_5},X)$ are also trivial. On the other hand, \[ I_{4}(d(G,v_6),X) = \langle x_6,x_{6^1}, I_4(G,X)|_{x_6=0}\rangle=\langle x_6,x_{6^1}, 2,x_3, x_4, x_5, x_1x_2+1 \rangle, \] \begin{eqnarray*} I_5(d(G,v_6),X) &\!\!\!\!\!\!=\!\!\!\!\!\!& \langle x_3x_6,x_3x_{6^1},x_4x_6,x_4 x_{6^1}, x_3x_5,x_4x_5,x_6(x_1x_2+1),x_{6^1}(x_1x_2+1),x_6(x_2x_5-x_5-2),\\ && x_{6^1}(x_2x_5-x_5-2), x_6(x_1x_5+x_5+2x_1), x_{6^1}(x_1x_5+x_5+2x_1), x_6x_{6^1}(x_1-1)-2(x_6+x_{6^1}),\\ && x_6x_{6^1}(x_2+1)+2x_2(x_6+x_{6^1}), (x_6x_{6^1}+x_6+x_{6^1})(x_5+1)+(x_6+x_{6^1}), x_3x_4+2x_3+2x_4,\\ && x_3(x_1x_2-1), x_4(x_1x_2-1), x_1x_2x_5+2x_1x_2+2x_2x_5-x_5-2\rangle\\ &\!\!\!\!\!\!\subsetneq \!\!\!\!\!\!&\langle x_6,x_{6^1}, I_5(G,X_G)|_{x_6=0}\rangle, \text{ and} \end{eqnarray*} \begin{eqnarray*} I_{5}(G^{{\bf e}_6-{\bf e}_5},X) &\!\!\!\!\!\!=\!\!\!\!\!\!& \langle 2(x_{5^1}\!+\!1), 2(x_5\!+\!1), x_5x_{5^1}\!-\!1, x_6\!+\!x_{6^1},x_6(x_1\!-\!1), x_6(x_2\!+\!1), x_6(x_5\!+\!1), \\ && x_6(x_{5^1}\!+\!1),x_3, x_4,x_1x_2\!-\!2x_2\!-\!1 \rangle\\ &\!\!\!\!\!\!\subsetneq\!\!\!\!\!\!&\langle x_5+1,x_{5^1}+1,x_6,x_{6^1}, I_5(G,X)|_{\{x_6=0, x_5=-1\}}\rangle. 
\end{eqnarray*} Note that $I_5(G,X)|_{\{x_6=0, x_5=-1\}}=\langle x_3, x_4,x_1x_2-2x_2-1 \rangle$, and $x_5x_{5^1}-1=(x_5+1)(x_{5^1}+1)-(x_5+1)-(x_{5^1}+1)$. \end{Example} As a consequence of Theorem~\ref{teo:rd}, we get the following bound for the algebraic co-rank of a signed multidigraph with twins. \begin{Corollary}\label{coro:bound} If $G$ is a signed multidigraph with $n$ vertices, then $\gamma_{\mathcal{P}}(G^{\bf d})=\gamma_{\mathcal{P}}(G^{{\rm supp}({\bf d})})\leq n$ for all ${\bf d}\in \mathbb{Z}^{n}$, where \[ {\rm supp}({\bf d})_v= \begin{cases} -1 & \text{ if } {\bf d}_v < 0,\\ 1 & \text{ if } {\bf d}_v > 0,\\ 0 & \text{ otherwise.} \end{cases} \] \end{Corollary} \begin{proof} Let ${\bf d}\in \mathbb{Z}^{n}$, $\delta={\rm supp}({\bf d})$, and $\gamma=\gamma_{\mathcal{P}}(G^{\delta})$. That is, $I_{\gamma}(G^\delta,X)=\langle 1_{\mathcal{P}}\rangle$ and $I_{\gamma+1}(G^\delta,X)\neq \langle 1_{\mathcal{P}}\rangle$. Since $G^\delta$ is an induced subdigraph of $G^{\bf d}$, by~\cite[Proposition 3.3]{critical} $\gamma_{\mathcal{P}}(G^{\bf d})\geq \gamma$. Now, we need to prove that $\gamma_{\mathcal{P}}(G^{\bf d})\leq \gamma$, that is, we need to prove that $I_{\gamma+1}(G^{\bf d},X)\neq \langle 1_{\mathcal{P}}\rangle$. Since $I_{\gamma+1}(G^{\delta}, X)$ is non-trivial and $\phi({\delta})=\phi({\bf d})$, applying Theorem~\ref{teo:rd} to $G$ and $\delta$, \[ I_{\gamma+1}(G,X)|_{X=\phi({\delta})}=I_{\gamma+1}(G,X)|_{X=\phi({\bf d})} \neq \langle 1_{\mathcal{P}}\rangle. \] Therefore, applying Theorem~\ref{teo:rd} to $G$ and ${\bf d}$ we get that $I_{\gamma+1}(G^{\bf d},X) \neq \langle 1_{\mathcal{P}}\rangle$. Finally, since $I_{n+1}(G,X)=\langle 0 \rangle$, \[ I_{n+1}(G^{\bf d}, X)\subseteq \langle \{x_v, \ldots, x_{v^{{\bf d}_v}}\, : \, {\bf d}_v\geq 1\},\{x_{v}+1,\ldots, x_{v^{-{\bf d}_v}}+1\, : \, {\bf d}_v\leq -1\}\rangle \neq \langle 1\rangle \] and we get that $\gamma_{\mathcal{P}}(G^{\bf d})\leq n$ for all ${\bf d}\in \mathbb{Z}^{n}$. 
\end{proof} Corollary~\ref{coro:bound} tells us that if we begin with a given graph $G$, then its algebraic co-rank can increase after several duplications (or replications) of a vertex of $G$. However, at a certain point the algebraic co-rank stabilizes. The next example shows that the upper bound given in Corollary~\ref{coro:bound} is tight. Moreover, if $\mathrm{det}(L(G,X))|_{X=\phi({\delta})}\in \mathbb{Z}\setminus \{0\}$, then $\gamma_{\mathbb{Z}}(G^{\delta})= |V(G)|$. Therefore, for any graph $G$, there exists $\delta \in \{1,-1\}^{V(G)}$ such that $\gamma_{\mathbb{Z}}(G^{\delta})= |V(G)|$. \begin{Example}\label{example:completa} Let $K_n$ be the complete graph with $n\geq 2$ vertices. By \cite[Theorem 3.15]{critical}, we have that $\gamma_{\mathcal{P}}(K_n)= 1$ and $I_n(K_n,X)=\langle P\rangle$, where \[ P=\prod_{j=1}^{n} (x_j+1) - \sum_{i=1}^n \prod_{j\neq i} (x_j+1). \] Since the evaluation of $P$ at $\{x_1=0, \ldots, x_{n-1}=0, x_n=-1\}$ is equal to $-1$, by Theorem~\ref{teo:rd} and Corollary~\ref{coro:bound}, $\gamma_{\mathcal{P}}(K_n^{\bf d})=n$ for any ${\bf d}\in \mathbb{Z}^n$ such that ${\bf d}_i\geq 1\text{ for all }1\leq i\leq n-1$ and ${\bf d}_n\leq -1$. On the other hand, by \cite[Theorem 3.16]{critical} \[ I_{n-1}(K_n, X)=\left\langle \left\{\prod_{i\in \mathcal{I}} (x_i+1) \, :\, \mathcal{I}\subseteq [n] \text{ and } |\mathcal{I}|=n-2\right\}\right\rangle. \] Since $I_{n-1}(K_n, X)|_{\{x_i=0\, :\, i\in[n-1]\}}=\langle 1\rangle$, by Theorem~\ref{teo:rd} and Corollary~\ref{coro:bound}, $\gamma_{\mathcal{P}}(K_n^{\bf d})=n-1$ for any ${\bf d}\in \mathbb{Z}^{n}$ such that ${\bf d}_i\geq 1$ for all $1\leq i\leq n-1$. Note that if ${\bf d}\in \mathbb{Z}^{n}$ satisfies ${\bf d}_i\geq 1$ exactly when $1\leq i\leq j$, for some $1\leq j \leq n-2$, then the graph $K_n^{\bf d}$ is equal to the graph $K_{j+1}^{\bf d'}$, where ${\bf d'}\in \mathbb{Z}^{j+1}$ satisfies ${\bf d}'_i={\bf d}_i$ for all $1\leq i \leq j$ and ${\bf d}'_{j+1}\leq -1$.
\end{Example} Corollary~\ref{coro:bound} can be used to construct families of graphs with a fixed algebraic co-rank. For instance, given a graph $G$ and $\delta \in \{0,1,-1\}^{V(G)}$, let \[ \mathcal{T}_{\delta}(G)=\{G^{\bf d}: {\bf d}\in \mathbb{Z}^{|V|} \text{ such that } {\rm supp}({\bf d})=\delta\}. \] Then Corollary~\ref{coro:bound} says that the algebraic co-rank of any graph in $\mathcal{T}_{\delta}(G)$ is equal to $k=\gamma_{\mathcal{P}}(G^{\delta})$. That is, $\mathcal{T}_{\delta}(G)$ is an infinite set of graphs, all of them with algebraic co-rank equal to $k$. Moreover, these families of graphs are very useful to classify graphs with algebraic co-rank less than or equal to a fixed integer. For instance, in~\cite[Theorem 4.2]{g2} it was proved that a simple connected graph has algebraic co-rank less than or equal to $2$ if and only if it is an induced subgraph of a graph in one of the two families of graphs $\mathcal{T}_{(1,1,1)}(K_3)$ and $\mathcal{T}_{(-1,1,-1)}(P_3)$, where $K_3$ is the complete graph with three vertices and $P_3$ is the path with three vertices. On the other hand, one crucial step in the classification of the graphs with algebraic co-rank less than or equal to $2$ was the use of the concept of a minimal $k$-forbidden graph, which is a graph with algebraic co-rank greater than or equal to $k+1$ that is minimal under induced subgraphs. In light of Corollary~\ref{coro:bound}, a minimal $k$-forbidden graph does not have more than three vertices that are pairwise twins. Moreover, we conjecture that there exist only a finite number of minimal $k$-forbidden graphs and a finite set $\mathcal{G}$ of pairs $(G,\delta)$ of graphs and $\{0,1,-1\}$-vectors such that any graph with algebraic co-rank less than or equal to $k$ is an induced subgraph of a graph in $\bigcup_{(G, \delta)\in \mathcal{G}} \mathcal{T}_{\delta}(G)$. That is, graphs with twins play an important role in these classification problems.
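The evaluation at the heart of Example~\ref{example:completa} is also easy to confirm with a computer algebra system; a SymPy sketch for $n=5$ (any $n\geq 2$ behaves the same way):

```python
import sympy as sp

n = 5
xs = sp.symbols(f'x1:{n + 1}')
# P = prod_j (x_j + 1) - sum_i prod_{j != i} (x_j + 1)
P = sp.Mul(*[x + 1 for x in xs]) - sum(
    sp.Mul(*[xs[j] + 1 for j in range(n) if j != i]) for i in range(n))

# evaluate at x_1 = ... = x_{n-1} = 0 and x_n = -1
point = {x: 0 for x in xs[:-1]}
point[xs[-1]] = -1
print(P.subs(point))  # -1
```

The product term vanishes because of the factor $x_n+1$, and the only surviving summand is the one omitting $x_n$, which evaluates to $1$; hence the value $-1$.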
This opens the question of the distribution of the algebraic co-rank of graphs with twins. In~\cite{trees}, a formula for the algebraic co-rank of a tree in terms of its $2$-matching number was given. Moreover, the following can be proved. \begin{Proposition} If $T$ is a twin-free simple tree with $n\geq 4$ vertices, then $\gamma_{\mathcal{P}}(T) \geq \lceil \frac{n+2}{2} \rceil$. \end{Proposition} \begin{proof} It follows by induction on the number of vertices of $T$. The smallest twin-free tree is the path with four vertices, which has algebraic co-rank equal to three, and the rest follows by using that $\gamma_{\mathcal{P}}(T)\geq \gamma_{\mathcal{P}}(T-e)$ for any edge $e$ of $T$, see~\cite[Lemma 2.4 and Theorem 3.8]{trees}. \end{proof} This result and an intensive computational search over the graphs with at most nine vertices lead to the following conjecture. \begin{Conjecture}\label{conj:twin3} If $G$ is a twin-free graph with $n$ vertices, then $\gamma_{\mathcal{P}}(G) \geq \lfloor \frac{n}{2} \rfloor$. \end{Conjecture} Note that this lower bound for the algebraic co-rank would be tight because the graph with seven vertices given in Figure~\ref{fig:2} is twin-free and has algebraic co-rank equal to three. \begin{figure}[h!]
\begin{center} \begin{tikzpicture}[line width=1pt, scale=1] \tikzstyle{every node}=[inner sep=0pt, minimum width=4.5pt, circle] \draw (30:1) node[draw, fill=gray, label=30:{\tiny $v_4$}] (2) {}; \draw (150:1) node[draw, fill=gray, label=150:{\tiny $v_6$}] (3) {}; \draw (270:1) node[draw, fill=gray, label=270:{\tiny $v_2$}] (7) {}; \draw (330:1) node[draw, fill=gray, label=330:{\tiny $v_3$}] (5) {}; \draw (210:1) node[draw, fill=gray, label=210:{\tiny $v_1$}] (6) {}; \draw (90:1) node[draw, fill=gray, label=90:{\tiny $v_5$}] (1) {}; \draw (0:0) node[draw, fill=gray, label=270:{\tiny $v_7$}] (4) {}; \draw (1) -- (4) -- (5) -- (2) -- (1) -- (3) -- (6) -- (4); \draw (5) -- (7) -- (2) -- (3) -- (7) -- (6); \end{tikzpicture} \end{center} \caption{A simple graph with seven vertices and algebraic co-rank equal to three.} \label{fig:2} \end{figure} This conjecture is equivalent to the following: if $\gamma_{\mathcal{P}}(G) < \lfloor \frac{n}{2} \rfloor$, then $G$ has at least one pair of twin vertices. Therefore, if Conjecture~\ref{conj:twin3} is true, then graphs with a low algebraic co-rank have twins, and twin-free graphs have a higher algebraic co-rank. Given $k\geq 1$, let \[ \Gamma_{\leq k}=\{G\, :\, G \text{ is a simple connected graph with } \gamma(G)\leq k\}. \] Then $\Gamma_{\leq 1}=\mathcal{T}_{(-1)}^*(K_1)$, where $\mathcal{T}_{\delta}^*(G)$ denotes the set of induced subgraphs of one graph in $\mathcal{T}_{\delta}(G)$. As we mentioned before, in~\cite{g2} it was proved that \[ \Gamma_{\leq 2}=\mathcal{T}_{(1,1,1)}^*(K_3)\cup \mathcal{T}_{(-1,1,-1)}^*(P_3), \] and a similar result about $\Gamma_{\leq 3}$ was given in~\cite{g3}. In general, given a fixed constant $k$, we expect that $\Gamma_{\leq k}$ has a similar classification. The next conjecture goes in this direction.
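Twin-freeness claims such as the one made above for the graph of Figure~\ref{fig:2} are easy to verify mechanically; a sketch (the edge list is our transcription of the figure, and two vertices $u,v$ are twins exactly when $N(u)\setminus\{v\}=N(v)\setminus\{u\}$):

```python
from itertools import combinations

# edges of the graph in Figure 2 (our transcription of the drawing)
edges = [(5, 7), (7, 3), (3, 4), (4, 5), (5, 6), (6, 1),
         (1, 7), (3, 2), (2, 4), (4, 6), (6, 2), (2, 1)]
V = range(1, 8)
N = {v: {u for a, b in edges for u in (a, b)
         if v in (a, b) and u != v} for v in V}

# a pair is a twin pair (true or false twins) iff N(u)-{v} == N(v)-{u}
twin_pairs = [(u, v) for u, v in combinations(V, 2)
              if N[u] - {v} == N[v] - {u}]
print(twin_pairs)  # [] -- the graph is twin-free
```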
\begin{Conjecture}\label{conj:finite} If $k\geq 1$, then \[ \Gamma_{\leq k}=\bigcup_{(G, \delta)\in \mathcal{G}} \mathcal{T}_{\delta}^*(G) \] for some finite set $\mathcal{G}$ of pairs $(G, \delta)$ with $G$ being a simple graph and $\delta \in \{0,1,-1\}^{V(G)}$. \end{Conjecture} A weak version of Conjecture~\ref{conj:finite} is given by the next conjecture, which says that an infinite family of graphs $\{G_i\}_{i=1}^\infty$ with a bounded algebraic co-rank, and such that $G_i$ is a proper induced subgraph of $G_j$ for all $i<j$, is essentially a set of the form $\mathcal{T}_{\delta}(G)$. \begin{Conjecture}\label{conj:twin4} If $\mathcal{G}=\{G_i\}_{i=1}^\infty$ is an infinite family of simple graphs such that $G_i$ is a proper induced subgraph of $G_j$ for all $i<j$, then either \[ {\rm max}\{\gamma_{\mathcal P}(G_i)\}_{i=1}^\infty=\infty \] or there exist a graph $G$, a vector $\delta\in \{0,1,-1\}^{V(G)}$, and an integer $M\in \mathbb{N}$ such that $G_i\in \mathcal{T}_{\delta}(G)$ for all $i\geq M$. \end{Conjecture} There are families of graphs with unbounded algebraic co-rank. For instance, if $\mathcal{G}=\{P_i\}_{i=1}^\infty$, where $P_i$ is the path with $i$ vertices, then $\gamma_{\mathcal P}(P_i)=i-1$ and therefore ${\rm max}\{\gamma_{\mathcal P}(P_i)\}_{i=1}^\infty=\infty$. Now, we prove that Conjecture~\ref{conj:twin3} implies Conjectures~\ref{conj:finite} and~\ref{conj:twin4}. \begin{Theorem}\label{equivalence1} Conjecture~\ref{conj:twin3} implies Conjecture~\ref{conj:twin4}. \end{Theorem} \begin{proof} Let $\mathcal{G}=\{G_i\}_{i=1}^\infty$ be as in Conjecture~\ref{conj:twin4} with $\gamma={\rm max}\{\gamma_{\mathcal P}(G_i)\}_{i=1}^\infty< \infty$. Our strategy is to give a lower bound for the algebraic co-rank of a graph. The modular decomposition of a connected graph is obtained from a prime graph (that is, a twin-free graph, $K_2$, or $K_1$) by blowing up each vertex with a cograph.
In this way, Conjecture~\ref{conj:twin3} will allow us to reduce Conjecture~\ref{conj:twin4} to the case of a cograph. Before that, let us explain this assertion and recall the definition of the modular decomposition of a graph and the concept of a cograph and its cotree. Given a graph $G$, a module of $G$ is a subset $U$ of its vertices such that \[ N_{G\setminus U}(u)=N_{G\setminus U}(v) \text{ for all } u,v\in U, \] where $N_{G}(u)$ is the open neighborhood of $u$ in $G$, that is, the set of neighbors of the vertex $u$ in $G$. For instance, the set of true (or false) twins of a given vertex is a module. Note that one module can be a subset of another module. Also note that the entire vertex set of $G$ and every single vertex of $G$ are modules of $G$; these are called {\it trivial modules}. The modular decomposition of a graph $G$ consists in decomposing the vertex set $V(G)$ into its modules. That is, the modular decomposition is a recursive and hierarchical decomposition of the graph, not only a partition of its vertices. For simple graphs, this decomposition is unique. A graph is called prime if all its modules are trivial. Note that a graph different from $K_2$ or $K_1\sqcup K_1$ is prime if and only if it is a twin-free graph. This hierarchical decomposition can be encoded into a tree where the root corresponds to the maximal induced prime subgraph, the leaves correspond to the vertices of $G$, and the other internal vertices are labeled with join ($\boxtimes$) or disjoint union ($\sqcup$) operations (which act on modules similarly to replications and duplications). See Example~\ref{examplemodular}. \begin{Example}\label{examplemodular} The graph in Figure~\ref{modular} has four nontrivial modules: $\{v_1,v_5,v_6\}$, $\{v_5,v_6\}$, $\{v_3,v_7\}$ and $\{v_4,v_8\}$. The induced subgraphs of these modules are cographs. Its maximal induced prime subgraph is the path with four vertices $P_4=u_1u_2u_3u_4$.
\begin{figure}[h] \begin{center} \begin{tabular}{c@{\extracolsep{10mm}}c} \begin{tikzpicture}[line width=0.5pt, scale=0.9] \tikzstyle{every node}=[minimum width=0pt, inner sep=1pt, circle, draw, fill=gray!20!] \draw (-3.5,0) node (v1) {\scriptsize$v_1$}; \draw (-1,0) node (v2) {\scriptsize $v_2$}; \draw (1,0) node (v3) {\scriptsize $v_3$}; \draw (3,0) node (v4) {\scriptsize $v_4$}; \draw (-4,-1) node (v1p) {\scriptsize $v_5$}; \draw (-3,-1) node (v2p) {\scriptsize $v_6$}; \draw (1,-1) node (v3p) {\scriptsize $v_7$}; \draw (3,-1) node (v4p) {\scriptsize $v_8$}; \tikzstyle{every node}=[] \draw (-3.5,0.5) node () {\scriptsize$u_1$}; \draw (-1,0.5) node () {\scriptsize$u_2$}; \draw (1,0.35) node () {\scriptsize$u_3$}; \draw (3,0.35) node () {\scriptsize$u_4$}; \draw (v1) -- (v2) -- (v3) -- (v4); \draw (v1p) -- (v2) -- (v3p) -- (v4p); \draw (v1p) -- (v1) -- (v2p) -- (v2); \draw (v3) -- (v3p); \draw (v3) -- (v4p); \draw (v3p) -- (v4); \draw (-2.5,0) arc (0:180:1); \draw (-2.5,0) -- (-2.5,-1); \draw (-4.5,0) -- (-4.5,-1); \draw (-4.5,-1) arc (180:360:1); \draw (-4,-0.5) arc (90:270:0.5); \draw (-4,-0.5) -- (-3,-0.5); \draw (-4,-1.5) -- (-3,-1.5); \draw (-3,-1.5) arc (-90:90:0.5); \draw (1.5,0) arc (0:180:0.5); \draw (0.5,0) -- (0.5,-1); \draw (1.5,0) -- (1.5,-1); \draw (0.5,-1) arc (180:360:0.5); \draw (3.5,0) arc (0:180:0.5); \draw (2.5,0) -- (2.5,-1); \draw (3.5,0) -- (3.5,-1); \draw (2.5,-1) arc (180:360:0.5); \end{tikzpicture} & \begin{tikzpicture} [level distance=8mm, level 1/.style={sibling distance=14mm}, level 2/.style={sibling distance=8mm}, level 3/.style={sibling distance=8mm}] \tikzstyle{every node}=[minimum width=0pt, inner sep=1pt, circle, draw, fill=gray!20!] \node[fill=red!70!] (root) {$P_4$} child{ child child{ child child } } child child{ child child } child{ child child }; \node[fill=orange!70!] at (root-1) {\scriptsize $\boxtimes$}; \node at (root-2) {\scriptsize $v_2$}; \node[fill=orange!70!] at (root-3) {\scriptsize $\boxtimes$}; \node[fill=blue!50!] 
at (root-4) {\scriptsize $\sqcup$}; \node at (root-1-1) {\scriptsize $v_1$}; \node[fill=blue!50!] at (root-1-2) {\scriptsize $\sqcup$}; \node at (root-1-2-1) {\scriptsize $v_5$}; \node at (root-1-2-2) {\scriptsize $v_6$}; \node at (root-3-1) {\scriptsize $v_3$}; \node at (root-3-2) {\scriptsize $v_7$}; \node at (root-4-1) {\scriptsize $v_4$}; \node at (root-4-2) {\scriptsize $v_8$}; \tikzstyle{every node}=[] \draw (-1,-0.2) node () {\scriptsize$u_1$}; \draw (-0.6,-0.4) node () {\scriptsize$u_2$}; \draw (0.2,-0.5) node () {\scriptsize$u_3$}; \draw (0.9,-0.2) node () {\scriptsize$u_4$}; \end{tikzpicture} \\ $(a)$ & $(b)$ \end{tabular} \end{center} \caption{A modular decomposition of a graph, where each ellipse corresponds to a non-trivial module, and its associated tree.} \label{modular} \end{figure} \end{Example} Now, let us introduce the concept of a cograph. There exist several alternative definitions of a cograph; one of them says that a cograph is a simple graph without the path $P_4$ with four vertices as an induced subgraph. Another characterization says that a cograph is a graph in which every nontrivial induced subgraph has at least a pair of twins. That is, the graph corresponding to the root in the tree associated to the modular decomposition of a cograph is equal to $K_1$. Note that the non-trivial modules of a graph induce cographs. Moreover, the modular decomposition of a graph decomposes it as a twin-free graph together with a blow-up (replace a vertex $v$ with a graph $H$ such that $N(u)\cap V(G)=N_G(v)$ for all $u\in V(H)$) of each of its vertices with a cograph, see Figure~\ref{modular}. The reader may consult~\cite{cographs} and the references therein for more details about cographs and their cotrees. Now, we will first use Conjecture~\ref{conj:twin3} to reduce Conjecture~\ref{conj:twin4} to the case of a cograph.
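The first characterization above ($P_4$-freeness) gives a brute-force cograph test for small graphs; a sketch (among four vertices, exactly three induced edges with degree sequence $1,1,2,2$ force an induced $P_4$):

```python
from itertools import combinations

def is_cograph(vertices, edges):
    """Naive cograph test: search for an induced path on four vertices."""
    E = {frozenset(e) for e in edges}
    for quad in combinations(vertices, 4):
        induced = [frozenset(p) for p in combinations(quad, 2)
                   if frozenset(p) in E]
        degrees = sorted(sum(v in e for e in induced) for v in quad)
        if len(induced) == 3 and degrees == [1, 1, 2, 2]:
            return False  # induced P4 found
    return True

print(is_cograph(range(1, 5), [(1, 2), (2, 3), (3, 4)]))  # False: P4 itself
```

This quartic-time check is only meant as a sanity test; linear-time cograph recognition via modular decomposition is well known.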
Let \[ \mathcal{T}=\{ G\, | \, G \text{ is a twin-free induced subgraph of } G_i \text{ for some } G_i \in \mathcal{G}\} \] be the set of all twin-free graphs that are induced subgraphs of some $G_i \in \mathcal{G}$. Since the trivial graph with one vertex is twin-free, $\mathcal{T}$ is non-empty. Moreover, Conjecture~\ref{conj:twin3} implies that any graph in $\mathcal{T}$ has at most $2\gamma+1$ vertices, and thus $\mathcal{T}$ is finite. Let $G$ be a maximal (under the partial order given by induced subgraphs) graph in $\mathcal{T}$ and let $M$ be the first natural number such that $G$ is an induced subgraph of $G_M$. For any $v\in V(G)$, let \[ L_v^j=\{ u \, | \, u \in V(G_j) \text{ such that } N_G(v)=N_{G_j}(u)\cap (V(G)-v)\}. \] Clearly, $v\in L_v^j$ for all $v\in V(G)$ and $j\geq M$. Since $G$ is maximal, for any $u\in G_j-G$ the subgraph of $G_j$ induced by $V(G)\cup u$ is not twin-free. Also, since $G$ is twin-free, $L_u^j\cap L_v^j=\emptyset$ for all $u\neq v\in V(G)$ and for all $j$. That is, any $u\in G_j-G$ belongs to $L_v^j$ for some $v\in V(G)$, and thus $\bigsqcup_{v\in V(G)}L_v^j=V(G_j)$. Note that the $L_v^j$ are the maximal modules of $G_j$ and $G$ is the maximal prime subgraph of $G_j$. All the vertices in $L_v^j$ play the same role as $v$, in the sense that if $u\in L_v^j$, then the subgraph of $G_j$ induced by $(V(G)-v)\cup u$ is $G$. Moreover, for any $v\in V(G)$ and $j\geq M$, the subgraph $G_j[L_v^j]$ of $G_j$ induced by $L_v^j$ is a cograph. Otherwise, $G_j[L_v^j]$ would contain $P_4$ as an induced subgraph, and therefore the subgraph of $G_j$ induced by the union of the vertices of $G-v$ and the vertices of $P_4$ would be a twin-free graph; a contradiction to the maximality of $G$ (any vertex in $P_4$ plays the same role as $v$). Note that this gives us the modular decomposition of $G_j$ as a twin-free graph with the blow-up (with a cograph) in each of its vertices.
Until now, using Conjecture~\ref{conj:twin3} we have proved that the maximal twin-free subgraph (or maximal prime subgraph) of the $G_j$'s stabilizes at some point. Therefore, we need to prove that their maximal modules also stabilize at some point (in the sense that there exist a graph $H$, $\delta\in \{0,1,-1\}^{V(H)}$, and $M\in \mathbb{N}$ such that $G_i[L_v^i]\in \mathcal{T}_{\delta}(H)$ for all $i\geq M$). Since the modules are cographs, without loss of generality we may assume that $\mathcal{G}=\{G_i\}_{i=1}^\infty$ (taking $\mathcal{G}=\{G_i[L_v^i]\}_{i=1}^\infty$) consists of cographs. Given a cograph, its cotree is the tree obtained through its modular decomposition. We will use this cotree to bound the algebraic co-rank of its cograph. Let $C$ be a cograph and $T$ be its cotree. Let $\widetilde{T}$ be the tree obtained by erasing the leaves of $T$, and $T'$ be the tree obtained by erasing the twins of $T$. Note that every twin of $T$ is a leaf, but not every leaf has a twin. For instance, in Figure~\ref{cotree}.$(b)$ the vertex $v_3$ is a leaf with no twins. Note that \[ C=(H)^{\bf d} \text{ for some }{\bf d}\in \mathbb{Z}^{V(H)}, \] where $H$ is the cograph whose cotree equals $T'$, see Figure~\ref{cotree} $(c)$ and $(d)$. Moreover, twin vertices in $T$ correspond to twin vertices in $C$. The next example illustrates this situation. \begin{Example} Figure~\ref{cotree} contains a cograph $C$, its cotree $T$, the cotree $T'$ obtained by erasing its twins, and its corresponding graph $H$. Clearly $C=H^{(1,0,-1,-1)}$. \begin{figure}[h] \begin{center} \begin{tabular}{c@{\extracolsep{5mm}}c@{\extracolsep{5mm}}c@{\extracolsep{5mm}}c} \begin{tikzpicture}[line width=0.5pt, scale=0.9] \tikzstyle{every node}=[minimum width=0pt, inner sep=1pt, circle, draw, fill=gray!20!]
\draw (-0.7,0) node (v1) {\scriptsize $v_1$}; \draw (0.7,0) node (v2) {\scriptsize $v_2$}; \draw (-2.5,-2) node (v3) {\scriptsize $v_3$}; \draw (-0.7,-3) node (v4) {\scriptsize $v_4$}; \draw (0.7,-3) node (v5) {\scriptsize $v_5$}; \draw (2,-2.5) node (v6) {\scriptsize $v_6$}; \draw (2.8,-1.8) node (v7) {\scriptsize $v_7$}; \tikzstyle{every node}=[] \draw (0,-1) node {$C$}; \draw (v4) -- (v5); \draw (v1) -- (v3); \draw (v1) -- (v4); \draw (v1) -- (v5); \draw (v1) -- (v6); \draw (v1) -- (v7); \draw (v2) -- (v3); \draw (v2) -- (v4); \draw (v2) -- (v5); \draw (v2) -- (v6); \draw (v2) -- (v7); \draw (v6) -- (v7); \end{tikzpicture} & \begin{tikzpicture} [level distance=8mm, level 1/.style={sibling distance=26mm}, level 2/.style={sibling distance=12mm}, level 3/.style={sibling distance=6mm}] \tikzstyle{every node}=[minimum width=0pt, inner sep=1pt, circle, draw, fill=gray!20!] \node[fill=orange!70!] (root) {\scriptsize $\boxtimes$} child{ child child } child{ child child{ child child } child{ child child } }; \node at (root-2-2-1) {\scriptsize $v_4$}; \node at (root-2-2-2) {\scriptsize $v_5$}; \node at (root-2-3-1) {\scriptsize $v_6$}; \node at (root-2-3-2) {\scriptsize $v_7$}; \node at (root-1-1) {\scriptsize $v_1$}; \node at (root-1-2) {\scriptsize $v_2$}; \node at (root-2-1) {\scriptsize $v_3$}; \tikzstyle{every node}=[minimum width=0pt, inner sep=1pt, circle, draw, fill=blue!50!] \node at (root-1) {\scriptsize $\sqcup$}; \node at (root-2) {\scriptsize $\sqcup$}; \tikzstyle{every node}=[minimum width=0pt, inner sep=1pt, circle, draw, fill=orange!70!] \node at (root-2-2) {\scriptsize $\boxtimes$}; \node at (root-2-3) {\scriptsize $\boxtimes$}; \tikzstyle{every node}=[] \draw (0,-1) node {$T$}; \end{tikzpicture} & \begin{tikzpicture} [level distance=8mm, level 1/.style={sibling distance=12mm}, level 2/.style={sibling distance=8mm}, level 3/.style={sibling distance=6mm}] \tikzstyle{every node}=[minimum width=0pt, inner sep=1pt, circle, draw, fill=gray!20!] 
\node[fill=orange!70!] (root) {\scriptsize $\boxtimes$} child child{ child child child }; \node at (root-1) {\scriptsize $u_1$}; \node[fill=blue!50!] at (root-2) {\scriptsize $\sqcup$}; \node at (root-2-1) {\scriptsize $u_2$}; \node at (root-2-2) {\scriptsize $u_3$}; \node at (root-2-3) {\scriptsize $u_4$}; \tikzstyle{every node}=[] \draw (0,-1) node {$T'$}; \draw (0,-2.5) node {\mbox{ }}; \end{tikzpicture} & \begin{tikzpicture}[line width=0.5pt, scale=0.9] \tikzstyle{every node}=[minimum width=0pt, inner sep=1pt, circle, draw, fill=gray!20!] \draw (0,0) node (u1) {\scriptsize $u_1$}; \draw (-1,-2) node (u2) {\scriptsize $u_2$}; \draw (0,-2) node (u3) {\scriptsize $u_3$}; \draw (1,-2) node (u4) {\scriptsize $u_4$}; \draw (u1) -- (u2); \draw (u1) -- (u3); \draw (u1) -- (u4); \tikzstyle{every node}=[] \draw (-0.2,-1) node {$H$}; \end{tikzpicture} \\ $(a)$ & $(b)$ & $(c)$ & $(d)$ \end{tabular} \end{center} \caption{A cograph $C$, its cotree $T$, a cotree $T'$ obtained from $T$, and the cograph associated to $T'$.} \label{cotree} \end{figure} \end{Example} Now, we give a lower bound for the algebraic co-rank of a connected cograph $C$ as a function of the height of its cotree $T$ and the out-degrees of the vertices of $\widetilde{T}$. Before we do that, we give a lower bound for the algebraic co-rank of a special class of cographs: the threshold graphs. Let $Th_1$ be the trivial graph with only one vertex (denoted by $v_1$) and \[ Th_n= \begin{cases} v_{2k}\boxtimes Th_{2k-1} & \text{ if } n=2k,\\ v_{2k+1}\sqcup Th_{2k} & \text{ if } n=2k+1, \end{cases} \] where $\boxtimes$ means the join of graphs and $\sqcup$ means the disjoint union of graphs. Since \[ L(Th_{2k},X)[\{2,4,\ldots,2k\},\{1,3,\ldots,2k-1\}]= \left[\begin{array}{cccc} -1 & 0 & \cdots & 0 \\ -1 & -1 & \ddots & \vdots \\ \vdots & \vdots & \ddots & 0 \\ -1 & -1 & \cdots & -1 \end{array}\right], \] $\gamma_{\mathcal P}(Th_{2k})\geq k$.
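This triangular submatrix can be produced and checked directly; a SymPy sketch for $k=3$ (the adjacency rule below encodes that each even-indexed vertex of $Th_n$ is joined to all earlier vertices):

```python
import sympy as sp

def L_threshold(n):
    """Generalized Laplacian of Th_n: v_i ~ v_j iff max(i, j) is even."""
    xs = sp.symbols(f'x1:{n + 1}')
    M = sp.zeros(n, n)
    for i in range(n):
        M[i, i] = xs[i]
        for j in range(n):
            # 0-indexed: position max(i, j) corresponds to vertex max + 1
            if i != j and max(i, j) % 2 == 1:
                M[i, j] = -1
    return M

k = 3
M = L_threshold(2 * k)
sub = M[[1, 3, 5], [0, 2, 4]]  # rows {2,4,6}, columns {1,3,5} (1-indexed)
print(sub.det())  # (-1)**3 = -1, a unit, so gamma(Th_6) >= 3
```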
Note that any connected threshold graph different from the trivial graph belongs to one of the families of graphs $\mathcal{T}_{(1,-1,\ldots,1,-1)}(Th_{2k})$ for some $k\geq 1$. In a similar way, it can be proved that $\gamma_{\mathcal P}(\boxtimes_{i=1}^l Th_{2k_i})\geq l-1+\sum_{i=1}^l k_i$. Now, let $r$ be the root of $T$, let $u$ be one of its leaves, and consider the path $P_{u}(T)$ from $u$ to $r$. Note that if $C$ is connected, then the root of $T$ is labeled with a join operation. If $P_{u}(T)$ has length $n$ (its number of edges), then $C$ contains the threshold graph $Th_{n+1}$ as an induced subgraph when $n$ is odd, and $Th_{n}$ when $n$ is even. For instance, consider the cograph given in Figure~\ref{cotree}.$(a)$ and its cotree given in Figure~\ref{cotree}.$(b)$. If we choose the vertex $v_4$, then the length of $P_{v_4}(T)$ is three, and the graph induced by $v_4,v_5,v_6,v_1$ is equal to $Th_4$. In a similar way, if we choose $v_1$, then the length of $P_{v_1}(T)$ is two, and it can be checked that the graph induced by $v_1,v_3$ is equal to $Th_2$. Now, if $T$ has height $h$, then $C$ contains $Th_{h}$ as an induced subgraph. Since $\gamma_{\mathcal P}(C)\geq\gamma_{\mathcal P}(Th_{h})\geq \lfloor \frac{h}{2}\rfloor$ and $\gamma_{\mathcal P}(G_j)=\gamma < \infty$, the height of $T$ is less than or equal to $2\gamma+1$. Now, we give a lower bound for the algebraic co-rank of a connected cograph $C$ in terms of the out-degrees of the vertices of its associated tree $\widetilde{T}$ (the tree obtained from its cotree by erasing its leaves). Let $u$ be a vertex of $\widetilde{T}$. If $u$ is a leaf of $\widetilde{T}$, then its out-degree is zero. Assume that $u$ is not a leaf of $\widetilde{T}$. If $u$ is labeled with a disjoint union operation, then $C$ contains a disjoint union of the subgraphs associated to the out-neighborhoods of $u$.
Since \[ \gamma_{\mathcal P}\left(\bigsqcup_{i=1}^n H_i\right)=\sum_{i=1}^n \gamma_{\mathcal P}(H_i) \] and the unique graph with algebraic co-rank equal to zero is the trivial graph, the algebraic co-rank of $C$ is at least the out-degree of $u$. If $u$ is labeled with a join operation, then $C$ contains the join of the subgraphs associated to the out-neighborhoods of $u$. On the other hand, if $\{H_i\}_{i=1}^l$ is a set of graphs, each one different from a complete graph, then $\boxtimes_{i=1}^l H_i$ contains $K_l^{(1,\ldots,1)}$ and therefore, by Example~\ref{example:completa}, \[ \gamma_{\mathcal P}(\boxtimes_{i=1}^l H_i)\geq \gamma_{\mathcal P}(K_l^{(1,\ldots,1)})=l-1. \] Thus the algebraic co-rank of $C$ is at least the out-degree of $u$ minus one. Since $\gamma_{\mathcal P}(G_j)=\gamma < \infty$, the out-degree of the vertices of $\widetilde{T}$ is less than or equal to $\gamma+1$. Therefore the height of $T$ and the out-degrees of all the vertices not adjacent to a leaf of $T$ are bounded. Since $G_i$ is a proper induced subgraph of $G_j$ for all $i<j$ and any induced subgraph of a cograph $C$ corresponds to a subcotree of the cotree $T$ of $C$ (the subcotree of $T$ induced by the leaves that correspond to the vertices in the induced subgraph, rooted at the common ancestor of these leaves), there exist a cograph $C_v$ and $M'\geq M$ such that \[ G_j[L_v^j]=C_v^{\bf d}\text{ for some }{\bf d}\in \{0,1,-1\}^{V(C_v)}\text{ for all }j\geq M', \] and therefore we get the statement given in Conjecture~\ref{conj:twin4}. \end{proof} Note that any lower bound for the algebraic co-rank of a graph in terms of its number of vertices in the case of twin-free graphs, and in terms of the structure of its cotree in the case of cographs, implies Conjectures~\ref{conj:finite} and~\ref{conj:twin4}. A key fact in the proof of Theorem~\ref{equivalence1} is to give a lower bound for the algebraic co-rank of a cograph in terms of its cotree.
The lower bound presented in the proof of Theorem~\ref{equivalence1} is very loose; however, we conjecture the following: \begin{Conjecture}\label{boundcograph} If $C$ is a cograph, then \[ \gamma_{\mathcal P}(C)\geq |E(\widetilde{T})|-\#\{\text{internal vertices of } \widetilde{T} \text{ labeled with the join operation}\}, \] where $\widetilde{T}$ is the tree obtained from the cotree of $C$ by erasing its leaves. \end{Conjecture} As a consequence of the lower bound for the algebraic co-rank of a cograph given in the proof of Theorem~\ref{equivalence1}, we have the following result: \begin{Corollary}\label{finitecograph} If $k$ is a positive integer, then \[ \mathcal{C}_{\leq k}=\{C\, | \, C \text{ is a cograph in } \Gamma_{\leq k} \}=\bigcup_{(G, \delta)\in \mathcal{G}} \mathcal{T}_{\delta}^*(G) \] for some finite set $\mathcal{G}$ of pairs $(G, \delta)$ with $\delta \in \{0,1,-1\}^{V(G)}$. \end{Corollary} \begin{proof} Let $C$ be a cograph with algebraic co-rank less than or equal to $k$, $T$ be its cotree, $T'$ be the tree obtained from $T$ by erasing its twin vertices, and $H$ be the cograph whose cotree equals $T'$. Clearly $C=H^{\bf d}$ for some ${\bf d}\in \mathbb{Z}^{V(H)}$. From the proof of Theorem~\ref{equivalence1}, we have that the height of $T$, which is also the height of $T'$, is upper bounded, and the out-degree of the vertices of $T'$ is also upper bounded. Therefore, there are only finitely many trees that can occur as the $T'$ of such a $C$, and the result follows. \end{proof} \begin{Theorem}\label{equivalence2} Conjecture~\ref{conj:twin3} implies Conjecture~\ref{conj:finite}. \end{Theorem} \begin{proof} Let $G$ be a graph with algebraic co-rank less than or equal to $k$. Using the modular decomposition of $G$, we have that $G$ can be decomposed into a twin-free graph (its maximal prime subgraph) with a blow-up of a cograph in each of its vertices.
By Conjecture~\ref{conj:twin3}, the size of its maximal twin-free subgraph is bounded (therefore only a finite number of possible maximal twin-free subgraphs for $G$ exist), and by Corollary~\ref{finitecograph} only a finite number of cographs $H$ exist such that the cograph used in the blow-up of each vertex is of the form $H^{\bf d}$ for some ${\bf d}\in \mathbb{Z}^{V(H)}$. Putting together these two facts, we get the result. \end{proof} Finally, we pose a variant of Conjecture~\ref{conj:twin3}, which we believe is stronger. \begin{Conjecture}\label{conj:twin1} If $\gamma_{\mathcal{P}} (G - v ) = \gamma_{\mathcal{P}} (G)$ for all $v \in V (G)$, then $G$ has at least a pair of twin vertices. \end{Conjecture} \section{Critical ideals of graphs with twin vertices}\label{sec:description} In this section we give a detailed description of some of the critical ideals of a graph $G$ obtained by duplicating or replicating one of its vertices several times, in terms of some of the critical ideals of $G$. In Section~\ref{rd} we saw that the algebraic co-rank of the graphs $\{d^k(G,v)\}_{k\geq 0}$ and $\{r^k(G,v)\}_{k\geq 0}$ quickly stabilizes. We will show that their critical ideals also regularize, but a little more slowly. More precisely, if $\gamma_d=\gamma_{\mathcal{P}}(d(G,v))$ and $\lambda\in \{0,1\}$ is a constant that depends on $G$ and $v$, then Theorem~\ref{teo:deq} gives a description of \[ I_{\gamma_d+k}(d^{k+\lambda+i}(G,v),X) \] in terms of the critical ideals of $G$. Also, Theorem~\ref{teo:req} gives a similar description of the critical ideals $I_{\gamma_r+k}(r^{k+\lambda+i}(G,v),X)$, where $\gamma_r=\gamma_{\mathcal{P}}(r(G,v))$. \subsection{The critical ideals of the duplication of vertices} We begin by giving a description of the critical ideals of $d^k(G,v)$ in terms of the critical ideals of $G$ and some of the minors of $G-v$. This description generalizes the description of the critical ideals of $d(G,v)$ given in Equation~\ref{eq:d}.
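In matrix terms, both operations simply copy the row and column of $v$ in the generalized Laplacian; a SymPy sketch (the helper names are ours):

```python
import sympy as sp

def duplicate(L, v, x_new):
    """L(d(G, v), X): the new vertex gets v's off-diagonal row/column
    pattern, a fresh diagonal variable, and no arcs to or from v."""
    n = L.shape[0]
    M = sp.zeros(n + 1, n + 1)
    M[:n, :n] = L
    for u in range(n):
        if u != v:
            M[n, u] = L[v, u]
            M[u, n] = L[u, v]
    M[n, n] = x_new
    return M

def replicate(L, v, x_new):
    """L(r(G, v), X): as duplicate, plus the arcs v v^1 and v^1 v."""
    M = duplicate(L, v, x_new)
    M[L.shape[0], v] = -1
    M[v, L.shape[0]] = -1
    return M

# example: duplicating/replicating a leaf of the path v1 - v2 - v3
x1, x2, x3, y = sp.symbols('x1 x2 x3 y')
L3 = sp.Matrix([[x1, -1, 0], [-1, x2, -1], [0, -1, x3]])
D, R = duplicate(L3, 0, y), replicate(L3, 0, y)
```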
Before doing this, we need to introduce some notation. Given a subset $S$ of the natural numbers, $\binom{S}{l}$ denotes the set of all subsets of $S$ of cardinality $l$. Moreover, if $v$ is a vertex of a signed multidigraph, let \[ P^S_{l}(v)=\left\{\prod_{c\in C}x_{v^c}\, : \, C\in \binom{S}{l}\right\}, \] that is, $P^S_{l}(v)$ is the set of products of $l$ of the variables associated to the duplications of the vertex $v$. By convention we take $P^S_{0}(v)=\{1\}$, and for simplicity, $P^k_{l}(v)$ denotes $P_{l}^{\{0,\ldots ,k\}}(v)$. Note that $I_{l}(T_{k+1},X)=\langle P^k_{l}(v)\rangle$, where $T_{k+1}$ is the trivial graph with $k+1$ isolated vertices and $v$ is a vertex of $T_{k+1}$. We also recall that $I_{j}(G, X)=\langle 0\rangle$ for all $j> |V(G)|$. \begin{Lemma}\label{lema:gend} Let $G$ be a signed multidigraph with $n\geq 2$ vertices and $v\in V(G)$. If $k,j\geq 1$ and $m={\rm min}(k,j-1)$, then \begin{eqnarray*} I_{j}(d^k(G,v),X)&=&\left\langle \left\{ P_{l}^{k}(v) \cdot I_{j-l}(G, X)|_{x_v=0} \right\}_{l=0}^{m-1}, P_{m}^{k}(v) \cdot I_{j-m}(G- v,X),\right.\\ && P_{m}^{k}(v)\cdot {\rm minors}_{j-m}({\bf a},L(G- v,X)), \left. P_{m}^{k}(v)\cdot {\rm minors}_{j-m}(L(G- v,X), {\bf b}), S_{j}^{k}(G,v) \right\rangle \end{eqnarray*} for all $1\leq j\leq n+k$, where $S_{j}^{k}(G,v)$ is equal to $P^k_{j}(v)$ when $j\leq k+1$, and equal to \[ \left\{ {\rm det}(M)\cdot \prod_{t=0}^{k} x_{v^{t}}+{\rm det}(J(0,{\bf a'};M,{\bf b'}))\cdot\sum_{t=0}^{k} \prod_{s\neq t} x_{v^{s}} \, : \, J(x_v,{\bf a'};M,{\bf b'}) \in M_{j-k}(L(G,X))\right\} \] when $j> k+1$. \end{Lemma} The proof of this lemma is technical and very similar to the arguments given in the previous proofs; it is included in Subsection~3.4 at the end of this section.
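The sets $P^k_{l}(v)$ are simply the squarefree degree-$l$ monomials in the $k+1$ variables $x_{v^0},\ldots,x_{v^k}$, so $|P^k_{l}(v)|=\binom{k+1}{l}$. A small sketch (pure Python; encoding each squarefree monomial as the set of its variable names is an illustrative choice, not notation from the paper):

```python
from itertools import combinations

def P(k, l, v="v"):
    """The set P^k_l(v): all products of l distinct variables among
    x_{v^0}, ..., x_{v^k}.  Each squarefree product is encoded as a
    frozenset of variable names; the empty frozenset stands for 1."""
    names = [f"x_{v}^{c}" for c in range(k + 1)]
    return {frozenset(names[c] for c in C)
            for C in combinations(range(k + 1), l)}
```

By construction `len(P(k, l))` equals $\binom{k+1}{l}$, and `P(k, 0)` contains only the empty product, matching the convention $P^S_0(v)=\{1\}$.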
\begin{Remark} Note that $I_{j}(G, X)|_{x_v=0}$ is equal to \[ \langle {\rm minors}_{j}(L(G- v,X)), {\rm minors}_{j}({\bf a}, L(G- v,X)), {\rm minors}_{j}(L(G- v,X), {\bf b}), {\rm minors}_{j}({\bf a},L(G- v,X), {\bf b}) \rangle \] and the $i$-{\it th} critical ideal $I_i(T_{k+1},X)$ of the graph with $k+1$ isolated vertices is equal to $\langle P_{i}^{k}(v)\rangle$. Moreover, if $m={\rm min}(k,j-1)$, then \[ I_j(d^k(G,v),X)|_{x_{v}=0}=\left\langle \left\{ P_{i}^{\{1,\ldots,k\}}(v) \cdot I_{j-i}(G, X)|_{x_v=0} \right\}_{i=0}^{m} \right\rangle. \] By~\cite[Proposition 3.4]{critical} the $j$-{\it th} critical ideal of the disjoint union of $T_{k+1}$ and $G$ is equal to \[ I_j(T_{k+1}\sqcup G,X)=\left\langle\, \bigcup_{i=0}^j I_i(T_{k+1},X)\cdot I_{j-i}(G,X)\,\right\rangle=\left\langle\, \bigcup_{i=0}^j P_{i}^{k}(v)\cdot I_{j-i}(G,X)\,\right\rangle. \] That is, $I_j(d^k(G,v),X)|_{x_{v}=0}$ behaves almost the same as the $j$-{\it th} critical ideal of the disjoint union of $T_{k+1}$ and $G$. \end{Remark} In the next example we show how to use the description of $I_j(d^k(G,v),X)|_{x_{v}=0}$. \begin{Example} Let $Q_3$ be the hypercube with $V(Q_3)=\{v_i\}_{i=1}^8$. The reader can check that $\gamma_{\mathbb Z}(Q_3)=4$ and $\gamma_{\mathbb Z}(d(Q_3,v_8))=5$.
Moreover, \[ I_7(d(Q_3,v_8),X)|_{x_{8}=0}=\langle x_{8^1}\cdot I_6(Q_3,X)|_{x_8=0}, I_7(Q_3,X)|_{x_8=0}\rangle, \] where $I_6(Q_3,X)|_{x_8=0}=\left\langle x_1-x_6, x_2-3x_7, x_3-x_6, x_4-x_7, x_5-x_7, x_6x_7-1\right\rangle$ and \begin{eqnarray*} I_7(Q_3,X)|_{x_8=0}&=&\langle x_2x_4x_6-x_4x_5x_6-x_4x_6x_7-x_5x_6x_7-x_2-x_4+2x_5+2x_7,\\ && x_2x_3x_5-x_3x_4x_5-x_3x_4x_7-x_3x_5x_7-x_2+2x_4-x_5+2x_7,\\ && x_1x_2x_7-x_1x_4x_5-x_1x_4x_7-x_1x_5x_7-x_2+2x_4+2x_5-x_7,\\ && x_1x_3x_7-x_1x_4x_6+x_3x_4x_6-x_1x_6x_7+x_1-2x_3+x_6,\\ && x_1x_3x_5+x_1x_4x_6-x_3x_4x_6-x_3x_5x_6-2x_1+x_3+x_6,\\ && x_1x_4x_5x_6+x_1x_4x_6x_7+x_1x_5x_6x_7-x_1x_5-x_5x_6-2x_4x_6-2x_1x_7+3,\\ && x_3x_4x_5x_6+x_3x_4x_6x_7+x_3x_5x_6x_7-2x_3x_5-2x_4x_6-x_3x_7-x_6x_7+3\rangle. \end{eqnarray*} \end{Example} We are now ready to give a more accurate description of some critical ideals of $d^{i+k}(G,v)$. Given $r,s\geq 0$, let \[ \lambda(r,s)= \begin{cases} 0 & \text{ if } r=s,\\ 1 & \text{ otherwise}. \end{cases} \] As the next theorem shows, this constant plays the role of a regularity constant in the sense that the behavior of the critical ideals $I_{\gamma_d+k}(d^{k+\lambda}(G,v),X)$ is regular. Moreover, $\lambda=0$ if and only if $\gamma_{\mathcal P}(G- v)=\gamma_{\mathcal P}(d(G,v))$, that is, $\lambda$ indicates whether the removal of $v$ or the duplication of $v$ yields a change in the algebraic co-rank. \begin{Theorem}\label{teo:deq} Let $G$ be a signed multidigraph, $v$ a vertex of $G$, $\gamma_d=\gamma_{\mathcal P}(d(G,v))$, $\gamma_v=\gamma_{\mathcal P}(G- v)$ and $\lambda=\lambda(\gamma_{v},\gamma_d)$. If $\gamma_{\mathcal P}(G) \geq 2$, then $0\leq \gamma_d-\gamma_v\leq 2$ and \[ I_{\gamma_d+k}(d^{k+\lambda+i}(G,v),X)= \left\langle \left\{ P_{l}^{k+\lambda+i}(v) \cdot I_{\gamma_d+k-l}(G, X)|_{x_v=0} \right\}_{l=0}^{k}\right\rangle \] for all $k\geq 1$ and $i\geq 0$.
\end{Theorem} \begin{proof} Since $I_j(G,X)|_{x_v=0}\subseteq I_{j-2}(G- v, X)$, by Lemma~\ref{lema:d} we have that $0\leq \gamma_d-\gamma_v\leq 2$. Note that $\gamma_d-\gamma_v$ measures the number of steps in which the algebraic co-rank of the family of graphs $\{G-v\}\cup\{d^k(G,v)\}_{k\geq 0}$ stabilizes. The inequality $0\leq \gamma_d-\gamma_v\leq 2$ says that this happens in at most two steps. Now, applying Lemma~\ref{lema:gend} with $k=k+\lambda+i$ and $j=\gamma_d+k$, we get that $I_{\gamma_d+k}(d^{k+\lambda+i}(G,v),X)$ is equal to \begin{gather*} \left\langle \left\{ P_{l}^{k+\lambda+i}(v) \cdot I_{\gamma_d+k-l}(G, X)|_{x_v=0} \right\}_{l=0}^{m-1}, P_{m}^{k+\lambda+i}(v) \cdot I_{\gamma_d+k-m}(G- v, X),\right.\\ \left. P_{m}^{k+\lambda+i}(v)\cdot {\rm minors}_{\gamma_d+k-m}({\bf a},L(G- v, X)),P_{m}^{k+\lambda+i}(v)\cdot {\rm minors}_{\gamma_d+k-m}(L(G- v, X), {\bf b}), S_{\gamma_d+k}^{k+\lambda+i}(G,v)\right\rangle. \end{gather*} On the other hand, since $\gamma_d-1\geq \lambda$, $m={\rm min}(k,j-1)={\rm min}(k+\lambda+i,\gamma_d+k-1)= k+{\rm min}(\lambda+i,\gamma_d-1)\geq k+\lambda$. Also, by Lemma~\ref{lema:d}, $I_{\gamma_d}(G,X)|_{x_v=0}=\langle 1\rangle$. Note that, if $\lambda=0$, then $m=k$, $\gamma_v=\gamma_d$, $I_{\gamma_d}(G-v, X)$ is trivial, and $ P_{m}^{k+i}(v) \cdot I_{\gamma_d}(G- v, X)= P_{m}^{k+i}(v)$. Therefore, if we assume that $\lambda=0$, \begin{eqnarray*} I_{\gamma_d+k}(d^{k+i}(G,v),X) &=& \left\langle \left\{ P_{l}^{k+i}(v) \cdot I_{\gamma_d+k-l}(G, X)|_{x_v=0} \right\}_{l=0}^{k-1}, P_{k}^{k+i}(v)\right\rangle\\ &=& \left\langle \left\{ P_{l}^{k+i}(v) \cdot I_{\gamma_d+k-l}(G, X)|_{x_v=0} \right\}_{l=0}^{k}\right\rangle.
\end{eqnarray*} Otherwise ($\lambda=1$), taking $l=k$ we get that $P_{k}^{k+i+1}(v) \cdot I_{\gamma_d}(G, X)|_{x_v=0}=P_{k}^{k+i+1}(v)$ and therefore \begin{eqnarray*} I_{\gamma_d+k}(d^{k+i+\lambda}(G,v),X)=I_{\gamma_d+k}(d^{k+i+1}(G,v),X) &=& \left\langle \left\{ P_{l}^{k+i+1}(v) \cdot I_{\gamma_d+k-l}(G, X)|_{x_v=0} \right\}_{l=0}^{k}\right\rangle\\ &=& \left\langle \left\{ P_{l}^{k+i+\lambda}(v) \cdot I_{\gamma_d+k-l}(G, X)|_{x_v=0} \right\}_{l=0}^{k}\right\rangle. \end{eqnarray*} \end{proof} When $k=1$, Theorem~\ref{teo:deq} reduces to the following simpler form: \[ I_{\gamma_d+1}(d^{i+1}(G,v),X)= \langle x_{v^0},x_{v^1}, \ldots, x_{v^{i+1}}, I_{\gamma_d+1}(G,X)|_{x_v=0}\rangle, \] for all $i\geq \lambda$, which is similar to Lemma~\ref{lema:d}. We recall that $\lambda=\lambda(\gamma_{\mathcal P}(G-v), \gamma_{\mathcal P}(d(G,v)))$. \begin{Remark}\label{half} Given a fixed integer $k \geq \lambda+1$, we have that Theorem~\ref{teo:deq} implies that \[ I_{\gamma_d+j}(d^{k}(G,v),X)=\left\langle \left\{ P_{l}^{k}(v) \cdot I_{\gamma_d+j-l}(G, X)|_{x_v=0} \right\}_{l=0}^{j} \right\rangle, \] for all $j$ such that $1\leq j\leq k-\lambda$. That is, Theorem~\ref{teo:deq} does not describe all the critical ideals of $d^{k}(G,v)$. \end{Remark} In order to get a better understanding of Theorem~\ref{teo:deq}, we present the following example. \begin{Example}\label{example:deq} Let $G$ be the cycle with four vertices (see Figure~\ref{fig:03}) and sign $\sigma$ given by \[ \sigma(e)= \begin{cases} -1 & \text{ if } e=v_1v_4, v_4v_3,\\ 1 & \text{ otherwise.} \end{cases} \] By using a computer algebra system, we can verify that $\gamma=\gamma_{\mathbb{Z}}(G)=2$, $\gamma_{v_1}=\gamma_{\mathbb{Z}}(G- v_1)=2$, and $\gamma_d=\gamma_{\mathbb{Z}}(d(G,v_1))=2$. Thus $\lambda=\lambda(\gamma_{v_1},\gamma_d)=\lambda(2,2)=0$.
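Such computations can be spot-checked without specialized software. Since $I_4(G,X)=\langle \det L(G,X)\rangle$, and both $\det L(G,X)$ and the generator $x_1x_2x_3x_4+x_1x_2+x_2x_3-x_1x_4-x_3x_4-4$ are multilinear in each $x_i$, it suffices to compare them on the grid $\{0,1\}^4$ (a multilinear polynomial is determined by its values there). A sketch in pure Python, with the matrix taken from Figure~\ref{fig:03}:

```python
from itertools import product

def det(M):
    """Determinant by cofactor expansion along the first row."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([r[:j] + r[j + 1:] for r in M[1:]])
               for j in range(len(M)))

def laplacian(x1, x2, x3, x4):
    # generalized Laplacian of the signed cycle of Figure fig:03
    return [[x1, -1, 0, 1],
            [-1, x2, -1, 0],
            [0, -1, x3, -1],
            [-1, 0, 1, x4]]

def generator(x1, x2, x3, x4):
    # the claimed generator of I_4(G, X)
    return x1*x2*x3*x4 + x1*x2 + x2*x3 - x1*x4 - x3*x4 - 4

# both sides are multilinear, so agreement on {0,1}^4 proves the identity
assert all(det(laplacian(*p)) == generator(*p)
           for p in product((0, 1), repeat=4))
```

The same grid trick applies to any of the stated generators that are multilinear in the diagonal variables.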
Moreover, it can be checked that $I_3(G,X)=\langle x_2 + x_4, x_1-x_3,x_3x_4+2 \rangle$ and $I_4(G,X)=\langle x_1x_2x_3x_4+x_1x_2+x_2x_3-x_1x_4-x_3x_4-4 \rangle$. \begin{figure}[h] \begin{center} \begin{tabular}{c@{\extracolsep{2cm}}c} \multirow{9}{20mm}{ \vspace{25mm} \begin{tikzpicture}[scale=1, line width=0.9pt] \tikzstyle{every node}=[minimum width=4pt, inner sep=0pt, circle] \draw (45:1) node (v1) [draw,fill=gray] {}; \draw (135:1) node (v2) [draw,fill=gray] {}; \draw (225:1) node (v3) [draw,fill=gray] {}; \draw (315:1) node (v4) [draw,fill=gray] {}; \draw[->,red, bend right] (v1) edge (v4); \draw[->,bend right] (v4) edge (v1); \draw[->,bend right] (v2) edge (v3); \draw[->,bend right] (v3) edge (v2); \draw[->,bend right] (v1) edge (v2); \draw[->,red, bend right] (v2) edge (v1); \draw[->,bend right] (v3) edge (v4); \draw[->,bend right] (v4) edge (v3); \draw (v2)+(-0.2,0.2) node () {\small $v_1$}; \draw (v3)+(-0.2,-0.2) node () {\small $v_2$}; \draw (v4)+(0.2,-0.2) node () {\small $v_3$}; \draw (v1)+(0.2,0.2) node () {\small $v_4$}; \draw[red] (0,0.6) node () {\small $-$}; \draw[red] (0.3,0) node () {\small $-$}; \end{tikzpicture} } & \\ & $ L(G, X)= \left[\begin{array}{cccccc} x_1 & -1 & 0 & 1 \\ -1 & x_2 & -1 & 0 \\ 0 & -1 & x_3 & -1 \\ -1 & 0 & 1 & x_4 \end{array}\right] $ \end{tabular} \end{center} \caption{A signed multidigraph $G$ with four vertices and its generalized Laplacian matrix.} \label{fig:03} \end{figure} \noindent Since $I_3(G,X)|_{x_{1}=0}=\langle 2,x_3,x_2 + x_4 \rangle$, Theorem~\ref{teo:deq} implies that \[ I_{3}(d^{i+1}(G,v_1),X)=\left\langle P_{1}^{i+1}(v_1), I_3(G,X)|_{x_{1}=0} \right\rangle=\left\langle \{x_{1^l}\}_{l=0}^{i+1}, 2,x_3,x_2 + x_4 \right\rangle \text{ for all }i\geq 0. 
\] Also, since $I_4(G,X)|_{x_{1}=0}=\langle x_2x_3-x_3x_4-4 \rangle$, by Theorem~\ref{teo:deq} {\small \begin{eqnarray*} I_4(d^{i+2}(G,v_1),X)&\!\!\!\!=\!\!\!\!& \left\langle P_{2}^{i+2}(v_1), P_{1}^{i+2}(v_1)\cdot I_3(G,X)|_{x_{1}=0}, I_4(G,X)|_{x_{1}=0}\right\rangle\\ &\!\!\!\!=\!\!\!\!& \langle \{x_{1^l}x_{1^{l'}}\}_{0\leq l<l'\leq i+2},\{2x_{1^l}\}_{l=0}^{i+2}, \{x_{1^l}x_3 \}_{l=0}^{i+2}, \{x_{1^l}(x_2\!+\!x_4)\}_{l=0}^{i+2}, x_2x_3\!-\!x_3x_4\!-\!4 \rangle \text{ for all }i\geq 0. \end{eqnarray*} } Finally, since $I_{j}(G,X)=\langle 0 \rangle$ for all $j\geq 5$, {\small \begin{eqnarray*} I_{k+2}(d^{k+i}(G,v_1),X)&\!\!\!\!=\!\!\!\!& \langle P_{k}^{k+i}(v_1), P_{k-1}^{k+i}(v_1)\cdot I_{3}(G,X)|_{x_{1}=0}, P_{k-2}^{k+i}(v_1)\cdot I_{4}(G,X)|_{x_{1}=0}\rangle\\ &\!\!\!\!=\!\!\!\!& \langle P_{k}^{k+i}(v_1),\{2, x_3,x_2\!+\!x_4\}\cdot P_{k-1}^{k+i}(v_1), (x_2x_3\!-\!x_3x_4\!-\!4)\cdot P_{k-2}^{k+i}(v_1)\rangle \text{ for all }i\geq 0,k\geq 1. \end{eqnarray*} } Moreover, the reader can check that \begin{eqnarray*} I_4(d(G,v_1), X)&=&\langle x_{1^0} (x_2+x_4), x_{1^1} (x_2+x_4), x_{1^0} (x_3x_4+2), x_2x_3-x_3x_4-4, \\ && x_{1^0} x_{1^1}x_4 +2x_{1^0}+2x_{1^1}, x_{1^0}x_3 + x_{1^1} x_3-x_{1^0} x_{1^1} \rangle\\ & \neq & \langle P_{2}^{i+1}(v_1), P_{1}^{i+1}(v_1)\cdot I_3(G,X)|_{x_{1}=0}, I_4(G,X)|_{x_{1}=0} \rangle. \end{eqnarray*} That is, Theorem~\ref{teo:deq} cannot be improved. \end{Example} The diagonal entries of twin vertices in the Laplacian matrix of $d^k(G,v)$ are equal (twin vertices have the same degree). Therefore, an important case of the critical ideals of $d^k(G,v)$ is obtained by associating the same variable to the duplicated vertices.
In this case Theorem~\ref{teo:deq} reduces to the following form: If $\gamma_{\mathcal P}(G) \geq 2$ and $x_v$ is the variable associated to the twins of $v$, then \[ I_{\gamma_d+k}(d^{k+\lambda+i}(G,v),X)= \left\langle \left\{ x_v^l \cdot I_{\gamma_d+k-l}(G, X)|_{x_v=0} \right\}_{l=0}^{k}\right\rangle \] for all $k\geq 1$ and $i\geq 0$. \subsection{The critical ideals of the replication of vertices} We now give the description of the critical ideals of $r^k(G,v)$. This part is structured similarly to the treatment of the critical ideals of $d^k(G,v)$. Given a subset $S$ of the natural numbers and a vertex $v\in V(G)$, let \[ \widetilde{P}_{l}^{S}(v)=\{\prod_{c\in C}x_{v^c}+1\, : \, C\in \binom{S}{l}\}. \] By convention, $\widetilde{P}_{0}^S(v)=\{1\}$. Also, for simplicity, $\widetilde{P}_{l}^{\{0\}\cup [k]}(v)$ will be denoted by $\widetilde{P}_{l}^{k}(v)$. Note that $I_{l+1}(K_{k+1},X)=\langle \widetilde{P}^k_{l}(v)\rangle$ for all $1\leq l\leq k-1$, where $K_{k+1}$ is the complete graph with $k+1$ vertices. We will use arguments similar to those used in the proof of Lemma~\ref{lema:gend}. \begin{Lemma}\label{lema:genr} Let $G$ be a signed multidigraph with $n\geq 2$ vertices and $v\in V(G)$.
If $k,j\geq 1$ and $m=\min(k,j-1)$, then { \begin{eqnarray*} I_j(r^k(G,v),X)&=&\left\langle\{\widetilde{P}_{l}^{k}(v) \cdot I_{j-l}(G,X)|_{x_v=-1}\}_{l=0}^{m-1}, \widetilde{P}_{m}^{k}(v) \cdot I_{j-m}(G\!-\!v,X),\right.\\ && \left.\widetilde{P}_{m}^{k}(v)\cdot {\rm minors}_{j-m}({\bf a},L(G\!-\!v,X)), \widetilde{P}_{m}^{k}(v)\cdot {\rm minors}_{j-m}(L(G\!-\!v,X), {\bf b}),\widetilde{S}_j^k\right\rangle \end{eqnarray*} } for all $1\leq j\leq n+k$, where $\widetilde{S}_j^k$ is equal to \[ \left\{\prod_{s=1}^{j}(x_{v^{l_s}}\!+\!1)\!-\!\sum_{s=1}^{j}\prod_{t\neq s}(x_{v^{l_t}}\!+\!1): 0\leq l_1\!<\cdots<l_j\leq k \right\}, \] when $j\leq k+1$, and equal to \[ \left\{ \det(Q)\cdot\prod_{t=0}^{k}(x_{v^{t}}\!+\!1)\!+\!\det(J(-1,{\bf a'};Q,{\bf b'}))\sum_{t=0}^{k}\prod_{s\neq t}(x_{v^{s}}\!+\!1): J(x_v,{\bf a'};Q,{\bf b'})\in M_{j\!-\!k}(L(G,X)) \right\}, \] when $j> k+1$. \end{Lemma} Since the proof of this lemma is technical and very similar to the arguments given in previous proofs, it is included at the end of this section. We now give an analogue of Theorem~\ref{teo:deq} for the replication of vertices. \begin{Theorem}\label{teo:req} Let $G$ be a signed multidigraph, $v\in V(G)$, $\gamma=\gamma_{\mathcal P}(G)$, $\gamma_r=\gamma_{\mathcal P}(r(G,v))$, $\gamma_v=\gamma_{\mathcal P}(G- v)$, and $\lambda=\lambda(\gamma_v,\gamma_r)$. If $\gamma\geq 2$, then $0\leq \gamma_r-\gamma_v\leq 2$ and \[ I_{\gamma_r+k}(r^{k+\lambda+i}(G,v),X)= \left\langle \left\{ \widetilde{P}_{l}^{k+\lambda+i}(v) \cdot I_{\gamma_r+k-l}(G, X)|_{x_v=-1} \right\}_{l=0}^{k}\right\rangle, \] for all $k\geq 1$ and $i\geq 0$. \end{Theorem} The proof follows by arguments similar to those used in the proof of Theorem~\ref{teo:deq}. \begin{proof} First, since $I_j(G,X)|_{x_v=-1}\subseteq I_{j-2}(G-v, X)$, Lemma~\ref{lema:r} implies that $0\leq \gamma_r-\gamma_v\leq 2$.
Now, applying Lemma~\ref{lema:genr} with $j=\gamma_r+k$ and $k=k+\lambda+i$ we have that $I_{\gamma_r+k}(r^{k+\lambda+i}(G,v),X)$ is equal to \begin{gather*} \left\langle \left\{ \widetilde{P}_{l}^{k+\lambda+i}(v) \cdot I_{\gamma_r+k-l}(G, X)|_{x_v=-1} \right\}_{l=0}^{m-1}, \widetilde{P}_{m}^{k+\lambda+i}(v) \cdot I_{\gamma_r+k-m}(G-v, X),\right.\\ \left.\widetilde{P}_{m}^{k+\lambda+i}(v)\cdot {\rm minors}_{\gamma_r+k-m}({\bf a},L(G-v, X)), \widetilde{P}_{m}^{k+\lambda+i}(v)\cdot {\rm minors}_{\gamma_r+k-m}(L(G-v, X), {\bf b}), \widetilde{S}_{\gamma_r+k}^{k+\lambda+i}(G,v)\right\rangle, \end{gather*} where $m={\rm min}(k+\lambda+i, \gamma_r+k-1)=k+{\rm min}(\lambda+i, \gamma_r-1)\geq k+\lambda$. By Lemma~\ref{lema:r}, $I_{\gamma_r}(G,X)|_{x_v=-1}=\langle 1\rangle$. The rest follows in a similar way to the proof of Theorem~\ref{teo:deq}. \end{proof} We now give an example to illustrate Theorem~\ref{teo:req}. \begin{Example}\label{example:req} Let $G$ be the signed multidigraph given in Figure~\ref{figure:Replication}.
\begin{figure}[h] \begin{center} \begin{tabular}{c@{\extracolsep{2cm}}c} \multirow{9}{3cm}{ \begin{tikzpicture}[line width=1pt, scale=1] \tikzstyle{every node}=[inner sep=0pt, minimum width=4pt] \draw (0,0) node (v1) [draw, circle, fill=gray, label = right:{\small $v_1$}] {}; \draw (1,1) node (v2) [draw, circle, fill=gray, label = right:{\small $v_2$}] {}; \draw (1,-1) node (v3) [draw, circle, fill=gray, label = right:{\small $v_3$}] {}; \draw (-2,1) node (v4) [draw, circle, fill=gray, label = left:{\small $v_4$}] {}; \draw (-1,0) node (v5) [draw, circle, fill=gray, label = left:{\small $v_5$}] {}; \draw (-2,-1) node (v6) [draw, circle, fill=gray, label = left:{\small $v_6$}] {}; \draw[red] (1.2,0) node () {\small $-$}; \path[<-] (v1) edge (v2) edge (v3) edge[bend right] (v4) edge (v5) edge[bend left] (v6); \draw (v4) -- (v5) -- (v6) -- (v4); \draw[red] (v2) -- (v3); \end{tikzpicture} } &\\ & $ L(G, X)= \left[\begin{array}{cccccc} x_1 & 0 & 0 & 0 & 0 & 0 \\ -1 & x_2 & 1 & 0 & 0 & 0 \\ -1 & 1 & x_3 & 0 & 0 & 0 \\ -1 & 0 & 0 & x_4 & -1 & -1 \\ -1 & 0 & 0 & -1 & x_5 & -1 \\ -1 & 0 & 0 & -1 & -1 & x_6 \end{array}\right] $\\ & \\ \end{tabular} \end{center} \caption{A signed multidigraph $G$ with six vertices and its generalized Laplacian matrix.} \label{figure:Replication} \end{figure} By using a computer algebra system, we have that $\gamma_{\mathbb{Z}}(G)=\gamma_{\mathbb{Z}}(G- v_1)=2$ and $\gamma_{\mathbb{Z}}(r(G,v_1))=3$. Thus $\gamma_r-\gamma=1$ and $\lambda=\lambda(\gamma_v,\gamma_r)=\lambda(2,3)=1$. Also, it can be calculated that $I_{4}(G,X)|_{x_1=-1} = \langle x_4+1, x_5+1, x_6+1, x_2x_3-1 \rangle$, \[ I_{5}(G,X)|_{x_1=-1} = \langle (x_4+1)\cdot (x_2x_3-1), (x_5+1)\cdot (x_2x_3-1), (x_6+1)\cdot (x_2x_3-1), x_4x_5x_6-x_4-x_5-x_6-2 \rangle, \] and $I_{6}(G,X)|_{x_1=-1} = \langle (x_2x_3-1)\cdot (x_4x_5x_6-x_4-x_5-x_6-2) \rangle$.
Then Theorem~\ref{teo:req} implies \begin{eqnarray*} I_{4}(r^{i+2}(G,v_1),X) &=& \langle \{ x_{1^l}+1\}_{0\leq l\leq i+2}, I_{4}(G,X)|_{x_1=-1} \rangle\\ &=&\langle \{ x_{1^l}+1\}_{0\leq l\leq i+2}, x_4+1, x_5+1, x_6+1, x_2x_3-1 \rangle, \end{eqnarray*} for all $i\geq 0$. Also $I_{5}(r^{i+3}(G,v_1),X)$ is equal to \[ \left\langle \left\{ (x_{1^l}+1)(x_{1^{l'}}+1)\right\}_{0\leq l<l'\leq i+3}, I_{5}(G,X)|_{x_1=-1}, \left\{ (x_{1^l}+1)\cdot I_{4}(G,X)|_{x_1=-1}\right\}_{0\leq l\leq i+3} \right\rangle \] for all $i\geq 0$. Finally, $I_{k+3}(r^{k+i+1}(G,v_1),X)$ is equal to { \[ \left\langle \widetilde{P}_{k}^{k+i+1}(v_1), \widetilde{P}_{k-1}^{k+i+1}(v_1) \cdot I_{4}(G, X)|_{x_v=-1}, \widetilde{P}_{k-2}^{k+i+1}(v_1) \cdot I_{5}(G, X)|_{x_v=-1}, \widetilde{P}_{k-3}^{k+i+1}(v_1) \cdot I_{6}(G, X)|_{x_v=-1} \right\rangle, \] } for all $k\geq 3$ and $i\geq 0$. On the other hand, it can be checked that $I_{4}(r(G,v_1),X)$ is equal to \[ \langle \{ (x_{1^l}+1)(x_{l'}-1)\}_{0\leq l\leq 1, 2\leq l' \leq 3}, x_4+1, x_5+1, x_6+1, x_2x_3-1, x_{1} x_{1^1} -1 \rangle, \] which is different from $\langle x_{1}+1, x_{1^1}+1 , x_4+1, x_5+1, x_6+1, x_2x_3-1 \rangle$. Thus Theorem~\ref{teo:req} cannot be improved. \end{Example} \begin{Remark} Note that $I_i(K_{k+1},X)=\langle \widetilde{P}_{i-1}^{k}(v)\rangle$ for all $1\leq i\leq k$, see~\cite[Theorem 3.16]{critical}. Moreover, if $m={\rm min}(k,j-1)$, then \[ I_j(r^k(G,v),X)|_{x_{v}=-1}=\left\langle \left\{ \widetilde{P}_{l}^{[k]}(v) \cdot I_{j-l}(G, X)|_{x_v=-1} \right\}_{l=0}^{m} \right\rangle. \] That is, $I_j(r^k(G,v),X)|_{x_{v}=-1}$ behaves almost the same as $I_j(K_{k+1}\sqcup G, X)$.
\end{Remark} In a similar way to Theorem~\ref{teo:deq}, when we identify all the variables associated to the replicated copies of $v$, Theorem~\ref{teo:req} takes the following form: If $\gamma_{\mathcal P}(G) \geq 2$ and $x_v$ is the variable associated to all the twins of $v$, then \[ I_{\gamma_r+k}(r^{k+\lambda+i}(G,v),X)= \left\langle \left\{ (x_v^l+1) \cdot I_{\gamma_r+k-l}(G, X)|_{x_v=-1} \right\}_{l=0}^{k}\right\rangle \] for all $k\geq 1$ and $i\geq 0$. Examples~\ref{example:deq} and~\ref{example:req} show that the results obtained in this article are tight. By using Theorems~\ref{teo:deq} and~\ref{teo:req} we cannot determine all the critical ideals of the graph $G^{\bf d}$ for some ${\bf d}\in \mathbb{Z}^{V(G)}$ in terms of the critical ideals of $G$. However, there exist some special cases in which we can determine their critical ideals using very similar ideas. For instance, in the following subsection we present the case of the complete bipartite graph. \subsection{Critical ideals of the complete bipartite graph}\label{Sbipartite} Given $m\geq n\geq 1$, let $K_{n,m}$ be the complete bipartite graph with bipartition $(U,V)$ such that $U$ contains $n$ vertices and $V$ contains $m$ vertices. If $K_2$ is the complete graph with two vertices $v_1$ and $v_2$, then it is clear that $K_{n,m}=K_2^{(n-1,m-1)}$. Now, given $0\leq j\leq n-1$, let \[ \sigma_{j,n}(v)= \begin{cases} \sum_{r=1}^n\prod_{s\neq r} x_{v^s} & \text{ if } j=n-1,\\ P_{j}^{n-1}(v) & \text{ otherwise}.
\end{cases} \] \begin{Theorem}\label{Tbipartite} If $m\geq n\geq 2$, then {\small \[ I_j(K_{n, m}, X)= \begin{cases} \langle \{\sigma_{r,n}(v_1)\cdot \sigma_{s,m}(v_2): r+s=j-2, (0,0)\leq (r,s)\leq (n-1,m-1)\}\rangle & \text{ if } 2\leq j\leq n+m-2,\\ \langle \sigma_{n-1,n}(v_1)\cdot \sigma_{m-2,m}(v_2), \sigma_{n-2,n}(v_1)\cdot \sigma_{m-1,m}(v_2), P_{n-1}^{n-1}(v_1)\cdot P_{m-1}^{m-1}(v_2)\rangle \!\!\!\!& \text{ if } j=n+m-1,\\ \langle \prod_{r=1}^n x_{1^r} \cdot \prod_{s=1}^m x_{2^s}-\sigma_{n-1,n}(v_1)\cdot \sigma_{m-1,m}(v_2)\rangle & \text{ if } j=n+m. \end{cases} \] } \end{Theorem} The results obtained here can be used to determine a large part of the critical ideals of the complete bipartite graph. Since $K_{n, 2}=d^{n-2}(K_{2, 2},v_{1^1})$ and \[ I_j(K_{2, 2}, X)= \begin{cases} \langle 1\rangle& \text{ if }j=1,2,\\ \langle x_1+x_{1^1},x_2+x_{2^1}, x_1x_2\rangle& \text{ if }j=3,\\ \langle x_1x_{1^1}x_2x_{2^1}-x_1x_2-x_1x_{2^1}-x_{1^1}x_2-x_{1^1}x_{2^1}\rangle & \text{ if }j=4, \end{cases} \] $\gamma_v=\gamma_{\mathbb{Z}}(K_{1,2})=2=\gamma_{\mathbb{Z}}(K_{3,2})=\gamma_d$. Thus $\lambda=0$, $I_3(K_{2, 2}, X)|_{x_{1^1}=0}=\langle x_1,x_{2}+x_{2^1}\rangle$ and $I_4(K_{2, 2}, X)|_{x_{1^1}=0}=\langle x_1\cdot(x_{2}+x_{2^1}) \rangle$. Applying Theorem~\ref{teo:deq} with $k=j-2$ and $i=n-j\geq 0$ we get that \begin{eqnarray*} I_j(K_{n, 2}, X)&=&I_{2+(j-2)}(d^{n-2}(K_{2, 2},v_{1^1}), X)\\ &=&\langle P_{j-2}^{n-2}(v_{1^1}), P_{j-3}^{n-2}(v_{1^1})\cdot I_3(K_{2, 2}, X)|_{x_{1^1}=0}, P_{j-4}^{n-2}(v_{1^1})\cdot I_4(K_{2, 2}, X)|_{x_{1^1}=0}\rangle\\ &=&\langle \{\sigma_{j-2,n}(v_1)\cdot \sigma_{0,2}(v_2), \sigma_{j-3,n}(v_1)\cdot \sigma_{1,2}(v_2)\}\rangle \end{eqnarray*} for all $3\leq j\leq n+m-2$. In a similar way we can use the critical ideals of $K_{n,2}$ to determine the first $m$ critical ideals of $K_{n,m}$. That is, we can determine more than one half of the critical ideals of $K_{n,m}$.
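The generator of $I_4(K_{2,2},X)$ above is again $\det L(K_{2,2},X)$, so it can be spot-checked numerically exactly as before: both sides are multilinear in each diagonal variable, hence agreement on the grid $\{0,1\}^4$ proves the identity. A pure-Python sketch (the diagonal is ordered $x_1, x_{1^1}, x_2, x_{2^1}$):

```python
from itertools import product

def det(M):
    """Determinant by cofactor expansion along the first row."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([r[:j] + r[j + 1:] for r in M[1:]])
               for j in range(len(M)))

def laplacian_k22(a, b, c, d):
    # generalized Laplacian of K_{2,2}; diagonal order: v_1, v_1^1, v_2, v_2^1
    return [[a, 0, -1, -1],
            [0, b, -1, -1],
            [-1, -1, c, 0],
            [-1, -1, 0, d]]

def generator_k22(a, b, c, d):
    # claimed generator of I_4(K_{2,2}, X)
    return a*b*c*d - a*c - a*d - b*c - b*d

# multilinearity makes agreement on {0,1}^4 a proof of the identity
assert all(det(laplacian_k22(*p)) == generator_k22(*p)
           for p in product((0, 1), repeat=4))
```

The same check could in principle be run for any fixed $n$ and $m$ against the closed form of Theorem~\ref{Tbipartite}.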
The remaining critical ideals can be determined using similar, but more specific, techniques. In a more general setting, Theorem~\ref{teo:deq} can be used to determine a part of the critical ideals of the complete multipartite graphs. Moreover, Theorems~\ref{teo:deq} and~\ref{teo:req} can be improved in the special case when several vertices are duplicated and replicated simultaneously, which allows us to describe almost completely the critical ideals of complete multipartite graphs and threshold graphs. \subsection{Proofs of Lemmas~\ref{lema:gend} and~\ref{lema:genr}} \mbox{} \textbf{Proof of Lemma \ref{lema:gend}:} The generalized Laplacian matrix $L(d^k(G,v),X)$ of $d^k(G,v)$ is equal to \[ J({\rm diag}(x_{v^0},\ldots, x_{v^k}),{\bf a};L(G-v,X),{\bf b}), \] for some ${\bf a},{\bf b} \in \mathcal{P}^{n-1}$. Let $\mathcal{I,I'}\subseteq [n+k]$ be two sets of size $j$, $h=|\mathcal{I}\cap [k+1]|$, $h'=|\mathcal{I'}\cap [k+1]|$, and \[ m_{\mathcal{I,I'}}={\rm det}(L(d^k(G,v),X)[\mathcal{I,I'}]). \] Clearly $0\leq h,h'\leq m+1$. If $h=h'=0$, then $m_{\mathcal{I,I'}}\in {\rm minors}_{j}(L(G-v,X))$ and $m_{\mathcal{I,I'}}\in I_{j}(G-v,X)$. Now assume that $h=0$. If $h'\geq 2$, then two columns of $L(d^k(G,v),X)[\mathcal{I,I'}]$ are equal, and $m_{\mathcal{I,I'}}=0$. Also, if $h'=1$, then $m_{\mathcal{I,I'}}\in {\rm minors}_j({\bf a}, L(G-v,X))$. We can use similar arguments when $h'=0$. Thus, we assume that $h,h'\geq 1$.
Now by Lemma~\ref{lema:det1} we have that \[ m_\mathcal{I,I'}= \begin{cases} 0 & \text{ if } |h-h'|\geq 2,\\ {\rm det}\left[\begin{array}{cc} P&{\bf 1}\end{array}\right]\cdot {\rm det}\left[\begin{array}{c}{\bf b}'\\Q\end{array}\right]& \text{ if } h-h'=1,\\ {\rm det}\left[\begin{array}{c}P\\{\bf 1}\end{array}\right]\cdot{\rm det}\left[\begin{array}{cc}{\bf a}'^T &Q\end{array}\right] & \text{ if } h'-h=1,\\ {\rm det}(P)\cdot {\rm det}(Q)-{\rm det}(J(P,{\bf 1};0,{\bf 1})) \cdot {\rm det}(J(0,{\bf a'};Q,{\bf b'})) & \text{ if } h=h', \end{cases} \] for some submatrix $P$ of ${\rm diag}(x_{v^0},\ldots, x_{v^k})$, some submatrix $Q$ of $L(G- v, X)$, and some subvectors ${\bf a}'$ of ${\bf a}$ and ${\bf b}'$ of ${\bf b}$. Clearly, ${\rm det}\left[\begin{array}{cc} P& {\bf 1}\end{array}\right]\neq 0$ if and only if (up to row and column permutations) \[ P=\left[\begin{array}{c} {\rm diag}(x_{v^{i_1}}, \ldots, x_{v^{i_{h'}}})\\ {\bf0}\end{array}\right]. \] If $h-h'=1$, then $m_\mathcal{I,I'}\in P_{h'}^{k}(v)\cdot {\rm minors}_{j-h'}(L(G- v, X),{\bf b})\subsetneq P_{h'}^{k}(v)\cdot I_{j-h'}(G,X)|_{x_v=0}$, for all $1\leq h'\leq m$. Similarly, if $h'-h=1$, then $m_\mathcal{I,I'}\in P_{h}^{k}(v)\cdot {\rm minors}_{j-h}({\bf a}, L(G- v, X))\subsetneq P_{h}^{k}(v)\cdot I_{j-h}(G,X)|_{x_v=0}$, for all $1\leq h\leq m$. On the other hand, if $h=h'$ we have the following cases: \noindent Case I: If $P$ has at least two zero rows, then ${\rm det}(P)=0$, ${\rm det}(J(P,{\bf 1};0,{\bf 1})) =0$, and $m_\mathcal{I,I'}=0$. \noindent Case II: If $P$ has only one zero row, then ${\rm det}(P)=0$, ${\rm det}(J(P,{\bf 1};0,{\bf 1})) =\prod_{t=1}^{h-1} x_{v^{i_t}}$, and \[ m_\mathcal{I,I'}=\prod_{t=1}^{h-1} x_{v^{i_t}}\cdot {\rm det}(J(0,{\bf a}';Q,{\bf b}')), \] for some $(j-h+1)\times (j-h+1)$-submatrix $J(0,{\bf a}';Q,{\bf b}')$ of $L(G, X)|_{x_v=0}$.
Thus $m_\mathcal{I,I'}\in P_{h-1}^{k}(v)\cdot {\rm minors}_{j-h+1}({\bf a}, L(G- v, X),{\bf b})\subsetneq P_{h-1}^{k}(v)\cdot I_{j-h+1}(G,X)|_{x_v=0}$, for all $2\leq h \leq m-1$. \noindent Case III: If $P$ has no zero row, then \[ m_\mathcal{I,I'}= \begin{cases} \prod_{t=1}^h x_{v^{i_t}}\cdot {\rm det}(Q)+\sum_{t=1}^h \prod_{s\neq t} x_{v^{i_s}}\cdot {\rm det}(J(0,{\bf a'};Q,{\bf b'})) & \text{ if } h< j,\\ \prod_{t=1}^h x_{v^{i_t}} & \text{ if } h= j, \end{cases} \] for some $(j-h+1)\times(j-h+1)$-submatrix $J(0,{\bf a'};Q,{\bf b'})$ of $L(G, X)|_{x_v=0}$, and for all $1\leq h \leq m$. Moreover, since \[ \sum_{t=1}^h \prod_{s\neq t} x_{v^{i_s}}\cdot {\rm det}(J(0,{\bf a'};Q,{\bf b'})) \in \langle P_{h-1}^{k}(v)\cdot {\rm minors}_{j-h+1}({\bf a}, L(G-v,X),{\bf b})\rangle \] and $\prod_{t=1}^h x_{v^{i_t}}\cdot {\rm det}(Q)=m_\mathcal{I,I'}-\sum_{t=1}^h \prod_{s\neq t} x_{v^{i_s}}\cdot {\rm det}(J(0,{\bf a'};Q,{\bf b'})) \in \langle P_{h}^{k}(v)\cdot{\rm minors}_{j-h}(L(G- v, X)) \rangle\subsetneq P_{h}^{k}(v)\cdot I_{j-h}(G,X)|_{x_v=0}$ for all $0\leq h\leq m-1$, we get the result. \bigskip \textbf{Proof of Lemma \ref{lema:genr}:} Let $\mathcal{I,I'}\subseteq [n+k]$ be two sets of size $j$, $h=|\mathcal{I}\cap [k+1]|$ and $h'=|\mathcal{I'}\cap [k+1]|$. Clearly $0\leq h,h'\leq m+1$ and $L(r^k(G,v),X)=J(L(K_{k+1},X), {\bf a};L(G- v, X), {\bf b})$ for some $\textbf{a},\textbf{b}\in\mathcal{P}^{n-1}$. Let $m_\mathcal{I,I'}=\det(L(r^k(G,v),X)[\mathcal{I,I'}])$. We can use the same arguments used in the proof of Lemma~\ref{lema:gend} for the case when $h=0$ or $h'=0$. 
On the other hand, by Lemma~\ref{lema:det1} \[ m_\mathcal{I,I'}=\left\{\begin{array}{ll} 0&\textrm{if }|h-h'|\geq 2,\\ \det\left[\begin{array}{cc} P& {\bf 1}^T\end{array}\right]\det\left[\begin{array}{c} \textbf{b}'\\ Q\end{array}\right]&\textrm{if }h-h'=1,\\ \det\left[\begin{array}{c} P\\ {\bf 1}\end{array}\right]\det\left[\begin{array}{cc} \textbf{a}'^T& Q\end{array}\right]&\textrm{if }h'-h=1,\\ \det(P)\det(Q)-\det\left[\begin{array}{cc} P& {\bf 1}^T\\{\bf 1}&0\end{array}\right]\det\left[\begin{array}{cc} 0& \textbf{b}'\\ \textbf{a}'^T&Q\end{array}\right]&\textrm{if }h=h', \end{array}\right. \] where $P$ is a submatrix of $L(K_{k+1},X)$, $Q$ is a submatrix of $L(G- v, X)$, $\textbf{a}'$ is a subvector of $\textbf{a}$ and $\textbf{b}'$ is a subvector of $\textbf{b}$. If $h-h'=1$, then $\det\left[\begin{array}{cc} P& {\bf 1}^T\end{array}\right]\neq 0$ if and only if (up to row and column permutations) \[ P=\left(\begin{array}{cccc} x_{v^{l_1}}& & -1&-1\\ &\ddots& \\ -1& &x_{v^{l_{h'}}}& -1\end{array}\right)^T \] for some $0\leq l_1<\cdots< l_{h'}\leq k$. Since $\det\left[\begin{array}{cc} P& {\bf 1}^T\end{array}\right]=\prod_{s=1}^{h'} (x_{v^{l_s}}+1)$, $m_\mathcal{I,I'}\in \widetilde{P}_{h'}^{k}(v) \cdot {\rm minors}_{j-h'}(L(G-v, X),{\bf b})\subsetneq \widetilde{P}_{h'}^{k}(v) \cdot I_{j-h'}(G,X)|_{x_v=-1}$. In a similar way, if $h'-h=1$, then $m_\mathcal{I,I'}\in\widetilde{P}_{h}^{k}(v)\cdot I_{j-h}(G,X)|_{x_v=-1}$. Now assume that $h=h'$. If $P$ has two rows equal to $-{\bf 1}$, then $m_\mathcal{I,I'}=0$. Let \[ R=\left(\begin{array}{ccc} x_{v^{l_1}}& & -1\\ &\ddots& \\ -1& &x_{v^{l_h}}\end{array}\right) \] where $0\leq l_1<\cdots<l_h\leq k$. If $P$ has only one row equal to $-{\bf 1}$, then $P$ is equal (up to row and column permutations) to $R|_{x_{v^{l_h}}=-1}$.
Since ${\rm det}(R|_{x_{v^{l_h}}=-1})=-\prod_{s=1}^{h-1}(x_{v^{l_s}}+1)$ and $\det(J(R|_{x_{v^{l_h}}=-1},{\bf 1};0,{\bf 1}))=-\prod_{s=1}^{h-1}(x_{v^{l_s}}+1)$, \[ m_\mathcal{I,I'}=\left(\det(J(0,{\bf a'};Q,{\bf b'}))-\det(Q)\right)\prod_{s=1}^{h-1}(x_{v^{l_s}}+1) = \det(J(-1,{\bf a'};Q,{\bf b'}))\prod_{s=1}^{h-1}(x_{v^{l_s}}+1), \text{ for all } 1\leq h \leq m. \] Thus $m_\mathcal{I,I'}\in\langle \widetilde{P}_{h-1}^{k}(v)\cdot I_{j-h+1}(G,X)|_{x_v=-1}\rangle$. Finally, if $P$ has no row equal to $-{\bf 1}$, then $P$ is equal (up to row and column permutations) to $R$. Since ${\rm det}(R)=\prod_{s=1}^{h}(x_{v^{l_s}}+1)-\sum_{s=1}^{h}\prod_{t\neq s}(x_{v^{l_t}}+1)$ (see~\cite[Theorem 3.15]{critical}) and $\det(J(R,{\bf 1};0,{\bf 1})) = -\sum_{s=1}^{h}\prod_{t\neq s}(x_{v^{l_t}}+1)$, \begin{eqnarray*} m_\mathcal{I,I'}&=& \det(Q)\cdot \prod_{s=1}^{h}(x_{v^{l_s}}+1)+\left(\det(J(0,{\bf a'};Q,{\bf b'}))-\det(Q)\right)\cdot\sum_{s=1}^{h}\prod_{t\neq s}(x_{v^{l_t}}+1)\\ &=& \det(Q)\cdot\prod_{s=1}^{h}(x_{v^{l_s}}+1)+\det(J(-1,{\bf a'};Q,{\bf b'}))\cdot\sum_{s=1}^{h}\prod_{t\neq s}(x_{v^{l_t}}+1),\text{ for all }1\leq h \leq m. \end{eqnarray*} Since $\det(Q)\cdot\prod_{s=1}^{h}(x_{v^{l_s}}+1)=m_\mathcal{I,I'}-\det(J(-1,{\bf a'};Q,{\bf b'}))\cdot\sum_{s=1}^{h}\left(\prod_{t\neq s}(x_{v^{l_t}}+1)\right) \in \widetilde{P}_{h}^{k}(v)\cdot I_{j-h}(G,X)|_{x_v=-1}$ we get the result. \section{Acknowledgements} The authors would like to thank the anonymous referee for their helpful comments. \bibliographystyle{abbrv}
\section{Introduction} It is well-known that financial markets can be strongly correlated in such a way that their market values show a similar behavior. Knowing the exact connection between two markets would be very helpful for risk-averse investment strategies. If two markets are perfectly correlated, it makes no difference whether one invests in either one of them or in both together; one simply cannot diversify the risk across both markets. If one market is known to lead the other, one can use the leading market as an indicator to predict the price development of the other market. Knowing this connection between the two markets can be useful for improving the investment strategy. Therefore we develop a method for quantifying the interrelation of two markets from a different point of view: we want to be able to identify a possible phase shift between two markets if they are correlated. This subject has been approached in a variety of articles. One approach is to decompose the time series of two markets on a scale-by-scale basis into components with different frequencies using wavelets. The lead-lag relationship is studied by comparing the components of one selected level of the wavelet transformation for two markets, see e.g. \cite{Dajcman2013,Fiedor2014,IK2006,KI2005,RL1998a,RL1998}. More on wavelet methods in finance can be found in the book of Gen\c{c}ay, Sel\c{c}uk and Whitcher~\cite{GSW2001}. Other methods working with correlation, auto-correlation and similar quantities can be found e.g. in~\cite{Chan1993,JD1998,JN1997,EK2011,GT2011,Iwaisako2007,SW1990}. Didier, Love and Per\'ia~\cite{DLP2012} studied the comovements during the 2007--2008 crisis. A different but related topic is the lead-lag relationship between news, e.g. on Twitter, and stock prices, see e.g. \cite{BMZ2011,MCB2011}. For intermarket analysis from the point of view of technical analysis see e.g. the book of Murphy~\cite{Murphy2004} and also that of Ruggiero~\cite{Ruggiero1997}.
However, to the best of the author's knowledge, the approaches found in the literature are not geometric, i.e. they do not take local extreme values of the time series into account. Decomposing a time series using wavelets makes it possible to write it as a sum of wavelike components with different frequency spectra. Comparing different markets by means of these components therefore compares only parts of the original time series. The problem is that these components can be hidden in the original time series, so a lag observed between components of the same level does not necessarily mean that this lag can be observed in the time series itself, e.g. by comparing reversal points. It is therefore not clear how to interpret the results with regard to an application. Since we want to obtain results describing an observable lead-lag relationship between two time series, we prefer a geometrical approach. For this reason we need significant points with which a lead or lag, if any, can be uniquely identified. Very important situations are reversal points, and thus the points in time of relevant local extreme values, which represent the moments of reversal. A possible lead or lag can then be seen directly by comparing the local extrema of both charts. Such an ansatz could be used for trading these financial products and offers deep insight into the lead-lag relationship between two markets, because an empirical distribution over all local phase shifts can be identified. In addition, the results are not hidden in just one single value, as with cross-correlation. The paper is organized as follows: The search for the relevant local extreme values is far from unique. Therefore, in Section~\ref{sec:2}, we discuss the approach used to find these extreme values for a given pair of markets which we want to compare. Using these values we can compute local phase shifts of both markets, which gives us a corresponding empirical distribution. 
To analyze the results we introduce directional statistics in Section~\ref{sec:3}. We then apply our approach to historical data, e.g. for foreign exchange, commodities and indexes, in Section~\ref{sec:4}. In Section~\ref{sec:5} we give some conclusions. \section{Method for intermarket analysis}\label{sec:2} Suppose we want to compare two financial underlyings, namely market $A$ and market $B$, for lead and lag. First we take one chart for each underlying with the same bar size, e.g. a \SI{60}{\minute} chart, depending on our interest. Now we want to decide whether these two charts are correlated and show a lead or lag. Of course, if both underlyings are fully uncorrelated we cannot compare them; therefore let us assume that there is a connection between these two charts. Since we prefer a geometrical ansatz we need the points in time of relevant local extreme values. If each maximum occurs for both charts at exactly the same time, and the same holds true for the minima, we can say that both underlyings run perfectly synchronously. If the maximum of chart $B$ occurs shortly after the maximum of chart $A$, we observe that market $B$ lags market $A$. Such a comparison could easily be done by hand in a very intuitive way. If one compares two markets and gets a feeling for the lead-lag relationship, e.g. that market $A$ leads $B$, one benefits directly from this knowledge, because a reversal point in market $B$ would most likely occur right after a reversal point in market $A$. This can be very useful for several strategies (for position entries and also for exits). Of course, doing an extensive study by hand would be very time consuming and not objective. For an automatic approach we first need an appropriate method to identify local extrema for both time series. 
The MinMax algorithm introduced by Maier-Paape~\cite{Maier-Paape2013} is a method which yields such a series of alternating relevant local extrema (called the MinMax process) and will therefore be used in the following. This method uses a so-called SAR (stop and reverse) process to identify up and down movements. If an up movement is detected, the MinMax algorithm searches for a maximum and fixes this local maximal value once the movement phase reverses to a down movement. Minimal values are searched for during down movement phases. The underlying SAR process could be the MACD (moving average convergence/divergence) indicator of~\cite{Appel2005} which, roughly speaking, indicates an up movement if the MACD series lies above its signal line and a down movement if it lies below. See \cite{Maier-Paape2013} for the details. The SAR process controls the frequency of detected local extreme values and, in general, is controlled by some parameters (the defaults for the MACD are 12, 26 and 9). In this paper we will always use the MACD as the SAR process. Instead of adjusting several parameters separately we use just a common factor, called the timescale, that scales the three default parameters at the same time. Increasing the timescale leads to fewer extreme values, while decreasing the timescale leads to more extreme values, i.e. a finer resolution. Note that the MACD series can oscillate quickly around the signal line, which leads to many small and insignificant local extreme values. To avoid this problem we require, for a change of the direction of the SAR process, that the distance between the MACD and its signal line exceeds a minimal threshold of $\delta = 0.3 \cdot \mbox{ATR} (100)$, where ATR denotes the average true range, see \cite[Subsection 2.1]{Maier-Paape2013} for the details. From now on we use this MinMax algorithm because it is a very flexible tool for identifying local extreme values. 
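The direction logic of this MACD-based SAR process can be sketched as follows. This is a minimal illustration, not the implementation of~\cite{Maier-Paape2013}; the function names, the pure-Python EMA helper and the externally supplied ATR series are our own assumptions:

```python
def ema(values, span):
    """Exponential moving average with smoothing factor 2/(span+1)."""
    alpha = 2.0 / (span + 1.0)
    out = [values[0]]
    for v in values[1:]:
        out.append(alpha * v + (1 - alpha) * out[-1])
    return out

def sar_directions(closes, atr, timescale=1.0, factor=0.3):
    """Return a direction (+1 up, -1 down) per bar of the close series.

    The direction flips only when |MACD - signal| exceeds the threshold
    delta = factor * ATR, which suppresses insignificant oscillations of
    the MACD around its signal line.  The common `timescale` factor
    scales all three default MACD parameters (12, 26, 9) at once.
    """
    fast, slow, sig = (round(timescale * p) for p in (12, 26, 9))
    macd = [f - s for f, s in zip(ema(closes, fast), ema(closes, slow))]
    signal = ema(macd, sig)
    directions, current = [], 1
    for m, s, a in zip(macd, signal, atr):
        diff = m - s
        if abs(diff) > factor * a:  # threshold delta = factor * ATR
            current = 1 if diff > 0 else -1
        directions.append(current)
    return directions
```

A maximum would then be fixed wherever `directions` switches from $+1$ to $-1$, and a minimum at the opposite switch.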
As far as we know this method is the only one which identifies local extreme values exactly and is continuously adjustable. Since a financial time series always contains some noise, there is no unique objective choice of relevant local extrema for it. Therefore this process needs to be parameter dependent in order to adjust the resolution of the minima and maxima. One question is how to choose the ``right'' parameter; this will be discussed at the end of this section. For the moment let us assume we have already found ``good'' parameters for market $A$. The MinMax process then yields consecutive minima and maxima denoted by $(t_i,X_i)_{i=1,...,N}$ with points in time $t_1\leq ...\leq t_N$ and consecutive price values $X_i$. To be able to compare these points, we measure the time in seconds since January 1, 1970. For this wavelike time series we can compute the mean wavelength by \begin{align}\label{eq:mean_wavelength} \lambda := \frac{1}{N-1}\sum_{i=1}^{N-1} 2(t_{i+1}-t_{i}) = 2\frac{t_{N}-t_{1}}{N-1}. \end{align} Note that $\lambda$ depends on the parameters used in the MinMax algorithm, since the minima and maxima depend on these parameters. Fixing these parameters for the second market gives us the extreme values $(\tilde{t}_i,\tilde{X}_i)_{i=1,...,\tilde{N}}$ with mean wavelength $\tilde{\lambda}$. Of course it makes no sense to compare both markets using these extreme values if the wavelengths $\lambda$ and $\tilde{\lambda}$ are very different. Therefore we fit the parameters of the MinMax process for market $B$ so that $\tilde{\lambda}=\lambda$ holds true. \begin{remark} Note that in general we cannot expect a constant but only a time dependent wavelength, which can vary a lot, see Figure \ref{fig:timedependet_wavelength}, where the moving average of wavelengths over $N-1=49$ cycles is shown, i.e. $\frac{1}{49}\sum_{i=s-49}^{s-1} 2(t_{i+1}-t_{i})$, where $s$ is the current time index. 
Therefore matching the mean wavelength for both markets means just matching the level of refinement and not the position of the extreme values themselves. \begin{figure} \centering \includegraphics{images/timedependet_wavelength} \caption{Moving average of wavelengths over $N-1=49$ cycles for S\&P 500 on a \SI{60}{\minute} chart.} \label{fig:timedependet_wavelength} \end{figure} \end{remark} Since we are interested in the lead-lag relationship between markets $A$ and $B$, we only need to find the relationship between the points in time of the extrema by finding the relative positions of $(\tilde{t}_j)_{j=1,...,\tilde{N}}$ within $(t_i)_{i=1,...,N}$. In this case we call market $A$ the primary market and market $B$ the secondary market. The overall procedure is as follows: \begin{enumerate} \item Fix the desired mean wavelength $\lambda^*>0$. \item Find all local extreme values $(t_i,X_i)_{i=1,...,N}$ and $(\tilde{t}_j,\tilde{X}_j)_{j=1,...,\tilde{N}}$ for the primary and the secondary market, respectively, such that the mean wavelengths~\eqref{eq:mean_wavelength} for both markets on the full data base match $\lambda^*$, i.e. $2\frac{t_{N}-t_{1}}{N-1} \approx \lambda^* \approx 2\frac{\tilde{t}_{\tilde{N}}-\tilde{t}_{1}}{\tilde{N}-1}$. \item Find $j_1,j_2\in\{1,...,\tilde{N}\}$ such that $\tilde{t}_{j_1}=\min\{\tilde{t}_j\,:\,\tilde{t}_j\geq t_1\}$ and $\tilde{t}_{j_2}=\max\{\tilde{t}_j\,:\,\tilde{t}_j< t_{N}\}$. \newline For each $j\in\{j_1,...,j_2\}$ do the following: \begin{enumerate} \item Find $i\in\{1,...,N-1\}$ such that $t_i\leq \tilde{t}_j <t_{i+1}$. \item Define the phase shift of extreme value $(\tilde{t}_j,\tilde{X}_j)$ regarding the extreme values $(t_i,X_i)$ and $(t_{i+1},X_{i+1})$. Here we use the linear relative distance between the corresponding extreme values measured as an angle. 
We set \begin{align}\label{eq:alpha} \alpha_j^{\lambda^*} &:= \frac{\tilde{t}_j - s_j} {t_{i+1} - t_i} \cdot\pi\in[-\pi,\pi), \end{align} where \begin{align} s_j :=\begin{cases} t_i,&\text{if } X_i \text{ and } \tilde{X}_j \text{ are both maxima or both minima},\\ t_{i+1},&\text{if } X_{i+1} \text{ and } \tilde{X}_j \text{ are both maxima or both minima}. \end{cases} \end{align} Figure~\ref{fig:omega} shows some examples for the position of a maximum of the secondary market relative to some extreme values of the primary market. \end{enumerate} \item We end up with the empirical circular distribution $(\alpha_j^{\lambda^*})_{j=j_1,...,j_2}\subset[-\pi,\pi)$ depending on the mean wavelength $\lambda^*$. \end{enumerate} Negative values of $\alpha$ correspond to a front-running (lead) of the secondary market, while positive values correspond to a time lag of the secondary market. The result can be interpreted on the unit sphere $S^1=\{(\sin\alpha,\cos\alpha)\in\ensuremath{\mathbb{R}}^2\,:\,\alpha\in[-\pi,\pi)\}$ and gives us all observations of local phase shifts between the two markets. \begin{figure} \centering \includegraphics{images/circular_distribution} \caption{Computation of $\alpha$ in \eqref{eq:alpha}.} \label{fig:omega} \end{figure} \begin{remark} This approach is independent of the opening hours of the stock exchanges for market $A$ and market $B$. Since we measure the points in time $t_i$ and $\tilde{t}_j$ in seconds since January 1, 1970, we simply insert these values into \eqref{eq:alpha} and the machinery works straightforwardly. \end{remark} \begin{remark}\label{rem:parameter_choice} The above method has only one parameter, namely the mean wavelength $\lambda^*$, see step 1. Therefore we can compute different distributions for different wavelengths. It turns out that in most cases the results do not depend on the wavelength. We therefore compute $(\alpha_i^{\lambda})_{i=1,...,n(\lambda)}$ for many values of the mean wavelength $\lambda$. 
For each $\lambda$ we can generate a histogram or rather a bar plot and at the end we can compute the average of all bars including standard deviation. \end{remark} \begin{remark}\label{rem:extrema_confirmed} Note that the extreme values cannot be determined in real time. There is always at least a small time lag. Therefore we can also identify such an empirical distribution if we use the point in time when the extreme value is confirmed by the MinMax algorithm instead of the point in time of the extreme value itself. \end{remark} \section{Directional statistics}\label{sec:3} Since we work with circular distributions, the mean and variance must be computed in an appropriate way, see e.g. \cite{Fisher1996,MJ1999}. This can be used to identify a possible phase shift. We introduce the basic statistical quantities in Subsection~\ref{sec:3.1}. For a deeper analysis we list some interesting statistical tests in Subsection~\ref{sec:3.2} and give an approximation of the lead or lag in Subsection~\ref{sec:3.3}. \subsection{Basic quantities}\label{sec:3.1} Now we will discuss how to calculate estimators, e.g. for the mean angular direction. Details on computations for a general distribution with a $2\pi$ periodic probability density function $f$ can be found in \cite[Section 3.2]{Fisher1996}. The first step is to identify the angles by vectors on the unit sphere $S^1$. Let $(\alpha_j)_{j=1,...,N}\subset[-\pi,\pi)$ be the outcomes of a discrete distribution for the phase shift of two markets of interest. We can identify each angle $\alpha_j$ with a point on the unit sphere \begin{align*} \mathbf{r}_j &:=\left(\begin{matrix}\sin\alpha_j \\ \cos\alpha_j\end{matrix}\right) \in S^1 \end{align*} for $j=1,...,N$. In this two-dimensional space we can compute the \textit{mean resultant vector} which is defined by \begin{align*} \hat{\mathbf{r}} & := \frac{1}{N}\sum_{j=1}^N \mathbf{r}_j. 
\end{align*} Note that for the length of $\hat{\mathbf{r}}$ we have $\|\hat{\mathbf{r}}\|_2\leq 1$ because it is a convex combination of vectors in $S^1$. If $\hat{\mathbf{r}}\neq 0$ choose the \textit{mean angular direction} $\hat{\alpha}\in[-\pi,\pi)$ such that \begin{align}\label{eq:NR} \left(\begin{matrix}\sin \hat{\alpha} \\ \cos \hat{\alpha}\end{matrix}\right) & = \frac{1}{\|\hat{\mathbf{r}}\|_2}\hat{\mathbf{r}}. \end{align} Of course $\hat{\mathbf{r}}$ could be zero and thus no unique mean angular direction would exist. This is the case, e.g., if the angles are uniformly distributed all around $S^1$. If this is the case for the phase shifts between two markets then there is no connection between them and the analysis of the results would already be finished. Since we are interested in at least slightly correlated markets we do not expect this behavior. Nevertheless even in the case where $\|\hat{\mathbf{r}}\|_2>0$, the length of $\hat{\mathbf{r}}$ could be small. This happens if the outcomes of the distribution have a large variance. In contrast a length of $\hat{\mathbf{r}}$ near $1$ indicates a small variance and a high concentration of the outcomes near to its mean angular direction. Therefore we need to consider the \textit{circular variance} (cf. \cite[Section~2.3.1, Equation (2.11)]{Fisher1996}) which can be defined by \begin{align*} \hat{S}&:=1-\|\hat{\mathbf{r}}\|_2 \in [0,1]. \end{align*} To be able to also measure the skewness and the peakedness we define the \textit{circular skewness} by \begin{align*} \hat{b}&:=\frac{1}{N}\sum_{j=1}^N \sin(2(\alpha_j-\hat{\alpha})) \in [-1,1] \end{align*} and the \textit{circular kurtosis} by \begin{align*} \hat{k}&:=\frac{1}{N}\sum_{j=1}^N \cos(2(\alpha_j-\hat{\alpha})) \in [-1,1]. \end{align*} Since we are interested in the possible lead or lag between two markets we want to reduce the influence of outliers which are far away from the mean angular direction. 
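The estimators $\hat{\alpha}$, $\hat{S}$, $\hat{b}$ and $\hat{k}$ above can be computed directly from the observed phase shifts. The following is a minimal sketch (the function name is our own, and the degenerate case $\hat{\mathbf{r}}=0$ is not handled):

```python
import math

def circular_stats(alphas):
    """Circular estimators for a sample of angles in [-pi, pi)."""
    n = len(alphas)
    # mean resultant vector (1/N) * sum_j (sin a_j, cos a_j)
    rs = sum(math.sin(a) for a in alphas) / n
    rc = sum(math.cos(a) for a in alphas) / n
    r_len = math.hypot(rs, rc)
    a_hat = math.atan2(rs, rc)           # mean angular direction
    s_hat = 1.0 - r_len                  # circular variance
    b_hat = sum(math.sin(2 * (a - a_hat)) for a in alphas) / n  # skewness
    k_hat = sum(math.cos(2 * (a - a_hat)) for a in alphas) / n  # kurtosis
    return a_hat, s_hat, b_hat, k_hat
```

For a sample concentrated near one angle, $\hat{S}$ is close to $0$ and $\hat{k}$ is close to $1$; a sample spread over the whole circle drives $\hat{S}$ towards $1$.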
For this reason we use a hat function on $S^1$ to weight the empirical distribution, with the hat near the position of the highest peak of the distribution. Then all reasonable data near the peak get high weights, and thus more influence in our statistics, while less important data, i.e. the outliers, obtain small weights. We expect that the peaks of the distributions are near zero up to some lead or lag, i.e. that the two markets are positively correlated. Therefore we use the hat function which has its hat (maximum) at zero and is zero (minimum) at $\pm\pi$. The first two plots of Figure~\ref{fig:von_mises} show an example of an observed distribution and its weighted counterpart, respectively. From the weighted distribution we can compute the \textit{weighted mean angular direction} $\hat{\alpha}^{(w)}$ as in \eqref{eq:NR}. \begin{figure} \centering \includegraphics[width=15.2cm]{images/comparison_of_distributions} \caption{First: Example of a possible distribution of phase shifts and a hat function on $S^1$; Second: Corresponding weighted version of the example distribution from the first plot; Third: Plot of the probability density function of a von Mises distribution with mean location parameter $\mu=0$ and concentration parameter $\kappa=50$; Fourth: Same as third but with $\kappa=1$.} \label{fig:von_mises} \end{figure} \subsection{Statistical tests}\label{sec:3.2} Most of the statistical tests require an underlying von Mises distribution, see e.g. \cite[Section 3.3.6]{Fisher1996}, which is often used as an analogue of the normal distribution on the unit sphere. The distribution we get in our application is not exactly a von Mises distribution but has a similar shape, see Figure~\ref{fig:von_mises}. In this figure the distribution of phase shifts resembles two superposed von Mises distributions, one with a large and one with a small concentration parameter $\kappa$. Thus it is possible that the phase shifts correspond to a von Mises distribution plus noise, e.g. 
white noise. Nevertheless we use the following statistical tests in order to be able to classify the results, even though they are designed for von Mises distributions. Since we do not know the underlying distribution of the phase shifts, we only have some realizations. Evaluating the formulas of Subsection~\ref{sec:3.1} with our observations gives the estimators, which will be denoted by $\hat{\alpha}$, $\hat{\alpha}^{(w)}$, $\hat{S}$, $\hat{b}$ and $\hat{k}$, respectively. Next we want to verify the quality of our mean angular direction. Therefore we compute the $(1-\delta)$ confidence intervals for the population mean, such that $L_1:=\hat{\alpha}-d$ and $L_2:=\hat{\alpha}+d$ are the lower and upper confidence limits of the mean angular direction, respectively, see \cite[Section 26.7]{Zar2010}. For the weighted mean $\hat{\alpha}^{(w)}$ we denote the half-width of the confidence interval by $d^{(w)}$. We always use $\delta=\SI{5}{\percent}$. To test for zero mean, which would imply that there is no lead-lag relationship, we can perform the one-sample test for the mean angle, which is similar to the one-sample $t$-test on a linear scale. Let $\alpha_0\in[-\pi,\pi)$ be the mean angular direction for which we want to test and $\bar{\alpha}$ the mean angular direction of the underlying (unknown) distribution. We test for \begin{align*} H_0:\,\,&\bar{\alpha}=\alpha_0,\\ H_1:\,\,&\bar{\alpha}\neq\alpha_0 \end{align*} by checking whether $\alpha_0\in[L_1,L_2]$ using our estimator $\hat{\alpha}$ and its \SI{95}{\percent} confidence interval, see \cite[Section 27.1 (c)]{Zar2010}. In our case we will set $\alpha_0=0$. The result of this test is then given by \begin{align*} h_m &:= \begin{cases} 0,&\text{if }H_0\text{ cannot be rejected, i.e. } \alpha_0=0\in [L_1,L_2],\\ 1,&\text{otherwise}. 
\end{cases} \end{align*} In Remark~\ref{rem:parameter_choice} we noted that we will generate empirical distributions for different mean wavelengths, say $n\in\ensuremath{\mathbb{N}}$ different values. To compare all these distributions for the same pair of markets we can use the one-factor ANOVA or Watson-Williams test (multi-sample test). It assesses whether the mean directions of two or more groups are identical, i.e. it tests for \begin{align*} H_0:\,\,&\text{All of $n$ groups share a common mean direction, i.e., }\bar{\alpha}^{(1)}=...=\bar{\alpha}^{(n)}.\\ H_1:\,\,&\text{Not all groups have a common mean direction,} \end{align*} see \cite[Section 27.4 (b)]{Zar2010}. The output of this test is a $p$-value, i.e. the probability of getting results which are at least as extreme as our observation, assuming the null hypothesis is true. Thus a large $p$-value indicates that the data are consistent with the null hypothesis. We denote this value by $p_{ww}\in [0,1]$. \subsection{Lead or lag}\label{sec:3.3} Using the mean angular direction $\hat{\alpha}$ and its confidence interval we can roughly approximate the lead or lag. Assume we have a mean wavelength of 100 candles on a \SI{60}{\minute} chart. The mean wavelength would then be approximately $\SI{60}{\minute}\cdot 100=\SI{6000}{\minute}$. This value corresponds to the angle $2\pi$. Thus the mean of the lead or lag $\ell$ can be approximated by \begin{align*} \ell \approx \frac{\hat{\alpha}}{2\pi}\cdot \SI{6000}{\minute} \end{align*} and the corresponding confidence interval is approximated by $[\ell-d_{\ell},\ell+d_{\ell}]$ where \begin{align*} d_{\ell} \approx \frac{d}{2\pi}\cdot \SI{6000}{\minute}. \end{align*} Analogously we can compute the lead or lag using the weighted mean angular direction, which we denote by $\ell^{(w)}$ and $d_{\ell}^{(w)}$, respectively, i.e. $\ell^{(w)}\approx\frac{\hat{\alpha}^{(w)}}{2\pi}\cdot \SI{6000}{\minute}$ and $d_{\ell}^{(w)}\approx \frac{d^{(w)}}{2\pi}\cdot \SI{6000}{\minute}$. 
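The phase-shift computation of Section~\ref{sec:2} and the conversion above can be sketched jointly. This is an illustrative reading of steps 3--4 and of the formula for $\ell$, with the function names and the boolean encoding of the extremum type (maximum or minimum) chosen by us:

```python
import bisect
import math

def phase_shifts(t, is_max, tt, tt_is_max):
    """Angles alpha_j for secondary extrema tt inside [t[0], t[-1]).

    t / tt are the extrema times of the primary / secondary market in
    seconds (alternating minima and maxima); is_max / tt_is_max flag
    whether each extremum is a maximum.
    """
    alphas = []
    for tj, mj in zip(tt, tt_is_max):
        if tj < t[0] or tj >= t[-1]:
            continue
        i = bisect.bisect_right(t, tj) - 1      # t[i] <= tj < t[i+1]
        # s_j is the neighboring primary extremum of the same kind as tj
        s = t[i] if is_max[i] == mj else t[i + 1]
        alphas.append((tj - s) / (t[i + 1] - t[i]) * math.pi)
    return alphas

def lead_lag_minutes(alpha, wavelength_min):
    """Convert an angle to minutes; positive means the primary market leads."""
    return alpha / (2 * math.pi) * wavelength_min
```

For instance, a secondary maximum shortly after a primary maximum yields a small positive angle, i.e. a lag of the secondary market.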
Note that a positive value for $\ell$ and $\ell^{(w)}$ means that the primary market leads the secondary one, and vice versa for a negative value. To answer the question of which market is ahead, if any, we make the following definition. \begin{definition}\label{def:lead_lag} For positively correlated markets, i.e. $|\hat{\alpha}^{(w)}|\leq\frac{\pi}{2}$, we say \emph{one market leads the other} if $\hat{\alpha}^{(w)}$ is significantly different from zero, i.e. \begin{align*} \text{if }\,\, \hat{\alpha}^{(w)}-d^{(w)}>0 &\quad\leadsto\quad \text{primary market leads,}\\ \text{if }\,\, \hat{\alpha}^{(w)}+d^{(w)}<0 &\quad\leadsto\quad \text{secondary market leads.} \end{align*} \end{definition} \section{Empirical study}\label{sec:4} Now we study different markets, from commodities to foreign exchange. In Subsection~\ref{sec:4.1} we explain the setting and give some details on the choice of parameters. The angular histograms and the statistical results are then shown in Subsection~\ref{sec:4.2}. \subsection{Settings}\label{sec:4.1} In this paper we focus on the \SI{60}{\minute} chart. The wavelengths we use to adjust the MinMax process for the primary market, see Remark~\ref{rem:parameter_choice}, range from 30 candles up to 180 candles. For the Watson-Williams test, see Subsection~\ref{sec:3.2}, this leads to $n=151$ groups. Since we have given the wavelength in number of candles, we proceed as follows to ``synchronize'' the markets: \begin{enumerate} \item Choose the desired mean wavelength $\lambda^*_{\text{candles}}\in\{30,31,...,180\}$ in number of candles. \item Adjust the parameter for the MinMax process on the primary market such that the wavelength of the primary market in number of candles, ignoring the time when the stock exchange is closed, matches $\lambda^*_{\text{candles}}$. \item Calculate the corresponding wavelength $\lambda^*$ in seconds for the primary market, this time taking into account the time when the stock exchange is closed. 
\item Adjust the MinMax process on the secondary market such that the wavelength of the secondary market in seconds matches $\lambda^*$, i.e. perform step 2 from Section~\ref{sec:2}, where the primary market is already fixed. \item Proceed with steps 3 and 4 from Section~\ref{sec:2}. \end{enumerate} For most computations of the directional statistics the MATLAB library CircStat~\cite{Berens2009} has been used, and all angles are measured in radians. The markets which we examine, including the period of time of the available candle data, are listed in Table~\ref{tab:markets}. Note that the start date is not the same for all markets. If we examine a combination of markets with different initial dates we use the shorter period of time for both markets. \begin{table} \centering { \footnotesize \begin{tabular}{llr} \toprule market & underlying & initial date\\ \midrule Eurex DAX & DAX Futures & December 10, 2003\\ Eurex BUND & Euro-Bund Futures & December 10, 2003\\ Eurex DJEST50 & Euro STOXX 50 Index Futures & December 10, 2003\\ CME MINI S\&P & E-mini S\&P 500 Futures & December 12, 2003\\ CME MINI NSDQ & E-mini NASDAQ 100 Futures & September 14, 2004\\ CME CMX GLD & Gold Futures & July \phantom{0}7, 2005\\ CME CMX SIL & Silver Futures & July \phantom{0}7, 2005\\ CME PH CRDE & Crude Oil Futures & November 29, 2004\\ CME PH NG & Natural Gas (Henry Hub) Physical Futures & November 29, 2004\\ CME\_CBT 30Y TB & U.S. Treasury Bond Futures & October 18, 2004\\ ICE\_NYBOT MNRUS2K & Russell 2000 Index Futures & September 20, 2004\\ Forex EUR-USD & EUR-USD & July 17, 2009\\ Forex JPY-USD & JPY-USD & July 17, 2009\\ Forex GBP-USD & GBP-USD & July 17, 2009\\ Forex CHF-USD & CHF-USD & July 17, 2009\\ \bottomrule \end{tabular} } \caption{Examined markets and the period of time of the used candle data of the \SI{60}{\minute} chart (terminal date is always May 15, 2014). 
All historical data are from FIDES.} \label{tab:markets} \end{table} \subsection{Results}\label{sec:4.2} Now we look at the results for several futures, indexes and foreign exchanges. The statistical quantities for the phase shift of the extreme values are shown in Table~\ref{tab:60min}, and those for the points in time of the confirmation of the extreme values in Table~\ref{tab:60min_confirmed}. The corresponding empirical distributions are given, according to the following remark, by Figures~\ref{fig:res1} to \ref{fig:res17}. \begin{remark}\label{rem:figures} (Notes on figures)\\ The label of each of the following figures states ``$A$ versus $B$'' and each figure shows the following four distributions (in this order): \begin{enumerate} \item Time of extrema: $A$ as primary and $B$ as secondary market. \item Time of extrema: $B$ as primary and $A$ as secondary market. \item Time of extrema confirmed (see Remark~\ref{rem:extrema_confirmed}): $A$ as primary and $B$ as secondary market. \item Time of extrema confirmed (see Remark~\ref{rem:extrema_confirmed}): $B$ as primary and $A$ as secondary market. \end{enumerate} All plots also contain the mean angular direction and the mean angular direction of the weighted distribution (weighted with the hat function, see Figure~\ref{fig:von_mises}). These directions are the green and the red line inside the circle, respectively. Additionally, each bin of the histograms contains information about the single distributions for each wavelength: it shows the largest value of this bin that occurred among the $151$ single distributions, the smallest such value, and the bin value of the combined distribution plus and minus the standard deviation. 
\end{remark} \begin{sidewaystable} {\footnotesize \defblack!10!white{black!10!white} \newcommand{\mc}[1]{\multicolumn{1}{c}{#1}} \newcommand{\bb}[1]{#1$^*$} \begin{tabular}{llrrrrrrrc} \toprule prime & sec & \mc{$\hat{\alpha}\pm d$} & \mc{$\hat{\alpha}^{(w)} \pm d^{(w)}$} & $\ell^{(w)}\pm d_{\ell}^{(w)}$/\si{\minute} & \mc{$\hat{S}$} & \mc{$\hat{b}$} & \mc{$\hat{k}$} & \mc{$p_{ww}$} & \mc{$h_m$}\\ \midrule \rowcolor{black!10!white} \bb{Eurex DAX} & CME MINI S\&P & $ 0.002 \pm 0.008$ & $ 0.012 \pm 0.003$ & $ 11.833 \pm 2.674$ & 0.553 & 0.028 & 0.349 & 1.000 & 0\\ \rowcolor{black!10!white} CME MINI S\&P & \bb{Eurex DAX} & $ 0.035 \pm 0.006$ & $ -0.005 \pm 0.002$ & $ -4.809 \pm 2.001$ & 0.522 & -0.084 & 0.426 & 1.000 & 1\\ Forex EUR-USD & \bb{Forex JPY-USD} & $ -0.286 \pm 0.081$ & $ -0.044 \pm 0.006$ & $ -41.693 \pm 5.571$ & 0.948 & 0.071 & 0.161 & 0.000 & 1\\ \bb{Forex JPY-USD} & Forex EUR-USD & $ 0.291 \pm 0.072$ & $ 0.028 \pm 0.006$ & $ 26.967 \pm 5.572$ & 0.941 & -0.098 & 0.142 & 0.000 & 1\\ \rowcolor{black!10!white} Forex EUR-USD & \bb{Forex GBP-USD} & $ 0.002 \pm 0.011$ & $ -0.010 \pm 0.003$ & $ -9.381 \pm 3.303$ & 0.617 & -0.031 & 0.342 & 1.000 & 0\\ \rowcolor{black!10!white} \bb{Forex GBP-USD} & Forex EUR-USD & $ 0.025 \pm 0.011$ & $ 0.007 \pm 0.004$ & $ 6.269 \pm 3.370$ & 0.637 & -0.024 & 0.341 & 1.000 & 1\\ \bb{Forex EUR-USD} & Forex CHF-USD & $ 0.000 \pm 0.008$ & $ 0.005 \pm 0.003$ & $ 4.645 \pm 2.672$ & 0.489 & 0.012 & 0.513 & 1.000 & 0\\ Forex CHF-USD & \bb{Forex EUR-USD} & $ -0.003 \pm 0.008$ & $ -0.013 \pm 0.003$ & $ -12.668 \pm 2.724$ & 0.492 & -0.036 & 0.501 & 1.000 & 0\\ \rowcolor{black!10!white} Eurex BUND & \bb{CME\_CBT 30Y TB} & $ -0.036 \pm 0.010$ & $ -0.007 \pm 0.003$ & $ -6.534 \pm 3.098$ & 0.613 & 0.038 & 0.306 & 1.000 & 1\\ \rowcolor{black!10!white} \bb{CME\_CBT 30Y TB} & Eurex BUND & $ 0.078 \pm 0.008$ & $ 0.012 \pm 0.003$ & $ 11.759 \pm 2.565$ & 0.612 & -0.099 & 0.331 & 1.000 & 1\\ CME MINI S\&P & \bb{CME MINI NSDQ} & $ -0.022 \pm 0.005$ & 
$ -0.009 \pm 0.002$ & $ -8.670 \pm 1.749$ & 0.415 & 0.029 & 0.578 & 1.000 & 1\\ \bb{CME MINI NSDQ} & CME MINI S\&P & $ 0.035 \pm 0.005$ & $ 0.013 \pm 0.002$ & $ 12.253 \pm 1.746$ & 0.409 & -0.050 & 0.574 & 1.000 & 1\\ \rowcolor{black!10!white} \bb{ICE\_NYBOT MNRUS2K} & CME MINI S\&P & $ 0.019 \pm 0.005$ & $ 0.005 \pm 0.002$ & $ 5.004 \pm 1.738$ & 0.393 & -0.034 & 0.596 & 1.000 & 1\\ \rowcolor{black!10!white} CME MINI S\&P & \bb{ICE\_NYBOT MNRUS2K} & $ -0.012 \pm 0.005$ & $ -0.005 \pm 0.002$ & $ -4.714 \pm 1.709$ & 0.402 & 0.017 & 0.604 & 1.000 & 1\\ ICE\_NYBOT MNRUS2K & \bb{CME MINI NSDQ} & $ -0.014 \pm 0.005$ & $ -0.012 \pm 0.002$ & $ -11.257 \pm 1.885$ & 0.464 & -0.004 & 0.542 & 1.000 & 1\\ \bb{CME MINI NSDQ} & ICE\_NYBOT MNRUS2K & $ 0.024 \pm 0.005$ & $ 0.008 \pm 0.002$ & $ 7.952 \pm 1.858$ & 0.466 & -0.032 & 0.528 & 0.948 & 1\\ \rowcolor{black!10!white} Eurex DJEST50 & \bb{Eurex DAX} & $ 0.001 \pm 0.005$ & $ -0.002 \pm 0.002$ & $ -2.368 \pm 2.052$ & 0.323 & -0.012 & 0.638 & 1.000 & 0\\ \rowcolor{black!10!white} Eurex DAX & Eurex DJEST50 & $ -0.007 \pm 0.005$ & $ -0.001 \pm 0.002$ & $ -0.999 \pm 2.058$ & 0.318 & 0.016 & 0.620 & 1.000 & 1\\ \bb{CME CMX GLD} & CME CMX SIL & $ 0.040 \pm 0.006$ & $ 0.010 \pm 0.002$ & $ 9.576 \pm 2.109$ & 0.498 & -0.060 & 0.468 & 1.000 & 1\\ \bb{CME CMX SIL} & CME CMX GLD & $ 0.014 \pm 0.006$ & $ 0.010 \pm 0.002$ & $ 9.559 \pm 2.086$ & 0.487 & -0.002 & 0.491 & 1.000 & 1\\ \rowcolor{black!10!white} \bb{CME CMX GLD} & Forex EUR-USD & $ 0.154 \pm 0.022$ & $ 0.030 \pm 0.005$ & $ 28.393 \pm 4.488$ & 0.804 & -0.091 & 0.209 & 0.861 & 1\\ \rowcolor{black!10!white} Forex EUR-USD & \bb{CME CMX GLD} & $ -0.116 \pm 0.022$ & $ -0.044 \pm 0.005$ & $ -42.000 \pm 4.334$ & 0.813 & 0.027 & 0.227 & 0.915 & 1\\ \bb{CME CMX GLD} & CME MINI S\&P & $ 0.067 \pm 0.030$ & $ 0.021 \pm 0.004$ & $ 19.585 \pm 3.631$ & 0.892 & -0.021 & 0.248 & 0.000 & 1\\ CME MINI S\&P & \bb{CME CMX GLD} & $ 0.018 \pm 0.027$ & $ -0.023 \pm 0.004$ & $ -22.367 \pm 3.573$ & 0.883 & 
-0.051 & 0.239 & 0.986 & 0\\ \rowcolor{black!10!white} CME CMX GLD & \bb{Eurex DAX} & $ -0.031 \pm 0.029$ & $ -0.022 \pm 0.004$ & $ -21.402 \pm 3.696$ & 0.890 & -0.016 & 0.226 & 0.005 & 1\\ \rowcolor{black!10!white} \bb{Eurex DAX} & CME CMX GLD & $ -0.018 \pm 0.039$ & $ 0.006 \pm 0.005$ & $ 5.552 \pm 4.967$ & 0.897 & 0.020 & 0.165 & 0.000 & 0\\ \bb{CME CMX GLD} & CME PH CRDE & $ 0.096 \pm 0.016$ & $ 0.028 \pm 0.003$ & $ 27.069 \pm 3.142$ & 0.803 & -0.048 & 0.260 & 1.000 & 1\\ CME PH CRDE & \bb{CME CMX GLD} & $ -0.051 \pm 0.014$ & $ -0.033 \pm 0.003$ & $ -31.441 \pm 3.153$ & 0.767 & -0.008 & 0.264 & 0.907 & 1\\ \rowcolor{black!10!white} CME PH CRDE & \bb{Eurex DAX} & $ -0.042 \pm 0.018$ & $ -0.035 \pm 0.003$ & $ -33.130 \pm 3.302$ & 0.822 & -0.024 & 0.242 & 1.000 & 1\\ \rowcolor{black!10!white} \bb{Eurex DAX} & CME PH CRDE & $ 0.098 \pm 0.024$ & $ 0.046 \pm 0.005$ & $ 44.008 \pm 4.301$ & 0.842 & 0.001 & 0.190 & 1.000 & 1\\ CME PH CRDE & \bb{Forex EUR-USD} & $ -0.114 \pm 0.018$ & $ -0.059 \pm 0.005$ & $ -56.518 \pm 4.366$ & 0.759 & 0.008 & 0.221 & 1.000 & 1\\ \bb{Forex EUR-USD} & CME PH CRDE & $ 0.149 \pm 0.018$ & $ 0.064 \pm 0.004$ & $ 61.358 \pm 4.162$ & 0.774 & -0.033 & 0.224 & 1.000 & 1\\ \rowcolor{black!10!white} CME PH CRDE & \bb{CME PH NG} & $ -0.062 \pm 0.025$ & $ -0.022 \pm 0.004$ & $ -20.535 \pm 3.643$ & 0.876 & 0.012 & 0.209 & 0.009 & 1\\ \rowcolor{black!10!white} \bb{CME PH NG} & CME PH CRDE & $ 0.082 \pm 0.025$ & $ 0.012 \pm 0.004$ & $ 11.090 \pm 3.588$ & 0.872 & -0.047 & 0.217 & 0.008 & 1\\ \bottomrule \end{tabular} } \caption{Results on 60\,\textrm{min} chart (time of extrema). 
$^*$This market leads the other one.} \label{tab:60min} \end{sidewaystable} \begin{sidewaystable} {\footnotesize \defblack!10!white{black!10!white} \newcommand{\mc}[1]{\multicolumn{1}{c}{#1}} \newcommand{\bb}[1]{#1$^*$} \begin{tabular}{llrrrrrrrc} \toprule prime & sec & \mc{$\hat{\alpha}\pm d$} & \mc{$\hat{\alpha}^{(w)} \pm d^{(w)}$} & $\ell^{(w)}\pm d_{\ell}^{(w)}$/\si{\minute} & \mc{$\hat{S}$} & \mc{$\hat{b}$} & \mc{$\hat{k}$} & \mc{$p_{ww}$} & \mc{$h_m$}\\ \midrule \rowcolor{black!10!white} Eurex DAX & \bb{CME MINI S\&P} & $ -0.137 \pm 0.008$ & $ -0.076 \pm 0.003$ & $ -72.647 \pm 2.722$ & 0.548 & 0.045 & 0.325 & 0.000 & 1\\ \rowcolor{black!10!white} \bb{CME MINI S\&P} & Eurex DAX & $ 0.039 \pm 0.006$ & $ 0.008 \pm 0.002$ & $ 8.038 \pm 2.041$ & 0.511 & -0.053 & 0.392 & 0.000 & 1\\ Forex EUR-USD & \bb{Forex JPY-USD} & $ -0.572 \pm 0.051$ & $ -0.082 \pm 0.006$ & $ -78.478 \pm 5.384$ & 0.917 & 0.166 & 0.080 & 0.000 & 1\\ \bb{Forex JPY-USD} & Forex EUR-USD & $ 0.494 \pm 0.047$ & $ 0.086 \pm 0.006$ & $ 81.842 \pm 5.381$ & 0.910 & -0.143 & 0.116 & 0.000 & 1\\ \rowcolor{black!10!white} Forex EUR-USD & \bb{Forex GBP-USD} & $ -0.053 \pm 0.011$ & $ -0.029 \pm 0.004$ & $ -27.782 \pm 3.471$ & 0.622 & 0.012 & 0.289 & 0.000 & 1\\ \rowcolor{black!10!white} \bb{Forex GBP-USD} & Forex EUR-USD & $ 0.053 \pm 0.011$ & $ 0.021 \pm 0.004$ & $ 19.941 \pm 3.452$ & 0.615 & -0.032 & 0.290 & 0.000 & 1\\ Forex EUR-USD & \bb{Forex CHF-USD} & $ -0.048 \pm 0.007$ & $ -0.025 \pm 0.003$ & $ -24.078 \pm 2.669$ & 0.461 & 0.037 & 0.493 & 0.002 & 1\\ \bb{Forex CHF-USD} & Forex EUR-USD & $ 0.065 \pm 0.007$ & $ 0.033 \pm 0.003$ & $ 31.507 \pm 2.723$ & 0.463 & -0.049 & 0.477 & 0.000 & 1\\ \rowcolor{black!10!white} Eurex BUND & CME\_CBT 30Y TB & $ -0.006 \pm 0.010$ & $ -0.000 \pm 0.003$ & $ -0.475 \pm 3.175$ & 0.609 & 0.006 & 0.280 & 0.000 & 0\\ \rowcolor{black!10!white} CME\_CBT 30Y TB & \bb{Eurex BUND} & $ 0.006 \pm 0.008$ & $ -0.010 \pm 0.003$ & $ -9.862 \pm 2.550$ & 0.588 & -0.033 & 0.330 & 0.000 & 
0\\ \bb{CME MINI S\&P} & CME MINI NSDQ & $ -0.004 \pm 0.005$ & $ 0.004 \pm 0.002$ & $ 3.956 \pm 1.821$ & 0.410 & 0.024 & 0.523 & 0.039 & 0\\ \bb{CME MINI NSDQ} & CME MINI S\&P & $ 0.021 \pm 0.004$ & $ 0.008 \pm 0.002$ & $ 7.532 \pm 1.783$ & 0.385 & -0.027 & 0.529 & 0.150 & 1\\ \rowcolor{black!10!white} ICE\_NYBOT MNRUS2K & CME MINI S\&P & $ 0.005 \pm 0.004$ & $ -0.002 \pm 0.002$ & $ -1.512 \pm 1.807$ & 0.379 & -0.016 & 0.526 & 0.000 & 1\\ \rowcolor{black!10!white} \bb{CME MINI S\&P} & ICE\_NYBOT MNRUS2K & $ 0.022 \pm 0.005$ & $ 0.016 \pm 0.002$ & $ 15.427 \pm 1.814$ & 0.404 & -0.001 & 0.519 & 0.000 & 1\\ ICE\_NYBOT MNRUS2K & CME MINI NSDQ & $ -0.004 \pm 0.005$ & $ -0.002 \pm 0.002$ & $ -1.685 \pm 1.967$ & 0.457 & 0.005 & 0.479 & 0.138 & 0\\ \bb{CME MINI NSDQ} & ICE\_NYBOT MNRUS2K & $ 0.037 \pm 0.005$ & $ 0.017 \pm 0.002$ & $ 16.092 \pm 1.948$ & 0.457 & -0.032 & 0.457 & 0.000 & 1\\ \rowcolor{black!10!white} \bb{Eurex DJEST50} & Eurex DAX & $ 0.007 \pm 0.005$ & $ 0.003 \pm 0.002$ & $ 2.908 \pm 2.082$ & 0.317 & -0.008 & 0.615 & 1.000 & 1\\ \rowcolor{black!10!white} Eurex DAX & Eurex DJEST50 & $ -0.008 \pm 0.005$ & $ -0.001 \pm 0.002$ & $ -1.308 \pm 2.083$ & 0.318 & 0.019 & 0.612 & 1.000 & 1\\ \bb{CME CMX GLD} & CME CMX SIL & $ 0.035 \pm 0.005$ & $ 0.021 \pm 0.002$ & $ 20.262 \pm 2.045$ & 0.437 & -0.018 & 0.466 & 0.995 & 1\\ CME CMX SIL & \bb{CME CMX GLD} & $ 0.002 \pm 0.005$ & $ -0.004 \pm 0.002$ & $ -4.123 \pm 2.013$ & 0.417 & -0.017 & 0.490 & 0.013 & 0\\ \rowcolor{black!10!white} \bb{CME CMX GLD} & Forex EUR-USD & $ 0.072 \pm 0.019$ & $ 0.015 \pm 0.005$ & $ 14.315 \pm 4.385$ & 0.778 & -0.045 & 0.228 & 0.178 & 1\\ \rowcolor{black!10!white} Forex EUR-USD & \bb{CME CMX GLD} & $ -0.090 \pm 0.018$ & $ -0.023 \pm 0.004$ & $ -22.335 \pm 4.141$ & 0.776 & 0.050 & 0.234 & 0.998 & 1\\ \bb{CME CMX GLD} & CME MINI S\&P & $ -0.015 \pm 0.034$ & $ 0.008 \pm 0.004$ & $ 7.721 \pm 3.904$ & 0.906 & 0.021 & 0.200 & 0.000 & 0\\ CME MINI S\&P & \bb{CME CMX GLD} & $ -0.002 \pm 0.030$ & $ 
-0.018 \pm 0.004$ & $ -17.374 \pm 3.799$ & 0.896 & -0.027 & 0.204 & 0.000 & 0\\ \rowcolor{black!10!white} CME CMX GLD & Eurex DAX & $ -0.072 \pm 0.043$ & $ 0.001 \pm 0.004$ & $ 1.243 \pm 4.043$ & 0.926 & 0.043 & 0.189 & 0.000 & 1\\ \rowcolor{black!10!white} Eurex DAX & \bb{CME CMX GLD} & $ -0.067 \pm 0.072$ & $ -0.016 \pm 0.006$ & $ -15.293 \pm 5.379$ & 0.944 & 0.009 & 0.165 & 0.000 & 0\\ \bb{CME CMX GLD} & CME PH CRDE & $ 0.074 \pm 0.015$ & $ 0.030 \pm 0.003$ & $ 28.410 \pm 3.197$ & 0.787 & -0.018 & 0.235 & 0.000 & 1\\ CME PH CRDE & \bb{CME CMX GLD} & $ -0.084 \pm 0.014$ & $ -0.040 \pm 0.003$ & $ -38.052 \pm 3.225$ & 0.762 & 0.013 & 0.242 & 0.000 & 1\\ \rowcolor{black!10!white} CME PH CRDE & \bb{Eurex DAX} & $ -0.128 \pm 0.018$ & $ -0.042 \pm 0.004$ & $ -40.272 \pm 3.367$ & 0.823 & 0.040 & 0.225 & 0.147 & 1\\ \rowcolor{black!10!white} Eurex DAX & CME PH CRDE & $ 0.038 \pm 0.022$ & $ 0.002 \pm 0.005$ & $ 2.370 \pm 4.313$ & 0.830 & -0.027 & 0.172 & 0.000 & 1\\ CME PH CRDE & \bb{Forex EUR-USD} & $ -0.157 \pm 0.018$ & $ -0.074 \pm 0.005$ & $ -70.513 \pm 4.453$ & 0.765 & 0.021 & 0.216 & 1.000 & 1\\ \bb{Forex EUR-USD} & CME PH CRDE & $ 0.179 \pm 0.019$ & $ 0.074 \pm 0.004$ & $ 70.949 \pm 4.291$ & 0.789 & -0.039 & 0.219 & 0.989 & 1\\ \rowcolor{black!10!white} CME PH CRDE & \bb{CME PH NG} & $ -0.171 \pm 0.027$ & $ -0.035 \pm 0.004$ & $ -33.655 \pm 3.786$ & 0.882 & 0.059 & 0.180 & 1.000 & 1\\ \rowcolor{black!10!white} \bb{CME PH NG} & CME PH CRDE & $ 0.204 \pm 0.024$ & $ 0.041 \pm 0.004$ & $ 39.363 \pm 3.698$ & 0.871 & -0.078 & 0.183 & 0.737 & 1\\ \bottomrule \end{tabular} } \caption{Results on 60\,\textrm{min} chart (time of extrema confirmed). $^*$This market leads the other one.} \label{tab:60min_confirmed} \end{sidewaystable} First we discuss the results for the time of extrema and afterwards the results for the confirmation time of the extrema. 
\paragraph{Time of extrema} First we note that the results are mostly independent of the mean wavelength, which can be seen from the additional information for each bin, i.e. the minimal and maximal value of the bin and the standard deviation. We observe only a very weak correlation for EUR-USD vs. JPY-USD, Gold vs. EUR-USD, Gold vs. S\&P 500, Gold vs. FDAX, Gold vs. Oil, Oil vs. FDAX and Oil vs. EUR-USD. These pairs of markets also show a relatively large standard deviation $\hat{S}$ and a weak concentration around the mean, indicated by the small kurtosis $\hat{k}$. All other combinations of markets illustrated in Table~\ref{tab:60min} and Figures~\ref{fig:res1} and \ref{fig:res3} to \ref{fig:res10} show a large peak near the mean angular direction of between \SI{20}{\percent} and \SI{53}{\percent}. This means there is a significantly high probability that the extreme values of both markets form at almost exactly the same time. Of course, this leads to smaller standard deviations and higher kurtosis. \paragraph{Confirmation time of extrema} Since the point in time at which an extreme value is confirmed by the MinMax process is more sensitive to the price development than the fixed point in time of the extreme value itself, we already expect more scattered observations. Nevertheless, even here we see a peak in the mean angular direction of about half the size of the peak for the time of extrema of the strongly correlated pairs of markets. The values in Table~\ref{tab:60min_confirmed} are approximately of the same order as in Table~\ref{tab:60min}. \paragraph{All together} We see strong correlations for extrema and confirmed extrema between combinations of FDAX, Euro-Bund, Euro STOXX, S\&P 500, U.S. Treasury, NASDAQ 100 and Russell 2000, and between the foreign exchange rates except EUR-USD versus JPY-USD. Additionally, Gold and Silver have a strong correlation, whereas all other combinations involving at least one commodity market appear weakly correlated or even nearly uncorrelated.
Thus, from the point of view of local extreme values, the commodities are separated from the other markets. The lead-lag $\ell^{(w)}$, see Section~\ref{sec:3.3}, is between \SI{5}{\minute} and \SI{10}{\minute} for the point in time of the extrema for the indexes and foreign exchange rates, and also for Gold versus Silver. Note that this is just a fraction of the duration of a single period of the \SI{60}{\minute} chart. Moreover, the points in time of the extrema are merely the time stamps of candles and not the exact times of the extreme values themselves, i.e. these points in time carry an uncertainty of $\SI{+-30}{\minute}$. Therefore, we cannot view the value $\ell^{(w)}$ as an absolute quantity but rather as a tendency of the lead or lag for the candles in which the extreme values occur. \begin{remark} In most cases our investigation of the correlation of two markets yields one market leading and the other following, e.g. DAX Futures leads E-mini S\&P 500 Futures, no matter which one is considered the primary or secondary market. Note, however, that in some cases the leading market is not unique, as for instance for Gold Futures versus Silver Futures, and sometimes our calculation cannot decide which market is leading.
\end{remark}
\begin{figure} \centering \includegraphics[width=0.9\linewidth]{images/results_60min/Eurex_DAX_vs_CME_MINI_SP_histo} \caption{DAX Futures versus E-mini S\&P 500 Futures} \label{fig:res1} \end{figure}
\begin{figure} \centering \includegraphics[width=0.9\linewidth]{images/results_60min/Forex_EUR_vs_Forex_JPYUSD_histo} \caption{EUR-USD versus JPY-USD} \end{figure}
\begin{figure} \centering \includegraphics[width=0.9\linewidth]{images/results_60min/Forex_EUR_vs_Forex_GBP_histo} \caption{EUR-USD versus GBP-USD} \label{fig:res3} \end{figure}
\begin{figure} \centering \includegraphics[width=0.9\linewidth]{images/results_60min/Forex_EUR_vs_Forex_CHFUSD_histo} \caption{EUR-USD versus CHF-USD} \label{fig:res4} \end{figure}
\begin{figure} \centering \includegraphics[width=0.9\linewidth]{images/results_60min/Eurex_BUND_vs_CME_CBT_30Y_TB_histo} \caption{Euro-Bund Futures versus U.S. Treasury Bond Futures} \end{figure}
\begin{figure} \centering \includegraphics[width=0.9\linewidth]{images/results_60min/CME_MINI_SP_vs_CME_MINI_NSDQ_histo} \caption{E-mini S\&P 500 Futures versus E-mini NASDAQ 100 Futures} \end{figure}
\begin{figure} \centering \includegraphics[width=0.9\linewidth]{images/results_60min/ICE_NYBOT_MNRUS2K_vs_CME_MINI_SP_histo} \caption{Russell 2000 Index Mini Futures versus E-mini S\&P 500 Futures} \end{figure}
\begin{figure} \centering \includegraphics[width=0.9\linewidth]{images/results_60min/ICE_NYBOT_MNRUS2K_vs_CME_MINI_NSDQ_histo} \caption{Russell 2000 Index Mini Futures versus E-mini NASDAQ 100 Futures} \end{figure}
\begin{figure} \centering \includegraphics[width=0.9\linewidth]{images/results_60min/Eurex_DJEST50_vs_Eurex_DAX_histo} \caption{EURO STOXX 50 Index Futures versus DAX Futures} \end{figure}
\begin{figure} \centering \includegraphics[width=0.9\linewidth]{images/results_60min/CME_CMX_GLD_vs_CME_CMX_SIL_histo} \caption{Gold Futures versus Silver Futures} \label{fig:res10} \end{figure}
\begin{figure}
\centering \includegraphics[width=0.9\linewidth]{images/results_60min/CME_CMX_GLD_vs_Forex_EUR_histo} \caption{Gold Futures versus EUR-USD} \end{figure}
\begin{figure} \centering \includegraphics[width=0.9\linewidth]{images/results_60min/CME_CMX_GLD_vs_CME_MINI_SP_histo} \caption{Gold Futures versus E-mini S\&P 500 Futures} \end{figure}
\begin{figure} \centering \includegraphics[width=0.9\linewidth]{images/results_60min/CME_CMX_GLD_vs_Eurex_DAX_histo} \caption{Gold Futures versus DAX Futures} \end{figure}
\begin{figure} \centering \includegraphics[width=0.9\linewidth]{images/results_60min/CME_CMX_GLD_vs_CME_PH_CRDE_histo} \caption{Gold Futures versus Crude Oil Futures} \end{figure}
\begin{figure} \centering \includegraphics[width=0.9\linewidth]{images/results_60min/CME_PH_CRDE_vs_Eurex_DAX_histo} \caption{Crude Oil Futures versus DAX Futures} \end{figure}
\begin{figure} \centering \includegraphics[width=0.9\linewidth]{images/results_60min/CME_PH_CRDE_vs_Forex_EUR_histo} \caption{Crude Oil Futures versus EUR-USD} \end{figure}
\begin{figure} \centering \includegraphics[width=0.9\linewidth]{images/results_60min/CME_PH_CRDE_vs_CME_PH_NG_histo} \caption{Crude Oil Futures versus Natural Gas (Henry Hub) Physical Futures} \label{fig:res17} \end{figure}
\begin{remark} For the Swiss franc it is more common to analyze USD-CHF instead of CHF-USD, as we do in the discussion above. The reason we focus on CHF-USD is to obtain a positive correlation with EUR-USD and thus a more natural interpretation of lead and lag as in Definition~\ref{def:lead_lag}. However, it is also possible to compare strongly negatively correlated markets such as EUR-USD versus USD-CHF. In Figure~\ref{fig:res_negative_corr} we see the results for this combination. We expect the results to be the same as for the combination EUR-USD versus CHF-USD but shifted by $\pi$. Comparing Figures~\ref{fig:res4} and \ref{fig:res_negative_corr}, we indeed see this connection perfectly.
This is also the case for the Japanese yen. \end{remark} \begin{figure} \centering \includegraphics[width=0.9\linewidth]{images/results_60min/Forex_EUR_vs_Forex_USDCHF_histo} \caption{EUR-USD versus USD-CHF (cf. Figure~\ref{fig:res4})} \label{fig:res_negative_corr} \end{figure} \section{Conclusion and outlook}\label{sec:5} We introduced the notion of a lead-lag relationship from a market technical point of view. Using the local extreme values of the markets, we obtain an empirical distribution of their phase shifts on the unit sphere. Directional statistics helps us to illustrate and quantify the results. We observed many pairs of markets that are strongly correlated with respect to their extreme values, while, of course, there are combinations with only a very weak connection. Combinations of indexes show the highest correlation and also a measurable lead or lag. Since we use a geometrical approach based on the actual local extreme values of the chart, i.e. on some kind of reversal points, the results can be used directly for trading strategies. In future work the authors plan to localize this method to shorter time intervals in order to obtain even more meaningful results for live/real-time data. \newpage
\section{Introduction} The task is to find a polyline, within a specified tolerance of the source polyline, with the minimum number of vertices. Such a polyline is called optimal. Usually, a subset of the vertices of the source polyline is used to construct an optimal polyline~\cite{CompressionAlgorithm, CompressionReview}. However, an optimal polyline does not necessarily have vertices coincident with the source polyline vertices. One approach that allows the resultant polyline flexibility in the locations of its vertices is to find the intersections between adjacent straight lines~\cite{PolylineGeneralizationCombinatorical} or geometrical primitives~\cite{PolylineGeneralization}. However, there are situations where such an approach does not work well, for example, when adjacent straight lines are almost parallel to each other, or when a circular arc is close to being tangent to a straight segment. The approach described in this paper evaluates a set of vertex locations (considered locations) while searching for a polyline with the minimum number of vertices. \section{Algorithm} \subsection { Discretization of the Solution \label{sec:Descretization} } \begin{figure} [b] \centering \includegraphics[width = 6 cm, keepaspectratio]{ExampleSegment.png} \caption { Example of one segment (red segment) between considered locations (black dots) within tolerance of the source polyline (blue polyline). } \label{fig:ExampleSegment} \end{figure} Any compressed polyline must be within tolerance of the source polyline; therefore, it must have its vertices within tolerance of the source polyline. It would be very difficult to consider all possible polylines and find one with the minimum number of vertices; therefore, as an approximation, only some locations around the vertices of the source polyline are considered (see the black points around the vertices of the source polyline in Fig.~\ref{fig:ExampleSegment}).
The locations around the vertices of the source polyline are chosen to lie on an infinite equilateral triangular grid, at a distance from the vertices of the source polyline of less than the specified tolerance. Among all grids (square, hexagonal, etc.) guaranteeing that the distance from any point to the closest node does not exceed a specified threshold, the equilateral triangular grid (see Fig.~\ref{fig:TriangularGrid}) has the lowest number of nodes. \begin{figure} [htb] \centering \begin{tikzpicture}[scale = 1.5] \tikzstyle{every node} = [font = \tiny] \begin{scope} \clip (-0.7, -0.2) -- (1.7, -0.2) -- (1.7, 0.86602540378443864676372317075294 + 0.2) -- (-0.7, 0.86602540378443864676372317075294 + 0.2) -- cycle; \draw [dotted] (0, 0) -- (-1, 0); \draw [dotted] (1, 0) -- (2, 0); \draw [dotted] (0.5, 0.86602540378443864676372317075294) -- (-0.5, 0.86602540378443864676372317075294); \draw [dotted] (0.5, 0.86602540378443864676372317075294) -- (1.5, 0.86602540378443864676372317075294); \draw [dotted] (0, 0) -- (-0.5, 0.86602540378443864676372317075294); \draw [dotted] (1, 0) -- (1.5, 0.86602540378443864676372317075294); \draw [dotted] (-1, 0) -- (-0.5, 0.86602540378443864676372317075294); \draw [dotted] (2, 0) -- (1.5, 0.86602540378443864676372317075294); \end{scope} \draw (0, 0) -- (1, 0) -- (0.5, 0.86602540378443864676372317075294) -- cycle; \draw [dashed] (0.5, 0.28867513459481288225457439025098) -- (0, 0); \draw [dashed] (0.5, 0.28867513459481288225457439025098) -- (1, 0); \draw [dashed] (0.5, 0.28867513459481288225457439025098) -- (0.5, 0.86602540378443864676372317075294); \draw (0.5, 0.28867513459481288225457439025098) node [anchor = 210] {$O$}; \draw (0, 0) node [anchor = 90] {$A$}; \draw (1, 0) node [anchor = 90] {$B$}; \draw (0.5, 0.86602540378443864676372317075294) node [anchor = 270] {$C$}; \end{tikzpicture} \caption { The worst-case distance for the equilateral triangular grid is the distance from the center $O$ of the triangle to any vertex of the equilateral
triangle. If $OA = OB = OC = 1$, then $AB = BC = CA = \sqrt{3}$. } \label{fig:TriangularGrid} \end{figure} The side length of the equilateral triangles in the grid is chosen according to the error the grid introduces. That error can be expressed as a proportion of the specified tolerance. For example, a proportion $q \in \left( 0, 1 \right)$ of the specified tolerance means that the side of the equilateral triangle is equal to $q \sqrt{3}$ times the specified tolerance. This leads to about $\dfrac{2 \pi}{3 \sqrt{3} q^2} \approx \dfrac{1.2}{q^2}$ locations per vertex. To decrease complexity, some locations might be skipped if they are already considered for neighboring vertices of the source polyline; however, this should be done without breaking the combinatorial algorithm described in section~\ref{sec:CombinatorialApproach}. If the tolerance is large, it is also possible to consider locations around segments of the source polyline. In this paper, to support any tolerance, only locations around vertices of the source polyline are considered. Densification of the source polyline might be necessary to find the polyline with the minimum number of vertices. \subsection { Testing a Segment to Satisfy Tolerance \label{sec:TestingSegmentTolerance} } For the compressed polyline to be within tolerance, every segment of the compressed polyline must be within tolerance of the part of the source polyline it describes. To find the compressed polyline with the minimum number of vertices, this test has to be performed many times for all combinations of possible locations of vertices (see Fig.~\ref{fig:ExampleSegment}). \cite{OptimizedCompressionAlgorithm} describes an efficient approach to performing these tests based on the convex hull. If the convex hull is stored as a polygon, the complexity of this test is $O{\left( \log{n} \right)}$, where $n$ is the number of vertices in the convex hull~\cite{OptimizedCompressionAlgorithm}.
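For illustration, this tolerance test can be sketched in a brute-force $O{\left( n \right)}$ form that simply checks every vertex of the source part against the candidate segment, instead of querying the convex hull in $O{\left( \log{n} \right)}$ as in~\cite{OptimizedCompressionAlgorithm}; the function names below are ours.

```python
import math

def point_segment_distance(p, a, b):
    """Euclidean distance from point p to the segment ab."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    seg_len2 = dx * dx + dy * dy
    if seg_len2 == 0.0:                       # degenerate segment
        return math.hypot(px - ax, py - ay)
    # Clamp the projection parameter so the closest point stays on ab.
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / seg_len2))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def segment_within_tolerance(segment, part, tolerance):
    """Brute-force tolerance test: every vertex of the source part must
    lie within `tolerance` of the candidate segment."""
    a, b = segment
    return all(point_segment_distance(p, a, b) <= tolerance for p in part)
```

The convex-hull variant replaces the scan over all vertices with binary searches on the hull polygon, which is what makes the many repeated queries of the combinatorial search affordable.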
The expected number of vertices in the convex hull of $N$ random points in a rectangle is $O{\left( \log{N} \right)}$, see~\cite{ConvexHullsComplexity}. If the source polyline has parts close to an arc, the size of the convex hull tends to increase; in the worst case, the number of vertices in the convex hull equals the number of vertices in the original set. If no line with a thickness of two tolerances covers the convex hull completely, then no single segment can describe this part of the source polyline. The complexity of this check is $O{\left( n \log{n} \right)}$. A convex hull for any part of the source polyline is constructed in the same way as in~\cite{OptimizedCompressionAlgorithm}. \subsection { Testing Segment End Points \label{sec:TestingSegmentEndPoints} } The test described in the previous section~\ref{sec:TestingSegmentTolerance} does not check the ends of the segment. The example in Fig.~\ref{fig:TopologycalTest} shows a source polyline that reverses direction several times (a zigzag) before going up. Without checking end points and changes in direction, the compressed polyline might fail to describe some parts of the source polyline (Fig.~\ref{fig:TopologycalTest}a). Therefore, these tests are necessary to guarantee that the compressed polyline (Fig.~\ref{fig:TopologycalTest}b) describes the source polyline without missing any parts. \begin{figure} [hb] \centering \begin{tabular}{c c c} \includegraphics[height = 1.5 cm, keepaspectratio]{NoCheck.png} & \includegraphics[height = 1.5 cm, keepaspectratio]{WithChecks.png} \\ (a) & (b) \end{tabular} \caption { The blue polyline is the source polyline. The red polyline is the result of the algorithm without checking end points and the source polyline direction (a) and with both checks performed (b).
} \label{fig:TopologycalTest} \end{figure} That the end points of a segment are within tolerance of the part of the source polyline it describes is tested via the convex hull, in the same way as the segment tolerance test of section~\ref{sec:TestingSegmentTolerance}. This is equivalent to testing whether the segment, extended in the parallel and perpendicular directions by the tolerance (see Fig.~\ref{fig:SegmentEndPoints}), contains the convex hull of the part of the source polyline it describes. If more directions are used, a better approximation of the curved polygon can be obtained. The complexity of the test is $O{\left( \log{n} \right)}$. \begin{figure} [htb] \centering \begin{tikzpicture}[scale = 0.7] \draw [thick] (0, 0) -- (10, 0); \draw (-1, -1) -- (11, -1) -- (11, 1) -- (-1, 1) -- cycle; \draw (0, 1) arc (90:270:1); \draw (10, -1) arc (-90:90:1); \draw [thick] (-0.4142135623730950488016887242097, 1) -- (-1, 0.4142135623730950488016887242097) -- (-1, -0.4142135623730950488016887242097) -- (-0.4142135623730950488016887242097, -1) -- (10 + 0.4142135623730950488016887242097, -1) -- (11, -0.4142135623730950488016887242097) -- (11, 0.4142135623730950488016887242097) -- (10 + 0.4142135623730950488016887242097, 1) -- cycle; \fill [pattern = north east lines] (0, 1) arc (90:270:1) -- (10, -1) arc (-90:90:1) -- cycle; \end{tikzpicture} \caption { The diagonal striped area is the tolerance area around the segment. The thin rectangle is the approximation of the area around the segment; the thick polygon would be a better approximation. } \label{fig:SegmentEndPoints} \end{figure} \subsection { Testing Polyline Direction \label{sec:TestingZigZag} } The test for whether the source polyline has a zigzag is performed by checking whether the backward movement, projected onto the segment direction, exceeds two tolerances ($2 T$, where $T$ is the tolerance).
Two tolerances are used because one vertex of the source polyline can shift forward by the tolerance and the vertex after it can shift backward by the tolerance. The algorithm is based on analyzing zigzags before the processed point. Let $p_i$ be the vertices of the polyline, $i = \overline{0..N - 1}$, where $N$ is the number of vertices in the polyline. The following algorithm constructs a table for efficient testing. \begin{enumerate}[label={}] \item Define a set of directions $\alpha_j = \dfrac{2 \pi}{N_d} j$,\\ where $j = \overline{0..N_d - 1}$ and $N_d$ is the number of directions. \item Cycle over each direction $\alpha_j$, $j = \overline{0..N_d - 1}$. \begin{enumerate}[label={}] \item Define a priority queue with requests containing two numbers. The first number is a real value, and the second number is an index. The priority of a request is equal to its first number. \item Set $k = 0$. \item Cycle over each point $p_i$ of the source polyline,\\ $i = \overline{0..N - 1}$. \begin{enumerate}[label={}] \item Calculate the projection of $p_i$ onto the direction $\alpha_j$ (the scalar product of the point and the direction vector): \begin{equation*} d = p_i \cdot \left( \cos \left( \alpha_j \right), \sin \left( \alpha_j \right) \right). \end{equation*} \item Remove all requests with a priority greater than $d + 2 T$ from the priority queue. If the largest index among the removed requests is larger than $k$, set $k$ equal to that index. \item Set $V_{j, i} = k$. \item Add the request $\left( d, i + 1 \right)$ to the priority queue. \end{enumerate} \end{enumerate} \end{enumerate} To test whether the part of the source polyline between vertices $i_s$ and $i_e$ has a zigzag: \begin{enumerate}[label={}] \item First, find the direction $\alpha_{j^*}$ closest to the direction of the segment: $ j^* = \round{\left( \dfrac{N_d}{2 \pi} \alpha \right)} \!\!\! \mod N_d $, where $\alpha$ is the direction of the segment.
\item Second, if $V_{j^*, i_{e}} \leq i_{s}$, then there are no zigzags for the segment describing the part of the source polyline from vertex $i_{s}$ to $i_{e}$. \end{enumerate} Let $W_i = \min_{0 \leq j \wedge j < N_d}{\left( V_{j, i} \right)}$. If $i_{s} < W_{i_{e}}$, then one segment cannot describe the part of the source polyline from vertex $i_{s}$ to $i_{e}$. This test has some limitations: \begin{itemize} \item The tested direction is approximated by the closest one, making the check approximate. \item For some error models, a zigzag might pass the test. For example, if errors are limited by a circle, a zigzag of two tolerances is only possible if it happens directly on the segment. \end{itemize} Nevertheless, it is an efficient test to avoid absurd results like the one in Fig.~\ref{fig:TopologycalTest}a. The complexity of the algorithm is $O{\left( N_d N \log \left( N \right) \right)}$, and the complexity to test any segment is $O{\left( 1 \right)}$. \subsection { Combinatorial Approach to Find an Optimal Solution \label{sec:CombinatorialApproach} } The optimal solution is found by using the algorithm described in~\cite{PolylineGeneralizationCombinatorical}. Let $p_{i, j}$ be the considered locations for vertex $p_i$, where ${i = \overline{0..N - 1}}$, ${j = \overline{0..N_i - 1}}$, and $N_i$ is the number of considered locations for vertex $i$. Let pairs $\left( i_k, j_k \right)$, ${k = \overline{0..m}}$, divide the source polyline into $m$ straight segments $ \left( p_{i_k, j_k}, p_{i_{k + 1}, j_{k + 1}} \right) $ describing the source polyline from vertex $i_k$ to $i_{k + 1}$, $k = \overline{0..m - 1}$. Notice that neighboring segments are already connected at $p_{i_k, j_k}$, $k = \overline{1..m - 1}$, so this solution avoids the problems in the algorithms of~\cite{PolylineGeneralizationCombinatorical, PolylineGeneralization} that arise when the intersection of neighboring segments is far away from the source polyline.
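For concreteness, the zigzag-table construction of section~\ref{sec:TestingZigZag} can be sketched as follows; this is a simplified sketch that emulates the max-priority queue with negated keys in Python's `heapq`, and the boundary details are our own reading of the algorithm.

```python
import heapq
import math

def zigzag_table(points, tolerance, n_dirs):
    """Build V so that V[j][i] is the smallest admissible start vertex:
    a segment from i_s to i is zigzag-free along direction alpha_j
    iff V[j][i] <= i_s."""
    n = len(points)
    V = [[0] * n for _ in range(n_dirs)]
    for j in range(n_dirs):
        alpha = 2.0 * math.pi * j / n_dirs
        ca, sa = math.cos(alpha), math.sin(alpha)
        heap = []   # max-priority queue emulated with negated projections
        k = 0
        for i, (x, y) in enumerate(points):
            d = x * ca + y * sa   # projection onto direction alpha_j
            # Pop every earlier point whose projection exceeds d + 2T:
            # reaching vertex i from it means moving backward by > 2T.
            while heap and -heap[0][0] > d + 2.0 * tolerance:
                _, idx = heapq.heappop(heap)
                k = max(k, idx)
            V[j][i] = k
            heapq.heappush(heap, (-d, i + 1))
    return V
```

Each point is pushed and popped at most once per direction, giving the stated $O{\left( N_d N \log \left( N \right) \right)}$ construction cost and $O{\left( 1 \right)}$ lookups per segment.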
The goal of this algorithm is to find, among the solutions satisfying the tolerance restriction, one with the minimum number of vertices and, among those, one with the minimum integral square difference. Therefore, minimization is performed on two components $ \left\{ \begin{aligned} & T^{\#}\\ & T^{\epsilon} \end{aligned} \right\} $, where the first component $T^{\#}$ is the number of segments, and the second component $T^{\epsilon}$ is the integral of the square deviation between the segments and the source polyline. Solutions are compared by the number of segments and, if they have the same number of segments, by the square deviation between the segments and the source polyline. The solution of this task for the case when the optimal polyline has vertices coincident with the source polyline can be found in~\cite{CombinatorialMinimumNumberSegments}. Let $P_k$, $k = \overline{0..N - 1}$, be the part of the source polyline from vertex $0$ to $k$. The optimal solution is found by induction. Define the optimal solution for polyline $P_0$ as $ \left\{ \begin{aligned} & T_{0, j}^{\#}\\ & T_{0, j}^{\epsilon} \end{aligned} \right\} = \left\{ \begin{aligned} 0\\ 0 \end{aligned} \right\} $, $ { j = \overline{0..N_0 - 1} } $. For $k = \overline{1..N - 1}$, construct the optimal solution for $P_k$ from the optimal solutions for $P_{k'}$, $k' = \overline{0..k-1}$.
\begin{equation*} \left\{ \begin{aligned} & T_{k, j}^{\#}\\ & T_{k, j}^{\epsilon} \end{aligned} \right\} = \min _ { \begin{aligned} 0 \leq k' \wedge k' < k\\ 0 \leq j' \wedge j' < N_{k'}\\ \functioncheck { \left( \left( k', j' \right), \left( k, j \right) \right) } \end{aligned} } { \left( \left\{ \begin{aligned} & T_{k', j'}^{\#} + 1\\ & T_{k', j'}^{\epsilon} + \epsilon _ { \left( k', j' \right), \left( k, j \right) } \end{aligned} \right\} \right) } , \end{equation*} where $ \epsilon _ { \left( k', j' \right), \left( k, j \right) } $ is the integral square difference between segment $ \left( p_{k', j'}, p_{k, j} \right) $ and the source polyline from vertex $k'$ till $k$, $ \functioncheck { \left( \left( k', j' \right), \left( k, j \right) \right) } $ is a combination of checks described in the previous sections \ref{sec:TestingSegmentTolerance}, \ref{sec:TestingSegmentEndPoints}, and \ref{sec:TestingZigZag} to check if segment $ \left( p_{k', j'}, p_{k, j} \right) $ can describe the part of the source polyline from vertex $k'$ till $k$. To reconstruct the optimal solution, it is necessary for $ \left\{ \begin{aligned} & T_{k, j}^{\#}\\ & T_{k, j}^{\epsilon} \end{aligned} \right\} $ to store $\left\{ k', j' \right\}$ when the right part is minimal. The optimal solution is reconstructed from \begin{equation*} \min _ { 0 \leq j \wedge j < N_{N - 1} } { \left\{ \begin{aligned} & T_{N - 1, j}^{\#}\\ & T_{N - 1, j}^{\epsilon} \end{aligned} \right\} } \end{equation*} by recurrently using stored $\left\{ k', j' \right\}$ values. \subsection { Optimization \label{sec:Optimization} } It is possible to significantly reduce the complexity of the algorithm described in the previous section~\ref{sec:CombinatorialApproach} by using the approach described in~\cite{PolylineGeneralizationCombinatorical}. 
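Before turning to that optimization, the plain dynamic program of section~\ref{sec:CombinatorialApproach} can be sketched as follows; `can_describe` stands for the combined checks $\functioncheck$ and `sq_error` for the integral square difference $\epsilon$, both treated here as black boxes of our own naming.

```python
def optimal_polyline(locs, can_describe, sq_error):
    """Sketch of the dynamic program: locs[k] lists the considered
    locations for source vertex k; solutions (T#, T^eps) compare
    lexicographically, i.e. by segment count, then by square error."""
    N = len(locs)
    INF = float("inf")
    # best[k][j] = (segment count, square error, back-pointer (k', j'))
    best = [[(INF, INF, None) for _ in locs[k]] for k in range(N)]
    for j in range(len(locs[0])):
        best[0][j] = (0, 0.0, None)          # empty solution for P_0
    for k in range(1, N):
        for j in range(len(locs[k])):
            for kp in range(k):
                for jp in range(len(locs[kp])):
                    cnt, err, _ = best[kp][jp]
                    if cnt == INF or not can_describe((kp, jp), (k, j)):
                        continue
                    cand = (cnt + 1, err + sq_error((kp, jp), (k, j)), (kp, jp))
                    if cand[:2] < best[k][j][:2]:
                        best[k][j] = cand
    # Reconstruct from the best final location by following back-pointers.
    jbest = min(range(len(locs[N - 1])), key=lambda q: best[N - 1][q][:2])
    path, node = [], (N - 1, jbest)
    while node is not None:
        path.append(locs[node[0]][node[1]])
        node = best[node[0]][node[1]][2]
    return list(reversed(path))
```

The quadruple loop is what the interval-skipping bounds below are designed to prune.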
\begin{equation} \begin{aligned} & \min _ { \begin{aligned} k_1 \leq k' \wedge k' \leq k_2\\ 0 \leq j' \wedge j' < N_{k'}\\ \functioncheck { \left( \left( k', j' \right), \left( k, j \right) \right) } \end{aligned} } { \left( \left\{ \begin{aligned} & T_{k', j'}^{\#} + 1\\ & T_{k', j'}^{\epsilon} + \epsilon _ { \left( k', j' \right), \left( k, j \right) } \end{aligned} \right\} \right) } \gtrapprox \\ \gtrapprox & \min _ { \begin{aligned} k_1 \leq k' \wedge k' \leq k_2\\ 0 \leq j' \wedge j' < N_{k'} \end{aligned} } { \left( \left\{ \begin{aligned} & T_{k', j'}^{\#} + 1\\ & T_{k', j'}^{\epsilon} + \epsilon _ { \left( k', j' \right), \left( k, j \right) } ^ { \left( k_2 \right) } \end{aligned} \right\} \right) } , \end{aligned} \label{eq:OptmizationBaseFormula} \end{equation} where \begin{multline*} \epsilon _ { \left( k', j' \right), \left( k, j \right) } ^ { \left( k_2 \right) } = \\ = \min _ { \begin{aligned} 0 \leq j_2 \wedge j_2 < N_{k_2}\\ \functioncheck { \left( \left( k', j' \right), \left( k_2, j_2 \right) \right) }\\ \functioncheck { \left( \left( k_2, j_2 \right), \left( k, j \right) \right) } \end{aligned} } { \left( \epsilon _ { \left( k', j' \right), \left( k_2, j_2 \right) } + \epsilon _ { \left( k_2, j_2 \right), \left( k, j \right) } \right) } . 
\end{multline*} From \eqref{eq:OptmizationBaseFormula}, it follows that \begin{equation} \begin{aligned} & \min _ { \begin{aligned} k_1 \leq k' \wedge k' \leq k_2\\ 0 \leq j' \wedge j' < N_{k'}\\ \functioncheck { \left( \left( k', j' \right), \left( k, j \right) \right) } \end{aligned} } { \left( \left\{ \begin{aligned} & T_{k', j'}^{\#} + 1\\ & T_{k', j'}^{\epsilon} + \epsilon _ { \left( k', j' \right), \left( k, j \right) } \end{aligned} \right\} \right) } \gtrapprox \\ \gtrapprox & \min _ { \begin{aligned} 0 \leq j_2 \wedge j_2 < N_{k_2}\\ \functioncheck { \left( \left( k_2, j_2 \right), \left( k, j \right) \right) } \end{aligned} } { \left( \left\{ \begin{aligned} & T_{k_2, j_2}^{\#}\\ & T_{k_2, j_2}^{\epsilon} + \epsilon _ { \left( k_2, j_2 \right), \left( k, j \right) } \end{aligned} \right\} \right) } \end{aligned} \label{eq:OptimizationFormula1} \end{equation} and \begin{equation} \begin{aligned} & \min _ { \begin{aligned} k_1 \leq k' \wedge k' \leq k_2\\ 0 \leq j' \wedge j' < N_{k'}\\ \functioncheck { \left( \left( k', j' \right), \left( k, j \right) \right) } \end{aligned} } { \left( \left\{ \begin{aligned} & T_{k', j'}^{\#} + 1\\ & T_{k', j'}^{\epsilon} + \epsilon _ { \left( k', j' \right), \left( k, j \right) } \end{aligned} \right\} \right) } \gtrapprox \\ \gtrapprox & \min _ { 0 \leq j_1 \wedge j_1 < N_{k_1} } { \left( \left\{ \begin{aligned} & T_{k_1, j_1}^{\#}\\ & T_{k_1, j_1}^{\epsilon} \end{aligned} \right\} \right) } + \\ & + \left\{ \begin{aligned} & \: \: \: \: \: \: \: \: \: \: \: \: \: \: \: \: \: \: \: \: \: \: \: \: \: \: \: \: \: 1\\ & \min _ { \begin{aligned} 0 \leq j_2 \wedge j_2 < N_{k_2}\\ \functioncheck { \left( \left( k_2, j_2 \right), \left( k, j \right) \right) } \end{aligned} } { \left( \epsilon _ { \left( k_2, j_2 \right), \left( k, j \right) } \right) } \end{aligned} \right\} . 
\end{aligned} \label{eq:OptimizationFormula2} \end{equation} The maximum of \eqref{eq:OptimizationFormula1} and \eqref{eq:OptimizationFormula2} can be used to skip checking combinations between vertices $k_1$ and $k_2$. The inequalities \eqref{eq:OptimizationFormula1} and \eqref{eq:OptimizationFormula2} are approximate due to the use of considered locations. However, this allows finding stricter limitations for the solution inside the interval and simultaneously finding the solution for breaking at vertex $k_2$. It is possible to construct \eqref{eq:OptimizationFormula1} and \eqref{eq:OptimizationFormula2} as exact inequalities by constructing the optimal solution whose end point is not required to lie in a considered location. Similarly, the part from vertex $k_2$ to $\left( k, j \right)$ should not be required to pass through the considered locations of vertex $k_2$. This is useful when the resultant polyline is required to go through the vertices of the source polyline. However, such an algorithm has a worse compression ratio than the one with flexible joints. See~\cite{PolylineGeneralizationCombinatorical} for further details of this algorithm. \subsection{Optimal Compression of Closed Polylines} To find the optimal compression of a closed polyline, it is necessary to know the starting vertex. It is also necessary that the resultant polyline starts and ends at the same vertex. The following algorithm finds the starting vertex and constructs a closed resultant polyline. \begin{enumerate}[label={\arabic*.}, ref={\arabic*}] \item \label{enum:ConvexHull} Construct the convex hull of all vertices of the source polyline. \item \label{enum:SmallesAngle} Find the smallest angle of the convex hull polygon. \item \label{enum:Reorient} Take the vertex corresponding to the smallest angle as the starting vertex and reorient the closed polyline to start from that vertex. \item Apply the algorithm.
\item \label{enum:ConstructFirstSolution} From the constructed solution, take one vertex in the middle as the new starting vertex and reorient the closed polyline to start from that vertex. \item Apply the algorithm once more, restricting the first and the last vertex to the single location chosen for the middle vertex in the previous solution. \end{enumerate} Steps \ref{enum:ConvexHull}, \ref{enum:SmallesAngle}, and \ref{enum:Reorient} are important for small closed polylines: for a small closed polyline, the resultant polyline can stay within tolerance of the source polyline even with a suboptimal orientation. As a consequence, without these steps, step \ref{enum:ConstructFirstSolution} may not find the optimal division of the source polyline, leading to a suboptimal solution. \subsection{Optimal Compression by Straight Segments and Arcs} This algorithm is extensible to support arcs. An arc passing through considered locations differs from a segment in that its radius must also be defined. Unfortunately, this adds significant complexity to the algorithm. Nevertheless, such an algorithm is possible. There are different ways to fit an arc to a polyline: minimum integral square differences of squares~\cite{ThomasReference2, IchokuReference3}, minimum integral square differences~\cite{RobinsonReference6, Landau4, PaperArcFitting, FittingOfCircularArcsWithO1Complexity, EfficientFittingOfCircularArcs}, minimum deviation, etc. Algorithms with complexity $O{\left( n \right)}$, where $n$ is the number of vertices in the fitted polyline, are not suitable due to the resulting significant increase in overall complexity. The algorithms with acceptable complexity $O{\left( 1 \right)}$ are~\cite{ThomasReference2, IchokuReference3, FittingOfCircularArcsWithO1Complexity, EfficientFittingOfCircularArcs}; however, algorithms based on integral square differences of squares~\cite{ThomasReference2, IchokuReference3} might break for small arcs and, therefore, are not suitable.
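The following sketch illustrates the running-sum idea behind constant-time fitting: after a linear preprocessing pass, a Kasa-style algebraic circle fit (least squares on $x^2 + y^2 + Dx + Ey + F = 0$) can be evaluated for any candidate vertex range in $O(1)$. It is an illustration of the technique, not an implementation of the cited algorithms, and the function names are assumptions.

```python
import math
import numpy as np

def circle_fit_prefix(xs, ys):
    """Precompute prefix sums so a Kasa-style algebraic circle fit can be
    evaluated on any vertex range i..j in O(1) time."""
    x, y = np.asarray(xs, float), np.asarray(ys, float)
    z = x * x + y * y
    cols = np.stack([x, y, np.ones_like(x), x * x, x * y, y * y, z, z * x, z * y],
                    axis=1)
    P = np.vstack([np.zeros((1, 9)), np.cumsum(cols, axis=0)])  # prefix sums

    def fit(i, j):
        """Least-squares circle (cx, cy, r) through vertices i..j inclusive."""
        Sx, Sy, n, Sxx, Sxy, Syy, Sz, Szx, Szy = P[j + 1] - P[i]
        # normal equations of min sum (x^2 + y^2 + D x + E y + F)^2
        A = np.array([[Sxx, Sxy, Sx], [Sxy, Syy, Sy], [Sx, Sy, n]])
        D, E, F = np.linalg.solve(A, -np.array([Szx, Szy, Sz]))
        cx, cy = -D / 2.0, -E / 2.0
        return cx, cy, math.sqrt(max(cx * cx + cy * cy - F, 0.0))

    return fit
```

The fitted center is $(-D/2, -E/2)$ with $r^2 = c_x^2 + c_y^2 - F$, so each candidate arc costs one $3 \times 3$ solve regardless of how many source vertices it spans.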
Checking the tolerance, end-point, and zigzag conditions for a part of the source polyline would also be time-consuming due to their $O{\left( n \right)}$ complexity. \section{Analysis of the Algorithm Complexity} The algorithm contains three steps: \begin{enumerate}[label={\arabic*.}] \item Preprocessing: construction of convex hulls (section~\ref{sec:TestingSegmentTolerance}) and filling arrays for an efficient zigzag test (section~\ref{sec:TestingZigZag}). \item Construction of the optimal solution (section~\ref{sec:CombinatorialApproach}). \item Reconstruction of the optimal solution (section~\ref{sec:CombinatorialApproach}). \end{enumerate} A significant amount of time is spent on constructing the optimal solution. It is difficult to evaluate the complexity of the optimized construction described in section~\ref{sec:Optimization}; however, the worst-case complexity is \begin{equation} O{\left( N^2 \cdot \max_{0 \leq i \wedge i < N}{\left( N_i^2 \right)} \cdot \log{\left( N \right)} \right)} . \label{eq:WorseComplexity} \end{equation} The complexity of the algorithm depends on the type of polyline it processes. It is very difficult to conclude what the practical complexity of this algorithm is. If the optimal polyline does not have segments describing too many vertices of the source polyline, \eqref{eq:WorseComplexity} tends to be \begin{equation} O{\left( N \cdot \max_{0 \leq i \wedge i < N}{\left( N_i^2 \right)} \right)} . \label{eq:PracticalComplexity} \end{equation} Fig.~\ref{fig:EstimationOfAlgorithmComplexity} shows how much time it takes to process a polyline depending on the number of vertices. The dependence is very close to linear, supporting \eqref{eq:PracticalComplexity}. \begin{figure} [htb] \centering \includegraphics[width = \columnwidth, keepaspectratio]{TimeGraph.png} \caption { Time needed to process a polyline versus the number of vertices. The time is measured in CPU ticks on an Intel Xeon CPU $\text{E5-2670}$ processor. The polylines are generated by a Brownian motion process.
Each vertex is obtained from the previous one by adding a random vector whose components are normally distributed with zero mean and standard deviation $0.25$. The tolerance was set to one. The average reduction in the number of vertices is by a factor of about $50$. } \label{fig:EstimationOfAlgorithmComplexity} \end{figure} \section{Examples} Fig.~\ref{fig:Comparison} shows an example of the algorithm described in this paper. If the source polyline is the noisy version of a ground truth polyline, where the noise does not exceed some threshold, and the algorithm is provided with a tolerance slightly greater than the threshold to account for approximations inside the algorithm, then the resultant polyline will never have more vertices than the ground truth polyline. \begin{figure} [htb] \centering \begin{tabular}{c c} \shortstack{(a) \\ \\ \\ \\ (b) \\ \\ \\ \\ (c)} & \includegraphics[width = 7 cm, keepaspectratio]{ComparisonCombinedImage.png} \end{tabular} \caption { Comparison of the Douglas-Peucker algorithm (b) and optimal polyline compression (c). The green polyline is the ground truth. The red polyline is the source polyline (a), the result of the Douglas-Peucker algorithm~\cite{CompressionAlgorithm} (b), and the result of optimal polyline compression (c). The black dots around vertices of the source polyline are considered locations for the vertices of the compressed polyline. The vertices of the source polyline are deviated from the segments of the ground truth polyline by random values uniformly distributed in the interval $\left( -0.1, 0.1 \right)$. } \label{fig:Comparison} \end{figure} The effectiveness of the approach is shown in Fig.~\ref{fig:ExampleArc}. Nine segments are sufficient to represent the arc with the specified precision. The algorithm not only optimizes the number of segments, but also finds the locations of the segments that minimize integral square differences.
Therefore, as shown in Fig.~\ref{fig:ExampleArc}, the algorithm tends to construct segments of similar length. \begin{figure} [t] \centering \includegraphics[width = \columnwidth, keepaspectratio]{ExampleArc2.png} \caption { The black polyline is the source polyline. The red circles are the vertices of the optimal polyline. The ground truth is a $90 \degree$ arc. The noise is uniformly distributed in a circle with radius equal to one percent of the arc radius. } \label{fig:ExampleArc} \end{figure} Fig.~\ref{fig:GraphCompressionEfficiency} shows the dependence of the compression efficiency on the error introduced by the discrete set of considered locations (see section~\ref{sec:Descretization}). Flexibility in the places where neighboring segments connect to each other is very important for reaching maximum compression, especially for noisy data. \section{Optimal Compression by Orthogonal Directions} The triangular grid for considered locations supports directions in multiples of $30 \degree$. Reconstruction of orthogonal buildings requires support for $90 \degree$~\cite{ReconstructionOfOrthogonalPolygonalLines} and sometimes $45 \degree$. The square grid for considered locations is more appropriate for this task. Notice that because only certain directions are allowed, only segments between pairs of considered locations aligned with these directions may be parts of the resultant polyline. Suppose that the resultant segment goes between vertices $i$ and $j$. Because it has to be within tolerance for all vertices between $i$ and $j$, it goes through their considered locations (except for segments deviating close to the tolerance due to the discretization of considered locations). The optimal solution is found by induction.
Define the optimal solution for polyline $P_0$ as $ \left\{ \begin{aligned} & T_{0, j, q}^{\#}\\ & T_{0, j, q}^{\epsilon} \end{aligned} \right\} = \left\{ \begin{aligned} 0\\ 0 \end{aligned} \right\} $, where $ j = \overline{0, N_0 - 1} $, $ q = \overline{0, M - 1} $, and $M$ is the number of different directions. For the orthogonal case $M = 4$, and for the $45 \degree$ case $M = 8$. Take directions as $\alpha_{i} = \dfrac{360 \degree}{M} \cdot i$, $i = \overline{0, M - 1}$. For $k = \overline{1, N - 1}$, construct the optimal solution for $P_k$ from the optimal solution for $P_{k - 1}$. \begin{multline*} \left\{ \begin{aligned} & T_{k, j, q}^{\#}\\ & T_{k, j, q}^{\epsilon} \end{aligned} \right\} = \\ \min _ { \begin{aligned} 0 \leq j' \wedge j' < N_{k - 1}\\ 0 \leq q' \wedge q' < M\\ 2 \left| q' - q \right| \neq M\\ \functionangle{\left( p_{k, j} - p_{k - 1, j'}, \alpha_{q'} \right)} \end{aligned} } { \left( \left\{ \begin{aligned} & T_{k - 1, j', q'}^{\#} + \delta_{q' \neq q}\\ & T_{k - 1, j', q'}^{\epsilon} + \epsilon _ { \left( k - 1, j' \right), \left( k, j \right) } \end{aligned} \right\} \right) } , \end{multline*} where $ \delta_{q' \neq q} = \left\{ \begin{aligned} 1, & \text{ if } q' \neq q,\\ 0, & \text{ otherwise}; \end{aligned} \right. $ \\ $ \functionangle{ \left( v, \alpha \right)} $ is the check that the vector $v$ has angle $\alpha$ (zero-length vectors are allowed). \begin{figure} [t] \centering \includegraphics[width = \columnwidth, keepaspectratio]{ComplexityGraph2.png} \caption { The number of segments versus discretization error. The polyline was generated by the Brownian motion process in the same way as in Fig.~\ref{fig:EstimationOfAlgorithmComplexity} with $10,000$ vertices. } \label{fig:GraphCompressionEfficiency} \end{figure} The condition $2 \left| q' - q \right| \neq M$ corresponds to prohibiting changes in direction by $180 \degree$.
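The two membership checks in the minimization above can be sketched as follows (the function names are illustrative, and exact grid coordinates are assumed so that the angle comparison is numerically safe):

```python
import math

def allowed_turn(q_prev, q, M):
    """Reversal by 180 degrees is forbidden: 2|q' - q| != M."""
    return 2 * abs(q_prev - q) != M

def aligned(v, q, M, tol=1e-9):
    """Check that vector v = (dx, dy) points along direction
    alpha_q = (360/M)*q degrees; zero-length vectors are allowed."""
    dx, dy = v
    if dx == 0 and dy == 0:
        return True
    d = abs(math.atan2(dy, dx) % (2 * math.pi) - 2 * math.pi * q / M)
    return min(d, 2 * math.pi - d) < tol
```

For the $45 \degree$ case the same indexing applies with $M = 8$, with the additional sharp-angle restriction discussed next.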
For the $45 \degree$ case, it is possible to restrict the resultant polyline from having sharp angles by not allowing a change of direction by $135 \degree$ ($\left| 4 - \left( \left( q' - q \right) \bmod 8 \right) \right| \neq 1$). Notice that there are no checks for the tolerance, direction, and end points because they are satisfied during each induction step. Analyzing the previous solution along the $M$ directions will further reduce the amount of calculation. The total complexity of the algorithm is \begin{equation*} O{\left( N \cdot \max_{0 \leq i \wedge i < N}{\left( N_i \right)} \cdot M \right)} . \end{equation*} For some data, the algorithm may produce an improper result. This happens when the introduction of a zero-length segment lowers the penalty. Because the correct orientation is not known in advance, it is necessary to rotate polylines by different angles and take the solution with the lowest penalty \cite[see section 6]{ReconstructionOfOrthogonalPolygonalLines}. Fig.~\ref{fig:ExampleOrthogonalBuildings} shows an example of the reconstruction of orthogonal buildings. \begin{figure} [htb] \centering \includegraphics[width = 6 cm, keepaspectratio]{ExampleOrthogonalBuildings.png} \caption { The black polylines are buildings reconstructed from lidar data \cite{ReferenceLIDARData}. The red polylines are the resultant orthogonal shapes. The blue polylines are the ground truth taken from \cite{ReferenceGroundTruthData}. } \label{fig:ExampleOrthogonalBuildings} \end{figure} The reconstruction of buildings with $45 \degree$ sides is shown in Fig.~\ref{fig:Example45DegreeBuildings}. \begin{figure} [htb] \centering \includegraphics[width = 5 cm, keepaspectratio]{ExampleDiagonalBuildings.png} \caption { This differs from Fig.~\ref{fig:ExampleOrthogonalBuildings} in that $45 \degree$ segments are allowed.
} \label{fig:Example45DegreeBuildings} \end{figure} The main difference between the algorithm described in this section and \cite{ReconstructionOfOrthogonalPolygonalLines} is in the parameters. Specifying the tolerance is easier than specifying the penalty $\Delta$ for each additional segment. \section{Conclusion} This paper describes an approximation algorithm that finds a polyline with the minimum number of vertices while satisfying the tolerance restriction. The solution is optimal with the following limitations: \begin{itemize} \item The vertices of the compressed polyline are limited to considered locations (section~\ref{sec:Descretization}). \item The test that the vertex of the compressed polyline is located between some vertices of the source polyline is approximate due to the snapping of the breaking point (section~\ref{sec:Optimization}). \item The tests for end points (section~\ref{sec:TestingSegmentEndPoints}) and zigzags are approximate (section~\ref{sec:TestingZigZag}). \end{itemize} The performance of the algorithm can be greatly improved if the number of considered locations is decreased without losing quality. This requires further research. \newcommand{\doi}[1]{\textsc{doi}: \href{http://dx.doi.org/#1}{\nolinkurl{#1}}} \begingroup \raggedright \bibliographystyle{IEEEtran}
\section{Introduction} \label{intro} Cortical networks can generate a wide variety of oscillatory rhythms with frequencies spanning five orders of magnitude \citep{buzsaki04}. Slow oscillatory activity (0.1-1Hz) has been observed {\em in vivo} during periods of decreased alertness, such as slow wave sleep and anesthesia \citep{steriade93}. Furthermore, such activity can be produced {\em in vitro} when bathing cortical slices in a medium with typical extracellular ion concentrations \citep{sanchezvives00}. A key feature of these slow oscillations is that they tend to be an alternating sequence of two bistable states, referred to as the {\em up} and {\em down} states. Up states in networks are characterized by high levels of firing activity, due to depolarization in single cells. Down states in networks typically appear quiescent, due to hyperpolarization in single cells. There is strong evidence that up states are neural circuit attractors that emerge due to synaptic feedback \citep{cossart03}. This suggests up states may be spontaneous remnants of stimulus-induced persistent states utilized for working memory \citep{wang01} and other network computations \citep{major04}. Several different cellular and synaptic mechanisms have been suggested to underlie the transitions between up and down states. One possibility is that the network is recurrently coupled with excitation, stabilizing both a quiescent and an active state \citep{amit97,renart07}. Fluctuations due to probabilistic synapses, channel noise, and randomness in network connectivity can then lead to spontaneous transitions between the quiescent and active state \citep{bressloff10,litwinkumar12}. Alternatively, switches between low and high activity states may arise from some underlying slow systematic process.
For instance, it has been shown that competition between recurrent excitation and the negative feedback produced by activity-dependent synaptic depression can lead to slow oscillations in firing rate whose timescale is set by the depression timescale \citep{bart05,kilpatrick10}. Excitatory-inhibitory networks with facilitation can produce slow oscillations due to the slow facilitation of feedback inhibition that terminates the up state; the up state is then rekindled due to positive feedback from recurrent excitation \citep{melamed08}. These mechanisms rely on dynamic changes in the strength of network connections. However, \cite{compte03} proposed that single cell mechanisms can also shift the network between up and down states. The up state is maintained by strong recurrent excitation balanced by inhibition, and transitions to the down state occur due to a slow adaptation current. Once in the down state, the adaptation current is inactivated, and excitation reinitiates the up state. A similar mechanism has been utilized in models of perceptual rivalry, where dominance switches between two mutually inhibiting populations are due to the build-up of a rate-based adaptation current \citep{laing02,morenobote07}. In this paper, we utilize a rate-based model of an excitatory network with spike rate adaptation to explore the impact that noise perturbations have upon the relative phase and duration of slow oscillations. We find that, as in the spiking model studied by \cite{compte03}, the interplay between recurrent excitation and adaptation produces a slow oscillation in the firing rate of the network. In fact, for slow timescale adaptation currents, the oscillations evolve as fast switches between a low and a high activity state, which are stable fixed points of the adaptation-free system. Since the timescale and slow dynamics of the oscillation are set by the adaptation current, we mainly focus on the impact of perturbations to the adaptation variable in our model.
As we will show, perturbations of the activity variable have a much lower impact on the oscillation phase. Introducing noise into the adaptation variable of the population model leads to a speeding up of the slow oscillation, due to early switching between the low and high activity states. Another remarkable feature of slow oscillations, observed during slow-wave sleep and anesthesia, is that the up and down states tend to be synchronized across different regions of cortex and thalamus \citep{steriade93,massimini04}. Specifically, both the up and down states start nearly synchronously in cells located up to 12mm apart \citep{volgushev06}. Such remarkable coherence of distant network activity cannot be accomplished by single cell mechanisms; it requires either long range network connectivity or some external signal forcing entrainment \citep{traub96,smeal10}. Activity tends to originate from several different foci in the network, quickly spreading across the rest of the network on a timescale orders of magnitude faster than the oscillation itself \citep{compte03,massimini04}. The fact that the onset of quiescence is fast and well synchronized means there must be either a rapid relay signal between all foci or some global signal cueing the down state. Rather than suggest a disynaptic relay, using long range excitation acting on local inhibition, we suggest that background noise can serve as a synchronizing global signal \citep{ermentrout08}. Noisy but correlated inputs have been shown to be capable of synchronizing uncoupled populations of phase oscillators \citep{teramae04} as well as experimentally recorded cells in vitro \citep{galan06}. Here we will show correlated noise is a viable mechanism for coordinating slow oscillations in distinct uncoupled neural populations. The paper is organized as follows. We introduce the neural population model in section \ref{mod}, indicating the way external noise is incorporated into the model.
In section \ref{periodicsoln}, we demonstrate the periodic solutions that emerge in the noise-free model, showing it is possible to derive analytical expressions for the oscillation period in the case of steep firing rate functions. Then, in section \ref{prc_pws} we show how to derive phase sensitivity functions that describe how external perturbations to the periodic solution impact the asymptotic phase of the oscillation. As demonstrated, the impact of perturbations to the adaptation variable is much stronger than that of activity variable perturbations, especially for longer adaptation timescales. Thus, our studies of the impact of noise mainly focus on the effects of fluctuations in the adaptation variable. We find, in section \ref{1dnoise}, that adding noise to the adaptation variable leads to up and down state durations that are shorter and more balanced, so that the up and down states last for similar lengths of time. In section \ref{ssynch}, we demonstrate that slow oscillations in distinct populations can become entrained to one another when both populations are forced by the same common noise signal. This phenomenon is robust to the introduction of independent noise in each population, as we show in section \ref{indepnos}. \section{Adaptive neural populations: deterministic and stochastic models} \label{mod} We begin by describing the models we will use to explore the impact of external perturbations on slow oscillations. Motivated by \cite{compte03}, we will focus on a neural population model with spike rate adaptation, akin to mutually inhibitory models used to study perceptual rivalry \citep{laing02,morenobote07}. {\bf Single population model.} In a single population, neural activity $u(t)$ receives negative feedback due to a subtractive spike rate adaptation term \citep{benda03} \begin{subequations} \label{single} \begin{align} \dot{u}(t) &= -u(t) + f(\alpha u(t) - a(t) + I), \\ \tau \dot{a}(t) &= -a(t) + \phi u (t).
\end{align} \end{subequations} Here, $u$ represents the mean firing rate of the neural population with excitatory connection strength $\alpha$. The negative feedback variable $a$ is spike frequency adaptation with strength $\phi$ and time constant $\tau$. For some of our analysis we will utilize the assumption $\tau \gg 1$, based on the fact that many forms of spike rate adaptation tend to be much slower than neural membrane time constants \citep{benda03}. The constant tonic drive $I$ initiates the high firing rate (up) state, and slow adaptation eventually attenuates activity to a low firing rate (down) state. Weak but positive drive $I>0$ is meant to model the presence of low spiking threshold cells that spontaneously fire, utilized as a mechanism for initiating the up state in \cite{compte03}. The firing rate function $f$ is a monotone, saturating function such as the sigmoid \begin{align} f(x) = \frac{1}{1 + {\rm e}^{- \gamma x}}. \label{sig} \end{align} Commonly, in studies of neural field models, the high gain limit of (\ref{sig}) is taken to yield the Heaviside firing rate function \citep{amari77,laing02} \begin{eqnarray} H(x) &=& \left\{ \begin{array}{cc} 1 : & x \geq 0, \\ 0 : & x < 0, \end{array} \right. \label{H} \end{eqnarray} which often allows for a more straightforward analytical study of model dynamics. We exploit this fact extensively in our study. Nonetheless, we have also carried out many numerical simulations of the model for a smooth firing rate function (\ref{sig}), and they correspond to the results we present for sufficiently high gain. Note, this form of adaptation is often referred to as {\em subtractive} negative feedback, as current is subtracted from the population input. Alternative models of slow neural population oscillations have employed short term synaptic depression \citep{tabak00,bart05,kilpatrick10}, a form of {\em divisive} negative feedback.
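As a quick numerical illustration of these dynamics, the single population model (\ref{single}) with the sigmoid (\ref{sig}) can be integrated with a forward Euler scheme; the sketch below uses the parameters of Fig.~\ref{singlefp}{\bf A} ($\tau = 100$, $I = 0.2$, $\alpha = 0.5$, $\phi = 1$, $\gamma = 15$). The step size and function names are assumptions.

```python
import math

def f(x, gamma=15.0):
    """Sigmoidal firing rate, eq. (sig)."""
    return 1.0 / (1.0 + math.exp(-gamma * x))

def simulate(T=2000.0, dt=0.05, alpha=0.5, phi=1.0, tau=100.0, I=0.2):
    """Forward-Euler integration of the single population model, eq. (single)."""
    n = int(T / dt)
    u, a = 0.0, 0.0
    us = []
    for _ in range(n):
        du = (-u + f(alpha * u - a + I)) * dt
        da = (-a + phi * u) * dt / tau
        u, a = u + du, a + da
        us.append(u)
    return us
```

With these parameters the trajectory exhibits the slow oscillation of Fig.~\ref{singlefp}{\bf A}: fast excursions of $u$ punctuating the slow drift of $a$.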
A primary concern of this work is the response of (\ref{single}) to external perturbations acting on the activity $u$ and adaptation $a$ variables. To study this, we will use both an exact method and a linearization to identify the phase response curve of the limit cycle solutions to (\ref{single}). Understanding the susceptibility of limit cycles of (\ref{single}) to inputs will help us understand ways in which noise will influence the frequency and regularity of oscillations. {\bf Stochastic single population model.} Following our analysis of the noise-free system, we will consider how fluctuations influence oscillatory solutions to (\ref{single}). To do so, we will employ the following Langevin equation for (\ref{single}) forced by white noise \begin{subequations} \label{stochmod} \begin{align} \d u(t) &= \left[ - u(t) + f(\alpha u(t) - a(t) + I ) \right] \d t + \d \xi_u(t) \\ \d a(t) &= \left[ - a(t) + \phi u(t) \right] \d t/ \tau + \d \xi_a(t), \end{align} \end{subequations} where we have introduced the independent Gaussian white noise processes $\xi_u(t)$ and $\xi_a(t)$ with zero mean $\langle \xi_u(t) \rangle = \langle \xi_a(t) \rangle = 0$ and variances $\langle \xi_u(t)^2 \rangle = \sigma_u^2 t$ and $\langle \xi_a(t)^2 \rangle = \sigma_a^2 t$. Extending our results concerning the phase response curve, we will explore how noise forcing impacts the statistics of the resulting stochastic oscillations in (\ref{stochmod}). In particular, since we find noise tends to impact the phase of the oscillation more strongly when applied to the adaptation variable, we will tend to focus on the case $\xi_u \equiv 0$.
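The Langevin system (\ref{stochmod}) can be integrated with a standard Euler--Maruyama scheme, discretizing $\d \xi_a$ as $\sigma_a \sqrt{\Delta t}\, \mathcal{N}(0,1)$; following the discussion above, the sketch keeps noise in the adaptation variable only ($\xi_u \equiv 0$). The parameter values, noise amplitude, and function names are assumptions for illustration.

```python
import math
import random

def em_simulate(T=1000.0, dt=0.05, alpha=0.5, phi=1.0, tau=100.0,
                I=0.2, gamma=15.0, sigma_a=0.02, seed=1):
    """Euler--Maruyama integration of the Langevin model, eq. (stochmod),
    with noise in the adaptation variable only (xi_u = 0)."""
    rng = random.Random(seed)
    n = int(T / dt)
    u, a = 0.0, 0.0
    traj = []
    sq = math.sqrt(dt)           # noise increment scales as sqrt(dt)
    for _ in range(n):
        fu = 1.0 / (1.0 + math.exp(-gamma * (alpha * u - a + I)))
        u += (-u + fu) * dt
        a += (-a + phi * u) * dt / tau + sigma_a * sq * rng.gauss(0.0, 1.0)
        traj.append((u, a))
    return traj
```

Note that the deterministic drift is identical to (\ref{single}); only the adaptation update carries the stochastic increment.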
{\bf Stochastic dual population model.} Finally, we will focus on how correlations in noise-forcing impact the coherence of two distinct uncoupled populations \begin{subequations} \label{dual} \begin{align} \d u_1 &= \left[ - u_1(t) + f(\alpha u_1(t) - a_1(t) + I ) \right] \d t + \d \xi_{u} \\ \d a_1 &= \left[ - a_1(t) + u_1(t) \right] \d t / \tau + \d \xi_{a} \\ \d u_2 &= \left[ -u_2 (t) + f(\alpha u_2(t) - a_2(t) + I ) \right] \d t + \d \xi_{u} \\ \d a_2 &= \left[ -a_2 (t) + u_2 (t) \right] \d t / \tau + \d \xi_{a}. \end{align} \end{subequations} Thus, the system (\ref{dual}) describes the dynamics of two distinct neural populations $u_1$ and $u_2$, with inputs $I$. Our main interest lies in the impact the noise terms have upon the phase relationship between the two systems' states. In this version of the model, noise to the activity variables $\xi_u$ is totally correlated, as is noise to the adaptation variables $\xi_a$. Thus, all means are zero and $\langle \xi_{u}^2 (t) \rangle = \sigma_u^2 t = {\bf D}_{11} t$. Furthermore, $\langle \xi_{a}^2 (t) \rangle = \sigma_a^2 t = {\bf D}_{22} t$. For this study, we assume there are no correlations between activity and adaptation noise, so $\langle \xi_u(t) \xi_{a} (t) \rangle = 0$. A more general version of the model (\ref{dual}) would consider the possibility of independent noise in each population \begin{subequations} \label{dualind} \begin{align} \d u_1 &= \left[ - u_1(t) + f(\alpha u_1(t) - a_1(t) + I ) \right] \d t + \chi_u \d \xi_{uc} + \sqrt{1 - \chi_u^2} \d \xi_{u1} \\ \d a_1 &= \left[ - a_1(t) + u_1(t) \right] \d t / \tau + \chi_a \d \xi_{ac} + \sqrt{1 - \chi_a^2} \d \xi_{a1} \\ \d u_2 &= \left[ -u_2 (t) + f(\alpha u_2(t) - a_2(t) + I ) \right] \d t + \chi_u \d \xi_{uc} + \sqrt{1 - \chi_u^2} \d \xi_{u2} \\ \d a_2 &= \left[ -a_2 (t) + u_2 (t) \right] \d t / \tau + \chi_a \d \xi_{ac} + \sqrt{1 - \chi_a^2} \d \xi_{a2} .
\end{align} \end{subequations} Noise terms all have zero mean and variances defined $\langle \xi_{uj}^2 (t) \rangle = \sigma_{uj}^2 t = {\bf D}_{uj} t$ and $\langle \xi_{aj}^2 (t) \rangle = \sigma_{aj}^2 t= {\bf D}_{aj} t$ ($j=1,2,c$). To ease calculations, we take ${\bf D}_{u1} = {\bf D}_{u2} \equiv {\bf D}_{ul} = \sigma_u^2$ and ${\bf D}_{a1} = {\bf D}_{a2} \equiv {\bf D}_{al} = \sigma_a^2$. The degree of noise correlation between populations is controlled by the parameters $\chi_u$ and $\chi_a$, so in the limit $\chi_{u,a} \to 1$, the model (\ref{dualind}) becomes (\ref{dual}). \section{Periodic solutions of a single population} \label{periodicsoln} \begin{figure} \begin{center} \includegraphics[width=6cm]{fig1a.jpg} \includegraphics[width=6cm]{fig1b.jpg} \\ \includegraphics[width=6cm]{fig1c.jpg} \includegraphics[width=6cm]{fig1d.jpg} \end{center} \caption{Single adapting neural population (\ref{single}) generates slow oscillations. ({\bf A}) Numerical simulation of (\ref{single}) for adaptation timescale $\tau = 100$ (1s) and input $I=0.2$. ({\bf B}) Partitioning of $(\tau, I)$ parameter space shows the range of inputs $I$ leading to oscillations expands as the adaptation timescale $\tau$ is increased, according to (\ref{Ihopf}). ({\bf C},{\bf D}) Bifurcation diagram showing supercritical ($I_{-H}$) and subcritical ($I_{+H}$) Hopf bifurcations that arise as the input is increased for ({\bf C}) $\tau=10$ and ({\bf D}) $\tau=100$. Firing rate function is sigmoidal (\ref{sig}). Other parameters are $\phi = 1$, $\alpha = 0.5$, and $\gamma = 15$} \label{singlefp} \end{figure} We begin by studying periodic solutions of the single population system (\ref{single}), as demonstrated in Fig. \ref{singlefp}{\bf A}. First, we note that for firing rate functions $f$ with finite gain, we can identify the emergence of oscillations by analyzing the stability of the equilibria of (\ref{single}). 
That is, we assume $(\dot{u}, \dot{a}) = (0,0)$, so the system becomes \begin{align*} \bar{u} &= f( \alpha \bar{u} - \bar{a} + I), \\ \bar{a} &= \phi \bar{u}, \end{align*} which can be reduced to the single equation \begin{align} \bar{u} &= f((\alpha - \phi ) \bar{u} + I) = g(\bar{u}). \label{sfpeqn} \end{align} Roots of (\ref{sfpeqn}), which define fixed points of (\ref{single}), are plotted as a function of the input $I$ in Fig. \ref{singlefp}{\bf C},{\bf D}. Utilizing the sigmoidal firing rate function $f$ given by (\ref{sig}), we can show that there will be a single fixed point as long as $\phi > \alpha$. In this case, we can compute \begin{align*} \frac{\d g(\bar{u})}{\d \bar{u}} = - (\phi - \alpha) f'((\alpha - \phi ) \bar{u} + I) = - \frac{(\phi - \alpha) {\rm e}^{- \gamma ((\alpha - \phi ) \bar{u} + I)}}{\left( 1 + {\rm e}^{- \gamma ((\alpha - \phi ) \bar{u} + I)} \right)^2} <0. \end{align*} Since $g(\bar{u})$ is monotone decreasing, $\bar{u} - g(\bar{u})$ is monotone increasing. Further, noting $\lim_{\bar{u} \to \pm \infty} \left[ \bar{u} - g(\bar{u}) \right] = \pm \infty$, it is clear $\bar{u} - g(\bar{u})$ crosses zero once, so (\ref{sfpeqn}) has a single root when $\phi > \alpha$. Stability of this equilibrium is given by the eigenvalues of the associated Jacobian \begin{align*} J(\bar{u}, \bar{a}) = \left( \begin{array}{cc} -1 + \alpha f'((\alpha - \phi) \bar{u} + I ) & - f'((\alpha - \phi) \bar{u} +I) \\ \phi / \tau & -1/ \tau \end{array} \right). \end{align*} We note that the sigmoid (\ref{sig}) satisfies the Riccati equation $f' = \gamma f (1 - f)$, so we can use (\ref{sfpeqn}) to write \begin{align*} J(\bar{u}, \bar{a}) = \left( \begin{array}{cc} -1 + \alpha \gamma \bar{u} (1- \bar{u}) & - \gamma \bar{u} (1- \bar{u}) \\ \phi / \tau & -1 / \tau \end{array} \right). \end{align*} Oscillations arise when stable spiral equilibria destabilize through a Hopf bifurcation.
Hopf bifurcations will occur when complex eigenvalues associated with fixed points $(\bar{u}, \bar{a})$ cross from the left to the right half plane. These requirements are captured by the pair of expressions ${\rm tr}(J) = 0$ and ${\rm tr}(J)^2 < 4 {\rm det} (J)$. Thus, a necessary condition for the Hopf bifurcation point is that the equilibrium value $\bar{u}$ satisfy \begin{align*} \alpha \gamma \bar{u} ( 1- \bar{u}) = 1 + 1 / \tau. \end{align*} Solving this for $\bar{u}$ yields \begin{align} \bar{u}_{\pm H} = \frac{1}{2} \left[ 1 \pm \sqrt{1 - 4 \chi} \right], \hspace{4mm} \chi = \frac{1 + 1 / \tau}{\alpha \gamma}. \label{uhopf} \end{align} Thus, Hopf bifurcations will only occur when the timescale of adaptation is sufficiently large, $\tau > \left[ \alpha \gamma / 4 - 1\right]^{-1}$. Plugging the formula (\ref{uhopf}) back into the fixed point equation (\ref{sfpeqn}) and solving for the input $I$, we can parameterize Hopf bifurcation curves based upon the equation \begin{align} I_{\pm H} = \frac{1}{\gamma} \ln \left[ \frac{\bar{u}_{\pm H}}{1 - \bar{u}_{\pm H}} \right] - (\alpha - \phi ) \bar{u}_{\pm H}, \label{Ihopf} \end{align} along with the additional condition ${\rm tr}(J)^2 < 4 {\rm det} (J)$, which becomes \begin{align} \frac{4}{\tau^2} < \frac{4 \phi}{\alpha \tau^2} + \frac{4 \phi}{\alpha \tau}, \label{hopfineq} \end{align} which will always hold as long as $\phi > \alpha$. We partition the parameter space $(\tau, I)$ using our formula for the Hopf curve (\ref{Ihopf}) in Fig. \ref{singlefp}{\bf B}. As demonstrated, there tend to be either two or zero Hopf bifurcation points for a given timescale $\tau$, and the two Hopf points coalesce where $\tau = \left[ \alpha \gamma / 4 - 1\right]^{-1}$.
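The pair of formulas (\ref{uhopf}) and (\ref{Ihopf}) can be evaluated directly. The following sketch, assuming the Fig. 1 values $\alpha = 0.5$, $\phi = 1$, $\gamma = 15$, returns the two Hopf points for a given $\tau$, or an empty list when adaptation is too fast.

```python
import math

# A sketch of the Hopf bifurcation curves (uhopf) and (Ihopf); the parameter
# values alpha = 0.5, phi = 1, gamma = 15 follow Fig. 1 and are assumptions
# of this illustration.
alpha, phi, gamma = 0.5, 1.0, 15.0

def hopf_points(tau):
    """Return the pairs (u_H, I_H) where tr J = 0, or [] when the timescale
    tau lies below [alpha*gamma/4 - 1]^{-1} so no Hopf points exist."""
    chi = (1.0 + 1.0 / tau) / (alpha * gamma)
    disc = 1.0 - 4.0 * chi
    if disc < 0.0:
        return []
    s = math.sqrt(disc)
    pts = []
    for u in ((1.0 - s) / 2.0, (1.0 + s) / 2.0):
        I = math.log(u / (1.0 - u)) / gamma - (alpha - phi) * u
        pts.append((u, I))
    return pts
```

Scanning `hopf_points(tau)` over a range of $\tau$ reproduces the partition of $(\tau, I)$ parameter space in Fig. \ref{singlefp}{\bf B}.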
In the limit of slow adaptation $\tau \gg 1$, we can separate the timescales of the activity $u$ and adaptation $a$ variables, finding that $u$ equilibrates according to the equation \begin{align} \hat{u}(t) = f(\alpha \hat{u}(t) - a(t) + I), \label{uqss} \end{align} and subsequently $a$ will slowly evolve according to the equation \begin{align} \dot{a}(t) = \left[ \phi \hat{u}(t) - a \right] / \tau. \label{aqss} \end{align} Equation (\ref{uqss}) provides an implicit formula for $\hat{u}(t)$ in terms of $a(t)$, so the dynamics will tend to slowly evolve along the direction of the $a$ variable. This demonstrates why periodic solutions to (\ref{single}) consist of a slow rise and decay phase of $a$, punctuated by fast excursions in the activity variable $u$. In general, it is not straightforward to analytically treat the pair of equations (\ref{uqss}) and (\ref{aqss}), but we will show how computing solutions of the singular system becomes straightforward when we take the high gain limit $\gamma \to \infty$. \begin{figure} \begin{center} \includegraphics[width=6cm]{fig2a.jpg} \includegraphics[width=6cm]{fig2b.jpg} \end{center} \caption{Analytical approximations to periodic solutions of (\ref{single}) with a Heaviside firing rate function (\ref{H}). ({\bf A}) Numerical simulation (solid lines) of the periodic solution is well approximated by the analytical approximation (dashed lines) given by (\ref{Hpersol}) when $I=0.2$ and $\tau = 100$. ({\bf B}) The period of the oscillation $T$ computed from numerical simulations (dots) is accurately approximated by the analytical formula (solid lines) given by (\ref{singper}). Other parameters are $\alpha = 0.5$ and $\phi = 1$.} \label{Hperplots} \end{figure} Having established the existence of periodic solutions to (\ref{single}) in the case of sigmoid firing rates (\ref{sig}), we now explore the system in the high gain limit $\gamma \to \infty$ whereby the firing rate function becomes a Heaviside (\ref{H}).
In this case, fixed points $(\bar{u},\bar{a})$ satisfy the equations \begin{align*} \bar{u} = H( ( \alpha - \phi ) \bar{u} + I ) = \left\{ \begin{array}{ll} 1 \ &: \bar{u} < I/(\phi - \alpha), \\ 0 \ &: \bar{u} > I/(\phi - \alpha), \end{array} \right. \end{align*} and $\bar{a} = \phi \bar{u}$. Thus, assuming $\phi > \alpha$, $\bar{u}=0$ when $I<0$ and $\bar{u} = 1$ when $I> (\phi - \alpha)$. In both cases, the fixed points are linearly stable. When $0 < I < (\phi - \alpha)$, there are no fixed points and we expect to find oscillatory solutions. Assuming $\tau \gg 1$, we can exploit a separation of timescales to identify the shape and period of these limit cycles. To begin, we note that on fast timescales \begin{align*} \dot{u}(t) &= - u(t) + H(I + \alpha u(t) - a_0), \end{align*} where $a_0$ is a quasi-steady state. On slow timescales of order $\tau$, $u(t)$ quickly equilibrates and \begin{subequations} \label{fastsub} \begin{align} u(t) &= H(I + \alpha u(t) - a(t)), \\ \tau \dot{a}(t) &= -a(t) + \phi u(t). \end{align} \end{subequations} Periodic solutions to (\ref{single}) must obey the condition $(u(t),a(t)) = (u(t+nT),a(t+nT))$ for $t \in [0,T]$ and $n \in {\mathbb Z}$, so we focus on the domain $t \in [0,T]$. Examining (\ref{fastsub}), we can see oscillations in (\ref{single}) involve switches between $u(t) \approx 1$ and $u(t) \approx 0$. We translate time so that $u(t) \approx 1$ on $t \in [0,T_1)$ and $u(t) \approx 0$ on $t \in [T_1,T)$. Subsequently, this means for $t \in [0,T_1]$ the system (\ref{fastsub}) becomes $u \equiv 1$ and $\tau \dot{a} = -a + \phi$ so $a(t) = \phi - (\phi - I) {\rm e}^{-t/\tau}$. We know $a(0)=I$ because $u(0^-) \equiv 0$ in (\ref{fastsub}), and the argument of $H(x)$ must have crossed zero at $t=0$. In a similar way, we find on $t \in [T_1,T)$ that $u \equiv 0$ and $a(t) = (I+ \alpha) {\rm e}^{-(t-T_1)/\tau}$.
Using the conditions $a(T_1) = I+ \alpha$ and $a(T) = I$, we find that the rise time of the adaptation variable (or the duration of the {\em up} state) is \begin{align*} T_1 &= \tau \ln \left[ \frac{\phi - I}{\phi - \alpha - I} \right], \end{align*} and the decay time (or the duration of the {\em down} state) is \begin{align*} T_2 &= \tau \ln \left[ \frac{I+\alpha}{I} \right], \end{align*} and the total period of the oscillation is \begin{align} T &= \tau \ln \left[ \frac{(I+ \alpha) (\phi - I)}{I(\phi - \alpha - I)} \right]. \label{singper} \end{align} Thus, approximate periodic solutions to (\ref{single}) in the case of a Heaviside firing rate (\ref{H}) take the form \begin{subequations} \label{Hpersol} \begin{align} u(t) &= \left\{ \begin{array}{ll} 1 \ & : t \in [0,T_1), \\ 0 \ & : t \in [T_1,T], \end{array} \right. \\ a(t) &= \left\{ \begin{array}{ll} \phi - (\phi - I) {\rm e}^{-t/\tau} \ & : t \in [0,T_1), \\ (I+ \alpha) {\rm e}^{-(t-T_1)/\tau} \ & : t \in [T_1,T]. \end{array} \right. \end{align} \end{subequations} We demonstrate the accuracy of the approximation (\ref{Hpersol}) in Fig. \ref{Hperplots}{\bf A}. Furthermore, we show that the relationship between the period $T$ and model parameters is well captured by the formula (\ref{singper}). Notice there is a non-monotonic relationship between the period $T$ and the input $I$. We can understand this further by noting that the rise time $T_1$ of the adaptation variable $a$ increases monotonically with input \begin{align*} \frac{\d T_1}{\d I} = \frac{\tau \alpha}{(\phi - I)(\phi - \alpha - I )} > 0, \end{align*} when $0 < I < (\phi - \alpha)$. Furthermore, the decay time $T_2$ of the adaptation variable $a$ decreases monotonically with input \begin{align*} \frac{\d T_2}{\d I} = - \frac{\tau \alpha}{I(I+ \alpha)} < 0, \end{align*} when $0 < I < (\phi - \alpha)$.
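The singular period formula (\ref{singper}) is easy to check against a direct simulation of (\ref{single}) with a Heaviside firing rate, and against a numerical scan for its minimizing input. The sketch below assumes the Fig. 2 parameters ($\alpha = 0.5$, $\phi = 1$, $\tau = 100$, $I = 0.2$); the step size and integration horizon are likewise assumptions of this illustration.

```python
import math

# Sketch comparing the closed-form period (singper) to an Euler simulation of
# (single) with a Heaviside firing rate; Fig. 2 parameter values assumed.
alpha, phi, tau = 0.5, 1.0, 100.0
I = 0.2

def T1(I):
    return tau * math.log((phi - I) / (phi - alpha - I))

def T2(I):
    return tau * math.log((I + alpha) / I)

def period(I):
    return T1(I) + T2(I)

def simulated_period(dt=0.01, t_end=1500.0):
    u, a, t, ups = 0.0, 0.0, 0.0, []
    while t < t_end:
        H = 1.0 if alpha * u + I - a > 0 else 0.0
        u_new = u + dt * (-u + H)
        a += dt * (phi * u - a) / tau
        if u < 0.5 <= u_new:          # upward crossing of u through 1/2
            ups.append(t)
        u = u_new
        t += dt
    return ups[-1] - ups[-2]          # steady-state inter-switch interval

# scan for the input minimizing the period over (0, phi - alpha)
grid = [0.01 + 0.0005 * k for k in range(960)]
I_star = min(grid, key=period)
T_min = 2.0 * tau * math.log((phi + alpha) / (phi - alpha))
```

For these values the formula gives $T \approx 223.4$, and the simulated inter-switch interval agrees up to the $O(1)$ corrections from the fast relaxation of $u$.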
Thus, as $I \to 0^+$, the slow oscillation's period $T$ is dominated by very long decay times $T_2 \gg 1$ and as $I \to (\phi - \alpha)^-$, it is dominated by very long rise times $T_1 \gg 1$. We can identify the minimal period as a function of the input $I$ by finding the critical point of $T(I)$. To do so, we differentiate and simplify \begin{align*} \frac{\d T}{\d I} = \frac{\tau \alpha \phi (2 I - (\phi - \alpha))}{I(I+\alpha)(\phi - I)(\phi - \alpha - I)}, \end{align*} so the critical point of $T(I)$ is $I_{crit} = (\phi - \alpha)/2$, which corresponds to the minimal value of the period $T_{min} = T(I_{crit}) = 2 \tau \ln \left[ (\phi + \alpha)/(\phi - \alpha) \right]$ as pictured in Fig. \ref{Hperplots}{\bf B}. \section{Phase response curves} \label{prc_pws} We can further understand the dynamics of the slow oscillations in (\ref{single}) by computing phase response curves for both the case of a sigmoidal firing rate (\ref{sig}) and the Heaviside firing rate (\ref{H}). As we will show, perturbations of the activity variable $u$ have decreasing impact as the timescale of adaptation $\tau$ and the gain $\gamma$ of the firing rate are increased. Perturbations of the adaptation variable $a$ tend to dominate the resulting dynamics, as it is the evolution of this slow variable that primarily determines the phase of the oscillation. \begin{figure} \begin{center} \includegraphics[width=4cm]{fig3a.jpg} \includegraphics[width=4cm]{fig3b.jpg} \includegraphics[width=4cm]{fig3c.jpg} \\ \includegraphics[width=4cm]{fig3d.jpg} \includegraphics[width=4cm]{fig3e.jpg} \includegraphics[width=4cm]{fig3f.jpg} \end{center} \caption{({\bf A}, {\bf B}, {\bf C}) Periodic solution $(u,a)$ and ({\bf D}, {\bf E}, {\bf F}) phase sensitivity function $(Z_u,Z_a)$ of (\ref{single}) plotted as a function of phase $\theta = t/T$ for a sigmoidal firing rate function (\ref{sig}).
({\bf A},{\bf D}) For shorter adaptation timescale $\tau = 10$ and input $I=0.2$, the activity variable $u$ has a more rounded trajectory, so perturbations to activity influence the oscillation phase more (note the size of the lobes of $Z_u$ in ({\bf D})). ({\bf B},{\bf E}) As the adaptation timescale is increased to $\tau =100$, with $I=0.2$, the influence of perturbations to the activity variable decreases (compare the lobes of $Z_u$ to those in ({\bf D})). Perturbations of the adaptation variable influence the phase more strongly as shown by the change in the relative amplitude of $Z_a$. ({\bf C},{\bf F}) Increasing the input $I=0.4$, with $\tau = 10$, increases the relative duration of the rise time of $a$. As a result, there is a wider region where perturbations to $a$ advance the phase. Other parameters are $\alpha = 0.5$, $\phi =1$, and $\gamma = 15$.} \label{prcsig} \end{figure} To begin, we derive a general formula that linearly approximates the influence of small perturbations on limit cycle solutions $(u_0(t), a_0(t))$ to (\ref{single}). Essentially, we utilize the fact that solutions ${\bf Z} (t)$ to the adjoint equation associated with linearization about the limit cycle solution $(u_0(t), a_0(t))$ provide a complete description of how infinitesimal perturbations of the limit cycle impact its phase \citep{ermentrout96,brown04}. To start, we note that \begin{align*} {\mathcal L} \left( \begin{array}{c} u_1 \\ a_1 \end{array} \right) = \left( \begin{array}{l} \dot{u_1} +u_1 - \alpha f'(\alpha u_0 - a_0 + I) u_1 + f'(\alpha u_0 - a_0 + I) a_1, \\ \dot{a_1} - \phi u_1/\tau + a_1/\tau \end{array} \right) = \left( \begin{array}{c} 0 \\ 0 \end{array} \right), \end{align*} is the linearization of (\ref{single}) about the limit cycle $(u_0(t), a_0(t))$.
Defining the inner product on $T$-periodic functions in ${\mathbb{R}}^2$ as $\langle F(t), G(t) \rangle = \int_0^T F(t) \cdot G(t) \d t$, we can find the adjoint operator ${\mathcal L}^*$ by noting it satisfies $\langle F, {\mathcal L}G \rangle = \langle {\mathcal L}^*F, G \rangle$ for all $L^2$ integrable vector functions $F,G$. We can then compute \begin{align} {\mathcal L}^* \left( \begin{array}{c} v \\ b \end{array} \right) = \left( \begin{array}{l} - \dot{v} + v - \alpha f'(\alpha u_0 - a_0 + I) v - \phi b / \tau \\ -\dot{b} + f'(\alpha u_0 - a_0 + I) v +b/ \tau \end{array} \right). \end{align} It can be shown that the null space of ${\mathcal L}^*$ describes the response of the phase of the limit cycle $(u_0(t),a_0(t))$ to infinitesimal perturbations \citep{brown04}. Note that if $(u_0(t),a_0(t))$ is a stable limit cycle then the nullspace of ${\mathcal L}$ is spanned by scalar multiples of $(u_0'(t),a_0'(t))$. Furthermore, appropriate normalization requires that ${\bf Z}(t) \cdot (u_0'(t),a_0'(t)) = 1$ along with ${\mathcal L}^* {\bf Z} = 0$ \citep{ermentrout96}. To numerically compute ${\bf Z}(t) = (Z_u(t), Z_a(t))$, we thus integrate the system \begin{subequations} \label{adjeqns} \begin{align} \dot{Z}_u(t) &= Z_u(t) - \alpha f'(\alpha u_0(t) - a_0(t) + I) Z_u(t) - \phi Z_a(t) / \tau, \\ \dot{Z}_a(t) &= f'(\alpha u_0(t) - a_0(t) + I) Z_u(t) - Z_a(t)/ \tau, \end{align} \end{subequations} backward in time, taking the long time limit to find $(Z_u(t), Z_a(t))$ on $t \in [0,T]$, and normalizing $\langle (Z_u(t), Z_a(t)),(u_0'(t),a_0'(t)) \rangle = 1$ by rescaling appropriately. We demonstrate this result in Fig. \ref{prcsig}, showing the relationship between the shape and relative amplitude of the phase sensitivity functions $(Z_u,Z_a)$ and the parameters. Notably, perturbations of the activity variable $u$ become less influential as the timescale of adaptation $\tau$ is increased (see $Z_u$).
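The backward-integration procedure just described can be sketched numerically: relax (\ref{single}) onto its limit cycle, store one period, then iterate (\ref{adjeqns}) backward until the non-periodic transient decays. The parameters below follow Fig. 3A,D ($\tau = 10$, $I = 0.2$); the Euler step size, and omitting the final rescaling that enforces the normalization condition, are simplifying assumptions of this sketch.

```python
import math

# Numerical sketch of the adjoint (iPRC) computation for (single) with the
# sigmoidal firing rate (sig); Fig. 3A,D parameters assumed.
alpha, phi, gamma, tau, I = 0.5, 1.0, 15.0, 10.0, 0.2
dt = 0.0005

def f(x):
    return 1.0 / (1.0 + math.exp(-gamma * x))

def rhs(u, a):
    return -u + f(alpha * u - a + I), (phi * u - a) / tau

u, a = 0.6, 0.3
for _ in range(int(200 / dt)):       # relax onto the limit cycle
    du, da = rhs(u, a)
    u, a = u + dt * du, a + dt * da

samples = []
for _ in range(int(100 / dt)):       # record a window with several periods
    samples.append((u, a))
    du, da = rhs(u, a)
    u, a = u + dt * du, a + dt * da
thr = 0.5 * (min(p[0] for p in samples) + max(p[0] for p in samples))
cross = [n for n in range(1, len(samples))
         if samples[n - 1][0] < thr <= samples[n][0]]
orbit = samples[cross[0]:cross[1]]   # one period of (u_0, a_0)
N = len(orbit)                       # the period is T = N * dt

Zu, Za = 0.0, 1.0
trace = [0.0] * N                    # Z_a over the final backward period
for k in range(10 * N - 1, -1, -1):  # backward in time over ten periods
    u0, a0 = orbit[k % N]
    fp = gamma * f(alpha * u0 - a0 + I) * (1.0 - f(alpha * u0 - a0 + I))
    dZu = Zu - alpha * fp * Zu - phi * Za / tau
    dZa = fp * Zu - Za / tau
    Zu, Za = Zu - dt * dZu, Za - dt * dZa
    if k < N:
        trace[k] = Za
```

Backward integration is used because the non-periodic adjoint mode, which grows forward in time, decays in reverse time, leaving the periodic solution spanning the null space of ${\mathcal L}^*$.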
Furthermore, there is a sharper transition between phase advance and phase delay regions of the adaptation phase response ($Z_a$) for larger timescales $\tau$. In addition to a general formula for the phase sensitivity functions $(Z_u(t),Z_a(t))$, we can derive an amplitude-dependent formula for the response of limit cycle solutions $(u_0(t),a_0(t))$ of (\ref{single}) with a Heaviside firing rate (\ref{H}), assuming $\tau \gg 1$. In this case, we utilize the formula for the period (\ref{singper}) and limit cycle (\ref{Hpersol}), derived using a separation of timescales assumption. Then, we can compute the change to the variables $(u,a)$ as a result of a perturbation $(\delta_u, \delta_a)$, which we denote $(u_0(t),a_0(t)) \stackrel{(\delta_u, \delta_a)}{\longmapsto} (\tilde{u}_0(t), \tilde{a}_0(t))$. We are primarily interested in how the relative time in the limit cycle is altered by a perturbation $\delta_u$; that is, how much closer to or further from the end of the period $T$ the limit cycle is after being perturbed. We can readily determine this by first inverting the formula we have for $(u_0(t),a_0(t))$, given by (\ref{Hpersol}), to see how this value determines the time $t_0$ along the limit cycle \begin{align} t_0 (u_0,a_0) = \left\{ \begin{array}{cl} \tau \ln \left[ (\phi - I)/(\phi - a_0) \right] & : u_0 = 1, \\ \tau \ln \left[ (\phi - I)(I+\alpha)/a_0/(\phi - \alpha - I) \right] & : u_0 = 0. \end{array} \right. \label{invertime} \end{align} Using this formula, we can now map the value $(\tilde{u}_0,\tilde{a}_0)$ to an associated updated relative time $t_0$ along the oscillation. Here, we decompose the impact of perturbations to the $u$ and $a$ variables. We begin by studying the impact of perturbations $\delta_u$ to the activity variable $u$. We can directly compute \begin{align*} \tilde{u}_0(t) = H(I + \alpha \left[ u_0(t) + \delta_u \right] - a_0(t)).
\end{align*} Thus, the singular system (\ref{fastsub}) will be unaffected by such perturbations if ${\rm sgn} (I + \alpha [u + \delta_u] - a ) = {\rm sgn} (I + \alpha u - a )$. This is related to the flatness of the susceptibility function $Z_u$ over much of the time domain in Fig. \ref{prcsig}{\bf D},{\bf E},{\bf F}. However, if ${\rm sgn} (I + \alpha [u + \delta_u] - a ) \neq {\rm sgn} (I + \alpha u - a )$, then $\tilde{u}_0(t) = 1-u_0(t)$, as detailed in the following piecewise smooth map: \begin{align*} \begin{array}{ll} u_0(t) = 0 \mapsto \tilde{u}_0(t) = 1 \ & : \delta_u > (a_0(t) - I)/ \alpha > 0 , \\[1ex] u_0(t) = 0 \mapsto \tilde{u}_0(t) = 0 \ & : \delta_u < (a_0(t) - I)/ \alpha , \\[1ex] u_0(t) = 1 \mapsto \tilde{u}_0(t) = 0 \ & : \delta_u < - ( I + \alpha - a_0(t))/ \alpha < 0, \\[1ex] u_0(t) = 1 \mapsto \tilde{u}_0(t) = 1 \ & : \delta_u > - ( I + \alpha - a_0(t))/ \alpha, \\ \end{array} \end{align*} where $(u_0(t),a_0(t))$ are defined by (\ref{Hpersol}). The formula (\ref{invertime}) can then be utilized to compute the updated relative time $\tilde{t}_0 := t_0(\tilde{u}_0,\tilde{a}_0)$, finding \begin{align} \tilde{t}_0 = \left\{ \begin{array}{ll} \tau \ln \left[ (\phi - I)(I+ \alpha)/a_0/(\phi - \alpha - I) \right] \ & : -\delta_u > ( I + \alpha - a_0)/ \alpha > 0, \ u_0 = 1 \\[1ex] T + \tau \ln \left[ ( \phi - I)/(\phi - a_0) \right] \ & : \delta_u > (a_0 - I)/ \alpha > 0 , \ u_0 = 0 \\[1ex] t_0(u_0,a_0) \ & : {\rm otherwise}, \end{array} \right. \label{ptcforu} \end{align} where $a_0 = \phi - (\phi - I){\rm e}^{-t_0/\tau}$ if $u_0=1$ and $a_0 = (I+\alpha){\rm e}^{-(t_0-T_1)/\tau}$ if $u_0 = 0$. We can refer to the function $\tilde{t}_0/T$, where $\tilde{t}_0$ is defined by (\ref{ptcforu}), as the {\em phase transition curve} for $u$ perturbations.
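The perturbed-time map for $u$ perturbations can be sketched directly from the sign of the Heaviside argument in (\ref{fastsub}), rather than from tabulated inequalities. The parameter values below ($\alpha = 0.5$, $\phi = 1$, $\tau = 100$, $I = 0.2$) are assumptions of this illustration.

```python
import math

# Sketch of the perturbed-time map for activity perturbations delta_u on the
# singular limit cycle (Hpersol); flips are detected from the sign of the
# Heaviside argument. Fig. 2 parameter values assumed.
alpha, phi, tau, I = 0.5, 1.0, 100.0, 0.2
T1 = tau * math.log((phi - I) / (phi - alpha - I))
T = T1 + tau * math.log((I + alpha) / I)

def orbit(t0):
    """(u_0, a_0) on the singular limit cycle (Hpersol)."""
    if t0 < T1:
        return 1.0, phi - (phi - I) * math.exp(-t0 / tau)
    return 0.0, (I + alpha) * math.exp(-(t0 - T1) / tau)

def new_time(t0, du):
    """Relative time after an instantaneous perturbation u -> u + du."""
    u0, a0 = orbit(t0)
    u_new = 1.0 if I + alpha * (u0 + du) - a0 > 0 else 0.0
    if u_new == u0:
        return t0                  # perturbation does not cross threshold
    if u0 == 1.0:                  # up -> down: jump onto the decay branch
        return T1 + tau * math.log((I + alpha) / a0)
    return T + tau * math.log((phi - I) / (phi - a0))   # down -> up
```

Small perturbations return the unchanged time, which is the piecewise constant structure referred to above: only threshold-crossing perturbations shift the phase.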
Thus, the function $G_u ( \theta, \delta_u) = (\tilde{t}_0 -t_0)/T$ will be the {\em phase response curve}, where $\theta = t_0/T$, and phase advances occur for positive values and phase delays occur for negative values. We plot the function $G_u(\theta, \delta_u)$ in Fig. \ref{fastslowprc}{\bf A} for different values of $\delta_u$, demonstrating that the dependence on the perturbation amplitude is nontrivial: increasing the amplitude does not simply rescale the curve, it expands the region of non-zero phase shift. Due to the singular nature of the fast-slow limit cycle (\ref{Hpersol}), the size of the phase perturbation has a piecewise constant dependence on the amplitude of the $u$ perturbation. Note that this formulation allows us to quantify phase shifts that would not be captured by a perturbative theory for phase sensitivity functions, as computed for the general system in (\ref{adjeqns}). \begin{figure} \begin{center} \includegraphics[width=4cm]{fig4a.jpg} \includegraphics[width=4cm]{fig4b.jpg} \includegraphics[width=4cm]{fig4c.jpg} \end{center} \caption{Phase response curves of the fast-slow timescale separated system $\tau \gg 1$. ({\bf A},{\bf B}) Amplitude $\delta_u$- and $\delta_a$-dependent phase response curves $G_u(\theta, \delta_u)$ and $G_a(\theta, \delta_a)$ characterizing phase advances/delays resulting from perturbations of neural activity $u$ and adaptation $a$. We compare analytical formulae (solid lines) to numerically computed PRCs (dashed lines). ({\bf C}) Phase response curve associated with perturbations of the adaptation variable $a$ in the small amplitude $0 < |\delta_a| \ll 1$ limit. We compare the large amplitude formula (solid line) determined by (\ref{ptcfora}) to the linear approximation (dotted line) given by (\ref{prcalin}) to numerical computations (dashed line).} \label{fastslowprc} \end{figure} For perturbations $\delta_a$ of the adaptation variable $a$, there is a more graded dependence of the phase advance/delay amplitude on the perturbation amplitude $\delta_a$.
We expect this, as it was a property we observed in $Z_a$ as we varied parameters in Fig. \ref{prcsig}. We can partition the limit cycle $(u_0(t),a_0(t))$ into four different regions: two advance/delay regions of exponential saturation and two early threshold crossings. First, note if $u_0(t) = 1$ and $a_0(t) + \delta_a < I + \alpha$, then \begin{align} \tilde{u}_0(t) = 1, \hspace{5mm} \tilde{a}_0(t) = \phi - (\phi - I){\rm e}^{-t/\tau} + \delta_a, \label{da1} \end{align} so $\tilde{t}_0 = T_1 - t_w$ with $t_w = \tau \ln \left[ (\phi - a_0 - \delta_a)/(\phi - I - \alpha) \right]$, but if $a_0(t) + \delta_a > I + \alpha$, then \begin{align} \tilde{u}_0(t) = 0, \hspace{5mm} \tilde{a}_0(t) = \phi - (\phi - I){\rm e}^{-t/\tau} + \delta_a. \label{da2} \end{align} Determining the relative time of the perturbed variables $(\tilde{u}_0(t), \tilde{a}_0(t))$ in (\ref{da1}) is straightforward using the mapping (\ref{invertime}). However, to determine the relative time described by (\ref{da2}), we compute the time, after the perturbation, until $\tilde{a}_0 (t) = I+\alpha$, which will be $t_w = \tau \ln \left[ (a_0 + \delta_a)/(I+\alpha) \right]$, so $\tilde{t}_0 = T_1 - t_w$. Second, note if $u_0(t) = 0$ and $a_0(t) + \delta_a > I $, then \begin{align} \tilde{u}_0(t) = 0, \hspace{5mm} \tilde{a}_0(t) = (I+ \alpha) {\rm e}^{-(t-T_1)/\tau} + \delta_a, \label{da3} \end{align} so $\tilde{t}_0 = T - t_w$ with $t_w = \tau \ln \left[ (a_0 + \delta_a)/I \right]$, but if $a_0(t) + \delta_a < I$, which requires $\delta_a<0$, then \begin{align} \tilde{u}_0(t) = 1, \hspace{5mm} \tilde{a}_0(t) = (I+\alpha) {\rm e}^{-(t-T_1)/\tau} + \delta_a. \label{da4} \end{align} In the case of (\ref{da4}), we note that $t_w = \tau \ln \left[ (\phi - a_0 - \delta_a)/(\phi - I) \right]$, so $\tilde{t}_0 = T - t_w$.
Combining our results, we find we can map the relative time to the perturbed relative time as \begin{align} \tilde{t}_0 = \left\{ \begin{array}{ll} T_1 - \tau \ln \left[ (\phi - a_0 - \delta_a)/(\phi - I - \alpha ) \right] \ & : \delta_a < I + \alpha - a_0, \ u_0 = 1, \\[1ex] T_1 - \tau \ln \left[ (a_0 + \delta_a)/(I + \alpha ) \right] \ & : \delta_a > I + \alpha - a_0, \ u_0 = 1, \\[1ex] T - \tau \ln \left[ (a_0 + \delta_a)/I \right] \ & : \delta_a > I - a_0, \ u_0 = 0, \\[1ex] T - \tau \ln \left[ (\phi - a_0 - \delta_a )/ (\phi - I) \right] \ & : \delta_a < I - a_0, \ u_0 = 0, \end{array} \right. \label{ptcfora} \end{align} where $a_0 = \phi - ( \phi - I) {\rm e}^{-t_0/\tau}$ if $u_0 = 1$ and $a_0 = (I+ \alpha) {\rm e}^{- (t_0 - T_1)/\tau}$ if $u_0 = 0$. Again, we have a {\em phase transition curve} given by the function $\tilde{t}_0/T$ and {\em phase response curve} given by $G_a(\theta, \delta_a ) = (\tilde{t}_0 - t_0)/T$, where $\theta = t_0/T$. As opposed to the case of $u$ perturbations, the phase perturbation here depends smoothly on the amplitude of the $a$ perturbation $\delta_a$. Furthermore, we can obtain a perturbative description of the phase response curve for the singular system (\ref{fastsub}) in two ways: (a) Taylor expanding the amplitude-dependent phase response curves defined by (\ref{ptcforu}) and (\ref{ptcfora}) and truncating to linear order, or (b) solving the adjoint equation (\ref{adjeqns}) in the case of a Heaviside firing rate (\ref{H}) and long adaptation timescale $\tau \gg 1$. We begin with the first derivation, which simply requires differentiating (\ref{ptcforu}) to demonstrate that the {\em infinitesimal phase response curve (iPRC)} associated with perturbations of the $u$ variable is zero almost everywhere.
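The four branches of (\ref{ptcfora}) translate directly into a sketch of the adaptation phase response curve $G_a(\theta, \delta_a)$; the parameter values below ($\alpha = 0.5$, $\phi = 1$, $\tau = 100$, $I = 0.2$) are assumptions of this illustration.

```python
import math

# Sketch of the adaptation phase response curve G_a(theta, delta_a) built from
# the perturbed-time map (ptcfora); Fig. 2 parameter values assumed.
alpha, phi, tau, I = 0.5, 1.0, 100.0, 0.2
T1 = tau * math.log((phi - I) / (phi - alpha - I))
T = T1 + tau * math.log((I + alpha) / I)

def new_time(t0, da):
    if t0 < T1:                                    # up state branch of (Hpersol)
        a0 = phi - (phi - I) * math.exp(-t0 / tau)
        if da < I + alpha - a0:
            return T1 - tau * math.log((phi - a0 - da) / (phi - I - alpha))
        return T1 - tau * math.log((a0 + da) / (I + alpha))
    a0 = (I + alpha) * math.exp(-(t0 - T1) / tau)  # down state branch
    if da > I - a0:
        return T - tau * math.log((a0 + da) / I)
    return T - tau * math.log((phi - a0 - da) / (phi - I))

def G_a(theta, da):
    """Phase response curve: advance for positive values, delay for negative."""
    t0 = theta * T
    return (new_time(t0, da) - t0) / T
```

By construction $G_a(\theta, 0) = 0$, and its derivative at $\delta_a = 0$ recovers the linearized response of the up and down branches.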
However, differentiating (\ref{ptcfora}) reveals that the iPRC associated with perturbations of the adaptation variable $a$ is given by the piecewise smooth function \begin{align} \label{prcalin} Z_a (t) = \left\{ \begin{array}{ll} \displaystyle \frac{\tau}{T(\phi - I)} {\rm e}^{t/\tau} \ & : t \in (0, T_1), \\[1ex] \displaystyle - \frac{\tau}{T(I+\alpha)} {\rm e}^{(t - T_1)/\tau} \ & : t \in (T_1, T). \end{array} \right. \end{align} Furthermore, note we could derive the same result by solving the adjoint equations (\ref{adjeqns}) in the case of Heaviside firing rate (\ref{H}), so that \begin{subequations} \label{Hadjeq} \begin{align} \dot{Z}_u(t) &= - Z_u(t) + \alpha \delta ( \alpha u_0(t) - a_0(t) + I) Z_u(t) + \phi Z_a(t) / \tau, \\ \dot{Z}_a(t) &= - \delta( \alpha u_0(t) - a_0 (t) + I) Z_u(t) + Z_a (t) / \tau. \end{align} \end{subequations} Note that we have reversed time, $t \mapsto - t$, so we can simply solve the system forward. Furthermore, we can use the identity \begin{align} \delta( \alpha u_0(t) - a_0(t) + I) = \frac{\delta(t)}{u'(0) - a'(0)} + \frac{\delta(t-T_1)}{u'(T_1) - a'(T_1)}. \end{align} Utilizing the separation of timescales, $\tau \gg 1$, we find that almost everywhere (except where $t=0, T_1, T$), we have that (\ref{Hadjeq}) becomes the system \begin{align} \dot{Z}_u(t) = -Z_u(t), \hspace{6mm} \tau \dot{Z}_a(t) = Z_a(t).\label{Hadjeq2} \end{align} As before, $Z_u(t)$ will be zero almost everywhere, whereas $Z_a(t) = A(t) {\rm e}^{t/\tau}$, where $A(t)$ is a piecewise constant function taking two different values on $t \in (0,T_1)$ and $t \in (T_1, T)$, determined by considering the $\delta$ distribution terms. This indicates how one would derive the formula (\ref{prcalin}) using the adjoint equations (\ref{Hadjeq}). Note that in previous work \citep{jayasuriya12}, we explored the entrainment of slowly adapting populations to external forcing, comprised of smooth and non-smooth inputs to the system (\ref{single}).
In the next section, we explore the impact of external noise forcing on the slow oscillations of (\ref{single}), subsequently demonstrating that noise can be utilized to entrain the up and down states of two distinct networks. \section{Impact of noise on the timing of up/down states} \label{1dnoise} We now study the effects of noise on the duration of up and down states of the single population model (\ref{single}). Switches between high and low firing rate states can occur at irregular intervals \citep{sanchezvives00}, suggesting internal or external sources of noise determine state changes. This section focuses on how noise can reshape the mean duration of up and down residence times. Guided by our findings in the previous sections, we focus on noise applied to the adaptation variable. As we have shown, very weak perturbations to the neural activity variable have a negligible effect on the phase of oscillations. Analytic calculations are presented for the piecewise smooth system with Heaviside firing rate (\ref{H}), as accurate approximations of the mean up and down state durations can be computed. \begin{figure} \begin{center} \includegraphics[width=5.5cm]{fig5a.jpg} \includegraphics[width=5.5cm]{fig5b.jpg} \\ \includegraphics[width=5.5cm]{fig5c.jpg} \includegraphics[width=5.5cm]{fig5d.jpg} \end{center} \caption{Noise alters the duration of up and down states. ({\bf A}) Numerical simulation of the stochastically driven population model (\ref{stochmod}) demonstrates up and down state durations (e.g., $T_1$ and $T_2$) are variable when driven by adaptation noise $\xi_a$ with $\langle \xi_a^2 \rangle = \sigma_a^2 t$, $\sigma_a = 0.01$. Switches are determined by the threshold crossings of the adaptation variable $a(t)=I$ and $a(t) = I+ \alpha$. ({\bf B}) Up/down states become more variable when the noise amplitude $\sigma_a = 0.02$.
({\bf C}) Mean durations of the up and down state, $\langle T_1 \rangle$ and $\langle T_2 \rangle$, decrease as a function of the noise amplitude $\sigma_a$. ({\bf D}) Impact of noise $\sigma_a$ on the balance of up to down state durations $\bar{T}_1/\bar{T}_2$ as input $I$ is varied. Firing rate is given by the Heaviside function (\ref{H}). Other parameters are $\alpha = 0.5$, $\phi = 1$, and $\tau = 50$. } \label{snoiseper} \end{figure} Our approach is to derive expressions for the mean first passage times of both the up and down states ($\bar{T}_1$ and $\bar{T}_2$) of the stochastic population model (\ref{stochmod}). Focusing on adaptation noise allows us to utilize the separation of fast-slow timescales, and recast the pair of equations as a stochastic-hybrid system \begin{align*} u(t) &= H( \alpha u(t) + I - a(t)), \\ \d {a}(t) &= \left[ -a(t) + \phi u(t) \right] \d t/ \tau + \d \xi_a(t), \end{align*} where $\xi_a$ is white noise with mean $\langle \xi_a \rangle = 0$ and variance $\langle \xi_a^2 \rangle = \sigma_a^2 t$. To begin, assume the system has just switched to the up state, so the initial conditions are $u(0) = 1$ and $a(0) = I$. Determining the amount of time until a switch to the down state requires we calculate the time $T_1$ until the threshold crossing $a(T_1) = I + \alpha$ where $a(t)$ is determined by the stochastic differential equation (SDE) \begin{align*} \d a(t) = \left[ - a(t) + \phi \right] \d t/ \tau + \d \xi_a, \end{align*} which is the well-known threshold crossing problem for an Ornstein-Uhlenbeck process \citep{gardiner04}. The mean $\bar{T}_1$ of the passage time distribution is thus given by defining the potential $V(a)=\frac{a^2}{2 \tau} - \frac{\phi a}{\tau}$ and computing the integral \begin{align*} \bar{T}_1 &= \frac{1}{\sigma_a^2} \int_{I}^{I + \alpha} \int_{- \infty}^{x} {\rm e}^{\left[ V(x) - V(y) \right]/ \sigma_a^2} \d y \d x.
\end{align*} Next, note that the duration of the down state $T_2$ will be the amount of time until the threshold crossing $a(T_2) = I$ given $u(0) = 0$ and $a(0) = I+ \alpha$, where $a(t)$ obeys the SDE \begin{align*} \d a(t) = \left[ -a(t) \right] \d t/ \tau + \d \xi_a(t). \end{align*} Again, defining the potential $V(a) = \frac{a^2}{2 \tau}$, we can compute \begin{align*} \bar{T}_2 &= \frac{1}{\sigma_a^2} \int_{-I-\alpha}^{-I} \int_{- \infty}^{x} {\rm e}^{\left[ V(x) - V(y) \right]/ \sigma_a^2} \d y \d x. \end{align*} We compare the theory we have developed utilizing passage time problems to residence times computed numerically in Fig. \ref{snoiseper}{\bf C}. Notice that increasing the noise amplitude tends to shorten both up and down state durations on average, due to early threshold crossings of the variable $a(t)$. Furthermore, we can examine how noise reshapes the relative balance of up versus down state durations. Specifically, we will explore how the relative fraction of time the up state persists $\bar{T}_1/(\bar{T}_1 + \bar{T}_2)$ changes with noise intensity $\sigma_a$ and input $I$. First, notice that, in the absence of noise, the ratio satisfies \begin{align} \frac{T_1}{T_1+T_2} = \frac{\ln \left[ (\phi - I)/(\phi - \alpha - I) \right]}{\ln \left[ (I+ \alpha)(\phi - I)/I(\phi - \alpha - I) \right]}. \label{updownrat} \end{align} The up and down states have equal durations when $T_1/(T_1 + T_2) = 1/2$, or when the input $I = (\phi - \alpha)/2$, as shown in Fig. \ref{snoiseper}{\bf D}. Interestingly, this is the precise input value at which the period attains its minimum, as we demonstrated in section \ref{periodicsoln}. Along with our plot of (\ref{updownrat}) in the noise-free case ($\sigma_a = 0$), we also study the impact of noise on this measure of up-down state balance. Noise leads to up and down state durations becoming more similar, so the ratio (\ref{updownrat}) of the means $\bar{T}_1$ and $\bar{T}_2$ flattens as a function of the input $I$.
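Both the passage-time integral for $\bar{T}_1$ and the noise-free duty cycle (\ref{updownrat}) can be sketched numerically. The version below assumes the Fig. 5 parameters ($\alpha = 0.5$, $\phi = 1$, $\tau = 50$, $I = 0.2$); the trapezoid grid spacings and the truncation of the lower integration limit at `y_min` are further assumptions of this discretization.

```python
import math

# Sketch of the mean first passage time integral for the up state duration and
# of the noise-free duty cycle (updownrat); Fig. 5 parameter values assumed.
alpha, phi, tau, I = 0.5, 1.0, 50.0, 0.2

def V(a):
    # potential for the up-state Ornstein-Uhlenbeck process
    return a * a / (2.0 * tau) - phi * a / tau

def mean_T1(sigma, dx=0.005, dy=0.005, y_min=-4.0):
    """Trapezoid-rule evaluation of the double integral for T1-bar."""
    s2 = sigma * sigma
    nx = int(round(alpha / dx))
    total = 0.0
    for i in range(nx + 1):
        x = I + i * dx
        wx = 0.5 if i in (0, nx) else 1.0
        ny = int(round((x - y_min) / dy))
        inner = 0.0
        for j in range(ny + 1):
            y = y_min + j * dy
            wy = 0.5 if j in (0, ny) else 1.0
            inner += wy * math.exp((V(x) - V(y)) / s2)
        total += wx * inner * dy * dx
    return total / s2

def duty(I):
    """Noise-free fraction of the period spent in the up state."""
    T1 = math.log((phi - I) / (phi - alpha - I))
    T2 = math.log((I + alpha) / I)
    return T1 / (T1 + T2)
```

As the text describes, the computed $\bar{T}_1$ approaches the deterministic rise time as $\sigma_a \to 0$ and shrinks as the noise amplitude grows, while the duty cycle passes through $1/2$ exactly at $I = (\phi - \alpha)/2$.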
This is due to the fact that long durations, wherein the variable $a(t)$ occupies the tail of exponentially saturating functions $A_0 + A_1 {\rm e}^{-t/\tau}$, are shortened by early threshold crossings due to the external noise forcing. \section{Synchronizing two uncoupled populations} \label{ssynch} Now we demonstrate that common noise can synchronize the up and down states of two distinct and uncoupled populations. We begin with the case of identical noise and then, in section \ref{indepnos}, relax this assumption to show that some level of coherence is still possible when each population has an intrinsic and independent source of noise. This is motivated by the observation that the neural Langevin equation derived in the large system-size limit of a neural master equation tends to possess intrinsic noise in each population, in addition to an extrinsic common noise term \citep{bressloff11}. As we will show, intrinsic noise tends to disrupt the phase synchronization due to extrinsic noise. To begin, we recast the stochastic system (\ref{dual}), describing a pair of adapting noise-driven neural populations, as a pair of phase equations: \begin{subequations} \label{dstrat} \begin{align} \d \theta_1 (t) &= \omega \d t + {\bf Z}( \theta_1(t) ) \cdot \d {\boldsymbol \xi} (t), \\ \d \theta_2 (t) &= \omega \d t + {\bf Z}( \theta_2(t)) \cdot \d {\boldsymbol \xi} (t), \end{align} \end{subequations} where $\theta_1$ and $\theta_2$ are the phases of the first and second neural populations. As we demonstrate in Fig. \ref{figstochsynch}{\bf A}, this introduction of common noise tends to drive the oscillation phases $\theta_1(t)$ and $\theta_2(t)$ toward one another. Note that since the governing equations of both populations are the same, then the phase sensitivity function ${\bf Z} (\theta)$ will be the same for both. Furthermore, the synchronized solution $\theta_1(t) = \theta_2(t)$ is absorbing -- once the phases synchronize, they remain so.
We can analytically calculate the Lyapunov exponent $\lambda$ of the synchronized state to determine its stability. In particular, we will be interested in how this stability depends on the parameters that shape the dynamics of adaptation. \begin{figure} \begin{center} \includegraphics[width=5.5cm]{fig6a.jpg} \includegraphics[width=5.5cm]{fig6b.jpg} \\ \includegraphics[width=5.5cm]{fig6c.jpg} \includegraphics[width=5.5cm]{fig6d.jpg} \end{center} \caption{Synchronizing slow oscillations in two uncoupled populations described by (\ref{dual}) with sigmoidal firing rate (\ref{sig}). ({\bf A}) Single realization of the system (\ref{dual}) driven by common noise $\xi_a$ to the adaptation variable ($\langle \xi_a^2 \rangle = \varepsilon^2 t$, $\varepsilon = 0.01$) with input $I=0.2$ and adaptation timescale $\tau = 50$. Notice that the phase difference $\psi (t) = \Delta_1(t) - \Delta_2(t)$ roughly decreases over time. ({\bf B}) Plot of the log of the phase difference $y(t) = \ln \psi (t)$ for several realizations (thin lines) compared with the theory (thick line) of the mean $y(0) + \lambda t$ computed using the Lyapunov exponent (\ref{lyapapprox}). ({\bf C}) Lyapunov exponent $\lambda$ decreases as a function of the adaptation timescale $\tau$, for $I = 0.2$. We compare numerical simulations (dots) to theory (solid). ({\bf D}) Lyapunov exponent $\lambda$ varies non-monotonically with the strength of the input $I$. 
Other parameters are $\alpha = 0.5$, $\gamma = 15$, and $\phi = 1$.} \label{figstochsynch} \end{figure} We next convert the pair of Stratonovich differential equations into an equivalent pair of Ito differential equations: \begin{subequations} \label{dito} \begin{align} \d \theta_1 (t) &= \left[ \omega + {\bf Z}'(\theta_1(t))^T {\bf D} {\bf Z} (\theta_1(t)) \right] \d t + {\bf Z} (\theta_1 (t)) \cdot \d {\boldsymbol \xi} (t), \\ \d \theta_2 (t) &= \left[ \omega + {\bf Z}'(\theta_2 (t))^T {\bf D} {\bf Z} (\theta_2(t)) \right] \d t + {\bf Z} ( \theta_2 (t)) \cdot \d {\boldsymbol \xi} (t), \end{align} \end{subequations} introducing a drift term due to our change in definition of the noise term. Now, to determine the stability of the synchronized state $\theta_1 (t) = \theta_2(t)$, we assume we are infinitesimally close to it. We define $\psi (t) = \theta_1(t) - \theta_2(t)$ and require $|\psi (t) | \ll 1$. Linearizing the system of Ito differential equations (\ref{dito}) with respect to $\psi (t)$ then yields \begin{align} \d \psi (t) = \psi (t) \left[ \left( {\bf Z}' (\theta(t) )^T {\bf D} {\bf Z} ( \theta(t) ) \right)' \d t + {\bf Z}'(\theta(t)) \cdot \d {\boldsymbol \xi} \right], \label{pdiff} \end{align} where $\theta (t)$ obeys either one of the equations in (\ref{dito}). Applying the change of variables $y (t) = \ln \psi (t) $, we can rewrite (\ref{pdiff}) as \begin{align} \d y (t) = \left( {\bf Z}' {\bf D} {\bf Z} \right)' \d t - \left( {\bf Z}'^T {\bf D} {\bf Z}' \right) \d t + {\bf Z}' \cdot \d {\boldsymbol \xi} (t). \label{ylog} \end{align} Notice that, on average, the log of the phase difference $y(t)$ tends to decrease over time (Fig. \ref{figstochsynch}{\bf B}), indicating the phases $\theta_1$ and $\theta_2$ move toward one another.
Subsequently, we can integrate equation (\ref{ylog}) to determine the mean drift of $y(t)$ \begin{align*} \lambda := \lim_{t \to \infty} \frac{1}{t} \int_0^t \left[ \left( {\bf Z}'(\theta(s)) {\bf D} {\bf Z}(\theta(s)) \right)' - \left( {\bf Z}'^T(\theta(s)) {\bf D} {\bf Z}'(\theta(s)) \right) \right] \d s, \end{align*} so the phase difference $\psi (t)$ will tend to decay (grow) if the Lyapunov exponent $\lambda <0$ ($\lambda >0$), and the synchronous state will be stable (unstable). Now, utilizing ergodicity of (\ref{ylog}), we can compute $\lambda$ using the ensemble average across realizations of ${\bf Z}' (\theta(t)) \cdot {\boldsymbol \xi} (t)$, so \begin{align} \lambda &= \int_0^1 P_s (\theta) \left[ \left( {\bf Z}'(\theta) {\bf D} {\bf Z}(\theta) \right)' - \left( {\bf Z}'^T(\theta) {\bf D} {\bf Z}'(\theta) \right) \right] \d \theta, \label{ensavg} \end{align} where $P_s(\theta)$ is the steady state distribution of $\theta$. Since noise is weak (${\bf D}_{jk} \ll 1$, $j,k=1,2$), we can approximate the distribution as constant $P_s(\theta) = 1$. Upon applying this to the integrand of (\ref{ensavg}) and noting the periodicity of ${\bf Z} (\theta)$, we can approximate the Lyapunov exponent as \begin{align} \lambda &= - \int_0^1 {\bf Z}'^T(\theta) {\bf D} {\bf Z}'(\theta) \d \theta. \label{lyapapprox} \end{align} Assuming noise to the activity variable $u$ and adaptation variable $a$ is not correlated, ${\bf D}$ will be diagonal.
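For a concrete phase sensitivity function, the weak-noise approximation (\ref{lyapapprox}) is a single quadrature; a sketch follows, in which the single-harmonic ${\bf Z}$ and the diagonal intensity matrix ${\bf D}$ are hypothetical choices, not quantities derived from the model.

```python
import numpy as np

# Evaluating lambda = -int_0^1 Z'(theta)^T D Z'(theta) dtheta for a hypothetical
# single-harmonic Z(theta) = (a sin(2 pi theta), b cos(2 pi theta)).
a, b = 0.3, 1.5
D = np.diag([1e-4, 1e-2])          # diagonal: noise to u and a uncorrelated

N = 4096
theta = np.arange(N) / N           # periodic grid on [0, 1)
Zp = np.vstack([ 2*np.pi*a*np.cos(2*np.pi*theta),    # Z_u'(theta)
                -2*np.pi*b*np.sin(2*np.pi*theta)])   # Z_a'(theta)

# Rectangle rule is exact here, since the integrand is a trigonometric polynomial
lam = -np.einsum('it,ij,jt->t', Zp, D, Zp).mean()
print(f"Lyapunov exponent lambda ~ {lam:.6f}")
```

For this single harmonic the result should match the closed form $-2\pi^2(a^2 D_{11} + b^2 D_{22})$, consistent with the Fourier-series expression for $\lambda$.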
In this case, we can further decompose the phase sensitivity function into its Fourier expansion \begin{align*} {\bf Z} (\theta) = \sum_{k=0}^{\infty} {\bf a}_k \sin (2 \pi k \theta) + {\bf b}_k \cos (2 \pi k \theta), \end{align*} where ${\bf a}_k = ({\bf a}_{k1}, {\bf a}_{k2})^T$ and ${\bf b}_k = ({\bf b}_{k1}, {\bf b}_{k2})^T$ are vectors in ${\mathbb{R}}^2$ so that \begin{align*} {\bf Z}' (\theta) = \sum_{k=0}^{\infty} 2 \pi k \left[ {\bf a}_k \cos (2 \pi k \theta) - {\bf b}_k \sin (2 \pi k \theta) \right], \end{align*} and we can expand the terms in (\ref{lyapapprox}) to yield \begin{align*} \lambda = - \sum_{k = 0}^{\infty} 2 \pi^2 k^2 \left[ \left( {\bf a}_{k1}^2 + {\bf b}_{k1}^2 \right) D_{11} + \left( {\bf a}_{k2}^2 + {\bf b}_{k2}^2 \right) D_{22} \right]. \end{align*} Thus, as long as ${\bf Z} (\theta )$ is continuous and non-constant, the Lyapunov exponent $\lambda$ will be negative, so the synchronous state $\theta_1 = \theta_2$ will be stable. Note that continuity is not satisfied in the case of our singular approximation to ${\bf Z} (\theta)$. We demonstrate the accuracy of our theory (\ref{lyapapprox}) in Fig. \ref{figstochsynch}{\bf C},{\bf D}, showing that $\lambda$ decreases as a function of $\tau$ and is non-monotonic in $I$. Thus, slow oscillations with longer periods are synchronized more quickly, relative to the number of oscillation cycles. Since the Lyapunov exponent has largest amplitude $|\lambda|$ for both low and high values of the tonic input $I$, we also suspect this is related to the period of the oscillation $T$. \section{Impact of intrinsic noise on stochastic synchronization} \label{indepnos} \begin{figure} \begin{center} \includegraphics[width=5.5cm]{fig7a.jpg} \includegraphics[width=5.5cm]{fig7b.jpg}\end{center} \caption{Stationary density $M_0(\psi)$ of the phase difference $\psi = \theta_1 - \theta_2$ for two slowly oscillating neural populations driven by both common and independent noise (\ref{dualind}).
As the degree of noise correlation is decreased from ({\bf A}) $\chi_a = 0.95$ to ({\bf B}) $\chi_a = 0.90$, the density spreads, but there is still a peak at $\psi = 0$, the phase-locked state. We focus on noise in the adaptation variable, so $\sigma_u = 0$ and $\sigma_a = 0.01$. Other parameters are $\alpha = 0.5$, $\gamma = 15$, $\phi = 1$, and $\tau = 20$.} \label{figindnoise} \end{figure} We now extend our results from the previous section by studying the impact of independent noise in each population. Independent noise is incorporated into the modified model (\ref{dualind}). Noting again that there is a periodic solution to the noise-free version of this system, phase-reduction methods can be used to obtain approximate Langevin equations for the phase variables \citep{nakao07} \begin{subequations} \label{indphase} \begin{align} \d \theta_1 &= \omega \d t + {\bf Z} ( \theta_1(t) ) \cdot \left[ \d {\boldsymbol \xi}_c (t) + \d {\boldsymbol \xi}_1 (t) \right], \\ \d \theta_2 &= \omega \d t + {\bf Z} ( \theta_2(t) ) \cdot \left[ \d {\boldsymbol \xi}_c (t) + \d {\boldsymbol \xi}_2 (t) \right], \end{align} \end{subequations} where the noise vectors are ${\boldsymbol \xi}_c = (\chi_u \xi_{uc}, \chi_a \xi_{ac})^T$ and ${\boldsymbol \xi}_j = (\sqrt{1- \chi_u^2} \xi_{uj}, \sqrt{1 - \chi_a^2} \xi_{aj})^T$ ($j=1,2$).
We can reformulate the pair of Stratonovich differential equations as Ito stochastic differential equations given by the system \begin{subequations} \label{inditophase} \begin{align} \d \theta_1 &= A_1( {\boldsymbol \theta} ) \d t + \d \zeta_1( {\boldsymbol \theta}, t), \\ \d \theta_2 &= A_2 ( {\boldsymbol \theta} ) \d t + \d \zeta_2 ( {\boldsymbol \theta}, t), \end{align} \end{subequations} where the statistics of the noise terms $\d \zeta_j( {\boldsymbol \theta}, t) = {\bf Z} ( \theta_j(t) ) \cdot \left[ \d {\boldsymbol \xi}_c (t) + \d {\boldsymbol \xi}_j (t) \right] $ ($j=1,2$) are specified by $\langle \d \zeta_j ( {\boldsymbol \theta}, t)\rangle = 0$ and $\langle \d \zeta_j ( {\boldsymbol \theta}, t) \d \zeta_k ( {\boldsymbol \theta}, t) \rangle = C_{jk}({\boldsymbol \theta} ) \d t$ where \begin{align} C_{jk} ( {\boldsymbol \theta} ) =& \left( \chi_u {\bf D}_{uc}^{1/2} Z_u( \theta_j ) + \chi_a {\bf D}_{ac}^{1/2} Z_a( \theta_j) \right) \left( \chi_u {\bf D}_{uc}^{1/2} Z_u( \theta_k) + \chi_a {\bf D}_{ac}^{1/2} Z_a( \theta_k) \right) \nonumber \\ & + \left( \sqrt{1 - \chi_u^2} {\bf D}_{ul}^{1/2} Z_u( \theta_j) + \sqrt{1 - \chi_a^2} {\bf D}_{al}^{1/2} Z_a( \theta_j) \right)^2 \delta_{j,k}, \label{corrfunc} \end{align} separating the impact of correlated and local sources of noise. The drift terms can thus be calculated as $A_j( {\boldsymbol \theta} ) = \omega + \frac{1}{4} \frac{\partial}{\partial \theta_j} C_{jj} ( {\boldsymbol \theta} )$. The Fokker-Planck equation describing the evolution of the probability density function $P( {\boldsymbol \theta}, t)$ of the phases is given by \begin{align} \frac{\partial P}{\partial t} = - \sum_{j=1}^2 \frac{\partial}{\partial \theta_j} \left[ A_j( {\boldsymbol \theta} ) P \right] + \frac{1}{2} \sum_{j=1}^2 \sum_{k=1}^2 \frac{\partial^2}{\partial \theta_j \partial \theta_k} \left[ C_{jk} ( {\boldsymbol \theta} ) P \right].
\label{fpe1} \end{align} Now, we apply a change of variables to the Fokker-Planck equation (\ref{fpe1}) defined by $\theta_j = \omega t + \vartheta_j$. Assuming noise is weak, the transformed density $Q( {\boldsymbol \vartheta} , t) = P( {\boldsymbol \theta}, t)$ varies slowly compared with the period of the phase oscillators $\theta_j$. Thus, we average the drifts $A_j( {\boldsymbol \theta} )$ and correlation function $C_{jk} ( {\boldsymbol \theta} )$ over a single period. The resulting Fokker-Planck equation is then \begin{align*} \frac{\partial Q( {\boldsymbol \vartheta}, t)}{\partial t} = \frac{1}{2} \sum_{j=1}^2 \sum_{k=1}^2 \frac{\partial^2}{\partial \vartheta_j \partial \vartheta_k} \left[ B_{jk} ( {\boldsymbol \vartheta} ) Q \right] \end{align*} where the averaged correlation function is given by the formula \begin{align*} B_{jk} ( {\boldsymbol \vartheta} ) = g ( \vartheta_1 - \vartheta_2 ) + h(0) \delta_{j,k}, \end{align*} where the correlation functions are defined by \begin{align*} g( \theta ) = \int_0^1 \left[ \chi_u^2 {\bf D}_{uc} Z_u( \theta') Z_u(\theta' + \theta ) + \chi_a^2 {\bf D}_{ac} Z_a( \theta' ) Z_a( \theta' + \theta ) \right] \d \theta' \end{align*} and \begin{align*} h( \theta ) = \int_0^1 \left[ (1 - \chi_u^2) {\bf D}_{ul} Z_u( \theta') Z_u(\theta' + \theta ) + (1 - \chi_a^2) {\bf D}_{al} Z_a( \theta' ) Z_a( \theta' + \theta ) \right] \d \theta'. \end{align*} We study the relationship between the phases $\vartheta_1$ and $\vartheta_2$ by substituting the formula for the averaged correlation matrix to obtain \begin{align} \frac{\partial Q( {\boldsymbol \vartheta} ,t)}{\partial t} = \frac{1}{2} \left[ g(0) + h(0) \right] \left[ \frac{\partial^2 Q}{\partial \vartheta_1^2} + \frac{\partial^2 Q}{\partial \vartheta_2^2} \right] + \frac{\partial^2}{\partial \vartheta_1 \partial \vartheta_2} \left[ g( \vartheta_1 - \vartheta_2) Q \right].
\label{fp2} \end{align} We can write (\ref{fp2}) as a separable equation by employing a change of variables that tracks the average $\rho = (\vartheta_1 + \vartheta_2)/2$ and phase difference $\psi = \vartheta_1 - \vartheta_2$ of the original position variables, so \begin{subequations} \begin{align} \label{fpsep} \frac{\partial U ( \rho , t)}{\partial t} &= \frac{1}{4} \left[ g(0) + g( \psi ) + h(0) \right] \frac{\partial^2 U(\rho, t)}{\partial \rho^2}, \\ \frac{\partial M ( \psi ,t)}{\partial t} &= \frac{\partial^2}{\partial \psi^2} \left[ g(0) - g( \psi ) + h(0) \right] M( \psi ,t). \end{align} \end{subequations} Thus, we can solve for the stationary solution of the system (\ref{fpsep}) by setting $U_t = M_t \equiv 0$ and requiring periodic boundary conditions. We find that the stationary distribution of the position average is $U_0(\rho) = 1$. In addition, we can integrate the stationary equation for $M( \psi ,t)$ to find \begin{align} \label{M0form} M_0( \psi ) = \frac{m_0}{\sigma_u^2 \left[ (2-\chi_u^2) g_u(0) - \chi_u^2 g_u( \psi ) \right] + \sigma_a^2 \left[ (2-\chi_a^2) g_a(0) - \chi_a^2 g_a(\psi) \right] }, \end{align} where $m_0 = 1/ \int_0^1 \left[ g(0) - g(x) + h(0) \right]^{-1} \d x$ is a normalization factor and we have simplified the expression using ${\bf D}_{u1} = {\bf D}_{u2} \equiv {\bf D}_{ul} = \sigma_u^2$ and ${\bf D}_{a1} = {\bf D}_{a2} \equiv {\bf D}_{al} = \sigma_a^2$ and defined \begin{align*} g_j( \psi ) = \int_0^1 Z_j( \theta ) Z_j( \theta + \psi ) \d \theta. \end{align*} When noise to each layer is independent ($\chi_u, \chi_a \to 0$), then $M_0(\psi ) = 1$ is constant in space. When noise is totally correlated ($\chi_u, \chi_a \to 1$), then $M_0(\psi) = \delta ( \psi )$. The stationary distribution $M_0(\psi)$ will broaden as the correlations $\chi_u$ and $\chi_a$ are decreased from unity, with a peak remaining at $\psi = 0$.
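The stationary density (\ref{M0form}) is straightforward to evaluate once the autocorrelations $g_j$ are known; the sketch below assumes a hypothetical single-harmonic sensitivity (for which $g_j$ is a cosine), with noise only to the adaptation variable, and all parameter values are illustrative.

```python
import numpy as np

# Stationary density M0(psi) for a hypothetical Z_u = a sin(2 pi theta),
# Z_a = b cos(2 pi theta), whose circular autocorrelations are cosines.
a, b = 0.3, 1.5
sigma_u, sigma_a = 0.0, 0.01       # noise only to adaptation
chi_u, chi_a = 0.0, 0.9

N = 2048
psi = np.arange(N) / N
g_u = 0.5 * a**2 * np.cos(2*np.pi*psi)   # g_u(psi) = int Z_u(t) Z_u(t+psi) dt
g_a = 0.5 * b**2 * np.cos(2*np.pi*psi)

denom = (sigma_u**2 * ((2 - chi_u**2)*g_u[0] - chi_u**2*g_u)
         + sigma_a**2 * ((2 - chi_a**2)*g_a[0] - chi_a**2*g_a))
M0 = 1.0 / denom
M0 /= M0.mean()                          # normalize so int_0^1 M0 dpsi = 1
print(f"peak M0(0) = {M0[0]:.3f}, min = {M0.min():.3f}")
```

The denominator is minimized where $g_a(\psi)$ is largest, so the density peaks at $\psi = 0$ and broadens as $\chi_a$ decreases from unity.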
We demonstrate the accuracy of the formula (\ref{M0form}) for the stationary density of the phase difference $\psi$ in Fig. \ref{figindnoise}, showing that it widens as the level of noise correlation is decreased. Again, we focus on the impact of adaptation noise. Thus, even when independent noise is introduced, there is some semblance of synchronization in the phases of two noise-driven neural populations (\ref{dualind}). \section{Discussion} \label{disc} We have studied the impact of deterministic and stochastic perturbations to a neural population model of slow oscillations. The model consisted of a single recurrently coupled excitatory population with negative feedback from a slow adaptive current \citep{laing02,jayasuriya12}. By examining the phase sensitivity function $(Z_u, Z_a)$, we found that perturbations of the adaptation variable lead to much larger changes in oscillation phase than perturbations of neural activity. Furthermore, this effect becomes more pronounced as the timescale $\tau$ of adaptation is increased. Introducing noise in the model decreases the oscillation period and helps to balance the mean duration of the oscillation's up and down states. When two uncoupled populations receive common noise, their oscillation phases $\theta_1$ and $\theta_2$ eventually become synchronized, which can be shown by deriving a formula for the Lyapunov exponent of the absorbing state $\theta_1 \equiv \theta_2$ \citep{teramae04}. When independent noise is introduced to each population, in addition to common noise, the long-term state of the system is described by a probability density for $\psi = \theta_1 - \theta_2$, which peaks at $\psi \equiv 0$. Our study was motivated by the observation that recurrent cortical networks can spontaneously generate stochastic oscillations between up and down states. Guided by previous work in spiking models \citep{compte03}, we explored a rate model of a recurrent excitatory network with slow spike frequency adaptation.
One of the open questions about up and down state transitions concerns the degree to which they are generated by noise or by more deterministic mechanisms, such as slow currents or short term plasticity \citep{cossart03}. Here, we have identified some characteristic features that emerge as the level of noise responsible for transitions is increased. Similar questions have been probed in the context of models of perceptual rivalry \citep{morenobote07}. In addition, we have provided a plausible mechanism whereby the onset of up and down states could be synchronized in distinct networks \citep{volgushev06}. There are several potential extensions to this work. For instance, we could examine the impact of long-range connections between networks to see how these interact with common and independent noise to shape the phase coherence of oscillations. Similar studies have been performed in spiking models by \cite{ly09}. Interestingly, shared noise can actually stabilize the anti-phase locked state in this case, even though it is unstable in the absence of noise. Furthermore, it is known that coupling spanning long distances can be subject to axonal delays. In spite of this, networks of distantly coupled clusters of cells can still sustain zero-lag synchronized states \citep{vicente08}. Thus, we could also explore the impact of delayed coupling, determining how features of the phase sensitivity function interact with delay to promote in-phase or anti-phase synchronized states. \bibliographystyle{jneurosci}
\section{Introduction} \label{sec:intro} We consider a regression model \[ y_i=\eta \left( x_i,\theta \right) +\varepsilon _i,\quad i=1,\hdots,N , \] where $y_i$ are observed variables and $\varepsilon _i$ are observation errors, which satisfy $\mathbb{E}\left( \varepsilon _i\right) =0$, $Var\left( \varepsilon _i\right) =\sigma ^2$, and $Cov\left( \varepsilon _i,\varepsilon _j\right) =0$ for $i\neq j$; the variance $\sigma ^2$ is not assumed to be known. The value of $\theta $ is a priori restricted to a parameter space $\Theta $. In vector notation the model is \begin{eqnarray*} {y} &=&{\eta }_X\left( \theta \right) +{\varepsilon }, \\ \mathbb{E}\left( {\varepsilon }\right) &=&{0},Var\left( { \varepsilon }\right) =\sigma ^2{I}. \end{eqnarray*} Here $X=\left( x_1,\hdots,x_N\right) $ is the exact design with points $x_i\in \mathcal{X}$, ${y}=\left( y_1,\hdots,y_N\right) ^\top$, $\varepsilon=(\varepsilon_1,\hdots,\varepsilon_N)^\top$, ${\eta } _X\left( \theta \right) =\left( \eta \left( x_1,\theta \right) ,\hdots,\eta \left( x_N,\theta \right) \right) ^\top.$ The design space $\mathcal{X}$ is assumed here to be finite. Instead of the exact design $X$ we can equivalently consider, for any $x\in \mathcal{X}$, the value $\xi _X\left( x\right) $ of the relative frequency of $x$ within the design $X$. By a standard approximation procedure, we consider the set $\Xi $ of all probability measures defined on $\mathcal{X}$ as the set of all approximate designs allowed in the experiment. In the main part of the present paper we suppose that the response function is linear, $\eta \left( x_i,\theta \right) =f^\top\left( x_i\right) \theta$, and that $\Theta =\mathbb{R}^p$. In a standard way, to any $\xi \in \Xi $ is associated its information matrix $$ M\left( \xi \right) =\sum_{x\in \mathcal{X}}f\left( x\right) f^\top\left( x\right) \xi \left( x\right), $$ with $f(x)=(f_1(x),\hdots,f_p(x))^\top$.
According to the aim of the experiment, we may choose an optimality criterion $\phi \left( \xi \right) $, and a design $\mu $ is considered $\phi $-optimal when $\phi \left( \mu \right) =\max_{\xi \in \Xi }\phi \left( \xi \right) $. Standard criteria $\phi \left( \cdot\right) $ are concave functions on $\Xi $ having a statistical interpretation. In \cite{PP13} the criteria of $E$-, $c$-, and $G$-optimality were considered, and the corresponding criteria functions have been rewritten in the form \[ \phi \left( \xi \right) =\min_{u\in \mathbb{R}^p}\sum_{x\in \mathcal{X}}T\left( u,x\right) \xi \left( x\right) \] with given $T\left( u,x\right)$. This, together with the standard restrictions on $\xi ,$ defines an ``infinite-dimensional'' linear programming (LP) problem: to choose the values of $\xi \left( x\right) ;x\in \mathcal{X}$ and of $t\in \mathbb{R}$ so as to maximize $t$ under infinitely many linear restrictions: \begin{eqnarray*} \sum_{x\in \mathcal{X}}T\left( u,x\right) \xi \left( x\right) &\geq &t\text{ \quad for any }u\in \mathbb{R}^p ,\\ \xi \left( x\right) &\geq &0\quad \text{for any }x\in \mathcal{X}\text{, and }\quad \sum_{x\in \mathcal{X}}\xi \left( x\right) =1. \end{eqnarray*} In particular, for $E$-optimality, with $\phi _E\left( \xi \right) $ equal to the minimum eigenvalue of $M\left( \xi \right)$, we have \[ \phi _E\left( \xi \right) =\min_{u\in \mathbb{R}^p}\frac{u^\top M\left( \xi \right) u}{u^\top u }=\min_{u\in \mathbb{R}^p}\sum_{x\in \mathcal{X}}\frac{[f^\top\left( x\right) u]^2}{u^\top u}\xi \left( x\right) .
\] The main idea of \cite{PP14} was to substitute the nonlinear response function $ \eta \left( x,\theta \right) $ for $f^\top\left( x\right) \theta $ and so obtain new criteria for nonlinear models, with the aim of detecting the lack of identifiability under the design $\xi .$ However, a second aim of \cite{PP14} was to draw attention to the fact that with those expressions for the criteria an LP method can be used to obtain nearly optimal designs in linear models. In the present paper we follow this second aim, but for $D$-, $A$-, and $E_k$-optimality criteria and also for the computationally demanding task of finding the ``criterion robust'' optimum design in linear models, or of finding a $D$-optimum design under the condition that the $A$-optimality criterion exceeds a given value. The difficulties in achieving also the first aim for $D$-, $A$-, and $E_k$-criteria are discussed in the Appendix. We note that the LP method has been used to compute $c$-optimal designs in \cite{HJ08}, but under a quite different set-up. \section{Reformulation of the optimality criteria} \label{sec:sec1} The $D$-optimal design maximizes $\det\left( M\left( \xi \right) \right) $, hence minimizes the generalized variance of $\hat{\theta}$, the BLUE of $\theta$. The $A$-optimal design minimizes the sum of the variances of $\hat{\theta}_1,\hdots,\hat{\theta}_p$. The $E_k$-optimal design maximizes the sum of the smallest $k$ eigenvalues of $M\left( \xi \right) $. There are many forms of expressing the corresponding criteria functions $\phi \left( \xi \right) $. All forms of $\phi \left( \xi\right) $ representing the same criterion maintain the ordering of the designs but differ by the scaling of this ordering, say $\phi \left( \xi\right) =\ln \det \left[ M\left( \xi \right) \right] $ and $\phi \left( \xi \right) =\det^{1/p}\left[ M\left( \xi \right) \right] $ for $D$-optimality, and similarly for the other criteria.
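The $E$-optimality representation above, $\phi_E(\xi) = \min_u \sum_x [f^\top(x)u]^2 \xi(x) / (u^\top u)$, can be verified numerically as a sanity check; the quadratic model $f(x) = (1, x, x^2)^\top$ and the uniform design below are illustrative choices.

```python
import numpy as np

# Check that the Rayleigh-quotient form of phi_E equals the smallest eigenvalue
# of the information matrix M(xi), on a toy quadratic model.
rng = np.random.default_rng(0)
X = np.linspace(-1, 1, 5)
F = np.vander(X, 3, increasing=True)          # rows are f(x)^T = (1, x, x^2)
xi = np.full(len(X), 1.0 / len(X))            # uniform design
M = F.T @ (xi[:, None] * F)

lam_min = np.linalg.eigvalsh(M)[0]            # phi_E(xi)
U = rng.standard_normal((3, 2000))            # random directions u
vals = ((F @ U)**2 * xi[:, None]).sum(axis=0) / (U**2).sum(axis=0)
u0 = np.linalg.eigh(M)[1][:, 0]               # minimizer: eigenvector of lam_min
val0 = ((F @ u0)**2 * xi).sum() / (u0 @ u0)
print(f"phi_E = {lam_min:.6f}; min over random u = {vals.min():.6f}; at eigenvector = {val0:.6f}")
```

Every direction $u$ gives a value at least $\lambda_{\min}(M(\xi))$, with equality attained at the corresponding eigenvector, as the Rayleigh-quotient identity requires.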
Here we prefer criteria functions which are not only concave, but also positively homogeneous, $\phi \left( \alpha \xi \right) =\alpha \phi \left( \xi \right) $ for $\alpha > 0$ (see \cite{P93} for a justification). So for the $D$-optimality $\phi _D\left( \xi \right) =\det^{1/p}\left[ M\left( \xi \right) \right] $, for the $A$-optimality $\phi _A\left( \xi \right) =1/tr\left[ M^{-1}(\xi) \right] $ when $M\left( \xi \right) $ is nonsingular, and for the $E_k$-optimality $\phi _{E_k}\left( \xi \right) =\lambda _1\left( \xi \right) +\hdots +\lambda _k\left( \xi \right) $ where $\lambda _1\left( \xi \right) \leq \lambda _2\left( \xi \right) \leq \hdots \leq \lambda _p\left( \xi \right) $ is the ordering of eigenvalues of $M\left( \xi \right) $ respecting their multiplicity. Denote by $u_1\left( \xi \right) ,\hdots ,u_p\left( \xi\right) $ the corresponding orthonormal eigenvectors of $M\left( \xi \right) $. Denote also $\Xi ^{+}=\left\{ \mu \in \Xi :M\left( \mu \right) \text{ is nonsingular}\right\} $. $D$- and $A$-optimal designs are evidently localized on $\Xi ^{+}$, which need not be true for the $E_k$-optimality.
\begin{theorem} \label{veta1} We can write \begin{eqnarray} \nonumber\phi _D\left( \xi \right) &=&\min_{\mu \in \Xi ^{+}}\sum_{x\in \mathcal{X} }H_D(\mu,x) \xi \left( x\right) \\ &=&\min_{\mu \in \Xi ^{+}}\sum_{x\in \mathcal{X} }\left\{ \frac{\det^{1/p}\left[ M\left( \mu \right) \right] }pf^\top \left( x\right) M^{-1}\left( \mu \right) f\left( x\right) \right\} \xi \left( x\right),\label{a} \\ \nonumber \phi _A\left( \xi \text{ }\right) &=& \min_{\mu \in \Xi ^{+}}\sum_{x\in \mathcal{X} }H_A(\mu,x) \xi \left( x\right) \\ & =&\min_{\mu \in \Xi ^{+}}\sum_{x\in \mathcal{X}}\left\{ \frac{\left\| M^{-1}\left( \mu \right) f\left( x\right) \right\| ^2}{\left[tr\left( M^{-1}\left( \mu \right) \right)\right]^2 }\right\} \xi \left( x\right) \label{b} \end{eqnarray} for any $\xi \in \Xi ^{+},$ and \begin{equation} \phi _{E_k}\left( \xi \right) =\min_{\mu \in \Xi }\sum_{x\in \mathcal{X} }H_{E_k}(\mu,x) \xi \left( x\right) =\min_{\mu \in \Xi }\sum_{x\in \mathcal{X} }\left\| P^{(k)}\left( \mu \right) f\left( x\right) \right\| ^2\xi \left( x\right) \label{c} \end{equation} for any $\xi \in \Xi $. Here $P^{(k)}\left( \mu \right) $ is the $k$-dimensional orthogonal projector $P^{(k)}\left( \mu \right) =\sum_{i=1}^ku_i\left( \mu \right) u_i^\top\left(\mu \right) $, and $\left\| \cdot \right\|$ denotes the Euclidean norm. \end{theorem} \begin{proof} In the proof we shall often use that $tr\left[ AB\right] =tr\left[ BA\right] $ for any matrices $A=A_{l\times s},B=B_{s\times l}$ \citep{H00}. By the known inequality between the geometric and arithmetic means of positive numbers (cf. \cite[Chap.~2]{S04}), we obtain \[ \left\{ \det \left[ S^\top M\left( \xi \right) S\right] \right\} ^{1/p}=\left\{ \Pi _{i=1}^p\alpha _i\right\} ^{1/p}\leq \frac 1p\sum_{i=1}^p\alpha _i=\frac 1p tr\left[ S^\top M\left( \xi \right) S\right] \] for any nonsingular $p\times p$ matrix $S$. Here $\alpha _1,\hdots,\alpha _p$ are the eigenvalues of $S^\top M\left( \xi \right) S$. 
So $\det^{1/p}\left[ M\left( \xi \right) \right] \leq \frac 1p\det^{-1/p}[SS^\top]\sum_{x\in \mathcal{X} }f^\top\left( x\right) SS^\top f\left( x\right) \xi \left( x\right) $, and it suffices to put $S=M^{-1/2}\left( \mu \right) $ to obtain the expression in~(\ref {a}). If $S=M^{-1/2}\left( \xi \right) $, then $\alpha_i=1$; $i=1,\hdots,p$, and the geometric mean is equal to the arithmetic mean, so the minimum is attained. For any nonsingular $p\times p$ matrix $S$ we obtain from the Schwarz inequality \begin{eqnarray*} \left[ tr\left( S\right) \right] ^2 &=&\left\{ tr\left[ M^{-1/2}\left( \xi \right) SM^{1/2}\left( \xi \right) \right] \right\} ^2 \\ &\leq &tr\left[ M^{-1}\left( \xi \right) \right] tr\left[ M^{1/2}\left( \xi \right) S^\top S M^{1/2}\left( \xi \right) \right] =tr\left[ M^{-1}\left( \xi \right) \right]tr\left[ SM\left( \xi \right) S^\top \right] \end{eqnarray*} since in general $tr\left[ A^\top B\right] $ is a scalar product of matrices $A,B $, and since $M^{-1/2}\left( \xi \right) $ and $M^{1/2}\left( \xi \right) $ are symmetric matrices. So $\left\{ tr\left[ M^{-1}\left( \xi \right) \right] \right\} ^{-1}\leq tr\left[ SM\left( \xi \right) S^\top \right] /\left[ tr\left( S\right) \right] ^2=\sum_x\left\| Sf\left( x\right) \right\| ^2\xi \left( x\right) /\left[ tr\left( S\right) \right] ^2$, and it suffices to put $S=M^{-1}\left( \mu \right) $ to obtain the expression in~(\ref{b}). When $S=M^{-1}\left( \xi \right) $, we evidently obtain equality in the Schwarz inequality. Denote $P=P^{(k)}\left( \mu \right)$. By the definition of $ P^{(k)}\left( \mu \right) $ we have $PP=P$ and $P=P^\top$. So \[ \sum_{x\in \mathcal{X}}\left\| Pf\left( x\right) \right\| ^2\xi \left( x\right) =tr\left[ PM\left( \xi \right) P\right].
\] On the other hand, denote $U=\left( u_1\left( \xi \right) ,\hdots,u_p\left( \xi \right) \right) ,\Lambda =diag\left\{ \lambda _1\left( \xi \right) ,\hdots,\lambda _p\left( \xi \right) \right\} $, and use that $M\left( \xi \right) =U\Lambda U^\top$ to obtain \begin{eqnarray*} tr\left[ PM\left( \xi \right) P\right] &=&tr\left[ PU\Lambda U^\top P\right] =tr\left[ \Lambda \left( PU\right) ^\top \left( PU\right) \right] \\ &=&\sum_{i=1}^p\lambda _i\left( \xi \right) \left\{ \left( PU\right) ^\top \left( PU\right) \right\} _{ii}=\sum_{i=1}^p\lambda _i\left( \xi \right) \left\| Pu_i\left( \xi \right) \right\| ^2=\sum_{i=1}^p\lambda _i\left( \xi \right) w_i, \end{eqnarray*} where we denoted $w_i=\left\{ \left( PU\right) ^\top \left( PU\right) \right\} _{ii}=\left\| Pu_i\left( \xi \right) \right\| ^2$. Since $UU^\top =U^\top U=I$, we have \[ k=tr\left[ P\right] =tr\left[ P^\top P\right] =tr\left[ P^\top PUU^\top \right] =\sum_{i=1}^p\left\{ \left( PU\right) ^\top \left( PU\right) \right\} _{ii}=\sum_{i=1}^pw_i . \] Further $w_i\in [0,1]$, since $0\leq \left\| Pu_i\left( \xi \right) \right\| ^2\leq \left\| u_i\left( \xi \right) \right\| ^2=1$. So, using that $\lambda _1\left( \xi \right) \leq \hdots\leq \lambda _p\left( \xi \right) $ we obtain that $\sum_{i=1}^p\lambda _i\left( \xi \right) w_i$ is minimized exactly when the weights $w_i$ have maximum value $\left( =1\right) $ at the smallest $k$ values of $\lambda _i\left( \xi \right) $. Summarizing we obtain \begin{equation} \sum_{x\in \mathcal{X}}\left\| Pf\left( x\right) \right\| ^2\xi \left( x\right) =tr\left[ PM\left( \xi \right) P\right] =\sum_{i=1}^p\lambda _i\left( \xi \right) w_i\geq \sum_{i=1}^k\lambda _i\left( \xi \right) =\phi _{E_k}\left( \xi \right) . 
\label{c2} \end{equation} In the particular case that $P=P^{(k)}\left( \xi \right) =\sum_{j=1}^ku_j\left( \xi \right) u_j^\top \left( \xi \right) $ we have $w_i=\left\| P^{(k)}\left( \xi \right) u_i\left( \xi \right) \right\| ^2=\left\| u_i\left( \xi \right) \right\| ^2=1$ if $i\leq k$, $\left\| P^{(k)}\left( \xi \right) u_i\left( \xi \right) \right\| ^2=0$ if $i>k$, hence $\sum_{x\in \mathcal{X}}\left\| P^{(k)}\left( \xi \right) f\left( x\right) \right\| ^2\xi \left( x\right) =\sum_{i=1}^k\lambda _i\left( \xi \right) =\phi _{E_k}\left( \xi \right)$, which together with (\ref{c2}) yields the expression in (\ref{c}). \end{proof} \begin{remark} \label{rem1} We could write in~(\ref{b}) $\phi _A\left( \xi \right) =\min_{B\in \mathcal{B}}\sum_{x\in \mathcal{X}}\left\{ \frac{\left\| Bf\left( x\right) \right\| ^2}{\left[tr\left( B\right)\right]^2 }\right\} \xi \left( x\right) $, where $\mathcal{B}$ is any set of nonsingular matrices containing $M^{-1}\left( \xi \right) .$ If this formula is to hold for all $\xi \in \Xi $, then the set $\mathcal{B}=\left\{ M^{-1}\left( \mu \right) :\mu \in \Xi ^{+}\right\} $ is the smallest such set. A similar modification could be done for $D$-optimality in~(\ref{a}). In~(\ref{c}) we could minimize over any set of $k$-dimensional projectors containing $ P^{(k)}\left( \xi \right) $. \end{remark} \begin{remark} \label{rem2} As follows from \cite[Chap.~9.5]{PP13}, we could obtain results similar to Theorem~\ref{veta1} by considering gradients or subgradients of $\phi \left( \xi \right) $. However, the direct proofs presented here, which avoid the less common notion of subgradients, may be more accessible in applications. \end{remark} \section{The iterative computation by LP; the algorithms and examples} \label{sec:sec2} \subsection{Algorithm for $D$-, $A$-, and $E_k$-optimality} \label{alg1} Let us write $H(\mu,x )$ instead of $H_D(\mu,x)$, $H_A(\mu,x)$, or $H_{E_k}(\mu,x)$ from Theorem~\ref{veta1}.
For the maximization of $\phi$ we apply a modification of the cutting-plane method \cite{K60} as presented in \cite{PP13} and \cite{PP14}: \begin{enumerate}[start=0] \item Take any vector $\xi^{(0)}$ such that $\sum_{x\in\mathcal{X}}\xi^{(0)}(x)=1$ and $\xi^{(0)}(x)\geq 0\;\forall\;x\in\mathcal{X}$, choose $\epsilon>0$, set $\Xi^{(0)}=\emptyset $ and $n=0$. \item Set $\Xi^{(n+1)}= \Xi^{(n)} \cup \left\lbrace\xi^{(n)}\right\rbrace$. \item Use the LP solver to find $\left(\xi^{(n+1)},t^{(n+1)}\right)$ so to maximize $t$ satisfying the constraints: \begin{itemize} \item $t>0,\; \xi(x)\geq0 \; \forall \; x\in\mathcal{X} ,\; \sum_{x\in\mathcal{X}}\xi(x)=1,$ \item $\sum_{x\in\mathcal{X}} H(\mu,x)\xi(x)\geq t\; \forall \mu\in\Xi^{(n+1)}.$ \end{itemize} \item Set $\Delta^{(n+1)}=t^{(n+1)}-\phi\left(\xi^{(n+1)}\right)$; if $\Delta^{(n+1)}<\epsilon$, take $\xi^{(n+1)}$ as an $\epsilon$-optimal design and stop; otherwise set $n\leftarrow n+1$ and continue with step 1. \end{enumerate} Notice that $\min_{\mu\in\Xi^{(n+1)}}\sum_{x\in\mathcal{X}}H(\mu,x)\xi(x)$ is an upper piecewise linear approximation of $\phi(\xi)$. As $n$ increases, the set $\Xi^{(n+1)}\subseteq\Xi$ becomes larger and the approximation improves. On the other hand, when $n$ is small, the information matrix $M\left(\xi^{(n)}\right)$ could be ill-conditioned or even singular. In order to avoid the difficulty with inverse matrices in $D$- and $A$-optimality, it is possible to use any symmetric positive definite matrix as a substitute for $M\left(\xi^{(n)}\right)$ as justified in Remark~\ref{rem1}. Alternatively, \cite{PP13} recommend the regularization $M\left(\xi^{(n)}\right)+\gamma I$, where $\gamma$ is a small positive number and $I$ is the identity matrix. Note that it is also possible to take $\Xi^{(0)}$ as a nonempty set containing $s\geq1$ initial designs. If $s$ or $n$ is large, an ill-conditioned or singular information matrix $M\left(\xi^{(n)}\right)$ is less likely.
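A minimal sketch of the algorithm above for $D$-optimality, using \texttt{scipy.optimize.linprog} as the LP solver; the quadratic model, grid, tolerance $\epsilon$, and regularization $\gamma$ are illustrative choices, not those of the examples below.

```python
import numpy as np
from scipy.optimize import linprog

# Cutting-plane LP for D-optimality on the textbook quadratic model
# f(x) = (1, x, x^2)^T over a 21-point grid in [-1, 1]; the D-optimal design
# puts weight 1/3 on {-1, 0, 1}, with phi_D* = (4/27)^(1/3).
X = np.linspace(-1, 1, 21)
F = np.vander(X, 3, increasing=True)
m, p = F.shape
gamma, eps = 1e-8, 1e-4                       # regularization and tolerance

def phi_D(xi):
    return max(np.linalg.det(F.T @ (xi[:, None] * F)), 0.0) ** (1.0 / p)

def H_D(mu):                                  # the cut H_D(mu, x), regularized M + gamma*I
    M = F.T @ (mu[:, None] * F) + gamma * np.eye(p)
    return np.linalg.det(M) ** (1.0 / p) / p * np.einsum('xi,ij,xj->x', F, np.linalg.inv(M), F)

xi = np.full(m, 1.0 / m)                      # step 0: uniform initial design
cuts, best = [], phi_D(xi)
for n in range(300):
    cuts.append(H_D(xi))                      # step 1: enlarge the cut set
    # step 2: maximize t subject to sum_x H(mu,x) xi(x) >= t for all cuts, xi in simplex
    A_ub = np.hstack([-np.array(cuts), np.ones((len(cuts), 1))])
    res = linprog(np.r_[np.zeros(m), -1.0], A_ub=A_ub, b_ub=np.zeros(len(cuts)),
                  A_eq=np.r_[np.ones(m), 0.0][None, :], b_eq=[1.0],
                  bounds=[(0, 1)] * m + [(0, None)])
    xi, t = res.x[:m], res.x[m]
    best = max(best, phi_D(xi))
    if t - phi_D(xi) < eps:                   # step 3: stopping rule Delta < eps
        break
print(f"phi_D = {best:.5f}, upper bound t = {t:.5f}, exact optimum = {(4/27)**(1/3):.5f}")
```

Each LP value $t^{(n+1)}$ is an upper bound on the optimum, while $\phi_D$ of any feasible design is a lower bound, so the gap $\Delta^{(n+1)}$ certifies $\epsilon$-optimality.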
The problem of a singular information matrix does not arise for the $E_k$-optimality criteria. The stopping rule used in the above algorithm follows from the upper and lower bounds for $\max_{\xi\in\Xi}\phi(\xi)$: $$ \phi\left(\xi^{(n+1)}\right)\leq \max_{\xi\in\Xi}\phi(\xi) \leq t^{(n+1)}. $$ The first inequality is obvious. Note that $t^{(n+1)}=\max_{\xi\in \Xi}\min_{\mu\in\Xi^{(n+1)}}\sum_{x\in\mathcal{X}}H(\mu,x)\xi(x)$, while $\max_{\xi\in\Xi}\phi(\xi)=\max_{\xi\in\Xi}\min_{\mu\in\Xi}\sum_{x\in\mathcal{X}}H(\mu,x)\xi(x)$, and $\Xi\supseteq \Xi^{(n+1)}$. This yields the second inequality. Stopping rules based on the equivalence theorem \citep{K74,KW59} are also available and are considered standard. Let $\epsilon_{stop}$ be a chosen small nonnegative number. An iterative algorithm then stops if $d\left(\xi^{(n)}\right)<\epsilon_{stop}$, where for $D$-optimality $d\left(\xi^{(n)}\right)=\left|\max_{x\in\mathcal{X}}f^\top(x)M^{-1}\left(\xi^{(n)}\right)f(x)-p\right|$ and for the criterion of $A$-optimality $d\left(\xi^{(n)}\right)=\left|\max_{x\in\mathcal{X}}f^\top(x)M^{-2}\left(\xi^{(n)}\right)f(x)-tr\left[M^{-1}\left(\xi^{(n)}\right)\right]\right|$, see e.g. \cite{K74,K75}. According to \cite{H04}, the stopping rule for the $E_k$-optimality criteria is $d\left(\xi^{(n)}\right)=\left|\phi_{E_k}\left(\xi^{(n)}\right)- \max_{x\in\mathcal{X}} \sum_{i=1}^k \left[f^\top (x) u_i\left(\xi^{(n)}\right)\right]^2\right|$, which can be used only if $\lambda_k\left(\xi^{(n)}\right)<\lambda_{k+1}\left(\xi^{(n)}\right)$. As mentioned in \cite[Chap.~9.5]{PP13}, the cutting-plane method can have poor convergence properties (cf. \cite{B06,N04}); one can then use the level method (see \cite{N04} or \cite{PP13}), which adds a quadratic programming step to the method of cutting planes. In the examples below we compare the known optimal designs with the results of our algorithm.
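The equivalence-theorem quantity $d(\xi)$ for $D$-optimality can be evaluated directly, as the following hypothetical toy illustration (our own example, not one of the paper's) shows: for simple linear regression with $f(x)=(1,x)^\top$ on $\mathcal{X}=\{-1,0,1\}$, the classical $D$-optimal design puts mass $1/2$ at $\pm 1$, and the variance function $f^\top M^{-1}f$ then attains its maximum $p=2$ on $\mathcal{X}$, so $d(\xi)=0$.

```python
# Evaluating the D-optimality stopping quantity
# d(xi) = | max_x f^T M^{-1}(xi) f - p |
# on a hypothetical toy model: f(x) = (1, x)^T on X = {-1, 0, 1}, p = 2.

X = [-1.0, 0.0, 1.0]
P = 2

def variance_function(w, x):
    """f(x)^T M(w)^{-1} f(x) for f(x) = (1, x)^T."""
    a = sum(w)
    b = sum(wi * s for wi, s in zip(w, X))
    c = sum(wi * s * s for wi, s in zip(w, X))
    det = a * c - b * b
    return (c - 2.0 * b * x + a * x * x) / det   # closed-form 2x2 inverse

def d_stop(w):
    """Equivalence-theorem stopping quantity for D-optimality."""
    return abs(max(variance_function(w, x) for x in X) - P)

print(d_stop((0.5, 0.0, 0.5)))   # 0.0 at the D-optimal design
print(d_stop((1/3, 1/3, 1/3)))   # approx. 0.5 at a non-optimal design
```

A value of $d(\xi)$ close to zero certifies near-optimality, matching column 6 of Table~\ref{tabpr1}.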
The computations were performed in Matlab on a dual-processor PC (3.10 GHz) with 6 GB of RAM running 64-bit Windows 8.1. The LP problems were solved with an interior-point method. \begin{example} \label{ex1} Consider the nonlinear regression model of \cite{A93}: \begin{equation*} \eta(x,\theta)=\theta_1\left[\exp(-\theta_2x)-\exp(-\theta_3x)\right],\; x\in\mathbb{R}^+,\; \theta=(\theta_1,\theta_2,\theta_3)^\top. \end{equation*} We use the algorithm of Sec.~\ref{alg1} to compute local $D$- and $E_1$-optimal designs for the nominal value of the parameter $\theta^0=(21.8,0.05884,4.298)^\top$, so we shall write $\partial\eta(x,\theta)/\partial\theta|_{\theta^0}$ instead of $f(x)$ everywhere. We take a finite design space containing 24,000 points, $\mathcal{X}=\{ 0.001, 0.002,\hdots,23.999,24.000\}$, $\epsilon=10^{-10}$, with $\xi^{(0)}(x)=1/3$ if $x\in \{0.2,1,23\}$ and $\xi^{(0)}(x)=0$ otherwise. The computed designs are given in Table~\ref{tabpr1}. Notice that the computed results correspond to those in \cite{A93}. \begin{table}[h!] \centering \begin{tabular}{cccccc} \hline $\phi$ & $\xi^*$ &$\phi^*$&iter.
& time& $d(\xi^*)$\\ \hline $D$ & \begin{tabular}{ccc} $0.229$&$1.389$&$18.417$\\ $0.3333$&$0.3333$&$0.3333$\\ \end{tabular}&$\phi_D^*=11.739$& 64 &16m 9s&$1.5 \cdot 10^{-5}$\\ \hline \text{$E_1$} & \begin{tabular}{cccc} $0.169$&$1.394$&$23.402$&$23.403$\\ $0.1993$&$0.6623$&$0.0415$&$0.0969$\\ \end{tabular}&$\phi_{E_1}^*=0.3163$& 49 &5m 53s&$3.89\cdot 10^{-6}$\\ \hline \end{tabular} \caption{Example~\ref{ex1}: the locally optimal designs are $\xi^*_D$ and $\xi^*_{E_1}$ (column 2); $\phi^*_D=\phi_D(\xi^*_D)$ and $\phi_{E_1}^*=\phi_{E_1}(\xi^*_{E_1})$ (column 3); the number of iterations (column 4) and the computational time (column 5) required until the algorithm stopped; the corresponding value of $d(\xi^*)$ based on the equivalence theorem (column 6).} \label{tabpr1} \end{table} \end{example} \subsection{Algorithm for computing criterion robust designs} \label{alg2} The criteria of $E_k$-optimality play a special role in experimental design. We say that the design $\xi $ is not worse than the design $\mu $ with respect to the Schur ordering of designs if $\phi _{E_k}\left( \xi \right) \geq \phi _{E_k}\left( \mu \right) $ for all $k=1,\hdots,p$. Then also $\phi \left( \xi \right) \geq \phi \left( \mu \right) $ for many other optimality criteria. However, the Schur ordering is a partial ordering of designs, and a Schur-optimal design exists only in some very particular cases. On the other hand, if we denote by $\mathcal{O}$ the set of all criteria functions $\phi \left( \xi \right) $, which are concave and positive homogeneous, and moreover are orthogonally invariant in the sense that $\phi \left( \xi \right) =\Phi \left[ M\left( \xi \right) \right] $ with $\Phi \left[ M\left( \xi \right) \right] $=$\Phi \left[ U^\top M\left( \xi \right) U\right] $ for any orthogonal matrix $U,$ it makes sense to look for a design $\xi_{ef}$ which is maximin efficient with respect to such criteria, i.e. 
\[ \xi_{ef} =\arg \max_{\xi \in \Xi }\min_{\phi \in \mathcal{O}}\left[ \frac{\phi \left( \xi \right) }{\max_{\zeta \in \Xi }\phi \left( \zeta \right) }\right]. \] Here the ratio $\frac{\phi \left( \xi \right) }{\max_{\zeta \in \Xi }\phi \left( \zeta \right) }$ is called the $\phi$-efficiency of the design $\xi $. This maximin efficiency problem can be simplified (cf. \cite{H04}): the solution $\xi_{ef}$ coincides with the solution of \[ \xi_{ef} =\arg \max_{\xi \in \Xi }\min_{1\leq k\leq p}\left[ \frac{\phi _{E_k}\left( \xi \right) }{\max_{\zeta \in \Xi }\phi _{E_k}\left( \zeta \right) }\right] , \] i.e. with the design which is maximin efficient in the (finite) class of all $E_k$-optimality criteria. Such a design is also called ``criterion robust'' in \cite{H04}. But even this problem is computationally difficult, mainly because the $E_k$-optimality criteria are not differentiable. For us it is important that we can approach the solution of this problem by LP techniques. First, using Theorem~\ref{veta1} we compute $E_k\left( opt\right) =\max_{\zeta \in \Xi }\phi _{E_k}\left( \zeta \right) $ for all $k$ (see Sec.~\ref{alg1}), and then we can formulate another ``infinite-dimensional'' LP problem: to choose the values of $\xi(x);\;x\in\mathcal{X}$ and of $t\in\mathbb{R}$ so as to maximize $t$ under the linear constraints: \begin{eqnarray*} \sum_{x\in \mathcal{X}}\frac{ H_{E_k}(\mu,x)}{E_k(opt)} \xi\left( x\right) &\geq & t\text{ for any }\mu \in \Xi ^{+} \text{ and for every } k\in\{1,\hdots,p\},\\ \xi \left( x \right) &\geq & 0 \text{ for any } x\in\mathcal{X}, \text{ and } \sum_{x\in \mathcal{X}}\xi \left( x\right) =1. \end{eqnarray*} In order to compute the maximin efficient design, the algorithm of Sec.~\ref{alg1} needs to be modified in step 2.
Specifically, the constraints in the LP problem will be: \begin{itemize} \item $t>0,\; \xi(x)\geq0 \; \forall \; x\in\mathcal{X},\;\sum_{x\in\mathcal{X}}\xi(x)=1, $ \item $\sum_{x\in\mathcal{X}} \frac{H_{E_k}(\mu,x)}{E_k(opt)}\xi(x)\geq t\; \forall \mu\in\Xi^{(n+1)}$ and $\forall\;k=1,\hdots,p$, \end{itemize} where $E_k(opt)=\max_{\zeta\in\Xi}\phi_{E_k}(\zeta)$ is computed using the unmodified algorithm of Sec.~\ref{alg1} for all $k=1,\hdots,p$. \begin{example} \label{ex2} Consider the quadratic regression model on a $q$-dimensional cube: \begin{equation} \label{pr2} y=\beta_0+\sum_{i=1}^q\beta_ix_i^2+\sum_{i=1}^q\beta^{(i)}x_i+\sum_{i<j}\beta_{ij}x_ix_j+\varepsilon,\; x=(x_1,\hdots,x_q)^\top\in [-1,1]^q \end{equation} with a parameter $\beta=(\beta_0,\beta_1,\hdots,\beta_q,\beta^{(1)},\hdots ,\beta^{(q)},\beta_{12},\hdots,\beta_{q-1,q})^\top$ of dimension $p=1+3q/2+q^2/2$. The criterion robust design in the model~(\ref{pr2}) was analytically studied for $q=1$ in \cite{H04} and for $q=2$ in \cite{FH13}. The case $q=3$ was solved numerically in \cite{FH13}. Consider the set $C_i=\lbrace x\in\{-1,0,1\}^q:\;\sum_{j=1}^q|x_j|=i \rbrace$ for $i=0,1,\hdots,q$. Thus, $C_0=\lbrace(0,\hdots,0)^\top\rbrace$ and $C_q$ is the set of all vertices of the $q$-dimensional cube. We shall denote $C=\bigcup_{i=0}^qC_i$ and $\xi(C_i)=\sum_{x\in C_i}\xi(x)$. As mentioned in \cite{FH13}, for every $\phi \in \mathcal{O}$ there exists a $\phi$-optimal design $\xi^*$ with support on $C$ such that for all $i=0,1,\hdots,q$ the measure $\xi^*(C_i)$ is uniformly distributed over the points $x\in C_i$ (see also \cite{G87,H92}). Before computing the criterion robust designs, we needed to evaluate $E_k(opt)$ for $k=1,\hdots,p$. The algorithm of Sec.~\ref{alg1}, initialized with the uniform measure on $\mathcal{X}=C$ and with $\epsilon=10^{-10}$, gave the optimal values $E_k(opt)$ summarized in Table~\ref{tabpr2a} for $q=1,2,3,4$.
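Recall that $\phi_{E_k}(\xi)$ is the sum of the $k$ smallest eigenvalues of $M(\xi)$, with $\phi_{E_p}$ reducing to the trace. As a small, hypothetical illustration of how such values are evaluated (a $p=2$ model of our own, not the quadratic model of this example), the eigenvalues of a symmetric $2\times 2$ information matrix have a closed form:

```python
# Hypothetical p = 2 illustration of the E_k-criteria: simple linear
# regression f(x) = (1, x)^T on X = {-1, 0, 1}.  phi_{E_k}(xi) is the
# sum of the k smallest eigenvalues of M(xi), computed here in closed
# form for a symmetric 2x2 matrix [[a, b], [b, c]].
import math

X = [-1.0, 0.0, 1.0]

def info_matrix(w):
    """Entries a, b, c of M(w) = [[a, b], [b, c]] for f(x) = (1, x)^T."""
    a = sum(w)
    b = sum(wi * x for wi, x in zip(w, X))
    c = sum(wi * x * x for wi, x in zip(w, X))
    return a, b, c

def phi_Ek(w, k):
    """Sum of the k smallest eigenvalues of the symmetric 2x2 M(w)."""
    a, b, c = info_matrix(w)
    mean = (a + c) / 2.0
    r = math.sqrt(((a - c) / 2.0) ** 2 + b * b)
    lam = (mean - r, mean + r)          # eigenvalues in ascending order
    return sum(lam[:k])

xi = (0.5, 0.0, 0.5)                    # M(xi) is the 2x2 identity
mu = (1 / 3, 1 / 3, 1 / 3)
print(phi_Ek(xi, 1), phi_Ek(xi, 2))     # 1.0 2.0
# xi is not worse than mu in the Schur ordering of designs:
assert all(phi_Ek(xi, k) >= phi_Ek(mu, k) for k in (1, 2))
```

In this toy case the design $\xi$ dominates $\mu$ with respect to the Schur ordering introduced above, so $\phi(\xi)\geq\phi(\mu)$ for every $\phi\in\mathcal{O}$.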
We observed the same optimal designs as calculated in \cite{H04} for $q=1$ and in \cite{FH13} for $q=2,3$. \begin{table}[h!] \centering \begin{tabular}{cccc} \hline $q$ & & &time\\ \hline $1$ & \begin{tabular}{c} $k$ \\ $E_k(opt)$ \end{tabular} &\begin{tabular}{ccc} 1&2&3\\ $0.2$&$1$&$3$\\ \end{tabular} &2s\\ \hline $2$ & \begin{tabular}{c} $k$ \\ $E_k(opt)$ \end{tabular} &\begin{tabular}{cccc} 1&2&3 to 5&6\\ $0.2$&$0.407$&$k-2$&$6$\\ \end{tabular} &17s\\ \hline $3$ & \begin{tabular}{c} $k$ \\ $E_k(opt)$ \end{tabular} &\begin{tabular}{cccccc} 1&2&3&4&5 to 9&10\\ $0.2$&$0.4$&$0.667$&$1.027$&$k-3$&$10$\\ \end{tabular} &28m 17s\\ \hline $4$ & \begin{tabular}{c} $k$ \\ $E_k(opt)$ \end{tabular} &\begin{tabular}{ccccccc} 1&2&3&4&5&6 to 14&15\\ $0.2$&$0.0433$&$0.6242$&$0.4834$&$1.250$&$k-4$&$15$\\ \end{tabular} &23h 32m 47s\\ \hline \end{tabular} \caption{ Example~\ref{ex2}: the optimal values of the $E_k$-optimality criteria on a $q$-dimensional cube for the model~(\ref{pr2}) and the total computational time required until the optimal values for all $k=1,\hdots,p$ were evaluated.} \label{tabpr2a} \end{table} Then using the algorithm of Sec.~\ref{alg2} we computed criterion robust designs on $\mathcal{X}=C$ for $q=1,2,3,4$, obtaining the same results (except for $q=4$) as in \cite{H04, FH13}; the optimal mass concentrated on $C_i$ is listed in Table~\ref{tabpr2b}. Note that for $q=3$ and $q=4$ the optimal design $\xi^*$ computed by the algorithm of Sec.~\ref{alg2} does not distribute the mass uniformly over $x\in C_i$ for $i=0,\hdots,q$. By redistributing the mass $\xi^*(C_i)$ uniformly over $x\in C_i$ for $i=0,\hdots,q$, we obtained a new design $\xi^{**}$ with the same $\mathcal{O}$-minimal efficiency as $\xi^*$. Thus, $\xi^{**}$ is another criterion robust design, with the required uniform measure on $C_i$ for every $i=0,\hdots,q$. \begin{table}[h!] \centering \begin{tabular}{ccccc} \hline $q$& $\xi^*$ &$\Psi^*$&iter.
& time\\ \hline $1$ & \begin{tabular}{cc} $\xi^*(C_0)$&$\xi^*(C_1)$\\ $0.3532$&$0.6468$\\ \end{tabular}&$0.7646$& 16&1s\\ \hline $2$ & \begin{tabular}{ccc} $\xi^*(C_0)$&$\xi^*(C_1)$&$\xi^*(C_2)$\\ $0.1775$&$0.2924$&$0.5304$\\ \end{tabular}&$0.7060$& 108&1m 20s\\ \hline $3$ & \begin{tabular}{cccc} $\xi^*(C_0)$&$\xi^*(C_1)$&$\xi^*(C_2)$&$\xi^*(C_3)$\\ $0.0884$&$0.2343$&$0.2306$&$0.4467$\\ \end{tabular}&$0.6642$&464&17m 9s\\ \hline $4$ & \begin{tabular}{ccccc} $\xi^*(C_0)$&$\xi^*(C_1)$&$\xi^*(C_2)$&$\xi^*(C_3)$&$\xi^*(C_4)$\\ $0.1097$&$0.0559$&$0.1437$&$0.3062$&$0.3845$\\ \end{tabular}&$0.6526$&1453&10h 20m 7s\\ \hline \end{tabular} \caption{Example~\ref{ex2}: criterion robust designs $\xi^*$ (column 2) and the $\mathcal{O}$-minimal efficiency of $\xi^*$, i.e. $\Psi^*=\min_k\frac{\phi_{E_k}(\xi^*)}{E_k(opt)}$ (column 3), on a $q$-dimensional cube for the model~(\ref{pr2}); the number of iterations (column 4) and the computational time (column 5) required until the algorithm stopped.} \label{tabpr2b} \end{table} Alternatively, we computed the criterion robust design for $q=2$ (thus $p=6$) on a modified design space $\mathcal{X}=\{-1,-0.95,\hdots ,0.9,0.95,1\}^2$ (i.e. $\mathcal{X}$ is a grid consisting of 1,681 two-dimensional points including the set $C$). Assuming that the values $E_k(opt)$ are known or previously computed for all $k\in\{1,\hdots,p\}$, the algorithm of Sec.~\ref{alg2} initialized with the uniform measure on $\mathcal{X}$ and $\epsilon=10^{-10}$ converged after 102 iterations in 36m 5s with the same results as given in Table~\ref{tabpr2b}.
\end{example} \subsection{Algorithm for $D$-optimality conditioned by prescribed $A$-optimality} \label{alg3} It is not difficult to see that in the considered LP problems we can easily add some supplementary constraints linear in $\xi $, say a cost constraint $\sum_{x\in \mathcal{X}}c\left( x\right) \xi \left( x\right) =c$, where $c\left( x\right) $ is the cost of an observation at $x$ and $c$ is proportional to the total cost allowed for the whole experiment. What is less evident is that we can combine optimality criteria. Say, when we want to obtain a $D$-optimal design under the condition that the $A$-optimality criterion attains at least a prescribed value $a$, we have to solve the ``infinite-dimensional'' LP problem: to choose the values of $\xi(x);\;x\in\mathcal{X}$ and of $t\in\mathbb{R}$ so as to maximize $t$ under the linear constraints: \begin{eqnarray*} \sum_{x\in \mathcal{X}} H_D(\mu,x)\xi \left( x\right) &\geq & t\text{ for any }\mu \in \Xi ^{+},\\ \sum_{x\in \mathcal{X}} H_A(\mu,x)\xi \left( x\right) &\geq & a\text{ for any }\mu \in \Xi ^{+}, \\ \xi \left( x \right) &\geq & 0 \text{ for any } x\in\mathcal{X}, \text{ and } \sum_{x\in \mathcal{X}}\xi \left( x\right) =1. \end{eqnarray*} This problem can be solved by the algorithm of Sec.~\ref{alg1} with modifications to the constraints of the LP problem and to the stopping rule. \begin{enumerate}[start=0] \item Take any vector $\xi^{(0)}$ such that $\sum_{x\in\mathcal{X}}\xi^{(0)}(x)=1$ and $\xi^{(0)}(x)\geq 0\;\forall\;x\in\mathcal{X}$, choose $\epsilon_D>0$, $\delta_A\approx 0$, set $\Xi^{(0)}=\emptyset $ and $n=0$. \item Set $\Xi^{(n+1)}= \Xi^{(n)} \cup \left\lbrace\xi^{(n)}\right\rbrace$.
\item Use the LP solver to find $\left(\xi^{(n+1)},t^{(n+1)}\right)$ so as to maximize $t$ subject to the constraints: \begin{itemize} \item $t>0,\; \xi(x)\geq0 \; \forall \; x\in\mathcal{X}, \; \sum_{x\in\mathcal{X}}\xi(x)=1 ,$ \item $\sum_{x\in\mathcal{X}} H_D(\mu,x)\xi(x)\geq t\; \forall \mu\in\Xi^{(n+1)},$ \item $\sum_{x\in\mathcal{X}} H_A(\mu,x)\xi(x)\geq a\; \forall \mu\in\Xi^{(n+1)}.$ \end{itemize} \item Set $\Delta_D^{(n+1)}=t^{(n+1)}-\phi_D\left(\xi^{(n+1)}\right)$ and $\Delta_A^{(n+1)}=\phi_A\left(\xi^{(n+1)}\right)-a$. If $\Delta_D^{(n+1)}<\epsilon_D$ and $\Delta_A^{(n+1)}>\delta_A$, take $\xi^{(n+1)}$ as an $(\epsilon_D,\delta_A)$-optimal design and stop, or else set $n\leftarrow n+1$ and continue with step 1. \end{enumerate} The constant $\delta_A$ is chosen at the beginning of the algorithm. The preferred value is $\delta_A=0$; however, by choosing $\delta_A<0$ small in absolute value, we can relax the condition on $A$-optimality. Now consider the set $\mathcal{A}^{(n)}=\{\xi\in\Xi: \sum_{x\in\mathcal{X}}H_A(\mu,x)\xi(x)\geq a\; \forall \mu \in \Xi^{(n)}\}$; then $\mathcal{A}^{(n)} \supseteq \mathcal{A}^{(n+1)} \supseteq \mathcal{A}=\{\xi\in\Xi: \phi_A(\xi)\geq a\}$. So the exact solution of our problem would be $\xi^*=\arg\max_{\xi \in \mathcal{A}}\phi_D(\xi)$.
We can write: $$ \begin{aligned} t^{(n+1)}&=\max_{\xi \in \mathcal{A}^{(n+1)}}\min_{\mu\in\Xi^{(n+1)}}\sum_{x\in\mathcal{X}}H_D(\mu,x)\xi(x)\\ &\geq \max_{\xi\in\mathcal{A}^{(n+1)}}\min_{\mu \in \Xi}\sum_{x\in\mathcal{X}}H_D(\mu,x)\xi(x)=\max_{\xi\in\mathcal{A}^{(n+1)}} \phi_D(\xi), \end{aligned} $$ and then \begin{equation} \label{d1} \max_{\xi \in \mathcal{A}}\phi_D(\xi)\leq \max_{\xi \in \mathcal{A}^{(n+1)}}\phi_D(\xi)\leq t^{(n+1)}, \end{equation} \begin{equation} \label{d2} \phi_D\left(\xi^{(n+1)}\right)\leq \max_{\xi \in \mathcal{A}^{(n+1)}}\phi_D(\xi)\leq t^{(n+1)}, \end{equation} where $$ \xi^{(n+1)}=\arg \max_{\xi \in \mathcal{A}^{(n+1)}}\min_{\mu\in\Xi^{(n+1)}}\sum_{x\in\mathcal{X}}H_D(\mu,x)\xi(x). $$ Assume that $\delta_A=0$ and the algorithm stopped, i.e. $t^{(n+1)}-\phi_D\left(\xi^{(n+1)}\right)<\epsilon_D$ and $\phi_A\left(\xi^{(n+1)}\right)\geq a$. According to~(\ref{d1}) and~(\ref{d2}) there are only two possibilities: first, if $\max_{\xi \in \mathcal{A}}\phi_D(\xi)\leq \phi_D\left(\xi^{(n+1)}\right)\leq t^{(n+1)}$, then $\xi^{(n+1)}\in\mathcal{A}^{(n+1)}$ is an even ``better'' design than we expected; second, $ \phi_D\left(\xi^{(n+1)}\right)\leq\max_{\xi \in \mathcal{A}}\phi_D(\xi)\leq t^{(n+1)}$, and the stopping rule implies that $\max_{\xi \in \mathcal{A}}\phi_D(\xi)- \phi_D\left(\xi^{(n+1)}\right)<\epsilon_D$, thus $\xi^{(n+1)}$ is an $\epsilon_D$-optimal design in both cases. \begin{example} \label{ex3} Consider the polynomial regression model of degree $d$: $$ {y}=\theta_0+\theta_1 x+\theta_2 x^2 + \hdots +\theta_d x^d + \varepsilon,\; x\in [-1,1],\; \theta=(\theta_0,\theta_1,\hdots,\theta_d)^\top. $$ Denote by $\xi^*_{D|a}$ the $D$-optimal design under the condition that the $A$-criterion exceeds a value $a$. Set $\mathcal{X}=\{-1.00,-0.99,-0.98,\hdots,0.99,1.00\}$ as the design space, suppose that the initial design $\xi^{(0)}$ distributes the unit mass uniformly over the points $x \in \mathcal{X}$, $\epsilon_D=10^{-10}$, and $\delta_A=0$.
Table~\ref{tabpr3} gives the optimal designs $\xi^*_{D|a}$ for some particular values of $a$ and for $d=4$, computed by the algorithm of Sec.~\ref{alg3} with the above setting. Notice that the $D$- and $A$-optimal (maximum) values are $\phi_D^*=0.1339$ and $\phi_A^*=0.0053$, respectively (see the optimal designs in polynomial regression in \cite[Chap.~11]{AD92} and \cite{PT91}). When $a$ is small, the algorithm of Sec.~\ref{alg3} will compute the $D$-optimal design. Initial knowledge of $\phi_A^*$ is necessary because if $a$ exceeds $\phi_A^*$, the constraint $\phi_A(\xi)\geq a$ cannot be satisfied and the algorithm fails. Figure~\ref{Obr} displays the $\phi_D$- and $\phi_A$-efficiencies of $\xi^*_{D|a}$ as a function of $a$, i.e. $\text{eff}_D(a)=\phi_D(\xi^*_{D|a})/\phi^*_D$ and $\text{eff}_A(a)=\phi_A(\xi^*_{D|a})/\phi^*_A$. \begin{table}[h!] \centering \begin{tabular}{cccccc} \hline $a$& $\xi^*_{D|a}$ &$\phi^*_{D|a}$&$\phi_A(\xi^*_{D|a})$&iter. & time\\ \hline 0.0052 & \begin{tabular}{ccccc} $-1$&$-0.68$&$0$&$0.68$&$1$\\ $0.136$&$0.2338$&$0.2604$&$0.2338$&$0.136$\\ \end{tabular}&$0.1283$&$0.0052$& 97 & 67s\\ \hline 0.005 & \begin{tabular}{ccccc} $-1$&$-0.68$&$0$&$0.68$&$1$\\ $0.1623$&$0.2194$&$0.2366$&$0.2194$&$0.1623$\\ \end{tabular}&$0.1317$&$0.005$& 97 & 53s\\ \hline 0.002 & \begin{tabular}{ccccccc} $-1$&$-0.66$&$-0.65$&$0$&$0.65$&$0.66$&$1$\\ $0.2$&$0.0847$&$0.1152$&$0.2$&$0.1151$&$0.0850$&$0.2$\\ \end{tabular}&$0.1338$&$0.0044$& 163 & 158s\\ \hline \end{tabular} \caption{Example~\ref{ex3}: the optimal designs $\xi^*_{D|a}$ (column 2) with different choices of $a$ (column 1); $\phi^*_{D|a}=\phi_D(\xi^*_{D|a})$, the value of the $D$-optimality criterion (column 3); $\phi_A(\xi^*_{D|a})$, the value of the $A$-optimality criterion (column 4); the number of iterations (column 5) and the computational time (column 6) required until the algorithm stopped.} \label{tabpr3} \end{table} \begin{figure}[h!]
\centering \includegraphics[width=0.4\textwidth, angle =270 ]{DoptAviacDat} \caption{The graph of the $\phi_A$-efficiency (dashed line) and of the $\phi_D$-efficiency (solid line) of $\xi^*_{D|a}$ as a function of the prescribed value $a$ in Example~\ref{ex3}.} \label{Obr} \end{figure} \end{example} \section{Reformulation of AVE criteria in nonlinear experiments} \label{sec:sec3} In general, the information matrix in a nonlinear regression model ${y}=\eta_X(\theta)+\varepsilon$ is a function of the parameter $\theta$. Similarly to Theorem~\ref{veta1}, we rewrite the (local) $D$-, $A$-, and $E_k$-optimality criteria in a nonlinear regression model in the form \begin{equation} \label{eqv1} \phi(\xi,\theta)=\min_{\mu\in\Xi^*}\sum_{x\in\mathcal{X}}H(\mu,x,\theta)\xi(x). \end{equation} Here $\Xi^*$ can be replaced by $\Xi$ or $\Xi^+$ depending on the considered criterion, as in Theorem~\ref{veta1}. The reformulation of the expressions in Theorem~\ref{veta1} in terms of average (AVE) optimality criteria $\int_{\Theta}\phi(\xi,\theta)d\pi(\theta)$, where $\pi(\theta)$ is a known prior distribution, is also possible and is given in Theorem~\ref{veta3}. \begin{theorem} We can write $$ \int_{\Theta}\phi(\xi,\theta)d\pi(\theta)=\min_{\mu\in\Xi^*}\sum_{x\in\mathcal{X}}K(\mu,x)\xi(x), $$ where $K(\mu,x)=\int_{\Theta}H(\mu,x,\theta)d\pi(\theta)$. \label{veta3} \end{theorem} \begin{proof} The design space $\mathcal{X}$ is assumed to be finite, hence the summation and the integration are interchangeable. From~(\ref{eqv1}) we have $\phi(\xi,\theta)\leq\sum_{x\in\mathcal{X}}H(\mu,x,\theta)\xi(x)$ for any $\mu\in\Xi^*$ and for all $\theta\in\Theta$. We can write \begin{equation} \label{eqv2} \int_{\Theta}\phi(\xi,\theta)d\pi(\theta)\leq\sum_{x\in\mathcal{X}}\left[\int_{\Theta}H(\mu,x,\theta)d\pi(\theta)\right]\xi(x).
\end{equation} Since the inequality~(\ref{eqv2}) holds for every $\mu\in\Xi^*$, evidently: \begin{equation} \label{eqv3} \int_{\Theta}\phi(\xi,\theta)d\pi(\theta)\leq \min_{\mu\in\Xi^*}\sum_{x\in\mathcal{X}}\left[\int_{\Theta}H(\mu,x,\theta)d\pi(\theta)\right]\xi(x). \end{equation} Theorem~\ref{veta1} implies that the minimum in~(\ref{eqv1}) is attained at $\mu=\xi$ for any $\theta\in\Theta$, so we obtain equality in~(\ref{eqv2}) for $\mu=\xi$, which together with~(\ref{eqv3}) proves the theorem. \end{proof} \section*{Appendix: Reformulation of criteria in terms of nonlinear models} \label{sec:sec4} Using the notation $\eta \left( x,\theta \right) =f^\top\left( x\right) \theta $ we can rewrite the expressions from Theorem~\ref{veta1} in a form which formally allows an extension of the criteria to a nonlinear model \begin{eqnarray*} y_x &=&\eta \left( x,\theta \right) +\varepsilon _x ,\\ \theta &\in &\Theta \subset \mathbb{R}^p. \end{eqnarray*} However, for the $D$-, $A$-, and $E_k$-optimality criteria we are not as successful as for the $E$-, $c$-, and $G$-optimality criteria in \cite{PP14}. Therefore we present the corresponding constructions only in the Appendix. \begin{theorem} \label{veta2} Let $\theta ^{\left( 0\right) }\in \Theta $ be a given vector.
Denote $$ \mathcal{V}_{\theta ^{\left( 0\right) }}=\left\{ \left( \theta ^{\left( 1\right) },\hdots,\theta ^{\left( p\right) }\right) :\forall _i\;\theta ^{\left( i\right) }\in \Theta ,\,\left( \theta ^{\left( i\right) }-\theta ^{\left( 0\right) }\right) \neq 0\text{, }\left( \theta ^{\left( i\right) }-\theta ^{\left( 0\right) }\right) ^\top\left( \theta ^{\left( j\right) }-\theta ^{\left( 0\right) }\right) =0\text{, }i\neq j\right\} .$$ Further denote by $\left\| \theta ^{\left( i\right) }-\theta ^{\left( 0\right) }\right\| $ the Euclidean norm of $\theta ^{\left( i\right) }-\theta ^{\left( 0\right) },$ and \[ \left\| \eta \left( \cdot,\theta ^{\left( i\right) }\right) -\eta \left( \cdot,\theta ^{\left( 0\right) }\right) \right\| _\xi ^2=\sum_{x\in \mathcal{X}}\left[ \eta \left( x,\theta ^{\left( i\right) }\right) -\eta \left( x,\theta ^{\left( 0\right) }\right) \right] ^2\xi \left( x\right) .\] The ``extended'' criteria defined as: $$ \begin{aligned} \phi _{eD}\left( \xi \right)& =\min_{\left( \theta ^{\left( 1\right) },\hdots,\theta ^{\left( p\right) }\right) \in \mathcal{V}_{\theta ^{\left( 0\right) }}}\frac{\left( 1/p\right) \sum_{i=1}^p\left\| \eta \left( \cdot,\theta ^{\left( i\right) }\right) -\eta \left( \cdot,\theta ^{\left( 0\right) }\right) \right\| _\xi ^2}{\left[ \prod_{j=1}^p\left\| \theta ^{\left( j\right) }-\theta ^{\left( 0\right) }\right\| ^2\right] ^{1/p}},\\ \phi _{eA}\left( \xi \right) &=\min_{\left( \theta ^{\left( 1\right) },\hdots,\theta ^{\left( p\right) }\right) \in \mathcal{V}_{\theta ^{\left( 0\right) }}}\frac{\sum_{i=1}^p\left\| \theta ^{\left( i\right) }-\theta ^{\left( 0\right) }\right\| ^2\left\| \eta \left( \cdot,\theta ^{\left( i\right) }\right) -\eta \left( \cdot,\theta ^{\left( 0\right) }\right) \right\| _\xi ^2}{\left[ \sum_{j=1}^p\left\| \theta ^{\left( j\right) }-\theta ^{\left( 0\right) }\right\| ^2\right] ^2},\\ \phi _{eE_k}\left( \xi \right) &=\min_{\left( \theta ^{\left( 1\right) },\hdots,\theta ^{\left( p\right)
}\right) \in \mathcal{V}_{\theta ^{\left( 0\right) }}}\sum_{i=1}^k\frac{\left\| \eta \left( \cdot,\theta ^{\left( i\right) }\right) -\eta \left( \cdot,\theta ^{\left( 0\right) }\right) \right\| _\xi ^2}{\left\| \theta ^{\left( i\right) }-\theta ^{\left( 0\right) }\right\| ^2} \end{aligned} $$ coincide with those in Theorem~\ref{veta1} when the model is linear. \end{theorem} \begin{proof} Consider first the expression for $\phi _D\left( \xi \right) $ in Theorem~\ref{veta1}. Using the notation from Sec.~\ref{sec:sec1}, for every $\mu \in \Xi ^{+}$ we can write $M^{-1}\left( \mu \right) =\sum_{i=1}^p\nu _i\left( \mu \right) \nu _i^\top\left( \mu \right) $ with $\nu _i\left( \mu \right) =u_i\left( \mu \right) /\sqrt{\lambda _i\left( \mu \right) }$ (the normed eigenvector divided by the square root of the eigenvalue), and $\left\| \nu _i\left( \mu \right) \right\| ^2=\lambda _i^{-1}\left( \mu \right) $. It follows that \[ \frac{\det^{1/p}\left[ M\left( \mu \right) \right] }pf^\top\left( x\right) M^{-1}\left( \mu \right) f\left( x\right) =\frac{\left( 1/p\right) \sum_{i=1}^p\left[ f^\top\left( x\right) \nu _i\left( \mu \right) \right] ^2}{\left[ \prod_{j=1}^p\left\| \nu _j\left( \mu \right) \right\| ^2\right] ^{1/p}} .\] Denote $\theta ^{\left( i\right) }\left( \mu \right) =\theta ^{\left( 0\right) }+\nu _i\left( \mu \right) $. In the linear model $f^\top\left( x\right) \nu _i\left( \mu \right) =\eta \left( x,\theta ^{\left( i\right) }\left( \mu \right) \right) -\eta \left( x,\theta ^{\left( 0\right) }\left( \mu \right) \right) $.
So from Theorem~\ref{veta1} it follows that \begin{equation} \phi _D\left( \xi \right) =\min_{\mu \in \Xi ^{+}}\frac{\left( 1/p\right) \sum_{i=1}^p\left\| \eta \left(\cdot,\theta ^{\left( i\right) }\left( \mu \right) \right) -\eta \left( \cdot,\theta ^{\left( 0\right) }\left( \mu \right) \right) \right\| _\xi ^2}{\left[ \prod_{j=1}^p\left\| \theta ^{\left( j\right) }\left( \mu \right) -\theta ^{\left( 0\right) }\left( \mu \right) \right\| ^2\right] ^{1/p}} .\label{e} \end{equation} Evidently $\left( \theta ^{\left( 1\right) }\left( \mu \right) ,\hdots,\theta ^{\left( p\right) }\left( \mu \right) \right) \in \mathcal{V}_{\theta ^{\left( 0\right) }}$. On the other hand, for any $\left( \theta ^{\left( 1\right) },\hdots,\theta ^{\left( p\right) }\right) \in \mathcal{V}_{\theta ^{\left( 0\right) }}$ we define $B=\left[\sum_{i=1}^p\left( \theta ^{\left( i\right) }-\theta ^{\left( 0\right) }\right) \left( \theta ^{\left( i\right) }-\theta ^{\left( 0\right) }\right) ^\top\right]^{-1}$. From Remark~\ref{rem1} of Theorem~\ref{veta1} it follows that we can take the minimum in~(\ref{e}) with respect to all $\left( \theta ^{\left( 1\right) },\hdots,\theta ^{\left( p\right) }\right) \in \mathcal{V}_{\theta ^{\left( 0\right) }}$ and not with respect to all $\mu\in\Xi^+$. We proceed similarly for $A$-optimality. 
We have $M^{-2}\left( \mu \right) =\sum_{i=1}^p\left\| \nu _i\left( \mu \right) \right\| ^2\nu _i\left( \mu \right) \nu _i^\top\left( \mu \right) $ and $tr\left[ M^{-1}\left( \mu \right) \right] =\sum_{i=1}^p\lambda _i^{-1}\left( \mu \right) $, so \begin{eqnarray*} \frac{\left\| M^{-1}\left( \mu \right) f\left( x\right) \right\| ^2}{\left\{ tr\left[ M^{-1}\left( \mu \right) \right] \right\} ^2} &=&\frac{\sum_{i=1}^p\left\| \nu _i\left( \mu \right) \right\| ^2\left[ f^\top\left( x\right) \nu _i\left( \mu \right) \right] ^2}{\left[ \sum_{j=1}^p\left\| \nu _j\left( \mu \right) \right\| ^2\right] ^2} \\ &=&\frac{\sum_{i=1}^p\left\| \theta ^{\left( i\right) }\left( \mu \right) -\theta ^{\left( 0\right) }\right\| ^2\left[ \eta \left( x,\theta ^{\left( i\right) }\left( \mu \right) \right) -\eta \left( x,\theta ^{\left( 0\right) }\left( \mu \right) \right) \right] ^2}{\left[ \sum_{j=1}^p\left\| \theta ^{\left( j\right) }\left( \mu \right) -\theta ^{\left( 0\right) }\right\| ^2\right] ^2}. \end{eqnarray*} For the $E_k$-optimality criterion we write $P^{(k)}\left( \mu \right) =\sum_{i=1}^k\left\| \nu _i\left( \mu \right) \right\| ^{-2}\nu _i\left( \mu \right) \nu _i^\top\left( \mu \right) $, hence \begin{eqnarray*} \left\| P^{(k)}\left( \mu \right) f\left( x\right) \right\| ^2 &=&\sum_{i=1}^k\left\| \nu _i\left( \mu \right) \right\| ^{-2}\left[ f^\top\left( x\right) \nu _i\left( \mu \right) \right] ^2 \\ &=&\sum_{i=1}^k\frac{\left[ \eta \left( x,\theta ^{\left( i\right) }\left( \mu \right) \right) -\eta \left( x,\theta ^{\left( 0\right) }\left( \mu \right) \right) \right] ^2}{\left\| \theta ^{\left( i\right) }\left( \mu \right) -\theta ^{\left( 0\right) }\right\| ^2}.
\end{eqnarray*} \end{proof} \begin{remark} \label{rem3} The expressions in Theorem~\ref{veta2} are evidently linear in $\xi $, so maximization of $\phi _D\left( \xi \right) ,\,\phi _A\left( \xi \right) ,$ and $\phi _{E_k}\left( \xi \right) $ with respect to $\xi $ corresponds to an ``infinite-dimensional'' LP problem even in a nonlinear model. However, this problem is too complex to be used for experimental design. Moreover, in contrast to the criteria considered in \cite{PP14}, a clear statistical interpretation is still missing. \end{remark} \paragraph{Acknowledgements.} We would like to thank Luc Pronzato for helpful advice. The paper was supported by the Slovak VEGA-Grant No. 1/0163/13. \bibliographystyle{apalike}
\title{Modal Inclusion Logic:\\ Being Lax is Simpler than Being Strict} \author{Lauri Hella\inst{1} \and Antti Kuusisto\inst{2} \and Arne Meier\inst{3}\and Heribert Vollmer\inst{3}} \institute{School of Information Sciences, University of Tampere, Kanslerinrinne 1 B,\\ 33014 University of Tampere, Finland, \email{lauri.hella@uta.fi}\and Dept. of Philosophy, Stockholm University, SE-106 91 Stockholm, Sweden, and\\ DTU Compute, Technical University of Denmark, Richard Petersens Plads 324, DK-2800, Kgs. Lyngby, Denmark. \email{antti.j.kuusisto@gmail.com} \and Institut für Theoretische~Informatik, Leibniz Universität Hannover, Appelstr.~4, 30167~Hannover, Germany, \email{\{meier,vollmer\}@thi.uni-hannover.de}} \begin{document} \maketitle \begin{abstract} We investigate the computational complexity of the satisfiability problem of modal inclusion logic. We distinguish two variants of the problem: one for strict and another one for lax semantics. The complexity of the lax version turns out to be complete for \EXPTIME, whereas with strict semantics, the problem becomes \NEXPTIME-complete. \end{abstract} \section{Introduction} Dependence logic was introduced by Jouko Väänänen \cite{vaananen07} in 2007. It is a first-order logic that enables one to explicitly talk about dependencies between variables. It thereby generalizes Henkin quantifiers and also, in a sense, Hintikka's independence-friendly logic. Dependence logic can be used to formalize phenomena from a plethora of scientific disciplines such as database theory, social choice theory, cryptography, quantum physics, and others. It extends first-order logic by specific terms $\dep{x_{1},\dots,x_{n-1},x_{n}}$ known as dependence atoms, expressing that the value of the variable $x_{n}$ depends on the values of $x_{1},\dots,x_{n-1}$, i.e., $x_{n}$ is functionally determined by $x_{1},\dots,x_{n-1}$.
As such a dependence does not make sense when talking about single assignments, formulas are evaluated over sets of assignments, called \emph{teams}. The semantics of the atom $\dep{x_{1},\dots,x_{n-1},x_{n}}$ is defined such that it is true in a team $T$ if in the set of all assignments in $T$, the value of $x_{n}$ is functionally determined by the values of $x_{1},\dots,x_{n-1}$. In addition to dependence atoms, generalized dependency atoms have also been introduced in the literature. Examples include the independence atom (asserting that two sets of variables are informationally independent in a team), the non-emptiness atom (asserting that the team is non-empty), and, most importantly to the present paper, the inclusion atom $\vec{x}\subseteq\vec{y}$ for vectors of variables $\vec{x},\vec{y}$, asserting that in a team, the set of tuples assigned to $\vec{x}$ is included in the set of tuples assigned to $\vec{y}$. This corresponds to the definition of inclusion dependencies in database theory, which state that all tuples of values taken by the attributes $\vec{x}$ are also taken by the attributes $\vec{y}$. V\"a\"an\"anen \cite{vaananen08b} also introduced dependence atoms into modal logic. There teams are sets of worlds, and a dependence atom $\dep{p_{1},\dots,p_{n-1},p_{n}}$ holds in a team $T$ if there is a Boolean function that determines the value of $p_{n}$ from the values of $p_{1},\dots,p_{n-1}$ in each world in $T$. The modal dependence logic \MDL obtained in this way was studied from the point of view of expressivity and complexity in \cite{sev09}. Following the above-mentioned developments in first-order dependence logic, modal dependence logic was also extended by generalized dependency atoms in \cite{KMSV14}, such as, e.g., independence atoms and inclusion atoms. In the context of first-order dependence logic and its variants, two alternative kinds of team semantics have been distinguished, \emph{lax} and \emph{strict semantics} \cite{Galliani12}.
Lax semantics is the standard team semantics, while for strict semantics, some additional uniqueness or strictness properties are required. In the modal context, this mainly concerns the diamond modality $\Diamond$. Usually, i.e., in lax semantics, a formula $\Diamond\varphi$ holds in a team $T$ if there is a team $S$ such that every world in $T$ has at least one successor in $S$ and $\varphi$ holds in $S$. (Also, the worlds in $S$ are required to have a predecessor in $T$.) In strict semantics, we require that $S$ contains, for every world in $T$, a unique successor given by a surjection $f:T\rightarrow S$. (In first-order logic, strict semantics for the existential quantifier is defined similarly.) In both the modal and the first-order context, the operator known as \emph{splitjunction} is also defined differently for lax and strict semantics (see Section \ref{preliminaries} below). For many variants of first-order and modal dependence logic, there is no distinction in expressive power between the two semantics. However, the choice of semantics plays a role in independence and inclusion logics, i.e., team semantics over (first-order) logics with the independence and inclusion atoms. For example, in the first-order case, inclusion logic under strict semantics has the same expressive power as dependence logic, i.e., \ESO (existential second order logic) \cite{ghk13} and hence \NP, while under lax semantics it is equivalent to greatest fixpoint logic and hence can express exactly the polynomial-time decidable properties over finite ordered structures. The purpose of the present paper is to exhibit a further context in which a quite dramatic difference between the two flavours of team semantics exists. We turn to modal inclusion logic and study the computational complexity of its satisfiability problem. 
For lax semantics, we show $\EXPTIME$-completeness by proving the upper bound via a translation to a variant of $\logicFont{PDL}$, and the lower bound by a reduction from a succinct encoding of a $\P$-complete problem. Satisfiability under strict semantics is shown $\NEXPTIME$-complete using a translation into two-variable logic with counting (upper bound) and a chain of reductions from a dependence version of QBF-validity (lower bound). The complexity difference also holds for the finite satisfiability problem. \section{Preliminaries}\label{preliminaries} Let $\Pi$ be a countably infinite set of proposition symbols. The set of formulas of \emph{modal inclusion logic} $\Minc$ is defined inductively by the following grammar. $$ \varphi \ddfn p\mid \lnot p\mid (\varphi_1\land\varphi_2)\mid (\varphi_1\lor\varphi_2)\mid p_1\cdots p_k\subseteq q_1\cdots q_k\mid \Box\varphi\mid \Diamond\varphi, $$ where $p,p_1,\dots,p_k,q_1,\dots,q_k\in\Pi$ are proposition symbols and $k$ is any positive integer. The formulas $p_1\cdots p_k\subseteq q_1\cdots q_k$ are called \emph{inclusion atoms}. For a set $\Phi\subseteq\Pi$, we let $\Minc(\Phi)$ be the sublanguage where propositions from $\Phi$ are used. Observe that formulas are essentially in negation normal form; negations may occur only in front of proposition symbols. A Kripke model is a structure $M=(W,R,V)$, where $W\not=\emptyset$ is a set (the domain of the model, or the set of worlds/states), $R\subseteq W\times W$ is a binary relation (the accessibility or transition relation), and $V\colon\Pi\rightarrow\mathcal{P}(W)$ is a \emph{valuation} interpreting the proposition symbols. Here $\mathcal{P}$ denotes the power set operator. The language of basic unimodal logic is the sublanguage of $\Minc$ without formulas $p_1\cdots p_k\subseteq q_1\cdots q_k$. 
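Since all constructions below work over finite models, the standard pointwise Kripke semantics of this basic unimodal fragment can be evaluated by brute force. The following sketch is only meant to fix intuitions before team semantics is introduced; the tuple-based formula encoding and all data are hypothetical, not taken from the paper.

```python
# Pointwise (standard Kripke) semantics for the basic unimodal fragment,
# evaluated by brute force over a finite model. A sketch only: the
# tuple-based formula encoding and all data below are hypothetical.

def holds(M, w, phi):
    """Whether the world w of the model M satisfies phi pointwise."""
    W, R, V = M
    op = phi[0]
    if op == "p":
        return w in V.get(phi[1], set())
    if op == "not":                      # negation normal form: only on atoms
        return w not in V.get(phi[1], set())
    if op == "and":
        return holds(M, w, phi[1]) and holds(M, w, phi[2])
    if op == "or":
        return holds(M, w, phi[1]) or holds(M, w, phi[2])
    if op == "box":
        return all(holds(M, v, phi[1]) for (u, v) in R if u == w)
    if op == "dia":
        return any(holds(M, v, phi[1]) for (u, v) in R if u == w)
    raise ValueError(op)

M = ({"a", "b"}, {("a", "b")}, {"p": {"b"}})
assert holds(M, "a", ("dia", ("p", "p")))       # a has a p-successor
assert holds(M, "b", ("box", ("p", "q")))       # vacuously: b has no successors
```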
We assume that the reader is familiar with standard Kripke semantics of modal logic; we let $M,w\Vdash\varphi$ denote the assertion that the point $w\in W$ of the model $M$ satisfies $\varphi$ according to standard Kripke semantics. We use the symbol $\Vdash$ in order to refer to satisfaction according to standard Kripke semantics, while the symbol $\models$ will be reserved for \emph{team semantics}, to be defined below, which is the semantics $\Minc$ is based on. Let $T$ be a subset of the domain $W$ of a Kripke model $M$. The set $T$ is called a \emph{team}. The semantics of the inclusion atoms $p_1\cdots p_k\subseteq q_1\cdots q_k$ is defined such that $M,T\models p_1\cdots p_k\subseteq q_1\cdots q_k$ iff for each $u\in T$, there exists a point $v\in T$ such that $\bigwedge\limits_{i\, \in\, \{1,..,k\}}\bigl( u\in V(p_i)\Leftrightarrow v\in V(q_i)\bigr).$ The intuition here is that every vector of truth values taken by $p_1,\dots,p_k$, is included in the set of vectors of truth values taken by $q_1,\dots,q_k$. Let $M = (W,R,V)$ be a Kripke model and $T\subseteq W$ a team. Define the set of successors of $T\subseteq W$ to be $R(T)\dfn\{s\in W\mid\exists s'\in T:(s',s)\in R\}$. Also define $R\langle T\rangle\dfn\{\, T\, '\subseteq W\mid\forall s\in T\exists s'\in T\, '\text{ s.t.\ }(s,s')\in R\text{ and }\forall s'\in T\, '\, \exists s\in T\text{ s.t.\ }(s,s')\in R\,\}$, the set of legal successor teams. The following clauses together with the above clause for inclusion atoms define \emph{lax semantics} for $\Minc$. 
\[ \begin{array}{@{}l@{}l@{}l} M,T\modelslax p\ &\Leftrightarrow &\ w\in V(p)\text{ for all }w\in T.\\ M,T\modelslax\neg p\ &\Leftrightarrow &\ w\not\in V(p)\text{ holds for all }w\in T.\\ M,T\modelslax\varphi\wedge\psi\ &\Leftrightarrow &\ M,T\modelslax\varphi\text{ and }M,T\modelslax\psi.\\ M,T\modelslax\varphi\vee\psi\ &\Leftrightarrow &\ M,S\modelslax\varphi\text{ and }M,S'\modelslax\psi \text{ for some }S,S'\subseteq T\text{ such that }\\ & &\text{ we have }S\cup S' = T.\\ M,T\modelslax\Box\varphi\ &\Leftrightarrow &\ M,R(T)\modelslax\varphi.\\ M,T\modelslax\Diamond\varphi\ &\Leftrightarrow &\ \exists T\, '\in R\langle T\rangle:M,T\, '\modelslax\varphi \end{array} \] The other semantics for $\Minc$, \emph{strict semantics}, differs from the lax semantics only in its treatment of the disjunction $\vee$ and diamond $\Diamond$. Therefore, all other clauses in the definition of $\modelsstrict$ are the same as those for $\modelslax$. The clauses for $\vee$ and $\Diamond$ in strict semantics are as follows. \[ \begin{array}{lll} M,T\modelsstrict\varphi\vee\psi\ &\Leftrightarrow &\ M,S\modelsstrict\varphi\text{ and }M,S'\modelsstrict\psi \text{ for some }S,S'\subseteq T\text{ such that }\\ & &S\cup S' = T\text{ and }S\cap S' = \emptyset.\\ M,T\modelsstrict\Diamond\varphi\ &\Leftrightarrow &\ M,f(T)\modelsstrict\varphi\text{ for some function } f\colon T\rightarrow W \text{ such that }\\ & &(u,f(u))\in R\text{ for all }u\in T.\ (\text{Here }f(T) = \{\, f(u)\, |\, u\in T\, \}.) \end{array} \] The difference between lax and strict semantics is as the terms suggest. In strict semantics, the division of a team with the splitjunction $\lor$ is strict; no point is allowed to occur in both parts of the division, contrary to lax semantics. For $\Diamond$, strictness is related to the use of functions when finding a team of successors.
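To make the lax/strict contrast concrete, the clauses above can be turned into a brute-force evaluator over finite models. This is an illustrative sketch only, not a decision procedure: the tuple encoding of formulas and all names are hypothetical, and the code simply enumerates all splits, successor teams, and choice functions.

```python
from itertools import product

# Brute-force team-semantics evaluator for finite Kripke models.
# Formulas are nested tuples (hypothetical encoding): ("p", name),
# ("not", name), ("and", f, g), ("or", f, g),
# ("incl", [p1, ..., pk], [q1, ..., qk]), ("box", f), ("dia", f).

def image(T, R):
    """R(T): the set of R-successors of worlds in the team T."""
    return {v for (u, v) in R if u in T}

def splits(T, strict):
    """All pairs (S, S2) with S u S2 = T; disjoint when strict."""
    elems = sorted(T)
    labels = (0, 1) if strict else (0, 1, 2)   # 2 = "in both halves"
    for bits in product(labels, repeat=len(elems)):
        yield ({w for w, b in zip(elems, bits) if b != 1},
               {w for w, b in zip(elems, bits) if b != 0})

def sat(M, T, phi, strict=False):
    W, R, V = M
    op = phi[0]
    if op == "p":
        return T <= V.get(phi[1], set())
    if op == "not":
        return T.isdisjoint(V.get(phi[1], set()))
    if op == "and":
        return sat(M, T, phi[1], strict) and sat(M, T, phi[2], strict)
    if op == "or":                      # splitjunction
        return any(sat(M, S, phi[1], strict) and sat(M, S2, phi[2], strict)
                   for S, S2 in splits(T, strict))
    if op == "incl":                    # inclusion atom; both sides over T
        rows = lambda ps, u: tuple(u in V.get(p, set()) for p in ps)
        return {rows(phi[1], u) for u in T} <= {rows(phi[2], v) for v in T}
    if op == "box":
        return sat(M, image(T, R), phi[1], strict)
    if op == "dia":
        if strict:                      # choose exactly one successor per world
            choices = [sorted(image({u}, R)) for u in sorted(T)]
            return all(choices) and any(sat(M, set(pick), phi[1], True)
                                        for pick in product(*choices))
        succ = sorted(image(T, R))      # lax: any team in R<T> works
        return any(sat(M, T2, phi[1], False)
                   for bits in product((0, 1), repeat=len(succ))
                   for T2 in [{v for v, b in zip(succ, bits) if b}]
                   if all(image({u}, R) & T2 for u in T))
    raise ValueError(op)

# A team satisfying a formula laxly but not strictly: the lax successor
# team {b, c} makes the p- and q-rows coincide, a single choice cannot.
M = ({"a", "b", "c"}, {("a", "b"), ("a", "c")}, {"p": {"b"}, "q": {"c"}})
phi = ("dia", ("and", ("incl", ["p"], ["q"]), ("incl", ["q"], ["p"])))
assert sat(M, {"a"}, phi) and not sat(M, {"a"}, phi, strict=True)
```

The final assertion exhibits a model and team where lax and strict semantics disagree, witnessing that the two relations $\modelslax$ and $\modelsstrict$ genuinely differ on $\Minc$.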
It is well known and easy to show that for a formula $\varphi$ of modal logic, i.e., a formula of $\Minc$ \emph{without} inclusion atoms, $M,T\modelslax\varphi$ iff $\forall w\in T(M,w\Vdash\varphi)$, where $\Vdash$ denotes satisfaction in the standard sense of Kripke semantics. The same equivalence holds for $\modelsstrict$. This is the so-called \emph{flatness} property. The satisfiability problem of $\Minc$ with lax (strict) semantics is the problem that asks, given a formula $\varphi$ of $\Minc$, whether there exist a nonempty team $T$ and a model $M$ such that $M,T\modelslax\varphi$ ($M,T\modelsstrict\varphi$) holds. Two different problems arise, depending on whether lax or strict semantics is used. The corresponding finite satisfiability problems require that a satisfying model has a finite domain. \section{Computational Complexity} \label{sect:coco} \subsection{Upper bound for lax semantics} In this section we show that the satisfiability and finite satisfiability problems of \Minc with lax semantics are in $\EXPTIME$. The result is established by a satisfiability preserving translation to \emph{propositional dynamic logic} extended with the global and converse modalities. It is well-known that this logic is complete for $\EXPTIME$ (see \cite{blackburn, hema, eijck}). In fact, we will only need multimodal logic with the global modality and converse modalities for our purposes. Let $\Pi$ and $\mathcal{R}$ be countably infinite sets of proposition and binary relation symbols, respectively. We define the following modal language $\mathcal{L}$ via $\varphi\, \ddfn\, p\ |\ \neg\varphi\ |\ (\varphi_1\wedge\varphi_2)\ |\ \langle R\rangle \varphi\ |\ \langle R^{-1}\rangle\varphi\ |\ \langle E\rangle\varphi$. Here $p\in\Pi$, $R\in\mathcal{R}$, and $E$ is a novel symbol. The (classical Kripke-style) semantics of $\mathcal{L}$ is defined with respect to ordinary pointed Kripke models $(M,w)$ for multimodal logic.
Let $M = (W, \{R\}_{R\in\mathcal{R}}, V)$ be a Kripke model, where $V\colon\Pi\rightarrow\mathcal{P}(W)$ is the \emph{valuation} function interpreting proposition symbols. The following clauses define the semantics of $\mathcal{L}$ (notice that we use the turnstile $\Vdash$ instead of $\models$, which is reserved for team semantics in this paper). \[ \begin{array}{lll} M,w\Vdash p & \Leftrightarrow\ & w\in V(p) \qquad\text{and}\qquad M,w\Vdash \neg\varphi \Leftrightarrow M,w\not\Vdash\varphi\\ M,w\Vdash \varphi_1\wedge\varphi_2 & \Leftrightarrow\ & M,w\Vdash\varphi_1\text{ and }M,w\Vdash\varphi_2\\ M,w\Vdash \langle R\rangle\varphi & \Leftrightarrow & M,u\Vdash\varphi \text{ for some }u\text{ such that }wRu\\ M,w\Vdash \langle R^{-1}\rangle\varphi & \Leftrightarrow & M,u\Vdash\varphi \text{ for some }u\text{ such that }uRw\\ M,w\Vdash \langle E \rangle\varphi & \Leftrightarrow & M,u\Vdash\varphi \text{ for some }u\in W\\ \end{array} \] We next define a satisfiability preserving translation from modal inclusion logic to $\mathcal{L}$. We let $[R]$ and $[E]$ denote $\neg\langle R\rangle\neg$ and $\neg\langle E\rangle\neg$, respectively. Before we fix the translation, we define some auxiliary formulas. Let $\theta$ be a formula of \Minc. We let $\mathit{SUB}(\theta)$ denote the set of subformulas of $\theta$; we distinguish all instances of subformulas, so for example $p\wedge p$ has \emph{three} subformulas (the right and the left instances of $p$ and the conjunction itself). For each formula $\varphi\in\mathit{SUB}(\theta)$, fix a fresh proposition symbol $p_{\varphi}$ that does not occur in $\theta$. We next define, for each $\varphi\in\mathit{SUB}(\theta)$, a novel auxiliary formula $\chi_{\varphi}$. If $\varphi\in\mathit{SUB}(\theta)$ is a literal $p$ or $\neg p$, we define $\chi_{\varphi}\ \dfn\ [E]\bigl(\ p_{\varphi}\ \rightarrow\ \varphi\ \bigr).$ Now fix a symbol $R\in\mathcal{R}$, which will ultimately correspond to the diamond used in modal inclusion logic. 
For the remaining subformulas $\varphi$ of $\theta$, with the exception of inclusion atoms, the formula $\chi_{\varphi}$ is defined as follows. \begin{enumerate} \item $\chi_{\varphi\wedge\psi}\ \dfn\ [E]\bigl(\ (p_{\varphi\wedge\psi}\ \leftrightarrow\ p_\varphi)\ \wedge\ (p_{\varphi\wedge\psi} \leftrightarrow p_\psi)\ \bigr)$ \item $\chi_{\varphi\vee\psi}\ \dfn\ [E]\bigl(\ p_{\varphi\vee\psi}\ \leftrightarrow\ (p_\varphi\vee p_\psi)\ \bigr)$ \item $\chi_{\Box\varphi}\ \dfn\ [E]\bigl(\ (p_{\Box\varphi}\ \rightarrow\ [R] p_{\varphi})\ \wedge\ (p_{\varphi} \ \rightarrow\ \langle R^{-1}\rangle p_{\Box\varphi})\ \bigr)$ \item $\chi_{\Diamond\varphi}\ \dfn\ [E]\bigl(\ (p_{\Diamond\varphi}\ \rightarrow\ \langle R\rangle p_{\varphi})\ \wedge\ (p_{\varphi} \ \rightarrow\ \langle R^{-1}\rangle p_{\Diamond\varphi})\ \bigr)$ \end{enumerate} We then define the formulas $\chi_{\alpha}$, where $\alpha\in\mathit{SUB}(\theta)$ is an inclusion atom. We appoint a fresh binary relation $R_{\alpha}$ for each inclusion atom in $\theta$. Assume $\alpha$ denotes the inclusion atom $p_1\cdots p_k\subseteq q_1\cdots q_k$. We define \[ \begin{array}{ll} \chi_{\alpha}^+\ &\dfn\ \smallskip \bigwedge\limits_{i\, \in\, \{1,\dots,k\}} [E]\bigl(\, (p_{\alpha}\wedge p_i)\,\rightarrow\, \langle R_{\alpha}\rangle( p_{\alpha}\wedge q_i)\, \bigr),\\ \chi_{\alpha}^-\ &\dfn\ \smallskip \bigwedge\limits_{i\, \in\, \{1,\dots,k\}} [E]\bigl(\, (p_{\alpha}\wedge \neg p_i)\,\rightarrow\, \langle R_{\alpha}\rangle( p_{\alpha}\wedge \neg q_i)\, \bigr),\\ \chi_{\alpha}\, &\dfn\, \chi_{\alpha}^+\wedge \chi_{\alpha}^-\ \wedge\, \bigwedge\limits_{i\, \in\, \{1,\dots,k\}}[E]\bigl(\, \langle R_{\alpha}\rangle q_i \rightarrow [R_{\alpha}] q_i\, \bigr).
\end{array} \] Finally, we define $\text{ }$ $\varphi_{\theta}\ \dfn\ p_{\theta}\ \wedge \bigwedge\limits_{\varphi\, \in\, \mathit{SUB}(\theta)}\chi_{\varphi}\, .$ \begin{theorem} The satisfiability and finite satisfiability problems for modal inclusion logic with lax semantics are in \EXPTIME. \end{theorem} \begin{proof} We will show that any formula $\theta$ of modal inclusion logic is satisfiable iff its translation $\varphi_{\theta}$ is. Furthermore, $\theta$ is satisfiable over a domain $W$ iff $\varphi_{\theta}$ is satisfiable over $W$, whence we also get the desired result for finite satisfiability; $\mathcal{L}$ has the finite model property since it clearly translates to two-variable logic via a simple extension of the \emph{standard translation} (see \cite{blackburn} for the standard translation). Let $M = (W,R,V)$ be a Kripke model. Let $I(\theta)\subseteq\mathit{SUB}(\theta)$ be the set of inclusion atoms in $\theta$. Assume that $M,X\modelslax\theta$, where $X$ is a nonempty team. We next define a multimodal Kripke model $N\, \dfn\, (W,R,\{R_{\alpha}\}_{\alpha\, \in\, \mathit{I}(\theta)}, V\cup U),$ where $U\colon\{\, p_{\varphi}\ |\ \varphi\in\mathit{SUB}{(\theta)}\}\rightarrow\mathcal{P}(W)$ extends the valuation function $V$. Define $U(p_{\theta}) = X$. Thus we have $M,U(p_{\theta})\modelslax\theta$. Working from the root towards the leaves of the parse tree of $\theta$, we next interpret the remaining predicates $p_{\varphi}$ inductively such that the condition $M, U(p_{\varphi})\modelslax\varphi$ is maintained. Assume $U(p_{\psi\wedge\psi'})$ has been defined. We define $U(p_{\psi}) = U(p_{\psi'}) = U(p_{\psi\wedge\psi'})$. As $M,U(p_{\psi\wedge\psi'})\modelslax\psi\wedge\psi'$, we have $M,U(p_{\psi})\modelslax\psi$ and $M,U(p_{\psi'})\modelslax\psi'$. Assume then that $U(p_{\psi\vee\psi'})$ has been defined. 
Thus there exist sets $S$ and $S'$ such that $M,S\modelslax\psi$ and $M,S'\modelslax\psi'$, and furthermore, $S\cup S' = U(p_{\psi\vee\psi'})$. We define $U(p_{\psi}) = S$ and $U(p_{\psi'}) = S'$. Consider then the case where $U(p_{\Diamond\varphi})$ has been defined. Call $T\dfn U(p_{\Diamond\varphi})$. As $M,T\modelslax\Diamond\varphi$, there exists a set $T\hspace{0.4mm} '\subseteq W$ such that each point in $T$ has an $R$-successor in $T\hspace{0.4mm}'$, and each point in $T\hspace{0.4mm}'$ has an $R$-predecessor in $T$, and furthermore, $M,T\hspace{0.4mm}'\modelslax\varphi$. We set $U(p_{\varphi}) \dfn T\hspace{0.4mm}'$. Finally, in the case for $p_{\Box\varphi}$, the set $U(p_{\varphi})$ is defined to be the set of points that have an $R$-predecessor in $U(p_{\Box{\varphi}})$. We have now fixed an interpretation for each of the predicates $p_{\varphi}$. The relations $R_{\alpha}$, where $\alpha$ is an inclusion atom, remain to be interpreted. Let $p_1\cdots p_k\subseteq q_1\cdots q_k$ be an inclusion atom in $\theta$, and denote this atom by $\alpha$. Call $T \dfn U(p_{\alpha})$. Let $u\in T$. Since $M,T\modelslax\alpha$, there exists a point $v\in T$ such that for each $i\in\{1,\dots,k\}$, $u\in V(p_i)$ iff $v\in V(q_i)$. Define the pair $(u,v)$ to be in $R_{\alpha}$. In this fashion, consider each point $u$ in $T$ and find exactly one corresponding point $v$ for $u$, and put the pair $(u,v)$ into $R_{\alpha}$. This fixes the interpretation of $R_{\alpha}$. Let $w\in X = U(p_{\theta})$. Recalling how the sets $U(p_{\varphi})$ were defined, it is now routine to check that $N,w\Vdash\varphi_{\theta}$. We then consider the converse implication of the current theorem. Thus we assume that $N,w\Vdash\varphi_{\theta}$, where $N$ is some multimodal Kripke model in the signature of $\varphi_{\theta}$ and $w$ a point in the domain of $N$. We let $W$ denote the domain and $V$ the valuation function of $N$. 
For each $\varphi\in\mathit{SUB}(\theta)$, define the team $X_{\varphi} \dfn V(p_{\varphi})$. We will show by induction on the structure of $\theta$ that for each $\varphi\in\mathit{SUB}(\theta)$, we have $N,X_{\varphi}\modelslax\varphi$. Once this is done, it is clear that $M,X_{\theta}\modelslax\theta$, where $M$ is the restriction of $N$ to the signature of $\theta$, and we have $X_{\theta}\not=\emptyset$. Now recall the definition of the formulas $\chi_{\varphi}$, where $\varphi\in\mathit{SUB}(\theta)$. Let $p\in\mathit{SUB}(\theta)$. It is clear that $N,X_p\modelslax p$, since $N,w\Vdash \chi_p$. Similarly, we infer that $N,X_{\neg q}\modelslax\neg q$ for $\neg q\in\mathit{SUB}(\theta)$. Consider then a subformula $p_1\cdots p_k\subseteq q_1\cdots q_k$ of $\theta$. Denote this inclusion atom by $\alpha$. Consider a point $u\in X_{\alpha}$. If $u$ satisfies $p_i$ for some $i\in\{1,\dots,k\}$, then we infer that since $N,w\Vdash\chi_{\alpha}^+$, there exists a point $v_i\in X_{\alpha}$ that satisfies $q_i$. Similarly, if $u$ satisfies $\neg p_j$, we infer that since $N,w\Vdash\chi_{\alpha}^-$, there exists a point $v_j\in X_{\alpha}$ that satisfies $\neg q_j$. To conclude that $N,X_{\alpha}\modelslax \alpha$, it suffices to show that all such points $v_i$ and $v_j$ can be chosen such that $v_i = v_j$ for all $i,j\in\{1,\dots,k\}$. This follows due to the third conjunct of $\chi_{\alpha}$. Having established the basis of the induction, the rest of the argument is straightforward. We consider explicitly only the case where the subformula under consideration is $\Diamond\varphi$. Here we simply need to argue that for each $u\in X_{\Diamond\varphi}$, there exists a point $v\in X_{\varphi}$ such that $uRv$, and for each $u'\in X_{\varphi}$, there exists a point $v'\in X_{\Diamond\varphi}$ such that $v'Ru'$.
This follows directly, since $N,w\Vdash\chi_{\Diamond\varphi}$.\qed \end{proof} \subsection{Lower bound for lax semantics} In this section we prove that the satisfiability problem of $\Minc$ with lax semantics, $\MinclaxSAT$, is hard for $\EXPTIME$. We do this by reducing to it the succinct version of the following $\P$-hard problem, which is closely related to the problem PATH SYSTEMS \cite[p.~171]{greenlaw}. \begin{definition} Let $\mathrm{PER}$ be the following problem: An instance of $\mathrm{PER}$ is a structure $\mA=(A,S)$ with $A=\{1,\ldots,n\}$ and $S\subseteq A^3$. A subset $P$ of $A$ is \emph{$S$-persistent} if it satisfies the condition $(*)$\quad if $i\in P$, then there are $j,k\in P$ such that $(i,j,k)\in S$. $\mA$ is a positive instance if $n\in P$ for some $S$-persistent set $P\subseteq A$. \end{definition} It is well known that structures $(A,S)$ as above can be represented in a succinct form by using Boolean circuits. Namely, if $C$ is a Boolean circuit with $3l$ input gates, then it defines a structure $\mA_C=(A_C,S_C)$ given below. We use here the notation $\sharp(a_1,\ldots,a_l)$ for the natural number whose binary representation is $(a_1,\ldots,a_l)$. Let $A_C=\{1,\dots,2^l\}$, and for all $i,j,k\in A_C$, let $(i,j,k)\in S_C$ if and only if $C$ accepts the input tuple $(a_1,\ldots,a_l,b_1,\ldots,b_l,c_1,\ldots,c_l)\in\{0,1\}^{3l}$, where $i=\mathrm{\sharp}(a_1,\ldots,a_l)$, $j=\mathrm{\sharp}(b_1,\ldots,b_l)$ and $k=\mathrm{\sharp}(c_1,\ldots,c_l)$. We say that $C$ is a \emph{succinct representation} of $\mA_C$. \begin{definition} The succinct version of $\mathrm{PER}$, $\mathrm{S\text-PER}$, is the following problem: An instance of $\mathrm{S\text-PER}$ is a circuit $C$ with $3l$ input gates. $C$ is a positive instance if $\mA_C$ is a positive instance of $\mathrm{PER}$. \end{definition} \begin{proposition}\label{circval} $\mathrm{S\text-PER}$ is $\EXPTIME$-hard with respect to $\PSPACE$ reductions. \end{proposition} \begin{proof} (Idea.)
The succinct version of the CIRCUIT VALUE problem is polynomial space reducible to $\mathrm{S\text-PER}$. Since succinct CIRCUIT VALUE is known to be \EXPTIME-complete (see \cite[ Section 20]{PapadimitriouBook}), the claim follows. For the details of the proof, see the appendix. \qed \end{proof} We will next show that $\mathrm{S\text-PER}$ is polynomial time reducible to the satisfiability problem of $\Minc$ with lax semantics, and hence the latter is also $\EXPTIME$-hard. In the proof we use the following notation: If $T$ is a team and $p_1,\ldots, p_n$ are proposition symbols, then $T(p_1,\ldots,p_n)$ is the set of all tuples $(a_1,\ldots,a_n)\in\{0,1\}^n$ such that for some $w\in T$, $ a_t=1\iff w\in V(p_t)\text{ for }t\in\{1,\ldots,n\}. $ Note that the semantics of inclusion atoms can now be expressed as $$ M,T\models p_1\cdots p_n \subseteq q_1\cdots q_n \iff T(p_1,\ldots,p_n)\subseteq T(q_1,\ldots,q_n). $$ \begin{theorem}\label{laxlower} The satisfiability and finite satisfiability problems for \Minc with lax semantics are hard for \EXPTIME with respect to $\PSPACE$ reductions. \end{theorem} \begin{proof} Let $C$ be a Boolean circuit with $3l$ input gates. Let $g_1,\ldots,g_m$ be the gates of $C$, where $g_1,\ldots, g_{3l}$ are the input gates and $g_m$ is the output gate. We fix a distinct Boolean variable $p_i$ for each gate $g_i$. Let $\Phi$ be the set $\{p_1,\ldots,p_m\}$ of proposition symbols. We define for each $i\in\{3l+1,\ldots,m\}$ a formula $\theta_i\in\Minc(\Phi)$ that describes the correct operation of the gate $g_i$: $$ \theta_i=\begin{cases} p_i\leftrightarrow \lnot p_j& \text{if $g_i$ is a NOT gate with input $g_j$}\\ p_i\leftrightarrow (p_j\land p_k)& \text{if $g_i$ is an AND gate with inputs $g_j$ and $g_k$}\\ p_i\leftrightarrow (p_j\lor p_k) &\text{if $g_i$ is an OR gate with inputs $g_j$ and $g_k$} \end{cases} $$ Let $\psi_C$ be the formula $\bigl( \bigwedge_{3l+1\le i\le m}\theta_i \bigr)\;\land \, p_m$. 
Thus, $\psi_C$ essentially says that the truth values of $p_i$, $1\le i\le m$, match an accepting computation of $C$. Now we can define a formula $\varphi_C$ of $\Minc(\Phi)$ which is satisfiable if and only if $C$ is a positive instance of $\mathrm{S\text-PER}$. For the sake of readability, we denote here the variables corresponding to the input gates $g_{l+1},\ldots,g_{2l}$ by $q_1,\ldots,q_l$. Similarly, we denote the variables $p_{2l+1},\ldots,p_{3l}$ by $r_1,\ldots,r_l$. $$ \varphi_C:= \psi_C \land\; q_1\cdots q_l\subseteq p_1\cdots p_l \; \land\; r_1\cdots r_l\subseteq p_1\cdots p_l\; \land\; p_m\cdots p_m \subseteq p_1\cdots p_l. $$ Note that $\varphi_C$ can clearly be constructed from the circuit $C$ in polynomial time. Assume first that $\varphi_C$ is satisfiable. Thus there is a Kripke model $M=(W,R,V)$ and a nonempty team $T$ of $M$ such that $M,T\modelslax\varphi_C$. Consider the model $\mA_C=(A_C,S_C)$ that corresponds to the circuit $C$. We define a subset $P$ of $A_C$ as follows: $P:=\{\mathrm{\sharp}(a_1,\ldots,a_l)\mid (a_1,\ldots,a_l)\in T(p_1,\ldots,p_l)\}.$ Observe first that since $M,T\modelslax p_m$ and $M,T\modelslax p_m\cdots p_m \subseteq p_1\cdots p_l$, $(1,\ldots,1)\in T(p_1,\ldots,p_l)$ and hence $2^l=\mathrm{\sharp}(1,\ldots,1)\in P$. Thus, it suffices to show that $P$ is $S_C$-persistent. To prove this, assume that $i=\mathrm{\sharp}(a_1,\ldots, a_l)\in P$. Then there is a state $w\in T$ such that $w\in V(p_t) \iff a_t=1\text{\quad for }1\le t\le l.$ Define now $b_t,c_t\in \{0,1\}$, $1\le t\le l$, by the condition $$ b_t=1\iff w\in V(q_t)\text{\quad and \quad} c_t=1\iff w\in V(r_t). $$ As $M,T\modelslax \psi_C$, it follows from flatness that $M,w\Vdash\psi_C$. By the definition of $\psi_C$, this means that the circuit $C$ accepts the input tuple $ (a_1,\ldots,a_l,b_1,\ldots,b_l,$ $c_1,\ldots,c_l). $ Thus, $(i,j,k)\in S_C$, where $j=\mathrm{\sharp}(b_1,\ldots,b_l)$ and $k=\mathrm{\sharp}(c_1,\ldots,c_l)$. 
We still need to show that $j,k\in P$. To see this, note that since $M,T\modelslax q_1\cdots q_l\subseteq p_1\cdots p_l$, there exists $w'\in T$ such that $$ w'\in V(p_t)\iff w\in V(q_t) \iff b_t=1\text{\quad for }1\le t\le l. $$ Thus, $(b_1,\ldots,b_l)\in T(p_1,\ldots,p_l)$, whence $j\in P$. Similarly we see that $k\in P$. To prove the other implication, assume that $C$ is a positive instance of the problem $\mathrm{S\text-PER}$. Then there is an $S_C$-persistent set $P\subseteq A_C$ such that $2^l\in P$. We let $M=(W,R,V)$ be the Kripke model and $T$ the team of $M$ such that \begin{itemize} \item $T=W$ is the set of all tuples $(a_1,\ldots,a_m)\in \{0,1\}^m$ that correspond to an accepting computation of $C$ and for which $\mathrm{\sharp}(a_1,\ldots,a_l),$ $\mathrm{\sharp}(a_{l+1},\ldots,a_{2l}),$ $\mathrm{\sharp}(a_{2l+1},\ldots,a_{3l})\in P$, \item $R=\emptyset$, and $V(p_t)=\{(a_1,\ldots,a_m)\in W\mid a_t=1\}$ for $1\le t\le m$. \end{itemize} We will now show that $M,T\modelslax\varphi_C$, whence $\varphi_C$ is satisfiable. Note first that $M,T\modelslax\psi_C$, since by the definition of $T$ and $V$, for any $w\in T$, the truth values of $p_i$ in $w$ correspond to an accepting computation of $C$. To prove $M,T\modelslax q_1\cdots q_l\subseteq p_1\cdots p_l$, assume that $(b_1,\ldots,b_l)\in T(q_1,\ldots,q_l)$. Then $i:=\mathrm{\sharp}(b_1,\ldots,b_l)\in P$, and since $P$ is $S_C$-persistent, there are $j,k\in P$ such that $(i,j,k)\in S_C$. Thus, there is a tuple $(a_1,\ldots, a_m)\in\{0,1\}^m$ corresponding to an accepting computation of $C$ such that $(a_1,\ldots,a_l)=(b_1,\ldots,b_l)$, $j=\mathrm{\sharp}(a_{l+1},\ldots,a_{2l})$ and $k=\mathrm{\sharp}(a_{2l+1},\ldots,a_{3l})$. This means that $(a_1,\ldots,a_m)$ is in $T$, and hence $(b_1,\ldots,b_l)\in T(p_1,\ldots,p_l)$. The claim $M,T\modelslax r_1\cdots r_l\subseteq p_1\cdots p_l$ is proved in the same way. Note that since $M,T\modelslax p_m$, we have $T(p_m,\ldots,p_m)=\{(1,\ldots,1)\}$.
Furthermore, since $2^l=\mathrm{\sharp}(1,\ldots,1)\in P$ and $P$ is $S_C$-persistent, there is an element $(a_1,\ldots,a_m)\in T$ such that $(a_1,\ldots,a_l)=(1,\ldots,1)$. Thus, we see that $(1,\ldots,1)\in T(p_1,\ldots, p_l)$, and consequently $M,T\modelslax p_m\cdots p_m \subseteq p_1\cdots p_l$. \qed \end{proof} \begin{corollary} The satisfiability and finite satisfiability problems of modal inclusion logic with lax semantics are $\EXPTIME$-complete with respect to $\PSPACE$ reductions. \end{corollary} Note that the formula $\varphi_C$ used in the proof of Theorem \ref{laxlower} is in \emph{propositional inclusion logic}, i.e., it does not contain any modal operators. Thus, our proof shows that the satisfiability problem of propositional inclusion logic is already $\EXPTIME$-hard. Naturally, this problem is also in $\EXPTIME$, since propositional inclusion logic is a fragment of $\Minc$. \begin{corollary} The satisfiability and finite satisfiability problems of propositional inclusion logic with lax semantics are $\EXPTIME$-complete with respect to $\PSPACE$ reductions. \end{corollary} \subsection{Upper bound for strict semantics}\label{sec:upperBoundStrict} In this section we show that the satisfiability and finite satisfiability problems for $\Minc$ with strict semantics are in $\NEXPTIME$. The proof is a simple adaptation of the upper bound argument for lax semantics, but uses \emph{two-variable logic with counting}, $\mathrm{FOC}^2$, which has $\NEXPTIME$-complete satisfiability and finite satisfiability problems \cite{pratthartmann} (but no finite model property). Let $\theta$ be a formula of $\Minc$. The equisatisfiable translation of $\theta$ is obtained from the formula $\varphi_{\theta}$, which we defined when considering lax semantics. It is clear that $\varphi_{\theta}$ translates via a simple extension of the \emph{standard translation} into $\mathrm{FOC}^2$; see \cite{blackburn} for the standard translation of modal logic. 
Let $t(\varphi_{\theta})$ denote the $\mathrm{FOC}^2$-formula obtained by using the (extension of the) standard translation. For each $\varphi\in\mathit{SUB}(\varphi_{\theta})$, let $t(\chi_{\varphi})$ denote the translation of the subformula $\chi_{\varphi}$ of $\varphi_{\theta}$; see the argument for lax semantics for the definition of the formulas $\chi_{\varphi}$. The only thing we now need to do is to modify the formulas $t(\chi_{\Diamond\varphi})$ and $t(\chi_{\varphi\vee\psi})$. In the case of $t(\chi_{\varphi\vee\psi})$, we simply add a conjunct stating that the unary predicates $p_{\varphi}$ and $p_{\psi}$ are interpreted as disjoint sets: $\neg\exists x( p_{\varphi}(x)\wedge p_{\psi}(x))$. To modify the formulas $t(\chi_{\Diamond\varphi})$, we appoint a novel binary relation $R_{\Diamond\varphi}$ for each formula $\Diamond\varphi\in\mathit{SUB}(\theta)$. We then define the formula $\beta$, which states that $R_{\Diamond\varphi}$ is a function from the interpretation of $p_{\Diamond\varphi}$ onto the interpretation of $p_{\varphi}$. \begin{multline*} \beta\dfn \forall{x}\bigl( p_{\Diamond\varphi}(x) \rightarrow \exists^{=1} y (R_{\Diamond\varphi}xy \wedge p_{\varphi}(y))\bigr) \wedge \forall x\forall y\bigl( R_{\Diamond\varphi}xy\rightarrow (p_{\Diamond\varphi}(x)\wedge p_{\varphi}(y))\bigr)\\ \wedge \forall y\bigl( p_{\varphi}(y)\rightarrow \exists x(p_{\Diamond\varphi}(x) \wedge R_{\Diamond\varphi}xy)\bigr). \end{multline*} Define $\beta' \dfn \forall x\forall y\bigl(R_{\Diamond\varphi}xy \rightarrow Rxy\bigr)$, where $R$ is the accessibility relation of modal inclusion logic. The conjunction $\beta\wedge\beta'$ is the desired modification of $t(\chi_{\Diamond\varphi})$. The modification of $t(\varphi_\theta)$, using the modified versions of $t(\chi_{\varphi\vee\psi})$ and $t(\chi_{\Diamond\varphi})$, is the desired $\mathrm{FOC}^2$-formula equisatisfiable with $\theta$.
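On a finite structure, the property that $\beta\wedge\beta'$ axiomatizes can be checked directly: the interpretation of $R_{\Diamond\varphi}$ must be the graph of a function from the interpretation of $p_{\Diamond\varphi}$ onto that of $p_{\varphi}$, contained in $R$. A small sketch (all names hypothetical, purely illustrative):

```python
# Finite-structure check of the property expressed by beta and beta':
# R_dia (interpretation of R_{<>phi}) is the graph of a function from
# P (interpretation of p_{<>phi}) onto Q (interpretation of p_phi),
# and the graph lies inside the accessibility relation R.

def is_onto_choice_function(R_dia, P, Q, R):
    in_bounds = all(u in P and v in Q for (u, v) in R_dia)   # 2nd conjunct of beta
    functional = all(len({v for (u, v) in R_dia if u == x}) == 1
                     for x in P)                             # 1st conjunct of beta
    onto = all(any((u, y) in R_dia for u in P) for y in Q)   # 3rd conjunct of beta
    return in_bounds and functional and onto and R_dia <= R  # last: beta'

R = {(1, 2), (1, 3), (4, 3)}
assert is_onto_choice_function({(1, 2), (4, 3)}, {1, 4}, {2, 3}, R)
assert not is_onto_choice_function({(1, 2), (1, 3)}, {1}, {2, 3}, R)
```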
The proof of the following theorem is practically identical to the corresponding argument for lax semantics. \begin{theorem}\label{thm:upperBoundStrict} The satisfiability and finite satisfiability problems for $\Minc$ with strict semantics are in \NEXPTIME. \end{theorem} \subsection{Lower bound for strict semantics} \begin{theorem} The satisfiability and finite satisfiability problems for $\Minc$ with strict semantics are \NEXPTIME-hard. \end{theorem} \begin{proof} We will provide a chain of reductions from \emph{Dependence-QBF-Validity} (in short \DQBFval) to \emph{Inclusion-QBF-Validity} (in short \IQBFval), and finally to satisfiability of \Minc with strict semantics. Peterson et~al. \cite{par01} introduced a so-to-speak dependence version of QBF by extending the usual QBF syntax to allow stating on which universally quantified propositions an existentially quantified proposition solely depends. Instances of the problem are of the form $(\forall p_1)(\exists q_1\backslash P_1)\cdots(\forall p_k)(\exists q_k\backslash P_k)\ \varphi \quad (\star)$, where each set $P_i$ is a subset of the universally quantified propositions $\{p_1,\dots,p_i\}$ superordinate to $q_i$, and $\varphi$ is a propositional logic formula in the variables $\{p_1,\dots,p_k\}\cup\{q_1,\dots,q_k\}$. The set $P_i$ indicates that the choice for the value of $q_i$ is given by a Boolean function that takes as inputs only the values of the variables in $P_i$ (see \cite{par01} for the full details). By well-known standard arguments in the field of team semantics, it is easy to show that the formula of Eqn.\ $(\star)$ can be written in the alternative form (where $\overline{p}_i$ lists the variables in $P_i$) \begin{align} (\forall p_1)(\exists q_1)\cdots(\forall p_k)(\exists q_k) \bigl(\varphi\wedge\bigwedge\limits_{i\in\{1,\dots,k\}}\mathrm{dep}(\overline{p}_i,q_i)\bigr),\label{joukoformulation} \end{align} with the following semantics (where $M$ is a Kripke model and $T$ is a team).
\begin{itemize} \item $M,T\models \forall p\,\psi$ iff $M',T^p\models\psi$, where $T^p$ is obtained from $T$ by simultaneously replacing each $w\in T$ by two new worlds $u,v$ that agree with $w$ on all propositions other than $p$, and the points $u,v$ disagree with each other on $p$. $M'$ is obtained from $M$ by modifying the domain $W$ of $M$ to the new domain $W' = T^p \cup (W\setminus T)$, and modifying the valuation of $M$ to a new one that agrees with the specification of $T^p$; outside $T^p$ the new valuation agrees with the old one. The accessibility relation does not play a role here. \item $M,T\models \exists p\, \psi$ iff $M',T_p\models\psi$, where $T_p$ is obtained from $T$ by simultaneously replacing each $w\in T$ by a new world $u$ that agrees with $w$ on propositions other than $p$, and may or may not agree with $w$ on $p$. Similarly to the case above, $M'$ is obtained from $M$ by modifying the domain $W$ of $M$ to the new domain $W' = T_p \cup (W\setminus T)$, and modifying the valuation of $M$ to a new one that agrees with the specification of $T_p$; outside $T_p$ the new valuation agrees with the old one. The accessibility relation does not play a role here. \item The connectives $\vee$ and $\wedge$ are interpreted exactly as in the case of modal inclusion logic using strict semantics. Literals $p$, $\neg p$ are also interpreted as in modal inclusion logic. \item $M,T\models\mathrm{dep}(p_1,\dots,p_k,q)$ iff every pair of worlds in $T$ that agree on the truth values of the propositions $p_1,\dots,p_k$ also agree on the value of $q$. \end{itemize} Our formulation of the \DQBFval problem of Peterson et~al. \cite{par01}, with inputs in the alternative form of Eqn.~\eqref{joukoformulation}, is equivalent to the original problem. Peterson et~al.\ showed that their problem lifts the computational complexity from $\PSPACE$-completeness (for standard quantified Boolean formula validity) to $\NEXPTIME$-completeness. 
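For concreteness, the team operation $T^p$ and the dependence atom above can be mimicked on finite propositional teams. The following sketch is only illustrative and uses our own toy encoding, with teams as sets of frozenset-assignments:

```python
from itertools import product

# Toy encoding (ours, not the paper's formalism): a team is a set of
# assignments, each a frozenset of (proposition, bool) pairs.

def duplicate(team, p):
    """T^p: replace each world by two copies that disagree on p."""
    return {frozenset({(q, b) for (q, b) in w if q != p} | {(p, val)})
            for w in team for val in (True, False)}

def dep(team, ps, q):
    """dep(ps, q): worlds agreeing on all of ps also agree on q."""
    for w1, w2 in product(team, repeat=2):
        d1, d2 = dict(w1), dict(w2)
        if all(d1[p] == d2[p] for p in ps) and d1[q] != d2[q]:
            return False
    return True
```

For instance, duplicating a singleton team on $p$ yields a two-world team, and a team containing two worlds that agree on $p$ but disagree on $q$ violates $\mathrm{dep}(p,q)$.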
Inclusion-QBF (\IQBF) is a language obtained from our formulation of Dependence-QBF (\DQBF) by translating the expressions $\mathrm{dep}(p_1,\dots,p_k,q)$ into inclusion atoms, as we describe next. Inspired by \citeauthor{ghk13} \cite{ghk13}, we observe that inclusion atoms can simulate formulas $\mathrm{dep}(p_1,\dots,p_k,q)$, as the following example demonstrates: $\forall p\forall q\exists r(\mathrm{dep}(q,r)\land\varphi)$ is equivalent to $\forall p\forall q\exists r(\forall s(sqr\subseteq pqr)\land\varphi)$, where $\varphi$ is a formula of propositional logic. This can be generalized to work for expressions with conjunctions of atoms $\mathrm{dep}(p_1,\dots,p_k,q)$ for arbitrary $k$. Now, for the last step, we need to explain how \IQBFval reduces to $\MincstrictSAT$. This is a slight modification of Ladner's standard proof of $\PSPACE$-hardness of plain modal logic via a reduction from QBF validity \cite{lad77}. The idea is to enforce a complete assignment tree. Further, one uses clause propositions which are true iff the corresponding literal holds. Let us denote the formula which enforces the described substructure by $\varphi_{\text{struc}}$ (for details, see \cite{lad77}). The final formula is obtained from an $\IQBFval$ instance $\exists r_{1}\forall r_{2}\cdots\Game r_{n}(\varphi\land\chi)$, where $\varphi$ is a formula in conjunctive normal form and $\chi$ is the conjunction of the inclusion atoms (stemming from the translation above); the final formula is then of the form $\varphi_{\text{struc}}\land\Diamond\Box\cdots\triangle(\varphi\land\chi)$, where $\triangle=\Box$ if $\Game=\forall$ and $\triangle=\Diamond$ if $\Game=\exists$. Let $f$ denote this translation; it can be computed in polynomial time. Then it is easy to verify that $\varphi\in\IQBFval$ iff $f(\varphi)\in\MincstrictSAT$. It is straightforward to observe that this also covers the case of finite satisfiability. 
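The propositional inclusion atoms used above have a simple team-semantic reading: $p_1\cdots p_m\subseteq q_1\cdots q_m$ holds in a team if every value tuple taken by the left-hand propositions also occurs as a value tuple of the right-hand ones. A minimal sketch of this check (our own encoding, teams as lists of dicts):

```python
def inclusion(team, left, right):
    """Propositional inclusion atom left ⊆ right over a team of
    assignments (dicts mapping proposition names to booleans):
    each value tuple of `left` must occur as a value tuple of `right`."""
    rights = {tuple(w[p] for p in right) for w in team}
    return all(tuple(w[p] for p in left) in rights for w in team)
```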
\qed \end{proof} \begin{corollary} The satisfiability and finite satisfiability problems of modal inclusion logic with strict semantics are $\NEXPTIME$-complete. \end{corollary} \section{Conclusion} We have compared the strict and lax variants of team semantics from the perspective of satisfiability problems for modal inclusion logic \Minc. Interestingly, the problems differ in complexity. Strict semantics leads to \NEXPTIME-completeness, while lax semantics gives completeness for \EXPTIME. For the journal version we plan to include a stronger polynomial-time reduction result for the $\EXPTIME$ lower bound of \MinclaxSAT. In the future it will be interesting to study model checking problems for \Minc under strict and lax semantics. Also, the complexity of validity problems for \Minc and, related to this, proof-theoretic properties of the logic remain to be investigated.\bigskip \noindent\textbf{Acknowledgements.} The authors thank the anonymous referees for their comments. The third author is supported by DFG grant ME 4279/1-1. The second author acknowledges support from Jenny and Antti Wihuri Foundation. \bibliographystyle{plainnat}
\section{Introduction}\label{SEC:introduction} \par Confidential and non-confidential messages are often transmitted over the same channel. However, the underlying principles for constructing codes without and with secrecy are different. Without secrecy constraints, codes should use all available channel resources to reliably convey information to the destinations. The presence of confidential messages, on the other hand, requires that some resources are allocated to preserve secrecy. We study relationships between the coding strategies and the fundamental limits of communication with and without secrecy. To this end, we incorporate secret and non-secret transmissions over a two-user broadcast channel (BC) by considering the BC with privacy leakage constraints (Fig. \ref{FIG:general_BC_leakage}). \begin{figure}[t!] \begin{center} \begin{psfrags} \psfragscanon \psfrag{I}[][][1]{$\mspace{-30mu}(M_0,M_1,M_2)$} \psfrag{J}[][][1]{\ \ \ \ \ \ \ \ \ Encoder} \psfrag{K}[][][1]{\ \ \ \ \ $X^n$} \psfrag{T}[][][1]{\ \ \ \ \ \ \ \ \ \ \ Channel} \psfrag{M}[][][1]{\ \ \ \ $Y_1^n$} \psfrag{N}[][][1]{\ \ \ \ $Y_2^n$} \psfrag{O}[][][1]{\ \ \ \ \ \ \ \ \ Decoder 1} \psfrag{P}[][][1]{\ \ \ \ \ \ \ \ \ Decoder 2} \psfrag{Q}[][][1]{\ \ \ \ \ \ \ \ \ \ \ $(\hat{M}_0^{(1)},\hat{M}_1)$} \psfrag{R}[][][1]{\ \ \ \ \ \ \ \ \ \ \ $(\hat{M}_0^{(2)},\hat{M}_2)$} \psfrag{X}[][][1]{\ \ \ \ \ \ \ \ \ \ \ \ $Q_{Y_1,Y_2|X}$} \psfrag{V}[][][1]{\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ $I(M_2;Y_1^n)\leq nL_2$} \psfrag{U}[][][1]{\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ $I(M_1;Y_2^n)\leq nL_1$} \includegraphics[scale = .45]{bc_leakage.eps} \caption{BC with privacy leakage constraints.} \label{FIG:general_BC_leakage} \psfragscanoff \end{psfrags} \end{center} \end{figure} \par Information theoretic secrecy was introduced by Shannon \cite{Shannon_Secrecy1949} who studied communication between a source and a receiver in the presence of an eavesdropper. 
Wyner modeled secret communication over noisy channels when he introduced the degraded wiretap channel (WTC) and derived its secrecy-capacity region \cite{Wyner_Wiretap1975}. Csisz{\'a}r and K{\"o}rner \cite{Csiszar_Korner_BCconfidential1978} extended Wyner's result to a general BC where the source also transmits a common message to both users. The development of wireless communication, whose inherent open nature makes it vulnerable to security attacks, has inspired a growing interest in understanding the fundamental limits of secure communication. \par Multiuser settings with secrecy were extensively treated in the literature. Broadcast and interference channels with two confidential messages were studied in \cite{BC_Confidential_Yates2008}, where inner and outer bounds on the secrecy-capacity region of both problems were derived. The secrecy-capacity region for the semi-deterministic (SD) BC was established in \cite{Semi-det_BC_secrect_two2009}. The capacity region of a SD-BC where only the message of the stochastic user is kept secret from the deterministic user was derived in \cite{Semi-det_BC_secrect_one2009}. The opposite case, i.e., when the message of the deterministic user is confidential was solved in \cite{Goldfeld_Weak_Secrecy_ISIT2015}. Secret communication over multiuser channels was considered in \cite{Ulukus_Cooperative_RBC2011}, where the authors derive inner and outer bounds on the rate-equivocation region of the relay-BC (RBC) with one or two confidential messages. Gaussian multiple-input multiple-output (MIMO) BCs and WTCs were studied in \cite{Poor_Gaussian_MIMO_BC_Secrecy2009,Liu_Shamai_MIMOWTC2009,Poor_Shamai_Gaussian_MIMO_BC_Secrecy2010,Khitsi_MIMOWTC2010,Ulukus_Gaussian_Wiretap2011,Hassibi_MINOWTC2011}, while \cite{Ulukus_External_Eve2009,Bagherikaram_Gaussin_External_Eve2009,Piantanida_External_Eve2014} focused on BCs with an eavesdropper as an external entity from which all messages are kept secret. 
\par We study a two-user BC over which a common message for both users and a pair of private messages, each destined for a different user, are transmitted. A limited amount of rate of each private message may be leaked to the opposite receiver. The leaked rate is quantified as the normalized mutual information between the message of interest and the channel output sequence at the opposite user. Setting either leakage to zero or infinity reduces the problem to the case where the associated message is confidential or non-confidential, respectively. Thus, our problem setting captures as special cases four scenarios concerning secrecy, i.e., when both, either or neither of the private messages are secret. We derive novel inner and outer bounds on the leakage-capacity region of the BC. The bounds are tight for SD-BCs, physically degraded (PD) BCs, and BCs with a degraded message set, thus characterizing their leakage-capacity regions, which were not known before. Furthermore, we derive a condition for identifying the privacy leakage threshold values above which the inner bound saturates. \par Various past results are captured as special cases. By taking the leakage thresholds to infinity, our inner bound recovers Marton's inner bound with a common message \cite{Liang_Kramer_RBC2007}, which is optimal for every BC with a known capacity region. Making the leakage constraint inactive in our outer bound recovers the UVW-outer bound \cite{UVW_Outer2010} or the New-Jersey outer bound \cite{NJ_Outer2008}. These bounds are at least as good as previously known bounds (see \cite{Liang_PHD2005,Nair_ElGamal_Outer_Bound2007} and \cite{Nair_outer2008}). The leakage-capacity region of the SD-BC reduces to each of the regions in \cite{Semi-det_BC_secrect_two2009,Goldfeld_Weak_Secrecy_ISIT2015,Semi-det_BC_secrect_one2009} and \cite{GP_SemideterministicBC1980} by discarding the common message and choosing the leakage constraints appropriately. 
The capacity result also recovers the optimal regions for the BC with confidential messages \cite{Csiszar_Korner_BCconfidential1978} and the BC with a degraded message set (without secrecy) \cite{Korner_BC_DegradedMessageSet1977}. \par Our code construction splits each private message into a \emph{public} and a \emph{private} part. The public parts along with the common message constitute a public message that is decoded by both users, and therefore, each public part is leaked to the opposite receiver by default. The codebooks of the private parts are double-binned to allow joint encoding and to control the amount of rate leaked from each private part. The bin sizes are chosen to satisfy the total leakage constraints. Our coding scheme is essentially a Marton code with an additional layer of bins, whose sizes correspond to the amount of leakage; the larger these extra bins are, the smaller the leakage. The resulting achievable region is simplified using the Fourier-Motzkin elimination for information theory (FME-IT) software \cite{FME&ITIP}. The outer bound is established by using telescoping identities \cite{Kramer_telescopic2011}. A Blackwell BC (BWC) \cite{vanderMeulen_Blackwell1975,Gelfand_Blackwell1977} illustrates the results and visualizes the transition of the leakage-capacity region from the capacity region without secrecy to the secrecy-capacity regions for different secrecy scenarios. \par This paper is organized as follows. In Section \ref{SEC:preliminaries} we describe the BC with privacy leakage constraints. In Section \ref{SEC:results}, we state inner and outer bounds on the leakage-capacity region and characterize the leakage-capacity regions for the SD-BC, the BC with a degraded message set and the PD-BC. Section \ref{SEC:special_cases} discusses past results that are captured within our framework. In Section \ref{SEC:example} we study a BWC example and visualize the results, while Section \ref{SEC:proofs} provides proofs. 
Finally, Section \ref{SEC:summary} summarizes the main achievements and insights of this work. \section{Notations and Problem Definition}\label{SEC:preliminaries} \par We use the following notations. Given two real numbers $a,b$, we denote by $[a\mspace{-3mu}:\mspace{-3mu}b]$ the set of integers $\big\{n\in\mathbb{N}\big| \lceil a\rceil\leq n \leq\lfloor b \rfloor\big\}$. We define $\mathbb{R}_+=\{x\in\mathbb{R}|x\geq 0\}$. Calligraphic letters denote discrete sets, e.g., $\mathcal{X}$, while the cardinality of a set $\mathcal{X}$ is denoted by $|\mathcal{X}|$. $\mathcal{X}^n$ stands for the $n$-fold Cartesian product of $\mathcal{X}$. An element of $\mathcal{X}^n$ is denoted by $x^n=(x_1,x_2,\ldots,x_n)$, and its substrings as $x_i^j=(x_i,x_{i+1},\ldots,x_j)$; when $i=1$, the subscript is omitted. Whenever the dimension $n$ is clear from the context, vectors (or sequences) are denoted by boldface letters, e.g., $\mathbf{x}$. Let $\big(\Omega,\mathcal{F},\mathbb{P}\big)$ be a probability space, where $\Omega$ is the sample space, $\mathcal{F}$ is the $\sigma$-algebra and $\mathbb{P}$ is the probability measure. Random variables over $\big(\Omega,\mathcal{F},\mathbb{P}\big)$ are denoted by uppercase letters, e.g., $X$, with conventions for random vectors similar to those for deterministic sequences. Namely, $X_i^j$ represents the sequence of random variables $(X_i,X_{i+1},\ldots,X_j)$, while $\mathbf{X}$ stands for $X^n$. The probability of an event $\mathcal{A}\in\mathcal{F}$ is denoted by $\mathbb{P}(\mathcal{A})$, while $\mathbb{P}(\mathcal{A}\big|\mathcal{B}\mspace{2mu})$ denotes conditional probability of $\mathcal{A}$ given $\mathcal{B}$. We use $\mathds{1}_\mathcal{A}$ to denote the indicator function of $\mathcal{A}$. The set of all probability mass functions (PMFs) on a finite set $\mathcal{X}$ is denoted by $\mathcal{P}(\mathcal{X})$. 
PMFs are denoted by the capital letter $P$, with a subscript that identifies the random variable and its possible conditioning. For example, for two random variables $X$ and $Y$ we use $P_X$, $P_{X,Y}$ and $P_{X|Y}$ to denote, respectively, the marginal PMF of $X$, the joint PMF of $(X,Y)$ and the conditional PMF of $X$ given $Y$. In particular, $P_{X|Y}$ represents the stochastic matrix whose elements are given by $P_{X|Y}(x|y)=\mathbb{P}\big(X=x|Y=y\big)$. We omit subscripts if the arguments of the PMF are lowercase versions of the random variables. The support of a PMF $P$ and the expectation of a random variable $X$ are denoted by $\mbox{supp}(P)$ and $\mathbb{E}X$, respectively. For a discrete measurable space $(\Omega,\mathcal{F})$, a PMF $Q\in\mathcal{P}(\Omega)$ gives rise to a probability measure on $(\Omega,\mathcal{F})$, which we denote by $\mathbb{P}_Q$; accordingly, $\mathbb{P}_Q\big(\mathcal{A})=\sum_{\omega\in\mathcal{A}}Q(\omega)$, for every $\mathcal{A}\in\mathcal{F}$. For a sequence of random variables $X^n$ we also use the following: If the entries of $X^n$ are drawn in an independent and identically distributed (i.i.d.) manner according to $P_X$, then for every $\mathbf{x}\in\mathcal{X}^n$ we have $P_{X^n}(\mathbf{x})=\prod_{i=1}^nP_X(x_i)$ and we write $P_{X^n}(\mathbf{x})=P_X^n(\mathbf{x})$. Similarly, if for every $(\mathbf{x},\mathbf{y})\in\mathcal{X}^n\times\mathcal{Y}^n$ we have $P_{Y^n|X^n}(\mathbf{y}|\mathbf{x})=\prod_{i=1}^nP_{Y|X}(y_i|x_i)$, then we write $P_{Y^n|X^n}(\mathbf{y}|\mathbf{x})=P_{Y|X}^n(\mathbf{y}|\mathbf{x})$. We often use $Q_X^n$ or $Q_{Y|X}^n$ when referring to an i.i.d. sequence of random variables. The conditional product PMF $Q_{Y|X}^n$ given a specific sequence $\mathbf{x}\in\mathcal{X}^n$ is denoted by $Q_{Y|X=\mathbf{x}}^n$. 
The empirical PMF $\nu_{\mathbf{x}}$ of a sequence $\mathbf{x}\in\mathcal{X}^n$ is \begin{equation} \nu_{\mathbf{x}}(a)\triangleq\frac{N(a|\mathbf{x})}{n} \end{equation} where $N(a|\mathbf{x})=\sum_{i=1}^n\mathds{1}_{\{x_i=a\}}$. We use $\mathcal{T}_\epsilon^{n}(P_X)$ to denote the set of letter-typical sequences of length $n$ with respect to the PMF $P_X$ and the non-negative number $\epsilon$ \cite[Ch. 3]{Massey_Applied}, \cite{Orlitsky_Roche2001}, i.e., we have \begin{equation} \mathcal{T}_\epsilon^{n}(P_X)=\Big\{\mathbf{x}\in\mathcal{X}^n:\big|\nu_{\mathbf{x}}(a)-P_X(a)\big|\leq\epsilon P_X(a),\ \forall a\in\mathcal{X}\Big\}. \end{equation} \par The BC with privacy leakage constraints is illustrated in Fig. \ref{FIG:general_BC_leakage}. The channel has one sender and two receivers. The sender randomly chooses a triple $(m_0,m_1,m_2)$ of indices uniformly and independently from the set $\big[1:2^{nR_0}\big]\times\big[1:2^{nR_1}\big]\times\big[1:2^{nR_2}\big]$ and maps them to a sequence $\mathbf{x}\in\mathcal{X}^n$, which is the channel input. The sequence $\mathbf{x}$ is transmitted over a BC with transition probability $Q_{Y_1,Y_2|X}$. If the channel transition matrix factors as $\mathds{1}_{\{Y_1=f(X)\}}Q_{Y_2|X}$, for some function $f:\mathcal{X}\to\mathcal{Y}_1$, or as $Q_{Y_1|X}Q_{Y_2|Y_1}$ we call the BC SD or PD, respectively. The output sequence $\mathbf{y}_j\in\mathcal{Y}^n_j$, where $j=1,2$, is received by decoder $j$. Decoder $j$ produces a pair of estimates $\big(\hat{m}_0^{(j)},\hat{m}_j\big)$ of $(m_0,m_j)$. \begin{definition}[Code Description] An $(n,R_0,R_1,R_2)$ code $\mathcal{C}_n$ for the BC with leakage constraints is defined with respect to the three message sets $\mathcal{M}_j\triangleq\big[1:2^{nR_j}\big]$, $j=0,1,2$, and has: \begin{enumerate} \item A stochastic encoder that is described by a mapping $f_{\mathrm{E}}:\mathcal{M}_0\times\mathcal{M}_1\times\mathcal{M}_2\to \mathcal{P}(\mathcal{X}^n)$. 
\item Two decoding functions, $\phi_j: \mathcal{Y}_j^n\to \big(\mathcal{M}_0\times\mathcal{M}_j\big)\cup\{e\}$, for $j=1,2$, where $e\notin\mathcal{M}_k$, for $k=0,1,2$, is an error symbol. \end{enumerate} \end{definition} Denote the set of all $(n,R_0,R_1,R_2)$ codes for the BC with leakage constraints by $\mathfrak{C}_n$ and let $\mathbb{C}_n$ be a random variable with alphabet $\mathfrak{C}_n$ distributed according to $P_{\mathbb{C}_n}\in\mathcal{P}(\mathfrak{C}_n)$. The probability measure $\mathbb{P}$ used throughout this work is induced by an underlying PMF on $\mathfrak{C}_n\times\mathcal{M}_0\times\mathcal{M}_1\times\mathcal{M}_2\times\mathcal{X}^n\times\mathcal{Y}_1^n\times\mathcal{Y}^n_2\times\mathcal{M}_0\times\mathcal{M}_1\times\mathcal{M}_0\times\mathcal{M}_2$ given by \begin{subequations} \begin{equation} P\left(\mathcal{C}_n,m_0,m_1,m_2,\mathbf{x},\mathbf{y}_1,\mathbf{y}_2,\hat{m}_0^{(1)},\hat{m}_1,\hat{m}_0^{(2)},\hat{m}_2\right)=P_{\mathbb{C}_n}\mspace{-1mu}(\mathcal{C}_n)P^{(\mathcal{C}_n)}\mspace{-5mu}\left(m_0,m_1,m_2,\mathbf{x},\mathbf{y}_1,\mathbf{y}_2,\hat{m}_0^{(1)},\hat{m}_1,\hat{m}_0^{(2)},\hat{m}_2\right),\label{EQ:induced_PMFandcode} \end{equation} where \begin{align*} P^{(\mathcal{C}_n)}\Big(m_0,m_1,m_2,&\mathbf{x},\mathbf{y}_1,\mathbf{y}_2,\hat{m}_0^{(1)},\hat{m}_1,\hat{m}_0^{(2)},\hat{m}_2\Big)\\&=\frac{1}{|\mathcal{M}_0||\mathcal{M}_1||\mathcal{M}_2|}f_{\mathrm{E}}(\mathbf{x}|m_0,m_1,m_2)Q^n_{Y_1,Y_2|X}(\mathbf{y}_1,\mathbf{y}_2|\mathbf{x})\mathds{1}_{\bigcap_{j=1,2}\big\{(\hat{m}_0^{(j)},\hat{m}_j)=\phi_j(\mathbf{y}_j)\big\}}.\numberthis\label{EQ:induced_PMF} \end{align*}\label{EQ:induced_PMF_both} \end{subequations} Here $P^{(\mathcal{C}_n)}$ is the conditional PMF induced by the code $\mathcal{C}_n=(f_\mathrm{E},\phi_1,\phi_2)$. 
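The empirical PMF and the letter-typical set defined just before the code description are straightforward to evaluate on concrete sequences. The following sketch (ours, illustrative only) mirrors the two displayed definitions:

```python
from collections import Counter

def empirical_pmf(x):
    """nu_x(a) = N(a|x)/n, the empirical PMF of the sequence x."""
    n = len(x)
    return {a: c / n for a, c in Counter(x).items()}

def is_letter_typical(x, P, eps):
    """x in T_eps^n(P): |nu_x(a) - P(a)| <= eps*P(a) for every letter a;
    letters outside supp(P) must not occur in x."""
    nu = empirical_pmf(x)
    if any(P.get(a, 0.0) == 0.0 for a in nu):
        return False
    return all(abs(nu.get(a, 0.0) - p) <= eps * p for a, p in P.items())
```

For the uniform PMF on $\{0,1\}$, the sequence $0101$ is $0.1$-letter-typical while $0001$ is not.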
\begin{definition}[Error Probability] The average error probability for an $(n,R_0,R_1,R_2)$ code $\mathcal{C}_n$ is \begin{subequations} \begin{align*} P_e(\mathcal{C}_n)&=\mathbb{P}\Big((\hat{M}_0^{(1)},\hat{M}_0^{(2)},\hat{M}_1,\hat{M}_2)\neq(M_0,M_0,M_1,M_2)\Big| \mathbb{C}_n=\mathcal{C}_n\Big)\\ &=\frac{1}{|\mathcal{M}_0||\mathcal{M}_1||\mathcal{M}_2|}\sum_{\substack{(m_0,m_1,m_2)\\\in\mathcal{M}_0\times\mathcal{M}_1\times\mathcal{M}_2}}\mspace{8mu}\sum_{\substack{(\mathbf{y}_1,\mathbf{y}_2)\in\mathcal{Y}_1^n\times\mathcal{Y}_2^n:\\\phi_1(\mathbf{y}_1)\neq(m_0,m_1)\ \mathrm{or}\\\phi_2(\mathbf{y}_2)\neq(m_0,m_2) }}Q^n_{Y_1,Y_2|X}\big(\mathbf{y}_1,\mathbf{y}_2\big|f_{\mathrm{E}}(m_0,m_1,m_2)\big),\numberthis\label{EQ:error_prob_def} \end{align*} The average error probability for receiver $j=1,2$ is \begin{equation} P_{e,j}(\mathcal{C}_n)=\mathbb{P}\Big((\hat{M}_0^{(j)},\hat{M}_j)\neq(M_0,M_j)\Big|\mathbb{C}_n=\mathcal{C}_n\Big). \end{equation} \end{subequations} \end{definition} \begin{definition}[Information Leakage] The information leakage of $M_1$ to receiver 2 under an $(n,R_0,R_1,R_2)$ code $\mathcal{C}_n$ is \begin{subequations} \begin{equation} L_1(\mathcal{C}_n)=\frac{1}{n}I(M_1;Y_2^n|\mathbb{C}_n=\mathcal{C}_n). \end{equation} Similarly, the information leakage of $M_2$ to receiver 1 under $\mathcal{C}_n$ is \begin{equation} L_2(\mathcal{C}_n)=\frac{1}{n}I(M_2;Y_1^n|\mathbb{C}_n=\mathcal{C}_n). \end{equation} \end{subequations}\label{EQ:infromation_leakage_def} \end{definition} \vspace{-6.5mm} When the aforementioned quantities are subsequently used, the conditioning on $\mathbb{C}_n$ may be omitted when it is clear from the context. \begin{definition}[Achievable Rates] Let $(L_1,L_2)\in\mathbb{R}^2_+$. 
A rate triple $(R_0,R_1,R_2)\in\mathbb{R}_+^3$ is $(L_1,L_2)$-{\it{achievable}} if for any $\epsilon,\xi_1,\xi_2>0$ there is a sufficiently large $n$ and an $(n,R_0,R_1,R_2)$ code $\mathcal{C}_n$ such that \begin{subequations} \begin{align} &P_e(\mathcal{C}_n)\leq\epsilon\label{EQ:error_prob}\\ &L_1(\mathcal{C}_n)\leq L_1+\xi_1\label{EQ:achieve_leakage1}\\ &L_2(\mathcal{C}_n)\leq L_2+\xi_2.\label{EQ:achieve_leakage2} \end{align}\label{EQ:achiev_realibility_leakage} \end{subequations} \end{definition} \vspace{-7mm} The $(L_1,L_2)$-{\it{leakage-capacity region}} $\mathcal{C}(L_1,L_2)$ is the closure of the set of the $(L_1,L_2)$-achievable rates. \begin{remark}[Inactive Leakage Constraints]\label{REM:inactive_leakage} Setting $L_j=R_j$, for $j=1,2$, makes \eqref{EQ:achieve_leakage1}-\eqref{EQ:achieve_leakage2} inactive and reduces the BC with privacy leakage constraints to the classic BC with a common message. This is a simple consequence of the non-negativity of entropy, which implies that for any $\mathcal{C}_n\in\mathfrak{C}_n$ \begin{equation} I(M_1;Y_2^n|\mathbb{C}_n=\mathcal{C}_n)\leq H(M_1)=nR_1\label{EQ:trivial_leakage} \end{equation} (respectively, $I(M_2;Y_1^n|\mathbb{C}_n=\mathcal{C}_n)\leq nR_2$) always holds. To simplify notation, when we henceforth refer to leakage threshold values under which \eqref{EQ:achieve_leakage1}-\eqref{EQ:achieve_leakage2} are automatically satisfied, we write $L_j\to\infty$, $j=1,2$. \end{remark} \section{Leakage-Capacity Results}\label{SEC:results} \par This section states novel inner and outer bounds on the $(L_1,L_2)$-leakage-capacity region $\mathcal{C}(L_1,L_2)$ of a BC with privacy leakage constraints. These bounds match for SD-BCs, BCs with a degraded message set and PD-BCs, which characterizes the leakage-capacity regions for these three cases. We start with the inner bound. 
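The leakage quantities in Definition (Information Leakage) are normalized mutual informations. As a toy illustration (ours, not from the paper), $I(X;Y)$ in bits can be computed directly from a joint PMF:

```python
from math import log2

def mutual_information(joint):
    """I(X;Y) in bits from a joint PMF given as {(x, y): probability}."""
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0.0) + p   # marginal of X
        py[y] = py.get(y, 0.0) + p   # marginal of Y
    return sum(p * log2(p / (px[x] * py[y]))
               for (x, y), p in joint.items() if p > 0)
```

Under this convention, a leakage threshold $L_1=0$ forces $M_1$ and $Y_2^n$ towards independence, while $L_1=R_1$ is vacuous, matching Remark \ref{REM:inactive_leakage}.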
\begin{theorem}[Inner Bound]\label{TM:inner_bound} Let $(L_1,L_2)\in\mathbb{R}^2_+$ and $\mathcal{R}_{\mathrm{I}}(L_1,L_2)$ be the closure of the union of rate triples $(R_0,R_1,R_2)\in\mathbb{R}^3_+$ satisfying: \begin{subequations} \begin{align} R_1 &\leq I(U_1;Y_1|U_0)-I(U_1;U_2|U_0)-I(U_1;Y_2|U_0,U_2)+L_1\label{EQ:region_inner11}\\ R_0+R_1 &\leq I(U_0,U_1;Y_1)-I(U_1;U_2|U_0)-I(U_1;Y_2|U_0,U_2)+L_1\label{EQ:region_inner12}\\ R_0+R_1 &\leq I(U_0,U_1;Y_1)\label{EQ:region_inner13}\\ R_2 &\leq I(U_2;Y_2|U_0)-I(U_1;U_2|U_0)-I(U_2;Y_1|U_0,U_1)+L_2\label{EQ:region_inner21}\\ R_0+R_2 &\leq I(U_0,U_2;Y_2)-I(U_1;U_2|U_0)-I(U_2;Y_1|U_0,U_1)+L_2\label{EQ:region_inner22}\\ R_0+R_2 &\leq I(U_0,U_2;Y_2)\label{EQ:region_inner23}\\ R_0+R_1+R_2 &\leq I(U_0,U_1;Y_1)+I(U_2;Y_2|U_0)-I(U_1;U_2|U_0)-I(U_1;Y_2|U_0,U_2)+L_1\label{EQ:region_inner_sum1}\\ R_0+R_1+R_2 &\leq I(U_1;Y_1|U_0)+I(U_0,U_2;Y_2)-I(U_1;U_2|U_0)-I(U_2;Y_1|U_0,U_1)+L_2\label{EQ:region_inner_sum2}\\ R_0+R_1+R_2 &\leq I(U_1;Y_1|U_0)+I(U_2;Y_2|U_0)-I(U_1;U_2|U_0)+\min\big\{I(U_0;Y_1),I(U_0;Y_2)\big\}\label{EQ:region_inner_sum3}\\ 2R_0+R_1+R_2 &\leq I(U_0,U_1;Y_1)+I(U_0,U_2;Y_2)-I(U_1;U_2|U_0)\label{EQ:region_inner_sum4} \end{align}\label{EQ:region_inner} \end{subequations} \vspace{-6.5mm} \noindent where the union is over all PMFs $P_{U_0,U_1,U_2,X}Q_{Y_1,Y_2|X}$. The following inclusion holds: \begin{equation} \mathcal{R}_{\mathrm{I}}(L_1,L_2)\subseteq\mathcal{C}(L_1,L_2).\label{EQ:inclusion_inner} \end{equation} \end{theorem} The proof of Theorem \ref{TM:inner_bound} is given in Section \ref{SUBSEC:inner_proof} and relies on a leakage-adaptive Marton-like code construction. Rate-splitting is first used to decompose each private message $M_j$, $j=1,2$, into a public part $M_{0j}$ and a private part $M_{jj}$. A Marton code with an extra layer of bins is then constructed while treating $(M_0,M_{10},M_{20})$ as a public message and $M_{jj}$, for $j=1,2$, as private message $j$. 
The double-binning of the private message codebooks permits joint encoding (outer layer) and control of the total rate leaked to the other user (inner layer). The leakage analysis takes into account the rate leaked due to the decoding of the public message by both users. Also, additional leakage occurs due to the joint encoding process, which introduces correlation between the private message codewords. Accounting for the latter is the main difficulty in the leakage analysis; we treat this by relating the bin sizes in the inner and outer coding layers. \begin{remark} The region $\mathcal{R}_{\mathrm{I}}(L_1,L_2)$ recovers Marton's inner bound for BCs with a common message \cite[Theorem 5]{Liang_Kramer_RBC2007}. By taking $L_1,L_2\to\infty$, the rate bounds in \eqref{EQ:region_inner11}-\eqref{EQ:region_inner12}, \eqref{EQ:region_inner21}-\eqref{EQ:region_inner22} and \eqref{EQ:region_inner_sum1}-\eqref{EQ:region_inner_sum2} become redundant. The remaining bounds coincide with those defining Marton's region. A full discussion of the special cases of $\mathcal{R}_{\mathrm{I}}(L_1,L_2)$ is given in Section \ref{SUBSEC:special_cases_SDBC}. \end{remark} The following corollary states a sufficient condition for the leakage thresholds $L_1$ and $L_2$ to become inactive in $\mathcal{R}_{\mathrm{I}}(L_1,L_2)$ from Theorem \ref{TM:inner_bound} with $R_0=0$ (i.e., when no common message is present), when evaluated under a certain input distribution $P_{U_0,U_1,U_2,X}\in\mathcal{P}(\mathcal{U}_0\times\mathcal{U}_1\times\mathcal{U}_2\times\mathcal{X})$. To state the result, let $\tilde{\mathcal{R}}_{\mathrm{I}}(L_1,L_2,P_{U_0,U_1,U_2,X})$ denote the set of rate pairs $(R_1,R_2)\in\mathbb{R}_{+}^2$ satisfying \eqref{EQ:region_inner} with $R_0=0$ and with the mutual information terms calculated with respect to $P_{U_0,U_1,U_2,X}Q_{Y_1,Y_2|X}$. 
Accordingly, \begin{equation} \tilde{\mathcal{R}}_{\mathrm{I}}(L_1,L_2)\triangleq\bigcup_{\substack{P_{U_0,U_1,U_2,X}:\\ (U_0,U_1,U_2)-X-(Y_1,Y_2)}}\tilde{\mathcal{R}}_{\mathrm{I}}(L_1,L_2,P_{U_0,U_1,U_2,X})\label{EQ:region_nocommon_afterunion} \end{equation} corresponds to the region obtained by setting $R_0=0$ in $\mathcal{R}_{\mathrm{I}}(L_1,L_2)$. \begin{corollary}[Inactive Leakage Constraints]\label{COR:inactive_leakage} Let $(L_1,L_2)\in\mathbb{R}^2_+$ and $P_{U_0,U_1,U_2,X}\in\mathcal{P}(\mathcal{U}_0\times\mathcal{U}_1\times\mathcal{U}_2\times\mathcal{X})$. For $j=1,2$ define \begin{equation} L_j^\star(P_{U_0,U_1,U_2,X})=I(U_0;Y_j)+I(U_j;U_{\bar{j}},Y_{\bar{j}}|U_0),\label{EQ:maximal_leakage_Lj} \end{equation} where $\bar{j}=j+(-1)^{j+1}$. We have the following results: \begin{enumerate} \item If $L_1\geq L_1^\star(P_{U_0,U_1,U_2,X})$ then $\tilde{\mathcal{R}}_{\mathrm{I}}(L_1,L_2,P_{U_0,U_1,U_2,X})=\tilde{\mathcal{R}}_{\mathrm{I}}(\infty,L_2,P_{U_0,U_1,U_2,X})$. \item If $L_2\geq L_2^\star(P_{U_0,U_1,U_2,X})$ then $\tilde{\mathcal{R}}_{\mathrm{I}}(L_1,L_2,P_{U_0,U_1,U_2,X})=\tilde{\mathcal{R}}_{\mathrm{I}}(L_1,\infty,P_{U_0,U_1,U_2,X})$. \item If $L_j\geq L_j^\star(P_{U_0,U_1,U_2,X})$, for $j=1,2$, then $\tilde{\mathcal{R}}_{\mathrm{I}}(L_1,L_2,P_{U_0,U_1,U_2,X})=\tilde{\mathcal{R}}_{\mathrm{I}}(\infty,\infty,P_{U_0,U_1,U_2,X})$. \end{enumerate} \end{corollary} For the proof of Corollary \ref{COR:inactive_leakage} see Section \ref{SUBSEC:inactive_leakage_proof}. According to the above, if any of the leakage thresholds $L_j$, $j=1,2$ surpasses the critical value from \eqref{EQ:maximal_leakage_Lj}, the corresponding inner bound remains unchanged if $L_j$ is further increased, and is therefore equivalent to the region where $L_j\to\infty$. \begin{remark}\label{REM:inactive_leakage_explained} Corollary \ref{COR:inactive_leakage} specifies a condition for $L_1$ and/or $L_2$ being inactive for each input probability. 
Obtaining a condition for the inactivity of the thresholds with respect to the entire region $\tilde{\mathcal{R}}_{\mathrm{I}}(L_1,L_2)$ from \eqref{EQ:region_nocommon_afterunion} is a more challenging task, as it involves identifying which input distributions achieve the boundary of $\tilde{\mathcal{R}}_{\mathrm{I}}(L_1,L_2)$. Although this identification is possible in some communication scenarios (e.g., for the MIMO Gaussian BC with or without secrecy requirements the boundary-achieving distributions are Gaussian vectors \cite{Weingarten_MIMOBC2006,Poor_Shamai_Gaussian_MIMOBC_Secrecy2010,Ulukus_GaussianBCCommon2012,Chandra_Gauss_BC2014,Goldfeld_MIMOBC_Secrecy2016}), the structure of the optimizing distribution is unknown in general. The merit of Corollary \ref{COR:inactive_leakage} becomes apparent when explicitly calculating $\tilde{\mathcal{R}}_{\mathrm{I}}(L_1,L_2)$ for a given BC. One can then identify the optimizing distributions (e.g., by means of an analytical characterization or via an exhaustive search), and can, in turn, calculate the maximum of $L_j^\star(P_{U_0,U_1,U_2,X})$ over those distributions. Denoting this maximal value by $L_j^\star$, if $L_j<L_j^\star$ then increasing $L_j$ will further enlarge the region. If, on the other hand, $L_j\geq L_j^\star$, then the region remains unchanged even if $L_j$ grows further. This notion is demonstrated in Section \ref{SEC:example}, where we calculate the $(L_1,L_2)$-leakage-capacity region of the Blackwell BC. \end{remark} Next, we state an outer bound on $\mathcal{C}(L_1,L_2)$. A proof of Theorem \ref{TM:outer_bound} is given in Section \ref{SUBSEC:outer_proof}. 
\begin{theorem}[Outer Bound]\label{TM:outer_bound} Let $(L_1,L_2)\in\mathbb{R}^2_+$ and $\mathcal{R}_{\mathrm{O}}(L_1,L_2)$ be the closure of the union of rate triples $(R_0,R_1,R_2)\in\mathbb{R}^3_+$ satisfying: \begin{subequations} \begin{align} R_0&\leq \min\big\{I(W;Y_1),I(W;Y_2)\big\}\label{EQ:region_outer0}\\ R_1&\leq I(U;Y_1|W,V)-I(U;Y_2|W,V)+L_1\label{EQ:region_outer11}\\ R_1&\leq I(U;Y_1|W)-I(U;Y_2|W)+L_1\label{EQ:region_outer12}\\ R_0+R_1&\leq I(U;Y_1|W)+\min\big\{I(W;Y_1),I(W;Y_2)\big\}\label{EQ:region_outer13}\\ R_2&\leq I(V;Y_2|W,U)-I(V;Y_1|W,U)+L_2\label{EQ:region_outer21}\\ R_2&\leq I(V;Y_2|W)-I(V;Y_1|W)+L_2\label{EQ:region_outer22}\\ R_0+R_2&\leq I(V;Y_2|W)+\min\big\{I(W;Y_1),I(W;Y_2)\big\}\label{EQ:region_outer23}\\ R_0+R_1+R_2&\leq I(U;Y_1|W,V)+I(V;Y_2|W)+\min\big\{I(W;Y_1),I(W;Y_2)\big\}\label{EQ:region_outer_sum1}\\ R_0+R_1+R_2&\leq I(U;Y_1|W)+I(V;Y_2|W,U)+\min\big\{I(W;Y_1),I(W;Y_2)\big\}\label{EQ:region_outer_sum2} \end{align}\label{EQ:region_outer} \end{subequations} \vspace{-6.5mm} \noindent where the union is over all PMFs $P_{W,U,V}P_{X|U,V}Q_{Y_1,Y_2|X}$. $\mathcal{R}_{\mathrm{O}}(L_1,L_2)$ is convex. The following inclusion holds: \begin{equation} \mathcal{C}(L_1,L_2)\subseteq\mathcal{R}_{\mathrm{O}}(L_1,L_2).\label{EQ:inclusion_outer} \end{equation} \end{theorem} The derivation of the outer bound uses telescoping identities (cf., e.g., \cite[Eqs. (9) and (11)]{Kramer_telescopic2011}) that result in a relatively concise proof. \begin{remark} The region $\mathcal{R}_{\mathrm{O}}(L_1,L_2)$ recovers the UVW-outer bound from \cite[Bound 2]{UVW_Outer2010}, which is equivalent to the New-Jersey outer bound \cite{NJ_Outer2008}. This follows by setting $L_1,L_2\to\infty$ into $\mathcal{R}_{\mathrm{O}}(L_1,L_2)$, which makes (\ref{EQ:region_outer11})-(\ref{EQ:region_outer12}) and (\ref{EQ:region_outer21})-(\ref{EQ:region_outer22}) inactive. 
\end{remark} The inner and outer bounds in Theorems \ref{TM:inner_bound} and \ref{TM:outer_bound} are tight for SD-BCs and give rise to the following theorem. \begin{theorem}[Leakage-Capacity for SD-BC]\label{TM:SDBC_leakage_capacity} Let $(L_1,L_2)\in\mathbb{R}^2_+$. The $(L_1,L_2)$-leakage-capacity region $\mathcal{C}_{\mathrm{SD}}(L_1,L_2)$ of a SD-BC $\mathds{1}_{\{Y_1=f(X)\}}Q_{Y_2|X}$ with privacy leakage constraints is the closure of the union of rate triples $(R_0,R_1,R_2)\in\mathbb{R}^3_+$ satisfying: \begin{subequations} \begin{align} R_1 &\leq H(Y_1|W,V,Y_2)+L_1\label{EQ:region_SDBC11}\\ R_0+R_1 &\leq H(Y_1|W,V,Y_2)+I(W;Y_1)+L_1\label{EQ:region_SDBC12}\\ R_0+R_1 &\leq H(Y_1)\label{EQ:region_SDBC13}\\ R_2 &\leq I(V;Y_2|W)-I(V;Y_1|W)+L_2\label{EQ:region_SDBC21}\\ R_0+R_2 &\leq I(W,V;Y_2)-I(V;Y_1|W)+L_2\label{EQ:region_SDBC22}\\ R_0+R_2 &\leq I(W,V;Y_2)\label{EQ:region_SDBC23}\\ R_0+R_1+R_2 &\leq H(Y_1|W,V,Y_2)+I(V;Y_2|W)+I(W;Y_1)+L_1\label{EQ:region_SDBC_sum1}\\ R_0+R_1+R_2 &\leq H(Y_1|W,V)+I(V;Y_2|W)+\min\big\{I(W;Y_1),I(W;Y_2)\big\}\label{EQ:region_SDBC_sum2}\\ 2R_0+R_1+R_2 &\leq H(Y_1|W,V)+I(W,V;Y_2)+I(W;Y_1)\label{EQ:region_SDBC_sum3} \end{align}\label{EQ:region_SDBC} \end{subequations} \vspace{-6.5mm} \noindent where the union is over all PMFs $P_{W,V,Y_1,X}Q_{Y_2|X}$ for which $Y_1=f(X)$. $\mathcal{C}_{\mathrm{SD}}(L_1,L_2)$ is convex. \end{theorem} The direct part of Theorem \ref{TM:SDBC_leakage_capacity} follows from Theorem \ref{TM:inner_bound} by taking $U_0=W$, $U_1=Y_1$ and $U_2=V$, while Theorem \ref{TM:outer_bound} is used for the converse. See Section \ref{SUBSEC:SDBC_proof} for the full details. \begin{remark} By taking $L_j=0$, the SD-BC with leakage constraints is reduced to the corresponding BC in which $M_j$ is secret. Similarly, setting $L_j\to\infty$ results in the problem without a secrecy constraint on $M_j$. 
All four cases of the SD-BC concerning secrecy (i.e., when neither, either, or both messages are secret) are solved and their solutions are retrieved from $\mathcal{C}_{\mathrm{SD}}(L_1,L_2)$ by inserting the appropriate values of $L_j$, $j=1,2$. This property of $\mathcal{C}_{\mathrm{SD}}(L_1,L_2)$ is discussed in Section \ref{SUBSEC:special_cases_SDBC}. \end{remark} \par The inner and outer bounds in Theorems \ref{TM:inner_bound} and \ref{TM:outer_bound} also match when the message set is degraded, i.e., when there is only one private message. The leakage-capacity region of the BC where $M_2=0$ is defined only by the threshold $L_1$ and is stated next.\footnote{Equivalently, one may consider the case where $M_1=0$.} \begin{theorem}[Leakage-Capacity for BC with Degraded Message Set]\label{TM:DMBC_leakage_capacity} Let $L_1\in\mathbb{R}_+$. The $L_1$-leakage-capacity region $\mathcal{C}_{\mathrm{DM}}(L_1)$ of a BC with a degraded message set and a privacy leakage constraint is the closure of the union of rate pairs $(R_0,R_1)\in\mathbb{R}^2_+$ satisfying: \begin{subequations} \begin{align} R_0 &\leq I(W;Y_2)\label{EQ:region_DMBC0}\\ R_1 &\leq I(U;Y_1|W)-I(U;Y_2|W)+L_1\label{EQ:region_DMBC11}\\ R_0+R_1 &\leq I(W,U;Y_1)-I(U;Y_2|W)+L_1\label{EQ:region_DMBC12}\\ R_0+R_1 &\leq I(U;Y_1|W)+\min\big\{I(W;Y_1),I(W;Y_2)\big\}\label{EQ:region_DMBC13} \end{align}\label{EQ:region_DMBC} \end{subequations} \vspace{-6.5mm} \noindent where the union is over all PMFs $P_{W,U}P_{X|U}Q_{Y_1,Y_2|X}$. $\mathcal{C}_{\mathrm{DM}}(L_1)$ is convex. \end{theorem} \begin{IEEEproof} The direct part follows by setting $R_2=0$, $U_1=U$ and $U_2=0$ in Theorem \ref{TM:inner_bound}. For the converse, we show that $\mathcal{R}_{\mathrm{O}}(L_1,L_2)\subseteq\mathcal{C}_{\mathrm{DM}}(L_1)$. Clearly, \eqref{EQ:region_DMBC0}, \eqref{EQ:region_DMBC11} and \eqref{EQ:region_DMBC13} coincide with \eqref{EQ:region_outer0}, \eqref{EQ:region_outer12} and \eqref{EQ:region_outer13}, respectively.
Inequality \eqref{EQ:region_DMBC12} follows by merging \eqref{EQ:region_outer0} and \eqref{EQ:region_outer12}: adding the two bounds and applying the chain rule $I(W;Y_1)+I(U;Y_1|W)=I(W,U;Y_1)$ yields \eqref{EQ:region_DMBC12}. \end{IEEEproof} \begin{remark} The BC with a degraded message set and a privacy leakage constraint captures the BC with confidential messages \cite{Csiszar_Korner_BCconfidential1978} and the BC with a degraded message set \cite{Korner_BC_DegradedMessageSet1977}. The former is obtained by taking $L_1=0$, while $L_1\to\infty$ recovers the latter. Setting $L_1=0$ or $L_1\to\infty$ in $\mathcal{C}_{\mathrm{DM}}(L_1)$ recovers the capacity regions of these special cases (see Section \ref{SUBSEC:special_cases_BC_Conf} for more details). \end{remark} \begin{corollary}[Leakage-Capacity for PD-BC]\label{COR:PDBC_leakage_capacity} The $L_1$-leakage-capacity region $\mathcal{C}_{\mathrm{PD}}(L_1)$ of a PD-BC without a common message and with transition probability $Q_{Y_1|X}Q_{Y_2|Y_1}$ is the closure of the union over the same domain as $\mathcal{C}_{\mathrm{DM}}(L_1)$ of rate pairs $(R_1,R_2)\in\mathbb{R}^2_+$ satisfying (\ref{EQ:region_DMBC0})-(\ref{EQ:region_DMBC11}) and (\ref{EQ:region_DMBC13}), while replacing $R_0$ with $R_2$ and noting that $\min\big\{I(W;Y_1),I(W;Y_2)\big\}=I(W;Y_2)$. \end{corollary} The proof of Corollary \ref{COR:PDBC_leakage_capacity} is similar to that of Theorem \ref{TM:DMBC_leakage_capacity} and is omitted. \begin{remark} Bounds on the cardinality of the auxiliary random variables in Theorems \ref{TM:inner_bound}, \ref{TM:outer_bound}, \ref{TM:SDBC_leakage_capacity} and \ref{TM:DMBC_leakage_capacity} can be derived using, e.g., the perturbation method \cite[Appendix C]{ElGamal2011} or techniques such as in \cite{UVW_Outer2010} and \cite{Nair_Matron_Bounds2013}. The computability of the derived regions is beyond the scope of this work.
\end{remark} \section{Special Cases}\label{SEC:special_cases} \subsection{Marton's Inner Bound}\label{SUBSEC:special_cases_Marton} Theorem \ref{TM:inner_bound} generalizes Marton's region to the case with privacy leakage constraints, i.e., $\mathcal{R}_{\mathrm{I}}(\infty,\infty)$ recovers Marton's region. Moreover, $\mathcal{R}_{\mathrm{I}}(L_1,L_2)$ is tight for every BC with a known capacity region. \subsection{UVW-Outer Bound}\label{SUBSEC:special_cases_UVW} The New-Jersey outer bound was derived in \cite{NJ_Outer2008} and shown to be at least as good as the previously known bounds. A simpler version of this outer bound was established in \cite{UVW_Outer2010} and was named the UVW-outer bound. The UVW-outer bound is given by $\mathcal{R}_{\mathrm{O}}(\infty,\infty)$. \subsection{Liu-Mari{\'c}-Spasojevi{\'c}-Yates Inner Bound for BCs with Secrecy}\label{SUBSEC:special_cases_Liu} In \cite{BC_Confidential_Yates2008} an inner bound on the secrecy-capacity region of a BC with two confidential messages (each destined for one of the receivers and kept secret from the other) was characterized as the union of rate pairs $(R_1,R_2)\in\mathbb{R}^2_+$ satisfying: \begin{subequations} \begin{align} R_1 &\leq I(U_1;Y_1|U_0)-I(U_1;U_2|U_0)-I(U_1;Y_2|U_0,U_2)\label{EQ:region_liu_secrecy1}\\ R_2 &\leq I(U_2;Y_2|U_0)-I(U_1;U_2|U_0)-I(U_2;Y_1|U_0,U_1)\label{EQ:region_liu_secrecy2} \end{align}\label{EQ:region_liu_secrecy} \end{subequations} \vspace{-6.5mm} \noindent where the union is over all PMFs $P_{U_0,U_1,U_2}P_{X|U_1,U_2}Q_{Y_1,Y_2|X}$. This inner bound is tight for SD-BCs \cite{Semi-det_BC_secrect_two2009} and MIMO Gaussian BCs \cite{Poor_Shamai_Gaussian_MIMO_BC_Secrecy2010}. Setting $R_0=0$ in $\mathcal{R}_{\mathrm{I}}(0,0)$ recovers (\ref{EQ:region_liu_secrecy}).
\subsection{SD-BCs with and without Secrecy}\label{SUBSEC:special_cases_SDBC} The SD-BC without a common message, i.e., when $R_0=0$, is solved when both, either, or neither private message is secret (see \cite{Semi-det_BC_secrect_two2009,Goldfeld_Weak_Secrecy_ISIT2015,Semi-det_BC_secrect_one2009} and \cite{GP_SemideterministicBC1980}, respectively). Setting $L_j=0$, for $j=1,2$, reduces the SD-BC with privacy leakage constraints to the problem where $M_j$ is secret. Taking $L_j\to\infty$ results in a SD-BC without a secrecy constraint on $M_j$. We use Theorem \ref{TM:SDBC_leakage_capacity} to obtain the leakage-capacity region of the SD-BC without a common message. \begin{corollary}[Leakage-Capacity for SD-BC without Common Message]\label{COR:SDBC_nocommon_leakage_capacity} Let $(L_1,L_2)\in\mathbb{R}^2_+$. The $(L_1,L_2)$-leakage-capacity region $\mathcal{C}_{\mathrm{SD}}^0(L_1,L_2)$ of a SD-BC $\mathds{1}_{\{Y_1=f(X)\}}Q_{Y_2|X}$ with privacy leakage constraints and without a common message is the closure of the union over the domain stated in Theorem \ref{TM:SDBC_leakage_capacity} of rate pairs $(R_1,R_2)\in\mathbb{R}^2_+$ satisfying: \begin{subequations} \begin{align} R_1 &\leq H(Y_1|W,V,Y_2)+L_1\label{EQ:region_nocommon_SDBC11}\\ R_1 &\leq H(Y_1)\label{EQ:region_nocommon_SDBC12}\\ R_2 &\leq I(V;Y_2|W)-I(V;Y_1|W)+L_2\label{EQ:region_nocommon_SDBC21}\\ R_2 &\leq I(W,V;Y_2)\label{EQ:region_nocommon_SDBC22}\\ R_1+R_2 &\leq H(Y_1|W,V,Y_2)+I(V;Y_2|W)+I(W;Y_1)+L_1\label{EQ:region_nocommon_SDBC_sum1}\\ R_1+R_2 &\leq H(Y_1|W,V)+I(V;Y_2|W)+\min\big\{I(W;Y_1),I(W;Y_2)\big\}\label{EQ:region_nocommon_SDBC_sum2}.
\end{align}\label{EQ:region_nocommon_SDBC} \end{subequations} \end{corollary} \subsubsection{Neither Message is Secret}\label{SUBSEC:special_cases_neither} If $L_1,L_2\to\infty$, the SD-BC with leakage reduces to the classic case without secrecy \cite{GP_SemideterministicBC1980}, for which the capacity region is the closure of the union of rate pairs $(R_1,R_2)\in\mathbb{R}^2_+$ satisfying: \begin{subequations} \begin{align} R_1 &\leq H(Y_1)\label{EQ:region_neither_SDBC1}\\ R_2 &\leq I(V;Y_2)\label{EQ:region_neither_SDBC2}\\ R_1+R_2 &\leq H(Y_1|V)+I(V;Y_2)\label{EQ:region_neither_SDBC_sum} \end{align}\label{EQ:region_neither_SDBC} \end{subequations} \vspace{-6.5mm} \noindent where the union is over all PMFs $P_{V,Y_1,X}Q_{Y_2|X}$ for which $Y_1=f(X)$. To see that the region (\ref{EQ:region_neither_SDBC}) coincides with $\mathcal{C}_{\mathrm{SD}}^0(\infty,\infty)$, first note that the bound \begin{equation} R_1+R_2 \leq H(Y_1|W,V)+I(V;Y_2|W)+I(W;Y_1)\label{EQ:region_neither_redundant} \end{equation} is redundant: if (\ref{EQ:region_neither_redundant}) is active for some PMF $P_{W,V,X}Q_{Y_1,Y_2|X}$, then setting $\tilde{W}=0$ and $\tilde{V}=(W,V)$ achieves a larger region. Removing (\ref{EQ:region_neither_redundant}) from $\mathcal{C}_{\mathrm{SD}}^0(\infty,\infty)$ and setting $\tilde{V}=(W,V)$ recovers (\ref{EQ:region_neither_SDBC}). This agrees with the discussion in Section \ref{SUBSEC:special_cases_Marton} since Marton's inner bound is tight for the SD-BC. \subsubsection{Only $M_1$ is Secret}\label{SUBSEC:special_cases_m1only} The SD-BC in which $M_1$ is secret is obtained by taking $L_1=0$ and $L_2\to\infty$.
The secrecy-capacity region was derived in \cite[Corollary 4]{Goldfeld_Weak_Secrecy_ISIT2015} and is the closure of the union over the same domain as (\ref{EQ:region_neither_SDBC}) of rate pairs $(R_1,R_2)\in\mathbb{R}^2_+$ satisfying: \begin{subequations} \begin{align} R_1 &\leq H(Y_1|V,Y_2)\label{EQ:region_m1only_SDBC1}\\ R_2 &\leq I(V;Y_2).\label{EQ:region_m1only_SDBC2} \end{align}\label{EQ:region_m1only_SDBC} \end{subequations} \vspace{-6.5mm} \noindent The regions $\mathcal{C}^0_{\mathrm{SD}}(0,\infty)$ and (\ref{EQ:region_m1only_SDBC}) match after dropping \begin{equation} R_1+R_2 \leq H(Y_1|W,V,Y_2)+I(V;Y_2|W)+I(W;Y_1)\label{EQ:region_m1only_SDBC_redundant} \end{equation} based on arguments similar to those used to remove (\ref{EQ:region_neither_redundant}) from $\mathcal{C}_{\mathrm{SD}}^0(\infty,\infty)$, and setting $\tilde{V}=(W,V)$. \begin{remark} The optimal code for the SD-BC with a secret message $M_1$ relies on double-binning the codebook of $M_1$, while $M_2$ is transmitted at maximal rate and no binning of its codebook is performed. Referring to the bounds in Section \ref{SUBSEC:inner_proof}, inserting $L_1=0$ and $L_2\to\infty$ into our code construction results in (\ref{EQ:achiev_rb1}) and (\ref{EQ:achiev_extra_rb2}) becoming inactive since (\ref{EQ:achiev_extra_rb1}) is the dominant constraint. Furthermore, $L_1=0$ combined with (\ref{EQ:achiev_partial_rates_bound2}) implies that the public message consists of a portion of $M_2$ \emph{only}. Since the public message is decoded by both receivers, the secrecy constraint is violated unless $R_{10}=0$ (i.e., unless the public message contains no information about $M_1$). \end{remark}% \subsubsection{Only $M_2$ is Secret}\label{SUBSEC:special_cases_m2only} The SD-BC in which $M_2$ is secret is obtained by taking $L_1\to\infty$ and $L_2=0$.
The secrecy-capacity region is the closure of the union of rate pairs $(R_1,R_2)\in\mathbb{R}^2_+$ satisfying: \begin{subequations} \begin{align} R_1 &\leq H(Y_1)\label{EQ:region_m2only_SDBC11}\\ R_1 &\leq H(Y_1|W)+I(W;Y_2)\label{EQ:region_m2only_SDBC12}\\ R_2 &\leq I(V;Y_2|W)-I(V;Y_1|W)\label{EQ:region_m2only_SDBC2} \end{align}\label{EQ:region_m2only_SDBC} \end{subequations} \vspace{-6.5mm} \noindent where the union is over all PMFs $P_{W,V,Y_1,X}Q_{Y_2|X}$ for which $Y_1=f(X)$ \cite[Theorem 1]{Semi-det_BC_secrect_one2009}. Using Corollary \ref{COR:SDBC_nocommon_leakage_capacity}, $\mathcal{C}^0_{\mathrm{SD}}(\infty,0)$ is the union over the same domain as (\ref{EQ:region_m2only_SDBC}) of rate pairs $(R_1,R_2)\in\mathbb{R}^2_+$ satisfying: \begin{subequations} \begin{align} R_1 &\leq H(Y_1)\label{EQ:region_m2only_alt_SDBC1}\\ R_2 &\leq I(V;Y_2|W)-I(V;Y_1|W)\label{EQ:region_m2only_alt_SDBC2}\\ R_1+R_2 &\leq H(Y_1|W,V)+I(W,V;Y_2).\label{EQ:region_m2only_alt_SDBC_sum} \end{align}\label{EQ:region_m2only_alt_SDBC} \end{subequations} \vspace{-6.5mm} \noindent The second bound on $R_1+R_2$ in $\mathcal{C}^0_{\mathrm{SD}}(\infty,0)$ is redundant since it follows by adding (\ref{EQ:region_m2only_alt_SDBC1}) and (\ref{EQ:region_m2only_alt_SDBC2}). Both (\ref{EQ:region_m2only_SDBC}) and (\ref{EQ:region_m2only_alt_SDBC}) describe the secrecy-capacity region of the SD-BC with a secret message $M_2$. In Appendix \ref{APPEN:SDBC_region_equality} we prove this equivalence using bidirectional inclusion arguments. By symmetry of our code construction, the effect of $L_1\to\infty$ and $L_2=0$ on the scheme in Section \ref{SUBSEC:inner_proof} is analogous to the one described in Section \ref{SUBSEC:special_cases_m1only}. \subsubsection{Both Messages are Secret}\label{SUBSEC:special_cases_both} Taking $L_1=L_2=0$ recovers the SD-BC where both messages are secret.
The secrecy-capacity region for this case was found in \cite[Theorem 1]{Semi-det_BC_secrect_two2009} and is the closure of the union of rate pairs $(R_1,R_2)\in\mathbb{R}^2_+$ satisfying: \begin{subequations} \begin{align} R_1 &\leq H(Y_1|W,V,Y_2)\label{EQ:region_both_SDBC1}\\ R_2 &\leq I(V;Y_2|W)-I(V;Y_1|W)\label{EQ:region_both_SDBC2} \end{align}\label{EQ:region_both_SDBC} \end{subequations} \vspace{-6.5mm} \noindent where the union is over all PMFs $P_{W,V}P_{X|V}Q_{Y_2|X}$ for which $Y_1=f(X)$. The region in (\ref{EQ:region_both_SDBC}) coincides with $\mathcal{C}^0_{\mathrm{SD}}(0,0)$. Restricting the union in $\mathcal{C}^0_{\mathrm{SD}}(0,0)$ to encompass only PMFs that satisfy the Markov relation $W-V-X$ does not shrink the region. This is because, in the proof of Theorem \ref{TM:outer_bound}, we define $V_q\triangleq (M_2,W_q)$, and therefore, $X_q-V_q-W_q$ forms a Markov chain for every $q\in[1:n]$. The optimality of PMFs in which $X-V-W$ is a Markov chain follows. \begin{remark} The coding scheme that achieves (\ref{EQ:region_both_SDBC}) uses double-binning for the codebooks of both private messages. To preserve confidentiality, the rate bound of each message includes the penalty term $I(U_1;U_2|V)$ (without the confidentiality constraints, Marton's coding scheme \cite{Marton_BC1979} requires only that the sum-rate has that penalty term). This is evident from our scheme by setting $L_1=L_2=0$ in (\ref{EQ:achiev_extra_rb1}), (\ref{EQ:achiev_extra_rb2}) and (\ref{EQ:achiev_partial_rates_bound2}), which results in (\ref{EQ:achiev_rb1}) being redundant. \end{remark} \subsection{BCs with One Private Message}\label{SUBSEC:special_cases_BC_Conf} Consider the BC with leakage constraints in which $M_2=0$; its leakage-capacity region $\mathcal{C}_{\mathrm{DM}}(L_1)$ is stated in Theorem \ref{TM:DMBC_leakage_capacity}.
We show that $\mathcal{C}_{\mathrm{DM}}(L_1)$ recovers the secrecy-capacity region of the BC with confidential messages \cite{Csiszar_Korner_BCconfidential1978} and the capacity region of the BC with a degraded message set (without secrecy) \cite{Korner_BC_DegradedMessageSet1977}.\\ \subsubsection{BCs with Confidential Messages} The secrecy-capacity region of the BC with confidential messages was derived in \cite{Csiszar_Korner_BCconfidential1978} and is the union over the same domain as in Theorem \ref{TM:DMBC_leakage_capacity} of rate pairs $(R_0,R_1)\in\mathbb{R}^2_+$ satisfying: \begin{subequations} \begin{align} R_0 &\leq I(W;Y_1)\label{EQ:region_BCConf01}\\ R_0 &\leq I(W;Y_2)\label{EQ:region_BCConf02}\\ R_1 &\leq I(U;Y_1|W)-I(U;Y_2|W).\label{EQ:region_BCConf1} \end{align}\label{EQ:region_BCConf} \end{subequations} \vspace{-6.5mm} \noindent Inserting $L_1=0$ into the result of Theorem \ref{TM:DMBC_leakage_capacity} yields $\mathcal{C}_{\mathrm{DM}}(0)$ which is the union over the same domain as (\ref{EQ:region_BCConf}) of rate pairs $(R_0,R_1)\in\mathbb{R}^2_+$ satisfying: \begin{subequations} \begin{align} R_0 &\leq I(W;Y_2)\label{EQ:region_alt_BCConf0}\\ R_1 &\leq I(U;Y_1|W)-I(U;Y_2|W)\label{EQ:region_alt_BCConf1}\\ R_0+R_1 &\leq I(W,U;Y_1)-I(U;Y_2|W).\label{EQ:region_alt_BCConf_sum} \end{align}\label{EQ:region_alt_BCConf} \end{subequations} \vspace{-6.5mm} \noindent The regions (\ref{EQ:region_BCConf}) and (\ref{EQ:region_alt_BCConf}) are equal, and a proof of the equality is given in Appendix \ref{APPEN:BCCOnf_region_equality}. Inserting $L_1=0$ and $U_2=0$ into the code construction in Section \ref{SUBSEC:inner_proof} reduces it to a superposition code in which the outer codebook (that is associated with the confidential message) is binned. \begin{remark} The BC with confidential messages captures the WTC by setting $M_0=0$. Thus, the WTC is also a special case of the BC with privacy leakage constraints.
\end{remark} \subsubsection{BCs with a Degraded Message Set} If $L_1\to\infty$, we get the BC with a degraded message set \cite{Korner_BC_DegradedMessageSet1977}. Inserting $L_1\to\infty$ into $\mathcal{C}_{\mathrm{DM}}(L_1)$ and setting $U=X$ recovers the capacity region, which is the union of rate pairs $(R_0,R_1)\in\mathbb{R}^2_+$ satisfying: \begin{subequations} \begin{align} R_0 &\leq I(W;Y_2)\label{EQ:region_BC_deg0}\\ R_0+R_1 &\leq I(X;Y_1|W)+I(W;Y_2)\label{EQ:region_BC_deg_sum1}\\ R_0+R_1 &\leq I(X;Y_1)\label{EQ:region_BC_deg_sum2} \end{align}\label{EQ:region_BC_deg_sum} \end{subequations} \vspace{-6.5mm} \noindent where the union is over all PMFs $P_{W,X}Q_{Y_1,Y_2|X}$. In fact, (\ref{EQ:region_BC_deg_sum}) is an alternative characterization of the capacity region of the BC with a degraded message set, as described in \cite[Theorem 7]{Liang_Kramer_RBC2007} and \cite[Eq. (8.1)]{ElGammalKim10LectureNotes}. \section{Example}\label{SEC:example} Suppose the channel from the transmitter to receivers 1 and 2 is the Blackwell channel (BWC) without a common message as illustrated in Fig. \ref{FIG:blackwell} \cite{vanderMeulen_Blackwell1975,Gelfand_Blackwell1977}. Using Corollary \ref{COR:SDBC_nocommon_leakage_capacity}, the $(L_1,L_2)$-leakage-capacity region of a deterministic BC (DBC) is the following. \begin{figure}[t!]
\begin{center} \begin{psfrags} \psfragscanon \psfrag{A}[][][1]{$0$} \psfrag{B}[][][1]{$1$} \psfrag{C}[][][1]{$2$} \psfrag{D}[][][1]{\ $X$} \psfrag{E}[][][1]{\ \ Encoder} \psfrag{F}[][][1]{\ \ \ $Y_1$} \psfrag{G}[][][1]{$0$} \psfrag{H}[][][1]{$1$} \psfrag{I}[][][1]{$0$} \psfrag{J}[][][1]{$1$} \psfrag{K}[][][1]{\ \ \ Decoder 1} \psfrag{L}[][][1]{\ \ \ $Y_2$} \psfrag{M}[][][1]{\ \ \ Decoder 2} \psfrag{N}[][][1]{\ \ $R_{12}$} \psfrag{X}[][][1]{\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ $I(M_2;Y_1^n)\leq nL_2$} \psfrag{Y}[][][1]{\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ $I(M_1;Y_2^n)\leq nL_1$} \includegraphics[scale = .55]{blackwell_channel.eps} \caption{Blackwell BC with privacy leakage constraints.} \label{FIG:blackwell} \psfragscanoff \end{psfrags} \end{center} \end{figure} \begin{corollary}[Leakage-Capacity Region for DBC]\label{COR:DBC_leakage_capacity} The $(L_1,L_2)$-leakage-capacity region $\mathcal{C}_{\mathrm{D}}(L_1,L_2)$ of the DBC with privacy leakage constraints and no common message is the union of rate pairs $(R_1,R_2)\in\mathbb{R}^2_+$ satisfying: \begin{subequations} \begin{align} R_1 &\leq \min\big\{H(Y_1)\mspace{3mu},\mspace{3mu}H(Y_1|Y_2)+L_1\big\}\label{EQ:region_DBC1}\\ R_2 &\leq \min\big\{H(Y_2)\mspace{3mu},\mspace{3mu}H(Y_2|Y_1)+L_2\big\}\label{EQ:region_DBC2}\\ R_1+R_2 &\leq H(Y_1,Y_2)\label{EQ:region_DBC_sum} \end{align}\label{EQ:region_DBC} \end{subequations} \vspace{-6.5mm} \noindent where the union is over all input PMFs $P_X$. \end{corollary} The proof of Corollary \ref{COR:DBC_leakage_capacity} is relegated to Appendix \ref{APPEN:DBC_leakage_capacity_proof}. We parameterize the input PMF $P_X$ in Corollary \ref{COR:DBC_leakage_capacity} as \begin{equation} P_X(0)=\alpha\ ,\ P_X(1)=\beta\ ,\ P_X(2)=1-\alpha-\beta,\label{EQ:input_dist_param} \end{equation} where $\alpha,\beta\in\mathbb{R}_+$ and $\alpha+\beta\leq1$. 
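The bounds in Corollary \ref{COR:DBC_leakage_capacity} are simple enough to evaluate numerically. The following minimal sketch (a toy illustration in Python; the output labeling $Y_1=\mathds{1}_{\{X=1\}}$ and $Y_2=\mathds{1}_{\{X=0\}}$ is an assumption made here solely because it is consistent with the parameterization in \eqref{EQ:input_dist_param}) computes the right-hand sides of \eqref{EQ:region_DBC} for a given $(\alpha,\beta,L_1,L_2)$:

```python
import math

def Hb(p):
    """Binary entropy in bits, with Hb(0) = Hb(1) = 0."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def dbc_bounds(alpha, beta, L1, L2):
    """Evaluate the rate bounds of the DBC leakage-capacity region for the
    assumed labeling Y1 = 1{X=1}, Y2 = 1{X=0} under P_X = (alpha, beta, 1-alpha-beta)."""
    H_y1 = Hb(beta)     # H(Y1)
    H_y2 = Hb(alpha)    # H(Y2)
    # (Y1,Y2) determines X under this labeling, so H(Y1,Y2) = H(X).
    H_joint = Hb(alpha) + (1 - alpha) * Hb(beta / (1 - alpha)) if alpha < 1 else 0.0
    H_y1_given_y2 = H_joint - H_y2
    H_y2_given_y1 = H_joint - H_y1
    return {
        "R1": min(H_y1, H_y1_given_y2 + L1),      # bound (EQ:region_DBC1)
        "R2": min(H_y2, H_y2_given_y1 + L2),      # bound (EQ:region_DBC2)
        "R1+R2": H_joint,                          # bound (EQ:region_DBC_sum)
    }
```

For instance, at the symmetric point $\alpha=\beta=1/3$ the joint entropy equals $\log_2 3$ and, with $L_1=L_2=0$, both individual bounds reduce to the conditional entropies $H(Y_1|Y_2)=H(Y_2|Y_1)=2/3$.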
Using (\ref{EQ:input_dist_param}), the $(L_1,L_2)$-leakage-capacity region $\mathcal{C}_{\mathrm{BWC}}(L_1,L_2)$ of the BWC is described as the union of rate pairs $(R_1,R_2)\in\mathbb{R}^2_+$ satisfying: \begin{subequations} \begin{align} R_1&\leq \min\left\{H_b(\beta)\mspace{3mu},\mspace{3mu}(1-\alpha)H_b\left(\frac{\beta}{1-\alpha}\right)+L_1\right\}\label{EQ:region_blackwell_r1}\\ R_2&\leq \min\left\{H_b(\alpha)\mspace{3mu},\mspace{3mu}(1-\beta)H_b\left(\frac{\alpha}{1-\beta}\right)+L_2\right\}\label{EQ:region_blackwell_r2}\\ R_1+R_2&\leq H_b(\alpha)+(1-\alpha)H_b\left(\frac{\beta}{1-\alpha}\right)\label{EQ:region_blackwell_sum} \end{align}\label{EQ:region_blackwell} \end{subequations} \vspace{-6.5mm} \noindent where the union is over all $\alpha,\beta\in\mathbb{R}_+$ with $\alpha+\beta\leq 1$. \begin{figure}[t!] \begin{center} \begin{psfrags} \psfragscanon \psfrag{A}[][][0.9]{$R_1$ [bits/use]} \psfrag{B}[][][0.9]{$R_2$ [bits/use]} \psfrag{C}[][][0.9]{ } \psfrag{D}[][][0.9]{ } \psfrag{E}[][][0.9]{ } \psfrag{W}[][][0.6]{\ \ $\mspace{-2mu}L=0$} \psfrag{X}[][][0.6]{\ \ \ \ \ \ \ $\mspace{-6mu}L=0.05$} \psfrag{Y}[][][0.6]{\ \ \ \ \ $\mspace{-2mu}L=0.1$} \psfrag{Z}[][][0.6]{\ \ \ \ \ \ $\mspace{-4mu}L=0.4$} \subfloat[]{\includegraphics[scale = 0.412]{Blackwell_Leakage_L1only.eps}} \subfloat[]{\includegraphics[scale = 0.412]{Blackwell_Leakage_L2only.eps}} \subfloat[]{\includegraphics[scale = 0.412]{Blackwell_Leakage_L1L2both}} \caption{$(L_1,L_2)$-leakage-capacity region of the BWC for three cases: (a) $L_1=L$ and $L_2\to\infty$; (b) $L_1\to\infty$ and $L_2=L$; (c) $L_1=L_2=L$.}\label{FIG:blackwell_region} \psfragscanoff \end{psfrags} \end{center} \end{figure} \begin{figure}[t!]
\begin{center} \begin{psfrags} \psfragscanon \psfrag{A}[][][0.9]{$L$ [bits/use]} \psfrag{B}[][][0.9]{$R_1+R_2$ [bits/use]} \includegraphics[scale = 0.5]{Sum_vs_Leakage.eps} \caption{The sum-rate capacity versus the allowed leakage for $L_1=L_2=L$.}\label{FIG:Sum_vs_Leakage} \psfragscanoff \end{psfrags} \end{center} \end{figure} \par Fig. \ref{FIG:blackwell_region} illustrates $\mathcal{C}_{\mathrm{BWC}}(L_1,L_2)$ for three cases. In Fig. \ref{FIG:blackwell_region}(a) $L_2\to\infty$ while $L_1\in\{0,0.05,0.1,0.4\}$. The blue (inner) line corresponds to $L_1=0$ and is the secrecy-capacity region of a BWC where $M_1$ is secret \cite[Fig. 5]{Goldfeld_Weak_Secrecy_ISIT2015}. The red (outer) line corresponds to $L_1=0.4$ (which is sufficiently large and can be thought of as $L_1\to\infty$) and depicts the capacity region of the classic BWC. As $L_1$ grows, the inner (blue) region converges to coincide with the outer (red) region. Fig. \ref{FIG:blackwell_region}(b) considers the opposite case, i.e., where $L_1\to\infty$ and $L_2\in\{0,0.05,0.1,0.4\}$, and is analogous to Fig. \ref{FIG:blackwell_region}(a). In Fig. \ref{FIG:blackwell_region}(c) we choose $L_1=L_2=L$, where $L\in\{0,0.05,0.1,0.4\}$, and we demonstrate the impact of two leakage constraints on the region. When $L=0$, one obtains the secrecy-capacity region of the BWC when both messages are confidential \cite{Semi-det_BC_secrect_two2009}. In each case, the capacity region grows with $L$ and saturates at the red (outer) region, for which neither message is secret. Focusing on the symmetric case in Fig. \ref{FIG:blackwell_region}(c), we note that the saturation of the region at $L=0.4$ is not accidental and is implied by Corollary \ref{COR:inactive_leakage}. 
For the Blackwell BC with $L_1=L_2=L$, and some $\alpha,\beta\in\mathbb{R}_+$ with $\alpha+\beta\leq 1$, we denote by $L^\star(\alpha,\beta)$ the threshold from \eqref{EQ:maximal_leakage_Lj}, which reduces to \begin{equation} L^\star(\alpha,\beta)=I(Y_1;Y_2)=H_b(\beta)-(1-\alpha)H_b\left(\frac{\beta}{1-\alpha}\right). \end{equation} As explained in Remark \ref{REM:inactive_leakage_explained}, for each leakage value $L$, Corollary \ref{COR:inactive_leakage} (along with some numerical calculations) can be used to tell whether a further increase of $L$ will induce a larger region or not. Accordingly, for each $L\in\{0,0.05,0.1,0.4\}$, we have calculated the maximum of $L^\star(\alpha,\beta)$ over the distributions that achieve the boundary points of the capacity region $\mathcal{C}_{\mathrm{BWC}}(L,L)$. Denoting the value of the maximal $L^\star$ that corresponds to the allowed leakage $L\in\{0,0.05,0.1,0.4\}$ by $L^\star(L)$, we have \begin{equation} L^\star(0)=L^\star(0.05)=0.15897\quad;\quad L^\star(0.1)=0.20101\quad;\quad L^\star(0.4)=0.38317. \end{equation} For $L=0.4$, we see that $L^\star(L)\leq L$, and consequently, Corollary \ref{COR:inactive_leakage} and Remark \ref{REM:inactive_leakage_explained} imply that further increasing $L$ will not change the leakage-capacity region. Evidently, $\mathcal{C}_{\mathrm{BWC}}(L,L)$ saturates at $L=0.4$. For $L\in\{0,0.05,0.1\}$, however, $L^\star(L)> L$ and consequently $\mathcal{C}_{\mathrm{BWC}}(L',L')\subsetneq\mathcal{C}_{\mathrm{BWC}}(L,L)$, for $L,L'\in\{0,0.05,0.1\}$ with $L'<L$. The variation of the sum of rates $R_1+R_2$ as a function of $L$ is shown by the blue curve in Fig. \ref{FIG:Sum_vs_Leakage}; the red dashed vertical lines correspond to the values of $L$ considered in Fig. \ref{FIG:blackwell_region}. Note that for $0\leq L\leq 0.09818$, (\ref{EQ:region_blackwell_sum}) is inactive, and therefore, $R_1+R_2$ is bounded by the summation of (\ref{EQ:region_blackwell_r1}) and (\ref{EQ:region_blackwell_r2}). 
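The threshold computation just described can be reproduced numerically. The following minimal sketch evaluates $L^\star(\alpha,\beta)=H_b(\beta)-(1-\alpha)H_b\big(\tfrac{\beta}{1-\alpha}\big)$ and performs the saturation check; the list of boundary-achieving pairs $(\alpha,\beta)$ is an input that must be obtained by a separate optimization (e.g., the exhaustive search mentioned above), so the helper below treats it as an assumption:

```python
import math

def Hb(p):
    """Binary entropy function in bits."""
    return 0.0 if p <= 0 or p >= 1 else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def L_star(alpha, beta):
    """Leakage threshold L*(alpha,beta) = I(Y1;Y2) = Hb(beta) - (1-alpha)*Hb(beta/(1-alpha))."""
    if alpha >= 1:
        return 0.0
    return Hb(beta) - (1 - alpha) * Hb(beta / (1 - alpha))

def saturated(L, boundary_pmfs):
    """Return True if increasing L cannot enlarge the region, i.e., if
    max L*(alpha,beta) over the boundary-achieving pairs is at most L.
    The list boundary_pmfs is an assumed input found by a separate search."""
    return max(L_star(a, b) for a, b in boundary_pmfs) <= L
```

For example, at $\alpha=\beta=1/3$ one gets $L^\star=\log_2 3-2/3-2/3\approx 0.2516$ bits, so a symmetric leakage allowance of $L=0.4$ passes the saturation check for that input distribution, while $L=0.1$ does not.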
Thus, for $0\leq L\leq 0.09818$, the sum of rates $R_1+R_2$ increases linearly with $L$. For $L>0.09818$, the bound in (\ref{EQ:region_blackwell_sum}) is no longer redundant, and because it is independent of $L$, the sum rate saturates. \begin{figure}[t!] \begin{center} \begin{psfrags} \psfragscanon \psfrag{A}[][][1]{$R_1$} \psfrag{G}[][][1]{\ \ $R_2$} \psfrag{D}[][][1]{$0$} \psfrag{E}[][][1]{$\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ H(Y_2|Y_1)$} \psfrag{F}[][][1]{$\ \ \ \ \ H(Y_2)$} \psfrag{B}[][][1]{$\ \ \ \ H(Y_1)$} \psfrag{I}[][][1]{$\ \ \ \ \ \ \ \ \ \ \ \ \ H(Y_1|Y_2)$} \includegraphics[scale = 0.6]{Blackwell_Region_Secrecy_Cases.eps} \caption{The pentagons/rectangles whose union produces the capacity region of a BWC for different secrecy cases: The outer pentagon corresponds to the case without secrecy; the red and blue rectangles correspond to $L_1=0$ and $L_2=0$, respectively; the inner rectangle is associated with $L_1=L_2=0$.}\label{FIG:secrecy_regions} \psfragscanoff \end{psfrags} \end{center} \end{figure} The regions in Fig. \ref{FIG:blackwell_region} are a union of rectangles or pentagons, each corresponding to a different input PMF $P_X$. In Fig. \ref{FIG:secrecy_regions} we illustrate a typical structure of these rectangles and pentagons for a fixed $P_X$ at the extreme values of $L_1$ and $L_2$. When both $L_1$ and $L_2$ are sufficiently large, the leakage constraints degenerate and the classic BWC is obtained. Its capacity region (the red (outer) line in, e.g., Fig. \ref{FIG:blackwell_region}(c)) is a union of the pentagons depicted in Fig. \ref{FIG:secrecy_regions}. The secrecy-capacity region for $L_1=0$ and $L_2\to\infty$ (depicted by the blue line in Fig. \ref{FIG:blackwell_region}(a)) is a union of the red rectangles in Fig. \ref{FIG:secrecy_regions}. Similarly, when $L_2=0$ and $L_1\to\infty$ the secrecy-capacity region is a union of the blue rectangles in Fig. \ref{FIG:secrecy_regions}.
Finally, if $L_1=L_2=0$ and both messages are secret, the secrecy-capacity region of the BWC is the union of the dark rectangles in Fig. \ref{FIG:secrecy_regions}, i.e., the intersection of the blue and the red regions. Fig. \ref{FIG:secrecy_regions} highlights that as $L_1$ and/or $L_2$ decrease, the underlying pentagons/rectangles (the union of which produces the admissible rate region) shrink, resulting in a smaller region. \section{Proofs}\label{SEC:proofs} \subsection{Proof of Theorem \ref{TM:inner_bound}}\label{SUBSEC:inner_proof} For simplicity, we assume that expressions of the form $2^{nR}$, for some $R\in\mathbb{R}_+$, are integers. Fix $(L_1,L_2)\in\mathbb{R}^2_+$, a single-letter PMF $P_{U_0,U_1,U_2,X,Y_1,Y_2}\triangleq P_{U_0,U_1,U_2,X}Q_{Y_1,Y_2|X}\in\mathcal{P}(\mathcal{U}_0\times\mathcal{U}_1\times\mathcal{U}_2\times\mathcal{X}\times\mathcal{Y}_1\times\mathcal{Y}_2)$ and $\epsilon,\xi_1,\xi_2>0$. \par\textbf{Codebook $\bm{\mathcal{B}_n}$:} Split each message $m_j$, $j=1,2$, into two sub-messages denoted by $(m_{j0},m_{jj})$. The triple $m_p\triangleq(m_0,m_{10},m_{20})$ is referred to as the \emph{public message}, while $m_{jj}$, $j=1,2$, serves as \emph{private message} $j$. The rates associated with $m_{j0}$ and $m_{jj}$, $j=1,2$, are denoted by $R_{j0}$ and $R_{jj}$, while the corresponding alphabets are $\mathcal{M}_{j0}$ and $\mathcal{M}_{jj}$, respectively. The partial rates $R_{j0}$ and $R_{jj}$, $j=1,2$, satisfy \begin{subequations} \begin{align} &R_j=R_{j0}+R_{jj}\label{EQ:achiev_partial_rates_sum}\\ &0\leq R_{j0}\leq R_j\label{EQ:achiev_partial_rates_bound1}\\ &R_{j0}\leq L_j.\label{EQ:achiev_partial_rates_bound2} \end{align}\label{EQ:achiev_partial_rates} \end{subequations} The random variables $M_{j0}$ and $M_{jj}$ are independent and uniform over $\mathcal{M}_{j0}$ and $\mathcal{M}_{jj}$.
We use the notations $M_p\triangleq(M_0,M_{10},M_{20})$, $\mathcal{M}_p\triangleq\mathcal{M}_0\times\mathcal{M}_{10}\times\mathcal{M}_{20}$ and $R_p\triangleq R_0+R_{10}+R_{20}$. Note that $M_p$ is uniformly distributed over $\mathcal{M}_p$ and that $|\mathcal{M}_p|=2^{nR_p}$. Moreover, let $(W_1,W_2)$ be a pair of independent random variables, where $W_j$, $j=1,2$, is uniformly distributed over $\mathcal{W}_j=\big[1:2^{n\tilde{R}_j}\big]$ and independent of $(M_0,M_1,M_2)$ (which implies their independence of $(M_p,M_{11},M_{22})$ as well). All subsequent notations of codebooks and codewords omit the blocklength $n$. Let $\mathbb{B}_0\triangleq\big\{\mathbf{U}_0(m_p)\big\}_{m_p\in\mathcal{M}_p}$ be a random public message codebook that comprises $2^{nR_p}$ i.i.d. random vectors $\mathbf{U}_0(m_p)$, each distributed according to $P_{U_0}^n$. A realization of $\mathbb{B}_0$ is denoted by $\mathcal{B}_0\triangleq\big\{\mathbf{u}_0(m_p,\mathcal{B}_0)\big\}_{m_p\in\mathcal{M}_p}$. Fix a public message codebook $\mathcal{B}_0$. For every $m_p\in\mathcal{M}_p$, let $\mathbb{B}_j(m_p)\triangleq\big\{\mathbf{U}_j(m_p,m_{jj},i_j,w_j)\big\}_{(m_{jj},i_j,w_j)\in\mathcal{M}_{jj}\times\mathcal{I}_j\times\mathcal{W}_j}$, where $\mathcal{I}_j\triangleq\big[1:2^{nR_j'}\big]$, be a random codebook of private messages for $j=1,2$, consisting of conditionally independent random vectors each distributed according to $P^n_{U_j|U_0=\mathbf{u}_0(m_p,\mathcal{B}_0)}$. We further set $\mathbb{B}_j=\big\{\mathbb{B}_j(m_p)\big\}_{m_p\in\mathcal{M}_p}$. A realization of $\mathbb{B}_j$ is denoted by $\mathcal{B}_j$ and we also define $\mathcal{B}_{0,j}=\big\{\mathcal{B}_0,\mathcal{B}_j\big\}$, for $j=1,2$, and $\mathcal{B}=\big\{\mathcal{B}_0,\mathcal{B}_1,\mathcal{B}_2\big\}$. For each $m_p\in\mathcal{M}_p$, we use $\mathcal{B}_j(m_p)\triangleq\big\{\mathbf{u}_j(m_p,m_{jj},i_j,w_j,\mathcal{B}_{0,j})\big\}_{(m_{jj},i_j,w_j)\in\mathcal{M}_{jj}\times\mathcal{I}_j\times\mathcal{W}_j}$.
Based on the above labeling, the codebook $\mathcal{B}_j(m_p)$ has a $u_j$-bin associated with every pair $(m_{jj},w_j)\in\mathcal{M}_{jj}\times\mathcal{W}_j$, each containing $2^{nR_j'}$ $u_j$-codewords. Denote the set of all possible codebooks of the above structure by $\mathfrak{B}$. The probability of drawing a codebook $\mathcal{B}=\big\{\mathcal{B}_0,\mathcal{B}_1,\mathcal{B}_2\big\}\in\mathfrak{B}$ is \begin{equation} P_\mathbb{B}(\mathcal{B})=\prod_{m_p\in\mathcal{M}_p}P^n_{U_0}\big(\mathbf{u}_0(m_p,\mathcal{B}_0)\big)\prod_{j=1,2}\prod_{\substack{(m^{(j)}_p,m_{jj},i_j,w_j)\\\in\mathcal{M}_p\times\mathcal{M}_{jj}\times\mathcal{I}_j\times\mathcal{W}_j}}P^n_{U_j|U_0}\Big(\mathbf{u}_j\big(m_p^{(j)},m_{jj},i_j,w_j,\mathcal{B}_{0,j}\big)\Big|\mathbf{u}_0\big(m_p^{(j)},\mathcal{B}_0\big)\Big).\label{EQ:codebook_probability} \end{equation} For a fixed codebook $\mathcal{B}\in\mathfrak{B}$ we next describe its associated encoding function $f^{(\mathcal{B})}_\mathrm{E}$ and decoding functions $\phi^{(\mathcal{B})}_j$, for $j=1,2$. \par\textbf{Encoder $\bm{f^{(\mathcal{B})}_\mathrm{E}}$:} To transmit the message triple $(m_0,m_1,m_2)$ the encoder transforms it into the triple $(m_p,m_{11},m_{22})$, and draws $W_j$ uniformly over $\mathcal{W}_j$, $j=1,2$. Then it searches for a pair of indices $(i_1,i_2)\in\mathcal{I}_1\times\mathcal{I}_2$ such that \begin{equation} \Big(\mathbf{u}_0(m_p,\mathcal{B}_0),\mathbf{u}_1(m_p,m_{11},i_1,w_1,\mathcal{B}_{0,1}),\mathbf{u}_2(m_p,m_{22},i_2,w_2,\mathcal{B}_{0,2})\Big)\in\mathcal{T}_\epsilon^{n}(P_{U_0,U_1,U_2})\label{EQ:achiev_encoder_typicality} \end{equation} where $w_j$ denotes the realization of $W_j$. If the set of appropriate index pairs contains more than one element, the encoder chooses a pair uniformly over the set; if the set is empty, a pair is chosen uniformly over $\mathcal{I}_1\times\mathcal{I}_2$.
The channel input sequence is then generated according to the conditional product distribution $P^n_{X|U_0=\mathbf{u}_0(m_p,\mathcal{B}_0),U_1=\mathbf{u}_1(m_p,m_{11},i_1,w_1,\mathcal{B}_{0,1}),U_2=\mathbf{u}_2(m_p,m_{22},i_2,w_2,\mathcal{B}_{0,2})}$ and is transmitted over the channel. \par\textbf{Decoder $\bm{\phi_j^{(\mathcal{B})}}$:} Decoder $j=1,2$, searches for a unique triple $(\hat{m}_p,\hat{m}_{jj},\hat{w}_j)\in\mathcal{M}_p\times\mathcal{M}_{jj}\times\mathcal{W}_j$ for which there is an index $\hat{i}_j\in\mathcal{I}_j$ such that \begin{equation} \Big(\mathbf{u}_0(\hat{m}_p,\mathcal{B}_0),\mathbf{u}_j(\hat{m}_p,\hat{m}_{jj},\hat{i}_j,\hat{w}_j,\mathcal{B}_{0,j}),\mathbf{y}_j\Big)\in\mathcal{T}_\epsilon^{n}(P_{U_0,U_j,Y_j}).\label{EQ:achiev_decoderj} \end{equation} If such a unique triple is found, set $\phi_j(\mathbf{y}_j)=\big(\hat{m}_0,(\hat{m}_{j0},\hat{m}_{jj})\big)$; otherwise, set $\phi_j(\mathbf{y}_j)=e$. The triple $\left(f^{(\mathcal{B})}_{\mathrm{E}},\phi^{(\mathcal{B})}_1,\phi^{(\mathcal{B})}_2\right)$ defined with respect to a codebook $\mathcal{B}\in\mathfrak{B}$ constitutes an $(n,R_0,R_1,R_2)$ code $\mathcal{C}_n\in\mathfrak{C}_n$ for the BC with privacy leakage constraints. We henceforth omit the blocklength $n$ writing $\mathcal{C}$ and $\mathfrak{C}$ instead of $\mathcal{C}_n$ and $\mathfrak{C}_n$, respectively. When a random codebook $\mathbb{B}$ is used, we denote the corresponding random code by $\mathbb{C}$. Note that $\mathbb{C}$ is distributed according to \begin{equation} P_{\mathbb{C}}(\mathcal{C})=P_{\mathbb{C}}\left(\big(f^{(\mathcal{B})}_{\mathrm{E}},\phi^{(\mathcal{B})}_1,\phi^{(\mathcal{B})}_2\big)\right)=P_{\mathbb{B}}(\mathcal{B}),\quad\forall\mathcal{C}\in\mathfrak{C},\label{EQ:code_probability} \end{equation} where $P_\mathbb{B}$ is specified in \eqref{EQ:codebook_probability}. The PMF $P_\mathbb{C}$ along with \eqref{EQ:induced_PMF} give rise to the PMF from \eqref{EQ:induced_PMFandcode} and to its induced probability measure $\mathbb{P}$.
\par By standard error probability analysis (see Appendix \ref{APPEN:error_analysis}), reliability requires \begin{subequations} \begin{align} R_1'+R_2'&>I(U_1;U_2|U_0)\label{EQ:achiev_rb1}\\ R_{11}+R'_1+\tilde{R}_1 &< I(U_1;Y_1|U_0)\label{EQ:achiev_rb2}\\ R_0+R_{20}+R_1+R'_1+\tilde{R}_1 &< I(U_0,U_1;Y_1)\label{EQ:achiev_rb3}\\ R_{22}+R'_2+\tilde{R}_2 &< I(U_2;Y_2|U_0)\label{EQ:achiev_rb4}\\ R_0+R_{10}+R_2+R'_2+\tilde{R}_2 &< I(U_0,U_2;Y_2).\label{EQ:achiev_rb5} \end{align}\label{EQ:achiev_rb} \end{subequations} \indent The leakage analysis requires two properties in addition to reliability. Namely, for a fixed $m_1\in\mathcal{M}_1$ (respectively, $m_2\in\mathcal{M}_2$), Decoder 2 (respectively, Decoder 1) should be able to decode $W_1$ (respectively, $W_2$) with an arbitrarily low error probability based on $(\mathbf{U}_0,\mathbf{U}_2,\mathbf{Y}_2)$ (respectively, $(\mathbf{U}_0,\mathbf{U}_1,\mathbf{Y}_1)$). For a fixed code $\mathcal{C}\in\mathfrak{C}$ (specified by a fixed codebook $\mathcal{B}\in\mathfrak{B}$), denote by $\lambda_{m_1}(\mathcal{C})$ the probability that Decoder 1 or Decoder 2 fails to do so using $\mathcal{C}$. As explained in Appendix \ref{APPEN:error_analysis}, $\mathbb{E}\lambda_{m_1}(\mathbb{C})=\mathbb{E}\lambda_1(\mathbb{C})\to 0$ as $n\to\infty$, for every $m_1\in\mathcal{M}_1$, if \begin{subequations} \begin{align} \tilde{R}_1&<I(U_1;Y_2|U_0,U_2)\label{EQ:achiev_rb6}\\ \tilde{R}_2&<I(U_2;Y_1|U_0,U_1).\label{EQ:achiev_rb7} \end{align}\label{EQ:achiev_rb_more} \end{subequations} \vspace{-6.5mm} \indent\textbf{Leakage Analysis:} We compute an upper bound on $\mathbb{E}L_1(\mathbb{C})$ and on $\mathbb{E}L_2(\mathbb{C})$. By symmetry, only the analysis for the expected rate-leakage of $M_1$ to the second receiver is presented. The corresponding derivation for $M_2$ follows similar lines.
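Before turning to the leakage analysis, note that the reliability constraints \eqref{EQ:achiev_rb} can be checked mechanically for any candidate rate allocation. The Python sketch below is only an illustration: the dictionary keys mirror the rate variables ($R_1'$ is written `R1p`, $\tilde{R}_1$ is `Rt1`, and so on) and the mutual-information values are hypothetical placeholders, not computed from any channel.

```python
def reliable(rates, mi):
    """Check the reliability constraints (EQ:achiev_rb) for a candidate
    rate allocation.  `rates` maps rate components to values and `mi`
    maps single-letter mutual-information terms to (placeholder) values."""
    r = rates
    return (
        r["R1p"] + r["R2p"] > mi["I(U1;U2|U0)"]                                # rb1
        and r["R11"] + r["R1p"] + r["Rt1"] < mi["I(U1;Y1|U0)"]                 # rb2
        and r["R0"] + r["R20"] + r["R1"] + r["R1p"] + r["Rt1"] < mi["I(U0,U1;Y1)"]  # rb3
        and r["R22"] + r["R2p"] + r["Rt2"] < mi["I(U2;Y2|U0)"]                 # rb4
        and r["R0"] + r["R10"] + r["R2"] + r["R2p"] + r["Rt2"] < mi["I(U0,U2;Y2)"]  # rb5
    )

# Hypothetical mutual-information values and one feasible allocation.
mi = {"I(U1;U2|U0)": 0.1, "I(U1;Y1|U0)": 0.6, "I(U0,U1;Y1)": 1.0,
      "I(U2;Y2|U0)": 0.5, "I(U0,U2;Y2)": 0.9}
rates = {"R0": 0.1, "R1": 0.4, "R2": 0.3, "R10": 0.1, "R11": 0.3,
         "R20": 0.1, "R22": 0.2, "R1p": 0.1, "R2p": 0.1,
         "Rt1": 0.1, "Rt2": 0.1}
ok = reliable(rates, mi)
```

Such a check is a direct transcription of the bounds; it is not part of the proof, which instead eliminates the auxiliary rates analytically via FME.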
In all subsequent arguments, the random vectors $\mathbf{U}_0$, $\mathbf{U}_1$ and $\mathbf{U}_2$ stand for the $u_0$-, $u_1$- and $u_2$-codewords chosen by the encoder. Noting that $\mathbb{E}L_1(\mathbb{C})=I(M_1;\mathbf{Y}_2|\mathbb{C})$, we have \begin{align*} H(M_1|\mathbf{Y}_2,\mathbb{C})&\stackrel{(a)}{\geq} H(M_1|\mathbf{U}_0,\mathbf{U}_2,\mathbf{Y}_2,\mathbb{C})\\ &= H(M_1,\mathbf{Y}_2|\mathbf{U}_0,\mathbf{U}_2,\mathbb{C})-H(\mathbf{Y}_2|\mathbf{U}_0,\mathbf{U}_2,\mathbb{C})\\ &= H(M_1,\mathbf{U}_1,\mathbf{Y}_2|\mathbf{U}_0,\mathbf{U}_2,\mathbb{C})-H(\mathbf{U}_1|M_1,\mathbf{U}_0,\mathbf{U}_2,\mathbf{Y}_2,\mathbb{C})-H(\mathbf{Y}_2|\mathbf{U}_0,\mathbf{U}_2,\mathbb{C})\\ &\stackrel{(b)}\geq H(\mathbf{U}_1|\mathbf{U}_0,\mathbf{U}_2,\mathbb{C})-H(\mathbf{U}_1|M_1,\mathbf{U}_0,\mathbf{U}_2,\mathbf{Y}_2,\mathbb{C})-H(\mathbf{Y}_2|\mathbf{U}_0,\mathbf{U}_2,\mathbb{C})\\&\mspace{492mu}+H(\mathbf{Y}_2|M_1,\mathbf{U}_0,\mathbf{U}_1,\mathbf{U}_2,\mathbb{C})\\ &\stackrel{(c)}= H(\mathbf{U}_1|\mathbf{U}_0,\mathbb{C})-I(\mathbf{U}_1;\mathbf{U}_2|\mathbf{U}_0,\mathbb{C})-H(\mathbf{U}_1|M_1,\mathbf{U}_0,\mathbf{U}_2,\mathbf{Y}_2,\mathbb{C})-I(\mathbf{U}_1;\mathbf{Y}_2|\mathbf{U}_0,\mathbf{U}_2,\mathbb{C})\numberthis\label{EQ:secrecy_analysis} \end{align*} where (a) and (b) follow because conditioning cannot increase entropy, while (c) follows since $\mathbf{Y}_2-(\mathbf{U}_0,\mathbf{U}_1,\mathbf{U}_2,\mathbb{C})-M_1$ forms a Markov chain (this can be shown using functional dependence graphs \cite{Kramer_FDG2003}). \par We evaluate each term in (\ref{EQ:secrecy_analysis}) separately using the following lemmas. 
\begin{lemma}\label{LEMMA:1} For any $\epsilon_1,\epsilon_2>0$, there is a sufficiently large $n$ for which \begin{subequations} \begin{align} I(\mathbf{U}_1;\mathbf{U}_2|\mathbf{U}_0,\mathbb{C})&\leq nI(U_1;U_2|U_0)+n\epsilon_1\label{EQ:lemma1_ineq1}\\ I(\mathbf{U}_1;\mathbf{Y}_2|\mathbf{U}_0,\mathbf{U}_2,\mathbb{C})&\leq nI(U_1;Y_2|U_0,U_2)+n\epsilon_2.\label{EQ:lemma1_ineq2} \end{align} \end{subequations} \end{lemma} \begin{lemma}\label{LEMMA:2} For any $\epsilon_3>0$ there is a sufficiently large $n$ for which \begin{equation} H(\mathbf{U}_1|M_1,\mathbf{U}_0,\mathbf{U}_2,\mathbf{Y}_2,\mathbb{C})\leq n\epsilon_3.\label{EQ:lemma2_ineq} \end{equation} \end{lemma} The proofs of Lemmas \ref{LEMMA:1} and \ref{LEMMA:2} are similar to those of \cite[Lemmas 2 and 3]{BC_Confidential_Yates2008}. For completeness, we give the proofs in Appendix \ref{APPEN:lemmas_proof}. Next, let $I_1$ denote the random variable that represents the choice of the index $i_1\in\mathcal{I}_1$ and observe that \begin{align*} H(\mathbf{U}_1|\mathbf{U}_0=\mathbf{u}_0,\mathbb{C})&=H(M_{11},W_1,I_1,\mathbf{U}_1|\mathbf{U}_0=\mathbf{u}_0,\mathbb{C})-H(M_{11},W_1,I_1|\mathbf{U}_1,\mathbf{U}_0=\mathbf{u}_0,\mathbb{C})\\ &\stackrel{(a)}=H(M_{11},W_1,I_1|\mathbf{U}_0=\mathbf{u}_0,\mathbb{C})-H(M_{11},W_1,I_1|\mathbf{U}_1,\mathbf{U}_0=\mathbf{u}_0,\mathbb{C})\\ &\stackrel{(b)}\geq H(M_{11},W_1,I_1|\mathbf{U}_0=\mathbf{u}_0,\mathbb{C})-n\epsilon_4\\ &\stackrel{(c)}=n(R_{11}+\tilde{R}_1+R_1')-n\epsilon_4\numberthis\label{EQ:first_term1} \end{align*} where:\\ (a) follows since conditioned on $\mathbf{U}_0=\mathbf{u}_0$ and $\mathbb{C}$, $\mathbf{U}_1$ is defined by $(M_{11},W_1,I_1)$;\\ (b) follows from Fano's inequality. Namely, by (\ref{EQ:achiev_rb2}) we have that the error probability in decoding $(M_{11},W_1,I_1)$ from $(\mathbf{u}_0,\mathbf{U}_1)$ is upper bounded by $\eta_\epsilon^{(4)}$, which is arbitrarily small for sufficiently large $n$.
Fano's inequality implies that \begin{equation} H(M_{11},W_1,I_1|\mathbf{U}_1,\mathbf{U}_0=\mathbf{u}_0,\mathbb{C})\leq n\epsilon_4, \end{equation} where $\epsilon_4=\frac{1}{n}+\eta_\epsilon^{(4)}(R_{11}+\tilde{R}_1+R_1')$;\\ (c) follows by the symmetry of the random codebook, which implies that conditioned on $\mathbf{U}_0=\mathbf{u}_0$, the triple $(M_{11},W_1,I_1)$ attains $2^{n(R_{11}+\tilde{R}_1+R_1')}$ values with equal probabilities. Based on (\ref{EQ:first_term1}) we have \begin{equation} H(\mathbf{U}_1|\mathbf{U}_0,\mathbb{C})\geq n(R_{11}+\tilde{R}_1+R_1')-n\epsilon_4.\label{EQ:uniform_entropy} \end{equation} Inserting (\ref{EQ:uniform_entropy}) into (\ref{EQ:secrecy_analysis}) and using Lemmas \ref{LEMMA:1} and \ref{LEMMA:2} yields \begin{align*} H(M_1|\mathbf{Y}_2,\mathbb{C})&\geq n\big(R_{11}+\tilde{R}_1+R_1'-\epsilon_4-I(U_1;U_2|U_0)-\epsilon_1-\epsilon_3-I(U_1;Y_2|U_0,U_2)-\epsilon_2\big)\\ &\stackrel{(a)}=n\big(R_1+\tilde{R}_1+R'_1-R_{10}-I(U_1;Y_2|U_0,U_2)-I(U_1;U_2|U_0)-\epsilon_5\big)\\ &\stackrel{(b)}\geq nR_1 -n(L_1+\epsilon_5) \end{align*} where (a) follows by denoting $\epsilon_5\triangleq\sum_{i=1}^4\epsilon_i$ and using (\ref{EQ:achiev_partial_rates_sum}) and (\ref{EQ:achiev_partial_rates_bound2}), while (b) follows by taking \begin{subequations} \begin{align} \tilde{R}_1+R'_1-R_{10}&> I(U_1;Y_2|U_0,U_2)+I(U_1;U_2|U_0)-L_1\label{EQ:achiev_extra_rb3}\\ R_1'+L_1-R_{10}&>I(U_1;U_2|U_0).\label{EQ:achiev_extra_rb1} \end{align}\label{EQ:achiev_extra_rb_m1} \end{subequations} \vspace{-6.5mm} \noindent The bound in (\ref{EQ:achiev_extra_rb1}) ensures that an $\tilde{R}_1>0$ that satisfies (\ref{EQ:achiev_rb6}) and (\ref{EQ:achiev_extra_rb3}) is feasible. Note that $\epsilon_5$ can be made arbitrarily small as $n$ grows, which implies that there is an $n$ for which $\mathbb{E}L_1(\mathbb{C})\leq L_1+\xi_1$.
A similar analysis of the average rate leaked from $M_2$ to the first receiver shows that $\mathbb{E}L_2(\mathbb{C})\leq L_2+\xi_2$ for sufficiently large $n$ if \begin{subequations} \begin{align} \tilde{R}_2+R'_2-R_{20}&> I(U_2;Y_1|U_0,U_1)+I(U_1;U_2|U_0)-L_2\label{EQ:achiev_extra_rb4}\\ R_2'+L_2-R_{20}&>I(U_1;U_2|U_0).\label{EQ:achiev_extra_rb2} \end{align}\label{EQ:achiev_extra_rb_m2} \end{subequations} \vspace{-6.5mm} \indent By applying the Selection Lemma \cite[Lemma 2.2]{Bloch_Barros_Secrecy_Book2011} to the sequence of random variables $\big\{\mathbb{C}_n\big\}_{n\in\mathbb{N}}$ and the functions $P_e$, $L_1$ and $L_2$, we conclude that there exists a sufficiently large $n\in\mathbb{N}$ and a realization $\mathcal{C}_n$ of $\mathbb{C}_n$ that satisfies \eqref{EQ:achiev_realibility_leakage}. Finally, we apply FME on \eqref{EQ:achiev_rb}-\eqref{EQ:achiev_rb_more} and \eqref{EQ:achiev_extra_rb_m1}-\eqref{EQ:achiev_extra_rb_m2}, while using \eqref{EQ:achiev_partial_rates} and the non-negativity of the involved terms, to eliminate $R_{j0}$, $R_j'$ and $\tilde{R}_j$, for $j=1,2$. Since all the above linear inequalities have constant coefficients, the FME can be performed by a computer program, e.g., by the FME-IT software \cite{FME&ITIP}. This shows the sufficiency of (\ref{EQ:region_inner}). \begin{remark} Applying FME on (\ref{EQ:achiev_rb})-(\ref{EQ:achiev_rb_more}) and (\ref{EQ:achiev_extra_rb_m1})-(\ref{EQ:achiev_extra_rb_m2}) gives the rate bounds (\ref{EQ:region_inner}) as well as the inequality \begin{equation} I(U_1;Y_1|U_0)+I(U_2;Y_2|U_0)-I(U_1;U_2|U_0)\geq 0.\label{EQ:achiev_redun_ineq} \end{equation} However, if (\ref{EQ:achiev_redun_ineq}) is violated, then $P_{U_0,U_1,U_2}$ is not a good choice for code design. Setting $U_1=U_2=0$ and keeping the same $U_0$ (a choice which always satisfies (\ref{EQ:achiev_redun_ineq})) achieves a larger region than the one achieved by $P_{U_0,U_1,U_2,X}Q_{Y_1,Y_2|X}$.
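The elimination itself is purely mechanical because every inequality is linear with constant coefficients. The following Python sketch (a toy illustration of the core Fourier-Motzkin step, not the FME-IT tool) eliminates one variable from a system $\{c\cdot x\leq b\}$ by pairing each of its upper bounds with each of its lower bounds.

```python
from fractions import Fraction

def fme_eliminate(ineqs, k):
    """One Fourier-Motzkin step: eliminate variable k from the system
    {c . x <= b}, where each inequality is a pair (c, b)."""
    pos = [(c, b) for c, b in ineqs if c[k] > 0]   # upper bounds on x_k
    neg = [(c, b) for c, b in ineqs if c[k] < 0]   # lower bounds on x_k
    out = [(c, b) for c, b in ineqs if c[k] == 0]  # rows without x_k
    # Every (upper, lower) pair yields one inequality free of x_k:
    # scale the rows by positive multipliers so the x_k terms cancel.
    for cp, bp in pos:
        for cn, bn in neg:
            lp, ln = Fraction(-cn[k]), Fraction(cp[k])
            out.append(([lp * a + ln * d for a, d in zip(cp, cn)],
                        lp * bp + ln * bn))
    return out

# Toy system in (x0, x1): x0 + x1 <= 4, -x0 <= 0, x0 - x1 <= 1.
system = [([1, 1], Fraction(4)), ([-1, 0], Fraction(0)), ([1, -1], Fraction(1))]
reduced = fme_eliminate(system, 0)  # inequalities involving x1 only
```

Iterating this step over $R_{j0}$, $R_j'$ and $\tilde{R}_j$ (and pruning redundant rows) is exactly what an FME tool automates.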
\end{remark} \subsection{Proof of Corollary \ref{COR:inactive_leakage}}\label{SUBSEC:inactive_leakage_proof} Fix $(L_1,L_2)\in\mathbb{R}^2_+$ and $P_{U_0,U_1,U_2,X}\in\mathcal{P}(\mathcal{U}_0\times\mathcal{U}_1\times\mathcal{U}_2\times\mathcal{X})$. The rate bounds describing $\tilde{\mathcal{R}}_{\mathrm{I}}(L_1,L_2,P_{U_0,U_1,U_2,X})$ are: \begin{subequations} \begin{align} R_1&\leq I(U_1;Y_1|U_0)-I(U_1;U_2|U_0)-I(U_1;Y_2|U_0,U_2)+L_1\label{EQ:region_nocommon_inner11}\\ R_1&\leq I(U_0,U_1;Y_1)\label{EQ:region_nocommon_inner13}\\ R_2&\leq I(U_2;Y_2|U_0)-I(U_1;U_2|U_0)-I(U_2;Y_1|U_0,U_1)+L_2\label{EQ:region_nocommon_inner21}\\ R_2&\leq I(U_0,U_2;Y_2)\label{EQ:region_nocommon_inner23}\\ R_1+R_2&\leq I(U_0,U_1;Y_1)+I(U_2;Y_2|U_0)-I(U_1;U_2|U_0)-I(U_1;Y_2|U_0,U_2)+L_1\label{EQ:region_nocommon_inner_sum1}\\ R_1+R_2&\leq I(U_0,U_1;Y_1)+I(U_2;Y_2|U_0)-I(U_1;U_2|U_0)\label{EQ:region_nocommon_inner_sum2}\\ R_1+R_2&\leq I(U_1;Y_1|U_0)+I(U_0,U_2;Y_2)-I(U_1;U_2|U_0)-I(U_2;Y_1|U_0,U_1)+L_2\label{EQ:region_nocommon_inner_sum3}\\ R_1+R_2&\leq I(U_1;Y_1|U_0)+I(U_0,U_2;Y_2)-I(U_1;U_2|U_0).\label{EQ:region_nocommon_inner_sum4} \end{align}\label{EQ:region_nocommon_inner} \end{subequations} \vspace{-6.5mm} To prove the first claim, assume that $L_1\geq L_1^\star(P_{U_0,U_1,U_2,X})$. Consequently, the RHS of \eqref{EQ:region_nocommon_inner11} satisfies \begin{align*} I(U_1;Y_1|U_0)-I(U_1;U_2|U_0)-I(U_1;Y_2|U_0,U_2)+L_1&=I(U_0,U_1;Y_1)-I(U_0;Y_1)-I(U_1;U_2,Y_2|U_0)+L_1\\ &\geq I(U_0,U_1;Y_1),\numberthis \end{align*} which makes \eqref{EQ:region_nocommon_inner11} inactive due to \eqref{EQ:region_nocommon_inner13}. Similarly, \eqref{EQ:region_nocommon_inner_sum1} is redundant due to \eqref{EQ:region_nocommon_inner_sum2}, and therefore, $\tilde{\mathcal{R}}_{\mathrm{I}}(L_1,L_2,P_{U_0,U_1,U_2,X})=\tilde{\mathcal{R}}_{\mathrm{I}}(\infty,L_2,P_{U_0,U_1,U_2,X})$.
An analogous argument with respect to $L_2$ proves the second claim (essentially by showing that if $L_2\geq L_2^\star$ then \eqref{EQ:region_nocommon_inner21} and \eqref{EQ:region_nocommon_inner_sum3} are inactive due to \eqref{EQ:region_nocommon_inner23} and \eqref{EQ:region_nocommon_inner_sum4}, respectively). The third claim follows by combining both preceding arguments. \subsection{Proof of Theorem \ref{TM:outer_bound}}\label{SUBSEC:outer_proof} We show that given an $(L_1,L_2)$-achievable rate triple $(R_0,R_1,R_2)$, there is a PMF $P_{W,U,V}P_{X|U,V}Q_{Y_1,Y_2|X}$, for which (\ref{EQ:region_outer}) holds. Due to the symmetric structure of the rate bounds defining $\mathcal{R}_{\mathrm{O}}(L_1,L_2)$, we present only the derivation of (\ref{EQ:region_outer0})-(\ref{EQ:region_outer13}) and (\ref{EQ:region_outer_sum1}). The other inequalities in (\ref{EQ:region_outer}) are established by similar arguments. Since $(R_0,R_1,R_2)$ is $(L_1,L_2)$-achievable, for every $\epsilon,\xi_1,\xi_2>0$ there is a sufficiently large $n$ and an $(n,R_0,R_1,R_2)$ code $\mathcal{C}_n$ for which \eqref{EQ:achiev_realibility_leakage} holds. The conditioning on $\mathcal{C}_n$ is omitted throughout this proof. Instead, we note that subsequent entropy and information measures are calculated with respect to the PMF from \eqref{EQ:induced_PMF} that is specified by $\mathcal{C}_n$. Fix $\epsilon,\xi_1,\xi_2>0$ and a corresponding $n$. By Fano's inequality we have \begin{equation} H(M_0,M_j|Y_j^n)\leq 1+n\epsilon R_j\triangleq n\epsilon_n^{(j)}.\label{EQ:outer_Fano} \end{equation} Define $\epsilon_n=\max\big\{\epsilon_n^{(1)},\epsilon_n^{(2)}\big\}$.
Next, by (\ref{EQ:achieve_leakage1}), we write \begin{align*} n(L_1+\xi_1)&\geq I(M_1;Y_2^n)\\ &=I(M_1;M_0,M_2,Y_2^n)-I(M_1;M_0,M_2|Y_2^n)\\ &\stackrel{(a)}\geq I(M_1;Y_2^n|M_0,M_2)-H(M_0,M_2|Y_2^n)\\ &\stackrel{(b)}\geq I(M_1;Y_2^n|M_0,M_2)-n\epsilon_n\numberthis\label{EQ:outer_secrecy_cond_info} \end{align*} where (a) follows from the independence of $M_1$ and $(M_0,M_2)$ and the non-negativity of entropy, while (b) follows from (\ref{EQ:outer_Fano}). (\ref{EQ:outer_secrecy_cond_info}) implies \begin{equation} I(M_1;Y_2^n|M_0,M_2)\leq nL_1+n(\xi_1+\epsilon_n).\label{EQ:outer_secrecy_cond_info2} \end{equation} Similarly, we have \begin{equation} I(M_1;Y_2^n|M_0)\leq nL_1+n(\xi_1+\epsilon_n).\label{EQ:outer_secrecy_cond_info3} \end{equation} \par The common message rate $R_0$ satisfies \begin{subequations} \begin{align*} nR_0&=H(M_0)\\ &\stackrel{(a)}\leq I(M_0;Y_1^n)+n\epsilon_n\\ &=\sum_{i=1}^nI(M_0;Y_{1,i}|Y_1^{i-1})+n\epsilon_n\\ &\leq\sum_{i=1}^nI(M_0,Y_1^{i-1};Y_{1,i})+n\epsilon_n\numberthis\label{EQ:outer_0UB_final1a}\\ &\stackrel{(b)}\leq\sum_{i=1}^nI(W_i;Y_{1,i})+n\epsilon_n\numberthis\label{EQ:outer_0UB_final1b} \end{align*} \end{subequations} where (a) follows by (\ref{EQ:outer_Fano}) and (b) follows by defining $W_i\triangleq(M_0,Y_1^{i-1},Y_{2,i+1}^n)$. 
By reversing the roles of $Y_1^n$ and $Y_2^n$ and repeating similar steps, we also have \begin{subequations} \begin{align} nR_0&\leq\sum_{i=1}^nI(M_0,Y_{2,i+1}^n;Y_{2,i})+n\epsilon_n\label{EQ:outer_0UB_final2a}\\ &\leq \sum_{i=1}^nI(W_i;Y_{2,i})+n\epsilon_n.\label{EQ:outer_0UB_final2b} \end{align} \end{subequations} For $R_1$, it follows that \begin{align*} nR_1&=H(M_1|M_0,M_2)\\ &\stackrel{(a)}\leq I(M_1;Y_1^n|M_0,M_2)-I(M_1;Y_2^n|M_0,M_2)+nL_1+n\delta^{(1)}_n\\ &\stackrel{(b)}=\sum_{i=1}^n\Big[I(M_1;Y_1^i,Y_{2,i+1}^n|M_0,M_2)-I(M_1;Y_1^{i-1},Y_{2,i}^n|M_0,M_2)\Big]+nL_1+n\delta^{(1)}_n\\ &=\sum_{i=1}^n\Big[I(M_1;Y_{1,i}|M_2,W_i)-I(M_1;Y_{2,i}|M_2,W_i)\Big]+nL_1+n\delta^{(1)}_n\\ &\stackrel{(c)}=\sum_{i=1}^n\Big[I(U_i;Y_{1,i}|W_i,V_i)-I(U_i;Y_{2,i}|W_i,V_i)\Big]+nL_1+n\delta^{(1)}_n\numberthis\label{EQ:outer_1UB_final1} \end{align*} where:\\ (a) follows from (\ref{EQ:outer_Fano}) and (\ref{EQ:outer_secrecy_cond_info}), and by denoting $\delta^{(1)}_n=2\epsilon_n+\xi_1$;\\ (b) follows from a telescoping identity \cite[Eqs. (9) and (11)]{Kramer_telescopic2011};\\ (c) follows by defining $U_i\triangleq (M_1,W_i)$ and $V_i\triangleq (M_2,W_i)$. $R_1$ is also upper bounded as \begin{align*} nR_1&=H(M_1|M_0)\\ &\stackrel{(a)}\leq I(M_1;Y_1^n|M_0)-I(M_1;Y_2^n|M_0)+nL_1+n\delta^{(1)}_n\\ &\stackrel{(b)}=\sum_{i=1}^n\Big[I(M_1;Y_1^i,Y_{2,i+1}^n|M_0)-I(M_1;Y_1^{i-1},Y_{2,i}^n|M_0)\Big]+nL_1+n\delta^{(1)}_n\\ &\stackrel{(c)}=\sum_{i=1}^n\Big[I(U_i;Y_{1,i}|W_i)-I(U_i;Y_{2,i}|W_i)\Big]+nL_1+n\delta^{(1)}_n\numberthis\label{EQ:outer_1UB_final2} \end{align*} where:\\ (a) follows from (\ref{EQ:outer_Fano}) and (\ref{EQ:outer_secrecy_cond_info3});\\ (b) follows from a telescoping identity;\\ (c) follows by the definition of $(W_i,U_i)$. 
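To spell out step (b) (our restatement of the telescoping identity from \cite{Kramer_telescopic2011}, with the shorthand $a_i$ introduced only for this display): set $a_i\triangleq I(M_1;Y_1^i,Y_{2,i+1}^n|M_0,M_2)$ and use the conventions $Y_1^0=\emptyset$ and $Y_{2,n+1}^n=\emptyset$, so that the summands telescope:

```latex
\sum_{i=1}^n\Big[I(M_1;Y_1^i,Y_{2,i+1}^n|M_0,M_2)-I(M_1;Y_1^{i-1},Y_{2,i}^n|M_0,M_2)\Big]
  =\sum_{i=1}^n\big(a_i-a_{i-1}\big)=a_n-a_0
  =I(M_1;Y_1^n|M_0,M_2)-I(M_1;Y_2^n|M_0,M_2).
```

The second chain uses the same identity with the conditioning on $M_2$ dropped.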
For the sum $R_0+R_1$, we have \begin{align*} n(R_0+R_1)&= H(M_0,M_1)\\ &\stackrel{(a)}\leq I(M_0,M_1;Y_1^n)+n\epsilon_n\\ &= \sum_{i=1}^n I(M_0,M_1;Y_{1,i}|Y_1^{i-1})+n\epsilon_n\\ &\stackrel{(b)}\leq \sum_{i=1}^n I(W_i,U_i;Y_{1,i})+n\epsilon_n\numberthis\label{EQ:outer_1UB_final3} \end{align*} where (a) follows from (\ref{EQ:outer_Fano}) and (b) follows by the definition of $(W_i,U_i)$. Moreover, consider \begin{align*} n(R_0+R_1)&= H(M_1|M_0)+H(M_0)\\ &\stackrel{(a)}\leq I(M_1;Y_1^n|M_0)+I(M_0;Y_2^n)+n\epsilon_n\\ &\leq \sum_{i=1}^n \Big[I(M_1,Y_{2,i+1}^n;Y_{1,i}|M_0,Y_1^{i-1})+I(M_0;Y_{2,i}|Y_{2,i+1}^n)\Big]+n\epsilon_n\\ &= \sum_{i=1}^n \Big[I(U_i;Y_{1,i}|W_i)+I(Y_{2,i+1}^n;Y_{1,i}|M_0,Y_1^{i-1})+I(M_0;Y_{2,i}|Y_{2,i+1}^n)\Big]+n\epsilon_n\\ &\stackrel{(b)}= \sum_{i=1}^n \Big[I(U_i;Y_{1,i}|W_i)+I(Y_1^{i-1};Y_{2,i}|M_0,Y_{2,i+1}^n)+I(M_0;Y_{2,i}|Y_{2,i+1}^n)\Big]+n\epsilon_n\\ &\stackrel{(c)}\leq \sum_{i=1}^n \Big[I(U_i;Y_{1,i}|W_i)+I(W_i;Y_{2,i})\Big]+n\epsilon_n\numberthis\label{EQ:outer_1UB_final4} \end{align*} where (a) follows from (\ref{EQ:outer_Fano}), (b) follows from Csisz{\'a}r's sum identity, while (c) follows by the definition of $(W_i,U_i)$. \par To bound the sum $R_0+R_1+R_2$, we start by writing \begin{align*} H(M_1|M_0,M_2)&\stackrel{(a)}\leq I(M_1;Y_1^n|M_0,M_2)+n\epsilon_n\\ &=\sum_{i=1}^nI(M_1;Y_{1,i}|M_0,M_2,Y_1^{i-1})+n\epsilon_n\\ &\leq\sum_{i=1}^nI(M_1,Y_{2,i+1}^n;Y_{1,i}|M_0,M_2,Y_1^{i-1})+n\epsilon_n\\ &\stackrel{(b)}= \sum_{i=1}^n \Big[I(U_i;Y_{1,i}|W_i,V_i)+I(Y_{2,i+1}^n;Y_{1,i}|M_0,M_2,Y_1^{i-1})\Big]+n\epsilon_n\numberthis\label{EQ:outer_sumUB_temp1|02} \end{align*} where (a) follows from (\ref{EQ:outer_Fano}), while (b) follows by the definition of $(W_i,U_i,V_i)$. 
Moreover, we have \begin{align*} H(M_2|M_0)&\stackrel{(a)}\leq I(M_2;Y_2^n|M_0)+n\epsilon_n\\ &\stackrel{(b)}=\sum_{i=1}^n\Big[I(M_2;Y_{2,i}^n|M_0,Y_1^{i-1})-I(M_2;Y_{2,i+1}^n|M_0,Y_1^i)\Big]+n\epsilon_n\\ &\begin{multlined}[b][.87\textwidth]\stackrel{(c)}=\sum_{i=1}^n\Big[I(M_2;Y_{2,i+1}^n|M_0,Y_1^{i-1})+I(V_i;Y_{2,i}|W_i)-I(M_2;Y_{1,i},Y_{2,i+1}^n|M_0,Y_1^{i-1})\\+I(M_2;Y_{1,i}|M_0,Y_1^{i-1})\Big]+n\epsilon_n\end{multlined}\\ &\stackrel{(d)}=\sum_{i=1}^n\Big[I(V_i;Y_{2,i}|W_i)-I(V_i;Y_{1,i}|W_i)+I(M_2;Y_{1,i}|M_0,Y_1^{i-1})\Big]+n\epsilon_n\numberthis\label{EQ:outer_sumUB_temp2|0} \end{align*} where:\\ (a) follows from (\ref{EQ:outer_Fano});\\ (b) follows from a telescoping identity;\\ (c) follows from the mutual information chain rule and the definition of $(W_i,V_i)$;\\ (d) follows by the mutual information chain rule. \par Combining (\ref{EQ:outer_sumUB_temp1|02}) and (\ref{EQ:outer_sumUB_temp2|0}) yields \begin{subequations} \begin{align*} n(R_1+R_2)&\leq \sum_{i=1}^n \Big[I(U_i;Y_{1,i}|W_i,V_i)+I(V_i;Y_{2,i}|W_i)-I(V_i;Y_{1,i}|W_i)+I(M_2,Y_{2,i+1}^n;Y_{1,i}|M_0,Y_1^{i-1})\Big]+2n\epsilon_n\\ &=\sum_{i=1}^n \Big[I(U_i;Y_{1,i}|W_i,V_i)+I(V_i;Y_{2,i}|W_i)+I(Y_{2,i+1}^n;Y_{1,i}|M_0,Y_1^{i-1})\Big]+2n\epsilon_n.\numberthis\label{EQ:outer_sumUB_temp12|0a} \end{align*} By applying Csisz{\'a}r's sum identity on the last term in (\ref{EQ:outer_sumUB_temp12|0a}), we have \begin{equation} n(R_1+R_2)\leq\sum_{i=1}^n\Big[I(U_i;Y_{1,i}|W_i,V_i)+I(V_i;Y_{2,i}|W_i)+I(Y_1^{i-1};Y_{2,i}|M_0,Y_{2,i+1}^n)\Big]+2n\epsilon_n.\label{EQ:outer_sumUB_temp12|0b} \end{equation} \end{subequations} Combining (\ref{EQ:outer_0UB_final1a}) with (\ref{EQ:outer_sumUB_temp12|0a}) and (\ref{EQ:outer_0UB_final2a}) with (\ref{EQ:outer_sumUB_temp12|0b}) yields \begin{equation} n(R_0+R_1+R_2)\leq\sum_{i=1}^n \Big[I(U_i;Y_{1,i}|W_i,V_i)+I(V_i;Y_{2,i}|W_i)+I(W_i;Y_{1,i})\Big]+3n\epsilon_n\label{EQ:outer_sumUB_final1} \end{equation} and \begin{equation} n(R_0+R_1+R_2)\leq\sum_{i=1}^n
\Big[I(U_i;Y_{1,i}|W_i,V_i)+I(V_i;Y_{2,i}|W_i)+I(W_i;Y_{2,i})\Big]+3n\epsilon_n,\label{EQ:outer_sumUB_final2} \end{equation} respectively. By repeating similar steps, we obtain bounds related to the remaining rate bounds in (\ref{EQ:region_outer}): \begin{align} nR_2&\leq\sum_{i=1}^n\Big[I(V_i;Y_{2,i}|W_i,U_i)-I(V_i;Y_{1,i}|W_i,U_i)\Big]+nL_2+n\delta^{(2)}_n\label{EQ:outer_2UB_final1}\\ nR_2&\leq\sum_{i=1}^n\Big[I(V_i;Y_{2,i}|W_i)-I(V_i;Y_{1,i}|W_i)\Big]+nL_2+n\delta^{(2)}_n\label{EQ:outer_2UB_final2}\\ n(R_0+R_2)&\leq\sum_{i=1}^n I(W_i,V_i;Y_{2,i})+n\epsilon_n\label{EQ:outer_02UB_final1}\\ n(R_0+R_2)&\leq\sum_{i=1}^n \Big[I(V_i;Y_{2,i}|W_i)+I(W_i;Y_{1,i})\Big]+n\epsilon_n\label{EQ:outer_02UB_final2}\\ n(R_0+R_1+R_2)&\leq\sum_{i=1}^n \Big[I(U_i;Y_{1,i}|W_i)+I(V_i;Y_{2,i}|W_i,U_i)+I(W_i;Y_{1,i})\Big]+3n\epsilon_n\label{EQ:outer_sumUB_final3}\\ n(R_0+R_1+R_2)&\leq\sum_{i=1}^n \Big[I(U_i;Y_{1,i}|W_i)+I(V_i;Y_{2,i}|W_i,U_i)+I(W_i;Y_{2,i})\Big]+3n\epsilon_n\label{EQ:outer_sumUB_final4} \end{align} where $\delta_n^{(2)}=2\epsilon_n+\xi_2$. The bounds are rewritten by introducing a time-sharing random variable $Q$ that is uniformly distributed over the set $[1:n]$. For instance, the bound (\ref{EQ:outer_1UB_final1}) is rewritten as \begin{align*} R_1&\leq\frac{1}{n}\sum_{q=1}^n\Big[I(U_q;Y_{1,q}|W_q,V_q)-I(U_q;Y_{2,q}|W_q,V_q)\Big]+L_1+\delta^{(1)}_n\\ &=\sum_{q=1}^n\mathbb{P}\big(Q=q\big)\Big[I(U_Q;Y_{1,Q}|W_Q,V_Q,Q=q)-I(U_Q;Y_{2,Q}|W_Q,V_Q,Q=q)\Big]+L_1+\delta^{(1)}_n\\ &\leq I(U_Q;Y_{1,Q}|W_Q,V_Q,Q)-I(U_Q;Y_{2,Q}|W_Q,V_Q,Q)+L_1+\delta^{(1)}_n.\numberthis\label{EQ:outer_1UBQ_final1} \end{align*} Denote $Y_1\triangleq Y_{1,Q},\ Y_2\triangleq Y_{2,Q},\ W\triangleq (W_Q,Q)$, $U\triangleq (U_Q,Q)$ and $V\triangleq (V_Q,Q)$. We thus obtain the bounds in (\ref{EQ:region_outer}) up to additive terms such as $\epsilon_n$ and $\delta_n^{(1)}$, which vanish as $n\to\infty$.
The converse is completed by noting that since the channel is memoryless and without feedback, and because $U_q=(M_1,W_q)$ and $V_q=(M_2,W_q)$, the chain \begin{equation} (Y_{1,q},Y_{2,q})-X_q-(U_q,V_q)-W_q\label{EQ:outer_Markov} \end{equation} is Markov for every $q\in[1:n]$. This implies that $(Y_1,Y_2)-X-(U,V)-W$ is a Markov chain. \subsection{Proof of Theorem \ref{TM:SDBC_leakage_capacity}}\label{SUBSEC:SDBC_proof} To establish the direct part of Theorem \ref{TM:SDBC_leakage_capacity} we show that $\mathcal{C}_{\mathrm{SD}}(L_1,L_2)\subseteq\mathcal{R}_{\mathrm{I}}(L_1,L_2)$, which follows by setting $U_0=W$, $U_1=Y_1$ and $U_2=V$ in Theorem \ref{TM:inner_bound}. For the converse we show that $\mathcal{R}_{\mathrm{O}}(L_1,L_2)\subseteq\mathcal{C}_{\mathrm{SD}}(L_1,L_2)$. For every PMF $P_{W,V,Y_1,X}Q_{Y_2|X}$ for which $Y_1=f(X)$, we have the following chains of inequalities. The right-hand side (RHS) of (\ref{EQ:region_outer11}) is upper bounded by the RHS of (\ref{EQ:region_SDBC11}) since \begin{align*} R_1&\leq I(U;Y_1|W,V)-I(U;Y_2|W,V)+L_1\\ &=H(Y_1|W,V)-H(Y_1|W,V,U)-I(U;Y_2|W,V)+L_1\\ &\stackrel{(a)}\leq H(Y_1|W,V)-I(Y_1;Y_2|W,V,U)-I(U;Y_2|W,V)+L_1\\ &=H(Y_1|W,V)-I(U,Y_1;Y_2|W,V)+L_1\\ &\stackrel{(b)}\leq H(Y_1|W,V,Y_2)+L_1\numberthis\label{EQ:SDBC_proof_bound1} \end{align*} where (a) follows by the non-negativity of entropy and (b) follows because conditioning cannot increase entropy. 
Repeating similar steps while combining (\ref{EQ:region_outer0}) with (\ref{EQ:region_outer11}) yields (\ref{EQ:region_SDBC12}), i.e., we have \begin{equation} R_0+R_1\leq H(Y_1|W,V,Y_2)+I(W;Y_1)+L_1.\label{EQ:SDBC_proof_bound01a} \end{equation} Inequality (\ref{EQ:region_outer13}) implies (\ref{EQ:region_SDBC13}) since \begin{equation} R_0+R_1\leq I(W,U;Y_1)\leq H(Y_1).\label{EQ:SDBC_proof_bound01b} \end{equation} The rate bound (\ref{EQ:region_SDBC21}) coincides with (\ref{EQ:region_outer22}); combining (\ref{EQ:region_outer0}) with (\ref{EQ:region_outer22}) implies (\ref{EQ:region_SDBC22}), while (\ref{EQ:region_SDBC23}) follows from~(\ref{EQ:region_outer23}). For the sum of rates, (\ref{EQ:region_SDBC_sum1}) follows from (\ref{EQ:region_outer23}) and (\ref{EQ:SDBC_proof_bound1}), while (\ref{EQ:region_SDBC_sum2}) is implied by (\ref{EQ:region_outer_sum1}) since \begin{equation} I(U;Y_1|V,W)\leq H(Y_1|V,W).\label{EQ:SDBC_proof_temp} \end{equation} Finally, by combining (\ref{EQ:region_outer0}) and (\ref{EQ:region_outer_sum1}) while using (\ref{EQ:SDBC_proof_temp}) we have \begin{align*} 2R_0+R_1+R_2&\leq I(U;Y_1|W,V)+I(V;Y_2|W)+2\min\big\{I(W;Y_1),I(W;Y_2)\big\}\\ &\leq H(Y_1|W,V)+I(W,V;Y_2)+I(W;Y_1), \end{align*} which implies (\ref{EQ:region_SDBC_sum3}). Dropping the rest of the bounds from (\ref{EQ:region_outer}) only increases the region and shows that $\mathcal{R}_{\mathrm{O}}(L_1,L_2)\subseteq\mathcal{C}_{\mathrm{SD}}(L_1,L_2)$ (note that $\mathcal{R}_{\mathrm{O}}(L_1,L_2)$ is described by a union over PMFs that satisfy the Markov relation $X-(U,V)-W$, while in $\mathcal{C}_{\mathrm{SD}}(L_1,L_2)$ this restriction is dropped). This characterizes $\mathcal{C}_{\mathrm{SD}}(L_1,L_2)$ as the $(L_1,L_2)$-leakage-capacity region of the SD-BC. \section{Summary and Concluding Remarks}\label{SEC:summary} \par We considered the BC with privacy leakage constraints.
Under this model, all four scenarios concerning secrecy (i.e., when both, either, or neither of the private messages is secret) become special cases and are recovered by properly setting the leakage thresholds. Novel inner and outer bounds on the leakage-capacity region were derived and shown to be tight for SD and PD BCs, as well as for BCs with a degraded message set. Furthermore, we derived a condition on the allowed leakage values that determines whether a further increase of either leakage threshold enlarges the inner bound. The condition effectively lets one (numerically) calculate privacy leakage threshold values above which the inner bound saturates. The coding strategy that achieved the inner bound relied on Marton's coding scheme with a common message, but with an extra layer of binning. Each private message was split into a public and a private part, and the codebooks of the private parts were double-binned. Taking into account that the rate of the public parts is always leaked, the sizes of the bins in the extra layer were chosen to satisfy the total leakage constraints. The outer bound was derived by leveraging telescoping identities. \par The results for the BC with leakage constraints capture various past works. Large leakage thresholds reduce our inner and outer bounds to Marton's inner bound \cite{Liang_Kramer_RBC2007} and the UVW-outer bound \cite{UVW_Outer2010}, respectively. The leakage-capacity region of the SD-BC without a common message recovers the capacity regions where both \cite{Semi-det_BC_secrect_two2009}, either \cite{Goldfeld_Weak_Secrecy_ISIT2015,Semi-det_BC_secrect_one2009}, or neither \cite{GP_SemideterministicBC1980} private message is secret. The result for the BC with a degraded message set and a privacy leakage constraint captures the capacity regions for the BC with confidential messages \cite{Csiszar_Korner_BCconfidential1978} and the BC with a degraded message set (without secrecy) \cite{Korner_BC_DegradedMessageSet1977}.
Furthermore, our code construction for the inner bound is leakage-adaptive and recovers the best known codes for the aforementioned cases. A Blackwell BC example visualizes the transition of the leakage-capacity region from the capacity region without secrecy to the secrecy-capacity regions for different cases. \section*{Acknowledgements} The authors would like to thank the associate editor and the anonymous reviewers that helped us improve the presentation of this paper. We also kindly thank Ido B. Gattegno for his work on the FME-IT software \cite{FME&ITIP} that assisted us with technical details of proofs. \appendices \section{Equivalence of the Regions in (\ref{EQ:region_m2only_SDBC}) and (\ref{EQ:region_m2only_alt_SDBC})}\label{APPEN:SDBC_region_equality} \par Denote the region in (\ref{EQ:region_m2only_SDBC}) by $\mathcal{C}$ and recall that the region in (\ref{EQ:region_m2only_alt_SDBC}) is denoted by $\mathcal{C}^0_{\mathrm{SD}}(\infty,0)$. The inclusion $\mathcal{C}\subseteq\mathcal{C}^0_{\mathrm{SD}}(\infty,0)$ follows since (\ref{EQ:region_m2only_alt_SDBC1})-(\ref{EQ:region_m2only_alt_SDBC2}) coincide with (\ref{EQ:region_m2only_SDBC11})-(\ref{EQ:region_m2only_SDBC2}), while for (\ref{EQ:region_m2only_alt_SDBC_sum}) we have \begin{equation} H(Y_1|W,V)+I(W,V;Y_2)=H(Y_1|W)+I(W;Y_2)+I(V;Y_2|W)-I(V;Y_1|W)\stackrel{(a)}\geq R_1+R_2. \end{equation} Here (a) is due to (\ref{EQ:region_m2only_SDBC12})-(\ref{EQ:region_m2only_SDBC2}). To see that $\mathcal{C}^0_{\mathrm{SD}}(\infty,0)\subseteq\mathcal{C}$, let $(R_1,R_2)\in\mathcal{C}^0_{\mathrm{SD}}(\infty,0)$ be a rate pair achieved by $(W,V,X)$. We show that there is a triple $(W^\star,V^\star,X^\star)$ for which $(R_1,R_2)\in\mathcal{C}$. 
First, suppose that (\ref{EQ:region_m2only_alt_SDBC2}) holds with equality: \begin{equation} R_2=I(V;Y_2|W)-I(V;Y_1|W).\label{EQ:assumption_SDBC1} \end{equation} By taking $W^\star=W$, $V^\star=V$ and $X^\star=X$, (\ref{EQ:region_m2only_SDBC11}) and (\ref{EQ:region_m2only_SDBC2}) hold by (\ref{EQ:region_m2only_alt_SDBC1})-(\ref{EQ:region_m2only_alt_SDBC2}), while (\ref{EQ:region_m2only_SDBC12}) is satisfied since \begin{align*} H(Y_1|W^\star)+I(W^\star;Y_2)&\stackrel{(a)}=H(Y_1|W)+I(W;Y_2)+I(V;Y_2|W)-I(V;Y_1|W)-R_2\\ &=H(Y_1|W,V)+I(W,V;Y_2)-R_2\\ &\stackrel{(b)}\geq R_1,\numberthis \end{align*} where (a) and (b) follow from (\ref{EQ:assumption_SDBC1}) and (\ref{EQ:region_m2only_alt_SDBC_sum}), respectively. Next, assume that a strict inequality holds in (\ref{EQ:region_m2only_alt_SDBC2}), i.e., there is a real number $\gamma>0$ such that \begin{equation} R_2=I(V;Y_2|W)-I(V;Y_1|W)-\gamma.\label{EQ:assumption_SDBC2} \end{equation} Define $W^\star\triangleq(\Theta,\widetilde{W})$, where $\Theta$ is a binary random variable independent of $(W,V,X)$ that takes values in $\mathcal{O}=\{\theta_1,\theta_2\}$ with probabilities $\lambda>0$ and $1-\lambda$, respectively, and \begin{eqnarray} \widetilde{W}=\begin{cases} W\ ,& \Theta=\theta_1\\ (W,V)\ ,& \Theta=\theta_2 \end{cases}.\label{EQ:SDBC_Wstar} \end{eqnarray} Furthermore, let \begin{equation} \lambda=\frac{I(V;Y_2|W)-I(V;Y_1|W)-\gamma}{I(V;Y_2|W)-I(V;Y_1|W)},\label{EQ:lambda_SDBC} \end{equation} $V^\star=(W,V)$ and $X^\star=X$. Note that $X^\star-V^\star-W^\star$ forms a Markov chain and that (\ref{EQ:region_m2only_SDBC11}) follows from (\ref{EQ:region_m2only_alt_SDBC1}).
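As a quick numerical sanity check of this time-sharing argument (the mutual-information values below are hypothetical placeholders, not quantities computed in the paper), the weight $\lambda$ defined above lies in $(0,1)$ whenever $0<\gamma<I(V;Y_2|W)-I(V;Y_1|W)$, and averaging with weight $\lambda$ reproduces the target rate $R_2$:

```python
# Hypothetical values (dyadic, so the arithmetic below is exact in floating point).
I_V_Y2_W = 0.75   # stands in for I(V;Y2|W)
I_V_Y1_W = 0.25   # stands in for I(V;Y1|W)
gamma = 0.125     # slack in the rate inequality, 0 < gamma < gap

gap = I_V_Y2_W - I_V_Y1_W
lam = (gap - gamma) / gap   # the time-sharing weight lambda
assert 0 < lam < 1

# Scaling the rate gap by lam recovers R2 = gap - gamma.
R2 = lam * gap
print(lam, R2)  # 0.75 0.375
```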
To see that (\ref{EQ:region_m2only_SDBC2}) holds, consider \begin{equation} I(V^\star;Y_2|W^\star)-I(V^\star;Y_1|W^\star)\stackrel{(a)}=\lambda\Big[I(V;Y_2|W)-I(V;Y_1|W)\Big]\stackrel{(b)}=I(V;Y_2|W)-I(V;Y_1|W)-\gamma\stackrel{(c)}=R_2\numberthis\label{EQ:R1_holds_SDBC} \end{equation} where (a) follows from the definition of $(W^\star,V^\star)$, while (b) and (c) follow from (\ref{EQ:lambda_SDBC}) and (\ref{EQ:assumption_SDBC2}), respectively. We conclude the proof by showing that (\ref{EQ:region_m2only_SDBC12}) also holds. This follows because \begin{align*} H(Y_1|W^\star)+I(W^\star;Y_2)&\stackrel{(a)}=H(Y_1|W^\star)+I(W^\star;Y_2)+I(V^\star;Y_2|W^\star)-I(V^\star;Y_1|W^\star)-R_2\\ &=H(Y_1|W^\star,V^\star)+I(W^\star,V^\star;Y_2)-R_2\\ &\stackrel{(b)}=H(Y_1|W,V)+I(W,V;Y_2)-R_2\\ &\stackrel{(c)}\geq R_1\numberthis \end{align*} where (a) follows from (\ref{EQ:R1_holds_SDBC}), (b) follows since $Y_1-V^\star-W^\star$ forms a Markov chain and $V^\star=(W,V)$, while (c) follows from (\ref{EQ:region_m2only_alt_SDBC_sum}). \section{Equivalence of the Regions in (\ref{EQ:region_BCConf}) and (\ref{EQ:region_alt_BCConf})}\label{APPEN:BCCOnf_region_equality} \par Denote the region in (\ref{EQ:region_BCConf}) by $\mathcal{C}_{CK}$ and the region in (\ref{EQ:region_alt_BCConf}) by $\mathcal{C}_{\mathrm{DM}}(0)$. Since this proof mostly follows by arguments akin to those presented in Appendix \ref{APPEN:SDBC_region_equality}, we omit some of the details. First, $\mathcal{C}_{CK}\subseteq\mathcal{C}_{\mathrm{DM}}(0)$ follows since (\ref{EQ:region_BCConf02})-(\ref{EQ:region_BCConf1}) imply that (\ref{EQ:region_alt_BCConf0})-(\ref{EQ:region_alt_BCConf1}) hold, while (\ref{EQ:region_alt_BCConf_sum}) follows by combining (\ref{EQ:region_BCConf01}) and (\ref{EQ:region_BCConf1}). For the opposite inclusion $\mathcal{C}_{\mathrm{DM}}(0)\subseteq\mathcal{C}_{CK}$, let $(R_0,R_1)\in\mathcal{C}_{\mathrm{DM}}(0)$ be a rate pair achieved by $(W,U,X)$.
We construct a triple $(W^\star,U^\star,X^\star)$ that satisfies the Markov relation $W^\star-U^\star-X^\star-(Y_1,Y_2)$ and for which $(R_0,R_1)\in\mathcal{C}_{CK}$. If (\ref{EQ:region_alt_BCConf1}) holds with equality, i.e., if \begin{equation} R_1=I(U;Y_1|W)-I(U;Y_2|W),\label{EQ:assumption1} \end{equation} then we take $W^\star=W$, $U^\star=U$ and $X^\star=X$. With respect to this choice, (\ref{EQ:region_BCConf02})-(\ref{EQ:region_BCConf1}) follow from (\ref{EQ:region_alt_BCConf0})-(\ref{EQ:region_alt_BCConf1}), while (\ref{EQ:region_BCConf01}) is satisfied by combining (\ref{EQ:assumption1}) with (\ref{EQ:region_alt_BCConf_sum}). If, on the other hand, a strict inequality holds in (\ref{EQ:region_alt_BCConf1}), i.e., we have \begin{equation} R_1=I(U;Y_1|W)-I(U;Y_2|W)-\gamma\label{EQ:assumption2} \end{equation} where $\gamma$ is a real and positive number, then we define $W^\star\triangleq(\Theta,\widetilde{W})$. Here $\Theta$ is a binary random variable independent of $(W,U,X)$ as in Appendix \ref{APPEN:SDBC_region_equality}, and \begin{eqnarray} \widetilde{W}=\begin{cases} W\ ,& \Theta=\theta_1\\ U\ ,& \Theta=\theta_2 \end{cases}.\label{EQ:BCConf_Wstar} \end{eqnarray} Furthermore, set \begin{equation} \lambda=\frac{I(U;Y_1|W)-I(U;Y_2|W)-\gamma}{I(U;Y_1|W)-I(U;Y_2|W)},\label{EQ:lambda} \end{equation} $U^\star=U$ and $X^\star=X$. Note that $(Y_1,Y_2)-X^\star-U^\star-W^\star$ forms a Markov chain and consider the following. \begin{equation} I(W^\star;Y_2)=\lambda I(W;Y_2)+(1-\lambda)I(U;Y_2)\stackrel{(a)}=I(W;Y_2)+(1-\lambda)I(U;Y_2|W)\geq I(W;Y_2)\stackrel{(b)}\geq R_0\numberthis\label{EQ:R01_holds} \end{equation} where (a) follows from (\ref{EQ:lambda}) and (b) follows from (\ref{EQ:region_alt_BCConf0}). Thus, (\ref{EQ:region_BCConf02}) is satisfied.
To see that (\ref{EQ:region_BCConf1}) holds, consider \begin{equation} I(U^\star;Y_1|W^\star)-I(U^\star;Y_2|W^\star)\stackrel{(a)}=\lambda\Big[I(U;Y_1|W)-I(U;Y_2|W)\Big]\stackrel{(b)}=I(U;Y_1|W)-I(U;Y_2|W)-\gamma\stackrel{(c)}=R_1\numberthis\label{EQ:R1_holds} \end{equation} where (a) follows from the definition of $(W^\star,U^\star)$, while (b) and (c) follow from (\ref{EQ:lambda}) and (\ref{EQ:assumption2}), respectively. It remains to show that (\ref{EQ:region_BCConf01}) also holds. We begin by writing \begin{equation} I(U^\star;Y_2|W^\star)\stackrel{(a)}=\lambda I(U;Y_2|W)\leq I(U;Y_2|W),\numberthis\label{EQ:R02_temp1} \end{equation} where (a) follows from the definition of $(W^\star,U^\star)$. Finally, (\ref{EQ:region_BCConf01}) follows because \begin{align*} I(W^\star;Y_1)&\stackrel{(a)}=I(U^\star;Y_1|W^\star)-I(U^\star;Y_2|W^\star)+I(W^\star;Y_1)-R_1\\ &\stackrel{(b)}\geq I(W^\star,U;Y_1)-I(U;Y_2|W)-R_1\\ &\stackrel{(c)}\geq I(W,U;Y_1)-I(U;Y_2|W)-R_1\\ &\stackrel{(d)}\geq R_0\numberthis \end{align*} where (a) follows from (\ref{EQ:R1_holds}); (b) follows because $U^\star=U$ and by using (\ref{EQ:R02_temp1}); (c) follows since $Y_1-U-W^\star$ and $Y_1-U-W$ form Markov chains, which implies that $I(W^\star,U;Y_1)=I(U;Y_1)=I(W,U;Y_1)$; (d) follows from (\ref{EQ:region_alt_BCConf_sum}). \section{Proof of Corollary \ref{COR:DBC_leakage_capacity}}\label{APPEN:DBC_leakage_capacity_proof} The region $\mathcal{C}_{\mathrm{D}}(L_1,L_2)$ is obtained from $\mathcal{C}_{\mathrm{SD}}^0(L_1,L_2)$ by setting $W=0$ and $V=Y_2$, which implies that $\mathcal{C}_{\mathrm{D}}(L_1,L_2)\subseteq\mathcal{C}_{\mathrm{SD}}^0(L_1,L_2)$. For the converse, the RHS of (\ref{EQ:region_nocommon_SDBC11}) is upper bounded by \begin{equation} R_1\leq H(Y_1|W,V,Y_2)+L_1\leq H(Y_1|Y_2)+L_1.
\end{equation} For (\ref{EQ:region_nocommon_SDBC21}) and (\ref{EQ:region_nocommon_SDBC22}), respectively, we have \begin{align*} I(V;Y_2|W)-I(V;Y_1|W)+L_2&\leq I(V;Y_1,Y_2|W)-I(V;Y_1|W)+L_2\\ &=I(V;Y_2|W,Y_1)+L_2\\ &\leq H(Y_2|Y_1)+L_2\numberthis \end{align*} and \begin{equation} I(W,V;Y_2)\leq H(Y_2). \end{equation} Finally, (\ref{EQ:region_DBC_sum}) is implied by (\ref{EQ:region_nocommon_SDBC_sum2}) since \begin{align*} R_1+R_2&\leq H(Y_1|W,V)+I(V;Y_2|W)+\min\big\{I(W;Y_1),I(W;Y_2)\big\}\\ &\leq H(Y_1|W,V)+I(W,V;Y_2)\\ &\leq H(Y_1,Y_2|W,V)+I(W,V;Y_1,Y_2)\\ &= H(Y_1,Y_2).\numberthis \end{align*} To complete the proof, we drop (\ref{EQ:region_nocommon_SDBC_sum1}), which can only increase $\mathcal{C}_{\mathrm{SD}}^{0}(L_1,L_2)$. \section{Error Probability Analysis for the Proof of Theorem \ref{TM:inner_bound}}\label{APPEN:error_analysis} By the symmetry of the codebook construction with respect to $(M_0,M_1,M_2,W_1,W_2)$ and due to their uniformity, we may assume that $(M_0,M_1,M_2,W_1,W_2)=(1,1,1,1,1)$. Furthermore, because we are dealing with the expected error probability over the ensemble of codebooks, the subsequent error events are defined with respect to a new PMF $\Gamma\in\mathcal{P}(\mathcal{B}\times\mathcal{I}_1\times\mathcal{I}_2\times\mathcal{Y}_1^n\times\mathcal{Y}^n_2)$ that describes the random experiment of transmitting $(M_0,M_1,M_2,W_1,W_2)=(1,1,1,1,1)$ using a random codebook.
Specifically, we have \begin{align*} &\Gamma\left(\big\{\mathbf{u}_0(m_p)\big\}_{m_p},\big\{\mathbf{u}_1(m^{(1)}_p,m_{11},i'_1,w_1)\big\}_{(m^{(1)}_p,m_{11},i'_1,w_1)},\big\{\mathbf{u}_2(m^{(2)}_p,m_{22},i'_2,w_2)\big\}_{(m^{(2)}_p,m_{22},i'_2,w_2)},i_1,i_2,\mathbf{y}_1,\mathbf{y}_2\right)\\ &\begin{multlined}[b][.92\textwidth]=\prod_{m_p}P^n_{U_0}\big(\mathbf{u}_0(m_p)\big)\prod_{j=1,2}\left(\prod_{(m^{(j)}_p,m_{jj},i'_j,w_j)}P^n_{U_j|U_0}\Big(\mathbf{u}_j(m^{(j)}_p,m_{jj},i'_j,w_j)\Big|\mathbf{u}_0(m^{(j)}_p)\Big)\right)\\ \mspace{200mu}\times\Gamma\Big(i_1,i_2\Big|\mathbf{u}_0(1),\big\{\mathbf{u}_1(1,1,i'_1,1)\big\}_{i'_1},\big\{\mathbf{u}_2(1,1,i'_2,1)\big\}_{i'_2}\Big)\\\times Q^n_{Y_1,Y_2|U_0,U_1,U_2}\big(\mathbf{y}_1,\mathbf{y}_2\big|\mathbf{u}_0(1),\mathbf{u}_1(1,1,i_1,1),\mathbf{u}_2(1,1,i_2,1)\big)\end{multlined},\numberthis\label{EQ:random_code_PMF} \end{align*} where $\Gamma\Big(i_1,i_2\Big|\mathbf{u}_0(1),\big\{\mathbf{u}_1(1,1,i'_1,1)\big\}_{i'_1},\big\{\mathbf{u}_2(1,1,i'_2,1)\big\}_{i'_2}\Big)$ chooses $(i_1,i_2)\in\mathcal{I}_1\times\mathcal{I}_2$ according to the encoding rule described in Section \ref{SUBSEC:inner_proof}, and $$Q_{Y_1,Y_2|U_0,U_1,U_2}(y_1,y_2|u_0,u_1,u_2)\triangleq\sum_{x\in\mathcal{X}}P_{X|U_0,U_1,U_2}(x|u_0,u_1,u_2)Q_{Y_1,Y_2|X}(y_1,y_2|x).$$ The probability measure induced by $\Gamma$ is denoted by $\mathbb{P}_\Gamma$. \subsection{Encoding/Decoding Errors} Consider the following error events. \textbf{Encoding errors:} An encoding error event is described as \begin{equation} \mathcal{E}=\bigcap_{(i_1,i_2)\in\mathcal{I}_1\times\mathcal{I}_2}\Big\{\big(\mathbf{U}_0(1),\mathbf{U}_1(1,1,i_1,1),\mathbf{U}_2(1,1,i_2,1)\big)\notin\mathcal{T}_\epsilon^{n}(P_{U_0,U_1,U_2})\Big\}\label{EQ:analysis_event0}.
\end{equation} \textbf{Decoding errors:} To account for decoding errors, define \begin{subequations} \begin{equation} \mathcal{D}_0=\Big\{\big(\mathbf{U}_0(1),\mathbf{U}_1(1,1,I_1,1),\mathbf{U}_2(1,1,I_2,1),\mathbf{Y}_1,\mathbf{Y}_2\big)\in\mathcal{T}_\epsilon^{n}(P_{U_0,U_1,U_2,Y_1,Y_2})\Big\}\label{EQ:analysis_event_LLN} \end{equation} and \begin{equation} \mathcal{D}_j(m_p,m_{jj},i_j,w_j)=\Big\{\big(\mathbf{U}_0(m_p),\mathbf{U}_j(m_p,m_{jj},i_j,w_j),\mathbf{Y}_j\big)\in\mathcal{T}_\epsilon^{n}(P_{U_0,U_j,Y_j})\Big\}\label{EQ:analysis_event1}, \end{equation}\label{EQ:decoding_errors} \end{subequations} where $j=1,2$. By the union bound, the expected error probability is bounded as \begin{align*} &\mathbb{E}P_e(\mathbb{C})\leq\mathbb{P}_{\Gamma}\bBigg@{4}(\mathcal{E}\cup\mathcal{D}_0^c\cup\mathcal{D}_1(1,1,I_1,1)^c\cup\mathcal{D}_2(1,1,I_2,1)^c\\ &\mspace{80mu}\cup\left\{\bigcup_{(\tilde{m}_p,\tilde{m}_{11},\tilde{w}_1)\neq(1,1,1)}\mspace{-15mu}\mathcal{D}_1(\tilde{m}_p,\tilde{m}_{11},I_1,\tilde{w}_1)\right\}\cup\left\{\bigcup_{(\tilde{m}_p,\tilde{m}_{22},\tilde{w}_2)\neq(1,1,1)}\mspace{-15mu}\mathcal{D}_2(\tilde{m}_p,\tilde{m}_{22},I_2,\tilde{w}_2)\right\}\bBigg@{4})\\ &\begin{multlined}[b][.87\textwidth]\leq\mathbb{P}_{\Gamma}\big(\mathcal{E}\big)+\mathbb{P}_{\Gamma}\big(\mathcal{D}_0^c\cap\mathcal{E}^c\big)+\sum_{j=1,2}\bBigg@{5}[\mathbb{P}_{\Gamma}\Big(\mathcal{D}_j(1,1,I_j,1)^c\cap\mathcal{D}_0\Big)\\ +\mathbb{P}_{\Gamma}\left(\bigcup_{(\tilde{m}_p,\tilde{m}_{jj},\tilde{w}_j)\neq(1,1,1)}\mathcal{D}_j(\tilde{m}_p,\tilde{m}_{jj},I_j,\tilde{w}_j)\right)\bBigg@{5}]\end{multlined}\\ &\begin{multlined}[b][.87\textwidth]\leq\underbrace{\mathbb{P}_{\Gamma}\big(\mathcal{E}\big)}_{P_0^{[1]}}+\underbrace{\mathbb{P}_{\Gamma}\big(\mathcal{D}_0^c\cap\mathcal{E}^c\big)}_{P_0^{[2]}}+\sum_{j=1,2}\bBigg@{5}[\underbrace{\mathbb{P}_{\Gamma}\Big(\mathcal{D}_j(1,1,I_j,1)^c\cap\mathcal{D}_0\Big)}_{P_j^{[1]}}
+\underbrace{\sum_{\tilde{i}_j\in\mathcal{I}_j}\Gamma\big(\tilde{i}_j\big)\mathbb{P}_{\Gamma}\left(\bigcup_{\tilde{m}_p\neq 1}\mathcal{D}_j(\tilde{m}_p,1,\tilde{i}_j,1)\right)}_{P_j^{[2]}}\\ +\underbrace{\mathbb{P}_{\Gamma}\left(\bigcup_{\substack{(\tilde{m}_{jj},\tilde{w}_j)\neq(1,1),\\\tilde{i}_j\in\mathcal{I}_j}}\mathcal{D}_j(1,\tilde{m}_{jj},\tilde{i}_j,\tilde{w}_j)\right)}_{P_j^{[3]}}+\underbrace{\mathbb{P}_{\Gamma}\left(\bigcup_{\substack{(\tilde{m}_p,\tilde{m}_{jj},\tilde{w}_j)\neq(1,1,1),\\\tilde{i}_j\in\mathcal{I}_j}}\mathcal{D}_j(\tilde{m}_p,\tilde{m}_{jj},\tilde{i}_j,\tilde{w}_j)\right)}_{P_j^{[4]}}\bBigg@{5}]\end{multlined}\numberthis\label{EQ:analysis_error_prob_UB} \end{align*} Note that $P_0^{[1]}$ is the probability of an encoding error, while $P_0^{[2]}$ and $P_j^{[k]}$, for $k\in[1:4]$, correspond to decoding errors by Decoder $j$. We proceed with the following steps. \begin{enumerate} \item By the Multivariate Covering Lemma \cite[Lemma 8.2]{ElGammalKim10LectureNotes}, $P_0^{[1]}\to 0$ as $n\to\infty$ if we have \begin{equation} R'_1+R'_2>I(U_1;U_2|U_0),\label{EQ:analysis_covering rb} \end{equation} while the Conditional Typicality Lemma \cite[Section 2.5]{ElGammalKim10LectureNotes} implies that $P_0^{[2]}\to 0$ as $n$ grows.
Furthermore, the definitions in \eqref{EQ:decoding_errors} clearly imply that $P_j^{[1]}=0$, for all $n\in\mathbb{N}$.\\ \item For $P_j^{[3]}$, $j=1,2$, we have \begin{align*} P_j^{[3]}&\stackrel{(a)}\leq\sum_{\substack{(\tilde{m}_{jj},\tilde{w}_j)\neq (1,1),\\\tilde{i}_j\in\mathcal{I}_j}}2^{-n\big(I(U_j;Y_j|U_0)-\delta^{[3]}_\epsilon\big)}\\ &\leq2^{n(R_{jj}+R'_j+\tilde{R}_j)}2^{-n\big(I(U_j;Y_j|U_0)-\delta^{[3]}_\epsilon\big)}\\ &=2^{n\big(R_{jj}+R'_j+\tilde{R}_j-I(U_j;Y_j|U_0)+\delta^{[3]}_\epsilon\big)} \end{align*} where (a) follows since for any $(\tilde{m}_{jj},\tilde{w}_j)\neq (1,1)$ and $\tilde{i}_j\in\mathcal{I}_j$, $\mathbf{U}_j(1,\tilde{m}_{jj},\tilde{i}_j,\tilde{w}_j)$ is independent of $\mathbf{Y}_j$ while both of them are drawn conditioned on $\mathbf{U}_0(1)$. Moreover, $\delta^{[3]}_\epsilon\to 0$ as $\epsilon\to 0$. Hence, for the probability $P_j^{[3]}$ to vanish as $n\to\infty$, we take: \begin{equation} R_{jj}+R'_j+\tilde{R}_j<I(U_j;Y_j|U_0),\quad j=1,2.\label{EQ:analysis_RB1} \end{equation} \item For $P_j^{[4]}$, $j=1,2$, consider \begin{align*} P_j^{[4]}&\stackrel{(a)}\leq\sum_{\substack{(\tilde{m}_p,\tilde{m}_{jj},\tilde{w}_j)\neq (1,1,1),\\\tilde{i}_j\in\mathcal{I}_j}}2^{-n\big(I(U_0,U_j;Y_j)-\delta^{[4]}_\epsilon\big)}\\ &\leq2^{n(R_p+R_{jj}+R'_j+\tilde{R}_j)}2^{-n\big(I(U_0,U_j;Y_j)-\delta^{[4]}_\epsilon\big)}\\ &=2^{n\big(R_p+R_{jj}+R'_j+\tilde{R}_j-I(U_0,U_j;Y_j)+\delta^{[4]}_\epsilon\big)} \end{align*} where (a) follows since for any $(\tilde{m}_p,\tilde{m}_{jj},\tilde{w}_j)\neq(1,1,1)$ and $\tilde{i}_j\in\mathcal{I}_j$, $\mathbf{U}_0(\tilde{m}_p)$ and $\mathbf{U}_j(\tilde{m}_p,\tilde{m}_{jj},\tilde{i}_j,\tilde{w}_j)$ are correlated with one another but independent of $\mathbf{Y}_j$.
As before, $\delta^{[4]}_\epsilon\to 0$ as $\epsilon\to 0$, and therefore, we have that $P_j^{[4]}\to 0$ as $n\to\infty$ if \begin{equation} R_p+R_{jj}+R'_j+\tilde{R}_j<I(U_0,U_j;Y_j),\quad j=1,2.\label{EQ:analysis_RB2} \end{equation} \item For $j=1,2$, similar steps as in the upper bounding of $P_j^{[3]}$ show that the rate bound that ensures that $P_j^{[2]}\to0$ as $n\to\infty$ is redundant. This is because for every $\tilde{m}_p\neq 1$ and $\tilde{i}_j\in\mathcal{I}_j$, the codewords $\mathbf{U}_0(\tilde{m}_p)$ and $\mathbf{U}_j(\tilde{m}_p,1,\tilde{i}_j,1)$ are independent of $\mathbf{Y}_j$. Hence, taking \begin{equation} R_p<I(U_0,U_j;Y_j),\quad j=1,2,\label{EQ:analysis_RB3} \end{equation} suffices for $P_j^{[2]}$ to vanish. However, the RHS of \eqref{EQ:analysis_RB3} coincides with the RHS of \eqref{EQ:analysis_RB2}, while the left-hand side (LHS) of \eqref{EQ:analysis_RB3} involves $R_p$ only. Clearly, \eqref{EQ:analysis_RB2} is the dominating constraint.\\ \end{enumerate} \par Summarizing the above results, while substituting $R_p=R_0+R_{10}+R_{20}$, we find that the RHS of \eqref{EQ:analysis_error_prob_UB} decays as the blocklength $n\to\infty$ if the conditions in \eqref{EQ:achiev_rb} are met. \subsection{Leakage Associated Errors} To satisfy the leakage constraints in \eqref{EQ:achieve_leakage1}-\eqref{EQ:achieve_leakage2} we account for the error in decoding $\mathbf{U}_j$ from $\big(M_{11},\mathbf{U}_0(1),\mathbf{U}_{\bar{j}}(1,1,I_{\bar{j}},1),\mathbf{Y}_{\bar{j}}\big)$, where $j=1,2$ and $\bar{j}=j+(-1)^{j+1}$. Since $M_1=1$ is fixed and $\mathbf{U}_0(1)$ and $\mathbf{U}_{\bar{j}}(1,1,I_{\bar{j}},1)$ are given, the code design implies that decoding $\mathbf{U}_j$ boils down to decoding $W_j$. By repeating arguments similar to those presented in the encoding/decoding error analysis, we have that $\mathbb{E}\lambda_1(\mathbb{C})\to 0$ as $n\to\infty$ if the conditions in \eqref{EQ:achiev_rb_more} hold.
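The rate conditions accumulated in this analysis can be bundled into a small feasibility check. The sketch below is illustrative only: the function and all numeric values are hypothetical, and the mutual-information terms are treated as given constants rather than computed from a channel.

```python
# Feasibility test for the bounds derived above: the covering bound on
# R'_1 + R'_2, and the per-decoder bounds (EQ:analysis_RB1)-(EQ:analysis_RB2).
def rates_achievable(Rp, R, Rprime, Rtilde, I_U1U2_U0, I_Uj_Yj_U0, I_U0Uj_Yj):
    """R, Rprime, Rtilde and the last two arguments are dicts indexed by j in {1, 2}."""
    if not (Rprime[1] + Rprime[2] > I_U1U2_U0):  # multivariate covering bound
        return False
    for j in (1, 2):
        if not (R[j] + Rprime[j] + Rtilde[j] < I_Uj_Yj_U0[j]):        # (RB1)
            return False
        if not (Rp + R[j] + Rprime[j] + Rtilde[j] < I_U0Uj_Yj[j]):    # (RB2)
            return False
    return True

# Example with made-up numbers: feasible, then infeasible after raising Rp.
args = dict(R={1: 0.3, 2: 0.3}, Rprime={1: 0.1, 2: 0.1}, Rtilde={1: 0.2, 2: 0.2},
            I_U1U2_U0=0.1, I_Uj_Yj_U0={1: 1.0, 2: 1.0}, I_U0Uj_Yj={1: 1.2, 2: 1.2})
print(rates_achievable(Rp=0.2, **args), rates_achievable(Rp=1.0, **args))  # True False
```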
\section{Proof of Lemmas \ref{LEMMA:1} and \ref{LEMMA:2}}\label{APPEN:lemmas_proof} \subsection{Proof of Lemma \ref{LEMMA:1}}\label{APPEN:lemma1_proof} We prove (\ref{EQ:lemma1_ineq1}) only. The proof of (\ref{EQ:lemma1_ineq2}) follows similar lines.
For every $(\mathbf{u}_0,\mathbf{u}_1,\mathbf{u}_2)\in\mathcal{U}^n\times\mathcal{U}_1^n\times\mathcal{U}_2^n$ define \begin{equation} \nu(\mathbf{u}_0,\mathbf{u}_1,\mathbf{u}_2)=\begin{cases}1,\ (\mathbf{u}_0,\mathbf{u}_1,\mathbf{u}_2)\notin\mathcal{T}_\epsilon^{n}(P_{U_0,U_1,U_2})\\ 0,\ \mbox{otherwise}\end{cases},\label{EQ:lemma1_proof_nu} \end{equation} which we abbreviate as $\nu$. The multi-letter mutual information term on the LHS of (\ref{EQ:lemma1_ineq1}) is expanded as follows \begin{align*} I(\mathbf{U}_1;\mathbf{U}_2|\mathbf{U}_0,\mathbb{C})&\leq I(\mathbf{U}_1,\nu;\mathbf{U}_2|\mathbf{U}_0,\mathbb{C})\\ &=I(\nu;\mathbf{U}_2|\mathbf{U}_0,\mathbb{C})+I(\mathbf{U}_1;\mathbf{U}_2|\mathbf{U}_0,\nu,\mathbb{C})\\ &=I(\nu;\mathbf{U}_2|\mathbf{U}_0,\mathbb{C})+\sum_{j=0}^1\mathbb{P}(\nu=j)I(\mathbf{U}_1;\mathbf{U}_2|\mathbf{U}_0,\nu=j,\mathbb{C}).\numberthis\label{EQ:lemma1_proof_UB1} \end{align*} Note that \begin{align*} \mathbb{P}(\nu=1)I(\mathbf{U}_1;\mathbf{U}_2|\mathbf{U}_0,\nu=1,\mathbb{C})&\leq\mathbb{P}\big((\mathbf{U}_0,\mathbf{U}_1,\mathbf{U}_2)\notin\mathcal{T}_\epsilon^{n}(P_{U_0,U_1,U_2})\big)H(\mathbf{U}_2|\nu=1,\mathbb{C})\\ &\leq n\mathbb{P}\big((\mathbf{U}_0,\mathbf{U}_1,\mathbf{U}_2)\notin\mathcal{T}_\epsilon^{n}(P_{U_0,U_1,U_2})\big)\log|\mathcal{U}_2|\\ &\stackrel{(a)}\leq n\eta_\epsilon^{(1)}\log|\mathcal{U}_2|\numberthis\label{EQ:lemma1_proof_UB11}. \end{align*} Here (a) follows by the properties of the random code construction, and $\eta_\epsilon^{(1)}$ decreases as $e^{-cn}$ for some constant $c>0$ \cite[Lemma 5]{Orlitsky_Roche2001}.
Furthermore, we have \begin{align*} \mathbb{P}(\nu=0)I(\mathbf{U}_1;\mathbf{U}_2|\mathbf{U}_0,\nu=0,\mathbb{C})&\leq I(\mathbf{U}_1;\mathbf{U}_2|\mathbf{U}_0,\nu=0,\mathbb{C})\\ &=\sum_{(\mathbf{u}_0,\mathbf{u}_1,\mathbf{u}_2)\in\mathcal{T}_\epsilon^{n}(P_{U_0,U_1,U_2})}\mspace{-20mu}P(\mathbf{u}_0,\mathbf{u}_1,\mathbf{u}_2)\log\left(\frac{P(\mathbf{u}_1,\mathbf{u}_2|\mathbf{u}_0)}{P(\mathbf{u}_1|\mathbf{u}_0)P(\mathbf{u}_2|\mathbf{u}_0)}\right)\\ &\leq\sum_{(\mathbf{u}_0,\mathbf{u}_1,\mathbf{u}_2)\in\mathcal{T}_\epsilon^{n}(P_{U_0,U_1,U_2})}\mspace{-20mu}P(\mathbf{u}_0,\mathbf{u}_1,\mathbf{u}_2)\log\left(\frac{2^{-nH(U_1,U_2|U_0)(1-\epsilon)}}{2^{-nH(U_1|U_0)(1+\epsilon)}2^{-nH(U_2|U_0)(1+\epsilon)}}\right)\\ &\leq nI(U_1;U_2|U_0)+n\eta_\epsilon^{(2)}\numberthis\label{EQ:lemma1_proof_UB12} \end{align*} where $\eta_\epsilon^{(2)}=3\epsilon H(U_1,U_2|U_0)$. Inserting (\ref{EQ:lemma1_proof_UB11})-(\ref{EQ:lemma1_proof_UB12}) into (\ref{EQ:lemma1_proof_UB1}) yields \begin{align*} I(\mathbf{U}_1;\mathbf{U}_2|\mathbf{U}_0,\mathbb{C})&\stackrel{(a)}\leq n\eta_\epsilon^{(1)}\log|\mathcal{U}_2|+nI(U_1;U_2|U_0)+n\eta_\epsilon^{(2)}+1\\ &\stackrel{(b)}=nI(U_1;U_2|U_0)+n\epsilon_1\numberthis \end{align*} where (a) follows since $I(\nu;\mathbf{U}_2|\mathbf{U}_0,\mathbb{C})\leq H(\nu)\leq 1$, while (b) follows by setting $\epsilon_1=\eta_\epsilon^{(1)}\log|\mathcal{U}_2|+\eta_\epsilon^{(2)}+\frac{1}{n}$. \subsection{Proof of Lemma \ref{LEMMA:2}}\label{APPEN:lemma2_proof} Recall that $\lambda_{m_1}(\mathcal{C})$ denotes the error probability in decoding $\mathbf{u}_1(m_p,m_{11},i_1,w_1,\mathcal{B}_{0,1})$ from $\big(\mathbf{u}_0(m_p,\mathcal{B}_0),\mathbf{u}_2(m_p,m_{22},i_2,w_2,\mathcal{B}_{0,2}),\mathbf{y}_2\big)$ when $M_1=m_1\in\mathcal{M}_1$ is fixed and the code $\mathcal{C}\in\mathfrak{C}$ is used.
By the properties of the random code $\mathbb{C}$ we have \begin{equation} \mathbb{E}\lambda_{m_1}(\mathbb{C})\leq \eta_\epsilon^{(3)},\quad \forall m_1\in\mathcal{M}_1, \end{equation} where $\eta_\epsilon^{(3)}$ decreases as $e^{-\gamma n}$ for some real number $\gamma>0$. By Fano's inequality, we have \begin{equation} H(\mathbf{U}_1|M_1=m_1,\mathbf{U}_0,\mathbf{U}_2,\mathbf{Y}_2,\mathbb{C})\leq n\epsilon_3, \end{equation} where $\epsilon_3=\frac{1}{n}+\eta_\epsilon^{(3)}R_1$, which implies \begin{equation} H(\mathbf{U}_1|M_1,\mathbf{U}_0,\mathbf{U}_2,\mathbf{Y}_2,\mathbb{C})=\sum_{m_1\in\mathcal{M}_1}2^{-nR_1}H(\mathbf{U}_1|M_1=m_1,\mathbf{U}_0,\mathbf{U}_2,\mathbf{Y}_2,\mathbb{C})\leq n\epsilon_3. \end{equation} \newpage \bibliographystyle{IEEEtran}
\section{Introduction} Small particles of many kinds occur naturally in biological systems and the environment, and are used at some stage in many industrial processes~\cite{Merkus}. Frequently the size of the particles is crucial to their function. Thus measuring particle size is important, and many techniques have been developed for this purpose~\cite{Merkus}. One such technique, dynamic light scattering (DLS), can be applied to nanometric particles, with sizes from a few nm to about 1\,$\mu$m, which can be suspended in a liquid. DLS has the advantages of being quick and reproducible and of providing well-defined (though often limited) information about the particles. In DLS, coherent laser light scattered by a particle suspension forms a random diffraction pattern that fluctuates in time as the particles move in Brownian diffusion~\cite{berne, pusey1, iso}. For equal-sized particles, analysis of the time dependence of the scattered light yields the particles' diffusion constant, from which their size can be calculated. In the more common situation where the suspension is polydisperse --- there is a distribution of particle sizes --- DLS yields the Laplace transform of the distribution of diffusion coefficients. Thus, in principle, the latter, usually the quantity of interest, can be obtained by inverse Laplace transformation of the data. A variety of techniques have been used to perform this operation, including exponential sampling \cite{ostrowsky}, regularization \cite{provencher}, maximum entropy \cite{livesey}, maximum likelihood \cite{sun} and non-negatively constrained least squares \cite{zhu}. Because inverse Laplace transformation is particularly sensitive to inevitable experimental uncertainties in the data, these techniques are best suited to broad distributions (which may be multi-modal). Furthermore, the techniques can be quite complicated to operate and may require the input of prior information about the sample, such as minimum and maximum particle size.
A survey and critique of these methods, 20 years old but still valuable, was given by Finsy~\cite{Finsy}. A simpler approach to DLS data analysis, which was in fact the first one to go beyond fitting a single exponential, is the so-called method of cumulants~\cite{koppel, pusey2, brown}. This provides a few lower moments, or cumulants, of the distribution of diffusion coefficients, and is the topic addressed in this paper. In the early days of DLS, when computing power was limited, cumulant analysis used a linear fitting method which did not require an iterative program. Later Frisken \cite{frisken} pointed out that a much more versatile non-linear, iterative, fitting procedure is possible with modern computers. Frisken \cite{frisken, patty} and others \cite{hassan} demonstrated the value of this approach to analyze real experimental systems. Here we use realistic computer-generated data, where the parameters of the size distribution can be set explicitly, to evaluate further the potential and limitations of non-linear cumulant analysis. In the next section we describe the background to the current situation. While much of this material has appeared before, we believe that it is helpful to provide a coherent account here. Section \ref{methods-sec} describes generation of the synthetic data and the analysis methods used. In section \ref{results-sec} we present the results which are discussed in section \ref{discuss-sec}. We find that, if the distribution of diffusion coefficients is not too broad, non-linear cumulant analysis offers a straightforward and robust method for determining its mean and variance. However, the prospects for obtaining higher moments are not promising. \section{Background} General references for this section are \cite{berne, pusey1, koppel, pusey2, brown}. 
DLS measures the normalised time correlation function $g^{(2)}(\tau)$ of the scattered light intensity $I$: \begin{equation} g^{(2)}(\tau) = \frac{ \langle I(0) I(\tau) \rangle }{ \langle I \rangle^2 }, \label{first-dls-eqn} \end{equation} where $\tau$ is the correlation delay time. In many cases the intensity correlation function can be written in terms of the correlation function $g^{(1)}(\tau)$ of the scattered light field through the so-called Siegert relation \cite{siegert,pusey1}: \begin{equation} g^{(2)}(\tau) = 1 + \beta \left[ g^{(1)}(\tau) \right]^2, \label{siegert-eqn} \end{equation} where $\beta$ is the coherence factor, determined largely by the ratio of the detector area to the coherence area of the scattered light; $\beta$ is usually regarded as an unknown parameter to be fitted in the data analysis. In the simplest case of a dilute suspension of identical spherical particles in Brownian motion, $ g^{(1)}(\tau)$ is given by \begin{equation} g^{(1)}(\tau) = \exp (-\Gamma \tau ) \label{mono-g1-eqn} \end{equation} where the decay rate $\Gamma$ is \begin{equation} \Gamma = Dq^2, \label{diff-eqn} \end{equation} $D$ is the translational diffusion coefficient of the particles and $q$ is the scattering vector (set by the scattering angle $\theta$ and the wavelength $\lambda$ of the light in the sample through $q = (4 \pi / \lambda) \sin (\theta / 2)$). In turn, for spherical particles, $D$ is given by the Stokes-Einstein relation \begin{equation} D = \frac{k_B T}{6 \pi \eta R} \label{last-dls-eqn} \end{equation} where $k_B T $ is the thermal energy, $\eta$ the viscosity of the solvent and $R$ the particles' radius. Equations (\ref{first-dls-eqn}--\ref{last-dls-eqn}) form the basis of particle sizing by DLS. 
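For concreteness, the chain of relations in Eqs. (\ref{first-dls-eqn}--\ref{last-dls-eqn}) is easy to evaluate numerically. The parameter values below describe an assumed, typical setup (633 nm He-Ne laser, refractive index 1.33, water at 20 degrees C, 90 degree scattering, 100 nm radius spheres); they are illustrative and not taken from this work:

```python
import math

# Assumed experimental parameters (illustrative, not from the paper).
kB = 1.380649e-23           # Boltzmann constant, J/K
T = 293.15                  # temperature, K
eta = 1.002e-3              # viscosity of water at 20 C, Pa*s
wavelength = 633e-9 / 1.33  # wavelength in the sample: vacuum value / refractive index
theta = math.radians(90.0)  # scattering angle
R = 100e-9                  # particle radius, m

q = (4 * math.pi / wavelength) * math.sin(theta / 2)  # scattering vector
D = kB * T / (6 * math.pi * eta * R)                  # Stokes-Einstein relation
Gamma = D * q**2                                      # decay rate of g1(tau)
print(f"q = {q:.3e} m^-1, D = {D:.3e} m^2/s, Gamma = {Gamma:.0f} s^-1")
```

With these numbers $\Gamma$ comes out of order $10^2$--$10^3\,$s$^{-1}$, i.e., the correlation function decays on the millisecond scale, as is typical for DLS on particles of this size.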
When a sample is polydisperse, containing particles of different sizes, each species gives rise to its own exponential decay in the field correlation function so that \begin{equation} g^{(1)}(\tau) = \int G(\Gamma)\exp(-\Gamma \tau) d\Gamma, \label{poly-g1-eqn} \end{equation} where $G(\Gamma)$ is the normalized distribution of decay rates $\Gamma$ ($\int G(\Gamma) d\Gamma = 1$). Thus $g^{(1)}(\tau)$, obtained from the measurement of $g^{(2)}(\tau) $ through Eq. (\ref{siegert-eqn}), is the Laplace transform of $G(\Gamma)$. In principle, therefore, the latter can be obtained by inverse Laplace transformation of the data. In practice, inverse Laplace transformation is an ill-conditioned problem in the sense that it converts small uncertainties in the data into large uncertainties in the recovered $G(\Gamma)$ \cite{mcwhirter}. Put another way, unless there is a wide spread of particle size, the sum of exponentials implied by Eq. (\ref{poly-g1-eqn}) looks not too different from a single, average, exponential. This limitation can be recognized and exploited by writing \begin{equation} \exp (-\Gamma \tau ) = \exp (-\bar{\Gamma} \tau) \exp \left[ -(\Gamma - \bar{\Gamma}) \tau \right], \end{equation} where $\bar{\Gamma}$ is the mean value of $G(\Gamma)$, \begin{equation} \bar{\Gamma} = \int \Gamma G(\Gamma) d\Gamma, \end{equation} and expanding the second exponential to give, in Eq. (\ref{poly-g1-eqn}), \begin{equation} g^{(1)}(\tau) = \exp (-\bar{\Gamma} \tau) \left[ 1 + \frac{1}{2} \mu_2 \tau^2 - \frac{1}{3!} \mu_3\tau^3 + \frac{1}{4!} \mu_4 \tau^4 - \dots \right], \label{g1-moments-eqn} \end{equation} where \begin{equation} \mu_n = \int (\Gamma - \bar{\Gamma})^n G(\Gamma) d\Gamma \label{mun-eqn} \end{equation} are the central moments of the distribution of decay rates (the moments about the mean). 
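The quality of the truncated expansion in Eq. (\ref{g1-moments-eqn}) is easy to probe numerically. The sketch below uses a hypothetical two-component discrete $G(\Gamma)$ (decay rates $0.9$ and $1.1$ with equal weights, in arbitrary units) and compares the exact sum of exponentials with the expansion truncated after the $\mu_4$ term:

```python
import math

# Hypothetical narrow two-component G(Gamma): rates 0.9 and 1.1, equal weights.
rates = [0.9, 1.1]
weights = [0.5, 0.5]

mean = sum(w * g for w, g in zip(weights, rates))  # Gamma-bar

def mu(n):
    """Central moments of G(Gamma), Eq. (mun-eqn)."""
    return sum(w * (g - mean) ** n for w, g in zip(weights, rates))

def g1_exact(tau):
    """Eq. (poly-g1-eqn): the exact superposition of exponentials."""
    return sum(w * math.exp(-g * tau) for w, g in zip(weights, rates))

def g1_expansion(tau):
    """Eq. (g1-moments-eqn) truncated after the mu_4 term."""
    return math.exp(-mean * tau) * (1 + mu(2) * tau**2 / 2
                                      - mu(3) * tau**3 / 6
                                      + mu(4) * tau**4 / 24)

print(abs(g1_exact(1.0) - g1_expansion(1.0)))  # tiny for this narrow G(Gamma)
```

At $\bar{\Gamma}\tau=1$ the truncation error here is below $10^{-6}$, consistent with the higher-order terms becoming increasingly unimportant for narrow distributions.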
Equation (\ref{g1-moments-eqn}) shows clearly how DLS data can be represented by an average exponential with correction terms that depend on $G(\Gamma)$ (and hence on the particle size distribution). For a reasonably narrow distribution of decay rates and a reasonable range of scaled delay time $\bar{\Gamma}\tau$, the higher-order terms in Eq. (\ref{g1-moments-eqn}) become increasingly unimportant. Rewriting Eq. (\ref{g1-moments-eqn}) in terms of scaled time, \begin{equation} \fl g^{(1)}(\tau) = \exp (-\bar{\Gamma} \tau) \left[ 1 + \frac{1}{2} \frac{\mu_2}{\bar{\Gamma}^2} (\bar{\Gamma} \tau)^2 - \frac{1}{3!} \frac{\mu_3}{\bar{\Gamma}^3} (\bar{\Gamma} \tau)^3 + \frac{1}{4!} \frac{\mu_4}{\bar{\Gamma}^4} (\bar{\Gamma} \tau)^4 - \dots \right], \end{equation} shows that $\mu_2 / \bar{\Gamma}^2 $, the normalized variance of $G(\Gamma)$, is the simplest measure of the departure of $g^{(1)} (\tau) $ from a single exponential. With Eq. (\ref{siegert-eqn}), Eq. (\ref{g1-moments-eqn}) can also be written as \begin{equation} \fl \ln \sqrt{g^{(2)}(\tau) - 1 } = \frac{1}{2} \ln \beta - \bar{\Gamma} \tau + \frac{1}{2} \mu_2 \tau^2 - \frac{1}{3!} \mu_3 \tau^3 + \frac{1}{4!} (\mu_4 - 3\mu_2^2) \tau^4 -\dots, \label{linear-g1-fit-eqn} \end{equation} showing further how non-exponentiality appears as departure from linearity in a semi-logarithmic plot of the data (see Fig. \ref{sim-res-fig}). The original method of cumulants \cite{koppel, pusey2, brown} follows Eq. (\ref{linear-g1-fit-eqn}): the left-hand side, calculated from the data, is fitted to a polynomial of a few terms in delay time $\tau$, hence providing estimates of $\beta$, $\bar{\Gamma}$, $\mu_2$ etc. This method has the advantage that least-squares fitting to a polynomial which is linear in the unknown coefficients is a soluble problem that does not require iteration in the computer program \cite{bevington}. A disadvantage of the method is that, to keep the higher-order terms in Eq. 
(\ref{linear-g1-fit-eqn}) small, the data have to be truncated at around $\bar{\Gamma} \tau = 1$ (i.e. only data for $ \bar{\Gamma} \tau \leq 1 $ are kept) and it is not straightforward to determine the optimum truncation. There is, in fact, a trade-off between large random errors in the fitted parameters if the data are truncated at too small a value of $\bar{\Gamma} \tau $ and large systematic errors (but smaller random ones) if too much of the data are used \cite{brown}. Later, following the rapid development of computer power, Frisken \cite{frisken} pointed out that iterative, non-linear fitting of \begin{equation} \fl g^{(2)}(\tau) = B + \beta \left\{ \exp(-\bar{\Gamma} \tau) \left[ 1 + \frac{1}{2} \mu_2 \tau^2 - \frac{1}{3!} \mu_3 \tau^3 + \frac{1}{4!} \mu_4 \tau^4 - \dots \right] \right\}^2, \label{fit-eqn} \end{equation} obtained from Eqs. (\ref{siegert-eqn}) and (\ref{g1-moments-eqn}), is a more robust procedure. This approach has several advantages over the linear method. First, it is not necessary to truncate the data, since the divergence of the higher-order terms in Eq. (\ref{fit-eqn}) is suppressed by the decaying exponential pre-factor. Second, the method allows the ``baseline'' $B$ to be regarded as a parameter to be fitted. In an ideal experiment, $B$ should be 1 (Eq. (\ref{siegert-eqn})). In practice, $B$ can differ slightly from 1. For example, slow drift of the laser intensity or of the gain of the detector leads to a spurious correlation, $B > 1$, which is almost independent of time over the span of the data. Frisken \cite{frisken} used Eqs. (\ref{linear-g1-fit-eqn}) and (\ref{fit-eqn}) to analyze experimental data and clearly demonstrated the advantages of the non-linear method outlined above. Subsequently, Hassan and Kulshreshtha \cite{hassan} performed a similar analysis of experimental data and also considered simulated data for known distributions of decay rate $G(\Gamma)$.
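For concreteness, a non-linear fit to Eq. (\ref{fit-eqn}) can be sketched in a few lines of Python. This is a minimal illustration, not the code used in this work; the parameter values, variable names, and random seed are our own choices, and the synthetic data are generated from the model itself.

```python
# Sketch of a fourth-order non-linear fit of g2(tau) to Eq. (fit-eqn).
# Illustrative only: parameter values and seed are arbitrary choices.
import numpy as np
from scipy.optimize import curve_fit

def g2_model(tau, B, beta, gbar, mu2, mu3, mu4):
    """Eq. (fit-eqn): baseline plus beta times the squared moment expansion."""
    g1 = np.exp(-gbar * tau) * (1 + mu2 * tau**2 / 2
                                - mu3 * tau**3 / 6 + mu4 * tau**4 / 24)
    return B + beta * g1**2

rng = np.random.default_rng(0)
tau = np.logspace(-2, 2, 150)                    # log-spaced delay times
clean = g2_model(tau, 1.0, 1.0, 1.0, 0.16, 0.05, 0.10)
data = clean * rng.normal(1.0, 1e-3, tau.size)   # 0.1% multiplicative noise

p0 = [1.0, 1.0, 1.0, 0.0, 0.0, 0.0]              # B, beta, gbar, mu2, mu3, mu4
popt, pcov = curve_fit(g2_model, tau, data, p0=p0)
B_fit, beta_fit, gbar_fit, mu2_fit, mu3_fit, mu4_fit = popt
```

Because the decaying exponential pre-factor suppresses the higher-order terms, no truncation of the data is needed, and the baseline $B$ is simply one more fitted parameter.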
However, they only included terms up to second order in time, and their simulated data did not take account of the uncertainty (noise) that is inevitable in an experiment. In this paper we compare the two methods of analysis using simulated data with realistic noise over a range of polydispersities, with the aim of determining more precisely the potential and limitations of the non-linear method. A note on terminology: Koppel \cite{koppel} pointed out that, formally, the logarithm of $g^{(1)}(\tau)$, Eq. (\ref{poly-g1-eqn}), is the cumulant generating function \cite{kendall} for the distribution $G(\Gamma)$. Thus the expansion of this quantity, Eq. (\ref{linear-g1-fit-eqn}), is a power series in which the coefficients are the cumulants of $G(\Gamma)$; it is from this observation that the linear analysis based on Eq. (\ref{linear-g1-fit-eqn}) gets its commonly-used name, the ``method of cumulants''. However, in the non-linear analysis using Eq. (\ref{fit-eqn}) emphasized here, it is the central moments rather than the cumulants that are relevant; thus what, to follow the custom, we have here called non-linear cumulant analysis might more logically be called the ``method of moments''. (In fact, comparing Eqs. (\ref{linear-g1-fit-eqn}) and (\ref{fit-eqn}), we see that the cumulants only differ from the central moments at order 4 and higher \cite{kendall}.) \section{Methods} \label{methods-sec} Mainly for mathematical convenience, we assume a Schulz distribution of decay rates: \begin{equation} G(\Gamma) = \frac{1}{\bar{\Gamma}} \frac{(z+1)^{z+1}}{z!} \left(\frac{\Gamma}{\bar{\Gamma}}\right)^{z} \exp \left( - \frac{\Gamma}{\bar{\Gamma}} (z + 1) \right); \label{schultz-eqn} \end{equation} this is a two-parameter distribution defined by mean decay rate $\bar{\Gamma}$ and width $\sigma$ (normalized standard deviation), given by \begin{equation} \sigma^2 \equiv \frac{\overline{\Gamma^2} - \bar{\Gamma}^2}{\bar{\Gamma}^2} = \frac{1}{z+1}.
\end{equation} Sample plots of the Schulz distribution are shown in Fig. \ref{schultz-fig}. \begin{figure}[htbp] \begin{center} \includegraphics[width=\columnwidth]{1Xcorrect.pdf} \caption{The Schulz distribution, Eq. (\ref{schultz-eqn}), for indicated values of standard deviation $\sigma$.} \label{schultz-fig} \end{center} \end{figure} Substitution of Eq. (\ref{schultz-eqn}) into Eq. (\ref{poly-g1-eqn}) gives \begin{equation} g^{(1)}(\tau) = \left(1 + \sigma^2 \bar{\Gamma}\tau\right)^{- 1 / \sigma^2}; \label{schultz-g1-eqn} \end{equation} (series expansion verifies that Eq. (\ref{schultz-g1-eqn}) reduces to a single exponential, Eq. (\ref{mono-g1-eqn}), as the width $\sigma$ of the distribution tends to zero). The moments about the origin of the Schulz distribution are \begin{equation} \fl \overline{\Gamma^n} \equiv \int{ \Gamma^n G(\Gamma)\, d\Gamma } = \bar{\Gamma}^n ( 1 + (n - 1) \sigma^2 ) ( 1 + (n - 2) \sigma^2 ) \dots (1 + \sigma^2) , \end{equation} giving, for the central moments, Eq. (\ref{mun-eqn}), \begin{equation} \frac{\mu_2}{\bar{\Gamma}^2} = \sigma^2 \textrm{, } \frac{\mu_3}{\bar{\Gamma}^3} = 2 \sigma^4 \textrm{ and } \frac{\mu_4}{\bar{\Gamma}^4} = 3 \left(\sigma^4 + 2 \sigma^6 \right) ; \label{moments-eqn} \end{equation} the fourth cumulant of the distribution is \begin{equation} \frac{\mu_4}{\bar{\Gamma}^4} - 3 \left( \frac{\mu_2}{\bar{\Gamma}^2} \right)^2 = 6 \sigma^6. \end{equation} Synthetic ``data'' $g^{(2)}(\tau)$ were constructed for a range of distribution widths $0 \leq \sigma \leq 1$ from \begin{equation} g^{(2)}(\tau) = B + \beta \left[ g^{(1)}(\tau) \right]^2 \label{sim-g2-eqn} \end{equation} with $g^{(1)}(\tau)$ given by Eq. (\ref{schultz-g1-eqn}); here $B$ is the baseline, cf. Eq. (\ref{fit-eqn}). To mimic modern photon correlators, 150 data points were logarithmically spaced in scaled delay time $\bar{\Gamma} \tau$ in the range $10^{-2} \leq \bar{\Gamma} \tau \leq 10^2$.
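The construction of the synthetic data, including the multiplicative Gaussian noise used to mimic experimental uncertainty, can be sketched as follows (a minimal sketch; the function names, default parameter values, and seed are ours, not from the original analysis):

```python
# Sketch of the synthetic-data construction: Schulz g1 (Eq. (schultz-g1-eqn))
# squared into g2 (Eq. (sim-g2-eqn)) on 150 log-spaced delay times, with
# multiplicative Gaussian noise of fractional width s.
import numpy as np

def g1_schulz(tau, gbar, sigma):
    """Eq. (schultz-g1-eqn): field correlation function for a Schulz G(Gamma)."""
    return (1.0 + sigma**2 * gbar * tau) ** (-1.0 / sigma**2)

def make_g2(tau, gbar=1.0, sigma=0.4, B=1.0, beta=1.0, s=1e-3, seed=1):
    """Eq. (sim-g2-eqn) with noise mimicking counting errors."""
    rng = np.random.default_rng(seed)
    g2 = B + beta * g1_schulz(tau, gbar, sigma) ** 2
    return g2 * rng.normal(1.0, s, tau.size)

tau = np.logspace(-2, 2, 150)   # 10^-2 <= gbar*tau <= 10^2
data = make_g2(tau)
```

In the narrow-distribution limit `g1_schulz` reduces to a single exponential, as noted in the text.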
To mimic experimental noise, each value of $g^{(2)}(\tau)$ was multiplied by a random number drawn from a Gaussian distribution of mean 1 and standard deviation $s$. For most of the analysis we took $s = 10^{-3}$, corresponding to an uncertainty of one part in a thousand on each data point. This is the typical magnitude of counting errors in a DLS experiment. We also looked briefly at noisier data, up to $ s = 10^{-2} $. For each value of $\sigma$, 20 data sets with different random noise were analyzed, allowing calculation of the means and standard deviations of the fitted parameters. Two analyses of the data were performed. In the standard cumulant analysis, we assumed $B$ to take its ideal value of 1 in Eq. (\ref{sim-g2-eqn}). Then $\ln \sqrt{g^{(2)}(\tau) - 1 }$, calculated from Eq. (\ref{siegert-eqn}), was fitted by linear least squares to Eq. (\ref{linear-g1-fit-eqn}) \cite{bevington}. Fits to polynomials in $\tau$ of order one, two, three and four were performed, providing estimates of an increasing number of the moments $\mu_n$. For simplicity, both $\bar{\Gamma}$ and $\beta$ were assumed to be 1 when generating the data, but were taken as parameters to be fitted in the analysis. In this linear cumulant analysis, the data were truncated when $g^{(2)}(\tau) - 1$ dropped to 10\% of its initial value. In the second, non-linear, analysis, the simulated data for $g^{(2)}(\tau)$ were fitted to Eq. (\ref{fit-eqn}) using a variable metric method \cite{james}. As with the standard analysis, four orders of fit were performed. The data were not truncated and the background $B$ was regarded as an additional floating parameter. \section{Results} \label{results-sec} \begin{figure}[htbp] \begin{center} \includegraphics[width=0.75\textwidth]{fig2a.pdf} \includegraphics[width=0.75\textwidth]{fig2b.pdf} \caption{Example of a fit to simulated data. \emph{(a, top)} Crosses: simulated data, Eqs. 
(\ref{sim-g2-eqn}) and (\ref{schultz-g1-eqn}), with $B = 1.01, \beta = 1, \bar{\Gamma} = 1 , \sigma = 0.4$ and noise $s = 10^{-3}$. Solid line: fourth-order non-linear fit of the data (Eq. (\ref{fit-eqn})). Residuals, the difference between data and fit, are indicated. Dashed line: single-exponential decay with $\bar{\Gamma} = 1$. \emph{(b, bottom)} Semi-logarithmic representation of the same data.} \label{sim-res-fig} \end{center} \end{figure} Figure \ref{sim-res-fig}(a) shows an example of data fitted successfully by the fourth-order non-linear procedure. Input values were $\sigma = 0.4$ and $B = 1.01$, and all the fitted parameters are within the expected uncertainty of the input values. The dashed line shows a single exponential with decay rate $\bar{\Gamma} = 1$. The figure illustrates how even a significant spread of particle size (see Fig. \ref{schultz-fig}), $\sigma = 0.4$, leads to a correlation function that does not differ much from a single exponential. In the semi-logarithmic representation of Fig. \ref{sim-res-fig}(b), the difference, at larger delay times, is more apparent. \subsection{Mean decay rate} \begin{figure}[htbp] \begin{center} \includegraphics[width=\textwidth]{figure3-notitle.pdf} \caption{Deviation of the fitted mean decay rate $\bar{\Gamma}$ from its input value, 1, as a function of polydispersity $\sigma$ for both linear and non-linear first- to fourth-order fits. The error bar indicates the standard deviation of the fitted $\bar{\Gamma}$ for the fourth-order non-linear fit across the 20 generated data sets at $\sigma=0.4$. See text for discussion.} \label{gamma-dev-fig} \end{center} \end{figure} Figure \ref{gamma-dev-fig} shows the deviation of the fitted mean decay rate $\bar{\Gamma}$ from its input value, 1, as a function of polydispersity $\sigma$ for both linear and non-linear first- to fourth-order fits. A fit of order 1 is the equivalent of force-fitting the data to a single exponential.
It is clear that, as soon as polydispersity becomes significant, the first-order fits seriously underestimate $\bar{\Gamma}$. However, adding just one parameter, $\mu_2$, in the second-order fits immediately allows reliable estimates of $\bar{\Gamma}$ up to polydispersities of 0.4 to 0.5; fourth-order fits estimate $\bar{\Gamma}$ reliably over almost the whole range of polydispersity considered. The error bar in Fig. \ref{gamma-dev-fig} indicates the standard deviation of the fitted $\bar{\Gamma}$ for the fourth-order non-linear fit across the 20 generated data sets at $\sigma = 0.4$, and shows that $\bar{\Gamma}$ can be obtained with an accuracy of better than 0.5\% for moderately polydisperse samples. In the estimation of $\bar{\Gamma}$ for moderately polydisperse systems, there is little to choose between non-linear and (truncated) linear fits. Intriguingly, at third order, the non-linear fit actually does worse than the linear one. We note that, for a symmetrical distribution of decay rates, the odd-order central moments are zero. Thus, in general, $\mu_4 \tau^4$ can be larger than $\mu_3 \tau^3$ in Eq. (\ref{fit-eqn}) even at $\bar{\Gamma} \tau < 1$, and there is no justification for a fit that includes $\mu_3$ but not $\mu_4$. \subsection{Second moment} \begin{figure}[htbp] \begin{center} \includegraphics[width=\textwidth]{figure4-notitle.pdf} \caption{Deviation of the fitted second central moment $\mu_2$ from its input value $\sigma^2$ (Eq. (\ref{moments-eqn}) with $\bar{\Gamma} = 1$) for linear and non-linear fits up to fourth order. Similar to Fig. \ref{gamma-dev-fig}, the error bar indicates the standard deviation of the fitted $\mu_2$ for the fourth-order non-linear fit across the 20 generated data sets at $\sigma=0.4$.} \label{mu2-dev-fig} \end{center} \end{figure} Under the same conditions as for Fig. \ref{gamma-dev-fig}, Fig. \ref{mu2-dev-fig} shows the deviation of the fitted second central moment $\mu_2$ from its input value $\sigma^2$ (Eq.
(\ref{moments-eqn}) with $\bar{\Gamma} = 1$). For this parameter, all three non-linear fits do much better than the linear ones, giving accurate estimates of $\mu_2$ for polydispersities up to 0.3. The fourth-order non-linear fit gives a good estimate of $\mu_2$ up to about $\sigma = 0.6$. At $\sigma = 0.4$, where $\mu_2 = \sigma^2 = 0.16$, the uncertainty in the determination of $\mu_2$ is about $\pm 0.02$, a finding consistent with experience in experiments \cite{brown}. These values translate into a quite accurate determination of the width, $\sigma(\textrm{fitted}) = 0.40 \pm 0.025$. \subsection{Third moment} \begin{figure}[htbp] \begin{center} \includegraphics[width=\textwidth]{figure5-notitle.pdf} \caption{Deviation of the fitted third central moment $\mu_3$ from its input value $2\sigma^4$ (Eq. (\ref{moments-eqn}) with $\bar{\Gamma} = 1$).} \label{mu3-dev-fig} \end{center} \end{figure} Figure \ref{mu3-dev-fig} shows the deviation of the third moment from its input value $2\sigma^4$ (Eq. (\ref{moments-eqn}) with $\bar{\Gamma} = 1$). We have mentioned above that there is no justification for a third-order fit. At first sight the fourth-order fit appears to do well up to $\sigma = 0.4$. However we note that, for the fourth-order fit at $\sigma = 0.4$, the uncertainty, $\pm 0.045$, in the recovered $\mu_3$ is barely smaller than the value, 0.0512, of $\mu_3$ itself. Thus, at least for this distribution and noise level, DLS can do little more than hint at the sign of $\mu_3$. \subsection{Fourth moment} We find that, up to about $\sigma = 0.4$, the uncertainty in the fitted value of $\mu_4$ is comparable to its actual value. On the other hand, above $\sigma = 0.4$, as $\mu_4$ becomes larger, its fitted value is consistently lower than its actual value. Thus, as with $\mu_3$, we obtain little useful information. \subsection{Baseline} Provided that the data extend to long enough times, as in Fig.
\ref{sim-res-fig}, we find that the baseline $B$ is recovered accurately by the non-linear fits. Furthermore, treating the baseline as a parameter to be fitted introduces almost no further uncertainty in the other fitted parameters (compared to the case where the baseline is fixed at its input value). \subsection{Initial guesses} The non-linear fitting procedure, Eq. (\ref{fit-eqn}), appears to be very stable with respect to the choice of initial values for the fitted parameters. The simplest approach is to obtain initial values of $\beta$ and $\bar{\Gamma}$ from a first-order linear fit of a short-time portion of the data, to assume that the initial value of the baseline $B$ is 1, and to take initial values for the higher moments, $\mu_2, \mu_3,$ etc., to be zero. \subsection{Larger noise} With noisier data, $s = 10^{-2}$ (see section 3), fourth-order fits were not useful, giving large uncertainties in both $\bar{\Gamma}$ (several percent) and $\mu_2$ (typically $\pm 0.20$). However second-order non-linear fits gave $\bar{\Gamma}$ to within about 2\% with an uncertainty in $\mu_2$ of about $\pm 0.05$, meaning that polydispersities $\sigma \left( = \sqrt{\mu_2} \right)$ greater than 0.20 to 0.25 could still be detected. \section{Discussion} \label{discuss-sec} Several conclusions may be drawn from these results. First, we find that non-linear fits are more accurate and more straightforward to perform than linear fits (where a somewhat arbitrary truncation of the data is necessary). This is seen clearly in Fig. \ref{mu2-dev-fig} which shows that non-linear fits return a much smaller systematic error in the second moment $\mu_2$ than linear fits. Second, we have pointed out that there is no justification for performing third-order fits, either linear or non-linear: the $\tau^4$-term in Eqs. (\ref{linear-g1-fit-eqn}) and (\ref{fit-eqn}) can, in general, be comparable to or larger than the $\tau^3$-term even at small $\bar{\Gamma}\tau$. 
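The statement about symmetrical distributions is easy to check numerically; the Gaussian shape below is purely illustrative and is not one of the distributions analysed in this paper:

```python
# For a symmetric G(Gamma), the odd central moments vanish while mu4 stays
# finite, so fitting mu3 without mu4 has no justification.  Illustrative
# Gaussian G(Gamma), evaluated on a uniform grid.
import numpy as np

gam = np.linspace(0.2, 1.8, 4001)               # Gamma grid, symmetric about 1
G = np.exp(-0.5 * ((gam - 1.0) / 0.15) ** 2)    # symmetric distribution
P = G / G.sum()                                 # discrete probability weights

gbar = (gam * P).sum()                          # mean decay rate
mu3 = (((gam - gbar) ** 3) * P).sum()           # vanishes by symmetry
mu4 = (((gam - gbar) ** 4) * P).sum()           # ~ 3 sigma^4, finite
```

Here `mu3` is zero to numerical precision while `mu4` is of order $3\sigma^4$, so the $\tau^4$-term survives even when the $\tau^3$-term is absent.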
Third, performing a fourth-order, rather than a second-order, fit reduces the systematic error in the fitted $\mu_2$ (Fig. \ref{mu2-dev-fig}). However, even for polydispersities as large as $\sigma = 0.6$, fourth-order fits do not provide reliable estimates of the higher moments, $\mu_3$ and $\mu_4$. This finding agrees with experience: we are not aware of any experiment where $\mu_3$ or $\mu_4$ have been convincingly measured. It might at first appear surprising that including unknown parameters $\mu_3$ and $\mu_4$ in the non-linear fit should improve the fitting of $\mu_2$ even though the determined values of $\mu_3$ and $\mu_4$ are themselves unreliable. Comparisons of the $\chi^2$ surface \cite{bevington} in response to combined changes of $\mu_2$ with $\bar{\Gamma}$ and of $\mu_2$ with $\mu_3$ reveal that the additional fit parameters significantly reduce the correlations between $\mu_2$ and $\bar{\Gamma}$, i.e. the minimum region of the $\chi^2$ surface becomes perpendicular to either the $\mu_2$ or the $\bar{\Gamma}$ axis when $\mu_3$ and $\mu_4$ are included in the fit. Without these parameters, the $\chi^2$ minimum is typically aligned parallel to the line $\mu_2 = \bar{\Gamma}$, making it more difficult to determine the optimal fit values. Despite the somewhat negative conclusions of the first paragraph of this section, we \emph{are} able to suggest a robust and relatively straightforward procedure to obtain the first two moments of the distribution of diffusion constants, $\bar{\Gamma}$ and $\mu_2$: \begin{enumerate} \item Set up a photon correlator with channels logarithmically-spaced in time so that the measured intensity correlation function $g^{(2)}(\tau)$ extends well into the long-time baseline $B$; run the experiment for long enough that the typical noise on a data point is about 1 part in $10^3$.
\item Obtain initial estimates of the coherence factor $\beta$ and mean decay rate $\bar{\Gamma}$ from a short-time linear fit of the data, and take other initial estimates as baseline $B = 1$, higher moments $\mu_2, \mu_3, \mu_4 = 0$. \item Perform a \emph{fourth}-order non-linear fit of $g^{(2)}(\tau)$ to Eq. (\ref{fit-eqn}). If the fitted second moment $\mu_2 / \bar{\Gamma}^2$ is less than about 0.40 (corresponding to $\sigma \approx 0.6$ in Fig. 4), then the fitted $\bar{\Gamma}$ and $\mu_2 / \bar{\Gamma}^2$ should be good estimates of the true values, accurate to around 0.5\% and $\pm 0.02$ respectively. \item If, in (iii), the fitted second moment $\mu_2 / \bar{\Gamma}^2$ is less than $\sim 0.1$ (polydispersity $\sigma \leq 0.3$), then more precise estimates of the values of $\bar{\Gamma}$ and $\mu_2 / \bar{\Gamma}^2$ should be obtainable from a \emph{second}-order non-linear fit of $g^{(2)}(\tau)$ to Eq. (\ref{fit-eqn}) (see Figs. 3 and 4). \end{enumerate} Here we have only considered a Schulz distribution of decay rates $G(\Gamma)$ (Eq. (\ref{schultz-eqn})). However, it is straightforward to show that the moments of a distribution can be written~\cite{Krieger, Pusey-Fijnaut-Vrij} \begin{equation} \overline{\Gamma^n} = \overline{\Gamma}^n \left[1 + \frac{n(n-1)}{2}\sigma^2 + O(\sigma^3)\right]. \label{narrow-eqn} \end{equation} \noindent For fairly narrow, fairly symmetrical distributions it is sufficient to keep only the first two terms, so that the higher moments can, to a good approximation, be written purely in terms of the standard deviation $\sigma$. Thus we may expect that the general discussion above will apply to any distribution which is narrow enough. For the Schulz distribution, we have shown here that ``narrow enough'' means $\sigma$ less than about 0.6; but, for more skewed or flatter (larger kurtosis) distributions than the Schulz, this upper limit might need to be reduced somewhat.
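As a quick check (ours, not from the original references) of Eq. (\ref{narrow-eqn}), the exact Schulz moments about the origin can be compared with the two-term narrow-distribution expansion:

```python
# Compare the exact Schulz moments about the origin (with gbar = 1),
# prod_{k=1}^{n-1} (1 + k sigma^2), against the two-term expansion
# 1 + n(n-1) sigma^2 / 2 of Eq. (narrow-eqn).

def schulz_moment(n, sigma):
    """Exact n-th moment about the origin of the Schulz distribution."""
    out = 1.0
    for k in range(1, n):
        out *= 1.0 + k * sigma ** 2
    return out

def narrow_moment(n, sigma):
    """Leading terms of Eq. (narrow-eqn)."""
    return 1.0 + 0.5 * n * (n - 1) * sigma ** 2

sigma = 0.1                                    # a fairly narrow distribution
errs = [abs(schulz_moment(n, sigma) - narrow_moment(n, sigma))
        for n in range(1, 7)]
```

For $\sigma = 0.1$ the discrepancy stays below $10^{-2}$ even at $n = 6$, consistent with the neglected terms being $O(\sigma^4)$.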
Repeating the programme of this paper for other distributions such as the lognormal or a two-component mixture would provide more quantitative conclusions, but the general picture is unlikely to change. The method of cumulants described here is relatively simple to implement and does not require the input of any prior information about the sample. It should therefore be the first approach of any experimenter presented with an unknown sample. When this first measurement returns a large enough second moment, it is probably worth trying some of the more complex analysis methods listed in section 1. Experience with these methods~\cite{Finsy} shows that they have difficulty resolving bimodal distributions when the ratio of the two sizes is less than about 3. This corresponds to a standard deviation $\sigma \approx 0.5$ and a second moment $\mu_2 / \bar{\Gamma}^2 \approx 0.25$. In general, though, these methods work best for significantly broader distributions than this. In this paper we have limited consideration to analysis of a single DLS measurement. For completeness, we mention that, for particles large enough that there is significant angular variation in the intensity of light that they scatter (radius greater than about 50 nm), performing a combined analysis of data taken at different scattering angles can frequently provide much more detailed information. For example, by measuring the angular dependence of the apparent average diffusion constant of particles which show a minimum in their angular intensity profile, it is possible to measure very small polydispersities~\cite{Pusey&VanMegen}. It has recently been demonstrated that a Bayesian analysis of the full correlation functions measured at several scattering angles can be very powerful in resolving multi-modal distributions~\cite{Naiim}. 
We note that obtaining information about the distribution $G(\Gamma)$ of decay rates (or diffusion constants) is usually not the ultimate goal of an analysis of DLS data; rather, one is interested in the distribution of particle sizes. Because the contribution of each particle species to $G(\Gamma)$ is weighted by the intensity of light scattered by that species and because the particle radius is, through Eqs. (\ref{diff-eqn}) and (\ref{last-dls-eqn}), inversely proportional to the decay rate $\Gamma$, obtaining the size distribution from $G(\Gamma)$ is not straightforward. Nevertheless, for many systems, such as solid spheres \cite{pusey1}, spherical shell-like particles \cite{patty} and random-coil polymers \cite{brown}, the measured moments $\bar{\Gamma}$ and $\mu_2 / \bar{\Gamma}^2$ can be related to moments of the particle size distribution. For example, for homogeneous spheres much smaller than the wavelength of light, it can be shown \cite{pusey1, Pusey&VanMegen} that the effective particle radius obtained by substituting the measured $\bar{\Gamma}$ in Eqs. (\ref{diff-eqn}) and (\ref{last-dls-eqn}) is \begin{equation} R_{\textrm{eff}} \left[ = \frac{k_B T q^2}{6 \pi \eta \bar{\Gamma}} \right] = \overline{R^6} / \overline{R^5}, \end{equation} and that \begin{equation} \frac{\mu_2}{\bar{\Gamma}^2} = \frac{\overline{R^4} \, \overline{R^6}}{(\overline{R^5})^2} - 1, \end{equation} where the $\overline{R^n}$ are moments of the distribution of particle radii. For narrow size distributions, where an approximation like that of Eq. (\ref{narrow-eqn}) can be applied, these results reduce to the simpler (and useful) expressions $R_{\textrm{eff}} = \overline{R} \left( 1 + 5\sigma^2_R\right)$ and $\mu_2 / \bar{\Gamma}^2 = \sigma_R^2$ where $\sigma_R$ is the standard deviation of the radius. 
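These relations for small homogeneous spheres can be illustrated numerically; the sketch below assumes a narrow Gaussian distribution of radii and is an illustration of ours, not a calculation from the paper:

```python
# Moments of a narrow Gaussian distribution of radii: check that
# R_eff = R6bar/R5bar is close to Rbar (1 + 5 sigma_R^2) and that
# R4bar R6bar / R5bar^2 - 1 is close to sigma_R^2.
import numpy as np

Rbar, sigR = 1.0, 0.05
R = np.linspace(Rbar - 6 * sigR, Rbar + 6 * sigR, 4001)
P = np.exp(-0.5 * ((R - Rbar) / sigR) ** 2)
P = P / P.sum()                                # discrete probability weights

def mom(n):
    """n-th moment of the radius distribution."""
    return (R ** n * P).sum()

R_eff = mom(6) / mom(5)                        # intensity-weighted radius
mu2_ratio = mom(4) * mom(6) / mom(5) ** 2 - 1  # second moment of G
```

The small residual differences from $\bar{R}(1 + 5\sigma_R^2)$ and $\sigma_R^2$ are the $O(\sigma_R^4)$ corrections neglected in the narrow-distribution approximation.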
Finally we comment that the conclusions reached here --- useful determination of $\bar{\Gamma}$ and $\mu_2$, but little prospect of obtaining higher moments --- are not too different from those reached by Koppel \cite{koppel} when introducing the linear cumulant method more than 40 years ago. However, we have shown that non-linear fitting gives a simpler and more robust procedure which avoids the necessity of a rather arbitrary truncation of the data; we have estimated the upper limit of reliability of cumulant methods at about $\mu_2 / \bar{\Gamma}^2 = 0.4$, polydispersity $\sigma \approx 0.6$; and we have suggested that, if an initial analysis by the cumulant method yields a second moment larger than about $\mu_2 / \bar{\Gamma}^2 = 0.25$, it is probably worth trying to obtain more information from one of the more complex analysis methods, based on inverse Laplace transformation, mentioned above. \section*{References}
\section{Introduction} Higgs inflation~\cite{Bezrukov:2007ep} is a unique inflationary model that is based on the discovered scalar boson. Its prediction of the spectral index and the tensor-to-scalar ratio is consistent with the Planck 2015 results~\cite{Ade:2015lrj}. It is interesting to ask whether this scenario can be embedded in a theoretically motivated framework beyond the Standard Model (SM) of Particle Physics. In this contribution, we study the possibility of large-field Higgs inflation in the Minimal Supersymmetric Standard Model (MSSM), taking into account the supergravity effects. Inflation driven by the Higgs fields in the MSSM has been studied by several authors. A small-field inflection-point inflation model was studied in Ref.~\cite{Chatterjee:2011qr}, where a higher order Higgs term was added to the MSSM superpotential. A large field model was studied in Refs.~\cite{Ibanez:2014kia, Ibanez:2014swa}, where the $\mu$ term as well as the soft SUSY breaking mass terms are of the order of the inflation scale, as high as $\mathcal{O}(10^{13})$ GeV. The monodromy structure and string corrections are utilized in the model. Here, we do not add higher order terms to the superpotential, and consider the possibility of Higgs inflation in the $\mathcal{N}=1$ supergravity framework. The possibility of large field Higgs inflation in supergravity was studied in Refs.~\cite{Einhorn:2009bh, BenDayan:2010yz}, where it was concluded that it does not occur, owing to the negative potential in the large field region. Instead, Higgs inflation is realized in the Next-to-Minimal Supersymmetric Standard Model (NMSSM)~\cite{Einhorn:2009bh, BenDayan:2010yz, Lee:2010hj, Ferrara:2010yw, Ferrara:2010in}, in which a singlet is added to the MSSM particle content. One of the reasons for this success is that the singlet can be naturally identified with the ``stabilizer field'' which is used for large field inflation in supergravity~\cite{Kawasaki:2000yn, Kallosh:2010ug}.
Recently, Ketov and the author proposed a new method for making large field inflation possible in supergravity~\cite{Ketov:2014qha, Ketov:2014hya}. In this approach, the stabilizer field is no longer required, and the positivity of the potential is restored by the shift-symmetric quartic term in the inflaton K\"{a}hler potential~\footnote{There are other similar but different approaches~\cite{Roest:2015qya, Linde:2015uga} that do not rely on the stabilizer field. Despite the long history of studies on inflation in supergravity, these studies, including Refs.~\cite{Ketov:2014qha, Ketov:2014hya}, began just recently apart from a classic work~\cite{Goncharov:1983mw}.}. It is, however, non-trivial whether the new method is applicable to the case of non-singlet fields (Higgs fields are charged electroweakly), and what kinds of inflaton potential are available. We thus re-examine the possibility of Higgs inflation in the MSSM (\textit{i.e.} without the stabilizer singlet) by introducing higher dimensional terms in the Higgs K\"{a}hler potential. \section{Large field inflation with Higgs(-like) fields} Shift symmetry is a key to realize large field inflation in supergravity~\cite{Kawasaki:2000yn} since the $F$-term scalar potential has an exponential factor of the K\"{a}hler potential, \begin{align} V=e^{K}\left( K^{\bar{j}i}\left( W_i + K_i W\right) \left( \overline{W}_{\bar{j}}+K_{\bar{j}}\overline{W}\right) - 3|W|^2 \right), \end{align} where a bar ($\bar{\phantom{w}}$) denotes complex conjugation, and we use the reduced Planck unit, $M_{\text{P}}/\sqrt{8\pi}=1$. Since Higgs fields are charged, we consider shift transformation of a singlet combination of Higgs fields like $H_u H_d$ or its power $(H_u H_d)^n$~\cite{Nakayama:2010sk}, where $H_u H_d$ is a short-hand notation for the SU(2) invariant contraction $H_u^{\text{t}}i\sigma_2 H_d=H_u^{+}H_d^{-}-H_u^0 H_d^0$. 
More generally, we consider the shift symmetry under the following transformation \begin{align} J(H_u H_d)\rightarrow J(H_u H_d)+ic, \label{shift} \end{align} where $J$ is an arbitrary holomorphic function, and $c$ is a constant. We take $J(H_u H_d)=(\kappa H_u H_d )^n$ as a benchmark choice, where $\kappa$ is a constant. Consider the following K\"{a}hler potential and superpotential: \begin{align} K=&|H_u|^2+|H_d|^2+c\left( J ( H_u H_d ) +\bar{J}( \overline{H_u}\, \overline{H_d}) \right) \nonumber \\ & +\frac{1}{2}\left( J ( H_u H_d ) +\bar{J}( \overline{H_u}\, \overline{H_d}) \right)^2 -\frac{\zeta}{4}\left( J ( H_u H_d ) +\bar{J}( \overline{H_u}\, \overline{H_d}) \right)^4, \label{Kpol} \\ W=& \mu H_u H_d . \label{WMSSM} \end{align} The first two terms in the K\"{a}hler potential are responsible for the Higgs kinetic terms, and they break the shift symmetry~\eqref{shift}. The superpotential is that of the MSSM. There are $2\times 4-3$ (would-be Nambu--Goldstone bosons) $=5$ scalar degrees of freedom. We truncate this theory to one with fewer degrees of freedom. The mass of the charged Higgs $(\sim g/\sqrt{\kappa})$ is larger than the Hubble scale during inflation provided we do not take $\kappa$ too large. The charged Higgses are then decoupled, and we neglect them hereafter. The K\"{a}hler potential is approximately constant along the quasi-$K$-flat direction, \begin{align} J(- H_u^0 H_d^0 ) + \bar{J} ( - \overline{H_u^0}\, \overline{H_d^0} )=0. \end{align} The $D$-term is constant along the $D$-flat direction, \begin{align} |H_u^0|=|H_d^0|. \end{align} In the $K$- and $D$-flat direction, there is one scalar component, the inflaton. In the field region satisfying $\left( |H_u^0|^2+|H_d^0|^2\right)|J'|^2\gg1$, the effect of the canonical terms on the kinetic term becomes negligible. Taking $\kappa \gg 10^3$, we can also ignore its effects on the potential.
In the $D$-flat direction, the truncated theory becomes \begin{align} K\simeq c \left( \Phi+\bar{\Phi} \right) +\frac{1}{2}\left( \Phi+\bar{\Phi} \right)^2 - \frac{\zeta}{4}\left( \Phi+\bar{\Phi} \right)^4, \end{align} where we have defined a chiral superfield $\Phi=J(-H_u^0 H_d^0)$. Note that the above K\"{a}hler potential is the same as that of the single-superfield framework with quartic stabilization~\cite{Ketov:2014qha}. The $K$-flat direction is now written as $\Phi+\bar{\Phi}=0$, \textit{i.e.} the imaginary direction, in which the inflaton rolls down. The MSSM superpotential becomes \begin{align} W=\mu J^{-1}(\Phi), \label{WJ} \end{align} where $J^{-1}$ is the inverse function of $J$. Thus, the potential in terms of the redefined inflaton superfield generically has a quite different shape from the original one. This is a manifestation of the mechanism of running kinetic inflation~\cite{Nakayama:2010kt, Nakayama:2010sk}. In our scenario, the inflationary scale ($\sim 10^{13}$ GeV in the case of typical chaotic inflation) is given by $\mu / \kappa$. If we want to take $\mu$ as light as $\mathcal{O}(1)$ TeV, $\kappa$ has to be as small as $\mathcal{O}(10^{-10})$, but we need large $\kappa$ to neglect the canonical terms. There is a possibility that the $\mu$ parameter is actually a dynamical field whose value changes after inflation. We do not pursue this scenario here since we do not include a singlet. Next, let us consider the case $\mu\sim 10^{13}$ GeV. First of all, SUSY should be broken also at this scale, so that the $\mu$ parameter and the soft SUSY breaking parameters cancel each other to reproduce the electroweak scale. That is, fine tuning is required. In view of the heavy (125 GeV) Higgs mass, the absence of SUSY signals at the LHC, and the string landscape, this large $\mu$ term and fine tuning may not be a problem.
However, it is hard to control the effects of both the soft SUSY breaking terms and the radiative corrections on the inflationary potential~\footnote{ Alternatively, one may consider the case where soft SUSY breaking terms drive inflation~\cite{Ibanez:2014kia, Buchmuller:2015oma}. In our case, this leads to the potential $|V|\sim |(\mu^2 / \kappa ) J^{-1}(\Phi) |$, which is of fractional power $1/n$ when we choose $J=(\kappa H_u H_d)^n$. }. Also, a complete analysis would involve the dynamics of both the Higgs-inflaton and SUSY breaking sectors. We leave these issues for future investigation. In the following, we switch to the possibility that the inflaton consists of MSSM Higgs-{\em like} fields. We mean by ``MSSM Higgs-like'' that they have the same quantum numbers as the MSSM Higgses and the K\"{a}hler potential and superpotential~\eqref{Kpol} and \eqref{WMSSM}, but they do not have large Yukawa couplings and their $\mu$ term may be larger than the electroweak scale. Then, all of the above problems can be circumvented. Let us specify the non-minimal K\"{a}hler function $J$ and see the resulting potential. For simplicity, we take the monomial function $J$ of power $n$, $J(H_u H_d)=(\kappa H_u H_d)^n$, and we obtain an inflaton potential of asymptotic power $2/n$: \begin{align} V=& \left| \frac{\mu}{\kappa}\right|^2 \left( (c^2-3) \left( \frac{\chi}{\sqrt{2}}\right)^{\frac{2}{n}} + \frac{c^2}{n^2} \left( \frac{\chi}{\sqrt{2}}\right)^{\frac{2}{n}-2} \right), \label{V2/n} \end{align} where $\chi=\sqrt{2}\text{Im}\Phi$ is the inflaton. Apparently, the potential diverges at the origin for $n> 1$, but this is an artifact of inappropriately extrapolating the description in terms of the composite field $\Phi=J(-H_u^0 H_d^0 )$. In FIG.~\ref{fig:ns_r}, predictions of the $2/n$-th power potentials on the $(n_{\text{s}}, r)$-plane are shown with the Planck contours.
The lines correspond to the quadratic, linear, $2/3$, $1/2$, $2/5$, and $1/5$ power potentials ($2/n$-th power potentials with $n=1, 2, 3, 4, 5$, and 10), respectively. The quadratic potential is disfavored by the latest observations, but the linear and fractional power potentials are within the $2\sigma$ contour of the Planck constraints~\cite{Ade:2015lrj}. In the Figure, we do not include the effect of the subleading term in Eq.~\eqref{V2/n}, because we have not taken into account subleading terms in the function $J$, and also because it introduces a free parameter $c$. \begin{figure}[ht] \centering \includegraphics[width=80mm]{ns_r_FracPower.eps} \caption{Inflationary predictions of the fractional ($2/n$-th) power potentials. The lines represent $n=1, 2, 3, 4, 5$, and 10, from top to bottom. The left (right) points correspond to the $e$-folding number $N=50$ (60). The light green contours are the constraint of Planck TT+lowP+BKP+lensing+BAO+JLA+$H_0$ (Fig. 21 in Ref.~\cite{Planck:2015xua}). In this Figure, the correction to the $2/n$-th power term in Eq.~\eqref{V2/n} is not included.} \label{fig:ns_r} \end{figure} We can also consider logarithmic K\"{a}hler potentials, \begin{align} K=& -a \ln \left( 1+\frac{1}{\sqrt{a}}\left( J(H_uH_d)+\bar{J}(\overline{H_u}\overline{H_d} )\right) -\frac{1}{a} \left( \left| H_u \right| ^2 + \left| H_d \right|^2 \right)+\frac{\zeta}{a^2} \left( J(H_uH_d)+\bar{J}(\overline{H_u}\overline{H_d}) \right)^4 \right), \label{KHiggsLog} \end{align} where $a\geq 3$ is a constant. This is invariant under the shift symmetry~\eqref{shift} up to the canonical terms $|H_u|^2$ and $|H_d|^2$. We truncate the theory as above, and consider the $K$- and $D$-flat directions. In the appropriate field range, \begin{align} \frac{ 1}{|J'|^2 } \left( 1+\frac{2}{\sqrt{a}} \left( H_u^0 H_d^0 J'+\overline{H_u^0}\overline{H_d^0}\bar{J}' \right)\right) \ll |H_u^0|^2+|H_d^0|^2 \ll a, \label{kappaNeglectC} \end{align} we neglect the canonical terms.
In terms of the inflaton superfield $\Phi=J(H_u H_d)$, the K\"{a}hler potential takes a form similar to that in Ref.~\cite{Ketov:2014hya}, \begin{align} K\simeq -a \ln \left( 1+\frac{1}{\sqrt{a}}\left(\Phi+\bar{\Phi}\right) +\frac{\zeta}{a^2}\left(\Phi+\bar{\Phi}\right)^4 \right). \end{align} Let us take the simple $J$ function again, $J=(\kappa H_u H_d)^n$. The potential becomes \begin{align} V=&\left| \frac{\mu}{\kappa} \right|^2 \left( (a-3) \left( \frac{\chi}{\sqrt{2}}\right)^{\frac{2}{n}} +\frac{1}{n^2} \left( \frac{\chi}{\sqrt{2}}\right)^{\frac{2}{n}-2} \right). \label{VHiggsALog} \end{align} The cases of $a\leq 2$ lead to negative potentials. For $a=3$, the first term vanishes, and the potential has only the term with a non-positive power. It is either a constant (for $n=1$) or a run-away type potential (for $n\geq 2$). In the case of $a\geq 4$, the qualitative feature of the potential (asymptotically $2/n$-th power) is the same as that of the potential~\eqref{V2/n} of the polynomial K\"{a}hler potential~\eqref{Kpol}. \section{Conclusions} We have found that large field inflation driven by an MSSM Higgs inflaton is non-trivial to achieve in supergravity. Even if the specifically chosen K\"{a}hler potential and the fine-tuning to reproduce the electroweak scale are accepted, the soft masses and radiative corrections affect the inflaton potential. We then loosened the requirements and considered Higgs-like fields. The resultant potential is different from the plateau-type potential of the original Higgs inflation, and it is derived from the inverse function of an arbitrary holomorphic function in the effective superpotential (see Eq.~\eqref{WJ}). With the simplifying assumptions stated above, we have fractional $2/n$-th power potentials. The simplest case ($n=1$; quadratic potential) is now disfavored by the Planck data, but it is interesting that the next-to-simplest cases ($n\geq 2$) can be tested in the near future. Let us finally discuss what the ``MSSM Higgs-like fields'' are.
A primary candidate is a GUT Higgs, which requires some modifications of the model considered here. It would explain the large inflationary scale with less tuning. One could also consider a Kaluza-Klein excitation of the MSSM Higgs fields, which can be as heavy as the inflationary scale. Moreover, in a superstring setup with multiple Higgs doublets, a small $\mu$-term can be obtained without fine tuning~\cite{Abe:2015uma}. In this model, the $\mu$-terms for other Higgses can be in the range between $\mathcal{O}(10^{12})$ GeV and $\mathcal{O}(10^{18})$ GeV, so they could be used as the inflaton if the K\"{a}hler potential can be modified as in Eq.~\eqref{Kpol} or \eqref{KHiggsLog}. In any case, one has to ensure small enough couplings not to break the shift symmetry significantly. A large $\kappa$ suppresses couplings in terms of the canonically normalized field $\Phi$ and improves the description. Taking $\kappa\sim10^3$ or $10^5$, one can have a GUT or Planck scale $\mu$ term, both of which are compatible with the above candidates. Studies of these possibilities are left for future work. \begin{acknowledgments} I am grateful to T.~Kitahara, K.~Mukaida, S.~Shirai, and M.~Takimoto for useful discussion. I also thank Y.~Tatsuta for introducing Ref.~\cite{Abe:2015uma} to me and for discussions on it. I am supported by a Grant-in-Aid for JSPS Fellows, and a JSPS Grant-in-Aid for Scientific Research No.~2610619. \end{acknowledgments} \bigskip
\section{Introduction} A recent trend in nonlinear optics is the development and design of waveguide systems with tunable nonlinearities. In addition to a broad tuning range, these systems are characterized by the ability to separate the contributions of the material constituents from the device geometry. This is in contrast to early approaches in both glass \cite{cohen1979tailoring} and semiconductors \cite{aitchison1997nonlinear}, which required a change in material composition to modify the waveguide properties. The role of geometry drastically changed with the advent of micro-structured fibers and the demonstration that fabrication parameters could be the dominant contribution to the dispersion \cite{knight1996all}. More recently, it has been shown that gas-filled hollow-core fibers can activate or suppress nonlinearities such as the Raman effect \cite{russell2014hollow}. In parallel, rapid advances in integrated semiconductor devices have pushed the forefront of optical science by reducing nonlinear thresholds to sub-femtojoule energy levels \cite{nozaki2010sub} while simultaneously incorporating dispersion control \cite{Colman:12}. Among nanostructures, photonic crystal waveguides (PhCWGs) are of extreme interest due to the link between geometric fabrication and direct modulation of the electric field, giving rise to new physical phenomena such as slow-light and enhanced nonlinearity. Slow-light refers to light propagating at a reduced group-velocity in the medium. Interest in this unique property has inspired a large body of research investigating the linear and nonlinear properties of slow-light in two-dimensional (2D) PhCWGs over the past decade \cite{baba2008slow,PhysRevE.64.056604}. Recall that the group index $n_g$ is related to the waveguide dispersion relation $\omega(k)$ and the group-velocity $v_g$: $n_g=\frac{c}{v_g}=c\frac{\partial k}{\partial\omega}$, with frequency $\omega$, wavevector $k$, and the speed of light in vacuum $c$.
Of particular significance, it was shown that optical $\chi^{(3)}$ effects such as the Kerr nonlinearity scale with the group index squared in the presence of slow-light \cite{PhysRevE.64.056604}. Briefly, one factor of $n_g$ arises from a larger electric field for a given power (nonlinear enhancement), with the second from a longer effective optical path length (linear enhancement). We note that the slow-light enhancement described here is derived from the \textit{structure}. In contrast, \textit{material} slow light from atomic resonances does not exhibit this enhancement \cite{boyd2011material}. In PhCWGs we write the effective nonlinear Kerr parameter as $\gamma_{eff} =\gamma \left(\frac{n_g}{n_o}\right)^2=\frac{\omega}{c} \frac{n_2}{A_{eff}}\left(\frac{n_g}{n_o}\right)^2$, with the bulk Kerr coefficient $n_2$, modal area $A_{eff}$, and linear refractive index $n_o$. While the $\gamma$ term is well described in the literature, research into the slow-light enhancement contribution $\frac{n_g}{n_o}$ in 2D PhCWGs required significant advances in nanofabrication techniques which were only mastered in the past few years. A 2D PhCWG consists of a periodic array of low-index dielectric embedded in a high-index material. A common experimental configuration, which we consider here, consists of a hexagonal pattern of air holes etched in an air-suspended semiconductor slab. Importantly, the dispersion of these 2D PhCWGs is highly tunable due to selected geometric modifications of the periodic lattice known as \textit{dispersion engineering} \cite{Colman:12,Li:08}. The precise modulation of the waveguide group index enables exquisite control over the dispersion and therefore the nonlinear properties of the medium. Experimental reports of slow-light enhanced nonlinear Kerr effects in 2D PhCWGs include demonstrations of solitons, third-harmonic generation, and four-wave mixing, amongst others \cite{NatPhot_Colman10,corcoran2009green,mcmillan2010FWM}.
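To make the slow-light scaling concrete, a minimal numerical sketch (not from the paper) evaluates $\gamma_{eff}$ with parameter values that appear later in the text ($\lambda=1.55~\mu$m, $n_2=6\times10^{-18}$ m$^2$/W, $A_{eff}=0.34~\mu$m$^2$, $n_g=15$, $n_o=3.17$ for GaInP):

```python
import math

# Effective Kerr parameter with slow-light enhancement:
#   gamma_eff = (omega/c) * (n2/A_eff) * (n_g/n_o)^2,
# where omega/c = 2*pi/lambda in vacuum.
def gamma_eff(lam, n2, A_eff, n_g, n_o):
    k0 = 2.0 * math.pi / lam          # omega/c [1/m]
    return k0 * n2 / A_eff * (n_g / n_o) ** 2

g = gamma_eff(1.55e-6, 6e-18, 0.34e-12, 15.0, 3.17)
print(f"gamma_eff ~ {g:.0f} (W.m)^-1")   # ~1600 (W.m)^-1
```

This reproduces the value $\gamma_{eff}\approx 1600$ (W$\cdot$m)$^{-1}$ used in the experiments cited below, confirming that the $\left(n_g/n_o\right)^2\approx 22$ slow-light factor dominates the bulk value of $\gamma\approx 72$ (W$\cdot$m)$^{-1}$.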
Despite this strong interest in slow-light, the dispersion of the Kerr $\chi^{(3)}(\omega)$ nonlinearity, or \textit{self-steepening} (SS) term $\tau_{NL}=\frac{1}{\gamma_{eff}}\partial_\omega \gamma_{eff}$, has received surprisingly little attention in these systems. The earliest investigations of SS in waveguide systems were carried out in glass fiber, with these studies emphasizing the spectral re-shaping properties \cite{fork1983femtosecond,fiberSS} or the formation of optical shock fronts \cite{anderson1983}. Later it was shown that SS is essential for extending the validity of the Generalized Nonlinear Schr\"odinger Equation (GNLSE) envelope approximation down to the single-cycle regime \cite{brabec1997nonlinear}, and for explaining broadband supercontinuum generation \cite{gaeta2000catastrophic}. In these systems the SS term is determined almost exclusively by the wave angular frequency ${\omega}$ and therefore exhibits a fixed response of about a femtosecond at optical frequencies. A more recent numerical investigation showed that the wavelength dependence of the nonlinear Kerr effect in silicon channel waveguides leads to values up to tens of femtoseconds near mode cutoff \cite{siliconSS}. An alternative approach for generating SS using cascaded $\chi^{(2)}$ media as an \textit{effective} tunable $\chi^{(3)}$ was shown to have similar strength to traditional media \cite{moses2006}. To date, self-steepening in tunable $\chi^{(3)}$ media has not been explicitly experimentally demonstrated. In this article, we investigate self-steepening in 2D photonic crystal waveguides. Importantly, we show that the large variation in waveguide group index $n_g$ leads to a new physical mechanism for generating self-steepening with a characteristic time scale $\tau_{NL}$ on the order of hundreds of femtoseconds, two orders of magnitude larger than in non-periodic waveguide systems \cite{fiberSS,siliconSS}. We derive an analytic formulation and describe the origin of this effect.
Further, we describe structures in which the values of $\tau_{NL}$ are \textit{anomalous} (negative), hence leading to notably different physical effects from previously known $\chi^{(3)}$ systems. The broad tuning range of $\tau_{NL}$ enabled by dispersion-engineering makes PhCWGs an ideal system for further studies of SS. While the magnitude of $\tau_{NL}$ is quite large, the presence of other effects such as group velocity dispersion (GVD,~$\beta_2=\frac{1}{c}\frac{\partial n_g}{\partial\omega}$), and higher order nonlinearities such as multi-photon absorption or free-carrier effects, can disrupt the ideal dynamics. Using a numerical model, we consequently describe the experimental situations in which SS is expected to contribute significantly in the semiconductor system under consideration. This analysis supports recent experimental results showing pulse temporal advance in PhCWGs \cite{Husko_SciRep13,Raineri_PRB13}. Though the Raman effect is narrowband in semiconductors and negligible here, this analysis could be extended to include periodic glass media where Raman and Brillouin effects must be considered \cite{freeman2005chalco}. These results provide the first theoretical description of giant and tunable self-steepening in nanoscale optical waveguides. More generally, this investigation applies to all periodic media (1D, 2D, 3D) where a dispersive $\chi^{(3)}(\omega)$ arises due to strong group-index modulation near the band-edge. Given the importance of the dispersive nonlinearity $\tau_{NL}$ in explaining supercontinuum broadening in fibers, we expect the new terms elucidated here to be critical for accurately describing this phenomenon in photonic crystals. \section{Self-steepening in PhCWGs} \begin{figure}[!htb] \centerline{\includegraphics[width=9cm]{Figure_I.eps}} \caption{Group index curves and self-steepening parameters of PhC waveguides.
(a) We show three typical waveguides: (b,green) a \textit{dispersion-engineered} structure with a quasi-flat plateau, (c,cyan) a W1 line-defect waveguide, and (d,blue) a \textit{dispersion-engineered} waveguide with a peak. (b)-(d) show the self-steepening parameter $\tau_{NL}=\gamma_1/\gamma_{eff}$ (black line) as a function of wavelength for the three structures. The different contributions to SS are shown: $1/\omega$(black-dashed), $2\partial_\omega n_{g}/n_g$(red) and $\partial_\omega A_{eff}/A_{eff}$(blue).} \label{Fig:SS_PhCWG} \end{figure} Though near-arbitrary dispersion profiles are possible in periodic media \cite{Colman:12,Li:08}, here we focus on three specific experimentally demonstrated structures for clarity. \figs{Fig:SS_PhCWG} shows three group index curves for PhCWGs with different dispersion relations: (i) a standard line defect waveguide of one missing row of holes in a hexagonal lattice (W1), (ii) a dispersion-engineered waveguide\cite{Colman:12} exhibiting a plateau, and (iii) a dispersion-engineered waveguide with a pronounced group index peak. The dot indicates a point of interest we investigate in this work. If we assume the Kerr nonlinearity varies linearly with frequency, then the impact of the dispersive nonlinearity on pulse propagation dynamics is modeled by adding a first order Kerr correction term to the GNLSE \cite{brabec1997nonlinear,siliconSS}: \begin{equation} \partial_z A+\frac{i}{2}\beta_2\partial_{tt}A = i \gamma_{eff}(1 + i \tau_{NL} \partial_t) |A|^2A. \label{eq:GNLSE} \end{equation} \noindent Here \textit{t} is the relative time in the reference frame of the pulse with $A(z,t)~=~\sqrt{P_o(z,t)}e^{i\phi(z,t)}$ the electric-field envelope with power $P_o$ and phase $\phi$.
The last term $\tau_{NL}$ is referred to as the \textit{self-steepening} or \textit{shock} term and is expressed as: \begin{equation} \tau_{NL}=\frac{\gamma_1}{\gamma_{eff}}=\frac{1}{\omega}+\frac{1}{n_{2}}\frac{\partial n_{2}}{\partial\omega}-\frac{1}{A_{eff}}\frac{\partial A_{eff}}{\partial\omega}+\frac{2}{n_g}\frac{\partial n_g}{\partial\omega}, \label{eq:dOmega} \end{equation} \noindent where we used the relationship $\gamma_1~=~\partial_\omega\gamma_{eff}$. The first two terms exist in bulk material and contribute up to a few femtoseconds. In practice the Kerr dispersion is often neglected. These are the \textit{traditional} self-steepening terms known in unstructured waveguides and, with such small magnitudes, do not play a role here \cite{fiberSS}. The third term, encompassing the effective modal area $A_{eff}$, was considered in fibers \cite{fiberModeArea}, and theoretically in silicon channel waveguides \cite{siliconSS}, but has received surprisingly little attention in PCFs \cite{Dudley:06}. Further, in those earlier works the area term contributed a few percent, whereas here the slow-light modes exhibit rapid variation in spatial profile with $\omega$, and this term contributes approximately 25~\% to the total $\tau_{NL}$. The key physical insight of this work is the realization that the final term, due to the dispersive group-index $n_g$, gives rise to a previously unknown mechanism for controlling self-steepening. In contrast to earlier observations, in our photonic crystal waveguide the magnitude of $\tau_{NL}$ is set by a combination of the group-index $n_g$, GVD ($\beta_2$) and effective modal area $A_{eff}$. As these three parameters depend strongly on the geometry, we clearly see the advantage of using nanostructures to study the dispersive nonlinearity. We now examine this effect in detail. \figss{Fig:SS_PhCWG}(b)-(d) show the evolution of the ratio $\tau_{NL}$ for the waveguides in \fig{Fig:SS_PhCWG}(a). The contribution of the different terms composing Eqn.
\ref{eq:dOmega} are also represented. We note three major observations unique to the PhCWG system. First, $\tau_{NL}$ is two orders of magnitude larger than in unstructured waveguides (black dashed line) where $\tau_{NL}=\frac{1}{\omega}\approx$~1~fs. Notice the $\frac{1}{\omega}$ contribution appears to be near-zero and completely flat compared to the PhCWG contributions on this scale. Second, the sign of $\tau_{NL}$ is negative. Consequently the spectral and temporal properties of the nonlinear waves behave in an opposite or \textit{anomalous} manner, as we will examine. Third, the dominant contribution arises from the dispersion term (red) with a modification in the opposite direction due to $A_{eff}$ (blue). If we ignore the area contribution, we approximate $\tau_{NL} \approx \frac{2}{n_g}\frac{\partial n_g}{\partial\omega}=\frac{2c\beta_2}{n_g}$. Examining the characteristics of the waveguides individually, we find different trends for each. Regarding the W1 waveguide \fig{Fig:SS_PhCWG}(c), $\tau_{NL}$ steadily decreases approaching the mode cutoff (increasing wavelength). A value of about -200 fs is obtained close to the band edge, however in a region where propagation loss is large in this type of PhCWG \cite{OFaolain2010loss}. By contrast, for the dispersion-engineered waveguides the values are mostly negative, with a small positive region for the structures presented. Notice in this case values as large as -200 fs (\fig{Fig:SS_PhCWG}(b)) and -400 fs (\fig{Fig:SS_PhCWG}(d)) are reached away from the band edge where propagation losses are reduced \cite{Patterson_PRB2010,Mann_OL13}. The lower linear loss of the latter structures has implications for practical observation of these effects. \section{Temporal and spectral properties due to anomalous self-steepening in PhCWGs} \par We now describe the physical implications of the self-steepening term on nonlinear wave propagation in PhCWGs.
For that purpose we consider typical parameters found in recent nonlinear experiments \cite{Husko_SciRep13,Colman2012,NatPhot_Colman10,Raineri_PRB13,combrie2009}. We take $\gamma_{eff}$~=~1600~(W.m)$^{-1}$ ($n_g$=15, $n_o$=3.17 for GaInP), an anomalous dispersion of $\beta_2$~=~-7.7 ps$^2$/mm, $n_2$~=~6$\times10^{-18}$m$^2$/W and modal area $A_{eff}$~=~0.34 $\mu$m$^2$ \cite{Colman:12}. The dispersive nonlinearity is $\tau_{NL}$=~-220 fs as detailed above. The input pulses are $T_{FWHM}$~=~2.3 ps (full-width at half-maximum of a hyperbolic secant, $T_{FWHM}=1.76~T_{o}$) with $P_0$~=~3~-~10~W~(6 - 20 pJ/pulse). The dispersion length $L_D=\frac{T_o^2}{|\beta_2|}$ is computed as 220~$\mu$m. Importantly, throughout this work we purposely maintain small soliton numbers ($N<$~2) so as to avoid more complicated soliton dynamics modulating the peak intensity and pulse duration \cite{Husko_SciRep13}. At this point we are focusing on the basic physical effects resulting from the unique photonic crystal dispersion in the `ideal' system. In the next section we will introduce the full effects present in typical semiconductor waveguides and describe how these results are modified. While based on actual experimental structures, note that the conditions may not be optimal for emphasizing the self-steepening effect, and we invite the community to explore the parameter space further. The normalized self-steepening parameter $s$ is: \begin{equation} s = \frac{\tau_{NL}}{T_0}. \label{eq:shockParameter} \end{equation} \noindent For our parameters $s$=~-0.1, more than five times larger than in non-periodic waveguides \cite{fiberSS}. This is even more remarkable when one considers that the pulses are 2.3 ps long compared to the sub-100 fs pulses required in unstructured media where $s$~=~$\frac{1}{\omega T_0}$. The large value of $s$ requires much shorter length scales to observe the associated effects of self-steepening.
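A back-of-envelope check of the dominant group-index contribution to $\tau_{NL}$ is straightforward (a sketch using only the parameters quoted above; the $A_{eff}$ term, which partially cancels, is neglected here, so the result overshoots the quoted $-220$ fs):

```python
# Dominant (group-index) contribution to the self-steepening time:
#   tau_NL ~ 2 c beta2 / n_g,
# with beta2 = -7.7 ps^2/mm and n_g = 15 as quoted in the text.
c = 2.998e8                      # speed of light [m/s]
beta2 = -7.7e-24 / 1e-3          # -7.7 ps^2/mm converted to s^2/m
n_g = 15.0

tau_NL = 2.0 * c * beta2 / n_g   # [s]
print(f"tau_NL (n_g term only) ~ {tau_NL * 1e15:.0f} fs")
```

This gives roughly $-308$ fs; including the $\approx 25\%$ opposite-sign $A_{eff}$ contribution discussed earlier brings the total toward the quoted $\tau_{NL}\approx-220$ fs.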
A pulse experiencing self-steepening will eventually develop a shock front after propagating a \textit{shock length}\cite{anderson1983} of about: \begin{equation} z_s = 0.43 \frac{L_{NL}}{|s|}, \label{eq:shockLength} \end{equation} \noindent where $L_{NL}~=~(\gamma_{eff} P_o)^{-1}$ is the nonlinear effective length, and the numerical constant depends on the actual pulse shape, here a hyperbolic secant \cite{anderson1983}. Typical shock lengths in our structures are $z_s\approx$ 350 $\mu$m for a 2.3 ps pulse with peak power of 8 W. As this article is focused on self-steepening, we do not describe the physics of optical shock waves in detail here. Nonetheless, this is a well known and useful length scale for estimating the relative scaling of self-steepening which we adopt here for convenience. Note that the mechanism presented here is not the only possibility for developing shock fronts. A highly nonlinear medium in presence of weak normal dispersion, for example, could also lead to shock formation even though no dispersive nonlinearity is present \cite{Trillo:PRA14}. The nonlinear dispersion $\tau_{NL}$ plays a key role in the pulse dynamics of both the temporal and spectral properties. We first address the impact of SS on temporal shape and delay. A subtle point that must be addressed is the dual role of $\beta_2$. First, it has been shown in earlier work that $\beta_2$ dissipates shock fronts \cite{fiberSS}. Second, and separate to this point, we showed above that $\beta_2$ is intrinsically linked to the large magnitude of $\tau_{NL}$. As a result, one cannot ignore $\beta_2$ for a SS effect arising from a strong modulation of the group index $n_g$ and consequently observing a tilted line-shape is unlikely for SS derived from this method. While the temporal shape is not modulated in this case, the temporal arrival time is affected as we show below. 
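The shock-length estimate above can be cross-checked numerically (a sketch with the parameters quoted in the text, not the authors' code):

```python
# Shock length z_s = 0.43 * L_NL / |s| with the quoted parameters:
# gamma_eff = 1600 (W.m)^-1, P0 = 8 W, s = -0.1.
gamma_eff = 1600.0   # (W.m)^-1
P0 = 8.0             # peak power [W]
s = -0.1             # normalized self-steepening parameter

L_NL = 1.0 / (gamma_eff * P0)    # nonlinear length [m], ~78 um
z_s = 0.43 * L_NL / abs(s)       # shock length [m]
print(f"L_NL ~ {L_NL * 1e6:.0f} um, z_s ~ {z_s * 1e6:.0f} um")
```

The result, $z_s\approx 340~\mu$m, is consistent with the $\approx 350~\mu$m quoted in the text for a 2.3 ps, 8 W pulse.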
\figs{fig:negativeSS}(a) shows the temporal profile obtained for the point indicated by the blue dot in \fig{Fig:SS_PhCWG}(a-c) after a propagation distance of 200~$\mu m$. The input pulse (black line) has a peak power of 8~W ($L_{NL}$=80 $\mu$m), hence the pulse has completed about two and a half nonlinear lengths $L_{NL}$. The thick-red line shows the case where we include only the Kerr and SS contributions to Eqn. \ref{eq:GNLSE}. That is, we include the $\beta_2$ contribution to $\tau_{NL}$ but neglect the temporal dispersion term $\partial_{tt}$. Notice that since $s$ is negative here the pulse peak tilts \textit{forwards} in time, which is opposite to earlier studies with a positive self-steepening term. Moreover, with the large $s$ value here, a steep temporal leading edge is already clearly visible after a propagation of just a few $L_{NL}$. Simulations with all the terms in Eqn. \ref{eq:GNLSE} are shown as the dashed blue line. In contrast to the ideal dispersion-less case (thick red) where the temporal trace exhibits a clear abrupt leading edge characteristic of shock formation, the shock front is less pronounced for the same propagation length and the pulse tends to preserve its initial shape (soliton effect). A separate temporal effect in the presence of self-steepening is a shift in pulse arrival time. Assuming the pulse duration $T_0>|\tau_{NL}|$ and a moderate soliton number, the dispersive nonlinearity acts as a small perturbation that slightly modifies the group velocity, shifting the arrival time by $\Delta T=\frac{z}{L_{NL}}\tau_{NL}$ \cite{anderson1983,PhysRevE.57.4751,chen2010soliton}. Here, with the negative $\tau_{NL}$ value, the pulses are expected to advance in time, once again in contrast to the delay of earlier observations. This effect is clearly visible in the temporal advance of the two traces examined in \fig{fig:negativeSS}.
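The perturbative arrival-time shift is simple to evaluate (a sketch; $z$ and $L_{NL}$ are the values used in the simulation just described):

```python
# Perturbative arrival-time shift Delta_T = (z / L_NL) * tau_NL,
# with z = 200 um, L_NL = 80 um, tau_NL = -220 fs from the text.
z = 200e-6          # propagation distance [m]
L_NL = 80e-6        # nonlinear length [m]
tau_NL = -220e-15   # self-steepening time [s]

dT = (z / L_NL) * tau_NL
print(f"Delta_T ~ {dT * 1e15:.0f} fs")   # negative: a temporal advance
```

After 2.5 nonlinear lengths the pulse is advanced by about 0.55 ps, the order of magnitude of the shift visible in \fig{fig:negativeSS}.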
One of the most challenging aspects of observing self-steepening in real PhCWGs is the strong resemblance of SS and free-carrier dispersion (FCD) in the time domain. We investigate this in detail below. \figs{fig:negativeSS}(b) shows typical pulse spectra for the negative $s$ values characteristic of PhCWGs. The pulse spectrum is clearly rendered asymmetric in the presence of self-steepening. The blue-shifted peaks become more intense, while the red components are notably broader compared to the symmetric broadening characteristic of SPM. The role of $\beta_2$ is less pronounced. \begin{figure}[h] \centerline{\includegraphics[width=9cm]{Figure_II.eps}} \caption{Temporal and spectral pulse properties of \textit{anomalous} self-steepening in PhC waveguides. (a) Temporal pulse shapes. (b) Spectral pulse shapes. Notice the behavior is opposite to unstructured media exhibiting normal self-steepening. Traces are shown for propagation of 200~$\mu$m$~\approx~2.5 L_{NL}$. Legend: Kerr and SS only (thick red), full Eqn. \ref{eq:GNLSE} (dashed blue), and input pulse (black).} \label{fig:negativeSS} \end{figure} \section{Self-steepening in semiconductors} In our analysis thus far we have ignored several effects present in practical systems. The role of linear loss has been previously studied and in the limit $\alpha z_s>$1 shock waves are unobservable altogether \cite{anderson1983}. For the waveguide experiments referenced above the linear loss is given by $\alpha$~=~15 dB/cm~(at~$n_g$~=~15). This yields $\alpha z_s$~=~0.11, indicating these waveguides can in principle support shocks. However, in semiconductor media we must also consider nonlinear absorption of multiple photons across the electronic bandgap. For a typical wavelength of 1550 nm (photon energy of 0.8 eV), silicon ($E_g$~=~1.1 eV) is restricted by two-photon absorption (TPA) and the wide-gap material GaInP ($E_g$~=~1.9 eV) is limited by three-photon absorption (3PA). 
While the optical properties of silicon have been widely studied, only over the past few years have we investigated the $\chi^{(3)}$ properties of GaInP and established this material as a viable platform for nonlinear optics at 1.5~$\mu$m~\cite{Husko_SciRep13,NatPhot_Colman10,huskoOptExp2009,combrie2009}. In the simplest terms, nonlinear absorption damps the dynamics similarly to linear absorption. An order of magnitude estimate shows that loss due to TPA (3PA) requires $\alpha_2 I z_s<1$ ($\alpha_3 I^2 z_s<1$), with the intensity $I=\frac{P_o}{A_{eff}}$, to observe a shock. Physically these ratios compare the strength of the nonlinear loss to the self-steepening term. For the TPA case (e.g. silicon, $\alpha_{2}$~=~1~cm/GW \cite{bristow2007}) the ratio $\alpha_2 I z_s=\frac{0.43}{|s|}~\frac{\alpha_2 }{k_o n_2}\approx 1.8$, indicating SS-induced shocks are not generally accessible in this material, even for this large $s$=~-0.1. Here we have defined a new nonlinear figure-of-merit (FOM) for self-steepening and TPA. Its form is noticeably similar to the well-known version for Kerr-TPA switching $(\frac{\alpha_2}{k_on_2})$ \cite{mizrahi1989} with the additional term for SS. Notice this ratio is independent of power for TPA and therefore SS will always be much weaker than TPA. Since TPA dominates the SS effect, we focus mainly on the 3PA system in the following analysis. The 3PA material (e.g.~GaInP, $\alpha_{3}$~=~0.013~cm$^3$/GW$^2$ \cite{wherrett1984}) is much more favorable, yielding $\alpha_3 I^2 z_s=\frac{0.43}{|s|} \frac{\alpha_3 I}{k_o n_2}~\leq$~1, which is satisfied for intensities up to about 50~GW/cm$^2$. This threshold is intensity-dependent due to the different nonlinear orders of $\chi^{(3)}$ Kerr and $\chi^{(5)}$ 3PA.
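The TPA figure of merit above can be reproduced numerically (a sketch; we pair the silicon-like $\alpha_2$ with the $n_2$ quoted earlier in the text, an assumption insofar as the authors' exact inputs are not restated here):

```python
import math

# Self-steepening vs. TPA figure of merit:
#   alpha2 * I * z_s = (0.43 / |s|) * alpha2 / (k0 * n2),
# with alpha2 = 1 cm/GW, n2 = 6e-18 m^2/W, lambda = 1.55 um, s = -0.1.
alpha2 = 1e-2 / 1e9            # 1 cm/GW converted to m/W
n2 = 6e-18                     # Kerr coefficient [m^2/W]
k0 = 2.0 * math.pi / 1.55e-6   # vacuum wavenumber [1/m]
s = -0.1

fom = (0.43 / abs(s)) * alpha2 / (k0 * n2)
print(f"FOM ~ {fom:.2f}")      # > 1: TPA dominates the SS effect
```

The result, $\approx 1.8 > 1$, matches the value in the text and confirms that SS-induced shocks are inaccessible in a TPA-limited material at these parameters.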
Including the slow-light enhancements $\alpha_3\propto n_g^3$ and $n_2 \propto n_g^2$, this relation becomes $\frac{0.43}{|s|} \frac{\alpha_3 I}{k_o n_2} \left(\frac{n_g}{n_o}\right)\leq$~1 and is satisfied up to $\approx$10~GW/cm$^2$ for the conditions here. In contrast, the TPA-limited SS case does not scale with slow-light. Note these estimates are only indicative and do not correspond to strict thresholds. For larger powers, free-carrier absorption would play an important role until the peak power falls below its threshold. Now that we have established the role of nonlinear loss, we describe the free-carrier effects. Free-carriers generated via these nonlinear absorption mechanisms have an equally significant impact on the pulse dynamics through both dispersive (FCD, $n_{FC}$) and absorptive (FCA, $\sigma$) contributions. Importantly, the physical signatures of anomalous self-steepening strongly resemble those of FCD, especially in the time domain. Namely, like anomalous SS, the pulse also advances as a function of input power due to FCD combined with anomalous GVD, as our recent experiments show \cite{Husko_SciRep13,blancoFCD2014}. A broader GNLSE including all of these effects is: \begin{multline} \frac{\partial A}{\partial z} =-\frac{\alpha}{2}A - i\frac{\beta_2}{2} \frac{\partial^2A}{\partial t^2} + \frac{\beta_3}{6} \frac{\partial^3A}{\partial t^3} +(ik_o n_{FC}-\frac{\sigma}{2})N_cA \\ + (i\gamma_{eff} - \gamma_1 \frac{\partial}{\partial t}-\frac{\alpha_{2eff}}{2})|A|^2A -\frac{\alpha_{3eff}}{2}|A|^4A. \label{eqn:fullNLSE} \end{multline} \begin{figure}[b] \centerline{\includegraphics[width=9cm]{Figure_III.eps}} \caption{Temporal and spectral behavior of anomalous self-steepening after a propagation distance of 500~$\mu m$ in realistic PhCWGs. (a) Temporal traces for 3PA-limited materials: full (green), FCD alone (blue), self-steepening alone (red). The black lines correspond to the input pulse. (b) Spectra for 3PA-limited materials.
(c)-(d) Pulse temporal advance as a function of power for (c) 3PA-limited (solid) and (d) TPA-limited (dashed) systems. Green curves correspond to the full model in Eqn. \ref{eqn:fullNLSE}. Blue lines are the case where we neglect SS, whereas red lines indicate when FCD is neglected. We include the full 3PA result in (d) to compare the relative scales of the two systems. } \label{fig:temporalSpectral} \end{figure} \noindent For the modeling below, we take the slow-light scaled values of the bulk parameters as described in prior literature \cite{huskoOptExp2009,blancoFCD2014}. The parameters are $\alpha_{3eff}=\frac{\alpha_3}{A_{5eff}^2}\left(\frac{n_g}{n_0}\right)^3$=~18~m$^{-1}$.W$^{-2}$, $n_{FC,eff}$=$n_{FC}\left(\frac{n_g}{n_0}\right)$=~-1.8 $\times10^{-26}$m$^3$, $\sigma_{eff}=\sigma\left(\frac{n_g}{n_0}\right)$=~1.3 $\times10^{-20}$m$^2$. The 3PA area is $A_{5eff}$\cite{huskoOptExp2009}. We have also included the third-order dispersion (TOD, $\beta_3=\frac{1}{c}\frac{d^2n_g}{d\omega^2}$~=~+0.7 ps$^3$/mm) for completeness, as this cannot be ignored in the real system. Regarding one case briefly presented below where the material is limited by TPA (silicon), we take $\alpha_{2eff}=\frac{\alpha_2}{A_{eff}}\left(\frac{n_g}{n_{0,Si}}\right)^2$~=~570~(W.m)$^{-1}$. It is worthwhile to point out that these parameters correspond to actual experimental parameters and therefore are immediately realizable in current systems. \figs{fig:temporalSpectral}(a) shows the pulse temporal shift due to the competing self-steepening and FCD effects in the 3PA-limited material ($\alpha_2=0$) at a peak power of 8~W. When considered in isolation, the FCD-GVD curve (blue, no SS) and the SS-only curve (red, no FCD) would each contribute a few picoseconds of temporal advance. Moreover, they have a relatively similar temporal shape. Note these effects do not add linearly, but rather compete for power and interact dynamically to yield the full result (green).
Importantly, the spectral features of FCD and SS are distinct. \figs{fig:temporalSpectral}(b) shows the spectral properties of pulses propagating in the 3PA system. Considering only the self-steepening effect (red curve) results in a minor modulation to the symmetric shape expected from SPM-only and does not shift the spectral center-of-mass. In contrast, the FCD induces a clear blue shift in the pulse center-of-mass \cite{Husko_SciRep13}. When we consider these effects simultaneously (green curve), the FCD is clearly the dominant contribution. Thus these effects are more easily distinguished in the spectral domain. The nonlinear scaling laws of SS and the FCD-GVD temporal shift are also notably different. Notice $\tau_{NL}~\propto~P_o$ whereas FCD$^{(2)}\propto~P_o^2$ (TPA) or FCD$^{(3)}\propto~P_o^3$ (3PA), where we have written FCD$^{(m)}$, with $m$ indicating the order of nonlinear absorption generating the free-carriers. Figures \ref{fig:temporalSpectral}(c) and (d) report the scaling of SS and FCD as a function of power for (c) 3PA and (d) TPA-limited materials. We observe SS (red curve) scales much more slowly compared to FCD (blue curve). The full simulation (green dashed) more closely follows SS at low powers and the FCD trend at higher powers. For the TPA case shown in (d) the SS-induced delay is noticeably smaller due to the stronger nonlinear TPA loss, which caps the peak power more than in the 3PA case. In (d) we also show the 3PA curve for comparison, highlighting the greater temporal shift in this case. The striking similarity of the temporal advance and pulse shapes from both SS and FCD-GVD is highlighted in recent experimental results. The GaInP waveguides in both cases are similar to that in \fig{Fig:SS_PhCWG}(c), with relatively small values of $\tau_{NL}$ compared to that highlighted in this work. In a 3PA system the temporal advances are attributed to FCD-GVD supported by numerical modelling \cite{Husko_SciRep13}, though with no SS term. 
Similar results were shown in Refs.\cite{blancoFCD2014,blanco2014observation} in silicon. In contrast, a pulse advance attributed to SS only was reported in Ref. \cite{Raineri_PRB13}. Critically, that report did not include the important contribution of the FCD-GVD term, attributing the advance to SS alone. As we have shown here, the FCD-GVD advance plays an equally important role as SS and cannot be ignored. The true physical situation is likely a combination of these effects, though extracting the exact scaling would be challenging due to soliton dynamics modulating the peak power, and careful experiments and numerical modeling are required to discern the effects. Nonetheless, one could use the moments method of Ref. \cite{lefrancois2015} to estimate the magnitude of the temporal shift from FCD-GVD, or the SS time-shift equation given above, for order-of-magnitude values. One last important aspect to consider is the relative impact of the instantaneous SS and non-local FCD effects on pulse delay during propagation. The SS-induced delay depends directly on the instantaneous peak power and hence will eventually decrease due to linear and nonlinear loss. In contrast, the effect of FCD on the group velocity persists even if the pulse peak power decreases. Physically, this results from the FCD-induced spectral blue-shift and the accompanying change in the group velocity experienced by the pulse \cite{lefrancois2015}. Finally, we show the evolution of the nonlinear dynamics of the pulse propagating down the waveguide. \figs{fig:Delay_Z}(a) shows the change in pulse delay along the waveguide length for the two mechanisms in the 3PA system at a peak power of 8~W. The change in delay due to the SS effect (red, FCD=0 and FCA=0) is strongest in the first 200 $\mu$m and then tapers off due to loss. The FCD effect (blue, $\gamma_1$=0) continues to experience a change in delay right until the end of the propagation, with the full curve (green) more closely resembling this case. 
\figs{fig:Delay_Z}(b) shows the corresponding peak power evolution which gives insight into the dominant nonlinear mechanism as the pulse evolves. We attribute the initial increase in peak power to soliton compression. \begin{figure}[h] \centerline{\includegraphics[width=6cm]{Figure_IV.eps}} \caption{Role of instantaneous SS and non-local FCD on pulse delay. (a) Contribution of SS alone (red) and FCD alone (blue) to the total pulse delay (green) for propagation lengths up to 1 mm. (b) Peak power evolution corresponding to the cases presented in the upper panel. Note the low overall peak power after 500 $\mu$m of propagation. Legend: full model (green), no free carrier effects (red), no SS (blue). } \label{fig:Delay_Z} \end{figure} \vspace{-1 cm} \section{Conclusion} In this Letter, we investigated the nonlinear self-steepening effect in photonic crystal waveguides. Our first-principles derivation in the nanostructured periodic waveguides revealed a self-steepening term two orders of magnitude larger than in typical systems. Importantly, the self-steepening coefficient $\tau_{NL}$ is determined by the geometric parameters of the waveguide, offering a large tuning range of both positive and negative values. The origin of this giant $\tau_{NL}$ is the strong dispersion of PhCWGs counterbalanced by a modal area contribution. We considered the role of higher-order effects in practical systems with new figures of merit, concluding that the nonlinear loss quenches the dynamics. We showed that the principal physical signature of the \textit{anomalous} SS effect is a temporal forward tilt and pulse advance, in contrast to the delay observed in normal SS media. In the semiconductor waveguides these effects compete with FCD, which also advances the pulse, whereas the spectral signatures of self-steepening and FCD are distinct. 
We suggest that future experiments be undertaken to explore the full range of these dynamics and reveal this giant tunable self-steepening mechanism, especially in the supercontinuum regime. \\ \noindent \textbf{Acknowledgements}\\ This work was supported by the VKR Foundation through the centre of excellence NATEC and the Australian Research Council (ARC) Discovery Early Career Researcher Award (DECRA DE120102069).
\section{Introduction} \label{sec:intro} Deterministic inventory theory provides streamlined optimization models that attempt to capture tradeoffs in managing the flow of goods through a supply chain. We consider two classical models in deterministic inventory theory: the \emph{Joint Replenishment Problem} (JRP) and the \emph{Inventory Routing Problem} (IRP). These inventory models have been studied extensively in the literature (see, e.g., \cite{aksoy1993multi}, \cite{joneja1990joint}) and recently there has been significant progress on many variants of these models (see, e.g., \cite{levi2006primal}, \cite{levi2008constant}, \cite{nonner2013efficient}, \cite{CELS2014}, \cite{FNR14}). In this paper, we present a unified approach that yields approximation algorithms for both models with generalized cost structure -- the JRP with submodular setup cost and the IRP with arbitrary embedding metric. The JRP with deterministic and non-stationary demand is a fundamental yet notoriously difficult problem in inventory management. In these models, there are multiple item types, and we need to coordinate a sequence of (joint) orders to satisfy the demands for different item types before their respective due dates. Ordering inventory in a time period results in setup costs (or fixed ordering costs), and holding inventory before it is due results in holding costs. The objective is to find a feasible ordering policy to satisfy every demand point on time over a finite planning horizon so as to minimize the sum of setup and holding costs. The JRP is a natural extension of the classical economic lot-sizing model that considers the optimal trade-off between setup costs and holding costs for a single item type (see \cite{wagner1958dynamic}). With multiple item types, the JRP adds the possibility of saving costs via coordinated replenishment, a common phenomenon in supply chain management. Most of the literature on deterministic JRP is on the \emph{additive} joint setup cost structure. 
Under this structure, there is a one-time setup cost if any item type is ordered, and there is an individual item setup cost for each item type ordered; the joint setup cost for this particular order is simply the sum of the one-time setup cost and these individual item setup costs. The additive joint setup cost structure loses a significant amount of modeling power and flexibility (see \cite{queyranne1986polynomial}, \cite{federgruen1992joint}, \cite{teo2001multistage}, \cite{CELS2014}). In this paper, we adopt the joint setup cost structure introduced recently in \cite{CELS2014} that satisfies two natural properties known as \emph{monotonicity} and \emph{submodularity}. The monotonicity property means that the joint setup cost increases with the set of item types ordered. The submodularity property captures economies of scale in ordering more item types, i.e., the marginal cost of adding any specific item type to a given order decreases in the set of item types included. The IRP is also a classical problem in inventory management that captures the trade-off between the holding costs for inventory and the routing costs for replenishing the inventory at various locations in a supply chain (see, e.g., \cite{Burns1985}, \cite{Coelho2014}, \cite{FZ1984}, \cite{FNR14}). The problem involves multiple item types stocked in a single depot, which must be shipped to meet the demand for these item types arising at multiple retailers over the course of a planning horizon. Similar to the JRP, the costs of holding a unit of inventory at each retailer are specified to compute the inventory holding costs. Unlike the JRP, we consider transportation (or vehicle routing) costs in some metric defined by the depot and retailers, instead of the joint setup costs considered in the JRP. 
\subsection{Main Results and Contributions} We present a unified approach that yields $\mathcal{O}\left(\frac{\log T}{\log\log T}\right)$-approximation algorithms for both the JRP with submodular setup costs and the IRP with any embedding metric, when the holding costs are polynomial functions (which subsumes conventional linear costs as a special case). This is the first sub-logarithmic approximation ratio for either problem under these cost structures. We remark that if the setup cost function in submodular-JRP is time-dependent then the problem (even with zero holding costs) becomes as hard to approximate as set cover~\cite{F98}. The same observation is true if the metric in IRP is time-dependent. So our sub-logarithmic ratio approximation algorithm relies crucially on the uniformity of these costs over time. For the submodular JRP, \citet{CELS2014} obtained constant-factor approximation algorithms under several special submodular functions (i.e., tree, laminar and cardinality). In contrast, we consider general submodular functions with special (polynomial) holding costs. For the IRP, \citet{FNR14} considered a restricted class of ``periodic policies'' and obtained a constant-factor approximation algorithm, whereas our result holds for arbitrary policies and polynomial holding costs. A straightforward modification of our algorithm for polynomial holding costs also yields $\mathcal{O}(\log T)$-approximation algorithms for submodular JRP and IRP with arbitrary (monotone) holding costs. The submodular JRP result improves upon the approximation ratio of $\mathcal{O}(\log(NT))$ by \cite{CELS2014}. The IRP result is incomparable to the $\mathcal{O}(\log n)$ approximation ratio mentioned in~\cite{FNR14}, where $n$ is the number of retailers. \subsection{Our Approach} At a high-level, the algorithm for submodular JRP has the following steps. (The algorithm for IRP is very similar -- we in fact present an algorithm for a unified problem formulation.) 
First, we solve a natural time-indexed LP relaxation that was also used in~\cite{CELS2014}. Then we construct a ``shadow interval'' for each demand point that corresponds to fractionally ordering half a unit of the item. We also stretch each shadow interval appropriately (depending on the degree of the holding cost function) so as to obtain an optimal trade-off between holding and setup costs: this is what results in the $\mathcal{O}\left(\frac{\log T}{\log\log T}\right)$ approximation ratio. Next, we partition these stretched intervals into multiple groups based on well-separated widths. Finally, we place a separate sequence of orders for each group, and argue using submodularity of the setup cost function that the total setup cost of each group is bounded by the LP setup cost. This step relies on the {\em fractional subadditivity} property of submodular functions. It turns out that we do not require the full strength of submodular functions: the algorithm and analysis work even for functions satisfying an approximate notion of fractional subadditivity (see Definition \ref{def_afs}) as long as the natural LP relaxation can be solved approximately. This allows us to also obtain an approximation algorithm for IRP since the TSP cost function satisfies $1.5$-approximate fractional subadditivity and there is a $2+o(1)$ approximation algorithm for its LP relaxation (see Section~\ref{sec_solve} for details). We believe that some of our techniques may be useful in obtaining a constant factor approximation algorithm for both problems in their full generality. \subsection{Literature review} \label{literature} As mentioned earlier, most of the existing literature on deterministic JRP with non-stationary demand uses the additive joint setup cost structure. \citet{arkin1989computational} showed that the additive JRP is NP-hard. \citet{nonner2013efficient} further showed that the additive JRP is in fact APX-hard with a nonlinear holding cost structure. 
There have been several approximation algorithms for the additive JRP (see \cite{CELS2014} and the references therein). The state-of-the-art approximation algorithms for the additive JRP are due to \cite{levi2006primal}, \cite{levi2008constant} and \cite{bienkowski2013better}, with approximation ratios 2, 1.80 and 1.791, respectively. Due to the limited modeling power of the additive JRP, \citet{CELS2014} first studied the submodular JRP in which the setup costs are submodular. They gave an $\mathcal{O}(\log(NT))$-approximation algorithm for the general submodular JRP (where $N$ is the number of items and $T$ is the number of periods). They also analyzed three special cases of submodular functions: the laminar, tree and cardinality cases. They showed that the laminar case can be solved optimally in polynomial time using dynamic programming, and obtained a 3-approximation for the tree case and a 5-approximation for the cardinality case. Our work contributes to the literature by giving approximation algorithms for the general submodular JRP with special holding cost structures. The IRP has also been studied extensively in the literature (see \cite{Burns1985}, \cite{Coelho2014}, \cite{FZ1984}, \cite{FNR14} for an overview of this problem). The problem can be cast as a mathematical program (see, e.g., \cite{Campbell2004}) and solution approaches typically involve heuristics that trade off holding and transportation costs (see \cite{Anily1990,Anily1993}, \cite{Chan1998}, \cite{Chien1989}, \cite{VM1997}, \cite{Bertazzi2008}). Closer to our work, \citet{FNR14} gave constant factor approximation algorithms for the IRP restricting to periodic schedules. In contrast, our results do not require the schedule to be periodic but require polynomial holding costs. \subsection{Structure of this paper and some notations} We organize the remainder of the paper as follows. 
In Section~\ref{sec_model}, we present a unified formulation for the submodular JRP and the IRP with arbitrary embedding metric, and state our main result. In Section~\ref{sec_algorithm}, we propose a unified approximation algorithm for both problems. In Section~\ref{sec_solve}, we discuss how to solve the LP relaxation efficiently. We conclude our paper in Section~\ref{sec_cf}. Throughout the paper, we use the notation $\lfloor x \rfloor$ and $\lceil x \rceil$ frequently, where $\lfloor x \rfloor$ is defined as the largest integer value which is smaller than or equal to $x$, and $\lceil x \rceil$ is defined as the smallest integer value which is greater than or equal to $x$. Additionally, for any real numbers $x$ and $y$, we denote $x^+=\max\{x, 0\}$, $x \vee y=\max\{x,y\}$, and $x \wedge y=\min\{x, y\}$. The notation $:=$ reads ``is defined as''. \section{A Unified Formulation for the JRP and the IRP} \label{sec_model} In this section, we formally describe a unified problem statement that includes two classical deterministic inventory problems as special cases, i.e., the joint replenishment problem (JRP) with submodular setup costs and the inventory routing problem (IRP) with arbitrary embedding metric. We also present a unified framework for this problem, and state our main result. \subsection{Problem Statement} There are $N$ elements (e.g., item types in the JRP or retailers in the IRP) that are needed to serve external demands over a finite planning horizon of $T$ periods; these elements are denoted by the ground set $\mathcal{N}=\{1,\ldots, N\}$, and the time periods are denoted by the set $\mathcal{T}=\{1,\ldots, T\}$. For each time period $t\in \mathcal{T}$ and each element $i \in \mathcal{N}$, there is a known demand of $d_{it} \ge 0$ units of that element. We use $\mathcal{D}$ to denote the set of all strictly positive demand points $(i,t)$ with $d_{it} > 0$. To satisfy these demands, an order may be placed in each time period. 
Each demand point $(i,t) \in \mathcal{D}$ has to be served by an order containing element $i$ before or at time period $t$, i.e., no backlogging or lost-sales are allowed. The inventory system incurs two types of cost -- the joint ordering cost and the holding cost. \begin{itemize} \item The joint ordering cost is a function of the elements that place strictly positive orders in any given period. More specifically, for any time period $t$ and a subset of elements $S \subseteq \mathcal{N}$, the joint ordering cost of ordering demand for elements in $S$ in period $t$ is a function of $S$, which is denoted by $f(S)$. \item Because the setup cost of ordering an element is independent of the number of units ordered, there is an incentive to place large orders to meet the demand not just for the current time period, but for subsequent time periods as well. This is balanced by a cost incurred by holding inventory over time periods. We use $h_{st}^{i}$ to denote the holding cost incurred by ordering one unit of inventory in period $s$, and using it to meet the demand for element $i$ in period $t$. We assume that $h_{st}^{i}$ is non-negative and, for each demand point $(i,t)$, is a nonincreasing function of $s$, i.e., holding inventory longer is never cheaper. Thus, if the demand point $(i,t)$ is served by an order at time period $s$, then the system incurs a holding cost of $H_{st}^{i} := d_{it}h_{st}^{i}$. \end{itemize} The goal is to coordinate a sequence of (joint) orders to satisfy all the demand points on time so as to minimize the sum of joint ordering and holding costs over the $T$ periods. The above unified problem statement encompasses two classical deterministic inventory problems described below. {\bf The submodular JRP.} The JRP involves multiple item types and a single retailer who faces demands. In each time step, any subset of item types can be ordered, incurring a joint ordering cost which is submodular. 
The objective is to find a sequence of orders that satisfies all demands and minimizes the total ordering and holding costs. The elements in the above problem statement are the item types in the JRP; and the joint ordering cost $f(\cdot)$ is commonly referred to as the setup cost (or equivalently, the fixed ordering cost) in the JRP. The submodular JRP considers a special class of $f(\cdot)$ called submodular functions (see, e.g., \cite{CELS2014}). More precisely, we assume that the function $f(\cdot)$ is non-negative, monotone non-decreasing, and also submodular. The non-negativity and monotonicity assert that for every $S_{1} \subseteq S_{2} \subseteq \mathcal{N}$, we have $0 \le f(S_{1}) \le f(S_{2})$. Submodularity requires that for every set $S_{1},S_{2} \subseteq \mathcal{N}$, we have $$ f(S_{1})+f(S_{2}) \ge f(S_{1} \cup S_{2})+f(S_{1} \cap S_{2}). $$ There is an equivalent definition that conveys the economies of scale more clearly. That is, for every set $S_{1} \subseteq S_{2} \subseteq \mathcal{N}$ and any item type $i \in \mathcal{N}$, we have $f(S_{2}\cup \{i\}) - f(S_{2}) \le f(S_{1}\cup \{i\}) - f(S_{1})$, i.e., the additional cost of adding an item type to the joint order is decreasing as more item types have been included in that order. {\bf The IRP with arbitrary embedding metric.} The IRP involves a single depot $r$ that stocks items, and a set of retailer locations (denoted by the ground set $\mathcal{N}$) facing demands. In each time step, any subset of locations can be visited using a vehicle originating from the depot. The objective here is to satisfy all demands while minimizing the sum of routing and holding costs. The elements in the above unified problem statement are the retailers in the IRP; and the joint ordering cost $f(\cdot)$ is the shipping or routing cost. The IRP is specified by a complete graph on vertices $V$ with a metric distance function $w: \binom{V}{2} \rightarrow \mathbb{R}_+$ that satisfies symmetry (i.e. 
$w(ba) =w(ab)$ for any $a,b\in V$) and triangle inequality (i.e. $w(ab)+w(bc) \ge w(ac)$ for any $a,b,c \in V$). The vertex set is $V=\mathcal{N}\cup \{r\}$, containing the depot and the set of retailers. The shipping or routing cost $f(S)$ can be defined as the travelling salesman (TSP) cost of visiting the retailers in $S \subseteq \mathcal{N}$. Formally, \begin{equation} \label{def:tsp} f(S)\quad := \quad \mbox{minimum length of tour that visits each vertex in }S\cup\{r\},\qquad \forall S\subseteq \mathcal{N}. \end{equation} \subsection{IP Formulation and its LP Relaxation} The unified problem described above can be written as an integer programming problem as follows (see also \cite{CELS2014}). First we define two types of binary variables $y_{s}^{S}$ and $x_{st}^{i}$ such that \begin{eqnarray*} y_{s}^{S} &=& \begin{cases} 1, & \text{if the subset of elements } S \subseteq \mathcal{N} \text{ is ordered in period } s,\\ 0, & \text{otherwise} . \end{cases} \\ x_{st}^{i} &=& \begin{cases} 1, & \text{if the demand point } (i,t) \text{ is satisfied using an order from period } s,\\ 0, & \text{otherwise} . \end{cases} \end{eqnarray*} Then the integer programming (IP) formulation is given by \begin{eqnarray} \label{IP} \text{\bf (IP)} \qquad \text{min} && \sum_{S\subseteq \mathcal{N}} \sum_{s=1}^{T} f(S)y_{s}^{S} + \sum_{(i,t)\in \mathcal{D}} \sum_{s=1}^{t} H_{st}^{i}x_{st}^{i} \\ \text{s.t.} && \sum_{s=1}^{t} x_{st}^{i}=1, \qquad \forall (i,t) \in \mathcal{D}\nonumber \\ && x_{st}^{i} \le \sum_{S: i\in S \subseteq \mathcal{N} }y_{s}^{S}, \qquad \forall (i,t) \in \mathcal{D}, \forall s =1,\ldots, t \nonumber \\ && x_{st}^{i}, y_{s}^{S} \in \{0,1\}, \qquad \forall (i,t) \in \mathcal{D}, \forall s =1,\ldots, t, \forall S \subseteq \mathcal{N}. \nonumber \end{eqnarray} The first constraint in (\ref{IP}) enforces that every demand point $(i,t)$ must be served by an order before or in time period $t$. 
The second constraint in (\ref{IP}) ensures that the joint order $S$ has to contain element $i$ if any demand $(i,t)$ is served at time period $s$. There is a natural linear programming (LP) relaxation of (IP) that relaxes the integer constraints on $x_{st}^{i}$ and $y_{s}^{S}$ to non-negativity constraints. To obtain approximation algorithms for the IP (\ref{IP}) using our framework, we only need to assume that the set function $f(\cdot)$ satisfies an approximate notion of fractional subadditivity (which is much weaker than submodularity). \begin{definition}[$\beta$-approximate fractional subadditivity] \label{def_afs} The set function $f(\cdot)$ is $\beta$-approximate fractional subadditive, if for any $S \subseteq \mathcal{N}$ and any collection $\{S_i, \lambda_i\}$ of weighted subsets with $0\le \lambda_{i} \le 1$ and $\sum_{i|v \in S_{i}} \lambda_{i}\ge 1$ for each $v\in S$, we have $f(S) \le \beta \cdot \sum \lambda_{i} f(S_{i})$. Namely, if the sets $S_{i}$ form a \emph{fractional cover} of $S\subseteq {\cal N}$, then the cost of $S$ is at most $\beta$ times the sum of the costs $f(S_i)$ weighted by the corresponding coefficients. \end{definition} It is known that if a function is submodular, then it is also fractional subadditive (see \cite{Feige2006}), i.e., the notion of submodularity is stronger. For the submodular JRP, the setup cost function $f(S)$ is submodular and hence also fractional subadditive (or equivalently, $1$-approximate fractional subadditive). For the IRP with arbitrary embedding metric, the vehicle routing cost $f(S)$, although not submodular, can be shown to be $1.5$-approximate fractional subadditive. This follows from the fact that the natural LP relaxation for TSP has an integrality gap of at most $1.5$ (see \cite{Wolsey1980} and \cite{Shmoys1990281}). Note that the LP relaxation of (\ref{IP}) has an exponential number of variables; we need to ensure that this LP relaxation can be (at least approximately) solved efficiently. 
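The properties above can be checked mechanically on tiny instances. The sketch below (a toy coverage-plus-fixed-charge setup cost with hypothetical numbers, not an instance from this paper) verifies monotonicity and submodularity by brute force, and checks the fractional-cover inequality of Definition \ref{def_afs} with $\beta=1$ for one particular fractional cover:

```python
from itertools import combinations

# Brute-force sanity checks on a toy setup cost: a fixed charge plus a
# weighted-coverage term (illustrative numbers). Such functions are
# monotone and submodular, hence 1-approximately fractionally subadditive.
K = 5
features = {1: {'a'}, 2: {'a', 'b'}, 3: {'c'}}     # item type -> features
weight = {'a': 2, 'b': 4, 'c': 3}
N = list(features)

def f(S):
    S = set(S)
    if not S:
        return 0
    used = set().union(*(features[i] for i in S))
    return K + sum(weight[x] for x in used)

def subsets(ground):
    for r in range(len(ground) + 1):
        yield from (set(c) for c in combinations(ground, r))

# Submodularity: f(S1) + f(S2) >= f(S1 | S2) + f(S1 & S2) for all S1, S2.
submodular = all(f(S1) + f(S2) >= f(S1 | S2) + f(S1 & S2)
                 for S1 in subsets(N) for S2 in subsets(N))

# Monotonicity: f(S1) <= f(S2) whenever S1 is a subset of S2.
monotone = all(f(S1) <= f(S2)
               for S1 in subsets(N) for S2 in subsets(N) if S1 <= S2)

# One fractional cover of S = {1, 2, 3}: every element covered to total
# weight >= 1, so f(S) <= sum(lambda_i * f(S_i)) must hold with beta = 1.
cover = [({1, 2}, 0.5), ({2, 3}, 0.5), ({1, 3}, 0.5)]
frac_subadd = f({1, 2, 3}) <= sum(lam * f(Si) for Si, lam in cover)
```

Here $f(\{1,2,3\}) = 14$ while the fractional cover pays $(11 + 14 + 10)/2 = 17.5$, so the inequality holds with room to spare on this instance.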
\begin{definition}[$\gamma$-approximate LP solution] \label{def_alp} We say that a feasible LP solution is $\gamma$-approximate, if its objective value is at most $\gamma$ times the optimal LP objective value. \end{definition} Using the ellipsoid method, one can efficiently compute an exact LP solution for the submodular JRP and a $(2+ o(1))$-approximate LP solution for the IRP. We defer this discussion to Section \ref{sec_solve} for readability. \subsection{Our Main Results} \begin{assumption}[$\alpha$-degree polynomial holding cost] \label{assump1} For each element $i \in \mathcal{N}$ and $1\le s \le t \in \mathcal{T}$, the holding cost of holding an inventory unit of element $i$ from period $s$ to $t$ is $$ h_{st}^{i} =(t-s)^{\alpha}\bar{h}^{i}_t, $$ for some base per-unit holding cost $\bar{h}^{i}_t > 0$ and some $\alpha \ge 1$. \end{assumption} Note that when $\alpha=1$, this reduces to the conventional linear holding cost. We also have $H_{st}^{i} =d_{t}^{i}(t-s)^{\alpha}\bar{h}^{i}_t$. Now we are in a position to formally state our main result (which will be proved in the following section). \begin{theorem} \label{mainresult} Under Assumption \ref{assump1}, there is an $\mathcal{O}\left(\alpha \beta \gamma \cdot \frac{\log T}{\log \log T}\right)$-approximation algorithm for the integer program defined in (\ref{IP}), provided that $f(\cdot)$ is $\beta$-approximate fractional subadditive, and a $\gamma$-approximate solution to the LP relaxation of \eqref{IP} can be found in polynomial time. \end{theorem} Corollary \ref{mainresult2} below is an immediate consequence of Theorem \ref{mainresult}, since \begin{enumerate} \item $\beta=\gamma=1$ for the submodular JRP; \item $\beta=1.5$ and $\gamma=2+o(1)$ for the IRP. 
\end{enumerate} \begin{corollary} \label{mainresult2} Under Assumption \ref{assump1}, there is an $\mathcal{O}\left(\frac{\log T}{\log \log T}\right)$-approximation algorithm for both the submodular JRP and the IRP with arbitrary embedding metric. \end{corollary} To the best of our knowledge, this is the first sub-logarithmic approximation ratio for either problem. We remark that it is immediate that our approach yields $\mathcal{O}(\log T)$-approximation algorithms for submodular JRP and IRP with arbitrary (monotone) holding costs (i.e., waiving Assumption \ref{assump1}). \section{LP-Rounding Algorithm} \label{sec_algorithm} We present an LP-rounding algorithm for the integer program (\ref{IP}) under Assumption \ref{assump1} in Section \ref{subsec_algo}, and then carry out a worst-case performance analysis in Section \ref{subsec_analysis}. \subsection{Algorithm Description} \label{subsec_algo} We describe our procedure of rounding a $\gamma$-approximate solution $(y,x)$ of (LP). We set $\rho := \lfloor (\log T)^{1/(2\alpha)} \rfloor $. \paragraph{\bf Step 1 -- Constructing extended shadow intervals.} We first construct what we call \emph{extended shadow intervals} as follows. For each demand point $(i,t)$, we take the $\gamma$-approximate LP solution and find a time period $s^{\prime}_{(i,t)}$ such that $$ \sum_{s=s^{\prime}_{(i,t)}}^{t}x_{st}^{i} \ge 1/2 \text{ and } \sum_{s=s^{\prime}_{(i,t)}+1}^{t}x_{st}^{i} < 1/2, $$ i.e., we find the closest $s$ to the left of $t$ such that the $x$ variables for $(i,t)$ accumulate at least half a unit of order mass. Then $[s^{\prime}_{(i,t)},t]$ is called the \emph{shadow interval} for this particular demand point $(i,t)$. We also measure its length $l_{(i,t)}: = t-s^{\prime}_{(i,t)}$. Next, for each demand point $(i,t)$, we round the length $l_{(i,t)}$ up to the nearest power of $\rho$. If $s^{\prime}_{(i,t)}=t$ then we set $s^{*}_{(i,t)}=t$. 
Else we find the smallest integer $m\ge 1$ such that $\rho^{m} \ge t-s^{\prime}_{(i,t)},$ and then stretch the original shadow interval from $t$ to $s^{*}_{(i,t)}$ where $$ s^{*}_{(i,t)} = t - \rho^{m} \le s^{\prime}_{(i,t)}. $$ We call the interval $[s^{*}_{(i,t)},t]$ the \emph{extended shadow interval} for the demand point $(i,t)$, and also measure its length $l^{*}_{(i,t)} := t-s^{*}_{(i,t)}$. Figure \ref{fig_si} below gives a graphical representation of this step. \begin{figure}[htbp] \begin{center} \includegraphics[width=12cm]{fig3} \caption{Illustration of an extended shadow interval for demand point $(i,t)$.} \label{fig_si} \end{center} \end{figure} \paragraph{\bf Step 2 -- Partitioning demand points according to extended shadow intervals.} Next we partition the demand points according to the length of their extended shadow intervals. For each demand point $(i,t)$, its length $l^{*}_{(i,t)}$ falls into exactly one of the values below (recall by construction $l^{*}_{(i,t)}$ is either zero or an integer power of $\rho$). $$ \{0, \rho^{1}, \rho^{2}, \ldots , \rho^{k-2} , \rho^{k-1} \wedge T \}, \text{ where } k= 1+\left\lceil \log_{\rho} T \right\rceil. $$ In this way, we have partitioned the demand points into $k = \mathcal{O}\left(\alpha \frac{\log T}{\log \log T}\right)$ groups as follows, \begin{eqnarray*} \mathcal{L}_{0} = \left\{(i,t) \in \mathcal{D}: l^*_{(i,t)} = 0 \right\} \quad \mbox{and}\quad \mathcal{L}_{m} = \left\{(i,t) \in \mathcal{D}: l^*_{(i,t)} = \rho^{m} \wedge T \right\}, \,\, \forall \; m \in \{1,\ldots,k-1\}. \end{eqnarray*} That is, the extended shadow intervals within each group $\mathcal{L}_{m}$ share the same length: $$w_m := \left\{ \begin{array}{ll} 0 & \mbox{ if } m=0\\ \rho^m \wedge T & \mbox{ if } m \in \{1,2,\ldots,k-1\} \end{array} \right.$$ \paragraph{\bf Step 3 -- Placing orders.} Based on the above partition of demand points, we describe our ordering procedure. 
Now fix an $m \in \{0,1,\ldots,k-1\}$ and focus on the demand group $\mathcal{L}_{m}$. Let $\tau_{j} = 1 + j\cdot w_m$ $(\le T)$ for $j=0,1,\ldots$. We place a tentative (joint) order in each period $\tau_{j}$ $(j=0,1, \ldots)$, i.e., once every $w_m$ periods (for $m=0$, where $w_0=0$, this means an order in every period). In each period $\tau_{j} \le T$ $(j=0,1,2 \ldots)$, we identify the set of elements $$ A_{m}^{j} = \left\{i: (i,t)\in \mathcal{L}_{m} \text{ and } \tau_{j} \in [s^{*}_{(i,t)},t] \right\}, $$ i.e., all the elements within $\mathcal{L}_{m}$ whose shadow intervals contain (or intersect with) time period $\tau_{j}$. We then place an actual joint order that serves the demand points associated with $A_{m}^{j}$ in period $\tau_{j}$. Figure \ref{fig2} gives one specific example of how the algorithm places these orders. We repeat the above procedure for all groups $m=0,1, \ldots, k-1$. 
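As an illustration of Step 3 on toy data (the demand points and $w_m$ below are hypothetical, not from the paper): each demand point of a group $\mathcal{L}_m$ is listed with its due date $t$ and the start $s^*$ of its extended shadow interval, tentative orders are placed every $w_m$ periods, and each order serves all intervals that contain it.

```python
# Step 3 for one group L_m: place tentative orders every w_m periods and,
# at each order time tau_j, serve every demand point whose extended shadow
# interval [s*, t] contains tau_j. Data are illustrative; each tuple is
# (element i, due date t, interval start s_star), with t - s_star = w_m.

T, w_m = 12, 4
group = [('i1', 5, 1), ('i2', 7, 3), ('i3', 11, 7), ('i1', 12, 8)]

orders = {}                       # tau_j -> set of elements ordered
for tau in range(1, T + 1, w_m):  # tau_j = 1 + j * w_m
    A = {i for (i, t, s_star) in group if s_star <= tau <= t}
    if A:
        orders[tau] = A
```

Since each extended interval in $\mathcal{L}_m$ spans $w_m+1$ consecutive periods and orders are placed every $w_m$ periods, every interval is guaranteed to contain at least one order time, which is exactly why the placement rule is feasible.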
In any given period, if there is more than one joint order (across different groups), we simply merge them into a single joint order. \begin{figure}[htbp] \begin{center} \includegraphics[scale=0.65]{fig2} \caption{Placing actual orders for the demand points within $\mathcal{L}_{m}$} \label{fig2} \end{center} \end{figure} This concludes the description of our LP-rounding algorithm. \subsection{Worst-case Analysis} \label{subsec_analysis} We shall prove that our LP-rounding algorithm gives an $\mathcal{O}\left(\alpha \beta \gamma \frac{\log T}{\log \log T}\right)$-approximation for the unified problem. For brevity, we just call the $\gamma$-approximate solution $(y,x)$ to the LP relaxation of (\ref{IP}) \emph{the $\gamma$-approximate LP solution}. \subsubsection*{Analysis of Holding Cost} \begin{lemma} \label{lemma_holding_1} Let $[s^{\prime}_{(i,t)},t]$ be the shadow interval for some demand point $(i,t)$. Then we have \begin{eqnarray} H_{s^{\prime}_{(i,t)}t}^{i} \le 2\cdot \sum_{s=1}^{t} H_{st}^{i}x_{st}^{i}, \end{eqnarray} where $x_{st}^{i}$'s are in the $\gamma$-approximate LP solution. \end{lemma} \begin{proof} By the construction of shadow intervals, we have $\sum_{s=s^{\prime}_{(i,t)}+1}^{t}x_{st}^{i} < 1/2$. Now since $\sum_{s=1}^{t}x_{st}^{i}=1$ by the first constraint in (\ref{IP}), we must have $\sum_{s=1}^{s^{\prime}_{(i,t)}} x_{st}^{i} \ge 1/2$. Hence we have \begin{eqnarray} H_{s^{\prime}_{(i,t)}t}^{i} \le 2 \cdot \sum_{s=1}^{s^{\prime}_{(i,t)}} H_{s^{\prime}_{(i,t)}t}^{i}x_{st}^{i} \le 2 \cdot \sum_{s=1}^{s^{\prime}_{(i,t)}} H_{st}^{i}x_{st}^{i} \le 2 \cdot \left(\sum_{s=1}^{s^{\prime}_{(i,t)}} H_{st}^{i}x_{st}^{i} + \sum_{s=s^{\prime}_{(i,t)}+1}^{t} H_{st}^{i}x_{st}^{i} \right) = 2\cdot \sum_{s=1}^{t} H_{st}^{i}x_{st}^{i}, \end{eqnarray} where the second inequality is due to $H_{s^{\prime}_{(i,t)}t} \le H_{st}$ for all $s\le s^{\prime}_{(i,t)}$ by monotonicity of holding costs. 
\end{proof} \begin{lemma} \label{lemma_holding_poly} The total holding cost for the solution found by the LP-rounding algorithm is at most $$ \mathcal{O}( \sqrt{\log T}) \cdot \sum_{(i,t)\in \mathcal{D}} \sum_{s=1}^{t} H_{st}^{i}x_{st}^{i}, $$ where $x_{st}^{i}$'s are in the $\gamma$-approximate LP solution. \end{lemma} \begin{proof} By the polynomial holding cost structure, for each demand point $(i,t)$ in some $\mathcal{L}_{m}$, we have \begin{eqnarray*} H_{s^{*}_{(i,t)}t}^{i} &=& d_{t}^{i}\, \bar{h}^{i}\, (t-s^{*}_{(i,t)})^{\alpha} = d_{t}^{i}\, \bar{h}^{i}\, (w_m)^{\alpha}, \\ H_{s^{\prime}_{(i,t)}t}^{i} &=& d_{t}^{i}\, \bar{h}^{i}\, (t- s^{\prime}_{(i,t)})^{\alpha} \ge d_{t}^{i}\, \bar{h}^{i}\, (w_m/\rho)^{\alpha}, \end{eqnarray*} where the inequality follows from the construction of extended shadow intervals. Hence it is clear that $$H_{s^{*}_{(i,t)}t}^{i} \le \rho^{\alpha} H_{s^{\prime}_{(i,t)}t}^{i}.$$ By the LP-rounding algorithm, for each demand point $(i,t)$, we must have placed a (joint) order containing element $i$ inside its extended shadow interval $[s^{*}_{(i,t)},t]$. Due to monotonicity of holding costs, the worst case (the one that gives the highest possible holding cost) occurs when our algorithm places the order at exactly time period $s^{*}_{(i,t)}$ to satisfy the demand point $(i,t)$. Hence, the total holding cost associated with the demand point $(i,t)$ is upper bounded by \begin{eqnarray*} H_{s^{*}_{(i,t)}t}^{i} \le \rho^{\alpha} H_{s^{\prime}_{(i,t)}t}^{i} \le 2\rho^{\alpha} \cdot \sum_{s=1}^{t} H_{st}^{i}x_{st}^{i}, \end{eqnarray*} where the second inequality follows from Lemma \ref{lemma_holding_1}. Now setting $\rho = \lfloor (\log T)^{1/(2\alpha)} \rfloor$, so that $\rho^{\alpha} \le \sqrt{\log T}$, yields the result. \end{proof} The intuition behind Lemma \ref{lemma_holding_poly} is that when we stretch the shadow interval to the nearest integer power of $\rho$, the holding cost within the extended shadow interval does not grow too large due to the polynomial holding cost structure.
In particular, it grows by at most a factor of $\mathcal{O}(\sqrt{\log T})$. On the other hand, stretching the shadow intervals in this manner gives us a tighter bound on the ordering cost (as shown below). \subsubsection*{Analysis of Ordering Cost} To analyze the ordering cost component, we introduce the following bridging problem: \begin{eqnarray} \label{Covering} \text{\bf (Covering-LP)} \qquad \text{min} && \sum_{S\subseteq \mathcal{N}} \sum_{s=1}^{T} f(S)z_{s}^{S} \\ \text{s.t.} && \sum_{s=s^{*}_{(i,t)}}^{t} \sum_{S: i\in S \subseteq \mathcal{N} } z_{s}^{S} \ge 1, \qquad \forall (i,t) \in \mathcal{D} \nonumber \\ && z_{s}^{S} \ge 0, \qquad \forall s =1,\ldots, T, \forall S \subseteq \mathcal{N}. \nonumber \end{eqnarray} The intuition behind introducing this bridging problem is as follows: if our algorithm places an order to satisfy each demand within its extended shadow interval, then Lemma \ref{lemma_holding_poly} implies that the holding cost can be bounded by $\mathcal{O}(\sqrt{\log T})$ times the LP holding cost. Thus, the problem reduces to finding a ``cover'' for these intervals as defined in Problem (\ref{Covering}). In the remainder of the worst-case analysis, we will focus on analyzing this Covering-LP. \begin{lemma} \label{lemma_clp1} The optimal objective value of the Covering-LP is at most $$ 2\cdot \sum_{S\subseteq \mathcal{N}} \sum_{s=1}^{T} f(S)y_{s}^{S}, $$ where $y_{s}^{S}$'s are in the $\gamma$-approximate LP solution. \end{lemma} \begin{proof} We first check that $\bar{z}_{s}^{S} = 2y_{s}^{S}$ (where $y_{s}^{S}$ is the $\gamma$-approximate LP solution) is feasible to the Covering-LP defined in (\ref{Covering}). It is obvious that $\bar{z}_{s}^{S} = 2y_{s}^{S} \ge 0$ for all $s =1,\ldots, T$ and for all $S \subseteq \mathcal{N}$. It suffices to verify the first set of constraints.
Indeed, for each $(i,t) \in \mathcal{D}$, we have \begin{equation} \sum_{s=s^{*}_{(i,t)}}^{t} \sum_{S: i\in S \subseteq \mathcal{N} } \bar{z}_{s}^{S}=\sum_{s=s^{*}_{(i,t)}}^{t} \sum_{S: i\in S \subseteq \mathcal{N} } 2y_{s}^{S} \ge \sum_{s=s^{*}_{(i,t)}}^{t} 2x_{st}^{i} \ge 1, \end{equation} where the first inequality follows from the second constraint in (\ref{IP}), and the second inequality follows from the fact that $\sum_{s=s^{*}_{(i,t)}}^{t} x_{st}^{i} \ge \sum_{s=s^{\prime}_{(i,t)}}^{t} x_{st}^{i} \ge 1/2$ (by the definition of shadow intervals and their extensions). Hence, the optimal objective value of the Covering-LP satisfies \begin{equation} \sum_{S\subseteq \mathcal{N}} \sum_{s=1}^{T} f(S)z_{s}^{S} \le \sum_{S\subseteq \mathcal{N}} \sum_{s=1}^{T} f(S)\bar{z}_{s}^{S} = 2\sum_{S\subseteq \mathcal{N}} \sum_{s=1}^{T} f(S)y_{s}^{S}, \end{equation} where the inequality holds because $z_{s}^{S}$ is optimal while $\bar{z}_{s}^{S}$ is merely feasible. \end{proof} Fix an $m\in \{0,1,\ldots, k-1\}$ and focus on the demand group $\mathcal{L}_{m}$ (whose extended shadow intervals all have length $w_m$). We shall show that the total ordering cost associated with the set $\mathcal{L}_{m}$ by our LP-rounding algorithm can be upper bounded by $2\beta$ times the Covering-LP cost. Our proof strategy relies on the notion of approximate fractional subadditivity (see Definition \ref{def_afs}). \begin{lemma} \label{lemma_clp2} The total ordering cost associated with the set $\mathcal{L}_{m}$ by our LP-rounding algorithm is at most $$ 2 \beta \cdot \sum_{S\subseteq \mathcal{N}} \sum_{s=1}^{T} f(S)z_{s}^{S}, $$ where $z_{s}^{S}$'s form the optimal Covering-LP solution. \end{lemma} \begin{proof} Recall that for each demand group $\mathcal{L}_{m}$, the LP-rounding algorithm places a tentative (joint) order in each period $\tau_{j} \le T$ $(j=0,1,\ldots)$.
Then in each time period $\tau_{j}$ the algorithm identifies the elements $A^j_m$ within $\mathcal{L}_{m}$ whose extended shadow intervals contain (or intersect with) $\tau_{j}$ and places an actual (joint) order $A_{m}^{j}$ that includes all of these ``intersecting" elements. Now take any $\tau_{j} \le T$: since the length of the extended shadow intervals in $\mathcal{L}_{m}$ is exactly $w_m$, all the extended shadow intervals associated with the order $A_{m}^{j}$ must lie within the interval $(\tau_{j-1}, \tau_{j+1}]$ (see Figure \ref{fig2} as an example). Our LP-rounding algorithm places an actual (joint) order $A_{m}^{j}$ in period $\tau_{j}$ and incurs an ordering cost $f(A_{m}^{j})$. We will show that the Covering-LP provides us with a fractional cover of $A_{m}^{j}$ which will be used to upper bound $f(A_{m}^{j})$. Indeed, for each demand point $(i,t)$ associated with the order $A_{m}^{j}$, we have \begin{eqnarray} \sum_{S:i\in S \subseteq \mathcal{N}} \sum_{s>\tau_{j-1}}^{\tau_{j+1}} z_{s}^{S} = \sum_{s>\tau_{j-1}}^{\tau_{j+1}} \sum_{S:i\in S \subseteq \mathcal{N}} z_{s}^{S} \ge \sum_{s=s^{*}_{(i,t)}}^{t} \sum_{S:i\in S \subseteq \mathcal{N}} z_{s}^{S} \ge 1, \end{eqnarray} where the first inequality holds because every extended shadow interval associated with the order $A_{m}^{j}$ must lie within the interval $(\tau_{j-1}, \tau_{j+1}]$, and the last inequality follows from the first constraint in the Covering-LP (\ref{Covering}). Since $f(\cdot)$ is $\beta$-approximate fractional subadditive, according to Definition \ref{def_afs}, \begin{eqnarray} f(A_{m}^{j}) \le \beta \cdot \sum_{S: S \subseteq \mathcal{N}} \sum_{s>\tau_{j-1}}^{\tau_{j+1}} z_{s}^{S} f(S).
\end{eqnarray} It is then immediate that the ordering cost associated with the set $\mathcal{L}_{m}$ by our LP-rounding algorithm satisfies \begin{eqnarray} \sum_{j\ge 0}f(A_{m}^{j}) \,\, \le \,\, \beta \cdot \sum_{S: S \subseteq \mathcal{N}} \sum_{j\ge 0} \sum_{s>\tau_{j-1}}^{\tau_{j+1}} z_{s}^{S} f(S) \,\, \le \,\, 2 \beta \cdot \sum_{S: S \subseteq \mathcal{N}} \sum_{s=1}^{T} z_{s}^{S} f(S), \end{eqnarray} where the last inequality holds because each period $s$ belongs to the interval $(\tau_{j-1},\tau_{j+1}]$ for at most two indices $j$. \end{proof} \begin{lemma} \label{lemma_clp3} The total ordering cost for the solution by our LP-rounding algorithm is at most $$ \mathcal{O}\left(\alpha \beta \frac{\log T}{\log \log T}\right) \cdot \sum_{S\subseteq \mathcal{N}} \sum_{s=1}^{T} f(S)y_{s}^{S}, $$ where $y_{s}^{S}$'s are in the $\gamma$-approximate LP solution. \end{lemma} \begin{proof} By Lemmas \ref{lemma_clp1} and \ref{lemma_clp2}, for each group $\mathcal{L}_{m}$ $(m=0,1, \ldots,k-1)$, we conclude that the total ordering cost associated with the set $\mathcal{L}_{m}$ in our LP-rounding algorithm is at most $$ 2\beta \cdot \sum_{S\subseteq \mathcal{N}} \sum_{s=1}^{T} f(S)z_{s}^{S} \le 4 \beta \cdot \sum_{S\subseteq \mathcal{N}} \sum_{s=1}^{T} f(S)y_{s}^{S}, $$ where $z_{s}^{S}$'s are the optimal Covering-LP solution and $y_{s}^{S}$'s are the $\gamma$-approximate LP solution. Then the result follows from the fact that the number of groups is $k = \mathcal{O}\left( \alpha \frac{\log T}{\log \log T}\right)$. \end{proof} Now we are ready to prove our main result, Theorem \ref{mainresult}. \noindent{\bf Proof of Theorem \ref{mainresult}:} Combining the results from Lemmas \ref{lemma_holding_poly} and \ref{lemma_clp3}, the total holding and ordering costs for the solution found by our LP-rounding algorithm are at most $\mathcal{O}\left(\alpha \beta \frac{\log T}{\log \log T}\right)$ times the cost of the $\gamma$-approximate LP solution, which in turn is at most $\gamma$ times the optimal LP value, and hence at most $\gamma$ times the optimal cost.
\hfill$\blacksquare$ \noindent {\bf Remark:} The $\mathcal{O}\left(\frac{\log T}{\log \log T}\right)$ approximation ratio is the best tradeoff achievable (in our approach) between the loss in holding and ordering costs, even under linear holding costs. Recall that for a given set $W$ of widths for extended shadow intervals, the loss in ordering cost is just the number $|W|$ of distinct widths, and the loss in holding cost depends on the aggregate stretch-factor incurred when the width of each shadow interval is increased to a value in $W$. Even if we allow for an arbitrary set $W$ of widths (that may depend on the LP solution) and compute the worst ratio (using a ``factor revealing linear program'' as in~\cite{JMMSV03}), then we obtain $\mathcal{O}\left(\frac{\log T}{\log \log T}\right)$ as the approximation ratio. \subsubsection*{A Special Case with Perishable Goods} We now consider a special holding cost which models perishable items with a fixed life-time $c>0$. For each demand point $(i,t)$, we can only start satisfying this demand $c$ periods before $t$, i.e., the ordering window is $[t-c, t]$. This setting is equivalent to the following holding cost structure. For each $i \in \mathcal{N}$ and $1\le s \le t \in \mathcal{T}$, \begin{equation*} h_{st}^{i} = \begin{cases} 0 & \text{if } t-c \le s \le t ,\\ \infty & \text{if } s < t-c. \end{cases} \end{equation*} We also have $H_{st}^{i} =d_{t}^{i}h_{st}^{i}$. In this setting, for each demand point $(i,t)$, the extended shadow interval is simply $[t-c, t]$ with length $c$. Hence our LP-rounding algorithm and its worst-case analysis apply with just a single group, and we obtain: \begin{theorem} \label{thm_perishable} When items are perishable with a fixed life-time and the holding cost is negligible, the LP-rounding algorithm gives a $2$-approximation for the submodular JRP, and a $(6+o(1))$-approximation for the IRP.
\end{theorem} \section{Solving the LP Relaxation} \label{sec_solve} As mentioned earlier in Section \ref{sec_model}, the LP relaxation has an exponential number of variables, and we first argue that there is an efficient way of solving this LP. We can readily write the dual of (LP) as \begin{eqnarray} \label{Dual-LP} \text{\bf (DLP)} \qquad \text{max} && \sum_{(i,t) \in \mathcal{D}} b_{t}^{i} \\ \text{s.t.} && b_{t}^{i} \le H_{st}^{i} + \bar{b}_{st}^{i}, \qquad \forall (i,t) \in \mathcal{D}, \forall s =1,\ldots, t \nonumber \\ && f(S) - \sum_{i \in S}\sum_{t=s}^{T} \bar{b}_{st}^{i} \ge 0, \qquad \forall s =1,\ldots, T, \forall S \subseteq \{1,\ldots, N\} \nonumber \\ && \bar{b}_{st}^{i} \ge 0, \qquad \forall (i,t) \in \mathcal{D}, \forall s =1,\ldots, t. \nonumber \end{eqnarray} Here $b_{t}^{i}$ and $\bar{b}_{st}^{i}$ are the dual variables corresponding to the first and second sets of constraints in the LP relaxation of (\ref{IP}). Note that the dual formulation (\ref{Dual-LP}) has an exponential number of constraints. \subsubsection*{Submodular JRP} In the submodular JRP, the left-hand side of the second constraint, $f(S) - \sum_{i \in S}\sum_{t=s}^{T} \bar{b}_{st}^{i}$, is clearly submodular in $S$. Thus, there is an efficient separation oracle by using submodular function minimization~\cite{S-book} to find violated constraints. This implies that the dual problem (and therefore the primal) can be efficiently solved using the ellipsoid method. This was also discussed in \cite{CELS2014}. \subsubsection*{Approximately Solving the LP for IRP} The TSP costs $f(\cdot)$ are not submodular, and in fact the above separation problem is NP-hard. However, there is an approximate separation oracle (see Lemma \ref{oracle}) which suffices to compute an approximately optimal solution to (DLP).
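To make the separation task concrete, the following brute-force routine checks the second family of (DLP) constraints for a fixed $s$ by enumerating all subsets. It is an illustration only (exponential in $N$, with hypothetical inputs and 0-indexed periods); the text replaces this enumeration by submodular function minimization for the JRP and by an approximate minimum-ratio TSP oracle for the IRP.

```python
from itertools import combinations

def violated_set_constraint(f, bbar, N, s, T):
    """Return a set S with f(S) < sum_{i in S} sum_{t=s..T} bbar[i][t]
    (a violated (DLP) constraint at period s), or None if no subset
    violates the constraint.  `f` maps a set of elements to its ordering
    cost; `bbar[i][t]` are candidate dual values."""
    for r in range(1, N + 1):
        for S in combinations(range(N), r):
            load = sum(bbar[i][t] for i in S for t in range(s, T + 1))
            if f(set(S)) < load:
                return set(S)
    return None
```

A separation oracle must answer exactly this question for every $s$; the ellipsoid method only needs such an oracle, never an explicit list of the exponentially many constraints.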
The main ingredient is an approximation algorithm for the following auxiliary problem: \begin{definition}[Minimum ratio TSP] The input is a metric $(V,w)$ with a designated depot $r\in V$ and rewards $a:V\rightarrow \mathbb{R}_+$ on vertices. The goal is to find a subset $S\subseteq V$ that minimizes: $$\frac{f(S)}{a(S)}, \qquad \mbox{where $f(S)$ is the TSP cost as defined in~\eqref{def:tsp} and } a(S)=\sum_{i\in S} a_i.$$ \end{definition} \begin{theorem}[Garg \cite{Garg2005}]\label{thm:rTSP} There is a $(2+o(1))$-approximation algorithm for the minimum ratio TSP problem. \end{theorem} \begin{proof} The algorithm for minimum ratio TSP uses the $2$-approximation algorithm for the related $k$-TSP problem (i.e., given a metric, depot $r$ and target $k$, find a minimum-length tour from the depot that visits at least $k$ vertices). The algorithm, which is based on standard scaling arguments, is given below for completeness: \begin{enumerate} \item Guess (by enumerating over $|V|$ choices) the maximum reward vertex $u$ in an optimal solution. \item Remove all vertices with reward more than $a_u$. \item For each $v\in V$ set its new reward $\bar{a}_v$ to be the largest integer such that $\bar{a}_v\cdot \frac{a_u}{n^2}\le a_v$. \item For each $k=1,\ldots, n^3$, run the $k$-TSP algorithm with target $k$ on the modified metric containing $\bar{a}_v$ co-located vertices at each $v\in V$. \item Output the best ratio solution found (over all choices of $u$ and $k$). \end{enumerate} It is easy to see that this algorithm runs in polynomial time since each $\bar{a}_v \le n^2$. We now show that it has an approximation ratio of $\gamma=2+o(1)$. Let $S^*$ denote an optimal solution and $u\in S^*$ the maximum reward vertex. Consider the run of the above algorithm for this choice of $u$: note that none of the vertices from $S^*$ is removed. By the definition of new rewards, we have $a_v -\frac{a_u}{n^2} < \bar{a}_v\cdot \frac{a_u}{n^2}\le a_v$ for all $v\in S^*$.
So $\frac{a_u}{n^2}\sum_{v\in S^*} \bar{a}_v > a(S^*) - \frac{a_u}{n}\ge (1-1/n)a(S^*)$, which implies (as $\bar{a}$ is integer valued) that $\bar{a}(S^*)\ge k := \lceil \frac{n^2}{a_u}(1-\frac1n)a(S^*)\rceil$. For this choice of $k$, the $k$-TSP algorithm is guaranteed to find a subset $S\subseteq V$ with $\bar{a}(S)\ge k$ and $f(S)\le 2\cdot f(S^*)$. The ratio of this solution is: $$\frac{f(S)}{a(S)} \le \frac{2f(S^*)}{a(S)} \le \frac{n^2}{a_u}\cdot \frac{2f(S^*)}{\bar{a}(S)} \le \frac{2}{1-1/n}\cdot \frac{f(S^*)}{a(S^*)}.$$ Hence the above algorithm achieves a $2+o(1)$ approximation guarantee for minimum ratio TSP. \end{proof} \def\bbar{\mathbf{\bar{b}}} Let $\gamma=2+o(1)$ denote the approximation guarantee for minimum ratio TSP from Theorem~\ref{thm:rTSP}. We now show that this algorithm leads to an approximate separation oracle. \begin{lemma} \label{oracle} There is a polynomial time algorithm, that given vectors $\mathbf{b}= \{b^i_t : (i,t)\in {\cal D}\}$ and $\bbar=\{\bar{b}^i_{st} : (i,t)\in{\cal D}, 1\le s\le t\}$, outputs one of the following: \begin{enumerate} \item A constraint in (DLP) that is violated by $(\mathbf{b},\bbar)$. \item A certificate that $(\frac1\gamma \mathbf{b},\, \frac1\gamma \bbar)$ is feasible in (DLP). \end{enumerate} \end{lemma} \begin{proof} The first set of constraints in (DLP) and non-negativity of $\bar{b}^i_{st}$ are easy to verify since they are polynomial in number. Below we assume that these are satisfied. In order to verify the second set of constraints in (DLP), we use Theorem~\ref{thm:rTSP}. For each $s\in [T]$ define an instance ${\cal I}_s$ of minimum ratio TSP on metric $(V,w)$, depot $r$ and with rewards $a_v:=\sum_{t=s}^T \bar{b}^v_{st}$ for all $v\in V$. Let $A_s$ denote the solution found by the approximation algorithm of Theorem~\ref{thm:rTSP} on ${\cal I}_s$. If any solution $A_s$ has ratio less than one then it provides a violated constraint for $(\mathbf{b},\bbar)$.
This corresponds to the first condition in the lemma. If every solution $A_s$ has ratio at least one, we will show that the second condition in the lemma holds. The non-negativity constraints are clearly satisfied by $(\frac1\gamma \mathbf{b},\, \frac1\gamma \bbar)$. To check the first set of constraints in (DLP), note that for any $(i,t)\in{\cal D}$ and $s\in[T]$, $$\frac{1}{\gamma}\left( b^i_t-\bar{b}^i_{st} \right) \le \max\left\{ 0,\, b^i_t-\bar{b}^i_{st} \right\} \le H^i_{st}.$$ To check the second set of constraints, note that for any $s\in [T]$ we have by the approximation guarantee in Theorem~\ref{thm:rTSP}, $$\min_{S\subseteq V} \, \frac{f(S)}{\sum_{v\in S} \sum_{t=s}^T \bar{b}^v_{st}} \ge \frac{1}{\gamma}.$$ This implies that $(\frac1\gamma \mathbf{b},\, \frac1\gamma \bbar)$ satisfies all these constraints. \end{proof} Using the above separation oracle for (DLP) within the ellipsoid algorithm, we obtain a $\gamma$-approximately optimal solution to (DLP), see e.g.~\cite{J03}. Then solving (LP) restricted to the (polynomially many) variables that are dual to the constraints generated in solving (DLP), we obtain a $\gamma$-approximately optimal solution to (LP) as well. \medskip \paragraph{Running time.} Using the linear programming algorithms in~\cite{V90,V96} along with some preprocessing, the running time of the above approach is dominated by $\tilde{O}(|{\cal D}|^3 T^3)$ plus $\tilde{O}(|{\cal D}| T^2)$ calls\footnote{The $\tilde O$ notation hides logarithmic factors.} to a subroutine for: \begin{itemize} \item submodular function minimization in case of JRP. \item minimum-ratio TSP in case of IRP. \end{itemize} \section{Concluding Remarks} \label{sec_cf} We presented an $\mathcal{O}\left(\frac{\log T}{\log \log T}\right)$-approximation algorithm for submodular-JRP and IRP when holding costs are polynomial functions.
Moreover, this approach applies to any ordering cost for which the corresponding LP relaxation can be solved approximately and the ordering cost satisfies an approximate notion of fractional subadditivity. Obtaining a constant-factor approximation algorithm for submodular JRP and IRP on general metrics (even with linear holding costs) remains the main open question. \section*{Acknowledgments} The authors have benefited from valuable comments from and discussions on this work with Retsef Levi (MIT). \bibliographystyle{ormsv080}
\section{Introduction} We are interested in studying further regularity in time for non-homogeneous parabolic equations of the form \begin{align*} u_t-F(D^2u,Du,x) = f(x,t) \text{ in } B_1\times(-1,0]. \end{align*} Our estimates do not assume that $f$ is differentiable, nor that the homogeneous problem with frozen coefficients has interior $C^{2,\bar\a}$ estimates, in which case the H\"older continuity of $u_t$ is already well known. Let us now recall how these cases can be treated. We can get an estimate for $u_t$ by considering the equation obtained by taking the time derivative of the original problem. However, this relies on $f(x,\cdot) \in C^{0,1}$. Approximation techniques can be used when $f \in C^{\bar\gamma}$ and the homogeneous problem has $C^{2,\bar\a}$ estimates, see \cite{Wang92-2} or Chapter 8 from \cite{Caffarelli95}. This actually implies $u \in C^{2,\gamma}$ for every $\gamma \in (0,\bar\a)\cap(0,\bar\gamma]$, and therefore, by the scaling of the equation, $u(x,\cdot) \in C^{1,\gamma/2}$. Finally, if only a $C^{1,\bar\a}$ estimate is available for the homogeneous problem, then a similar approach proves that $u(x,\cdot) \in C^{0,(1+\gamma)/2}$, see \cite{Wang92-2,Caffarelli95}. H\"older estimates for $u_t$, without assuming smooth coefficients or $C^{2,\bar\a}$ estimates, seem to be unknown up to now. Given that $f \in C^{0,\gamma}$, the scaling of the problem suggests that $u_t \in C^{0,\gamma}$ as well. Our main theorem establishes such an estimate for $\gamma\in(0,\bar\a)$, where $\bar\a\in(0,1)$ is the universal H\"older exponent from the Krylov-Safonov estimate.
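The scaling heuristic just mentioned can be made explicit by a formal computation (constants and lower-order terms are ignored; the rescaled operator is still uniformly elliptic):

```latex
\[
  u_r(x,t) := r^{-2}\,u(rx, r^2 t), \qquad
  \partial_t u_r(x,t) = u_t(rx, r^2 t), \qquad
  \partial_t u_r - F\left(D^2 u_r, r\,Du_r, rx\right) = f(rx, r^2 t),
\]
so, measuring H\"older continuity in time with exponent $\gamma/2$,
\[
  [\partial_t u_r(x,\cdot)]_{C^{0,\gamma/2}} = r^{\gamma}\,[u_t(rx,\cdot)]_{C^{0,\gamma/2}}
  \qquad\text{and}\qquad
  [f(rx, r^2\,\cdot)]_{C^{0,\gamma/2}} = r^{\gamma}\,[f(rx,\cdot)]_{C^{0,\gamma/2}}.
\]
```

Both sides of the desired estimate rescale by the same factor $r^{\gamma}$, which is why $u_t\in C^{0,\gamma}$ is the natural target regularity when $f\in C^{0,\gamma}$.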
\begin{theorem} Let $F$ be uniformly elliptic satisfying hypothesis \eqref{eq:H} (defined in Section \ref{preliminaries}) with $F(0,0,x) = 0$, and let $u$ satisfy \begin{align*} u_t - F(D^2u,Du,x) = f(x,t) \text{ in the viscosity sense in } Q_1. \end{align*} Assume that $f \in C^{0,\gamma}(Q_1)$ for some $\gamma \in (0, \bar \a)$ where $\bar \a$ is the exponent from the Krylov-Safonov theorem. Then $u_t$ exists pointwise, and for some constant $C>0$ depending on $\min(\gamma,(\bar\a-\gamma))$, \[ \|u_t\|_{C^{0,\gamma}(Q_{1/4})} \leq C\1\sup_{Q_1}|u| + \sup_{x\in B_1}\|f(x,\cdot)\|_{C^{0,\gamma/2}[-1,0]}\2. \] \end{theorem} Notice that the scaling of this estimate corresponds to the scaling for $u_t$, which formally satisfies an equation with the singular right-hand side given by $f_t$. This already suggests that we should establish a diminish of oscillation for $u_t$ and not necessarily for $u$. Keep in mind that a diminish of oscillation for $u$, leading to a similar conclusion, would imply that $u$ has second derivatives in space, which is known to be false for a general $F$; see for instance the celebrated counterexamples by N. Nadirashvili and S. Vl{\u{a}}du{\c{t}} in \cite{Nadirashvili-2013} and the references therein. As $u_t$ is not controlled a priori, we establish a diminish of oscillation for the difference quotients $\frac{\d_\t u(x,t)}{\t^\b} := \frac{u(x,t)-u(x,t-\t)}{\t^\b}$ where $\b\in(0,1)$. This allows us to control in each step a higher order difference quotient and, after a finite number of iterations, obtain the desired regularity for $u_t$, as is done in Chapter 5 of \cite{Caffarelli95}. To show a diminish of oscillation for $\frac{\d_\t u}{\t^\b}$ brings several challenges. First of all, the equation for $\frac{\d_\t u}{\t^\b}$ has a right-hand side that might degenerate as $\t$ approaches zero. On the other hand, by using the scaling for $\frac{\d_\t u}{\t^\b}$ we make $u$ grow.
The key idea is to assume some small a priori H\"older continuity for the difference quotient, which gives a way to control the difference quotients for $\t \in (0,\bar\t)$ by the difference quotients with $\t>\bar\t$. This is rigorously established in the proof of Lemma \ref{thm:improvement}. Moreover, we also obtain a $C^{0,\gamma}$ estimate for $\frac{\d_\t u}{\t^\gamma}$ for any $\gamma\in(0,1)$ by only assuming $f$ bounded. This is actually the first step of the proof of our main theorem, contained in Section \ref{further}. Finally, we would like to point out that the constant in our estimate degenerates as $\gamma$ approaches zero or $\bar\a$. Whether this estimate can be improved, or whether counterexamples exist, remains open. Another interesting question is whether the result can be extended to $F$ depending on the time variable. Notice that for $F = F(M,t)$ uniformly elliptic, \begin{align*} \frac{d}{dt}F(D^2u,t) = F_M(D^2u,t)D^2u_t + F_t(D^2u,t). \end{align*} The first term is the one that can be used in the linearized equation. By uniform ellipticity, $F_M(D^2u,t)$ is bounded from above and below away from zero. The second term is problematic: keep in mind that, without assuming $C^{1,1}$ estimates, $F_t(D^2u,t)$ might be unbounded. For example, consider \[ F(D^2u,t) = \sup_\a\inf_\b\trace(A_{\a,\b}(t)D^2u). \] \subsection{Applications to fully non-linear, integro-differential, parabolic equations} The main interest for the authors to study this problem comes from fully non-linear, integro-differential, parabolic equations; let us recall first a singular counterexample concerning a time regularity issue. Given $\s\in(0,2)$, it is known from \cite{Davila12-p} that there exists some Dirichlet data in $(B_1^c\times(-1,0])\cup (\R^n\times\{-1\})$ such that the solution to the fractional heat equation $u_t-\D^{\s/2} u=0$ in $B_1\times(-1,0]$ cannot be smoother than Lipschitz continuous in the time variable.
This is surprising, as it is well known that for $\s=2$ the solutions are smooth. H\"older estimates for fully nonlinear and non-local parabolic problems were established by G. D\'avila and the first author in \cite{Davila12-p}. Further regularity estimates, such as the analogue of the parabolic Evans-Krylov theorem \cite{evans1982classical, krylov1983boundedly, caffarelli2010evans, MR2831115}, seem to require either better time estimates for the solution or strong hypotheses on the data, as was considered in \cite{2014arXiv1408.5149L}. The authors plan to investigate further regularity in time for fully non-linear, integro-differential, parabolic equations. We expect that a H\"older modulus of continuity for the boundary data, just in time, makes the time derivative of the solution H\"older continuous in space and time. For example, by truncating the solution of a homogeneous fractional heat equation we can transfer the Dirichlet data to a right-hand side. Notice that the smoothness of the kernel associated with $\D^{\s/2}$ (outside of the origin) has a regularizing effect only in space but not in time. At this point it is not difficult to see that the time derivative has a modulus of continuity in space and time if the Dirichlet data had a modulus of continuity in time, at least for the heat equation. Let us take the opportunity at this point to mention that a similar phenomenon was found by J. Serra in \cite{Serra14-2} for the $C^{k,\bar\a'}$ ($k=\lfloor\s+\bar\a\rfloor, \ \bar\a'=\s+\bar\a-k \neq 0$) estimates of concave non-local equations of order $\s$ with rough kernels. In that work the Dirichlet data is assumed to be $C^{0,\bar\a}$; moreover, it is proved that the $C^{k,\bar\a'}$ estimate is false without this assumption, by means of a counterexample. This technique was first introduced by the same author in \cite{Serra14} in the parabolic setting, where it is concluded that $u \in C^{1,\bar\a}$, which was known only for the case of smooth kernels.
Thanks to the scaling, $u_t(x,\cdot) \in C^{(1+\bar\a)/\s} \ss C^{1,\bar\a'}$ if $\s\in(0,1]$. Notice that as $\s\to2$ the estimate leaves a gap between $C^{0,(1+\bar\a)/2} \ss C^{0,1/2}$ and the well known $C^{1,\bar\a}$ regularity in time. \section{Preliminaries.}\label{preliminaries} We use the following notation, which is standard for second order parabolic problems. Given $\W\subset\R^n, \ A\subset\R^n\times\R$ and $\a,\t\in(0,1)$, \begin{align*} \p_p\1\W \times (t_1,t_2]\2 &:= \1\W \times \{t_1\}\2 \cup \1\p\W\times(t_1,t_2]\2,\\ [u]_{C^\a(A)} &:= \sup_{(x,t),(x',t') \in A} \frac{|u(x,t)-u(x',t')|}{(|x-x'| + |t-t'|^{1/2})^\a},\\ \d_\t u(x,t) &:= u(x,t) - u(x,t-\t). \end{align*} We will frequently use the parabolic cylinders $Q_r(x,t)=B_r(x)\times (t-r^2,t)$. The cylinder $Q_r$ is centered on $(0,0)$. Let $\mathcal S \ss \R^{n\times n}$ be the space of $n$ by $n$ symmetric matrices and $I$ its identity matrix. A continuous function $F:\mathcal S \times \R^n \times \W \to\R$ is said to be uniformly elliptic with respect to $0<\l\leq\L<\8$ if for each $x \in \R^n$, \begin{equation} \cM^-(M-N) - \L|p-q| \leq F(M,p,x)-F(N,q,x) \leq \cM^+(M-N) + \L|p-q|, \end{equation} where \begin{align*} \cM^+M &:= \sup\{\trace(AM): A \in \mathcal S, \l I\leq A \leq \L I\},\\ \cM^-M &:= \inf\{\trace(AM): A \in \mathcal S, \l I\leq A \leq \L I\}. \end{align*} Any constant that depends on $n, \ \l$ and $\L$ is considered universal. The dependence of various values on these quantities will be assumed without being stated explicitly. Solutions are considered in the viscosity sense as in \cite{Wang92,Caffarelli95}. Another good reference is the lecture notes by C. Imbert and L. Silvestre, \cite{Silvestre-Imbert-2013}. This notion is sufficiently weak to allow for existence of continuous solutions to the Dirichlet problem by Perron's method. 
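The extremal operators defined above admit a well-known closed form in terms of the eigenvalues of $M$: the sup and inf are attained at matrices $A$ diagonal in the eigenbasis of $M$, giving $\cM^+M=\L\sum_{e_i>0}e_i+\l\sum_{e_i<0}e_i$ and symmetrically for $\cM^-$. A quick numerical check of this identity (an illustration only; the function takes the eigenvalues as input):

```python
def pucci_extremal(eigs, lam, Lam):
    """Evaluate the Pucci extremal operators from the eigenvalues `eigs`
    of a symmetric matrix M, for ellipticity constants 0 < lam <= Lam:
      M^+(M) = Lam * (sum of positive eigenvalues) + lam * (sum of negative),
      M^-(M) = lam * (sum of positive eigenvalues) + Lam * (sum of negative).
    Returns the pair (M^+(M), M^-(M))."""
    pos = sum(e for e in eigs if e > 0)
    neg = sum(e for e in eigs if e < 0)
    return Lam * pos + lam * neg, lam * pos + Lam * neg
```

For $M=\mathrm{diag}(1,-1)$ with $\l=1,\L=2$ this gives $\cM^+M=1$ and $\cM^-M=-1$, and one always has $\cM^-M\le\cM^+M$.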
For smooth functions $u$ and $v$ satisfying \begin{align*} u_t - F(D^2u,Du,x) = f(x,t) \qquad\text{and}\qquad v_t - F(D^2v,Dv,x) = g(x,t), \end{align*} the uniform ellipticity of $F$ implies for $w:=u-v$ the following inequalities, \[ w_t - \cM^+(D^2w) - \L|Dw| \leq f(x,t) - g(x,t) \leq w_t - \cM^-(D^2w) + \L|Dw|. \] However, the question of whether two viscosity solutions satisfy the same inequalities in the viscosity sense is a delicate one. A sufficient condition is that $F$ satisfies the Lipschitz estimate \begin{align} \label{eq:H}\tag{H} |F(M,p,x)-F(M,p,y)|\leq C |x-y|(1 + |M|+|p|). \end{align} See \cite{Jensen89,Crandall92} for a rather comprehensive discussion and many references. It is important to note that any regularity of $F$ needed to ensure this comparison in the viscosity sense is used only qualitatively, and there is no dependence on it in our estimates. The following basic interior regularity estimate is a consequence of the Krylov-Safonov Harnack inequality. \begin{theorem}[Krylov-Safonov]\label{thm:KS} There exist a universal exponent $\bar\a\in (0,1)$ and a constant $C$ such that if $u$ satisfies \begin{align}\label{eq:basic} u_t - \cM^+ u - \L|Du| &\leq |f| \quad \text{ and } \quad u_t - \cM^- u + \L|Du| \geq -|f| \end{align} in the viscosity sense in $Q_1$, then \begin{align*} \|u\|_{C^{\bar\a}\1 Q_{1/2}\2} \leq C\1\sup_{Q_1}|u| +\|f\|_{L^{n+1}(Q_1)}\2. \end{align*} \end{theorem} From now on we fix $\bar\a \in (0,1)$ to be the exponent in Theorem \ref{thm:KS}. \section{Bounded Right-Hand Side}\label{further} The goal of this section is to show that, under the assumption that $f$ is bounded, $u$ is in every H\"older space with exponent $\gamma\in(0,1)$. The argument will proceed iteratively, with each step proving that the difference quotients $\frac{\d_\t u}{\t^\b}$ are bounded for progressively higher $\b$.
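As a rough illustration of this bootstrap, assume (as in the iteration described next) that each application of the key lemma upgrades the controlled exponent by $\a/2$; then reaching a target exponent $\gamma$ takes about $\lceil 2\gamma/\a\rceil$ steps. The following sketch simply counts them (the step size $\a/2$ is an assumption matching the text, not a statement of the precise constants):

```python
def improvement_steps(alpha, gamma):
    """Count the iterations of the bootstrap: starting from beta = 0,
    each step upgrades control of d_tau u / tau^beta from exponent beta
    to exponent min(beta + alpha/2, gamma).  Returns the number of steps
    needed to reach the target exponent gamma."""
    beta, steps = 0.0, 0
    while beta < gamma:
        beta = min(beta + alpha / 2, gamma)
        steps += 1
    return steps
```

The point is only that the number of iterations is finite and depends on $\a$ and $\gamma$, which is why the constants in the final estimate degenerate as $\gamma\to\bar\a$.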
The crucial step is to control $\frac{\d_\t u}{\t^{\b+\a/2}}$ given that $\frac{\d_\t u}{\t^\b}$ is already controlled ($\a,\b\in(0,1)$ and $\b+\a/2\in(0,1)$), similar to what is done in Chapter 5 of \cite{Caffarelli95} in order to prove $C^{1,\bar\a}$ estimates. It turns out to be useful to control quantities like $\|\frac{\d_\t u}{\t^\b}\|_{C^{0,\e}}$ for some small $\e$, as this allows us to control the difference quotients as $\t$ approaches zero by using Corollary \ref{lem:appendix3} in the appendix. This entire section is devoted to proving the following theorem: \begin{theorem}\label{thm:improv_reg_1} Let $F$ be uniformly elliptic satisfying hypothesis \eqref{eq:H} with $F(0,0,x) = 0$, and let $u$ satisfy \begin{align*} u_t - F(D^2u,Du,x) = f(x,t) \text{ in the viscosity sense in } Q_2. \end{align*} Given $\gamma\in(0,1)$ there exist $\e\in(0,1-\gamma)$ and $C>0$ depending on $(1-\gamma)$ such that, \[ \sup_{\t\in(0,1/4)} \left\|\frac{\d_\t u}{\t^\gamma}\right\|_{C^\e(Q_{1/2})} \leq C\1\osc_{Q_2}u+\sup_{Q_2}|f|\2. \] \end{theorem} The key step is established in the following lemma. Notice that for $\e=0$ the statement is a decay of oscillation leading to a $C^{0,\a}$ estimate for the difference quotient. \begin{lemma}\label{thm:improvement} Let $F$ be uniformly elliptic satisfying hypothesis \eqref{eq:H}, and let $u$ satisfy \begin{align*} &u_t-F(D^2 u,Du,x)=f(x,t) \text{ in the viscosity sense in } Q_2. \end{align*} Let $\b\in(0,1), \a\in(0,\bar\a)$ and $\e\in(0,\min(1-\b,\bar\a-\a))$ where $\bar\a$ is the exponent from the Krylov-Safonov estimates. Then there exist constants $\m,\d\in(0,1)$ depending on $\a$ and $\e$, such that, \[ \sup_{\t\in(0,1)} \left[\frac{\d_{\t} u}{\t^\b}\right]_{C^{0,\e}(Q_1)}\leq 1 \qquad \text{ and } \qquad \sup_{\substack{\t\in(0,1)\\(x,t)\in Q_1}}|\d_\t f(x,t)| \leq \d, \] imply, \[ \sup_{\t\in(0,1)}\left[\frac{\d_{\t} u}{\t^\b}\right]_{C^{0,\e}(Q_\m)}\leq \m^\a.
\] \end{lemma} \begin{proof} The value $\mu\in(0,1)$ will remain fixed for the duration of the proof; it will be specified explicitly later. Assume by contradiction that for $\d \in(0,1)$, there exist $F$ and $u$ such that, \begin{align*} &u_t-F(D^2 u,Du,x)=f(x,t) \text{ in the viscosity sense in } Q_2,\\ &\sup_{\t\in (0,1)} \left[\frac{\d_\t u}{\t^\b}\right]_{C^{0,\e}(Q_1)}\leq 1, \qquad \text{ and } \qquad \sup_{\substack{\t\in(0,1)\\(x,t)\in Q_1}}|\d_\t f(x,t)| \leq \d, \end{align*} however, there exists a cylinder $Q_{r}(x_0,t_0) \ss Q_\m$ for which, \[ \sup_{\t\in (0,1)}\osc_{Q_{r}(x_0,t_0)}\frac{\d_\t u}{\t^\b}>\mu^\a r^\e. \] Consider the following rescaling for $\kappa := r/\mu$, \[ w(x,t) := \kappa^{-(2\b+\e)}u\1\kappa x+x_0,\kappa^2 t+t_0\2. \] It satisfies, \[ w_t(x,t)-\tilde{F}(D^2 w,Dw,x)=\tilde{f}(x,t) \text{ in the viscosity sense in } Q_2 \] where, \begin{align} \nonumber &\tilde{F}(M,p,x):=\kappa^{2-2\b-\e}F\1\kappa^{-(2-2\b-\e)}M,\kappa^{-(1-2\b-\e)}p,\kappa x+x_0\2,\\ \nonumber &\tilde{f}(x,t):= \kappa^{2-2\b-\e}f\1\kappa x+x_0,\kappa^2 t+t_0\2. \end{align} Notice that the hypothesis for $f$ implies the following for $\tilde f$ provided that $\b\in(0,1-\e/2)$, \begin{align} \label{eq:rescaledrhs} &\sup_{\substack{\t\in(0,\kappa^{-2})\\(x,t)\in Q_1}}|\d_\t \tilde{f}(x,t)| \leq \d. \end{align} The hypotheses for the difference quotients of $u$ imply that \begin{align} \label{eq:rescaledupbd} &\sup_{\t\in(0,\kappa^{-2})}\left[\frac{\d_\t w}{\t^\b}\right]_{C^{0,\e}(Q_1)}\leq \sup_{\t\in(0,1)}\left[\frac{\d_\t u}{\t^\b}\right]_{C^{0,\e}(Q_1)}\leq 1,\\ \label{eq:contradiction} &\sup_{\t\in(0,\kappa^{-2})}\osc_{Q_{\mu}}\frac{\d_\t w}{\t^\b}=\kappa^{-\e}\sup_{\t\in(0,1)}\osc_{Q_{r}(x_0,t_0)}\frac{\d_\t u}{\t^\b}>\mu^{\a+\e}. \end{align} The next step consists in showing that a hypothesis similar to \eqref{eq:contradiction} holds when the supremum with respect to $\tau$ is taken away from zero.
Namely $\t\in(\bar\t,\kappa^{-2})$ for some $\bar\t\in(0,\k^{-2})$ depending on $\m$ and $\e$. Indeed, define for $(x,t),(y,s)\in Q_\m$, \[ z(a)=w\1x,t+\kappa^{-2}a\2-w\1y,s+\kappa^{-2}a\2, \] and apply Corollary \ref{lem:appendix3} to $z$: \begin{align*} &\sup_{\t\in(\bar\t,\kappa^{-2})} \left|\frac{\d_\t w(x,t)}{\t^{\b}}-\frac{\d_\t w(y,s)}{\t^{\b}}\right|\\ \geq &\frac{1}{2}\sup_{\t\in(0,\kappa^{-2})}\left|\frac{\d_\t w(x,t)}{\t^{\b}}-\frac{\d_\t w(y,s)}{\t^{\b}}\right| - C \bar\t^{\e/2}\sup_{\t\in(0,\kappa^{-2})}\left[\frac{\d_\t w}{\t^\b} \right]_{C^{0,\e}(Q_1)}. \end{align*} The second term on the right-hand side is controlled by \eqref{eq:rescaledupbd}. After taking the supremum in $(x,t),(y,s)\in Q_\m$ and using \eqref{eq:contradiction}, this gives \begin{align} \label{eq:contradiction2} \sup_{\t\in(\bar\t,\kappa^{-2})} \osc_{Q_\m}\frac{\d_\t w}{\t^\b} \geq \frac{\m^{\a+\e}}{2}-C\bar\t^{\e/2} \geq \frac{\m^{\a+\e}}{4}, \end{align} provided that $\bar\t^{\e/2}$ is sufficiently small with respect to $\m^{\a+\e}$. Let $\t\in(\bar\t,\kappa^{-2})$ and \[ v(x,t) = \frac{\d_\t w}{\t^\b}(x,t) - \frac{\d_\t w}{\t^\b}(0,0). \] By time translation invariance and the hypothesis \eqref{eq:H} we get that $v$ satisfies two viscosity inequalities in $Q_1$, \begin{align*} v_t - \cM^+(D^2 v) -\L|Dv| \leq \frac{\d_\t \tilde{f}}{\t^\b} \qquad \text{ and } \qquad v_t - \cM^-(D^2 v) + \L|Dv| \geq \frac{\d_\t \tilde{f}}{\t^\b}. \end{align*} In order to apply the estimate in the Krylov-Safonov Theorem \ref{thm:KS} we need to control the two terms on the right-hand side. Using \eqref{eq:rescaledupbd} we get that $\osc_{Q_1} v \leq 1$. The right-hand side is also controlled by one provided we take $\d\in(0, \bar\t^\b)$, \[ \sup_{Q_1}\left|\frac{\d_\t \tilde{f}}{\t^\b}\right| \leq \frac{\d}{\t^\b} \leq 1.
\] Finally, by the H\"older estimate in Theorem \ref{thm:KS}, and fixing now $\m$ sufficiently small in terms of $(\bar\a-(\a+\e))$, we get the following contradiction to \eqref{eq:contradiction2}, \[ \osc_{Q_\m} v \leq C\m^{\bar\a}\leq \frac{\m^{\a+\e}}{8}. \] \end{proof} Notice that the previous lemma is independent of the size of the oscillation of the solution $u$. This allows us to prove the following corollary by considering an appropriate rescaling for $\frac{\d_\t u}{\t^\b}$. The fact that the oscillation of $u$ increases under this rescaling is actually harmless. \begin{corollary}\label{cor:iteration} Let $F$ be uniformly elliptic satisfying hypothesis \eqref{eq:H}, and let $u$ satisfy \begin{align*} &u_t-F(D^2 u,Du,x)=f(x,t) \text{ in the viscosity sense in } Q_2. \end{align*} Let $\b\in(0,1), \a\in(0,\min(1-\b,\bar\a))$ and $\e\in(0,\min(1-(\b+\a/2),\bar\a-\a))$ where $\bar\a$ is the exponent from the Krylov-Safonov estimates. Then there exist constants $\m,\d\in(0,1)$ depending on $\a$ and $\e$, such that, \[ \sup_{\t\in(0,1)} \left[\frac{\d_{\t} u}{\t^\b}\right]_{C^{0,\e}(Q_1)}\leq 1 \qquad \text{ and } \qquad \sup_{\substack{\t\in(0,1)\\(x,t)\in Q_1}}|\d_\t f(x,t)| \leq \d, \] imply for every $i\in\N$, \[ \sup_{\t\in(0,\m^{2i})}\left[\frac{\d_{\t} u}{\t^\b}\right]_{C^{0,\e}\1 Q_{\m^i}\2}\leq \m^{\a i}. \] \end{corollary} \begin{proof} Assume for some $i\in\N$ the inductive hypothesis, \[ \sup_{\t\in(0,\m^{2i})}\left[\frac{\d_{\t} u}{\t^\b}\right]_{C^{0,\e}\1 Q_{\m^i}\2}\leq \m^{\a i}. \] Let \begin{align*} v(x,t) := \frac{u(\m^i x,\m^{2i}t)}{\m^{2i(\b+\a/2+\e/2)}}, \end{align*} so that, \begin{align*} &v_t-\tilde F(D^2 v,Dv,x)=\tilde f(x,t) \text{ in the viscosity sense in } Q_2, \end{align*} where, \begin{align*} \tilde F(M,p,x) &:= \m^{2i(1-\b-\a/2-\e/2)}F(\m^{-2i(1-\b-\a/2-\e/2)} M,\m^{-2i(1/2-\b-\a/2-\e/2)}p,\m^i x),\\ \tilde f(x,t) &:= \m^{2i(1-\b-\a/2-\e/2)}f(\m^i x,\m^{2i} t).
\end{align*} Given that $\b+\a/2+\e/2<1$ we obtain that, \begin{align}\label{eq:6} \sup_{\substack{\t\in(0,1)\\(x,t)\in Q_1}}|\d_\t \tilde f(x,t)| \leq \d. \end{align} Moreover, the inductive hypothesis tells us that, \[ \sup_{\t\in(0,1)} \left[\frac{\d_{\t} v}{\t^\b}\right]_{C^{0,\e}(Q_1)} = \m^{-\a i}\sup_{\t\in(0,\m^{2i})} \left[\frac{\d_{\t} u}{\t^\b}\right]_{C^{0,\e}\1 Q_{\m^i}\2} \leq 1. \] By applying Lemma \ref{thm:improvement} to $v$ we now obtain the desired inductive step, which completes the proof of the corollary. \end{proof} The next corollary establishes an estimate on a higher-order difference quotient by sacrificing a little of the H\"older exponent $\e$. \begin{corollary}\label{cor:improvement_dif_quot} Let $F$ be uniformly elliptic satisfying hypothesis \eqref{eq:H} with $F(0,0,x) = 0$, and let $u$ satisfy \begin{align*} &u_t-F(D^2 u,Du,x)=f(x,t) \text{ in the viscosity sense in } Q_2. \end{align*} Let $\b\in(0,1), \a\in(0,\min(2-2\b,\bar\a))$ and $\e\in(0,\min(1-(\b+\a/2),\bar\a-\a))$ where $\bar\a$ is the exponent from the Krylov-Safonov estimates. Then there exist constants $\m,\d\in(0,1)$ depending on $\a$ and $\e$, such that, \[ \osc_{Q_2} u + \sup_{\t\in(0,1)} \left[\frac{\d_{\t} u}{\t^\b}\right]_{C^{0,\e}(Q_1)}\leq 1 \qquad \text{ and } \qquad \sup_{Q_2}|f| \leq \d, \] imply, \[ \sup_{\t\in(0,1/4)}\left[\frac{\d_{\t} u}{\t^{\b+\a/2}}\right]_{C^{0,\bar\a\e/4}\1 Q_{1/2}\2}\leq C. \] \end{corollary} \begin{proof} By a standard covering argument applied to Corollary \ref{cor:iteration} we know that there exists $C>0$ such that, \begin{align}\label{eq:2} \sup_{\substack{Q_r(x,t)\ss Q_{1/2}\\\t\in(0,r^2)}}\osc_{Q_r(x,t)}\frac{\d_\t u}{r^{\a+\e}\t^\b}\leq C. \end{align} Our goal is to bound instead, \begin{align*} \sup_{\substack{Q_r(x,t)\ss Q_{1/2}\\\t\in(0,1/4)}}\osc_{Q_r(x,t)}\frac{\d_\t u}{r^{\bar\a\e/4}\t^{\b+\a/2}}. \end{align*} Let $x\in B_{1/2}$ and $v(t)=u(x,t)$. By hypothesis, \[ \osc_{(-1/4,0]}v \leq \osc_{Q_2} u \leq 1.
\] On the other hand, using \eqref{eq:2} with $\t=r^2$, \begin{align*} \left|\frac{\d^2_\t v(t)}{\t^{\b+\a/2+\e/2}}\right| \leq \osc_{Q_{\t^{1/2}}(x,t)} \frac{\d_\t u}{\t^{\b+\a/2+\e/2}} \leq C. \end{align*} We can now apply to $v$ the proof of Lemma 5.2 from \cite{Caffarelli95}, using that $\b+\a/2+\e/2<1$, in order to obtain the following estimate independent of $x \in B_{1/2}$, \begin{align*} \sup_{\t\in(0,1/4)}\osc_{(-1/4,0]} \frac{\d_\t v}{\t^{\b+\a/2+\e/2}} \leq C. \end{align*} In particular, by the triangle inequality, \begin{align*} \sup_{\t\in(0,1/4)}\osc_{Q_{1/2}} \frac{\d_\t u}{\t^{\b+\a/2+\e/2}} \leq C. \end{align*} We now fix $Q_r(x,t)\ss Q_{1/2}$ and consider two cases. If $\t\in(0,r^{\bar\a/2})$ then from the previous estimate, \begin{align*} \osc_{Q_r(x,t)}\frac{\d_\t u}{r^{\bar\a\e/4}\t^{\b+\a/2}} \leq C\frac{\t^{\e/2}}{r^{\bar\a\e/4}}\leq C. \end{align*} On the other hand, if $\t\in[r^{\bar\a/2},1/4)$ then we use the Krylov-Safonov estimate (Theorem \ref{thm:KS}) and the fact that $2>\b+\a/2+\e/2$, \begin{align*} \osc_{Q_r(x,t)}\frac{\d_\t u}{r^{\bar\a\e/4}\t^{\b+\a/2}} \leq C\frac{r^{\bar\a(1-\e/4)}}{\t^{\b+\a/2}} \leq C. \end{align*} \end{proof} By iterating Corollary \ref{cor:improvement_dif_quot} we control higher-order difference quotients of the solution. This is the same approach used to obtain H\"older estimates for the derivatives of solutions of translation-invariant equations; see Chapter 5 from \cite{Caffarelli95}. The consequence is the proof of Theorem \ref{thm:improv_reg_1}. \begin{proof}[Proof of Theorem \ref{thm:improv_reg_1}] Let us assume without loss of generality that for $\d\in(0,1)$ sufficiently small, \begin{align*} \osc_{Q_2} u \leq 1 \qquad\text{and}\qquad \sup_{Q_2}|f| \leq \d. \end{align*} Consider $\b_0 \in(0,3\bar\a/8)$, $\a:=\bar\a/2$ and $\b_k := \b_0 + k\a/2$.
Our goal is to prove that as long as $\b_k<1$ there exists some $\e_k\in(0,1)$ such that, \[ \sup_{\t\in (0,2^{-2k})} \left[\frac{\d_\t u}{\t^{\b_k}}\right]_{C^{0,\e_k}\1Q_{2^{-k}}\2} \leq C(k). \] Then the result follows by a standard covering argument both for the domain of the equation and the interval of H\"older exponents. Notice also that $\b_k<1$ implies $k<4/\bar\a$, so that any dependence on $k$ is actually universal. The first step is to establish the following bound for $\e\in(0,\bar\a/8)$ by using the Krylov-Safonov Theorem \ref{thm:KS}, \begin{align*} \sup_{\t\in(0,1/4)}\left[\frac{\d_\t u}{\t^{\b_0}}\right]_{C^{\e}(Q_{1/2})} \leq C. \end{align*} As in the proof of Corollary \ref{cor:improvement_dif_quot}, we consider two cases. If $\t\in(0,r^2)$ then we bound the oscillation in the time variable in terms of $\t$ by the Krylov-Safonov estimate and then use the triangle inequality to obtain, \begin{align*} \sup_{Q_r(x,t)\ss Q_{1/2}} \osc_{Q_r(x,t)} \frac{\d_\t u}{r^{\e} \t^{\b_0}} \leq C\frac{\t^{\bar\a/2-\b_0}}{r^{\e}} \leq C. \end{align*} If $\t\in[r^2,1/4)$ then we use instead that $\osc_{Q_r(x,t)} \d_\t u \leq \osc_{Q_r(x,t)} u + \osc_{Q_r(x,t-\t)} u$, for which each term gets controlled in terms of $r$, once again using the Krylov-Safonov estimate, \begin{align*} \sup_{Q_r(x,t)\ss Q_{1/2}} \osc_{Q_r(x,t)} \frac{\d_\t u}{r^{\e} \t^{\b_0}} \leq C\frac{r^{\bar\a-\e}}{\t^{\b_0}} \leq C. \end{align*} At this point we plan to iterate Corollary \ref{cor:improvement_dif_quot} $k$ times for some $k$ such that $\b_k = \b_0+k\a/2=\b_0+k\bar\a/4<1$. Let $\e_0 := \min(1-\b_k,\bar\a/16)$ and $\e_k := \e_0(\bar\a/4)^k$. Then, as long as $\b_k+\e_{k-1}/2<1$ we get that, \[ \sup_{\t\in (0,2^{-2k})} \left[\frac{\d_\t u}{\t^{\b_k}}\right]_{C^{0,\e_k}\1Q_{2^{-k}}\2} \leq C(k). \] This establishes the desired estimate with constants that depend on $(1-\b_k)$ besides universal quantities.
\end{proof} \section{H\"older Right-Hand Side} In this section we assume $f$ to be H\"older continuous and establish a similar modulus of continuity for $u_t$. The main idea consists in applying Lemma \ref{thm:improvement} for $\b$ sufficiently close to one, followed by some modifications to the proofs of Corollaries \ref{cor:iteration} and \ref{cor:improvement_dif_quot}. \begin{theorem}\label{thm:improv_reg_2} Let $F$ be uniformly elliptic satisfying hypothesis \eqref{eq:H} with $F(0,0,x) = 0$, and let $u$ satisfy \begin{align*} u_t - F(D^2u,Du,x) = f(x,t) \text{ in the viscosity sense in } Q_1. \end{align*} Assume that for all $x\in B_2$, $f(x,\cdot) \in C^{0,\gamma/2}[-4,0]$ for some $\gamma \in (0, \bar \a)$ where $\bar \a$ is the exponent from the Krylov-Safonov theorem. Then $u_t$ exists pointwise, and for some constant $C>0$ depending on $\min(\gamma,(\bar\a-\gamma))$, \[ \|u_t\|_{C^{0,\gamma}(Q_{1/4})} \leq C\1\sup_{Q_2}|u| + \sup_{x\in B_2}\|f(x,\cdot)\|_{C^{0,\gamma/2}(-4,0]}\2. \] \end{theorem} \begin{proof} Let us assume without loss of generality that for some $\d\in(0,1)$ sufficiently small, \[ \sup_{Q_2}|u|\leq 1 \qquad\text{ and }\qquad \sup_{x\in B_2}\|f(x,\cdot)\|_{C^{0,\gamma/2}(-4,0]}\leq \d. \] By Theorem \ref{thm:improv_reg_1} we know that for $\b := 1-(\bar\a-\gamma)/4$ there exists some $\e\in(0,1)$ such that, \[ \sup_{\t\in(0,1/4)} \left\|\frac{\d_\t u}{\t^\b}\right\|_{C^{0,\e}(Q_{1/2})} \leq C. \] Then we apply Lemma \ref{thm:improvement} to get the oscillation decay for $\a = (\bar\a+\gamma)/2$ and some $\m\in(0,1/2)$, \[ \sup_{\t\in(0,1/4)} \left[\frac{\d_\t u}{\t^\b}\right]_{C^{0,\e}(Q_\m)} \leq C\m^\a. \] We would now like to apply Corollary \ref{cor:iteration}; however, $\b+\a/2 = 1+\gamma/4$ is not smaller than one, as required in the statement of that corollary.
The need for this hypothesis arises in the rescaling we considered, so let us recall the setup, \begin{align*} v(x,t) &:= \frac{u(\m^i x,\m^{2i}t)}{\m^{2i(\b+\a/2+\e/2)}},\\ \tilde F(M,p,x) &:= \m^{2i(1-\b-\a/2-\e/2)}F(\m^{-2i(1-\b-\a/2-\e/2)} M,\m^{-2i(1/2-\b-\a/2-\e/2)} p,\m^i x),\\ \tilde f(x,t) &:= \m^{2i(1-\b-\a/2-\e/2)}f(\m^i x,\m^{2i} t). \end{align*} At this point we can use the H\"older hypothesis for $f$ to obtain, \[ \sup_{\substack{\t\in(0,1/4)\\(x,t)\in Q_{1/2}}}|\d_\t \tilde f(x,t)| \leq \d\mu^{2i(1-\b-\a/2-\e/2)}\m^{i\gamma} = \d\m^{i(\gamma/2-\e)} \leq 1, \] provided that $\e\in(0,\gamma/2)$. In conclusion, the same argument as in the proof of Corollary \ref{cor:iteration} applies and yields, after a standard covering argument, \begin{align}\label{eq:inth1} \sup_{\substack{Q_r(x,t)\ss Q_{1/4}\\\t\in(0,r^2)}}\osc_{Q_r(x,t)}\frac{\d_\t u}{r^{\a+\e}\t^{\b}} \leq C. \end{align} Now we use Lemma \ref{lem:appendix4} to get the bounds \begin{equation}\label{eq:inti2} \sup_{Q_{1/4}}|u_t| + \sup_{\substack{(x,t)\in Q_{1/4}\\\t\in(0,1/4)}} \left| \frac{\d_\t u_t(x,t)}{\t^{\gamma/4}} \right| \leq C. \end{equation} It remains to show that given $(y,s)\in Q_r(x,t)\ss Q_{1/4}$, \begin{align}\label{eq:7} \left| u_t(x,t) - u_t(y,s)\right| \leq Cr^{\gamma/2}. \end{align} Let $\t = r^2$, so that from \eqref{eq:inth1} we obtain, \[ \left| \frac{\d_{r^2}u(x,t)}{r^2} - \frac{\d_{r^2}u(y,s)}{r^2}\right| \leq r^{\gamma/2}. \] On the other hand, using \eqref{eq:inti2}, \[ \left|\frac{\d_{r^{2}}u}{r^2}-u_t\right|(y,s) \leq \frac{1}{r^2}\int_{-r^2}^0|u_t(y,s+a)-u_t(y,s)|da\leq Cr^{\gamma/2}. \] Finally, the desired estimate follows from the triangle inequality by adding and subtracting $\1\frac{\d_{r^2}u(x,t)}{r^2} - \frac{\d_{r^2}u(y,s)}{r^2}\2$ inside the absolute value in \eqref{eq:7}. \end{proof} \section{Appendix} In this appendix we establish a few interpolation results about H\"older spaces.
The first lemma can be understood as a maximum principle. \begin{lemma} Let $\a,\b\in(0,1)$. Then for any $u \in C[-1,0]$, \begin{align*} &u(-1)=u(0)=0, \qquad \sup_{\t\in(0,1)}\left[\frac{\d_{\t}u}{\t^{\b}}\right]_{C^\a[-1+\t,0]} \leq 1\qquad \Rightarrow \qquad \osc_{[-1,0]}u \leq 2. \end{align*} \end{lemma} \begin{proof} Let, \[ \varphi(t) = 2\max\1|t|^{(\a+\b)/4},|t+1|^{(\a+\b)/4}\2. \] We want to show that $u \leq \varphi$ in $[-1,0]$ and therefore $u \leq 2$. Assume that there exists $\t\in(0,1/2]$ such that $-\t$ realizes the positive maximum of $(u-\varphi)$ in $(-1,0)$. Then we obtain the following contradiction, \[ -2\1 2-2^{(\a+\b)/4}\2 \t^{(\a+\b)/4} \geq \d_\t^2\varphi(0) \geq \d_\t^2 u(0) \geq -\t^{\a+\b}. \] A similar contradiction occurs if $\t\in(1/2,1)$ by considering the second order differences at $-1$. \end{proof} The proof of Lemma 5.2 in \cite{Caffarelli95} shows that if $\a+\b<1$ then there exists some constant $C>0$ depending on $1-(\a+\b)$ such that the following estimate holds, \[ \sup_{\t\in(0,1)}\left|\frac{\d_\t u(0)}{\t^{\a+\b}}\right| \leq C\1\osc_{[-1,0]}u + \sup_{\t\in(0,1)}\left[\frac{\d_{\t}u}{\t^{\b}}\right]_{C^\a[-1+\t,0]}\2. \] By applying this result followed by the maximum principle to, \[ \bar u(s) := \frac{u(\bar\t s) + s u(-\bar\t) - (s+1)u(0)}{\bar\t^{\a+\b}\sup_{\t\in(0,\bar\t)}\left[\frac{\d_\t u}{\t^\b}\right]_{C^\a[\t-\bar\t,0]}} \] we get the following corollary. \begin{corollary}\label{lem:appendix3} Let $\a,\b\in(0,1)$ such that $\a+\b<1$. There exists a constant $C>0$ depending on $1-(\a+\b)$ such that for any $u \in C[-1,0]$ and $\bar\t\in(0,1)$, \[ \sup_{\t\in(0,\bar\t)}\left|\frac{\d_\t u(0)}{\t^{\b+\a}}\right| \leq \bar\t \left|\frac{\d_{\bar\t} u(0)}{\bar\t^{\b+\a}}\right| + C\sup_{\t\in(0,1)}\left[\frac{\d_\t u}{\t^\b}\right]_{C^\a[-1+\t,0]}.
\] In particular, \[ \sup_{\t\in(\bar\t,1)}\left|\frac{\d_{\t} u(0)}{\t^{\b}}\right| \geq \frac{1}{2}\sup_{\t\in(0,1)}\left|\frac{\d_\t u(0)}{\t^{\b}}\right| - C\bar\t^\a\sup_{\t\in(0,1)}\left[\frac{\d_\t u}{\t^\b}\right]_{C^\a[-1+\t,0]}. \] \end{corollary} Finally, this last lemma establishes a H\"older estimate for the derivative when $\a+\b>1$. \begin{lemma}\label{lem:appendix4} Let $\a,\b\in(0,1)$ such that $\a+\b>1$ and $u:[-1,1]\to\R$ such that, \[ \sup_{\t\in(0,1)}\left\|\frac{\d_\t u}{\t^\b}\right\|_{C^\a(-1+\t,1)} \leq 1. \] Then for some universal constant $C$, \[ \|u\|_{C^{1,\a+\b-1}(-1,1)} \leq C. \] \end{lemma} \begin{proof} By Lemma 5.6 from \cite{Caffarelli95} we know that $u$ is Lipschitz and therefore differentiable almost everywhere. By a density argument it suffices to show that for each point of differentiability $t_0\in(-1,1)$, \begin{align*} |u(t)-u(t_0)-u_t(t_0)(t-t_0)| \leq C|t-t_0|^{\a+\b} \text{ for } t \in [-1,1]. \end{align*} Assume without loss of generality that $t_0 = u(t_0) = u_t(t_0) = 0$. If there exists $h\in(0,1]$ such that $u(h) > Ch^{\a+\b}$, then by iterating the hypothesis of the lemma we get for every $i\in\N$, \begin{align*} \frac{u(2^{-i}h)}{2^{-i}h} > \1C-\sum_{j=0}^{i-1}2^{-(\a+\b-1)j}\2 h^{\a+\b-1} \geq \frac{C}{2}h^{\a+\b-1}>0, \end{align*} provided that $C = 4/(2^{\a+\b-1}-1)$. This contradicts $u_t(0)=0$ as $i\to\8$. \end{proof} {\bf Acknowledgments:} DK was partially supported by National Science Foundation grant DMS-1065926. \bibliographystyle{plain}
\section{Introduction} Thermodynamic systems can exhibit remarkably complex phase transition dynamics when they are driven by an external force. Well-studied examples include ferromagnetic systems and crystalline systems that are driven by steady shear \cite{Onuki,Miyama}, oscillating external fields \cite{Chakrabarti,Klapp}, and athermal noise \cite{Sancho,Behn}. A general treatment of how driving forces affect the nature of phase transitions has not yet been established. Identifying and classifying the types of phase transitions peculiar to out-of-equilibrium systems can give valuable insights for developing non-equilibrium statistical mechanics. The ferromagnetic and crystalline phases are characterized by long-range order (LRO), wherein the order parameter takes a finite value. As a qualitatively different behavior, the two-dimensional XY model exhibits quasi-long-range order (QLRO), wherein the order parameter remains zero but the spatial correlation decays in a power-law form at low temperatures \cite{KT}. Furthermore, at the transition between the QLRO phase and the disordered phase, which is called the Kosterlitz-Thouless (KT) transition, there is no singularity in thermodynamic quantities such as the specific heat and susceptibility \cite{Gupta}. This peculiar type of phase transition has attracted considerable research attention. However, the possibility that a driving force causes the KT transition has not been clearly discussed or established. This raises the question of whether there is a system that does not exhibit the KT transition in equilibrium but does exhibit it in the presence of a driving force. To the best of our knowledge, such a system has not been reported thus far. In this paper, we consider three-dimensional O($N$) spin models driven with a uniform velocity over a random field as examples of such systems.
This ``non-equilibrium KT transition'' may define a novel type of universality class, wherein the interplay between disorder and driving plays a crucial role. The dynamics of an ordered system driven by an external force over a random substrate have been a topic of intensive research in statistical physics. A rich variety of complex phenomena results from the interplay between elasticity, quenched disorder, and driving. The best-known example of such systems may be driven vortex lattices in dirty superconductors \cite{LeDoussal98,Stroud,Bishop}. In such systems, impurities and crystalline defects act as random pinning potentials for vortex lines. There are other well-studied systems, e.g., charge-density waves \cite{Gruner} or colloids \cite{Reichhardt} driven by an external field. The driven vortex lattice systems exhibit a dynamical reordering transition at a finite driving velocity. In equilibrium, the vortex lattice system has two different phases \cite{Menon}. In the strong disorder regime, the vortex glass phase is realized, in which the spatial correlation function of the displacements of the vortices decays exponentially. In the weak disorder regime, the Bragg glass phase is realized, in which the spatial correlation function exhibits power-law decay \cite{LeDoussal94}. When the vortex glass is driven with a finite velocity, the effective disorder that the vortices experience becomes weaker because the random potential varies rapidly in a moving frame. At a large driving velocity, a first-order phase transition from the vortex glass phase to the Bragg glass phase occurs \cite{Stroud,Bishop}. While vortex lattice systems, which exhibit a first-order phase transition in the absence of disorder, are well studied, very few studies have focused on a driven system over a random substrate that exhibits a second-order phase transition in the absence of disorder.
In this paper, we consider the nature of the dynamical reordering transition of three-dimensional O($N$) spin models when they are driven with a uniform velocity over a random field. We show that the models with $N=2$ and $3$ exhibit QLRO at low temperatures and that the transition from the QLRO phase to the disordered phase resembles the KT transition. This paper is organized as follows: In Sec.~\ref{sec:Model}, we introduce the models and review their behavior in equilibrium. In Sec.~\ref{sec:SW}, for the case that $N=2$ (XY model), we show that this model exhibits QLRO at low temperatures by using the spin-wave approximation. In Sec.~\ref{sec:results}, the phase diagram for $N=2$ and $3$ with respect to temperature, disorder, and driving velocity is determined by means of numerical experiments. We calculate the specific heat and show that it exhibits no singularity at the transition point. In Sec.~\ref{sec:conclusions}, we summarize our results. We also discuss nematic liquid crystals flowing in a random medium as an experimental realization of our model. \section{Model} \label{sec:Model} Let $\{\phi^{\alpha} \}^N_{\alpha=1}$ be an $N$-component ($N \geq 2$) real vector field. The Hamiltonian of the three-dimensional O($N$) models with a quenched random field is given by \begin{eqnarray} H[\bvec{\phi};\bvec{h}]=\int {\rm d}^3 \bvec{r} \biggl[ \frac{1}{2}K|\bvec{\nabla} \bvec{\phi} (\bvec{r})|^2-\frac{r}{2} |\bvec{\phi}(\bvec{r})|^2 \nonumber \\ +\frac{g}{4N}| \bvec{\phi}(\bvec{r})|^4- \bvec{h}(\bvec{r}) \cdot \bvec{\phi}(\bvec{r}) \biggr] , \end{eqnarray} where $K$, $r$, and $g$ represent positive parameters. The quenched random field $\bvec{h}(\bvec{r})$ obeys a Gaussian distribution with $ \langle \bvec{h}(\bvec{r}) \rangle = \bvec{0} $ and $ \langle h^{\alpha}(\bvec{r}) h^{\beta}(\bvec{r'}) \rangle=h_0^2 \delta_{\alpha \beta} \delta(\bvec{r}-\bvec{r'})$. 
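On a cubic lattice with spacing $a$, the delta-correlated covariance of $\bvec{h}$ discretizes to independent Gaussians with variance $h_0^2/a^3$ per site and per component. A minimal sketch of such a sampler (the lattice size, parameters, and seed are illustrative choices, not those used in the paper):

```python
import numpy as np

def sample_random_field(L, N=2, h0=1.0, a=1.0, seed=0):
    """i.i.d. Gaussian random field h^alpha on an L^3 lattice.

    The continuum covariance h0^2 * delta_{ab} * delta(r - r')
    discretizes to variance h0^2 / a^3 per site and component,
    i.e. standard deviation h0 / a^{3/2}.
    """
    rng = np.random.default_rng(seed)
    return rng.normal(0.0, h0 / a**1.5, size=(N, L, L, L))

h = sample_random_field(L=32, h0=0.5, a=1.0)
print(h.shape, h.var())  # empirical variance should be close to h0^2/a^3 = 0.25
```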
The time evolution of the field $\bvec{\phi}(\bvec{r},t)$ is described by \begin{equation} \frac{\partial \phi^{\alpha}(\bvec{r},t)}{\partial t}+\bvec{v} \cdot \bvec{\nabla} \phi^{\alpha}(\bvec{r},t)=-\Gamma \frac{\delta H[\bvec{\phi};\bvec{h}]}{\delta \phi^{\alpha}(\bvec{r},t)}+\eta^{\alpha}(\bvec{r},t), \label{EM-LF} \end{equation} where $\bvec{v}=v \bvec{e_x}$ denotes a constant driving velocity independent of $(\bvec{r},t)$, and $\eta^{\alpha}(\bvec{r},t)$ represents thermal noise that satisfies $ \langle \eta^{\alpha}(\bvec{r},t) \eta^{\beta}(\bvec{r'},t') \rangle=2 \Gamma T \delta_{\alpha \beta} \delta(\bvec{r}-\bvec{r'}) \delta(t-t')$. This equation describes the relaxation dynamics of an ordered system that is driven with a uniform velocity over the quenched random field. Examples of such systems include nematic liquid crystals flowing in a random medium such as an aerogel or a porous substrate, where the random field corresponds to the random anchoring that occurs due to the complicated surface structure of the porous substrate. We investigate the non-equilibrium steady states of the models with $N=2$ (XY model) and $N=3$ (Heisenberg model). We call these models driven random field O($N$) models (DRFO($N$)Ms). Let us review the behavior of these models in equilibrium ($\bvec{v}=0$). The random field O($N$) models (RFO($N$)Ms) are among the simplest disordered systems in which spins with O($N$) symmetry are coupled to a quenched random field. The Imry-Ma argument and the ``dimensional reduction'' property state that LRO with continuous symmetry breaking is destroyed by an infinitesimally weak random field below four dimensions \cite{Imry-Ma,Aharony,Parisi}. The absence of LRO below four dimensions was also rigorously proved by Aizenman and Wehr \cite{Aizenman}. While the theory concerning the existence of LRO is well established, whether QLRO exists in three dimensions is a more subtle problem.
From the analogy of the Bragg glass phase in a vortex lattice system \cite{LeDoussal94}, theoretical studies based on renormalization group analysis have suggested the existence of QLRO for the three-dimensional random field XY model and the random anisotropy Heisenberg model \cite{Fisher85,Feldman}. In order to confirm this predicted QLRO, numerical simulations have also been conducted \cite{Gingras,Itakura}. However, definitive evidence for the existence of QLRO has not thus far been obtained because the correlation length rapidly increases when the strength of the random field decreases. Moreover, in Ref.~\cite{Zannoni00}, an experimental investigation of nematic liquid crystals in aerogels did not observe any QLRO. Recently, more sophisticated renormalization group studies have ruled out the existence of QLRO in three dimensions \cite{Tissier,LeDoussal06}. In summary, to the best of our knowledge, we can conclude that three-dimensional RFO($N$)Ms with $N \geq 2$ do not exhibit any phase transitions. \section{Spin-wave approximation} \label{sec:SW} Let us calculate the spin correlation function for the XY model ($N=2$) by using the spin-wave approximation at zero temperature. The magnitude of the spin is assumed to be fixed to unity. The order parameter field is represented by $ \bvec{\phi}(\bvec{r})=(\phi^1(\bvec{r}), \phi^2(\bvec{r}))=( {\rm cos}\theta(\bvec{r}), {\rm sin}\theta(\bvec{r}) ) $. The Hamiltonian is rewritten as \begin{equation} H[\theta]=\int {\rm d}^3 \bvec{r} \left[ \frac{1}{2}K(\nabla \theta(\bvec{r}))^2-h(\bvec{r}){\rm cos}(\theta(\bvec{r})-\xi(\bvec{r})) \right], \end{equation} where the random field is written as $ \bvec{h}(\bvec{r})=(h^1(\bvec{r}),h^2(\bvec{r}))=(h(\bvec{r}) {\rm cos}\xi(\bvec{r}), h(\bvec{r}) {\rm sin}\xi(\bvec{r})) $.
The equation of motion at zero temperature is given by \begin{eqnarray} \frac{\partial \theta(\bvec{r},t)}{\partial t} + v \frac{\partial \theta(\bvec{r},t)}{\partial x}=\Gamma \bigl[K \nabla^2 \theta(\bvec{r},t) \nonumber \\ -h(\bvec{r}){\rm sin}(\theta(\bvec{r},t)-\xi(\bvec{r})) \bigr]. \end{eqnarray} We define the Green function $G(\bvec{r})$ by its Fourier transform ${\tilde G}(\bvec{q})=(\Gamma K q^2 +ivq_x )^{-1}$. Then the formal solution for the steady state is given as \begin{eqnarray} \theta(\bvec{r})=\int {\rm d} \bvec{r'} G(\bvec{r}-\bvec{r'}) \nonumber \\ \times \Gamma \left\{-h^1(\bvec{r'}){\rm sin}\theta(\bvec{r'})+h^2(\bvec{r'}){\rm cos}\theta(\bvec{r'}) \right\}. \end{eqnarray} The mean square relative displacement $ \langle (\theta(\bvec{r_1})-\theta(\bvec{r_2}))^2 \rangle $ is calculated as \begin{eqnarray} \langle (\theta(\bvec{r_1})-\theta(\bvec{r_2}))^2 \rangle \nonumber \\ = \int {\rm d}^3 \bvec{r'} {\rm d}^3 \bvec{r''} \left\{G(\bvec{r_1}-\bvec{r'})-G(\bvec{r_2}-\bvec{r'}) \right\} \nonumber \\ \times \left\{G(\bvec{r_1}-\bvec{r''})-G(\bvec{r_2}-\bvec{r''}) \right\} \nonumber \\ \times \Gamma^2 \{ \: \langle h^1(\bvec{r'})h^1(\bvec{r''}){\rm sin}\theta(\bvec{r'}){\rm sin}\theta(\bvec{r''}) \rangle \nonumber \\ -2 \langle h^1(\bvec{r'})h^2(\bvec{r''}){\rm sin}\theta(\bvec{r'}){\rm cos}\theta(\bvec{r''}) \rangle \nonumber \\ + \langle h^2(\bvec{r'})h^2(\bvec{r''}){\rm cos}\theta(\bvec{r'}){\rm cos}\theta(\bvec{r''}) \rangle \: \}. \end{eqnarray} We use factorization approximations such as $ \langle h^{\alpha}(\bvec{r})h^{\beta}(\bvec{r'}){\rm sin}\theta(\bvec{r}){\rm sin}\theta(\bvec{r'}) \rangle \simeq \langle h^{\alpha}(\bvec{r})h^{\beta}(\bvec{r'}) \rangle \langle {\rm sin}\theta(\bvec{r}){\rm sin}\theta(\bvec{r'}) \rangle $ \cite{Garanin}. This factorization is justified when the correlation length of $ \theta(\bvec{r}) $, which is denoted by $ \xi $, is much larger than that of the random field, $ \xi_R $.
We will consider a self-consistent condition leading to $ \xi \gg \xi_R $ after the calculation of the mean square relative displacement. If this condition is satisfied, we have \begin{eqnarray} \langle (\theta(\bvec{r_1})-\theta(\bvec{r_2}))^2 \rangle \nonumber \\ = \Gamma^2 h_0^2 \int {\rm d} \bvec{r'} \left\{G(\bvec{r_1}-\bvec{r'})-G(\bvec{r_2}-\bvec{r'}) \right\}^2. \label{MSRD-G} \end{eqnarray} Substituting the explicit form of the Green function, we have \begin{eqnarray} \langle (\theta(\bvec{r_1})-\theta(\bvec{r_2}))^2 \rangle \nonumber \\ = 2 \Gamma^2 h_0^2 \int \frac{{\rm d}^3 \bvec{q}}{(2 \pi)^3} \frac{1-{\rm cos} \left\{ \bvec{q} \cdot (\bvec{r_1}-\bvec{r_2}) \right\} }{\Gamma^2 K^2 q^4+v^2 q_x^2}. \label{MSRD} \end{eqnarray} From Eq.~(\ref{MSRD}), we obtain the asymptotic behavior over a large distance $r \gg \Gamma K/v$, \begin{eqnarray} \langle (\theta(\bvec{r})-\theta(\bvec{0}))^2 \rangle \simeq \begin{cases} \frac{\Gamma h_0^2}{4 \pi K v} {\rm ln}\: r, \:\:(\bvec{r} \parallel \bvec{v}), & \\ \frac{\Gamma h_0^2}{2 \pi K v} {\rm ln}\: r, \:\:(\bvec{r} \perp \bvec{v}). & \end{cases} \label{MSRD-asymp} \end{eqnarray} The detailed calculation is presented in Appendix A. This logarithmic dependence on the distance is similar to that of the KT transition in the two-dimensional pure XY model \cite{Goldenfeld}. Let us consider a self-consistent condition leading to $ \xi \gg \xi_R $. We define the correlation length $ \xi $ by $ \langle (\theta(\xi)-\theta(0))^2 \rangle \sim 1 $. The correlation length of the random field $ \xi_R $ is equal to the short-distance cutoff $ \Lambda^{-1} $ because the correlation function is given by $ \langle h^{\alpha}(\bvec{r}) h^{\beta}(\bvec{r'}) \rangle=h_0^2 \delta_{\alpha \beta} \delta(\bvec{r}-\bvec{r'})$.
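The anisotropy in Eq.~(\ref{MSRD-asymp}) can be checked qualitatively by evaluating a lattice-regularized version of the $q$-integral in Eq.~(\ref{MSRD}) with an FFT. The following Python sketch uses illustrative parameter values, not those of the paper's simulations:

```python
import numpy as np

# Lattice-regularized evaluation of Eq. (MSRD): W(r) = 2 Gamma^2 h0^2
# * (1/N) sum_q (1 - cos(q.r)) / (Gamma^2 K^2 q^4 + v^2 qx^2).
# Parameter values are illustrative, not those used in the paper.
L = 64
Gamma, K, h0, v = 1.0, 1.0, 1.0, 2.0
q = 2.0 * np.pi * np.fft.fftfreq(L)                 # lattice wave numbers
qx, qy, qz = np.meshgrid(q, q, q, indexing="ij")
q2 = qx**2 + qy**2 + qz**2
S = np.zeros_like(q2)
mask = q2 > 0                                        # exclude the q = 0 mode
S[mask] = 1.0 / (Gamma**2 * K**2 * q2[mask]**2 + v**2 * qx[mask]**2)
s = np.fft.ifftn(S).real                             # (1/N) sum_q S(q) e^{i q.r}

def msrd(rx, ry, rz):
    """Mean square relative displacement between 0 and (rx, ry, rz)."""
    return 2.0 * Gamma**2 * h0**2 * (s[0, 0, 0] - s[rx, ry, rz])

r = 8
w_par = msrd(r, 0, 0)    # separation along the driving direction x
w_perp = msrd(0, r, 0)   # separation perpendicular to the drive
```

The $v^2 q_x^2$ term damps modes with $q_x \neq 0$, so fluctuations grow more slowly along the drive; numerically one finds $0 < w_{\parallel} < w_{\perp}$, consistent with the factor-two difference between the two asymptotic slopes.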
If we suppose $ \xi \gg \xi_R $ as an ansatz, we obtain from the above calculation \begin{equation} \frac{\xi}{\Lambda^{-1}} \sim {\rm exp} \left[ \frac{2 \pi K v}{\Gamma h_0^2} \right], \end{equation} which becomes infinitely large in the weak disorder limit or in the strong driving limit. This argument suggests that $ \xi \gg \xi_R $ when \begin{equation} \frac{2 \pi K v}{\Gamma h_0^2} \gg 1. \label{scale-separation} \end{equation} Thus, if the condition Eq.~(\ref{scale-separation}) is satisfied, the factorization approximation used in the calculation of Eq.~(\ref{MSRD-G}) is justified. In the next section, we will confirm that this condition is satisfied in the region of QLRO. The correlation function $ C(r) \equiv \langle \bvec{\phi}(\bvec{r}) \cdot \bvec{\phi}(\bvec{0}) \rangle $ is calculated as follows: \begin{eqnarray} C(r) = \langle e^{i(\theta(r)-\theta(0))} \rangle \nonumber \\ = {\rm exp} \left[ \sum_{n=1}^{\infty} \frac{1}{n!} i^n \langle (\theta(r)-\theta(0))^n \rangle_c \right], \end{eqnarray} where $ \langle (...)^n \rangle_c $ denotes the $n$-th cumulant. If one admits the factorization approximation noted above, it is shown that all higher cumulants approximately vanish because the random field $h^{\alpha}$ obeys a Gaussian distribution. Thus, we have $C(r) = e^{-\frac{1}{2} \langle (\theta(r)-\theta(0))^2 \rangle} $. From Eq.~(\ref{MSRD-asymp}), we have \begin{eqnarray} C(r) \propto \begin{cases} r^{-\alpha_{\parallel}}, \:\:(\bvec{r} \parallel \bvec{v}), & \\ r^{-\alpha_{\perp}}, \:\:(\bvec{r} \perp \bvec{v}), & \end{cases} \end{eqnarray} where the exponents are $\alpha_{\parallel}=\Gamma h_0^2/(8 \pi Kv)$ and $\alpha_{\perp}=\Gamma h_0^2/(4 \pi Kv)$. Therefore, we have shown that DRFO(2)M exhibits anisotropic QLRO at low temperatures, where the spin-wave approximation is valid. \section{Numerical results} \label{sec:results} We investigate the transition between the QLRO phase and the disordered phase by numerically solving Eq.~(\ref{EM-LF}).
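For later comparison with the numerics, the spin-wave predictions of the previous section can be collected in a small helper; this is a direct transcription of the exponent formulas and of the scale-separation ratio of Eq.~(\ref{scale-separation}):

```python
import math

# Spin-wave predictions: alpha_par = Gamma h0^2 / (8 pi K v),
# alpha_perp = Gamma h0^2 / (4 pi K v), and the scale-separation
# ratio 2 pi K v / (Gamma h0^2) of Eq. (scale-separation).
def sw_predictions(Gamma, h0, K, v):
    alpha_par = Gamma * h0**2 / (8.0 * math.pi * K * v)
    alpha_perp = Gamma * h0**2 / (4.0 * math.pi * K * v)
    separation = 2.0 * math.pi * K * v / (Gamma * h0**2)
    return alpha_par, alpha_perp, separation

# Parameters of Fig. 1: K = Gamma = 1, h0 = 1.0, v = 0.5
a_par, a_perp, sep = sw_predictions(1.0, 1.0, 1.0, 0.5)
```

With these parameters, $\alpha_{\perp} = 2\alpha_{\parallel} \approx 0.159$ and the separation ratio equals $\pi > 1$, so the validity condition of the factorization approximation is satisfied.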
The calculation is implemented in a moving frame with velocity $\bvec{v}$. Periodic and free boundary conditions are imposed for the directions perpendicular and parallel to the driving velocity $\bvec{v}$, respectively. The random field is continuously generated in the front boundary of the simulation box, and it moves with velocity $-\bvec{v}$. The detailed method of the numerical calculation is presented in Appendix B. Time integration is performed by employing the Euler method. The parameter values are fixed as $K=1$, $\Gamma=1$, $r=5$, and $g=10$. We set the time and space discretization as $ \delta t=0.005 $ and $ \delta x=1 $, respectively. \subsection{XY model} We first calculate the spin correlation function $C(r)$ for the XY model ($N=2$). We start from a random initial condition and obtain a steady state after a sufficiently long time. The correlation function is calculated from a field configuration $\bvec{\phi}(\bvec{r})$ of the steady state. For a sufficiently large system, the average with respect to the random field and the thermal noise can be replaced by the spatial average because of the self-averaging property. We also take the time average, which is equivalent to the ensemble average with respect to the random field and the thermal noise. The detailed explanation of the method used for the numerical calculation of the correlation function is presented in Appendix B. The correlation functions for different values of temperature are displayed in Fig.~\ref{fig:correlation}. The upper (a) and lower (b) panels depict $C(r)$ for the directions parallel and perpendicular to the driving velocity $\bvec{v}$, respectively. In order to assess the finite-size effect, we calculate $C(r)$ for system sizes of $60^3$, $100^3$, and $150^3$. The system size dependence of $C(r)$ for the perpendicular direction is larger than that for the parallel direction because of the periodic boundary conditions.
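Eq.~(\ref{EM-LF}) itself is not reproduced in this excerpt. Assuming the standard driven Ginzburg--Landau (soft-spin) Langevin form consistent with the parameters $r$ and $g$ quoted above, a minimal Euler-integration sketch on a small toy lattice (much smaller than the $60^3$--$150^3$ systems of the paper) reads:

```python
import numpy as np

# Minimal Euler integration of a driven random-field O(2) soft-spin model.
# The precise form of Eq. (EM-LF) is ASSUMED here to be the standard
# Ginzburg-Landau Langevin equation; lattice size and run length are toy values.
rng = np.random.default_rng(0)
L = 16
K, Gamma, r, g = 1.0, 1.0, 5.0, 10.0
h0, v, T = 1.0, 0.5, 0.4
dt, dx = 0.005, 1.0

phi = rng.normal(size=(2, L, L, L))            # random initial condition
h = h0 * rng.normal(size=(2, L, L, L))         # quenched random field

def laplacian(f):
    return sum(np.roll(f, s, axis=a) for a in (1, 2, 3) for s in (1, -1)) - 6.0 * f

for _ in range(200):
    grad_x = (phi - np.roll(phi, 1, axis=1)) / dx        # upwind x-derivative
    phi2 = (phi ** 2).sum(axis=0)
    force = K * laplacian(phi) / dx**2 + r * phi - g * phi2 * phi + h
    # thermal noise with the standard lattice normalization 2 Gamma T/(dt dx^3)
    noise = rng.normal(size=phi.shape) * np.sqrt(2.0 * Gamma * T / (dt * dx**3))
    phi = phi + dt * (-v * grad_x + Gamma * force + noise)
```

The field relaxes toward the soft-spin manifold $|\bvec{\phi}| \approx \sqrt{r/g}$ and stays bounded, which is the basic sanity check before measuring correlation functions.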
We can observe the phase transition from the low temperature regime, in which the correlation function exhibits a power-law decay, to the high temperature regime, in which it displays an exponential decay. The panel (c) represents the exponents as a function of temperature. The horizontal lines represent the theoretical prediction of the spin-wave approximation. The exponents $\alpha_{\parallel}$ and $\alpha_{\perp}$ increase linearly at low temperatures, but they exhibit strong temperature dependence near the transition temperature. \begin{figure} \centering \includegraphics[width=60mm]{figure-1.eps} \caption{(Color online) Spin correlation function $C(r)$ for different values of temperature. The upper (a) and lower (b) panels depict $C(r)$ for the directions parallel and perpendicular to the driving velocity $\bvec{v}$, respectively. The values of temperature are $T=0.40$, $0.60$, $0.70$, $0.80$ and $0.90$ from the top to the bottom of the panels. The other relevant parameters are $h_0=1.0$ and $v=0.5$. The transition temperature is estimated as $T_{\rm c}=0.70 \pm 0.05$. The symbols $ \circ $, $ + $, and $ \times $ represent $C(r)$ for system sizes of $60^3$, $100^3$, and $150^3$, respectively. The error-bars are comparable with the size of the symbols. The panel (c) shows the temperature dependence of the exponents $\alpha_{\parallel}$ and $\alpha_{\perp}$ determined from the data corresponding to the system size of $60^3$. The error due to the fitting with a power function is displayed. The horizontal lines represent the values predicted from the spin-wave approximation. } \label{fig:correlation} \end{figure} We next determine the transition temperature $T_{\rm c}$ as a function of the strength of the random field $h_0$ and the driving velocity $v$ by using the non-equilibrium relaxation method \cite{Ito07}. 
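The exponents in panel (c) are obtained by fitting $C(r)$ with a power function. The fitting step itself can be sketched as follows; the data here are synthetic, with a known exponent and an arbitrary noise level, not the measured correlation functions:

```python
import numpy as np

# Power-law fit of a correlation function on a log-log scale (a sketch;
# the exponent 0.16 and the 1% noise level are arbitrary test values).
rng = np.random.default_rng(1)
r = np.arange(2, 30)
C = r ** -0.16 * (1.0 + 0.01 * rng.normal(size=r.size))  # toy data
slope, intercept = np.polyfit(np.log(r), np.log(C), 1)
alpha = -slope                                            # fitted exponent
```

A linear fit of $\ln C$ against $\ln r$ recovers the exponent; in practice the fit window must exclude short distances (lattice effects) and distances comparable to the system size (boundary effects).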
With this method, we observe the relaxation of the magnetization $\bvec{M}(t)=V^{-1} \int \bvec{\phi}(\bvec{r}) {\rm d} \bvec{r}$ from the completely ordered state $\phi^1(\bvec{r}) \equiv 1$ and $\phi^2(\bvec{r}) \equiv 0 $. The asymptotic behavior of the magnetization for $T \geq T_{\rm c}$ is summarized as \begin{eqnarray} M(t) \sim \begin{cases} {\rm exp}(-t/\tau(T)), \:\:\:\:\:(T > T_{\rm c}), & \\ t^{-\lambda}, \:\:\:\:\:(T = T_{\rm c}), & \end{cases} \end{eqnarray} where $\tau(T)$ is the relaxation time. In order to determine the relaxation time as a function of temperature for the disordered phase, we assume the following scaling form, \begin{equation} M(t)=\tau(T)^{-\lambda}m(t/\tau(T)), \label{scaling} \end{equation} for $T > T_{\rm c}$. The relaxation of the magnetization $M(t)$ and its scaling plot are displayed in the panels (a) and (b) of Fig.~\ref{fig:magnetization}. The system size is $60^3$. 100 independent runs are performed for averaging. From the analogy to the KT transition of the two-dimensional pure XY model \cite{Ito03}, the correlation length is expected to diverge exponentially as $\xi \sim {\rm exp}(a/\sqrt{T-T_{\rm c}})$. Thus, we assume that the relaxation time diverges in the same way, \begin{equation} \tau(T) = B \: {\rm exp} \left( \frac{A}{\sqrt{T-T_{\rm c}}} \right). \label{rel-time} \end{equation} Fitting $\tau(T)$ to Eq.~(\ref{rel-time}) with parameters $A$, $B$, and $T_{\rm c}$, we obtain the transition temperature. The best fit is shown in the panel (c) of Fig.~\ref{fig:magnetization}. \begin{figure} \centering \includegraphics[width=80mm]{figure-2.eps} \caption{(Color online) Panel (a): Relaxation of the magnetization $M(t)$ for different values of temperature. The other parameters are $h_0=1.4$ and $v=1.0$. The values of temperature are $T=0.52,\: 0.55,\: 0.58,\: 0.65$, and $0.75$ from top to bottom.
Panel (b): Scaling plot of the magnetization to Eq.~(\ref{scaling}) with appropriately chosen $\tau(T)$ and $\lambda$. $ \lambda=0.08 $ and $ \tau(0.75)=1,\: \tau(0.65)=4.0,\: \tau(0.58)=18,\: \tau(0.55)=60,\: \tau(0.52)=300. $ Panel (c): Relaxation time $\tau$ as a function of temperature. The curve fitted to Eq.~(\ref{rel-time}) with $T_{\rm c}=0.43$ is shown. } \label{fig:magnetization} \end{figure} Fig.~\ref{fig:phase} displays the schematic phase diagram with respect to the strength of the disorder $h_0$, driving velocity $v$, and temperature $T$. The QLRO phase appears in the large-$v$ and low-$T$ regime. For $h_0=0$, the LRO phase exists because the model is identical to the three-dimensional pure XY model in the moving frame. The small-$v$ and high-$T$ regime corresponds to the disordered phase. The phase boundary at $T=0$ is given by $\Gamma h_0^2 \sim Kv$. It is noteworthy that an infinitesimally small random field destroys the LRO and leads to the QLRO. For an arbitrarily large value of $h_0$, the QLRO is observed for sufficiently large values of $v$. We have checked that Eq.~(\ref{scale-separation}) is satisfied in the region of the QLRO phase displayed in the phase diagram Fig.~\ref{fig:phase} at low temperatures. This justifies the validity of the spin-wave approximation. \begin{figure} \centering \includegraphics[width=80mm]{figure-3.eps} \caption{(Color online) Schematic phase diagram with respect to the disorder $h_0$, driving velocity $v$, and temperature $T$. The parameter $T_{\rm c0}$ denotes the transition temperature of the three-dimensional pure XY model. The abbreviation PM denotes the paramagnetic or disordered phase. The inset depicts $T_{\rm c}$ as a function of $v$. The solid lines serve as a visual guide. The squares (red) and circles (green) denote the values of $T_{\rm c}$ for $h_0=1.0$ and $h_0=1.2$, respectively.
} \label{fig:phase} \end{figure} In order to compare this transition with the KT transition of the two-dimensional pure XY model, we calculate the specific heat as a function of temperature. We define the specific heat as $c(T)=\partial \langle H \rangle_{\rm ss}/\partial T$, where $ \langle ...\rangle_{\rm ss}$ represents the average with respect to the non-equilibrium steady state. Since the system is not in equilibrium, this specific heat is not related to the energy fluctuation. Fig.~\ref{fig:specific-heat} shows the specific heat as a function of temperature for different values of the disorder strength $h_0$. The upper arrows represent the transition temperature $T_{\rm c}$ as determined by the non-equilibrium relaxation method. The case $h_0=0$ corresponds to the three-dimensional pure XY model. In the presence of a finite amount of disorder, the discontinuity of $c(T)$ disappears and a smooth peak remains above the transition temperature. The position of the peak shifts to lower temperatures as the strength of the disorder increases. It is to be noted that the specific heat does not exhibit any singularity at the transition temperature. The absence of a singularity and the existence of the smooth peak resemble the KT transition of the two-dimensional pure XY model \cite{Gupta}. \begin{figure} \centering \includegraphics[width=60mm]{figure-4.eps} \caption{(Color online) Specific heat $c(T)$ as a function of temperature for a system size of $60^3$. The driving velocity is $v=1.0$. The crosses (red), squares (green), and circles (blue) denote $c(T)$ values for $h_0=0.0, \:1.0$, and $1.4$, respectively. The upper arrows represent the transition temperature $T_{\rm c}$ as determined by the non-equilibrium relaxation method. } \label{fig:specific-heat} \end{figure} \subsection{Heisenberg model} Next, we consider the Heisenberg model ($N=3$). We found that the phase diagram of the Heisenberg model is qualitatively similar to that of the XY model.
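The transition temperatures reported in this section are extracted by fitting the measured relaxation times to Eq.~(\ref{rel-time}). A sketch of that fitting step, applied here to synthetic data generated from known parameters (not the simulation data of the paper); fitting $\ln\tau$ rather than $\tau$ avoids the huge dynamic range of the relaxation times:

```python
import numpy as np
from scipy.optimize import curve_fit

# Fit log(tau) to Eq. (rel-time): tau(T) = B exp(A / sqrt(T - Tc)).
def log_tau(T, A, B, Tc):
    return np.log(B) + A / np.sqrt(T - Tc)

A0, B0, Tc0 = 1.5, 0.5, 0.43                      # "true" synthetic parameters
T = np.array([0.52, 0.55, 0.58, 0.65, 0.75])
tau = np.exp(log_tau(T, A0, B0, Tc0))             # synthetic relaxation times
popt, _ = curve_fit(log_tau, T, np.log(tau), p0=(1.0, 1.0, 0.40),
                    bounds=([0.1, 0.01, 0.0], [10.0, 10.0, 0.51]))
A_fit, B_fit, Tc_fit = popt
```

The bound $T_{\rm c} < \min(T)$ keeps the square root real during the fit; with clean data the fit recovers the input parameters.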
Fig.~\ref{fig:phase-Heisenberg} shows the transition temperature determined by using the non-equilibrium relaxation method. We display the specific heat as a function of temperature in Fig.~\ref{fig:specific-heat-Heisenberg}. In the presence of a finite amount of disorder, the discontinuity of $c(T)$ disappears and a smooth peak remains above the transition temperature. The presence of the QLRO for $N=3$ contrasts with the cases of the two-dimensional pure O($N$) models, in which the KT transition does not exist for $N=3$ \cite{Zinn-Justin}. \begin{figure} \centering \includegraphics[width=55mm]{figure-5.eps} \caption{(Color online) Transition temperature $T_{\rm c}$ as a function of $v$ for the Heisenberg model ($N=3$). The squares (red) and circles (green) denote $T_{\rm c}$ values for $h_0=0.8$ and $h_0=1.0$, respectively. The solid lines serve as a visual guide.} \label{fig:phase-Heisenberg} \end{figure} \begin{figure} \centering \includegraphics[width=60mm]{figure-6.eps} \caption{(Color online) Specific heat $c(T)$ as a function of temperature for the Heisenberg model ($N=3$). The driving velocity is $v=1.0$. The crosses (red), squares (green), and circles (blue) denote $c(T)$ values for $h_0=0.0, \:0.8$, and $1.0$, respectively. The upper arrows represent the transition temperature $T_{\rm c}$ as determined by the non-equilibrium relaxation method. } \label{fig:specific-heat-Heisenberg} \end{figure} \section{Conclusions} \label{sec:conclusions} We investigated the non-equilibrium phase transition of three-dimensional O($N$) models driven with a uniform velocity over a quenched random field. For the cases that $N=2$ and $3$ (XY and Heisenberg model, respectively), we show that QLRO appears in the strong driving regime. Furthermore, the transition from the QLRO phase to the disordered phase resembles the KT transition in the two-dimensional pure XY model. 
It is an open question whether there exists a critical value $N_{\rm c}$ such that QLRO does not exist for $N > N_{\rm c}$. A renormalization group analysis is required to determine this value of $N_{\rm c}$. Moreover, it is also necessary to clarify the relation between this QLRO and the Bragg glass phase in the vortex lattice systems \cite{LeDoussal94,LeDoussal98}. We plan to investigate these theoretical aspects in our future studies. Finally, we remark on the topics related to the experimental realization of the QLRO under consideration. We consider nematic liquid crystals (NLCs) flowing in a random substrate such as an aerogel or a porous medium. Recently, the dynamics of liquid crystals confined to a complex geometry has attracted considerable attention due not only to fundamental research interest but also to its industrial applications \cite{Araki,Sengupta}. For NLCs in a random substrate, the random anchoring, which results from the complicated surface structure, significantly influences the ordering structure and thermodynamic properties \cite{Marinelli,Petridis}. Here, we remark that certain factors concerning NLCs flowing in the random substrate are not included in our models. For example, we ignore the inhomogeneity of the velocity field. The inhomogeneity of the velocity field acts as an additional random perturbation for the directors of the NLCs. Since the correlation of the spatial fluctuations of the hydrodynamic velocity field exhibits power-law decay, this random perturbation is qualitatively different from the random anchoring, whose correlation decays over a short distance. The consequences of this hydrodynamic effect and other factors excluded in our models should be investigated in future work. \begin{acknowledgments} The author thanks Shin-ichi Sasa for fruitful discussions.
The present study was supported by JSPS KAKENHI No.15J01614 and by the JSPS Core-to-Core program ``Non-equilibrium dynamics of soft-matter and information.'' \end{acknowledgments} \setcounter{equation}{0} \def\theequation{A\arabic{equation}}
\section{Introduction} \label{sec_Intro} The minimum time planar tilting problem of a spacecraft consists of controlling the spacecraft, with certain prescribed terminal conditions on the attitude angles, accelerations, and the velocity direction, while minimizing the maneuver time and keeping the yaw and rotation angles constant. This problem is of interest for (at least) two reasons. The first one is that the resulting optimal strategy can be used during the rocket ascent phase, along which the attitude and the orbit motions are strongly coupled. The second one is that, as we will prove in this paper, the optimal trajectories, solutions of the problem, exhibit a chattering phenomenon which is, in itself, difficult and thus interesting to analyze, but which is also rather bad news in view of practical issues. We thus analyze it in detail, providing sufficient conditions on the terminal conditions under which the optimal strategy does not involve any chattering, and in case chattering occurs, we provide alternative sub-optimal strategies. \subsection{The optimal control problem} \label{sec_ProbStat} Let us formulate the minimum time planar tilting maneuver problem (pitching movement of the spacecraft). Throughout the paper, we restrict our study to the planar case, in the sense that the spacecraft movement remains in a plane. \paragraph{Model.} We assume that the Earth is a fixed ball in the inertial space, and that the velocity of the wind is zero. We consider an axially symmetric spacecraft (see Figure \ref{planar_model}).
Taking coordinates $(x,y)$, we adopt the following notations: \begin{itemize} \item $v_x$ and $v_y$ are the velocity components of the velocity vector $\vec{v}$; \item $\theta$ is the pitch angle of the spacecraft; \item $\omega$ is the angular velocity with respect to the Earth; \item $r>0$ is the distance between the spacecraft mass center $O_b$ and the center $O$ of the Earth; \item $\ell>0$ is the distance from the thrust point $P$ to the mass center of the spacecraft $O_b$; \item $\vec{e}_a$ is the unit vector along the symmetric axis of the spacecraft, and $\vec{e}_c$ is the unit vector perpendicular to $\vec{e}_a$ pointing to the North; \item $I$ is the moment of inertia along the $\vec{e}_a \times \vec{e}_c$ axis; \item $\mu$ is the angle between the thrust vector $\vec{T}$ and the symmetric axis $\vec{e}_a$ of the spacecraft, and we must have $|\mu| \leq \mu_{max}$; \item $\gamma$ is the flight path angle defined as the angle between the velocity $\vec{v}$ and the axis $\vec{x}$. \end{itemize} \begin{figure}[h] \centering \includegraphics[scale = 0.9]{planar_model.eps} \caption{ Frames and parameters in problem ${\bf (MTTP)}$.} \label{planar_model} \end{figure} The motion of the spacecraft is controlled by the angle $\mu$. Since $\mu$ is small in practice (between $\pm 10$ degrees), we assume that $\cos \mu \approx 1$ and $\sin \mu \approx \mu$. Under this small angle assumption, the spacecraft evolves in time according to the system \begin{equation}\label{sys1} \begin{split} \dot{v}_x&= a \cos \theta - c v_x v_y , \\ \dot{v}_y&= a \sin \theta + c v_x^2 - g_0, \\ \dot{\theta} &= \omega - c v_x, \\ \dot{\omega}&= b u , \end{split} \end{equation} with control $u=-\mu/\mu_{max} \in [-1,1]$ and $\d{a=T/m}$, $\d{c=1/r}$, $\d{b=T \ell \mu_{max}/I}$ being positive constants. Actually, in our numerical simulations, we will use the parameters of Ariane 5 launchers (see Table \ref{sim_param}).
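System \eqref{sys1} can be transcribed directly. In the sketch below, the Table~\ref{sim_param} values are used for $a$, $b$, $c$, and $g_0 = 9.8\,\mathrm{m/s^2}$ is an assumed value, since $g_0$ is not specified in this excerpt:

```python
import math

# Right-hand side of system (sys1); g0 = 9.8 is an ASSUMED value,
# the other constants are those of Table (sim_param).
a, b, c, g0 = 12.0, 0.02, 1e-6, 9.8

def sys1(x, u):
    """x = (v_x, v_y, theta, omega); control u in [-1, 1]."""
    vx, vy, th, om = x
    return (a * math.cos(th) - c * vx * vy,
            a * math.sin(th) + c * vx**2 - g0,
            om - c * vx,
            b * u)

# one explicit Euler step as a usage example
x, dt = (100.0, 0.0, 0.0, 0.0), 0.01
x = tuple(s + dt * f for s, f in zip(x, sys1(x, 1.0)))
```

Note that the system is control-affine with a single input acting only on $\omega$, which is exactly the structure \eqref{singleinputcontrolaffinesystem} exploited below.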
The modulus of the velocity $v = \sqrt{v_x^2+v_y^2}$ takes values in $[0,v_{m}]$, and for the pitch angle and the angular velocity we have the estimates $\vert\theta\vert\leq \theta_{\max}$ and $\vert\omega\vert\leq \omega_{\max}$. \begin{table}[h] \centering \begin{tabular}{|c|c|c|c|c|c|c|} \hline & $a$ & $b$ & $c$ & $v_{m}\ (m/s)$ & $\omega_{\max}\ (rad/s)$ & $\theta_{\max}\ (rad)$\\ \hline Value & $12$ & $0.02$ & $1\times 10^{-6}$ & $5000$ & $0.3$ & $\pi$\\ \hline \end{tabular} \caption{System parameters.} \label{sim_param} \end{table} In the sequel, for convenience, we set $x_1=v_x$, $x_2=v_y$, $x_3=\theta$ and $x_4=\omega$. Setting $x=(x_1,x_2,x_3,x_4)$, the system \eqref{sys1} can be written as the \emph{single-input control-affine system} \begin{equation}\label{singleinputcontrolaffinesystem} \dot{x} = f_0 (x) + u f_1(x) , \end{equation} where $f_0$ and $f_1$ are the smooth vector fields on $\mathbb{R}^4$ defined by \begin{equation}\label{def_f0f1} f_0 = ( a \cos x_3 - c x_1 x_2 ) \frac{\partial}{\partial x_1} + (a \sin x_3 + c x_1^2- g_0 ) \frac{\partial}{\partial x_2} + (x_4 - c x_1) \frac{\partial}{\partial x_3} ,\qquad f_1 = b \frac{\partial}{\partial x_4} . \end{equation} \paragraph{Terminal conditions.} The requirements are the following: \begin{itemize} \item all initial variables are fixed; \item the final values of the variables $\theta$ and $\omega$ are prescribed, and we require that, at the final time $t_f$ (which is left free), the velocity vector $\vec{v}(t_f)$ be parallel to the spacecraft axis $\vec{e}_a(t_f)$. \end{itemize} It is indeed natural to consider $\vec{v}(t_f) \parallel \vec{e}_a(t_f)$ as a terminal condition, because the spacecraft considered is of rocket-type, and such spacecraft are usually planned to maintain a small angle of attack along the flight. Note that, here, the angle of attack is the angle between the spacecraft axis $\vec{e}_a$ and the velocity $\vec{v}$.
The zero angle of attack condition ensures that the aerolift is null in order to avoid excessive loading of the structure (see \cite{BLAKELOCK}). Since $\gamma = \arctan (x_2/x_1)$ and $v = \sqrt{x_1^2 + x_2^2}$, we have \begin{equation} \label{flightangle} \dot{\gamma} = (a \sin (x_3 - \gamma)- g_0 \cos \gamma )/ v+c v \cos \gamma ,\qquad \dot{v} = a \cos (x_3 - \gamma) - g_0 \sin \gamma . \end{equation} The final condition above is then written as $\gamma(t_f) = x_{3}(t_f)$. In terms of $v$ and $\gamma$, the velocity components $x_1$ and $x_2$ are $x_{1}=v \cos \gamma$ and $x_{2}=v \sin \gamma$. We set $v(0)=v_0$ and $\gamma(0)=\gamma_0$. \paragraph{Minimum time planar tilting problem.} Let $x_0 \in \mathbb{R}^4$, and let $v_0$, $\gamma_0$, $x_{30}$, $x_{40}$ and $x_{3f}$ be real numbers. In terms of the variables $x=(x_1,x_2,x_3,x_4)$, the initial point is defined by $$ x_0 =(x_{10},x_{20},x_{30},x_{40}), $$ with $x_{10}=v_0 \cos \gamma_0$ and $x_{20}=v_0 \sin \gamma_0$, and the final target is the submanifold of $\mathbb{R}^4$ defined by \begin{equation*} M_1 = \{ (x_1,x_2,x_3,x_4)\in\mathbb{R}^4 \mid x_2\cos x_{3f} - x_1 \sin x_{3f} = 0,\ x_3 = x_{3f},\ x_4=0 \}. \end{equation*} Throughout the paper, we consider the optimal control problem, denoted in short ${\bf (MTTP)}$, of steering the control system \eqref{sys1} from $x(0)=x_0$ to the final target $M_1$ in minimal time $t_f$, under the control constraint $u(t) \in [-1,1]$. \subsection{State of the art} The minimum time spacecraft attitude maneuver problem has been widely studied (see, e.g., \cite{Bilimoria,Fleming,Proulx,Shen}). Besides, there are many works on the coupled attitude orbit problem (see, e.g., \cite{Gong,Knutson,Wang}) and on the minimum time orbit transfer (see, e.g., \cite{Caillau,BFT,Kim,Thorne,Yue}). The problem ${\bf (MTTP)}$ under consideration in this paper is however more related to the well-known \emph{Markov-Dubins problem} (in short, MD problem) and to variants of it.
Indeed, if the system were to be directly controlled by the variable $x_3$, then, by taking the target manifold to be a single point ($x(t_f)=x_f$) and letting $a=b=1$, $c=0$, $g_0=0$, the system \eqref{sys1} would be written as $$ \dot{x}_1= \cos x_3, \quad \dot{x}_2= \sin x_3, \quad \dot{x}_3= u, $$ and therefore, the problem ${\bf (MTTP)}$ coincides with the MD problem, which was first settled in \cite{Markov} and was analyzed in detail by Dubins and many others (see, e.g., \cite{Dubins,Reeds,Sussmann}). It has been shown that the optimal strategy for the MD problem consists in first reaching the singular arc with a single bang arc, then, in following this singular arc until one is sufficiently close to the final target, and finally, in leaving the singular arc in order to reach the target with a single bang arc. If we assume that $g_0 \neq 0$, i.e., if we have the system $$ \dot{x}_1= \cos x_3, \quad \dot{x}_2= \sin x_3 - g_0, \quad \dot{x}_3= u, $$ then the problem ${\bf (MTTP)}$ coincides with the \emph{Zermelo-Markov-Dubins problem} (in short, ZMD problem) with constant wind field $(w_x,w_y)=(0,-g_0)$ (see, e.g., \cite{Bakolas,McGee,Glizer,Techy}). The optimal strategy of this problem consists of a finite number of bang and singular arcs. Both the MD and the ZMD problems may involve a singular arc because the singular controls of these problems are of intrinsic order one (see further in the present paper for this notion). However, this is not the case for the problem ${\bf (MTTP)}$ for which the singular control is of intrinsic order two. In this sense, a problem closer to ${\bf (MTTP)}$ (with $a=b=1$, $c=0$, $g_0=0$) is the \emph{Markov-Dubins problem with angular acceleration control} (in short, MDPAAC) (see \cite{Laumond,Sussmann2}). 
In that problem, the model is a dynamic extension of the MD system, given by $$ \dot{x}_1= \cos x_3, \quad \dot{x}_2= \sin x_3 , \quad \dot{x}_3= x_4,\quad \dot{x}_4=u. $$ The existence of a chattering phenomenon for MDPAAC was first exhibited in \cite{Sussmann2}. Although the optimality status of these chattering arcs remains unclear, the discussion of the chattering phenomenon brings interesting issues for the analysis of the present problem ${\bf (MTTP)}$. The system we consider here can also be seen as a variation of the MD system, with nonconstant wind and controlled by the inertial control. Thus, we expect the solution of the problem ${\bf (MTTP)}$ to share properties similar to those of MDPAAC (in particular, chattering), and of MD and ZMD (in view of the global behavior of the solution). In fact, using \cite{Zelikin1}, we will be able to prove the existence and the optimality of the chattering phenomenon in the problem ${\bf (MTTP)}$. The chattering phenomenon (also occurring in MDPAAC) is caused by singular controls of intrinsic order two. It makes the optimal synthesis for the problem ${\bf (MTTP)}$ essentially different from that of the MD or ZMD problem. However, in some sense the optimal solution of problem ${\bf (MTTP)}$ consists as well of three pieces: the first piece consists of bang arcs to reach the singular arc, the second piece is a singular arc, and the third piece consists of a succession of bang arcs finally reaching the target submanifold. Since the chattering phenomenon causes difficulties in practical use, we will also provide sufficient conditions on the terminal conditions, under which the chattering arcs do not appear in the optimal solution. This prediction result will be useful in order to decide which numerical method (either direct, or indirect, or sub-optimal) is the most appropriate.
\subsection{Chattering phenomenon} \label{sec_chat} Let us recall that we speak of a \textit{chattering phenomenon} (sometimes also called the Fuller phenomenon), when the optimal control switches an infinite number of times over a compact time interval. It is well known that, if the optimal trajectory of a given optimal control problem involves a singular arc of higher order, then no connection with a bang arc is possible, and bang arcs asymptotically joining the singular arc must chatter. On Figure \ref{chattering}(b), the control is singular over $(t_1,t_2)$, and the control $u(t)$ with $t \in (t_1-\epsilon_1,t_1) \cup (t_2,t_2+\epsilon_2)$, $\epsilon_1>0$, $\epsilon_2>0$ is chattering. The corresponding optimal trajectory is called a chattering trajectory. On Figure \ref{chattering}(a), the chattering trajectory ``oscillates'' around the singular part and finally ``gets off'' the singular trajectory with an infinite number of switchings. In this paper, we call \textit{singular junction} the junction point between a singular arc and a non-singular arc. \begin{figure}[h] \centering \includegraphics[scale=0.9]{chattering.eps} \caption{An illustration of the chattering phenomenon.} \label{chattering} \end{figure} To better explain the chattering phenomenon, we recall the well-known Fuller problem (see \cite{Fuller,MARCHAL}), which is the optimal control problem \begin{equation*} \left\{ \begin{split} & \min \int_0^{t_f} x_1(t)^2 \, dt , \\ & \dot{x}_1(t)=x_2(t),\ \dot{x}_2(t)=u(t), \quad |u(t)| \leq 1,\\ & x_1(0)=x_{10},\ x_2(0)=x_{20},\ x_1(t_f)=0,\ x_2(t_f)=0,\quad t_f\ \textrm{free}. \end{split} \right.
\end{equation*} We define $ \xi = \left( \frac{\sqrt{33}-1}{24} \right)^{1/2}$ as the unique positive root of the equation $\d{\xi^4+\xi^2/12-1/18=0}$, and we define the sets \begin{equation*} \begin{split} \Gamma_{+}&=\{ (x_1,x_2)\in \mathbb{R}^2 \mid x_1= \xi x_2^2,\ x_2<0 \} , \quad\ \ R_{+}=\{ (x_1,x_2)\in \mathbb{R}^2 \mid x_1 < - \mathrm{sign} (x_2) \xi x_2^2 \} , \\ \Gamma_{-}&=\{ (x_1,x_2)\in \mathbb{R}^2 \mid x_1= - \xi x_2^2,\ x_2>0 \} , \quad R_{-}=\{ (x_1,x_2)\in \mathbb{R}^2 \mid x_1 > - \mathrm{sign} (x_2) \xi x_2^2 \} . \end{split} \end{equation*} Then the optimal synthesis of the Fuller problem is the following (see \cite{Fuller2,SCHATTLER,Wonham}). The optimal control is given in feedback form by \begin{equation*} u^{\ast}=\begin{cases} \phantom{-}1 & \textrm{if}\ x \in R_{+} \bigcup \Gamma_{+} , \\ -1& \textrm{if}\ x \in R_{-} \bigcup \Gamma_{-} . \end{cases} \end{equation*} The control switches from $u=1$ to $u=-1$ at points on $\Gamma_{-}$ and from $u=-1$ to $u=1$ at points on $\Gamma_{+}$. The corresponding trajectories crossing the switching curves $\Gamma_{\pm}$ transversally are chattering arcs with an infinite number of switchings that accumulate with a geometric progression at the final time $t_f>0$. The optimal synthesis for the Fuller problem is drawn in Figure \ref{Fuller}. The solutions of the Fuller problem are chattering solutions since they switch transversally on the switching curves $\Gamma_{\pm}$ until finally reaching the target point on the singular surface defined by the union of all singular solutions.
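The feedback synthesis above is easy to simulate. The following sketch integrates the double integrator under $u^{\ast}$ with a crude Euler scheme (step size and horizon are arbitrary choices) and exhibits the chattering approach to the origin:

```python
import math

# Fuller feedback synthesis: u = +1 on R_+ (and Gamma_+), u = -1 on R_-
# (and Gamma_-), with switching curves x1 = -sign(x2) xi x2^2.
xi = math.sqrt((math.sqrt(33.0) - 1.0) / 24.0)

def u_star(x1, x2):
    return 1.0 if x1 < -math.copysign(xi * x2 * x2, x2) else -1.0

x1, x2, dt = 1.0, 0.0, 1e-4
for _ in range(100000):                # integrate up to t = 10
    u = u_star(x1, x2)
    x1, x2 = x1 + dt * x2, x2 + dt * u
# the state has chattered into a small neighborhood of the origin
```

In the discretized simulation the switchings cannot truly accumulate; the trajectory instead settles into a residual oscillation whose amplitude is set by the time step, which is the numerical fingerprint of the chattering discussed below.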
\begin{figure}[h] \centering \includegraphics[scale=0.4]{Fuller.eps} \caption{Optimal synthesis for the Fuller problem.} \label{Fuller} \end{figure} In fact, the optimal control of the Fuller problem, denoted as $u^{\ast}$, contains a countable set of switchings of the form \begin{equation*} u^{\ast}(t)=\begin{cases} \phantom{-}1 & \textrm{if}\ t \in [t_{2k},t_{2k+1}), \\ -1& \textrm{if}\ t \in [t_{2k+1},t_{2k+2}] , \end{cases} \end{equation*} where $\{ t_{k} \}_{k \in \mathbb{N}}$ is a set of switching times that satisfies ${ (t_{i+2} - t_{i+1}) < (t_{i+1} - t_{i})}$, $i \in \mathbb{N}$ and converges to $t_f < +\infty$. This means that the chattering arcs contain an infinite number of switchings within a finite time interval $t_f>0$. \medskip The analysis of chattering arcs is challenging. Based on a careful analysis of the Fuller problem, M.I. Zelikin and V.F. Borisov obtained a geometric portrait of solutions in the vicinity of the second order singular solutions (see \cite{Zelikin1,Zelikin2}). These solutions are called chattering solutions. Using their results, we will be able to prove rigorously the existence and optimality of chattering solutions in our problem ${\bf (MTTP)}$. The basic idea of their approach to provide sufficient conditions for optimality is based on the following well-known sufficient optimality condition: \begin{quote} \emph {Let $M$ be a smooth manifold of dimension $n$, and let $T^*M$ be its cotangent bundle, endowed with its canonical symplectic structure. 
If a submanifold $L$ of $T^*M$ generated by a given Hamiltonian system on $T^*M$ is Lagrangian, then a ``nice'' regular projection of trajectories of $L$ onto $M$ can also be seen, by canonical injection, as a Lagrangian submanifold of $T^*M$, and the trajectories are locally optimal in $C^0$ topology.} \end{quote} Recall that a submanifold $L$ of $T^*M$ is said to be Lagrangian if ${\oint_{\gamma} p \, dx =0}$ for every piecewise smooth closed contour $\gamma$ on the manifold. Hence, the manifold consisting of the solutions of a Hamiltonian system with transversality condition ($p\, dx=0$ on the target manifold) is Lagrangian. Denote the cost functional to be minimized as $C(\cdot,\cdot)$. A trajectory $\bar{x}(\cdot)$ is said to be locally optimal in $C^0$ topology if there exists $\epsilon>0$ such that, for every neighborhood $V$ of $\bar{x}(\cdot)$ in the state space, for every real number $\eta$ such that $| \eta | \leq \epsilon$, and for every trajectory $x(\cdot)$, associated with a control $v$ on $[0,T+\eta]$, contained in $V$, and satisfying $x(0) = \bar{x}(0) = x_0$, $x(T+\eta) = \bar{x}(T)$, there holds $C(T+\eta,v) \geq C(T,u)$. Hence, the problem of proving the local optimality of a solution comes down to constructing a Lagrangian submanifold. The usual way to construct a Lagrangian submanifold is to integrate the Hamiltonian system backward in time from the target point. However, this is not applicable to chattering arcs, because the control is no longer piecewise constant and the lengths of the switching intervals go to zero at the singular junction. In order to overcome this difficulty of the usual approach, M.I. Zelikin and V.F. Borisov proposed an explicit procedure to construct Lagrangian submanifolds filled by chattering trajectories. The main difficulty of this construction procedure is to analyze the regularity of the projections of the extremal lifts to the state space.
\medskip When using numerical methods to solve an optimal control problem, the occurrence of chattering arcs may be an obstacle to convergence. Recall that there are two main types of numerical methods for solving optimal control problems: indirect methods and direct methods (see, e.g., the survey paper \cite{Trelat}). The direct methods (see \cite{Betts}) consist of discretizing the state and the control and thus of reducing the problem to a nonlinear optimization problem (nonlinear programming) with constraints. Using standard optimization routines, it is then possible to make the algorithm converge for the Fuller problem. Of course, the numerical solution which is obtained can only have a finite number of switchings, because in the approximation scheme, the chattering control is actually approximated with a piecewise constant control. The indirect methods consist of numerically solving a boundary value problem obtained by applying the Pontryagin Maximum Principle, by means of a shooting method; for this reason, an indirect method is also called a shooting method (see \cite{StoerBulirsch}). In \cite{Bonnans}, it is shown that the presence of chattering arcs may imply ill-posedness (non-invertible Jacobian) of shooting methods for single-input problems. According to \cite{Zelikin2}, the difficulty is due to the numerical integration of the discontinuous Hamiltonian system (i.e., the right-hand side of the Hamiltonian system is discontinuous): chattering solutions worsen the approximation and error estimates of standard numerical integration methods. \subsection{Structure of the paper} The paper is structured as follows. In Sections \ref{sec_PMP} and \ref{sec_CompSingArcs}, the Pontryagin Maximum Principle (PMP) and a usual way to compute singular controls are recalled.
Section \ref{sec_geomchatter} is devoted to recalling some results of \cite{Zelikin1,Zelikin2}, explaining geometric features of the chattering phenomenon, based on a semi-canonical form of the Hamiltonian system along singular extremals of order two, with the objective of showing how these theoretical results can be applied in practice. The non-singular (bang-bang) extremals of ${\bf (MTTP)}$ are analyzed in Section \ref{sec_ReguArcs}, and the Lie bracket configuration is given in Section \ref{sec_Lie}. We prove in Section \ref{sec_SingArcs} that the singular controls for ${\bf (MTTP)}$ are of intrinsic order two, which implies the existence of chattering arcs. Based on the results of M.I. Zelikin and V.F. Borisov, we prove in Section \ref{sec_ChatArcs} that the chattering arcs of the problem ${\bf (MTTP)}$ are locally optimal in $C^0$ topology. In Section \ref{sec_SingPred}, we provide, for the cases with $c=0$ and $c >0$ respectively, sufficient conditions on the initial values under which the optimal solutions do not contain any singular arc and do not chatter. Numerical simulations, in Section \ref{sec_NumeChatPred}, illustrate these conditions. Since chattering is not desirable in view of practical issues, we propose some sub-optimal strategies in Section \ref{sec_SuboSolu}, by approximating the chattering control with piecewise constant controls. Our numerical results provide evidence of the convergence of the sub-optimal solutions to optimal solutions (but this convergence is not analyzed from the theoretical point of view in the present paper). \section{Geometric analysis of chattering} Let $M$ be a smooth manifold of dimension $n$, and let $M_1$ be a submanifold of $M$. We consider on $M$ the minimal time control problem \begin{equation} \label{pb_ocp} \left\{ \begin{split} & \min t_f , \\ & \dot{x}(t) = f_0(x(t))+u(t) f_1(x(t)),\quad |u(t)| \leq 1 , \\ & x(0) = x_0,\ x(t_f) \in M_1 , \quad t_f\geq 0\ \textrm{free}, \end{split}\right.
\end{equation} where $f_0$ and $f_1$ are two smooth vector fields on $M$. Since the system and the instantaneous cost are control-affine, and the control constraint is compact and convex, according to classical results (see, e.g., \cite{Cesari,Trelatbook}), there exists at least one optimal solution $(x(\cdot),u(\cdot))$, defined on $[0,t_f]$. \subsection{Application of the Pontryagin maximum principle} \label{sec_PMP} According to the Pontryagin maximum principle (in short, PMP, see \cite{PONTRYAGIN}), there must exist an absolutely continuous mapping $p(\cdot)$ defined on $[0,t_f]$ (called adjoint vector), such that $p(t)\in T^*_{x(t)}M$ for every $t\in[0,t_f]$, and a real number $p^0 \leq 0$, with $(p(\cdot),p^0)\neq 0$, such that \begin{equation} \label{Hamiltonsys} \dot{x}(t) = \frac{\partial H}{\partial p}(x(t),p(t),p^0,u(t)),\quad \dot{p}(t) = -\frac{\partial H}{\partial x}(x(t),p(t),p^0,u(t)) , \end{equation} almost everywhere on $[0,t_f]$, where \begin{equation} \label{Hamiltonianfun} H(x,p,p^0,u) = \langle p, f_0(x)\rangle+u \langle p,f_1(x) \rangle + p^0 \end{equation} is the Hamiltonian of the optimal control problem \eqref{pb_ocp}, and (the final time $t_f$ being free) \begin{equation} \label{Hamiltonsys2} H(x(t),p(t),p^0,u(t)) = \max_{-1\leq v(t)\leq 1} H(x(t),p(t),p^0,v(t)), \end{equation} almost everywhere on $[0,t_f]$. Moreover, we have the transversality condition \begin{equation} \label{Hamiltonsys3} p(t_f) \perp T_{x(t_f)} M_1 , \end{equation} where $T_{x(t_f)}M_1$ denotes the tangent space to $M_1$ at the point $x(t_f)$. The quadruple $(x(\cdot),p(\cdot),p^0,u(\cdot))$ is called the extremal lift of $x(\cdot)$. An extremal is said to be normal (resp., abnormal) if $p^0 < 0$ (resp., $p^0 = 0$). We define the functions \begin{equation} \label{HamilFun} h_0(x,p)=\langle p, f_0(x) \rangle ,\quad h_1(x,p)=\frac{\partial H}{\partial u}(x,p,p^0,u) = \langle p, f_1(x) \rangle . 
\end{equation} It follows from \eqref{Hamiltonsys2} that $u(t) = \mathrm{sign}(\varphi(t))$, whenever $\varphi(t) = h_1(x(t),p(t))\neq 0$. For this reason, the function $\varphi$ is also called the switching function. \paragraph{Bang arcs.} We say that the trajectory $x(\cdot)$ restricted to a sub-interval $I$ of $[0,t_f]$ is a \emph{bang arc} if $u(t)$ is constant along $I$, equal either to $+1$ or to $-1$. We say that the trajectory is bang-bang on $[0,t_f]$ if it is the concatenation of bang arcs. \paragraph{Singular arcs.} If $\varphi(t)=h_1(x(t),p(t))=0$ along a sub-interval $I$ of $[0,t_f]$, then the relation \eqref{Hamiltonsys2} does not allow one to directly infer the control, and in that case we speak of a \emph{singular arc}, or of a \emph{singular extremal}. \medskip Equivalently, a singular control is defined as follows. The end-point mapping $E: \mathbb{R}^n \times \mathbb{R} \times L^\infty(0,+\infty;\mathbb{R}) \to \mathbb{R}^n$ of the system is defined by $E(x_0,t_f,u)=x(x_0,t_f,u)$ where $t \mapsto x(x_0,t,u)$ is the trajectory solution of the control system, corresponding to the control $u$, such that $x(x_0,0,u)=x_0$ (the domain of definition is then the set of controls for which the trajectory is indeed globally defined on $[0,t_f]$). A trajectory $x(\cdot)$, defined on $[0,t_f]$, with $x(0)=x_0$, associated with a control $u$, is said to be \emph{singular} if the differential $\partial_u E(x_0,t_f,u)$ is not of full rank. Accordingly, we speak of a \emph{singular control}. It is well known that a trajectory $x(\cdot)$ is singular on $[0,t_f]$ if and only if it has an extremal lift $(x(\cdot),p(\cdot),p^0,u(\cdot))$, satisfying \eqref{Hamiltonsys} and $h_1(x(t),p(t))=0$ on $[0,t_f]$ (see \cite{BonnardChyba,Trelatbook}). This extremal lift is called a \emph{singular extremal}.
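The rank condition in this characterization can be explored numerically. The following sketch (ours, for illustration; the thresholds are heuristic) takes the vector fields $f_0(x)=(x_2,0,x_1^2/2)^\top$ and $f_1=(0,1,0)^\top$ (a three-dimensional reformulation of the Fuller problem), integrates the control system for a piecewise constant control by a Runge-Kutta scheme, and estimates the rank of $\partial_u E(x_0,t_f,u)$ by finite differences along the candidate singular control $u \equiv 0$:

```python
import numpy as np

def f(x, u):
    # f0 + u*f1 with f0 = (x2, 0, x1^2/2) and f1 = (0, 1, 0)
    return np.array([x[1], u, 0.5 * x[0] ** 2])

def endpoint(x0, u_bins, tf, steps_per_bin=20):
    """End-point mapping E(x0, tf, u) for a piecewise constant control (RK4 integration)."""
    x = np.array(x0, dtype=float)
    h = tf / (len(u_bins) * steps_per_bin)
    for u in u_bins:
        for _ in range(steps_per_bin):
            k1 = f(x, u)
            k2 = f(x + 0.5 * h * k1, u)
            k3 = f(x + 0.5 * h * k2, u)
            k4 = f(x + h * k3, u)
            x = x + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return x

N, tf, eps = 8, 1.0, 1e-6
u_ref = np.zeros(N)          # candidate singular control u = 0, along x1 = x2 = 0
E0 = endpoint(np.zeros(3), u_ref, tf)
J = np.zeros((3, N))         # finite-difference approximation of the differential of E in u
for i in range(N):
    du = u_ref.copy()
    du[i] += eps
    J[:, i] = (endpoint(np.zeros(3), du, tf) - E0) / eps
sv = np.linalg.svd(J, compute_uv=False)
numerical_rank = int(np.sum(sv > 1e-4 * sv[0]))  # 2 < 3: the differential is not of full rank
```

The third state (the running cost) responds only at second order to control perturbations along this trajectory, so the numerical rank is $2 < 3$, consistent with $u \equiv 0$ being a singular control.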
\subsection{Computation of singular arcs} \label{sec_CompSingArcs} In order to compute a singular control, the usual method (see \cite{BonnardChyba}) consists of differentiating repeatedly the relation \begin{equation} \label{singularcond1} \varphi(t)=h_1(x(t),p(t))=0 \end{equation} with respect to time, until the control appears in a nontrivial way. Using the Hamiltonian system \eqref{Hamiltonsys}, such differentiations are done thanks to Poisson brackets and Lie brackets. By differentiating \eqref{singularcond1} a first time (along the interval $I$), we obtain \begin{equation} \label{singularcond2} 0 = \dot\varphi(t) = \{h_0,h_1\}(x(t),p(t)) = \langle p(t),[f_0,f_1](x(t))\rangle, \end{equation} which is a new constraint. Differentiating a second time, we obtain \begin{equation*} \begin{split} 0 = \ddot\varphi(t) & =\{h_0,\{h_0,h_1\}\}(x(t),p(t)) + u(t) \{h_1,\{h_0,h_1\}\}(x(t),p(t)) \\ &= \langle p(t),[f_0,[f_0,f_1]](x(t))\rangle + u(t) \langle p(t),[f_1,[f_0,f_1]](x(t))\rangle, \end{split} \end{equation*} in which the control now appears in a nontrivial way provided that $\{h_1,\{h_0,h_1\}\}(x(t),p(t)) < 0$. The latter condition is known as the \emph{strengthened Legendre-Clebsch condition}. Under this condition, we can indeed compute the singular control as $$ u(t) = -\frac{\{ h_0,\{ h_0,h_1\}\}(x(t),p(t))}{\{ h_1,\{ h_0,h_1\}\}(x(t),p(t))}. $$ Note that the first derivative of $\varphi(\cdot)$ does not involve the control. Hence, at least two differentiations in time are necessary for the control to appear in a nontrivial way. Such controls are also said to be of \emph{minimal order}, and actually this property is generic (see \cite{Bon-Kup97,Chitour_Jean_Trelat}). Hereafter, due to the fact that, along optimal singular arcs, the control can only appear after an even number of differentiations, we also say that such singular arcs are of \emph{intrinsic order one}.
If $\{h_1,\{h_0,h_1\}\}(x(t),p(t))= 0$ identically on $I$, then the above computation does not suffice and we need to differentiate more. In that case, we see that we have two additional constraints: \begin{equation} \label{singularcond3} \{h_0,\{h_0,h_1\}\}(x(t),p(t)) = \langle p(t),[f_0,[f_0,f_1]](x(t))\rangle = 0, \end{equation} and \begin{equation*} \{h_1,\{h_0,h_1\}\}(x(t),p(t)) = \langle p(t),[f_1,[f_0,f_1]](x(t))\rangle = 0, \end{equation*} for every $t\in I$. Let us recall the concept of the order of a singular control. Roughly speaking, it is the first integer $m$ such that the control $u$ appears in a nontrivial way in the $(2m)^\textrm{th}$ derivative of the switching function $\varphi(\cdot)$ (see \cite{SCHATTLER,Zelikin2}). \begin{defi} \label{def_singularorder} The singular control $u$ (along the sub-interval $I$) is said to be of \emph{local order} $k$ if the conditions \begin{equation*} \frac{\partial}{\partial u} \varphi^{(i)}(x(t),p(t)) = 0, \quad i=0,1,\cdots,2k-1, \quad \frac{\partial}{\partial u} \varphi^{(2k)}(x(t),p(t)) \neq 0, \end{equation*} hold along the sub-interval $I$. If moreover the Lie brackets $[f_1,\mathrm{ad}^{i}f_0.f_1]$, $i=0,\cdots,2k-2$, are identically equal to zero (over the whole space), then the singular control $u$ is said to be of \emph{intrinsic order} $k$. \end{defi} We adopt the usual notations $\mathrm{ad}f_0.f_1 = [f_0,f_1]$ (resp., $\mathrm{ad}h_0.h_1 = \{h_0,h_1\}$) and $\mathrm{ad}^if_0.f_1 = [f_0,\mathrm{ad}^{i-1}f_0.f_1]$ (resp., $\mathrm{ad}^ih_0.h_1 = \{h_0,\mathrm{ad}^{i-1}h_0.h_1\}$).
\begin{rem} If a singular control $u$ is of \emph{local} order two, then the conditions (along $I$) $$ \frac{\partial}{\partial u} \varphi^{(2)}(t) =\langle p(t),[f_1,\mathrm{ad} f_0.f_1] (x(t))\rangle= 0, $$ and $$ \frac{\partial}{\partial u} \varphi^{(3)}(t) =2 \langle p(t),[f_1,\mathrm{ad}^2 f_0.f_1] (x(t))\rangle + u(t) \langle p(t),[f_1,[f_1,\mathrm{ad} f_0.f_1]] (x(t))\rangle= 0, $$ are additional constraints that must be satisfied along the singular arc. In contrast, if $u$ is of \emph{intrinsic} order two, then these conditions are trivially satisfied since $[f_1,\mathrm{ad} f_0.f_1]\equiv0$ and $[f_1,\mathrm{ad}^2 f_0.f_1]\equiv0$. In the present paper, we are in the situation of singular arcs of intrinsic order two, and we will then focus on that case. \end{rem} Actually, we did not consider, in the above definition, the case where the first nonzero derivative is of odd order. Indeed, such singular controls are never optimal, and hence we do not consider them in our analysis. This fact is due to the following well-known result, usually referred to as Kelley's condition for singular extremals of local order $k$ (see \cite{Kelley,Krener}): \begin{quote} \textit{ If a trajectory $x(\cdot)$, associated with a singular control $u(\cdot)$, is locally time-optimal on $[0,t_f]$ in $L^\infty$ topology, then the generalized Legendre-Clebsch condition $$ (-1)^k \frac{\partial }{\partial u} \frac{d^{2k} h_1}{dt^{2k}} \leq 0, $$ is satisfied along the extremal. Recall that a trajectory $\bar{x}(\cdot)$ is said to be locally optimal in $L^\infty$ topology if there exists $\epsilon>0$ such that, for every neighborhood $V$ of $u$ in $L^\infty([0,T+\epsilon],U)$, for every real number $\eta$ such that $| \eta | \leq \epsilon$, and for every control $v \in V$ satisfying $E(x_0,T+\eta,v) = E(x_0,T,u)$, there holds $C(T+\eta,v) \geq C(T,u)$, where $E: \mathbb{R}^n \times \mathbb{R} \times L^\infty(0,+\infty;\mathbb{R}) \to \mathbb{R}^n$ is the end-point mapping defined by $E(x_0,t_f,u)=x(x_0,t_f,u)$.
} \end{quote} Therefore, the generalized Legendre-Clebsch condition for a singular control of \emph{local} order $2$ is \begin{equation*} \langle p(t),[f_1,\mathrm{ad}^3f_0.f_1] (x(t)) + [f_0,[f_0,[f_1,[f_0,f_1]]]] (x(t)) + [f_0,[f_1,\mathrm{ad}^2f_0.f_1]] (x(t))\rangle \leq 0, \end{equation*} and if the singular control is of \emph{intrinsic} order $2$, then this condition takes the simpler form $$ \langle p(t),[f_1,\mathrm{ad}^3f_0.f_1] (x(t))\rangle \leq 0. $$ \medskip Turning back to the previous computation, if the singular control is of intrinsic order two, then by differentiating $\ddot{\varphi}(t) = \{h_0,\{h_0,h_1\}\}(x(t),p(t))$, we get \begin{equation*} \begin{split} 0 = \varphi^{(3)}(t) & =\{h_0,\mathrm{ad}^2h_0.h_1\}(x(t),p(t)) + u(t) \{h_1,\mathrm{ad}^2h_0.h_1\}(x(t),p(t)) \\ &= \langle p(t), [f_0,\mathrm{ad}^2f_0.f_1](x(t))\rangle + u(t) \langle p(t), [f_1,\mathrm{ad}^2f_0.f_1](x(t))\rangle, \end{split} \end{equation*} which, using the fact that $[f_1,\mathrm{ad}^2f_0.f_1]\equiv 0$, leads to the additional constraint \begin{equation} \label{singularcond4} \{h_0,\mathrm{ad}^2h_0.h_1\}(x(t),p(t)) = \langle p(t), [f_0,\mathrm{ad}^2f_0.f_1](x(t))\rangle = 0. \end{equation} Differentiating again, we get \begin{equation*} \begin{split} 0 = \varphi^{(4)}(t) & =\{h_0,\mathrm{ad}^3 h_0.h_1\}(x(t),p(t)) + u(t) \{h_1,\mathrm{ad}^3 h_0.h_1\}(x(t),p(t)) \\ &= \langle p(t), [f_0,\mathrm{ad}^3 f_0.f_1](x(t))\rangle + u(t) \langle p(t), [f_1,\mathrm{ad}^3 f_0.f_1](x(t))\rangle. \end{split} \end{equation*} By definition, we have $\langle p(t), [f_1,\mathrm{ad}^3 f_0.f_1](x(t))\rangle \neq 0$, and thus the singular control is \begin{equation} \label{singularcontrolorder2} u(t) = -\frac{ \mathrm{ad}^4 h_0.h_1 (x(t),p(t))}{\{h_1, \mathrm{ad}^3 h_0.h_1 \}(x(t),p(t))}, \end{equation} which is smooth.
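As an elementary illustration of the above conditions (the computation is ours, carried out on the Fuller problem of the introduction), consider the Hamiltonian $H = p_1 x_2 + p_2 u + p^0 x_1^2/2$ of the Fuller problem, whose adjoint equations are $\dot{p}_1 = -p^0 x_1$ and $\dot{p}_2 = -p_1$. Differentiating the switching function $\varphi = h_1 = p_2$ along an extremal gives
\begin{equation*}
\dot{\varphi} = -p_1 , \quad \ddot{\varphi} = p^0 x_1 , \quad \varphi^{(3)} = p^0 x_2 , \quad \varphi^{(4)} = p^0 u ,
\end{equation*}
so that the control appears for the first time at the fourth differentiation, and
\begin{equation*}
(-1)^2 \, \frac{\partial}{\partial u} \frac{d^{4} h_1}{dt^{4}} = p^0 < 0
\end{equation*}
along any normal extremal: the singular control $u=0$ is of intrinsic order two, and Kelley's condition (with $k=2$) is satisfied in the strengthened form.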
\begin{rem} Along such a singular arc of intrinsic order two, the singular control is given by \eqref{singularcontrolorder2} and the constraints \eqref{singularcond1}, \eqref{singularcond2}, \eqref{singularcond3}, \eqref{singularcond4} must be satisfied along the arc. \end{rem} \medskip In this paper, we are actually concerned with optimal singular trajectories of intrinsic order two, which cause the occurrence of a chattering phenomenon in our problem. Let us recall the following result (see \cite{Kelley,McDanell,Zelikin1}). \begin{lem}\label{thm_bs} We assume that the optimal solution $x(\cdot)$ of the optimal control problem \eqref{pb_ocp} involves a singular arc (on a sub-interval $I$) of intrinsic order two, for which the strengthened generalized Legendre-Clebsch condition $$ \frac{\partial }{\partial u} \frac{d^{4} h_1(t)}{dt^{4}} = \{h_1,\mathrm{ad}^3h_0.h_1 \}(x(t),p(t)) < 0 $$ holds true along an extremal lift. If we have $\vert u(t)\vert < 1$ along the singular arc, then the singular arc cannot be matched directly with any bang arc. In particular, if $I$ is a proper subset of $[0,t_f]$, then the optimal solution chatters, in the sense that there is an infinite number of bang arcs accumulating at the junction with the singular arc. \end{lem} Although this result is known, we will provide a short proof of it when analyzing our spacecraft problem in Section \ref{sec_SingArcs}. \begin{rem} Note that the Fuller problem can be adapted to fit in the framework above, although this is not a minimum time problem. Actually, it suffices to add the objective as a third state variable $x_3$, evolving according to $\dot{x}_3 = x_1^2/2$, and then the Fuller problem can be interpreted, by uniqueness of the solution, as a minimum time problem with the vector fields $f_0(x)=(x_2,0,x_1^2/2)^\top$ and $f_1=(0,1,0)^\top$. The corresponding singular extremal is therefore given by $u =0$, $x_1=x_2=p_1=p_2=p^0=0$ and $p_3 <0$ being constant. 
The solutions of the Fuller problem are optimal abnormal extremals for this three-dimensional problem. Moreover, it is easy to see that $u=0$ is a singular control of intrinsic order two, along which the strengthened generalized Legendre-Clebsch condition is satisfied ($p_3 <0$). Then Lemma \ref{thm_bs} can be applied. \end{rem} \subsection{Geometric analysis of the chattering phenomenon}\label{sec_geomchatter} In this section, we recall some results on chattering solutions established in \cite{Zelikin1,Zelikin2}. Since these references are not always easy to read, our objective is also to provide a more pedagogical exposition of these results and to show how they can be used in practice. Recall that a chattering solution is an optimal solution corresponding to a chattering control, that is, a control that switches an infinite number of times over a compact time interval. \subsubsection{Semi-canonical form} The semi-canonical form (see \cite{Kuppa,Zelikin1}) is a way of writing the Hamiltonian system \eqref{Hamiltonsys} in a neighborhood of its singular arcs, which will be used later to analyze the solutions near (in $C^0$ topology) singular arcs of intrinsic or local order two. The main idea is to design a change of variables that leads to a form involving the switching function and its derivatives directly as variables. This makes the analysis of the extremals near the singular arcs more convenient. Let $x(\cdot)$ be an optimal trajectory of \eqref{pb_ocp} on $[0,t_f]$, and let $(x(\cdot),p(\cdot),p^0,u(\cdot))$ be an extremal lift (coming from the PMP). We assume that $x(\cdot)$ involves a singular arc of intrinsic order two, along the sub-interval $I$, satisfying the strengthened generalized Legendre-Clebsch condition. The Hamiltonian \eqref{Hamiltonianfun} can be rewritten as $H=h_0+u h_1+p^0$, with $h_0$ and $h_1$ defined by \eqref{HamilFun}.
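The bracket conditions of Definition \ref{def_singularorder} can be checked symbolically for the three-dimensional reformulation of the Fuller problem given in the remark above. The following sketch (ours, for illustration; it assumes SymPy is available and uses the convention $[f,g] = Dg\,f - Df\,g$):

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
X = sp.Matrix([x1, x2, x3])
f0 = sp.Matrix([x2, 0, x1**2 / 2])   # Fuller problem recast as a minimal time problem
f1 = sp.Matrix([0, 1, 0])

def lie(f, g):
    """Lie bracket [f, g] = Dg.f - Df.g."""
    return sp.simplify(g.jacobian(X) * f - f.jacobian(X) * g)

ad1 = lie(f0, f1)        # ad f0.f1   = (-1, 0, 0)
ad2 = lie(f0, ad1)       # ad^2 f0.f1 = (0, 0, x1)
ad3 = lie(f0, ad2)       # ad^3 f0.f1 = (0, 0, x2)
ad4 = lie(f0, ad3)       # ad^4 f0.f1 = 0, hence the singular control u = 0

# intrinsic order two: [f1, ad^i f0.f1] = 0 for i = 0, 1, 2, while [f1, ad^3 f0.f1] != 0
b0 = lie(f1, f1)
b1 = lie(f1, ad1)
b2 = lie(f1, ad2)
b3 = lie(f1, ad3)        # = (0, 0, 1), so {h1, ad^3 h0.h1} = p3
```

This confirms that $u=0$ is singular of intrinsic order two, with $\{h_1,\mathrm{ad}^3 h_0.h_1\} = p_3 < 0$ (strengthened generalized Legendre-Clebsch condition) and $\mathrm{ad}^4 h_0.h_1 = 0$ in \eqref{singularcontrolorder2}.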
We assume that \begin{equation} \label{funindpcond1} \dim \mathrm{Span}\{f_1, \mathrm{ad}f_0.f_1, \mathrm{ad}^2f_0.f_1, \mathrm{ad}^3f_0.f_1\} = 4 . \end{equation} We define the new coordinates \begin{equation} \label{XPtoZW} z_1= h_1,\quad z_2= h^{(1)}_1= \{ h_0,h_1\},\quad z_3= h^{(2)}_1= \mathrm{ad}^2 h_0.h_1,\quad z_4= h_1^{(3)}= \mathrm{ad}^3 h_0.h_1 , \end{equation} and using that $[f_1,[f_0,f_1]] \equiv 0$ and that $\{h_1,\mathrm{ad}^3h_0.h_1\}<0$ along $I$, we have \begin{equation*} \dot{z}_1=z_2 ,\quad \dot{z}_2=z_3 ,\quad \dot{z}_3=z_4 ,\quad \dot{z}_4=\alpha(x,p)+ u \beta(x,p) , \end{equation*} where $\alpha = \mathrm{ad}^4 h_0.h_1$ and $\beta=\{h_1,\mathrm{ad}^3h_0.h_1\} < 0$. Note that $z_1$ is chosen as the switching function $\varphi(t)=h_1(x(t),p(t))$ and $z_i$ is chosen as the $(i-1)$-th derivative of the switching function. In fact, using that $[f_1,[f_0,f_1]] \equiv 0$ and using Jacobi's identity, we have $$ \{h_1,\{h_0,\{h_0,h_1\}\}\} = -\{h_0,\{\{h_0,h_1\},h_1\}\} - \{\{h_0,h_1\},\{h_1,h_0\}\} = \{h_0,\{h_1,\{h_0,h_1\}\}\} \equiv 0 . $$ This, together with $\beta < 0$, indicates that the singular control considered here is of intrinsic order two and satisfies the strengthened generalized Legendre-Clebsch condition. By definition, we have $z_i=0$, $i=1,2,3,4$, along such a singular arc. From \eqref{funindpcond1}, we infer that $z_1$, $z_2$, $z_3$, $z_4$ are functionally independent in a neighborhood of the extremal lift $(x(\cdot),p(\cdot))$, along $[0,t_f]$. We complement $z=(z_1,z_2,z_3,z_4)$ with $w=(w_1,\cdots,w_{2n-4}) \in \mathbb{R}^{2n-4}$ such that the Jacobian matrix of the mapping $ (x,p) \mapsto (z,w)$ is nondegenerate, i.e., \begin{equation*} \det \left( \frac{D(z,w)}{D(x,p)} \right ) \neq 0 , \end{equation*} along the extremal. Since our point of view is local, we assume that $(x,p)$ and $(z,w)$ live in $\mathbb{R}^{2n}$.
The Hamiltonian system \eqref{Hamiltonsys} can be rewritten, locally along the extremal, as \begin{equation} \label{Hamsys} \dot{z}_1=z_2 ,\quad \dot{z}_2=z_3 ,\quad \dot{z}_3=z_4 ,\quad \dot{z}_4=\alpha(z,w)+ u \beta(z,w),\quad \dot{w}=F(z,w,u) , \end{equation} and the extremal control is given by \begin{equation*} u(t)=\begin{cases} 1 & \textrm{if}\ z_1(t)>0 ,\\ -\alpha/\beta & \textrm{if}\ z_1(t)=0 , \\ -1 & \textrm{if}\ z_1(t)<0 . \end{cases} \end{equation*} Accordingly, we define the \emph{singular surface} (smooth manifold consisting of singular extremals of second order) as \begin{equation*} S = \{(z,w) \mid (z_1,z_2,z_3,z_4)=(0,0,0,0)\}, \end{equation*} and the \emph{switching surface} as \begin{equation*} \Gamma = \{(z,w) \mid z_1=0\}. \end{equation*} If a trajectory $z(\cdot)$ is a solution of \eqref{Hamsys}, then a straightforward calculation yields that $z_{\lambda}=G_{\lambda}(z(t/\lambda))$ is also a solution of \eqref{Hamsys}, for any number $\lambda>0$, where \begin{equation} \label{mappingG} G_{\lambda} (z(\frac{t}{\lambda})) = \left( \lambda^4 z_1\left(\frac{t}{\lambda}\right),\lambda^3 z_2\left(\frac{t}{\lambda}\right),\lambda^2 z_3\left(\frac{t}{\lambda}\right),\lambda z_4\left(\frac{t}{\lambda}\right) \right). \end{equation} This is an important property for the Fuller problem (self-similar solutions). The system \eqref{Hamsys} is useful in order to analyze the qualitative behavior of solutions near the singular surface consisting of singular extremals of intrinsic order two. To include some Hamiltonian systems having singular arcs of local order two, we consider a small perturbation of the system \eqref{Hamsys} in a neighborhood of a given point $(0,w_0) \in S$, given by \begin{equation} \label{small_perturbation} \left\{\begin{array}{l} \d{\dot{z_1} = z_2 + f_1(z,w,u)},\\ \d{\dot{z_2} = z_3 + f_2(z,w,u)},\\ \d{\dot{z_3} = z_4 + f_3(z,w,u)},\\ \d{\dot{z_4}= \alpha(w) + u \beta(w) + f_4(z,w,u)},\\ \d{\dot{w}= F(z,w,u) }, \end{array}\right.
\end{equation} with $f_i(z,w,u)=\mathrm{o}(z_{i+1})$, i.e., \begin{equation} \label{sp_cond} \lim_{\lambda \rightarrow 0^+} \lambda^{-(5-i)} \vert f_i(G_{\lambda}(z(t/\lambda)),w,u) \vert < +\infty, \quad i=1,2,3,4. \end{equation} The system \eqref{small_perturbation}-\eqref{sp_cond} is called a \emph{semi-canonical form}. \begin{rem} The variables $(z,w)$ can be chosen differently from \eqref{XPtoZW} in order to get a simpler local system \eqref{small_perturbation}. This is why this form is called semi-canonical, and not canonical. Moreover, this change of variable is not unique. \end{rem} \subsubsection{Geometry of chattering extremals} The first result concerns the existence of chattering solutions. In contrast to Lemma \ref{thm_bs}, this result can also be applied to the case of singular arcs of \emph{local} order two, and it describes the phase portrait of optimal extremals in the vicinity of a manifold of singular arcs of order two. Recall that the singular surface $S$ for the system \eqref{small_perturbation} is of codimension $4$. The surface $S$ satisfies four constraints $z_1=0$, $z_2=0$, $z_3=0$, $z_4=0$ corresponding respectively to null derivatives of the switching functions $\varphi^{(i)}$, $i=0,1,2,3$. Considering a point $(0,w_0) \in S$, if $\beta(w_0)<0$ and $| \alpha(w_0) | < -\beta(w_0)$, there exists a neighborhood of this point in which the singular extremals passing through it satisfy the generalized Legendre-Clebsch condition and the singular control $|u|=|-\alpha(w) / \beta(w)|<1$ is admissible. The following proposition indicates that, for any point in such a neighborhood, there exists a family of chattering extremals coming into this point, and there is another family of chattering extremals emanating from this point. Note that a family of chattering extremals is a one-parameter family, with the parameter $\lambda$ defined in \eqref{mappingG}. 
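Let us also verify the scaling property \eqref{mappingG} for the unperturbed system \eqref{Hamsys} (an elementary computation, which we include for the reader's convenience). Setting $z_i^{\lambda}(t) = \lambda^{5-i} z_i(t/\lambda)$ for $i=1,\ldots,4$, the chain rule gives, for $i=1,2,3$,
\begin{equation*}
\dot{z}_i^{\lambda}(t) = \lambda^{5-i}\, \frac{1}{\lambda}\, \dot{z}_i\left(\frac{t}{\lambda}\right) = \lambda^{4-i}\, z_{i+1}\left(\frac{t}{\lambda}\right) = z_{i+1}^{\lambda}(t) ,
\end{equation*}
and $\dot{z}_4^{\lambda}(t) = \dot{z}_4(t/\lambda) = \alpha + u \beta$ evaluated at $t/\lambda$. Along bang arcs the feedback control $u = \mathrm{sign}(z_1)$ is unchanged, since $z_1^{\lambda}$ and $z_1$ have the same sign; hence, whenever $\alpha$ and $\beta$ can be considered as constant (as in the Fuller problem, where $\alpha = 0$ and $\beta$ is constant), $z_{\lambda}$ is again a solution of \eqref{Hamsys}.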
\begin{prop}[Bundles with chattering fibers] \label{thm1} Consider the system \eqref{small_perturbation}, in an open neighborhood of the point $(0,w_0)$. If $\beta(w_0)<0$ and $|\alpha(w_0)|< - \beta(w_0)$, then there exists an open neighborhood $\mathcal{O}$ of $w_0$ in $\mathbb{R}^{2n-4}$ such that, for any $w \in \mathcal{O}$, there are two one-parameter families of chattering extremals intersecting only at the point $(0,w)$. The extremals of the families fill two manifolds $\mathcal{N}_w^{+}$ and $\mathcal{N}_w^{-}$, each of them being of dimension $2$ and homeomorphic to $ \mathbb{R}^{2}$, coming respectively into and out of the point $(0,w)$. The switching points of $\mathcal{N}_w^{\pm}$ fill two piecewise-smooth curves $\Gamma_w^{\pm}$. The union $\cup_{w \in \mathcal{O}} \mathcal{N}_w^{\pm}$ of all those submanifolds is endowed with the bundle structure with base $\mathcal{O}$ and two-dimensional piecewise smooth fibers filled by chattering extremals. \end{prop} \begin{figure}[h] \centering \includegraphics[scale = 0.7]{th1.eps} \caption{Phase portrait of optimal extremals near the singular surface.} \label{th1_portrait} \end{figure} Figure \ref{th1_portrait} illustrates Proposition \ref{thm1}. The extremals living in the submanifolds $\mathcal{N}_w^{+}$ and $\mathcal{N}_w^{-}$ are chattering. More precisely, the extremals in $\mathcal{N}_w^{+}$ reach $(0,w)$ (in finite time) with infinitely many switchings, and the extremals in $\mathcal{N}_w^{-}$ leave $(0,w)$ with infinitely many switchings. The submanifolds $\mathcal{N}_w^{\pm}$ can be seen as two-dimensional fibers. \begin{proof} The complete proof of Proposition \ref{thm1} is done in \cite{Zelikin1}. Let us however sketch the main steps. Assume that $z_2>0$. \begin{enumerate} \item Prove that there exist self-similar solutions (i.e., the one-parameter family of chattering solutions) for the unperturbed system \eqref{Hamsys} using the Poincar\'{e} mapping $\Phi$ of the switching surface to itself. 
\item Prove that the points on $S$ are the stable points of $\Phi\circ\Phi$, by calculating the eigenvalues of $d(\Phi\circ\Phi)(0,w_0)$. Applying the invariant manifold theorem, there exists a one-dimensional $\Phi\circ\Phi$-invariant submanifold transversal to $S$ and passing through the point $(0,w_0)$. The restriction of $\Phi\circ\Phi$ to this submanifold is a contracting mapping. This yields the existence of a two-dimensional manifold $\mathcal{N}_{w_0}^+$ in the $(z,w)$-space, filled by chattering extremals entering into $(0,w_0)$. Moreover, the smooth dependence theorem leads to the bundle structure of $\cup_{w_0} \mathcal{N}_{w_0}^+$. \item Prove that for the small perturbation system \eqref{small_perturbation}, the Poincar\'{e} mapping $\Phi$ is well defined and smooth at the points in the neighborhood of $\mathcal{N}_{w_0}^+$. Using similar techniques as in the first and second steps, prove that the solutions of the perturbed system have the same structure as that of the unperturbed system. \end{enumerate} When $z_2<0$, another two-dimensional manifold $\mathcal{N}_{w_0}^-$ in the $(z,w)$-space, filled by chattering extremals coming out of the point $(0,w_0)$, can be found, and $\cup_{w_0} \mathcal{N}_{w_0}^-$ is also endowed with a bundle structure. \end{proof} The subbundles described in Proposition \ref{thm1} are given by \begin{equation*} \Sigma^{\pm} = \cup_{w \in \mathcal{O}} \mathcal{N}_w^{\pm} , \end{equation*} where the subbundle $\Sigma^{+}$ (resp., $\Sigma^{-}$) is filled by chattering arcs that come into (resp., come out of) the singular surface. Moreover, we denote the switching surfaces as ${\Gamma^{\pm} = \cup_{w \in \mathcal{O}} \Gamma_w^{\pm}}$. Note that it suffices to consider only the subbundle $\Sigma^{+}$, since the properties of $\Sigma^{-}$ can be obtained similarly. We consider the canonical projection $\pi:\Sigma^{+} \to \mathcal{O}$ from the subbundle to the base.
\subsubsection{Optimality status} We now raise the question of knowing whether these chattering extremals are optimal or not. Let us consider again the Fuller problem to give an intuitive idea. Using \eqref{XPtoZW}, we choose the new variables $z=(p_2, -p_1, -2 x_1, -2 x_2)$ and then clearly the singular surface coincides with the origin. According to Proposition \ref{thm1}, there are two integral submanifolds of dimension $2$ that are filled by chattering extremals coming into and out of the origin within finite time, with infinitely many switchings. We consider the canonical projection $\pi^\ast: (z,w) \rightarrow x$ from the $(z,w)$-space to the $x$-space (state space). It is known that the extremals fill a Lagrangian submanifold in the $(z,w)$-space. Their projections onto the state space are the trajectories, whose local optimality we would like to ensure. According to the conjugate point theory (see \cite{AGRACHEV,Bonnard}), it suffices to ensure that the projection $\pi^\ast$ be regular along the Lagrangian manifold (in other words, we require that its differential be surjective along that manifold). Note that we can consider as well the projection from the $(x,p)$-space to the $x$-space, instead of $\pi^\ast$, because the coordinate change $(x,p) \mapsto (z,w)$ is bijective in the neighborhood of a point $(x,p) \in S$. Indeed, this coordinate change only needs to be regular in order to provide the regularity of the projection from the $(x,p)$-space to the $x$-space. As illustrated on Figure \ref{fuller_ct}(a), the above regularity condition ensures that the trajectories in the $x$-space do not intersect each other before reaching the target point or submanifold, and thus makes it possible to avoid the loss of local optimality of the trajectories at the intersection point (i.e., the conjugate point).
Figures \ref{fuller_ct}(b) and \ref{fuller_ct}(c) show the optimal synthesis of the chattering trajectories $\pi^\ast (\mathcal{N}_w^{+})$ and $\pi^\ast (\mathcal{N}_w^{-})$ for the Fuller problem, respectively. These chattering solutions do not intersect and they are locally optimal. \begin{figure}[h] \centering \includegraphics[scale = 0.85]{fuller_CT.eps} \caption{ (a) Illustration of the sufficient optimality condition; (b)-(c) Optimal synthesis of the Fuller problem.} \label{fuller_ct} \end{figure} Let $M_1$ be a target submanifold contained in the projection of the singular surface $\pi^\ast S$. For any point $x \in M_1$, we define its lift $(x,p(x))$ satisfying $(x,p(x))\in S$, $H(x,p(x))=0$ and $p(x)\,dx=0$ (transversality condition). The union $N$ of all such points $(x,p(x))$ must be transversal to the flow of the singular extremals in $S$. Thus, the singular extremals reaching the submanifold $N$ fill a submanifold $N^\ast$. In short, the submanifold $N$ is a lift of the target $M_1$ that intersects the singular extremals transversally. It is easy to see that the submanifold $N^\ast$ is Lagrangian. Hence the subbundle $\pi^{-1}(N^\ast)$ is Lagrangian as well. Therefore, according to the theory of Lagrangian manifolds and sufficient optimality conditions, it suffices to check the regularity of the projection $\pi^\ast$ restricted to $\pi^{-1}(N^\ast)$. The following proposition provides sufficient optimality conditions (see \cite{Zelikin2}) when the submanifolds $N$ and $N^\ast$ are of dimension $n-3$ and $n-2$, respectively. \begin{prop}\label{prop_opti} Consider the subbundle $\pi ^{-1} (N^{\ast})$ of the bundle $\Sigma^+$. Assume that the restriction of the projection $\pi^\ast$ to any smooth part of the bundle $\pi^{-1} (N^{\ast})$ is regular and can be regularly extended to the boundary points of the smooth part. Assume that the target manifold $M_1$ is connected.
Then the projections of the solutions of the system \eqref{small_perturbation} filling $\pi ^{-1} (N^{\ast})$ are locally optimal in $C^0$ topology. \end{prop} The target submanifold has to be chosen adequately, and must be of dimension $n-3$, in order to apply this proposition. This condition on the dimension takes into account the two-dimensional fibers mentioned in Proposition \ref{thm1}. \begin{figure}[h] \centering \includegraphics[scale = 1.0]{th2.eps} \caption{Illustration of Proposition \ref{prop_opti}.} \label{th2_illustration} \end{figure} As shown in Figure \ref{th2_illustration}, due to the bundle structure, for every given initial point $(z_0,w_0)$ in the neighborhood of the singular surface $S$ in the $(z,w)$-space, there is a neighborhood $\mathcal{V}$ of the point $(z_0,w_0)$ such that all extremals starting from points inside $\mathcal{V}$ reach a point of $N^{\ast}$ in finite time, with infinitely many switchings. Then, these extremals reach the target manifold $N$ along the singular extremals in $N^\ast$. If the projection $\pi^\ast$ is regular, then the projected trajectories in the $x$-space are locally optimal in $C^0$ topology. \medskip The condition of being a regular projection is the most difficult one to check. We set \begin{equation*} \Sigma^{\ast} = \pi ^{-1} (N^{\ast}), \quad \Gamma^\ast = \Sigma^\ast \cap \Gamma^+, \quad S_0 = S \cap \{H = 0\}. \end{equation*} In \cite{Zelikin2}, the authors provide the following sufficient condition for having a regular projection of $\Sigma^{\ast}$ into the $x$-space. \begin{lem} \label{lem_opti} Let $\mathcal{L}$ be spanned by the vector $\partial / \partial z_3$ and by the vectors of the tangent plane to the switching surface $\Gamma^\ast$. Assume that the restriction of $d\pi^\ast$ to $\mathcal{L}$ is surjective. Then, the restriction of $\pi^\ast$ to $\Sigma^{\ast}$ is regular.
\end{lem} \begin{rem} \label{rem_thm3} Lemma \ref{lem_opti} indicates that $d\pi^\ast.\frac{\partial}{\partial z_3}$ should be transversal to the tangent plane to the switching surface of the chattering family generated by the submanifold $N$. Note that, at the points of the curve $N$, the tangent plane to the switching surface $\Gamma^\ast$ consists of three types of vectors: the nonsingular velocity vector, the singular velocity vector and the tangent vector to the curve $N$. \end{rem} \section{Application to the planar tilting maneuver problem} In this section, we analyze the bang-bang, singular and chattering extremals of the problem ${\bf (MTTP)}$. We will see that, when the strategy involves a singular arc, this singular arc is of intrinsic order two and, according to the previous section, this causes a chattering phenomenon. We will prove that the chattering extremals are locally optimal in $C^0$ topology. \subsection{Extremal equations}\label{sec_ReguArcs} The Hamiltonian of the problem ${\bf (MTTP)}$ is of the form $H=h_0+ uh_1 + p^0$, where $h_0 = \langle p,f_0(x) \rangle$ and $h_1 = \langle p,f_1(x) \rangle=bp_4$, and the adjoint vector $p=(p_1,p_2,p_3,p_4)$ satisfies the adjoint equations \begin{equation} \label{sys2} \begin{cases} \dot{p}_1 &= c(p_1 x_2 - 2 p_2 x_1 + p_3) ,\\ \dot{p}_2 &= c p_1 x_1,\\ \dot{p}_3 &= a (p_1 \sin x_3 - p_2 \cos x_3),\\ \dot{p}_4 &= - p_3. \end{cases} \end{equation} Since $b>0$, we infer from the maximization condition of the PMP that $u(t) = \mathrm{sign} (p_4(t))$, provided that $\varphi(t)=bp_4(t)\neq 0$ (bang arcs). The final condition $x(t_f)\in M_1$ yields the transversality condition \begin{equation*} p_1(t_f)\cos(\gamma_f)+p_2(t_f) \sin(\gamma_f) = 0. \end{equation*} \subsection{Lie bracket configuration of the system} \label{sec_Lie} Before proceeding with the analysis of singular extremals, it is useful to compute the Lie brackets of the vector fields $f_0$ and $f_1$ defined by \eqref{def_f0f1}.
This is what we call the Lie bracket configuration of the control system \eqref{singleinputcontrolaffinesystem}. \begin{lem}\label{Lieconfig} We have \begin{equation*} \begin{split} & f_0 = ( a \cos x_3 - c x_1 x_2 ) \frac{\partial}{\partial x_1} + (a \sin x_3 + c x_1^2- g_0 ) \frac{\partial}{\partial x_2} + (x_4 - c x_1) \frac{\partial}{\partial x_3} ,\\ & f_1 = b \frac{\partial}{\partial x_4} ,\quad [f_0,f_1] = -b \frac{\partial}{\partial x_3},\\ & [f_0,[f_0,f_1]] = -ab\sin x_3 \frac{\partial}{\partial x_1} + ab\cos x_3 \frac{\partial}{\partial x_2},\qquad [f_1,[f_0,f_1]] \equiv 0, \\ & \mathrm{ad}^3f_0.f_1 = -ab((x_4-2cx_1)\cos x_3+c x_2\sin x_3)\frac{\partial}{\partial x_1}-ab\sin x_3(x_4-3cx_1 )\frac{\partial}{\partial x_2} -abc\sin x_3 \frac{\partial}{\partial x_3}, \\ \end{split} \end{equation*} \begin{equation*} \begin{split} [f_1,[f_0,[f_0,f_1]]] =& [f_0,[f_1,[f_0,f_1]]] = [f_1,[f_1,[f_0,f_1]]] =0 ,\\ \mathrm{ad}^4f_0.f_1 =& ab((-4cx_1x_4+cg_0 +x_4^2 +4c^2x_1^2-c^2x_2^2)\sin x_3-2ac+4ac \cos^2 x_3+(cx_1 \\ & -2x_4)cx_2\cos x_3 )\frac{\partial}{\partial x_1} + ab(-c^2x_1x_2 \sin x_3+4ac \sin x_3\cos x_3+(-x_4^2+6cx_1x_4\\ & -7c^2x_1^2)\cos x_3)\frac{\partial}{\partial x_2}+abc(3cx_1\cos x_3-2x_4\cos x_3-cx_2\sin x_3)\frac{\partial}{\partial x_3},\\ [f_1,\mathrm{ad}^3f_0.f_1] =& -ab^2\cos x_3 \frac{\partial}{\partial x_1} -ab^2\sin x_3 \frac{\partial}{\partial x_2},\\ \end{split} \end{equation*} and $$ \dim\mathrm{Span}(f_1,[f_0,f_1],[f_0,[f_0,[f_0,f_1]]])=3. $$ \end{lem} It follows from this lemma that the Poisson brackets $\{h_1,\{h_0,h_1\}\}$ and $\{h_1,\{h_0,\{h_0,h_1\}\}\}$ are identically equal to $0$. This is the main reason why we will have singular extremals of higher order, as shown in the next section. \subsection{Singular extremals}\label{sec_SingArcs} In this section, we compute all possible optimal singular extremal arcs.
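The bracket formulas of Lemma \ref{Lieconfig}, on which the singular-arc analysis relies, can be checked symbolically. The following sketch (using the computer algebra package sympy; the vector fields are transcribed from the lemma) verifies the brackets used in the sequel, together with the generic rank of $\mathrm{Span}(f_1,[f_0,f_1],\mathrm{ad}^3f_0.f_1)$.

```python
import sympy as sp

x1, x2, x3, x4 = sp.symbols('x1 x2 x3 x4')
a, b, c, g0 = sp.symbols('a b c g0', positive=True)
X = [x1, x2, x3, x4]

# vector fields of the control system, transcribed from the lemma
f0 = sp.Matrix([a*sp.cos(x3) - c*x1*x2,
                a*sp.sin(x3) + c*x1**2 - g0,
                x4 - c*x1,
                0])
f1 = sp.Matrix([0, 0, 0, b])

def lie(f, g):
    """Lie bracket [f, g] = (Dg) f - (Df) g."""
    return g.jacobian(X) * f - f.jacobian(X) * g

ad1 = lie(f0, f1)
assert ad1 == sp.Matrix([0, 0, -b, 0])          # [f0,f1] = -b d/dx3
assert lie(f1, ad1) == sp.zeros(4, 1)           # [f1,[f0,f1]] = 0
ad2 = lie(f0, ad1)                              # [f0,[f0,f1]]
assert sp.simplify(ad2 - sp.Matrix([-a*b*sp.sin(x3), a*b*sp.cos(x3), 0, 0])) == sp.zeros(4, 1)
ad3 = lie(f0, ad2)                              # ad^3 f0 . f1
expected3 = sp.Matrix([-a*b*((x4 - 2*c*x1)*sp.cos(x3) + c*x2*sp.sin(x3)),
                       -a*b*sp.sin(x3)*(x4 - 3*c*x1),
                       -a*b*c*sp.sin(x3),
                       0])
assert sp.simplify(ad3 - expected3) == sp.zeros(4, 1)
assert sp.simplify(lie(f1, ad3)
                   - sp.Matrix([-a*b**2*sp.cos(x3), -a*b**2*sp.sin(x3), 0, 0])) == sp.zeros(4, 1)

# generic rank 3 of Span(f1, [f0,f1], ad^3 f0.f1): the 3x3 minor on rows
# (x1, x3, x4) has determinant b^2 times the first component of ad3
M = sp.Matrix.hstack(f1, ad1, ad3)
minor = M.extract([0, 2, 3], [0, 1, 2]).det()
assert sp.simplify(minor - b**2*ad3[0]) == 0
```

Since the determinant of the minor is a nonzero polynomial expression, the span has dimension $3$ at generic points, as claimed.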
Later on, we are going to provide sufficient conditions on the initial conditions, under which the optimal strategy of the problem ${\bf (MTTP)}$ does not involve (optimal) singular arcs. Before that, let us first assume that singular arcs do exist, and let us establish some necessary conditions along them. \begin{lem} \label{lem_singular} Let $x(\cdot)$ be a singular arc, defined on the sub-interval $(t_1,t_2)$, and let $(x(\cdot),p(\cdot),p^0,u(\cdot))$ be an extremal lift. Then: \begin{itemize} \item along that singular extremal, we must have (omitting $t$ for readability) \begin{equation} \label{switch_derives3} \begin{split} & p_1 \left( a -c x_1 x_2 \cos x_3 - (g_0 - c x_1^2) \sin x_3 \right) + p^0 \cos x_3 = 0 , \\ & p_2 \left( a -c x_1 x_2 \cos x_3 - (g_0 - c x_1^2) \sin x_3 \right) + p^0 \sin x_3 = 0, \\ & p_3 = p_4 = 0 , \end{split} \end{equation} and \begin{equation} \label{singularcontrol} u = \frac{c}{2b} \big( (-cx_2^2+2x_1x_4-3cx_1^2+g_0)\sin 2x_3+2cx_1x_2\cos 2x_3 +4a\cos x_3-4x_2x_4\cos^2 x_3 \big) ; \end{equation} \item $p^0 \neq 0$ (in other words, there is no abnormal singular extremal), and then we set $p^0=-1$; \item the four constraints \eqref{switch_derives3} are functionally independent; \item one has $\vert u(t)\vert < 1$, for almost every $t\in(t_1,t_2)$ (in other words, any singular arc is admissible); \item $u$ is of intrinsic order two; \item the strengthened generalized Legendre-Clebsch condition along the singular extremal reads \begin{equation} \label{kelleyarea} a -c x_1 x_2 \cos x_3 - (g_0 - c x_1^2) \sin x_3 > 0 . \end{equation} \end{itemize} \end{lem} In particular, the last item of the lemma states that optimal singular arcs, if they exist, must live in the region of the state space $\mathbb{R}^4$ defined by \eqref{kelleyarea}. The third item of the lemma implies that the singular extremals of the problem are in a submanifold of codimension $4$, i.e., the singular surface of ${\bf (MTTP)}$ is of codimension $4$. 
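The intrinsic order stated in the lemma can be checked symbolically from the Hamiltonian $H=h_0+uh_1+p^0$ of Section \ref{sec_ReguArcs}. The following sketch (using sympy; the Hamiltonian is transcribed from the extremal equations, and the constant $p^0$ drops out of the derivatives) verifies that the first three time derivatives of the switching function $\varphi=bp_4$ do not involve $u$, while the fourth one does, with the factor $\{h_1,\mathrm{ad}^3h_0.h_1\}=-ab^2(p_1\cos x_3+p_2\sin x_3)$.

```python
import sympy as sp

x = sp.symbols('x1 x2 x3 x4')
p = sp.symbols('p1 p2 p3 p4')
u = sp.Symbol('u')
a, b, c, g0 = sp.symbols('a b c g0', positive=True)
x1, x2, x3, x4 = x
p1, p2, p3, p4 = p

# Hamiltonian H = <p, f0(x)> + u <p, f1(x)>   (the constant p^0 is omitted)
H = (p1*(a*sp.cos(x3) - c*x1*x2)
     + p2*(a*sp.sin(x3) + c*x1**2 - g0)
     + p3*(x4 - c*x1)
     + u*b*p4)

def dot(g):
    """Time derivative of g(x,p) along the Hamiltonian flow."""
    return sp.expand(sum(sp.diff(g, x[i])*sp.diff(H, p[i])
                         - sp.diff(g, p[i])*sp.diff(H, x[i]) for i in range(4)))

phi = b*p4
d = [phi]
for _ in range(4):
    d.append(dot(d[-1]))

# the first three derivatives of the switching function do not involve u
assert all(sp.diff(d[k], u) == 0 for k in (1, 2, 3))
# the fourth derivative involves u, with the expected factor
beta = sp.diff(d[4], u)
assert sp.simplify(beta + a*b**2*(p1*sp.cos(x3) + p2*sp.sin(x3))) == 0
```

This confirms, at the symbolic level, the chain of derivatives used in the proof below.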
\begin{proof} Along the interval $I=(t_1,t_2)$ on which the singular arc is defined, the switching function $\varphi(t)=h_1(x(t),p(t))=bp_4(t)$ must be identically equal to zero. Differentiating with respect to time, we get that $\{ h_0,h_1 \} = - bp_3 = 0$ along $I$. Differentiating again, we get $\{h_0,\{ h_0,h_1 \}\} + u \{h_1,\{h_0,h_1\}\} = 0$, and since the Poisson bracket $\{h_1,\{h_0,h_1\}\}$ is identically equal to $0$ (see Lemma \ref{Lieconfig}), we have $\{h_0,\{ h_0,h_1 \}\} = \mathrm{ad}^2 h_0.h_1 = -ab ( p_1 \sin x_3 - p_2 \cos x_3) = 0$ along $I$ (and the equation $\{h_1,\{h_0,h_1\}\}=0$ does not bring any further information). Differentiating again, we get $\{h_0,\{h_0,\{ h_0,h_1 \}\}\} + u \{h_1,\{h_0,\{h_0,h_1\}\}\} = 0$, where, again by Lemma \ref{Lieconfig}, the Poisson bracket $\{h_1,\{h_0,\{h_0,h_1\}\}\}$ is identically equal to $0$ (and thus brings no additional information). Hence $$ \mathrm{ad}^3h_0.h_1 = -ab \big(x_4 (p_1 \cos x_3 + p_2 \sin x_3)+cp_1x_2\sin x_3 -3cp_2x_1\sin x_3-2cp_1x_1\cos x_3 \big) = 0, $$ which gives a new constraint. Finally, one further differentiation yields $\mathrm{ad}^4h_0.h_1 + u \{h_1,\mathrm{ad}^3h_0.h_1\} = 0$ and since $\{ h_1, \mathrm{ad}^3 h_0.h_1 \} \neq 0$, we infer that $$ u = -\frac{\mathrm{ad}^4 h_0.h_1}{\{ h_1, \mathrm{ad}^3 h_0.h_1 \}}, $$ along $I$, and \eqref{singularcontrol} is obtained. Here, we have $$ \mathrm{ad}^4 h_0.h_1 = \langle p, \mathrm{ad}^4f_0.f_1(x) \rangle, $$ and $$ \{ h_1, \mathrm{ad}^3 h_0.h_1 \} = \langle p, [f_1, \mathrm{ad}^3f_0.f_1](x) \rangle= - a b^2 (p_1 \cos x_3 + p_2 \sin x_3) . $$ Hence, we have obtained the constraints \begin{equation*} \begin{split} & p_3 = p_4 = 0,\qquad p_1 \sin x_3 - p_2 \cos x_3 = 0,\\ & -x_4(p_1\cos x_3+p_2\sin x_3)+3cp_2x_1\sin x_3+cp_1(2x_1\cos x_3-x_2\sin x_3)= 0 . \end{split} \end{equation*} They are functionally independent because $\dim\mathrm{Span}(f_1,[f_0,f_1],[f_0,[f_0,[f_0,f_1]]])=3$ (see Lemma \ref{Lieconfig}).
Moreover, using the fact that $H \equiv 0$ along an extremal, we infer the relations \eqref{switch_derives3}. Setting \begin{equation*} \begin{split} y_1 &= p_1 \left( a -c x_1 x_2 \cos x_3 - (g_0 - c x_1^2) \sin x_3 \right) + p^0 \cos x_3, \\ y_2 &= p_2 \left( a -c x_1 x_2 \cos x_3 - (g_0 - c x_1^2) \sin x_3 \right) + p^0 \sin x_3, \end{split} \end{equation*} we have $ \mathrm{rank} \frac{\partial (y_1,y_2,p_3,p_4)}{\partial (x,p)}=4, $ provided that $p^0 \neq 0$, $p_1 \neq 0$ and $p_2 \neq 0$. This implies that these four functions are functionally independent. If $p_1=0$ or $p_2=0$, then it is easy to see that $p_1=p_2=p^0=0$, which violates the PMP. Hence $p_1 \neq 0$ and $p_2 \neq 0$. If $p^0$ were to be zero, then it would follow from $p_1 \neq 0$ and $p_2 \neq 0$ that $y=a -c x_1 x_2 \cos x_3 - (g_0 - c x_1^2) \sin x_3 \equiv 0$ along $I$. Differentiating, we get $\dot{y} \equiv 0$ and $\ddot{y}=\alpha_c +u_c \beta_c \equiv 0$. By substituting $p_1 \sin x_3 = p_2 \cos x_3$ into $-x_4(p_1\cos x_3+p_2\sin x_3)+3cp_2x_1\sin x_3+cp_1(2x_1\cos x_3-x_2\sin x_3)=0$, we get $$ y_3= -x_4+cx_1(2+\sin^2 x_3)-cx_2\sin x_3 \cos x_3 =0. $$ Then, setting $ y_4=u_c - u_s=-\alpha_c/\beta_c-u, $ we check that $y=0$, $\dot{y}=0$, $y_3=0$ and $y_4=0$ are four functionally independent constraints on the $x$-space. Hence, along $I$, the trajectory is reduced to isolated points. To stay along $I$ on this abnormal extremal, we would need in addition $u=0$, which is another independent constraint, so that the number of constraints exceeds the dimension of the extremal $(x,p)$-space. Therefore $p^0 \neq 0$. Using the numerical values of Table \ref{sim_param}, we have \begin{equation} \label{sc_real} |u| \leq \frac{c}{2b} \big( 4a+6v_m\omega_{max}+cv_m^2 \big) \leq 0.3 , \end{equation} and thus $\vert u\vert < 1$.
Hence, for the problem ${\bf (MTTP)}$, we have, along any singular extremal arc, \begin{equation*} \frac{\partial}{\partial u} \frac{d^k}{dt^k} h_1 = 0, \quad k=0,1,2,3 ,\qquad \frac{\partial}{\partial u} \frac{d^4}{dt^4} h_1 = \beta(x,p)= -a b^2 (p_1 \cos x_3 + p_2 \sin x_3) \neq 0 , \end{equation*} and then, according to Definition \ref{def_singularorder}, the singular solutions (which are admissible from \eqref{sc_real}) are of intrinsic order two. The strengthened generalized Legendre-Clebsch condition for the problem ${\bf (MTTP)}$ is written here as $\beta(x,p) <0$, and hence, using \eqref{switch_derives3} and taking $p^0 = -1$, we obtain \eqref{kelleyarea}. \end{proof} \begin{cor} For the problem ${\bf (MTTP)}$, no optimal singular arc can be connected with a nontrivial bang arc. We must then have chattering, in the following sense. Let $u$ be an optimal control, solution of ${\bf (MTTP)}$, and assume that $u$ is singular on the sub-interval $(t_1,t_2)\subset[0,t_f]$ and is non-singular elsewhere. If $t_1>0$ (resp., if $t_2<t_f$) then, for every $\varepsilon>0$, the control $u$ switches an infinite number of times over the time interval $[t_1-\varepsilon,t_1]$ (resp., over $[t_2,t_2+\varepsilon]$). \end{cor} \begin{proof} This result follows from Lemma \ref{thm_bs} and Lemma \ref{lem_singular}. However, the proof is simple and we provide the argument hereafter. It suffices to prove that the existence of an extremal consisting of the concatenation of a singular arc of higher order with a non-singular arc violates the PMP. The reasoning goes by contradiction. Assume that $t_1>0$ and that there exists $\varepsilon>0$ such that $u(t) = 1$ over $(t_1-\varepsilon,t_1)$.
By continuity along the singular arc, we have $\varphi(t_1)=\varphi^{(1)}(t_1)=\varphi^{(2)}(t_1)=\varphi^{(3)}(t_1)=0$, and it follows from the strengthened generalized Legendre-Clebsch condition $\beta (x,p)< 0$ that \begin{multline*} 0=\varphi^{(4)}(t_1^+) = \mathrm{ad}^4h_0.h_1(t_1)+\{h_1,\mathrm{ad}^3h_0.h_1\}(t_1) u(t_1^+) \\ > \mathrm{ad}^4h_0.h_1(t_1)+\{h_1,\mathrm{ad}^3h_0.h_1\}(t_1) u(t_1^-)=\varphi^{(4)}(t_1^-), \end{multline*} and hence the switching function $t\mapsto \varphi(t)=h_1(x(t),p(t))$ has a local maximum at $t=t_1$ and thus is nonpositive over $(t_1-\varepsilon,t_1)$, provided that $\varepsilon>0$ is small enough. It then follows from the maximization condition of the PMP that $u(t)=-1$ over $(t_1-\varepsilon,t_1)$. This contradicts the assumption. \end{proof} \subsection{Optimality status of chattering extremals}\label{sec_ChatArcs} In this section, we analyze the optimality status of the chattering extremals of the problem ${\bf (MTTP)}$. \begin{lem} \label{lem_chattering1} Assume that $x_3 \neq \pi/2 + k\pi$, $k \in \mathbb{Z}$. The Hamiltonian system, consisting of \eqref{sys1} and \eqref{sys2}, can be written as a small perturbation system of the form \eqref{Hamsys}, namely \begin{equation} \label{as_S} \begin{cases} \dot{z_1} &= z_2 ,\\ \dot{z_2} &= z_3 ,\\ \dot{z_3} &= z_4 + f_3(z,w,u),\\ \dot{z_4} &= \alpha_0(w) + u \beta_0(w) + f_4(z,w,u),\\ \dot{w} &= F(z,w,u) ,\\ \end{cases} \end{equation} where $u \in [-1,1]$ and \begin{equation} \label{smallpertcond} \lim_{\lambda \to +0} \frac{f_3(G_{\lambda}(z),w,u)}{\lambda^{(5-3)}}=0, \:\: \lim_{\lambda \to +0} \frac{f_4(G_{\lambda}(z),w,u)}{\lambda^{(5-4)}} < \infty .
\end{equation} These conditions are satisfied by choosing the new variables $(z,w)$ as \begin{equation} \label{var_change} \begin{cases} z_1 = p_4,\:\: z_2 = p_4 ^{(1)},\:\: z_3 = p_4 ^{(2)},\:\: z_4 = p_4 ^{(3)} + a c \sin x_3 p_3,\\ \d{w_1 = a ( p_1 \sin x_3 + p_2 \cos x_3)} , \\ \d{w_2 = a(x_4\cos x_3+cx_2\sin x_3-2cx_1\cos x_3)p_1}+a\sin x_3 (-x_4+3cx_1)p_2,\\ \d{w_3 = p_2/p_1} ,\\ \d{w_4 = x_1} , \end{cases} \end{equation} in the neighborhood of the singular surface defined by $z=0$ in the $(z,w)$-space. In addition, the strengthened generalized Legendre-Clebsch condition for system (\ref{as_S}) yields \begin{equation} \label{kelleyarea2} w_1 w_3>0 . \end{equation} \end{lem} \begin{proof} We have proved that $p_1 \neq 0$ and $p_2 \neq 0$ along the singular arc. Then, from $x_3 \neq \pi/2 + k\pi$, $k \in \mathbb{Z}$, we can prove by direct calculations that the Jacobian matrix of this change of variables is of full rank, i.e., \begin{equation*} \mathrm{rank} \big( \frac{D(z,w)}{D(x,p)} \big)= 8. \end{equation*} After some manipulations, we can express $(x,p)$ in terms of the new variables $(z,w)$ chosen in (\ref{var_change}), as \begin{equation}\label{Var_Trans} \begin{split} &\d{x_1=w_4}, \quad \d{x_4=3c w_4 - \frac{z_4+w_2}{w_3(w_1-z_3)}}, \quad p_3=-z_2, \quad p_4=z_1,\\ &\d{x_2=\frac{w_1w_2-w_1z_4-w_2z_3+z_3z_4}{c(w_1-z_3)^2}+\frac{w_1w_2+w_1z_4+w_2z_3+z_3z_4-cw_3(w_4w_1^2-w_4z_3^2)}{cw_3^2(w_1-z_3)^2}} ,\\ &x_3 = -2 \mathrm{arctan}\big(w_1+z_3 \pm(w_1^2w_3^2+w_1^2-2w_1w_3^2z_3+2w_1z_3+w_3^2z_3^2+z_3^2)/(w_3w_1-w_3z_3) \big),\\ &\d{p_1=\mp \sqrt{w_1^2w_3^2+w_1^2-2w_1w_3^2z_3+2w_1z_3+w_3^2z_3^2+z_3^2}/ (2aw_3)},\\ &\d{p_2=\mp \sqrt{w_1^2w_3^2+w_1^2-2w_1w_3^2z_3+2w_1z_3+w_3^2z_3^2+z_3^2}/ (2a)}.
\end{split} \end{equation} Although this change of variables is not one-to-one in the whole $(x,p)$-space, this is not a problem, because the semi-canonical system that we use is a local system, and so we just need to consider separately the domains $x_3 \in \mathcal{D}_1=(-\pi/2,\pi/2)$ and $x_3 \in \mathcal{D}_2=(-\pi,-\pi/2)\cup(\pi/2,\pi)$. The manifold of singular trajectories, specified by $z=0$, can be written as \begin{equation*} S = \{(x,p)\mid p_3=0,\ p_4=0,\ p_2=p_1\tan x_3,\ x_4=cx_1(2+\sin^2 x_3)-cx_2\sin x_3 \cos x_3 \} . \end{equation*} Differentiating $(z,w)$ defined in \eqref{var_change} with respect to time with the help of \eqref{sys1} and \eqref{sys2}, we get the system in the form \begin{equation} \label{syssemixp} \begin{cases} \dot{z_1} &= z_2,\\ \dot{z_2} &= z_3,\\ \dot{z_3} &= z_4+f_3(x,p),\\ \dot{z_4} &= A(x,p)+B(x,p) u,\\ \dot{w} &= F(x,p,u), \end{cases} \end{equation} with \begin{equation*} u=\begin{cases} 1 & \textrm{if}\ z_1 >0 ,\\ -1 &\textrm{if}\ z_1 <0 ,\\ -A(x,p)/B(x,p) &\textrm{if}\ z_1 =0 , \end{cases} \end{equation*} where \begin{equation*} A(x,p) = \{ h_0(x,p), z_4 \} ,\quad B(x,p) = \{ h_1(x,p), z_4 \} = \beta(x,p)/b . \end{equation*} Note that here we have $u = -A/B = u_s$, where $u_s$ is given in \eqref{singularcontrol}. Hence we infer $|u|<1$. By substituting \eqref{Var_Trans} into \eqref{syssemixp}, we obtain $f_3(z,w)$, $A(z,w)$, $B(z,w)$ and $F(z,w,u)$. Then we expand $A(z,w)$ and $B(z,w)$ in the vicinity of $S$ as \begin{equation*} A(z,w)=A(0,w)+ \sum_{k=1}^{\infty} \frac{\partial^k A}{\partial z^k}(0,w) \frac{z^k}{k !},\:\: B(z,w)=B(0,w)+ \sum_{k=1}^{\infty} \frac{\partial^k B}{\partial z^k}(0,w) \frac{z^k}{k !}. \end{equation*} By taking $\alpha_0(w) = A(0,w)$ and $\beta_0(w) = B(0,w)$, system (\ref{as_S}) is derived. We can see that system (\ref{as_S}) is a small perturbation of system (\ref{Hamsys}), since condition (\ref{sp_cond}), i.e., condition \eqref{smallpertcond}, holds.
Moreover, the strengthened generalized Legendre-Clebsch condition \eqref{kelleyarea2} is derived from \begin{equation*} \beta_0(w)=-b\frac{w_1(1+w_3^2)}{2w_3} < 0 , \end{equation*} and \eqref{syssemixp} is transformed into a small perturbation system of the form \eqref{as_S}. \end{proof} The functions $f_3(z,w)$, $\alpha_0(w)$ and $F(z,w,u)$ are different for $x_3 \in \mathcal{D}_1$ and $x_3 \in \mathcal{D}_2$. However, we will see next that this difference does not have any influence on the proof of the optimality result for chattering extremals. \begin{cor} \label{cor_chattering1} For the problem ${\bf (MTTP)}$, there exist two subbundles $\Sigma^{+}$ and $\Sigma^{-}$ having the singular surface $S$ as a base, and two fibers $\mathcal{N}^+$ and $\mathcal{N}^-$ of dimension two filled by chattering solutions. \end{cor} \begin{proof} It suffices to apply Lemma \ref{lem_chattering1} and Proposition \ref{thm1}. \end{proof} We define $S_0 = S \cap \{ H\equiv 0 \}$. Let us consider an optimal solution $x(\cdot)$ of ${\bf (MTTP)}$, and let us assume that $x(\cdot)$ contains a singular arc defined on $(t_1,t_2)$. Let \begin{equation*} M_1^\ast = \{ x_2 = \Psi_1(x_1) \} \cap \pi^\ast(S_0) \end{equation*} be the submanifold where the extremals come into and out of the image of the singular surface $\pi^\ast(S_0)$, as shown in Figure \ref{targets}. \begin{figure}[h] \centering \includegraphics[scale = 1]{targets.eps} \caption{Illustration of $M^\ast_1$.} \label{targets} \end{figure} In the sequel, we analyze the optimality status of the chattering solutions with the ``target'' submanifold $M^\ast_1$. The optimality status of the chattering solutions starting from the submanifold $M^\ast_1$ can be analyzed similarly by considering the subbundle $\Sigma^-$.
We denote by $N_1$ the lift of $M^\ast_1$ in the $(x,p)$-space, obtained by associating with $x \in M^\ast_1$ the point $(x,p(x))$ that belongs to $S_0$ and satisfies the transversality condition $p_1 = -p_2 \Psi_1^{\prime}(x_1)$ (following from \eqref{Hamiltonsys3}). \begin{lem} \label{lem_taget} The submanifold $N_1$ is a Lagrangian submanifold of $\mathbb{R}^8$ of codimension 7. Moreover, the function $\Psi_1(\cdot)$ can be chosen such that the submanifold $N_1$ is transversal to the velocity vector of the singular extremals in $S$. \end{lem} \begin{proof} From the definition of $N_1$ and Lemma \ref{lem_singular}, we infer that $(x,p(x))$ satisfies \begin{equation*} \begin{split} & p_1 = \frac{\cos x_3}{a -c x_1 x_2 \cos x_3 - (g_0 - c x_1^2) \sin x_3}, \\ & p_2 = \frac{\sin x_3}{a -c x_1 x_2 \cos x_3 - (g_0 - c x_1^2) \sin x_3} , \\ & p_3 = 0, \:\: p_4 = 0,\\ & x_2 - \Psi_1(x_1) = 0,\\ & - x_4 + cx_1(2+\sin^2 x_3) - cx_2 \sin x_3 \cos x_3=0,\\ & \Psi_1^{\prime}(x_1) \tan x_3+1=0. \end{split} \end{equation*} Then, the $x$-component of the tangent vector to $N_1$ can be written as \begin{equation*} v_1 = \left( 1,\frac{\partial x_2}{\partial x_1},\frac{\partial x_3}{\partial x_1},\frac{\partial x_4}{\partial x_1}\right)^\top , \end{equation*} where \begin{equation*} \begin{split} \frac{\partial x_2}{\partial x_1} &= \Psi_1^{\prime}(x_1),\quad \frac{\partial x_3}{\partial x_1} = -\frac{\Psi_1^{\prime \prime}(x_1)} {\Psi_1^{\prime}(x_1)} \sin x_3 \cos x_3,\\ \frac{\partial x_4}{\partial x_1} &= c(2+\sin^2 x_3) +\frac{c}{4}\frac{\Psi_1^{\prime \prime}(x_1)}{\Psi_1^{\prime}(x_1)} \big( 2x_1(2+\sin 2x_3) \sin 2x_3-x_2 \sin 4x_3 \big). \end{split} \end{equation*} Therefore, the $1$-form $\bar{\omega} = p\, dx = p_1 dx_1 + p_2 dx_2 + p_3 dx_3 + p_4 dx_4$ vanishes on every tangent vector to the submanifold $N_1$. Thus $N_1$ is of codimension $7$ and it is Lagrangian.
Moreover, the $x$-component of the velocity along the singular trajectories is \begin{equation*} v_2 = (\dot{x}_1,\dot{x}_2,\dot{x}_3,\dot{x}_4)=( a \cos x_3 - c x_1 x_2 , a \sin x_3 + c x_1^2- g_0, x_4 - c x_1, bu_s)^\top , \end{equation*} with $u_s = -\alpha_0(w)/\beta_0(w)$. Hence, to ensure transversality, it suffices to choose the function $\Psi_1$ such that $v_1$ and $v_2$ are not proportional, e.g., $\Psi_1^{\prime} \neq \frac{a \sin x_3 + c x_1^2-g_0}{a \cos x_3 - c x_1 x_2}$. \end{proof} It follows from this lemma that the submanifold $N^\ast$ filled by the singular extremals coming into $N_1$ is Lagrangian. According to Proposition \ref{prop_opti}, it suffices to prove the regularity of the projection $\pi^\ast$ on $\pi^{-1}(N^\ast)$, using Lemma \ref{lem_opti}. We denote by $v_3$ the nonsingular velocity vector and by $v_k$ the image of $\partial / \partial z_3$ under $d\pi^\ast$. We set $V = (v_1, v_2, v_3, v_k)$. \begin{thm}\label{thm24} If the function $\Psi_1(\cdot)$ is chosen such that \begin{equation} \label{cond_rank} \det V \neq 0, \end{equation} then the chattering solutions of the problem ${\bf (MTTP)}$ are locally optimal in $C^0$ topology.
\end{thm} \begin{proof} On $S_0$ we have \begin{equation*} \d{d\pi^\ast\left(\frac{\partial}{\partial z_3}\right) = \frac{\partial x_1}{\partial z_3} \frac{\partial}{\partial x_1} +\frac{\partial x_2}{\partial z_3} \frac{\partial}{\partial x_2} +\frac{\partial x_3}{\partial z_3} \frac{\partial}{\partial x_3} +\frac{\partial x_4}{\partial z_3} \frac{\partial}{\partial x_4}} , \end{equation*} where \begin{align*} &\d{\frac{\partial x_1}{\partial z_3}=0}, \quad \d{\frac{\partial x_2}{\partial z_3}=\frac{w_2 w_3^2-2cw_1w_4w_3+3w_2}{cw_1^2w_3^2}}, \quad \d{\frac{\partial x_3}{\partial z_3}=-\frac{2w_3}{w_1(1+w_3^2)}}, \quad \d{\frac{\partial x_4}{\partial z_3}=-\frac{w_2}{w_1^2w_3}} , \end{align*} and hence \begin{equation*} v_k = d\pi^\ast\left(\frac{\partial}{\partial z_3}\right) = \left(0,\frac{w_2 w_3^2-2cw_1w_4w_3+3w_2}{cw_1^2w_3^2},-\frac{2w_3}{w_1(1+w_3^2)},-\frac{w_2}{w_1^2w_3}\right)^\top. \end{equation*} Using \eqref{switch_derives3} and \eqref{var_change}, we can express $v_k(w)$ as a vector depending on the state variable $x$, i.e., $v_k(x)$. According to Lemma \ref{lem_opti} and Remark \ref{rem_thm3}, if $v_1$, $v_2$, $v_3$ and $d\pi^\ast\left(\frac{\partial}{\partial z_3}\right)$ are linearly independent, then the projection is regular. In our problem, the nonsingular velocity vector associated with $u = 1$ is \begin{equation*} v_3 = (\dot{x}_1,\dot{x}_2,\dot{x}_3,\dot{x}_4)=( a \cos x_3 - c x_1 x_2 , a \sin x_3 + c x_1^2- g_0, x_4 - c x_1, b)^\top . \end{equation*} Therefore, if the condition \eqref{cond_rank} is satisfied at the points of $N_1$, then the curve $N_1$ has been chosen such that the conditions of Proposition \ref{prop_opti} are fulfilled, and so it generates a field of locally optimal chattering solutions in $C^0$ topology.
\end{proof} \begin{rem} The condition \eqref{cond_rank} in Theorem \ref{thm24} is always satisfied if one chooses an appropriate function $\Psi_1(\cdot)$. Indeed, we have $$ \det V = \sum_{i=1}^4 v_{i,1} D_{i,1}, $$ where $D_{i,1}$ is the $(i,1)$ minor. Some calculations show that $D_{4,1}=0$, and hence \eqref{cond_rank} becomes $\det V = D_{1,1}-\Psi_1' D_{2,1}-\Psi_1''/\Psi_1' D_{3,1} \neq 0$. It suffices to ensure that $D_{i,1}$, $i=1,2,3$, do not vanish simultaneously. We prove this fact by contradiction: otherwise, it is easy to check that they yield three independent constraints in the $x$-space, and moreover, they are independent of the constraints $y_3=0$ and $x_2=\Psi_1(x_1)$ for the singular surface $S$. In this case, the number of constraints is larger than the dimension of the $x$-space. Therefore, $D_{i,1}$, $i=1,2,3$, do not vanish simultaneously. \end{rem} \section{Chattering prediction}\label{sec_SingPred} Since the chattering phenomenon causes deep difficulties for practical implementation, due to the fact that an infinite number of control switchings within finite time cannot be realized in real-life control strategies, our objective in this section is to provide precise conditions under which we can predict that an optimal singular arc does not appear, and thus that there are no chattering arcs. A maneuver with $\gamma_f \geq \gamma_0$ (resp., $\gamma_f < \gamma_0$) is said to be an \textit{anticlockwise} maneuver (resp., a \textit{clockwise} maneuver). In practice, the values of $x_{30}$ and $x_{3f}$ are chosen in $(0,\pi/2)$, and the values of $\gamma_0$, $x_{40}$ are chosen such that $| \gamma_0-x_{30} | \leq 0.1$ and $|x_{40}| \leq 0.1$. We distinguish between these two maneuvers because of the gravity force acting on the spacecraft.
We will see further, in the numerical results, that the clockwise maneuver is easier to perform than the anticlockwise maneuver, in the sense that the maneuver time is shorter and that a singular arc is less likely to be encountered. This fact is due to the nonlinear effects caused by the gravity force. Indeed, intuitively, the gravity force tends to reduce the value of $x_2$, and hence the value of $\tan \gamma=x_2/x_1$ tends to get smaller. Then the spacecraft velocity turns naturally towards the ground (pitch down) under the effect of gravity. This tendency helps the clockwise maneuver to be ``easier''. Although we have set $c=10^{-6}$ (see Table \ref{sim_param}), we have $c \leq 10^{-6}$ in real life, since $c=1/r$ where $r$ is a distance not less than the radius of the Earth. The case $c=0$ corresponds to a \textit{flat-Earth case}, and the case $c \in (0,10^{-6}]$ will be referred to as the \textit{non-flat case}. \subsection{Flat-Earth case $c = 0$} \label{sec_limicase} We can see from \eqref{flightangle} that $\dot{\gamma}$ is much smaller than $\dot{x}_3$ and $\dot{x}_4$ given by \eqref{singleinputcontrolaffinesystem}. Therefore, the main factor that affects the total maneuver time is the time needed to change $\gamma$ from $\gamma(0) = \gamma_0$ to $\gamma(t_f) = \gamma_f= x_{3f}$. In order to shorten the maneuver time, it is required to keep $\dot{\gamma}$ as large as possible. To this end, we consider the minimum time control problem in which $x_3$ is seen as a control. We call this problem the \emph{problem of order zero}: \begin{align*} & \min t_f \quad s.t.\\ & \dot{x}_1= a \cos x_3, \quad \dot{x}_2= a \sin x_3-g_0,\\ & x_1(0)=v_0 \cos \gamma_0,\quad x_2(0)= v_0 \sin \gamma_0, \quad x_2(t_f)-x_1(t_f) \tan \gamma_f = 0. \end{align*} The optimal solution of this problem is easy to compute (explicitly) with the PMP.
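For a constant control $x_3$, the dynamics of the problem of order zero are constant, so the transfer time to the target line $x_2-x_1\tan\gamma_f=0$ is given by an elementary formula, and the optimal constant control can be found by a direct scan. The sketch below illustrates this; all numerical values of $a$, $g_0$, $v_0$, $\gamma_0$, $\gamma_f$ are assumptions for illustration, not the values of Table \ref{sim_param}.

```python
import math

# illustrative (assumed) data: thrust acceleration, gravity,
# initial speed, initial and final flight path angles
a, g0 = 12.0, 9.81
v0, gamma0, gamma_f = 1000.0, 0.05, 0.5
x10, x20 = v0*math.cos(gamma0), v0*math.sin(gamma0)

def tf(x3):
    """Transfer time to x2 - x1*tan(gamma_f) = 0 for a constant control x3
    (infinite if the target line is not reached in positive time)."""
    den = a*math.sin(x3 - gamma_f) - g0*math.cos(gamma_f)
    if abs(den) < 1e-12:
        return math.inf
    t = (x10*math.tan(gamma_f) - x20)*math.cos(gamma_f)/den
    return t if t > 0 else math.inf

# scan constant controls on a fine grid over (-pi, pi]
grid = [-math.pi + i*math.pi/2000 for i in range(4001)]
best = min(grid, key=tf)

# here x20*cos(gamma_f) < x10*sin(gamma_f), so the predicted optimal
# constant control is x3* = gamma_f + pi/2
assert abs(best - (gamma_f + math.pi/2)) < 5e-3
```

The scan recovers the PMP prediction: the transfer time is minimized by maximizing $a\sin(x_3-\gamma_f)$, i.e., by $x_3=\gamma_f\pm\pi/2$ according to the sign of the numerator.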
The optimal control on $[0,t_f]$ is given by \begin{equation} \label{x3_s} x_3(t) = x_3^{\ast} =\begin{cases} \gamma_f + \pi/2, & \textrm{if}\ \ x_{20}\cos \gamma_f \leq x_{10}\sin \gamma_f,\\ \gamma_f - \pi/2, & \textrm{if}\ \ x_{20}\cos \gamma_f > x_{10}\sin \gamma_f, \end{cases} \end{equation} and the maneuver time is $$ t_f =\frac{(x_{10}\tan \gamma_f-x_{20})\cos \gamma_f}{a \sin(x_3^\ast-\gamma_f)-g_0\cos \gamma_f}. $$ Moreover, the adjoint vector is given by $$ (p_1,p_2)=(\sin \gamma_f,-\cos \gamma_f) \frac{p^0}{a \sin (x_3^\ast-\gamma_f)-g_0\cos \gamma_f}. $$ \begin{rem} From this expression, we see that $g_0$ makes the anticlockwise maneuver slower, i.e., $ t_f \geq \frac{(x_{10}\tan \gamma_f-x_{20})\cos \gamma_f}{a \sin(x_3^\ast-\gamma_f)}, $ and the clockwise maneuver faster, i.e., $ t_f \leq \frac{(x_{10}\tan \gamma_f-x_{20})\cos \gamma_f}{a \sin(x_3^\ast-\gamma_f)}. $ Therefore, the clockwise maneuver is ``easier'' to perform than the anticlockwise maneuver thanks to the gravity, which corresponds to intuition, as said at the beginning of this section. \end{rem} Turning back to the problem ${\bf (MTTP)}$ in the flat-Earth case, from Lemma \ref{lem_singular}, the singular surface is given by \begin{multline*} S=\{(x,p) \mid x_3=x_3^\ast,\quad x_4=0, \quad p_1=-p^0\cos x_3^\ast /(a-g_0\sin x_3^\ast),\\ p_2 =-p^0\sin x_3^\ast /(a-g_0\sin x_3^\ast),\quad p_3=0, \quad p_4=0 \}. \end{multline*} It is interesting to see that the solution of the problem of order zero coincides with the singular solution of the problem ${\bf (MTTP)}$ in the flat-Earth case. We have the following result. \begin{lem} Let $x(\cdot)$ be an optimal solution of ${\bf (MTTP)}$ in the flat-Earth case, associated with the control $u$. If the extremal $(x(\cdot),p(\cdot))$ contains at most one point of $S_2=\{ (x,p)\mid x_3=x_3^\ast\}$, then the control $u$ is bang-bang and switches at most two times. \end{lem} \begin{proof} If $u(\cdot)$ is singular, then $(x(\cdot),p(\cdot))$ is contained in $S \subset S_2$.
From the definition of $S$, it is easy to prove that $x(t_1) \neq x(t_2)$ for any $t_1 \neq t_2$ in $[0,t_f]$, which means that $(x(t_1),p(t_1))$ and $(x(t_2),p(t_2))$ are two different points of $S_2$. This contradicts the condition that $x(\cdot)$ contains at most one point of $S_2$. Therefore, $u(\cdot)$ is bang-bang. It suffices to prove that if $x(\cdot)$ contains at most one point of $S_2$, then $\ddot{\varphi}(t)$, $t \in [0,t_f]$, remains of constant sign and has at most one zero. Indeed, if this is true, then $\dot{\varphi}(t)=-p_3(t)$ is monotone, and it follows that the first derivative of the switching function $\varphi(t)$ has at most two zeros, which means that the control $u$ has at most two switchings. Let us prove this fact by contradiction. If there exists $t_1 \in [0,t_f]$ such that $(x(t_1),p(t_1)) \in S_2$, using $p_1+\tan \gamma_f p_2=0$ (transversality condition) and $p_1,p_2 \neq 0$, we have
$$ \ddot{\varphi}(t_1) = -\dot{p}_3(t_1)=-a(p_1\sin x_3-p_2\cos x_3) = a p_2\cos(x_3-\gamma_f)/\cos \gamma_f=0. $$
Assuming by contradiction that $\ddot{\varphi}(\cdot)$ changes sign, by continuity there exist two times $\tau_i<t_1$ and $\tau_j>t_1$ such that $\ddot{\varphi}(\tau_i)\ddot{\varphi}(\tau_j)<0$. It follows that $x_3(\tau_i)$ and $x_3(\tau_j)$ are on different sides of $x_3^\ast=\gamma_f \pm \pi/2$, i.e., $ (x_3(\tau_i)-x_3^\ast)(x_3(\tau_j)-x_3^\ast)<0. $ However, we know that $x_{30}$ and $x_{3f}=\gamma_f$ are on the same side of $x_3^\ast$, i.e., $ (x_{30}-x_3^\ast)(x_{3f}-x_3^\ast)>0, $ and hence there must exist another time $t_2$ at which $\ddot{\varphi}(t_2)=0$ ($(x(t_2),p(t_2)) \in S_2$) in order to allow the trajectory to reach the terminal submanifold. This is a contradiction.
\end{proof}
We denote a bang arc with $u=1$ (resp. $u=-1$) as $A_+$ (resp. $A_-$), and we denote a chattering arc and a singular arc by $A_c$ and $A_s$, respectively.
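For concreteness, the order-zero solution can be evaluated numerically. The following sketch implements \eqref{x3_s} and the maneuver time formula with illustrative values of $a$, $g_0$, $v_0$, $\gamma_0$, $\gamma_f$ (assumptions, not the data of Table \ref{sim_param}), and checks that the constant control $x_3^\ast$ indeed steers the velocity onto the terminal manifold $x_2(t_f)=x_1(t_f)\tan\gamma_f$.

```python
import math

def order_zero(x10, x20, gamma_f, a, g0):
    """Order-zero optimal control x3* and maneuver time t_f (flat Earth)."""
    if x20 * math.cos(gamma_f) <= x10 * math.sin(gamma_f):
        x3_star = gamma_f + math.pi / 2   # anticlockwise maneuver
    else:
        x3_star = gamma_f - math.pi / 2   # clockwise maneuver
    tf = ((x10 * math.tan(gamma_f) - x20) * math.cos(gamma_f)
          / (a * math.sin(x3_star - gamma_f) - g0 * math.cos(gamma_f)))
    return x3_star, tf

# Illustrative data (assumed, not the paper's simulation parameters).
a, g0 = 12.0, 9.8
v0, gamma0, gamma_f = 1000.0, 1.3, 1.5
x10, x20 = v0 * math.cos(gamma0), v0 * math.sin(gamma0)

x3_star, tf = order_zero(x10, x20, gamma_f, a, g0)
# With the constant control x3*, the terminal manifold is reached at t_f:
x1f = x10 + a * math.cos(x3_star) * tf
x2f = x20 + (a * math.sin(x3_star) - g0) * tf
assert abs(x2f - x1f * math.tan(gamma_f)) < 1e-6
```

Since $\gamma_0<\gamma_f$ here, the sign test in \eqref{x3_s} selects the anticlockwise branch $x_3^\ast=\gamma_f+\pi/2$.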
Let $\mathcal{F}_{x_3}$ be the union of all trajectories $x(\cdot)$ consisting of three different bang arcs satisfying the terminal conditions $x(0)=x_0$ and $x_3(t_f)=x_{3f}$, $x_4(t_f)=0$. These trajectories are of the form $A_+A_-A_+$ or $A_-A_+A_-$. Easy calculations show that the optimal control $u(t)$ and the trajectory $x(t)$ of the form $A_+A_-A_+$ (resp. $A_-A_+A_-$) are given by
\begin{equation*}
u(t) = \begin{cases} +1, t\in [0,\tau_1),\quad (\textrm{resp.}, -1)\\ -1, t\in [\tau_1,\tau_2)\cup[\tau_2,\tau_3), \quad (\textrm{resp.}, +1)\\ +1, t\in [\tau_3,t_f], \quad (\textrm{resp.}, -1) \end{cases}
\end{equation*}
and
\begin{equation} \label{pitchup_x3}
\begin{split}
x_1(t) & = v_0 \cos \gamma_0 +\int_0^t a\cos x_3(s) \, ds,\\
x_2(t) & = v_0 \sin \gamma_0 +\int_0^t \left(a\sin x_3(s)-g_0\right) ds,\\
x_3(t)& = \begin{cases} x_{30} + x_{40} t + b t^2/2,\quad t \in [0,\tau_1),\quad (\textrm{resp.}, x_{30} - x_{40} t- b t^2/2,)\\ x_3(\tau_1) + (x_{40} + b\tau_1) (t - \tau_1) - b(t - \tau_1)^2/2,\quad t \in [\tau_1,\tau_2),\\ (\textrm{resp.}, x_3(\tau_1) - (x_{40} + b\tau_1) (t - \tau_1) + b(t - \tau_1)^2/2,)\\ \bar{x}_3 - b(t-\tau_2)^2/2,\quad t \in [\tau_2,\tau_3),\quad (\textrm{resp.}, \bar{x}_3 + b(t-\tau_2)^2/2,) \\ x_3(\tau_3) - b(\tau_3 - \tau_2)(t - \tau_3) + b(t - \tau_3)^2/2,\quad t \in [\tau_3,t_f],\\ (\textrm{resp.}, x_3(\tau_3)+b(\tau_3 - \tau_2)(t - \tau_3)-b(t - \tau_3)^2/2,) \end{cases} \\
x_4(t)& = \begin{cases} x_{40}+bt,\quad t \in [0,\tau_1), (\textrm{resp.}, x_{40}-bt,)\\ x_{40}+b\tau_1-b(t-\tau_1),\quad t \in [\tau_1,\tau_2), (\textrm{resp.}, x_{40}-b\tau_1+b(t-\tau_1),)\\ -b(t-\tau_2),\quad t \in [\tau_2,\tau_3), (\textrm{resp.}\:b(t-\tau_2),)\\ -b(\tau_3-\tau_2)+b(t-\tau_3),\quad t \in [\tau_3,t_f],(\textrm{resp.}, b(\tau_3-\tau_2)-b(t-\tau_3),) \end{cases}
\end{split}
\end{equation}
with
\begin{equation*}
\tau_1 = - \frac{x_{40}}{b} + \sqrt{\frac{x_{40}^2}{2b^2} - \frac{x_{30} -\bar{x}_3}{b}}, \quad \tau_2 = 2 \tau_1 + \frac{x_{40}}{b}, \quad \tau_3
= \tau_2 + \sqrt{- \frac{x_{3f} - \bar{x}_3}{b}}, \quad t_f = 2 \tau_3 - \tau_2,
\end{equation*}
\begin{multline*}
\Big( \textrm{resp.}, \tau_1 = - \frac{x_{40}}{b} + \sqrt{\frac{x_{40}^2}{2b^2} + \frac{x_{30} - \bar{x}_3}{b}},\quad \tau_2 = 2 \tau_1 + \frac{x_{40}}{b}, \quad \tau_3 = \tau_2 + \sqrt{ \frac{x_{3f} - \bar{x}_3}{b}}, \\ t_f= 2 \tau_3 - \tau_2, \Big)
\end{multline*}
where $\bar{x}_3$ is the maximal (resp., minimal) value of $x_3(t)$, $t\in[0,t_f]$. Moreover, by integration, we have
\begin{equation*}
p_3(t)=p_3(0)+\int_0^t a(p_1\sin x_3(\tau)-p_2\cos x_3(\tau)) \, d\tau = p_3(0)-\frac{ap_2}{\cos \gamma_f}\int_0^t \cos(x_3(\tau)-\gamma_f)\, d\tau,
\end{equation*}
and
\begin{equation} \label{p4t}
p_4(t)=p_4(0)-p_3(0)t+\frac{ap_2}{\cos \gamma_f}\int_0^t \int_0^s \cos(x_3(\tau)-\gamma_f) \, d\tau \,ds.
\end{equation}
Using $p_4(\tau_1)=p_4(\tau_3)=0$, we get
\begin{equation} \label{p30}
p_3(0)=\frac{ap_2}{(\tau_3-\tau_1) \cos \gamma_f} \left( \int_0^{\tau_3} \int_0^s \cos(x_3(\tau)-\gamma_f) \,d\tau\,ds -\int_0^{\tau_1} \int_0^s \cos(x_3(\tau)-\gamma_f) \,d\tau\,ds \right),
\end{equation}
and
\begin{multline} \label{p40}
p_4(0)=\frac{ap_2}{\cos \gamma_f} \Big(\frac{ \tau_1}{(\tau_3-\tau_1)} \int_0^{\tau_3} \int_0^s \cos(x_3(\tau)-\gamma_f) \,d\tau\,ds \\ -\frac{ \tau_3}{(\tau_3-\tau_1)} \int_0^{\tau_1} \int_0^s \cos(x_3(\tau)-\gamma_f) \,d\tau\,ds \Big).
\end{multline}
From $H(0)=0$ and using the transversality condition, we infer $p_1$ and $p_2$ as functions of $\bar{x}_3$ provided $p^0 \neq 0$. Actually, $p^0$ is indeed nonzero: otherwise, using $H(0)=0$, the transversality condition and equations \eqref{p30} and \eqref{p40}, we would infer that $p=0$, which is absurd. We see that, if moreover $x_2(t_f)=x_1(t_f)\tan x_{3f}$, then the trajectories $x(t)$ in $\mathcal{F}_{x_3}$ together with $p(t)$ satisfy all necessary conditions of the PMP.
The value $\bar{x}_3$ can be numerically derived from the condition $x_2(t_f)=x_1(t_f)\tan x_{3f}$, and then $(x(t),p(t))$ is obtained. In fact, for given terminal conditions $x(0)=x_0$, $x_3(t_f)=x_{3f}$ and $x_4(t_f)=0$, $\mathcal{F}_{x_3}$ can be seen as a one-parameter family of trajectories with parameter $\bar{x}_3$. Hence, for any given $\d{ \bar{x}_3 \in (\max(\bar{x}_{30},x_{3f}),x_3^\ast]}$ (resp., $\d{\bar{x}_3 \in [x_3^\ast, \min (\bar{x}_{30},x_{3f}))}$) with $\bar{x}_{30}=x_{30}-\frac{x_{40}^2}{2b}\mathrm{sign} x_{40} $, we have
\begin{equation} \label{gammafx3}
\gamma_f(\bar{x}_3)=\gamma(t_f(\bar{x}_3))=\arctan x_2(t_f(\bar{x}_3))/x_1(t_f(\bar{x}_3)).
\end{equation}
If
\begin{equation} \label{dgammadx3}
\begin{split}
\frac{\partial \gamma_f(\bar{x}_3)} {\partial \bar{x}_3} &= \frac{1}{v} \left( \frac{\partial x_2(t_f(\bar{x}_3))} {\partial \bar{x}_3} \cos \gamma_f - \frac{\partial x_1(t_f(\bar{x}_3))} {\partial \bar{x}_3} \sin \gamma_f \right) \\
&=\frac{1}{v_f t_f} \int_0^{t_f}\left( a \big( T_1 \sin(\bar{x}_3-\gamma_f)+ t_f \cos(\bar{x}_3-\gamma_f)\big) - g_0 T_1\cos \gamma_f \right) dt \\
&=\frac{ 1}{v_f t_f} \int_0^{t_f} \left( a \sqrt{T_1^2+t_f^2} \sin(\bar{x}_3-\gamma_f+\bar{\varphi})- g_0 T_1 \cos \gamma_f \right) dt > 0,
\end{split}
\end{equation}
where
\begin{equation*}
\begin{split}
v_f &= \sqrt{x_1(t_f(\bar{x}_3))^2+x_2(t_f(\bar{x}_3))^2},\quad \bar{\varphi} = \arctan \left( \frac{t_f}{T_1} \right), \\
T_1&=\frac{1}{\sqrt{(\bar{x}_3-x_{30})/b+x_{40}^2/(2b^2)}}+\frac{1}{\sqrt{(\bar{x}_3-\gamma_f)/b}},
\end{split}
\end{equation*}
for all $x(t)$ in $\mathcal{F}_{x_3}$, then $\gamma_f(\bar{x}_3)$ is monotonically increasing with respect to $\bar{x}_3$. Therefore, $\gamma_f(\bar{x}_3)$ reaches its maximum (resp., minimum) when $\bar{x}_3 = x_3^\ast$. In this sense, we obtain the reachable set of $\gamma_f$, parametrized by $\bar{x}_3$.
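To make the one-parameter family $\mathcal{F}_{x_3}$ concrete, the sketch below computes the switching times of an $A_+A_-A_+$ arc from a prescribed $\bar{x}_3$ and evaluates $\gamma_f(\bar{x}_3)$ of \eqref{gammafx3} by a crude explicit Euler integration. All numerical values ($a$, $g_0$, $b$, $v_0$) are illustrative assumptions, not the parameters of Table \ref{sim_param}.

```python
import math

def switch_times(x30, x40, x3f, xbar3, b):
    """Switching times of an A+A-A+ arc with maximal value xbar3 (x40 >= 0)."""
    tau1 = -x40 / b + math.sqrt(x40**2 / (2 * b**2) + (xbar3 - x30) / b)
    tau2 = 2 * tau1 + x40 / b
    tau3 = tau2 + math.sqrt((xbar3 - x3f) / b)
    return tau1, tau2, tau3, 2 * tau3 - tau2

def gamma_f(xbar3, x30, x40, x3f, v0, gamma0, a, g0, b, dt=1e-4):
    """gamma(t_f) of the A+A-A+ trajectory, by explicit Euler integration."""
    tau1, tau2, tau3, tf = switch_times(x30, x40, x3f, xbar3, b)
    x1, x2 = v0 * math.cos(gamma0), v0 * math.sin(gamma0)
    x3, x4, t = x30, x40, 0.0
    while t < tf:
        u = 1.0 if (t < tau1 or t >= tau3) else -1.0
        x1 += a * math.cos(x3) * dt
        x2 += (a * math.sin(x3) - g0) * dt
        x3 += x4 * dt
        x4 += b * u * dt
        t += dt
    return math.atan2(x2, x1), (x3, x4)

# Illustrative data (assumed, not the paper's simulation parameters).
a, g0, b = 30.0, 9.8, 0.3
v0, gamma0, x3f = 1000.0, 1.3, 1.5
x30, x40 = gamma0, 0.0
gf, (x3_end, x4_end) = gamma_f(2.2, x30, x40, x3f, v0, gamma0, a, g0, b)
# The closed-form arc indeed steers (x3, x4) to (x3f, 0):
assert abs(x3_end - x3f) < 1e-2 and abs(x4_end) < 1e-2
```

Since $\gamma_f(\bar{x}_3)$ is monotone under \eqref{dgammadx3}, the value of $\bar{x}_3$ realizing a prescribed terminal flight-path angle can then be found by bisection over $(\max(\bar{x}_{30},x_{3f}),x_3^\ast]$.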
\begin{rem} \label{rem_casedist}
In the anticlockwise case, the trajectories generally take the form $A_+A_-A_+$. However, if the condition \eqref{dgammadx3} is valid, then $\gamma_f(\bar{x}_3)$ achieves its minimum over $[\max(\bar{x}_{30},x_{3f}),x_3^\ast]$ when $\bar{x}_3=\max(\bar{x}_{30},x_{3f})$. Then, if $\gamma_f < \gamma_f(x_{3f})$, the trajectory takes the form $A_-A_+A_-$, and there exists $\hat{x}_3 \in [x_3^\ast,\min(\bar{x}_{30},x_{3f}))$ such that $\gamma_f(\hat{x}_3)=\gamma_0$. Hence, $\bar{x}_3$ takes its value in $\mathcal{D}_{x_3}^{ac}=(\max(\bar{x}_{30},x_{3f}),x_3^\ast] \cup [\hat{x}_3,\min(\bar{x}_{30},x_{3f}))$ for anticlockwise maneuvers. For clockwise maneuvers, $\bar{x}_3$ takes its value in $\mathcal{D}_{x_3}^{c}=(\max(\bar{x}_{30},x_{3f}), \hat{x}_3] \cup [x_3^\ast, \min(\bar{x}_{30},x_{3f}))$, where $\hat{x}_3 \in (\max(\bar{x}_{30},x_{3f}),x_3^\ast]$ is the extremal value such that $\gamma_f(\hat{x}_3)=\gamma_0$.
\end{rem}
The positivity condition \eqref{dgammadx3} is hard to check explicitly; however, it can easily be verified numerically for given terminal conditions. This is why we take it as an assumption. Accordingly, we make the following assumptions throughout this section. The first assumption is that $\tau_1$, $\tau_2$, $\tau_3$ and $t_f$ are nonnegative real numbers. The second assumption is that \eqref{dgammadx3} holds. The third one is that the spacecraft does not crash after the maneuver.
The results of our numerical simulations are consistent with these assumptions:
\begin{itemize}
\item the real numbers $x_{30}$, $x_{40}$, $x_{3f}$ are chosen such that $\tau_1 \geq 0$, $\tau_2 \geq 0$, $\tau_3 \geq 0$ and $t_f >0$;
\item for every $ \bar{x}_3 \in (\max(\bar{x}_{30},x_{3f}),x_3^\ast]$ (resp., $\bar{x}_3 \in [x_3^\ast, \min (\bar{x}_{30},x_{3f}))$) with $\bar{x}_{30}=x_{30}-\frac{x_{40}^2}{2b}\mathrm{sign} x_{40} $, we have
\begin{align*}
\int_0^{t_f} \sin(\bar{x}_3-\gamma_f+\bar{\varphi}) \, dt > \frac{g_0 t_f \cos \gamma_f}{a \sqrt{1+\tan^2\bar{\varphi}} } ;
\end{align*}
\item $x_1(t_f)>0$, $x_2(t_f)>0$.
\end{itemize}
Under these assumptions, we have the following chattering prediction result.
\begin{thm}[Chattering prediction]\label{thm_predic}
Let $x(\cdot) \in \mathcal{F}_{x_3}$ be an optimal trajectory of ${\bf (MTTP)}$ in the flat-Earth case. In the anticlockwise case (resp., in the clockwise case), if
\begin{equation} \label{pitch_cond}
S_C \geq 0\qquad (\textrm{resp., if}\ S_C \leq 0),
\end{equation}
with $S_C$ defined by
\begin{equation} \label{Scdef}
S_C = x_2(t_f(x_3^\ast))-x_1(t_f(x_3^\ast))\tan \gamma_{f},
\end{equation}
where $x_1(t_f(x_3^\ast)) = x_{10} +\int_0^{t_f(x_3^\ast)} a \cos x_3(t) \, dt$, $x_2(t_f(x_3^\ast)) = x_{20} +\int_0^{t_f(x_3^\ast)} (a \sin x_3(t) -g_0) \, dt$, and $x_3(t)$ is calculated from \eqref{pitchup_x3} with $\bar{x}_3=x_3^\ast$, then $x(\cdot)$ does not involve any singular arc.
\end{thm}
\begin{proof}
In the anticlockwise case, if $S_C \geq 0$, then we get from \eqref{gammafx3} and \eqref{Scdef} that $\tan \gamma_f(x_3^\ast) \geq \tan \gamma_f$ provided that $x_1(t_f)>0$ and that $x_2(t_f)>0$. Using that $\gamma_f = x_{3f} \in \mathcal{D}_f$, it follows that $ (\gamma_f(x_3^\ast)-\gamma_0) \geq (\gamma_f -\gamma_0).
$ Since $\partial \gamma_f(\bar{x}_3) / \partial \bar{x}_3 > 0$, we infer from the implicit function theorem that there exists $\bar{x}_3=\mathcal{X}(\gamma_f) \in \mathcal{D}_{x_3}^{ac}$, where $\mathcal{X}(\cdot)$ is a $C^1$ function, such that $\gamma_f(x_3^\ast) \geq \gamma_f(\bar{x}_3)$ and the corresponding trajectory $x(t)$ is an optimal trajectory for ${\bf (MTTP)}$ with terminal value $\gamma_f(\bar{x}_3)$. The proof is similar in the clockwise case.
\end{proof}
\begin{rem} \label{rem_predic}
If \eqref{pitch_cond} is not satisfied, then there are two possible types of solutions: one has more bang arcs, the other has a singular arc with chattering arcs around the singular junctions. The points of $S_2$ actually correspond to the zeros of the second-order time derivative of the switching function ($z_3=0$ in the semi-canonical form), and so the zeros of $S_2$ have an immediate effect on the switching function and allow it to have more switchings. The numerical results show that the additional bang arcs lead to extremals that are closer to the singular surface, with an exponential speed.
\end{rem}
\subsection{Non-flat case $c>0$} \label{sec_normcase}
If $c > 0$, then the analysis of the problem ${\bf (MTTP)}$ becomes more complicated, but we are still able to describe the set of initial data for which optimal trajectories do not have any singular arc, as we will see next. We assume that the condition \eqref{dgammadx3} still holds, i.e., $ \frac{\partial \gamma_f(\bar{x}_3)} {\partial \bar{x}_3} >0, $ and that the real numbers $x_{30}$, $x_{40}$, $x_{3f}$ are chosen such that $\tau_1 \geq 0$, $\tau_2 \geq 0$, $\tau_3 \geq 0$ and $t_f >0$. Assume moreover that the real numbers $v_0$ and $\gamma_0$ are chosen such that the two components of the velocity are positive along the whole trajectory, i.e., $x_1(t)>0$, $x_2(t)>0$, $t\in [0,t_f]$.
Using Table \ref{sim_param}, we have
\begin{equation*}
\begin{split}
&\dot{x}_1(t) \in [a \cos x_3 - c v_{max}^2, a \cos x_3], \\
&\dot{x}_2(t) \in [a \sin x_3 - g_0, a \sin x_3 - g_0 + c v_{max}^2], \\
&\dot{x}_3(t) \in [x_4 - c v_{max}, x_4],
\end{split}
\end{equation*}
where $v_{max}^2 \approx (x_{10}+a T)^2+(x_{20}+a T + a^2 c T^3/3)^2 < v_m^2$. It can be seen that the terms in $c$ in the dynamics cause a decrease of $x_1$ and $x_3$, and an increase of $x_2$. We consider the auxiliary problem
\begin{equation*}
\left\{\begin{split}
& \min t_f \\
& \dot{x}_1= a \cos x_3-c_1 x_1 x_2, \quad \dot{x}_2= a \sin x_3-g_0+c_1x_1^2,\quad \dot{x}_3=x_4-cv_{max},\quad \dot{x}_4=bu, \\
& x(0)=x_0,\quad x_3(t_f)=x_{3f}, \quad x_4(t_f)=0,
\end{split}\right.
\end{equation*}
where $c, c_1\in[0,10^{-6}]$. Similarly to the flat-Earth case, the solutions of this problem, of the form $A_+A_-A_+$ (resp., $A_-A_+A_-$), can be obtained by integrating the dynamical system, by using the control
\begin{equation*}
u(t) = \begin{cases} +1, t\in [0,\tilde{\tau}_1),\quad (\textrm{resp.}, -1)\\ -1, t\in [\tilde{\tau}_1,\tilde{\tau}_2)\cup[\tilde{\tau}_2,\tilde{\tau}_3), \quad (\textrm{resp.}, +1)\\ +1, t\in [\tilde{\tau}_3,\tilde{t}_f], \quad (\textrm{resp.}, -1) \end{cases}
\end{equation*}
with
\begin{equation*}
\begin{split}
&\tilde{\tau}_1= - \frac{(x_{40}-c v_{max})}{b} + \sqrt{\frac{(x_{40}-c v_{max})^2}{2b^2} - \frac{x_{30} - \bar{x}_3}{b}},\quad \tilde{\tau}_2 = 2 \tilde{\tau}_1 + \frac{(x_{40}-c v_{max})}{b}, \\
&\tilde{\tau}_3 = \tilde{\tau}_2 + \sqrt{\frac{(-c v_{max})^2}{2b^2} - \frac{x_{3f} - \bar{x}_3}{b}},\qquad \tilde{t}_f = 2 \tilde{\tau}_3 - \frac{c v_{max}}{b} - \tilde{\tau}_2 .
\end{split}
\end{equation*}
\begin{multline*}
\Big( \textrm{resp.}, \tilde{\tau}_1= - \frac{(x_{40}-c v_{max})}{b} + \sqrt{\frac{(x_{40}-c v_{max})^2}{2b^2} + \frac{x_{30} - \bar{x}_3}{b}},\quad \tilde{\tau}_2 = 2 \tilde{\tau}_1 + \frac{(x_{40}-c v_{max})}{b}, \\ \tilde{\tau}_3 = \tilde{\tau}_2 + \sqrt{\frac{(-c v_{max})^2}{2b^2} + \frac{x_{3f} - \bar{x}_3}{b}},\qquad \tilde{t}_f = 2 \tilde{\tau}_3 - \frac{c v_{max}}{b} - \tilde{\tau}_2 \Big).
\end{multline*}
Let $\tilde{\gamma}_f(x_3^\ast,c,c_1)=\min \big(\gamma(t_f(x_3^\ast),c>0,c_1=0),\gamma(t_f(x_3^\ast),c>0,c_1>0)\big)$ for this problem. Based on the numerical results, we make the following assumptions:
\begin{itemize}
\item $\tilde{\gamma}_f(x_3^\ast,c,c_1) \geq \gamma_f(x_3^\ast)=\gamma_f(t_f(x_3^\ast),c=0,c_1=0)$ in the anticlockwise maneuvers;
\item $\tilde{\gamma}_f(x_3^\ast,c,c_1) \leq \gamma_f(x_3^\ast)=\gamma_f(t_f(x_3^\ast),c=0,c_1=0)$ in the clockwise maneuvers.
\end{itemize}
Under these assumptions, we have the following result.
\begin{cor} \label{cor_predic}
Let $x(\cdot)$ be an optimal trajectory of ${\bf (MTTP)}$ in the non-flat case. Then:
\begin{itemize}
\item for an anticlockwise maneuver, if \eqref{pitch_cond} holds true then $x(\cdot)$ does not have any singular arc;
\item for a clockwise maneuver, if
$$ \tilde{S}_C= x_2(\tilde{t}_f(x_3^\ast),c,c_1)-x_1(\tilde{t}_f(x_3^\ast),c,c_1)\tan \gamma_{f} \leq 0, $$
where $x_2(\tilde{t}_f(x_3^\ast),c,c_1)/x_1(\tilde{t}_f(x_3^\ast),c,c_1)=\tan\tilde{\gamma}_f(x_3^\ast,c,c_1)$, then $x(\cdot)$ does not have any singular arc.
\end{itemize}
\end{cor}
\begin{proof}
For an anticlockwise maneuver (resp. a clockwise maneuver), we have that if $S_C \geq 0$ (resp.
$\tilde{S}_C \leq 0$), then $ 0 \leq \gamma_f \leq \gamma_f(x_3^\ast) \leq \tilde{\gamma}_f(x_3^\ast,c,c_1) , $ (resp., $ 0 \geq \gamma_f \geq \tilde{\gamma}_f(x_3^\ast,c,c_1)$), and thus there exists $\bar{x}_3=\tilde{\mathcal{X}}(\gamma_f)$ such that $ \big(\gamma_f(\bar{x}_3,c>0,c_1>0) - \gamma_0 \big) \leq \big( \tilde{\gamma}_f(x_3^\ast,c,c_1) -\gamma_0 \big), $ (resp., $ \big(\gamma_f(\bar{x}_3,c>0,c_1>0) - \gamma_0 \big) \geq \big( \tilde{\gamma}_f(x_3^\ast,c,c_1) -\gamma_0 \big) $), and its associated trajectory is an optimal solution of the problem ${\bf (MTTP)}$.
\end{proof}
\begin{rem} \label{rem_predi_c}
Similarly to Remark \ref{rem_predic}, in the non-flat case, numerical results show that if the conditions in Corollary \ref{cor_predic} are not satisfied, then the trajectories will have more bang arcs until the singular arc finally appears with chattering type junctions.
\end{rem}
\section{Numerical Results} \label{sec_NumeSimu}
In this section, we compute numerical optimal strategies, for different initial conditions, either by means of a direct method or by means of an indirect one (shooting method). It is important to note that, if the optimal trajectory involves a singular arc and thus has chattering, then the shooting method fails in general. Indeed, the infinite number of switchings may cause a failure in the numerical integration of the dynamical system, and direct methods may therefore be more appropriate to approach chattering. However, since they are based on a discretization, they can only provide a sub-optimal solution of the problem, having a finite number of switchings. In the first subsection, we provide several numerical simulations, where the optimal solutions are computed by means of a shooting method, in situations where the optimal trajectory is known to be bang-bang, without any singular arc, and with a finite number of switchings.
In the second subsection, we describe in more detail sub-optimal strategies, and we provide evidence of their relevance in cases where we have chattering. In our numerical simulations, we consider the initial and final conditions given in Table \ref{cond_init_fina}.
\begin{table}[h]
\centering
\begin{tabular}{|c|c|c|c|c||c|c|c|}
\hline
 & $x_{30}$ & $x_{40}$ & $x_{10}$ & $x_{20}$ & $x_{3f}$ & $x_{4f}$ & $x_{2f}-x_{1f}\tan x_{3f}$ \\ \hline
Anticlockwise & $1.3$ & $0.0$ & $v_0\cos x_{30}$ & $v_0\sin x_{30}$ & $1.5$ & $0.0$ & $0.0$ \\ \hline
Clockwise & $1.5$ & $0.0$ & $v_0\cos x_{30}$ & $v_0\sin x_{30}$ & $1.3$ & $0.0$ & $0.0$ \\ \hline
\multicolumn{8}{|c|}{$\d{v_0=(x_{10}^2+x_{20}^2)^{1/2}}$} \\ \hline
\end{tabular}
\caption{Initial and final conditions.}\label{cond_init_fina}
\end{table}
Here, we set $\gamma(0)=\gamma_0=x_{30}$, meaning that before the maneuver the spacecraft was on a trajectory with angle of attack equal to zero. Recall that when the optimal trajectory contains a singular arc, the extremal is normal, i.e., $p^0 \ne 0$ (see Lemma \ref{lem_singular}). Moreover, in the flat-Earth case, we have seen from the analysis in Section \ref{sec_limicase} that the bang-bang extremals are normal in the case of two control switchings. The argument was based on equations \eqref{p30} and \eqref{p40}. Furthermore, it is not difficult to see that if the control switches at least two times, then the extremals are normal. Therefore, abnormal extremals may only occur whenever the control switches at most once. In the non-flat case, since $c>0$ is very small, we can assume that $p^0 <0$, although abnormal extremals may still exist for certain terminal conditions. Thus, the adjoint vector can be normalized by $p^0=-1$. The results of the numerical simulations are consistent with this assumption.
\subsection{Chattering prediction}\label{sec_NumeChatPred}
In practice, the boundary condition that can take very different values is the initial velocity modulus $v_0$. Hence, we next investigate the influence of $v_0$ on the occurrence of optimal singular arcs.
\paragraph{Flat-Earth case with two switchings.}
If we consider $v_0$ as variable and if we take $c = 0$, then, by solving $S_C = 0$, we get $\bar{v}_{up} = v_0 = 1086.2\,m/s$ (resp., $\bar{v}_{down} = v_0 = 1694.3\,m/s$) for anticlockwise maneuvers (resp., for clockwise maneuvers). When $v_0 \leq \bar{v}_{up}$ (resp., $v_0 \leq \bar{v}_{down}$), we have $S_C \geq 0$ for anticlockwise maneuvers (resp., $S_C \leq 0$ for clockwise maneuvers). In this case, according to Theorem \ref{thm_predic}, there is no singular arc in the optimal solution. Moreover, the maneuver times for both maneuvers are the same, i.e., $t_f = 36.5437\, s$. Using an indirect method (shooting method), we compute the optimal solutions of the problem ${\bf (MTTP)}$, in the absence of a singular arc. Recall that the indirect method does not work when there are chattering arcs. From the prediction above, we should therefore be able to successfully use an indirect method when $v_0\leq\bar{v}_{up}$. We will see in the numerical simulations that the indirect method works when the trajectory consists of three bang arcs, but fails otherwise due to chattering. Figure \ref{compare_1} provides the solutions for two different values of the initial velocity modulus $v_0$ for the anticlockwise case, i.e., $v_0=\bar{v}_{up}=1086.2\,m/s$ (plotted in solid lines) and $v_0=1080\,m/s$ (plotted in dashed lines). Figure \ref{compare_1_clock} shows the solutions of clockwise maneuvers with $v_0=\bar{v}_{down}=1694.3\, m/s$ (plotted in solid lines) and $v_0=1690\, m/s$ (plotted in dashed lines). The red star points represent the touching points of the trajectories with the surface $S_2$ (where $x_3(t)$ touches $x_3^\ast$).
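The chattering-prediction test of Theorem \ref{thm_predic} is easy to evaluate numerically. In the flat-Earth case the dynamics of $(x_1,x_2)$ do not depend on $(x_1,x_2)$, so the increments of $x_1$ and $x_2$ along the $\bar{x}_3=x_3^\ast$ arc are independent of $v_0$; consequently, in the anticlockwise case, $S_C$ is affine and decreasing in $v_0$, and the threshold playing the role of $\bar{v}_{up}$ is explicit. The parameter values below are illustrative assumptions, so the resulting threshold differs from the value $\bar{v}_{up}=1086.2\,m/s$ reported above.

```python
import math

# Illustrative parameters (assumed; not the values of Table sim_param).
a, g0, b = 30.0, 9.8, 0.3
gamma0, gammaf = 1.3, 1.5
x30, x40, x3f = gamma0, 0.0, gammaf
x3_star = gammaf + math.pi / 2           # anticlockwise case

# Switching times of the A+A-A+ arc with xbar3 = x3_star (x40 = 0).
tau1 = math.sqrt((x3_star - x30) / b)
tau2 = 2.0 * tau1
tau3 = tau2 + math.sqrt((x3_star - x3f) / b)
tf = 2.0 * tau3 - tau2

# In the flat-Earth case dx1/dt and dx2/dt depend only on x3(t), so the
# increments D1, D2 accumulated along this arc are the same for every v0.
D1 = D2 = 0.0
x3, x4, t, dt = x30, x40, 0.0, 1e-4
while t < tf:
    u = 1.0 if (t < tau1 or t >= tau3) else -1.0
    D1 += a * math.cos(x3) * dt
    D2 += (a * math.sin(x3) - g0) * dt
    x3 += x4 * dt
    x4 += b * u * dt
    t += dt

def S_C(v0):
    """Chattering-prediction criterion (Scdef) as a function of v0."""
    x1f = v0 * math.cos(gamma0) + D1
    x2f = v0 * math.sin(gamma0) + D2
    return x2f - x1f * math.tan(gammaf)

# S_C(v0) is affine with negative slope -sin(gammaf - gamma0)/cos(gammaf),
# so the threshold separating S_C >= 0 from S_C < 0 is explicit:
v_up = (D2 - D1 * math.tan(gammaf)) * math.cos(gammaf) / math.sin(gammaf - gamma0)
assert v_up > 0
assert S_C(0.99 * v_up) > 0 > S_C(1.01 * v_up)
```

Below this threshold $S_C\geq 0$ and, by Theorem \ref{thm_predic}, no singular arc occurs; the clockwise case is analogous with $x_3^\ast=\gamma_f-\pi/2$ and the criterion $S_C\leq 0$.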
It is shown in Figure \ref{compare_1} that there is no singular arc in the trajectories when $v_0 < \bar{v}_{up}$ (resp., in Figure \ref{compare_1_clock} when $v_0 < \bar{v}_{down}$). The control switches twice, and the $\bar{x}_3$ associated with the dashed line is smaller than $x_3^\ast=x_{3f}+\pi/2$ (resp. bigger than $x_3^\ast=x_{3f}-\pi/2$).
\begin{figure}[h]
\includegraphics[scale=0.6]{compare_1.eps}
\caption{Time history of $x_3$, $x_4$ and $u$ when $v_0=\bar{v}_{up}$ and $v_0=1080\,m/s$.}
\label{compare_1}
\end{figure}
\begin{figure}[h]
\includegraphics[scale=0.6]{compare_1_clock.eps}
\caption{Time history of $x_3$, $x_4$ and $u$ when $v_0=\bar{v}_{down}$ and $v_0=1690\,m/s$.}
\label{compare_1_clock}
\end{figure}
In Remark \ref{rem_predic}, we mentioned that, when the condition \eqref{pitch_cond} is not satisfied, there are more bang arcs until the appearance of a singular arc. We will show next the solutions with more switchings. As remarked, these results will show that the extremals get closer to the singular surface $S$ when more bang arcs are present.
\paragraph{Flat-Earth case with more switchings.}
In fact, we can compute the corresponding value of $v_0$ for optimal controls with different numbers of switchings, in the following way. Let us assume that the optimal control $u$ has $2m$ switchings, $m=1,\cdots,N$, with $u(0)=u_0$ equal to $+1$ or $-1$, i.e.,
\begin{equation*}
u(t) = \begin{cases} u_0, & t\in [\tau_0,\tau_1],\\ -u_0, & t\in [\tau_{4j-3},\tau_{4j-1}], \\ u_0, & t\in [\tau_{4j-1},\tau_{4j+1}], \\ -u_0, & t\in [\tau_{4m-3},\tau_{4m-1}], \\ u_0, & t\in [\tau_{4m-1},\tau_{4m}], \end{cases}
\end{equation*}
with $j=1,\cdots,(m-1)$, $t_0=\tau_0$, $t_f=\tau_{4m}$; then we know that $\varphi(\tau_{2k+1})=p_4(\tau_{2k+1})=0$, $k=0,\cdots,2m-1$.
Here we have additionally $h_1(\tau_{2m})=p_4(\tau_{2m})=0$, because the maximum $v_0$ corresponding to $2m$ switchings happens when $u$ is about to have one more switching between $\tau_{2m-1}$ and $\tau_{2m+1}$. Let $q = \big( x_3(\tau_{2k}) \big)_{k=1,\cdots,2m-1}$ be the variable vector (of dimension $2m-1$). Figure \ref{moreswitches} represents $p_4(t)$, $p_3(t)$, $x_4(t)$ and $x_3(t)$ for an anticlockwise maneuver with $m=3$; the variable $q=(q_1,\cdots,q_5)$ is of dimension $2m-1=5$.
\begin{figure}[h]
\centering
\includegraphics[scale=0.8]{moreswitches.eps}
\caption{Example of trajectory associated with optimal control of $6$ switchings.}
\label{moreswitches}
\end{figure}
Using \eqref{p4t}, we derive $2m-1$ constraints on $q$ without the adjoint vector $p$, i.e.,
\begin{equation} \label{qeqn}
\frac{\tau_{k_1} - \tau_{k_2}}{\tau_{k_3}-\tau_{k_4}} = \frac{\int_0^{\tau_{k_1}} \int_0^\tau \cos(x_3(s)-\gamma_f) \,ds\,d\tau-\int_0^{\tau_{k_2}} \int_0^\tau \cos(x_3(s)-\gamma_f) \,ds\,d\tau} {\int_0^{\tau_{k_3}} \int_0^\tau \cos(x_3(s)-\gamma_f) \,ds\,d\tau - \int_0^{\tau_{k_4}} \int_0^\tau \cos(x_3(s)-\gamma_f) \,ds\,d\tau},
\end{equation}
where $k_1,k_2,k_3,k_4 \in \{2k+1\mid k=0,\cdots,2m-1\} \cup \{2m\}$ and $k_1 \neq k_2$, $k_3 \neq k_4$. Note that at least one of these equations must involve $\tau_{2m}$. Since $x_3(\tau_{2k})$, $k=1,\cdots,2m-1$ are local extrema, we must have $x_4(\tau_{2k})=0$, $k=1,\cdots,2m-1$. By integrating the system from $x(0)=x_0$ and requiring that
\begin{align*}
& x_4(\tau_{2k})=0, \quad k=1,\cdots,2m-1,\\
& x_3(\tau_{2k})=q, \quad k=1,\cdots,2m-1,\\
& x_3(t_f)=x_{3f},\quad x_4(t_f)=0,
\end{align*}
we can parametrize the $\tau_k$, $k=1,\cdots,4m$ by $q$, and hence also the trajectories $x_3(t)$ and $x_4(t)$, which are parametrized by the $\tau_k$, $k=1,\cdots,4m$.
More precisely, we have
\begin{align*}
& \tau_1 = -\frac{x_{40}}{b} + \sqrt{\frac{x_{40}^2}{2b^2}+\frac{|q(1)-x_{30}|}{b}},\quad \tau_2 = \tau_1 + \sqrt{\frac{x_{40}^2}{2b^2}+\frac{|q(1)-x_{30}|}{b}},\\
& \tau_{2k+1} = \tau_{2k} + \sqrt{ \frac{|q(k)-q(k+1)|}{b}},\quad \tau_{2k+2} = \tau_{2k+1} + \sqrt{ \frac{|q(k)-q(k+1)|}{b}},\quad k=1,\cdots,2m-2,\\
& \tau_{4m-1} = \tau_{4m-2} + \sqrt{\frac{|q(2m-1)-x_{3f}|}{b}},\quad \tau_{4m} = \tau_{4m-1} + \sqrt{\frac{|q(2m-1)-x_{3f}|}{b}}.
\end{align*}
Hence, we can get the value of $q$ by solving \eqref{qeqn}. Then, taking $v_0$ as the unknown and $\gamma(t_f)=\gamma_f$ as the shooting function, we can derive the maximum $v_0$ that can be used when we expect the control to have $2m$ switchings. Using this method, we get that, in the anticlockwise case, when $v_0 \in (\bar{v}_{up},1183.4 ]\, m/s$, the control $u(t)$ has two switchings. When $v_0 \in (1183.4,1999.3]\, m/s$, the control $u(t)$ has four switchings. Then, when $v_0 \in (1999.3,2132.1]\,m/s$, the control $u(t)$ has six switchings.
\begin{figure}[h]
\centering
\includegraphics[scale=0.45]{switchfunc_4t.eps}
\caption{Switching function $\varphi(t)$ when $v_0=1999.3\,m/s$ in the anticlockwise case.}
\label{4swiches}
\end{figure}
Figures \ref{4swiches} and \ref{6swiches} give the time history of the switching function $\varphi(t)=h_1(t)$ when $v_0=1999.3\,m/s$ and $v_0=2132.1\,m/s$, respectively. Observing the zoom-in windows of the figures, we see that the switching function is almost equal to zero when $t \in [22,26]\,s$ and $t \in [22,28]\,s$, respectively. This implies that the associated extremals are very close to the singular surface $S$ along these time intervals. These figures also show that the additional bang arcs rapidly lead the extremals closer to the singular surface $S$ (see Remark \ref{rem_predic}).
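The parametrization of the $\tau_k$ by $q$ can be checked directly. The sketch below implements the formulas above for $x_{40}=0$ and $m=2$, then integrates the double integrator $(\dot{x}_3,\dot{x}_4)=(x_4,bu)$ with switchings at $\tau_1,\tau_3,\ldots,\tau_{4m-1}$, and verifies that $x_3$ passes through the prescribed extrema $q$ and satisfies the terminal conditions. The values of $b$, $x_{30}$, $x_{3f}$ and $q$ are illustrative assumptions.

```python
import math

def taus_from_q(q, x30, x40, x3f, b):
    """Times tau_1..tau_{4m} parametrized by the extrema vector q (x40 >= 0)."""
    m = (len(q) + 1) // 2
    r = math.sqrt(x40**2 / (2 * b**2) + abs(q[0] - x30) / b)
    taus = [-x40 / b + r, -x40 / b + 2 * r]
    for k in range(2 * m - 2):
        d = math.sqrt(abs(q[k] - q[k + 1]) / b)
        last = taus[-1]
        taus += [last + d, last + 2 * d]
    d = math.sqrt(abs(q[-1] - x3f) / b)
    last = taus[-1]
    taus += [last + d, last + 2 * d]
    return taus

def simulate(taus, u0, x30, x40, b, dt=1e-4):
    """Euler integration of (x3, x4); the control switches at tau_1, tau_3, ..."""
    switches = taus[0::2]                # tau_1, tau_3, ..., tau_{4m-1}
    extrema_times = taus[1::2][:-1]      # tau_2, tau_4, ..., tau_{4m-2}
    x3, x4, t, extrema, idx = x30, x40, 0.0, [], 0
    while t < taus[-1]:
        u = u0 if sum(s <= t for s in switches) % 2 == 0 else -u0
        x3 += x4 * dt
        x4 += b * u * dt
        t += dt
        if idx < len(extrema_times) and t >= extrema_times[idx]:
            extrema.append(x3)           # x3 at a prescribed local extremum
            idx += 1
    return x3, x4, extrema

# Illustrative data (assumed): pattern A+A-A+A-A+, i.e. m = 2, u0 = +1.
b, x30, x40, x3f = 0.3, 1.3, 0.0, 1.5
q = [1.7, 1.45, 1.65]                    # prescribed local extrema of x3
x3_end, x4_end, extrema = simulate(taus_from_q(q, x30, x40, x3f, b),
                                   1.0, x30, x40, b)
assert abs(x3_end - x3f) < 1e-2 and abs(x4_end) < 1e-2
assert all(abs(e - qi) < 1e-2 for e, qi in zip(extrema, q))
```

With $x_{40}=0$ the first two formulas reduce to $\tau_1=\sqrt{|q_1-x_{30}|/b}$ and $\tau_2=2\tau_1$.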
\begin{figure}[h]
\centering
\includegraphics[scale=0.45]{switchfunc_6t.eps}
\caption{Switching function $\varphi(t)$ when $v_0=2132.1\,m/s$ in the anticlockwise case.}
\label{6swiches}
\end{figure}
Note that when $u(t)$ has $2m$ switchings, the trajectory of $x_3(t)$ has between $\max(0,2m-2)$ and $2m$ contact points with the surface $S_2$. Figure \ref{compare_2} shows the comparison of the solutions with $v_0=1350\,m/s$ (solid line) and $v_0=1683\,m/s$ (dashed line). They both belong to the four-switching case, i.e., $m=2$. We can see that the solid line touches the surface $S_2$ twice, while the dashed line touches it four times.
\begin{figure}[h]
\includegraphics[scale=0.6]{compare_2.eps}
\caption{Time history of $x_3$, $x_4$ and $u$ when $v_0=1350\,m/s$ and $v_0=1683\,m/s$.}
\label{compare_2}
\end{figure}
\medskip
\paragraph{Non-flat case.}
When $c >0$, according to Corollary \ref{cor_predic}, there does not exist any singular arc for anticlockwise maneuvers when $v_0 \leq \bar{v}_{up}$. For clockwise maneuvers, if $v_0 \leq \tilde{v}_{down} = 1624.3\,m/s$, then there is no singular arc (this condition is obtained by solving $\tilde{S}_C=0$ with $c=0$ and $c_1=10^{-6}$). The assumptions are also verified. In Figure \ref{compare_3}, setting $v_0=\bar{v}_{up}$, we compare in the anticlockwise case the solution with $c>0$ (plotted with solid lines) and the solution with $c=0$ (plotted with dashed lines). The trajectory $x_3(t)$ in the flat-Earth case in fact reaches the surface $S_2$ in less time than in the non-flat case.
\begin{figure}[h]
\includegraphics[scale=0.6]{compare_3.eps}
\caption{Time history of $x_3$, $x_4$ and $u$ when $c=0$, $c=10^{-6}$ and $v_0=\bar{v}_{up}$.}
\label{compare_3}
\end{figure}
Let $v_0=\tilde{v}_{down}=1624.3\, m/s$. Figure \ref{compare_4} gives a comparison in the clockwise case of the solution with $c>0$ (plotted with solid lines) and the solution with $c=0$ (plotted with dashed lines).
Neither trajectory touches the surface $S_2$, and the trajectory in the non-flat case gets ``closer'' to $S_2$. The control switches twice and there is no singular arc.
\begin{figure}[h]
\includegraphics[scale=0.5]{compare_4.eps}
\caption{Time history of $x_3$, $x_4$ and $u$ when $c=0$ and $c=10^{-6}$.}
\label{compare_4}
\end{figure}
In other simulations, we also observe that when $v_0<\tilde{v}_{down}$, the optimal control only has two switchings and $x_3$ does not reach $x_3^\ast$. However, when $v_0>\tilde{v}_{down}$, new bang arcs appear and the trajectory tends to have chattering arcs. These results illustrate Corollary \ref{cor_predic} and Remark \ref{rem_predi_c}.
\subsection{Sub-optimal strategies}\label{sec_SuboSolu}
Let $N$ be a positive integer. We consider a subdivision $0=t_0\leq t_1\leq\cdots\leq t_N=t_f$ of the interval $[0,t_f]$ (where the $t_i$ are unknown), and we consider piecewise constant controls over this subdivision, thus enforcing the control to switch at most $N$ times. We consider the optimal control problem ${\bf (MTTP)}$ with this restricted class of controls, which we denote by ${\bf (MTTP)}_N$. Solving this problem provides what we call a \emph{sub-optimal strategy} (with at most $N$ switchings), because the optimal value of ${\bf (MTTP)}_N$ must be greater than or equal to the optimal value of ${\bf (MTTP)}$. Moreover, we expect that ${\bf (MTTP)}_N$ $\Gamma$-converges to ${\bf (MTTP)}$ as $N\rightarrow+\infty$, meaning, in particular, that the optimal value of ${\bf (MTTP)}_N$ converges to that of ${\bf (MTTP)}$. We will come back to this issue later. As in classical direct methods in optimal control, we propose to solve numerically the problem ${\bf (MTTP)}_N$, where the unknowns are the nodes $t_i$ of the subdivision and the values $u_i$ of the control over each interval $(t_i,t_{i+1})$. More conveniently, instead of considering the switching times $t_i$ as unknowns, we consider the durations $t_{i+1}-t_i$ as unknowns.
Note that these durations may be equal to $0$. The control is kept constant along each interval of the subdivision, but in order to discretize the state in a finer way, we consider another (much) finer subdivision to compute the discretized state. We solve the resulting optimization problem by using \texttt{IPOPT} (see \cite{IPOPT}) combined with the modeling language \texttt{AMPL} (see \cite{Fourer}).
\paragraph{Numerical results for anticlockwise maneuvers.}
We consider first the case of anticlockwise maneuvers. Let $v_0=3000\,m/s$. For $N=500$, the numerical optimal solution of ${\bf (MTTP)}_N$ is provided in Figures \ref{control_optimal} and \ref{state_optimal}. This simulation provides numerical evidence of the fact that we have a singular arc for $t \in [25.7,28.1] \, s$, with a chattering phenomenon at the junction points with the singular arc (see Figure \ref{control_optimal}, on the right, where a zoom is made on those points). The singular control takes values in $[-0.0016,-0.0013]$. Moreover, the coordinates $x_3(t)$ and $x_4(t)$ oscillate around $x_3=x_3^{\ast}$ and $x_4=0$, respectively, and the coordinates $x_1(t)$ and $x_2(t)$ oscillate around a straight line in the vicinity of the singular arc of the flat-Earth case. This indicates that the singular arc of the non-flat case does not vary much from that of the flat-Earth case.
\begin{figure}[h]
\centering
\includegraphics[scale=0.6]{control_bocop_up.eps}
\caption{Control $u(t)$ in anticlockwise maneuver.}
\label{control_optimal}
\end{figure}
\begin{figure}[h]
\includegraphics[scale=0.6]{state_bocop_up.eps}
\caption{State variable $x(t)$ in anticlockwise maneuver.}
\label{state_optimal}
\end{figure}
\paragraph{Numerical results for clockwise maneuvers.}
For clockwise maneuvers, still taking $N=500$, the numerical optimal solution of ${\bf (MTTP)}_N$ is provided in Figures \ref{control_optimal_2} and \ref{state_optimal_2}.
By comparing the clockwise maneuver in Figures \ref{control_optimal_2} and \ref{state_optimal_2} with the anticlockwise maneuver in Figures \ref{control_optimal} and \ref{state_optimal}, we see that, when $x_4(0)=0$, to realise the same $|\gamma_f-\gamma_0|$, one needs $62.43\,s$ in the anticlockwise case and only $59.26\,s$ in the clockwise case. \begin{figure}[h] \centering \includegraphics[scale=0.6]{control_bocop_down.eps} \caption{Optimal control in clockwise maneuver.} \label{control_optimal_2} \end{figure} \begin{figure}[h] \includegraphics[scale=0.6]{state_bocop_down.eps} \caption{State $x(t)$ in clockwise maneuver.} \label{state_optimal_2} \end{figure} \paragraph{$\Gamma$-convergence of ${\bf (MTTP)}_N$ towards ${\bf (MTTP)}$.} It seems natural to expect that, as $N\rightarrow +\infty$, the solution of ${\bf (MTTP)}_N$ converges to the solution (if it is unique) of ${\bf (MTTP)}$. At least, $\Gamma$-convergence is expected. Such an analysis is beyond the scope of the present paper; however, it is interesting to provide numerical simulations, for an anticlockwise maneuver, with several values of $N$: \begin{equation*} N\in\{6,8,10,12,14,16,18,20,30,40,50,100,200,300,400\}. \end{equation*} Figure \ref{fig_uN} provides the numerical optimal control obtained for ${\bf (MTTP)}_N$. We observe that, as $N$ becomes larger, the optimal control seems to converge to its expected limit, that is, the optimal control of ${\bf (MTTP)}$ with a singular arc and chattering. In Figure \ref{fig_tN}, we have reported the values of the maneuver time as a function of $N$. We observe that they seem to decrease exponentially with respect to $N$. This numerical observation is important because, in practice, it means that it is not necessary to take $N$ too large. Even with quite small values of $N$, the minimal time obtained for ${\bf (MTTP)}_N$ seems to be very close to the minimal time for ${\bf (MTTP)}$.
Hence the sub-optimal strategy seems to be a very good solution in practice, to bypass the problems due to chattering. \begin{figure}[h] \includegraphics[scale=0.55]{u_N.eps} \caption{Control $u(t)$ with different discretization step $N$.} \label{fig_uN} \end{figure} \begin{figure}[h] \centering \includegraphics[scale=0.6]{T_N.eps} \caption{Maneuver time $t_f$ with respect to the discretization step $N$.} \label{fig_tN} \end{figure} We conclude with the following conjecture. \medskip \noindent\textbf{Conjecture.} With obvious notations, we denote by $(x^N(\cdot),u^N(\cdot),t_f^N)$ the optimal solution of ${\bf (MTTP)}_N$, and by $(x(\cdot),u(\cdot),t_f)$ the optimal solution of ${\bf (MTTP)}$ (assuming that they are unique). Then $t_f^N\rightarrow t_f$ exponentially, $x^N(\cdot)\rightarrow x(\cdot)$ in $C^0$-topology, and $u^N(\cdot)\rightarrow u(\cdot)$ in $L^1$-topology, as $N\rightarrow +\infty$. \begin{rem} Such convergence properties have been established in \cite{HT,ST}, but for problems not involving any singular arc. Here, the difficulty of establishing such a result (in particular, for the control) is in the presence of an optimal singular arc. \end{rem} \begin{rem} These simulations were done by using hot-restart, that is, by using the solution of the problem ${\bf (MTTP)}_N$ to initialize the problem with a larger value of $N$. \end{rem}
\section{Introduction} An understanding of physiological time series such as heart-beat intervals is important to many areas, such as heart-attack prediction, cardiovascular health, and sport and exercise. The study of time series can reveal underlying mechanisms of the physiological system, which usually contains both deterministic and stochastic components. At the same time, the analysis is complicated by the nonlinear and non-stationary characteristics of physiological time series data. Over the past years, time series analysis methods have been applied to quantify physiological data for identification and classification \citep{Kantz, Schreiber}. The applications of physiological time series analysis commonly focus on measuring different aspects of time series data such as complexity, regularity, predictability, dimensionality, randomness, and self-similarity. The tools used in these techniques include, but are not restricted to, the mean, standard deviation, Fourier transform, wavelets, entropy, fractal dimension, and pattern detection \citep{Kantz2,Tong}. Recently a new mathematical tool, empirical mode decomposition (EMD), was proposed by Norden Huang and his collaborators \citep{Huang, Huang2}. It decomposes a time series into a finite sum of intrinsic mode functions (IMF) that generally admit well-behaved Hilbert transforms. This decomposition is based on the local characteristic time scale of the data, which makes EMD applicable to the analysis of nonlinear and non-stationary signals. EMD and the Hilbert transform together, called the Hilbert-Huang transform (HHT), usually make it possible to construct meaningful time-frequency representations of signals using the instantaneous frequency of the data. EMD and HHT have been applied with great success in many application areas such as the biological and medical sciences, geology, astronomy, engineering, and others \citep{Huang,Chen,Echeverria,Huang2,Pines,Liu}.
Another interesting set of examples is the work of L. Yang and his collaborators, who have successfully applied EMD-based techniques to texture analysis and Chinese handwriting recognition \citep{Yang,Yang2,Zheng}. The main purpose of this paper is to develop a new approach for the analysis of physiological time series. Our approach is motivated by two intuitions and coupled with modern machine learning techniques. The first intuition comes from the belief that a physiological system should contain a deterministic part that reflects the basic mechanism for the system to survive and a stochastic part that represents the variability of resilience. Mathematically they can be represented by the low frequency and high frequency components of a physiological signal. This motivates the application of methods that decompose signals into various components according to frequency in the quantitative analysis of physiological time series. Examples include the Fourier transform, wavelets, and EMD. In our method we will use an iterative convolution filter, which is an alternative to EMD. The second intuition comes from a statistical perspective on irregularity. Many studies have shown that normal physiological systems exhibit irregularity due to the existence of stochastic components, while a decrease of irregularity usually implies abnormality. From a statistical perspective, the irregularity of a data set is represented by its ``outliers''. This motivates us to study the statistics of outliers in physiological time series. However, we must be careful in doing so. Practical physiological time series usually contain noise, which may also appear as outliers. We have to guarantee that the ``outliers'' we examine are not pure noise. This is possible because outliers that are pure noise carry no informative structure and can be detected. The second intuition is the motivation for our feature construction in Section \ref{sec:feature}.
These two intuitions enable us to decompose the physiological time series and construct features for our quantitative analysis. Combining them with well-established feature selection techniques in machine learning, we can remove the redundancy of the features and find relevant statistics for the classification of physiological time series. Support vector machine recursive feature elimination (SVM-RFE) is suggested in this paper for linear classification problems. The details of our approach are described in Section \ref{sec:method}. We will use our approach to analyze heart beat interval time series and study the congestive heart failure problem. The study of heart diseases such as congestive heart failure by using heart beat interval time series has a long history. Decrease of heart rate variability or cardiac chaos has been found in congestive heart failure \citep{poon1997decrease, casolo1989decreased}. In the literature, many methods and metrics have been proposed to analyze the difference between the heart rate time series of healthy people and congestive heart failure patients, to name a few, detrended fluctuation analysis \citep{peng1995quantification}, multifractality \citep{ivanov1999multifractality}, multiscale entropy \citep{med1}, and hierarchical entropy \citep{jiang2011hierarchical}. Our approach is different from the methods in the literature. It incorporates advanced machine learning techniques and allows the data ``to speak for itself.'' In applying our approach, our purposes are twofold: the first is to build good classifiers to enable good diagnosis; the second is to find what kind of irregularity is associated with heart health. The results and discussions are summarized in Section \ref{sec:experiment}. The novelty of our method lies mainly in the following two points. Firstly, although we decompose the time series into components of different frequencies, we do not compare them in the frequency domain.
Secondly, we show that the outliers in a physiological time series are usually not pure noise but are informative instead. Interestingly, although this idea is motivated by physiological time series analysis, it has also been found successful in the stylometry analysis of artworks \citep{hughes2012empirical}. \section{Method} \label{sec:method} \subsection{Signal decomposition} Let $L$ be a low pass filter. Denote by $T$ the weak limit of the operator $(I-L)^n$ as $n\to \infty$, i.e., for a discrete signal $X$ and time $t$ $$T(X)(t) = \lim_{n\to \infty} (I - L)^n (X)(t).$$ Using this operator iteratively, a signal $X$ can be decomposed as follows: Let $F_1 = T(X)$ and for $k\ge 2$, $$F_k = T\left(X-\sum_{i=1}^{k-1} F_i\right).$$ After $m$ steps we get $F_1, \ldots, F_m$, which we call mode functions, and the residual $$R = X-\sum_{i=1}^m F_i.$$ Then we have $$ X = F_1 + F_2 + \ldots +F_m +R.$$ In this decomposition, roughly speaking, the earlier mode functions are noise or high frequency components, the later mode functions are low frequency components, and $R$ is the trend. This procedure follows the spirit of the traditional EMD introduced in \cite{Huang}. In the traditional EMD, the low pass filter $L$ is chosen as the average of the upper envelope (the cubic spline connecting the local maxima) and the lower envelope (the cubic spline connecting the local minima). This method, although successfully used in many applications, lacks a theoretical foundation and has its limitations. In \cite{Lin} a new approach is proposed. In this new approach the low pass filter is a moving average generated by a mask $\mathbf a = (a_j)_{j=-N}^{N}$ that gives $L(X)$ as the convolution of $\mathbf a$ and $X$, i.e., $$L(X) (t) = \sum_{j=-N}^N a_j X(j+t).$$ With this choice of $L$ we call the operator $T$ an iterative convolution filter. A rigorous mathematical foundation and convergence analysis is given in \cite{Lin, Wang2}.
Note that the mask $\mathbf a$ is finitely supported on $[-N, N]$, where $N$ is called the window size. The flexibility to choose the window size is crucial in applications and is a main advantage of this method. As in decompositions by many other methods such as the Fourier transform and wavelets, the trend and low frequency components are usually assumed to characterize the profile of the signal, while the high frequency components characterize the details. Different applications need the features of different components. \subsection{Feature extraction} \label{sec:feature} After decomposing the signal into the mode functions and the trend, we need to extract statistics that can characterize the essential features of these components. This step requires a priori knowledge of the problem under consideration. Such knowledge could be rather weak, but without any a priori knowledge it is difficult to choose proper statistics. Also, this step is strongly problem dependent. In the following let us use the heart-beat intervals as an example to illustrate how to construct the features. In this application, each time series is a record of heart beat intervals over 24 hours \citep{med1}. It is first decomposed into several mode functions. To extract the features, for each mode function $F_i,$ we first compute its mean $m_i$ and standard deviation $\sigma_i$. Previous studies \citep{poon1997decrease, casolo1989decreased,med1} show that the healthy heart beats more irregularly than the unhealthy heart. This motivates us to design statistics that measure the irregularity. To this end, we consider the terms that are larger than $m_i+\sigma_i$ and find their mean $m_{i,1}$ and standard deviation $\sigma_{i,1}$. We also find the mean $m_{i,2}$ and standard deviation $\sigma_{i,2}$ of the terms that are larger than $m_i+2\sigma_i$.
Symmetrically, we also compute the mean $m_{i, -1}$ and standard deviation $\sigma_{i,-1}$ of those terms that are smaller than $m_i-\sigma_i$, and the mean $m_{i, -2}$ and standard deviation $\sigma_{i,-2}$ of those terms that are smaller than $m_i-2\sigma_i.$ This procedure gives us 10 statistics. Note that all the terms lying outside one or two standard deviations are in some sense ``outliers'', and it is natural to use their statistics ($m_{i,j}$ and $\sigma_{i,j}$ for $j=1,2,-1,-2$) to characterize the irregularity. Next we consider the time series $U_i$ composed of the local maxima of $F_i$ and the time series $L_i$ composed of the local minima of $F_i$. These two series measure the local amplitude. For each series we compute the 10 statistics by the same procedure as for $F_i$. Therefore for each mode function $F_i$ we get 30 statistics. Unlike in \citep{med1}, we use the whole 24-hour heart beat time series and assume we do not know the periods of the different activities such as sleeping and walking. We think the statistics for different periods should be different, and not all of them represent the difference between healthy and unhealthy people. This motivates the idea of splitting the whole time series into subseries. Suppose we split the time series into $K$ subseries for each subject. Correspondingly, we also split each mode function $F_i$ into $K$ subcomponents, denoted by $F_{i}^k,\ k=1,\ldots, K.$ For each subcomponent $F_{i}^k$, we compute the 30 statistics as above: 10 for $F_i^k$ itself, 10 for the local maxima $U_i^k,$ and 10 for the local minima $L_i^k.$ For each $i$ and each statistic, we have $K$ values from the $K$ subcomponents. We compute the mean, the first quartile (the 25th percentile), and the third quartile (the 75th percentile) of these $K$ values to obtain 3 features. This gives 90 features. So for each mode function $F_i$ we get 120 features in total.
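The per-series statistics described above can be sketched as follows. This is our own illustration (helper names are ours, not from any published code); empty outlier sets are mapped to $0$ so the feature vector stays well defined.

```python
import numpy as np

# Sketch of the 10 "outlier" statistics for one series (a mode function F_i
# or its local-extrema series U_i, L_i); names follow the text.

def outlier_stats(x):
    m, s = np.mean(x), np.std(x)
    feats = [m, s]
    for j in (1, 2, -1, -2):
        sel = x[x > m + j * s] if j > 0 else x[x < m + j * s]
        feats += [np.mean(sel) if sel.size else 0.0,
                  np.std(sel) if sel.size else 0.0]
    return feats                       # 10 statistics in total

def local_extrema(x):
    """Series U of local maxima and L of local minima of x."""
    mid = x[1:-1]
    u = mid[(mid > x[:-2]) & (mid > x[2:])]
    l = mid[(mid < x[:-2]) & (mid < x[2:])]
    return u, l

def mode_features(f):
    u, l = local_extrema(f)
    return outlier_stats(f) + outlier_stats(u) + outlier_stats(l)  # 30 per mode
```

Applying `mode_features` to each of the $K$ subcomponents and then taking the mean and the two quartiles across $k$ reproduces the 120-feature count stated above.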
For physiological signals, we believe the trend and low frequency components are determined by the fundamental mechanism, while the individual differences should be reflected by the high frequency components. In case we do not have much knowledge about the disease to be diagnosed, we may assume the features may also come from the trend. So the same 120 statistics are also computed for the trend component. To represent these features, we introduce the notations for the statistics and three subscripts to indicate how each statistic is calculated. The detailed descriptions are listed in Table \ref{table:notation}. \begin{table}[h] \begin{center} \begin{tabular}{l|c|l} \hline\hline Type & Notations or Values & Description \\ \hline\hline \multirow{8}{*}{Statistics} & $m$ & mean of the time series \\ & $\sigma$ & standard deviation of the time series \\ & $mm$ & mean of subcomponent means \\ & $m\sigma$ & mean of subcomponent standard deviations \\ & $qm$ & 1st quartile of subcomponent means \\ & $q\sigma$ & 1st quartile of subcomponent standard deviations \\ & $Qm$ & 3rd quartile of subcomponent means \\ & $Q\sigma$ & 3rd quartile of subcomponent standard deviations \\ \hline\hline \multirow{2}{*}{Subscript 1} & $i=1, 2, \ldots, m$ & Statistics computed from $F_i$. \\ & $i=R$ & Statistics computed from $R$. \\ \hline\hline \multirow{3}{*}{Subscript 2} & $j=0$ or omitted & Statistics for the whole series or subseries.\\ & $j=+1$ or $+2$ & Statistics for the terms greater than $m+j\sigma$.\\ & $j=-1$ or $-2$ & Statistics for the terms less than $m-|j|\sigma$.
\\ \hline\hline \multirow{3}{*}{Subscript 3} & $0$ or omitted & Statistics computed from $F_i$ or $R$.\\ & $L$ & Statistics computed from local minima.\\ & $U$ & Statistics computed from local maxima.\\ \hline\hline \end{tabular} \caption{\label{table:notation} The notations for the features.} \end{center} \end{table} \subsection{Feature subset selection} After the above two steps we obtain a large number of features for the data. Usually only a small fraction of them are related to the diagnosis and the physiological mechanism of the disease. The task of the third step is to find the relevant ones. This is realized by eliminating the irrelevant ones step by step. Firstly, if a statistic is almost constant, it is useless for the diagnosis and should be eliminated. For example, the means of the mode functions $m_i$ are all approximately zero and should be eliminated. Next we use the SVM-RFE method \citep{SVMRFE} to rank the features. In this method, given a set of training samples, we first train a linear SVM to obtain a linear classifier and then rank the features according to their weights. Because of the large number of features and the small number of training samples, the classifier might not be accurate. Also, high correlation between the features may cause relevant features to have small weights. These issues could make the ranking inaccurate. In order to refine the ranking, we eliminate the least important feature and repeat the process to re-rank the remaining features. Running this process iteratively, we finally get the refined ranking of the features. With this ranking of the features we can conclude which statistics are useful for the diagnosis and characterize the essence of the underlying physiological mechanism. Good classifiers can then be built to make accurate diagnoses. \section{Experiments and Results} \label{sec:experiment} In this section we apply the new method described above to heart beat interval time series and report our results and conclusions.
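The recursive elimination loop just described can be sketched as follows. To keep the sketch dependency-free, an ordinary least-squares linear classifier stands in for the linear SVM, so this is our illustration of the elimination loop, not of SVM-RFE itself.

```python
import numpy as np

# Sketch of recursive feature elimination: fit a linear classifier,
# drop the feature with the smallest |weight|, and repeat until all
# features are ranked.

def fit_weights(X, y):
    A = np.column_stack([X, np.ones(len(X))])     # add an intercept column
    w, *_ = np.linalg.lstsq(A, y, rcond=None)
    return w[:-1]                                 # weights without intercept

def rfe_rank(X, y):
    remaining = list(range(X.shape[1]))
    eliminated = []
    while len(remaining) > 1:
        w = fit_weights(X[:, remaining], y)
        worst = int(np.argmin(np.abs(w)))         # least important feature
        eliminated.append(remaining.pop(worst))
    return remaining + eliminated[::-1]           # most relevant feature first
```

On synthetic data where only one feature determines the labels, the loop ranks that feature first, which is the behavior the refined ranking is meant to capture.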
\subsection{The data set} The data set includes the heart beat interval time series of 72 healthy people and 43 congestive heart failure (CHF) patients. For each subject the heart beat interval is measured for 24 hours under various activities. In our experiment we assume the activity periods are not known. The $43$ CHF patients are classified into 4 degrees, where degree I is slight CHF and degree IV is severe CHF. Most CHF patients are of degree III. \subsection{A primary study}\label{sec:primary} Before using our new method, we study the classification ability of two simple statistics: the mean and the variance. In Figure \ref{fig:meanvar} we plot the mean and variance of the heart beat intervals for the healthy people and the CHF patients. We see that the healthy people and the CHF patients can be roughly separated. The average heart beat interval of healthy people is larger and so is the variance. This shows that the heart of a healthy person beats more slowly and more irregularly. This observation is consistent with previous studies. \begin{figure} \begin{center} \includegraphics[width = .7\textwidth]{meanvar.pdf} \end{center} \caption{The mean and variance (in seconds) of heart beat interval time series, `o' for healthy subjects and `*' for CHF patients. \label{fig:meanvar} } \end{figure} At the same time, we notice that several CHF patients falling into the cluster of healthy people turn out to be severe CHF patients. So we conjecture that the mean and variance might not reflect the essence of the underlying mechanism, although they have good separability. \subsection{Experiment: feature extraction} For each time series, we use the iterative convolution filter to carry out the signal decomposition. In this step we need to specify the window size of the mask. It turns out that it should be chosen between 50 and 100 for the results to be stable. In our experiment it is chosen to be $50.$ We then calculate the statistics proposed in Section \ref{sec:feature}.
Here we need to specify the parameter $K$, the number of subseries. If a statistic really captures the essence of the data set, it should be stable and independent of the choice of $K$, provided $K$ is chosen within a reasonable interval. Our experiments show that $K=50$ is a good choice. Most heart beat signals were recorded for a little more than 24 hours, so when $K=50$, each subseries is around 30 minutes of record. Previous studies have shown that the healthy heart beats irregularly. In statistics, irregularity can be measured by statistics of ``outliers'' that are not due to noise. This motivates us to consider the statistics of upward and downward fluctuations. At the same time, from the study in Section \ref{sec:primary} we find that a healthy heart beats more slowly than an unhealthy heart on average. These two intuitions lead us to conjecture that the larger heart beat intervals (i.e. slower heart beats) in the time series characterize the difference between healthy people and CHF patients. To confirm this, we perform a correlation analysis. For each of the first two mode functions and each $j=1,2,-1,-2$, we calculate and sort the means $m_{ij}^k$ and standard deviations $\sigma_{ij}^k$ for the $K=50$ subcomponents. For each order statistic we compute its correlation to the CHF disease. The result is plotted in Figure \ref{fig:updown}. From the comparison we see that, on average, the correlations of the statistics associated with upward fluctuations are larger than those associated with downward fluctuations. This observation suggests that we may disregard the statistics for the downward fluctuations. \begin{figure}[htbp] \begin{center} \includegraphics[width=\textwidth]{updown.pdf} \end{center} \caption{The correlations of various statistics to the CHF disease. The first column is for the first mode function $F_1$ and the second column is for the second mode function $F_2$.
The first row is for the order statistics of $m_{i,1}^k$ (red solid line) versus the order statistics of $m_{i,-1}^k$ (blue dotted line). The second row is for the order statistics of $\sigma_{i,1}^k$ (red solid line) versus the order statistics of $\sigma_{i,-1}^k$ (blue dotted line). The third row is for the order statistics of $m_{i,2}^k$ (red solid line) versus the order statistics of $m_{i,-2}^k$ (blue dotted line). The last row is for the order statistics of $\sigma_{i,2}^k$ (red solid line) versus the order statistics of $\sigma_{i,-2}^k$ (blue dotted line). \label{fig:updown}} \end{figure} \subsection{Feature ranking and subset selection} To rank the features, we randomly split the data set into two subsets, the training set and the test set. The training set contains 50 healthy subjects and 30 CHF subjects, and the test set contains 22 healthy and 13 CHF subjects. We use the training set to build the SVM classifier and use the test set to control the accuracy. Using the SVM-RFE method described in Subsection 2.3, we rank the features. To guarantee the stability of the ranking, we repeat this procedure 1000 times and choose the statistics that appear most frequently in the model. Over all 1000 repeats, the classification error on the test data set is summarized in the following table: \begin{table}[ht] \begin{center} \begin{tabular}{c | c |c|c| c|c | c } \hline\hline number of errors & 0 & 1 & 2 & 3 & 4 & 5 \\ \hline number of repeats & 823 & 116 & 42 & 14 & 4 &1 \\ \hline\hline \end{tabular} \caption{Number of errors and the corresponding number of repeats.} \label{tbl01} \end{center} \end{table} The top 10 features selected by the procedure are listed in Table \ref{table:topfeatures}. We see that 9 of them are related to the first two mode functions. Although the trend is in general not considered relevant, the last feature, associated with the trend, appears.
We suspect that a probable reason is that using only two mode functions in the signal decomposition leaves some relevant information in the trend. It is interesting to notice that the 10 statistics that appear most frequently in the model all measure the irregularity of the local amplitude. Take Statistics 1 and Statistics 7 as examples. They are obtained as follows. To get Statistics 1, for the first mode function $F_1$, find the local maxima $U_1$ and compute their mean $m_{1,0,U}$ and standard deviation $\sigma_{1, 0,U}$. Then we choose the terms greater than $m_{1,0, U}+2\sigma_{1,0,U}$ and find their standard deviation. To get Statistics 7, for the subcomponents of the second mode function, $F_{2}^k, k=1,\ldots,K$, compute the mean $m_{2}^k$ and the standard deviation $\sigma_{2}^k$. Then we choose the terms of $F_{2}^k$ greater than $m_{2}^k+2\sigma_{2}^k$ and find their standard deviations $\sigma_{2,2}^k.$ Then we compute the mean of the $K$ such standard deviations. In Figure \ref{fig01} we show the distribution of the healthy people and CHF patients using these two statistics. It is easy to see that healthy people and CHF patients are well separated. \begin{table}[h] \begin{center} \begin{tabular}{l|c|c|c|c|c} \hline\hline Feature Rank & 1 & 2 & 3 &4 &5 \\ \hline Statistics & $\sigma_{1, 2, U}$ & $\sigma_{1, -2, U}$ & $m\sigma_{1, 2}$ & $m\sigma_{1, 2, U}$ & $m\sigma_{1, 2, L}$ \\ \hline\hline Feature Rank &6& 7& 8& 9 & 10 \\ \hline Statistics & $\sigma_{2,2}$ & $m\sigma_{2,2}$ & $m\sigma_{2, 2, U}$ & $m\sigma_{2, -2, L}$ & $m\sigma_{R, 1, U}$ \\ \hline\hline \end{tabular} \caption{\label{table:topfeatures} The top 10 features.} \end{center} \end{table} \begin{figure}[htbp] \begin{center} \includegraphics[width=0.8\textwidth]{stat20vsstat208.pdf} \caption{ Distribution of CHF patients vs healthy subjects using Statistics 1 ($\sigma_{1,2,U}$) and Statistics 7 ($m\sigma_{2,2}$).
\label{fig01}} \end{center} \end{figure} Observing these two statistics, we find that both of them measure the ability of the heart beat to become extremely slower than usual. This leads to the conjecture that the strong adaptability toward extremely slow heart beats might be the irregularity that characterizes healthy hearts. \subsection{Reliability of the top features} \label{sec:stability} We have found that the most relevant features are statistics for the ``outliers'' in the mode functions, i.e., those items larger than the mean plus two standard deviations, or smaller than the mean minus two standard deviations. A natural question arises: ``Is this accidental?'' This is equivalent to asking whether the outliers taken into account are noise or informative. In order to answer this question we further analyze these outliers. Firstly we notice that the upward and downward fluctuations are not balanced, for both healthy people and CHF patients. For healthy people, the percentage of items larger than the mean plus two standard deviations is 2.84\%, while the percentage of items smaller than the mean minus two standard deviations is only 2.35\%. For CHF patients the percentages are 2.49\% and 2.17\%, respectively. This observation is the first evidence that the outliers are not due to noise, because otherwise they would be distributed symmetrically. Moreover, recall that for Gaussian white noise the percentage of one-sided outliers beyond two standard deviations is 2.28\%. We see that the percentages for CHF patients are closer to this value, while those for healthy subjects are much larger. We think that the outliers for CHF patients involve more noise, while the outliers for healthy subjects are probably informative. To further confirm this conclusion, we perform the following test: for $F_1$, we calculate the statistics for the terms greater than the mean plus $v$ times the standard deviation, with the variable $v$ changing from 0 to 2, and investigate their correlation to the CHF disease.
Here we consider the mean of the 50 standard deviations of such terms in the 50 subcomponents. Note that Statistics 3 in Table \ref{table:topfeatures} corresponds to $v=2.$ The correlation is plotted in Figure \ref{fig:stable}. From this analysis, we see that the correlation increases with $v$. Such a trend appears in other statistics as well. This clear trend implies that the relevance of these statistics to the CHF disease is not accidental. Instead, we should consider the outliers informative, and their properties characterize the essential difference between healthy people and CHF patients. \begin{figure}[h] \begin{center} \includegraphics[width=0.6\textwidth]{stable.pdf} \end{center} \caption{Correlations between CHF disease and the mean of the 50 standard deviations of those terms greater than the mean plus $v$ times the standard deviation in the 50 subcomponents. The value of $v$ varies from 0 to 2.\label{fig:stable}} \end{figure} \section{Conclusions and discussions} In this paper we developed a new approach for the analysis of physiological time series. The motivation comes from the observation that a physiological time series usually contains both deterministic and stochastic parts, which can be represented by the low and high frequency components of the time series. Our new method uses an iterative filter to decompose the time series into high and low frequency components and studies their statistics. SVM-RFE is then used to select highly relevant features. Our method is applied to analyze heart beat interval time series for the CHF disease. The top features are found to measure the ability of hearts to beat extremely slowly. Healthy hearts show a strong such ability, which we conjecture is due to strong resilience to the environment and human activities. \section*{Acknowledgement} Y. Wang was partially supported by NSF DMS-1043032 and AFOSR FA9550-12-1-0455. \bibliographystyle{abbrvnat}
\section*{Acknowledgements} \noindent We express our gratitude to our colleagues in the CERN accelerator departments for the excellent performance of the LHC. We thank the technical and administrative staff at the LHCb institutes. We acknowledge support from CERN and from the national agencies: CAPES, CNPq, FAPERJ and FINEP (Brazil); NSFC (China); CNRS/IN2P3 (France); BMBF, DFG, HGF and MPG (Germany); INFN (Italy); FOM and NWO (The Netherlands); MNiSW and NCN (Poland); MEN/IFA (Romania); MinES and FANO (Russia); MinECo (Spain); SNSF and SER (Switzerland); NASU (Ukraine); STFC (United Kingdom); NSF (USA). The Tier1 computing centres are supported by IN2P3 (France), KIT and BMBF (Germany), INFN (Italy), NWO and SURF (The Netherlands), PIC (Spain), GridPP (United Kingdom). We are indebted to the communities behind the multiple open source software packages on which we depend. We are also thankful for the computing resources and the access to software R\&D tools provided by Yandex LLC (Russia). Individual groups or members have received support from EPLANET, Marie Sk\l{}odowska-Curie Actions and ERC (European Union), Conseil g\'{e}n\'{e}ral de Haute-Savoie, Labex ENIGMASS and OCEVU, R\'{e}gion Auvergne (France), RFBR (Russia), XuntaGal and GENCAT (Spain), Royal Society and Royal Commission for the Exhibition of 1851 (United Kingdom).
\section{Transformation from the spin to the bosonic language} The spin-$\tfrac{1}{2}$ Heisenberg model is defined on the bcc and fcc lattices, shown in Fig.~\ref{Fig:lattices}. By using the Matsubara-Matsuda transformation introduced in the main text, the Hamiltonian is transformed into the bosonic language, up to a constant term: \begin{align} \hat{H}&=\frac{J_1}{2} \sum_{\langle ij \rangle} \left( b_i^\dagger b_j +b_j^\dagger b_i \right) +\frac{J_2}{2} \sum_{\langle \langle ij \rangle \rangle} \left( b_i^\dagger b_j +b_j^\dagger b_i \right) \nonumber \\ &\quad +\frac{J_3}{2} \sum_{\langle \langle \langle ij \rangle \rangle \rangle} \left( b_i^\dagger b_j +b_j^\dagger b_i \right)+\frac{U}{2} \sum_i n_i (n_i-1) \nonumber \\ &\quad +J_1 \sum_{\langle ij \rangle} n_i n_j+J_2 \sum_{\langle \langle ij \rangle \rangle}n_i n_j +J_3 \sum_{\langle \langle \langle ij \rangle \rangle \rangle} n_i n_j \nonumber \\ &\quad -\left( \frac{z_1 J_1+z_2 J_2+z_3 J_3}{2}-H \right) \sum_i n_i \end{align} where $U$ is the on-site hard-core repulsion, which is sent to infinity in the calculation, and $z_1,z_2,z_3$ are the coordination numbers of the 1st, 2nd and 3rd nearest neighbors. \begin{figure}[!htbp] \includegraphics[width=0.35\textwidth]{lattices} \caption{The Heisenberg interactions $J_1,J_2,J_3$ are defined on the 1st, 2nd and 3rd nearest neighbors. (a) bcc lattice. (b) fcc lattice.} \label{Fig:lattices} \end{figure} By Fourier transformation $b_i^\dagger = \frac{1}{\sqrt{N}} \sum_{\bm{k}} e^{-{i\mkern1mu} \bm{k} \cdot \bm{r}_i} b_{\bm{k}}^\dagger$, the Hamiltonian is written down in $\bm{k}$-space: \begin{align} \hat{H}&=\sum_{\bm{k}} \left[ \epsilon (\bm{k})-\epsilon (0)+H \right] b_{\bm{k}}^\dagger b_{\bm{k}} \nonumber \\ &\quad +\frac{1}{2N} \sum_{\bm{k},\bm{k}^\prime,\bm{q}}(U+V_{\bm{q}}) b_{\bm{k}+\bm{q}}^\dagger b_{\bm{k}^\prime - \bm{q}}^\dagger b_{\bm{k}^\prime} b_{\bm{k}} \end{align} where \begin{equation} \epsilon (\bm{k}) \!=\! 
\frac{J_1}{2} \sum_{\eta_1} e^{{i\mkern1mu} \bm{k} \cdot \bm{r}_{\eta_1}}+\! \frac{J_2}{2} \sum_{\eta_2} e^{{i\mkern1mu} \bm{k} \cdot \bm{r}_{\eta_2}}+\! \frac{J_3}{2} \sum_{\eta_3} e^{{i\mkern1mu} \bm{k} \cdot \bm{r}_{\eta_3}} \end{equation} where $\bm{r}_{\eta}$ denotes the positions of the neighboring sites, and \begin{equation} V_{\bm{q}}=2 \epsilon (\bm{q}). \end{equation} To be explicit, for the bcc lattice: \begin{align} \epsilon (\bm{k})&=4J_1 \cos \frac{k_x}{2} \cos \frac{k_y}{2} \cos \frac{k_z}{2} +J_2 \Big( \cos k_x \nonumber \\ &\quad + \cos k_y+ \cos k_z \Big)+2J_3 \Big( \cos k_x \cos k_y \nonumber \\ &\quad + \cos k_y \cos k_z +\cos k_z \cos k_x \Big) \end{align} For the fcc lattice: \begin{align} \epsilon (\bm{k})&=2J_1 \Big( \cos \frac{k_x}{2} \cos \frac{k_y}{2}+\cos \frac{k_y}{2} \cos \frac{k_z}{2} \nonumber \\ &\quad +\cos \frac{k_z}{2} \cos \frac{k_x}{2} \Big)+J_2 \Big( \cos k_x +\cos k_y+ \nonumber \\ &\quad \cos k_z \Big) +4 J_3 \Big( \cos k_x \cos \frac{k_y}{2} \cos \frac{k_z}{2}+ \nonumber \\ &\quad \cos k_y \cos \frac{k_z}{2} \cos \frac{k_x}{2}+\cos k_z \cos \frac{k_x}{2} \cos \frac{k_y}{2} \Big) \end{align} \begin{figure}[!tbp] \includegraphics[width=0.45\textwidth]{phd_SingleMagnon} \caption{The single-magnon phase diagrams, with each region labeled either by the number of minima (when there are multiple minima at incommensurate $\bm{\mathit Q}$-vectors) or by the positions of the $\bm{\mathit Q}$-vectors (when $\bm{\mathit Q}$ is commensurate). (a)(b) bcc lattice. (c)(d) fcc lattice.} \label{Fig:phd_SingleMagnon} \end{figure} We define the minimum value of $\epsilon (\bm{k})$ to be $\epsilon_\text{min}$, so that $\omega_{\bm{k}}\equiv \epsilon (\bm{k})-\epsilon_\text{min}$ has a minimum value equal to zero. The Hamiltonian is rewritten as: \begin{equation} \hat{H} \!\!=\!\!\sum_{\bm{k}}\!
( \omega_{\bm{k}}-\mu ) b_{\bm{k}}^\dagger b_{\bm{k}}+\frac{1}{2N} \!\!\sum_{\bm{k},\bm{k}^\prime,\bm{q}} \!\!(U+V_{\bm{q}}) b_{\bm{k}+\bm{q}}^\dagger b_{\bm{k}^\prime - \bm{q}}^\dagger b_{\bm{k}^\prime} b_{\bm{k}} \end{equation} where the chemical potential is \begin{equation} \mu=\left[ \epsilon(0)-\epsilon_\text{min} \right]-H\equiv H_\text{sat}-H \end{equation} Because of the frustration, the single-magnon dispersion $\omega_{\bm{k}}$ can have multiple degenerate minima at different $\bm{\mathit Q}$-vectors. In Fig.~\ref{Fig:phd_SingleMagnon}, we compute the number of minima of $\omega_{\bm{k}}$ for both the bcc and fcc lattices. For concreteness, we focus on the regions with 6 degenerate minima, whose positions are denoted by $\pm \bm{\mathit Q}_n=\pm Q \,{\bf \hat{e}}_n$, where $n=1,2,3$. The value of $Q$ is given by $\cos \tfrac{Q}{2}=-J_1/(J_2+4 J_3)$ for the bcc lattice and $\cos \tfrac{Q}{2}=-(J_1+2J_3)/(J_2+4J_3)$ for the fcc lattice. Correspondingly, the saturation field values are: \begin{subequations} \begin{align} H_\text{sat}^{\text{bcc}}&=\frac{2 J_1^2}{J_2+4J_3}+4J_1+2J_2+8J_3 \\ H_\text{sat}^{\text{fcc}}&=\frac{2(J_1+2J_3)^2}{J_2+4J_3}+4J_1+2J_2+16J_3 \end{align} \end{subequations} \section{Calculation of Effective Interactions} The effective interactions in the dilute limit for hard-core bosons are calculated from the Bethe-Salpeter equation, which is equivalent to summing over all the ladder diagrams (Fig.~\ref{Fig:ladder}). \begin{equation}\label{Eq:Bethe-Salpeter} \Gamma_{\bm{q}}(\bm{k},\bm{k^\prime})=U+V_{\bm{q}}-\int \frac{d^3 \bm{q}^\prime}{V_{\text{BZ}}} \frac{\Gamma_{\bm{q}^\prime}(\bm{k},\bm{k}^\prime) (U+V_{\bm{q}-\bm{q}^\prime})}{\omega_{\bm{k}+\bm{q}^\prime}+ \omega_{\bm{k}^\prime-\bm{q}^\prime}} \end{equation} where $V_{\text{BZ}}$ is the volume of the 1st BZ.
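Returning to the single-magnon expressions above, the minimum condition and the saturation field can be checked numerically. The following Python sketch (with illustrative couplings $J_1,J_2,J_3$, not values from the text) evaluates the bcc dispersion along a cubic axis $\bm{k}=(q,0,0)$ and compares the location of its minimum with $\cos\tfrac{Q}{2}=-J_1/(J_2+4J_3)$ and $\epsilon(0)-\epsilon_\text{min}$ with $H_\text{sat}^{\text{bcc}}$:

```python
import numpy as np

# Illustrative sanity check (not from the paper's code): for the bcc
# dispersion restricted to a cubic axis k = (q, 0, 0),
#   eps(q) = 4*J1*cos(q/2) + J2*(cos q + 2) + 2*J3*(2*cos q + 1),
# the minimum should sit at cos(Q/2) = -J1/(J2 + 4*J3), and
# H_sat = eps(0) - eps_min should match 2*J1^2/(J2+4*J3) + 4*J1 + 2*J2 + 8*J3.
J1, J2, J3 = 1.0, 0.8, 0.1          # sample couplings with |J1/(J2+4*J3)| < 1

def eps_bcc_axis(q):
    return 4*J1*np.cos(q/2) + J2*(np.cos(q) + 2) + 2*J3*(2*np.cos(q) + 1)

q = np.linspace(0, 2*np.pi, 200001)
Q_numeric = q[np.argmin(eps_bcc_axis(q))]
Q_formula = 2*np.arccos(-J1/(J2 + 4*J3))

H_sat_numeric = eps_bcc_axis(0.0) - eps_bcc_axis(q).min()
H_sat_formula = 2*J1**2/(J2 + 4*J3) + 4*J1 + 2*J2 + 8*J3

print(Q_numeric, Q_formula)          # the two Q values should agree
print(H_sat_numeric, H_sat_formula)  # and so should the saturation fields
```

The same check works for the fcc formulas with the corresponding dispersion.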
\begin{figure}[!htbp] \includegraphics[width=0.45\textwidth]{Bethe_Salpeter} \caption{Ladder diagrams.} \label{Fig:ladder} \end{figure} When the magnetic field $H$ is close to the saturation value $H_{\text{sat}}$, the system is unstable towards BEC at the dispersion minima. In this case we can take the long-wavelength limit $\bm{k} \rightarrow \pm \bm{\mathit Q}_i$, and calculate the corresponding vertex functions (schematically shown in Fig.~\ref{Fig:vertex}): \begin{equation} \begin{split} \Gamma_1 &= \Gamma_0 (\bm{\mathit Q}_n, \bm{\mathit Q}_n) \\ \Gamma_2 &= \Gamma_0 (\bm{\mathit Q}_n,-\bm{\mathit Q}_n)+\Gamma_{-2 \bm{\mathit Q}_n} (\bm{\mathit Q}_n,-\bm{\mathit Q}_n) \\ \Gamma_3 &= \Gamma_0 (\bm{\mathit Q}_n,\bm{\mathit Q}_m) +\Gamma_{\bm{\mathit Q}_m-\bm{\mathit Q}_n} (\bm{\mathit Q}_n,\bm{\mathit Q}_m) \\ \Gamma_4 &= \Gamma_{\bm{\mathit Q}_m-\bm{\mathit Q}_n} (\bm{\mathit Q}_n,-\bm{\mathit Q}_n) + \Gamma_{-\bm{\mathit Q}_m-\bm{\mathit Q}_n} (\bm{\mathit Q}_n,-\bm{\mathit Q}_n) \end{split} \end{equation} \begin{figure}[!htbp] \includegraphics[width=0.45\textwidth]{vertex_label} \caption{Schematic plot of the vertex functions in the long-wavelength limit.} \label{Fig:vertex} \end{figure} To solve the Bethe-Salpeter equation, we start from the following ansatz: \begin{equation}\label{Eq:ansatz} \Gamma_{\bm{q}}= \langle \Gamma \rangle + \sum_\eta A_\eta V(\bm{r}_\eta) e^{{i\mkern1mu} \bm{q} \cdot \bm{r}_\eta} \end{equation} where $\bm{r}_\eta$ denotes the positions of the 1st, 2nd, and 3rd neighboring sites. The $\bm{k},\bm{k}^\prime$ indices in $\Gamma_{\bm{q}}(\bm{k},\bm{k}^\prime)$ are omitted for simplicity, and $\langle \Gamma \rangle=\int \frac{d^3 \bm{q}^\prime}{V_{\text{BZ}}} \Gamma_{\bm{q}^\prime}$.
We also assume that $V_{\bm{q}}$ is centro-symmetric, with zero average over the BZ: \begin{equation} \int d^3 \bm{q}\, V(\bm{q})=0 \end{equation} By substituting the ansatz into the Bethe-Salpeter equation and taking the hard-core limit, we obtain the following set of linear equations: \begin{subequations} \begin{align} \sum_\eta V(\bm{r}_\eta)(\tau_1^\eta)^* A_\eta +\tau_0 \langle \Gamma \rangle&=1 \\ \sum_\nu (\tau_2^{\eta \nu}V(\bm{r}_\nu)+\delta_{\eta \nu}) A_\nu+ \tau_1^\eta \langle \Gamma \rangle &=1 \end{align} \end{subequations} where the integrals are defined as: \begin{subequations} \begin{align} \tau_0 &= \int \frac{d^3 q}{V_{\text{BZ}}} \frac{1}{\omega_{\bm{k}+\bm{q}}+ \omega_{\bm{k}^\prime-\bm{q}}} \\ \tau_1^\eta &= \int \frac{d^3 q}{V_{\text{BZ}}} \frac{e^{-i\, \bm{q}\cdot \bm{r}_\eta}} {\omega_{\bm{k}+\bm{q}}+\omega_{\bm{k}^\prime-\bm{q}}} \\ \tau_2^{\eta \nu} &= \int \frac{d^3 q}{V_{\text{BZ}}} \frac{e^{-i\, \bm{q}\cdot (\bm{r}_\eta - \bm{r}_\nu)}} {\omega_{\bm{k}+\bm{q}}+\omega_{\bm{k}^\prime-\bm{q}}} \end{align} \end{subequations} Define: \begin{subequations} \begin{align} B_{\eta \nu} &= \tau_2^{\eta \nu} V(\bm{r}_\nu)+\delta_{\eta \nu} \\ C_\eta &= V(\bm{r}_\eta) (\tau_1^\eta)^* \end{align} \end{subequations} The above equations can now be organized into matrix form: \begin{equation}\label{Eq:mat} \left( \begin{array}{cccc} B_{11} & \cdots & B_{1z} & \tau_1^1 \\ \vdots & \ddots & \vdots & \vdots \\ B_{z1} & \cdots & B_{zz} & \tau_1^z \\ C_1 & \cdots & C_z & \tau_0 \end{array} \right) \left( \begin{array}{c} A_1 \\ \vdots \\ A_z \\ \langle \Gamma \rangle \end{array} \right) = \left( \begin{array}{c} 1 \\ \vdots \\ 1 \\ 1 \end{array} \right) \end{equation} By solving the linear equations of Eq.~\eqref{Eq:mat}, we obtain all the unknown coefficients in the ansatz of Eq.~\eqref{Eq:ansatz}. We can then substitute the values of $\Gamma_1,\ldots,\Gamma_4$ into the expression for the effective energy, and determine which multi-$\bm{\mathit Q}$ state is stabilized.
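As an illustration of the final step, the following Python sketch assembles and solves the $(z+1)\times(z+1)$ system of Eq.~\eqref{Eq:mat}. The $\tau$ integrals and $V(\bm{r}_\eta)$ values below are placeholders chosen for illustration only; in an actual calculation they come from the BZ integrals defined above:

```python
import numpy as np

# Minimal sketch of solving the matrix equation for the ansatz coefficients.
# All tau and V(r_eta) numbers are placeholders, not computed from a real
# dispersion; tau_1 is taken real here so no complex conjugation is needed.
z = 3                                       # number of neighbor vectors kept
rng = np.random.default_rng(0)
V = rng.uniform(0.5, 2.0, size=z)           # placeholder V(r_eta)
tau0 = 0.3                                  # placeholder tau_0
tau1 = rng.uniform(0.05, 0.2, size=z)       # placeholder tau_1^eta
tau2 = rng.uniform(0.01, 0.1, size=(z, z))  # placeholder tau_2^{eta nu}

# B_{eta nu} = tau_2^{eta nu} V(r_nu) + delta_{eta nu};  C_eta = V(r_eta) tau_1^eta
B = tau2 * V[np.newaxis, :] + np.eye(z)
C = V * tau1

M = np.zeros((z + 1, z + 1))
M[:z, :z] = B
M[:z, z] = tau1
M[z, :z] = C
M[z, z] = tau0

rhs = np.ones(z + 1)
sol = np.linalg.solve(M, rhs)
A, Gamma_avg = sol[:z], sol[z]              # ansatz coefficients and <Gamma>
print(A, Gamma_avg)
```

The recovered `A` and `Gamma_avg` then fix $\Gamma_{\bm q}$ through the ansatz.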
\section{Effect of symmetric exchange anisotropy} We consider short-range symmetric exchange anisotropy (with a cutoff at the 2nd nearest neighbor): \begin{equation} \hat{H}_A \propto \sum_{\langle ij \rangle} -3(\bm{S}_i \cdot \bm{r}_{ij}) (\bm{S}_j \cdot \bm{r}_{ij}) \end{equation} Such terms can arise directly from dipole-dipole interactions, or perturbatively from spin-orbit coupling\cite{Moriya1960}. Similar to the treatment of the Heisenberg exchange interactions, we choose the quantization axis along the [111] direction, and represent the spin-$\tfrac{1}{2}$ operators with hard-core bosons. In the long-wavelength limit, for both the bcc and fcc lattices: \begin{align} \hat{H}_A & \propto \Big[ (\frac{\sqrt{3}}{2}+i\, \frac{1}{2})b_{Q_1}^\dagger b_{-Q_1}^\dagger +(-\frac{\sqrt{3}}{2}+i\, \frac{1}{2})b_{Q_2}^\dagger b_{-Q_2}^\dagger \nonumber \\ & \quad -i \, b_{Q_3}^\dagger b_{-Q_3}^\dagger \Big]+\text{h.c.} \end{align} We then condense the bosons via $\langle b_{\pm \bm{\mathit Q}_n} \rangle/\sqrt{N}= \sqrt{\rho_{\pm \bm{\mathit Q}_n}} \exp \left( i \phi_{\pm \bm{\mathit Q}_n}\right)$, which gives the energy correction from the symmetric exchange anisotropy: \begin{equation} E_A \propto J_A \sum_n \sqrt{\rho_{\bm{\mathit Q}_n} \rho_{-\bm{\mathit Q}_n}} \cos (\Phi_n +2n \pi/3-\pi/2) \end{equation} where $\Phi_n=\phi_{Q_n}+\phi_{-Q_n}$.
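If the condensate densities at all $\pm\bm{\mathit Q}_n$ are taken equal (an assumption made here purely for illustration), each term of $E_A$ can be minimized independently in its phase, giving $\Phi_n = 3\pi/2 - 2n\pi/3 \pmod{2\pi}$, where the cosine equals $-1$. A short Python check of these minimizing phases:

```python
import numpy as np

# Sketch (not from the paper): with equal condensate densities, minimizing
# E_A ~ sum_n cos(Phi_n + 2n*pi/3 - pi/2) term by term should give
# Phi_n = 3*pi/2 - 2*n*pi/3 (mod 2*pi).
def E_A_term(Phi, n):
    return np.cos(Phi + 2*n*np.pi/3 - np.pi/2)

Phi_grid = np.linspace(0, 2*np.pi, 100001)
for n in (1, 2, 3):
    Phi_min = Phi_grid[np.argmin(E_A_term(Phi_grid, n))]
    Phi_pred = (3*np.pi/2 - 2*n*np.pi/3) % (2*np.pi)
    print(n, Phi_min, Phi_pred)     # numerical and analytic phases agree
```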
\section{Introduction} The super-massive black hole (SMBH) Sagittarius A* (Sgr A*) at the centre of the Milky Way (MW) has a mass of roughly 4 million solar masses and is located at a distance of about 8 kpc (e.g., Ghez et al. 2008; Gillessen et al. 2009). It is surrounded by a dense and massive star cluster (the NSC, Nuclear Star Cluster) which has an estimated mass of (3$\pm$1.5)$\times$10$^7$ solar masses (Launhardt et al. 2002). The mid-infrared images from the Spitzer Space Telescope show how the NSC stands out as a separate structure at the centre of the MW (Stolovy et al. 2006). In the mid-infrared, the central parsec shows a spiral structure, called the minispiral, made of dust and ionized gas. It is surrounded by a clumpy, circumnuclear ring of dense gas and dust, the CND (G\"usten et al. 1987). Due to its proximity, the Galactic Centre can be studied in detail through the direct observation of individual stars, gas and dust. It represents an ideal and unique case allowing one to analyse thoroughly the direct environment of a central SMBH. The Galactic Centre is obscured mainly by extinction from the diffuse interstellar medium (ISM) present along the line-of-sight. Less than about one third of this extinction arises from the dense foreground ISM (Whittet et al. 1997). It has been shown that the mean visual extinction towards prominent sources within the central stellar cluster is $\sim 27$~mag (e.g. Sch\"odel et al. 2010, Scoville et al. 2003, Lebofsky, Rieke, Tokunaga, 1982), with a rather smooth distribution across the central 10 to 20 arcsec at an angular resolution of 2 arcsec (Sch\"odel et al. 2010, see also Scoville et al. 2003). Nishiyama \& Sch\"odel (2013) find that young, massive star candidates can be detected throughout the nuclear star cluster. Moreover, Eckart et al. (2013, 2014) showed that there is a substantial number of faint compact infrared excess sources in the central stellar cluster.
So far, we can only speculate on their nature and origin (see also Mu\v{z}i\'c et al. 2010, Jalali et al. 2013). Many may be associated with young stars or proto-stellar aggregates. In particular, there is a small but dense cluster of comoving sources (IRS13N) located $\sim$3 arcsec west of SgrA*, just 0.5 arcsec north of the bright IRS13E cluster of Wolf-Rayet and O-type stars. They are either dust-embedded stars older than a few Myr, or very young, dusty stars with ages younger than 1~Myr (Eckart et al. 2004, 2013, 2014, Mu\v{z}i\'c et al. 2008). Eckart et al. (2014) present a first K$_s$-band identification and proper motions of the IRS13N members, the high-velocity dusty S-cluster object (DSO; Eckart et al. 2014, also referred to as G2 by Gillessen et al. 2012, 2013ab), and other infrared excess sources in the central field. For the IRS13N sources, the common K- and L-band proper motions indicate that they are young not only in their broad-band spectroscopic properties but also dynamically (Mu\v{z}i\'c et al. 2008). For the DSO, the colour information indicates that it may be a dust-embedded star rather than a cloud. In Moultaka et al. (2004, 2005), we studied the distribution of water ices and hydrocarbons towards a number of bright sources in the centre of the MW. We estimated the line-of-sight extinction in the L-band of the spectrum using a novel approach described by Moultaka et al. (2004) and we derived intrinsic spectra of the sources. The results show that a substantial amount of water ices and hydrocarbons is present in the central parsec around the central SMBH.\\ In Moultaka et al. (2009) we investigated the circumstellar material around the brightest dust-enshrouded sources of the central stellar cluster using slit-spectroscopy in the spectral range of the M filter from 4.6 to 5.1$\mu$m.
For the 15 bright sources studied in that paper, the observations resulted in M-band spectra showing the vibration-rotational P and R branch absorptions of the gaseous $^{12}$CO (4.666$\mu$m) and $^{13}$CO (4.77$\mu$m). The broad P and R branch envelopes are separated by a gap at 4.666$\mu$m (the $v=1-0$ band centre; Allamandola 1984). In addition, we found a strong absorption, centred at 4.675$\mu$m, attributed to a mixture of polar and apolar CO ices. Applying a method similar to that of Moultaka et al. (2004) to the data presented by Moultaka et al. (2009), we performed a first-order correction of the line-of-sight absorption due to CO-ice and $^{13}$CO gas. We found residual absorptions of the solid- and gas-phases of the CO that can be attributed to local material in the minispiral and the circumstellar medium. In combination with other data we obtained gas masses of the circumstellar shells of the order of 10$^{-3}$ and 10$^{-2}$M$_\odot$. Previously, Moneti, Cernicharo \& Pardo (2001) associated the strong gaseous CO absorption with material with a bulk kinetic temperature of $\sim 10$K towards SgrA*. Fabry-Perot measurements by Geballe, Baas \& Wade (1989) of the CO R(2) and R(5) lines towards IRS~1, 2, 3, 5, 6, 7 and 8 revealed that most of the absorption takes place in the velocity interval between 0 and 75 km/s. The bulk of the foreground absorption most likely takes place at 20 km/s and 50 km/s, associated with two giant molecular clouds close to the Galactic Centre. The CO gas-phase absorption has also been observed in IRS~3, IRS~7 and IRS~12 by Geballe (1986) and McFadzean et al. (1989), but the P and R branches were not resolved in these low-resolution spectra. While emission from the 4.6$\mu$m vibrational transitions of CO may originate from the inner few au of young stars, absorption is a potentially powerful probe of the wind/envelope structure at larger distances from embedded young stars. Herczeg et al.
(2011) detect emission in the 4.6$\mu$m CO fundamental line from 14 of the YSOs in a sample of 18 objects, originating from the inner discs for the lower luminosity objects and from slow outflows for objects with high bolometric luminosity. In contrast to the gaseous CO absorption, the solid CO has a single absorption band at 4.675$\mu$m ($\sim$0.01$\mu$m FWHM) that is widened and shifted due to ice impurities (see Hagen, Allamandola \& Greenberg 1979, 1980). Initial investigation of the solid-phase $4.675\mu$m CO absorption towards IRS~12 by McFadzean et al. (1989) suggested that it is dominated by foreground material with a temperature of less than 100~K, and possibly as cold as only 20~K. Potentially young stars that are still embedded in dense molecular envelopes are difficult to identify and study because emission from a disc can be confused with possible outflow emission or a larger envelope that may, e.g., be associated with the minispiral. In Moultaka et al. (2009), we also detect a broad absorption feature at 4.62$\mu$m, discovered by Soifer et al. (1978) and resolved by Lacy et al. (1984), that is strongest towards the M2 supergiant IRS7. It can be associated with the XCN feature due to cold grain mantles that contain molecules with C-N bonds resulting from UV-photolysis. A similar feature is produced by laboratory UV-photolysis of CO and NH$_3$ (Lacy et al. 1984, Bernstein et al. 1995) and is assigned to OCN$^-$. Van Broekhuizen et al. (2005) report on the detection of this feature in the M-band spectra of 34 deeply embedded young stellar objects (YSOs), observed with high signal-to-noise ratio (S/N) and high spectral resolution. Their data enabled the first studies of the solid OCN$^-$ abundance towards a significant number of low-mass YSOs. The authors suggest that the OCN$^-$ abundances are due to photochemical and surface-chemistry formation mechanisms.
\\ In this paper, we present the analysis of a complete imaging spectroscopy data cube obtained via slit spectroscopy and covering the 4.6-5.1$\mu$m M-band spectrum for the inner 0.5~pc of the central cluster\footnote{Resulting from ESO VLT observations of program ID numbers 083.C-0675A, 085.C-0469A and 087.C-0214A.}. In the following section we describe the observations and the data reduction. In section 3, we explain how we derive the line-of-sight extinction from gaseous and solid CO. In section 4, we analyse the spatial distribution of the CO absorption in its gas- and solid-phase across the entire central stellar cluster and show that we are able to distinguish between the amount of source-intrinsic and line-of-sight absorption. Finally, in the last section we discuss our results and provide a conclusion. \section{Observations and data reduction} We used the ISAAC spectrograph at the ESO UT3-VLT telescope to map the central half parsec of our Galaxy in the M-band spectroscopic domain. We needed three periods of observations (083.C - June 2009, 085.C - June 2010 and 087.C - August 2011) to complete our programme. This resulted in the first data cube of the region in the M-band (from 4.4 to 5.1$\mu$m). In Figs.~\ref{Mband} and \ref{Mbandsm3} we show the integrated and smoothed integrated map, respectively, covering the entire spectral range.\\ The optical seeing varied from 0.39 to 1.9 arcsec during period 083.C, from 0.4 to 0.8 arcsec during period 085.C and from 0.7 to 1.5 arcsec during period 087.C. We used a slit width of 0.6 arcsec providing a spectral resolution of $\sim$800 (i.e. corresponding to a velocity resolution of $\Delta v=375$km/s).\\ To map the central half parsec, we needed 22 slit positions that we placed parallel to each other. However, because of technical problems, we were not able to observe the regions shown in black in Fig.~\ref{Mbandblank}.
Therefore, we interpolated the spectra at these slit positions to fill in the whole data cube (see Figs.~\ref{Mband} and \ref{Mbandsm3}). Moreover, not all slits were observed with the same integration time; therefore, the S/N ratio differs from one slit to another. \\ In the following, we explain the data reduction and the building of the data cube step by step. There are five steps. \begin{itemize} \item First, all the array images were flatfielded and corrected for cosmic rays and for distortions along the axis of dispersion. The sky emission was removed using the chopping technique (offered for ISAAC) combined with telescope nodding. The chopper throw distance was 20 arcsec along the slit. Each chopped frame is composed of a positive trace image and a negative one (see an example in Fig.~\ref{Data_reduc1}). Telescope nodding results in two consecutive chopped frames A and B shifted 20 arcsec relative to each other. The shift distance is equal to the chopper throw, such that the positive trace image in the first image A is located at the same location on the array as the negative one in image B (see example in Fig.~\ref{Data_reduc1}). Thus, subtracting one chopped frame from the second (A-B) results in an image with two negative traces and one positive trace with twice the intensity of the negative ones (see Fig.~\ref{Data_reduc1}). The goal of this subtraction is to increase the S/N ratio and to remove the sky emission lines from the spectra. \item In the second step, we extracted spectra of the bright sources from all the frames and wavelength-calibrated them using a Xenon-Argon lamp and the third-order grating. Then, we made a relative flux calibration of the spectra and corrected them for telluric lines using A0V-type standard stars and the data reduction commands from the IRAF software. The procedure to remove telluric lines is to divide the extracted science spectrum by a modified telluric standard spectrum.
The modified spectrum is the standard star spectrum shifted in wavelength to correct for possible errors in the zeropoint dispersion and scaled in intensity (following an exponential law that involves a scaling parameter to correct for airmass deviations). The optimal 'shift' and 'scale' parameters ($x_i$ and $s_i$ in the illustration of Fig.~\ref{Data_reduc2}, respectively) obtained during the telluric line correction are stored for each of the extracted spectra of a single frame. Telluric standard stars were observed at an airmass as close as possible to that at which the Galactic Centre was observed. Hence we minimized the possibility of calibration uncertainties due to the fact that the sky lines can be narrower than our spectral resolution elements. \item In the third step, for each science frame, the mean values of the $x_i$ and $s_i$ parameters are calculated over all extracted spectra ($\bar{x_i}$ and $\bar{s_i}$ in Fig.~\ref{Data_reduc3}). This allows us to build a modified telluric star spectrum, shifted and scaled with the optimal parameters, with which all the spectra of a single science frame can be corrected at once. The resulting shifted and scaled star spectrum is then stacked along the slit axis 1024 times, corresponding to the number of detector pixels (see example in Fig.~\ref{Data_reduc3}). In the following, we will call the resulting image a 'stacked standard star' frame. \item The fourth step is to divide each of the science frames (flatfielded and corrected for cosmic rays, distortions and sky lines) by its corresponding 'stacked standard star' frame. This allows an optimal correction for telluric lines of all spectra in all of the science frames without extracting the spectra (see example in Fig.~\ref{Data_reduc4}). The reduced science frames corresponding to the same slit position were then added to each other to improve the S/N. This results in one final reduced frame per slit position.
\item In the final step, we constructed the data cube using the DPUSER software\footnote{Developed by Thomas Ott http://www.mpe.mpg.de/~ott/dpuser/; see also Eckart \& Duhoux (1991).} by properly positioning the resulting 22 slit frames (see Fig.~\ref{Data_reduc5}). To this end, the different slit frames were shifted adequately along the slit axis (i.e. along the declination axis) to recover the observed field with the right relative positions of the bright sources. The right ascension axis is obtained by positioning the 22 slit frames correctly next to each other. This step was done by eye, but the resulting field of the data cube is recovered very successfully, as one can see in Fig.~\ref{reconstructedfield}. This figure shows the smoothed map of the integrated intensities along the M-band with contours of an M-band image obtained independently with the ISAAC imager. This figure shows that the positions of the sources are well determined within a $\leq$0.5 arcsec distance accuracy. \end{itemize} Most of the sky emission was removed in step one, after the A-B subtraction between the two consecutive chopped images. However, at six slit positions it was very difficult to remove the sky properly. This results in the vertical stripes (seen in Figs.~\ref{Mband} and \ref{Mbandsm3}). These slit positions are shown in white in Fig.~\ref{Mbandblank}. At these positions, all the results described in the paper are obtained by interpolating the measurements between the neighbouring slits.\\ \begin{figure} \includegraphics[width=22pc]{line390_553interp_coorscalePrint.eps} \caption{\label{Mband} M-band integrated map obtained from the observed data cube.
} \end{figure} \begin{figure} \includegraphics[width=22pc]{line390_553interpsm3_coorscalePrint.eps} \caption{\label{Mbandsm3} M-band integrated map smoothed with a boxcar of radius 0.3 arcsec and then using a Gaussian of 0.3 arcsec FWHM.} \end{figure} \begin{figure} \includegraphics[width=22pc]{line390_553interp_coor_blank_Print.eps} \caption{\label{Mbandblank} M-band integrated map with the locations of the slit positions contaminated by the sky emission lines (shown in white stripes) and the locations where no data were obtained (shown in black stripes) because of bad weather conditions.} \end{figure} \begin{figure} \includegraphics[width=22pc]{line390_553interpCorrsm3_coorcontMgood2_annote_scale.eps} \caption{\label{reconstructedfield} M-band integrated map obtained with the data cube corrected for the line-of-sight extinction due to dust and to the $^{12}$CO solid-phase absorption. Contours: M-band image obtained with the ISAAC imager of the UT3 ESO/VLT telescope. This image shows that our field reconstruction is very successful: the positions of the sources are determined within a $\leq$1 arcsec distance accuracy.} \end{figure} \section{The line-of-sight extinctions} \subsection{Correcting the line-of-sight solid ice $^{12}$CO band and dust extinction} In Moultaka et al.~(2009) we previously corrected the M-band spectra of 15 bright sources in the central parsec for the foreground contribution of the solid-phase $^{12}$CO absorption. In that paper, we found that this absorption, located at 4.675$\mu$m, is very prominent in the spectrum of IRS~2L. The red source IRS~2L is located in the central parsec but outside the minispiral and the CND molecular disk (see its location in Fig.\ref{reconstructedfield}). Moreover, it is an early-type star (B-type; Cl\'enet et al. 2001) showing no CO bandheads in its K-band spectrum.
We could, therefore, assume that this star is not affected by the local absorption and that the $^{12}$CO absorption observed in its M-band spectrum is mainly due to the line-of-sight extinction. We smoothed the spectrum of this star to derive a shape of the $^{12}$CO ice absorption free from all other features, such as the CO gaseous line complex and the hydrogen Pf$_\beta$ emission line at 4.65$\mu m$. Then, we normalized the resulting spectrum to unity at the wavelengths where the continuum is expected not to be absorbed. This spectrum represents the shape of the foreground ice absorption continuum normalized to unity. Let us call it the 'template extinction spectrum of the solid CO band'. It is shown in Fig.~6a of Moultaka et al. (2009) and in Fig.~\ref{solidCOforeground}. This spectrum shows a broad short-wavelength shoulder due to a possible XCN absorption present in the spectrum of IRS~2L (see Fig.~6(a) in Moultaka et al. 2009). Therefore, we can consider that the template spectrum also calibrates the foreground XCN absorption feature.\\ Here, we assume that the dominant contribution to the extinction by the cold foreground material is in fact due to two components: 1) dust extinction and 2) the $^{12}$CO ice absorption. The first component (dust extinction) can be approximated by a constant continuum over the wavelength range (we call it $k$ in equations~\ref{I_obs} and~\ref{I_intr}). This component cannot be estimated from spectroscopy (or from our data), but its estimate is not necessary for the results presented in this paper. Concerning the second component, the $^{12}$CO ice absorption, its amount can vary across the region. This is why we diluted the template spectrum by an additive constant continuum $d$ to allow for a possible variation in the amount of absorption.
To determine this constant $d$, we adopted the following criterion: when we divide the observed spectrum of a Galactic Centre source by this diluted template, the resulting spectrum should be free from the solid CO absorption; that is, the resulting spectrum should show a non-absorbed continuum. In this case, the gaseous $^{12}$CO and $^{13}$CO line complexes should approximate the essential features of the theoretical spectrum by Moneti et al. (2001), shown by Moultaka et al. (2009) at our spectral resolution (the spectrum is also shown here in Fig.\ref{Moneti}). This means that the flux in the wavelength range around 4.665$\mu$m, between the P and R branches of the $^{12}$CO line complex, is at the same level as that at 4.73$\mu m$ and 4.8$\mu m$ (see Fig.\ref{Moneti}). \\ In Moultaka et al. (2009), we applied this procedure and divided the observed Galactic Centre spectra by a diluted template spectrum with the most appropriate constant $d$ for each case, satisfying the previous criterion. The corrected spectra are shown in Fig.9 by Moultaka et al. (2009)\footnote{The fluxes of the spectra corrected for foreground extinction, shown in Fig.~9 of Moultaka et al. (2009), are smaller than those of the non-corrected spectra of Figs.~2 to 5 of that paper. This should not be the case, since the intrinsic corrected spectra are not obscured and therefore are brighter. The reason for the small fluxes of Fig.~9 is that the absolute value of the dust extinction in the present wavelength range is not known and cannot be derived from our observations. This is a multiplicative factor, called $k$ in the present paper, that would rescale the corrected spectra.}. We found a mean value for the constant diluting continuum of $\bar{d}=3.7$ and a median value of 4. The Galactic Centre sources studied in that paper are spread all over the 0.5 pc central region.
This suggests that the template spectrum diluted with an additive continuum of 4 is a good approximation to describe the overall foreground extinction due to the solid $^{12}$CO absorption. \\ This motivated us to use the median value of 4 for the diluting constant $d$ to correct our present data cube for this foreground absorption. Thus, if we call $I_{intr\,\lambda}$ the intrinsic flux, at wavelength $\lambda$, of a given source in the Galactic Centre (or at a given spatial pixel in the field of our data cube), $I_{obs\,\lambda}$ the observed flux, at wavelength $\lambda$, of the source (or at the given pixel) and $E_{^{12} {CO\, band\, ext\,\lambda}}$ the template extinction spectrum of the solid $^{12}$CO band, then one can write: \begin{equation} I_{intr\,\lambda} * [4 + E_{^{12}{CO\,band\,ext\,\lambda}}] * k = I_{obs\,\lambda} \label{I_obs} \end{equation} where $k$ accounts for the dust extinction continuum. Let us call $E_{^{12}{CO\, dil\,\lambda}}$ the diluted template extinction spectrum of the ice $^{12}$CO band: \begin{equation} E_{^{12}{CO\, dil\,\lambda}}=4+E_{^{12}{CO\, band \,ext\,\lambda}} \end{equation} Then, equation (\ref{I_obs}) becomes \begin{equation} I_{intr\,\lambda} = \frac{I_{obs\,\lambda}}{ E_{^{12}{CO\, dil\,\lambda}} * k} \label{I_intr} \end{equation} Here, we derive a data cube corrected for the foreground extinction by dividing the whole cube by the diluted template extinction spectrum. The data cube is not corrected for the absolute extinction due to obscuration by dust (i.e. $k=1$). \\ All these approximations can be made since the line-of-sight extinction across the central 0.5 pc is shown to be constant within the uncertainties of $\Delta$$A_{K_S}$$<$0.3 mag (Sch\"odel et al. 2010; Fig.7), i.e. $\Delta$$M$$<$0.015 mag (Viehmann et al. 2005). In addition, in Moultaka et al.
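As a sketch of how equations~(\ref{I_obs}) and (\ref{I_intr}) act on a spectrum, the following Python snippet applies the correction to synthetic data (the template shape and flux values are mock quantities, not the measured IRS~2L template) and verifies that dividing by the diluted template recovers the intrinsic spectrum when $k=1$:

```python
import numpy as np

# Sketch of the foreground correction on synthetic data. E_template plays
# the role of the unity-normalized template extinction spectrum of the solid
# 12CO band; all numbers here are illustrative only.
wav = np.linspace(4.4, 5.1, 700)                         # wavelength grid (micron)
E_template = 1.0 - 0.6*np.exp(-((wav - 4.675)/0.01)**2)  # mock ice band shape
d, k = 4.0, 1.0                                          # diluting constant, dust term

E_dil = d + E_template                   # diluted template spectrum
I_intr = np.full_like(wav, 2.0)          # mock intrinsic spectrum
I_obs = I_intr * E_dil * k               # forward model (first equation)
I_corr = I_obs / (E_dil * k)             # correction (last equation)
print(np.max(np.abs(I_corr - I_intr)))   # roundtrip residual, exactly zero here
```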
(2009), we found only a little variation of the additive constant continuum across the region; moreover, the corrected spectra of the bright sources obtained in that paper (shown in their Fig. 9) differ consistently with the nature of the source. In particular, the corrected spectra of the Helium stars outside the minispiral area show no ice absorption (e.g. IRS 16NE), while the dust-embedded sources in the minispiral have a prominent ice absorption in their spectra (e.g. IRS 21). If the residual ice absorption in the corrected spectra were due to foreground clouds, one would expect it to be present also in the corrected spectra of the non-embedded sources, like the IRS16 sources, and not only in the dust-embedded ones. \subsection{The foreground $^{13}$CO~R(0) gaseous line correction} The observed spectra of the bright sources in the central parsec show two absorption line complexes (see Moultaka et al. 2009). These correspond to the P and R branches of the gaseous $^{12}$CO and $^{13}$CO rotation-vibration lines. These molecules are probably located in molecular clouds along the line-of-sight and in the local medium of the Galactic Centre. \\ To estimate the amount of these gases in the foreground material and in the local medium, we choose to use the optically thin isotopic lines, whose optical depth is significantly smaller than that of the $^{12}$CO lines, since the $^{13}$CO gas is less abundant in the ISM (e.g. Geballe et al. 1989, Moneti et al. 2001). In particular, we choose the $^{13}$CO~R(0) line located at 4.765~$\mu$m, where the spectra are well corrected for telluric lines and not contaminated by sky emission.\\ We constructed the optical depth map of the gaseous $^{13}$CO~R(0) line using the data cube corrected for the solid $^{12}$CO foreground absorption and dust extinction. The $^{13}$CO line is not affected by any residual solid CO absorption, which is located at bluer wavelengths.
To derive the optical depths, we approximated the continuum by a straight line connecting the spectra from 4.58$\mu$m to 4.8$\mu$m. The optical depth is then derived from the formula $\tau_{^{13} {CO}} = -\ln(\frac{I_{line\, at\,\lambda}}{I_{cont\,at\,\lambda}})$, where $I_{cont\, at\,\lambda} $ and $I_{line\,at\,\lambda}$ are the fluxes of the continuum and the line at the $^{13}$CO R(0) absorption wavelength, respectively.\\ The optical depths were derived at the spatial pixels of the data cube where the S/N in the integrated map was higher than 7 on average (since each slit position was observed with a different integration time). For the remaining pixels, we adopted a zero value. \\ The resulting map is shown in Fig.~\ref{od13Corrcont}. In this map, we interpolated the optical depth values at the spatial pixels of the slit positions contaminated by the sky emission lines (i.e. the slit positions shown in white in Fig.~\ref{Mbandblank}). The resulting map agrees well with our previous work described by Moultaka et al.~(2009): here we find roughly the same optical depth values towards the bright sources discussed in that paper. In particular, for IRS~16C we find an optical depth of about 0.10, consistent with the value of $\sim 0.11 \pm 0.02$ found in our previous paper. As explained in that paper, we use this value to estimate the line-of-sight gaseous absorption. Indeed, IRS~16C is an early-type hot Helium star (e.g. Krabbe et al. 1995, Najarro et al. 1997) located off the minispiral area (see its location in Fig.~\ref{reconstructedfield}). Its spectrum shows a pronounced $^{13}$CO R(0) line in a spectral region well corrected for telluric lines. Therefore, we can safely assume that this line is representative of the amount of line-of-sight gaseous extinction.\\ The agreement between the values obtained in the present work and those obtained in our previous work is very encouraging.
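As a concrete illustration, the straight-line continuum and the relation $\tau = -\ln(I_{line}/I_{cont})$ can be sketched as follows; this is a minimal sketch rather than the actual pipeline, and the flux values below are hypothetical:

```python
import math

def linear_continuum(lam, lam1, flux1, lam2, flux2):
    """Straight-line continuum through two anchor points,
    here taken at 4.58 and 4.80 micron."""
    return flux1 + (flux2 - flux1) * (lam - lam1) / (lam2 - lam1)

def optical_depth(i_line, i_cont):
    """Absorption-line optical depth: I_line = I_cont * exp(-tau)."""
    return -math.log(i_line / i_cont)

# Hypothetical fluxes (arbitrary units) at the 13CO R(0) wavelength, 4.765 micron:
i_cont = linear_continuum(4.765, 4.58, 1.00, 4.80, 0.90)
tau_13co = optical_depth(0.83, i_cont)   # ~0.10, comparable to IRS 16C
```

In the real map this operation is repeated per spatial pixel, with the zero-value and S/N cuts described above.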
Therefore, we hereafter use the optical depth value of the $^{13}$CO~R(0) line observed in the spectrum of IRS~16C to correct the data cube for the contribution of this gaseous absorption in the foreground molecular clouds. \\ \begin{figure} \includegraphics[width=20pc]{optdepth13COinterpCorr_coorcontMband_scale.eps}\hspace{2pc}% \caption{\label{od13Corrcont} Optical depth map of the gaseous $^{13}$CO R(0) line obtained from the data cube corrected for the foreground $^{12}$CO ice band and dust extinction. Contours of the integrated M-band map are overlaid.} \end{figure} \section{The residual $^{13}$CO~R(0) gaseous and $^{12}$CO solid absorptions} The results obtained in the previous section make the estimate of the continuum in this spectral range quite robust. In the following, we use this continuum to derive optical depth maps of the residual absorptions. \\ {\it The $^{13}$CO gas absorption: } To correct the map shown in Fig.~\ref{od13Corrcont} for the foreground $^{13}$CO~R(0) gaseous line, one has to subtract $0.1\pm 0.02$. The residual optical depths then imply residual gaseous CO, possibly in the local medium. In Fig.~\ref{od13Corrcont}, all pixels with green to red colours typically show a residual $^{13}$CO~R(0) gaseous line. The pixels falling at the position of IRS~1W and IRS~21 show optical depths between 0.1 and 0.17-0.18. This implies that the foreground-corrected values at these pixels lie in the interval 0 to 0.07$\pm$0.02, in agreement with the values found in Moultaka et al. (2009) of 0.04 for IRS~1W and 0.03 for IRS~21. At the position of IRS 3, we find foreground-corrected values between 0 and 0.04$\pm 0.02$ (in agreement with our previous analysis that resulted in an optical depth of 0.03). This implies that 10 to 30$\%$ of the $^{13}$CO absorption occurs in the local medium at the location of these sources.
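The local fraction quoted above can be checked with a short calculation; this is a sketch, where the total optical depths are illustrative values taken from the ranges quoted for IRS~1W and IRS~21:

```python
TAU_FOREGROUND = 0.10  # 13CO R(0) optical depth towards IRS 16C

def local_fraction(tau_total):
    """Fraction of the 13CO R(0) absorption arising in the local medium,
    after subtracting the assumed foreground contribution."""
    return (tau_total - TAU_FOREGROUND) / tau_total

# Residuals of 0.04 (IRS 1W) and 0.03 (IRS 21) correspond to totals of:
fractions = [local_fraction(tau) for tau in (0.14, 0.13)]
# both fractions fall within the 10-30% range quoted in the text
```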
In general, it is not straightforward to compare our results with those of the high resolution data by Goto et al. (2014), who observed the three above-mentioned sources of the Galactic Centre. Indeed, their spectral resolution of~3 km/s is more than 100 times higher than ours, and the slit width of 0.2 arcsec they used on the CRIRES instrument is three times narrower than ours. Despite that, our finding is in agreement with that of Goto et al. (2014), who show that about 20$\%$ of the CO absorption towards IRS~1 and IRS~3 occurs in gas local to the Galactic Centre. They find three main foreground absorption complexes due to cold and dense clouds in the three spiral and lateral arms at negative velocities and 0 km/s. The residual absorptions outside these velocity ranges, on the positive side, were interpreted by the authors as being due to an extension of warm and dense clouds from the CND. In addition, they associate a broad trough at negative velocities in the spectra of the H$^+_3$ lines with warm diffuse gas from the Central Molecular Zone (CMZ), as also pointed out by Oka et al. (2005) and Goto et al. (2008). \\ {\it The $^{12}$CO ice absorption: } To derive the optical depth map of the residual solid-phase $^{12}$CO absorption from the data cube corrected for the foreground extinction, we used the same continuum as previously and considered the band flux at wavelength $\lambda$ ($I_{band\,at\,\lambda}$) and the continuum flux ($I_{continuum\,at\,\lambda}$) at the same wavelength (i.e. $\tau_{band\, at\, 4.675\mu m} = -\ln(\frac{I_{band\,at\,4.675\mu m}}{I_{continuum\,at\,4.675\mu m}})$). In this map, the optical depths are derived at pixels where the integrated M-band map has a S/N higher than 7.\\ In Fig.~\ref{od12Corrcont} we show the optical depth map derived from the data cube corrected for the line-of-sight extinction. In this figure, we find residuals of the 4.675$\mu$m band with optical depth values ranging from 0.1 to 1.
The large optical depth values can be due to the gaseous $^{12}$CO~P(1) line that is located at the same wavelength as the solid $^{12}$CO band, as one can see in the theoretical spectrum of Moneti et al. (2001) plotted at our spectral resolution in Fig.~\ref{Moneti}. In that figure, we measure the theoretical ratio between the optical depths of the $^{12}$CO~P(1) line and the $^{12}$CO~P(2) line located at 4.68$\mu$m and find a ratio of 1.05. Thus, measuring larger values of this ratio in our data would imply residuals of the $^{12}$CO solid band. This is why we built the map of the optical depths of the $^{12}$CO~P(2) gaseous line using the same continuum as for the previous maps. This map is shown in Fig.~\ref{od13P2Corr}. Then, we derived the map of the ratio between the optical depths of the absorption band at 4.675$\mu$m and that of the $^{12}$CO~P(2) line. It is shown in Fig.~\ref{od12div12Corrsm3cont}. Let us call this ratio $R$. We have: \begin{equation} \begin{array}{rcl} R & = & \frac{\tau_{Absorption\, at\, 4.675\mu m}}{\tau_{^{12}CO~P(2)}} \\ & & \\ & = & \frac{\tau_{^{12}CO~solid}+\tau_{^{12}CO~P(1)}}{\tau_{^{12}CO~P(2)}} \end{array} \label{eqR} \end{equation} To estimate the amount of residual $^{12}$CO ice, we derived the map of the optical depths of the solid $^{12}$CO absorption. This map is obtained considering the theoretical ratio of 1.05 between the optical depths of the $^{12}$CO~P(1) and $^{12}$CO~P(2) lines. Indeed, from equation~\ref{eqR}, we have: \begin{equation} \begin{array}{rcl} \tau_{^{12}CO~solid}& = &(R-\frac{\tau_{^{12}CO~P(1)}}{\tau_{^{12}CO~P(2)}}) \tau_{^{12}CO~P(2)} \\ & = & (R-1.05) \tau_{^{12}CO~P(2)} \end{array} \label{eqIce} \end{equation} The resulting map is shown in Fig.~\ref{tausolid}. \\ From the values of the optical depths obtained in this map, we can derive the residual column densities of the CO ice absorption using the equation (Sandford et al.
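Equation~(\ref{eqIce}) translates directly into a per-pixel operation; a minimal sketch, where the input values for a single pixel are hypothetical:

```python
RATIO_P1_P2 = 1.05  # theoretical tau(12CO P(1)) / tau(12CO P(2)), Moneti et al. (2001)

def tau_solid(ratio_R, tau_p2):
    """Residual solid 12CO optical depth, tau_solid = (R - 1.05) * tau_P2."""
    return (ratio_R - RATIO_P1_P2) * tau_p2

# Hypothetical pixel: band-to-P(2) ratio R = 3.0 and tau_P2 = 0.3
tau_ice = tau_solid(3.0, 0.3)   # 0.585, inside the 0-0.7 range of the map
```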
1988): \begin{equation} N(^{12}CO_{solid})= \tau_{^{12}CO~solid} W / A \label{eqNco} \end{equation} In this equation, $W$ is the FWHM of the $^{12}$CO ice absorption (in cm$^{-1}$) and $A$ the absorption strength ($A = 1.7 \times 10^{-17}$ cm~molecule$^{-1}$). Measuring the FWHM of the band in the template spectrum gives 6 to 7~cm$^{-1}$ ($\sim 0.016\,\mu$m); assuming the same value for the residuals, we can calculate the residual column densities $N(^{12}CO_{solid})$. From Fig.~\ref{tausolid}, we find optical depth values of the $^{12}CO_{solid}$ band ranging from 0 to $\sim$0.6-0.7, which implies column densities $N(^{12}CO_{solid})$ of 0 to 21-24$\times 10^{16}$ cm$^{-2}$. Typically, the $^{12}$CO to H$_2$ abundance ratios in dense molecular clouds are of the order of [$^{12}$CO]/[H$_2$] = $8\times 10^{-5}$, hence the associated N(H$_2$) values range from 0 to $\sim 3\times 10^{21}$ cm$^{-2}$. Thus, if we consider only the molecular column density associated with the observed CO ice and an A$_V$/N(H$_2$) ratio of 1 to $2\times 10^{-21}$ (Bohlin et al. 1978, Moneti et al. 2001), we get a visible residual extinction A$_V$ of 3 to 5 mag across the region. This means that the extinction implied by the residual CO ice is larger than the assumed variation of the foreground extinction derived by Sch\"odel et al. (2010), $\Delta A_K = 0.3$~mag (corresponding to $\Delta A_V = 0.88$~mag if we assume the extinction law of Martin \& Whittet 1990, and $\Delta A_V = 2.7$~mag assuming the law by Rieke \& Lebofsky 1985). This result suggests that part of the residuals observed in our data can be due to local absorption. In the previous maps, we find important ice residuals in the IRS13-IRS2 region and in the IRS1W-IRS21 region. This implies large amounts of dust in these regions, which are also observed in the mid-infrared data by Viehmann et al. (2006). This region is also very bright in the $^{13}$CO~R(0) optical depth map of Fig.~\ref{od13Corrcont}.
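The chain of estimates above (Eq.~(\ref{eqNco}), the CO/H$_2$ abundance and the A$_V$/N(H$_2$) conversion) can be reproduced numerically; a sketch, with a representative optical depth of 0.65 and an FWHM of 6.5~cm$^{-1}$ chosen inside the quoted ranges:

```python
def n_co_solid(tau, width_cm=6.5, strength=1.7e-17):
    """Sandford et al. (1988): N(CO ice) = tau * W / A,
    with W in cm^-1 and A in cm per molecule."""
    return tau * width_cm / strength

tau = 0.65                    # representative residual optical depth
N_co = n_co_solid(tau)        # column density of solid CO, in cm^-2
N_h2 = N_co / 8e-5            # [12CO]/[H2] = 8e-5 in dense clouds
A_v = N_h2 * 1.5e-21          # A_V/N(H2) of ~1-2e-21 mag cm^2
# A_v of a few magnitudes, consistent with the 3-5 mag quoted in the text
```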
\\ The high optical depth values of the gas for the IRS13-IRS2 sources go along with the deep residual ice absorption. This implies that the high amount of gas and dust in this region results in a strong shielding of the radiation field, allowing for lower temperatures and correspondingly high ice abundances.\\ These high gas-dust density regions are at the inner edge of the mini-cavity, hence they may be a result of the interaction of a wind from SgrA* with the minispiral, as has also been suggested by Mu$\check{\bf z}$i\'c et al. (2007, 2010). This is also supported by the observation of IR excess sources in the IRS13N complex that are indicative of star formation in the central parsec (Eckart et al. 2004, 2014, and Jalali et al. 2013). \\ \begin{figure} \includegraphics[width=20pc]{optdepth12COsolidinterpCorr_coorcontMband_scale.eps}\hspace{2pc} \caption{\label{od12Corrcont} Map of the optical depths of the 4.675$\mu$m band corrected for the line-of-sight solid $^{12}$CO contribution and dust extinction, with contours of the integrated M-band map.} \end{figure} \begin{figure} \includegraphics[width=20pc]{optdepth13COP2interpCorr_coorcontMbandscale.eps}\hspace{2pc}% \caption{\label{od13P2Corr} Map of the optical depths of the $^{12}$CO P(2) line corrected for the foreground solid $^{12}$CO absorption and dust extinction. } \end{figure} \begin{figure} \includegraphics[width=20pc]{optdepth12COsoliddiv13COP2interpCorr_contMband_scale.eps}\hspace{2pc} \caption{\label{od12div12Corrsm3cont} Map of the ratio of the optical depths of the absorption band at 4.675$\mu$m over those of the $^{12}$CO P(2) line, corrected for the foreground solid $^{12}$CO band and dust extinction, with contours of the integrated M-band map.
} \end{figure} \begin{figure} \includegraphics[width=20pc]{optdepth12COsolidFromRinterpCorr_coorcontMband_scale.eps}\hspace{2pc}% \caption{\label{tausolid} Map of the residual $^{12}$CO solid-phase absorption optical depths, corrected for the foreground solid $^{12}$CO band and dust extinction, with contours of the integrated M-band map. } \end{figure} \section{Summary and discussion} In this paper we present a mid-infrared mapping of the central half parsec of the Galaxy. The data were obtained in the wavelength range from 4.6 to 5.1$\,\mu$m. Our mapping covers the region in which most of the continuum emission from the minispiral is observed in millimetre maps (see Kunneriath et al. 2012). The field reconstruction is very successful, with relative source positions recovered to an accuracy of the order of 1 arcsec.\\ We derived a first-order line-of-sight template extinction spectrum. The extinction is partly due to obscuration by dust and partly to absorption by the $^{12}$CO ice that forms on dust grains of the ISM. We then corrected the data cube for this first-approximation foreground extinction.\\ The resulting data cube shows residuals of the 4.675$\mu$m absorption in the region, suggesting that large amounts of CO ices are present in the central region. However, we do not exclude the possibility that the residuals are due to larger variations of the foreground band extinction than currently admitted. \\ On the other hand, from the resulting data cube, we derived optical depth maps of the $^{13}$CO~R(0) gaseous line and corrected them for the assumed foreground gaseous absorption. We found residual $^{13}$CO lines in the final map. Given the shape of the theoretical gaseous CO line complex, one may expect residuals of cold CO with temperatures as low as 10~K.\\ These results provide additional evidence to that shown by Moultaka et al. (2004, 2005) that very low temperatures might be present near the central black hole.
These low temperatures may conflict with the physical conditions of the local environment. Indeed, the central parsec has proved to be a very complex medium in which the warm dust of the minispiral ($\sim$200~K) seems to coexist with the hot gas of the supernova remnant SgrA East, the strong winds and tidal forces of SgrA$^\star$, as well as high ionizing fluxes and stellar winds from the local early-type stars. However, within this complexity, one can also observe high density pockets suggested by the dusty filaments observed by Mu$\check{\bf z}$i\'c et al. (2007, 2010) and the dust embedded sources that form bowshock shapes due to their interaction with the interstellar material (Tanner et al. 2002, 2005). The presence of such high-density regions preventing the ices from being destroyed, as well as the fact that the travel time of molecular cloudlets in the central parsec is of the order of their lifetime of about $10^3-10^5$ yr (Van Loon \& Oliveira 2003, Mellema et al. 1998), favours the required conditions for cold gas and ices to survive the passage through the central stellar cluster (i.e. as a transient phenomenon).\\ Even though we demonstrate that low temperatures can survive in the local environment of the Galactic Centre, we cannot discard the possibility that the residual ices and cold gases that we observe in the present data are the signature of a more strongly varying line-of-sight extinction. Such a variable line-of-sight extinction is not obvious in the data by Goto et al. (2014), since only two sources were observed through the $^{13}$CO~R(0) gaseous line.\\ To provide a conclusive argument for the presence of local cold gas and dust, a similar study has to be done at higher spectral resolution. This program is in progress and will be presented in a forthcoming paper. \section*{Acknowledgements} This work was supported in part by the PNCG national program of the French CNRS/INSU (National Research Institute for Universe Sciences).
It is also partly supported by the Deutsche Forschungsgemeinschaft (DFG) via the Bonn-Cologne Graduate School (BCGS) and via grant SFB 956, as well as by the Max Planck Society and the University of Cologne through the International Max Planck Research School (IMPRS) for Astronomy and Astrophysics. We had fruitful discussions with members of the European Union funded COST Action MP0905: Black Holes in a Violent Universe and the COST Action MP1104: Polarization as a tool to study the Solar system and beyond. We received funding from the European Union Seventh Framework Programme (FP7/2007-2013) under grant agreement No.~312789. We also thank the ESO staff for their help and patience.
\section{Introduction} The fractional Brownian motion (fBm) is a class of stochastic processes used to describe anomalous diffusion~\cite{Mandelbrot}. Being a natural extension of normal Brownian motion, it can be expressed in terms of its incremental sequence \begin{eqnarray} x(t) = \int_0^t ds \ \eta(s) \end{eqnarray} with $x(0)=0$, and the fractional Gaussian noise $\eta(t)$ with zero mean and the correlation \begin{eqnarray} \langle \eta(t) \eta(s) \rangle \sim |t-s|^{\alpha-2}, \end{eqnarray} hence the mean square displacement (MSD) $\langle x^2(t) \rangle \sim t^{\alpha}$, where $\langle \cdots \rangle$ represents the statistical average. Many examples of fBm can be found in living cells and various soft matter systems, including the motion of small particles (colloids, lipid granules, etc.) in cells and polymer networks~\cite{PRL_Tolic_2004,PRL_Amblard_1996}, the lateral diffusion inside lipid membranes~\cite{PRL_Akimoto_2011,PRL_Jeon_2012}, the process of polymer translocation through a nano-pore~\cite{PRL_Zoia_2009, JStatMech_Panja_2010_01, PRE_Dubbeldam_2011,JCP_deHaan_2012}, DNA/RNA hairpin formation~\cite{PRE_Walter_2012} and the telomere dynamics in the nucleus~\cite{PRL_Bronstein_2009}. Here the visco-elastic response of the system is often invoked as the physical mechanism generating sub-diffusion ($0<\alpha<1$), where the noise $\eta(s)$ and the memory kernel $\mu(t)$ are related through the fluctuation-dissipation theorem (FDT) \begin{eqnarray} \mu(t-s) k_BT = \langle \eta(t) \eta(s) \rangle \label{FDT_GLE} \end{eqnarray} with $k_BT$ being the thermal energy (see, for instance~\cite{Jeon_Metzler, Sokolov} and references therein).
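The scalings above can be made concrete through the fBm covariance function; a minimal numerical sketch, assuming the standard normalization $\langle x(t)x(s)\rangle = \frac{1}{2}(t^{\alpha}+s^{\alpha}-|t-s|^{\alpha})$:

```python
def fbm_cov(t, s, alpha):
    """Covariance of fBm normalized so that <x^2(t)> = t**alpha."""
    return 0.5 * (t ** alpha + s ** alpha - abs(t - s) ** alpha)

def msd(t, alpha):
    return fbm_cov(t, t, alpha)

def increment_corr(alpha):
    """Covariance between the increment x(2)-x(1) and the earlier
    displacement x(1); analytically equal to 2**(alpha-1) - 1."""
    return fbm_cov(2.0, 1.0, alpha) - fbm_cov(1.0, 1.0, alpha)

# sub-diffusion (alpha < 1): anti-persistent increments;
# super-diffusion (alpha > 1): persistent; alpha = 1: ordinary Brownian motion
```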
In this case, the equation of motion describing the process is called the fractional Langevin equation, which can be written in the integral form as \begin{eqnarray} \frac{dx(t)}{dt} = \int_{-\infty}^t ds \ \mu(t-s) f(s) + \eta(t), \label{GLE_1} \end{eqnarray} that is, the generalized Langevin equation (GLE) with a power-law memory kernel, where $f(t)$ is an external force acting on the system. In this paper, we drive such a sub-diffusive walker by an external force and consider its fluctuating dynamics. Such a situation is related to active processes in biopolymers, such as polymer translocation driven by a voltage drop~\cite{PRE_Sakaue_2007, EPL_Linna_2009, JPC_Rowghanian_2011, EPJE_Saito_2011, JCP_Ikonen_2012}, rotational dynamics of entangled polymers~\cite{EPJE_Walter_2014}, and chromosome segregation during cell division~\cite{NuclAcidsRes_Kuwada_2013, BJ_Lampo_2015}. The quantities of interest are the time evolutions of the average displacement (AD) $\left< x(t) \right>$ and the variance in displacement (VD) $\left< \Delta x^2(t) \right> \ (=\mathrm{MSD}-\left< x(t) \right>^2)$, where $\Delta x(t) \equiv x(t)-\left< x(t) \right>$. When the memory kernel is independent of the driving force, the answer is rather straightforward; the linear response theory suggests the anomalous drift $\langle x(t) \rangle \sim t^{\alpha} f$, and the fluctuation $\Delta x(t)$ around the average drift again becomes an fBm whose VD is given by $\langle \Delta x^2(t) \rangle \sim t^{\alpha}$ with the same anomalous exponent $\alpha$ as the undriven fBm. Then, the ratio $\left< \Delta x^2(t) \right>/\left< x(t) \right>$ becomes constant in time, which provides a way to evaluate the driving force from the time series of the trajectory (see Eq.~(\ref{x_xx}) below). The natural question here is how such a simple and potentially useful result should be altered (or not) beyond the linear response domain.
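The time independence of this ratio in the linear-response regime can be checked directly; a sketch with the weak-force scalings $\langle x(t)\rangle \simeq a(fa/k_BT)(t/\tau_0)^{\alpha}$ and $\langle \Delta x^2(t)\rangle \simeq a^2 (t/\tau_0)^{\alpha}$, in units $a=\tau_0=k_BT=1$:

```python
ALPHA = 0.5  # anomalous exponent, e.g. alpha = 2/z with z = 4 (Rouse)

def drift(t, f):
    """Weak-force average displacement, <x(t)> ~ f * t**alpha."""
    return f * t ** ALPHA

def variance(t):
    """Variance of displacement, <dx^2(t)> ~ t**alpha."""
    return t ** ALPHA

f = 0.1
ratios = [variance(t) / drift(t, f) for t in (1.0, 10.0, 100.0)]
# the ratio equals k_BT/f = 10 at all times; the full GLE+FDT result
# carries an extra factor of 2: <dx^2> = (2 k_BT / f) <x>
```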
Indeed, the nonlinearity in the viscoelastic response would be pertinent to most soft materials and biopolymers, where anomalous diffusion is commonplace. The memory kernel $\mu_f(t)$ and the noise $\eta_f(t)$ then become force dependent. As a model system, we analyze the motion of a tagged segment in a long polymer chain. This is indeed one of the paradigms for sub-diffusion, where the GLE with a power-law memory kernel can be derived from a microscopic polymer model~\cite{JStatMech_Panja_2010_01,PRE_Sakaue_2013}. In our model, the nonlinearity arises from the self-avoidance (SA) and hydrodynamic interactions (HIs) between different segments of the polymer. Our main purposes are (i) to determine the nonlinear memory kernel $\mu_f(t)$ and (ii) to understand the nature of the fluctuation of the driven tagged segment determined by $\eta_f(t)$. Though our main interest is in the nonlinear (strong force) regime, we also include the discussion of the linear response (weak force) regime. This not only makes the argument comprehensive, but also highlights the qualitative difference between these two regimes. In Sec. II, we construct a scaling theory for the anomalous dynamics of the tagged segment based on the notion of {\it time-dependent friction}. This will be done first for the weak driving regime (Sec. II A), and then for the strong driving regime (Sec. II B), where the response becomes generally nonlinear. In Sec. III, we present molecular dynamics (MD) simulation results in the strong driving regime, which are discussed in light of the scaling theory. The result indicates that the relation analogous to Eq.~(\ref{FDT_GLE}), \begin{eqnarray} \mu_f(t-s) k_BT = \langle \eta_f(t) \eta_f(s) \rangle \label{FDT_GLE_f} \end{eqnarray} does not hold, and the fluctuation $\Delta x(t)$ is not necessarily described by fBm. To get better insights into the anomalous driven dynamics, we proceed, in Sec. IV, to the analysis of modes in a polymer.
This enables us to derive the memory kernel from a microscopic polymer model. Again, we first give a paradigmatic treatment in the weak force regime (Sec. IV A,\ B), on which we develop an approximate argument suitably modified for the strong force regime (Sec. IV C). In Sec. V, we conclude this study. \begin{figure}[t] \begin{center} \includegraphics[scale=0.80]{fig1.eps} \caption{ (Color Online) Schematic representation of driven polymer dynamics (a) near equilibrium and (b) in the strongly driven regime. } \label{fig1} \end{center} \end{figure} \section{Scaling theory} \label{scaling_theory} The polymer consists of $N$ segments, each of which is characterized by its size $a$ and friction coefficient $\gamma$. We prepare the polymer in its thermal equilibrium state, so that its average spatial size is \begin{eqnarray} R \simeq a N^{\nu}. \label{R_eq} \end{eqnarray} The Flory exponent $\nu$ characterizes the equilibrium coil size; $\nu=1/2$ for the ideal chain, and $\nu=3/4$ or $\nu \simeq 0.588$ in two-dimensional (2D) or 3D space, respectively, for the chain with SA~\cite{deGennesBook}. In addition, we write the longest relaxation time $\tau$ of the polymer as \begin{eqnarray} \tau \simeq \tau_0 \left(\frac{R}{a}\right)^{z}, \label{tau_eq} \end{eqnarray} where $\tau_0 \simeq \gamma a^2/k_BT$ is the segmental time scale, and the so-called dynamic exponent takes $z=3$ in the HIs dominated case (non-draining polymer) or $z=2 + (1/\nu)$ for the free-draining polymer~\cite{deGennesBook}. \subsection{Time-dependent friction} \label{time_dependent_friction} Suppose that we switch on a constant external force $f$ acting on the end segment, hereafter called the tagged segment, at $t=0$. We set its initial position at the origin, $x(0)=0$ (see Fig.~1(a)). As time goes on, the motion of the segment creates tension, which gets transmitted along the polymer~\cite{PRE_Sakaue_2007,PRE_Sakaue_2012}.
This leads to the {\it time-dependent friction} associated with the growing section of correlated motion, and thus gives rise to a memory effect in the motion. Below, we construct the scaling form of the memory kernel from a force balance argument during the process. The way the above scenario comes into effect, however, depends on whether the motion of the segment is dominated by the thermal fluctuation or by the driving force. We are thus led to distinguish weak and strong force regimes. For simplicity, we look at the position $x(t)$ of the segment in the direction along the force only; the motion in the other directions is just an unforced fBm. {\it Weak force regime---.} The force magnitude $f$ is weak enough, $f < k_BT/R$, so that the polymer keeps its equilibrium coil shape. In this weak force (near equilibrium) regime, the motion of the segment is essentially relaxational in the thermal fluctuation. Let $r^*(t)$ be the distance, i.e., the root VD (or root MSD), that the segment travels during the time interval $t$. In the time window $t < \tau$, the polymer as a whole has no time to relax. This implies that a restoring force $\simeq k_BT/r^*(t)$ acts on the tagged segment, and at the same time, the tension due to the motion is transmitted up to $n^*(t) \simeq (r^*(t)/a)^{1/\nu}$ segments apart along the chain. These $n^*(t)$ segments take part in the motion of the tagged segment and thus contribute to the friction $\gamma(t)$. Therefore, the force balance equation reads \begin{eqnarray} \gamma(t) \frac{dr^*(t)}{dt} \simeq \frac{k_BT}{r^*(t)}, \label{force_balance_eq} \end{eqnarray} where the time-dependent friction is given by \begin{eqnarray} \gamma(t) \simeq \gamma \left( \frac{r^*(t)}{a} \right)^{z-2} \label{gamma_eq} \end{eqnarray} as inspected from Eq.~(\ref{tau_eq}).
Solving the differential equation~(\ref{force_balance_eq}), we obtain \begin{eqnarray} r^{*}(t) (\simeq \sqrt{\langle \Delta x^2(t)\rangle}) \simeq a \left( \frac{t}{\tau_0}\right)^{1/z} \label{r*_eq} \\ n^{*}(t) \simeq \left( \frac{t}{\tau_0}\right)^{1/\nu z}, \label{n*_eq} \end{eqnarray} where $\Delta x(t) \equiv x(t)-\langle x(t) \rangle$. We thus identify the anomalous diffusion exponent $\alpha = 2/z$, whose physical origin is the memory effect associated with the tension transmission. Superimposed on this diffusion is the (small) drift $\langle x(t) \rangle$, which can be obtained from the relation $\gamma(t)\, d\langle x(t)\rangle/dt = f$ as \begin{eqnarray} \langle x(t) \rangle \simeq a\left(\frac{fa}{k_BT}\right) \left( \frac{t}{\tau_0}\right)^{2/z}, \label{r_drift_eq} \end{eqnarray} where $\gamma(t)$ is determined from Eqs.~(\ref{gamma_eq}) and~(\ref{r*_eq}). Comparing Eqs.~(\ref{r*_eq}) and~(\ref{r_drift_eq}), one finds that the drift is indeed negligible as long as $t \ll \tau_{f0}$ with \begin{eqnarray} \tau_{f0} \simeq \tau_0 \left( \frac{\xi}{a}\right)^{z} \simeq \tau_0 \left( \frac{fa}{k_BT}\right)^{-z}. \label{tau_{f0}} \end{eqnarray} Given the condition $f< k_BT/R$, we find $\tau_{f0} > \tau$, ensuring the fluctuation dominance in the weak force regime.
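The solution~(\ref{r*_eq}) can be verified numerically by integrating the force balance~(\ref{force_balance_eq}) with the friction~(\ref{gamma_eq}); a sketch in reduced units $a=\tau_0=k_BT=\gamma=1$, where the equation reduces to $dr^*/dt = (r^*)^{1-z}$:

```python
import math

def r_star(t_max, z, dt=1e-4):
    """Euler integration of dr/dt = r**(1-z) (reduced units),
    whose exact solution is r(t) = (z*t)**(1/z)."""
    t = dt
    r = (z * t) ** (1.0 / z)   # start on the exact solution at t = dt
    while t < t_max:
        r += dt * r ** (1 - z)
        t += dt
    return r

z = 3  # non-draining (Zimm-type) dynamics
# effective exponent between t = 1 and t = 100; should approach 1/z
slope = math.log(r_star(100.0, z) / r_star(1.0, z)) / math.log(100.0)
```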
The stretched length of the polymer is estimated as \begin{eqnarray} R_{\parallel} \simeq \xi \left( \frac{N}{g}\right) \simeq a \left( \frac{fa}{k_BT}\right)^{(1-\nu)/\nu}N, \label{R_parallel} \end{eqnarray} where $g \simeq (\xi/a)^{1/\nu}$ is the number of segments inside a blob. Again, we switch on the force at $t=0$, before which the polymer assumes an equilibrium state at rest. In the strong force regime, the driving force overwhelms the thermal fluctuation, and thus governs the motion of the tagged segment. Therefore, we set $r^*(t)$ to be the drift distance traveled by the segment during the time interval $t$. The polymer as a whole has no time to react to the force, and the tension gets transmitted only up to $n^*(t) \simeq (r^*(t)/a)(fa/k_BT)^{(\nu-1)/\nu}$ segments apart from the tagged segment (cf. Eq.~(\ref{R_parallel})). These segments follow the driving force; hence, the time-dependent friction grows as \begin{eqnarray} \gamma_f(t) \simeq \gamma \left( \frac{\xi}{a}\right)^{z-2}\frac{n^*(t)}{g} \simeq \gamma \left( \frac{r^*(t)}{a}\right)\left( \frac{fa}{k_BT}\right)^{3-z}, \label{gamma_f_t} \end{eqnarray} where we assume that the equilibrium formula~(\ref{gamma_eq}) is valid up to the length scale $\xi$, and that these contributions add up at larger scales. The force balance equation thus reads \begin{eqnarray} \gamma_f(t) \frac{dr^*(t)}{dt} \simeq f, \label{force_balance_f} \end{eqnarray} the solution of which is \begin{eqnarray} r^{*}(t) (\simeq \langle x(t)\rangle) \simeq a \left( \frac{t}{\tau_0}\right)^{1/2}\left( \frac{fa}{k_BT}\right)^{(z-2)/2} \label{r*_f} \\ n^{*}(t) \simeq \left( \frac{t}{\tau_0}\right)^{1/2}\left( \frac{fa}{k_BT}\right)^{(z/2)-(1/\nu)} \label{n*_f}. \end{eqnarray} In addition to the anomalous drift exponent $1/2$, there arises a force dependence with the characteristic exponent $(z-2)/2$, i.e., the response is nonlinear except for the Rouse model $z=4$~\cite{PRE_Sakaue_2012,PRE_Rowghanian_Grosberg_2012}.
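In reduced units ($a=\tau_0=k_BT=\gamma=1$), Eqs.~(\ref{gamma_f_t}) and~(\ref{force_balance_f}) give $r\,dr = f^{z-2}\,dt$, i.e. $r^*(t)=(2f^{z-2}t)^{1/2}$, from which the two exponents in Eq.~(\ref{r*_f}) can be read off numerically; a minimal sketch:

```python
import math

def drift(t, f, z):
    """Solution of gamma_f(t) dr/dt = f with gamma_f ~ r * f**(3-z)
    (reduced units): r(t) = (2 * f**(z-2) * t) ** 0.5."""
    return (2.0 * f ** (z - 2) * t) ** 0.5

z = 3  # non-draining dynamics
# log-derivatives recover the time and force exponents of Eq. (r*_f)
exp_t = math.log(drift(100.0, 2.0, z) / drift(1.0, 2.0, z)) / math.log(100.0)
exp_f = math.log(drift(1.0, 4.0, z) / drift(1.0, 1.0, z)) / math.log(4.0)
# exp_t = 1/2 and exp_f = (z-2)/2 = 1/2 for z = 3
```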
Superimposed on this drift is the (small) diffusion $\sqrt{\langle \Delta x^2(t) \rangle}$, the property of which is to be unveiled. One might estimate it from the relation analogous to Eq.~(\ref{force_balance_eq}), $\gamma_f(t)\, d \sqrt{\langle \Delta x^2(t) \rangle}/dt \simeq k_BT/\sqrt{\langle \Delta x^2(t) \rangle}$, which yields the conjecture \begin{eqnarray} \sqrt{\langle \Delta x^2(t) \rangle} \simeq a \left( \frac{t}{\tau_0}\right)^{1/4} \left( \frac{fa}{k_BT}\right)^{(z/4)-1} \label{DX_f} \end{eqnarray} where $\gamma_f(t)$ is determined from Eqs.~(\ref{gamma_f_t}) and~(\ref{r*_f}). {\it Remarks---.} (i) The effect of the force is a weak perturbation on scales smaller than $\xi \simeq k_BT/f$. The corresponding time scale is $\tau_{f0} \simeq \tau_0 ( \xi/a)^{z}$ given in Eq.~(\ref{tau_{f0}}), which signals the onset of the strong force regime. In the time range $t < \tau_{f0}$, the weak force regime applies~\cite{PRE_Sakaue_2007,PRE_Saito_2013}. (ii) The terminal time of the driven process is given by the condition $r^*(\tau_f) \simeq R_{\parallel}$; \begin{eqnarray} \tau_f \simeq \tau_0 N^2 \left( \frac{fa}{k_BT}\right)^{(2/\nu)-z} \simeq \tau_{f0} \left( \frac{N}{g}\right)^2. \label{tau_f} \end{eqnarray} At this time, the tension caused by the external force reaches the other chain end. This crosses over to the equilibrium formula~(\ref{tau_eq}) at $f \rightarrow k_BT/R$. \subsection{Generalized Langevin Equation} The above scaling arguments for the memory effect can be generalized to the case of an arbitrary protocol of the time-dependent driving force $f(t)$. This leads to the GLE given in Eq.~(\ref{GLE_1}) for the motion of the tagged segment~\cite{PRE_Sakaue_2013}.
The equivalent expression is \begin{eqnarray} 0 = - \int_{-\infty}^t ds \ \Gamma(t-s) \frac{dx(s)}{ds} + f(t) + \omega(t), \label{GLE_2} \end{eqnarray} where the friction kernel $\Gamma(t)$ and the noise $\omega(t)$ are related to the mobility kernel $\mu(t)$ and the noise $\eta(t)$ in Eq.~(\ref{GLE_1}) through ${\hat \Gamma} ({\hat z}) {\hat \mu} ({\hat z})=1$ and ${\hat \omega}({\hat z}) = {\hat \Gamma} ({\hat z}) {\hat \eta}({\hat z})$ in the Laplace domain~\cite{JStatMech_Panja_2010_01}. {\it Weak force regime---.} In our protocol, we switch on a constant force at $t=0$, i.e., $f(t)=f\Theta (t)$ with $\Theta (t)$ being the Heaviside step function. The time-dependent friction is then $\gamma(t) = [\int_0^t ds \ \mu(s)]^{-1}$. Using Eqs.~(\ref{gamma_eq}) and~(\ref{r*_eq}), we obtain \begin{eqnarray} \mu(t) &\simeq& - \left(\frac{1}{\gamma \tau_0} \right) \left( \frac{t}{\tau_0}\right)^{(2/z)-2} \label{mu_eq} \\ \Gamma(t)&\simeq& \left(\frac{\gamma}{ \tau_0} \right) \left( \frac{t}{\tau_0}\right)^{-(2/z)}, \label{Gamma_eq} \end{eqnarray} where the minus sign in $\mu(t)$ comes from the fact that $2-z <0$ in practice. This reflects the viscoelastic response of the tagged segment leading to the sub-diffusion. Note that such a power-law regime should be viewed as an intermediate asymptotics valid in the time range $\tau_0 \ll t \ll \tau$ (see Sec.~\ref{ModeAnalysis}). We assume that the memory kernels ($\mu$ and $\Gamma$) are related to the noises ($\eta$ and $\omega$) through the FDT given by Eq.~(\ref{FDT_GLE}); the equivalent expression is $\Gamma(t-s)k_BT = \langle \omega(t) \omega(s) \rangle$. The AD and VD can then be calculated as \begin{eqnarray} \left< x(t) \right> &=& \int_0^t ds \int_0^s du \, \mu(s-u) f \simeq a\left(\frac{fa}{k_BT}\right) \left( \frac{t}{\tau_0}\right)^{2/z}\\ \left< \Delta x^2(t) \right>&=&\int_0^t ds \int_0^t du \, \langle \eta(s) \eta(u) \rangle \simeq a^2 \left( \frac{t}{\tau_0}\right)^{2/z}.
\end{eqnarray} in agreement with Eqs.~(\ref{r_drift_eq}) and~(\ref{r*_eq}), respectively. {\it Strong force regime---.} Following the same line of argument as in the weak force regime, we obtain the estimate of the nonlinear memory kernel \begin{eqnarray} \mu_f(t) &\simeq& - \frac{1}{\gamma \tau_0}\left( \frac{t}{\tau_0}\right)^{-3/2}\left( \frac{fa}{k_BT}\right)^{(z/2)-2} \label{mu_f_scaling} \\ \Gamma_f(t) &\simeq& \frac{\gamma}{\tau_0}\left( \frac{t}{\tau_0}\right)^{-1/2}\left( \frac{fa}{k_BT}\right)^{2-(z/2)}. \label{gamma_f_scaling} \end{eqnarray} One can easily check that this yields the drift scaling, which is in accord with Eq.~(\ref{r*_f}). The fluctuation around this drift is subtle, however. In Ref.~\cite{PRE_Sakaue_2013}, it was assumed that the property of the noise $\eta_f$ (or $\omega_f$) is encoded in the kernel through the relation~(\ref{FDT_GLE_f}). With this naive assumption, the solution of the GLE leads to the VD scaling, which is in accord with Eq.~(\ref{DX_f}). We repeat once more that this estimate needs to be checked. {\it Summary---.} The results and conjectures obtained so far from the scaling argument are summarized as follows: The leading component in the dynamics of the tagged segment is the fluctuation or the drift in the weak or strong force regime, respectively. In either case, the motion of the tagged segment creates the tension, which gets transmitted along the chain with characteristic dynamics in the respective regime.
The resultant anomalous dynamics can be expressed using the memory kernel as \begin{eqnarray} \langle \Delta x^2(t)\rangle &\simeq& \frac{k_BT}{\Gamma (t)} \qquad [{\rm weak \ force \ regime}]\\ \langle x(t) \rangle &\simeq& \frac{f}{\Gamma_f(t)} \label{X_f}\qquad [{\rm strong \ force \ regime}] \end{eqnarray} The FDT~(\ref{FDT_GLE}) or its analogous relation~(\ref{FDT_GLE_f}) then implies \begin{eqnarray} \langle x(t) \rangle &\simeq& \frac{f}{\Gamma (t)} \qquad [{\rm weak \ force \ regime}]\\ \langle \Delta x^2(t)\rangle &\simeq& \frac{k_BT}{\Gamma_f(t)} \qquad [{\rm strong \ force \ regime}] \label{DX2_f} \end{eqnarray} for the minor component in the motion. In addition, it is worth pointing out that the GLE formalism with the FDT~(\ref{FDT_GLE}) or its analogue~(\ref{FDT_GLE_f}) predicts a quantitative relation between the drift and the fluctuation \begin{eqnarray} \left< \Delta x^2(t) \right> = \frac{2k_BT }{f}\left< x(t) \right> \label{x_xx} \end{eqnarray} regardless of the specific form of the memory kernel. Very recently, this relation has been used to evaluate the driving force of the bacterial chromosome segregation {\it in vivo}~\cite{BJ_Lampo_2015}. \section{Molecular dynamics simulations} \label{MD} \begin{figure}[t] \begin{center} \includegraphics[scale=0.70]{fig2.eps} \caption{ (Color Online) Dynamics of the tagged segment from MD simulations. Left or right column shows results in 3D or 2D, respectively. (a) Time evolution of the average drift and the VD at $fa/k_BT = 0.3$. Force dependence of (b) the average drift and (c) the VD at time $t/\tau_{MD}=5\times 10^2$. All plots are displayed in a double-logarithmic scale. Inset triangle slopes indicate the theoretical scaling exponents predicted by Eqs.~(\ref{gamma_f_scaling}),\,(\ref{X_f}),\,(\ref{DX2_f}).
} \label{fig2} \end{center} \end{figure} \begin{figure}[t] \begin{center} \includegraphics[scale=0.65]{fig3.eps} \caption{ (Color Online) VD-AD ratio $f \left<\Delta x^2(t) \right> / (k_BT\left< x(t) \right>) $ obtained from MD simulations in 3D (left) and 2D (right) as a function of time. } \label{fig3} \end{center} \end{figure} In this section, we perform MD simulations to verify the scaling predictions in Sec.~II, in particular Eqs.~(\ref{X_f}) and~(\ref{DX2_f}) in the strong force regime. In the simulations, the equation of motion for each segment is \begin{eqnarray} m \frac{d^2 \Vec{x}_i}{dt^2}=-\gamma \frac{d\Vec{x}_i}{dt} -\nabla_{\Vec{x}_i} U +\Vec{\zeta}_i(t) +f \delta_{iN} \Vec{e}_x, \end{eqnarray} where $i \ (=1, \ldots, N)$ is the bead index, $m$ and $\gamma$ are the mass and the friction coefficient of a bead, $\Vec{\zeta}_i(t)$ is a Gaussian white noise with mean zero and the variance $\left<\Vec{\zeta}_i(t) \Vec{\zeta}_j(t') \right>=2\gamma k_BT \delta_{ij}\delta(t-t') \Vec{1}$ with $\Vec{1}$ being the unit matrix, and $\Vec{e}_x$ is the unit vector directed along the $x$-axis. HIs are ignored for simplicity (free draining). The total potential $U=U_\mathrm{FE}+U_\mathrm{rep}$ consists of the finitely extensible nonlinear elastic potential; \begin{eqnarray} U_\mathrm{FE} &=& -\frac{C_\mathrm{FE}}{2} \sum_{i=1}^{N-1} (2a)^2 \log{\left(1-\frac{|\Vec{x}_{i+1}-\Vec{x}_i|^2}{(2a)^2} \right)} \end{eqnarray} and the repulsive potential between different beads; \begin{eqnarray} U_\mathrm{rep} &=& \epsilon \sum_{i<j} \frac{a^{12}}{|\Vec{x}_i-\Vec{x}_j|^{12}} \qquad \mathrm{for} \qquad |\Vec{x}_i-\Vec{x}_j| \leq R \nonumber \\ &=& 0 \qquad \mathrm{for} \qquad |\Vec{x}_i-\Vec{x}_j| >R. \nonumber \end{eqnarray} We set $\epsilon=k_BT$, $C_\mathrm{FE}=10k_BT/a^2$, $\gamma = (m k_BT/a^2)^{1/2}$ and the unit time is $\tau_{MD}=m/\gamma= (m a^2/k_BT)^{1/2} = \gamma a^2/k_BT$. The total number of beads is $N=100$, for which the equilibrium size is $R\simeq 6.8a$ in 3D and $\simeq 9.5a$ in 2D.
Initial conditions are drawn from equilibrium configurations, and the force applied at the $N$-th bead is switched on at $t=0$. We choose the time step for integrating the equation of motion as $\delta t = 0.005 \tau_{MD} $. Figure~2 (a) shows the time evolution of the AD $\langle x(t) \rangle$ and the VD $\langle \Delta x^2(t) \rangle$ of the bead pulled by the force $f=0.3 k_BT/a$ in 3D (left) and 2D (right). Both quantities exhibit a slope close to $0.5$ in agreement with the scaling predictions~(\ref{r*_f}) and~(\ref{DX_f}). After a long time $t/\tau_{MD} > 5\times 10^3 \ (10^4)$ in 3D (2D), the dynamics becomes normal, i.e., $\langle x(t) \rangle \sim \langle \Delta x^2(t) \rangle \sim t$, which corresponds to the center of mass mode. This time is interpreted as the tension propagation time $\tau_f$ of Eq.~(\ref{tau_f}), at which the effect of the driving force reaches the other chain end, and the chain settles into the steady state as a whole. Figure~2 (b) shows the force dependence of the drift $\langle x(t) \rangle$. The data are taken at time $t/\tau_{MD} = 5 \times 10^2$ in the anomalous dynamics regime (see Fig.~2 (a)). Recalling the dynamical exponent $z=2+\nu^{-1}$ in the free draining case, the results are in good agreement with the predicted force exponent from Eq.~(\ref{r*_f}), that is $(2\nu)^{-1} \simeq 0.85 \ (0.67)$ in 3D (2D). Figure~2 (c) shows the force dependence of the VD $\langle \Delta x^2(t) \rangle$. Again, the data are taken at time $t/\tau_{MD} = 5 \times 10^2$. The predicted force exponent from Eq.~(\ref{DX_f}) is $(2\nu)^{-1} -1 \simeq -0.15 \ (-0.33)$ in 3D (2D). We find non-negligible deviations from the conjecture~(\ref{DX_f}), which are more evident in 2D. Next, Fig.~3 shows the normalized VD-AD ratio $f\left< \Delta x^2(t) \right>/(k_BT\left< x(t)\right>)$. Recall that the GLE with the relation~(\ref{FDT_GLE_f}) predicts this ratio to be time independent and exactly $2$ (Eq.~(\ref{x_xx})).
While this relation is satisfied reasonably well over a wide time range in 3D, appreciable deviations are found in 2D, which grow larger as the force becomes larger. The above results on the fluctuation $ \Delta x(t) $ of the tagged segment around the average drift suggest that, against a naive expectation, it cannot be described as an fBm in the strong force regime. What causes this? To answer it, we shall take a microscopic polymer model, and attempt to derive the memory kernel and the GLE based on the analysis of modes in the polymer chain. \section{Mode Analysis} \label{ModeAnalysis} We first provide an exact calculation for the Rouse model and make some remarks. We then approximately incorporate the nonlinearity due to SA and HIs into the mode equation by making the effective friction and spring constants mode-number dependent (Sec.~\ref{SA_HI}). Based on this solid framework for the weak force regime, we develop an approximate treatment of the strong force regime in Sec.~\ref{DrivenDynamics}. \subsection{Rouse model} \label{Rouse_model} The polymer is modeled by $N+1$ connected beads. These beads have no excluded volume, are linked by harmonic springs in series (the root-mean-square length of each spring is $a$), and move in a viscous fluid, kicked by thermal noise and the external force. As in Sec.~\ref{scaling_theory}, we again focus on the direction of the force only, and suppress the vector notations. In the limit where the bead labeling index $n \in [0,N]$ is made a continuous variable, the equation of motion reads \begin{eqnarray} \gamma \frac{\partial x_n}{\partial t} = k \frac{\partial^2 x_n}{\partial n^2} + \zeta_n + f_n, \label{Rouse_eq} \end{eqnarray} where the friction coefficient for a bead $\gamma$ and the spring constant $k = 3 k_BT/a^2$ (in 3D) define a microscopic time scale $\tau_0 = \gamma/k$, and $f_n$ is the external force acting on the $n$-th bead.
Open boundary conditions are imposed at both chain ends for linear polymers \begin{eqnarray} \frac{\partial x_n}{\partial n} \Big|_{n=0}=\frac{\partial x_n}{\partial n} \Big|_{n=N}=0. \label{Rouse_bound} \end{eqnarray} The random forces $\zeta_n(t)$ acting independently on individual beads are Gaussian white noise with zero mean, whose correlation obeys the FDT; \begin{eqnarray} \langle \zeta_{n}(t) \zeta_{m}(s) \rangle = 2 \gamma k_BT \delta(n-m) \delta(t-s). \label{FDT_segment} \end{eqnarray} In the Rouse model, the noise and the external force acting on some segment affect the motion of other segments through the elastic connectivity. We may thus expect that the above FDT~(\ref{FDT_segment}) at the individual segment level could be coordinated to generate a relation between the fluctuation and the response at the level of collective dynamics of the entire chain. Below, we shall derive such a relation \begin{eqnarray} \langle \eta_n (t) \eta_m (s) \rangle =k_B T \chi_{nm}(t,s) \label{FDT_nm} \end{eqnarray} with the concrete functional form of $\chi_{nm}(t,s)$ and $\eta_n(t)$. Here the response function $\chi_{nm}(t,s) \equiv \delta \langle {\dot x}_{n}(t) \rangle/ \delta f_{m}(s)$ describes the change in the average velocity of the $n$-th segment at time $t$ caused by the force that acted on the $m$-th segment at time $s \ (\le t)$, and $\eta_n(t)$ is a correlated Gaussian noise with zero mean $\langle \eta_n(t) \rangle =0$ acting on the $n$-th segment. The FDT~(\ref{FDT_nm}) thus indicates that the cross correlation of the noise has a long time memory, which is related to the collective response of the segments. It includes Eq.~(\ref{FDT_GLE}) as the special case $n=m$ of the self-response and self-correlation. To this end, we analyze the Rouse equation~(\ref{Rouse_eq}) in terms of the normal coordinate $X_p(t)$ \begin{eqnarray} X_p(t) = \int_{0}^{N} dn \ h_{p,n} x_n(t) \end{eqnarray} with \begin{eqnarray} h_{p,n}= \frac{1}{N}\cos{\left(\frac{ \pi pn}{N}\right)}.
\end{eqnarray} Its inverse transform is \begin{eqnarray} x_n(t)&=&\sum_{p \ge 0} X_p(t) h^{\dagger}_{p,n}, \label{X_x} \end{eqnarray} where \begin{eqnarray} h^{\dagger}_{p,n} = \frac{2 \cos{\left( \frac{\pi np}{N}\right)}}{1 + \delta_{p0}}. \end{eqnarray} The normal modes obey the following equation of the overdamped harmonic oscillator type \begin{eqnarray} \gamma_p \frac{\partial X_p(t)}{\partial t} = -k_p X_p(t) + Z_p(t) + F_p(t), \label{Eq_NC} \end{eqnarray} where the spring and the friction constants $k_p$, $\gamma_p$ define the relaxation rate of the $p$-th mode $k_p/\gamma_p = (k/\gamma)(\pi p/N)^2$, and $Z_p = \int dn\ (\gamma_p/ \gamma) h_{p,n} \zeta_n(t)$ is the noise in the mode space. The external force can be arbitrary, but for our present purpose, we manipulate a particular segment (labeled by the index $m$), i.e., $f_n(t) = f_m(t) \delta (n-m)$, which is distributed in the mode space according to $F_p (t)= \int dn\ (\gamma_p/ \gamma) h_{p,n} f_n(t) =(\gamma_p/\gamma) h_{p, m} f_m(t)$. There is no restoring force for the $p=0$ mode, which corresponds to the motion of the center of mass. It is useful to set $\gamma_p = 2N \gamma/(1+ \delta_{p0})$, thus $k_p = 2k ( \pi p)^2/N$, so that the FDT in the mode space takes a familiar form: $\langle Z_{p}(t) Z_{q}(s) \rangle = 2 \gamma_p k_BT \delta_{pq}\delta(t-s)$. Equation~(\ref{Eq_NC}) is solved as \begin{eqnarray} X_p(t) &=& \frac{1}{\gamma_p} \int_{t_0}^{t} ds \ e^{-(k_p/\gamma_p)(t-s)}(Z_p(s) + F_p(s)) \nonumber \\ &&+ X_p(t_0)e^{-(k_p/\gamma_p)(t-t_0)}, \label{Z_p_solution} \end{eqnarray} where $X_p(t_0)$ is the initial condition for the mode $p$. We assume $t_0 \rightarrow - \infty$ so that the system is in a stationary state before we apply the external force.
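The orthogonality underlying this transform pair can be confirmed numerically. The sketch below (ours; the chain length, mode cutoff, and integration grid are arbitrary choices) synthesizes $x_n$ from known amplitudes via the inverse transform and recovers them with the forward integral $X_p = \int_0^N dn\, h_{p,n} x_n$:

```python
import numpy as np

N, P, M = 50.0, 6, 20000
n = (np.arange(M) + 0.5) * N / M          # midpoint grid for the integral over n
rng = np.random.default_rng(1)
c = rng.standard_normal(P + 1)            # arbitrary mode amplitudes X_p

p = np.arange(P + 1)
hdag = 2 * np.cos(np.pi * np.outer(p, n) / N) / (1 + (p == 0))[:, None]
x = c @ hdag                              # x_n = sum_p X_p h^dagger_{p,n}

h = np.cos(np.pi * np.outer(p, n) / N) / N
X = (h * x).sum(axis=1) * (N / M)         # X_p = int_0^N dn h_{p,n} x_n

assert np.allclose(X, c, atol=1e-5)       # forward o inverse = identity
```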
By direct calculation, one can check the following FDT (see Appendix B) \begin{eqnarray} C_p(t,s) = k_BT \chi_p(t,s), \label{FDT_p} \end{eqnarray} where the velocity correlation function $C_p(t,s)$ and the response function $\chi_p(t,s)$ are defined as $ \langle {\Delta \dot X}_{p}(t) {\Delta \dot X}_{q}(s)\rangle \equiv \delta_{pq}C_p(t,s)$ and $\chi_p(t,s) \equiv \delta \langle {\dot X}_{p}(t) \rangle / \delta F_{p}(s)$, respectively. Taking the time derivative of Eq.~(\ref{Z_p_solution}) and transforming it into the real coordinates using Eq.~(\ref{X_x}), one can express the time derivative of the position of the $n$-th segment in the following form; \begin{eqnarray} \frac{dx_n(t)}{dt} = \int_{-\infty}^t ds \ \chi_{nm}(t,s) f_m(s) + \eta_n(t), \label{GLE} \end{eqnarray} where \begin{eqnarray} \chi_{nm}(t,s) = \sum_{p \ge 0} \chi_p (t,s) h^{\dagger}_{p,n}h^{\dagger}_{p,m} \label{chi_v} \end{eqnarray} and \begin{eqnarray} \eta_n(t)= \sum_{p \ge 0}\int_{-\infty}^t ds \ \chi_p(t,s) Z_p(s) \ h_{p,n}^{\dagger} \end{eqnarray} are, respectively, interpreted as the velocity response function and the noise (The functional form of $\chi_p(t,s)$ is given in Eq.~(\ref{chi_p})). The latter is identified as the velocity fluctuation $\eta_n(t) = \Delta {\dot x}_n(t) = {\dot x}_n(t) - \langle {\dot x}_n(t) \rangle$ of the $n$-th segment. The fluctuation-response relation~(\ref{FDT_nm}) can be most easily verified by decomposing the response and the correlation functions into modes: Eq.~(\ref{chi_v}) and $\langle \eta_n (t) \eta_m (s) \rangle =\langle \Delta {\dot x}_n(t) \Delta {\dot x}_m(s)\rangle=\sum_{p \ge 0} C_p(t,s) h^{\dagger}_{p,n}h^{\dagger}_{p,m}$. The FDT~(\ref{FDT_p}) in the mode space then leads to Eq.~(\ref{FDT_nm}). {\it Memory kernel---.} Suppose we tag a particular segment (labeled by $m$). The force $f(t)$ is applied only to that $m$-th segment, and we track its stochastic motion. The information of interest is contained in the self-response $\chi_{mm}(t,s)$.
In this single-segment tracking analysis, the positions of segments other than $m$ are inaccessible to observation. One can therefore omit the label index and write the equation of motion of the tagged segment as \begin{eqnarray} \frac{dx(t)}{dt} = \int_{-\infty}^t ds \ \mu(t-s) f(s) + \eta(t) \label{Eq_tagged_monomer} \end{eqnarray} where, as verified above, the FDT~(\ref{FDT_GLE}) holds. The mobility kernel $\mu(t) (\equiv \chi_{mm} (t,0))= \mu_0^{(CM)}(t) + \mu_0(t) + \mu_M(t)$ is composed of three terms according to Eq.~(\ref{chi_v}); \begin{eqnarray} \mu_0^{(CM)} (t) &=& \frac{2}{\gamma_0} \delta(t) = \frac{2}{N \gamma}\delta(t) \label{mu_CM}\\ \mu_0(t) &=& \sum_{p =1}^N \frac{2}{\gamma_p} \delta(t) (h_{p,m}^{\dagger})^2 \simeq \frac{2}{\gamma} \delta(t) \label{mu_m0}\\ \mu_M(t) &=& - \sum_{p =1}^N \frac{k_p}{\gamma_p^2}e^{-(k_p/\gamma_p)t}(h_{p,m}^{\dagger})^2 \label{mu_m_Rouse} \\ &\simeq& -\frac{1}{4 \sqrt{\pi} }\frac{1}{\gamma \tau_0} \left( \frac{t}{\tau_0}\right)^{-3/2} \quad (\tau_0 \ll t \ll \tau), \nonumber \end{eqnarray} where $\tau \equiv \gamma_1/k_1 \simeq \tau_0 N^2$ is the terminal (Rouse) time. In the above, the upper bound in the summation over $p$ is set reflecting the original discrete nature of the model with $N$ degrees of freedom, and we replace $\cos^2{(p\pi n/N)}$ by the average $1/2$. The last near-equality in Eq.~(\ref{mu_m_Rouse}) is valid in the time window $\tau_0 \ll t \ll \tau$, where the summation over $p$ can be replaced by the Gaussian integral. For longer times $t > \tau$, only the $p=1$ mode is relevant; thus, the memory decays exponentially. {\it Remarks---.} (i) In the very short time scale ($ t \simeq \tau_0$), the segment is unaware of the connectivity, and exhibits the usual viscous response with the segment friction coefficient $\gamma$ (Eq.~(\ref{mu_m0})).
In the time scale coarser than $\tau$, the response again becomes viscous, but now with the much larger friction coefficient $\gamma N$, i.e., the center of mass mode (Eq.~(\ref{mu_CM})); see the sum rule Eq.~(\ref{sum_rule}) below. (ii) In the intermediate time scale ($\tau_0 \ll t \ll \tau$), the last term $\mu_M(t)$ (Eq.~(\ref{mu_m_Rouse})) dominates the dynamics of the tagged segment. Noting $z=4$ for the Rouse model, the result~(\ref{mu_m_Rouse}) agrees with the scaling analysis Eq.~(\ref{mu_eq}) in Sec.~\ref{scaling_theory}. This term represents a memory effect, which arises from the superposition of the internal modes in the Rouse polymer. The minus sign here is a hallmark of the viscoelastic response inherent in the system with elastic connectivity (see, for instance, a similar analysis for polymerized membrane~\cite{Mizuochi}). (iii) In Eq.~(\ref{mu_m_Rouse}), the mode \begin{eqnarray} p^*(t) = \left( \frac{\gamma N^2}{k \pi^2 t}\right)^{1/2} \simeq \left( \frac{t}{\tau}\right)^{-1/2} \label{p*_eq} \end{eqnarray} has the largest contribution at time $t$. The corresponding number of segments is $n^{*}(t) \simeq N/p^*(t) \simeq (t/\tau_0)^{1/2}$. This agrees with our scaling estimate for the tension front in the weak force regime (Eq.~(\ref{n*_eq}) with $z=4$ and $\nu=1/2$). The physics behind this agreement is the following: The effect from the larger scale beyond the tension front $n^*(t)$ is irrelevant, or only marginal. Therefore, even if we neglect it, i.e., by shifting the lower bound in the summation from $p_{l.b}=1$ to $p_{l.b}= p^*(t)$, one should get a qualitatively correct result with the proper exponent. (iv) The following sum rule \begin{eqnarray} \int_0^{\infty} dt \ [\mu_0(t) + \mu_M(t)] =0 \label{sum_rule} \end{eqnarray} may hold for any physical system, which indicates that in the long time limit $t \gg \tau$, all the internal modes relax, and only the center of mass mode remains.
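The intermediate-time power law in Eq.~(\ref{mu_m_Rouse}) is easy to confirm by summing the modes directly. In the sketch below (units $\gamma = k = \tau_0 = 1$, with $(h^{\dagger}_{p,m})^2$ replaced by its average value $2$, as in the text), the discrete sum reproduces both the $t^{-3/2}$ slope and the prefactor $1/(4\sqrt{\pi})$:

```python
import numpy as np

N = 2000
p = np.arange(1, N + 1)
rate = (np.pi * p / N) ** 2               # k_p / gamma_p
amp = rate / (2.0 * N)                    # k_p / gamma_p^2  (gamma_p = 2N, k_p = 2(pi p)^2/N)

def mu_M(t):                              # |mu_M(t)| of Eq. (mu_m_Rouse), (h_dag)^2 -> 2
    return 2.0 * (amp * np.exp(-rate * t)).sum()

t1, t2 = 10.0, 1000.0                     # tau_0 << t << tau = N^2
slope = np.log(mu_M(t2) / mu_M(t1)) / np.log(t2 / t1)
assert abs(slope + 1.5) < 0.02            # |mu_M| ~ t^{-3/2}
assert np.isclose(mu_M(t1), t1**-1.5 / (4 * np.sqrt(np.pi)), rtol=0.02)
```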
(v) In certain visco-elastic environments, the segment response itself could contain the memory effect. Then, one may think of the visco-elastic Rouse model, where the viscous friction term $\gamma {\dot x}_n$ in the Rouse equation (\ref{Rouse_eq}) is replaced with the integral kernel $\int ds \ \Gamma_1(t-s) {\dot x}_n(s)$~\cite{PRL_Weber_2010, PRE_Weber_2010,Rouse_viscoelastic}. In the case of the power-law memory kernel $\Gamma_1(t) \sim t^{-\alpha_1}$ with $0< \alpha_1 < 1$, the exponential relaxation in the viscous Rouse model (Eq.~(\ref{Z_p_solution})) is generalized to a non-exponential one described by the generalized Mittag-Leffler function. This results in the memory kernel in the tagged segment dynamics $\mu_M(t) \sim - t^{-(2-(\alpha_1/2))}$, hence the anomalous exponent $\alpha = \alpha_1/2$ for the tagged segment diffusion. Such a visco-elastic Rouse model has been proposed to analyze the sub-diffusion of bacterial chromosomal loci~\cite{BJ_Lampo_2015,PRL_Weber_2010}. The usual viscous result corresponds to the limit $\alpha_1 \rightarrow 1$. The factor-of-$2$ relation between the single segment exponent $\alpha_1$ and the tagged segment exponent $\alpha$ is a general consequence of the Rouse model. (vi) The Rouse model is valid as long as $f \lesssim k_BT/a$. For larger force, the chain section close to the pulled site becomes highly stretched, forming a ``stem''~\cite{EPL_Brochard_1995}. For the prescription and the scaling analysis in such a situation, see Ref.~\cite{PRE_Sakaue_2012}. \subsection{Self-avoidance and hydrodynamic interactions} \label{SA_HI} In many practical situations, one or both of these effects become important. These interactions are non-local and conformation dependent; hence they make the equations of motion highly nonlinear, which prevents a rigorous analysis based on the mode expansion. Nevertheless, one can gain physically appealing insights in terms of approximate mode analysis.
The pre-averaging approximation provides a way to treat the HIs in terms of modes, in which the conformation-dependent mobility tensor is averaged using the equilibrium segment distribution~\cite{Doi_Edwards}. This yields the effective friction constant for the mode $p$ \begin{eqnarray} \gamma_p &\simeq& \left\{ \begin{array}{ll} \gamma N^{\nu(z-2)} & (p=0) \\ \gamma p (N/p)^{\nu(z-2)} & (p \neq 0) \label{gamma_p} \end{array} \right. \end{eqnarray} Note that for the free-draining polymer, $z=2 + (1/\nu)$, no pre-averaging is involved; we then recover $\gamma_p$ for the Rouse model. At a similar level, the self-avoidance (SA) can be treated by employing the linearization (Gaussian) approximation, which alters the spring constant for the mode $p$ as \begin{eqnarray} k_p &\simeq& k p(p/N)^{2 \nu}. \label{k_p} \end{eqnarray} The validity of this form as well as a high degree of statistical independence of different modes has been numerically demonstrated in Ref.~\cite{Panja_Rouse_mode}. Note that for the ideal polymer $\nu = 1/2$, we recover $k_p$ for the Rouse model. Let us analyze the mode equation~(\ref{Eq_NC}) with Eqs.~(\ref{gamma_p}) and~(\ref{k_p}). Note that the terminal time is now given by $\tau = \gamma_1/k_1 \simeq \tau_0 N^{\nu z}$ (see Eq.~(\ref{tau_eq})). The mobility kernel is again decomposed as $\mu(t) = \mu_0^{(CM)}(t) + \mu_0(t) + \mu_M(t)$. While the segment instant response $\mu_0(t)$ is essentially unchanged from the Rouse model (Eq.~(\ref{mu_m0})), the $N$ dependence of the center of mass response is modified as $\mu_0^{(CM)} (t) \simeq (\gamma N^{\nu(z-2)})^{-1} \delta(t)$.
In addition, the memory kernel is evaluated as \begin{eqnarray} \mu_M(t) &\simeq& - \sum_{p=1}^{N} \frac{1}{\gamma N \tau_0} \left( \frac{p}{N}\right)^{2\nu(z-1)-1}e^{-\frac{t}{\tau_0}\left( \frac{p}{N}\right)^{\nu z}} (h_{p,n}^{\dagger})^2 \nonumber \\ &\simeq& - \frac{1}{\gamma \tau_0} \left( \frac{t}{\tau_0}\right)^{-(2-2z^{-1})} \label{mu_m_} \quad (\tau_0 \ll t \ll \tau) \label{mu_M_eq} \end{eqnarray} The last near-equality is valid in the intermediate time scale, where the summation is evaluated as the integral using the formula\footnote{The symbol $\Gamma(\cdot)$ here is used as the gamma function $\Gamma(z)=\int_0^{\infty} u^{z-1}e^{-u} du$ in this integral formula.} $\int_0^{\infty}dx \ x^{b-1}e^{-a x^{\theta}} = \Gamma(b/\theta) /(\theta a^{b/\theta})$ for $a, b, \theta > 0$. The result agrees with our scaling argument (see Eq.~(\ref{mu_eq})), and the tagged segment dynamics in this time scale is an fBm with the anomalous exponent $\alpha = 2/z$. {\it Remarks---.} (i) At time $t$, the mode $p^*(t) \simeq \left( t/\tau \right)^{-1/(\nu z)}$ has the largest contribution. The corresponding number of segments $n^*(t) \simeq N/p^*(t)$ agrees with our scaling argument Eq.~(\ref{n*_eq}) for the dynamics of the tension front. (ii) For the present description to be valid, at least qualitatively, the condition $f \lesssim k_BT/(aN^{\nu})$ is required. For stronger force, the conformation of the polymer deviates markedly from the equilibrium distribution, which invalidates the assumption used to evaluate the effective friction and spring constants in Eqs.~(\ref{gamma_p}) and~(\ref{k_p}). This is contrasted to the Rouse model case, for which the condition is much weaker, associated with the bond stretching (see remark (vi) in Sec.~\ref{Rouse_model}), but not the conformation.
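The same numerical check works with the modified mode spectrum. The sketch below (ours; units $\gamma = \tau_0 = 1$, average $(h^{\dagger})^2 \to 2$, and $\nu = 0.588$, $z = 2 + 1/\nu$ as an example of a 3D SA chain without HIs) confirms the exponent $2 - 2/z$ of Eq.~(\ref{mu_M_eq}):

```python
import numpy as np

nu = 0.588
z = 2.0 + 1.0 / nu                        # free draining; any z > 2 works here
N = 2000
s = np.arange(1, N + 1) / N               # s = p/N

def mu_M(t):                              # |mu_M(t)| of Eq. (mu_M_eq), (h_dag)^2 -> 2
    return 2.0 * (s**(2*nu*(z - 1) - 1) * np.exp(-t * s**(nu*z))).sum() / N

t1, t2 = 30.0, 3000.0                     # tau_0 << t << tau = N^{nu z}
slope = np.log(mu_M(t2) / mu_M(t1)) / np.log(t2 / t1)
assert abs(slope + (2.0 - 2.0/z)) < 0.02  # |mu_M| ~ t^{-(2 - 2/z)}
```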
In Sec.~\ref{DrivenDynamics}, we aim at constructing an effective description, which allows us to analyze the fluctuating driven dynamics in the larger force regime $f \gtrsim k_BT/(aN^{\nu})$ even with SA and HIs. \subsection{Driven dynamics} \label{DrivenDynamics} Suppose we start applying a constant strong force $f > k_BT/(aN^{\nu})$ to the chain end ($N$-th segment) at time $t=0$ (see Fig.~1(b)). Before that moment ($t<0$), the polymer assumes an equilibrium conformation. We are interested in the motion of the pulled segment. With SA or HIs, the dynamics is nonlinear, and it is out of equilibrium in the strong force regime. To analyze the average dynamics, the following nonlinear diffusion-type equation (called the {\it p}-Laplacian diffusion equation) has been proposed~\cite{EPL_Brochard_1994,EPL_Sebastian_2011}: \begin{eqnarray} \frac{\partial x_n}{\partial t} = D_0 \frac{\partial}{\partial (na)}\left[\left( \frac{\partial x_n}{\partial (na)}\right)^{p-2} \left( \frac{\partial x_n}{\partial (na)}\right) \right] \label{p_Lap} \end{eqnarray} with $p=(z-2)\nu/(1-\nu)$ and the segment diffusion coefficient $D_0 \simeq k_BT/\gamma$. This equation is derived from the force balance argument for the chain of blobs, and can be thought of as a nonlinear extension of the Rouse model (see Appendix A for the derivation). Equation~(\ref{p_Lap}) would be a useful starting point to analyze the stochastic dynamics of the polymer stretching in terms of the collective modes in the chain. Since, by construction, this is expected to provide a reasonable description of the average dynamics, one may add a random force $\zeta_n$ of zero mean to get a nonlinear Langevin equation \begin{eqnarray} \gamma_n \frac{\partial x_n}{\partial t}=k_n \frac{\partial^2 x_n}{\partial n^2} +\zeta_n(t) +f_n, \end{eqnarray} where $k_n$ and $\gamma_n$ are given by Eqs.~(\ref{k_n_s}) and~(\ref{gamma_n_s}) in Appendix~A.
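To illustrate how Eq.~(\ref{p_Lap}) transports tension from the forced end, a minimal explicit finite-difference sketch is given below (ours; the grid, time step, boundary treatment, and $\nu$ are illustrative choices, with $D_0 = a = 1$, and our assumption is that the driving enters as a constant boundary flux at the pulled end). Being written in conservative (flux) form, the scheme reproduces the global balance $\frac{d}{dt}\int x_n \, dn = $ injected flux, and exhibits the compactly supported front characteristic of degenerate nonlinear diffusion:

```python
import numpy as np

nu = 0.588
pexp = (2.0 + 1.0/nu - 2.0) * nu / (1.0 - nu)    # p = (z-2)nu/(1-nu), free draining
M, dn, dt, steps, phi = 200, 1.0, 0.05, 2000, 1.0

x = np.zeros(M)
for _ in range(steps):
    u = np.diff(x) / dn                          # gradient at interior interfaces
    flux = np.abs(u)**(pexp - 2.0) * u           # D0 |u|^{p-2} u
    F = np.concatenate(([0.0], flux, [phi]))     # zero flux at n=0, drive at n=N
    x += dt * np.diff(F) / dn

# Conservative form telescopes: total displacement grows at the injected rate.
assert np.isclose(x.sum() * dn, phi * dt * steps, rtol=1e-9)
# The tension front has not yet reached the free end.
assert abs(x[0]) < 1e-12 and x[-1] > x[M // 2]
```

This only mimics the deterministic part of the nonlinear Langevin equation written above; the noise is what the mode treatment below is designed to handle.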
Within the mono-block approximation, one can linearize it \begin{eqnarray} \gamma^{(f)} \frac{\partial x_n}{\partial t} = k^{(f)} \frac{\partial^2 x_n}{\partial n^2} + \zeta_n^{(f)}(t) + f_n \label{f_Rouse} \end{eqnarray} with the force-dependent spring and friction coefficients Eqs.~(\ref{k_f}),\ (\ref{gamma_f}). The random forces $\zeta_n^{(f)}$ independently acting on individual segments are assumed to be Gaussian white noise with zero mean $\langle \zeta_n^{(f)}(t) \rangle =0$, whose correlation obeys the FDT; $\langle \zeta_n^{(f)}(t)\zeta_m^{(f)}(s) \rangle = 2 \gamma^{(f)}k_BT \delta(n-m) \delta(t-s)$. Equation~(\ref{f_Rouse}) reduces to the Rouse model (Eq.~(\ref{Rouse_eq})) when $\nu=1/2$ and $z=2 + (1/\nu) = 4$. Otherwise, the SA or HIs result in the nonlinear response. With the force-free boundary condition (Eq.~(\ref{Rouse_bound})) and explicit inclusion of the external force $f_n(t)= f \Theta(t) \delta (N-n)$ acting on the end segment $n=N$, one can follow the analysis developed in Secs.~\ref{Rouse_model}--\ref{SA_HI}. In analyzing the dynamics of the nonlinear response, one has to be aware of the change in the mode spectrum, i.e., $(k_p,\gamma_p) \rightarrow (k_p^{(f)},\gamma_p^{(f)}) $ due to the external force. This effect may be treated in the following way. After the force is switched on at $t=0$, the equilibrium mode would persist during the induction time (see remark (i) in Sec.~\ref{time_dependent_friction}). Equation~(\ref{Eq_NC}) with Eqs.~(\ref{gamma_p}) and~(\ref{k_p}) would thus be valid up to $t=\tau_{f0}$. At $t > \tau_{f0}$, the effect of the driving dominates the mode dynamics, and the spring and the friction constants become altered to those in the strong force regime.
Thus, making these constants time dependent, the equation of motion in the mode space becomes \begin{eqnarray} \gamma^{*}_p (t) \frac{\partial X_p(t)}{\partial t} = -k^{*}_p (t) X_p(t) + Z^{*}_p (t) + F_p (t), \label{Eq_NC_f} \end{eqnarray} where the equilibrium mode structure $\gamma^{*}_p (t) = \gamma_p$, $k^{*}_p (t) = k_p$, $Z^{*}_p (t) = Z_p(t)$ persists only up to $t < \tau_{f0}$ (Eqs.~(\ref{gamma_p}) and~(\ref{k_p})). At $t > \tau_{f0}$, these switch to the stretched mode $\gamma^{*}_p (t) = \gamma^{(f)}_p$, $k^{*}_p (t) = k^{(f)}_p$, $Z^{*}_p (t) = Z^{(f)}_p(t)$ with \begin{eqnarray} \gamma^{(f)}_p &\simeq& \left\{ \begin{array}{ll} N \gamma^{(f)} & (p < p_f) \\ \gamma p (N/p)^{\nu(z-2)} & (p >p_f) \label{gamma_p_f} \end{array} \right. \end{eqnarray} \begin{eqnarray} k^{(f)}_p &\simeq& \left\{ \begin{array}{ll} k^{(f)} p^2/N & (p < p_f) \\ k p(p/N)^{2 \nu} & (p >p_f) \label{k_p_f} \end{array} \right. \end{eqnarray} and the Gaussian white noise $Z_p^{(f)}(t)$ with zero mean and the correlation $\langle Z^{(f)}_{p}(t) Z^{(f)}_{q}(s) \rangle = 2 \gamma^{(f)}_p k_BT \delta_{pq}\delta(t-s)$. The tension propagation time given in Eq.~(\ref{tau_f}) can be identified as the slowest relaxation time in the stretched mode $\tau_f = \gamma_1^{(f)}/k_1^{(f)}$. Here, the characteristic mode number \begin{eqnarray} p_f = 2N \left( \frac{fa}{k_BT}\right)^{1/\nu} \simeq \frac{N}{g} \label{p_f} \end{eqnarray} is introduced in such a way that the effect of the force is negligible for modes with $p>p_f (\Leftrightarrow a (2N/p)^{\nu} < k_BT/f)$; hence, the friction and spring constants are given by those in the weak force regime (Eqs.~(\ref{gamma_p}) and~(\ref{k_p}), respectively). Notice that our construction assures continuity for both $\gamma_p^{(f)}$ and $k_p^{(f)}$ across $p_f$.
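The continuity requirement fixes the amplitudes $\gamma^{(f)}$ and $k^{(f)}$ of the $p<p_f$ branch, and one can then check arithmetically that the slowest stretched mode indeed relaxes on the time scale of Eq.~(\ref{tau_f}). In the sketch below (ours; units $\gamma = k = \tau_0 = a = 1$, with example values of $\nu$, $N$ and $fa/k_BT$ in the strong force window), the two expressions agree up to the $O(1)$ factor $2^{2-\nu z}$ inherited from the definition~(\ref{p_f}):

```python
import numpy as np

nu = 0.588
z = 2.0 + 1.0 / nu
N, phi = 1000.0, 0.1               # N**(-nu) < phi < 1 (strong force window)
pf = 2.0 * N * phi**(1.0 / nu)     # Eq. (p_f)

# Amplitudes fixed by continuity of gamma_p^{(f)} and k_p^{(f)} at p = p_f:
gamma_f = (pf / N) * (N / pf)**(nu * (z - 2.0))   # from N gamma_f = pf (N/pf)^{nu(z-2)}
k_f = (N / pf) * (pf / N)**(2.0 * nu)             # from k_f pf^2/N = pf (pf/N)^{2 nu}

tau_f_modes = (N * gamma_f) / (k_f / N)           # gamma_1^{(f)} / k_1^{(f)}
tau_f_scaling = N**2 * phi**(2.0 / nu - z)        # Eq. (tau_f)

assert np.isclose(tau_f_modes, tau_f_scaling * 2.0**(2.0 - nu * z))
```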
The solution of Eq.~(\ref{Eq_NC_f}) for $t>\tau_{f0}$ takes the same form as Eq.~(\ref{Z_p_solution}) with the replacement $(k_p,\gamma_p, t_0) \rightarrow (k_p^{(f)},\gamma_p^{(f)}, \tau_{f0}) $, where the ``initial'' condition is given by \begin{eqnarray} X_p(\tau_{f0}) &=& \frac{1}{\gamma_p} \int_{-\infty}^{\tau_{f0}} ds \ e^{-\frac{k_p}{\gamma_p}(\tau_{f0}-s)}Z_p(s) \nonumber \\ && + \frac{F_p}{\gamma_p}\left( 1- e^{-\frac{k_p}{\gamma_p} \tau_{f0}}\right). \label{Xp_ini} \end{eqnarray} Crucially, this noise from the ``initial'' condition adds an extra contribution to the noise $\eta_f(t)$ in the motion of the tagged segment, which leads to a deviation from the relation~(\ref{FDT_GLE_f}) due to the SA effect, as will be discussed below. The motion of the tagged segment is described by Eq.~(\ref{Eq_tagged_monomer}) in the weak force regime at $t<\tau_{f0}$. To take account of the changes in the spring and friction constants at $t > \tau_{f0}$, we need to modify it as \begin{eqnarray} \frac{dx(t)}{dt} = \int_{-\infty}^t ds \ \mu_f(t,s; \tau_{f0}) f(s) + \eta_f(t; \tau_{f0}) \label{Eq_tagged_monomer_f} \end{eqnarray} with the mobility kernel \begin{eqnarray} \mu_f(t,s;\tau_{f0}) = \sum_{p \ge 0} \chi_{p,f} (t,s;\tau_{f0}) (h^{\dagger}_{p,N})^2 \label{mu_f} \end{eqnarray} and the noise \begin{eqnarray} \eta_f(t; \tau_{f0}) &=&\sum_{p \ge 0} h_{p,N}^{\dagger}\Bigl[ \int_0^t ds \chi_{p,f}(t,s;\tau_{f0}) Z_p^{*}(s) \nonumber \\ &&- X_p(0) \frac{k_p^{*}(t)}{\gamma_p^{*}(t)}\exp{\left(- \int_0^t ds \frac{k_p^{*}(s)}{\gamma_p^{*}(s)} \right)} \Bigr] \label{eta_f} \end{eqnarray} where $\chi_{p, f}(t,s;\tau_{f0})$ is the velocity response function for the mode $p$ calculated in Appendix B. \begin{figure}[t] \begin{center} \includegraphics[scale=0.5]{fig4.eps} \caption{ (Color Online) VD-AD ratio $f \left<\Delta x^2 \right> / (k_BT\left< x \right>) $ obtained from theory (Eq.~(\ref{x_xx_diff})) in 3D (left) and 2D (right) as a function of time.
The calculation was carried out with $N=100$. Note that there are uncertainties in the precise values of the peak heights due to the scaling estimate of the spring constants. In the above plots, we set all the numerical constants (including $(h_{p,N}^\dagger)^2$) to be unity. } \label{fig4} \end{center} \end{figure} {\it Memory kernel---.} The mobility kernel $\mu_f(t,s; \tau_{f0})$ in the intermediate time scale $\tau_{f0} \ll s < t \ll \tau_f$ is stationary, and dominated by the memory effect due to the connectivity, that is $\mu_f(t,s; \tau_{f0}) \simeq \mu_{M,f}(t-s)$ with \begin{eqnarray} &&\mu_{M,f}(t) \simeq - \sum_{p \ge 1 } \frac{k_p^{(f)}}{(\gamma_p^{(f)})^2}e^{-\frac{k_p^{(f)}}{\gamma_p^{(f)}}t}(h_{p,N}^{\dagger})^2 \nonumber \\ &= & - \sum_{p=1}^{p_f} \frac{1}{\gamma N \tau_{f0}} \left( \frac{p}{p_f}\right)^{2} \left( \frac{fa}{k_BT}\right)^{z-2- \nu^{-1}} e^{-\frac{t}{\tau_{f0}}\left( \frac{p}{p_f}\right)^{2}} (h_{p,N}^{\dagger})^2 \nonumber \\ &&-\sum_{p=p_f}^{N} \frac{1}{\gamma N \tau_0} \left( \frac{p}{N}\right)^{2\nu(z-1)-1}e^{-\frac{t}{\tau_0}\left( \frac{p}{N}\right)^{\nu z}} (h_{p,N}^{\dagger})^2 \label{mu_m_f} \end{eqnarray} In the time window $\tau_{f0} \ll t \ll \tau_f$, the second term is negligible and the summation in the first term can be approximated by the Gaussian integral. This calculation leads to our scaling estimate~(\ref{mu_f_scaling}) in Sec.~\ref{scaling_theory}. For longer times $t > \tau_f$, the memory decays exponentially, and only the center of mass ($p=0$) mode remains. It is characterized by the viscous response with the friction coefficient $\simeq N \gamma^{(f)}$. {\it Fluctuation and response relation---.} The correlation function of the velocity fluctuation is \begin{eqnarray} \langle \eta_f(t;\tau_{f0}) \eta_f(s;\tau_{f0}) \rangle= \sum_{p \ge 0} C_{p,f}(t,s;\tau_{f0}) (h_{p,N}^{\dagger})^2, \end{eqnarray} where $C_{p,f}(t,s)$ is the correlation function in the mode space calculated in Appendix B.
Comparing this with Eq.~(\ref{mu_f}), we find \begin{eqnarray} &&\langle \eta_f(t;\tau_{f0}) \eta_f(s;\tau_{f0}) \rangle - k_BT \mu_f(t,s;\tau_{f0}) \nonumber \\ &&= \left\{ \begin{array}{lll} 0 & (s<t < \tau_{f0}) \\ 0 & (s< \tau_{f0} < t) \\ \sum_{p \ge 1} C_{p,f}^{(ex)}(t,s;\tau_{f0}) (h_{p,N}^{\dagger})^2 & ( \tau_{f0}<s<t) \label{C-mu} \end{array} \right. \end{eqnarray} Here, the factor from the ``initial'' condition can be evaluated using Eq.~(\ref{X_p_equilibrium}). This leads to the ``zero'' in the first and second lines, and to the expression of the excess term in the third line in terms of the change in the spring constant \begin{eqnarray} C_{p,f}^{(ex)}(t,s;\tau_{f0}) &=& \left(\frac{k_p^{(f)}}{\gamma_p^{(f)}}\right)^2 \left( \frac{k_BT}{k_p} -\frac{k_BT}{k_p^{(f)}} \right) \\ && \times e^{-(k_p^{(f)} /\gamma_p^{(f)})(t+s-2\tau_{f0})}. \nonumber \end{eqnarray} This term breaks the time-translational invariance and decays exponentially with the characteristic rate $k_p^{(f)}/\gamma_p^{(f)}$. From this, we obtain\footnote{Beginning with the solution $X_p(t)$ of Eq.~(\ref{Eq_NC_f}) makes it easier to check Eq.~(\ref{x_xx_diff}).} \begin{eqnarray} &&\left< \Delta x(t)^2 \right> -\frac{2k_BT}{f}\left< x(t) \right> \label{x_xx_diff} \\ &=& \sum_{p \ge 1}\left( \frac{k_BT}{k_p}-\frac{k_BT}{k_p^{(f)}} \right) \left( 1-e^{-\frac{k_p^{(f)}}{\gamma_p^{(f)}}(t-\tau_{f0})} \right)^2(h_{p,N}^{\dagger})^2. \nonumber \end{eqnarray} The deviation (the right-hand side of Eq.~(\ref{x_xx_diff})) is positive due to the force-induced hardening $k_p^{(f)} > k_p$ (for an SA chain). Comparing the 2D and 3D cases, a larger deviation is expected in 2D since $k_{p,\mathrm{2D}}^{(f)} > k_{p,\mathrm{3D}}^{(f)}$. In addition, Eq.~(\ref{x_xx_diff}) indicates that the deviation grows as time evolves until $\tau_{f}$. This means that the VD-AD ratio $f \langle \Delta x^2(t) \rangle /(k_BT \langle x(t) \rangle)$ peaks around $\tau_{f}$.
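The structure of Eq.~(\ref{x_xx_diff}) is easy to see in a single-mode sketch (our own, with arbitrary numerical constants and $(h_{p,N}^{\dagger})^2=1$): hardening gives a positive deviation that grows monotonically and saturates, while softening flips its sign.

```python
import math

# Single-mode version of the right-hand side of Eq. (x_xx_diff):
# (kBT/k - kBT/kf) * (1 - exp(-(kf/gf)*(t - tau_f0)))**2, with (h)^2 = 1.
def deviation(t, kBT=1.0, k=1.0, kf=3.0, gf=1.0, tau_f0=0.0):
    return (kBT / k - kBT / kf) * (1.0 - math.exp(-(kf / gf) * (t - tau_f0))) ** 2

# Hardening (kf > k): positive, monotonically growing, saturating at kBT*(1/k - 1/kf).
# Softening (kf < k): the sign of the deviation is reversed.
```

For the default constants the deviation saturates at $k_BT(1/k-1/k^{(f)})=2/3$, illustrating why the VD-AD ratio rises above $2$ before the $p=0$ mode takes over.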
At $t > \tau_{f}$, the $p=0$ mode in the denominator overwhelms the internal modes $p\geq 1$, leading to the recovery of the relation $f\left< \Delta x^2\right>/\left< x\right>=2k_BT$. These trends are clearly seen in Fig.~\ref{fig4}, where we plot the VD-AD ratio obtained from the above theory. Our present treatment is rather crude in the sense that the switching to the strong force regime around $t \simeq \tau_{f0}$ is treated through a discrete jump in the effective parameters. In reality, it would take place more smoothly. Nevertheless, our theoretical prediction captures well the essential trend in the MD simulation results in Fig.~3. {\it Remarks---.} (i) At time $t$, the mode $p^*(t) \simeq (t/\tau_f)^{-1/2}$ has the largest contribution in Eq.~(\ref{mu_m_f}). This corresponds to the number of segments $n^*(t) \simeq N/p^*(t)$, which agrees with our scaling estimate~(\ref{n*_f}) for the tension front in the strong force regime. (ii) The modes with $p < p^*(t)$ may be unphysical for the stretching process, as such a large-scale part is not yet stretched by the force. However, we expect that these fictive modes do not alter the qualitative conclusion on the dynamics of tension propagation, just as in the weak force regime (see remark (iii) in Sec.~\ref{Rouse_model}). (iii) A rough estimate of the peak height of the VD-AD ratio can be obtained by evaluating the $p=1$ mode in Eq.~(\ref{x_xx_diff}); \begin{eqnarray} \frac{f \left<\Delta x^2(\tau_{f}) \right> }{ k_BT\left< x (\tau_{f})\right> }&\sim& 2 + f \frac{k_1^{-1}- [k_1^{(f)}]^{-1}}{f\tau_f/\gamma_0^{(f)}}(1- e^{-1})^2 (h_{p,N}^{\dagger})^2 \nonumber \\ &\sim& 2 + \left[ \left( \frac{fa N^{\nu}}{k_BT}\right)^{(2\nu-1)/\nu} -1\right] c_0. \end{eqnarray} With the factor $c_0 \sim (1- e^{-1})^2(h_{p,N}^{\dagger})^2 \sim 0.4$, we obtain a VD-AD ratio of $ \sim 5.6$ and $\sim 2.4$ in the 2D and 3D cases, respectively, with $N=100$ and $fa/k_BT = 1$.
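The peak-height estimate of remark (iii) can be evaluated directly. The sketch below (ours, not the authors' code) uses the Flory exponents $\nu=3/4$ in 2D and $\nu\simeq 3/5$ in 3D, and takes $c_0=(1-e^{-1})^2$ with $(h_{p,N}^{\dagger})^2$ set to unity, as in Fig.~\ref{fig4}.

```python
import math

# Peak-height estimate of the VD-AD ratio from remark (iii):
#   2 + [ (fa * N^nu / kBT)^{(2nu-1)/nu} - 1 ] * c0
# with c0 = (1 - 1/e)^2 and (h_{p,N}^+)^2 set to unity.
def vdad_peak(N, fa_over_kBT, nu):
    c0 = (1.0 - math.exp(-1.0)) ** 2          # ~0.4
    return 2.0 + ((fa_over_kBT * N ** nu) ** ((2 * nu - 1) / nu) - 1.0) * c0

# Flory exponents nu = 3/4 (2D SA) and nu ~ 3/5 (3D SA), with N = 100, fa/kBT = 1:
peak_2d = vdad_peak(100, 1.0, 3 / 4)   # ~5.6
peak_3d = vdad_peak(100, 1.0, 3 / 5)   # ~2.6; close to the quoted ~2.4 given the crude c0
```

For an ideal chain ($\nu=1/2$) the exponent vanishes and the estimate collapses to the equilibrium value $2$, in line with remark (vi) below: without force-induced hardening there is no excess.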
Despite the indefiniteness of these numerical values (see the caption of Fig.~\ref{fig4}), this estimate is useful for seeing the qualitative dependence on the force and the chain length. (iv) The tension propagation time $\tau_f$ fluctuates in each realization of the stretching process, and Eq.~(20) is regarded as the average $\langle \tau_f \rangle = \gamma_1^{(f)}/k_1^{(f)}$. In the strong force regime, the dominant source of the stochasticity comes not from the noise $Z_p(t)$ but from the initial conformation of the polymer along which the tension propagates. In terms of the mode analysis, the fluctuation in $\tau_f$ can be evaluated in the following way. For clarity of the argument, suppose that the force is strong, $f \simeq k_BT/a$, so that the induction time is very short, $\tau_{f0} \simeq \tau_0$. Neglecting the noise effect, the displacements of the center of mass ($p=0$) mode and of the slowest relaxational ($p=1$) mode are, respectively, $X_0(t)-X_0(0) =F_0 t/\gamma_0^{(f)}$ and $X_1(t)-X_1(0) =(F_1/k_1^{(f)}-X_1(0))(1-e^{-k_1^{(f)}t/\gamma_1^{(f)}})$. Comparing these, one finds that at $t= \langle \tau_f \rangle $, the displacement in the $p=0$ mode reaches the final displacement in the $p=1$ mode on average, i.e., $\langle \tau_f \rangle \simeq \gamma_0^{(f)} F_1 /(F_0 k_1^{(f)}) \simeq \gamma_1^{(f)}/k_1^{(f)}$. Taking account of the fluctuation in the latter due to the initial distribution, one obtains the variance of the propagation time as $\langle(\Delta \tau_f)^2 \rangle \simeq ( \gamma_0^{(f)}/F_0 )^2 \langle X_1(0)^2 \rangle$. Evaluating the variance of the initial distribution using Eq.~(\ref{X_p_equilibrium}), we obtain \begin{eqnarray} \sqrt{\langle(\Delta \tau_f)^2 \rangle} \simeq \tau_0 N^{1+\nu}\left( \frac{fa}{k_BT}\right)^{1-z+(1/\nu)}. \end{eqnarray} The same result has been obtained in Ref.~\cite{PRE_Saito_2012} using a scaling argument. (v) Events on short length and time scales are described by the weak force regime.
This range grows as the force decreases; the characteristic mode number and time scale change from $p_f \simeq N$ and $\tau_{f0} \simeq \tau_0$ at $f\simeq k_BT/a$ to $p_f \simeq 1$ and $\tau_{f0} \simeq \tau$ at $f \simeq k_BT/R$. (vi) Our theory indicates that it is the nonlinearity in the elastic response (the force-dependent spring constant), and not that in the frictional response, that is responsible for the non-trivial VD-AD ratio in the stretching process. The sign of Eq.~(\ref{x_xx_diff}) depends on whether the system exhibits stiffening or softening under the force. \section{Concluding Remarks} It has long been known that a tagged segment in a polymer exhibits sub-diffusion in the intermediate time scale, and its consequences range from the dynamical function of biopolymers to the rheology of polymer solutions. In this paper, we formulated the problem in terms of mobility, i.e., the dynamical response of the segment after the application of an external force, and introduced the weak and strong force regimes for the anomalous dynamics. In the weak force (equilibrium) regime, the motion in the intermediate time scale is dominated by the memory effect, leading to a conventional fBm. We performed a lucid and exact analysis of the Rouse model, which provides a microscopic basis for the fBm. The deduced memory kernel fully agrees with a simple scaling argument based on the physical picture of tension transmission. Together with the approximation scheme to include the SA and HIs, we believe that the present approach provides a comprehensive picture of the anomalous dynamics of the tagged segment in the weak force regime. In the strong force (driven) regime, the motion in the intermediate time scale is again dominated by the memory effect arising from the tension transmission, but now the tension dynamics is accompanied by a large conformational distortion and is qualitatively different from that in the weak force regime.
We discussed that the memory kernel generally becomes force dependent, from which one can derive the nonlinear dynamical scaling for the anomalous drift. Unlike in the weak force regime, the fluctuation and the response do not necessarily satisfy a simple proportionality relation, owing to the noise from the ``initial condition'' at $t=\tau_{f0}$, after which the dynamics enters the strong force regime. This extra noise is non-stationary, causing the fluctuating dynamics to deviate from the fBm. On the basis of the approximate mode analysis, we proposed a formula relating the fluctuation and the response in the driven process, which is in rather good agreement with results obtained from MD simulations. A recent study has made use of the VD-AD ratio of a labeled locus of the bacterial chromosome during the segregation process to estimate its driving force~\cite{BJ_Lampo_2015}. We feel that our present study could be a useful guide for such an analysis. \section*{Acknowledgement} This work was supported by KAKENHI [Grant No.~26103525, ``Fluctuation and Structure'', Grant No.~24340100, Grant-in-Aid for Scientific Research (B)], Ministry of Education, Culture, Sports, Science and Technology (MEXT), Japan and the JSPS Core-to-Core Program (Nonequilibrium Dynamics of Soft Matter and Information). \section*{Appendix A} \label{deterministic} We review previous work on the deterministic (average) dynamics. {\it Coarse-grained description: a chain of blobs ---.} To discuss the dynamics on scales larger than the blob size, we envision the stretched polymer as a chain of blobs. The blobs are labeled with the index ${\tilde n} = 0,1,2, \cdots$ from the free end at the rear. The ${\tilde n}$-th blob, comprising $g_{{\tilde n}}$ segments, has the spatial size $\xi_{{\tilde n}} \simeq a g_{{\tilde n}}^{\nu}$.
By taking the $x$ axis as the pulling direction, the position ${\tilde x}_{{\tilde n}}$ of the center of the ${\tilde n}$-th blob is \begin{eqnarray} {\tilde x}_{{\tilde n}}= {\tilde x}_{{\tilde n}=0}+ \sum_{{\tilde n}'=0}^{{\tilde n}-1}\xi_{{\tilde n}'}. \label{Eq.1} \end{eqnarray} The dynamics of the chain of blobs can be analyzed by noting that the spring and friction constants ${\tilde k}_{{\tilde n}}$, ${\tilde \gamma}_{{\tilde n}}$ of the ${\tilde n}$-th blob are given by \begin{eqnarray} {\tilde k}_{{\tilde n}} &\simeq& \frac{k_BT}{\xi_{{\tilde n}}^2} \label{K_n_blob}\\ {\tilde \gamma}_{{\tilde n}} &\simeq& \gamma \left( \frac{\xi_{{\tilde n}}}{a}\right)^{z-2}, \end{eqnarray} which lead to the force balance equation \begin{eqnarray} {\tilde f}_{{\tilde n}}^{(el)} + {\tilde f}_{{\tilde n}}^{(vis)}=0 \label{f_balance} \end{eqnarray} with the elastic restoring force \begin{eqnarray} {\tilde f}_{{\tilde n}}^{(el)}&=&{\tilde k}_{{\tilde n}}({\tilde x}_{{\tilde n}+1} - {\tilde x}_{{\tilde n}}) - {\tilde k}_{{\tilde n}-1}({\tilde x}_{{\tilde n}} - {\tilde x}_{{\tilde n}-1}) \nonumber \\ &\rightarrow& \frac{\partial}{\partial {\tilde n}}\left[ {\tilde k}_{{\tilde n}} \frac{\partial {\tilde x}_{{\tilde n}}}{\partial {\tilde n}}\right] \qquad ({\rm continuum \ limit}) \end{eqnarray} and the viscous frictional force \begin{eqnarray} {\tilde f}_{{\tilde n}}^{(vis)}= - {\tilde \gamma}_{{\tilde n}} \frac{\partial {\tilde x}_{{\tilde n}}}{\partial t}.
\end{eqnarray} Equation~(\ref{Eq.1}) yields the expression for the local deformation \begin{eqnarray} \frac{ \partial {\tilde x}_{{\tilde n}}}{\partial {\tilde n}} = \xi_{{\tilde n}}. \label{local_deformation} \end{eqnarray} One can then write Eq.~(\ref{f_balance}) as \begin{eqnarray} {\tilde k}_{{\tilde n}} \frac{\partial^2 {\tilde x}_{{\tilde n}}}{\partial {\tilde n}^2} + {\tilde \gamma}_{{\tilde n}} \frac{\partial {\tilde x}_{\tilde n}}{\partial t} = 0. \end{eqnarray} This line of argument was used to discuss the normal modes of a tethered chain stretched by flow~\cite{Macromolecules_Marciano_Brochard_1995}. {\it Mapping to the p-Laplacian equation ---} One may extrapolate the above force estimation at the blob scale to the segment scale in such a way that the elastic and viscous frictional forces acting on the $n$-th segment are given by $f_n^{(el)}={\tilde f}_{{\tilde n}}^{(el)}/g_{{\tilde n}} $ and $f_n^{(vis)}={\tilde f}_{{\tilde n}}^{(vis)}/g_{{\tilde n}}$. The label index of blobs and that of segments are related as $n = \int_1^{{\tilde n}} g_{{\tilde n}'} d{\tilde n}'$. We write the respective forces as \begin{eqnarray} f_n^{(el)} = k_n \frac{\partial^2x_n}{\partial n^2} \\ f_n^{(vis)} = -\gamma_n \frac{ \partial x_n}{\partial t}. \end{eqnarray} The spring and friction constants $k_n$, $\gamma_n$ in this fine-grained frame can be estimated in the following way. The relation $\partial n = g_{{\tilde n}} \partial {\tilde n}$ between the internal coordinates before and after the fine-graining indicates the transformation rule of the local chain deformation \begin{eqnarray} \frac{\partial {\tilde x}_{{\tilde n}}}{\partial {\tilde n}} = g_{{\tilde n}}\frac{\partial x_n}{\partial n}. \label{transform_n} \end{eqnarray} This, together with Eq.~(\ref{local_deformation}), implies \begin{eqnarray} a \left( \frac{\xi_{\tilde n}}{a}\right)^{(\nu-1)/\nu} \simeq \frac{\partial x_n}{\partial n}.
\end{eqnarray} These considerations lead to~\footnote{The second derivative relation also follows as $\partial^2 {\tilde x}_{{\tilde n}}/\partial {\tilde n}^2 = C g_{{\tilde n}}^2 \ \partial^2 x_n/\partial n^2$ with a negative coefficient $C=(\nu/(\nu-1)) <0$.} \begin{eqnarray} k_n \simeq k \left( \frac{\xi_{{\tilde n}}}{a}\right)^{(1-2\nu)/\nu} \simeq k\left(\frac{\partial x_n}{\partial (na)}\right)^{(2\nu -1)/(1-\nu)} \label{k_n_s}\\ \gamma_n \simeq \gamma \left( \frac{\xi_{{\tilde n}}}{a}\right)^{z-2-(1/\nu)} \simeq \gamma \left( \frac{\partial x_n}{\partial (na)}\right)^{[1-(z-2)\nu] /(1-\nu)}. \label{gamma_n_s} \end{eqnarray} The force balance relation $f_n^{(el)}+f_n^{(vis)}=0$ can then be cast into the so-called {\it p}-Laplacian diffusion equation given in Eq.~(\ref{p_Lap})~\cite{EPL_Brochard_1994,EPL_Sebastian_2011}. Again, useful insights can be deduced from the mono-block approximation~\cite{Macromolecules_Marciano_Brochard_1995}, where the blob sizes are assumed to be uniform with $\xi_{{\tilde n}} \simeq k_BT/f$ (see Sec.~\ref{scaling_theory}). In this approximation, the spring and friction constants, Eqs.~(\ref{k_n_s}) and (\ref{gamma_n_s}), depend on $f$ but not on $n$: \begin{eqnarray} k_n = k^{(f)} \simeq k \left(\frac{fa}{k_BT} \right)^{(2\nu-1)/\nu} \label{k_f}\\ \gamma_n = \gamma^{(f)} \simeq \gamma \left(\frac{fa}{k_BT} \right)^{2-z+(1/\nu)}. \label{gamma_f} \end{eqnarray} Therefore, Eq.~(\ref{p_Lap}) becomes a simple linear diffusion equation \begin{eqnarray} \frac{\partial x_n}{\partial t} = D_0 \left( \frac{fa}{k_BT}\right)^{z-(2/\nu)}\frac{\partial^2 x_n}{\partial (na)^2} \label{p_Lap_f} \end{eqnarray} with a force-dependent diffusion coefficient. One can check that the self-similar scaling solution of Eq.~(\ref{p_Lap_f}) is consistent with the average drift of the tagged segment in the strong force regime discussed in Sec.~\ref{scaling_theory}. Assume that at time $s$, the tension has been transmitted up to the $m(s)$-th segment from the pulled end.
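The exponents in Eqs.~(\ref{k_f}) and~(\ref{gamma_f}) combine, via $D \sim k^{(f)}/\gamma^{(f)}$, into the exponent $z-2/\nu$ of the diffusion coefficient in Eq.~(\ref{p_Lap_f}). This purely algebraic consistency can be checked as follows (a sketch of ours; the test values of $\nu$ and $z$, including the free-draining case $z=2+1/\nu$, are illustrative).

```python
# Consistency of exponents in the force-dependent diffusion coefficient:
# k^{(f)} ~ (fa/kBT)^{(2nu-1)/nu},  gamma^{(f)} ~ (fa/kBT)^{2-z+1/nu},
# so D ~ k^{(f)}/gamma^{(f)} should scale as (fa/kBT)^{z - 2/nu}.
def diffusion_exponent(nu, z):
    return (2 * nu - 1) / nu - (2 - z + 1 / nu)

# Compare with z - 2/nu for a few (nu, z) pairs, e.g. free-draining z = 2 + 1/nu:
checks = [(0.75, 2 + 1 / 0.75), (0.6, 3.0), (0.5, 2.5)]
errors = [abs(diffusion_exponent(nu, z) - (z - 2 / nu)) for nu, z in checks]
```

All differences vanish to machine precision, confirming that the mono-block estimates of $k^{(f)}$ and $\gamma^{(f)}$ are term-by-term consistent with the linear diffusion equation.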
Requiring Eq.~(\ref{p_Lap_f}) to be invariant under the scale transformation $t \rightarrow s t$ and $n \rightarrow n^*(s) n$, one obtains the dynamics of the tension front $n^*(s)$, which is given by Eq.~(\ref{n*_f}). Note that the above stretching process can also be analyzed by a different but related nonlinear diffusion equation, which describes the time evolution of the segment line density field~\cite{PRE_Sakaue_2012, PRE_Rowghanian_Grosberg_2012, Macromolecules_Paturej_2012, PRE_Saito_2013}. \section*{Appendix B} \subsection*{Fluctuation-response relation in mode space} We calculate the response function $\chi_p(t,s) \equiv \delta \langle {\dot X}_{p}(t) \rangle /\delta F_{p}(s)$ and the correlation function $C_p(t,s) \equiv \langle {\Delta \dot X}_{p}(t) {\Delta \dot X}_{p}(s)\rangle$ without assuming $t_0 \rightarrow -\infty$. We first consider the case of unchanged spring and friction constants, which applies to the Rouse model and, in the weak force regime, to the more general case with SA and HIs. \subsubsection*{Weak force regime} {\it Response function---} Taking the time derivative of Eq.~(\ref{Z_p_solution}) and the ensemble average over the noise sequence $Z_p(t)$, the response function is obtained as \begin{eqnarray} \chi_p(t,s) = -\frac{k_p}{\gamma_p^2}e^{-(k_p /\gamma_p) (t-s)} + \frac{2}{\gamma_p}\delta(t-s).
\label{chi_p} \end{eqnarray} {\it Correlation function---} The time correlation of $\Delta {\dot X}_p(t) \equiv {\dot X}_p(t) - \langle {\dot X}_p(t) \rangle $ can be decomposed as \begin{eqnarray} C_p(t,s; t_0) = C_{p}^{(st)}(t,s) + C_{p}^{(ex)}(t,s; t_0), \label{C_p_total} \end{eqnarray} where the first term is the stationary part, invariant under time translation, i.e., $C_{p}^{(st)}(t,s)=C_{p}^{(st)}(t-s,0)$; \begin{eqnarray} C_{p}^{(st)}(t,s) = -\frac{k_p}{\gamma_p^2} k_BT e^{-(k_p /\gamma_p)(t-s)} + \frac{2}{\gamma_p} k_BT \delta(t-s) \label{C_p_st} \end{eqnarray} and the second term is the excess due to the non-stationarity of the process; \begin{eqnarray} C_{p}^{(ex)}(t,s; t_0) = &-&\frac{k_p}{\gamma_p^2} k_BT e^{-(k_p /\gamma_p)(t+s-2t_0)} \nonumber \\ &+&\frac{k_p^2}{\gamma_p^2} \langle X_p^2(t_0)\rangle e^{-(k_p /\gamma_p)(t+s-2t_0)}, \label{C_p_ex} \end{eqnarray} where we add an auxiliary argument $t_0$ to indicate the initial time. One can verify the FDT~(\ref{FDT_p}), provided that the process is stationary, i.e., $t_0 \rightarrow -\infty$. Note that the excess part~(\ref{C_p_ex}) vanishes identically when the equipartition condition \begin{eqnarray} \langle X_p^2(t_0) \rangle = \frac{k_BT}{k_p} \label{X_p_equilibrium} \end{eqnarray} holds for each of the $p \ge 1$ modes at $t=t_0$, where the average is taken over the probability distribution of $X_p$ at $t=t_0$. \subsubsection*{Strong force regime} When the polymer with SA and/or HIs is stretched strongly, the calculation becomes slightly more complicated due to the time dependence of the spring and friction constants.
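The cancellation in Eq.~(\ref{C_p_ex}) under the equipartition condition~(\ref{X_p_equilibrium}) is immediate to verify numerically; the following is our own illustration with arbitrary constants.

```python
import math

# Excess part of the velocity correlation, Eq. (C_p_ex):
#   C_ex = [ -(k/g^2)*kBT + (k/g)^2 * <X^2(t0)> ] * exp(-(k/g)*(t + s - 2*t0))
def C_ex(t, s, t0, k, g, kBT, X2_t0):
    return (-(k / g**2) * kBT + (k / g) ** 2 * X2_t0) \
        * math.exp(-(k / g) * (t + s - 2 * t0))

k, g, kBT = 1.5, 0.7, 1.0
# Equipartition <X^2(t0)> = kBT/k kills the excess term ...
eq_val = C_ex(2.0, 1.0, 0.0, k, g, kBT, kBT / k)
# ... while any other initial variance leaves a non-stationary remainder
# depending on t + s rather than t - s.
noneq_val = C_ex(2.0, 1.0, 0.0, k, g, kBT, 2.0 * kBT / k)
```

This is the mode-space origin of the FDT violation in the driven regime: the ``initial'' variance at $\tau_{f0}$ no longer matches the equipartition value set by the new spring constant.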
{\it Response function---} From the solution of Eq.~(\ref{Eq_NC_f}), the response function is obtained as \begin{eqnarray} &&\chi_{p,f}(t,s; \tau_{f0}) \nonumber \\ &&= \left\{ \begin{array}{lll} \chi_p(t,s) & (s < t < \tau_{f0}) \\ -\frac{k_p^{(f)}}{\gamma_p \gamma_p^{(f)}} e^{-(k_p/\gamma_p)(\tau_{f0}-s) -(k_p^{(f)}/\gamma_p^{(f)})(t-\tau_{f0})} & (s<\tau_{f0} <t) \\ -\frac{k_p^{(f)}}{(\gamma_p^{(f)})^2}e^{-(k_p^{(f)} /\gamma_p^{(f)}) (t-s)} + \frac{2}{\gamma_p^{(f)}}\delta(t-s) & (\tau_{f0}<s <t) . \end{array} \right. \nonumber \\ \end{eqnarray} In the case $s<t<\tau_{f0}$, the response function is the same as that in the weak force regime (Eq.~(\ref{chi_p})). In the case $\tau_{f0}<s<t$, it again takes the same functional form as Eq.~(\ref{chi_p}) with the replacement $(\gamma_p, k_p) \rightarrow (\gamma_p^{(f)}, k_p^{(f)})$. Only in the case $s<\tau_{f0}<t$ does the stationarity of the response function break down, and the auxiliary argument $\tau_{f0}$ appears. {\it Correlation function---} From the solution of Eq.~(\ref{Eq_NC_f}), the fluctuation in the velocity $\Delta {\dot X}_{p}(t) \equiv {\dot X}_p(t) - \langle {\dot X}_p(t) \rangle $ is obtained as \begin{eqnarray} \Delta {\dot X}_p(t) = \left\{ \begin{array}{ll} \int_0^t ds \chi_p(t,s) Z_p(s) - X_p(0) \frac{k_p}{\gamma_p}e^{-(k_p/\gamma_p)t} & (\tau_{f0} > t) \\ \int_{\tau_{f0}}^t ds \chi_{p,f}^{(\tau_{f0}<s<t)}(t,s;\tau_{f0}) Z_p^{(f)}(s) \nonumber \\ \qquad - \Delta X_p(\tau_{f0}) \frac{k_p^{(f)}}{\gamma_p^{(f)}}e^{-(k_p^{(f)}/\gamma_p^{(f)})(t-\tau_{f0})} & (t > \tau_{f0}) . \end{array} \right.
\\ \end{eqnarray} From this, one can calculate the correlation of the velocity fluctuation $C_{p,f}(t,s; \tau_{f0}) = \langle \Delta {\dot X}_p(t) \Delta {\dot X}_p(s) \rangle$ and obtain the following: (i) for $s<t<\tau_{f0}$, the weak force regime applies, so it is given by Eqs.~(\ref{C_p_total})--(\ref{C_p_ex}); (ii) for $\tau_{f0}<s<t$, it can again be decomposed as \begin{eqnarray} C_{p,f}(t,s; \tau_{f0}) = C_{p,f}^{(st)}(t,s) + C_{p,f}^{(ex)}(t,s; \tau_{f0}), \label{C_pf_total} \end{eqnarray} where $C_{p,f}^{(st)}(t,s)$ and $C_{p,f}^{(ex)}(t,s; \tau_{f0})$ take the same functional forms as those in the weak force regime (Eqs.~(\ref{C_p_st}) and~(\ref{C_p_ex}), respectively) with the replacement $(\gamma_p, k_p,t_0) \rightarrow (\gamma_p^{(f)}, k_p^{(f)},\tau_{f0})$; (iii) for $s<\tau_{f0}<t$, it becomes \begin{eqnarray} &&C_{p,f}(t,s; \tau_{f0}) = -\frac{k_p^{(f)} k_BT}{\gamma_p \gamma_p^{(f)}} e^{-(k_p/\gamma_p)(\tau_{f0}-s) -(k_p^{(f)}/\gamma_p^{(f)})(t-\tau_{f0})} \nonumber \\ &-& \frac{k_p^{(f)} }{\gamma_p \gamma_p^{(f)}} \left(k_BT - k_p \langle X_p(0)^2\rangle \right)e^{-(k_p/\gamma_p)(\tau_{f0}+s) -(k_p^{(f)}/\gamma_p^{(f)})(t-\tau_{f0})}. \nonumber \\ \end{eqnarray}
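As a consistency check (our own numerical sketch with arbitrary constants): when the constants do not actually change at $\tau_{f0}$ ($k_p^{(f)}=k_p$, $\gamma_p^{(f)}=\gamma_p$) and the initial variance satisfies the equipartition condition~(\ref{X_p_equilibrium}), case (iii) must collapse onto the stationary weak-force correlation~(\ref{C_p_st}), apart from the delta-function piece.

```python
import math

def C_cross(t, s, tau0, k, g, kf, gf, kBT, X2_0):
    # Case (iii), s < tau0 < t, of the mode correlation above.
    t1 = -(kf * kBT / (g * gf)) \
        * math.exp(-(k / g) * (tau0 - s) - (kf / gf) * (t - tau0))
    t2 = -(kf / (g * gf)) * (kBT - k * X2_0) \
        * math.exp(-(k / g) * (tau0 + s) - (kf / gf) * (t - tau0))
    return t1 + t2

def C_st(t, s, k, g, kBT):
    # Stationary part, Eq. (C_p_st), with the delta-function piece omitted.
    return -(k / g**2) * kBT * math.exp(-(k / g) * (t - s))

k, g, kBT = 1.2, 0.8, 1.0
# Unchanged constants + equipartition: case (iii) reduces to the stationary form.
a_val = C_cross(3.0, 1.0, 2.0, k, g, k, g, kBT, kBT / k)
b_val = C_st(3.0, 1.0, k, g, kBT)
```

The second term vanishes under equipartition, and the two exponential factors in the first term combine into the stationary dependence on $t-s$ alone.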
\section{Introduction} \label{sec.intro} Let $a$ and $b$ be relatively prime positive integers and let $\DD_{a,b}$ be the set of $(a,b)$-Dyck paths, lattice paths $\P$ from $(0,0)$ to $(b,a)$ staying above the line $y=\frac{a}{b}x$. These paths are often called {\em rational Dyck paths} and they generalize the classical and well-studied {\em Dyck paths}. \renewcommand*{\thefootnote}{\fnsymbol{footnote}} We study a remarkable function $\zeta$ on rational Dyck paths conjectured to be an automorphism\footnote{After this article was accepted for publication, we learned that Nathan Williams proved that the zeta map and its sweep map brethren are indeed bijective using other methods. \cite{Williams}}, which has received considerable attention lately; this ``zeta map'' generalizes the map on standard Dyck paths discovered by Haiman in the study of diagonal harmonics and $q,t$-Catalan numbers~\cite{haglund2008q}. Combinatorial definitions of $q,t$-statistics for classical Dyck paths were famously difficult to find, but were nearly simultaneously discovered by Haglund and Haiman. Interestingly, they discovered two \emph{different} pairs of statistics: Haiman found the $\ensuremath{\mathsf{area}}$ and $\ensuremath{\mathsf{dinv}}$ statistics shortly after Haglund discovered the $\ensuremath{\mathsf{bounce}}$ and $\ensuremath{\mathsf{area}}$ statistics. The zeta map was then uncovered, which satisfies $\ensuremath{\mathsf{bounce}}(\zeta(\P))=\ensuremath{\mathsf{area}}(\P)$ and $\ensuremath{\mathsf{area}}(\zeta(\P))=\ensuremath{\mathsf{dinv}}(\P)$. Many details about the zeta map have been gathered and unified in a comprehensive article by Armstrong, Loehr, and Warrington \cite{ALW-sweep}, including progress on proving its bijectivity in certain cases such as $(a,am\pm 1)$-Dyck paths \cite{Loehr,GMII} (which are associated with the Fuss--Catalan numbers). The zeta map was shown to be a bijection in these special cases by way of a ``bounce path'' by which the inverse of zeta could be computed.
However, constructing such a bounce path for the general $(a,b)$ case remains elusive. Armstrong, Loehr, and Warrington showed that there is a much larger family of sweep maps (of which the zeta map is a special case), which extensive computational exploration suggests are also bijective. A construct of theirs upon which we have relied heavily is the notion of the \emph{levels} of a lattice path. Recent progress related to rational Dyck paths has been made in the case $a\leq3$ by Gorsky and Mazin and by Kaliszewski and Li \cite{GMII, KL14}, and in the case $a=4$ by Lee, Li, and Loehr \cite{LLL14}, in connection with the $q,t$-symmetry of the rational Catalan numbers. A type $C$ analog of the zeta map has been introduced by Sulzgruber and Thiel \cite{ST14}. Rational Dyck paths are also intimately entwined with the study of rational parking functions and Macdonald polynomials, with recent work by Gorsky, Mazin, and Vazirani \cite{GMV} and, when $a$ and $b$ are not relatively prime, by Bergeron, Garsia, Levin, and Xin~\cite{BGLX}. Our goal is to explore the following conjecture: \begin{conjecture}[\cite{ALW-sweep,GMI}] \label{conj.main} Let $a$ and $b$ be relatively prime positive integers. The {\em zeta map} $\zeta:\DD_{a,b}\rightarrow\DD_{a,b}$ is a bijection. \end{conjecture} Our perspective is that there are in fact two maps, the zeta map and the eta map, which jointly contain enough information to recover the original path. In Section~\ref{sec:zetaeta}, we provide a straightforward algorithm for recovering $\P$ from the combined data of $Q=\zeta(\P)$ and $R=\eta(\P)$. What we find interesting is that the information contained solely in $\zeta(\P)$ does not seem to be enough to reconstruct $\P$ directly. Our argument does not give an explicit construction of $\zeta^{-1}(Q)$, nor do we construct a bounce path.
The zeta and eta maps appeared previously in the work of Gorsky and Mazin (see $G_{n,m}$ and~$G_{m,n}$ in \cite{GMII}) and in the work of Armstrong, Loehr, and Warrington (varying the direction of the sweep map in \cite{ALW-sweep}); however, they were never used simultaneously as we do in this paper. The eta map is based on a natural notion of conjugation on rational Dyck paths, explored in Section~\ref{sec:conjugate}, that arises from Anderson's bijection \cite{And02} between $(a,b)$-Dyck paths and simultaneous $(a,b)$-core partitions, which in turn are related to many more combinatorial interpretations. (See \cite{AHJ14} for additional background.) One can define the map $\eta$ by $\eta(\P)=\zeta(\P^c)$; in most cases $\zeta(\P)\neq \eta(\P)$. Section~\ref{sec:zeta_map} is devoted to presenting algorithms for calculating the zeta map and the eta map in multiple fashions. In particular, we present two new methods involving lasers and interval intersections. Meanwhile, $\zeta$ and $\eta$ combine to induce a new {\em area-preserving} involution $\chi$ on the set of Dyck paths, defined in Section~\ref{sec:perp} by \[\chi(Q):=\eta(\zeta^{-1}(Q))=\zeta(\zeta^{-1}(Q)^c).\] In Section~\ref{sec:square}, we give a new proof that in the classical Catalan case, this {\em conjugate-area map} $\chi$ is the map that reverses the Dyck path. Applying our inverse algorithm presents a new construction of the inverse of the zeta map on a Dyck path. However, we have no explicit description of $\chi(Q)$ from $Q$ in the general $(a,b)$-case. Indeed, a concrete construction of $\chi(Q)$ from $Q$ could be used to construct an explicit inverse for the zeta map. In Section~\ref{sec:inductive_zeta_inverse}, we show that when a rational Dyck path $Q$ visits the lattice point having level equal to 1, $\zeta^{-1}(Q)$ has a nice decomposition, as does its image under the conjugate-area map $\chi$.
These observations allow us to explicitly find $\chi$ (and therefore $\zeta^{-1}$) of any path that has valleys exactly on levels equal to $\{1,\hdots,k\}$ for $k<a$ in Theorem~\ref{thm:inductive_area}. We have also constructed $\chi(Q)$ and $\zeta^{-1}(Q)$ for paths that bound left-adjusted or up-adjusted partitions in Proposition~\ref{prop:justified}. Section~\ref{sec:9} investigates the poset of rational Dyck paths ordered by when one path is weakly below the other, motivating a new statistic $\delta(P)$ that appears to be fruitful for recursively computing $\zeta^{-1}$, based on evidence gathered by computer learning algorithms. Indeed, in the remainder of Section~\ref{sec:9}, we use $\delta(P)$ to construct the initial part of a rational bounce path and to give a new algorithm that computes $\zeta^{-1}$ for $(a,am+1)$-Dyck paths. \medskip One of the primary motivations for our research was the study of conjectured statistics for the $q,t$-enumeration of $(a,b)$-Dyck paths. Section~\ref{sec.notation} sets the stage by introducing key combinatorial concepts and statistics associated to $(a,b)$-Dyck paths. In Section~\ref{sec:skew_length} we investigate the \emph{skew length} statistic $\sl(\P)$, originally defined in the context of $(a,b)$-cores in \cite{AHJ14}. The original definition of skew length seems to depend on the ordering of $a$ and $b$; we show that skew length is in fact independent of this choice. The main tools we develop involve a row length filling of the boxes under the $(a,b)$-Dyck path $\P$ and above the main diagonal, along with the idea of skew inversions and flip skew inversions. Section~\ref{sec:conjugate} shows that \emph{skew length} is preserved under conjugation. \section{Background and Notation} \label{sec.notation} \begin{definition} An {\em $(a,b)$-lattice path} $\P$ is a lattice path in $\mathbb{Z}^2$ consisting of north and east steps starting from the origin and ending at the point $(b,a)$.
We call $\P$ an {\em $(a,b)$-Dyck path} if $\P$ remains (weakly) above the diagonal line connecting the origin to $(b,a)$. Equivalently, the lattice points $(x,y)$ along $\P$ satisfy $ax\leq by$. We draw $(a,b)$-Dyck paths in an $a\times b$ grid, where the lower left corner is the origin. We denote the full collection of $(a,b)$-Dyck paths by $\DD_{a,b}$, or simply $\DD$ if there is no confusion about the values of $a$ and $b$. \end{definition} We use the English notation for Young diagrams, drawing the largest row at the top. The \emph{hook length} of a box $B$ in the Young diagram of a partition is the number of boxes in the \emph{hook} of boxes directly below or directly to the right of $B$, including the box $B$ itself. An \emph{$a$-core partition} (or simply $a$-core) is a partition for which its Young diagram has no boxes with hook length equal to~$a$. Similarly, a \emph{simultaneous $(a,b)$-core partition} (or $(a,b)$-core for short) has no hooks equal to $a$ or $b$. Anderson proved that when $a$ and $b$ are relatively prime there are finitely many $(a,b)$-cores \cite{And02} by finding a bijection with the set of $(a,b)$-Dyck paths; these are counted by the formula \[ \frac{1}{a+b}\binom{a+b}{a}. \] This formula seems to have been discovered at various times; the earliest reference we know of is~\cite{DM47} in 1947. In 1954, Bizley considered the general case of rectangular Dyck paths of which this formula is a special case~\cite{Biz54}. Bizley's counting method starts from the full set of lattice paths from $(0,0)$ to $(b,a)$, and considers the orbit of the cyclic group $C_{a+b}$ acting by \emph{cyclic shifts} on paths. In the case where $a$ and $b$ are relatively prime, there is a unique Dyck path in each such orbit. \begin{example} Let $N$ and $E$ represent a north step and an east step, respectively. Throughout this paper, we will use as our running example the $(5,8)$-Dyck path \[\P=NNNENEEENEEEE,\] shown in Figure~\ref{fig:example58}. 
\end{example} \begin{figure} \begin{center} \includegraphics[width=0.8\textwidth]{example58.pdf} \end{center} \caption{(Left) A lattice path $\P$ when $a=5$ and $b=8$. The hook filling is given by the numbers in the center of the boxes. The boxes above the path show that the partition bounded by $\P$ is $(4,1)$. \newline(Right) The levels of the lattice points along the path.} \label{fig:example58} \end{figure} \newpage\subsection{Dictionary of notation}\ We keep track of numerous bits of data associated to an $(a,b)$-Dyck path $\P$. \begin{enumerate} \item {\bf General constructions:} \begin{itemize} \item The \emph{hook filling} of the boxes in the square lattice is obtained by filling the box with lower-right lattice point $(b,0)$ with the number $-ab$ and increasing by $a$ for each box to the west and by $b$ for each box to the north. A box is above the main diagonal if and only if the corresponding hook is positive. (See Figure~\ref{fig:example58}.) \begin{figure} \begin{center} \includegraphics[width=0.7\textwidth]{AndersonsBijection.pdf} \end{center} \caption{Anderson's bijection gives a correspondence between $(a,b)$-Dyck paths and $(a,b)$-core partitions. Corresponding to $\P=NNNENEEENEEEE$ is the $(5,8)$-core $(6,4,3,2,2,1,1,1,1)$.} \label{fig:AndersonsBijection} \end{figure} \item The \emph{positive hooks} of $\P$ are the numbers in the hook filling below the path but greater than zero. (Elsewhere these have been called {\em beta numbers} or {\em bead numbers}.) \item We denote by $\mathfrak{c}(\P)$ the $(a,b)$-core corresponding to $\P$ under Anderson's bijection. The hook lengths of the boxes in the first column of $\mathfrak{c}(\P)$, its {\em leading hooks}, are precisely the positive hooks of $\P$. An example of Anderson's bijection is illustrated in Figure~\ref{fig:AndersonsBijection}. \item The \emph{row length filling} of $\P$ consists of numbers placed in the boxes under $\P$.
They correspond to the number of boxes in the row of $\mathfrak{c}(\P)$ with the given hook. This will be developed in Section~\ref{sec:length_filling}. (See Figure~\ref{fig:rowLength}.) \item The \emph{partition bounded by} $\P$ is the partition whose Young diagram is the collection of boxes above the path $\P$. \end{itemize} \item {\bf Combinatorial statistics:} \begin{itemize} \item The \emph{area} of $\P$, denoted $\ensuremath{\mathsf{area}}(\P)$, is the number of positive hooks of $\P$. Equivalently, this is the number of rows in $\mathfrak{c}(\P)$. \item The \emph{rank} of $\P$, denoted $\operatorname{rk}(\P)$, is the number of rows in the partition bounded by $\P$. \item The \emph{skew length} of $\P$, denoted $\sl(\P)$, is a statistic that we discuss in detail in Section~\ref{sec:skew_length}. \end{itemize} \item {\bf Sets and sequences of numbers associated to $\P$:} \begin{itemize} \item The \emph{levels} of $\P$ are labels associated to the lattice points of $\P$ defined by Armstrong, Loehr, and Warrington in \cite{ALW-parking,ALW-sweep}. Assign level $0$ to $(0,0)$ and label the other lattice points of $\P$ by adding $b$ after each north step and subtracting $a$ after each east step. Equivalently, this is the value of the hook filling in the box to the northwest of the lattice point. Note that the label of the northeast-most lattice point $(b,a)$ is once again $0+a\cdot b-a \cdot b=0$. \item The path $\P$ has two reading words obtained by reading the levels in order. The \emph{reading word} of $\P$, denoted $L(\P)$ (for `levels'), is obtained by reading the levels that occur along the path from southwest to northeast, excluding the final $0$. (One can imagine assigning to each north and east step of the path the level of the step's initial lattice point.) The \emph{reverse reading word}, denoted $M(\P)$, is obtained by reading from northeast to southwest, excluding the final $0$.
(One can imagine $\P$ as a path from $(b,a)$ to $(0,0)$ consisting of west and south steps, once again assigning to each step the level of its initial lattice point.) Reading along $\P$ in Figure~\ref{fig:example58} shows that \[L(\P)=(0,8,16,{24},19,{27},{22},{17},12,{20},{15},{10},{5})\] and \[M(\P)=({0},{5},{10},{15},20,{12},{17},{22},27,{19},24,16,8).\] When $a$ and $b$ are relatively prime, no value occurs more than once in $L(\P)$ or $M(\P)$. \item The set of levels of $\P$ is partitioned into the set of \emph{north levels} $\N(\P)$ and \emph{east levels} $\E(\P)$, where when reading from southwest to northeast, levels of lattice points starting north steps of $\P$ are in $\N(\P)$ and levels of lattice points starting east steps of $\P$ are in $\E(\P)$. We order these levels in decreasing order. In our running example, the north levels of $\P$ are \[\N(\P)=\{19,16,12,8,0\},\] and the east levels of $\P$ are \[\E(\P)=\{27,24,22,20,17,15,10,5\}.\] \end{itemize} \item {\bf Permutations associated to $\P$:} Throughout the paper we use square brackets to write permutations in one-line notation, and round parentheses for permutations in cycle notation.\smallskip \begin{itemize} \item The \emph{reading permutation} of $\P$, denoted $\sigma(\P)$, is the permutation in $S_{a+b}$ that encodes the relative order of the levels recorded in $L(\P)$. The \emph{reverse reading permutation} of $\P$, denoted~$\tau(\P)$, encodes the relative order of the values in~$M(\P)$. In our running example, the one-line notations for $\sigma(\P)$ and $\tau(\P)$ are \[\sigma(\P)=[1,3,7,{12},9,{13},{11},{8},5,{10},{6},{4},{2}]\] and \[\tau(\P)=[{1},{2},{4},{6},10,{5},{8},{11},13,{9},12,7,3].\] \item Let $\gamma(\P)$ be the permutation in $S_{a+b}$ that when written in cycle notation starting with $1$ has the same order of entries as $\sigma(\P)$ written in one-line notation.
In our running example $\P$ we have \begin{align*} \gamma(\P)&= (1,3,7,{12},9,{13},{11},{8},5,{10},{6},{4},{2})\\ &= [3,1,7,2,10,4,12,5,13,6,8,9,11]. \end{align*} \end{itemize} \end{enumerate} \renewcommand*{\thefootnote}{\fnsymbol{footnote}} \setcounter{footnote}{1} \begin{remark}\label{rem:sigma} The path $\P$ can be recovered knowing only $\sigma(\P)$ (or $\tau(\P)$ or $\gamma(\P)$). The east steps of~$\P$ correspond exactly to the right (cyclic) descents\footnote{A descent of a permutation occurs when $\sigma(i)>\sigma(i+1)$. A cyclic descent is defined in the same way, but considering the indices modulo $a+b$, allowing a descent in the last position of $\sigma$.} of $\sigma$, whereas the north steps of $\P$ correspond to the right (cyclic) ascents of $\sigma$. In our running example, the right (cyclic) descents of $\sigma(\P)$ occur in positions $4$, $6$, $7$, $8$, $10$, $11$, $12$, and $13$, which are exactly the positions of the east steps in $\P$. \end{remark} \section{Skew length}\label{sec:skew_length} In~\cite{AHJ14} the \emph{skew length} statistic is proposed as a $q$-statistic for $(a,b)$-Dyck paths, and a related construction is investigated in \cite[Section 4]{ALW-parking}. In this section, we present the original definition of skew length on cores and two equivalent interpretations on $(a,b)$-Dyck paths using length fillings and skew inversions. We show that these interpretations are indeed equivalent to the original definition and, as a consequence, we prove that skew length is independent of the ordering of $a$ and $b$. Further interpretations of skew length are presented in terms of the zeta map in Section~\ref{sec:zeta_map}. \subsection{Skew length on cores and polynomial motivation}\ We begin with an observation on ordinary core partitions before discussing simultaneous core partitions. \begin{definition}[{\cite[Definition 2.7]{AHJ14}}] Let $\kappa$ be an $a$-core partition.
Consider the hook lengths of the boxes in the first column of $\kappa$. For each residue class modulo $a$, find the largest of these hook lengths in that class. The \emph{$a$-rows} of $\kappa$ are the rows of $\kappa$ corresponding to these hook lengths. The \emph{$a$-boundary} of $\kappa$ consists of all boxes in its Young diagram with hook length less than $a$. \end{definition} \begin{proposition} \label{prop:arows} Let $\kappa$ be an $a$-core partition. The number of boxes in the $a$-rows of $\kappa$ equals the number of boxes in the $a$-boundary of $\kappa$. \end{proposition} \begin{proof} Let $\operatorname{len}(h)$ be the number of boxes in the row of $\kappa$ with leading hook $h$. We first observe that if $h>a$ is a leading hook of $\kappa$, then $h-a$ is also a leading hook of $\kappa$. For this, decompose $h$ into two hooks of lengths $h-a$ and $a$ as illustrated in Figure~\ref{fig:skewLength_proof}, such that the boxes in the row with leading hook $h$ that are intersected by the hook $a$ are exactly the boxes in the $a$-boundary in that row. This guarantees that the right-end box of the hook $h-a$ is in~$\kappa$, and therefore that $h-a$ is also a leading hook. Now, the number of $a$-boundary boxes in the row of $\kappa$ corresponding to $h$ is $\operatorname{len}(h)-\operatorname{len}(h-a)$. Summing these differences over all rows gives the number of $a$-boundary boxes; within each residue class modulo $a$ the sum telescopes to the length of the row with the largest leading hook in that class, so the total also equals the number of boxes in the $a$-rows of $\kappa$.
\end{proof} \begin{figure} \begin{center} \includegraphics[scale=0.5]{skewLength_proof.pdf} \end{center} \caption{The number of $a$-boundary boxes in the row of $\kappa$ corresponding to a leading hook $h$ is $\operatorname{len}(h)-\operatorname{len}(h-a)$.} \label{fig:skewLength_proof} \end{figure} \begin{corollary} The number of boxes in the $a$-rows of $\kappa$ equals the number of boxes in the $a$-rows of its conjugate partition $\kappa^c$. \end{corollary} \begin{remark} For readers familiar with the abacus diagram interpretation, hook lengths correspond to beads on the abacus; the $a$-rows correspond to the largest bead on each runner of the $a$-abacus. Proposition~\ref{prop:arows} gives a way to count the number of boxes in the $a$-boundary of an $a$-core by adding the number of gaps that appear on the abacus before each of these largest beads. \end{remark} \begin{definition}[{\cite[Definition 2.7]{AHJ14}}] \label{def:skewlen} Let $\kappa$ be an $(a,b)$-core partition. The \emph{skew length} of~$\kappa$, denoted $\sl(\kappa)$, is the number of boxes simultaneously located in the $a$-rows and the $b$-boundary of~$\kappa$. \end{definition} \begin{example} The core partition shown in Figure~\ref{fig:skewLength_cores} is the $(5,8)$-core $\kappa=\mathfrak{c}(\P)$ corresponding to the path $\P$ in our running example from Figures~\ref{fig:example58} and \ref{fig:AndersonsBijection}. \begin{figure} \begin{center} \includegraphics[scale=0.6]{skewLength_cores.pdf} \end{center} \caption{(Left) The $8$-boundary boxes of our favorite $(5,8)$-core $\kappa$ are shaded; those in the $5$-rows of $\kappa$ are darker. (Right) The $5$-boundary boxes of $\kappa$ are shaded; those in the $8$-rows of $\kappa$ are darker. Surprisingly, the number of darkly shaded boxes on the left $4+3+2+1=10$ is equal to the number of darkly shaded boxes on the right $3+2+2+1+1+1=10$.
(See Corollary~\ref{cor.absl}.)} \label{fig:skewLength_cores} \end{figure} On the left, the $5$-rows of $\kappa$ are the rows with leading hook lengths 14, 11, 7, and 3. The darkly shaded boxes are those boxes in the $5$-rows with hook length less than 8. The skew length is equal to $4+3+2+1=10$. On the right, we compute the skew length of $\kappa$ when considered as an $(8,5)$-core. The $8$-rows of $\kappa$ are the rows with leading hook lengths 14, 11, 9, 7, 4, and 2. The darkly shaded boxes are those boxes in the $8$-rows with hook length less than 5. The skew length is equal to $3+2+2+1+1+1=10$. We will see in Corollary~\ref{cor.absl} that it is not a coincidence that these two numbers are the same. The number of boxes in the $8$-boundary (shaded boxes, left) equals the number of boxes in the $8$-rows (marked rows, right) and the number of boxes in the $5$-boundary (shaded boxes, right) equals the number of boxes in the $5$-rows (marked rows, left), as proved in general in Proposition~\ref{prop:arows}. \end{example} The skew length statistic was discovered by Armstrong, who conjectured that it is a key statistic in the $q$- and $q,t$-enumeration of $(a,b)$-cores (or $(a,b)$-Dyck paths). Recall that the rank $\operatorname{rk}(\kappa)$ of an $(a,b)$-core $\kappa$ is the number of rows in its corresponding Young diagram. \begin{conjecture}\cite[Conjecture 2.8]{AHJ14} \label{conj.qcount} Let $a$ and $b$ be relatively prime positive integers. The expression \[ f_{a,b}(q) = \frac{1}{[a+b]_q}\begin{bmatrix}a+b\\ a\end{bmatrix}_q \] is equal to the polynomial \[g_{a,b}(q) = \sum_\kappa q^{\sl(\kappa)+\operatorname{rk}(\kappa)},\] where the sum is over all $(a,b)$-cores $\kappa$. \end{conjecture} Haiman \cite[Propositions 2.5.2 and 2.5.3]{Haiman93} proved that $f_{a,b}(q)$ is a polynomial if and only if $a$ and $b$ are relatively prime.
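For small cases this polynomiality can be confirmed directly. The sketch below (our own illustration, not part of the original development; all helper names are invented) computes $f_{a,b}(q)$ using exact coefficient-list arithmetic and checks, for instance, that $f_{2,3}(q)=1+q^2$:

```python
def pmul(p, q):
    """Multiply polynomials given as coefficient lists (index = degree)."""
    out = [0] * (len(p) + len(q) - 1)
    for i, c in enumerate(p):
        for j, d in enumerate(q):
            out[i + j] += c * d
    return out

def pdiv_exact(num, den):
    """Polynomial long division, insisting on a zero remainder."""
    num = num[:]
    quot = [0] * (len(num) - len(den) + 1)
    for k in reversed(range(len(quot))):
        c = num[k + len(den) - 1] // den[-1]
        quot[k] = c
        for j, d in enumerate(den):
            num[k + j] -= c * d
    assert all(c == 0 for c in num), "division is not exact"
    return quot

def qint(n):
    """The q-integer [n]_q = 1 + q + ... + q^(n-1)."""
    return [1] * n

def qfact(n):
    """The q-factorial [n]_q!."""
    p = [1]
    for k in range(1, n + 1):
        p = pmul(p, qint(k))
    return p

def f(a, b):
    """f_{a,b}(q) = qbinom(a+b, a) / [a+b]_q, as a coefficient list."""
    qbinom = pdiv_exact(qfact(a + b), pmul(qfact(a), qfact(b)))
    return pdiv_exact(qbinom, qint(a + b))

assert f(2, 3) == [1, 0, 1]                    # f_{2,3}(q) = 1 + q^2
assert sum(f(5, 8)) == 99                      # f_{5,8}(1) is the Catalan count
assert len(f(5, 8)) - 1 == (5 - 1) * (8 - 1)   # degree (a-1)(b-1)
```

For $\gcd(a,b)=1$ the division in the last step leaves no remainder, in agreement with the statement above; for $\gcd(a,b)>1$ the exactness assertion fails.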
\cite[Theorem 1.10]{BEG03} provides a proof, using the representation theory of rational Cherednik algebras, that $f_{a,b}(q)$ has non-negative coefficients; see also~\cite[Section 1.12]{gordon_catalan_2012}. A proof of Conjecture~\ref{conj.qcount} would provide a combinatorial interpretation for the coefficients of $f_{a,b}(q)$. \begin{proposition}\cite{Haiman93,BEG03} \label{prop.qdef} The expression \[ f_{a,b}(q) = \frac{1}{[a+b]_q}\begin{bmatrix}a+b\\ a\end{bmatrix}_q \] is a polynomial if and only if $\gcd(a,b)=1$. Furthermore, when $a$ and $b$ are relatively prime, the resulting polynomial has integer coefficients. \end{proposition} Define the \emph{co-skew length} of an $(a,b)$-core $\kappa$ as \[ \sl'(\kappa):=\frac{(a-1)(b-1)}{2}-\sl(\kappa). \] Armstrong conjectures that rank and co-skew length give a $q,t$-enumeration of the $(a,b)$-cores, subject to the following symmetry: \begin{conjecture}\cite[Conjecture 2.9]{AHJ14} \label{conj.qtsym} The following $q,t$-polynomials are equal: \[ \sum q^{\operatorname{rk}(\kappa)}t^{\sl'(\kappa)} = \sum q^{\sl'(\kappa)}t^{\operatorname{rk}(\kappa)} \] where the sum is over all $(a,b)$-cores $\kappa$. \end{conjecture} These $q,t$-polynomials are called the \emph{rational $q,t$-Catalan numbers}. \subsection{Skew length on Dyck paths via the row length filling}\label{sec:length_filling}\ We now provide a new method to calculate the skew length of an $(a,b)$-Dyck path $\P$ which uses a {\em row length filling} of the boxes below~$\P$. Our method recovers the skew length statistic discovered by Armstrong for $(a,b)$-cores. As a consequence, we show that the skew length of an $(a,b)$-core is independent of the ordering of $a$ and $b$. We provide two equivalent definitions of the {\em row length filling}. \begin{definition}\label{def:rlf} Let $\P$ be an $(a,b)$-Dyck path. The \emph{row length filling} of $\P$ is an assignment of numbers to each box below the path~$\P$.
For a box $B$ with positive hook filling $h$, define the row length of $B$ to be the length of the row in $\mathfrak{c}(\P)$ with leading hook $h$. Alternatively, define the row length of $B$ to be $h-p_h$, where $p_h$ is the number of positive entries in the hook filling strictly less than~$h$. For a box $B$ with non-positive hook filling $h$, define the row length of $B$ to be zero. For any hook $h$ in the hook filling of $\P$, we denote by~$\operatorname{len}(h)$ the corresponding value of the row length filling of $\P$. \end{definition} Figure~\ref{fig:rowLength} shows in red in the upper left corner the row length of the boxes corresponding to the positive hooks of $\P$. \begin{figure} \begin{center} \includegraphics[scale=0.6]{rowLength.pdf} \end{center} \caption{The row length filling of boxes below the path $\P$ is given in red in the upper left corner. The values correspond to the length of the rows of $\mathfrak{c}(\P)$ in Figure~\ref{fig:AndersonsBijection}.} \label{fig:rowLength} \end{figure} \begin{lemma} The two definitions of row length filling in Definition~\ref{def:rlf} are equivalent. \end{lemma} \begin{proof} When ordered in increasing order, the entries in the hook filling of $\P$ correspond to the hook lengths of the boxes in the first column of $\mathfrak{c}(\P)$ from shortest to longest. Suppose the first box of the $i$th shortest row has hook length $h$. Then the length of the $i$th shortest row is $h-(i-1)$, which is exactly the corresponding entry in the row length filling. \end{proof} \begin{remark} For readers familiar with the abacus diagram interpretation, the row length filling associates to each bead on the abacus the number of gaps that appear before it on the abacus. \end{remark} The row length filling is very useful for reading off common core statistics from the Dyck path. 
For example, we can immediately see that: \begin{corollary} The sum of the entries of the row length filling of $\P$ is equal to the number of boxes of the core $\mathfrak{c}(\P)$. \end{corollary} Furthermore, because the $a$-rows of $\mathfrak{c}(\P)$ correspond to the westmost boxes under $\P$ and the $b$-rows of $\mathfrak{c}(\P)$ correspond to the northmost boxes under $\P$, the number of boxes in $\mathfrak{c}(\P)$ with hook length less than $a$ or less than $b$ can be determined from the row length filling as a direct consequence of Proposition~\ref{prop:arows}. \begin{corollary} \label{cor.boundaryBoxes} The number of boxes in the $a$-boundary of an $(a,b)$-core $\mathfrak{c}(\P)$ is equal to the sum of the row length fillings of the westmost boxes under $\P$. Likewise, the number of boxes in the $b$-boundary of $\mathfrak{c}(\P)$ is equal to the sum of the row length fillings of the northmost boxes under $\P$. \end{corollary} In the same vein, the skew length of $\P$ can also be easily computed, as follows: \begin{theorem} \label{thm.skewLengthCompute} The skew length of an $(a,b)$-core $\mathfrak{c}(\P)$ may be computed from the row length filling of~$\P$ by adding all lengths at peaks of $\P$ and subtracting all lengths at valleys of $\P$. \end{theorem} \begin{proof} By the argument in the proof of Proposition~\ref{prop:arows}, we see that when $h$ is a positive hook of an $(a,b)$-Dyck path $\P$ (so that $h-a$ is the hook of the box directly east of the box with hook $h$ and $h-b$ is the hook of the box directly south of the box with hook $h$), then \begin{enumerate}[(i)] \item The number of $a$-boundary boxes in the row of $\mathfrak{c}(\P)$ corresponding to $h$ is $\operatorname{len}(h)-\operatorname{len}(h-a)$. \item The number of $b$-boundary boxes in the row of $\mathfrak{c}(\P)$ corresponding to $h$ is $\operatorname{len}(h)-\operatorname{len}(h-b)$. 
\end{enumerate} By restricting to the $a$-rows or $b$-rows, we see that the skew length of $\mathfrak{c}(\P)$ is given by: \begin{equation}\label{eqn.lenSum_a} \sum \operatorname{len}(h)-\operatorname{len}(h-b), \end{equation} where the sum is over all westmost boxes under $\P$, or alternatively the skew length of $\mathfrak{c}(\P)$ is given by: \begin{equation}\label{eqn.lenSum_b} \sum \operatorname{len}(h)-\operatorname{len}(h-a), \end{equation} where the sum is over all northmost boxes under $\P$. When one westmost box under $\P$ is directly north of another, Formula~\eqref{eqn.lenSum_a} telescopes. After cancelling terms, we are left with the lengths at peaks of $\P$ minus the lengths at valleys of $\P$. An equivalent argument can be made from Formula~\eqref{eqn.lenSum_b}. \end{proof} \begin{example} In Figure~\ref{fig:rowLength}, we see that the sum of the row length fillings is 21, which is the number of boxes of $\mathfrak{c}(\P)$. Adding the row lengths of the westmost boxes under $\P$ gives $2+6+4+1+0=13$ boxes in the $5$-boundary of $\mathfrak{c}(\P)$, while adding the row lengths of the northmost boxes under $\P$ gives $4+6+3+1+2+1+0+0=17$ boxes in the $8$-boundary of $\mathfrak{c}(\P)$, as expected from Figure~\ref{fig:skewLength_cores}. Our path $\P$ has three peaks with row lengths 2, 6, and 4 and two valleys with row lengths 2 and 0. The skew length of our path is then \[\sl(\P)=(2+6+4)-(2+0)=10.\] \end{example} When computing skew length directly from the core, it is not obvious that the number of boxes in $a$-rows and the $b$-boundary should be equal to the number of boxes in $b$-rows and the $a$-boundary (see Figure~\ref{fig:skewLength_cores}). 
But the method of computing the skew length given by Theorem~\ref{thm.skewLengthCompute} is independent of the ordering of $a$ and $b$: Switching $a$ and $b$ flips the rectangle to a $b\times a$ rectangle in which peaks are still peaks, valleys are still valleys, and the hook filling and row length filling are otherwise unaffected. \begin{corollary} \label{cor.absl} The skew length of an $(a,b)$-core $\kappa$ is independent of the ordering of $a$ and $b$. \end{corollary} \subsection{Skew length via skew inversions}\ This section presents another interpretation of the skew length of an $(a,b)$-Dyck path $\P$ in terms of the number of its {\em skew inversions} or the number of its {\em flip skew inversions}. Recall that the north levels of $\P$ are the levels $\N(\P)=\{n_1,\dots,n_a\}$ of the initial lattice points of the north steps in the path, and that the east levels of $\P$ are the levels $\E(\P)=\{e_1,\dots,e_b\}$ of the initial lattice points of the east steps. \begin{definition} A \emph{skew inversion} of $\P$ is a pair of indices $(i,j)$ such that $n_i>e_j$. A \emph{flip skew inversion} of $\P$ is a pair of indices $(i,j)$ with $n_i+b<e_j-a$. \end{definition} \begin{theorem}\label{thm:skewLength-skewInversions} Let $\P$ be an $(a,b)$-Dyck path. The skew length of $\P$ equals the number of skew inversions of $\P$, which is equal to the number of flip skew inversions of $\P$. \end{theorem} The key to the proof of Theorem~\ref{thm:skewLength-skewInversions} is recognizing the relationship between westmost boxes under $\P$ and north levels in $\N(\P)$ and the relationship between northmost boxes under $\P$ and east levels in $\E(\P)$. \begin{remark} \label{rem:hook-north-east} Figure~\ref{fig:hook-north-east} shows that when $h$ is the hook filling of a westmost box under $\P$, then the associated north level $n_h$ (corresponding to the lattice point at its southwest corner) is $h+a$. 
When $h$ is the hook filling of a northmost box under $\P$, then the associated east level $e_h$ (corresponding to the lattice point at its northwest corner) is $h+a+b$. \begin{figure} \begin{center} \includegraphics[scale=0.5]{hook-north-east.pdf} \end{center} \caption{When $h$ is the hook filling of a westmost box under $\P$, the associated north level is $n_h=h+a$. When $h$ is the hook filling of a northmost box under $\P$, the associated east level is $e_h=h+a+b$.} \label{fig:hook-north-east} \end{figure} \end{remark} \begin{lemma}\label{lem:skew_inversions_length_dif} Let $h$ be the hook filling of a westmost box under an $(a,b)$-Dyck path $\P$. The length difference $\operatorname{len}(h)-\operatorname{len}(h-b)$ is equal to the number of skew inversions involving the associated north level $n_h$, which equals the number of $b$-boundary boxes in the $a$-row corresponding to $h$. \end{lemma} \begin{proof} Recall that $\operatorname{len}(h)=h-p_h$, where $p_h$ is the number of positive hooks in the hook filling of $\P$ less than $h$. Then: \begin{eqnarray*} \operatorname{len}(h)-\operatorname{len}(h-b)&=&h-p_h - (h-b) + p_{h-b}\\ &=&b-(p_h-p_{h-b})\\ &=&b-\#\{g\mid h-b\leq g< h\}. \end{eqnarray*} Each box with hook filling $g$ satisfying the inequalities $h-b\leq g< h$ is in a distinct column of the diagram of $\P$. If two were in the same column, then the difference of their hooks would be a multiple of $b$, so that both could not satisfy the inequality. As a result, we may add a multiple of $b$ to each $g$ satisfying the inequalities to obtain a unique northmost box under $\P$ with hook filling $\bar{g}$ satisfying $h-b\leq \bar{g}$. Conversely, for every northmost box under $\P$ with hook filling $\bar{g}$ satisfying this inequality there is a unique box with hook filling $g$ in the same column satisfying $h-b\leq g< h$. 
Therefore, \begin{equation*} \begin{aligned} \operatorname{len}(h)-\operatorname{len}(h-b)&=b-\#\{\bar{g}\mid h-b\leq \bar{g}\}\\ &=\#\{\bar{g}\mid h-b> \bar{g}\}. \end{aligned} \end{equation*} By Remark~\ref{rem:hook-north-east}, this is equivalent to $\operatorname{len}(h)-\operatorname{len}(h-b)=\#\{e_j\mid n_h> e_j\}$, as desired. The last clause of the statement of the lemma is given in the proof of Theorem~\ref{thm.skewLengthCompute}. \end{proof} Similar arguments prove the following. \begin{lemma}\label{lem:skew_inversions_length_dif_b} Let $h$ be the hook filling of a northmost box under an $(a,b)$-Dyck path $\P$. The length difference $\operatorname{len}(h)-\operatorname{len}(h-a)$ is equal to the number of flip skew inversions involving the associated east level $e_h$, which equals the number of $a$-boundary boxes in the $b$-row corresponding to $h$. \end{lemma} Theorem~\ref{thm:skewLength-skewInversions} now follows directly from Definition~\ref{def:skewlen} by summing over all westmost boxes in Lemma~\ref{lem:skew_inversions_length_dif} and all northmost boxes in Lemma~\ref{lem:skew_inversions_length_dif_b}. \begin{example}\label{ex:skew_inversions} In our running example, the north levels are~$\N=\{19,16,12,8,0\}$ and the east levels are~$\E=\{27,24,22,20,17,15,10,5\}$. There are 10 skew inversions because there are 4 east levels less than $n_1=19$, 3 east levels less than~$n_2=16$, 2 east levels less than~$n_3=12$, 1 east level less than~$n_4=8$, and 0 east levels less than~$n_5=0$. The total number of skew inversions is then $4+3+2+1+0=10$. These numbers correspond to the number of $b$-boundary boxes in the $a$-rows of the core $\mathfrak{c}(\P)$ in Figure~\ref{fig:skewLength_cores}. To calculate the flip skew inversions, consider the sets $\N+b=\{27,24,20,16,8\}$ and $\E-a=\{22,19,17,15,12,10,5,0\}$. 
There are 10 flip skew inversions because there are 3 elements of the form~$n_i+b$ less than~$e_1-a=22$, there are 2 less than~$e_2-a=19$, 2 less than~$e_3-a=17$, 1 less than~$e_4-a=15$, 1 less than~$e_5-a=12$, 1 less than~$e_6-a=10$, 0 less than~$e_7-a=5$, and 0 less than~$e_8-a=0$. The total number of flip skew inversions is then $3+2+2+1+1+1+0+0=10$. These numbers correspond to the number of $a$-boundary boxes in the $b$-rows of the core $\mathfrak{c}(\P)$. \end{example} \begin{remark} Skew inversions in an $(a,b)$-Dyck path arise from pairs of north levels and east levels where $n_i>e_j$. Note that $n_i+b$ is the level of the terminal lattice point of the corresponding north step (instead of initial lattice point), while $e_j-a$ is the level of the terminal lattice point of the corresponding east step. So flip skew inversions are best understood by a reverse reading of $\P$ as a sequence of west and south steps, counting the pairs where the south level is less than the west level. Alternatively, flip skew inversions of $\P$ correspond to skew inversions of $\P$ when $\P$ is reflected (flipped) to be a $(b,a)$-Dyck path. \end{remark} \section{The conjugate map} \label{sec:conjugate} For any partition $\kappa$, its conjugate partition $\kappa^c$ is obtained by reflecting along its main diagonal. (See Figure~\ref{fig:conjugate_cores}.) Since hook lengths are preserved under this reflection, when $\kappa$ is an $(a,b)$-core, so is $\kappa^c$. When $a$ and $b$ are relatively prime, there is a natural conjugate map on $(a,b)$-Dyck paths~$\P$. Apply cyclic shifts to the path $\P$ until we encounter a path strictly \emph{below} the diagonal; the conjugate path $\P^c$ is the result of rotating this path~$180^\circ$. (See Figure~\ref{fig:conjugate_paths}.)
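Though not part of the original exposition, the conjugate map on paths is straightforward to implement. The sketch below (our own code; all function names are invented) conjugates the running example, confirms that the positive hooks of $\P^c$ are $\{1,2,4,6,9,14\}$, matching Lemma~\ref{lem:conjugate_positive_hooks}, and checks that skew length, computed as the number of skew inversions from Theorem~\ref{thm:skewLength-skewInversions}, is preserved, as Theorem~\ref{thm.slconj} asserts:

```python
def levels(path, a, b):
    """Levels of the lattice points along the path: start at 0, add b after
    each north step, subtract a after each east step."""
    out = [0]
    for step in path:
        out.append(out[-1] + (b if step == 'N' else -a))
    return out

def conjugate(path, a, b):
    """Cyclically shift the path until it lies strictly below the diagonal,
    then rotate 180 degrees (i.e. reverse the step word)."""
    for k in range(len(path)):
        shifted = path[k:] + path[:k]
        if all(lv < 0 for lv in levels(shifted, a, b)[1:-1]):
            return shifted[::-1]
    raise ValueError("no strictly-below cyclic shift; is gcd(a, b) = 1?")

def positive_hooks(path, a, b):
    """Positive entries of the hook filling in the boxes below the path; the box
    whose lower-right lattice point is (x, y) carries the value b*y - a*x."""
    hooks, x, y = set(), 0, 0
    for step in path:
        if step == 'N':
            y += 1
        else:
            x += 1
            hooks.update(b * r - a * x for r in range(y) if b * r - a * x > 0)
    return hooks

def skew_inversions(path, a, b):
    """Pairs (n, e) of a north level and an east level with n > e."""
    lv = levels(path, a, b)[:-1]          # level of each step's initial point
    north = [l for l, s in zip(lv, path) if s == 'N']
    east = [l for l, s in zip(lv, path) if s == 'E']
    return sum(1 for n in north for e in east if n > e)

a, b = 5, 8
P = "NNNENEEENEEEE"
assert positive_hooks(P, a, b) == {1, 2, 3, 4, 6, 7, 9, 11, 14}

Pc = conjugate(P, a, b)                   # Pc == "NENNNEEEENEEE"
assert positive_hooks(Pc, a, b) == {1, 2, 4, 6, 9, 14}
assert skew_inversions(P, a, b) == skew_inversions(Pc, a, b) == 10
```

Since the diagonal passes through no interior lattice points when $\gcd(a,b)=1$, exactly one cyclic shift lies strictly below it, namely the one starting after the unique lattice point of maximal level.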
The first main result of this section (Theorem~\ref{thm:conjugation}) shows that these conjugations are equivalent under Anderson's bijection, and the second (Theorem~\ref{thm.slconj}) shows that conjugation preserves skew length. These two results were simultaneously found in independent work by Xin in~\cite{xin_rank_2015}. Lemmas \ref{lem:conjugate_hooks} and \ref{lem:conjugate_positive_hooks} mirror the notion of conjugation of the semimodule of leading hooks presented by Gorsky and Mazin \cite{GMII}. \begin{figure} \begin{center} \includegraphics[scale=0.5]{conjugate_cores.pdf} \end{center} \caption{The conjugate map on $(a,b)$-cores.} \label{fig:conjugate_cores} \end{figure} \begin{figure} \begin{center} \includegraphics[scale=0.5]{conjugate_paths.pdf} \end{center} \caption{The conjugate map on $(a,b)$-Dyck paths.} \label{fig:conjugate_paths} \end{figure} \begin{theorem}\label{thm:conjugation} Conjugation on $(a,b)$-cores coincides with conjugation on $(a,b)$-Dyck paths via Anderson's bijection: \[ \mathfrak{c}(\P)^c = \mathfrak{c}(\P^c). \] \end{theorem} This follows directly by showing the equivalence between the leading hooks of $\mathfrak{c}(\P)^c$ and the positive hooks of $\P^c$. A result of Olsson gives the leading hooks of $\mathfrak{c}(\P)^c$; we include a proof for completeness. \begin{lemma}\cite[Lemma 2.2]{Olsson93}\label{lem:conjugate_hooks} Let $\kappa$ be any partition with leading hooks given by the set $H$, with $m=\max(H)$. The conjugate partition $\kappa^c$ has leading hooks given by $\{m-n : n\in \{0,1,\dots ,m\}\setminus H\}$. \end{lemma} \begin{proof} Let $\kappa$ be any partition with leading hooks (hooks in the first column) given by the set $H$, with $m=\max(H)$. The leading hooks of its conjugate partition are the hooks in the top row of~$\kappa$. This partition has one column for each number $n$ in the set $\{0,1,\dots ,m\}\setminus H$. 
The hook of the upper box in the column corresponding to $n$ is equal to $m-n$ as illustrated in Figure~\ref{fig:conjugate_proof}. \end{proof} \begin{figure} \begin{center} \includegraphics[width=0.9\textwidth]{conjugate_proof.pdf} \end{center} \caption{Illustration of the proof of Lemmas~\ref{lem:conjugate_hooks} and \ref{lem:conjugate_positive_hooks}.} \label{fig:conjugate_proof} \end{figure} \begin{lemma}\label{lem:conjugate_positive_hooks} Let $\P$ be an $(a,b)$-Dyck path with positive hooks given by $H$, with $m=\max(H)$. The conjugate path $\P^c$ has positive hooks given by $\{m-n : n\in \{0,1,\dots ,m\}\setminus H\}$. \end{lemma} \begin{proof} Let $\P$ be an $(a,b)$-Dyck path with positive hook set given by $H$ and where $m=\max(H)$. Fill all the boxes on the left of the path with the hooks that are less than~$m$. Hooks appearing in the same row are equivalent mod $a$. Furthermore, the rows contain all the residues $0,1,\dots, a-1$ modulo $a$ because $a$ and $b$ are relatively prime and, as a consequence, the filled hooks contain all the numbers from 0 to $m$. Draw a diagonal parallel to the main diagonal passing through the upper left corner of the box below $\P$ with the largest hook $m$. Consider the area $A$ below this diagonal directly on the left of $\P$ as illustrated in Figure~\ref{fig:conjugate_proof}. The boxes in $A$ are exactly the boxes on the left of the path with hook length~$n$ less than $m$. Applying cyclic shifts to $\P$ to obtain a path below the main diagonal transforms the area $A$ to the area between the main diagonal and the shifted path. Since this transformation maps the box with hook length $m$ to the box with hook length 0 (when rotated~$180$ degrees), the hook length $n$ gets transformed to the hook length $m-n$. \end{proof} \begin{example} In both Figure~\ref{fig:conjugate_cores} and Figure~\ref{fig:conjugate_paths}, the set of hooks on the left is $H=\{1,2,3,4,6,7,9,11,14\}$, with $m=14$.
The set $\{0,1,\dots , m\}\setminus H=\{0,5,8,10,12,13\}$, and subtracting these numbers from~$14$ we get that the leading and positive hooks of the conjugate are $\{14,9,6,4,2,1\}$ as desired. \end{example} \begin{theorem} \label{thm.slconj} The skew length of $\P$ is equal to the skew length of $\P^c$. \end{theorem} \begin{proof} Let $n_i>e_j$ be a skew inversion for the path $\P$, with largest level $m$. The north and east steps of the conjugate path are in correspondence with the north and east steps in the original path, respectively. The corresponding north and east levels are given by $n_i'=m-n_i-b$ and $e_j'=m-e_j+a$. A simple calculation shows that these satisfy $n_i'+b<e_j'-a$, giving a flip skew inversion for $\P^c$. Thus, there is a one-to-one correspondence between skew inversions for $\P$ and flip skew inversions in $\P^c$ (and a similar correspondence between flip skew inversions for $\P$ and skew inversions in $\P^c$). The result follows directly from Theorem~\ref{thm:skewLength-skewInversions}. \end{proof} \begin{remark}\label{rem:conj_flip} As explained in the proof of Theorem~\ref{thm.slconj} the number of skew inversions of $\P^c$ is equal to the number of flip skew inversions of $\P$. Therefore, the skew length of a conjugate path may be thought of as the skew length of the original path when flipped to a $(b,a)$-Dyck path. \end{remark} Consider the hook lengths of the boxes in the first row of an $(a,b)$-core partition $\kappa$. Find the largest hook length of each residue modulo $a$. The \emph{$a$-columns} of $\kappa$ are the columns of $\kappa$ corresponding to these hook lengths. Theorem~\ref{thm.slconj} implies the following result, which is illustrated in Figure~\ref{fig:conjugate_cores_skewlenght}. \begin{corollary}\label{cor:slconj_cores} Let $\kappa$ be an $(a,b)$-core partition. The number of boxes in the $a$-rows and $b$-boundary of $\kappa$ is equal to the number of boxes in the $a$-columns and $b$-boundary of $\kappa$. 
\end{corollary} \begin{proof} The number of boxes in the $a$-rows and $b$-boundary of $\kappa$ is equal to the skew length of $\kappa$. The number of boxes in the $a$-columns and $b$-boundary of $\kappa$ is equal to the skew length of $\kappa^c$. The result then follows from Theorem~\ref{thm:conjugation} and Theorem~\ref{thm.slconj} by applying Anderson's bijection. \end{proof} \begin{figure} \begin{center} \includegraphics[scale=0.5]{conjugate_cores_skewlength} \end{center} \caption{(Left) The $8$-boundary boxes of our favorite $(5,8)$-core $\kappa$ are shaded; those in the $5$-rows of $\kappa$ are darker. (Right) The $8$-boundary boxes of $\kappa$ are shaded; those in the $5$-columns of $\kappa$ are darker. The number of darkly shaded boxes on the left $4+3+2+1=10$ is equal to the number of darkly shaded boxes on the right~$6+3+1=10$. (See Corollary~\ref{cor:slconj_cores}.)} \label{fig:conjugate_cores_skewlenght} \end{figure} \section{The zeta map (and eta)} \label{sec:zeta_map} The zeta map is an intriguing map from $\DD_{a,b}$ to $\DD_{a,b}$ which can be defined in a wide variety of ways. See, for example, \cite{AHJ14,ALW-sweep,ALW-parking,GMII}, with equivalence of many definitions given in~\cite{ALW-sweep}. The precise description of zeta depends on making some choices; in our experience, these choices always resolve into one of two distinct maps, which we call zeta and eta. The eta map can be interpreted as the zeta map applied to the conjugate of $\P$, as reproved in Proposition~\ref{prop:eta_zeta_conjugate} by appealing to skew inversions. The joint dynamics of zeta and eta will be used to present a combinatorial description of the inverse of zeta in Section~\ref{sec:inverse}. In this section we present four combinatorial descriptions for computing the zeta and eta maps, starting with an interpretation involving core partitions implicit in \cite{AHJ14}, followed by an equivalent description via the sweep maps considered in~\cite{ALW-sweep}.
Our main contributions are two new combinatorial descriptions of the zeta map involving interval intersections and a laser filling, along with the study of the eta map in all four contexts. \subsection{Zeta and eta via cores}\ Drew Armstrong conjectured a combinatorial interpretation for the zeta map by way of core partitions, drawing inspiration from Lapointe and Morse's bounded partitions \cite{LM}, after learning of Loehr and Warrington's sweep map discussed in the next section. We present his definition and provide a parallel definition for the eta map. \begin{definition}\label{def:zeta_eta} Let $\P$ be an $(a,b)$-Dyck path and let $\mathfrak{c}(\P)$ be its corresponding $(a,b)$-core. From~$\P$ define two partitions $\lambda(\P)$ and $\mu(\P)$ and corresponding lattice paths $\zeta(\P)$ and $\eta(\P)$: \begin{itemize} \item $\lambda(\P)=(\lambda_1, \dots , \lambda_a)$ is the partition that has parts equal to the number of $b$-boundary boxes in the $a$-rows of $\mathfrak{c}(\P)$. \item $\mu(\P)=(\mu_1,\dots,\mu_b)$ is the partition that has parts equal to the number of $a$-boundary boxes in the $b$-rows of $\mathfrak{c}(\P)$. \item $\zeta(\P)$ is the $(a,b)$-Dyck path that bounds the partition $\lambda(\P)$. \item $\eta(\P)$ is the $(a,b)$-Dyck path that bounds the conjugate of the partition $\mu(\P)$. \end{itemize} The \emph{zeta map} $\zeta:\DD_{a,b}\rightarrow\DD_{a,b}$ is defined by $\zeta:\P\mapsto\zeta(\P)$. The \emph{eta map} $\eta:\DD_{a,b}\rightarrow\DD_{a,b}$ is defined by $\eta:\P\mapsto\eta(\P)$. \end{definition} One can see from the definition of zeta and eta, via the sweep map described below, that $\zeta(\P)$ and $\eta(\P)$ are indeed paths that stay above the main diagonal. We refer to~\cite{ALW-sweep} for a proof. An alternative method for calculating $\lambda(\P)$ and $\mu(\P)$ follows from Lemmas~\ref{lem:skew_inversions_length_dif} and~\ref{lem:skew_inversions_length_dif_b}. 
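These counts are easy to check by machine. The following is a minimal Python sketch (our own illustration, not part of the development; it takes the north and east levels of the path as input and assumes, following the proof of Theorem~\ref{thm.slconj}, that a skew inversion is a pair with $n_i>e_j$ and a flip skew inversion is a pair with $n_i+b<e_j-a$):

```python
# Sketch: compute lambda(P) and mu(P) by counting inversions per level.
# A skew inversion is a pair n > e; a flip skew inversion is a pair with
# n + b < e - a (the condition appearing in the proof that skew length
# is preserved under conjugation).
def lambda_mu(north_levels, east_levels, a, b):
    lam = sorted((sum(1 for e in east_levels if e < n) for n in north_levels),
                 reverse=True)
    mu = sorted((sum(1 for n in north_levels if n + b < e - a) for e in east_levels),
                reverse=True)
    return lam, mu

# Running example: the (5,8)-Dyck path with north levels {0,8,12,16,19}
# and east levels {5,10,15,17,20,22,24,27}.
lam, mu = lambda_mu([0, 8, 12, 16, 19], [5, 10, 15, 17, 20, 22, 24, 27], 5, 8)
print(lam)  # [4, 3, 2, 1, 0]
print(mu)   # [3, 2, 2, 1, 1, 1, 0, 0]
```

Both partitions have $10$ boxes, in agreement with the skew length of the running example.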
\begin{lemma}\label{lem:lambda_mu_inversions} The entries of the partitions $\lambda(\P)$ and $\mu(\P)$ satisfy: \begin{enumerate}[(i)] \item $\lambda_i$ is the number of skew inversions of $\P$ involving the north level $n_i$. \item $\mu_j$ is the number of flip skew inversions of $\P$ involving the east level $e_j$. \end{enumerate} \end{lemma} In the $(n,n+1)$ case, the zeta map specializes to the map studied in~\cite{haglund2008q} for classical Dyck paths, which sends the $\ensuremath{\mathsf{dinv}}$ and $\ensuremath{\mathsf{area}}$ statistics considered by Haiman to the $\ensuremath{\mathsf{area}}$ and $\ensuremath{\mathsf{bounce}}$ statistics considered by Haglund. One of the main points of interest in the zeta map is the fact that it sends skew length to co-area, or equivalently, co-skew length to area. \begin{corollary} The skew length of $\P$ is equal to the co-area of $\zeta(\P)$. \end{corollary} \begin{proof} The co-area of $\zeta(\P)$ is by definition equal to the number of boxes in the partition $\lambda$. By Lemma~\ref{lem:lambda_mu_inversions}, this number of boxes counts the number of skew inversions of $\P$, and thus is equal to the skew length of $\P$. \end{proof} \begin{remark}\label{rem:dinv} The $\ensuremath{\mathsf{dinv}}$ statistic for classical Dyck paths can be generalized to the rational Catalan case as the number of boxes $B$ above the path satisfying \[ \frac{\text{arm}(B)}{\text{leg}(B)+1} \leq \frac{b}{a} < \frac{\text{arm}(B)+1}{\text{leg}(B)}, \] where arm denotes the number of boxes directly to the right of $B$ that are above the path, and leg denotes the number of boxes directly below $B$ that are above the path. This intriguing statistic also satisfies~\mbox{$\ensuremath{\mathsf{dinv}}(P)=\ensuremath{\mathsf{area}}(\zeta(P))$}, see~\cite[Theorem~16]{loehr_continuous_2009} and~\cite{GMI}. As a consequence, the co-skew length and dinv statistics are the same, \begin{equation} \sl'(P) = \ensuremath{\mathsf{dinv}}(P).
\end{equation} Note that the definition of $\ensuremath{\mathsf{dinv}}$ is preserved by flipping an $(a,b)$-Dyck path to a $(b,a)$-Dyck path, and therefore skew length is preserved by flipping (as alternatively proved in Corollary~\ref{cor.absl}). By Remark~\ref{rem:conj_flip}, the skew length of the conjugate of $\P$ is equal to the skew length of $\P$ when flipped to a $(b,a)$-Dyck path. This provides an alternative proof that skew length is preserved under conjugation (Theorem~\ref{thm.slconj}). \end{remark} The works of Gorsky and Mazin~\cite[Theorem 8]{GMII} and of Armstrong, Loehr, and Warrington~\cite[Table~1]{ALW-sweep} include the following proposition; we present a new proof involving skew inversions. \begin{proposition}[\cite{GMII,ALW-sweep}]\label{prop:eta_zeta_conjugate} Let $\P$ be an $(a,b)$-Dyck path. Then \[ \eta(\P) = \zeta(\P^c). \] \end{proposition} \begin{proof} There is a one-to-one correspondence between the skew inversions of $\P^c$ and the flip skew inversions of $\P$, as shown in the proof of Theorem~\ref{thm.slconj}. Through Lemma~\ref{lem:lambda_mu_inversions}, one deduces that~$\lambda(\P^c)$ is the conjugate of $\mu(\P)$. As a consequence, $\zeta(\P^c)=\eta(\P)$. \end{proof} \begin{remark} In~\cite{GMII}, conjugation is considered in terms of normalized dual semimodules. The zeta and eta maps correspond to the maps $G_{m,n}$ and $G_{n,m}$ in~\cite[Section~2.3]{GMII}. \end{remark} Denote by $\P^\mathrm{flip}$ the result of flipping an $(a,b)$-Dyck path $\P$ to a $(b,a)$-Dyck path. The example corresponding to the path $\P$ in Figure~\ref{fig:zetaEta_cores} and the following result are illustrated in Figure~\ref{fig:zetaEta_flip}. This result can also essentially be found in~\cite[Table~1]{ALW-sweep}. \begin{proposition}[\cite{ALW-sweep}] Let $\P$ be an $(a,b)$-Dyck path. Then, \begin{align*} \zeta(\P^\mathrm{flip}) &= \eta(\P)^\mathrm{flip}, \\ \eta(\P^\mathrm{flip}) &= \zeta(\P)^\mathrm{flip}.
\end{align*} \end{proposition} \begin{proof} The skew inversions of $\P^\mathrm{flip}$ are in correspondence with the flip skew inversions of $\P$, and therefore $\lambda(\P^\mathrm{flip})=\mu(\P)$. As a consequence, $\zeta(\P^\mathrm{flip}) = \eta(\P)^\mathrm{flip}$. A similar argument shows that~$\mu(\P^\mathrm{flip})=\lambda(\P)$ and $\eta(\P^\mathrm{flip}) = \zeta(\P)^\mathrm{flip}$. \end{proof} \begin{figure} \begin{center} \includegraphics[scale=0.55]{zetaEta_cores.pdf} \end{center} \caption{In our running example, $\zeta(\P)$ bounds the partition $\lambda(\P)=(4,3,2,1,0)$ and $\eta(\P)$ bounds the conjugate of the partition $\mu(\P)=(3,2,2,1,1,1,0,0)$. } \label{fig:zetaEta_cores} \end{figure} \begin{example} \label{ex:zetaEta_cores} Figure~\ref{fig:zetaEta_cores} illustrates an example of the zeta map and the eta map applied to our running example path $\P$. From Example~\ref{ex:skew_inversions}, the $8$-boundary boxes in the $5$-rows of $\mathfrak{c}(\P)$ give $\lambda(\P)=(4,3,2,1,0)$ and the $5$-boundary boxes in the $8$-rows of the core $\mathfrak{c}(\P)$ give $\mu(\P)=(3,2,2,1,1,1,0,0)$. Then $\zeta(\P)$ is the path that bounds $\lambda(\P)$ and $\eta(\P)$ is the path that bounds the conjugate partition $\mu(\P)^c=(6,3,1,0,0)$. We often combine $\lambda(\P)$, $\zeta(\P)$, $\mu(\P)$, and $\eta(\P)$ as on the right hand side of Figure~\ref{fig:zetaEta_intervals}. The core partition $\mathfrak{c}(\P^c)$ corresponding to the conjugate path $\P^c$ is illustrated in the right part of Figure~\ref{fig:conjugate_cores}. The $a$-rows of this core are the rows with leading hooks 14, 6, and 2. Counting the number of $b$-boundary boxes in these rows shows that $\lambda(\P^c)=(6,3,1,0,0)$, which equals $\mu(\P)^c$. We see that $\eta(\P)=\zeta(\P^c)$. \end{example} \begin{figure} \begin{center} \includegraphics[scale=0.55]{zetaEta_flip} \end{center} \caption{Zeta and eta applied to the flipped Dyck path of our running example path $\P$.
} \label{fig:zetaEta_flip} \end{figure} \subsection{Zeta and eta via sweep maps}\label{sec:zetaEta_sweep}\ \renewcommand*{\thefootnote}{\fnsymbol{footnote}} This section presents the combinatorial description of the zeta map on rational Dyck paths as a \emph{sweep map} created by Loehr and Warrington in~\cite{ALW-sweep}. Heuristically, this map `sweeps' the line of fixed slope $\frac{a}{b}$ across $\P$ starting on the main diagonal moving to the northwest, recording north and east steps in the order in which they are met. Analogously, the eta map `sweeps' the line of slope $\frac{a}{b}$ across $\P$ starting at the farthest point from the main diagonal moving to the southeast, recording south and west steps in the order in which they are met\footnote{These definitions exhibit the choice of `east-north' or `west-south' convention in~\cite{ALW-sweep}.}. This procedure is illustrated for our running example in Figure~\ref{fig:zetaEta_sweep}. \begin{figure} \begin{center} \includegraphics[scale=0.55]{zetaEta_sweep.pdf} \end{center} \caption{Zeta and eta via sweep maps. The steps of $\zeta(\P)$ are labeled by the levels of the lattice points of $\P$ in order, recording whether they correspond to north or east levels. The steps of $\eta(\P)$ are labeled by the levels of the lattice points of $\P$ in reverse order starting from the upper right corner, recording whether they correspond to south or west levels.} \label{fig:zetaEta_sweep} \end{figure} Recall that the reading word $L(\P)$ is obtained by reading the levels that occur along the path from southwest to northeast, excluding the final $0$, and the reverse reading word $M(\P)$ is obtained by reading from northeast to southwest, excluding the final $0$. 
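Both sweeps are short enough to carry out by machine. The following is a minimal Python sketch (our own illustration, not part of the development; it assumes a path is stored as a string of N and E steps, so that the level of a lattice point $(x,y)$ is $by-ax$, a north step raising the level by $b$ and an east step lowering it by $a$):

```python
# Sketch of the sweep descriptions of zeta and eta.  A path is a string
# of 'N'/'E' steps; the level of a lattice point (x, y) is b*y - a*x.
def zeta_sweep(path, a, b):
    lvl, labels = 0, []
    for s in path:                      # label each step by its starting level
        labels.append((lvl, s))
        lvl += b if s == 'N' else -a
    # sweep: read the steps in increasing order of level
    return ''.join(s for _, s in sorted(labels))

def eta_sweep(path, a, b):
    lvl, labels = 0, []
    for s in reversed(path):            # walk backwards from the corner (b, a)
        labels.append((lvl, 'W' if s == 'E' else 'S'))
        lvl += a if s == 'E' else -b
    sw = ''.join(s for _, s in sorted(labels))   # a southwest path
    # rotate 180 degrees to obtain a northeast path from (0,0) to (b,a)
    return sw[::-1].replace('W', 'E').replace('S', 'N')

P = 'NNNENEEENEEEE'                     # running example (5,8)-Dyck path
print(zeta_sweep(P, 5, 8))              # NENENENENEEEE
print(eta_sweep(P, 5, 8))               # NNENEENEEENEE
```

On the running example this reproduces the paths shown in Figure~\ref{fig:zetaEta_sweep}.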
\begin{theorem}[{\cite{ALW-sweep}}]\label{thm:zeta_sweep} The zeta map can be computed as follows: \begin{enumerate} \item[$(1)$] Place a bar over each of the entries of $L(\P)$ corresponding to an east step; these occur exactly at the \emph{right (cyclic) descents} of $\sigma$. \item[$(2)$] Sort $L(\P)$ in increasing order, keeping track of the bars on various values. \item[$(3)$] Read the resulting sequence of labels (bars and non-bars) to produce a new northeast lattice path, which we denote $\zeta(\P)$. \end{enumerate} \end{theorem} \begin{theorem}\label{thm:eta_sweep} The eta map can be computed as follows: \begin{enumerate} \item[$(1')$] Place a bar over each of the entries of $M(\P)$ corresponding to a {west} step; these occur exactly at the \emph{right (cyclic) {ascents}} of $\tau$. \item[$(2')$] Sort $M(\P)$ in increasing order, keeping track of the bars on various values. \item[$(3')$] Read the resulting sequence of labels (bars and non-bars) to produce a new southwest lattice path from $(b,a)$ to $(0,0)$, which we denote $\eta(\P)$. \end{enumerate} \end{theorem} \begin{example} In our running example in Figure~\ref{fig:zetaEta_sweep}, we mark the reading word \[ L(\P)=(0,8,16,\bar{24},19,\bar{27},\bar{22},\bar{17},12,\bar{20},\bar{15},\bar{10},\bar{5}), \] which sorts to $(0,\bar{5},8,\bar{10},12,\bar{15},16,\bar{17},19,\bar{20},\bar{22},\bar{24},\bar{27})$. Thus $\zeta(\P)$ is the path \[NENENENENEEEE.\] We mark the reverse reading word \[M(\P)=(\bar{0},\bar{5},\bar{10},\bar{15},20,\bar{12},\bar{17},\bar{22},27,\bar{19},24,16,8),\] which sorts to $(\bar{0},\bar{5},8,\bar{10},\bar{12},\bar{15},16,\bar{17},\bar{19},20,\bar{22},24,27)$. 
Thus $\eta(\P)$ is the path \[WWSWWWSWWSWSS \textup{, which is equivalent to } NNENEENEEENEE.\] \end{example} \begin{remark} Note that both computations in Theorem~\ref{thm:zeta_sweep} and Theorem~\ref{thm:eta_sweep} can be performed just as easily on the standardization $\sigma(\P)$ of $L(\P)$, since only the relative values of the labels matter. \end{remark} \begin{proof}[Proof of Theorems~\ref{thm:zeta_sweep} and~\ref{thm:eta_sweep}] Consider the path $\zeta(\P)$ described in Theorem~\ref{thm:zeta_sweep}. The number of boxes to the left of the north step corresponding to a north level $n_i$ of $\P$ is equal to the number of east levels smaller than $n_i$. This number is equal to the number of skew inversions involving~$n_i$, which coincides with $\lambda_i$ by Lemma~\ref{lem:lambda_mu_inversions}~$(i)$. Therefore, the described algorithm to compute $\zeta(\P)$ coincides with the definition of zeta in Definition~\ref{def:zeta_eta}. Consider the (rotation of the) path $\eta(\P)$ described in Theorem~\ref{thm:eta_sweep}. The number of boxes below a given west step of $\P$ is equal to the number of south levels smaller than the corresponding west level. This number is equal to the number of flip skew inversions involving the corresponding level~$e_j$, which coincides with $\mu_j$ by Lemma~\ref{lem:lambda_mu_inversions}~$(ii)$. Therefore, the described algorithm to compute $\eta(\P)$ coincides with the definition of eta in Definition~\ref{def:zeta_eta}. \end{proof} \subsection{Zeta and eta via the laser filling}\ This section presents a new interpretation of zeta and eta, read from a \emph{laser filling} in the boxes below the path $\P$ and above the main diagonal. Our main result in this section describes the partitions $\lambda$ and $\mu$ in terms of the laser filling. This result will be used in Section~\ref{sec:square} to give a new combinatorial description of the inverse of the zeta map in the square case without the use of bounce paths.
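The laser filling defined below can likewise be computed mechanically. The following is a Python sketch (our own illustration, not part of the development; it assumes the path is stored as a string of N and E steps and uses integer cross-multiplication to keep the slope-$\frac{a}{b}$ laser tests exact):

```python
# Sketch: laser fillings of the boxes below a path and above the
# diagonal y = (a/b)x.  The laser through the SE corner (x0, y0) of a
# box is the line of slope a/b; it crosses the unit vertical wall from
# (xv, yv) to (xv, yv+1) iff  yv < y0 + a*(xv - x0)/b < yv + 1.
def laser_filling(path, a, b):
    x = y = 0
    walls, row_start = [], {}
    for s in path:
        if s == 'N':
            walls.append((x, y))        # unit vertical wall of the path
            row_start[y] = x            # leftmost box below the path in row y
            y += 1
        else:
            x += 1
    fill = {}
    for j, i0 in row_start.items():
        for i in range(i0, b):
            if b * j <= a * (i + 1):    # SE corner not strictly above diagonal
                continue
            x0, y0 = i + 1, j
            fill[(i, j)] = sum(1 for xv, yv in walls
                               if b * yv < b * y0 + a * (xv - x0) < b * (yv + 1))
    return fill

fill = laser_filling('NNNENEEENEEEE', 5, 8)      # running example
rows = sorted((sum(c for (i, j), c in fill.items() if j == r)
               for r in range(5)), reverse=True)
cols = sorted((sum(c for (i, j), c in fill.items() if i == k)
               for k in range(8)), reverse=True)
print(rows)  # [4, 3, 2, 1, 0]
print(cols)  # [3, 2, 2, 1, 1, 1, 0, 0]
```

Sorting the row and column sums in decreasing order recovers $\lambda(\P)=(4,3,2,1,0)$ and $\mu(\P)=(3,2,2,1,1,1,0,0)$ for the running example, and the fillings sum to the skew length $10$.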
Figure~\ref{fig:zetaEta_laser} illustrates the following definition. \begin{figure} \begin{center} \includegraphics[scale=0.55]{zetaEta_laser.pdf} \end{center} \caption{Laser filling of a path $\P$. The laser pointed from the lower right corner of the box with filling 2 crosses two vertical walls of the path, while all other lasers cross only one. The entries of the partition $\lambda(\P)=(4,3,2,1,0)$ are the sums of the laser fillings on the rows. The entries of the partition $\mu(\P)=(3,2,2,1,1,1,0,0)$ are the sums of the laser fillings on the columns. } \label{fig:zetaEta_laser} \end{figure} \begin{definition} Let $\P$ be an $(a,b)$-Dyck path and let $B$ be a box below $\P$ and above the line $y=\frac{a}{b}x$. Draw the line of slope $\frac{a}{b}$ through the southeast corner of $B$ (a bi-directional laser). The {\em laser filling} of $B$ is equal to the number of vertical walls of $\P$ crossed by the laser. Equivalently, it is equal to the number of horizontal walls of $\P$ crossed by the laser. \end{definition} \begin{remark} Lasers also appear in the work of Armstrong, Rhoades, and Williams~\cite{ARW13}. Their lasers stop at the first wall they meet; by contrast, our lasers traverse (and count!)\ the walls of the path~$\P$. \end{remark} \begin{theorem}\label{thm:laser} The partitions $\lambda$ and $\mu$ associated to $\P$ can be computed as follows: \begin{enumerate}[(i)] \item The parts of $\lambda(\P)$ are the sums of the laser fillings in the rows. \item The parts of $\mu(\P)$ are the sums of the laser fillings in the columns. \end{enumerate} \end{theorem} \begin{proof} The entries of $\lambda$ count the skew inversions involving the north levels of each of the vertical steps in the path. For a given vertical step, this number is equal to the number of horizontal steps in the path that are strictly below the laser through its starting point.
Each of these horizontal steps is crossed by exactly one of the lasers through the lower right corners of the boxes below the path that are in the same row as the vertical step under consideration. Statement {\em (i)} follows, and Statement {\em (ii)} is proved similarly. \end{proof} \begin{corollary} The skew length of $\P$ is equal to the sum of the laser fillings of $\P$. \end{corollary} \begin{proof} The skew length of $\P$ is equal to the area of $\lambda(\P)$. By the previous theorem, this area is equal to the sum of all laser fillings of $\P$. \end{proof} \subsection{Zeta and eta via interval intersections}\label{sec:zetaEta_intervals}\ This section presents a second new combinatorial interpretation of zeta and eta in terms of {\em interval intersections}. Each step of the path $\P$ has an associated closed interval whose endpoints are the levels of its starting and ending points. Once the intervals are sorted in increasing order, zeta and eta can be determined directly.
\end{theorem} \begin{proof} This theorem is a straightforward consequence of Lemma~\ref{lem:lambda_mu_inversions}. \end{proof} \begin{figure} \begin{center} \includegraphics[scale=0.55]{zetaEta_intervals.pdf} \end{center} \caption{Zeta and eta via interval intersections. The intervals on the left correspond to the ordered level intervals of the vertical steps in the path. The intervals on the top correspond to the level intervals of the horizontal steps. The shaded boxes of $\lambda$ and $\mu$ are the boxes whose corresponding row and column intervals do not intersect.} \label{fig:zetaEta_intervals} \end{figure} \begin{example} For our running example path $\P$, the north intervals are $[0,8]$, $[8,16]$, $[12,20]$, $[16,24]$, and $[19,27]$, which can be read directly from the north steps of $\P$, or calculated from the north levels as in Definition~\ref{def:intervals}. Similarly, the east intervals of $\P$ are $[0,5]$, $[5,10]$, $[10,15]$, $[12,17]$, $[15,20]$, $[17,22]$, $[19,24]$, and $[22,27]$. Labeling the rows of a $5\times 8$ grid with the north intervals and the columns with the east intervals gives the right side of Figure~\ref{fig:zetaEta_intervals}. The shaded boxes are those where the corresponding row interval does not intersect the corresponding column interval, from which $\lambda(\P)$, $\mu(\P)$, $\zeta(\P)$, and $\eta(\P)$ can be read posthaste. \end{example} \section{Pairing the zeta map with the eta map}\label{sec:inverse} By considering the zeta map together with the eta map, we gain two new ideas: a new approach for proving that the zeta map is a bijection and (if $\zeta$ is a bijection) a new area-preserving involution on the set of $(a,b)$-Dyck paths. For clarity and consistency, we have decided to use the letter $P$ to denote a path that is in the domain of $\zeta$ and use the letter $Q$ to denote a path that is in the image of $\zeta$.
\subsection{Inverse of the zeta map knowing eta}\label{sec:zetaeta}\ For the image $\calZ$ under the pair of maps \[(\zeta,\eta):\DD\rightarrow\DD\times\DD,\] we define a map $\iota:\calZ\rightarrow\DD$ such that $(\zeta,\eta)\circ\iota$ is the identity map. Further, we conjecture that for every $(a,b)$-Dyck path $Q$ that appears as the image of $\zeta$, there exists a unique $(a,b)$-Dyck path $R$ such that $(Q,R)\in \calZ$. This would imply that in~$\calZ$ every element of $\DD$ appears exactly once as the first entry of a pair, from which it would follow that the zeta map is a bijection. \begin{definition} A pair of $(a,b)$-Dyck paths $(Q,R)$ is an {\em admissible pair} if $(Q,R)=\big(\zeta(\P),\eta(\P)\big)$ for some $(a,b)$-Dyck path $\P$. The {\em set of admissible pairs} $\calZ\subset\DD\times\DD$ is the image under the pair of maps $(\zeta,\eta):\DD\rightarrow\DD\times\DD$. \end{definition} We now give a simple combinatorial description of the inverse map $\iota$ that recovers $\P$ from the pair~$(Q,R)$ or, equivalently, from the pair of partitions ($\lambda$, $\mu$) they bound. \begin{definition} \label{def:iota} Let $(Q,R)$ be an admissible pair. Define $\iota(Q,R)$ as follows. \begin{enumerate} \item[(1)] Draw the path $Q$ above the diagonal and rotate the path $R$ 180 degrees so that it embeds below the diagonal in the same diagram. Label the steps of each path from~$1$ to~$a+b$ starting at the bottom-left corner and ending at the top-right corner in the order in which they appear in the path. \item[(2)] Create the permutation $\gamma:[a+b]\to [a+b]$ as follows. If $l$ is a label of a horizontal step in $Q$, define $\gamma(l)$ to be the label of the horizontal step in $R$ that is in the same column as~$l$. If $l$ is a label of a vertical step in $Q$, define $\gamma(l)$ to be the label of the vertical step in $R$ that is in the same row as $l$. \item[(3)] For admissible pairs $(Q,R)$, $\gamma$ is a cycle permutation.
Interpret $\gamma$ in cycle notation as $(\sigma_1,\sigma_2,\hdots,\sigma_{a+b})$, fixing $\sigma_1=1$. Define $\P=\iota(Q,R)$ to be the path whose east steps correspond to the cyclic descents of $\sigma$.\footnote{A descent occurs when $\sigma_i>\sigma_{i+1}$. A cyclic descent is defined in the same way, but considering the indices modulo~$a+b$, allowing a descent in the last position of $\sigma$.} \end{enumerate} \end{definition} \begin{theorem} \label{thm:inverse} The permutation $\gamma$ is a cycle permutation and the map $\iota$ is the inverse map for the pair $(\zeta,\eta)$. \end{theorem} \begin{proof} Suppose $(Q,R)$ is an admissible pair, so that there exists a $\P\in\DD$ such that $(Q,R)=(\zeta(\P),\eta(\P))$. Label the steps of $Q$ and $R$ with the levels of $\P$ as determined by the sweep map algorithm given in Theorems~\ref{thm:zeta_sweep} and \ref{thm:eta_sweep} (as illustrated in Figure~\ref{fig:zetaEta_sweep}). Defining the permutation $\gamma$ using these labels instead of the labels $[a+b]$ induces a permutation on the set of levels of the lattice points of $\P$. We will prove that this permutation is the cycle permutation given by the reading word $L(\P)$ of $\P$. Because of the relationship between the forward reading word $L(\P)$ and the reverse reading word $M(\P)$, the labels of the vertical steps of $R$ are exactly the labels of the vertical steps of $Q$ plus $b$, while the labels of the horizontal steps of $R$ are exactly the labels of the horizontal steps of $Q$ minus $a$. This implies that the permutation $\gamma$ maps the level of a lattice point in $\P$ to the level of the next lattice point along $\P$, forming a permutation on the set of labels that is a cycle ordered by the reading word~$L(\P)$.
Since the level labels appear in order as we walk along $Q$, only the relative order of the labels matters; returning all labels to the numbers from~$1$ up to $a+b$ recovers $\gamma(\P)$, which when interpreted as a permutation in one line notation is the reading permutation $\sigma(\P)$. By Remark~\ref{rem:sigma}, we recover $\P$ directly from $\sigma(\P)$ and the result follows. \end{proof} Taken with Theorem~\ref{thm:inverse}, the following conjecture would imply that $\zeta$ is a bijection. \begin{conjecture} Suppose that $Q\in\DD_{a,b}$. There exists at most one $R\in\DD_{a,b}$ such that $(Q,R)\in\calZ$. \end{conjecture} \begin{example} Figure~\ref{fig:zetaEta_inverse} illustrates the procedure outlined in Definition~\ref{def:iota} for the pair~$(Q,R)=(\zeta(\P),\eta(\P))$ from our running example $\P$. After labeling the paths $Q=\zeta(\P)$ and $R=\eta(\P)$ from $1$ to $13$, we see that $\gamma(1)=3$, $\gamma(2)=1$, $\gamma(3)=7$, etc. Writing $\gamma$ in cycle notation gives \[\gamma=(1,3,7,{\bf 12},9,{\bf 13},{\bf 11},{\bf8},5,{\bf10},{\bf6},{\bf4},{\bf2}).\] If we instead interpret this sequence of numbers as the one line notation of a permutation $\sigma$, the cyclic descents of $\sigma$ are bolded and correspond to the east steps of $\iota(Q,R)$. We see that $\iota(Q,R)=\P$. \begin{figure} \begin{center} \includegraphics[scale=0.5]{zetaEta_inverse} \end{center} \caption{Calculating $\P=\iota(Q,R)$ using the method in Definition~\ref{def:iota}.} \label{fig:zetaEta_inverse} \end{figure} \end{example} \begin{remark} The essence of the proof of Theorem~\ref{thm:inverse} is that the $\zeta$ and $\eta$ maps track the positions of the right cyclic descents of $L(\P)$ and $M(\P)$. Using these two sets of data, and the precise relationship between $L(\P)$ and $M(\P)$, we are able to solve for the levels of $\P$. Interestingly, $\zeta(\P)$ does not obviously contain enough information to reconstruct $\P$. 
We cannot construct a unique permutation solely from its collection of right descents, and need additional information to recover $\P$. In the standard Catalan case, this additional information is essentially implied by the particular structure of the $n\times (n+1)$ rectangle; for the general case, we obtain the extra information necessary from $\eta(\P)$. \end{remark} \begin{remark} When pairing arbitrary $Q$ and $R$ paths, a number of things can go wrong. First, Theorem~\ref{thm.slconj} implies that in order to come from an actual path, we must have $\ensuremath{\mathsf{area}}(Q)=\ensuremath{\mathsf{area}}(R)$. Second, we know that $\gamma$ must have a single cycle; it is simple to construct examples where this does not occur. It is also possible to find pairs $(Q,R)$ where $\gamma$ has a single cycle, but the labels $l_i$ obtained from the reverse bijection are in the wrong relative order. In other words, we may have $\zeta(\iota(Q,R))\neq Q$. \end{remark} We propose the problem of characterizing all possible permutations $\gamma(\P)$. As a straightforward consequence of the description of this permutation in terms of the pair $Q$ and $R$, we state Proposition~\ref{prop.exc} without proof. \begin{proposition} \label{prop.exc} The positions of the exceedences of $\gamma(\P)$ give the collection of north steps in~$\zeta(\P)$, and the values of the exceedences of $\gamma(\P)$ are the north steps in~$\eta(\P)$ when rotated $180^\circ$. \end{proposition} \subsection{An area-preserving involution on rational Dyck paths}\ \label{sec:perp} If $\zeta$ is invertible, we can use $\eta$ to define a new area-preserving involution on the set of $(a,b)$-Dyck paths, induced by the conjugate map under~$\zeta$, which we call the conjugate-area map. This involution sends the path $\zeta(\P)$ to the path $\eta(\P)=\zeta(\P^c)$ and is predictable for certain families of $(a,b)$-Dyck paths.
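Before turning to the involution, we note that the inverse construction of Definition~\ref{def:iota} is short enough to test by machine. The following is a Python sketch (our own illustration, not part of the development; it assumes paths are stored as strings of N and E steps, so that rotating $R$ by $180$ degrees amounts to reversing its step string):

```python
# Sketch of Definition (iota): build gamma from the pair (Q, R) and read
# iota(Q, R) off the cyclic descents of its cycle notation.
def iota(Q, R, a, b):
    Rr = R[::-1]                        # R rotated 180 degrees, read from (0,0)
    def step_positions(path):
        x = y = 0
        col, row = {}, {}               # label of the E step in each column,
        for k, s in enumerate(path, 1): # and of the N step in each row
            if s == 'E':
                col[x] = k; x += 1
            else:
                row[y] = k; y += 1
        return col, row
    rcol, rrow = step_positions(Rr)
    gamma, x, y = {}, 0, 0
    for k, s in enumerate(Q, 1):        # match steps of Q with steps of rotated R
        if s == 'E':
            gamma[k] = rcol[x]; x += 1
        else:
            gamma[k] = rrow[y]; y += 1
    sigma = [1]                         # cycle notation of gamma, starting at 1;
    while len(sigma) < a + b:           # assumes gamma is a single cycle
        sigma.append(gamma[sigma[-1]])  # (true for admissible pairs)
    # east steps of iota(Q, R) sit at the cyclic descents of sigma
    return ''.join('E' if sigma[i] > sigma[(i + 1) % (a + b)] else 'N'
                   for i in range(a + b))

Q, R = 'NENENENENEEEE', 'NNENEENEEENEE'  # (zeta(P), eta(P)) for the running example
print(iota(Q, R, 5, 8))                  # NNNENEEENEEEE
```

On the running example pair $(Q,R)=(\zeta(\P),\eta(\P))$ this recovers the original path $\P$, matching the computation in Figure~\ref{fig:zetaEta_inverse}.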
\begin{definition} The \emph{conjugate-area map} applied to an $(a,b)$-Dyck path $Q$ is the path \[\chi(Q):=\zeta \circ c \circ \zeta^{-1}(Q).\] If $\lambda$ is the partition bounded by $Q$, we define $\chi(\lambda)$ to be the partition bounded by $\chi(Q)$. \end{definition} \begin{figure}[h] \begin{tikzpicture} \matrix (m) [matrix of math nodes, row sep=4em, column sep=6em] { \P & \zeta(\P) \\ \P^c & \eta(\P) \\ }; { [start chain] \chainin (m-1-1); \chainin (m-1-2) [join={node[above,labeled] {\text{zeta}}}]; \chainin (m-2-2) [join={node[right,labeled] {\text{conj-area}}}]; } { [start chain] \chainin (m-1-1); \chainin (m-2-1) [join={node[left,labeled] {\text{conjugate}}}]; \chainin (m-2-2) [join={node[below,labeled] {\text{zeta}}}]; } { [start chain] \chainin (m-1-1); \chainin (m-2-2) [join={node[right,labeled] {\raisebox{.1in}{\text{\tiny eta}}}}]; } \end{tikzpicture} \caption{Diagrammatic description of the conjugate-area involution.} \label{fig:alpha} \end{figure} \begin{remark} For partitions $\lambda$ and $\mu$ bounded by $\zeta(\P)$ and $\eta(\P)$ we have $\chi(\lambda) = \mu^c$. \end{remark} \begin{proposition} If the zeta map is a bijection then the conjugate-area map is an area-preserving involution on the set of $(a,b)$-Dyck paths. \end{proposition} \begin{proof} Since conjugation is an involution, we see that applying the operator $\zeta \circ c \circ \zeta^{-1}$ twice is equal to the identity, and therefore $\chi(\chi(Q)) = Q$. Furthermore, conjugation preserves skew length~(Theorem~\ref{thm.slconj}), which is mapped to co-area via the zeta map. Thus, $\chi$ must be an area-preserving involution. \end{proof} One possible approach to prove that $\zeta$ is a bijection would be to directly construct the involution~$\chi$. In Section~\ref{sec:square} we show that in the square case $\chi$ is exactly the map that reverses the path~$\P$; equivalently one finds $\chi(\lambda)$ by simple conjugation. 
In the rational case, conjugation must fail in general because conjugates of partitions may not sit above the main diagonal. However, Proposition~\ref{prop:justified} reflects our empirical observation that for `small' partitions $\lambda$, $\chi(\lambda)$ is often the conjugate. We have found that $\chi$ is predictable in other families of examples as well; in Section~\ref{sec:inductive_zeta_inverse} we present an inductive combinatorial description of the inverse of the zeta map and of the area-preserving involution for a nice family of examples. \begin{example}[Left-justified and up-justified partitions] Consider two families of partitions whose Young diagrams fit above the main diagonal in the $a\times b$ grid. Let $n\in \mathbb{N}$ be at most the number of boxes above the main diagonal in the $a\times b$ grid. Define the {\em left-justified partition}~$\lambda^n$ to be the unique partition whose Young diagram has $n$ boxes as far to the left as possible and the {\em up-justified} partition $\nu^n$ to be the unique partition whose Young diagram has $n$ boxes as far up as possible. Figure~\ref{fig:leftDown_partitions} shows $\lambda^8=(3,2,2,1)$ embedded above the diagonal and $\nu^8=(6,2)$ rotated 180 degrees and embedded below the diagonal. We use the notation $\nu^n$ because it is the conjugate of what one might expect if we called it $\mu^n$, as pointed out by an astute referee. \begin{proposition} \label{prop:justified} The left-justified and up-justified partitions are related by the conjugate-area map: \[ \chi(\lambda^n) = \nu^n. \] Moreover, $\zeta^{-1}(\lambda^n)$ is the path with area $n$ containing the first $n$ positive hooks in the grid. \end{proposition} \begin{proof} Let $\P^n$ be the path containing the first $n$ positive hooks in the grid. This path consists of all the boxes below a line parallel to the main diagonal sitting in the highest level of the path, and therefore all the labels in the laser filling are equal to 1.
Adding the labels in the rows and the columns, we obtain the partitions $\lambda^n$ and $(\nu^n)^c$. \end{proof} \begin{figure} \begin{center} \includegraphics[scale=0.5]{leftDown_partitions} \end{center} \caption{The left-justified partition $\lambda^8$, up-justified partition $\nu^8$, and corresponding path~$\P^8$.} \label{fig:leftDown_partitions} \end{figure} Figure~\ref{fig:leftDown_partitions} illustrates an example of left-justified and up-justified partitions $\lambda^n$ and $\nu^n$ together with their corresponding path $\P^n$ for $n=8$. The reader is invited to verify that $\zeta(\P^8)$ and $\eta(\P^8)$ are given by the paths bounding $\lambda^8$ and $\nu^8$ using any of the methods described in Section~\ref{sec:zeta_map}, as well as to verify that the inverse map $\iota$ presented in Section~\ref{sec:inverse} gives $\P^8$ when applied to the paths bounding $\lambda^8$ and $\nu^8$. \end{example} \section{The square case}\label{sec:square} In this section, we consider $(n,n+1)$-Dyck paths, lattice paths in an $n\times (n+1)$ grid staying above the main diagonal. They are in bijection with classical Dyck paths in an $n\times n$ grid by simply forgetting the last east step of the path. Haglund and Haiman~\cite{haglund2008q} discovered a beautiful description of the inverse of the zeta map in this case using a bounce path that completely characterizes the area sequence below the path. We present a new combinatorial description of the inverse of the zeta map in this case in terms of an area-preserving involution. This approach opens a new direction in proving that the zeta map is a bijection in the general $(a,b)$ case. \subsection{The conjugate-area involution, conjugate partitions and reverse paths.}\ Let $Q$ be an $(n,n+1)$-Dyck path. The area-preserving involution $\chi$ conjugates the partition~$\lambda$ bounded by the path $Q$. This was proved in \cite[Theorem 9]{GMII}; we provide a new proof using our laser interpretation of zeta and eta.
For simplicity, denote by ${Q}^r$ the path whose bounded partition is $\lambda^c$. We refer to $Q^r$ as the \emph{reverse path} of $Q$. Forgetting the last east step of the path, the reverse operation acts by reversing the path in the $n\times n$ grid. An example of the conjugate-area involution, conjugate partition and reverse path is illustrated in Figure~\ref{fig:square_perp}. \begin{figure} \begin{center} \includegraphics[scale=0.5]{square_perp} \end{center} \caption{The conjugate-area involution in the $(n,n+1)$ case.} \label{fig:square_perp} \end{figure} \begin{theorem}[\cite{GMII}]\label{thm:square_perp} For a Dyck path $Q$ and the partition $\lambda$ it bounds, we have $\chi(Q) = Q^r$ and~$\chi(\lambda) = \lambda^c$. \end{theorem} \begin{proof} We need to show that the partitions $\lambda$ and $\mu$ bounded by the images $\zeta(\P)$ and $\eta(\P)$ of any $(n,n+1)$-Dyck path $\P$ satisfy \[ \chi(\lambda) = \mu^c = \lambda^c. \] Equivalently, we need to show that $\lambda=\mu$. The entries of the partitions $\lambda$ and $\mu$ are the sums of the labels in the laser filling of $\P$ over the rows and columns respectively (Theorem~\ref{thm:laser}). We will show that the values of the sums over the rows are in correspondence with the values of the sums over the columns, and therefore $\lambda=\mu$. This correspondence is illustrated for an example in Figure~\ref{fig:square_perp_proof}. \begin{figure} \begin{center} \includegraphics[scale=0.5]{square_perp_proof} \end{center} \caption{Argument in the proof of Theorem~\ref{thm:square_perp}. The values of the sums over the rows are in correspondence with the values of the sums over the columns, and therefore $\lambda=\mu$.} \label{fig:square_perp_proof} \end{figure} For every row, draw a line of slope 1 in the northeast direction starting at the initial point of the north step in that row. This line hits the path for the first time at the ending point of an east step of the path.
The labels of the laser filling in the boxes in the column corresponding to this east step are exactly the same as the labels of the laser filling in the row under consideration. (This is because the lasers are lines with slope $\frac{n}{n+1}$, which implies that any two boxes on the same diagonal of slope 1 whose line of sight is not interrupted by the path $\P$ have the same laser filling.) Thus, their corresponding sums are equal. Doing this for all the rows gives the desired correspondence between the entries of the partition $\lambda$ and the entries of the partition $\mu$. \end{proof} \subsection{The inverse of the zeta map}\ Because Theorem~\ref{thm:square_perp} provides the explicit formula for $\chi$, the method to find the inverse of the zeta map in the $(n,n+1)$ case follows as a direct consequence of Theorem~\ref{thm:inverse}. The description of the map $\iota$ is presented in Definition~\ref{def:iota}. \begin{theorem}\label{thm_square_inverse} Let $Q$ be an $(n,n+1)$-Dyck path. Then, $\zeta^{-1}(Q)=\iota(Q,Q^r)$. \end{theorem} An example of this result is illustrated in Figure~\ref{fig:square_zeta_inverse1}. The laser filling of the path $\zeta^{-1}(Q)$ in this example is shown in Figure~\ref{fig:square_perp_proof}. One can verify that the sum of the labels of the laser filling on the rows and columns gives rise to the partitions $\lambda$ and $\mu$ bounded by $Q$ and $\chi(Q)$ (Theorem~\ref{thm:laser}). \begin{figure} \begin{center} \includegraphics[scale=0.5]{square_zeta_inverse1} \end{center} \caption{The inverse of $\zeta$ by way of conjugate partitions.} \label{fig:square_zeta_inverse1} \end{figure} An alternative way to obtain the cycle permutation $\gamma$ directly from $Q$ is as follows. Shade the boxes in the $n\times (n+1)$ rectangle that are crossed by the main diagonal as illustrated in Figure~\ref{fig:square_zeta_inverse2}.
Move east from a vertical step labeled $i$ until the center of the first shaded box you see, and then move up until hitting a horizontal step of the path. The image $\gamma(i)$ is equal to the label of this horizontal step plus 1. In the example of Figure~\ref{fig:square_zeta_inverse2}, the path starting at the vertical step labeled 7 hits the horizontal step 12, therefore $\gamma(7)=12+1=13$. In order to determine $\gamma(i)$ for the label $i$ of a horizontal step, we move down until the center of the last shaded box we see, and then move left until hitting a vertical step of the path. As before, $\gamma(i)$ is equal to the label of this vertical step plus 1. In the example, $\gamma(15)=10+1=11$. The image of the label of the first horizontal step of the path is by definition equal to 1. Interpret $\gamma$ in cycle notation as $(\sigma_1,\sigma_2,\dots,\sigma_{2n+1})$ where we fix $\sigma_1=1$. As a direct consequence of Theorem~\ref{thm_square_inverse} we get: \begin{theorem}\label{thm_square_inverse2} Let $Q$ be an $(n,n+1)$-Dyck path. The inverse $\zeta^{-1}(Q)$ is the path whose east steps correspond to the cyclic descents of the permutation~$\gamma$ when interpreted in one-line notation. \end{theorem} \begin{figure} \begin{center} \includegraphics[scale=0.5]{square_zeta_inverse2} \end{center} \caption{Alternative description of the cycle permutation $\gamma$.} \label{fig:square_zeta_inverse2} \end{figure} \section{Zeta inverse and area-preserving involution for a nice family of examples}\label{sec:inductive_zeta_inverse} In this section we present an inductive combinatorial description of the inverse of the zeta map and of the conjugate-area involution~$\chi$ for a nice family of $(a,b)$-Dyck paths. This family consists of the Dyck paths that contain the lattice point with level $1$. Such Dyck paths are obtained by concatenating two Dyck paths in the $a'\times b'$ and $a'' \times b''$ rectangles illustrated in Figure~\ref{fig:base_induction}.
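The side lengths $a',b',a'',b''$ of these two rectangles, characterized below by $a'b-b'a=1$ and $b''a-a''b=1$, can be computed with modular arithmetic: $a'$ is the inverse of $b$ modulo $a$, and the complementary sides $a''=a-a'$, $b''=b-b'$ automatically satisfy the second equation since $b''a-a''b=a'b-b'a$. A hedged Python sketch (the function name is ours; we assume $\gcd(a,b)=1$ and $a\geq 2$):

```python
from math import gcd

def splitting_rectangles(a, b):
    """Sides (a', b', a'', b'') with 0 < a', a'' < a and 0 < b', b'' < b
    satisfying a'*b - b'*a == 1 and b''*a - a''*b == 1."""
    assert gcd(a, b) == 1 and a >= 2
    # a' is the multiplicative inverse of b modulo a, taken in {1, ..., a-1}
    ap = next(x for x in range(1, a) if (x * b) % a == 1)
    bp = (ap * b - 1) // a
    # the complementary rectangle has sides a - a' and b - b'
    return ap, bp, a - ap, b - bp
```

For instance, for $(a,b)=(3,5)$ this returns $(2,3,1,2)$: indeed $2\cdot 5-3\cdot 3=1$ and $2\cdot 3-1\cdot 5=1$.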
The sides of these two rectangles are the unique positive integers $0<a',a''<a$ and $0<b',b''<b$ such that \begin{align*} a' b - b' a &=1, \\ b'' a - a'' b &=1. \end{align*} As a consequence, $a'$ and $b'$ are relatively prime as well as $a''$ and $b''$, allowing us to apply induction. \begin{figure}[htbp] \includegraphics[width=0.6\textwidth]{base_induction} \caption{Base induction for zeta inverse and the area-preserving involution.} \label{fig:base_induction} \end{figure} \subsection{Zeta inverse} Let $P$ be an $(a,b)$-Dyck path containing the lattice point at level $1$, and let $P'$ and $P''$ be the two Dyck paths in the $a'\times b'$ and $a'' \times b''$ rectangles whose concatenation is equal to~$P$. Define the \emph{star product} $P' \star P''$ as the path obtained by cutting~$P'$ at its highest level and infixing $P''$. This special product is illustrated in Figure~\ref{fig:star_product}. Note that the highest level of $P'$ can be equivalently obtained by sweeping the main diagonal of either the $a\times b$ rectangle or the $a'\times b'$ rectangle. \begin{figure}[htbp] \includegraphics[width=0.7\textwidth]{star_product} \caption{Star product of rational Dyck paths.} \label{fig:star_product} \end{figure} \begin{theorem}\label{thm:inductive_inverse} If $Q$ is an $(a,b)$-Dyck path containing the lattice point at level 1, the zeta inverse of $Q$ is equal to the star product of the zeta inverses of $Q'$ and $Q''$: \[\zeta^{-1}(Q) = \zeta^{-1}(Q') \star \zeta^{-1}(Q''). \] \end{theorem} \begin{proof} We will show that $\zeta(P'\star P'')$ is the concatenation of $\zeta(P')$ and $\zeta(P'')$; the theorem then follows by applying $\zeta^{-1}$ to both sides of the resulting equation. Since the path $P'$ is cut at its highest level, sweeping the main diagonal of the $a\times b$ rectangle crosses the levels of $P'\star P''$ corresponding to the path $P'$ first, followed by all the levels corresponding to the path $P''$.
Therefore, $\zeta(P'\star P'')$ is the concatenation of $\zeta(P')$ and $\zeta(P'')$. \end{proof} \subsection{Area-preserving involution} The conjugate-area map of $Q$ can be obtained by induction in this case as well. \begin{lemma}\label{lem:areadif_level} Let $l$ be the level of a lattice point $p$ in the $a\times b$ grid. If $U_l$ is the rectangle composed of the boxes northwest of $p$ and $\tilde U_l$ is the rectangle composed of the boxes southeast of $p$, then \[ \ensuremath{\mathsf{area}}(\tilde U_l)-\ensuremath{\mathsf{area}}(U_l) = l. \] \end{lemma} \begin{proof} If $p=(p_1,p_2)$, then $l = p_2b-p_1a$. Furthermore, \[ \ensuremath{\mathsf{area}}(\tilde U_l) - \ensuremath{\mathsf{area}}(U_l)= (b-p_1)p_2 - p_1(a-p_2) = p_2b-p_1a =l. \] \end{proof} \begin{theorem}\label{thm:inductive_area} Let $Q$ be an $(a,b)$-Dyck path containing the lattice point at level 1. The bounded partition of $\chi(Q)$ is the partition whose restriction to the $a'\times b'$ and $a''\times b''$ rectangles gives the bounded partitions of $\chi(Q')$ and $\chi(Q'')$, and which contains all boxes below the main diagonal outside the two rectangles. \end{theorem} \begin{proof} We first observe that $\chi(Q)$ defined this way has the same area as $Q$. This is equivalent to showing that the bounded partitions of $\chi(Q)$ and $Q$ have the same area, when restricted to the complement of the $a'\times b'$ and $a''\times b''$ rectangles. These restrictions are exactly the rectangle $\tilde U_1$ after removing the box on its upper left corner, and the rectangle $U_1$. The claim then follows by Lemma~\ref{lem:areadif_level}. Now, let $\gamma'$ and $\gamma''$ be the cycle permutations arising from the pairs $(Q',\chi(Q'))$ and $(Q'',\chi(Q''))$, and $\gamma$ be the cycle permutation of $(Q,\chi(Q))$. The cycle permutation $\gamma$ can be obtained by cutting~$\gamma'$ exactly before its highest value and putting $\gamma''$ in between, with all its values increased by $a'+b'$.
In the example in Figure~\ref{fig:inductive_area} we get \begin{align*} \gamma' &= ({\color{blue} 1, 3\ |\ 5, 4, 2}) \\ \gamma'' &= ({\color{red} 1, 3, 7, 5, 8, 6, 4, 2}) \\ \gamma &= ( {\color{blue}1, 3,} \ {\color{red}6, 8, 12, 10, 13, 11, 9, 7,} \ {\color{blue} 5, 4, 2}) \end{align*} The cyclic descents of $\gamma$ correspond exactly to the cyclic descents of $\gamma'$ and $\gamma''$. Moreover, the descent at the highest value of $\gamma'$ corresponds to the east step at the highest level of $\zeta^{-1}(Q')$. Thus, replacing cyclic descents in $\gamma$ by east steps and ascents by north steps gives rise to the star product~$\zeta^{-1}(Q')\star \zeta^{-1} (Q'')$, which is equal to $\zeta^{-1}(Q)$ by Theorem~\ref{thm:inductive_inverse}. \end{proof} \begin{figure}[htbp] \includegraphics[width=0.3\textwidth]{inductive_area} \caption{The inductive conjugate-area map for paths containing level 1.} \label{fig:inductive_area} \end{figure} \subsection{$k$th valley Dyck paths} One interesting family of $(a,b)$-Dyck paths is the family of \emph{$k$th valley Dyck paths}, paths $Q_k$ with valleys at levels $0,1,2,\dots , k$ for some $k < a$. The conjugate-area map for these paths behaves very nicely and can be described in terms of the rectangles $U_l$ and $\tilde U_l$ of Lemma~\ref{lem:areadif_level}. For $0<l<a$, consider the collections of boxes $V_l$ and $\hat V_l$ defined by \[ V_l = U_l \smallsetminus \bigcup_{i=1}^{l-1} U_i, \hspace{1cm} \hat V_l = \hat U_l \smallsetminus \bigcup_{i=1}^{l-1} \hat U_i, \] where $\hat U_i$ is composed of the boxes of $\tilde U_i$ that are below the main diagonal. Equivalently, $\hat U_i$ is the result of removing the box in the upper left corner of $\tilde U_i$. An example is illustrated in Figure~\ref{fig:kthDyckpaths}. \begin{lemma} For $0<l<a$, the area of $V_l$ is equal to the area of $\hat V_l$.
\end{lemma} \begin{proof} Since $V_1=U_1$ and $\hat V_1=\hat U_1$, which is $\tilde U_1$ after removing one box, Lemma~\ref{lem:areadif_level} implies that~$V_1$ and $\hat V_1$ have the same area. The level 2 becomes level 1 in the smaller $a''\times b''$ rectangle, and~$V_2,\hat V_2$ are given by $U_1,\hat U_1$ in this smaller rectangle. Again, Lemma~\ref{lem:areadif_level} implies that $V_2$ and $\hat V_2$ have the same area. Continuing the same argument in the smaller rectangles that appear in the process finishes the proof. \end{proof} \begin{figure}[htbp] \includegraphics[width=0.45\textwidth]{kthDyckpaths} \caption{Example of the conjugate-area map for $k$th valley Dyck paths for $k=3$. The area of $V_i$ is equal to the area of $\hat V_i$.} \label{fig:kthDyckpaths} \end{figure} Note that the bounded partition of $Q_k$ is the (disjoint) union of $V_1,\dots,V_k$. \begin{proposition} The bounded partition of $\chi(Q_k)$ is the (disjoint) union of $\hat V_1,\dots,\hat V_k$. \end{proposition} \begin{proof} Note that the restriction of $Q_k$ to the $a'\times b'$ and $a''\times b''$ rectangles gives two smaller $k$th valley Dyck paths $Q'_{k'}$ and $Q''_{k''}$. The result then follows directly from Theorem~\ref{thm:inductive_area} by induction on~$k$. \end{proof} Figure~\ref{fig:kthDyckpaths_inverse} illustrates an example of the inverse of the zeta map for $k$th valley Dyck paths obtained by applying Theorem~\ref{thm:inverse}. \begin{figure}[htbp] \includegraphics[width=0.9\textwidth]{kthDyckpaths_inverse} \caption{Example of the inverse of the zeta map for $k$th valley Dyck paths for $k=3$.} \label{fig:kthDyckpaths_inverse} \end{figure} \section{The delta statistic and initial bounce paths}\label{sec:9} We have tried a number of approaches for showing that the zeta map is a bijection. Our last approach uses a delta statistic which can be estimated by means of an initial bounce path.
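The delta statistic defined just below is computable directly from the step word of a path. A minimal Python sketch, under our own encoding: paths are strings over 'N'/'E', and the level of a lattice point $(x,y)$ in the $a\times b$ grid is $yb-xa$, as in Lemma~\ref{lem:areadif_level}; the reading word lists the levels reached after each of the $a+b$ steps.

```python
def levels(path, a, b):
    """Levels y*b - x*a at the lattice points visited after each step of an
    (a,b)-Dyck path, encoded as a string over 'N' (north) and 'E' (east)."""
    x = y = 0
    out = []
    for step in path:
        if step == 'N':
            y += 1
        else:
            x += 1
        out.append(y * b - x * a)
    return out

def delta(path, a, b):
    """Number of levels along the path that are smaller than a+b."""
    return sum(1 for l in levels(path, a, b) if l < a + b)
```

For the $(2,3)$-Dyck path 'NNEEE' the reading word is $(3,6,4,2,0)$, and four of these levels are smaller than $a+b=5$.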
\begin{definition} Define the \emph{delta statistic} $\delta(\P)$ to be the number of levels $l_i<a+b$ along $\P$. \end{definition} Geometrically, $\delta(\P)$ counts the number of lattice points in $\P$ that belong to the diagonal path (the path closest to the diagonal). Surprisingly, this is sufficient to construct the inverse of the zeta map. \begin{theorem}\label{cor:delta_inverse} If $\delta(\P)$ is determined uniquely by $\zeta(\P)$ for all $\P$, then the map $\zeta$ is invertible. In that case, the inverse path $\P$ is determined from $\gamma(\P)$ as in Proposition~\ref{rem:zetainverse_delta}. \end{theorem} \subsection{Box math and the inclusion poset of rational Dyck paths}\label{sec:82}\ Our approach for proving Theorem~\ref{cor:delta_inverse} relies on a careful analysis of the poset structure on the set of rational Dyck paths under the usual inclusion relation: we say $\P<Q$ if the path $\P$ is weakly below the path $Q$, or, equivalently, if the set of positive hooks of $\P$ is contained in the set of positive hooks of $Q$. This poset is graded by the area statistic, with covering relation given by adding a single box. \begin{definition} The \emph{maximal level} $m$ of a path $\P$ is the largest level appearing in the reading word of $L(\P)$. Likewise, the \emph{maximal box} is the box under the peak of $\P$ labeled by the maximal level~$m$. For any path with area greater than $0$, we define the \emph{predecessor} of $\P$ as the path obtained by removing the box under the peak of $\P$ farthest from the diagonal. This replaces the maximal level $m$ with~$m-a-b$ in $L(\P)$. \end{definition} \begin{lemma} Suppose that $\P$ is an $(a,b)$-Dyck path with predecessor $\P'$. We have~$\sl(\P')<\sl(\P)$. \end{lemma} \begin{proof} Since~$\P'$ is obtained by removing the maximal box of~$\P$, the laser filling of $\P'$ is equal to the laser filling of $\P$ after removing the laser filling of its maximal box.
Since skew length is equal to the sum of the entries in the laser filling, the result follows. \end{proof} Since every path has a unique maximal box, this induces a spanning tree $T$ in the Hasse diagram of $\DD$ with the property that if $\P'<\P$ in $T$, then $\sl(\P')<\sl(\P)$. We can also precisely describe the combinatorial effect of removing the maximal box from $\P$ on~$\zeta(\P)$. \begin{proposition} All of the following operations are equivalent ways to remove the maximal box: \begin{enumerate} \item Remove the box whose associated hook length is greatest (furthest box from the diagonal). \item Remove the longest row from $\mathfrak{c}(\P)$. \item In the reading word $(l_1, l_2, \ldots, l_{a+b})$ of $\P$, reduce the maximal level $m$ by $(a+b)$, leaving all other levels unchanged. \item In the standardization $\sigma(\P)$, let $\alpha$ be the number of levels of~$\P$ greater than $m-a-b$ excluding $m$. Replace the entry $(a+b)$ in $\sigma$ with $a+b-\alpha$, and increase all entries greater than or equal to $a+b-\alpha$ by one. Equivalently, multiply $\sigma(\P)$ on the left by the cycle permutation $\rho_{a+b-\alpha,a+b}$ with cycle notation $(a+b-\alpha, a+b-\alpha+1, \ldots, a+b)$. \item Conjugate the permutation $\gamma(\P)$ by the cycle $\rho_{a+b-\alpha,a+b}$ to obtain: \[\rho_{a+b-\alpha,a+b} \gamma(\P) \rho_{a+b-\alpha,a+b}^{-1}.\] \end{enumerate} \end{proposition} \begin{proof} The second operation follows directly from the definition of $\mathfrak{c}(\P)$. The third method is clear from the effect on the labels $l_i$ of applying the first method. The fourth item follows from the third, and the fifth item follows from the effect of conjugation on the cycle notation of a permutation. \end{proof} We can thus try to understand the structure of the tree $T$ by understanding certain conjugations of the permutation $\gamma(\P)$. We can observe that adding the box at the label $l_1=0$ is equivalent to removing the maximal box from $\P^c$.
This reduces all labels $l_i$ by $a+b$ except for the label $l_1=0$, which remains the same. As a result, the relative value of the label $0$ is increased from $1$ to the number $\delta(\P)$ of labels $l_i<a+b$, while all other relative values at most $\delta(\P)$ are reduced by one. \begin{proposition}\label{prop:predecessor1} Let $\P'$ be the conjugate of the path obtained by removing the maximal box from~$\P^c$. The permutation~$\gamma(\P')$ is the conjugate \[ \gamma(\P') = \rho_{1,\delta(\P)}^{-1} \gamma(\P) \rho_{1,\delta(\P)}. \] \end{proposition} The action of removing the maximal box from~$\P^c$ on~$Q=\zeta(\P)$ is also completely determined by~$\delta(\P)$. For simplicity, we call the path $Q'=\zeta(\P')$ the \emph{$\zeta$-predecessor} of $Q$. \begin{proposition}\label{prop:predecessor2} The $\zeta$-predecessor of $Q=\zeta(\P)$ is completely determined by $\delta=\delta(\P)$ as follows: \begin{enumerate}[1.] \item All the steps in $Q$ after the first $\delta$ steps remain unchanged. \item The first $\delta$ steps are rotated as follows: \begin{enumerate}[(a)] \item the first east step that appears is changed to a north step, \item the first north step is changed to an east step, \item the first $\delta$ steps are rotated once (rotating the first step to the end of the first $\delta$ steps). \end{enumerate} \end{enumerate} \end{proposition} \begin{example} An example of this procedure is illustrated in Figure~\ref{fig:delta_Q_predecessor} for the path~$Q=\zeta(\P)$ associated to our running example path $\P$. The number of levels of~$\P$ smaller than $a+b$ is~$\delta(\P)=5$. The cycle permutation $\gamma'$ is obtained from $\gamma$ by rotating the labels $1,\dots,5$.
\end{example} \begin{figure}[htb] \includegraphics[width=0.8\textwidth]{delta_Q_predecessor} \caption{Determining the $\zeta$-predecessor of a path $Q=\zeta(\P)$ with $\delta(\P)$.} \label{fig:delta_Q_predecessor} \end{figure} \begin{proof} Adding a box at the level $l_1=0$ of $\P$ increases this level by $a+b$ and all other levels remain unchanged. The level $l_1=0$ is transformed from a north level in $\P$ to an east level in $\P'$, while the east level $l_i=a$ is transformed to a north level. Observing that the levels $l_1$ and $l_i$ correspond to the first north and first east steps in $Q$ respectively, and that the relative order of the levels less than $a+b$ is rotated once, one concludes the result. \end{proof} The two formulations in Proposition~\ref{prop:predecessor1} and Proposition~\ref{prop:predecessor2} have the advantage that we do not actually need to know the value of the maximal label $m$, and reduce the problem of showing that $\zeta$ is a bijection to computing a single statistic. To wit, if~$\delta(\P)$ can be directly computed from~$Q=\zeta(\P)$, then we can obtain the $\zeta$-predecessor of $Q$, and repeat until we arrive at the diagonal path~$\P_0$. This would completely determine $\P$ from $\zeta(\P)$. \begin{proposition}\label{rem:zetainverse_delta} Let $Q=\zeta(\P)$ and $Q=Q_1,\dots,Q_l$ be the list of $\zeta$-predecessors of $Q$ with $Q_l$ being the final path containing all boxes above the main diagonal. If $\delta_i$ is the $\delta$-statistic corresponding to~$Q_i$, then the permutation $\gamma(\P)$ is determined by \[ \gamma(\P) = \rho\ \gamma_0\ \rho^{-1}, \] where $\rho= \rho_{1,\delta_1}\dots \rho_{1,\delta_{l-1}}$, the permutation $\rho_{1,i}$ has cycle notation $(1, \dots , i)$, and $\gamma_0=\gamma(\P_0)$. The east steps of the path $\P$ are encoded by the cyclic descents of $\gamma$ when considered in one-line notation, as described in Section~\ref{sec:zetaeta}.
\end{proposition} \begin{remark} The $\delta$ statistic was also considered, shortly after this paper, by Xin in~\cite{xin_efficient_2015}. This statistic, together with another related statistic called ``key'', is used by the author to present a search algorithm for inverting the zeta map. This algorithm yields an alternative proof that the zeta map is a bijection in the cases~$(a,ak\pm 1)$, by giving a recursive construction of~$\zeta^{-1}(Q)$. However, the general case remains open. The results of Xin in~\cite{xin_efficient_2015} are very similar to those presented in this section, and also use the operations of removing the maximal box in $\P$ and $\P^c$. Theorem~\ref{cor:delta_inverse} should be compared with~\cite[Corollary~19]{xin_efficient_2015}. We also present estimates for the~$\delta$ statistic and a precise formula in the Fuss-Catalan case $(a,ak+1)$ in Proposition~\ref{prop:funnydelta} and Corollary~\ref{cor:delta_Fuss}, which should be compared with~\cite[Theorem~16]{xin_efficient_2015}. The estimates we present in Proposition~\ref{prop:funnydelta} determine the number of children of the nodes in the search tree in the ``ReciPhi algorithm'' in~\cite[Section~5]{xin_efficient_2015}. Our algorithm for describing the inverse of zeta in the Fuss-Catalan case $(a,ak+1)$ should be compared with the ReciPhi algorithm for the Fuss-Catalan case in~\cite[Section~5]{xin_efficient_2015}. \end{remark} \subsection{Initial part of a rational bounce path} The zeta map has been shown to be a bijection in the special cases $(a,ak\pm 1)$ by way of a ``bounce path'' by which the zeta inverse can be computed~\cite{Loehr,GMII}. However, constructing such a bounce path for the general~$(a,b)$ case remains elusive. In this section, we construct the initial part of a rational bounce path and show its relation to the $\delta$ statistic. In particular, we explicitly compute $\delta$ in the Fuss-Catalan case~$(a,ak+1)$.
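The initial bounce path defined in this subsection admits a short computational sketch. The encoding below is ours (paths as strings over 'N'/'E', with the $j$th east step at height equal to the number of north steps preceding it); we use the fact that $h_i=v_1+\dots +v_i$ equals the current height, so each horizontal move simply advances by the current height.

```python
def initial_bounce(Q, k):
    """Vertical and horizontal move lengths of the initial bounce path of an
    (a, a*k+r)-Dyck path Q, encoded as a string over 'N' and 'E'."""
    heights = []          # heights[j] = height of the j-th east step of Q
    h = 0
    for step in Q:
        if step == 'N':
            h += 1
        else:
            heights.append(h)
    x = y = 0
    v, hmoves = [], []
    for i in range(k + 1):
        v.append(heights[x] - y)   # move north until the east step over column x
        y = heights[x]
        if i < k:
            hmoves.append(y)       # h_i = v_1 + ... + v_i, the current height
            x += y
    return v, hmoves
```

For the $(3,4)$-Dyck path 'NNEENEE' (so $k=1$, $r=1$) this gives $v=(2,1)$ and $h=(2)$, hence $|v|+|h|+1=6$.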
\begin{definition} Let $Q$ be an $(a,ak+r)$-Dyck path with $0\leq k$ and $0<r<a$. The rational \emph{initial bounce path} of $Q$ consists of an alternating sequence of $k+1$ \emph{vertical moves} and~$k$ \emph{horizontal moves}. We begin at $(0,0)$ with a vertical move followed by a horizontal move, and continue until finishing with the $(k+1)$th vertical move. Let $v_1,\dots,v_{k+1}$ denote the lengths of the successive vertical moves and $h_1,\dots, h_k$ denote the lengths of the successive horizontal moves. These lengths are determined as follows. We start from $(0,0)$ and move north $v_1$ steps until reaching an east step of $Q$. Next, move~$h_1=v_1$ steps east. Next, move north $v_2$ steps from the current position until reaching an east step of $Q$. Next, move $h_2=v_1+v_2$ steps east. In general, we move north $v_i$ steps from the current position until reaching an east step of the path, and then move east $h_i=v_1+\dots +v_i$ steps. This is done until obtaining the last vertical move $v_{k+1}$. \end{definition} \begin{remark} The definition of the initial bounce path is exactly the same as an initial part of the bounce path in the Fuss-Catalan case~\cite{Loehr,GMII}. The description of this initial part remains the same for the general $(a,b)$-case but we still do not know how to extend it to a complete bounce path in general. \end{remark} It turns out that the initial bounce path is closely related to the $\delta$ statistic. Denote $|v|=v_1+\dots +v_{k+1}$ and $|h|=h_1+\dots +h_k$. In all the results of this section we always assume $0\leq k$ and $0<r<a$. \begin{proposition}\label{prop:funnydelta} Let $Q=\zeta(\P)$ be an $(a,ak+r)$-Dyck path and $\widetilde \delta(\P) \leq \delta(\P)$ be the number of levels in~$\P$ that are less than or equal to $a(k+1)$.
The two following equations hold: \begin{equation}\label{eq:bounce1} \widetilde \delta(\P) = |v|+|h|+1, \end{equation} \begin{equation}\label{eq:bounce2} |v|+|h|+1 \leq \delta(\P) \leq |v|+|h|+r. \end{equation} \end{proposition} This proposition will follow from the following lemma. Note that every such path $\P$ contains the east levels $a,2a,\dots,(k+1)a$ at the end of the path. \begin{lemma} The east steps of $Q=\zeta(\P)$ that are reached by the vertical moves $v_1,\dots,v_{k+1}$ of its initial bounce path correspond to the east levels $a,2a,\dots,(k+1)a$ of $\P$. \end{lemma} \begin{proof} Denote by $A_0=\{0,1,\dots , a-1\}$ the set of natural numbers between $0$ and $a-1$, and let~$A_i=A_0+ia$ be the translation of $A_0$ by $ia$. Note that the number of boxes above the main diagonal that are directly on the right of a north step with north level in $A_i$ is exactly equal to $i$. So, these sets can be used to encode the ``area-vector'' of a Dyck path. As in the known Fuss-Catalan bounce path description, we will show that the vertical steps of~$Q$ that are directly on the left of the $v_i$ vertical move contribute area $i-1$ in $\P$. More precisely, we will show: \begin{enumerate}[1.] \item The vertical steps of~$Q$ that are directly on the left of the $v_i$ vertical move correspond to north levels in $\P$ that belong to $A_{i-1}$. \item The horizontal steps of~$Q$ that are directly above the $h_i$ horizontal move correspond to east levels in $\P$ that belong to $A_i$. \end{enumerate} Denote by $N_i$ the set of north levels in $\P$ that belong to $A_i$, for $i=0,\dots ,k$. Similarly, denote by $E_i$ the set of east levels in $\P$ that belong to $A_i$, for $i=1,\dots, k$. We first show that $E_i$ can be obtained as the disjoint union \begin{equation}\label{eq:ENcorrespondence} E_i = \bigcup_{j=0}^{i-1} \left(N_j + (i-j)a\right).
\end{equation} For this, note that the first north level in $\P$ that appears after an east level $e\in E_i$ must be a north level $n\in N_j$ for some $j \in \{0,\dots, i-1\}$, and that $e=n+(i-j)a$. Conversely, every north level $n\in N_j$ for some $j \in \{0,\dots, i-1\}$ forces the east levels $e_{j+1},e_{j+2},\dots, e_{k}$ to appear as east levels in $\P$, where $e_{j+l}=n+l a$ (indeed, $e_{j+l}\in A_{j+l}$, which means that $e_{k-1}<ak<b=ak+r$; then all the lattice points one step below $e_{j+1},e_{j+2},\dots, e_{k-1}$ are below the main diagonal). Thus, the north level $n\in N_j$ contributes with exactly one east level $e=n+(i-j)a$ in $E_i$. Items 1 and 2 above are now equivalent to proving that $v_i=|N_{i-1}|$ and $h_i=|E_i|$. Equation~\eqref{eq:ENcorrespondence} implies that $|E_i|=|N_0|+\dots +|N_{i-1}|.$ Since $h_i=v_1+\dots+v_i$, it suffices to prove $v_i=|N_{i-1}|$. Note that $v_1$ is clearly the number of elements in $N_0$, since the smallest east level of $\P$ (which corresponds to the first east step in $Q$) is equal to $a$. Moving $h_1=v_1=|E_1|$ steps horizontally covers all the east steps of $Q$ corresponding to the east levels in $\P$ that belong to $A_1$. Moving up $v_2$ units from the current position hits the path at the east step corresponding to the first east level of $\P$ that belongs to $A_2$. This east level is exactly equal to $2a$, and all the north steps on the left of $v_2$ in $Q$ correspond to the north levels in $\P$ that belong to $A_1$, that is, $v_2=|N_1|$. In general, $\P$ contains all east levels $a,2a,\dots,(k+1)a$. Therefore, the initial value of $E_i$ is equal to $ia$ and is smaller than all values in $N_i$. As a consequence, the vertical move $v_i$ of the bounce path hits the path $Q$ precisely at the east step corresponding to the level $ia$ of $\P$ as desired, and $v_i=|N_{i-1}|$. This finishes the proof of items 1 and 2, and hence the proof of the lemma.
\end{proof} \begin{proof}[Proof of Proposition~\ref{prop:funnydelta}] The east step of $Q$ that is reached by the vertical move~$v_{k+1}$ corresponds to the east level $(k+1)a$ in $\P$. So, $\widetilde \delta(\P)$ is equal to the position of this east step in $Q$, which is equal to $|v|+|h|+1$. Since $a+b=(k+1)a+r$ and $a+b$ never appears as a level in $\P$, we have~$\delta\leq \widetilde \delta + r-1 = |v|+|h|+r$. \end{proof} Setting $r=1$ in Equation~\eqref{eq:bounce2}, we obtain the following. \begin{corollary}\label{cor:delta_Fuss} In the Fuss-Catalan case $(a,ak+1)$, the statistic $\delta(\P)$ is determined by the initial bounce path of $Q=\zeta(\P)$ by \begin{equation} \delta(\P) = |v|+|h|+1. \end{equation} \end{corollary} As a consequence of Theorem~\ref{cor:delta_inverse} and Corollary~\ref{cor:delta_Fuss} we obtain an alternative proof that the zeta map is a bijection in the Fuss-Catalan case~$(a,ak+1)$, as previously shown by Loehr in~\cite{Loehr}. \begin{corollary}\label{cor:zetabijection_Fuss} The zeta map is a bijection in the Fuss-Catalan case $(a,ak+1)$. \end{corollary} \section*{Acknowledgements} \label{sec.ack} Some of the results in this work are partly the result of working sessions at the Algebraic Combinatorics Seminar at the Fields Institute with the active participation of Farid Aliniaeifard, Nantel Bergeron, Eric Ens, Sirous Homayouni, Sahar Jamali, Shu Xiao Li, Trueman MacHenry, and Mike Zabrocki. We thank two anonymous referees for their comments and for pointing us to some references to previously known results. We especially thank Rishi Nath for discussions involving core partitions that helped simplify arguments in Section~\ref{sec:skew_length}, Drew Armstrong for sharing his knowledge of the history of this subject, and Greg Warrington for pointing us to the references about the $\ensuremath{\mathsf{dinv}}$ statistic in Remark~\ref{rem:dinv}. We are also grateful to York University for hosting a visit of the third author.
\bibliographystyle{amsalpha}
\section{Introduction} \label{sec:int} Let $B,B_{1},B_2,\dots$ be a family of i.i.d. two-sided Brownian motions (BM), meaning that for any $n$, $(B_n(t), t \geq 0)$ and $(B_n(-t), t \geq 0)$ are two independent standard linear BM. Denote by $I^{(n)}=B_n \circ \cdots \circ B_1$ the $n$th iterated BM. Curien and Konstantopoulos \cite{C-K} obtained the following results, gathered in the following proposition. \begin{pro}\label{pro:C-K} (1) For any $k\geq 1$ and any non-zero $t_1,\dots,t_k$, $(I^{(n)}(t_1),\cdots,I^{(n)}(t_k))$ converges in distribution. The limit distribution $\mu_k$ does not depend on the $t_i$'s, and is therefore exchangeable.\\ (2) For $(I_1,\dots,I_k) \sim \mu_k$, the equality $(I_1,\dots,I_k)\sur{=}{(d)} (B(I_1),\cdots, B(I_k))$ holds. Moreover, \[ (I_2-I_1,\dots, I_k-I_1)\sur{=}{(d)} (I_1,\cdots,I_{k-1})\sim \mu_{k-1}.\] The distribution of $I_1$ possesses the density $\exp(-2|x|)$ over $\mathbb{R}$ (this result appeared first in Turban \cite{TUR}).\\ (3) Let $\phi_n$ be the occupation measure of $I^{(n)}$ on $[0,1]$; then the sequence $(\phi_n,n \geq 0)$ converges as $n \to \infty$ in distribution to a random probability measure $\phi$, which a.s. has bounded support, and which a.s. has a H\"older continuous density with exponent $1/2-\epsilon$ for all $\epsilon>0$. \end{pro} In this paper we continue this study in several connected directions: among other things, we give some elements on $\mu_k$, study the iterated reflected BM and iterated stable processes, and provide a description of the finite dimensional distributions of the $n$th iterated BM $I^{(n)}$. \medskip Here are the main lines of the paper. In Section \ref{sec:P} we present the studied processes and fix some notation. In Section \ref{sec:FC} we provide some common features of the processes we iterate.
Given a finite set of points $L=\{\ell_i, i=0,\dots,k\}$, the gaps sequence of $L$ is the sequence $G = (\wh{\ell}_i-\wh{\ell}_{i-1},1\leq i \leq k)$ of differences of successive points of $L$ sorted in increasing order. It turns out that for processes $X$ with independent and stationary increments, the distribution of the gaps sequence of $X(L)=\{X(\ell_i), 0\leq i \leq k\}$ can be described using only the gaps sequence $G$ of $L$. This simple property lies at the heart of our advances on iterated BM. \par In Section \ref{sec:BP}, devoted to iterated BM and iterated reflected BM, it is explained that if the initial gaps sequence $G$ is a $k$-tuple of independent exponential random variables (r.v.) with parameters $(\lambda_1,\dots,\lambda_k)$, then the gaps sequence of $X(L)$ is distributed according to a mixture of $k$-tuples of independent exponential r.v., whose parameters are explicit functions of $(\lambda_1,\dots,\lambda_k)$. To encode this property, we define a Markov chain $(Z^{(n)},n\geq 1)$ at the parameter level, which makes this parameter evolution explicit (see \eref{eq:parameter-kernel} and around). \par A consequence is that the gaps sequence of the iterated BM ad libitum is distributed as a mixture of independent exponential r.v., and this mixture can be described precisely using the invariant distribution of the Markov chain $Z^{(n)}$ (Propositions \ref{pro:etsdy}, \ref{pro:ap} and Theorem \ref{theo:multi-dim}). \par In a sense, Remark \ref{rem:Hutch} implies that our description of the finite dimensional distributions of the iterated BM, while complex, is the simplest we could expect.\par The same construction, using an analogue of the parameter Markov chain $(Z^{(n)},n\geq 1)$, implies that the law of the $n$th iterated BM is accessible if the gaps sequence of the initial points follows independent exponential r.v.
In Section \ref{sec:IRMn}, it is seen that this property provides a Laplace type transform of the finite dimensional distributions of the $n$th iterated BM $I^{(n)}$. Section \ref{sec:IRMal} is devoted to the iteration of reflected BM. Section \ref{sec:ISPal} is devoted to the iteration of stable processes, whose study appears very similar to that of iterated BM, except that explicit computations are out of reach for the moment. We discuss in Section \ref{sec:Con} some natural extensions of this work. \section{Random processes} \label{sec:P} ``BM'' will be used to denote the two-sided linear BM as defined at the beginning of Section \ref{sec:int}. The process corresponding to the $n$th iterated process will be denoted $I^{(n)}$ (the process under iteration, denoted $X$ further on, will be clear from the context). The process iterated ad libitum, that is the limit of $I^{(n)}$ in the sense of finite dimensional distribution convergence, will be denoted $I$ when it exists. The reflected BM is the (one-sided) process $(|B(t)|,t\geq 0)$ where $B$ is the standard linear BM. We now discuss stable processes (see Applebaum \cite{AP} for more information). We will consider only two-sided stable variables $Z$ that can be written under the form $A+r$ where $A$ is stable symmetric (null skewness) and $r$ a real number (the location parameter). The characteristic function of such a r.v. $Z$ can be written under the form \[\psi(u)=`E(e^{iuZ})=e^{ \eta(u)}\] with \[\eta(u)= -\abs{u}^\alpha \sigma^\alpha + i r u, \] where $\alpha \in (0,2]$ is the index of stability and $\sigma \in (0,\infty)$ the scale parameter (Theorem 1.2.21 in \cite{AP}). A stable process $(X(t),t \geq 0)$ with parameters $(\alpha,\sigma,r)$ is the process such that $ X(0)=0$, with stationary and independent increments, and whose characteristic function is \begin{equation} \Phi_t(u) = \esp{e^{i u X(t) }} = e^{ t \eta(u)}.
\end{equation} The two-sided stable process $(X(t),t \in \mathbb{R})$ is the process such that $(X(t),t\geq 0)$ and $(X(t),t<0)$ are independent, and $(X(t),t\geq 0)$ and $(-X(-t),t\geq 0)$ are both one-sided stable processes with parameters $(\alpha,\sigma,r)$. For any $t\in \mathbb{R}^\star$, \begin{eqnarray}\label{eq:rec} \frac{X_t-tr}{|t|^{1/\alpha}}\sur{=}{(d)} X_1-r. \end{eqnarray} For any $c> 0$, $(X(c^\alpha t),t \geq 0)$ is a stable process with parameters $\left( \alpha,c \sigma, c^{\alpha} r \right)$. Let $(X_1,X_2,\dots)$ be a family of i.i.d. two-sided stable processes with parameters $(\alpha,\sigma,r)$. The $n$th iterated stable process $I^{(n)}$ of parameters $(\alpha,\sigma,r)$ is the process \begin{displaymath} I^{(n)}=X_n \circ \cdots \circ X_1. \end{displaymath} We keep the same notation as for the iterated two-sided BM for reasons that will become clear below. \begin{rem} \label{rem:bmit} The BM is the stable process with parameters $(2,1/\sqrt{2},0)$. Its Markov kernel is $`P(B_{t+s}\in dy | B_s=x)=\exp(-(y-x)^2/(2t))/\sqrt{2\pi t}$. \end{rem} Iterations of stable processes with parameters $(\alpha,1,0)$ and $(\alpha,\sigma,0)$ can be directly compared, as explained in Remark \ref{rem:comp}. \section{Iteration of processes: general considerations} \label{sec:FC} In this section, we discuss some features common to the processes we iterate in the paper. \par Throughout this section, $k$ is a positive integer: the size of the finite dimensional distributions under inspection. \paragraph{Notations.} We denote by $\cro{a,b}$ the ordered sequence $[a,b]\cap \mathbb{Z}$. The permutation group of the set $\cro{a,b}$ is denoted ${\cal S}\cro{a,b}$. Sometimes, we will use the notation $x[a:b]$ instead of $(x_a,\dots,x_b)$, and also $t[a:b], g[a:b], \lambda[a:b]$, etc., accordingly.
The simple notation $x[k]$ will stand for $x[1:k]$.\par For any sequence $\ell[0:k]=(\ell_0,\dots,\ell_k)$, denote by $(\wh{\ell}[0:k])=\sort{\ell[0:k]}$ this sequence sorted in increasing order. For any $i\in\cro{1,k}$, set \[\Delta \ell_i=\ell_i-\ell_{i-1}.\] The gaps sequence of $\ell[0:k]$ is the sequence of distances between the successive elements of $\{\ell_0,\cdots,\ell_k\}$. It is defined by \[\gaps{\ell[0:k] }=\left(\Delta \wh{\ell}_i, i \in\cro{1,k}\right).\] Last, for $x[1:k]$ a sequence, $\bar{x}[0:k]$ is the sequence defined by \begin{eqnarray}\label{eq:S}\bar{x}_0=0,~\ov{x}_i=x_1+\dots+x_i,~~ \textrm{ for } {i \in \cro{1,k}}.\end{eqnarray} \paragraph{Iteration of processes.} What follows is valid for processes $X$ such that $X(0)=0$ a.s., with independent and stationary increments, whose distributions are absolutely continuous with respect to the Lebesgue measure on $`R$, and such that for any $t>s$, \begin{eqnarray}\label{eq:qds} X(t)-X(s)\sur{=}{(d)} X(t-s)\end{eqnarray} whatever the signs of $s$ and $t$. Notice that this implies $-X(s)\sur{=}{(d)} X(-s)$ (taking $t=0$). This general setting is satisfied by BM, by symmetric two-sided stable processes, and more generally, by symmetric two-sided Lévy processes such that for any $t>0$, $X(t)$ has a density. Some modifications are needed for processes such as the reflected BM, which has stationary but dependent increments. This is discussed in Section \ref{sec:IRMal}. Denote by $\Phi_t(.)$ the density of the distribution of $X(t)$. We then have \begin{eqnarray} \label{eq:sym} \Phi_{t}(y)=\Phi_{-t}(-y),~~ \textrm{ for any }(t,y) \in \mathbb{R}^\star \times \mathbb{R}. \end{eqnarray} \subsection{The gaps sequence evolution} Let $(t_0=0,t_1,\dots,t_k)$ be some distinct real numbers. We start with the description of the distribution of $(X(t_i), i\in\cro{0,k})$. As usual, the description is easier when the $t_i$ are sorted. Let $\tau\in{\cal S}\cro{0,k}$ be such that $(\wh{t}_i=t_{\tau(i)},i\in\cro{0,k})=\sort{t[0:k]}$.
Hence, $\wh{t}_{\tau^{-1}(0)}=0$. Further let $g[k] = \gaps{t[0:k]}$. The r.v. $(X(\wh{t}_{i})-X(\wh{t}_{i-1}), i\in\cro{1,k})$ are independent, and $X(\wh{t}_{i})-X(\wh{t}_{i-1})\sur{=}{(d)} X(\Delta \wh{t}_{i})$ depends only on the gaps sequence of the $t_i$'s. Using the independence of the increments of $X$ and their stationarity, we get that the density $f$ of $(X(t_i), {i \in \cro{1,k}})$ on $`R^k$ is \begin{eqnarray}\label{eq:evoll} f(y[k])= \prod_{j=1}^k \Phi_{\Delta \wh{t}_j}\left(\Delta y_{\tau(j)}\right), \end{eqnarray} where in the right hand side $y_0=0$. Indeed, one has $(X(t_i),{i \in \cro{0,k}})=(X(\hat{t}_{\tau^{-1}(i)}),{i \in \cro{0,k}})$, and computing $`P\left(X(\hat{t}_{\tau^{-1}(i)}) \in dy_i, {i \in \cro{1,k}} \right)= `P\left(X(\hat{t}_{i}) \in dy_{\tau(i)}, {i \in \cro{1,k}} \right)$ gives the result, using \eref{eq:sym}. The distribution of $\gaps{X(t_i), {i \in \cro{0,k}}}$ also depends only on $\gaps{t[0:k]}$, and this is one of the key points of the paper. First, let us determine the vectors $(X(t_i), i\in\cro{0,k})$ such that \begin{eqnarray}\label{eq:gaps} \gaps{X(t_i), i\in\cro{0,k}}=x[k] \end{eqnarray} for some fixed element $x[k]$ of $ (0,+\infty)^k$. Clearly \eref{eq:gaps} holds iff there exists some $a\in\mathbb{R}$ such that \begin{eqnarray}\label{eq:qdq} \sort{X(t_i), i\in\cro{0,k}}=\left( a+\ov{x}_i , i\in\cro{0,k}\right). \end{eqnarray} Equation \eref{eq:qdq} implies that for a certain permutation $\tau \in {\cal S}\cro{0,k}$ \[(X(\wh{t}_{i}),i\in\cro{0,k})=(a+\ov{x}_{\tau(i)}, i\in\cro{0,k})\] from which we find \begin{eqnarray}\label{eq:evol} \left(X(\wh{t}_{i})-X(\wh{t}_{i-1}), i \in\cro{1,k}\right)=(\Delta \ov{x}_{\tau(i)},i \in\cro{1,k}).
\end{eqnarray} The following proposition should now be clear. \begin{pro}\label{pro:pro2} Let $t[0:k]$ be $k+1$ distinct real numbers with $t_0=0$ such that \[\gaps{t[0:k]}=g[k]\in (0,+\infty)^k.\] The distribution of $\gaps{X(t_0),\dots,X(t_{k})}$ has density $\Psi_{g[k]}$ on $(\mathbb{R}^+)^k$ where \begin{eqnarray} \label{eq:Psi} \Psi_{g[k]}(x[k])=\sum_{\tau\in {\cal S}\cro{0,k}} \prod_{i=1}^k \Phi_{g_i}\left( \Delta \ov{x}_{\tau(i)}\right) \,1_{x_i>0}. \end{eqnarray} \end{pro} In the mono-dimensional case, \begin{eqnarray}\label{eq:double} \Psi_g(x)= (\Phi_g(x)+\Phi_g(-x))1_{x\geq 0}, \end{eqnarray} and this is also $2\Phi_g(x)1_{x\geq 0}$ when $\Phi_g$ is even (that is, when $r=0$ in the stable process case). We may now define a time-homogeneous MC $({\sf G}^{(n)}[k]=({\sf G}_i^{(n)}, i \in\cro{1,k}), n\geq 0)$ taking its values in $(0,+\infty)^k$, giving the successive gaps sequences starting from an initial one; its Markov kernel is given by $\Psi$ in the sense of Proposition \ref{pro:pro2}. We will call ${\sf G}^{(n)}$ the gaps sequence MC. Assume that ${\sf G}^{(0)}[k]$ is a r.v. which possesses a density $f_k$ on $(0,+\infty)^k$. The density of ${\sf G}^{(1)}[k]$ is ${\sf Op}_k(f_k)$, where ${\sf Op}_k$ is the integral operator defined, for any $x[k]\in `R^k$, by \begin{eqnarray}\label{eq:Opk} {\sf Op}_k(f_k)(x[k]):=\int\cdots\int f_k(g[k]) \Psi_{g[k]}(x[k]) dg_1\dots d{g_k}. \end{eqnarray} Of course, if one considers a case for which the iterated process converges in distribution, \[I^{(n)}[k]=(I^{(n)}(t_1),\dots,I^{(n)}(t_k))\xrightarrow[n]{(d)} I[k]=(I(t_1),\dots,I(t_k))\] then the associated gap MC $({\sf G}^{(n)}[k], n\geq 0)$ converges too, since the map $x[0:k]\to \gaps{x[0:k]}$ is continuous. The converse is false, but barely: the gaps sequence characterises the relative positions of the points.
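One step of the gaps sequence MC of Proposition \ref{pro:pro2} is straightforward to simulate when $X$ is a BM: place points at the cumulative sums of the current gaps, add independent centred Gaussian increments with variances given by the gaps, and read off the gaps of the sorted images. A minimal sketch for $k=2$ (Python, standard library only; the test only uses the symmetry of the Gaussian increments, which forces the two mean gaps to agree):

```python
import math
import random

def gap_chain_step(g, rng):
    """One step of the gaps sequence MC for X a BM: points at the
    cumulative sums of the gaps receive independent N(0, g_i)
    increments; the new gaps are read off the sorted images."""
    pts = [0.0]
    for gi in g:
        pts.append(pts[-1] + math.sqrt(gi) * rng.gauss(0.0, 1.0))
    pts.sort()
    return [pts[i] - pts[i - 1] for i in range(1, len(pts))]

rng = random.Random(1)
n_samples, acc = 50_000, [0.0, 0.0]
for _ in range(n_samples):
    g = [1.0, 1.0]          # arbitrary initial gaps sequence
    for _ in range(25):     # iterate towards the limiting law
        g = gap_chain_step(g, rng)
    acc[0] += g[0]
    acc[1] += g[1]
mean_g = [a / n_samples for a in acc]
```

Flipping the signs of all increments reverses the gaps sequence while leaving its law unchanged, so the two coordinates of `mean_g` should be close for any number of iterations.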
Additional information is needed to recover their positions: namely the distribution of the translation which sends $\gaps{I(t_i), {i \in \cro{0,k}}}$ onto $\{I(t_i), {i \in \cro{0,k}}\}$, and the distribution of the permutation which provides the distribution of $(I(t_i), {i \in \cro{0,k}})$ knowing $\{I(t_i), {i \in \cro{0,k}}\}$. A simple but powerful trick, discussed at several places in the paper, is the following: we are able to pass from the gaps sequence MC to the usual one if, instead of $(I^{(n)}(t_i), {i \in \cro{1,k}})$, we study $(I^{(n)}(t_i), {i \in \cro{0,k}})$, where $t_0=0$. We can sum up in two slogans the relative importance of the iteration of the initial process $X$ with respect to the gap MC: the proof of convergence is easier for $(I^{(n)}(t_i),{i \in \cro{0,k}})$, but the behaviour of ${\sf G}^{(n)}[k]$ is easier to understand, and its distribution in the case of Brownian processes is tractable. \subsection{The iterated process evolution} \label{sec:IPE} Any sequence $t[0:k]$ such that $t_0=0$ can be encoded by the pair $C[t]:=\left[g[k],\tau\right]$ formed by the gaps sequence of $t$ and the ``labelling permutation'' $\tau \in {\cal S}\cro{0,k}$, so that \begin{eqnarray}\label{eq:enc} t_i= \ov{g}_{\tau(i)}-\ov{g}_{\tau(0)}, {i \in \cro{0,k}}. \end{eqnarray} Of course, thanks to \eref{eq:enc}, the decoding $t=C^{-1}(g[k],\tau)$ is well defined too (taking $t_0=0$). It follows from \eref{eq:enc} again that $t_{\tau^{-1}(i)}$ is non-decreasing in $i$, and then for any $i$ we have \begin{eqnarray}\label{eq:whtau} \wh{t}_i=t_{\tau^{-1}(i)}=\ov{g}_{i}-\ov{g}_{\tau(0)}. \end{eqnarray} The Markov kernel of the MC $n\mapsto(I^{(n)}(t_i), {i \in \cro{0,k}})$ can be made explicit at the level of the encodings. Consider $t[0:k]$ with $t_0=0$ and $C[t]:=\left[g[k],\tau\right]$ its encoding, and $t'[0:k]$ with $t'_0=0$ and $C[t']:=\left[g'[k],\tau'\right]$ its encoding.
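The encoding $C[t]=\left[g[k],\tau\right]$ of \eref{eq:enc} and its inverse are elementary to implement: $\tau(i)$ is simply the rank of $t_i$ among the points. A small sketch (Python; the function names are ours):

```python
def encode(t):
    """Encoding C[t] = [g, tau] of a sequence t with t[0] == 0:
    g is the gaps sequence, tau[i] the rank of t[i] among the points."""
    s = sorted(t)
    g = [s[i] - s[i - 1] for i in range(1, len(s))]
    tau = [s.index(x) for x in t]
    return g, tau

def decode(g, tau):
    """Inverse map: t_i = gbar[tau(i)] - gbar[tau(0)], where gbar
    is the sequence of partial sums of the gaps (with gbar[0] = 0)."""
    gbar = [0.0]
    for x in g:
        gbar.append(gbar[-1] + x)
    return [gbar[r] - gbar[tau[0]] for r in tau]

t = [0.0, 2.5, -1.0, 4.0]
g, tau = encode(t)
```

Decoding the encoding returns the original sequence, with the zero entry sitting at position $\tau^{-1}(\tau(0))=0$ as required.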
Denote by $K$ the corresponding Markov kernel (with transparent convention) which gives the distribution of $C[I^{(n+1)}]$ knowing $C[I^{(n)}]$. We have \begin{eqnarray*} \left\{X(t_i) \in dt'_i, {i \in \cro{1,k}}\right\}&=&\left\{X(\wh{t}_i) \in d t'_{\tau^{-1}(i)}, {i \in \cro{1,k}}\right\} \end{eqnarray*} and then, using \eref{eq:evol}, \eref{eq:whtau} and $t'_i= \ov{g'}_{\tau'(i)}-\ov{g'}_{\tau'(0)}$ for ${i \in \cro{0,k}}$, \begin{eqnarray*} K_{g[k],\tau}[(dg'_1,\dots,dg'_k),\tau']&=&`P(X(t_i) \in dt'_i, {i \in \cro{1,k}})\\ & = & \prod_{i=1}^k \Phi_{\Delta \wh{t}_i}\left(\Delta t'_{\tau^{-1}(i)}\right)\\ & = & \prod_{i=1}^k \Phi_{g_i}\left(\Delta \ov{g'}_{\tau'(\tau^{-1}(i))}\right). \end{eqnarray*} We rewrite this more simply as \begin{eqnarray}\label{eq:tran} K_{g[k],\tau}[(dg'_1,\dots,dg'_k),\tau'\circ \tau]=\prod_{i=1}^k \Phi_{g_i}\left(\Delta \ov{g'}_{\tau'(i)}\right) \end{eqnarray} from which we observe that \begin{eqnarray}\label{eq:recws} K_{g[k],\tau}[(dg'_1,\dots,dg'_k),\tau'\circ\tau]=K_{g[k],Id}[(dg'_1,\dots,dg'_k),\tau']\end{eqnarray} and then the LHS does not depend on $\tau$. Of course, all of this is valid for $\tau,\tau' \in {\cal S}\cro{0,k}$, and for positive $g_i$'s, $g'_i$'s. \subsection{Asymptotic independence of labelling permutation and gaps sequence} \label{eq:AIL} We now explain why, in the encoding Markov chain $(C[I^{(n)}[k]],n\geq 1)$, the gaps sequence ``becomes progressively'' independent from the labelling permutation, as stated in the main convergence theorems of the paper, where this appears in the form of exchangeability of the limiting distribution $\gamma_k$. The asymptotic exchangeability can be proved directly (see \cite{C-K} or the end of Section \ref{sec:PT}). It is somewhat involved since it relies on the convergence of $(I^{(n)}(t_1),\dots,I^{(n)}(t_k))$ to a limit independent of the $t_i$'s, and the proof relies on some (classical but) involved estimates.
We present here another argument which makes this more apparent and which, we think, can be of some interest if one tries to iterate processes for which the arguments developed in Section \ref{sec:PT} fail. It is a coupling argument. For a fixed pair $(g[k],\tau)$, consider (using \eref{eq:recws}), \begin{eqnarray*} \underline{K}_{g[k],\tau}[(dg'_1,\dots,dg'_k),\tau'']&=&\min_{\tau'}K_{g[k],\tau}[(dg'_1,\dots,dg'_k),\tau'\circ \tau]\\ &=&\min_{\tau'}K_{g[k],Id}[(dg'_1,\dots,dg'_k),\tau'] \end{eqnarray*} the ``minimal flow'' going to $[(dg'_1,\dots,dg'_k),\tau']$ from $(g[k],\tau)$, the minimum being taken over $\tau'\in {\cal S}\cro{0,k}$. In general $\underline{K}$ is a defective Markov kernel. Since it does not depend on $\tau''$, the marginal restriction of $\underline{K}$ to the labelling permutation is proportional to the uniform distribution on ${\cal S}\cro{0,k}$. Therefore, $\underline{K}_{g[k],\tau}[(dg'_1,\dots,dg'_k),\tau'']$ possesses a simpler form: \begin{eqnarray} \underline{K}_{g[k],\tau}[(dg'_1,\dots,dg'_k),\tau'']=\kappa_{g[k]}(dg'_1,\dots,dg'_k) \frac{1_{\tau''\in {\cal S}\cro{0,k}}}{(k+1)!} \end{eqnarray} where $\kappa$ is a defective Markov kernel on $`R^+{}^k$. Let \begin{eqnarray*} q({g[k]}) & = & \kappa_{(g[k])}(`R^+{}^{k}) \end{eqnarray*} be the total mass of $\underline{K}_{g[k],\tau}$ and of $\kappa_{g[k]}$. (Notice that in the cases treated in the present paper, $q({g[k]})>0$ for $g[k]\in(0,\infty)^k$.) Now set \begin{equation}\left \{\begin{array}{rcl} K^{[1]}_{g[k],\tau}[(dg'_1,\dots,dg'_k),\tau'\circ\tau]&=&\frac{\kappa_{g[k]}(dg'_1,\dots,dg'_k)/{(k+1)!}}{q({g[k]})}\\ K^{[2]}_{g[k],\tau}&=&\frac{K_{g[k],\tau}-q({g[k]})K^{[1]}_{g[k],\tau}}{1-q({g[k]})} \end{array}\right. \end{equation} so that $K^{[2]}$ is indeed a Markov kernel.
It is easily seen that the initial kernel $K$ can be represented as \[K_{g[k],\tau}=q({g[k]})K^{[1]}_{g[k],\tau}+(1-q({g[k]}))K^{[2]}_{g[k],\tau},\] which is the core of our coupling: to sample the MC $C[I^{(n)}]$ from $(g[k],\tau)$, first sample a Bernoulli random variable with parameter $q({g[k]})$. If it is 1, then use the kernel $K^{[1]}$, else the kernel $K^{[2]}$. If the kernel $K^{[1]}$ is used, the new value $(g'[k],\tau')$ has the following property: $\tau'$ is uniform and independent from $g'[k]$, which has distribution $\kappa_{g[k]}(.)/q({g[k]})$. Hence, as soon as a transition $K^{[1]}$ is used, the labelling permutation and the gaps sequence become independent, and this independence carries on, since under $K$ the labelling permutation evolves somewhat independently from the current labelling permutation (it evolves by products, see \eref{eq:recws}). It remains to say some words about the frequency of these renewal events: letting $C[I^{(n)}]=(G[k]^{(n)},\tau_n)$ be the successive values of the encoding chain, one sees that at each step the renewal probability is $q({G[k]^{(n)}})$. To get renewal with probability one in the sequence $C[I^{(n)}]$, not much is needed: continuity and positivity of the kernel on each compact set, and tightness of the sequence $C[I^{(n)}]$. \section{Iteration of Brownian processes} \label{sec:BP} This section is devoted to our results concerning the iterated BM ad libitum, the iterated reflected BM ad libitum, and the $n$th iterated BM. We will consider iteration of the standard linear Brownian motion, but using Remark \ref{rem:comp}, iteration of Brownian motions multiplied by a constant can be studied as well. We start with a key point relative to the description of the Markov kernel of the gaps sequence MC when $X$ is a BM (but much of what follows is valid for more general Gaussian processes). In this section, $\Phi_g$ is the density of the centred Gaussian distribution with variance $g$.
We denote further by ${\sf Exp}[\lambda,x] = \lambda e^{-\lambda x} 1_{x \geq 0}$ the density of ${\sf Expo}[\lambda]$, the exponential distribution with parameter $\lambda$. Let ${\sf MEX}_k$ be the set of probability measures on $`R^k$ having a density of the form \begin{eqnarray}\label{eq:par-law2} f\left(x[k]\right)=\int_{`R^+{}^k} \left(\prod_{i=1}^k {\sf Exp}\left[ \lambda_i,x_i\right]\right) d\mu(\lambda_1,\cdots,\lambda_k),~~ x[k]\in `R^k \end{eqnarray} where $\mu$ is a general probability distribution on $`R^+{}^k$, called the parameter law of $f$. In other words, the set ${\sf MEX}_k$ is the set of mixtures of products of exponential distributions. The key result in this section, valid only in the Gaussian case, is the following proposition. \begin{pro}\label{eq:lin} For any $k\geq 1$, ${\sf Op}_k$ is linear on ${\sf MEX}_k$, and ${\sf MEX}_k$ is stable under ${\sf Op}_k$. \end{pro} \begin{proof} We start with the one-dimensional case, for which \eref{eq:double} holds. \par Let $f_1(x)={\sf Exp}[\lambda,x]$, and let us find ${\sf Op}_1(f_1)(x)$ by computing its Fourier transform \[FT_0(a)=\int_{x\geq 0} e^{i a x}\int_{g>0} 2\Phi_g(x)\lambda e^{-\lambda g}dg dx.\] This is done in two steps: ${\sf Op}_1(f_1)$ is the density of a positive r.v. $Z$. Hence \[FT_1(a)= \frac{1}{2}\int_{-\infty}^{+\infty} e^{i a x}\int_{g>0} 2\Phi_g(x)\lambda e^{-\lambda g}dg dx,\] is the Fourier transform of $`e Z$ where $`e$ is a uniform random sign, independent of $Z$. By Fubini, one finds that it is $\int_{g \geq 0}\lambda e^{-\lambda g} e^{-ga^2/2} dg= \frac{1}{1+\frac{a^2}{2\lambda}}$, which is the Fourier transform of $`e Y$ where $Y$ has distribution ${\sf Expo}\left[\sqrt{2\lambda}\right]$. We deduce from that the identity \begin{eqnarray}\label{eq:sim-trans} \int_0^{+\infty}{\sf Exp}[\lambda,g]\Psi_g(x)dg ={\sf Exp}[\sqrt{2\lambda},x],~~ x>0. \end{eqnarray} In words, ${\sf Op}_1$ sends $x\mapsto{\sf Exp}[\lambda,x]$ onto $x\mapsto{\sf Exp}[\sqrt{2\lambda},x]$.
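The identity \eref{eq:sim-trans} can also be checked by direct numerical integration: mixing the folded Gaussian densities $2\Phi_g(x)$ over an ${\sf Expo}[\lambda]$-distributed variance $g$ returns an exponential density with parameter $\sqrt{2\lambda}$. A numerical sketch (Python, Simpson's rule, standard library only; the function names are ours):

```python
import math

def folded_gauss(g, x):
    """2 * Phi_g(x) on x >= 0, where Phi_g is the N(0, g) density."""
    return 2.0 * math.exp(-x * x / (2.0 * g)) / math.sqrt(2.0 * math.pi * g)

def mixture_density(lam, x, g_max=60.0, n=40_000):
    """Simpson approximation of int_0^inf Exp[lam, g] * 2*Phi_g(x) dg."""
    h = g_max / n
    total = 0.0
    for i in range(n + 1):
        g = max(i * h, 1e-12)  # avoid division by zero at g = 0
        w = 1.0 if i in (0, n) else (4.0 if i % 2 else 2.0)
        total += w * lam * math.exp(-lam * g) * folded_gauss(g, x)
    return total * h / 3.0

lam, x = 2.0, 0.7
lhs = mixture_density(lam, x)
rhs = math.sqrt(2 * lam) * math.exp(-math.sqrt(2 * lam) * x)  # Exp[sqrt(2*lam), x]
```

The two values agree up to the quadrature error, matching the Fourier computation above.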
\begin{rem}\label{rem:1} Notice that this implies that ${\sf Expo}[2]$ is a fixed point of ${\sf Op}_1$. This recovers the result of Curien and Konstantopoulos \cite{C-K}, who proved that $I_1\sim `e Y$ where $Y\sim {\sf Expo}[2]$. \end{rem} Assume now $k\geq 1$, and observe the effect of ${\sf Op}_k$ on a product of exponential distributions. By \eref{eq:sim-trans} and \eref{eq:double}, one has for any $x[k]\in(0,+\infty)^k$ the identity \begin{eqnarray}\label{eq:elt} \sum_{\tau\in{\cal S}\cro{0,k}}\int_{`R^+{}^k}\prod_{i=1}^{k} \Big({\sf Exp}[c_i,g_i] \Phi_{g_i}\left(\Delta \ov{x}_{\tau(i)}\right)\Big)dg_1...dg_k= \sum_{\tau\in{\cal S}\cro{0,k}} \frac{1}{2^k} \prod_{i=1}^k {\sf Exp}\left[\sqrt{2c_i},\left|\Delta \ov{x}_{\tau(i)}\right| \right]. \end{eqnarray} An important fact appears here, a fact valid only in the Brownian case: one can separate the variables $x_i$ in the right hand side and make a product of independent exponential densities appear, thanks to the two following identities \begin{eqnarray}\label{eq:iden1} \left\{ \begin{array}{rcl} {\sf Exp}[c,x+x']&=&{\sf Exp}[c,x]\,{\sf Exp}[c,x']/c,\\ {\sf Exp}[c,x]\,{\sf Exp}[c',x]&=&{\sf Exp}[c+c',x]\frac{cc'}{c+c'}. \end{array}\right. \end{eqnarray} Let us separate the variables, and for this, collect in $E_{\tau,i}$ the contribution relative to ${\sf Exp}[.,x_i]$. Since $\ov{x}_{\tau(j)}=x_1+\dots+x_{\tau(j)}$, we have $|\Delta \ov{x}_{\tau(j)}|= x_{1+\min(\tau(j),\tau({j-1}))}+\dots+ x_{\max(\tau(j),\tau({j-1}))}$. Let $E_{\tau,i}= \{j: x_i\in |\Delta \ov{x}_{\tau(j)}|\} = \{j: \min(\tau({j-1}),\tau(j)) < i \leq \max(\tau({j-1}),\tau(j))\}$ be the set of indices $j$ such that $x_i$ appears in $|\Delta \ov{x}_{\tau(j)}| $. Further, let \[w_{\tau}(c[k])= \frac{1}{2^k} \prod_{i=1}^k \frac{\sqrt{2c_i}}{ F_{\tau,i}(c[k]) }\] and $F_\tau(c[k])=(F_{\tau,i}(c[k]),i \in\cro{1,k})$ where \begin{eqnarray}\label{eq:FW} F_{\tau,i}(c[k]) &=&\sum_{j \in E_{\tau,i}} \sqrt{2 c_j}.
\end{eqnarray} As a consequence of the previous discussion, \begin{lem}\label{lem:transfo} If $f$ is the map $f(x[k])=\prod_{i=1}^k {\sf Exp}[\lambda_i,x_i]$ for some fixed $\lambda[k]\in(0,+\infty)^k$, then \begin{displaymath} {\sf Op}_k(f)(x[k]) = \sum_{\tau\in {\cal S}\cro{0,k}} w_{\tau}(\lambda[k]) \prod_{i=1}^k {\sf Exp}\left[ F_{\tau,i}(\lambda),x_i\right]. \end{displaymath} \end{lem} Of course, this ends the proof of Proposition \ref{eq:lin}. \end{proof} In the $2$-dimensional case, the 6 functions $F_\tau$ and weights $w_\tau$ are the following: \begin{eqnarray} F_{(0,1,2)}(c_1,c_2) = (s_1,s_2) & , & w_{(0,1,2)}(c_1,c_2) = {1}/{4}, \\ F_{(0,2,1)}(c_1,c_2) = (s_1,s_1+s_2) & ,& w_{(0,2,1)}(c_1,c_2) = 1/4 \ \ s_2/(s_1+s_2) , \\ F_{(1,0,2)}(c_1,c_2) = (s_1+s_2,s_2) & , & w_{(1,0,2)}(c_1,c_2) = 1/4 \ \ s_1/(s_1+s_2), \\ F_{(1,2,0)}(c_1,c_2) = (s_2,s_1+s_2) & , & w_{(1,2,0)}(c_1,c_2) = 1/4 \ \ s_1/(s_1+s_2), \\ F_{(2,0,1)}(c_1,c_2) = (s_1+s_2,s_1) & , & w_{(2,0,1)}(c_1,c_2) = 1/4 \ \ s_2/(s_1+s_2), \\ F_{(2,1,0)}(c_1,c_2) = (s_2,s_1) & , & w_{(2,1,0)}(c_1,c_2) = {1}/{4}, \end{eqnarray} where for short, we have written $s_{i}$ instead of $\sqrt{2c_i}$. We now pass to the consequences in terms of the iterated BM ad libitum, the iterated reflected BM, and the $n$th iterated BM. \subsection{Iteration of BM ad libitum } \label{sec:BMal} Proposition \ref{pro:C-K} ensures the convergence of $(I^{(n)}(t_i), {i \in \cro{1,k}})$ to $I[k]$ for any distinct non-zero $t_i$'s, as well as the exchangeability of the limit. Hence $\gaps{I[k]}$ is the limit of the gap MC, and the limit of this MC does not depend on the $t_i$'s. The gaps sequence is not sufficient to describe $I[k]$, even up to a permutation (which would be uniform by exchangeability), since the gaps sequence determines the set of elements of the sequence only up to a translation. We present a simple trick which allows one to bypass this (apparent) difficulty. Consider $I[k+1]$ a $\mu_{k+1}$ distributed sequence. Take $U$ a r.v.
uniform in $\cro{1,k+1}$ independent from the $I_i$'s. By Proposition \ref{pro:C-K}, \begin{eqnarray}\label{eq:dsqgr} J[k+1]:=(I_i-I_U, i \in \cro{1,k+1}) \end{eqnarray} is a random sequence with one zero entry (at a uniform position), whose remaining entries have the same distribution as $(I_i,1\leq i \leq k)$. Moreover, $\gaps{J[k+1]}=\gaps{I[k+1]}$ since translations preserve gaps sequences. Denote by $\gamma_k$ the distribution of $\gaps{I[k+1]}$. The following proposition, a consequence of the previous discussion, allows one to recover $\mu_k$ from $\gamma_k$. \begin{pro} \label{pro:gaps sequenceToISP} Consider $(G_i,{i \in \cro{1,k}})$ a random vector distributed according to $\gamma_k$, $U$ a uniform r.v. on $\cro{0,k}$ and $\tau$ a uniform random permutation taken in ${\cal S}\cro{0,k}$, all these r.v. being independent. The following identity holds: $(\overline{G}_{\tau(i)}-\overline{G}_U, 0 \leq i \leq k)\sur{=}{(d)} J[k+1].$ \end{pro} It remains to describe $\gamma_k$. The case $k=1$ is a consequence of Proposition \ref{pro:C-K}, see also Remark \ref{rem:1}. For $k\geq 2$, this can be obtained by looking at the limit of the gap MC, starting with some initial positive gaps sequence $g[k]$, since the gap MC inherits from the initial chain (the iterated BM) the property of possessing a limiting distribution, independent from the starting point. Clearly, the initial sequence can be taken random (with values in $`R^{+}{}^k$); for example, one can start with some independent exponential r.v. with parameters $\lambda_1,\dots,\lambda_k$, and this is what we will do, since Proposition \ref{eq:lin} and Lemma \ref{lem:transfo} allow one to control exactly the evolution of the distribution of the gaps sequence MC in this case. Finding the limiting distribution in this case amounts to finding the fixed point of ${\sf Op}_k$.
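At the parameter level, the six maps $F_\tau$ and weights $w_\tau$ given above for $k=2$ define a random dynamical system that is simple to simulate. A minimal sketch (Python; the function names are ours), which also checks that the weights sum to $1$ and that the compact set $[2,8]^2=[2,2k^2]^k$ is preserved:

```python
import math
import random

def maps_and_weights(c1, c2):
    """The six (F_tau, w_tau) pairs for k = 2, with s_i = sqrt(2 c_i)."""
    s1, s2 = math.sqrt(2 * c1), math.sqrt(2 * c2)
    s = s1 + s2
    return [
        ((s1, s2), 0.25),
        ((s1, s), 0.25 * s2 / s),
        ((s, s2), 0.25 * s1 / s),
        ((s2, s), 0.25 * s1 / s),
        ((s, s1), 0.25 * s2 / s),
        ((s2, s1), 0.25),
    ]

def parameter_chain_step(c, rng):
    """One step of the parameter chain: pick F_tau with probability
    w_tau(c) and apply it to the current parameter pair c."""
    pairs = maps_and_weights(*c)
    u, cum = rng.random(), 0.0
    for value, weight in pairs:
        cum += weight
        if u <= cum:
            return value
    return pairs[-1][0]

rng = random.Random(2)
c = (2.0, 2.0)
trajectory = [c]
for _ in range(2000):
    c = parameter_chain_step(c, rng)
    trajectory.append(c)
```

The visited points give a rough picture of the support of the invariant law of this chain.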
From Proposition \ref{eq:lin} and Lemma \ref{lem:transfo}, one sees that ${\sf Op}_k$ sends an element of ${\sf MEX}_k$ onto a weighted sum of elements of ${\sf MEX}_k$, where the total weight is 1: at the level of parameter laws, it acts as a Markov kernel. This can be better understood if, instead of seeing the action of ${\sf Op}_k$ at the level of functions, it is seen at the level of the parameters (the parameters of the involved exponential distributions): consider a (discrete time homogeneous) MC $(Z^{(n)}[k]=(Z^{(n)}(1),\dots,Z^{(n)}(k)), n \geq 0)$ defined on $\mathbb{R}^{\star}{}^k$ whose kernel $Q$ is defined, for any Borel set $A$ of $\mathbb{R}^k$ and $\lambda[k] \in \mathbb{R}^{\star}{}^k$, by \begin{eqnarray}\label{eq:parameter-kernel} Q ( \lambda[k],A)=\sum_{\tau\in {\cal S}\cro{0,k}} w_{\tau}(\lambda[k]) \delta_{F_{\tau}(\lambda[k])}(A). \end{eqnarray} In other words, \begin{eqnarray}\label{eq:MC-parameter} `P\left( Z^{(n+1)}[k]=F_{\tau}(\lambda[k])~|~ Z^{(n)}[k]=\lambda[k]\right)= w_\tau(\lambda[k]) \textrm{ for any }\tau\in {\cal S}\cro{0,k}.\end{eqnarray} If $Z^{(n)}[k]\sim \nu$, denote by $\nu Q$ the distribution of $Z^{(n+1)}[k]$. We can sum up the preceding considerations as follows: \begin{pro}\label{pro:etsdy} Assume that the gaps sequence $({\sf G}^{(n)}(i),{i \in \cro{1,k}})$ at time $n=0$ is a mixture of exponential distributions with density $f_0$ and parameter law $\nu^{(0)}$. Then $\nu^{(0)} Q$ is the parameter law of ${\sf Op}_k(f_0)$. More generally, $\nu^{(0)} Q^n$ is the parameter law of $f_n={\sf Op}_k^{{(n)}}(f_0)$, the density of $({\sf G}^{(n)}(i),{i \in \cro{1,k}})$. \end{pro} We now conclude by discussing the asymptotic behaviour of the parameter law MC. \begin{pro} \label{pro:ap} (1) $Q$ is ergodic in $[2,2k^2]^k$ (meaning that for any $\nu_k^{(0)}$ having its support in $[2,2k^2]^k$, $\nu_k^{(0)}Q^n$ converges weakly when $n\to +\infty$ to a distribution $\nu_k$ independent from $\nu_k^{(0)}$).
\\ (2) The probability density $g$ whose parameter law is $\nu_k$ is a solution of ${\sf Op}_k(g)=g$. \end{pro} \begin{proof} We prove the two statements together. Let ${\cal M}(S)$ be the set of probability measures with support in $S$. Since the compact set $[2,2k^2]^k$ is stable under each $F_{\tau}$, the set ${\cal M}([2,2k^2]^k)$ is stable under $Q$. Take $\nu^{(0)}_k$ in ${\cal M}([2,2k^2]^k)$, the parameter law of some density $f_0$. The sequence $(\nu^{(n)}_k:=\nu^{(0)}_kQ^{n},n\geq 0)$ possesses an accumulation point $\nu_k$ in the compact set ${\cal M}([2,2k^2]^k)$. Consider a converging subsequence, still denoted $\nu^{(n)}_k$. Recall that $\nu^{(n)}_k$ is the parameter law of $f_n:={\sf Op}_k^{{(n)}} (f_0)$. For any fixed $x[k]$, the map $\lambda[k]\to \prod_{i=1}^k{\sf Exp}\left[ \lambda_i,x_i\right]$ is bounded and continuous on $`R^+{}^k$; therefore $\nu^{(n)}_k\to \nu_k$ implies that for any fixed $x[k]\in(0,+\infty)^k$, \begin{eqnarray}\label{eq:schqfz} f_n(x[k])=\int \prod_{i=1}^k {\sf Exp}\left[ \lambda_i,x_i\right] d\nu^{(n)}_k(\lambda[k])\to f(x[k]):=\int \prod_{i=1}^k {\sf Exp}\left[ \lambda_i,x_i\right] d\nu_k(\lambda[k]).\end{eqnarray} The fact that $f$ is a density can be checked by Fubini. Denote by $\eta_n$ the distribution on $`R^k$ whose density is $f_n$, and by $\eta$ the one whose density is $f$. By Scheffé's theorem, the pointwise convergence \eref{eq:schqfz} implies the convergence of $\eta_n$ to $\eta$. This implies $\eta=\gamma_k$ (by uniqueness of the limit of the gaps sequence Markov chain), and then $f$ coincides with $\lim_n {\sf Op}_k^{(n)}(f_0)$. By Proposition \ref{pro:etsdy}, $f$ is the density of $\gamma_k$. We must add that a function $f$ in ${\sf MEX}_k$ possesses a unique parameter law, which implies that $\nu_k^{(0)}Q^{n}$ possesses a unique accumulation point, and then converges in distribution.
The uniqueness of the parameter law comes from \eref{eq:par-law2}, where one sees that if $\nu$ is the parameter law of $f$, then $f$ is the Laplace transform of the measure $\left(\prod_{i=1}^k \lambda_i\right) d\nu(\lambda[k])$. \end{proof} \begin{rem} Take a bounded continuous function $f:`R^k\to `R$. Our representation of the gaps sequence distribution of the iterated BM allows one to calculate $`E({f(G[k])})$ under $\gamma_k$ and to give a representation using $\nu_k$ only: \begin{eqnarray} \label{eq:efg} `E({f(G[k])}) & =& \int_{\mathbb{R}_+^k} f(x[k]) d \gamma_k(x[k]) \\ & = &\int_{\mathbb{R}_+^k} \left(\int_{\mathbb{R}_+^k} f(x[k]) \prod_{j=1}^k \lambda_j e^{-\lambda_j x_j} dx_j \right) d \nu_k(\lambda[k]); \end{eqnarray} hence, it appears clearly that $`E({f(G[k])})$ can be computed using the parameter distribution $\nu_k$ only. More generally, using Proposition \ref{pro:gaps sequenceToISP}, one can use this formula to compute $`E({f(I[k])})$ too, which can then also be expressed in terms of $\nu_k$ only. \end{rem} MCs with a kernel such as $Q$, that is, relying on successive applications of functions $F_\tau$ taken at random in a set of functions ${\cal F}=(F_\tau,\tau \in {\cal S}\cro{0,k})$, with probabilities depending (or not) on the current position, are called iterated function systems (IFS) in the literature~\cite{BD85,BDEG88,F04}. Here, since $\Theta^{(0)}_k:=[2,2k^2]^k$ is stable under all the $F_\tau$ (for $\tau \in {\cal S}\cro{0,k}$), it is easily seen that for \[\Theta^{(n)}_k:=\bigcup_{\tau\in {\cal S}\cro{0,k}} F_\tau(\Theta_k^{(n-1)}),\] the sequence $(\Theta_k^{(n)},n\geq 0)$ is a sequence of non-increasing compact sets whose (non-empty) limit is a compact set $\Theta_k$. Using the portmanteau theorem and the fact that $n\mapsto \Theta^{(n)}_k$ is decreasing for the inclusion partial order (see Figure \ref{fig:IFS} for a representation of $\Theta_2$), we can establish that for any $k\geq 1$, $\Theta_k\supset {\sf{Support}}(\nu_k)$.
\begin{figure} \centerline{\includegraphics[width = 6cm]{support-eps-converted-to.pdf}} \caption{\label{fig:IFS}The support of $\nu_2$ computed by a program.} \label{fig:support} \end{figure} \begin{lem} $\Theta_2={\sf support}(\nu_2)$. \end{lem} \begin{proof} For $k=2$, it is easily seen that all the $F_\tau$ (given in \eref{eq:FW}) are contracting in $`R^2$ equipped with the Euclidean distance. Following classical theorems (e.g. Hutchinson \cite[section 3]{H81}), it turns out that $\Theta_2^{(n)}$ converges to $\Theta_2$ for the Hausdorff metric for any starting compact set $\Theta^{(0)}_2 \subset [2,8]^2$ (and not only from $[2,8]^2$ as stated above). In particular, suppose that $\Theta^{(0)}_2=\{(2,2)\}$, and that the starting measure is $\nu^{(0)}=\delta_{(2,2)}$. Recall that $\nu^{(n)}\to \nu_2$ (since the convergence $\nu^{(n)}\to \nu_2$ holds for any starting distribution $\nu^{(0)}$ whose marginals have no atom at $0$). \par Take any $x\in \Theta_2$ and any $`e >0$. By Hutchinson's result, for $n$ large enough $\Theta^{(n)}_2\cap B(x,`e)\neq \varnothing$, which means, taking into account the positivity of the $w_{\tau}$'s, that $ \nu^{(0)}Q^{n} (B(x,`e))>0$: some mass has been transported into a neighbourhood of $x$ in $n$ steps from ${(2,2)}$. This is a first step in our proof that $x\in {\sf Support}(\nu_2)$. Now, observe that for any $\rho>0$, there exists $m\geq 1$ such that \[F_{0,1,2}^{\circ m}([2,8]^2)\subset B((2,2),\rho),\] implying that $m$ iterations of $F_{0,1,2}$ (see \eref{eq:FW}) bring back all the mass (that is, 1) into a neighbourhood of $(2,2)$. The probability of performing these iterations of $F_{0,1,2}$ is positive (since $\inf_{(c_1,c_2)\in [2,8]^2} w_{0,1,2}(c_1,c_2) >0$). Now, since all the functions $F_{\tau}$ are uniformly continuous on the compact set $[2,8]^2$, for $\rho$ small enough, any distribution $\nu^{(0)}{}'$ with support included in $ B((2,2),\rho)$ will also satisfy $ \nu^{(0)}{}'Q^{n} (B(x,2`e))>0$.
\end{proof} \begin{rem}\label{rem:Hutch} Hutchinson \cite[section 3]{H81} characterises the set $\Theta_2$: it is the closure of the set of fixed points of the functions $(F_{\tau_1}\circ \dots \circ F_{\tau_m}, m\geq 1, \tau_i \in {\cal S}\cro{0,2})$. \end{rem} Using \cite{BDEG88} and some analysis, for $k=3$, the IFS with place-dependent probabilities is contracting on average for the $\|.\|_2$ distance. This can be proved by computing the Jacobian matrices $J_\tau(c[k])=(\frac{\partial F_{\tau,i}(c[k])}{\partial c_j})_{1\leq i \leq k}$ of the $F_\tau$'s, and by proving that their norms $N_\tau(c[k]):=\sup_{\rho\neq 0} \|\rho J_\tau(c[k])\|_2/\|\rho\|_2$ satisfy $\sum_\tau w_\tau(c[k]) \log (N_\tau(c[k]))<0$ (this can be proved by first taking some bounds on the $w_\tau$, and then using the bound $\log (N_\tau(c[k]))\leq \log (N_\tau(2,\dots,2))$). From Theorem 1.2 in \cite{BDEG88}, $\nu_3$ is of pure type, atomic, or absolutely continuous. We think that the same results can be proved with additional work for $k=4$, but for $k\geq 5$, other methods would be needed since the average contraction property seems to fail. We were not able to find in the literature any general result allowing one to prove the identification of $\Theta_k$ with ${\sf Support}( \nu_k)$ or to compute the Hausdorff dimension of this support. We conjecture that for any $k\geq 1$, ${\sf Support}(\nu_k)$ coincides with $\Theta_k$, and that the Lebesgue measure of this support (or of $\Theta_k$) is 0. Now, we describe the distribution of the gaps sequence of the IBM thanks to $\nu_k$. \begin{theo} \label{theo:multi-dim} Let $k$ be a positive integer. If $(G_i,{i \in \cro{1,k}})$ is a random vector with distribution $\gamma_k$, then \begin{equation} (G_i,{i \in \cro{1,k}}) \sur{=}{(d)} (C_1 E_1,C_2 E_2,\dots,C_k E_k) \end{equation} where the $E_i$'s are i.i.d., ${\sf Exp}[1]$ distributed, independent from $C[k]$, a random vector of law $\nu_k$.
\end{theo} According to this theorem and Proposition \ref{pro:ap}, we may deduce the following multivariate stochastic order bounds for $\gamma_k$, which, in a sense, describe the repulsive-attractive property of the gaps sequence. \begin{pro}\label{eq:mso} Let $k$ be a positive integer and $(G_i,{i \in \cro{1,k}})$ a random vector with distribution $\gamma_k$. For any bounded increasing function $h:`R^k\to `R$, \[`E(h(E_i/(2k^2),1\leq i \leq k)) \leq `E(h(G_i,1\leq i \leq k))\leq`E(h(E_i/2,1\leq i \leq k))\] where the $E_i$ are i.i.d. random variables, ${\sf Expo}[1]$ distributed. \end{pro} We can add here that the bound $2k^2$ is not tight (even in the case $k=2$, as one can see in Figure \ref{fig:IFS}). \subsection{Iteration of reflected BM ad libitum} \label{sec:IRMal} In this section, $X=|B|$ is the reflected BM (RBM), and $I^{(n)}=X_n \circ \dots \circ X_1$ is the $n$th iterated RBM, the $X_i$ being i.i.d. copies of $X$. \begin{pro}\label{pro:rflbm} Let $t_0=0,t_1,\cdots,t_k$ be some non-negative distinct real numbers. We have the convergence \[\left(I^{(n)}(t_i), {i \in \cro{0,k}}\right) \xrightarrow[n]{(d)} (0,I_1,\dots,I_k)\] where $(I_i, {i \in \cro{1,k}})$ is invariant under permutation, does not depend on the $t_i$'s, and takes its values in $(0,+\infty)^k$. Moreover, we have $I_1\sim {\sf Expo}[2]$. \par Hence, the gaps sequence MC $(\gaps{I^{(n)}(t_i), {i \in \cro{0,k}}},n\geq 1)$ converges and its limit $(G_i, {i \in \cro{1,k}}):=\gaps{0,I_1,\dots,I_k}$ determines $(0,I_1,\dots,I_k)$: for a uniform permutation $\tau\in {\cal S}\cro{1,k}$ independent from $(G_i, {i \in \cro{1,k}})$, \[I[k]\sur{=}{(d)} \left(\ov{G}_{\tau(i)}, {i \in \cro{1,k}}\right).\] \end{pro} The proof of this proposition can be adapted from the proof of Theorem \ref{theo:ISP}. All these results rely on the ergodicity of the Markov chain $\left(I^{(n)}(t_i), {i \in \cro{0,k}}\right)$.
The estimates needed to deal with the reflected Brownian case can be readily adapted from the standard Brownian case.\par We now describe the limiting gaps sequence using again the MC at the parameter level. First, the Markov kernel of the RBM has a density. Set, for any $g,x,y\geq 0$, $M_{g}(x,y)=`P\left(X_{t+g}\in dy ~| X_t \in dx \right)$ for $X=|B|$. We have, by André's reflection principle, \begin{eqnarray}\label{eq:MKBR} M_g(x,y) = \left(\Phi_g(y-x)+ \Phi_g(y+x)\right)1_{y\geq 0} \end{eqnarray} where $\Phi_g$ is the density of $B_g$. The gaps MC kernel can be described too, adapting the considerations of Section \ref{sec:BP} (in words, 0 stays at the left). Starting with some gaps sequence $\gaps{t_0=0,t_1,\dots,t_k}=g[k]$, we will have $\gaps{X(t_0)=0,X(t_1),\dots,X(t_k)}=x[k]$ if, with the same notation as in \eref{eq:S}, for ${i \in \cro{1,k}}$, $X(\widehat{t}_i)= \ov{x}_{\tau(i)}$ for some permutation $\tau \in {\cal S}\cro{1,k}$ (instead of $\cro{0,k}$ for the iterated BM). We then have in this case a direct link between the Markov kernel of the gaps sequence and that of the initial chain, since $\wh{t}_j=\sum_{i=1}^j g_i$. We get in this case \begin{eqnarray} \label{eq:Psi2} \Psi_{g[k]}(x[k])&=&\sum_{\tau\in {\cal S}\cro{1,k}} \prod_{i=1}^k M_{g_i}\left(\ov{x}_{\tau(i-1)}, \ov{x}_{\tau(i)} \right) \,1_{x_i>0}. \end{eqnarray} Therefore, modifying a bit \eref{eq:elt}, one can still see that ${\sf MEX}$ is stable under the MC with kernel $\Psi_k$. One observes, using \eref{eq:sim-trans} and \eref{eq:double}, \[ \int_{`R^+{}^k} \prod_{i=1}^k {\sf Exp}[\lambda_i,g_i] \Psi_{g[k]}(x[k]) dg_1...dg_k ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\] \[~~~~~~~~~~~~~~~~~~~~~ =\frac{1}{2^k}\sum_{\tau\in {\cal S}\cro{1,k}} \prod_{i=1}^k \left({\sf Exp}\left[{\sqrt{2\lambda_i}}, |\ov{x}_{\tau(i)}-\ov{x}_{\tau(i-1)}|\right]+{\sf Exp}\left[{\sqrt{2\lambda_i}}, |\ov{x}_{\tau(i)}+\ov{x}_{\tau(i-1)}|\right]\right).
\] After expanding this product, one can again collect the terms containing a given $x_i$. Using the same considerations as those below Remark \ref{rem:1}, this is also \[=\frac{1}{2^k}\sum_{\tau\in {\cal S}\cro{1,k}}\sum_{D\subset\cro{1,k}} \prod_{i \in D} {\sf Exp}\left[{\sqrt{2\lambda_i}}, |\ov{x}_{\tau(i)}-\ov{x}_{\tau(i-1)}|\right]\prod_{i \in \complement D}{\sf Exp}\left[{\sqrt{2\lambda_i}}, |\ov{x}_{\tau(i)}+\ov{x}_{\tau(i-1)}|\right]. \] This formula is the analogue, for the RBM, of \eref{eq:elt} for the BM. Let $E_{\tau,i}$ be as defined in Section \ref{sec:BP}, and \begin{eqnarray*} E'_{\tau,i}&=& \{j: x_i\in | \ov{x}_{\tau(j)}|\} = \{j: \tau(j)\geq i\}\\ E''_{\tau,i}&=& \{j: x_i\in | \ov{x}_{\tau(j-1)}|\} = \{j: \tau(j-1)\geq i\}. \end{eqnarray*} Set \[F_{\tau,D,i}(c[k])=\sum_{j \in D, j \in E_{\tau,i}} \sqrt{2c_j} +\sum_{j \in \complement D, j \in E'_{\tau,i}} \sqrt{2c_j}+ \sum_{j \in \complement D, j \in E''_{\tau,i}} \sqrt{2c_j}\] and \[w_{\tau,D}(c[k])=\frac{1}{2^k} \prod_{i=1}^k \frac{\sqrt{2c_i}}{F_{\tau,D,i}(c[k])}.\] Again let \[F_{\tau,D}=(F_{\tau,D,i}, 1\leq i \leq k).\] Similarly to Lemma \ref{lem:transfo}, we have \begin{lem}\label{lem:transfo2} If $f$ is the function $f(x[k])=\prod_{i=1}^k {\sf Exp}[\lambda_i,x_i]$ for some $\lambda[k]\in(0,+\infty)^k$, then \begin{displaymath} {\sf Op}_k(f)(x[k]) = \sum_{D\subset\cro{1,k}} \sum_{\tau\in {\cal S}\cro{1,k}} w_{\tau,D}(\lambda[k]) \prod_{i=1}^k {\sf Exp}\left[ F_{\tau,D,i}(\lambda[k]),x_i\right]. \end{displaymath} \end{lem} As in the iterated Brownian motion case, one can associate with this operator a Markov chain $Z^{(n)}$ with kernel $Q$ (defined as in \eref{eq:parameter-kernel}) at the level of the parameters (see also \eref{eq:MC-parameter}). Again, the Markov chain $Z^{(n)}$ eventually stays confined in a compact region of $`R^+{}^k$ (the compact set $[2,18k^2]^k$ is preserved by each of the $F_{\tau,D}$).
By the same considerations as those of Section \ref{sec:BP}, Proposition \ref{pro:ap} holds in the present case (with $[2,18k^2]^k$ instead of $[2,2k^2]^k$). The analogue of Theorem \ref{theo:multi-dim} holds too, for $\nu_k$ the fixed point of $Q$, and so does Proposition \ref{eq:mso}, with $18k^2$ instead of $2k^2$ (again, $18k^2$ is not tight). \subsection{$n$th iteration of the BM ad libitum} \label{sec:IRMn} In the literature, the standard iterated Brownian motion corresponds to our process $I^{(2)}$. It has been studied in depth. It can be used to construct solutions to partial differential equations~\cite{Fun79}. Burdzy studied some of its sample path properties~\cite{Bur93}. Many results have been obtained on its probabilistic and analytic properties; see~\cite{Bur93,Ber96,ES99,BK95,Xiao98,KL99} and the references therein. The $n$th IBM also allows one to construct solutions of differential equations~\cite{OB09}, but it is less studied; only~\cite{Ber96} mentions that its results can be extended to the $n$th IBM. As far as we are aware, there is no result describing the finite-dimensional distributions of this process. In the sequel, we show that our gaps point of view allows one to give a (non-trivial) description of them, yet one sufficiently simple to allow some exact computations for small values of $n$ and $k$. Let $n\geq 1$ be fixed, as well as $(t_0=0,t_1,\dots,t_k)$, some distinct numbers. The aim of this part is to describe the distribution of $(I^{(n)}(t_i), {i \in \cro{0,k}})$, where $I^{(n)}=B_n \circ\cdots \circ B_1$, the $B_i$ being i.i.d. two-sided BMs. We build on the considerations presented in Section \ref{sec:IPE}. Start with formula \eref{eq:tran}, which expresses the kernel of the encoding Markov chain. Here, of course, $\Phi_g$ is the Gaussian density.
Again, by \eref{eq:sim-trans}, \begin{eqnarray}\label{eq:fondqd} \int \prod_{i=1}^k {\sf Exp}[\lambda_i,g_i]K_{g[k],\tau}((g'_1,\dots,g'_k),\tau'\circ\tau) dg_1...dg_k = \frac{1}{2^k}\prod_{i=1}^k {\sf Exp}\left[\sqrt{2\lambda_i},|\Delta \ov{g'}_{\tau'(i)}|\right]. \end{eqnarray} Thanks to \eref{eq:iden1}, we can again rewrite the right-hand side to encode the evolution on the parameter space. Setting $m_i=\min\{\tau'_{i-1},\tau'_{i}\}$ and $M_i=\max\{\tau'_{i-1},\tau'_{i}\}$, we get $|\Delta \ov{g'}_{\tau'_i}|= g'_{1+m_i}+\cdots + g'_{M_i}$. Once again, collect the different contributions: set $E_j(\tau,\tau')=\{~i: j \in \cro{m_i +1,M_i}\}$, the set of indices $i$ such that $g_j'$ contributes to $|\Delta \ov{g'}_{\tau'(i)}|$. The RHS of \eref{eq:fondqd} rewrites as \[w_{\tau,\tau'}(\lambda[k]) \prod_{j=1}^k{\sf Exp}\left[ F_{\tau,\tau',j}(\lambda[k]), g'_j\right]\] where \[w_{\tau,\tau'}(\lambda[k])= \frac{1}{2^k}\prod_{j=1}^k \frac{\prod_{i \in E_j(\tau,\tau')} \sqrt{2\lambda_i}}{\sum_{i\in E_j(\tau,\tau')}\sqrt{2\lambda_i}} \textrm{ and } F_{\tau,\tau',j}(\lambda[k])=\sum_{i\in E_j(\tau,\tau')}\sqrt{2\lambda_i}.\] Consider ${\sf MEX}'_k$, the set of measures that are mixtures of distributions on $`R^k\times {\cal S}\cro{0,k}$ of the type $\left(\prod_{i=1}^k {\sf Exp}[\lambda_i]\right)\times \delta_{\tau}$, where $\delta_\tau$ is a Dirac mass at a permutation $\tau$. The previous considerations show that the kernel $K$ operates linearly on ${\sf MEX}'_k$.
It sends $\left(\prod_{i=1}^k {\sf Exp}[\lambda_i]\right)\times \delta_{\tau}$ onto \[\sum_{\tau'} w_{\tau,\tau'}(\lambda[k]) \left(\prod_{j=1}^k{\sf Exp}\left[F_{\tau,\tau',j}(\lambda[k]), g'_j\right]\right)\times \delta_{\tau'\circ \tau}.\] This can again be written, at the parameter level, in the form of a time-homogeneous MC on $`R^k \times {\cal S}\cro{0,k}$ which, starting at time 0 at position $(\lambda[k],\tau)$, takes at time 1 the value $((F_{\tau,\tau',j}(\lambda[k]), 1\leq j \leq k), \tau'\circ\tau)$ with probability $w_{\tau,\tau'}(\lambda[k])$ (for any $\tau'\in {\cal S}\cro{0,k}$). Denote again by $Q$ the corresponding kernel. This explicit description allows computations for small values of $k$ and of $n$. Recall from the beginning of Section \ref{sec:IPE} the decoding map $C^{-1}$. Finally, denoting by $ `E_{\lambda[k],\tau}$ the expectation when the initial encoding distribution is $\left(\prod_{i=1}^k {\sf Exp}(\lambda_i)\right) \times \delta_\tau$, we find \begin{eqnarray}\label{eq:qfdqfd} `E_{\lambda[k],\tau}(f(I^{(n)}[k]))&= &\sum_{\tau'} \int_{`R^+{}^k}Q^{n}_{(\lambda[k],\tau)}\left((d\lambda'_1,\dots,d\lambda'_k),\tau'\circ \tau\right)\\ &\times &\int_{`R^k} f(C^{-1}(y[k],\tau'\circ \tau))\left( \prod_{i=1}^k {\sf Exp}[\lambda'_i,y_i]\right) dy_1...dy_k. \end{eqnarray} The LHS appears to be the Laplace transform of $f(I^{(n)}(t_1),\dots,I^{(n)}(t_k))$ with respect to the initial gaps sequence, and thus it characterises the distribution. This is not a simple description, but we think that it is the simplest representation of the finite-dimensional distributions of the iterated BM one can find. \section{Stable processes iterated ad libitum} \label{sec:ISPal} \subsection{Main results} In this section, we consider independent two-sided stable processes $X_1,X_2, \dots,$ with parameters $(\alpha,\sigma,r)$ as defined in Section \ref{sec:P}, and their successive iterations $I^{(n)}=X_n \circ \cdots \circ X_1$.
In this section, $\Phi_g$ is no longer the Gaussian density but the density of $X_1(g)$. \par Two-sided stable processes possess independent and stationary increments, as well as a scaling property, which makes their iterations very similar to those of BM (general Lévy processes seem more difficult to handle because they lack this scaling property). Here are the convergence results we get for iterated stable processes $I^{(n)}$, as described in Section \ref{sec:P}. \begin{theo} \label{theo:ISP} Assume $\sigma\in(0,+\infty)$. Take $k,n\geq 1$ and some non-zero $t_1,\dots,t_k$. Set \[I^{(n)}[k]:=(I^{(n)}(t_1),\cdots,I^{(n)}(t_k)).\] \begin{enumerate} \compact \item When $\alpha \leq 1$, for any $r$ and any $t>0$, $I^{(n)}(t)$ does not converge in distribution in $\mathbb{R}$. \item When $1 < \alpha \leq 2$ and $|r|>1$, $I^{(n)}(t_1)$ does not converge in distribution in $\mathbb{R}$. \item When $1 < \alpha \leq 2$ and $|r|<1$, the MC $I^{(n)}[k]$ converges in distribution. The limit distribution $\mu_k$ does not depend on the $t_i$'s and is then exchangeable. For $I[k] \sim \mu_k$, the equality $I[k]\sur{=}{(d)} (X(I_1),\cdots, X(I_k))$ holds. Moreover, $(I_2-I_1,\dots, I_k-I_1)\sim \mu_{k-1}$.\\ When $r=0$, under $\mu_1$, $I_1\sur{=}{(d)} `e\prod_{i\geq 0} |X(1)^{(i)}|^{1/{\alpha^i}}$ where the $X(1)^{(i)}$'s are i.i.d. copies of $X(1)$ and $`e$ is an independent uniform random sign. \end{enumerate} \end{theo} \begin{rem}\label{rem:comp} If $X$ and $X'$ are two stable processes with parameters $(\alpha,1,0)$ and $(\alpha,\sigma,0)$ for some $\alpha \in(1,2]$ and $\sigma>0$, then $X'\sur{=}{(d)} \sigma X$. The successive iterations of $(\alpha,1,0)$- and $(\alpha,\sigma,0)$-stable processes, and their limits (if any), can be compared by a simple coupling, but this property fails when $r\not= 0$. \end{rem} \begin{rem} Notice that since $\mu_{k+1}$ is exchangeable, the rank of $I_1$ in $I[k+1]$ is uniform.
Therefore, the (random) number of indices $\#\{j: 2\leq j \leq k+1, I_j-I_1 >0 \}$ is uniform in $\cro{0,k}$. In other words, if $I[k]\sim\mu_k$, the rank of 0 in the list $(0,I_1,\dots,I_k)$ is uniform. \end{rem} \begin{lem} When $1 < \alpha \leq 2$ and $|r|<1$, the MC $({\sf G}^{(n)}[k], n \geq 1)$ converges in distribution, and the limit distribution does not depend on the initial non-zero state. \end{lem} \begin{proof} Take some gaps sequence $g[k]\in \mathbb{R}^\star{}^k$. It is the gaps sequence of some non-zero and distinct times $t_0, t_1,\dots,t_k$. Start with $I^{(0)}[k+1]=(t_0,\dots,t_k)$. Since $I^{(n)}[k+1]$ converges in distribution to a limit independent from the $t_i$ (Theorem \ref{theo:ISP}(3)), the gaps sequence MC ${\sf G}^{(n)}[k]:=\gaps{I^{(n)}[k+1]}$ converges too. \end{proof} Consider the case $k=1$ and $X$ a symmetric stable process for some $\alpha\in(1,2]$ and $r=0$. If the gap at time 0 is distributed as $G$, then at time 1 the gap is distributed as $|X(G)|$. From the equality $G \sur{=}{(d)} |X(G)|\sur{=}{(d)} G^{1/\alpha} |X(1)|$ we infer \begin{eqnarray}\label{eq:dpq} G\sur{=}{(d)} \prod_{i\geq 0} |X(1)^{(i)}|^{\frac1{\alpha^i}}, \end{eqnarray} where the $X(1)^{(i)}$ are i.i.d. copies of $X(1)$ (the complete argument can be adapted from Section \ref{sec:PT}). When $r\neq 0$, there is no such simple formula. Let $\phi$ be the density of $G$ as defined in \eref{eq:dpq}. We have ${\sf Op}_1(\phi)=\phi$, and $\int_{g\geq 0} \phi(g) \Psi_g(x)dg=\phi(x)$ is an identification between the densities of $|X(G)|$ and of $G$. Using that $\phi[c,g]:=\frac{\phi(g/c)}c$ is the density of $cG$, we get that $\int_{g\geq 0} \phi[c,g] \Psi_g(x)dg$ is the density of $|X(cG)|\sur{=}{(d)} c^{1/\alpha}|X(G)|\sur{=}{(d)} c^{1/\alpha} G$, whose density is $\phi[c^{1/\alpha},x]$. All of this can be summed up in \begin{eqnarray}\label{eq:mono-stable} \int_{g\geq 0} \phi[c,g] \Psi_g(x)dg =\phi[c^{1/\alpha},x].
\end{eqnarray} Let $\nu[c]$ be the distribution whose density is $\phi[c,.]$. The map ${\sf Op}_1$ sends $\nu[c]$ onto $\nu[c^{1/\alpha}]$ and is a linear map on the set of mixtures of the distributions $\nu[c]$. This property, which in the Brownian case allowed us to prove that ${\sf Op}_k$ is linear on the set of mixtures of products of exponential distributions, cannot be extended here. The reason is that, to separate the variables in \eref{eq:elt}, we used \eref{eq:iden1}. This important property holds only for exponential distributions, and it turns out that in the stable case, product measures of the form $\prod_{i=1}^k \phi[c_i,x_i]$ are not sent by ${\sf Op}_k$ onto mixtures of measures of the same kind. We were not able to find a family of measures on which ${\sf Op}_k$ would operate simply, but the discovery of such a family would be an important step towards the identification of the distribution of $I[k]$. \subsection{Occupation measure in the stable case} \label{sec:om} As stated in Proposition \ref{pro:C-K}, Curien and Konstantopoulos \cite{C-K} obtained some information about the occupation measure of the iterated Brownian motion ad libitum. In the stable case, when convergence holds, the family of limiting distributions $(\mu_k)$ is consistent, and since these are distributions of exchangeable vectors, by the Kolmogorov extension theorem together with the de Finetti representation theorem, there exists a random measure $\mu$ such that for any $k\geq 0$, $\mu_k$ is the distribution of a vector $(U_1,\dots,U_k)$ of i.i.d. random variables sampled from $\mu$ (this is explained in the Brownian case in \cite{C-K}). The main tool used in \cite{C-K} to characterise the regularity of the density of the occupation measure is a result of Pitt \cite{Pit}, available only in the Gaussian case. For the moment, we are not able to obtain a similar result in the stable case, and we therefore do not pursue this direction.
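In the case $k=1$, $r=0$, the representation \eref{eq:dpq} also gives an inexpensive way to sample the limiting gap $G$: since $\alpha>1$, the exponents $\alpha^{-i}$ decay geometrically, so truncating the infinite product after a few dozen factors is accurate. A minimal Python sketch, restricted to the Brownian case $\alpha=2$, where we take $X(1)$ standard Gaussian (a particular choice of $\sigma$; the function name is ours):

```python
import numpy as np

def sample_limit_gap(n_factors=50, size=100_000, seed=1):
    """Sample the limiting gap G of eq. (eq:dpq), truncated to n_factors terms,
    in the Brownian case alpha = 2, r = 0 with X(1) standard Gaussian:
    G = prod_{i>=0} |X(1)^{(i)}|^{2^{-i}}.  The neglected exponents sum to
    about 2^{-n_factors}, so the truncation error is negligible."""
    rng = np.random.default_rng(seed)
    z = np.abs(rng.standard_normal((n_factors, size)))  # the |X(1)^{(i)}|'s
    exponents = 2.0 ** -np.arange(n_factors)            # alpha^{-i} with alpha = 2
    # accumulate in log scale to avoid numerical under/overflow
    return np.exp(exponents @ np.log(z))

g = sample_limit_gap()
```

The distributional fixed point $G\sur{=}{(d)} \sqrt{G}\,|X(1)|$ can serve as a sanity check on the generated sample: both sides should, for instance, have the same empirical mean.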
In view of Figure \ref{Fig:simu}, we may expect that for small parameters $\alpha$ in $(1,2]$ (close to 1), the density of the local time is not positive everywhere on its support. \begin{figure}[htbp] \centerline{\includegraphics[width=6cm,height=4cm]{tempsloc-stable12.png}~~\includegraphics[width=6cm,height=4cm]{tempsloc-stable15.png}} \centerline{\includegraphics[width=6cm,height=4cm]{tempslocal1point8.png}~~\includegraphics[width=6cm,height=4cm]{tempsloc-BI.png}} \caption{\label{Fig:simu} Simulation of the local time of the iterated centred stable processes ad libitum, in the cases $\alpha=1.2, 1.5, 1.8$ and $2$. Each of them is a histogram made from a sample $(X^{(10)}(t_i),1\leq i \leq 10^6)$ starting from some fixed position.} \end{figure} In the rest of this subsection, we discuss the boundedness of the support of the limiting occupation measure. The proof follows the same structure as that of~\cite[Prop. 7]{C-K}. Let $P$ be any two-sided real process (in our case $P=X,I^{(n)} \text{ or } I$). The range of $P$ on $[a,b]$ is defined by \begin{equation} R_P(a,b) = \sup_{a \leq t \leq b} P(t) - \inf_{a \leq t \leq b} P(t). \end{equation} In the following, set $D=R_X(0,1)$. \begin{lem} \label{lem:rangeISP} For any $|r|<1$ and $\alpha \in(1,2]$, for almost any $t \neq 0$, $R_{I^{(n)}}(0,t)$ converges in law to a r.v. $\Delta$ which does not depend on $t$. Moreover, when $r = 0$, \begin{equation} \label{eq:rangeISPinf} \Delta \sur{=}{(d)} \prod_{i=0}^{\infty} D_i^{\alpha^{-i}} \end{equation} where the $D_i$'s are i.i.d. copies of $D$. \end{lem} \begin{proof} Let $A_n(t) = \inf\{ I^{(n)}(u),0 \leq u \leq t\}$ and $B_n(t) = \sup\{ I^{(n)}(u),0 \leq u \leq t\}$.
When $r=0$, \begin{eqnarray*} R_{I^{(n+1)}}(0,t) & = & \sup_{A_n(t) \leq v \leq B_n(t)} X(v) - \inf_{A_n(t) \leq v \leq B_n(t)} X(v) \\ & = & R_X\left( A_n(t),B_n(t) \right) \label{eq:rangeISP} \\ & \sur{=}{(d)} & \left(B_n(t)-A_n(t)\right)^{\alpha^{-1}} D \\ & = & \left(R_{I^{(n)}}(0,t) \right)^{\alpha^{-1}} D. \end{eqnarray*} By iteration, we get \begin{equation} R_{I^{(n)}}(0,t) \sur{=}{(d)} t^{\alpha^{-n}} \prod_{i=1}^n D_i^{\alpha^{-(i-1)}} \end{equation} where the $D_i$ are i.i.d. copies of $D$. Since $\alpha > 1$ and $t\neq 0$, $t^{\alpha^{-n}} \to 1$ when $n \to \infty$. Now, we have to prove the convergence in law of $\prod_{i=0}^{n-1} D_i^{\alpha^{-i}}$ as $n\to +\infty$. Write \begin{displaymath} \log \prod_{i=0}^{n-1} \left| D_i \right|^{\alpha^{-i}} = \sum_{i=0}^{n-1} \alpha^{-i} \log \left| D_i \right|. \end{displaymath}\par By Doob's $\mathbb{L}^p$ inequality~\cite[Theorem II.1.7]{RY99}, for any $\beta \in \mathbb{R}$, \begin{equation} \prob{D \geq x} \leq \prob{\sup_{0 \leq t \leq 1} |X(t)| \geq \frac{x}{2}} \leq \frac{2^\beta \esp{|X(1)|^\beta}}{x^\beta}. \end{equation} But $\esp{|X(1)|^\beta} < \infty$ if $\beta < \alpha$. For $\beta=1 < \alpha$, \[\prob{ \alpha^{-i} \log \left| D_i \right| > i^{-2}} \leq \frac{\text{Cste}}{e^{\alpha^{i}i^{-2}}},\] which is a summable sequence, since $\alpha>1$. By the Borel--Cantelli lemma, $\prod_{i=1}^n D_i^{\alpha^{-i}}$ converges as $n\to+\infty$. This ends the proof when $r=0$.\\ In the general case, write \begin{eqnarray*} R_{I^{(n+1)}}(0,t) & = & R_X\left( A_n(t),B_n(t) \right) \\ & \leq & (B_n(t) - A_n(t))^{\alpha^{-1}} D + r (B_n(t) - A_n(t)) \\ & \sur{=}{(d)} & (R_{I^{(n)}}(0,t))^{\alpha^{-1}} D + r R_{I^{(n)}}(0,t) \label{eq:inegr} \end{eqnarray*} To prove that $R_{I^{(n)}}(0,t)$ converges, we use Theorem~13.0.1 in~\cite{M-T}. By~\eqref{eq:inegr}, \begin{displaymath} \esp{R_{I^{(n+1)}}(0,t)|R_{I^{(n)}}(0,t)} - R_{I^{(n)}}(0,t) \leq (R_{I^{(n)}}(0,t))^{\alpha^{-1}} \esp{D} - (1-r) R_{I^{(n)}}(0,t).
\end{displaymath} So if $R_{I^{(n)}}(0,t) > \displaystyle \left( \frac{\esp{D}}{1-r} \right)^{\frac{1}{1-\alpha^{-1}}} = M$, then $\esp{R_{I^{(n+1)}}(0,t) |R_{I^{(n)}}(0,t)} - R_{I^{(n)}}(0,t) \leq -1$; otherwise, $\esp{R_{I^{(n+1)}}(0,t)|R_{I^{(n)}}(0,t)} - R_{I^{(n)}}(0,t) \leq M^{\alpha^{-1}} \esp{D} +1 =b$, from which we deduce \begin{equation} \esp{R_{I^{(n+1)}}(0,t)|R_{I^{(n)}}(0,t)} - R_{I^{(n)}}(0,t) \leq -1 + b 1_{[0,M]}(R_{I^{(n)}}(0,t)). \end{equation} This proves the ergodicity of $\left( R_{I^{(n)}}(0,t) ; n \geq 0 \right)$ by \cite[Theorem~13.0.1(iv)]{M-T}. \end{proof} By Lemma~\ref{lem:rangeISP} and the arguments of~\cite[Section 3.2]{C-K}, this proves that the limiting occupation measure has a bounded support a.s. \subsection{Proof of Theorem \ref{theo:ISP}} \label{sec:PT} The main technical point (Theorem \ref{theo:ISP} (3)) concerns the convergence of the MC $(I^{(n)}(t_i), {i \in \cro{0,k}})$ in the stable case, from which we will derive the other convergence results of the paper by some slight modifications. In the proof, $\widetilde{X}(1)$ stands for the symmetric part of $X(1)$, so that $X(t)\sur{=}{(d)} rt+|t|^{1/\alpha}\widetilde{X}(1)$.\\ 1. Assume $\alpha <1$ and $r \in \mathbb{R}$. One has $I^{(n)}(t)\sur{=}{(d)} rI^{(n-1)}(t) + |I^{(n-1)}(t)|^{1/\alpha} \widetilde{X}(1)$. Since $1/\alpha>1$, it is apparent that $|I^{(n)}(t)|$ should become very large. To prove this, we compare $I^{(n)}$ with a deterministic geometric sequence $c^n$ for $(1/\alpha)> c>1$: \[`P\left(|I^{(n)}(t)|\geq c^n \,|\, |I^{(n-1)}(t)| \geq c^{n-1}\right)\geq \inf_{x \geq c^{n-1}} `P( |rx+x^{1/\alpha} \widetilde{X}(1)| \geq c^n).\] For any $x\geq c^{n-1}$, \begin{eqnarray*} `P( |rx+x^{1/\alpha} \widetilde{X}(1)| \geq c^n)&=& 1- `P\left(\frac{- c^n-rx}{x^{1/\alpha}}\leq \widetilde{X}(1)\leq \frac{c^n-rx}{x^{1/\alpha}}\right) \end{eqnarray*} and since the stable distribution possesses a continuous density $h$ at 0 (see Feller \cite[sec.
XV(3)]{fel2}), this is \begin{eqnarray*} & \geq & 1- C h(0) \left(\frac{c^n-rx}{x^{1/\alpha}} -\frac{- c^n-rx}{x^{1/\alpha}}\right)\\ & = & 1- C h(0) \left(\frac{2c^n}{x^{1/\alpha}}\right)\\ & \geq & 1- C h(0) \left(\frac{2c^n}{c^{(n-1)/\alpha}}\right) \end{eqnarray*} for $n$ large enough and some constant $C>0$. Since $1-1/\alpha<0$, these lower bounds converge to 1 geometrically fast, and we deduce \[`P(|I^{(n)}(t)|\geq c^n ,\forall n\geq 1 )>0.\] When $\alpha=1$, for any $r$, $I^{(n)}(t)\sur{=}{(d)} r I^{(n-1)}(t) + |I^{(n-1)}(t)| \widetilde{X}_1^{(n-1)}$. As the distribution of $\widetilde{X}_1$ is symmetric with respect to $0$, $(r I^{(n-1)}(t),|I^{(n-1)}(t)| \widetilde{X}_1^{(n-1)}) \sur{=}{(d)} (rI^{(n-1)}(t),I^{(n-1)}(t) \widetilde{X}_1^{(n-1)})$, from which we get $I^{(n)}(t) \sur{=}{(d)} I^{(n-1)}(t) (r + \widetilde{X}_1^{(n-1)} ) \sur{=}{(d)} t \prod_{i=1}^n \left( r + \widetilde{X}_1^{(i)} \right)$. Taking the logarithm of the absolute value, one gets a random walk, and one sees that $I^{(n)}(t)$ does not converge in distribution.\\ 2. The proof we provide here is valid for any $\alpha>0$. In the sequel, we assume $r>1$ (the case $r<-1$ is similar). For a fixed $t$, $I^{(n)}(t)\sur{=}{(d)} rI^{(n-1)}(t) + |I^{(n-1)}(t)|^{1/\alpha} \widetilde{X}(1)$. For $r>1$, $I^{(n)}(t)$ can be compared with a geometric sequence with common ratio $s\in(1,r)$.
Write \[`P\left(|I^{(n)}(t)|\geq s^n \,|\, |I^{(n-1)}(t)| \geq s^{n-1}\right)\geq \inf_{x \geq s^{n-1}} `P( |rx+x^{1/\alpha} \widetilde{X}(1)| \geq s^n).\] For any $x\geq s^{n-1}$, \begin{eqnarray*} `P( |rx+x^{1/\alpha} \widetilde{X}(1)| \geq s^n)&=& 1- `P(- s^n\leq rx+x^{1/\alpha} \widetilde{X}(1)\leq s^n)\\ &\geq & 1- `P( rx+x^{1/\alpha} \widetilde{X}(1)\leq s^n)\\ &= & 1- `P(\widetilde{X}(1)\geq \frac{ rx-s^n}{x^{1/\alpha}})\\ &= & 1- `P(\widetilde{X}(1)\geq \frac{(r-s)x+ sx-s^n}{x^{1/\alpha}})\\ &\geq & 1- `P(\widetilde{X}(1)\geq \frac{(r-s)x}{x^{1/\alpha}})\\ &= & 1- `P(\widetilde{X}(1)\geq (r-s)x^{1-1/\alpha})\\ &\geq &1- c s^{(n-1)(1-\alpha)} \end{eqnarray*} for $n$ large enough (we have used that $`P(\widetilde{X}(1) \geq v) \leq c' v^{-\alpha}$ for some $c'$ and $v\geq s$, that $r-s$ is a positive constant, and the symmetry of the distribution of $\widetilde{X}(1)$). We deduce that \[`P(|I^{(n)}(t)|\geq s^n ,\forall n\geq 1 )>0.\] \par 3. The proof of the convergence of $I^{(n)}[k]$ we propose is adapted from Curien \& Konstantopoulos \cite{C-K}. The sequence $(I^{(n)}[k],n\geq 1)$ is a MC, and its Markov kernel is given by \[P(y[k];A)=`P((X(y_1),\dots,X(y_k))\in A),\] for any $y[k]\in `R^k$ and any Borel set $A\subseteq \mathbb{R}^k$. As in \cite{C-K}, the Markov chain $I^{(n)}[k]$ is aperiodic, and irreducible with respect to the $k$-dimensional Lebesgue measure on $`R^k$. We prove that it is Harris recurrent (and thus possesses a unique invariant distribution), following the approach of Section 5.5.1 of Meyn \& Tweedie \cite{M-T}. Set, for any $M>0$, \[S_M=\{x[k]\in `R^k:\, M^{-1}\leq |x_i|\leq M \textrm{ and } |x_i-x_j|\geq M^{-1} \textrm{ for all } i\neq j\}.\] Denote by $f_{x[k]}$ the density of $(X(x_1),\dots,X(x_k))$, and let \[F_M(z[k])=\min_{x[k]\in S_M} f_{x[k]}(z[k]).\] It is easily seen that $F_M$ is the density of a $\sigma$-finite measure $\mu_M$ on $`R^k$, with total mass $c_M=\int_{`R^k}F_M(z[k])dz_1\cdots dz_k>0$, and satisfies $F_M(z[k])>0$ for any $z_1,\dots,z_k$.
This provides the following bound on the Markov kernel of our MC: \[`P((X(x_1),\dots,X(x_k))\in A)\geq \mu_M(A),~~~ \textrm{for all }x[k]\in S_M.\] This is the minorisation condition (5.2) in \cite{M-T}: the set $S_M$ is $\mu_M$-petite. To prove the Harris recurrence of the MC, it suffices to prove that for some $M>0$, the expected hitting time of $S_M$ by $I^{(n)}[k]$ starting from $x[k]$ is bounded above for $x[k] \in S_M$. Consider, for $x[k]\in `R^{k}$, \[V(x[k])=U(x[k])+G(x[k])\] with $U(x[k])=\max\{|x_i|, i \in\cro{1,k}\}$ and $G(x[k])=\sum_{0\leq i <j\leq k}|x_i-x_j|^{-1/\alpha}$ (where $x_0=0$). The potential function $V$ is unbounded on $`R^k$, and its drift is defined by \[\Delta V(x[k]):=PV(x[k])-V(x[k])=`E(V(X(x_1),\dots,X(x_k)))-V(x[k]),~~\textrm{ for } x[k]\in `R^k.\] We just have to prove that \begin{eqnarray}\label{eq:qsdd} \Delta V(x[k]) \leq - a +b 1_{S_M}(x[k]),~~ x[k]\in `R^k. \end{eqnarray} We have, for any $\lambda>0$, \begin{eqnarray*} PU(x[k])& = &`E\left(\max_{i \in\cro{1,k}}|X_{x_i}|\right) \leq `E\left[\max_i |\widetilde{X}_{x_i}|\right]+|r| \,U(x[k]) = |r|U(x[k])+ \lambda^{1/\alpha}`E\left[\max_i |\widetilde{X}_{x_i/\lambda}|\right] \end{eqnarray*} and then, taking $\lambda=U(x[k])$, we get \begin{eqnarray*} PU(x[k])&\leq& |r|U(x[k])+ U(x[k])^{1/\alpha} C_1 \end{eqnarray*} where $C_1= `E[\max_{-1\leq s \leq 1} |\widetilde{X}_{s}|]$.
Now, \begin{eqnarray*} PG(x[k])&=&\sum_{0\leq i <j\leq k}`E \left[|X_{x_i}-X_{x_j}|^{-1/\alpha}\right]\\ &=&\sum_{0\leq i <j\leq k}`E \left[|\widetilde{X}_{x_i-x_j}+r(x_{i}-x_j)|^{-1/\alpha}\right]. \end{eqnarray*} We decompose each term in the sum using \begin{eqnarray*} \frac{1}{|\widetilde{X}_{x_i-x_j}+r(x_{i}-x_j)|^{1/\alpha}}&= &\frac{1_{{\sf Sign}(\widetilde{X}_{x_i-x_j})={\sf Sign}(r)}+1_{{\sf Sign}(\widetilde{X}_{x_i-x_j})\neq {\sf Sign}(r)}}{|\widetilde{X}_{x_i-x_j}+r(x_{i}-x_j)|^{1/\alpha}}\\ &\leq & \frac{1_{{\sf Sign}(\widetilde{X}_{x_i-x_j})={\sf Sign}(r)}}{|\widetilde{X}_{x_i-x_j}|^{1/\alpha}}+\frac{1_{{\sf Sign}(\widetilde{X}_{x_i-x_j})\neq {\sf Sign}(r)}}{||\widetilde{X}_{x_i-x_j}|-|r(x_{i}-x_j)||^{1/\alpha}}. \end{eqnarray*} By symmetry and unimodality of the density of centred stable distributions, one has \[`E \left[|\widetilde{X}_{x_i-x_j}+r(x_{i}-x_j)|^{-1/\alpha}\right]\leq 2 `E\left({|\widetilde{X}_{x_i-x_j}|^{-1/\alpha}}\right).\] Hence \begin{eqnarray*} PG(x[k]) &\leq&2\sum_{0\leq i <j\leq k} (|x_i-x_j|^{1/\alpha})^{-1/\alpha}`E(|\widetilde{X}_1|^{-1/\alpha})\\ &\leq & 2(k^2)^{1-1/\alpha}`E(|\widetilde{X}_1|^{-1/\alpha}) G(x[k])^{1/\alpha}; \end{eqnarray*} this last inequality comes from $\sum_{i=1}^m |y_i|^{-1/\alpha^2} \leq m^{1-1/\alpha}\left(\sum_{i=1}^m |y_i|^{-1/\alpha}\right)^{1/\alpha}$, which can be viewed as an application of Jensen's inequality: take $W$ uniform in $\cro{1,m}$ and $f(x)=x^{1/\alpha}$. Since $f$ is concave, $`E(f(|y_W|^{-1/\alpha}))\leq f(`E(|y_W|^{-1/\alpha}))$, which is equivalent to $\frac{1}{m}\sum_{i=1}^m|y_i|^{-1/\alpha^2} \leq (\frac{1}{m}\sum_{i=1}^m |y_i|^{-1/\alpha})^{1/\alpha}$. We get, by convexity, for some constants $C_k$ and $C'_k$, \begin{eqnarray*} PV(x[k])=PU(x[k])+PG(x[k])&\leq& C_k(U(x[k])^{1/\alpha}+G(x[k])^{1/\alpha}) +|r|U(x[k])\\ &\leq& C'_k V(x[k])^{1/\alpha}+|r|V(x[k]), \end{eqnarray*} which implies \begin{eqnarray}\label{eq:dqs} \Delta V(x[k]) &\leq& C'_kV(x[k])^{1/\alpha}-(1-|r|)V(x[k]).
\end{eqnarray} If $x[k]\notin S_M$ then there exists $i$ such that $|x_i|\geq M$ or $(i,j)$ such that $|x_i-x_j|\leq 1/M$. In the first case $V(x[k])\geq M$ and in the second one, $V(x[k])\geq M^{1/\alpha}$. For $M\geq 1$, we thus have $V(x[k])\geq M^{1/\alpha}$ for $x[k]\notin S_M$. The RHS of \eref{eq:dqs} can be rewritten as $V(x[k])^{1/\alpha}(C'_k-(1-|r|)V(x[k])^{1-1/\alpha})$. For $M$ chosen such that $(1-|r|)(M^{1/\alpha})^{1-1/\alpha}\geq \max\{1,2C'_k\}$, we get $V(x[k])^{1/\alpha}(C'_k-(1-|r|)V(x[k])^{1-1/\alpha})\leq -C'_k V(x[k])^{1/\alpha} \leq -C'_k M^{1/\alpha^2}$. For $x[k]\in S_M$, $0\leq V(x[k]) \leq M+(k+1)^2M^{1/\alpha}<+\infty$, so $\Delta V(x[k])$ is bounded on $S_M$; this allows one to prove that \eref{eq:qsdd} holds for $C=S_M$ and $M$ large enough. To end the proof, we need to establish the exchangeability of $I[k]$. The argument is general, and is present in \cite{C-K}. Take $\sigma \in {\cal S}\cro{1,k}$ and $t_1,\dots,t_k$ distinct and nonzero. By the proof above, both $(I^{(n)}(t_i),{i \in \cro{1,k}})$ and $(I^{(n)}(t_{\sigma(i)}),{i \in \cro{1,k}})$ converge to $I[k]$. Hence $(I^{(n)}(t_{\sigma(i)}),{i \in \cro{1,k}})$ converges both to $I[k]$ and to $(I_{\sigma(i)},{i \in \cro{1,k}})$, so that $I[k] \sur{=}{(d)} (I_{\sigma(i)},{i \in \cro{1,k}})$ for any $\sigma$. ~ $\Box$ \section{Conclusion} \label{sec:Con} In this paper, we have presented some results and tools allowing one to study iterated independent processes. Our tools are really useful only for processes with independent and stationary increments. Hence, the natural framework is that of Lévy processes. But what we did for processes with stationary increments could probably be done for continuous-time Markov processes, homogeneous or not. For example, it is likely that one can get some results on iterated Ornstein-Uhlenbeck processes, whose increments are simple enough to be controlled. \small \bibliographystyle{abbrv}
\section{Introduction} Galaxies are the basic building blocks of the Universe and understanding their nature, formation and evolution is crucial to many areas of current astrophysical research. In particular, nearby galaxies are the visible outcome of the evolution of the Universe: they contain the footprints of the processes that have led to their present state. To better understand the evolutionary history of the Universe, it is necessary to study the structure and dynamics of current galaxies. By studying the kinematics of galaxies we can trace the motions of both their baryonic (gas and stars) and dark matter. Radial velocity measurements have been traditionally performed with radio observations, mainly using the 21-cm H{\sc i} line. Tracing this neutral hydrogen line lets us study most of the gas content of the galaxy, usually out to three or four times the radius of the visible disc. \citet{Rots1975}; \citet{Bosma1978}; \citet{vanderHulst1979}; \citet{Bosma1981}; \citet{Gottesman1982} and others demonstrated the power of rotation curves in deriving the total mass distribution of disc galaxies. The drawback with these H{\sc i} radio observations was the poor angular resolution. The highest angular resolution ($\sim6''$) 21-cm H{\sc i} surveys of nearby galaxies to date have been carried out by The H{\sc i} Nearby Galaxy Survey (THINGS; \citealt{Walter2008}), using the Very Large Array (VLA) of the NRAO. CO radio observations have also been used traditionally to trace the molecular gas. Nowadays, CO observations provide high angular resolution (comparable to optical), and also high spectral resolution (from one to several km s$ ^{-1} $), but in very small fields. Optical (H$\alpha$, [N{\sc ii}]) observations yield data at high (arcsecond) angular resolution. H$\alpha$ is often the brightest emission line in the visible wavelength range due to the cosmic abundance of hydrogen.
In spiral galaxies, this line traces primarily the ionized gas in H{\sc ii} regions around young massive stars. In the 20th century, most of the optical observations were based on long-slit spectroscopy. Long-slit observations have been traditionally used to deduce the rotation curves of galaxies (e.g., \citealt{Rubin1980}; \citealt{Rubin1982a}; \citealt{Rubin1985}; \citealt{Mathewson1992}; \citealt{Persic1995}; \citealt{Mathewson1996}; \citealt{Courteau1997}; \citealt{Dale1997,Dale1998,Dale1999}). {However, 3D spectroscopy [IFU, Fabry-Perot (FP), multi long-slit spectroscopy] of galaxies is one of the best methods to obtain detailed information on the kinematics of galaxies. Velocity maps derived using 3D spectroscopy reproduce the complete velocity field, contrary to long-slit spectra.} H$\alpha$ observations using FP instruments have been carried out for some 40 years now (\citealt{Tully1974}; \citealt{Deharveng1975}; \citealt{Dubout1976} or \citealt{deVaucouleurs1980}); and have been {designed to derive velocity fields of nearby galaxies (e.g., \citealt{Atherton1982} or \citealt{Boulesteix1984})}, and to create high signal-to-noise ratio (S/N) rotation curves for many spiral galaxies (e.g., \citealt{Marcelin1985}; \citealt{Bonnarel1988}; \citealt{Pence1990}; \citealt{Corradi1991}; \citealt{Amram1992}; \citealt{Cecil1992}; \citealt{Amram1994}; \citealt{Sicotte1996}; \citealt{Ryder1998}; \citealt{Jimenez-Vicente1999}; \citealt{Knapen2000} and many more). {\citet{Chemin2006} presented an H$ \alpha $ FP survey of Virgo cluster galaxies. \citet{Daigle2006a} also presented an H$ \alpha $ FP survey complementary to the \textit{Spitzer} Infrared Nearby Galaxies Survey (SINGS; \citealt{sings}).} One of the largest and most important FP surveys in H$ \alpha $ is the Gassendi HAlpha survey of SPirals (GHASP; \citealt{Garrido2002}).
The GHASP survey consists of a sample of 203 spiral and irregular galaxies that have been observed with a velocity sampling of about 16 km s$ ^{-1} $ and an average angular resolution of 3 arcsec (see \citealt{Epinat2008}; GHASP VII hereafter, for a complete list of data and resolutions). {H$ \alpha $ FP spectrographs are nowadays being used for studying the kinematics of several kinds of galaxies, e.g., bulgeless galaxies \citep{Neumayer2011}, starburst galaxies \citep{Blasco-Herrera2013}, or interacting galaxies in compact dwarfs \citep{Torres-Flores2014}.} {One of the newest FP spectrographs is the Galaxy H$ \alpha $ Fabry-Perot System (GH$\alpha$FaS). It is a visiting instrument on the William Herschel Telescope (WHT) in La Palma. The GH$\alpha$FaS instrument has been operative since 2007 \citep{Fathi2007}, and has been used to study pattern speeds of bars and spiral arms \citep{Fathi2009}, to analyse the kinematics of planetary nebulae \citep{Santander-Garcia2010}, and the gas flows \citep{Font2011a}, star formation and the kinematics of interacting galaxies (\citealt{Zaragoza2013}, 2014) and starburst galaxies \citep{Blasco-Herrera2013}. It has also been used to study the resonance radii and interlocking resonance patterns in galaxy discs \citep{Font2011b,Font2014}.} We have designed an observing programme to obtain FP data of 29 spiral galaxies with the GH$\alpha$FaS instrument, as part of the ancillary data of the \textit{Spitzer} Survey of Stellar Structure in Galaxies (S$ ^{4} $G; Sheth et al. 2010). The S$ ^{4} $G survey has obtained 3.6 and 4.5 $ \mu $m images of 2352 nearby galaxies using the Infrared Array Camera (IRAC; \citealt{Fazio2004}).
The sample is composed of galaxies that fulfil the following requirements: \textit{d} $<$ 40 Mpc, \textit{$m_b$} $<$ 15.5, \textit{$D_{25}$} $>$ 1 arcmin, and includes galaxies selected using values from HyperLeda \citep{Paturel2003}, with a radial velocity v$ _{\rm radio} <$ 3000 km s$ ^{-1} $ and galactic latitude $\vert b \vert >$ 30$ \degr $. The cornerstone of the S$ ^{4} $G survey is the quantitative analysis of photometric parameters, enabling a variety of studies on secular evolution, outer disc and halo formation, galaxy morphology, etc. The data have been made public\footnote{(http://irsa.ipac.caltech.edu/data/SPITZER/S4G/)}. The first papers resulting from the S$^{4}$G survey have been summarised in \citet{Holwerda2014}. The combination of the S$ ^{4} $G mid-IR images with these complementary kinematic FP data at high resolution will allow us to tackle the scientific goals of this study, summarized as follows: \begin{enumerate} \item Perform a detailed study of the kinematical interplay between the interstellar medium and regions of star formation, dust, or other activity. \item Use the kinematics as a probe of the secular evolution, as manifested in the structural components such as bars, spiral arms, rings, lenses, etc. \item Exploit the high angular resolution of the kinematic data to study the inner parts of rotation curves, and relate the galaxy kinematics to the mass distribution and specific observed properties, as well as probing the coupling between the stellar density and the gravitational potential in the inner parts of the galaxies. \item Study possible deviations of the rotation curve that are caused by lopsidedness or asymmetries in the outer parts of the disc, which would help to pin down the interplay between dark matter (DM) and stars. \item Explore the outer disc kinematics, to establish whether discs are generally cold and thus to constrain the halo properties.
\end{enumerate} This data paper is the second of a series, and presents the complete set of FP and narrow-band imaging data for the 29 galaxies in our kinematical study. In Paper I (\citealt{Erroz-Ferrer2012}), we illustrated the data and methods, and discussed the kind of results that the main survey will provide through a detailed analysis of NGC~864, an archetypal barred spiral galaxy. In Paper III (Erroz-Ferrer et al.; in prep.) we will present the results from a study of the inner part of the rotation curves of the galaxies of the sample; and Paper IV (Leaman et al.; in prep) will study the outermost parts of the discs of the galaxies of the sample and the relationship between their kinematics and DM. This paper is organized as follows: Section \ref{section2} gives a description of the sample selection, and Section \ref{section3} describes the observations. Section \ref{section4} describes the data reduction and results. The derived rotation curves and non-circular motions maps are presented in Sections \ref{section5} and \ref{section6} respectively. These results are discussed in Section \ref{section7}, and Section \ref{section8} presents our conclusions. \section{Target selection} \label{section2} The galaxies in our survey satisfy the following requirements: first of all, the declination should be higher than -10$\degr$ so that the galaxies reach a sufficient altitude above the horizon when observed from La Palma, and only galaxies with inclinations between 0$\degr$ and 70$\degr$ were selected. According to the instrument specifications, the galaxy should fit well in the GH$\alpha$FaS FOV of 3.4 $\times$ 3.4 arcmin. Therefore, galaxies with diameters between 2 and 3.4 arcmin were selected. Secondly, the range in velocity of the galaxy should not exceed the free spectral range (FSR) of the instrument (see Sect. \ref{section3} for details), although we did not have this ancillary information for all the galaxies.
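The selection cuts above can be summarised as a simple filter. The sketch below is purely illustrative: the field names and example records are hypothetical, and the FSR value used is the one quoted for the instrument later in the paper.

```python
# Illustrative sketch of the sample-selection cuts described in the text.
# The dictionary keys and the two example records are made up for the demo.
FSR_KMS = 391.9  # GHaFaS free spectral range in velocity (from Sect. 3)

def passes_selection(g):
    return (g["dec_deg"] > -10.0               # observable from La Palma
            and 0.0 <= g["incl_deg"] <= 70.0   # inclination cut
            and 2.0 <= g["d25_arcmin"] <= 3.4  # fits the 3.4' x 3.4' FOV
            and g["dv_kms"] < FSR_KMS)         # velocity range below the FSR

candidate = {"dec_deg": 28.0, "incl_deg": 45.0, "d25_arcmin": 2.8, "dv_kms": 320.0}
too_big   = {"dec_deg": 10.0, "incl_deg": 30.0, "d25_arcmin": 5.0, "dv_kms": 250.0}
print(passes_selection(candidate), passes_selection(too_big))  # True False
```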
Furthermore, the data reduction processes require that the galaxy should have at least three bright point sources in the field, i.e. three stars or two stars and the nucleus. The observed sample was selected at the time of the observations, looking at the visibility on the sky, but also requiring a spread in morphological type, bar presence and finally, the availability of ancillary data (interferometric CO, H{\sc i}, ultraviolet or \textit{Spitzer} mid-IR). The final sample consists of 29 galaxies. For the morphological classification, we have followed the up-to-date Mid-IR Classifications for S$^{4} $G Galaxies {in the Comprehensive de Vaucouleurs revised Hubble-Sandage System (CVRHS,} \citealt{Buta2015}). The morphological classification also includes bar presence ({9} SA galaxies, 12 SAB galaxies and {7} SB galaxies). In Fig. \ref{sample}, we present the morphological classification as a histogram. The general properties of the galaxies in the sample are presented in Table \ref{props}. As noted in the footnote of Table \ref{props}, the galaxy NGC~7241 is nearly edge-on. Because of an unusual interacting companion along the line of sight, its photometric inclination is biased towards much lower values, and therefore we do not include it in the analysis part of this paper. A detailed analysis of this galaxy is presented separately in Leaman et al. (in prep.). \begin{figure} \begin{center} \includegraphics[width=84mm]{sample.pdf} \caption{Morphological distribution of the galaxies of the sample, organised by morphological type and also by bar presence.} \label{sample} \end{center} \end{figure} \begin{table*} \caption{General properties of the galaxies in the sample. Notes.
(1)~Updated morphological classifications from {the CVRHS}, where ``double stage" galaxies are allowed (i.e., large-scale S0 or S0/a galaxies with smaller-scale inner spirals) (2)~Adopted values of the distances, calculated after applying the Virgo, GA and Shapley corrections, with \textit{H}$_0$= 73 $\pm$ 5 km s$^{-1}$ Mpc$^{-1}$, from the NASA/IPAC Extragalactic Database (NED). The uncertainties in the distance measurements have been adopted as 20 per cent of the value. (3)~\textit{B} magnitude from The Third Reference Catalogue of Bright Galaxies (RC3; \citealt{RC3}). (4)~Absolute \textit{B} magnitude measured using the distance and m$_{b}$ of columns III and IV. (5)~Absolute 3.6$ \mu $m magnitude measured using the distance of column III and asymptotic magnitude at 3.6 $ \mu $m from the ellipse fitting to the 3.6 $ \mu $m S$ ^{4} $G images (Mu\~noz-Mateos et al. in prep). $^{(\dagger)}$ NGC~7241 is nearly edge on, but due to a line of sight companion galaxy, the photometric inclination is biased much lower and should not be considered representative.} \label{props} \center \begin{tabular}{c|clc|clc|c|} \hline Galaxy name & mid-IR morphology & d(Mpc) & m$_{\rm B}$ & M$_{\rm B}$ & M$_{3.6}$ \\ & (1) & (2) & (3) & (4) & (5) \\ \hline NGC 428 & SAB(s)dm & 15.9 $\pm$ 3.2& 11.95 & -19.06 & -19.20 \\ NGC 691 & (R)SA(rs,rl)ab & 35.7 $\pm$ 7.1& 12.28 & -20.48 & -21.36 \\ NGC 864 & SAB(r$\underline{\rm s}$)bc & 20.9 $\pm$ 4.2& 11.62 & -19.98 & -20.44 \\ NGC 918 & SAB(s)cd & 20.5 $\pm$ 4.1& 13.07 & -18.49 & -20.24 \\ NGC 1073 & SB(rs)$\underline{\rm c}$d & 16.1 $\pm$ 3.2& 11.68 & -19.35 & -19.75 \\ NGC 2500 & SAB(s)c$\underline{\rm d}$ & 9.8 $\pm$ 2.0& 12.22 & -17.73 & -18.14 \\ NGC 2541 & SA(s)$\underline{\rm d}$m & 10.4 $\pm$ 2.1& 12.25 & -17.84 & -17.90 \\ NGC 2543 & SAB(s,bl)b & 37.4 $\pm$ 7.5& 12.94 & -19.92 & -20.96 \\ NGC 2712 & (R$^{\prime}$)SAB(rs,nl)a$\underline{\rm b}$ & 29.5 $\pm$ 5.9& 12.78 & -19.57 & -20.67 \\ NGC 2748 & (R$^{\prime}$)SAB($\underline{\rm 
r}$s)bc & 25.1 $\pm$ 5.0& 12.39 & -19.61 & -20.84 \\ NGC 2805 & (R)SA(s)c pec & 28.7 $\pm$ 5.7& 11.79 & -20.50 & -20.51 \\ NGC 3041 & SA(rs)$\underline{\rm b}$c & 23.9 $\pm$ 4.8& 12.30 & -19.59 & -20.52 \\ NGC 3403 & SA(rs)c: & 22.8 $\pm$ 4.6& 12.94 & -18.85 & -19.63 \\ NGC 3423 & SA(s)b$\underline{\rm c}$ & 14.3 $\pm$ 2.9& 11.60 & -19.18 & -19.58 \\ NGC 3504 & (R$_1^{\prime}$)SA$\underline{\rm B}$($\underline{\rm r}$s,nl)a & 27.8 $\pm$ 5.6& 11.62 & -20.60 & -21.65 \\ NGC 4151 & SAB$_a$(l,nl)0/a & 20.0 $\pm$ 4.0& 11.36 & -20.15 & -21.40 \\ NGC 4324 & (L)SA(r)0$^+$ & 13.6 $\pm$ 2.7& 12.50 & -18.17 & -19.51 \\ NGC 4389 & SB(rs)a[d] & 13.8 $\pm$ 2.8& 12.55 & -18.15 & -19.11 \\ NGC 4498 & SB(r$\underline{\rm s}$)d & 14.1 $\pm$ 2.8& 12.77 & -17.98 & -18.51 \\ NGC 4639 & (R$^{\prime}$)SA$\underline{\rm B}$(rs,bl)ab & 13.9 $\pm$ 2.8& 12.19 & -18.53 & -19.39 \\ NGC 5112 & SB(s)c$\underline{\rm d}$ & 20.2 $\pm$ 4.0& 12.63 & -18.90 & -19.49 \\ NGC 5334 & SB(rs,x$_1$r)cd & 24.2 $\pm$ 4.8& 12.88 & -19.04 & -20.03 \\ NGC 5678 & (R$^{\prime}$L)SA($\underline{\rm r}$s)b: pec & 33.3 $\pm$ 6.7& 12.02 & -20.59 & -21.87 \\ NGC 5740 & ($\underline{\rm R}$L)SAB(r)ab & 27.0 $\pm$ 5.4& 12.60 & -19.56 & -20.65 \\ NGC 5921 & SB($\underline{\rm r}$s)b & 26.2 $\pm$ 5.2& 11.68 & -20.41 & -21.31 \\ NGC 6070 & SA(r$\underline{\rm s}$,nrl)c & 33.6 $\pm$ 6.7& 12.42 & -20.21 & -21.70 \\ NGC 6207 & SAB(r$\underline{\rm s}$)c$\underline{\rm d}$ & 18.5 $\pm$ 3.7& 11.86 & -19.48 & -19.84 \\ NGC 6412 & SB(rs)cd & 23.7 $\pm$ 4.7& 12.38 & -19.49 & -20.07 \\ NGC 7241 & S$\underline{\rm c}$d sp / E(d)7 & 22.4 $\pm$ 4.5& 13.23 & -18.52 & -20.51 \\ \hline \end{tabular} \end{table*} \section{Observations} \label{section3} The Fabry-Perot observations were carried out using the GH$ \alpha $FaS instrument mounted on the 4.2-m WHT in La Palma. The observations were done during 24 nights between September 2010 and March 2013. 
The instrument is a Fabry-Perot interferometer that provides high spectral resolution and seeing-limited angular resolution data, within a 3\farcm4 $\times$ 3\farcm4 FOV. The instrument comprises a focal reducer, a filter wheel, a Fabry-Perot etalon, an image photon-counting system (IPCS), and a calibration lamp (neon source). The etalon used has an interference order of 765 and an FSR of 391.9 km s$^{-1}$ (8.6 \AA). Each observational cycle consists of 48 steps (named channels) through the etalon, of 10 seconds each. The number of cycles depends on the galaxy magnitude, but usually amounts to some 3 hours (corresponding to 22 cycles). However, due to weather and instrumentation problems, not all the galaxies were observed for the same length of time (all the information about the observations is collected in Table \ref{obs}). The high spectral resolution mode was used, achieving a velocity {sampling} of $\sim$8 km s$ ^{-1} $ with a pixel scale of 0\farcs2 in 1K$\times$1K pixel images. In Table~\ref{setup}, we summarize the instrumental setup used in the observations. To perform flux calibration, H$ \alpha $ narrow-band images were observed together with the Fabry-Perot observations, using the Auxiliary port CAMera (ACAM), permanently mounted at a folded-Cassegrain focus of the WHT \citep{Benn2008}. The FOV in imaging mode is 8 arcmin, with a pixel size of 0.25 arcsec. The \textit{R}-band images were taken using a Bessel filter with central wavelength 6500\AA{} and full width half maximum (FWHM) of 1360\AA{}, and with a Sloan 6228/1322 filter (central wavelength/FWHM, in \AA{}). We used four different H$\alpha$ filters depending on the galaxy's redshift (6577/23, 6589/24, 6613/24 and 6631/17; central wavelength/FWHM, in \AA{}). The exposure times per galaxy were 3$ \times $20 seconds for the \textit{R}-band images and 3$ \times $100 seconds for the H$\alpha$ images. The typical seeing was around 1 arcsec, ranging from 0\farcs6 to 1\farcs6.
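The quoted instrumental numbers are internally consistent: the FSR in velocity is the speed of light divided by the interference order, the sampling step is the FSR divided by the 48 channels, and a typical 22-cycle observation of 48 ten-second steps lasts roughly three hours. A quick check (the rest wavelength of H$\alpha$, 6562.8 \AA, is our assumption):

```python
C_KMS = 299792.458   # speed of light in km/s
ORDER = 765          # etalon interference order
N_CHANNELS = 48      # scanning steps (channels) per cycle
HALPHA_A = 6562.8    # rest wavelength of H-alpha in Angstrom (assumed)

fsr_kms = C_KMS / ORDER          # free spectral range in velocity
fsr_A = HALPHA_A / ORDER         # free spectral range in wavelength
step_kms = fsr_kms / N_CHANNELS  # velocity sampling per channel
t_total = 22 * N_CHANNELS * 10   # 22 cycles of 48 ten-second steps, in s

print(round(fsr_kms, 1), round(fsr_A, 1), round(step_kms, 1), t_total)
# 391.9 8.6 8.2 10560
```

The 10560 s total matches several of the exposure times listed in Table 3 ("some 3 hours").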
\begin{table*} \caption{Journal of observations.} \label{setup} \center \begin{tabular}{c|clc|c} \hline Observations & Telescope & 4.2-m WHT\\ \hline Fabry-Perot &Instrument & GH$ \alpha $FaS\\ & Spatial sampling & FOV & $202"\times202"$\\ & & Pixel scale & 0\farcs2 \\ & Calibration & Neon comparison light & $\lambda$ {6598.95} \AA \\ & Characteristics @ H$ \alpha $ & Interference order & 765\\ & & FSR & 391.9 km s$ ^{-1} $ (8.66 \AA)\\ & & Finesse & 20\\ & & Spectral resolution & 18179 \\ & &Instrumental FWHM & 19.6 km s$ ^{-1} $\\ & Spectral sampling & Number of scanning steps & 48 \\ & & Sampling step & 8.2 km s$ ^{-1} $ (0.18 \AA)\\ & Detector & IPCS & \\ \hline Imaging &Instrument & ACAM\\ & Spatial sampling & FOV & 8 arcmin\\ & & Pixel scale & 0\farcs25 \\ & Calibration & Standard stars& \\ \hline \end{tabular} \end{table*} \begin{table*} \caption{Log of the observations. Notes: (1) The adopted format for the filters is \textit{central wavelength/FWHM}. (2) The seeing was measured from the ACAM images. To perform the continuum subtraction, the H$ \alpha $ image or the \textit{R}-band image were degraded to the same spatial resolution (the worst seeing of the two images), and this number has been adopted as the final seeing.} \label{obs} \center \begin{tabular}{c|clc|clc|cl} \hline Galaxy name & Date & FP t$ _{exp} $& FP H$ \alpha $ filter$ ^{(1)} $ & ACAM H$ \alpha $ filter$ ^{(1)} $ & seeing$ ^{(2)} $\\ & & (s) & (\AA/\AA) & (\AA/\AA) & (")\\ \hline NGC 428 & Sept. 2010 & 3840 & 6580/23 & 6589/24 & 1.0 \\ NGC 691 & Jan. 2012 & 11040 & 6623/23 & 6631/17 & 1.2 \\ NGC 864 & Sept. 2010 & 13920 & 6600.5/25 & 6589/24 & 0.9 \\ NGC 918 & Sept. 2010 & 2880 & 6600.5/25 & 6589/24 & 0.9 \\ NGC 1073 & Sept. 2010 & 20640 & 6580/23 & 6589/24 & 0.9 \\ NGC 2500 & Jan. 2012 & 10560 & 6580/23 & 6577/23 & 1.0 \\ NGC 2541 & Jan. 2012 & 10560 & 6580/23 & 6577/23 & 1.2 \\ NGC 2543 & Jan. 2012 & 12960 & 6623/23 & 6613/24 & 1.3 \\ NGC 2712 & Feb.
2012 & 10560 & 6600.5/25 & 6613/24 & 1.4 \\ NGC 2748 & Jan. 2012 & 9600 & 6600.5/25 & 6589/24 & 1.4 \\ NGC 2805 & Mar. 2013 & 9120 & 6597.5/17.6 & 6613/24 & 1.1 \\ NGC 3041 & Jan. 2012 & 10080 & 6600.5/25 & 6589/24 & 1.4 \\ NGC 3403 & Jan. 2012 & 10560 & 6580/23 & 6589/24 & 1.1 \\ NGC 3423 & Feb. 2012 & 9120 & 6580/23 & 6589/24 & 1.5 \\ NGC 3504 & Jan. 2012 & 15360 & 6600.5/25 & 6589/24 & 1.4 \\ NGC 4151 & Feb. 2012 & 13920 & 6580/23 & 6589/24 & 1.2 \\ NGC 4324 & Mar. 2013 & 8160 & 6597.5/17.6 & 6613/24 & 1.4 \\ NGC 4389 & Jan. 2012 & 12000 & 6580/23 & 6577/23 & 0.8 \\ NGC 4498 & Jun. 2011 & 10080 & 6598/18 & 6589/24 & 0.9 \\ NGC 4639 & Jun. 2011 & 8640 & 6583/15.5 & 6589/24 & 1.0 \\ NGC 5112 & Jun. 2011 & 9120 & 6583/15.5 & 6589/24 & 0.6 \\ NGC 5334 & Jun. 2011 & 11520 & 6598/18 & 6589/24 & 1.2 \\ NGC 5678 & Jan. 2012 & 10560 & 6600.5/25 & 6613/24 & 1.2 \\ NGC 5740 & Jun. 2011 & 11040 & 6598/18 & 6589/24 & 1.5 \\ NGC 5921 & Mar. 2013 & 7200 & 6597.5/17.6 & 6589/24 & 1.3 \\ NGC 6070 & Jun. 2011 & 9600 & 6608/16 & 6613/24 & 1.5 \\ NGC 6207 & Jun. 2011 & 8640 & 6583/15.5 & 6589/24 & 1.3 \\ NGC 6412 & Jun. 2011 & 13920 & 6598/18 & 6589/24 & 1.3 \\ NGC 7241 & Jun. 2011 & 7680 & 6598/18 & 6589/24 & 1.1 \\ \hline \end{tabular} \end{table*} \section{Data reduction} \label{section4} \subsection{H$ \alpha $ imaging: ACAM} \label{ACAM} The ACAM images were reduced initially using {\sc iraf}. First, bias and flat corrections were made. Then, the sky was subtracted before and after the combination of the exposures. The images were astrometrically calibrated using {\sc koords} in {\sc kappa} first, setting a preliminary astrometry using the DSS images as a reference. Afterwards, we used {\sc gaia} in {\sc starlink} to improve the astrometry with the GSC-2 catalogue at ESO. The next step was aligning the \textit{R}-band and H$ \alpha $ images for each galaxy, and then subtracting the continuum using the procedures outlined in \citet{Knapen2004} and \citet{Sanchez-Gallego2012}.
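Schematically, the cited continuum-subtraction procedures scale the \textit{R}-band image so that pure-continuum sources (foreground stars) cancel, and subtract the scaled image from the narrow-band one. A minimal pure-Python sketch with synthetic pixel lists (the scale factor and values are made up, chosen so the arithmetic is exact):

```python
from statistics import median

# Synthetic example: narrow-band H-alpha = k * continuum + line emission.
continuum = [100.0, 80.0, 120.0, 90.0, 110.0]  # R-band pixel values
line      = [0.0,   0.0,  0.0,  25.0,  40.0]   # true line emission
k_true    = 0.25                                # "unknown" continuum scale
halpha    = [k_true * c + l for c, l in zip(continuum, line)]

# Estimate k from pixels known to be pure continuum (the first three stars).
k_est = median(h / c for h, c in zip(halpha[:3], continuum[:3]))

# Continuum-subtracted image: the line emission is recovered.
net = [h - k_est * c for h, c in zip(halpha, continuum)]
print(net)  # [0.0, 0.0, 0.0, 25.0, 40.0]
```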
Finally, the continuum-subtracted H$ \alpha $ images were flux calibrated using spectrophotometric standard stars observed at the time of the observations. The resulting continuum-subtracted image of NGC~2748 is presented in panel f) of Fig. \ref{ngc2748plots}. The images of the other galaxies in the sample are presented in Appendix A. \subsection{Fabry-Perot: GH$ \alpha $FaS} The basic custom reduction of GH$ \alpha $FaS data cubes has been explained in previous papers in the literature (e.g., \citealt{Hernandez2008} or \citealt{Blasco2010}), and was introduced in Paper I. {The data of all the galaxies have been reduced} following these steps: de-rotation, {phase-correction}, combination and wavelength calibration, astrometry correction, spatial smoothing, continuum subtraction, removing sky lines, flux calibration and creation of the moment maps. \subsubsection{De-rotation, {phase-correction}, combination and wavelength calibration} There is no suitable de-rotator at the Nasmyth focus of the WHT. Therefore, the first step is to de-rotate the data cubes. To do this, we followed \citet{Blasco2010}. At least two point sources (ideally stars), present in all the planes, are selected and followed throughout the cubes. After that, each plane of the subsequent cubes is de-rotated to the same position. Due to the nature of the FP data, any 2D spatial transformation (rotation or translation) has to be done also in the third dimension (the spectral dimension), and therefore de-rotation must be done simultaneously with the phase calibration. The wavelength calibration is also performed at this point. Lamp exposures were taken between galaxy observations and are used to calibrate in wavelength (see \citealt{Carignan2008}). The integration of all the cubes from the various cycles is performed after the de-rotation and phase calibration. The result is one 48-channel cube, {phase-corrected} and calibrated in wavelength.
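Schematically, the wavelength calibration assigns each of the 48 channels a velocity within one FSR, and a line whose velocity falls outside the scanned range reappears shifted by a whole FSR (the "peak intrusion" discussed below). A minimal pure-Python sketch, where the starting velocity of the scan is a hypothetical value:

```python
# Sketch of a channel-to-velocity axis spanning one free spectral range.
FSR_KMS = 391.9  # free spectral range in velocity, km/s
N_CHANNELS = 48  # scanning steps per cycle

def velocity_axis(v_start):
    # Linear mapping from channel index to velocity; v_start is hypothetical.
    step = FSR_KMS / N_CHANNELS
    return [v_start + i * step for i in range(N_CHANNELS)]

def alias_into_range(v, v_start):
    # A line outside the scanned range reappears shifted by a whole FSR.
    return v_start + (v - v_start) % FSR_KMS

axis = velocity_axis(500.0)
print(len(axis), round(axis[1] - axis[0], 2))  # 48 8.16
print(round(alias_into_range(950.0, 500.0), 1))  # 950 - 391.9 = 558.1
```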
\subsubsection{Astrometry calibration} The data are placed on an astrometrically correct spatial grid by comparing the positions of stars in the ACAM H$ \alpha $ images. The astrometric calibration of the ACAM images is explained in Sect. \ref{ACAM}. \subsubsection{Spatial smoothing} The instrument delivers data cubes with a spatial scale of 0\farcs2/pix, and therefore the spatial resolution is limited by the seeing. If the de-rotation process is not correctly performed, the spatial resolution might change (i.e., the addition of misaligned cubes would result in a blurred cube: the PSF of the point sources broadens and therefore the effective seeing increases). Thus, the de-rotation output has been carefully checked in order to minimise changes in the resolution. We applied a 2D Gaussian smoothing kernel with a FWHM of 3 pixels to the data in order to improve the signal-to-noise ratio (S/N) without degrading the angular resolution too much. To apply this Gaussian kernel, we have used the {\sc IDL} task {\sc filter\_image.pro}. \subsubsection{Continuum subtraction} The resulting de-rotated, {phase-corrected} and wavelength-calibrated data cube is then imported into {\sc gipsy} (Groningen Image Processing System; \citealt{vanderHulst1992}). The continuum was subtracted using {\sc conrem} of {\sc gipsy}, which estimates the continuum level from line-free frames. To determine the channels free of line emission, we visually check the images and also the spectra, after additional smoothing. {For NGC~2748, NGC~5678 and NGC~6070, the velocity amplitude is equal to or larger than the FSR and there are no channels free of line emission. The H$ \alpha $ line falls into the next interference order and reappears in the first or last channels (what we call ``peak intrusion''). Those channels containing emission from a peak intrusion are copied to before the first or after the last channel, where they really belong, increasing the number of channels per cube.
Then, we divided the cube spatially into two parts (roughly corresponding to the areas where emission from the receding and approaching halves is seen). The channels free of emission were identified separately for each sub-cube, and the continuum was calculated and subtracted separately in each sub-cube. After the continuum subtraction, the two cutouts have been combined again.} \subsubsection{Sky line removal} Stationary secondary peaks sometimes appear in the data cubes, usually located in three channels, peaking in the middle one and with a circular light gradient along the image (more light in the centre, gradually decreasing towards the edges). {These are} OH sky emission lines due to airglow, peaking near the following velocities/wavelengths: 690 km s$^{-1}$/6577.3 \AA ~(present in NGC~2500, NGC~2541, NGC~4389 and NGC~6207), 1080 km s$^{-1}$ / 6586.5 \AA ~(NGC~428, NGC~1073, NGC~3403, NGC~4151, NGC~4639 and NGC~5112) and 1480 km s$^{-1}$/6596.7 \AA ~(NGC~4498, NGC~5334 and NGC~6412). To remove this contribution, we treated the affected channels as images to be corrected for flat-fielding. With {\sc flat} of {\sc gipsy}, we fitted a polynomial to the background, eliminated the gradient across the affected planes and flattened the background. \subsubsection{Flux calibration} We calibrate the Fabry-Perot data by using calibrated narrow-band H$ \alpha $ imaging. As explained in Sect. \ref{section3}, the kinematic data have been observed together with ACAM data. The procedure was explained in Paper I: fluxes from selected H{\sc ii} regions in both ACAM and H$ \alpha $ FP data are compared, and fitted to a linear relationship. As a consequence, the zeroth moment maps (intensity maps, see the following section) are flux-calibrated. {GHASP VII uses a different method to flux-calibrate their FP observations.
Instead of carrying out their own observations, they compare the integrated H$ \alpha $ flux of their galaxies to that of the narrow-band observations in \citet{James2004}. Although our method requires more observing time at the WHT, it is more accurate} as both FP and narrow-band imaging observations are carried out under the same atmospheric conditions and airmass. \subsubsection{Moment maps} To create the moment maps we have used the {\sc moments} task of {\sc gipsy}, {which performs an intensity weighted mean of the physical coordinates along the profile{\footnote{https://www.astro.rug.nl/$\sim$gipsy/tsk/moments.dc1}}}. We imposed the condition that the line emission had to be present in at least three adjacent channels and at a level above a certain noise level $ \sigma $ in the profiles. To do that, we determined $\sigma$ for each galaxy using {\sc stat} in {\sc gipsy}, and we created the sets of moment maps with emission above 5$ \sigma $. Also, we imposed the condition that secondary peaks are not taken into account (although secondary peaks are assumed not to be present after the previous reduction processes). Subsequently, from the smoothed and continuum-subtracted cube we computed the moment maps for each galaxy, specifically the moment maps of order zero (intensity map), order one (velocity map along the line of sight) and order two (velocity dispersion maps). {The velocity dispersion maps will be used in Paper III, and they will be corrected for instrumental, thermal and natural line broadening. The natural width has a value of 3 km s$ ^{-1} $ \citep{ODell1988}. The thermal width corresponds to 9.1 km s$ ^{-1} $, assuming a temperature of 10$ ^{4} $ K \citep{Osterbrock2006}. 
The instrumental width for each galaxy was obtained from the data cube taken with the calibration lamp, following the procedures in \citet{Relaño2005}, and is 8.3~km~s$ ^{-1} $.} After creating the moment maps, the stars and central regions may leave a residual (i.e., the continuum may not be properly determined and removed), and therefore the outputs from {\sc moments} need to be masked. Specifically, for the AGN hosts NGC~4151 and NGC~4639, the central emission saturated the detector and, in consequence, the continuum could not be subtracted properly, leaving a residual that had to be masked out. Also, we have masked the central regions in NGC~4324, NGC~5740 and NGC~5921, since the signal there is due to continuum emission (i.e., no line emission is detected). For each galaxy, we created a six-panel figure, presenting the resulting velocity maps and the outputs from the analysis. In Fig. \ref{ngc2748plots}, we present the resulting moment maps for NGC~2748 in the top images. The figures for the other galaxies in the sample are collected in Appendix A. \begin{figure*} \begin{center} \includegraphics[width=170mm]{2748plotsnoaxis.pdf} \caption{Results from the analysis of the Fabry-Perot data cubes of the spiral galaxy NGC~2748. \textit{a)} H$ \alpha $ intensity map. \textit{b)} Velocity map. \textit{c)} Velocity dispersion map. \textit{d)} Velocity model map. \textit{e)} Non-circular motions map. \textit{f)} H$ \alpha $ narrow-band image from ACAM. In the images, North is up and East to the left. All the images have the same scale. Similar panels for all the sample galaxies are presented in App. A.} \label{ngc2748plots} \end{center} \end{figure*} \section{Data analysis} \subsection{Rotation curves} \label{section5} Rotation curves have been widely used to characterise the circular motions within galaxies. Here, we study the rotation of the gas, and in particular, the gas in the H{\sc ii} regions, which has been ionized by massive young stars.
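The broadening corrections quoted in the previous section (instrumental 8.3, thermal 9.1 and natural 3 km s$^{-1}$) combine in quadrature; a minimal sketch of the correction to be applied to the dispersion maps in Paper III, assuming the standard quadrature subtraction and using a made-up observed dispersion:

```python
import math

SIGMA_INSTR = 8.3   # instrumental width, km/s (from the calibration lamp)
SIGMA_THERM = 9.1   # thermal width at 10^4 K, km/s
SIGMA_NAT = 3.0     # natural line width, km/s

def corrected_dispersion(sigma_obs):
    """Remove instrumental, thermal and natural broadening in quadrature."""
    s2 = sigma_obs**2 - SIGMA_INSTR**2 - SIGMA_THERM**2 - SIGMA_NAT**2
    return math.sqrt(s2) if s2 > 0 else 0.0

print(round(corrected_dispersion(20.0), 1))  # 15.5
```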
To extract the rotation curve from the velocity map we have used the {\sc rotcur} task in {\sc gipsy}. This procedure is based on the tilted-ring method described by \citet{Begeman1989}, where each ring can be defined by the parameters: inclination ($i$), position angle (PA), centre ($x_{0}$ and $y_{0}$) and systemic velocity ($v_{\rm sys}$). If we assume that the radial and vertical velocities are negligible, the observed velocity can be expressed as $v_{\rm obs}(R,\theta) = v_{\rm sys}+v_{\rm rot}(R) \cos \theta \sin i$, where $ \theta $ is the azimuthal angle in the plane of the galaxy and depends on $i$, PA, $x_{0}$ and $y_{0}$. It is not easy to determine the rotation curve parameters, as deviations from rotational motions can be present inside the galaxies, some of our galaxies have low inclinations, and most of the velocity fields are sparsely sampled. However, we have proceeded so as to ensure that we obtain the true rotation curve while minimising possible errors. In order to determine the starting values (i.e., initial conditions) of $ i $, PA, $x_{0}$, $y_{0}$ and $v_{\rm sys}$, our first step was to search the literature (RC3, \citealt{RC3}; HyperLeda, \citealt{Paturel2003}; GHASP VII; and Mu\~noz-Mateos et al. in prep). We then derived position-velocity diagrams (PV diagrams) along the kinematic major axis of the galaxies in order to see the rotation in the spatial direction (presented as supplementary material in Appendix C). Then, we proceeded with {\sc rotcur} as usual: (1) All parameters were left free. (2) To obtain the position of the galaxy centre, the PA, inclination, and systemic velocities were fixed, and the centre values were free. In the majority of the cases, the fits were unsatisfactory due to the patchiness of the data. In the galaxies whose centre is represented by a bright point source (nucleus), we adopted that as the dynamical centre.
In the cases where the fits were unsatisfactory and the centre is not represented by a point source, we adopted the value from NED. In any case, the centre positions differ from those given in NED by less than 0\farcs8, less than the angular resolution and less than the accuracy of the astrometry in our data cubes. (3) The centre position, PA, and inclination were fixed in order to fit the systemic velocity. (4) The PA and inclination were then left free with all other parameters fixed. (5) Finally, the rotation curve was obtained by fixing all parameters except the rotation velocity. In all the steps, we avoided the sector under 30$\degr$ from the minor axis and a $ \vert \cos \theta \vert $ weight was applied to the line-of-sight velocities during the fittings. The objective was to minimize the errors caused by the points close to the minor axis, where important projection effects occur and where the circular velocity term is fitted with difficulty $\left[\cos \theta \rightarrow 0 \right] $. To determine (fix) one parameter, we forced the mean of the values of the rings to satisfy the following conditions, {based on our experience and linked to the resolution of our data}: (i) The number of points in each ring should be greater than a sufficient number, usually 20-50 {(20 is the minimum number of points in a region with the size of our angular resolution, which is $\sim$1 arcsec)}. (ii) The difference between the average of all the values from all the rings and the value in the given ring should not be more than 15 km s$ ^{-1} $ for $v_{\rm sys}$ and 15$\degr$ for the PA and $ i $ {(15 km s$ ^{-1} $ is our velocity resolution, twice the velocity sampling)}. (iii) The uncertainty coming from the free values in the fit (i.e., $v_{\rm sys}$ and the rotational velocity in step 2) should not be more than 10 km s$ ^{-1} $ for the velocities and 10$\degr$ for the PA and $ i $.
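As an illustration of the fit performed for a single ring, the expression above can be solved by weighted least squares; this is a minimal stand-in for the actual {\sc rotcur} implementation, with the $\vert\cos\theta\vert$ weighting and the 30$\degr$ minor-axis exclusion described above (the function name and arguments are ours):

```python
import numpy as np

def fit_ring_vrot(v_obs, theta, v_sys, incl_deg, exclude_deg=30.0):
    """Estimate the rotation velocity of one tilted ring.

    Fits v_obs = v_sys + v_rot * cos(theta) * sin(i) by weighted least
    squares; theta is the azimuthal angle in the galaxy plane (radians).
    Pixels within `exclude_deg` of the minor axis are discarded and the
    rest are weighted by |cos(theta)|.
    """
    c = np.cos(theta)
    # keep only pixels further than exclude_deg from the minor axis
    keep = np.abs(c) > np.cos(np.radians(90.0 - exclude_deg))
    w = np.abs(c[keep])                          # |cos(theta)| weights
    x = c[keep] * np.sin(np.radians(incl_deg))   # model regressor
    y = v_obs[keep] - v_sys                      # velocity w.r.t. systemic
    # weighted least-squares solution for the single parameter v_rot
    return np.sum(w * x * y) / np.sum(w * x * x)
```

On a noiseless synthetic ring this recovers the input rotation velocity exactly; on real, patchy data the sector cut and the weighting suppress the poorly constrained pixels near the minor axis.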
Usually, these conditions agreed in picking out the cases where the fit was not satisfactory, and the corresponding values were not taken into account in the average. { All the rotation curves have been computed taking into account both approaching and receding sides, and the error bars are the dispersion of the rotation velocities computed for all the pixels in each elliptical ring.} To check that the fixed values represent the true rotation, we looked at the residual map (see the following section) to search for systematic errors. For example, if the residual map tends to have negative values, it is because the fixed systemic velocity is too large. Other systematic effects that appear in the residual map when one parameter is badly selected are presented in Fig.~{8 of \citet{Warner1973}}. {Also, we superimposed on the PV diagrams along the major axis the deprojected rotation curve at the corresponding angle, confirming that the derived rotation curves are correct (see Appendix C).} We have created two types of rotation curves: a high-resolution curve (with a separation of 1\arcsec\ between points) and a low-resolution curve (with a separation of 5\arcsec). The high-resolution curve better traces the behaviour of the rotation and the small-scale features of the galaxy, although it is noisier as there are fewer pixels taken into account in the fit for each radial point. The parameters ($i$, PA and $v_{\rm sys}$) resulting from the low-resolution fit are presented in Table \ref{rotcuroutput}. For NGC~2748, the high-resolution rotation curve is presented in Fig. \ref{ngc2748rotcur}. The resulting high-resolution rotation curves for all the sample galaxies are presented in the Appendix B. \begin{figure} \begin{center} \includegraphics[width=84mm]{ngc2748rotcurs.pdf} \caption{High-resolution rotation curve derived from the 5-$ \sigma $ velocity map of NGC~2748.
Rotation curves for all the sample galaxies are presented in the Appendix B.} \label{ngc2748rotcur} \end{center} \end{figure} \begin{table*} \caption{Results. Column I) Galaxy name. Column II) Systemic velocity from optical observations (HyperLEDA). Column III) Systemic velocity derived from our FP data. Columns IV and VI) Disc inclination and PA obtained from the 25.5 mag/arcsec$ ^{2} $ isophote from the ellipse fitting to the S$ ^{4} $G 3.6 $ \mu $m image (Mu\~noz-Mateos et al. in prep). Columns V and VII) The same as columns IV and VI but derived from our FP data. {Column VIII) Maximum velocity derived from the rotation curves. Column IX) Quality flag for the maximum velocity: reached in our data (1), probably reached (2), probably not reached (3), not reached (4). }\textit{Notes}: (1) PA of the major axis is defined as the angle, taken in anti-clockwise direction between the north direction on the sky and the kinematical major axis of the galaxy, defined from 0\degr to 180\degr. (2) For NGC~1073, 180$\degr$ have been added to PA$_{\rm 3.6 \mu m}$ for a direct comparison with PA$_{\rm H\alpha }$. 
(3) The AGN contribution has been excluded from the measurements.} \label{rotcuroutput} \centering \begin{tabular}[angle=90]{c|clclc|clcclclc|clc|clc|c|} \hline Galaxy name& $v_{\rm sys,Leda}$ & $v_{\rm sys,H\alpha }$ & $i_{\rm 3.6 \mu m}$& $i_{\rm H\alpha }$ & PA$_{\rm 3.6 \mu m}^{(1)}$& PA$_{\rm H\alpha }^{(1)}$ & $v_{\rm max}$ & $v_{\rm max}$ \\ & (km s$ ^{-1} $) & (km s$ ^{-1} $) & $ (\degr)$& ($\degr$) & $(\degr)$& ($\degr$) & (km s$ ^{-1} $) & flag\\ \hline NGC 428&1162.4&1150 $\pm$ 6&40.3&45 $\pm$ 9 & 113.9 & 120 $\pm$ 4& 127 $\pm$ 5 & 2\\ NGC 691&2665.0&2712 $\pm$ 5&41.3&41 $\pm$ 10& 93.7 & 91 $\pm$ 4& 245 $\pm$ 3 & 1\\ NGC 864&1558.9&1550 $\pm$ 4&44.6&43 $\pm$ 5 & 22.0 & 25 $\pm$ 6& 169 $\pm$ 8 & 1\\ NGC 918&1502.4&1507 $\pm$ 3&54.6&57 $\pm$ 4 & 157.4 & 160 $\pm$ 2& 153 $\pm$ 3 & 1\\ NGC 1073&1209.0&1203 $\pm$ 2&27.3&29 $\pm$ 1 &180.5$^{(2)}$& 165 $\pm$ 5& 80 $\pm$ 4 & 2\\ NGC 2500&479.0 &540 $\pm$ 8&29.9&41 $\pm$ 2 & 62.7 & 85 $\pm$ 5& 111 $\pm$11 & 1\\ NGC 2541&530.0 &575 $\pm$ 3&57.0&57 $\pm$ 4 & 165.0 & 172 $\pm$ 3& 96 $\pm$ 6 & 2\\ NGC 2543&2469.8&2483 $\pm$ 6&59.2&61 $\pm$ 8 & 37.0 & 30 $\pm$ 3& 210 $\pm$ 3 & 1\\ NGC 2712&1815.0&1858 $\pm$ 2&57.9&58 $\pm$ 5 & 3.6 & 1 $\pm$ 3& 176 $\pm$ 4 & 1\\ NGC 2748&1461.3&1544 $\pm$ 6&52.9&74 $\pm$ 2 & 38.9 & 41 $\pm$ 2& 150 $\pm$ 3 & 1\\ NGC 2805&1730.3&1766 $\pm$ 4&36.0&36 $\pm$ 2 & 144.2 & 123 $\pm$ 3& 106 $\pm$ 7 & 3\\ NGC 3041&1400.2&1424 $\pm$ 4&50.7&50 $\pm$ 5 & 94.0 & 90 $\pm$ 5& 173 $\pm$ 7 & 1\\ NGC 3403&1239.5&1282 $\pm$ 5&66.9&66 $\pm$ 4 & 74.9 & 67 $\pm$ 4& 167 $\pm$ 5 & 1\\ NGC 3423&1000.8&1019 $\pm$ 4&28.4&28 $\pm$ 6 & 45.4 & 45 $\pm$ 5& 177 $\pm$ 6 & 2\\ NGC 3504&1523.5&1550 $\pm$ 6&20.9&39 $\pm$ 5 & 138.3 & 165 $\pm$ 8& 151 $\pm$ 8 & 1\\ NGC 4151&927.7 &1000 $\pm$ 5&48.1&21 $\pm$ 7 & 150.6 & 20 $\pm$ 3& 266 $\pm$ 4 & 4\\ NGC 4324&1639.1&1689 $\pm$ 5&63.3&65 $\pm$ 2 & 55.0 & 57 $\pm$ 2& 163 $\pm$ 2 & 4\\ NGC 4389&712.9 &718 $\pm$ 9&47.3&45 $\pm$ 2 & 98.9 & 100 $\pm$ 3& 149 $\pm$ 3 & 4\\ NGC 
4498&1656.0&1541 $\pm$ 8&57.0&58 $\pm$ 8 & 140.3 & 132 $\pm$ 7& 120 $\pm$ 8 & 2\\ NGC 4639&1003.3&1025 $\pm$ 2&50.2&39 $\pm$ 4 & 128.4 & 126 $\pm$ 4& 222 $\pm$ 3 & 1\\ NGC 5112&979.1 &1017 $\pm$ 5&49.2&49 $\pm$ 5 & 120.4 & 123 $\pm$ 5& 130 $\pm$ 2 & 1\\ NGC 5334&1372.0&1411 $\pm$ 5&41.5&42 $\pm$ 3 & 11.9 & 10 $\pm$ 6& 166 $\pm$ 4 & 2\\ NGC 5678&1898.5&1932 $\pm$ 5&56.4&60 $\pm$ 4 & 4.1 & 3 $\pm$ 3& 280 $\pm$ 4 & 1\\ NGC 5740&1566.1&1600 $\pm$ 3&56.8&57 $\pm$ 5 & 162.5 & 162 $\pm$ 2& 198 $\pm$ 3 & 1\\ NGC 5921&1409.4&1472 $\pm$ 3&33.2&40 $\pm$ 5 & 146.7 & 150 $\pm$ 6& 142 $\pm$ 9 & 1\\ NGC 6070&2000.1&2030 $\pm$ 4&63.8&60 $\pm$ 5 & 60.0 & 57 $\pm$ 3& 231 $\pm$ 2 & 2\\ NGC 6207&848.4 &869 $\pm$ 3&50.5&57 $\pm$ 7 & 19.4 & 20 $\pm$ 3& 136 $\pm$ 6 & 2\\ NGC 6412&1330.4&1342 $\pm$ 2&18.4&20 $\pm$ 5 & 76.9 & 115 $\pm$ 5& 175 $\pm$ 7 & 2\\ NGC 7241&1447.0&1407 $\pm$ 8&63.5&65 $\pm$ 8 & 18.5 & 23 $\pm$ 4& 142 $\pm$ 2 & 3\\ \hline\end{tabular} \end{table*} \subsection{Non-circular motions and residual velocity fields} \label{section6} With the rotation curves we explore the circular rotation of the sample galaxies. However, there can be deviations from these circular motions provoked by dynamical features of the galaxy, such as the influence of the potential of the bar, past interactions with a companion or streaming motions across the spiral arms. To understand the influence of these features on the galaxy kinematics, we want to study these deviations from rotation, in other words, the non-circular motions. Following the technique described in Paper I, the first step is to create a velocity model map that reflects the rotational velocities, which is done by translating the rotation curve into a 2D velocity map. The {\sc velfi} task in {\sc gipsy} was used for this, assuming kinematic symmetry and using the low-resolution rotation curve and the values of $ i $, PA, $x_{0}$, $y_{0}$ and $v_{\rm sys}$. 
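The projection of the rotation curve into a 2D velocity map can be sketched as follows; this is a minimal stand-in for {\sc velfi}, assuming pure circular rotation in a flat, kinematically symmetric disc (names and sign conventions are illustrative):

```python
import numpy as np

def velocity_model_map(shape, radii, vrot, x0, y0, pa_deg, incl_deg, v_sys):
    """Project a 1D rotation curve (radii, vrot) into a 2D line-of-sight
    velocity map, i.e. v_sys + v_rot(R) * cos(theta) * sin(i) per pixel."""
    pa, incl = np.radians(pa_deg), np.radians(incl_deg)
    y, x = np.indices(shape, dtype=float)
    dx, dy = x - x0, y - y0
    # rotate sky coordinates onto the major/minor axes and deproject
    xr = -dx * np.sin(pa) + dy * np.cos(pa)
    yr = -(dx * np.cos(pa) + dy * np.sin(pa)) / np.cos(incl)
    r = np.hypot(xr, yr)
    cos_theta = np.divide(xr, r, out=np.zeros_like(r), where=r > 0)
    v = np.interp(r, radii, vrot)   # rotation curve sampled at each pixel radius
    return v_sys + v * cos_theta * np.sin(incl)
```

Subtracting such a model map from the observed velocity field gives the residual map discussed in the next section.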
To extract the non-circular motions we subtract the velocity model map from the observed velocity field, resulting in a residual map that is interpreted as the non-circular motions map. This residual map shows the deviations from pure rotational velocity. In panels d) and e) of Fig. \ref{ngc2748plots}, we show the velocity model and the non-circular motions maps, respectively, for NGC~2748. For the other sample galaxies, similar figures are found in the Appendix A. Our procedure consists of studying separately those parts of galaxies most directly affected by the spiral density waves (spiral arms), those affected by the potential of the bar (bar region), { and those affected by both potentials: the start of the spiral arms close to the bar} (start-of-arms-region, SAR hereafter). We have defined these three regions (arms, bars and SAR) using images from NED (mainly SDSS false-colour images by \citealt{Baillard2011}), 3.6 $ \mu $m S$ ^{4} $G images (using the output from the 2D decompositions of the 3.6 $ \mu $m images by \citet{Salo2015}, to better constrain the bars) and our H$ \alpha $ ACAM images. Thus, we are not biased by a star formation-based classification when defining the regions, because there are barred galaxies that show the bar in the mid-IR but not in H$ \alpha $. {Firstly, based partly on the SDSS images but mainly on the IR images, we have defined the region where the stellar bar is, also taking into account the bar position angle and bar length (see Sect. \ref{structural}). Secondly, we have identified by eye those regions that are between the spiral arms and the bar, and which might be influenced by both (SARs). Finally, we have identified the spiral arms by looking at the intensity contours on the SDSS and 3.6 $ \mu $m images, drawing by eye their extension on the IR images. In the more flocculent galaxies, it is not easy to identify the extent of the spiral arms, so everything except for the bar and SAR is assumed to be spiral arms.
The three regions have been defined astrometrically with irregular shapes, and can be identified at all used wavelengths. In Fig.~D1, we present the 3.6 $ \mu $m S$ ^{4} $G images of the barred galaxies with a schematic representation of the bar region and the SAR overlaid in red and yellow for each barred galaxy. For completeness, we have added NGC~5678, a galaxy which may have a bar but which has not been classified as either SAB or SB (see Sect.~\ref{spiralncm}).} To quantify the non-circular motions, we have studied the cumulative distribution function (CDF) of the pixel values in the residual images. We have adopted the value of the 95$\%$ of the distribution of the residual velocities (in absolute value) as a representative value of the overall non-circular motion within that region. With this method, we avoid several possible caveats: if we had used a histogram, the bin size would have changed the adopted value for the residual velocity; and if we had defined regions of a certain physical scale, the size of the region would matter. Another advantage of the method is that the correlations and trends would not change if we chose 50, 70 or 95$\%$, as we represent the distribution of all the values. The uncertainties were estimated via a Monte Carlo simulation of the effect of adding random noise to the velocity residuals, the noise following a Gaussian distribution with sigma equal to the spectral {sampling}. In Fig. \ref{cdf} we present the statistical distribution of the residual velocities of the arm region in NGC~2748. The adopted value for the residual velocities there is 23.2 $ \pm $ 1.8 km s$ ^{-1} $, which corresponds to the representative value in the residual map of NGC~2748 (panel \textit{e} in Fig. \ref{ngc2748plots}). The computed values for the non-circular motions in the bar, SAR and spiral arms are presented in Table \ref{ncmtable}. {For completeness, we have normalised the residual velocities.
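The 95\% CDF statistic and its Monte Carlo uncertainty can be sketched as follows (a minimal illustration; the default noise sigma of 7.5 km s$^{-1}$ is our reading of the velocity sampling quoted in Sect. \ref{section5}, and the function name is ours):

```python
import numpy as np

def residual_v95(residuals, v_sampling=7.5, n_mc=500, seed=0):
    """Representative non-circular motion in a region: the 95th
    percentile of |residual velocity|, with an uncertainty from a
    Monte Carlo that adds Gaussian noise (sigma = spectral sampling,
    in km/s) to the residuals and repeats the measurement."""
    res = residuals[np.isfinite(residuals)]       # drop blanked pixels
    v95 = np.percentile(np.abs(res), 95.0)
    rng = np.random.default_rng(seed)
    draws = [np.percentile(np.abs(res + rng.normal(0.0, v_sampling, res.size)), 95.0)
             for _ in range(n_mc)]
    return v95, np.std(draws)                     # value, uncertainty
```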
To do so, we have computed the corresponding circular velocity at a radius equal to the bar length, to the extent of the SAR, and to the last measured point (assuming that the spiral arms end there), following the universal rotation curve by \citet{Persic1991}. This method includes several uncertainties, such as assuming that the circular velocity across the complete region is equal to that at the last radial point, or that the region is circular with a radius equal to the extent of the structure (bar, SAR or arms). Note that the residual velocities have been deprojected.} \begin{figure} \begin{center} \includegraphics[width=84mm]{cdf1.pdf} \includegraphics[width=84mm]{cdf2.pdf} \caption{Statistical distribution of the residual velocities of the spiral arm region of NGC~2748. \textit{Top)} Histogram distribution of the residual velocities found in the spiral arm region of the galaxy using a bin size of 1 km s$ ^{-1} $ (in absolute value). \textit{Bottom)} Cumulative distribution function of the absolute values of the residual velocities for the spiral arm region of NGC~2748. We indicate the 95$\%$ value, which we use as a representative overall measure of the residual velocities.} \label{cdf} \end{center} \end{figure} \subsection{Structural parameters} \label{structural} To understand the nature of the non-circular motions within the regions of the galaxies, we study them in conjunction with some of the structural parameters that have been derived for the galaxies of our sample. The bar strength is quantified by the torque parameter $ Q_{\rm b} $, which describes the maximum amplitude of the tangential forcing normalised by the axisymmetric radial force (\citealt{Combes1981}; \citealt{Buta2001}). $ Q_{\rm b} $ values have been measured from the torque maps derived from the 3.6 $ \mu $m S$ ^{4} $G images (for a description of the method see \citealt{Salo2010} and \citealt{Laurikainen2002}), and will be published in a compilation of bar strengths for the S$ ^{4} $G sample (S.
D\'iaz-Garc\'ia et al., in prep). The parameter $ Q_{\rm b} $ is defined so that it decreases as the axisymmetric radial force increases, hence the influence of the bulge is implicitly present in $ Q_{\rm b} $. Due to the bulge influence, $ Q_{\rm b} $ is not the best parameter to quantify the effect of the bar on the gas kinematics. However, we will use those $ Q_{\rm b} $ values as they are the only bar-strength measurements that have been derived from the S$ ^{4} $G images. We have also studied the bulge-to-total ratio (\textit{B/T}). The values of \textit{B/T} have been derived from 2D decompositions of the 3.6 $ \mu $m images \citep{Salo2015}. We list the $ Q_{\rm b} $ and \textit{B/T} values for the galaxies of our sample in Table~\ref{ncmtable}. {The bar lengths have been measured from the torque maps (D\'iaz-Garc\'ia et al., in prep.). The bar lengths are given in deprojected units, measured using the method explained in \citet{Salo2010}, except when the fits do not have trustworthy quality, in which case the bar lengths are measured visually (Herrera-Endoqui et al., in prep.).} We have also studied the spiral arm class, as a parameter that is related to the spiral arm strength. We have obtained the arm class classifications carried out by D. Elmegreen and presented in the CVRHS, where F means \textit{flocculent}, M denotes \textit{multi-armed} and G \textit{grand design}. Note that NGC~4324 does not have an arm class classification. The values for the arm classification are in Table~\ref{ncmtable}. \begin{table*} \caption{Non-circular motions as measured in the bar (column II), SAR (column III) and spiral arms (column IV) of the galaxies of the sample. We use the 95$\%$ value of the cumulative distribution function as a representative overall measure of the residual velocities. Column V) Bar strength ($ Q_{\rm b} $) measured from the torque maps derived from the 3.6 $ \mu $m S$ ^{4} $G images (S.
D\'iaz-Garc\'ia et al. in prep). Column VI) \textit{B/T} from the 2D decompositions to the 3.6 $ \mu $m images \citet{Salo2015}. Column VII) Arm class (AC) classification from {the CVRHS: F=flocculent, M=multi-armed, G=grand design}. The ``-" refers to non-barred galaxies, to those barred ones that do not have H$ \alpha $ emission within the bar or to galaxies that do not have an arm classification. {($ ^{*} $) These galaxies do not have an arm classification in the CVRHS, but as their arm classification in \citet{Elmegreen1987} was 5, we have classified them as multi-armed (M).}} \label{ncmtable} \center \begin{tabular}[angle=90]{c|clclc|clclc|} \hline Galaxy name& \textit{v}$_{\rm RES,BAR}$ & \textit{v}$_{\rm RES,SAR}$ & \textit{v}$_{\rm RES,ARMS}$& $ Q_{\rm b} $ & \textit{B/T} & AC\\ & (km s$ ^{-1} $) & (km s$ ^{-1} $) & (km s$ ^{-1} $) & & &\\ \hline NGC 428 & 35.8 $\pm$ 1.4 & 23.5 $\pm$ 2.7 & 18.8 $\pm$ 2.7 & 0.29 $\pm$ 0.03 & 0.002 & F \\ NGC 691 & - & - & 19.1 $\pm$ 2.5 & - & 0.170 & M \\ NGC~864 & 40.1 $\pm$ 1.8 & 45.0 $\pm$ 2.4 & 17.0 $\pm$ 2.9 & 0.47 $\pm$ 0.07 & 0.027 & M \\ NGC 918 & 24.9 $\pm$ 2.3 & 26.7 $\pm$ 0.4 & 20.2 $\pm$ 2.3 & 0.23 $\pm$ 0.02 & 0.008 & M \\ NGC 1073 & 14.6 $\pm$ 2.4 & 13.6 $\pm$ 3.3 & 12.8 $\pm$ 3.0 & 0.63 $\pm$ 0.08 & 0.000 & M \\ NGC 2500 & 20.6 $\pm$ 2.4 & 16.3 $\pm$ 3.5 & 14.5 $\pm$ 3.0 & 0.28 $\pm$ 0.03 & 0.002 & F \\ NGC 2541 & - & - & 18.9 $\pm$ 2.2 & - & 0.000 & F \\ NGC 2543 & 63.2 $\pm$ 0.3 & 43.2 $\pm$ 0.4 & 30.8 $\pm$ 1.7 & 0.35 $\pm$ 0.08 & 0.165 & G \\ NGC 2712 & 48.3 $\pm$ 1.5 & 23.3 $\pm$ 1.5 & 18.5 $\pm$ 2.6 & 0.28 $\pm$ 0.05 & 0.170 & M \\ NGC 2748 & 44.6 $\pm$ 0.4 & 21.5 $\pm$ 2.7 & 23.2 $\pm$ 1.8 & 0.45 $\pm$ 0.03 & 0.034 & M \\ NGC 2805 & 11.1 $\pm$ 3.3 & 13.9 $\pm$ 2.6 & 15.2 $\pm$ 2.7 & 0.19 $\pm$ 0.01 & 0.002 & M \\ NGC 3041 & - & - & 22.6 $\pm$ 2.3 & - & 0.043 & M \\ NGC 3403 & - & - & 19.9 $\pm$ 1.9 & - & 0.000 & M \\ NGC 3423 & - & - & 13.0 $\pm$ 3.1 & - & 0.055 & F \\ NGC 3504 & 41.4 $\pm$ 2.0 & 11.4 
$\pm$ 3.7 & 12.6 $\pm$ 3.4 & 0.25 $\pm$ 0.06 & 0.364 & G \\ NGC 4151 & 11.8 $\pm$ 3.1 & 10.1 $\pm$ 3.9 & 9.6 $\pm$ 4.3 & 0.09 $\pm$ 0.02 & 0.443 & M$^{(*)}$\\ NGC 4324 & - & - & 16.2 $\pm$ 3.2 & - & 0.326 & - \\ NGC 4389 & 18.6 $\pm$ 2.6 & 26.3 $\pm$ 1.5 & 21.4 $\pm$ 3.0 & 0.52 $\pm$ 0.06 & 0.000 & M$^{(*)}$\\ NGC 4498 & 19.7 $\pm$ 2.6 & 16.2 $\pm$ 2.8 & 23.9 $\pm$ 1.3 & 0.46 $\pm$ 0.07 & 0.000 & F \\ NGC 4639 & - & 17.7 $\pm$ 2.5 & 15.6 $\pm$ 3.0 & 0.26 $\pm$ 0.04 & 0.112 & M \\ NGC 5112 & 14.0 $\pm$ 2.9 & 23.4 $\pm$ 2.7 & 19.9 $\pm$ 2.1 & 0.62 $\pm$ 0.06 & 0.000 & M \\ NGC 5334 & - & 18.1 $\pm$ 2.9 & 15.3 $\pm$ 2.8 & 0.49 $\pm$ 0.08 & 0.001 & F \\ NGC 5678 & - & - & 72.6 $\pm$ 0.1 & - & 0.037 & F \\ NGC 5740 & - & 21.0 $\pm$ 2.4 & 21.1 $\pm$ 1.7 & 0.16 $\pm$ 0.03 & 0.125 & M \\ NGC 5921 & - & 21.9 $\pm$ 1.7 & 16.5 $\pm$ 2.6 & 0.34 $\pm$ 0.06 & 0.111 & M \\ NGC 6070 & - & - & 20.7 $\pm$ 1.7 & - & 0.045 & M \\ NGC 6207 & 18.6 $\pm$ 2.6 & 15.5 $\pm$ 3.0 & 17.9 $\pm$ 2.3 & 0.21 $\pm$ 0.02 & 0.000 & F \\ NGC 6412 & 12.7 $\pm$ 2.9 & 5.2 $\pm$ 6.7 & 15.0 $\pm$ 2.6 & 0.24 $\pm$ 0.02 & 0.005 & M \\ NGC 7241 & - & - & 18.6 $\pm$ 2.3 & - & 0.000 & M \\ \hline\end{tabular} \end{table*} \subsection{Star formation rates} \label{section7} The H$ \alpha $ line traces the emission from massive young stars. {The advantages of this recombination line are that it traces very recent SF (timescales $ \sim $6-8 Myr), and that it has lower sensitivity to dust attenuation than the UV (although not negligible, \citealt{Cardelli1989}).} Thus, we can exploit the H$ \alpha $ imaging and relate the star formation to the galaxy's kinematics. 
We derive the star formation rates (SFRs) from the H$ \alpha $ luminosity (L$ _{\rm H\alpha} $) following \citet{Kennicutt2009}: \begin{equation} \label{sfrha} {\rm SFR} (M_{\odot} {\rm yr^{-1}}) = 5.5 \times 10^{-42} L({\rm H\alpha}), \end{equation} where $L({\rm H\alpha})$ is the luminosity, calculated as \begin{equation} L({\rm H\alpha}) [{\rm erg\,s^{-1}}]=4\pi D^{2} (3.086 \times 10^{24})^{2}F_{{\rm H\alpha}}^{*}, \end{equation} with \textit{D} the distance to the galaxy in Mpc (Table 1) and $F_{{\rm H\alpha}}^{*}$ the flux corrected for Galactic absorption, as taken from NED and obtained from the \citet{Schlafly2011} recalibration of the \citet{Schlegel1998} dust map. {Equation \ref{sfrha} assumes a \citet{Kroupa2001} stellar IMF with a mass range of 0.1-100 $M_{\odot} $, an electron temperature of $ T_{\rm e} =10^{4} $ K and an electron density $ n_{\rm e} =100$ cm$ ^{-3} $; variations in $ T_{\rm e} $ from 5000 to 20000 K would result in a variation of the calibration constant ($5.5 \times 10^{-42}$) of $\sim$15\%. Variations of $ n_{\rm e} = 100-10^{6} $ cm$ ^{-3} $ would result in variations in the calibration constant below 1\% \citep{Osterbrock2006}. This calibration also assumes that star formation remains constant over timescales $>$6 Myr, and it gives no information about the previous star formation history (see \citealt{Kennicutt2012}).} The uncertainties in the SFR measurements come mainly from the distance, with a typical 20\% uncertainty estimated for the distance value, followed by the basic reduction processes (i.e., flat-fielding correction, around 2\%), the zero-point calibration (3\%, in agreement with the value of 2\% typical for photometric nights), the flux measurement (1\%) and the continuum-subtraction process (around 11\%). Altogether, taking into account the uncertainties related to the quality of the image (without considering the distance) we estimate an uncertainty of 17\%.
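Equations 1 and 2 can be reproduced as follows; the quadrature combination in the second function, in which the distance term enters twice because the luminosity scales as $D^{2}$, is our assumption and not necessarily the combination used for the tabulated errors:

```python
import numpy as np

MPC_CM = 3.086e24  # cm per Mpc

def halpha_sfr(flux_cgs, dist_mpc):
    """SFR (Msun/yr) from a Galactic-absorption-corrected Halpha flux
    (erg s^-1 cm^-2), following the Kennicutt calibration of Eq. 1."""
    lum = 4.0 * np.pi * (dist_mpc * MPC_CM) ** 2 * flux_cgs   # erg/s (Eq. 2)
    return 5.5e-42 * lum                                      # Eq. 1

def sfr_uncertainty(sfr, dist_frac=0.20, image_frac=0.17):
    """Illustrative total error: the distance term (doubled, since
    L ~ D^2) combined in quadrature with the 17% image-quality budget."""
    return sfr * np.sqrt((2.0 * dist_frac) ** 2 + image_frac ** 2)
```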
To consider only star formation, we have to take into account that some of the galaxies of our sample have nuclear activity: NGC~918 (no specific classification; \citealt{Palumbo1983}); NGC~4151 (Seyfert type 1.5; \citealt{Khachikian1974}) and NGC~4639 (Seyfert 1.0; \citealt{Ho1997}), where all nuclear activity classifications have been collected in \citet{VeronCetty2006}. The H$ \alpha $ emission in these cases is not all coming from star formation, so the derived SFRs in these galaxies should be understood as upper limits. We have masked the nuclear emission in these three cases and excluded it from the integration (the masked regions are very small in the cases of NGC~918 and the low-luminosity AGN of NGC~4639), so that the global H$ \alpha $ luminosities are not biased by the AGN. NGC~3504 on the other hand is a starburst galaxy \citep{Balzano1983} with a large amount of star formation in its centre. In Fig. \ref{sfr}, we present the SFR as a function of the morphological \textit{T}-type (from {the CVRHS}). Excluding the starburst galaxy (which stands out in the plot as a filled green diamond), we find the same trend as widely found in the literature (e.g., \citealt{Kennicutt1998}; \citealt{James2004}; and references therein): early-type galaxies tend to have low SFRs, intermediate-type spirals have the highest SFRs and very late-type spirals and irregulars have lower SFRs than the intermediate-type ones, although higher than those of galaxies of the earliest types. Although our sample is small, we confirm that the presence of a bar does not affect the global SFR (e.g., \citealt{Phillips1993}; \citealt{Kennicutt1994}; \citealt{Kennicutt1998} and references therein; \citealt{James2004}; \citealt{Fisher2006}). {A Kolmogorov-Smirnov (K-S) test on both the barred and the non-barred distributions with respect to the SFR results in a \textit{P}-value of 0.094.
This \textit{P}-value means that the null hypothesis that both samples are drawn from the same parent distribution cannot be rejected; in other words, the SFR distributions of the barred and the non-barred galaxies are very similar. From our ACAM images, we have measured the total SFRs for the galaxies of our sample, and also the SFRs for each of the defined regions. In Table~\ref{sfrtable} we present the resulting SFRs (total, bar, SAR and arms). } \begin{figure} \begin{center} \includegraphics[width=84mm]{sfrtot.pdf} \caption{SFR as a function of morphological \textit{T}-type. Barred galaxies are presented with diamonds, whereas non-barred galaxies are presented with asterisks. The starburst galaxy NGC~3504 is identified with a filled green diamond.} \label{sfr} \end{center} \end{figure} \begin{table*} \caption{SFRs measured from the ACAM H$ \alpha $ images, corrected for Galactic absorption (see text). \textit{Notes}: The ``-" refers to non-barred galaxies or to those barred ones that do not have H$ \alpha $ emission within the bar.
(*) The AGN contribution has been excluded from the measurements.} \label{sfrtable} \center \begin{tabular}[angle=90]{|c|c|c|c|c|} \hline Galaxy name& SFR$ _{\rm tot} $ &SFR$ _{\rm bar} $ &SFR$ _{\rm SAR} $ &SFR$ _{\rm arms} $ \\ & ($M_{\odot} $ yr$ ^{-1} $) & ($M_{\odot} $ yr$ ^{-1} $)& ($M_{\odot} $ yr$ ^{-1} $)& ($M_{\odot} $ yr$ ^{-1} $)\\ \hline NGC 428 &0.431 $\pm$ 0.172 & 0.019 $\pm$ 0.008 & 0.078 $\pm$ 0.031 & 0.334 $\pm$ 0.134 \\ NGC 691 &0.628 $\pm$ 0.251 & - & - & 0.628 $\pm$ 0.251 \\ NGC 864 &0.867 $\pm$ 0.347 & 0.082 $\pm$ 0.033 & 0.035 $\pm$ 0.014 & 0.750 $\pm$ 0.300 \\ NGC 918 &0.475$^{(*)}$ $\pm$0.190 & 0.006 $\pm$ 0.002 & 0.004 $\pm$ 0.002 & 0.465 $\pm$ 0.186 \\ NGC 1073 &0.731 $\pm$ 0.293 & 0.029 $\pm$ 0.011 & 0.018 $\pm$ 0.007 & 0.685 $\pm$ 0.274 \\ NGC 2500 &0.161 $\pm$ 0.065 & 0.002 $\pm$ 0.001 & 0.006 $\pm$ 0.002 & 0.154 $\pm$ 0.062 \\ NGC 2541 &0.191 $\pm$ 0.076 & - & - & 0.191 $\pm$ 0.076 \\ NGC 2543 &0.379 $\pm$ 0.152 & 0.060 $\pm$ 0.024 & 0.022 $\pm$ 0.009 & 0.297 $\pm$ 0.119 \\ NGC 2712 &0.209 $\pm$ 0.084 & 0.063 $\pm$ 0.025 & 0.021 $\pm$ 0.008 & 0.125 $\pm$ 0.050 \\ NGC 2748 &0.521 $\pm$ 0.208 & 0.018 $\pm$ 0.007 & 0.158 $\pm$ 0.063 & 0.345 $\pm$ 0.138 \\ NGC 2805 &1.041 $\pm$ 0.417 & 0.003 $\pm$ 0.001 & 0.017 $\pm$ 0.007 & 1.022 $\pm$ 0.409 \\ NGC 3041 &0.730 $\pm$ 0.292 & - & - & 0.730 $\pm$ 0.292 \\ NGC 3403 &0.234 $\pm$ 0.094 & - & - & 0.234 $\pm$ 0.094 \\ NGC 3423 &0.298 $\pm$ 0.119 & - & - & 0.298 $\pm$ 0.119 \\ NGC 3504 &0.931 $\pm$ 0.373 & 0.687 $\pm$ 0.275 & 0.148 $\pm$ 0.059 & 0.097 $\pm$ 0.039 \\ NGC 4151&0.251$^{(*)}$ $\pm$ 0.100 & 0.025 $\pm$ 0.010 & 0.009 $\pm$ 0.003 & 0.217 $\pm$ 0.087 \\ NGC 4324 &0.052 $\pm$ 0.021 & - & - & 0.052 $\pm$ 0.021 \\ NGC 4389 &0.061 $\pm$ 0.025 & 0.035 $\pm$ 0.014 & 0.013 $\pm$ 0.005 & 0.013 $\pm$ 0.005 \\ NGC 4498 &0.112 $\pm$ 0.045 & 0.011 $\pm$ 0.004 & 0.021 $\pm$ 0.008 & 0.081 $\pm$ 0.032 \\ NGC 4639 &0.170$^{(*)}$ $\pm$0.068 & 0.003 $\pm$ 0.001 & 0.045 $\pm$ 0.018 & 0.122 $\pm$ 0.049 \\ 
NGC 5112 &0.537 $\pm$ 0.215 & 0.020 $\pm$ 0.008 & 0.099 $\pm$ 0.040 & 0.418 $\pm$ 0.167 \\ NGC 5334 &0.177 $\pm$ 0.071 & 0.002 $\pm$ 0.001 & 0.018 $\pm$ 0.007 & 0.157 $\pm$ 0.063 \\ NGC 5678 &0.802 $\pm$ 0.321 & - & - & 0.802 $\pm$ 0.321 \\ NGC 5740 &0.273 $\pm$ 0.109 & 0.005 $\pm$ 0.002 & 0.062 $\pm$ 0.025 & 0.206 $\pm$ 0.083 \\ NGC 5921 &0.974 $\pm$ 0.390 & 0.014 $\pm$ 0.006 & 0.107 $\pm$ 0.043 & 0.853 $\pm$ 0.341 \\ NGC 6070 &0.946 $\pm$ 0.379 & - & - & 0.946 $\pm$ 0.379 \\ NGC 6207 &0.405 $\pm$ 0.162 & 0.007 $\pm$ 0.003 & 0.113 $\pm$ 0.045 & 0.285 $\pm$ 0.114 \\ NGC 6412 &0.484 $\pm$ 0.194 & 0.030 $\pm$ 0.012 & 0.002 $\pm$ 0.001 & 0.452 $\pm$ 0.181 \\ NGC 7241 &0.109 $\pm$ 0.044 & - & - & 0.109 $\pm$ 0.044 \\ \hline\end{tabular} \end{table*} \section{Data release} \label{section8} All the data discussed here are released publicly with the publication of this paper: raw but reduced cubes {(de-rotated, phase corrected and wavelength calibrated cubes)}, continuum-subtracted cubes, 0th, 1st and 2nd order moment maps and continuum-subtracted H$ \alpha $ images. The data are available in FITS format through the NED and the Centre de Donn\'ees Stellaires (CDS). \section{Discussion} \label{section9} In this Section we will discuss critically the data and data quality, and the results from our analysis in terms of kinematical parameters and non-circular motions. We will not analyse here the rotation curves as they will be studied in depth in the forthcoming Paper III (Erroz-Ferrer et al., in prep.). \subsection{Data quality} We first discuss how our observing, reduction and analysis procedures have led to caveats and constraints on the data quality. \subsubsection{Effects from the nature of the H$ \alpha $ data} One of the scientific goals of this kinematical survey is the study of the inner parts of the rotation curves, exploiting the high angular resolution provided by GH$ \alpha $FaS data. 
To do so, we have tried not to change the angular resolution when reducing the data, as would have been the case when using large spatial smoothing kernels or adaptive binning methods. As the data are made public, the user can always choose to adopt another smoothing kernel depending on scientific interest. H$ \alpha $ data typically have a patchy appearance, with many blank spaces in between the H{\sc ii} regions. As we have hardly smoothed the data spatially, this effect is rather obvious in most of our data. The data set contains galaxies with high signal and a less patchy appearance (such as NGC~2748 or NGC~5678), and also galaxies with low signal and a patchier appearance (e.g., NGC~691 or NGC~918). Consequently, the process of deriving rotation curves is difficult and leads to unsampled radial ranges or poor radial sampling. For example, the central regions of the galaxies NGC~428, NGC~4151, NGC~4324, NGC~4639, NGC~5334, NGC~5740 and NGC~5921 are not sampled and cannot be studied. \subsubsection{Kinematical parameters versus photometric parameters} \begin{figure} \begin{center} \includegraphics[width=84mm]{i_comparison.pdf} \includegraphics[width=84mm]{pa_comparison.pdf} \caption{Comparison between the inclination and position angle derived from the S$ ^{4} $G 3.6 $ \mu $m images (i.e. photometric) and derived with the tilted-ring method applied to our H$ \alpha $ FP data (i.e. kinematical). Left) Comparison of the inclinations derived photometrically and kinematically. Right) Comparison between the position angles derived photometrically and kinematically. \textit{Note:} Due to our definition of the position angles ($0\degr \leq \rm PA < 180\degr$), the difference between PA=0$\degr$ and PA=179$\degr$ is not 179$\degr$, but 1$\degr$.} \label{comparisons} \end{center} \end{figure} Another caveat (though related to the first point) is the determination of the kinematical parameters $ i $, PA or $v_{\rm sys} $. We explained in Sect.
\ref{section5} how we have derived our rotation curves using {\sc rotcur} in GIPSY. Due to the nature of our data, it is not easy to determine the kinematical parameters, and it is often more reliable to use other types of data, with full spatial coverage and full spatial sampling albeit at lower angular resolution. Thus, in our fits we have used H{\sc i}-derived parameters, whenever possible, as initial conditions. We also use the photometric information provided by the ellipse fitting analysis on the S$ ^{4} $G 3.6 $ \mu $m images by Mu\~noz-Mateos et al. (in prep), although our results do not always match the photometric results (see Table \ref{rotcuroutput} for a comparison). Regarding the PA, the PV diagrams and the position of the minor axis in our velocity maps are good tracers of the kinematical PA, so the results from {\sc rotcur} describe the gas motion better than the photometric results. There is one case with a high discrepancy: NGC~4151, with PA$_{3.6} $=150.6$\degr$ and PA$_{\rm H\alpha} $=20$\degr \pm$ 3$\degr$ (since the PA is defined modulo 180$\degr$, PA$_{\rm H\alpha} $ is equivalent to 180$\degr$+20$\degr$=200$\degr$, so the true difference is 49.4$\degr$). After an inspection of the continuum-subtracted cube, and also by looking at the velocity map (panel b of Fig. A8 bottom), we have found that the PA is closer to 0$\degr$, very different from the 150$\degr$ derived from the S$ ^{4} $G image. \citet{Bosma1977} discussed the first H{\sc i} synthesis maps of this galaxy, along with a deep photograph by Arp showing the outer spiral. They found PA$_{\rm kin} $=19$\degr$ $\pm$ 4$\degr$, much closer to our PA$_{\rm H\alpha} $ than to PA$_{\rm phot} $. We confirm Bosma's conclusion: the photometric image detects only the oval distribution of the galaxy.
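The wrapped PA comparison used above (position angles defined on $[0\degr,180\degr)$, so PA and PA$+180\degr$ describe the same line on the sky) can be illustrated with a few lines of Python; this is a sketch for illustration only, not part of our pipeline, and the function name is ours:

```python
def pa_difference(pa1, pa2):
    """Minimal difference between two position angles, in degrees.

    Position angles on [0, 180) describe an undirected line on the sky,
    so PA and PA + 180 deg are equivalent; the meaningful difference is
    therefore wrapped so that it never exceeds 90 deg.
    """
    d = abs(pa1 - pa2) % 180.0
    return min(d, 180.0 - d)

# Example from the text: NGC 4151, PA_phot = 150.6 deg vs PA_kin = 20 deg.
print(round(pa_difference(150.6, 20.0), 1))  # -> 49.4, not 130.6
# And the caption example: PA = 0 deg vs PA = 179 deg differ by 1 deg.
print(pa_difference(0.0, 179.0))             # -> 1.0
```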
In the same way, there are other galaxies in the sample where the mid-IR morphology results in PA$_{\rm phot}$ different from PA$_{\rm kin} $, but our PA$_{\rm kin} $ values agree with previous determinations of PA$_{\rm kin} $ found in the literature. {These are presented in Table~\ref{pacomparison}. All the galaxies in Table~\ref{pacomparison} are barred (NGC~2805 is classified as SA in the CVRHS, but is claimed to host a bar in the 2D structural decomposition of the S$ ^{4} $G images, \citealt{Salo2015}). Therefore, the differences between the photometric and kinematic PA may well be explained by the presence of the bar, which, depending on its orientation, influences the kinematics in the bar region and causes deviations from pure circular motions (see Sect.~\ref{ncm}). Also, irregular outer spiral arms (such as in NGC~6412) can influence the measurement of PA$_{\rm phot}$, and we conclude that PA$_{\rm kin}$ traces the true PA better than PA$_{\rm phot}$.} \begin{table*} \caption{PA from different sources, measured in degrees and defined from North to East. \textit{Column I)} PA$_{\rm phot}$ from the ellipse fitting to the 3.6 $ \mu $m S$ ^{4} $G images. \textit{Column~II)} PA$_{\rm kin}$ from our H$ \alpha $ FP data. \textit{Column~III)} PA$_{\rm kin}$ values found in the literature.
\textit{Column~IV)} References.} \label{pacomparison} \center \begin{tabular}{|c|c|c|c|c|} \hline Galaxy & PA$_{\rm phot} $& PA$_{\rm kin,THIS~STUDY} $ & PA$_{\rm kin,lit} $ & Reference\\ \hline NGC~1073 & 180.5 & 165 $\pm$ 4 & 164.6 & \citet{England1990}\\ NGC~2500 & 62.7 & 85 $\pm$ 5 & 85 & GHASP~VII \\ NGC~2805 &144.2 & 123 $\pm$ 3 & 114 & GHASP~VII\\ NGC~3504 &138.3 & 165 $\pm$ 8 & 163 & GHASP~VII \\ NGC~6412 & 76.9 & 115 $\pm$ 5 & 115 & GHASP~VII \\ \hline \end{tabular} \end{table*} As for the inclination, the largest discrepancies occur in NGC~4151 (already discussed), and in NGC~2748 and NGC~3504, where the inclinations as derived from the kinematic data are substantially higher than those derived from the surface photometry. This can be due to the non-uniform distribution of the H{\sc ii} regions as compared to the distribution of the stars and dust as seen in the S$ ^{4} $G 3.6 $ \mu $m images. {NGC~2748 shows a bulge at faint light levels, hence the discrepancy in inclination. NGC~3504 has a low inclination, seen in the S$ ^{4} $G image}. Also, NGC~2748 and NGC~3504 host an outer ring (R' and R'$ _{1} $, respectively), not seen in H$ \alpha $. { \subsubsection{Comparison with literature data} In the Introduction, we mentioned the H$ \alpha $ FP kinematic surveys of \citealt{Garrido2002} (GHASP), \citealt{Daigle2006a} (SINGS sample) and \citealt{Chemin2006} (VIRGO sample). These surveys have used instrumentation similar to GH$ \alpha $FaS: Cigale and FaNTOmM (Fabry–Perot de Nouvelle Technologie de l'Observatoire du Mont M\'egantic\footnote{http://www.astro.umontreal.ca/fantomm}). These were mounted at {different} telescopes: the 3.6 m Canada-France-Hawaii Telescope (VIRGO and SINGS), {the 3.6 m European Southern Observatory telescope (VIRGO)}, the 1.93 m Observatoire de Haute-Provence telescope (VIRGO, SINGS and GHASP), and the 1.60 m Observatoire du Mont M\'egantic telescope (VIRGO and SINGS).
All these data are seeing-limited, but taken under observing conditions worse than ours (less than 2" seeing in the very best cases, but with seeing values of 8"-12" in the worst cases). In Table~\ref{tablecomparison}, we show the different quality characteristics of these surveys as compared to our GH$ \alpha $FaS data. \begin{table*} \caption{Characteristics of the H$ \alpha $ FP kinematical surveys of \citealt{Garrido2002} (GHASP), \citealt{Daigle2006a} (SINGS sample) and \citealt{Chemin2006} (VIRGO sample), shown along with the characteristics of the data used in this paper. \textit{Note)} The `-' denotes that there are no galaxies in that seeing range (VIRGO, GHASP and this paper), or that no information about the seeing was presented (SINGS). $ ^{(*)} $ Information before the reduction processes (i.e., before smoothing).} \label{tablecomparison} \center \begin{tabular}{|c|c|c|c|c|} \hline Parent survey & VIRGO & SINGS & GHASP & S$ ^{4} $G (this paper)\\ \hline Sample size & 30 & 28 & 203 & 29\\ \hline Seeing $ < 1\farcs5 $ & 11 &- & - &29\\ 2" $ < $ Seeing $ \lesssim $ 4" & 19 &- & 143 & - \\ 4" $ < $ Seeing $ \lesssim $ 6" &- & -& 45 & - \\ Seeing $ > 6\farcs0 $ &- &- & 15& - \\ \hline Angular sampling & 0\farcs42 - 1\farcs61 &0\farcs42 - 1\farcs61 & 0\farcs68 - 0\farcs96 & 0\farcs2 \\ \hline Spatial smoothing & Voronoi S/N=5 &Voronoi S/N=5 & Voronoi S/N=5 &Median 3$ \times $3 pix\\ & Gaussian 3"$ \times $3" & & & \\ \hline Spectral sampling$ ^{(*)} $& 7 - 14 km s$ ^{-1} $ &7 - 14 km s$ ^{-1} $ & $\sim$10 km s$ ^{-1} $ & $\sim$8 km s$ ^{-1} $\\ \hline \end{tabular} \end{table*} The reduction techniques used in these surveys are based on the University of Montr\'eal Improved 3D Fabry-Perot Data Reduction Techniques\footnote{http://www.astro.umontreal.ca/~odaigle/reduction/} and are summarised in \citet{Daigle2006}. The most important reduction steps that differ from ours are spectral and spatial smoothing.
The three surveys smooth the data spectrally (that is, in the wavelength direction), with three-channel or Hanning-filtering smoothing. Also, they use the Voronoi adaptive binning method \citep{Cappellari2003}, which consists of gathering as many pixels as necessary in order to achieve a threshold value of the signal-to-noise ratio for the considered bin. Hence, the Voronoi method allows maximum spatial resolution for regions with strong emission and good signal-to-noise values for regions with low emission. In contrast, with the goal of keeping the highest spectral and angular resolution possible throughout, we have not smoothed the data spectrally, and have applied only a median spatial smoothing of 3$\times$3 pixels. These reduction procedures have been chosen following our scientific goals of studying the inner parts of the rotation curves and studying streaming motions within the different galaxy substructures. In Fig. \ref{comparisoghasp}, {we compare the velocity fields of three galaxies in the overlap of our sample with that of GHASP (NGC~864, NGC~2500 and NGC~2543) and two galaxies in the overlap with the VIRGO sample (NGC~4498 and NGC~4639). We compare the velocity fields as obtained from the GH$ \alpha $FaS, GHASP and VIRGO data by different reduction methods: ours, as explained in Sect. \ref{section4}, and following the reduction techniques used for the GHASP and VIRGO surveys (three-channel spectral smoothing and Voronoi adaptive binning, see \citealt{Chemin2006} and \citealt{Epinat2008})}. In these figures, it is possible to compare directly the advantages and disadvantages of each reduction procedure. On the one hand, we see that one drawback of following the reduction steps of Sect. \ref{section4} is that the spatial coverage is limited to the location of the H{\sc ii} regions, resulting in patchy maps. On the other hand, Voronoi smoothing often reduces the angular resolution to that typical of H{\sc i} data (6"-10").
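Unlike adaptive binning, the 3$\times$3 median smoothing described above preserves the pixel grid, and hence the angular resolution. A minimal illustration in Python with hypothetical toy data (this is a sketch of the operation, not our actual reduction code):

```python
import numpy as np
from scipy.ndimage import median_filter

# Toy velocity-channel image (hypothetical data, not from the pipeline):
# Gaussian noise, an extended HII-region-like source, and one isolated spike.
rng = np.random.default_rng(42)
channel = rng.normal(0.0, 1.0, size=(64, 64))
channel[30:35, 30:35] += 50.0   # extended emission (5x5 pixels)
channel[5, 5] = 100.0           # cosmic-ray-like single-pixel spike

# 3x3 median smoothing: suppresses the isolated spike while leaving the
# extended source essentially untouched, one output pixel per input pixel.
smoothed = median_filter(channel, size=3)
```

A Voronoi-binned version of the same data would instead merge low-signal pixels into larger bins, gaining signal-to-noise at the cost of spatial resolution.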
With the results from the adaptive binning, it is very difficult to perform studies inside the structures of the galaxies (such as bars or spiral arms), as they cannot be resolved any more. This includes the study of the streaming motions (this paper), and of the inner parts of the rotation curves (Paper III). As explained in Sect.~\ref{section8}, we publish our raw and basic reduced data, and anyone interested in smoothed versions of our data sets, whether spectrally smoothed or Voronoi binned, can simply download the data and apply their own favourite recipes.} {In Fig.~\ref{comparisoghasp2} we present the velocity field of the central part of NGC~2543 to show the differences between the GH$ \alpha $FaS and GHASP data, both reduced following the techniques explained in the current paper. The differences arising from the different pixel scale (0$\farcs$2 of GH$ \alpha $FaS against 0$\farcs$68 of GHASP) and different seeing conditions ($1\farcs3$ for our observations against $3\farcs7$ for GHASP) can clearly be seen.} {To see the possible impact of the different reduction processes, we also compare the rotation curves of the overlapping galaxies. In Fig.~\ref{rotcurcomparison} we superimpose the rotation curves from GHASP and VIRGO papers onto our rotation curves for the galaxies in Fig.~\ref{comparisoghasp}. \citet{Erroz-Ferrer2012} give a direct comparison between the rotation curve of NGC~864 from our data and that of GHASP, so this galaxy has not been included in Fig.~\ref{rotcurcomparison}. For completeness, we compare the rotation curves of the remaining overlapping galaxies in Figs.~B1-B4. There are many similarities, and the rotation curves agree reasonably well. The discrepancies mainly stem from the different parameters ($v_{\rm sys}$, $i$ and PA) used for the fits to the velocity fields.
Also, the spatial coverage resulting from the different reduction techniques is reflected in the rotation curves: smoothing kernels such as Voronoi help to highlight the emission in the low-signal regions, but compromise the spatial resolution.} \begin{figure*} \begin{center} \includegraphics[width=175mm]{figures_v2_1.pdf} \caption{Comparison between the data obtained with the GH$ \alpha $FaS instrument and data from GHASP (NGC~864, NGC~2500 and NGC~2543) {and VIRGO (NGC~4498 and NGC~4639)}. Column I) DSS image. Column~II) ACAM H$ \alpha $ image. Column~III) Velocity field from GH$ \alpha $FaS observations, obtained following the reduction procedures described in Sect. \ref{section4}. {Column~IV) Velocity field from GHASP/VIRGO observations, obtained following the reduction procedures described in Sect. \ref{section4}.} Column~V) Velocity field from GH$ \alpha $FaS observations following the reduction processes of GHASP{/VIRGO} data (three-channel spectral smoothing and Voronoi adaptive binning). Column~VI) Data from GHASP {\citep{Epinat2008} and VIRGO \citep{Chemin2006} surveys}. {\textit{Note)} For NGC~2543, the central region has been marked with a black square and explicitly shown in Fig~\ref{comparisoghasp2}.}} \label{comparisoghasp} \end{center} \end{figure*} \begin{figure} \begin{center} \includegraphics[width=84mm]{figures_v2_2.pdf} \caption{{Velocity fields of the central region of NGC~2543 reduced following the procedures of this paper from GH$ \alpha $FaS (left) and GHASP (right) observations. Note the differences due to the pixel scale (0$\farcs$2 of GH$ \alpha $FaS against 0$\farcs$68 of GHASP) and seeing conditions ($1\farcs3$ for our observations against $3\farcs7$ for GHASP).}} \label{comparisoghasp2} \end{center} \end{figure} \begin{figure*} \begin{center} \includegraphics[width=150mm]{rotcurcomparison.pdf} \caption{{H$ \alpha $ rotation curves for NGC~2500, NGC~2543, NGC~4498 and NGC~4639. 
Black diamonds correspond to the high-resolution rotation curves derived from GH$ \alpha $FaS data (this paper). Red asterisks and blue triangles correspond to approaching and receding rotation curves from GHASP data. Pink squares correspond to VIRGO data.}} \label{rotcurcomparison} \end{center} \end{figure*} \subsection{Non-circular motions} \label{ncm} One of the aims of this kinematical study is to analyse the influence of the galaxies' main structural features on their kinematics. In other words, we want to study the kinematical footprints of the components of the galaxies, traced for instance by the non-circular motions. Deviations from pure circular motion have been widely studied previously in the literature. The first studies of streaming motions induced by the spiral density waves were presented by \citet{Bosma1978}, \citet{Visser1980} and \citet{Rots1990} using H{\sc i} data; by \citet{Ichikawa1985}, \citet{Clemens1985} and \citet{Cepa1992} using CO data, and by \citet{vanderKruit1976} and \citet{Marcelin1985} using H$ \alpha $ data. {They confirm what \citet{Roberts1969b} had predicted theoretically: variations of $10-30$ km s$^{-1} $ in the velocity are found when the gas crosses the density wave.} Bar-induced non-circular motions have also been analysed in the literature, and the first studies were performed by \citet{Peterson1978}, \citet{Duval1983} and \citet{Pence1988} using optical data; and by \citet{Sancisi1979} and \citet{Gottesman1984} using H{\sc i} data. {They found that the isovelocity contours tend to go parallel to the bar rather than parallel to the minor axis. Also, they found some cases where these deviations from the circular motion are more prominent when the PA$ _{\rm BAR} $ is at $\sim45\degr$ from the kinematic major axis (in agreement with the models by, e.g., \citealt{vanAlbada1981}). 
The nature and characterisation of these non-circular motions is a topic still under debate, and many studies of the streaming motions have been performed (e.g., \citealt{Knapen2000}; \citealt{Fresneau2005}; \citealt{Spekkens2007}; \citealt{Castillo-Morales2007}; \citealt{Shetty2007}; \citealt{Tamburro2008}; \citealt{Garcia-Burillo2009}; \citealt{Sellwood2010}; \citealt{Meidt2013}; \citealt{Font2014b}).} In Sect. \ref{section6}, we explained how we computed a non-circular motions map, the residual velocity field. These residual velocity fields depend significantly on the derived rotation, that is, on the rotation curve derived from the observed velocity field. Consequently, a rigorous analysis of the output of the {\sc rotcur} task in {\sc gipsy} has been carried out. We want to study both non-circular motions caused by the spiral density waves (hereafter spiral-induced non-circular motions) and those caused by the gravitational potential of the bar (bar-induced non-circular motions). The galaxies in our sample span a wide variety of morphological types and features, so a statistical approach to the non-circular motions is possible, though not straightforward. \subsubsection{Bar-induced non-circular motions} We are interested in a kinematical analysis of secular evolution, investigating the effects of the presence of a bar on the rotation of the galaxies. Our kinematic data allow us to study the kinematics and the star formation at the same time, with the caveat that we are only observing the star-forming regions. Our first goal is to understand the connection between the presence of a bar and the kinematics within the bar region. Our second goal is to study the link to star formation within the bar. Bars are usually dominated by old stellar populations, and in some cases do not show H$ \alpha $ emission (e.g., in the SB galaxies NGC~4639, NGC~5334 or NGC~5921).
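The residual velocity fields mentioned above are, in essence, the observed velocity field minus an axisymmetric tilted-ring model. A minimal Python sketch of such a model (not the {\sc rotcur} implementation; the function name, toy grid and flat rotation curve are ours, for illustration only):

```python
import numpy as np

def model_velocity_field(x, y, vsys, rotation_curve, inc_deg, pa_deg):
    """Axisymmetric line-of-sight velocity model (tilted-ring style):
    v_los = v_sys + v_rot(R) * cos(theta) * sin(i),
    with (R, theta) polar coordinates in the galaxy plane.
    `rotation_curve` is a callable v_rot(R); angles are in degrees.
    """
    inc, pa = np.radians(inc_deg), np.radians(pa_deg)
    # Rotate sky coordinates so the major axis lies along x', then
    # stretch the minor-axis direction by 1/cos(i) to deproject.
    xp = x * np.cos(pa) + y * np.sin(pa)
    yp = (-x * np.sin(pa) + y * np.cos(pa)) / np.cos(inc)
    r = np.hypot(xp, yp)
    cos_theta = np.divide(xp, r, out=np.zeros_like(xp), where=r > 0)
    return vsys + rotation_curve(r) * cos_theta * np.sin(inc)

# Residual (non-circular motion) map: v_res = v_obs - v_model.
x, y = np.meshgrid(np.linspace(-10, 10, 21), np.linspace(-10, 10, 21))
v_model = model_velocity_field(x, y, 1500.0,
                               lambda r: np.full_like(r, 200.0),  # flat curve
                               inc_deg=45.0, pa_deg=0.0)
```

Note that along the minor axis $\cos\theta=0$ and the model reduces to $v_{\rm sys}$: any minor-axis deviation in the observed field is a non-circular motion, which is why the minor-axis PV diagrams discussed below are a direct diagnostic.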
Of the 12 SAB and {7} SB galaxies in our sample, 11 have a bar region that is clearly defined in H$ \alpha $. In most of these (NGC~864, NGC~1073, NGC~2500, NGC~2748, NGC~2805, NGC~4151, NGC~4389, NGC~4498, NGC~5112, NGC~6207 and NGC~6412) the bar shows H$ \alpha $ emission, and there are non-circular motions along the bar (see Table \ref{ncmtable}). Although the bar is not obvious in H$ \alpha $, using the method shown in Fig. \ref{cdf} we can in fact measure non-circular motions in three further galaxies: NGC~428, NGC~918 and NGC~3504 (Table \ref{ncmtable}). In the other galaxies (NGC~2543, NGC~2712, NGC~5740 and NGC~5921) the bar itself does not appear in H$ \alpha $, but the surrounding regions (mostly the SAR) show large non-circular motions. We have also measured the non-circular motions in the SAR in all other barred galaxies, and along the spiral arms of all galaxies. NGC~4324 is a lenticular galaxy, so there we have measured the residual velocities along the ring instead of the arms, finding values of 16.2 $ \pm $ 3.2 km s$ ^{-1} $. {The PV diagrams along the minor axis presented in Appendix C provide a first indication of deviations from circular motion. As illustrated for NGC~864 in Paper I, in the absence of non-circular motions, the velocity profile along the minor axis should be completely flat. For most of the barred galaxies in our sample, there are deviations from the systemic velocity, which appear as peaks or troughs in the velocity profile along the minor axis. These indicate that some gas along the minor axis is not following the rotational pattern of the galaxy. As we see in NGC~864 (Fig.~C5), NGC~3504 (Fig.~C6) or NGC~5678 (Fig. C7), the deviations from the systemic velocity are found beyond the extent of the bar.
This implies that the non-circular motions outside the bar region (i.e., in the SAR) can be a result of the influence of both potentials (bar and spiral arms).} {In Fig.~\ref{deltapaplots}, we represent the residual velocities in the bar region (also in normalised form) as a function of the difference between the major axis of the bar and that of the kinematic major axis, $ \Delta $PA. We see that the amplitude of the non-circular motions here does not increase as $ \Delta $PA approaches 45$\degr$, contrary to the expectation.} \begin{figure} \begin{center} \includegraphics[width=84mm]{deltapaplots.pdf} \caption{\textit{Top)} Amplitude of the non-circular motions (residual velocities at the 95 $\%$ level of the CDF) in the bar region as a function of the difference between the major axis of the bar and that of the kinematic major axis, $ \Delta $PA. Contrary to what has been reported in the literature, the highest streaming motions are not found in galaxies with $ \Delta $PA$\sim45\degr$. \textit{Bottom)} As the top panel but for the normalised residual velocities.} \label{deltapaplots} \end{center} \end{figure} Fig.~\ref{ncm1} (left) shows the residual velocities in the three regions, bar {(Fig.~\ref{ncm1}a)}, SAR {(Fig.~\ref{ncm1}d)} and spiral arms {(Fig.~\ref{ncm1}g)}, as a function of the bar strength, quantified by the torque parameter $ Q_{\rm b} $. $ Q_{\rm b} $ reacts strongly to the bulge: a stronger bulge dilutes the tangential force from the bar and lowers $ Q_{\rm b} $. It is not necessarily true that the bar is weaker, just that the motions are more controlled by the spherical potential of the bulge. Therefore, to distinguish the influence of the bulge on the bar strength, we have also studied the non-circular motions as a function of the bulge-to-total ratio (\textit{B/T}) in the middle plots of Fig.~\ref{ncm1}{: bar (Fig.~\ref{ncm1}b), SAR (Fig.~\ref{ncm1}e) and spiral arms (Fig.~\ref{ncm1}h)}.
Also, in all the plots, we have presented the galaxies with \textit{B/T} $<$ 0.1 with blue asterisks, galaxies with $0.1 <B/T< 0.2$ with red triangles, and galaxies with $B/T> 0.2$ (bulge dominated) with squares. We list the $ Q_{\rm b} $ and \textit{B/T} values for the galaxies of our sample in Table \ref{ncmtable}. {In panels (a,b,c,d,e,f,g) of Fig.~\ref{ncm1} we only represent those cases where the galaxy has a bar (i.e., $ Q_{\rm b} \neq $0).} {The same plots as in Fig.~\ref{ncm1} have been created for the normalised residual velocities, and are presented in Fig.~\ref{ncm1bis}.} \begin{figure*} \begin{center} \includegraphics[width=168mm]{bulgeplots_paper.pdf} \caption{Measurements of the non-circular motions. The top plots represent the non-circular motions (residual velocities at the 95 $\%$ level of the CDF) in the bar region, whereas the middle and bottom plots represent the residual velocities in the SAR and arms, respectively. \textit{Left)} Residual velocities as a function of the bar strength (represented by the parameter $ Q_{\rm b} $ from D\'iaz-Garc\'ia et al. in prep). The galaxies without a bar (i.e., $ Q_{\rm b} $=0) have not been represented. \textit{Middle)} The same residual velocities as a function of \textit{B/T}. \textit{Right)} The same residual velocities as a function of the SFR in each of the three regions. \textit{Note:} In all the plots, we have represented galaxies with \textit{B/T} $<$ 0.1 with blue asterisks, galaxies with 0.1 $<$ \textit{B/T} $<$ 0.2 with red triangles, and galaxies with \textit{B/T} $>$ 0.2 (bulge dominated) with squares.
In the top right plot, we have highlighted with a filled green square the starburst nucleus (NGC~3504).} \label{ncm1} \end{center} \end{figure*} \begin{figure*} \begin{center} \includegraphics[width=168mm]{bulgeplots_paper_vc.pdf} \caption{As Fig.~\ref{ncm1} but now for the residual velocities normalised by the circular velocity at the end of the bar region (top plots), SAR (middle plots) and spiral arms (bottom plots). These circular velocities have been estimated using the universal rotation curve from \citet{Persic1991}.} \label{ncm1bis} \end{center} \end{figure*} Analysing the left-hand plots of Figs. \ref{ncm1} { and \ref{ncm1bis} (a,d,g)}, we conclude that $ Q_{\rm b} $ does not correlate with the amplitude of the non-circular motions. {Also, the middle plots of Figs. \ref{ncm1} and \ref{ncm1bis} (b,e,h) show no correlation between the amplitude of the non-circular motions and the presence of a bulge. Although the few cases with a significant bulge ($B/T>0.1$) suggest that the residual velocities in the bar decrease as \textit{B/T} increases, more data would be needed to prove whether the bulge is constraining the velocities to remain circular.} We have measured the luminosities from the three regions identified previously in the galaxies of the sample (bar, SAR and spiral arms), and derived the SFRs in those regions using the equations presented in Sect. \ref{section7}. In panels Fig.~\ref{ncm1}{c and Fig.~\ref{ncm1bis}c}, we see a slight tendency for the higher non-circular motions to correspond to star-forming bars, or to bars that show more H$ \alpha $ emission (note that the scale in SFR is logarithmic).
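The amplitude measure used throughout these plots, the residual velocity at the 95 per cent point of the cumulative distribution function, can be sketched in a few lines of Python (an illustration with hypothetical toy residuals; the function name is ours, and the actual measurement follows the method of Fig. \ref{cdf}):

```python
import numpy as np

def ncm_amplitude(residuals, level=95.0):
    """Characteristic amplitude of the non-circular motions: the value
    below which `level` per cent of the absolute residual velocities lie
    (the 95 per cent point of the CDF)."""
    residuals = np.asarray(residuals, dtype=float)
    return np.percentile(np.abs(residuals[np.isfinite(residuals)]), level)

# Hypothetical residuals over a bar region: ~10 km/s scatter plus a
# small patch of ~60 km/s streaming motions.
rng = np.random.default_rng(7)
res = np.concatenate([rng.normal(0.0, 10.0, 950), rng.normal(60.0, 5.0, 50)])
amplitude = ncm_amplitude(res)  # a few tens of km/s for this toy case
```

Taking a high percentile rather than the maximum makes the measure robust against isolated noisy pixels in the residual map.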
This is surprising: in the literature, both models (e.g., \citealt{Athanassoula1992b}) and observations (e.g., \citealt{Zurita2004} and \citealt{Castillo-Morales2007}) indicate that the high shear and shocks inhibit star formation in the bar region, and that the non-circular motions there anti-correlate with the H$ \alpha $ luminosity. This would cause a bias in our study, as we might not see in H$ \alpha $ those bars with higher non-circular motions. Analysing also {panels f and i of Figs.~\ref{ncm1} and \ref{ncm1bis},} we see that the SFR and the amplitude of the non-circular motions do not correlate. We previously stated that some galaxies show H$ \alpha $ emission along their bar, and others do not. Why does that happen? In the literature, many studies regarding the star formation along bars have been carried out, but with no clear answer. \textit{(i)} As we stated before, \textit{the strong shear and parallel shocks} limit the formation of stars (e.g., \citealt{Athanassoula1992b}; \citealt{Reynaud1998}; \citealt{Zurita2004}), because the giant molecular clouds can be pulled apart (\citealt{Downes1996}; \citealt{Schinnerer2002}). On the other hand, H{\sc ii} regions have been found under conditions of high shear stress and shocks (e.g., \citealt{Martin1997}; \citealt{Sheth2002}; \citealt{Zurita2008}). \citet{Martinet1997} found that the highest SFRs along the bar correspond to the strongest and longest bars in their sample of isolated late-type barred galaxies. In Fig.~\ref{sfrvsqb}, we present the resulting SFR values within the bar and SAR of our sample galaxies as a function of $ Q_{\rm b} $, a parameter which quantifies the strength of the bar and consequently that of shocks/shear. From Fig.~\ref{sfrvsqb} we conclude that the \textit{bar strength} does not determine the presence or quantity of H$ \alpha $ emission along the bar.
In our sample, we see that the strongest bars ($ Q_{\rm b,NGC1073} $=0.63 and $ Q_{\rm b,NGC5112} $=0.62) show H$ \alpha $ emission, but further study is needed to see if this emission is located in the regions of high shocks/shear. \textit{(ii)} \citet{Garcia-Barreto1996} found that the barred galaxies with Hubble types SBa or earlier in their sample did not show H$ \alpha $ emission, probably due to the low gas content available for inflow (which is not the case, for example, for NGC~4151), so the \textit{morphological type} is not in principle the only reason. \textit{(iii)} \citet{Martin1997} also found that the two highest SFRs correspond to the galaxies which present the strongest signs of \textit{recent interaction or merging} (NGC~4731 and NGC~7479). \textit{(iv)} \citet{Sheth2000} concluded that the stars in NGC~5383 may have been formed in the spurs before the gas encounters the dust lane, and travel ballistically through the shock at the dust lane, ionizing the regions located at the leading side of the dust lane. \textit{(v)} \citet{Verley2007} suggested that the presence of star formation in the central region and along bars is a consequence of an \textit{evolutionary sequence} that goes from galaxies with H$ \alpha $ in the bar to galaxies with H$ \alpha $ emission in the circumnuclear region but not within the bar, ending in galaxies without any emission either along the bar or in a central knot. \begin{figure} \begin{center} \includegraphics[width=84mm]{sfrvsqb.pdf} \caption{SFRs measured in the bar region (top) and in the SAR (bottom) as a function of the bar strength, represented by the parameter $ Q_{\rm b} $. We see that in this case, the strength of the bar does not regulate the H$ \alpha $ luminosity. \textit{Note:} Again, we have represented galaxies with \textit{B/T} $<$ 0.1 with blue asterisks, galaxies with 0.1 $<$ \textit{B/T} $<$ 0.2 with red triangles, and galaxies with \textit{B/T} $>$ 0.2 with squares.
In the top plot, we have highlighted with a filled green square the starburst nucleus (NGC~3504), which affects only the bar-region plot.} \label{sfrvsqb} \end{center} \end{figure} We conclude that there is no simple answer to the question \textit{why some bars have current star formation and others don't}. The answer is probably a combination of the previous features. With our sample of galaxies, we find that neither the Hubble type nor the bar strength determines the presence of H$ \alpha $ emission within the bar region, {but larger samples are needed to establish well-founded statistical results on the star formation within bars. Furthermore, deeper and higher-resolution kinematic data (e.g., from the Atacama Large Millimeter/submillimeter Array, ALMA) would enable the study of the physical conditions in the bar related to the formation of stars at the scales of the molecular clouds. Then, it may be possible to understand} if a recent interaction or merger is causing the star formation of the bar, if the galaxy has already used up the available gas to be transformed into stars, or if the location of the shocks and shear corresponds to regions of diminished star formation. \subsubsection{Spiral-induced non-circular motions} \label{spiralncm} In the bar region, the mechanisms triggering star formation may be different from those in spiral arms, due to the different dynamics and shock conditions. In Fig.~\ref{ncm2}, we show star formation within the spiral arms and the amplitude of the non-circular motions there as a function of the arm class, a parameter which might plausibly be related to the ``strength'' of the spiral arms. We do not find any trend or correlation. \citet{Elmegreen1986} found that the SFR per unit area does not depend on the arm class, and we confirm this.
{Therefore, we agree with other studies which confirm that the SFR does not depend on the strength of the arms (\citealt{Dobbs2009}; \citealt{Foyle2010}), and disagree with those that find the opposite (e.g., \citealt{Seigar2002}; \citealt{Clarke2006}).} The presence and magnitude of streaming motions in the arms seem to be a local phenomenon, unrelated to the SFR or arm class. The high amplitude of the non-circular motions found in NGC~5678 ({with a representative value of $ \sim $70} km s$ ^{-1} $, well above the average 20-30 km s$ ^{-1} $ for spiral-induced non-circular motions) leads us to consider that the spiral arms alone cannot be the cause, and that this galaxy could be barred. This galaxy was classified as SAB in \citet{RC3} and argued to be barred by \citet{Ganda2007}, but { the CVRHS} reclassified it as SA from an S$ ^{4} $G mid-IR image, so the galaxy has no value for $ Q_{\rm b}$. The PV diagram along the minor axis (Fig.~C7) shows the deviations from the circular motions, very similar to those created by the potential of the bar in NGC~864 (Paper I). We conclude that the non-circular motions in NGC~5678 are evidence supporting the claim that NGC~5678 is a barred galaxy, and the high amplitude of those non-circular motions could be bar-induced rather than only spiral-induced. {Another interpretation could be that there was a recent minor merger. The galaxy is asymmetric in the integrated H$ \alpha $ map, contrary to what would be expected for a symmetric bar.} \begin{figure} \begin{center} \includegraphics[width=84mm]{spiralncm.pdf} \caption{\textit{Top)} SFR in the arms per unit area as a function of the arm class from {the CVRHS}. \textit{Bottom)} Amplitude of the non-circular motions within the spiral arms as a function of the arm class. We see that neither the SFR nor the amplitude of the non-circular motions correlates with the arm class.
\textit{Note)} NGC~4324 has not been represented in the figure as it does not have an arm class. { NGC~5678 has not been represented either because the amplitude of the non-circular motion within its spiral arms presented in Table \ref{ncmtable} is understood as being caused by the bar (see Sect. \ref{spiralncm}).}} \label{ncm2} \end{center} \end{figure} \section{Conclusions} \label{section10} In this paper we present the data from the kinematical study of S$ ^{4} $G galaxies which started in Paper I, and reach the following conclusions: \begin{enumerate} \item We have completed the observations of our kinematical study of 29 S$ ^{4} $G spiral galaxies of all morphological types using the GH$ \alpha $FaS FP instrument (FOV of 3.4 $\times$ 3.4 arcmin). The data have seeing-limited angular resolution (typical values between 0.6 and 1.4 arcsec) sampled with 0\farcs2 per pixel and a spectral {sampling} of $\sim$8 km s$ ^{-1} $. These FP data have been obtained together with flux-calibrated H$ \alpha $ images. \item To reach our scientific objectives, we have followed specific data reduction and analysis procedures that guarantee high angular resolution ($ \sim $1") kinematic cubes and data products. \item The images, data {and rotation curves} described in this paper are publicly released {through the NED and the Centre de Donn\'ees Stellaires (CDS)}. \item We flux-calibrate the GH$ \alpha $FaS data cubes following the procedures presented in Paper I. We conclude that the flux calibration cannot be automated, and no standard calibration factors can be extracted for any particular GH$ \alpha $FaS filter. Instead, flux calibration is specific to each galaxy and needs to be performed by comparison with calibrated H$ \alpha $ images, such as our ACAM images. \item We have applied the tilted-ring method to our velocity maps to extract the rotation curves.
Some caveats arising from the nature of these FP data need to be taken into account (e.g., intrinsic patchiness of the line emission). \item We have created non-circular motion maps for all the galaxies of the sample. We confirm the presence of these non-circular motions created by the non-axisymmetric potential of the bar along the bar region and at the start of the spiral arms, with a tendency for the more star-forming bars to induce higher non-circular motions. We find that $ Q_{\rm b} $ does not correlate with the amplitude of the non-circular motions. However, our data are biased towards bars with recent star formation, where strong shocks and shear may take place and star formation may be inhibited. \item Also, we confirm the presence of non-circular motions created along the spiral arms, but there is no correlation of the amplitude of these non-circular motions with the arm class, a parameter that is related to the arm strength. \end{enumerate} \section*{Acknowledgments} We acknowledge financial support to the DAGAL network from the People Programme (Marie Curie Actions) of the European Union's Seventh Framework Programme FP7/2007-2013/ under REA grant agreement number PITN-GA-2011-289313, and from the Spanish MINECO under grant number AYA2013-41243-P. This work was co-funded under the Marie Curie Actions of the European Commission (FP7-COFUND). We also gratefully acknowledge support from NASA JPL/Spitzer grant RSA 1374189 provided for the S$ ^{4} $G project. E.A. and A.B. thank the CNES for support. KS, JCMM, and TK acknowledge support from The National Radio Astronomy Observatory, which is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc.
This research has been supported by the MINECO under the grant AYA2007-67625-CO2-O2, and is based on observations made with the WHT operated on the island of La Palma by the Isaac Newton Group of Telescopes, in the Spanish Observatorio del Roque de Los Muchachos of the Instituto de Astrof\'isica de Canarias. The authors thank the entire S$ ^{4} $G team for their efforts in this project. We acknowledge the usage of the HyperLeda database (http://leda.univ-lyon1.fr). This research has made use of the NASA/IPAC Extragalactic Database (NED) which is operated by JPL, Caltech, under contract with NASA. \bibliographystyle{mn2e}
\section{Introduction} The fundamental degrees of freedom of Quantum Chromodynamics (QCD) are quarks, which couple to magnetic fields through their electromagnetic charge. As a result, QCD exhibits interesting phase structure in the presence of magnetic fields~\cite{Chernodub}. RHIC collisions between nuclei generate large magnetic fields; here the chiral magnetic effect is expected to play a significant role~\cite{Naoki}. Similarly, the color-flavor-locked phases~\cite{Gorbar,Son5}, which can exist (at least theoretically) at asymptotically large baryon densities, may be found in neutron stars and possibly in magnetars, where the magnetic fields are of the order of $10^{18}\ G$~\cite{Harding}. Furthermore, the quark-gluon plasma of the early universe may have sustained large magnetic fields. As such, there is a lot of interest in the properties of the QCD vacuum and other QCD phases in the presence of strong magnetic fields. QCD in a strong magnetic field is a topic of great contemporary interest and has been studied in the Schwinger limit quite extensively~\cite{Schwinger}. In this limit, the quantity $qH$ is held fixed (with $q$ being a relevant electromagnetic charge and $H$ being the external field) while $H\rightarrow\infty$ as $q\rightarrow 0$. This limit was first considered in the context of vacuum pair production in QED by Schwinger~\cite{Schwinger} but has since been used to study the low-energy QCD vacuum in the context of chiral perturbation theory, primarily for mesons. For instance, the chiral condensate of the low-energy QCD vacuum has been studied through chiral perturbation theory using the Schwinger limit for zero and non-zero pion masses up to next-to-leading order, and the result has been generalized to the finite temperature case~\cite{Shushpanov1,Shushpanov2,Werbos1,Werbos2}.
At large magnetic fields, QCD undergoes dimensional reduction and quarks acquire masses of $\mathcal{O}(\sqrt{qH})$, where $q$ is the quark electromagnetic charge and $H$ the external field. This explains the origin of magnetic catalysis due to the ``Cooper effect", which means that pairing is more effective in one spatial dimension compared to three~\cite{Shovkovy}. Recently, there has been interest in finite isospin QCD, where the QCD vacuum has been shown to exhibit superconducting behavior through the condensation of charged pions~\cite{Adhikari, Endrodi}. This has been shown in the context of chiral perturbation theory and lattice QCD, even though the lattice seems to have a sign problem in the presence of both an isospin chemical potential and a magnetic field due to the charge difference between the up and down quarks. The study of Ref.~\cite{Adhikari} was performed in chiral perturbation theory at finite isospin chemical potentials and a uniform magnetic field, with the charged pions coupled to dynamical photons. It was shown that for realistic pion masses, the system behaves as a type-II superconductor, with a uniform superconducting phase at low magnetic fields and a phase transition to a normal phase with increasing magnetic fields through an intermediate phase of topological vortices~\cite{Gorkov, Landau, Abrikosov, Tinkham, Fetter}. The study was performed at leading order in chiral perturbation theory and is valid as long as the relevant physical parameters, including the pion mass ($m_{\pi}$), the isospin chemical potential ($\mu_{I}$) and the magnetic field ($H$), satisfy the condition \begin{equation} m_{\pi},\ \mu_{I},\ \sqrt{eH}\ll \Lambda_{\rm Had}\ , \end{equation} where $\Lambda_{\rm Had}$ is the typical hadronic scale, which is of the order of $4\pi f_{\pi}$, with $f_{\pi}$ being the pion decay constant. In this paper, we consider chiral perturbation theory in a background magnetic field in the Schwinger limit.
In particular, this means that the charged pions are coupled to the classical external magnetic fields at all orders. However, unlike the study of Ref.~\cite{Adhikari}, we will assume that there are no dynamical photons that can couple to the charged pions. It is well-understood that superconductivity arises due to spontaneous symmetry breaking, whereby photons acquire mass. From the perspective of the relevant Lagrangian, a term proportional to $\vec{A}^{2}$ is introduced, which makes it prohibitively expensive to sustain a magnetic field. In the absence of dynamical photons, and hence of any back-reaction from the photons to a strong, external magnetic field (i.e. the Schwinger limit), it is natural to ask what possible phases \textit{can} be sustained in finite isospin chiral perturbation theory. We will show in this paper that, unlike in Ref.~\cite{Adhikari}, where the charged pions condensed, in the Schwinger limit (with no photons) charged pion condensation is forbidden and only the neutral pions can condense. We will also consider a particular example of the pion condensed phase, namely $\pi^{0}$ domain walls, which were first considered in Ref.~\cite{Son4}. It was shown that this phase becomes more stable than the normal phase of chiral perturbation theory above a critical magnetic field. We will show that a new phase transition line emerges in finite isospin chiral perturbation theory if the system is above the critical baryon chemical potential of Ref.~\cite{Son4}, assuming that the $\pi^{0}$ domain wall is the lowest energy pion condensed phase that can exist. It is important to note that this assumption, while plausible, is strictly speaking unproven; the plausibility is motivated by the absence of other possible neutral pion phases that satisfy the equations of motion of finite isospin chiral perturbation theory.
The paper is organized as follows: in the first section, we summarize the chiral perturbation theory Lagrangian at finite isospin and a uniform magnetic field using the same notation as in Ref.~\cite{Adhikari}. In the following section, we will prove a no-go theorem that charged pions cannot condense in the Schwinger limit (with no back-reacting photon fields). The result seems plausible based on the fact that the mechanism that gives rise to photon masses through symmetry breaking (either spontaneous or explicit) is absent in the Schwinger limit. In the third section, we will consider how the existence of a $\pi^{0}$ domain wall state modifies the finite isospin low-energy QCD phase diagram in a magnetic field above the critical baryon chemical potential of Ref.~\cite{Son4}. In particular, we will show a new phase transition from the normal state to a $\pi^{0}$ domain wall state, which is different from the result of Ref.~\cite{Son4} but reduces to it at zero isospin chemical potential. \section{Lagrangian of Finite Isospin Chiral Perturbation Theory in a Magnetic Field} We begin with the leading order chiral perturbation theory Lagrangian at finite isospin and a magnetic field. The Lagrangian at finite isospin (but zero external magnetic field) was first considered in Refs.~\cite{Son1, Son2}, where it was shown that for isospin chemical potentials greater than or equal to the pion mass, a superfluid phase of condensed, charged pions becomes energetically favorable. It was shown in Ref.~\cite{Adhikari} that this charged superfluid, in the presence of an external magnetic field with the pions coupled to back-reacting photons, exhibits type-II superconductivity.
The Lagrangian is as follows: \begin{equation} \label{Lagrangian} \mathcal{L}_{\rm eff}=f_{\pi}^{2}\ \textrm{Tr} (D_{\mu}\Sigma^{\dagger}D^{\mu}\Sigma)+m_{\pi}^{2}f_{\pi}^{2}\textrm{Tr}(\Sigma+\Sigma^{\dagger}) \ , \end{equation} where $f_{\pi}$ is the pion decay constant, $m_{\pi}$ is the pion mass, and $\Sigma$ is an $SU(2)$ matrix, which we parametrize as in Ref.~\cite{Adhikari}: \begin{equation} \label{Sigma} \Sigma=\frac{1}{f_{\pi}}\left( \sigma\mathbb{1}+i\pi_{x}\tau_{1}+i\pi_{y}\tau_{2}+i\pi_{z}\tau_{3} \right )\ , \end{equation} where the $\sigma$ and $\pi_{i}$ fields are defined as: \begin{equation} \label{fields} \begin{split} \sigma&=f_{\pi}\cos\psi\cos\theta\\ \pi_{x}&=f_{\pi}\cos\psi\sin\theta\cos\alpha\\ \pi_{y}&=f_{\pi}\cos\psi\sin\theta\sin\alpha\\ \pi_{z}&=f_{\pi}\sin\psi\ . \end{split} \end{equation} Note that this parametrization guarantees that $\det\Sigma=1$ and $\Sigma^{\dagger}\Sigma=1$. The gauge derivative in the Lagrangian incorporates both the finite isospin chemical potential and the external electromagnetic field. It is defined as follows: \begin{equation} D_{\mu}\Sigma=\partial_{\mu}\Sigma-i\ [\delta_{\mu 0}\mu_{\rm I},\ \Sigma]-ieA_{\mu}[Q,\ \Sigma]\ , \end{equation} where isospin enters as a spatially-independent zeroth component of the gauge field, while the magnetic field enters through only the spatial components. Note that $e$ is the charge of a positive pion and the charge matrix for the up and down quarks is defined through the following relation: \begin{equation} Q=\frac{1}{6}\mathbb{1}+\frac{1}{2}\tau_{3}\ , \end{equation} where $\mathbb{1}$ is a $2\times 2$ identity matrix and $\tau_{3}$ is the third Pauli matrix. Since the charge matrix $Q$ enters the Lagrangian through a commutator with the pion fields $\Sigma$, only the component of charge in the $\tau_{3}$ direction affects the Lagrangian.
We use the parametrization for $\Sigma$ defined in Eq.~(\ref{Sigma}) and couple the Lagrangian to the external magnetic field, which we will assume is uniform in space. In doing so, the effective Lagrangian of low-energy QCD at lowest order in chiral perturbation theory becomes: \begin{equation} \label{Leff} \begin{split} \mathcal{L}_{\rm eff}&=-\frac{f_{\pi}^{2}}{2}\left [ \cos^{2}\psi \{\sin^{2}\theta\left (\vec{\nabla}\alpha+e\vec{A} \right)^{2} + (\vec{\nabla}\theta)^{2}\} +(\vec{\nabla}\psi)^{2}\right ]\\&+m_{\pi}^{2}f_{\pi}^{2}(\cos\theta\cos\psi-1)+\frac{\mu_{I}^{2}f_{\pi}^{2}}{2}\sin^{2}\theta\cos^{2}\psi\\ &-\frac{1}{4}F_{ij}F^{ij}\ , \end{split} \end{equation} where we have assumed that the electromagnetic tensor only has spatial components, i.e. $F_{0i}=F_{i0}=0$. Note that we have included the kinetic contributions of the external field, which were not present in Eq.~(\ref{Lagrangian}). \section{Phases in the Schwinger Limit} In this section, we argue that the only possible phases that can exist at leading order in chiral perturbation theory in the Schwinger limit (i.e. no back-reacting photons) are phases where the charged pions remain uncondensed; in other words, we will prove that $\pi_{x}=\pi_{y}=0$ for all possible phases that can exist in the Schwinger limit. This seems somewhat reasonable: the mechanism that was responsible for superconductivity, namely photon fields acquiring mass through spontaneous symmetry breaking, is absent now since photons cannot react to the presence of a magnetic field. Then it becomes favorable for the system to remain in the ground state with $\theta=0\ ,\psi=0$ ($\sigma=f_{\pi},\ \pi_{x}=\pi_{y}=\pi_{z}=0$), even for chemical potentials larger than pion masses. In order to prove that this is indeed the case, it is useful to begin with the equation of motion for the gauge fields $\vec{A}$, which can be easily derived using the effective Lagrangian of Eq.~(\ref{Leff}).
The equation of motion for the gauge fields is \begin{equation} \label{EoMphoton} -\partial_{k}F_{kl}=j_{l}\ , \end{equation} where $F_{kl}$ refers to the spatial components of the electromagnetic tensor and $j_{l}$ refers to the components of the 3-current, which is as follows: \begin{equation} j_{l}=e \cos^{2}\psi\sin\theta\left (\partial_{l}\alpha+eA_{l} \right )\ . \end{equation} In the Schwinger limit (with no back-reactions), the photon fields $A$ must equal the external field $A_{\rm ext}$, which for a uniform magnetic field is of the following form: \begin{equation} A^{\rm ext}=\left (-b y H_{\rm ext},\ a x H_{\rm ext},\ 0\right )\ , \end{equation} where $H_{\rm ext}$ is the external magnetic field with $a$ and $b$ satisfying the constraint $a+b=1$. In other words, we are working in the most general gauge configuration for a uniform magnetic field pointing in the positive z-direction. The no-back-reaction constraint on the electromagnetic tensor then is: \begin{equation} F_{kl}=\partial_{k}A_{l}^{\rm ext}-\partial_{l}A_{k}^{\rm ext}\ . \end{equation} Since the electromagnetic tensor must be spatially homogeneous, Eq.~(\ref{EoMphoton}) implies that the only way the equation of motion for photons can be satisfied is if the 3-current, $j_{l}$, vanishes. This happens only if at least one of the conditions given below is satisfied. Either \begin{equation} \begin{split} \label{scenarios} \textrm{A: } &\psi=\frac{\pi}{2} \textrm{ or}\\ \textrm{B: } &\theta=0 \textrm{ or}\\ \textrm{C: } &\partial_{l}\alpha+eA^{\rm ext}_{l}=0. \end{split} \end{equation} Scenario A corresponds to a chirally restored phase with the vacuum pointing entirely in the $\tau_{3}$-direction, i.e. $\pi_{z}=f_{\pi}$ and $\sigma=\pi_{x}=\pi_{y}=0$, a phase where the neutral pion condensate is at saturation. Scenario B corresponds to a phase with $\pi_{x}=\pi_{y}=0$, i.e.
a phase which only allows the condensation of neutral pions (and cannot be chirally restored, as we will see, due to the exclusion of Scenario A). Finally, scenario C is a constraint on the complex phase of one of the charged pions, either $\pi^{+}$ or $\pi^{-}$. \subsection{Exclusion of Scenarios A and C} Here we will argue that it is not possible to have any stable solutions for either scenario A or C.\\ \\ \textit{Scenario A:} In order to rule out the chirally restored phase implied by scenario A, it is important to consider all possible phases for which the condition $\psi=\frac{\pi}{2}$ is satisfied. We begin with the equation of motion for $\psi$, which is as follows: \begin{equation} \begin{split} &\vec{\nabla}^{2}\psi+\sin\psi\cos\psi\{ \sin^{2}\theta(\vec{\nabla}\alpha+e\vec{A})^{2}+(\vec{\nabla}\theta)^{2}\}\\ &-m_{\pi}^{2}\sin\psi\cos\theta-\mu_{I}^{2}\sin^{2}\theta\sin\psi\cos\psi=0 \ . \end{split} \end{equation} It is straightforward to see from this equation that the only way in which the equation of motion is satisfied for scenario A away from the chiral limit is if $\theta=\frac{\pi}{2}$. However, it is easy to show that this phase is unstable. Since this phase is spatially homogeneous, we proceed by considering the effective potential, which can be deduced from the Lagrangian of Eq.~(\ref{Leff}) to be: \begin{equation} V_{\rm eff}=-m_{\pi}^{2}f_{\pi}^{2}\left (\cos\theta\cos\psi-1 \right )-\frac{\mu_{I}^{2}f_{\pi}^{2}}{2}\sin^{2}\theta\cos^{2}\psi\ . \end{equation} Note that $V_{\rm eff}$ has been normalized such that it is zero for $\theta=\psi=0$.
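The claimed instability of the homogeneous point $\theta=\psi=\frac{\pi}{2}$ can be cross-checked symbolically from $V_{\rm eff}$. The snippet below is not part of the original analysis; it is a sketch using Python's sympy, with our own variable names, that applies the second-partial-derivative test:

```python
import sympy as sp

# theta, psi: angular fields; m = m_pi, f = f_pi, mu = mu_I (our notation).
th, ps, m, f, mu = sp.symbols('theta psi m f mu', positive=True)

# Effective potential, normalized to vanish at theta = psi = 0.
V = (-m**2 * f**2 * (sp.cos(th) * sp.cos(ps) - 1)
     - sp.Rational(1, 2) * mu**2 * f**2 * sp.sin(th)**2 * sp.cos(ps)**2)

pt = {th: sp.pi / 2, ps: sp.pi / 2}

# theta = psi = pi/2 is a critical point: both first derivatives vanish there.
assert sp.diff(V, th).subs(pt) == 0
assert sp.diff(V, ps).subs(pt) == 0

# Second-partial-derivative test: the discriminant equals -f^4 m^4 < 0,
# so the point is a saddle for any nonzero pion mass.
D = (sp.diff(V, th, 2) * sp.diff(V, ps, 2) - sp.diff(V, th, ps)**2).subs(pt)
assert sp.simplify(D + f**4 * m**4) == 0
```

The same script reproduces the analytic derivatives of the paper term by term.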
In order to proceed, note that the first partial derivatives of the effective potential are \begin{equation} \label{firstderivative} \begin{split} \frac{\partial V_{\rm eff}}{\partial \theta}&=m_{\pi}^{2}f_{\pi}^{2}\sin\theta\cos\psi-\mu_{I}^{2}f_{\pi}^{2}\sin\theta\cos\theta\cos^{2}\psi\\ \frac{\partial V_{\rm eff}}{\partial \psi}&=m_{\pi}^{2}f_{\pi}^{2}\cos\theta\sin\psi+\mu_{I}^{2}f_{\pi}^{2}\sin^{2}\theta\cos\psi\sin\psi\ . \end{split} \end{equation} Note that when $\theta=\psi=\frac{\pi}{2}$, $\frac{\partial V_{\rm eff}}{\partial\theta}=\frac{\partial V_{\rm eff}}{\partial\psi}=0$, as expected. However, using the second partial derivative test, the discriminant at $\theta=\psi=\frac{\pi}{2}$ assumes the following value: \begin{equation} \frac{\partial^{2}V_{\rm eff}}{\partial \theta^{2}}\frac{\partial^{2}V_{\rm eff}}{\partial \psi^{2}}-\left (\frac{\partial^{2}V_{\rm eff}}{\partial\theta\partial\psi} \right )^{2}=-f_{\pi}^{4}m_{\pi}^{4}\ , \end{equation} which suggests that the solution is a saddle point assuming pion masses are non-zero. In the chiral limit, the second partial derivative test is inconclusive. However, it is easy to see from Eq.~(\ref{firstderivative}) that the point remains a saddle point even in the chiral limit.\\ \\ \noindent \textit{Scenario C:} Next, we exclude scenario C because it is not possible to find an $\alpha$ that consistently satisfies the three constraints implied by scenario C of Eq.~(\ref{scenarios}). Explicitly, the three constraints are: \begin{equation} \begin{split} \label{scenarioC} \partial_{x}\alpha&=eayH_{\rm ext}\\ \partial_{y}\alpha&=-ebxH_{\rm ext}\\ \partial_{z}\alpha&=0\ . \end{split} \end{equation} Note that the last constraint above implies that $\alpha$ is independent of $z$. Then, solving the first constraint in the equation above, we get the following functional form for $\alpha$: \begin{equation} \label{alpha} \alpha(x,y)=eaxyH_{\rm ext}+g(y)\ , \end{equation} with $g(y)$ being a function of $y$ only.
However, this solution is inconsistent with the second constraint of Eq.~(\ref{scenarioC}). In order to prove that this is indeed the case, we can plug the result of Eq.~(\ref{alpha}) into the second equation of (\ref{scenarioC}), which leads to the following condition: \begin{equation} \partial_{y}g(y)=-e(a+b)xH_{\rm ext}=-exH_{\rm ext}\ , \end{equation} which suggests that $g(y)$ must be a function of $x$. This is in contradiction with Eq.~(\ref{alpha}), which requires that $g(y)$ be a function of $y$ but not $x$. Therefore, scenario C of Eq.~(\ref{scenarios}) cannot be satisfied self-consistently. \subsection{Scenario B} Having excluded scenarios A and C, in this subsection we show that scenario B is the only one that can actually be satisfied. Below are two examples: \subsubsection{Example 1: Normal Vacuum} A trivial solution that satisfies this constraint is the spatially homogeneous solution with $\pi_{x}=\pi_{y}=\pi_{z}=0$ but $\sigma=f_{\pi}$, which is simply the normal, non-superconducting vacuum of low-energy QCD~\cite{Son1,Son2}. The Gibbs free energy of this state is \begin{equation} \label{gibbsnormal} G_{\rm n}=\int d^{2}\vec{x}\ \frac{1}{2}H^{2}_{\rm ext}\ , \end{equation} with $H_{\rm ext}$ being the external magnetic field. \subsubsection{Example 2: $\pi^{0}$ domain wall} Another possible phase that can exist in the Schwinger limit (with no photon back-reaction) is a phase of $\pi^{0}$ domain walls~\cite{Son4}. In this phase, the pion fields are as follows: \begin{equation} \begin{split} \label{dw} \sigma&=f_{\pi}\cos[4\arctan(\exp(m_{\pi}z)) ]\\ \pi_{z}&=f_{\pi}\sin[4\arctan(\exp(m_{\pi}z)) ]\\ \pi_{x}&=\pi_{y}=0\ . \end{split} \end{equation} \\ \underline{$\mu_{I}=0$}: This phase was considered in the case of zero-isospin chiral perturbation theory and its energetics were calculated by incorporating the effects of the axial anomaly through the Wess-Zumino-Witten term~\cite{Son4, WessZumino, Witten, Kaymakcalan}.
Here we briefly summarize the results of Ref.~\cite{Son4}. The Gibbs free energy (not the Gibbs free energy density) of the domain wall phase is \begin{equation} \label{gibbsdw} G_{\rm dw}=\int d^{3}\vec{x}\ \frac{1}{2}H_{\rm ext}^{2}+\int d^{2}\vec{x}\ \left (8 f_{\pi}^{2}m_{\pi}-\frac{eH_{\rm ext}}{2\pi}\mu_{B} \right )\ , \end{equation} where $H_{\rm ext}$ is the external magnetic field and $\mu_{B}$ is the baryon chemical potential. Furthermore, it was shown in Ref.~\cite{Son4} that the magnetization per unit area of the domain wall state is: \begin{equation} \frac{\vec{M}_{\rm dw}}{S}=\hat{z}\frac{e}{2\pi}\mu_{B}\ , \end{equation} and the energy per unit area of the state is \begin{equation} \frac{E_{\rm dw}}{S}=8 f_{\pi}^{2}m_{\pi}\ . \end{equation} This phase becomes metastable for external magnetic fields ($H_{\rm ext}$) larger than $\frac{3m_{\pi}^{2}}{e}$, i.e. \begin{equation} \label{zeroisospinH} H_{\rm ext}>\frac{3m_{\pi}^{2}}{e}\ . \end{equation} Additionally, above a critical baryon chemical potential $\mu_{B}^{c}$, which is given by \begin{equation} \label{criticalmuB} \mu_{B}^{c}=\frac{16\pi f_{\pi}^{2}m_{\pi}}{eH_{\rm ext}}\ , \end{equation} the $\pi^{0}$ domain wall is not only metastable but becomes stable at the expense of the normal vacuum state, provided the magnetic field is large enough, i.e. satisfies the constraint of Eq.~(\ref{zeroisospinH}). This can be easily seen by comparing the Gibbs free energy of the $\pi^{0}$ domain wall state in Eq.~(\ref{gibbsdw}) with the Gibbs free energy of the normal state in Eq.~(\ref{gibbsnormal}).\\ \\ \underline{$\mu_{I}\neq 0$}: At finite isospin chemical potential, we will show here that while the energy, the critical baryon chemical potential and the magnetization of the $\pi^{0}$ domain wall remain unchanged, the range of magnetic fields over which the state is metastable changes. Note that it is clear from Eq.~(\ref{Leff}) that the energy is unaffected at finite isospin density as long as the charged pions do not condense.
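The value of $\mu_{B}^{c}$ in Eq.~(\ref{criticalmuB}) follows from comparing Eqs.~(\ref{gibbsdw}) and (\ref{gibbsnormal}): the bulk $\frac{1}{2}H_{\rm ext}^{2}$ term is common to both phases, so the wall wins once its surface term turns negative. A one-line symbolic confirmation (a sympy sketch in our notation, not part of the paper):

```python
import sympy as sp

f, m, e, H, muB = sp.symbols('f m e H mu_B', positive=True)

# Surface (per-unit-area) part of the domain-wall Gibbs free energy, Eq. (gibbsdw);
# the bulk (1/2) H^2 term also appears in G_n, Eq. (gibbsnormal), and cancels.
dG = 8 * f**2 * m - (e * H / (2 * sp.pi)) * muB

# The wall beats the normal vacuum once dG < 0; solving dG = 0 for mu_B
# reproduces mu_B^c = 16 pi f^2 m / (e H) of Eq. (criticalmuB).
muBc = sp.solve(sp.Eq(dG, 0), muB)[0]
assert sp.simplify(muBc - 16 * sp.pi * f**2 * m / (e * H)) == 0
```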
Additionally, the magnetization of the domain wall is also unchanged as long as the charged pions do not condense. In particular, the domain wall is metastable if \begin{equation} \label{finiteisospinH} H_{\rm ext}>H_{c3}\ , \end{equation} where $H_{c3}$ is defined as follows: \begin{equation} \label{Hc3} H_{c3}=\frac{3m_{\pi}^{2}+\mu_{I}^{2}}{e}\ . \end{equation} Note that $H_{c3}$ reduces to the lower bound for the external field of Eq.~(\ref{zeroisospinH}) when $\mu_{I}=0$. In order to derive this metastability condition, we proceed by expanding the Lagrangian of Eq.~(\ref{Lagrangian}) around the $\pi^{0}$ domain wall solution in the $\pi_{\pm}$ directions, which have the following standard definitions: \begin{equation} \pi_{\pm}=\frac{\pi_{x}\pm\pi_{y}}{\sqrt{2}} \ . \end{equation} Then for small fluctuations of $\pi_{+}$ and $\pi_{-}$, the effective Lagrangian becomes: \begin{equation} \label{Lfluctuation} \begin{split} \mathcal{L}&\approx(\partial^{\mu}+ieA^{\mu})\pi_{+}(\partial_{\mu}-ieA_{\mu})\pi_{-}\\ &-m_{\pi}^{2}\left (1-\frac{6}{\cosh^{2}(m_{\pi}z)} \right )\pi_{+}\pi_{-}+\mu_{\rm I}^{2}\pi_{+}\pi_{-}\ , \end{split} \end{equation} with corrections of $\mathcal{O}(\pi_{+}^{2}\pi_{-}^{2})$. Note that this Lagrangian is most easily derived using the parametrization of Ref.~\cite{Son4}. The resulting equation of motion for $\pi_{\pm}$ is as follows: \begin{equation} \label{EoMfluctuation} \begin{split} -(\partial_{i}\pm ieA_{i})^{2}\tilde{\pi}_{\pm}&+m_{\pi}^{2}\left (1-\frac{6}{\cosh^{2}(m_{\pi}z)} \right )\tilde{\pi}_{\pm}\\&-\mu_{\rm I}^{2}\tilde{\pi}_{\pm}=\omega_{n}^{2}\tilde{\pi}_{\pm}\ , \end{split} \end{equation} where we have used the decomposition \begin{equation} \pi_{\pm}(\vec{x},t)=\sum_{n}e^{i\omega_{n}t}\tilde{\pi}_{\pm}(\vec{x})\ . \end{equation} The potential of Eqs.~(\ref{Lfluctuation}) and (\ref{EoMfluctuation}) in the absence of an external magnetic field is a standard quantum mechanical reflectionless potential.
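The $-3m_{\pi}^{2}$ shift behind the bound of Eq.~(\ref{Hc3}) comes from the ground state of this reflectionless (P\"oschl--Teller) well: the normalizable ground state of the longitudinal problem is $\mathrm{sech}^{2}(m_{\pi}z)$ with eigenvalue $-3m_{\pi}^{2}$. A symbolic verification (a sympy sketch in our own notation, not from the paper):

```python
import sympy as sp

z, m, mu, e, H = sp.symbols('z m mu e H', positive=True)

# Longitudinal part of Eq. (EoMfluctuation): a reflectionless Poschl-Teller well.
psi0 = 1 / sp.cosh(m * z)**2  # candidate ground state, sech^2(m z)
Hpsi = -sp.diff(psi0, z, 2) + m**2 * (1 - 6 / sp.cosh(m * z)**2) * psi0

# Eigenvalue check: H psi0 = -3 m^2 psi0, the source of the -3 m_pi^2 shift.
assert sp.simplify(Hpsi + 3 * m**2 * psi0) == 0

# Adding the lowest Landau level e*H for the transverse directions and the
# isospin shift, the n = 0 mode energy is e*H - 3 m^2 - mu^2; it is
# non-negative exactly when H >= (3 m^2 + mu^2)/e, i.e. H_c3 of Eq. (Hc3).
Hc3 = sp.solve(sp.Eq(e * H - 3 * m**2 - mu**2, 0), H)[0]
assert sp.simplify(Hc3 - (3 * m**2 + mu**2) / e) == 0
```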
In a magnetic field, the dispersion relation is simply: \begin{equation} \omega_{n}^{2}=(2n+1)eH_{\rm ext}-3m_{\pi}^{2}-\mu_{I}^{2}\ . \end{equation} From the dispersion relation, it is obvious that the $\pi^{0}$ domain walls are metastable only if $\omega_{n}^{2}$ is positive for all $n$. It is easy to see that this occurs if the condition of Eq.~(\ref{finiteisospinH}) is satisfied. While the metastability condition has changed in going to the finite isospin case, note that the energy per unit area of the $\pi^{0}$ domain wall phase remains unchanged, since the isospin chemical potential enters the Lagrangian through a term proportional to $\pi_{+}\pi_{-}$, which does not contribute to the energy. Furthermore, these equations of motion remain valid even with the inclusion of the Wess-Zumino-Witten term in chiral perturbation theory, since the contribution to the action of this term arises from a full derivative term, which does not affect the equations of motion. \begin{figure}[h] \centering \includegraphics[width=0.5\textwidth]{phase-diagram-above-muB-schwinger-limit.pdf} \caption{A schematic plot showing the two phases that can exist above the critical baryon chemical potential in the Schwinger limit. The isospin chemical potential is plotted on the x-axis while the external magnetic field $H$ is plotted on the y-axis. $H_{c3}$ is the phase transition line from the normal phase to the domain wall phase and it depends on the isospin chemical potential through Eq.~(\ref{Hc3}).} \label{schwinger-abovemuB} \end{figure} \section{Phase Diagram at finite isospin density and finite baryon density} \begin{figure}[h] \centering \includegraphics[width=0.5\textwidth]{phase-diagram-below-muB.pdf} \caption{A schematic plot showing the different phases that exist below the critical baryon chemical potential. Note that for isospin chemical potentials $\mu_{I}$ below $m_{\pi}$, the normal vacuum state persists.
Above $m_{\pi}$ and below $H_{c1}$, a uniform superconducting state exists; between $H_{c1}$ and $H_{c2}$, there is an inhomogeneous phase of superconducting vortices, and above $H_{c2}$ the normal state is energetically favored. (color online)} \label{belowmuB} \end{figure} \subsection{Schwinger Limit: No Photon Back-reaction} Now we can consider the phase diagram of finite isospin chiral perturbation theory in the Schwinger limit but in the presence of a baryon chemical potential, which we will assume is smaller than the mass of the lightest nucleon such that nuclear matter does not condense. Note that if there is no baryon chemical potential, the normal phase is energetically favored and there are no phase transitions. The only other phase we know of that can occur in the Schwinger limit is the $\pi^{0}$ domain wall state, which only becomes stable above the critical baryon chemical potential of Eq.~(\ref{criticalmuB}). \begin{figure}[h] \centering \includegraphics[width=0.5\textwidth]{phase-diagram-above-muB.pdf} \caption{A schematic plot showing the different phases that exist above the critical baryon chemical potential. Note that for isospin chemical potentials $\mu_{I}$ above $m_{\pi}$ and magnetic fields below $H_{c1}$, a uniform superconducting state exists; between $H_{c1}$ and $H_{c2}$, there is an inhomogeneous phase of superconducting vortices, and above $H_{c2}$ the normal state is energetically favored. Above $H_{c3}$, even if $\mu_{I}<m_{\pi}$, the domain wall phase is more stable than the normal state. (color online)} \label{above-muB} \end{figure} \subsection{Case with photon back-reaction} The $\pi^{0}$ domain wall state remains stable away from the Schwinger limit but is energetically disfavored compared to the normal vacuum, the vortex state and the uniform superconducting state of finite isospin chiral perturbation theory when $\mu_{B}=0$.
However, with the inclusion of anomaly effects through the WZW term, $\pi^{0}$ domain walls do become stable above the critical baryon chemical potential defined in Eq.~(\ref{criticalmuB}). In Fig.~\ref{belowmuB}, we show the phase diagram of finite isospin chiral perturbation theory in the absence of a baryon chemical potential. This phase diagram was considered in Ref.~\cite{Adhikari}, where we assumed that while the external magnetic field couples to the charged pions, the pions themselves only interact with each other through strong interactions at leading order in chiral perturbation theory. This is analogous to the nuclear matter problem, where electromagnetic effects are ignored, and the physical picture is valid as long as $\mu_{I}\gg\mu_{\rm es}$, where $\mu_{I}$ is the finite isospin chemical potential, the amount of energy for an additional charged pion to condense, and $\mu_{\rm es}$ is the electromagnetic energy associated with adding the additional charged pion. Above the critical baryon chemical potential of Eq.~(\ref{criticalmuB}) and below nuclear saturation densities, i.e. the silver blaze region for the baryon chemical potential, the $\pi^{0}$ domain wall state becomes energetically more stable than the normal phase. In order to characterize the phase diagram of finite isospin chiral perturbation theory in a magnetic field ($H_{\rm ext}$) above the critical baryon chemical potential, it is important to determine exactly the critical magnetic field, $H_{c2}$, at which the transition from the vortex state to the normal state occurs. Close to this critical point, the magnitude of the charged pion condensate is small compared to the pion decay constant, i.e. $\theta\approx 0$.
As such, the Lagrangian is equal to \begin{equation} \begin{split} \mathcal{L}_{\rm eff}=&\frac{1}{2}H_{\rm ext}^{2}-\frac{1}{2}\left (\partial_{i}+i e A^{\rm ext}_{i} \right ) \pi_{+}\left ( \partial_{i}-i e A^{\rm ext}_{i} \right )\pi_{-}\\ &-\frac{1}{2}\left (\mu_{I}^{2}-m_{\pi}^{2}\right ) \pi_{+}\pi_{-}+\mathcal{O}\left ( (\pi_{+}\pi_{-})^{2} \right)\ , \end{split} \end{equation} and has corrections that are quartic in the charged pion fields. Note that $A^{\rm ext}$ is the vector potential associated with an approximately uniform magnetic field of strength $H_{c2}$, i.e. the second critical point, pointing in the z-direction. In writing the Lagrangian down, we assumed that the neutral pion field remains uncondensed. It was shown numerically that the neutral pion does not condense in the low-density regime of Abrikosov vortices. We assume that this remains the case near the second critical field, where the vortices are densely packed. The equation of motion for the charged pions is then \begin{equation} -(\partial_{i}-ie A_{i}^{\rm ext})^{2}\pi_{+}+m_{\pi}^{2}\pi_{+}=\mu_{I}^{2}\pi_{+}\ , \end{equation} and its dispersion relation gives the standard Landau levels \begin{equation} \mu_{I}^{2}=(2n+1)eH_{\rm ext}+k_{z}^{2}+m_{\pi}^{2}\ . \end{equation} Given this dispersion relation, the largest magnetic field that can be sustained in an Abrikosov vortex lattice occurs when $n=0$ and $k_{z}=0$. Therefore, the largest value that can be assumed by $H_{\rm ext}$ is the critical field $H_{c2}$, which is as follows: \begin{equation} eH_{c2}=\mu_{I}^{2}-m_{\pi}^{2}\ .
\end{equation} Now that we know the two critical fields, $H_{c2}$ and $H_{c3}$, at which the transition from the vortex state to the normal state and the transition from the normal state to the $\pi^{0}$ domain wall state occur, respectively, we can proceed to construct the phase diagram of finite isospin chiral perturbation theory in a uniform magnetic field above the critical baryon density. The phase diagram is presented in Fig.~\ref{above-muB}. Note that the difference between $H_{c3}$ and $H_{c2}$ is \begin{equation} H_{c3}-H_{c2}=\frac{4m_{\pi}^{2}}{e}\ . \end{equation} In the chiral limit, however, the phase transition to the $\pi^{0}$ domain wall state occurs directly from the vortex state, without the intermediate transition to the normal state that occurs for finite pion masses. Indeed, in the chiral limit $H_{c3}=H_{c2}$, so the normal phase is absent from the phase diagram. \begin{acknowledgements} I would like to acknowledge the support of the physics department at St. Olaf College. \end{acknowledgements}
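As a numerical sanity check of the critical-field relations above, one can verify that the Landau-level dispersion $\mu_{I}^{2}=(2n+1)eH_{\rm ext}+k_{z}^{2}+m_{\pi}^{2}$ is maximized in $H_{\rm ext}$ at $n=0$, $k_{z}=0$, reproducing $eH_{c2}=\mu_{I}^{2}-m_{\pi}^{2}$, and that the gap $H_{c3}-H_{c2}=4m_{\pi}^{2}/e$ closes in the chiral limit. The numerical values in the Python sketch below are purely illustrative placeholders, not physical fits:

```python
# Sanity check of the critical-field relations.
# All numerical values are illustrative only.
e, m_pi, mu_I = 1.0, 0.140, 0.200

def max_field(n, k_z):
    """Largest H_ext allowed by mu_I^2 = (2n+1) e H_ext + k_z^2 + m_pi^2."""
    return (mu_I**2 - k_z**2 - m_pi**2) / ((2 * n + 1) * e)

H_c2 = max_field(0, 0.0)
assert abs(e * H_c2 - (mu_I**2 - m_pi**2)) < 1e-12
# Higher Landau levels or nonzero k_z support only weaker fields:
assert max_field(1, 0.0) < H_c2 and max_field(0, 0.05) < H_c2

# The gap H_c3 - H_c2 = 4 m_pi^2 / e vanishes in the chiral limit:
gap = lambda m: 4 * m**2 / e
assert gap(m_pi) > 0 and gap(0.0) == 0.0
```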
\section{Introduction}\label{s1} The vertex operator realizations of affine Kac--Moody algebras \cite{LW,FK,KKLW,KP} led to the introduction of the notions of a \emph{vertex algebra} \cite{B} and its \emph{twisted modules} \cite{Le,FLM,FFR,D}. Twisted modules played an important role in the Frenkel--Lepowsky--Meurman construction of a vertex algebra with a natural action of the Monster on it \cite{FLM}. Vertex algebras provide a rigorous algebraic description of two-dimensional chiral \emph{conformal field theory} (see e.g.\ \cite{BPZ, Go, DMS}), and twisted modules are important for studying orbifolds (see e.g.\ \cite{DHVW,DVVV, KT} among many other works). Motivated by an example from \emph{logarithmic conformal field theory} (see e.g.\ \cite{AM,CR}), Y.-Z.~Huang introduced in \cite{H} a more general notion of a twisted module, for which the corresponding automorphism may have an infinite order and is not necessarily semisimple. The main feature of such twisted modules is that the twisted fields involve the logarithm of the formal variable. However, they lacked a Borcherds identity, $n$-th product identity, or commutator formula, all of which are powerful tools in the theory of vertex algebras. The difficulty was partly caused by the fact that the definition of $n$-th product of fields from \cite{Li1,Li2} is not very convenient in the case of twisted modules. This problem was solved in \cite{BM}, where we showed that another formula for the $n$-th product \cite{BN,BK2} remains valid in the twisted case. In the present paper, we use the formula from \cite{BM} to provide another definition of a twisted module, more general than the one from \cite{H}. Our definition is in the spirit of \cite{Li1,Li2,LL}, so that the state-field correspondence map $Y$ is a homomorphism of vertex algebras relative to all $n$-th products. We develop a framework that allows many results about vertex algebras to be transferred to general twisted modules. 
In particular, we define a mode expansion of twisted fields and a shifted delta function. Our main results are a Borcherds identity and a commutator formula for general twisted modules. We investigate in detail the examples of affine and Heisenberg vertex algebras, and we plan to consider additional examples in the future. The theory developed here will be used in our joint work with T.~Milanov, which aims to understand and utilize the vertex operators arising in \emph{Gromov--Witten theory} (see \cite{DZ1,DZ2,M,MT,FGM,BM,CV,LYZ,MST}). Here is an outline of the present paper. In \seref{s2}, we briefly review the basic definitions and properties of vertex algebras and their modules. This section can be skipped by readers familiar with the theory. In \seref{s3}, we introduce the notions of a logarithmic field, locality, and $n$-th products of logarithmic fields. We express the $n$-th product in terms of the normally ordered product and the propagator, and we prove that any local collection of logarithmic fields generates a vertex algebra. In \seref{s4}, we introduce the main object of the paper, the notion of a $\varphi$-twisted $V$-module, where $\varphi$ is an arbitrary (not necessarily semisimple) automorphism of a vertex algebra $V$. When $\varphi$ is locally finite, we express it as $\varphi=\sigma e^{-2\pi\mathrm{i}\mathcal{N}}$, where $\sigma\in\Aut(V)$ is semisimple and $\mathcal{N}\in\Der(V)$ is locally nilpotent. \seref{s5} contains our main result, the Borcherds identity for $\varphi$-twisted modules. In particular, as a consequence, we derive a commutator formula for the logarithmic fields in a twisted module. We prove that the Borcherds identity can replace the locality and $n$-th product identity in the definition of a twisted module. In \seref{s6}, we describe all twisted modules of affine and Heisenberg vertex algebras in terms of modules over certain twisted versions of the corresponding Lie algebras. 
We also determine the action of the Virasoro algebra. For the Heisenberg vertex algebra, all twisted irreducible highest-weight modules are constructed explicitly. Throughout the paper, $z,z_1,z_2,\dots$ will be commuting formal variables, and we will use the notation $z_{ij}=z_i-z_j$ and $x^{(k)}=x^k/k!$. All vector spaces will be over $\mathbb{C}$. We denote by $\mathbb{Z}_+$ the set of non-negative integers. \section{Preliminaries on vertex algebras}\label{s2} In this section, we briefly review the basic definitions and properties of vertex algebras and their modules. For more details, we refer to \cite{FLM, K2, FB, LL, KRR}. \subsection{Quantum fields}\label{sqf} A \emph{(quantum) field} on a vector space $V$ is a linear map from $V$ to the space of Laurent series $V(\!(z)\!) = V[[z]][z^{-1}]$. The space of all fields \begin{equation*}\label{vert1} \QF(V)=\Hom_\mathbb{C}(V, V(\!(z)\!)) \end{equation*} is closed under the derivative $\partial_z$. The composition $a(z)b(z)$ of two fields is not well defined in general. Instead, one considers the composition $a(z_1)b(z_2)$, which is a map from $V$ to $V(\!(z_1)\!)(\!(z_2)\!)$. Note that $V(\!(z_1)\!)(\!(z_2)\!)$ and $V(\!(z_2)\!)(\!(z_1)\!)$ are two different subspaces of $V[[z_1^{\pm1},z_2^{\pm1}]]$ whose intersection is $V(\!(z_1,z_2)\!) = V[[z_1,z_2]][z_1^{-1},z_2^{-1}]$. A pair of fields $a,b$ is called \emph{local} \cite{Go,DL,Li1} if \begin{equation}\label{vert2} z_{12}^N \, a(z_1) b(z_2) = z_{12}^N \, b(z_2) a(z_1) \,, \qquad z_{12}=z_1-z_2 \,, \end{equation} for some integer $N\ge0$. When applied to any $v\in V$, both sides of this equation become elements of $V(\!(z_1,z_2)\!)$. For $n\in\mathbb{Z}$, the \emph{$n$-th product} $a_{(n)}b$ of two local fields $a,b$ is defined by (cf.\ \cite{BN,BK2}): \begin{equation*}\label{vert3} (a_{(n)}b)(z)v = \partial_{z_1}^{(N-1-n)} \bigl(z_{12}^N \, a(z_1) b(z_2)v \bigr)\big|_{ z_1=z_2=z } \end{equation*} for $v\in V$, $n\le N-1$, and $a_{(n)}b=0$ for $n\ge N$. 
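As a toy illustration of this divided-derivative formula (a sketch only: scalar-valued "fields" that commute, so the locality bound \eqref{vert2} holds with $N=0$, rather than the general operator-valued case), the formula reduces to $a_{(-k-1)}b=(\partial_z^{(k)}a)\,b$ and $a_{(n)}b=0$ for $n\ge0$, which can be checked with sympy:

```python
import sympy as sp

z, z1, z2 = sp.symbols('z z1 z2')

def nth_product(a, b, n, N=0):
    """n-th product via the divided-derivative formula
    (a_(n)b)(z) = d^{(N-1-n)}_{z1} (z12^N a(z1) b(z2)) |_{z1=z2=z}.
    Here a, b are scalar Laurent polynomials in z, so they commute
    and the locality bound holds with N = 0."""
    if n >= N:
        return sp.S.Zero
    k = N - 1 - n
    expr = (z1 - z2)**N * a.subs(z, z1) * b.subs(z, z2)
    expr = sp.diff(expr, z1, k) / sp.factorial(k)
    return sp.expand(expr.subs({z1: z, z2: z}))

a, b = z**3, z**-2
assert nth_product(a, b, 0) == 0                          # no singular terms
assert nth_product(a, b, -1) == z                         # the product a(z)b(z)
assert nth_product(a, b, -2) == sp.expand(a.diff(z) * b)  # (d_z a) b
```

For genuinely non-commuting fields one must take $N>0$ and track operator ordering, which this scalar sketch does not attempt.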
It is easy to show that this definition is equivalent to the one due to H.\ Li \cite{Li1} (see \cite[Lecture 14]{KRR}). Note that $a_{(n)}b$ is again a field and is independent of the choice of $N$ satisfying \eqref{vert2}. Moreover, if $c$ is another field local with $a$ and $b$, then $c$ is local with $a_{(n)}b$ (Dong's Lemma \cite{Li1,K2}; see \leref{llogf4} below). The constant field $I$ equal to the identity operator is local with any other field $a$, and satisfies \begin{equation*} a_{(n)}I = 0 \,, \quad a_{(-n-1)}I = \partial_z^{(n)} a \,, \qquad n\ge 0\,. \end{equation*} Let $\mathcal{V}\subset\QF(V)$ be a \emph{local collection} of fields, i.e., such that every pair $a,b\in\mathcal{V}$ is local. We will assume that $I\in\mathcal{V}$. By Dong's Lemma, the smallest subspace $\bar\mathcal{V}\subset\QF(V)$ containing $\mathcal{V}$ and closed under all $n$-th products is again a local collection. Then $\bar\mathcal{V}$ is also closed under $\partial_z$. \subsection{Vertex algebras}\label{svert} A \emph{vertex algebra} is a vector space $V$ (space of states), with a distinguished vector $\vac\in V$ (vacuum vector) and a linear map $Y\colon V\to\QF(V)$ (state-field correspondence), such that $Y(\vac)=I$ and $Y(V)$ is a local collection of fields. The fields $Y(a)$ $(a\in V)$ are usually written as \begin{equation*}\label{vert5} Y(a,z) = \sum_{n\in\mathbb{Z}} a_{(n)} \, z^{-n-1} \,, \qquad a_{(n)} \in \End(V) \,, \end{equation*} and the coefficients $a_{(n)}$ are called the \emph{modes} of $a$. This endows $V$ with products $a_{(n)}b \in V$ for all $a,b\in V$, $n\in\mathbb{Z}$, and the map $Y$ is a homomorphism for all of them: \begin{equation}\label{vert7} Y(a_{(n)}b,z) = Y(a,z)_{(n)} Y(b,z) \,. \end{equation} The \emph{translation operator} $T\in\End(V)$ is defined by $Ta=a_{(-2)}\vac$. Then \begin{equation*} [T,Y(a,z)] = \partial_z Y(a,z) \,, \qquad a\in V \,. \end{equation*} A field $a(z)$ with this property is called \emph{translation covariant}. 
By the Kac Existence Theorem \cite{K2,DK}, every local collection $\mathcal{V}\subset\QF(V)$ of translation covariant fields generates a vertex algebra structure on $V$, provided that $V$ is linearly spanned by $\vac$ and all coefficients of \begin{equation*} a_1(z_1) \cdots a_r(z_r) \vac \,, \qquad r\geq1 \,, \; a_i \in\mathcal{V} \,. \end{equation*} Note that $\bar\mathcal{V}$, as defined above, is also a vertex algebra and the map $Y\colon V\to\bar\mathcal{V}$ is an isomorphism \cite{Li1,K2,DK}. For future use, recall that a \emph{derivation} of $V$ is a linear operator $\mathcal{D}$ on $V$ such that \begin{equation*} \mathcal{D}(a_{(n)}b) = (\mathcal{D} a)_{(n)}b + a_{(n)} (\mathcal{D} b) \,, \qquad a,b\in V \,, \;\; n\in\mathbb{Z}\,. \end{equation*} The space $\Der(V)$ of all derivations is a Lie algebra containing $T$. \subsection{Borcherds identity}\label{sborid} The main identity satisfied by the modes is the \emph{Borcherds identity} (also called Jacobi identity \cite{FLM}): \begin{equation}\label{vert10} \begin{split} \sum_{i=0}^\infty (-1)^i & \binom{n}{i} \Bigl( a_{(m+n-i)}(b_{(k+i)}c) - (-1)^n \, b_{(k+n-i)}(a_{(m+i)}c) \Bigr) \\ &= \sum_{j=0}^\infty \binom{m}{j} (a_{(n+j)}b)_{(m+k-j)}c \,, \end{split} \end{equation} where $a,b,c \in V$. Observe that the above sums are finite, because $a_{(j)}b = 0$ for sufficiently large $j$. In particular, setting $n=0$ in the Borcherds identity, we obtain the \emph{commutator formula} \begin{equation}\label{vert11} [a_{(m)}, b_{(k)}] = \sum_{j=0}^\infty \binom{m}{j} (a_{(j)}b)_{(m+k-j)} \,. \end{equation} Equivalently, \begin{equation}\label{vert12} [Y(a,z_1), Y(b,z_2)] = \sum_{j=0}^\infty Y(a_{(j)}b,z_2) \, \partial_{z_2}^{(j)} \delta(z_1,z_2) \,, \end{equation} where \begin{equation*} \delta(z_1,z_2) = \sum_{m\in\mathbb{Z}} z_1^{-m-1} z_2^m \end{equation*} is the formal \emph{delta function}. 
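The formal delta function is characterized by the reproducing property $\Res_{z_1} a(z_1)\,\partial_{z_2}^{(j)}\delta(z_1,z_2)=\partial_{z_2}^{(j)}a(z_2)$, which can be verified on truncations of $\delta$. In the sympy sketch below, the truncation size $M$ is an arbitrary choice, large enough to cover the Laurent monomials being tested:

```python
import sympy as sp

z1, z2 = sp.symbols('z1 z2')

def delta_j(j, M=10):
    """Truncation of the divided derivative d^{(j)}_{z2} of
    delta(z1,z2) = sum_m z1^{-m-1} z2^m, keeping |m| <= M."""
    d = sum(z1**(-m - 1) * z2**m for m in range(-M, M + 1))
    return sp.expand(sp.diff(d, z2, j) / sp.factorial(j))

def res_z1(expr):
    """Residue in z1: the coefficient of z1^{-1}."""
    return sp.expand(expr).coeff(z1, -1)

# Check Res_{z1} a(z1) d^{(j)} delta(z1,z2) = d^{(j)} a(z2) for a(z) = z^k:
for k in range(-3, 4):
    for j in range(3):
        lhs = res_z1(z1**k * delta_j(j))
        rhs = sp.diff(z2**k, z2, j) / sp.factorial(j)
        assert sp.simplify(lhs - rhs) == 0
```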
It is often convenient to use the formal expansions \begin{equation}\label{vert14} \begin{split} \iota_{z_1,z_2} z_{12}^n &= \sum_{i=0}^\infty \binom{n}{i} (-1)^i z_1^{n-i} z_2^i \,, \\ \iota_{z_2,z_1} z_{12}^n &= \sum_{i=0}^\infty \binom{n}{i} (-1)^{n+i} z_1^i z_2^{n-i} \,. \end{split} \end{equation} Then \begin{equation}\label{vert15} \partial_{z_2}^{(j)} \delta(z_1,z_2) = (\iota_{z_1,z_2} - \iota_{z_2,z_1}) z_{12}^{-j-1} \,, \qquad j\ge0 \,. \end{equation} The delta function has the property \begin{equation}\label{vert16} \Res_{z_1} a(z_1) \, \partial_{z_2}^{(j)} \delta(z_1,z_2) = \partial_{z_2}^{(j)} a(z_2) \end{equation} for any field $a(z)$, where as usual $\Res_z$ denotes the coefficient of $z^{-1}$. \subsection{Twisted modules}\label{stwmod} A \emph{representation} (or \emph{module}) of $V$ is a vector space $W$ endowed with a linear map $Y\colon V\to\QF(W)$ such that $Y(\vac)=I$ and the Borcherds identity \eqref{vert10} holds for $a,b\in V$, $c\in W$ (see \cite{FB, LL,KRR}). Equivalently, due to \cite{Li1}, one can replace \eqref{vert10} by the condition that $Y(V)\subset\QF(W)$ is a local collection of fields satisfying the $n$-th product identity \eqref{vert7}. The commutator formulas \eqref{vert11}, \eqref{vert12} hold for modules as well. Recall that an \emph{automorphism} of a vertex algebra $V$ is an invertible linear operator $\sigma$ on $V$ such that \begin{equation*} \sigma( a_{(n)} b ) = (\sigma a)_{(n)} (\sigma b) \,, \qquad a,b\in V \,, \;\; n\in\mathbb{Z} \,. \end{equation*} The group of all automorphisms of $V$ is denoted $\Aut(V)$. If $\sigma\in\Aut(V)$ has a finite order $r$, then $\sigma$ is semisimple with eigenvalues $r$-th roots of $1$. In the definition of a \emph{$\sigma$-twisted representation} $W$ of $V$, the image of the above map $Y$ is allowed to have non-integral (rational) powers of $z$ (see \cite{FFR,D,KRR}). 
More precisely, \begin{equation*} Y(a,z) = \sum_{n\in p+\mathbb{Z}} a_{(n)} \, z^{-n-1} \,, \qquad \text{if} \quad \sigma a = e^{-2\pi\mathrm{i} p} a \,, \;\; p\in\frac1r\mathbb{Z} \,, \end{equation*} where $a_{(n)} \in \End(W)$. Equivalently, the monodromy around $z=0$ is given by the action of $\sigma$: \begin{equation*} Y(\sigma a,z) = Y(a, e^{2\pi\mathrm{i}}z) \,, \qquad a\in V \,. \end{equation*} The Borcherds identity \eqref{vert10} satisfied by the modes remains the same in the twisted case, provided that \begin{equation*} \sigma a = e^{-2\pi\mathrm{i} m} a \,, \quad \sigma b = e^{-2\pi\mathrm{i} k} b \,, \qquad m,k\in\mathbb{Q} \,, \;\; n\in\mathbb{Z}\,. \end{equation*} As a consequence, we also have the commutator formula \eqref{vert11}. However, \eqref{vert12} needs to be modified for twisted modules (see, e.g., \cite{BK1} and \eqref{twlog16} below). It was proved in \cite{BM} that in the definition of a twisted module the Borcherds identity can be replaced by the locality of all $Y(a,z)$ and the $n$-th product identity \eqref{vert7}. Note that in the twisted case our definition of the $n$-th product differs from that of H.~Li \cite{Li1,Li2}. \section{Logarithmic quantum fields}\label{s3} In this section, we introduce the notions of a logarithmic field, locality, and $n$-th products of logarithmic fields. We express the $n$-th product in terms of the normally ordered product and propagator. We prove that any local collection of logarithmic fields generates a vertex algebra. \subsection{Logarithmic fields and locality}\label{slogf} As before, $z,z_1,z_2,\dots$ will be formal variables, and let $\zeta,\zeta_1,\zeta_2,\dots$ be another set of formal variables corresponding to them, which will be thought of as $\zeta=\log z$ and $\zeta_i=\log z_i$. 
More precisely, instead of $\partial_z$ and $\partial_\zeta$, we will work with the derivations \begin{equation*} D_z = \partial_z+z^{-1} \partial_\zeta \,, \qquad D_\zeta =z\partial_z+\partial_\zeta \,, \end{equation*} and similarly for $D_{z_i}$, $D_{\zeta_i}$. Fix a vector space $W$ over $\mathbb{C}$. For $\alpha\in\mathbb{C}/\mathbb{Z}$, we denote by $W[\zeta][[z]] z^{-\alpha}$ the space of all formal series of the form (cf.\ \cite{BK2}): \begin{equation*} \sum_{i=0}^\infty w_{i}(\zeta) z^{i-m} \,, \qquad w_{i}(\zeta)\in W[\zeta] \,, \;\; m\in\alpha \,. \end{equation*} For example, $W[\zeta][[z]] z^\mathbb{Z} = W[\zeta](\!(z)\!)$ is the space of Laurent series in $z$ with coefficients in $W[\zeta]$. Observe that $W[\zeta][[z]] z^{-\alpha}$ is a module over the ring $\mathbb{C}(\!(z)\!)$, and is closed under the derivations $D_z$ and $D_\zeta$. \begin{definition}\label{dlogf1} With the above notation, let \begin{equation*} \LF_\alpha(W) = \Hom_\mathbb{C}(W,W[\zeta][[z]] z^{-\alpha}) \,, \qquad \alpha\in\mathbb{C}/\mathbb{Z} \,, \end{equation*} and \begin{equation*} \LF(W) = \bigoplus_{\alpha\in\mathbb{C}/\mathbb{Z}} \LF_\alpha(W) \,. \end{equation*} The elements of $\LF(W)$ are called \emph{logarithmic (quantum) fields} on $W$, and are denoted as $a(\zeta,z)$ or $a(z)$ for short. \end{definition} By definition, every logarithmic field $a(z)$ is a finite sum of elements from the spaces $\LF_\alpha(W)$. The composition of two logarithmic fields $a\in\LF_\alpha(W)$ and $b\in\LF_\beta(W)$ is the linear map \begin{equation*} a(z_1)b(z_2) \colon W\to \bigl( W[\zeta_1][[z_1]] z_1^{-\alpha} \bigr) [\zeta_2][[z_2]] z_2^{-\beta} \,. \end{equation*} \begin{definition}\label{dlogf2} A pair of logarithmic fields $a,b$ is called \emph{local} if \begin{equation}\label{logf7} z_{12}^N \, a(z_1) b(z_2) = z_{12}^N \, b(z_2) a(z_1) \,, \qquad z_{12}=z_1-z_2 \,, \end{equation} for some integer $N\ge0$. 
\end{definition} For every $v\in W$, the powers of $z_2$ in $a(z_1)b(z_2)v$ belong to the union of finitely many sets of the form $\gamma+\mathbb{Z}_+$ ($\gamma\in\mathbb{C}$). If $a(z_1)$ and $b(z_2)$ are local, then \begin{equation*} z_{12}^N \, a(z_1) b(z_2)v = z_{12}^N \, b(z_2) a(z_1)v \end{equation*} satisfies this property both for the powers of $z_1$ and $z_2$. In fact, when $a\in\LF_\alpha(W)$ and $b\in\LF_\beta(W)$ are local, both sides of this equation belong to the space \begin{equation*} W[\zeta_1,\zeta_2][[z_1,z_2]] z_1^{-\alpha} z_2^{-\beta} \,. \end{equation*} \subsection{$n$-th products}\label{slocnpr} Now we define an operation on local logarithmic fields, which provides an algebraic formulation of the operator product expansion (cf.\ \cite{BN,BK2,BM}). \begin{definition}\label{dlogf3} For $n\in\mathbb{Z}$, the \emph{$n$-th product} $a_{(n)}b$ of two local logarithmic fields $a,b$ is defined by: \begin{equation}\label{logf9} (a_{(n)}b)(\zeta,z)v = D_{z_1}^{(N-1-n)} \bigl(z_{12}^N \, a(\zeta_1,z_1) b(\zeta_2,z_2)v \bigr)\Big|_{ \substack{z_1=z_2=z \\ \zeta_1=\zeta_2=\zeta} } \end{equation} for $v\in W$ and $n\le N-1$. For $n\ge N$, let $a_{(n)}b=0$. As before, we will suppress the dependence on $\zeta$ and understand that setting $z_1=z$ automatically sets $\zeta_1=\zeta$. \end{definition} Note that $a_{(n)}b$ is again a logarithmic field, and it does not depend on the choice of $N$ satisfying \eqref{logf7}. Moreover, $a_{(n)}b \in \LF_{\alpha+\beta}(W)$ if $a\in\LF_\alpha(W)$ and $b\in\LF_\beta(W)$. Using the Leibniz rule, one can derive from \eqref{logf9} the following properties: \begin{align*} (D_z a)_{(n)}b &= -n a_{(n-1)}b \,, \\ D_z(a_{(n)}b) &= (D_z a)_{(n)}b + a_{(n)} (D_z b) \,, \\ \partial_\zeta(a_{(n)}b) &= (\partial_\zeta a)_{(n)}b + a_{(n)} (\partial_\zeta b) \,. \end{align*} We also have an analog of Dong's Lemma (cf.\ \cite{Li1,Li2,K2}). 
\begin{lemma}\label{llogf4} Let\/ $a,b,c$ be logarithmic fields such that the pairs\/ $(a,b)$, $(a,c)$, $(b,c)$ are local. Then\/ $a_{(n)}b$ and\/ $c$ are local for all\/ $n\in\mathbb{Z}$. \end{lemma} \begin{proof} For some sufficiently large $N$, we have \eqref{logf7} and \begin{equation*} z_{13}^N \, a(z_1) c(z_3) = z_{13}^N \, c(z_3) a(z_1) \,, \qquad z_{23}^N \, b(z_2) c(z_3) = z_{23}^N \, c(z_3) b(z_2) \,. \end{equation*} Using the Leibniz rule, we find for $n'=N-1-n\ge0$ and $v\in W$, \begin{align*} z_{23}^{2N+n'} (a_{(n)}b)(z_2) c(z_3)v = \Bigl(z_{13}^{N+n'} D_{z_1}^{(n')} \bigl(z_{12}^N \, z_{23}^N \, a(z_1) b(z_2) c(z_3)v \bigr)\Bigr)&\Big|_{z_1=z_2} \\ = \sum_{i=0}^{n'} (-1)^{n'-i} \binom{N+n'}{n'-i} D_{z_1}^{(i)} \Bigl(z_{13}^{N+i} \, z_{12}^N \, z_{23}^N \, a(z_1) b(z_2) c(z_3)v \Bigr)&\Big|_{z_1=z_2} \,. \end{align*} We can move $c(z_3)$ to the left of $a(z_1) b(z_2)$ inside the parentheses, and then rewrite the whole expression back as $z_{23}^{2N+n'} c(z_3) (a_{(n)}b)(z_2) v$. \end{proof} \subsection{Normally ordered products and propagators}\label{sprop} For $\alpha\in\mathbb{C}/\mathbb{Z}$, pick the unique representative $\alpha_0\in\alpha$ with $-1<\mathrm{Re}\,\alpha_0\le0$. Every logarithmic field $a\in\LF_\alpha(W)$ can be expanded as \begin{equation*} a(\zeta,z) = \sum_{i\in\mathbb{Z}} a_i(\zeta) z^{-i-\alpha_0} \,, \qquad a_i(\zeta) \in\Hom_\mathbb{C}(W,W[\zeta]) \,, \end{equation*} where for each $v\in W$ we have $a_i(\zeta)v=0$ for sufficiently large $i$. The \emph{annihilation} and \emph{creation parts} of $a(z)$ are defined respectively as \begin{align*} a(z)_- &= a(\zeta,z)_- = \sum_{i=1}^\infty a_i(\zeta) z^{-i-\alpha_0} \,, \\ a(z)_+ &= a(\zeta,z)_+ = \sum_{i=-\infty}^{0} a_i(\zeta) z^{-i-\alpha_0} \,. \end{align*} These are extended by linearity to all $a\in\LF(W)$. 
In other words, $a(z)_-$ is the part of $a(z)$ containing only $z^\gamma$ with $\mathrm{Re}\,\gamma<0$, while in $a(z)_+$ we have only $z^\gamma$ with $\mathrm{Re}\,\gamma\ge0$. \begin{definition}\label{dlogf4} The \emph{normally ordered product} of two logarithmic fields $a(z_1)$, $b(z_2)$ is defined by: \begin{equation*} {:} a(z_1) b(z_2) {:} = a(z_1)_+ b(z_2) + b(z_2) a(z_1)_- \,. \end{equation*} Their \emph{propagator} is: \begin{equation*} P( a, b; z_1,z_2) = [a(z_1)_- , b(z_2)] = a(z_1) b(z_2) - {:} a(z_1) b(z_2) {:} \,. \end{equation*} \end{definition} Just like for usual quantum fields (see, e.g., \cite{K2}), it is easy to check that ${:} a(z_1) b(z_2) {:}$ is well defined for $z_1=z_2$ and ${:} a(z) b(z) {:}$ is again a logarithmic field. \begin{proposition}\label{ptlogf5} Let\/ $a$ and\/ $b$ be two local logarithmic fields, and\/ $N$ be from\/ \eqref{logf7}. Then for\/ $0\le n\le N-1$ and\/ $k\ge0$, we have$:$ \begin{align*} (a_{(n)}b)(z) &= D_{z_1}^{(N-1-n)} \bigl(z_{12}^N \, P( a, b; z_1,z_2) \bigr)\big|_{ z_1=z_2=z} \,, \\ (a_{(-k-1)}b)(z) &= {:} \bigl( D_z^{(k)} a(z) \bigr) b(z) {:} + D_{z_1}^{(N+k)} \bigl(z_{12}^N \, P( a, b; z_1,z_2) \bigr)\big|_{ z_1=z_2=z} \,. \end{align*} \end{proposition} \begin{proof} The proof is a straightforward calculation using \begin{equation*} z_{12}^N \, a(z_1) b(z_2) = z_{12}^N \, {:} a(z_1) b(z_2) {:} + z_{12}^N \, P( a, b; z_1,z_2) \end{equation*} and the fact that ${:} a(z_1) b(z_2) {:}$ is well defined for $z_1=z_2$. \end{proof} \subsection{Local collections of logarithmic fields}\label{sloccol} The identity operator $I$ is local with any other logarithmic field $a$, and satisfies \begin{equation*} a_{(n)}I = 0 \,, \quad a_{(-n-1)}I = D_z^{(n)} a \,, \qquad n\ge 0\,. \end{equation*} Let $\mathcal{W}\subset\LF(W)$ be a \emph{local collection}, i.e., such that every pair $a,b\in\mathcal{W}$ is local. We can add $I$ to $\mathcal{W}$ and still have a local collection. 
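Under the intended identification $\zeta=\log z$, the derivations $D_z=\partial_z+z^{-1}\partial_\zeta$ and $D_\zeta=z\partial_z+\partial_\zeta$ act as the total derivatives $d/dz$ and $z\,d/dz$. A small sympy check on a typical logarithmic monomial (the test function $\zeta^2 z^\gamma$ is an arbitrary choice):

```python
import sympy as sp

z, zeta, gamma = sp.symbols('z zeta gamma')

def D_z(F):
    """D_z = d/dz + z^{-1} d/dzeta on functions of (z, zeta)."""
    return sp.diff(F, z) + sp.diff(F, zeta) / z

def D_zeta(F):
    """D_zeta = z d/dz + d/dzeta on functions of (z, zeta)."""
    return z * sp.diff(F, z) + sp.diff(F, zeta)

F = zeta**2 * z**gamma  # a typical logarithmic monomial

# After substituting zeta -> log z, D_z acts as the honest d/dz:
lhs = D_z(F).subs(zeta, sp.log(z))
rhs = sp.diff(F.subs(zeta, sp.log(z)), z)
assert sp.simplify(sp.expand(lhs - rhs)) == 0

# ... and D_zeta acts as z d/dz:
assert sp.simplify(sp.expand(D_zeta(F).subs(zeta, sp.log(z)) - z * rhs)) == 0
```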
If a pair $(a,b)$ is local, then $(D_\zeta a,b)$ is also local; thus the $\mathbb{C}[D_\zeta]$-module generated by $\mathcal{W}$ is again local. Due to \leref{llogf4}, the smallest subspace $\bar\mathcal{W}\subset\LF(W)$ containing $\mathcal{W}\cup\{I\}$ and closed under $D_\zeta$ and all $n$-th products is a local collection. Similarly to \cite{Li1,Li2}, we have the following result. \begin{theorem}\label{tlogf5} The $n$-th products endow the space\/ $\bar\mathcal{W}$ with the structure of a vertex algebra with a vacuum vector $I$ and translation operator\/ $D_z$. \end{theorem} \begin{proof} The state-field correspondence $Y\colon\bar\mathcal{W}\to\QF(\bar\mathcal{W})$ is given by \begin{equation*} Y(a,x)b = \sum_{n\in\mathbb{Z}} x^{-n-1} (a_{(n)}b) \,, \qquad a,b\in\bar\mathcal{W} \,, \end{equation*} where the formal variable is now denoted by $x$, and $a_{(n)}b$ is again defined by \eqref{logf9}. Due to the already established properties of the $n$-th products, it only remains to prove that $Y(a,x)$ and $Y(b,x)$ are local for $a,b\in\bar\mathcal{W}$. When we need to specify the formal variable in the fields $a$ and $b$, we will write $Y$ as \begin{equation*} Y\bigl(a(z),x\bigr)b(z) = \sum_{n\in\mathbb{Z}} x^{-n-1} (a_{(n)}b)(z) \,. \end{equation*} It follows immediately from \eqref{logf9} that for all $v\in W$, \begin{equation*} \bigl( Y\bigl(a(z),x\bigr)b(z) \bigr) v = x^{-N} e^{x D_{z_1}} \bigl( z_{12}^N \, a(z_1) b(z_2)v \bigr)\big|_{z_1=z_2=z} \,, \end{equation*} where, as before, $N$ is such that the locality \eqref{logf7} holds. For brevity, through the rest of the proof we will omit the vector $v$. Consider $a,b,c\in\bar\mathcal{W}$, and take $N$ to be an even number such that \eqref{logf7} holds for the pairs $(a,b)$, $(a,c)$ and $(b,c)$. By the proof of \leref{llogf4}, for any $k\le N-1$, the pair $(a,b_{(k)}c)$ satisfies \eqref{logf7} with $N$ replaced by $2N+k'$ where $k'=N-1-k\ge0$. 
Therefore, \begin{align*} Y\bigl(& a(z), x_1\bigr) (b_{(k)}c)(z) \\ &= x_1^{-2N-k'} e^{x_1 D_{z_1}} \Bigl( z_{13}^{2N+k'} D_{z_2}^{(k')} \bigl( z_{23}^N \, a(z_1) b(z_2) c(z_3) \bigr) \Bigr) \Big|_{z_1=z_2=z_3=z} \,. \end{align*} Summing over $k$, we obtain: \begin{align*} & x_1^{2N} x_2^{N} \, Y\bigl(a(z), x_1\bigr) Y\bigl(b(z), x_2\bigr) c(z) \\ &= \sum_{k'=0}^\infty \Bigl(\frac{x_2}{x_1}\Bigr)^{k'} e^{x_1 D_{z_1}} \Bigl( z_{13}^{2N+k'} D_{z_2}^{(k')} \bigl( z_{23}^N \, a(z_1) b(z_2) c(z_3) \bigr) \Bigr) \Big|_{z_1=z_2=z_3=z} \,. \end{align*} Notice that here $z_{13}^{2N+k'}$ can be replaced by $z_{13}^{N} z_{12}^{N+k'}$. Consider the linear operator \begin{equation*} A_2 = \sum_{k'=0}^\infty \Bigl(\frac{x_2}{x_1}\Bigr)^{k'} z_{12}^{k'} \, D_{z_2}^{(k')} = \Bigl(1-\frac{x_2}{x_1}\Bigr)^{z_{21} D_{z_2}} \,, \end{equation*} where we applied the well-known identity $z^k\partial_{z}^{(k)} = \binom{z\partial_z}{k}$. Then \begin{equation*} z_{12}^{N} \, A_2 = \Bigl(1-\frac{x_2}{x_1}\Bigr)^{-N} A_2 \circ z_{12}^{N} \,, \end{equation*} and we have \begin{align*} x_1^{2N} & x_2^{N} \, Y\bigl(a(z), x_1\bigr) Y\bigl(b(z), x_2\bigr) c(z) \\ &= e^{x_1 D_{z_1}} \bigl( z_{13}^{N} \, z_{12}^{N} \, A_2 \bigl( z_{23}^N \, a(z_1) b(z_2) c(z_3) \bigr) \bigr) \big|_{z_1=z_2=z_3=z} \\ &= \Bigl(1-\frac{x_2}{x_1}\Bigr)^{-N} e^{x_1 D_{z_1}} A_2 \bigl( z_{13}^{N} \, z_{12}^{N} \, z_{23}^N \, a(z_1) b(z_2) c(z_3) \bigr) \big|_{z_1=z_2=z_3=z}\,. \end{align*} Now observe that \begin{equation*} e^{x_1 D_{z_1}} A_2 = \sum_{k=0}^\infty \Bigl(\frac{x_2}{x_1}\Bigr)^{k} (x_1+z_{12})^{k} \, D_{z_2}^{(k)} e^{x_1 D_{z_1}} \end{equation*} becomes $e^{x_2 D_{z_2}} e^{x_1 D_{z_1}}$ after setting $z_1=z_2$. Therefore, \begin{align*} x_1^{N} & x_2^{N} (x_1-x_2)^N \, Y\bigl(a(z), x_1\bigr) Y\bigl(b(z), x_2\bigr) c(z) \\ &= e^{x_1 D_{z_1} + x_2 D_{z_2}} \bigl( z_{13}^{N} \, z_{12}^{N} \, z_{23}^N \, a(z_1) b(z_2) c(z_3) \bigr) \big|_{z_1=z_2=z_3=z}\,. 
\end{align*} This implies the locality of $Y(a,x)$ and $Y(b,x)$, thus completing the proof of the theorem. \end{proof} It follows from \eqref{logf9} and $[D_\zeta,D_z]=-D_z$ that \begin{equation*} D_\zeta(a_{(n)}b) = (D_\zeta a)_{(n)}b + a_{(n)} (D_\zeta b) + (n+1)(a_{(n)}b) \,. \end{equation*} Hence, $e^{2\pi\mathrm{i} D_\zeta}$ is an automorphism of the vertex algebra $\bar\mathcal{W}$. It acts exactly as the monodromy operator around $0$, sending $\zeta$ to $\zeta+2\pi\mathrm{i}$ and $z^\gamma$ to $e^{2\pi\mathrm{i}\gamma} z^\gamma$. Note that $e^{2\pi\mathrm{i} D_\zeta} = e^{2\pi\mathrm{i} z\partial_z} e^{2\pi\mathrm{i} \partial_\zeta}$ and $e^{2\pi\mathrm{i} z\partial_z} \in\Aut(\bar\mathcal{W})$, $\partial_\zeta\in\Der(\bar\mathcal{W})$. \section{Definition of twisted modules}\label{s4} From now on, $V$ will be a vertex algebra and $\varphi$ an automorphism of $V$, which is not necessarily of finite order. In this section, we introduce the notion of a $\varphi$-twisted $V$-module and establish some of its basic properties. We continue to use the notation from \seref{s3}. \subsection{$\varphi$-twisted modules}\label{sphtwm} The following is the main object of the paper. \begin{definition}\label{dltwlog2} A \emph{$\varphi$-twisted $V$-module} is a vector space $W$, equipped with a linear map $Y\colon V\to\LF(W)$ such that $Y(\vac)=I$ is the identity operator, $Y(V)$ is a local collection, \begin{equation}\label{twlog2} Y(\varphi a,z) = e^{2\pi\mathrm{i} D_\zeta} Y(a, z) \,, \end{equation} and \begin{equation}\label{twlog3} Y(a_{(n)}b,z)=Y(a,z)_{(n)} Y(b,z) \end{equation} for all $a,b\in V$, $n\in\mathbb{Z}$. We will call \eqref{twlog2} the \emph{$\varphi$-equivariance}, and \eqref{twlog3} the \emph{$n$-th product identity}. 
\end{definition} \begin{remark}\label{rtwlog1} Y.-Z.~Huang has introduced in \cite{H} a notion of a $\varphi$-twisted $V$-module $W$, which is more restrictive than ours (in particular, it assumes certain gradings of $V$ and $W$). One can show that every $\varphi$-twisted module in the sense of \cite{H} satisfies our definition. Conversely, as will be indicated below, some assumptions of \cite[Definition 3.1]{H} also hold in our case. \end{remark} \begin{remark}\label{rtwlog2} S.-Q.~Liu, D.~Yang, and Y.~Zhang have introduced in \cite{LYZ} a notion of a $\varphi$-twisted $V$-module, which has some similarities to ours but also important differences. In particular, it involves vectors in $V\otimes\mathbb{C}^d$ and a certain $d\times d$ matrix associated to a Frobenius manifold of dimension~$d$. \end{remark} As a consequence of \eqref{twlog3} and $Ta=a_{(-2)}\vac$, we have (cf.\ \cite{H}): \begin{equation}\label{twlog3t} Y(Ta,z)=D_z Y(a,z) \,, \qquad a\in V\,. \end{equation} Eqs.\ \eqref{twlog2}, \eqref{twlog3} can be stated equivalently as saying that $Y\colon V\to\bar\mathcal{W}$ is a vertex algebra homomorphism compatible with the automorphisms $\varphi$ and $e^{2\pi\mathrm{i} D_\zeta}$, where $\bar\mathcal{W}=Y(V)$ (see \thref{tlogf5}). \begin{example}\label{etwlog} Let $W$ be a vector space, $\mathcal{W}\subset\LF(W)$ a local collection, $\bar\mathcal{W}$ be the vertex algebra generated by $\mathcal{W}$, and $\varphi=e^{2\pi\mathrm{i} D_\zeta} \in\Aut(\bar\mathcal{W})$ (see \thref{tlogf5}). Then the identity map $Y\colon\bar\mathcal{W}\to\LF(W)$, $Y(a,z)=a(z)$, provides $W$ with the structure of a $\varphi$-twisted $\bar\mathcal{W}$-module (cf.\ \cite{Li1,Li2}). \end{example} \begin{remark}\label{rtwlog3} The space $V^\varphi$ of $\varphi$-invariants (i.e., $a\in V$ such that $\varphi a=a$) is a subalgebra of $V$. The restriction of any $\varphi$-twisted $V$-module to $V^\varphi$ is an (untwisted) $V^\varphi$-module. 
\end{remark} \subsection{Locally finite automorphisms}\label{slocfin} A linear operator $\varphi$ on $V$ is called \emph{locally finite} if every $a\in V$ is contained in some finite-dimensional $\varphi$-invariant subspace of $V$ (see \cite[Chapter 3]{K1}). In particular, this holds when $V$ is a direct sum of finite-dimensional $\varphi$-invariant subspaces, as is assumed in \cite{H}. A linear operator $\mathcal{N}$ on $V$ is called \emph{locally nilpotent} if for every $a\in V$ we have $\mathcal{N}^l a=0$ for some $l\ge1$. The next lemma is standard. \begin{lemma}\label{ltwlog1} Every invertible locally finite linear operator\/ $\varphi$ can be written uniquely in the form\/ $\varphi=\sigma e^{-2\pi\mathrm{i}\mathcal{N}}$, where\/ $\sigma$ is semisimple, $\mathcal{N}$ is locally nilpotent and\/ $\sigma\mathcal{N}=\mathcal{N}\sigma$. Furthermore, if\/ $\varphi\in\Aut(V)$, then\/ $\sigma\in\Aut(V)$ and\/ $\mathcal{N}\in\Der(V)$. \end{lemma} \begin{proof} Fix $a\in V$ and a finite-dimensional subspace $U\subset V$ such that $a\in U$ and $\varphi(U)\subset U$. Then the restriction $\varphi|_U$ has the desired decomposition (Jordan--Chevalley decomposition). If we have another such subspace $U'\supset U$, it will give rise to the same $\sigma$ and $\mathcal{N}$ when restricted to $U$. Therefore, $\sigma$ and $\mathcal{N}$ are uniquely defined and are independent of the choice of $U$. Let $\varphi\in\Aut(V)$, and $a,b\in V$ be such that $(\varphi-\lambda)^l a = (\varphi-\mu)^l b = 0$ for some $\lambda,\mu\in\mathbb{C}$ and $l\ge1$. Then $\sigma a = \lambda a$ and $\sigma b = \mu b$. 
The identity \begin{equation*} \varphi\otimes\varphi-\lambda\otimes\mu = (\varphi-\lambda)\otimes\varphi +\lambda\otimes(\varphi-\mu) \end{equation*} then implies that \begin{equation*} (\varphi-\lambda\mu)^{m} (a_{(n)}b) = \sum_{k=0}^{m} \binom{m}{k} \bigl( (\varphi-\lambda)^k \lambda^{m-k} a \bigr)_{(n)} \bigl( \varphi^k (\varphi-\mu)^{m-k} b \bigr) =0 \end{equation*} for $m\ge 2l-1$. Therefore, $\sigma(a_{(n)}b) = \lambda\mu(a_{(n)}b)$ and $\sigma\in\Aut(V)$. To prove that $\mathcal{N}\in\Der(V)$, consider the expression \begin{equation*} e^{2\pi\mathrm{i} x\mathcal{N}} (a_{(n)}b) - (e^{2\pi\mathrm{i} x\mathcal{N}} a)_{(n)} (e^{2\pi\mathrm{i} x\mathcal{N}} b) \,, \end{equation*} which is a polynomial in $x$. This polynomial vanishes at all $k\in\mathbb{Z}$, since $(\varphi\sigma^{-1})^k \in\Aut(V)$. Taking $\partial_x$ at $x=0$, we obtain that $\mathcal{N}\in\Der(V)$. \end{proof} We will say that $\varphi$ is \emph{locally finite on} $a\in V$ if there is a finite-dimensional subspace $U\subset V$ such that $a\in U$ and $\varphi(U)\subset U$. \begin{lemma}\label{lltwlog2} The set\/ $\bar V$ of all\/ $a\in V$, on which\/ $\varphi$ is locally finite, is the maximal\/ $\varphi$-invariant subspace\/ $\bar V\subset V$ such that the restriction\/ $\varphi|_{\bar V}$ is locally finite. Moreover, $\bar V$ is a subalgebra of\/ $V$. \end{lemma} \begin{proof} This follows easily from the definitions. Indeed, let $U$ and $U'$ be finite-dimensional $\varphi$-invariant subspaces such that $a\in U$, $b\in U'$. Then $a+\lambda b\in U+U'$ and $a_{(n)} b \in U_{(n)} U'$ for all $\lambda\in\mathbb{C}$, $n\in\mathbb{Z}$. \end{proof} \subsection{Consequences of local finiteness}\label{sconlf} Consider again an automorphism $\varphi$ of $V$ and a $\varphi$-twisted $V$-module $W$. Let $\bar V\subset V$ be the maximal subalgebra on which $\varphi$ is locally finite (see \leref{lltwlog2}). 
Write $\varphi|_{\bar V}=\sigma e^{-2\pi\mathrm{i}\mathcal{N}}$, as in \leref{ltwlog1}, where $\sigma\in\Aut(\bar V)$ is semisimple and $\mathcal{N}\in\Der(\bar V)$ is locally nilpotent. \begin{lemma}\label{lltwlog2b} For all\/ $a\in\bar V$, we have \begin{equation}\label{twlog2a} Y(\sigma a,z) = e^{2\pi\mathrm{i} z\partial_z} Y(a, z) \,, \qquad Y(\mathcal{N} a,z) = -\partial_\zeta Y(a, z) \,. \end{equation} \end{lemma} \begin{proof} This follows from the uniqueness of $\sigma$ and $\mathcal{N}$ from \leref{ltwlog1}, since the semisimple part of $e^{2\pi\mathrm{i} D_\zeta}$ is $e^{2\pi\mathrm{i} z\partial_z}$ and the corresponding locally nilpotent operator is $-\partial_\zeta$. \end{proof} In particular, since $\mathcal{N}$ is locally nilpotent, we see from \eqref{twlog2a} that each logarithmic field $Y(a,z)$ is a polynomial in $\zeta$ for $a\in\bar V$ (cf.\ \cite{H}). Introduce the linear map $z^\mathcal{N}$ from $\bar V$ to $\bar V[\zeta]$, given by \begin{equation*} z^\mathcal{N} a = e^{\zeta\mathcal{N}} a \in \bar V[\zeta] \,, \qquad a\in \bar V \,, \end{equation*} and let \begin{equation*} X(a,z) = Y(z^\mathcal{N} a,z) \,, \quad Y(a,z) = X(z^{-\mathcal{N}} a,z) \,, \qquad a\in \bar V \,. \end{equation*} Note that, since $\mathcal{N}\in\Der(\bar V)$, we have \begin{equation}\label{twlog7} z^\mathcal{N}(a_{(n)}b) = (z^\mathcal{N} a)_{(n)} (z^\mathcal{N} b) \,, \qquad a,b\in \bar V \,, \;\; n\in\mathbb{Z}\,. \end{equation} \begin{lemma}\label{lltwlog3} With the above notation, we have\/ $X(a,z) = Y(a,z)|_{\zeta=0}$ for all\/ $a\in \bar V$. Furthermore, $X(\bar V)$ is a local collection of fields. \end{lemma} \begin{proof} Using \eqref{twlog2a}, we find \begin{align*} \partial_\zeta X(a,z) &= Y(\mathcal{N}z^\mathcal{N} a,z) +\partial_\zeta Y(a',z)\big|_{a'=z^\mathcal{N} a} \\ &= Y(\mathcal{N}z^\mathcal{N} a,z) - Y(\mathcal{N} a',z)\big|_{a'=z^\mathcal{N} a} = 0 \,.
\end{align*} Then $X(a,z) = X(a,z)|_{\zeta=0} = Y(a,z)|_{\zeta=0}$. Setting $\zeta_1=\zeta_2=0$ in \eqref{logf7}, we see that all the fields $X(a,z)$ are local. \end{proof} \begin{remark}\label{rtwlog4} The kernel $V^\mathcal{N} \subset\bar V$ of $\mathcal{N}$ is a subalgebra of $V$. The restriction of any $\varphi$-twisted $V$-module to $V^\mathcal{N}$ is a $\sigma$-twisted $V^\mathcal{N}$-module. \end{remark} \subsection{$\mathcal{D}$-twisted modules}\label{ssintw} When $\mathcal{N}$ is not locally nilpotent, we might not be able to exponentiate it. However, \eqref{twlog2a} still makes sense, suggesting the following more general notion, which we plan to investigate in the future. \begin{definition}\label{dltwlog3} Let $V$ be a vertex algebra and $\mathcal{D}\in\Der(V)$. A \emph{$\mathcal{D}$-twisted $V$-module} is a vector space $W$, equipped with a linear map \begin{equation*} Y\colon V\to \Hom_\mathbb{C} \bigl( W, W[[\zeta]](\!(z)\!) \bigr) \end{equation*} such that \begin{equation*} Y(\mathcal{D} a,z) = -\partial_\zeta Y(a,z) \,, \qquad a\in V \,, \end{equation*} $Y(\vac)=I$ is the identity operator, $Y(V)$ is a local collection, and the $n$-th product identity \eqref{twlog3} holds. \end{definition} Observe that, in comparison to \deref{dltwlog2}, here we allow the logarithmic fields $Y(a,z)$ to have coefficients that are formal power series in $\zeta$. However, all results of \seref{s3} still hold in this case. When $\mathcal{D}$ is locally finite, then $\varphi=e^{-2\pi\mathrm{i}\mathcal{D}} \in\Aut(V)$ is locally finite and the notion of a $\mathcal{D}$-twisted module is equivalent to that of a $\varphi$-twisted module. \begin{remark}\label{rtwlog5} For every $\mathcal{D}\in\Der(V)$, the kernel $V^\mathcal{D}$ of $\mathcal{D}$ is a subalgebra of $V$. The restriction of any $\mathcal{D}$-twisted $V$-module to $V^\mathcal{D}$ is an (untwisted) $V^\mathcal{D}$-module.
\end{remark} \section{Borcherds identity for twisted modules}\label{s5} In this section, we derive a Borcherds identity for twisted modules and, as a consequence, a commutator formula. We prove that the Borcherds identity can replace the locality and $n$-th product identity in the definition of a twisted module. \subsection{Modes in a twisted module}\label{smodes} Throughout this section, $V$ will be a vertex algebra, $\varphi$ a locally finite automorphism of $V$, and $W$ a $\varphi$-twisted $V$-module. We will again write $\varphi=\sigma e^{-2\pi\mathrm{i}\mathcal{N}}$ with commuting semisimple $\sigma\in\Aut(V)$ and locally nilpotent $\mathcal{N}\in\Der(V)$ (see \seref{sconlf}). In particular, $\bar V=V$ as $\varphi$ is locally finite. We will denote by \begin{equation}\label{twlog4} V_\alpha = \{ a\in V \,|\, \sigma a = e^{-2\pi\mathrm{i}\alpha} a \} \,, \qquad \alpha\in\mathbb{C}/\mathbb{Z} \,, \end{equation} the eigenspaces of $\sigma$. Then by \eqref{twlog2a} we have $Y(V_\alpha)\subset \LF_\alpha(W)$, which means that all powers of $z$ in $Y(a,z)$ belong to the coset $-\alpha$ for $a\in V_\alpha$. \begin{definition}\label{dltwlog4} For $a\in V_\alpha$, $\alpha\in\mathbb{C}/\mathbb{Z}$ and $m\in\alpha$, the \emph{$(m+\mathcal{N})$-th mode} of $a$ is defined as \begin{equation*} a_{(m+\mathcal{N})} = \Res_z z^m X(a,z) = \Res_z Y(z^{m+\mathcal{N}} a,z) \in \End(W) \,, \end{equation*} where $z^{m+\mathcal{N}} = z^m e^{\zeta\mathcal{N}}$. \end{definition} Since $\mathcal{N}(V_\alpha)\subset V_\alpha$ and $X(a,z)$ is independent of $\zeta$, we have \begin{equation*} X(a,z) = \sum_{m\in\alpha} a_{(m+\mathcal{N})} z^{-m-1} \,, \qquad a\in V_\alpha \,.
\end{equation*} We can recover the field $Y(a,z)$ from the modes of $\mathcal{N}^l a$ $(l\ge0)$ as follows: \begin{equation}\label{twlog10} \begin{split} Y(a,z) &= X(e^{-\zeta\mathcal{N}} a,z) = \sum_{m\in\alpha} (e^{-\zeta\mathcal{N}} a)_{(m+\mathcal{N})} z^{-m-1} \\ &= \sum_{m\in\alpha} (z^{-m-1-\mathcal{N}} a)_{(m+\mathcal{N})} \,, \qquad a\in V_\alpha \,. \end{split} \end{equation} Note that for every $a\in V_\alpha$, $m\in\alpha$ and $v\in W$, there is an integer $L$ such that \begin{equation*} a_{(m+i+\mathcal{N})} v = 0 \quad\text{for all}\quad i\in\mathbb{Z} \,, \; i\ge L \,. \end{equation*} \subsection{Borcherds identity}\label{stwbor} Now we can derive the main identity satisfied by the modes. \begin{theorem}\label{tltwlog5} Let\/ $V$ be a vertex algebra, $\varphi$ a locally finite automorphism of\/ $V$, and\/ $W$ a\/ $\varphi$-twisted\/ $V$-module. Then we have the \emph{Borcherds identity} \begin{equation}\label{twlog11} \begin{split} \sum_{i=0}^\infty & (-1)^i \binom{n}{i} a_{(m+n-i+\mathcal{N})}(b_{(k+i+\mathcal{N})}v) \\ -\sum_{i=0}^\infty & (-1)^{n+i} \binom{n}{i} b_{(k+n-i+\mathcal{N})}(a_{(m+i+\mathcal{N})}v) \\ &= \sum_{j=0}^\infty \Bigl( \Bigl( \binom{m+\mathcal{N}}{j} a \Bigr)_{(n+j)}b \Bigr)_{(m+k-j+\mathcal{N})}v \,, \end{split} \end{equation} for\/ $a\in V_\alpha$, $b\in V_\beta$, $v\in W$, and\/ $m\in\alpha$, $k\in\beta$, $n\in\mathbb{Z}$. \end{theorem} \begin{proof} Notice that all sums in \eqref{twlog11} are finite. Let $N$ be such that $(\mathcal{N}^l a)_{(j)}b=0$ for all $l\ge0$, $j\ge N$. Then \begin{equation}\label{twlog12a} z_{12}^N \, Y(a,z_1) Y(b,z_2) v = z_{12}^N \, Y(b,z_2) Y(a,z_1) v \,, \end{equation} and let us denote both sides by $F(a,b;z_1,z_2)$.
Using the expansions \eqref{vert14} and \eqref{vert15}, we compute for $n\le N-1$: \begin{equation}\label{twlog12} \begin{split} \iota_{z_1,z_2} z_{12}^n \, & Y(a,z_1) Y(b,z_2) v - \iota_{z_2,z_1} z_{12}^n \, Y(b,z_2) Y(a,z_1) v \\ &= F(a,b;z_1,z_2) \, (\iota_{z_1,z_2} - \iota_{z_2,z_1}) z_{12}^{n-N} \\ &= F(a,b;z_1,z_2) \, \partial_{z_2}^{(N-1-n)} \delta(z_1,z_2) \,. \end{split} \end{equation} Let us now replace in this equation $a$ with $z_1^{m+\mathcal{N}} a$ and $b$ with $z_2^{k+\mathcal{N}} b$ to get \begin{equation}\label{twlog12x} \begin{split} \iota_{z_1,z_2} z_{12}^n \, & z_1^m z_2^k X(a,z_1) X(b,z_2) v - \iota_{z_2,z_1} z_{12}^n \, z_1^m z_2^k X(b,z_2) X(a,z_1) v \\ &= F(z_1^{m+\mathcal{N}} a,z_2^{k+\mathcal{N}} b;z_1,z_2) \, \partial_{z_2}^{(N-1-n)} \delta(z_1,z_2) \,. \end{split} \end{equation} If we then take $\Res_{z_1} \Res_{z_2}$ of the left-hand side of \eqref{twlog12x}, we will obtain the left-hand side of \eqref{twlog11}, for any $n\in\mathbb{Z}$. By the property \eqref{vert16} of the delta function, if we take $\Res_{z_1}$ of the right-hand side of \eqref{twlog12x}, we will get \begin{align*} \partial_{z_1}^{(N-1-n)} F(z_1^{m+\mathcal{N}} a, z_2^{k+\mathcal{N}} b;z_1,z_2) \Big|_{z_1=z_2} \,. \end{align*} In this formula, we can replace $\partial_{z_1}$ with $D_{z_1}$, because $F(z_1^{m+\mathcal{N}} a, z_2^{k+\mathcal{N}} b;z_1,z_2)$ is independent of $\zeta_1$. Then using the Leibniz rule, \eqref{logf9}, \eqref{twlog3} and \eqref{twlog7}, we obtain: \begin{align*} \sum_{j=0}^{N-1-n} & \, D_{z_1}^{(N-1-n-j)} F\Bigl( \binom{m+\mathcal{N}}{j} z_2^{m-j+\mathcal{N}} a , z_2^{k+\mathcal{N}} b;z_1,z_2 \Bigr) \Big|_{z_1=z_2} \\ &= \sum_{j=0}^{N-1-n} Y\Bigl(\Bigl( \binom{m+\mathcal{N}}{j} z_2^{m-j+\mathcal{N}} a\Bigr)_{(n+j)} \bigl( z_2^{k+\mathcal{N}} b \bigr),z_2 \Bigr) v \\ &= \sum_{j=0}^{N-1-n} Y\Bigl(z_2^{m+k-j+\mathcal{N}} \Bigl( \Bigl( \binom{m+\mathcal{N}}{j} a\Bigr)_{(n+j)} b \Bigr),z_2 \Bigr) v \,.
\end{align*} Now taking $\Res_{z_2}$ gives exactly the right-hand side of \eqref{twlog11}. This proves \eqref{twlog11} in the case $n\le N-1$. When $n\ge N$, the left-hand side of \eqref{twlog12} is $0$. The right-hand side of \eqref{twlog11} is also obviously $0$ for $n\ge N$. \end{proof} For $\alpha\in\mathbb{C}/\mathbb{Z}$, introduce the shifted delta function (cf.\ \cite{BK1}): \begin{equation}\label{twlog13} \begin{split} \delta_{\alpha+\mathcal{N}}(z_1,z_2) &= \sum_{m\in\alpha} z_1^{-m-1-\mathcal{N}} z_2^{m+\mathcal{N}} \\ &= z_1^{-m} z_2^{m} \delta(z_1,z_2) \, e^{(\zeta_2-\zeta_1)\mathcal{N}} \,, \qquad m\in\alpha \,, \end{split} \end{equation} where $e^{(\zeta_2-\zeta_1)\mathcal{N}}$ is a linear map from $V$ to $V[\zeta_1,\zeta_2]$. \begin{proposition}\label{pltwlog6} The Borcherds identity\/ \eqref{twlog11} is equivalent to the equation \begin{equation}\label{twlog14} \begin{split} \iota_{z_1,z_2} z_{12}^n \, & Y(a,z_1) Y(b,z_2) v - \iota_{z_2,z_1} z_{12}^n \, Y(b,z_2) Y(a,z_1) v \\ &= \sum_{j=0}^\infty Y \Bigl( \bigl( D_{z_2}^{(j)} \delta_{\alpha+\mathcal{N}}(z_1,z_2) a \bigr)_{(n+j)} b,z_2 \Bigr) v \end{split} \end{equation} for\/ $a\in V_\alpha$, $b\in V$, $v\in W$ and\/ $n\in\mathbb{Z}$. \end{proposition} \begin{proof} In the proof of \thref{tltwlog5} we saw that we can get the left-hand side of \eqref{twlog11} from the left-hand side of \eqref{twlog14} if we replace $a$ with $z_1^{m+\mathcal{N}} a$, $b$ with $z_2^{k+\mathcal{N}} b$ and then take $\Res_{z_1} \Res_{z_2}$. Conversely, we can go back by summing over all $m,k$. Similarly, the right-hand side of \eqref{twlog14}, which is equal to \begin{equation*} \sum_{j=0}^\infty \sum_{m\in\alpha} Y\Bigl(\Bigl( \binom{m+\mathcal{N}}{j} z_1^{-m-1-\mathcal{N}} z_2^{m-j+\mathcal{N}} a\Bigr)_{(n+j)} b,z_2 \Bigr) v \,, \end{equation*} corresponds to the right-hand side of \eqref{twlog11}. 
\end{proof} \begin{remark}\label{rtwlog6} The Borcherds identity \eqref{twlog14} remains true without the assumption that $\varphi$ is locally finite. However, it requires that $a\in V_\alpha\subset\bar V$, so $\varphi$ is locally finite on $a$. \end{remark} Next, we show that the Borcherds identity can replace the locality and $n$-th product identity in the definition of a $\varphi$-twisted module. \begin{proposition}\label{pltwlog7} Let\/ $V$ be a vertex algebra, $\varphi$ a locally finite automorphism, $W$ a vector space, and\/ $Y\colon V\to\LF(W)$ a linear map satisfying the\/ $\varphi$-equivariance\/ \eqref{twlog2} and the Borcherds identity\/ \eqref{twlog14}. Then\/ $W$ is a\/ $\varphi$-twisted\/ $V$-module. \end{proposition} \begin{proof} The proof follows by reversing the proofs of \thref{tltwlog5} and \prref{pltwlog6}. Fix $a,b\in V$, and let $N\ge0$ be such that $(\mathcal{N}^l a)_{(j)} b = 0$ for all $l\ge0$, $j\ge N$. Then setting $n=N$ in \eqref{twlog14}, we obtain the locality \eqref{twlog12a} for all $v\in W$. Note that \eqref{twlog3} is trivial for $n\ge N$. Suppose $n\le N-1$ and denote both sides of \eqref{twlog12a} again by $F(a,b;z_1,z_2)$. By \eqref{twlog12} and \eqref{twlog14}, \begin{align*} F(a,b;z_1,z_2) \, \partial_{z_2}^{(N-1-n)} \delta(z_1,z_2) = \sum_{j=0}^{N-1-n} Y \Bigl( \bigl( D_{z_2}^{(j)} \delta_{\alpha+\mathcal{N}}(z_1,z_2) a \bigr)_{(n+j)} b,z_2 \Bigr) v \,. \end{align*} Now if we replace $a$ with $z_1^{m+\mathcal{N}} a$, where $a\in V_\alpha$, $m\in\alpha$, we will have only integral powers of $z_1$ and no dependence on $\zeta_1$. Then take $\Res_{z_1}$ to obtain \begin{align*} \partial_{z_1}^{(N-1-n)} \, F(z_1^{m+\mathcal{N}} a,b;z_1,z_2) \Big|_{z_1=z_2} = \sum_{j=0}^{N-1-n} Y \Bigl( \bigl( D_{z_2}^{(j)} (z_2^{m+\mathcal{N}}) a \bigr)_{(n+j)} b,z_2 \Bigr) v \,. 
\end{align*} By the Leibniz rule, the left-hand side is equal to \begin{align*} \sum_{j=0}^{N-1-n} D_{z_1}^{(N-1-n-j)} \, F\bigl( D_{z_2}^{(j)} (z_2^{m+\mathcal{N}}) a,b;z_1,z_2\bigr) \Big|_{z_1=z_2} \,. \end{align*} Then by induction on $N-1-n$, it follows that \begin{align*} D_{z_1}^{(N-1-n)} F(a,b;z_1,z_2) \Big|_{z_1=z_2} = Y (a_{(n)} b,z_2) \,, \end{align*} which is exactly \eqref{twlog3}. \end{proof} \subsection{Commutator formulas}\label{scomf} Setting $n=0$ in the Borcherds identity~\eqref{twlog11}, we obtain the \emph{commutator formula} \begin{equation}\label{twlog15} \bigl[ a_{(m+\mathcal{N})}, b_{(k+\mathcal{N})} \bigr] = \sum_{j=0}^\infty \Bigl( \Bigl( \binom{m+\mathcal{N}}{j} a \Bigr)_{(j)}b \Bigr)_{(m+k-j+\mathcal{N})} \,, \end{equation} where $a\in V_\alpha$, $b\in V_\beta$, $m\in\alpha$, $k\in\beta$. Similarly, from \eqref{twlog14} we have: \begin{equation}\label{twlog16} \bigl[ Y(a,z_1), Y(b,z_2) \bigr] = \sum_{j=0}^\infty Y \Bigl( \bigl( D_{z_2}^{(j)} \delta_{\alpha+\mathcal{N}}(z_1,z_2) a \bigr)_{(j)} b,z_2 \Bigr) \end{equation} for all $a\in V_\alpha$ and $b\in V$. Extracting the coefficient of $z_1^{-m-1-\mathcal{N}} a$, we deduce another useful formula: \begin{equation}\label{twlog17} \bigl[ a_{(m+\mathcal{N})}, Y(b,z) \bigr] = \sum_{j=0}^\infty Y \Bigl( \bigl( \binom{m+\mathcal{N}}{j} z^{m-j+\mathcal{N}} a \bigr)_{(j)} b,z \Bigr) \,. \end{equation} As in \reref{rtwlog6}, equations \eqref{twlog16} and \eqref{twlog17} hold without the assumption that $\varphi$ is locally finite, but they require $a\in V_\alpha \subset\bar V$. From \eqref{twlog16} we can derive a formula for the propagator $P( a, b; z_1,z_2)$ of $Y(a,z_1)$ and $Y(b,z_2)$ (see \seref{sloccol}). Recall that $\alpha_0\in\alpha$ is such that $-1<\mathrm{Re}\,\alpha_0\le0$. 
\begin{lemma}\label{lltwlog8} For any\/ $a\in V_\alpha$ and\/ $b\in V$, we have \begin{equation*} z_{12}^N \, P( a, b; z_1,z_2) = \sum_{j=0}^{N-1} \sum_{i=0}^j z_{12}^{N-1-i} \, Y\Bigl( \bigl( D_{z_2}^{(j-i)} z_1^{-\alpha_0-\mathcal{N}} z_2^{\alpha_0+\mathcal{N}} a \bigr)_{(j)} b,z_2 \Bigr) \,, \end{equation*} where\/ $N$ is such that\/ $(\mathcal{N}^l a)_{(j)}b=0$ for all\/ $l\ge0$, $j\ge N$. \end{lemma} \begin{proof} By comparing the powers of $z_1$ in \eqref{twlog16}, we obtain \begin{equation*} P( a, b; z_1,z_2) = \sum_{j=0}^\infty Y \Bigl( \bigl( D_{z_2}^{(j)} \iota_{z_1,z_2} z_{12}^{-1} z_1^{-\alpha_0-\mathcal{N}} z_2^{\alpha_0+\mathcal{N}} a \bigr)_{(j)} b,z_2 \Bigr) \,, \end{equation*} using \eqref{vert14}, \eqref{vert15} and \eqref{twlog13}. In this equation, the sum over $j$ goes only up to $j=N-1$. Then we apply the Leibniz rule and multiply by $z_{12}^N$ to finish the proof. \end{proof} The next result is useful for constructing $\varphi$-twisted modules. \begin{proposition}\label{pltwlog9} Let\/ $V$ be a vertex algebra, $\varphi$ an automorphism, $W$ a vector space, and\/ $Y\colon V\to\LF(W)$ a linear map satisfying the $\varphi$-equivariance\/ \eqref{twlog2} and the commutator formula\/ \eqref{twlog16} for\/ $a\in V_\alpha$, $b\in V$. Then the logarithmic fields\/ $Y(a,z)$ and\/ $Y(b,z)$ are local, and the\/ $n$-th product identity\/ \eqref{twlog3} holds for\/ $a\in V_\alpha$, $b\in V$ and all\/ $n\ge0$. \end{proposition} \begin{proof} As before, let $N\ge0$ be such that $(\mathcal{N}^l a)_{(j)} b = 0$ for all $l\ge0$, $j\ge N$. Then in \eqref{twlog16} the sum over $j$ goes only up to $j=N-1$. Using $z_{12} \,\delta_{\alpha+\mathcal{N}}(z_1,z_2) = 0$, we derive from \eqref{twlog16} the locality \eqref{logf7} of $a(z)=Y(a,z)$ and $b(z)=Y(b,z)$. By definition, their $n$-th product is $0$ for $n\ge N$. To find it for $0\le n\le N-1$, we apply \prref{ptlogf5} and \leref{lltwlog8}. Let us use the convention that $x^{(k)}=0$ for $k<0$.
Then \begin{align*} Y&(a,z)_{(n)} Y(b,z) \\ &= D_{z_1}^{(N-1-n)} \sum_{i,j=0}^{N-1} z_{12}^{N-1-i} \, Y\Bigl( \bigl( D_{z_2}^{(j-i)} z_1^{-\alpha_0-\mathcal{N}} z_2^{\alpha_0+\mathcal{N}} a \bigr)_{(j)} b,z_2 \Bigr) \Big|_{ z_1=z_2=z} \,. \end{align*} For a fixed $j$, we calculate using the Leibniz rule: \begin{align*} \sum_{i=0}^{N-1} & D_{z_1}^{(N-1-n)} \bigl( z_{12}^{N-1-i} \bigl( D_{z_2}^{(j-i)} z_1^{-\alpha_0-\mathcal{N}} z_2^{\alpha_0+\mathcal{N}} \bigr) \bigr) \big|_{ z_1=z_2=z} \\ &= \sum_{i=0}^{N-1} D_{z_1}^{(i-n)} D_{z_2}^{(j-i)} \bigl( z_1^{-\alpha_0-\mathcal{N}} z_2^{\alpha_0+\mathcal{N}} \bigr) \big|_{ z_1=z_2=z} \\ &= D_{z}^{(j-n)} \bigl( z^{-\alpha_0-\mathcal{N}} z^{\alpha_0+\mathcal{N}} \bigr) = \delta_{j,n} \,. \end{align*} Therefore, $Y(a,z)_{(n)} Y(b,z) = Y(a_{(n)}b,z)$. \end{proof} As another application of \leref{lltwlog8}, we obtain a formula relating the $(-1)$-st product with the normally ordered product given by \deref{dlogf4} (cf.\ \cite[(3.13)]{BK1}). \begin{lemma}\label{lltwlog11} In every\/ $\varphi$-twisted\/ $V$-module, we have \begin{equation*} {:} Y(a,z) Y(b,z) {:} = \sum_{j=-1}^{N-1} z^{-j-1} \, Y \Bigl( \bigl( \binom{\alpha_0+\mathcal{N}}{j+1} a \bigr)_{(j)} b,z \Bigr) \end{equation*} for\/ $a\in V_\alpha$ and\/ $b\in V$. \end{lemma} \begin{proof} We proceed as in the proof of \prref{pltwlog9} for $n=-1$. We calculate for a fixed $0\le j\le N-1$: \begin{align*} \sum_{i=0}^{N-1} & D_{z_1}^{(N)} \bigl( z_{12}^{N-1-i} \bigl( D_{z_2}^{(j-i)} z_1^{-\alpha_0-\mathcal{N}} z_2^{\alpha_0+\mathcal{N}} \bigr) \bigr) \big|_{ z_1=z_2=z} \\ &= \sum_{i=0}^{j} D_{z_1}^{(i+1)} D_{z_2}^{(j-i)} \bigl( z_1^{-\alpha_0-\mathcal{N}} z_2^{\alpha_0+\mathcal{N}} \bigr) \big|_{ z_1=z_2=z} \\ &= D_{z}^{(j+1)} \bigl( z^{-\alpha_0-\mathcal{N}} z^{\alpha_0+\mathcal{N}} \bigr) - D_{z_1}^{(0)} D_{z_2}^{(j+1)} \bigl( z_1^{-\alpha_0-\mathcal{N}} z_2^{\alpha_0+\mathcal{N}} \bigr) \big|_{ z_1=z_2=z} \\ &= - \binom{\alpha_0+\mathcal{N}}{j+1} z^{-j-1} \,.
\end{align*} The rest of the proof follows again from \prref{ptlogf5} and \leref{lltwlog8}. \end{proof} \subsection{Action of the Virasoro algebra}\label{svirac} In this subsection, we assume that the vertex algebra $V$ is \emph{conformal}, i.e., there exist $\omega\in V$ (conformal vector) and $c\in\mathbb{C}$ (central charge) such that (see, e.g., \cite{K2}): \begin{equation*} \omega_{(0)} = T \,, \quad \omega_{(1)} \omega = 2\omega \,, \quad \omega_{(2)} \omega = \frac{c}2 \vac \,, \end{equation*} and $\omega_{(j)} \omega = 0$ for $j\ge 3$. Then the modes $L_n = \omega_{(n+1)}$ give a representation of the Virasoro Lie algebra on $V$ with central charge $c$. In addition, it is usually assumed that the operator $L_0$ is semisimple on~$V$. Consider a (not necessarily locally finite) $\varphi\in\Aut(V)$ such that $\varphi(\omega)=\omega$, and a $\varphi$-twisted $V$-module $W$. Then the field $Y(\omega,z)$ on $W$ has only integral powers of $z$ and no $\zeta$, and its modes \begin{equation}\label{twlog19} Y(\omega,z) = \sum_{n\in\mathbb{Z}} L_n^W z^{-n-2} \,, \qquad L_n^W \in\End(W) \,, \end{equation} give a representation of the Virasoro algebra on $W$ with the same central charge $c$. Applying the commutator formula \eqref{twlog17}, we obtain: \begin{align*} [L_{-1}^W, Y(a,z) ] &= Y(Ta,z) = D_z Y(a,z) \,, \\ [L_{0}^W, Y(a,z) ] &= z Y(Ta,z) + Y(L_0 a,z) \,, \qquad a\in V\,. \end{align*} Then from $z D_z=D_\zeta$, we have \begin{equation*} [L_{0}^W, Y(a,z) ] = (D_\zeta+\Delta) Y(a,z) \,, \quad\text{if}\quad L_0 a=\Delta a \,. \end{equation*} \begin{remark}\label{rtwlog8} Assume that $L_0$ is semisimple on $V$, but $\varphi$ is not semisimple. Then $L_{0}^W$ is not semisimple on $W$. Thus, $W$ is an (untwisted) $V^\varphi$-module with a non-semisimple action of $L_{0}^W$, also known as a \emph{logarithmic module} (see \cite{AM}). \end{remark} \begin{lemma}\label{lltwlog10} Let\/ $W$ be a\/ $\varphi$-twisted\/ $V$-module.
Assume that the operator\/ $L_0$ is semisimple on\/ $V$ with integral eigenvalues, and the operator\/ $e^{2\pi\mathrm{i} L_{0}^W}$ is well defined on\/ $W$. Then \begin{equation*} e^{2\pi\mathrm{i} L_{0}^W} Y(a,z) e^{-2\pi\mathrm{i} L_{0}^W} = e^{2\pi\mathrm{i} D_\zeta} Y(a,z) = Y(\varphi a,z) \end{equation*} when acting on\/ $W$, for every\/ $a\in V$. \end{lemma} \begin{proof} Indeed, \begin{align*} e^{2\pi\mathrm{i} L_{0}^W} Y(a,z) e^{-2\pi\mathrm{i} L_{0}^W} = e^{2\pi\mathrm{i} \ad(L_{0}^W) } Y(a,z) = e^{2\pi\mathrm{i} (D_\zeta+\Delta) } Y(a,z) \end{align*} when $L_0 a=\Delta a$, and $e^{2\pi\mathrm{i}\Delta}=1$ because $\Delta\in\mathbb{Z}$. \end{proof} \leref{lltwlog10} can be used to define $e^{2\pi\mathrm{i} L_{0}^W}$ on the whole $W$, provided it can be defined on a set of generators of $W$ as a $\varphi$-twisted $V$-module; in particular, when these generators are eigenvectors of $L_{0}^W$. This can be used to define a grading of $W$ as in \cite[Definition 3.1]{H}. \section{Twisted modules of affine and Heisenberg vertex algebras}\label{s6} In this section, we describe all twisted modules of affine and Heisenberg vertex algebras in terms of modules over certain twisted versions of the corresponding Lie algebras. We also determine the action of the Virasoro algebra. For the Heisenberg vertex algebra, all twisted irreducible highest-weight modules are constructed explicitly. \subsection{Universal affine vertex algebras}\label{aff} Let us first recall the definition of affine Lie algebras, following \cite{K1}. Consider a finite-dimensional Lie algebra ${\mathfrak{g}}$ equipped with a nondegenerate symmetric invariant bilinear form $(\cdot|\cdot)$, normalized so that the square length of a long root is $2$ in the case when ${\mathfrak{g}}$ is simple.
The \emph{affine Lie algebra} $\hat{\mathfrak{g}} = {\mathfrak{g}}[t,t^{-1}] \oplus \mathbb{C} K$ has the Lie brackets \begin{equation}\label{aff1} [at^m,bt^n] = [a,b]t^{m+n} + m \delta_{m,-n} (a|b) K \,, \qquad [K,at^m]=0 \,. \end{equation} For a fixed $\kappa\in\mathbb{C}$, called the \emph{level}, the (generalized) \emph{Verma module} $M(\kappa\Lambda_0) = \Ind^{\hat{\mathfrak{g}}}_{{\mathfrak{g}}[t]\oplus\mathbb{C} K} \mathbb{C}$ is defined by letting ${\mathfrak{g}}[t]$ act trivially on $\mathbb{C}$ and $K$ act as $\kappa$. Then $M(\kappa\Lambda_0)$ is a highest-weight $\hat{\mathfrak{g}}$-module whose highest-weight vector, denoted $\vac$, is the image of $1\in\mathbb{C}$. Notice that as a vector space, $M(\kappa\Lambda_0) \cong U({\mathfrak{g}}[t^{-1}]t^{-1})$. By \cite{FZ}, $M(\kappa\Lambda_0)$ has the structure of a vertex algebra, which is called the \emph{universal affine vertex algebra} at level $\kappa$ and is denoted $V^\kappa({\mathfrak{g}})$. It has a vacuum vector $\vac$ and is generated by the local fields \begin{equation*} Y((at^{-1})\vac,z) = \sum_{m\in\mathbb{Z}} (at^m) z^{-m-1} \,, \qquad a\in{\mathfrak{g}} \end{equation*} (see, e.g., \cite{K2} for more details). For simplicity of notation, let us identify $a\in{\mathfrak{g}}$ with $(at^{-1})\vac\in V^\kappa({\mathfrak{g}})$; then $a_{(m)} = at^m$ as operators on $V^\kappa({\mathfrak{g}})$. By the commutator formula \eqref{vert11}, the Lie brackets \eqref{aff1} are equivalent to the relations \begin{equation}\label{aff3} a_{(0)} b = [a,b] \,, \quad a_{(1)} b = (a|b) \kappa\vac \,, \quad a_{(j)} b = 0 \quad (j\ge2) \end{equation} for $a,b\in{\mathfrak{g}}$. Now suppose that ${\mathfrak{g}}$ is simple or abelian, and let $h^\vee$ be the dual Coxeter number of ${\mathfrak{g}}$ in the case when it is simple. When ${\mathfrak{g}}$ is abelian, we set $h^\vee=0$. Then the vertex algebra $V^\kappa({\mathfrak{g}})$ is conformal for $\kappa\neq -h^\vee$.
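For example (a standard computation, see \cite{K2}), if ${\mathfrak{g}}=\mathfrak{sl}_2$ with the normalized form $(a|b)=\tr(ab)$, then $h^\vee=2$ and $\dim{\mathfrak{g}}=3$, so $V^\kappa(\mathfrak{sl}_2)$ is conformal for $\kappa\neq-2$, with Virasoro central charge
\begin{equation*}
c = \frac{3\kappa}{\kappa+2} \,;
\end{equation*}
in particular, $c=1$ at level $\kappa=1$.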
Pick dual bases $\{v_i\}$ and $\{v^i\}$ for ${\mathfrak{g}}$ with respect to $(\cdot|\cdot)$. The conformal vector $\omega\in V^\kappa({\mathfrak{g}})$ is given by the \emph{Sugawara construction} \begin{equation}\label{aff4} \omega = \frac1{2(\kappa+h^\vee)} \sum_{i=1}^{\dim{\mathfrak{g}}} v^i_{(-1)} v_i \,, \qquad \kappa\neq -h^\vee \end{equation} (see, e.g., \cite{K2}). The Virasoro central charge is $c=\kappa\dim{\mathfrak{g}} / (\kappa+h^\vee)$. The operator $L_0$ satisfies $L_0\vac=0$, $L_0 a=a$ $(a\in{\mathfrak{g}})$, and it defines a $\mathbb{Z}_+$-grading of $V^\kappa({\mathfrak{g}})$ by its eigenvalues. \subsection{$\varphi$-twisted modules of $V^\kappa({\mathfrak{g}})$}\label{sphtwv} From now on, $\varphi$ will be an automorphism of ${\mathfrak{g}}$ such that $(\cdot|\cdot)$ is $\varphi$-invariant. Then $\varphi$ induces automorphisms of $\hat{\mathfrak{g}}$ and $V=V^\kappa({\mathfrak{g}})$, which we will again denote as $\varphi$. Notice that $\varphi(\omega)=\omega$. Since the eigenspaces of $L_0$ in $V$ are finite-dimensional and $\varphi$-invariant, $\varphi$ is locally finite on $V$. Writing again $\varphi=\sigma e^{-2\pi\mathrm{i}\mathcal{N}}$, we have $\sigma\in\Aut({\mathfrak{g}})$, $\mathcal{N}\in\Der({\mathfrak{g}})$, and \begin{equation}\label{aff5} (\sigma a| \sigma b) = (a|b) \,, \qquad (\mathcal{N} a|b) + (a|\mathcal{N} b) = 0 \,. \end{equation} As before, we denote the eigenspaces of $\sigma$ by \begin{equation*} {\mathfrak{g}}_\alpha=\{a\in{\mathfrak{g}} \, | \,\sigma a = e^{-2\pi\mathrm{i}\alpha} a\} \,, \qquad \alpha\in\mathbb{C}/\mathbb{Z} \,. \end{equation*} If $W$ is a $\varphi$-twisted $V$-module, then by \eqref{twlog15} and \eqref{aff3}, we have: \begin{equation}\label{aff7} \bigl[ a_{(m+\mathcal{N})}, b_{(k+\mathcal{N})} \bigr] = [a,b]_{(m+k+\mathcal{N})} + \delta_{m,-k} ((m+\mathcal{N})a|b) \kappa I \end{equation} for $a\in{\mathfrak{g}}_\alpha$, $b\in{\mathfrak{g}}_\beta$, $m\in\alpha$, $k\in\beta$.
Hence, the modes $a_{(m+\mathcal{N})}$ span a Lie algebra, which can be described as follows (cf.\ \cite[Chapter 8]{K1}). Let $\tilde{\mathfrak{g}} = \bigoplus_{\alpha\in\mathbb{C}/\mathbb{Z}} {\mathfrak{g}}[t]t^\alpha$ be the \emph{loop algebra}, whose elements are finite sums of $at^m$ ($a\in{\mathfrak{g}}$, $m\in\mathbb{C}$), with the Lie bracket $[at^m,bt^n] = [a,b]t^{m+n}$. We define an automorphism $\tilde\sigma$ of $\tilde{\mathfrak{g}}$ by $\tilde\sigma(at^m)=e^{2\pi\mathrm{i} m}\sigma(a)t^m$. The subalgebra $\tilde{\mathfrak{g}}_\sigma$ of fixed points under $\tilde\sigma$ is spanned by $at^m$ ($a\in{\mathfrak{g}}_\alpha$, $m\in\alpha$). The loop algebra $\tilde{\mathfrak{g}}$ has a $2$-cocycle $\gamma$ given by \begin{equation*} \gamma(at^m,bt^n) = m\delta_{m,-n} (a|b) = \Res_t (\partial_t(at^m) | bt^n) \,, \end{equation*} which gives rise to a central extension of $\tilde{\mathfrak{g}}$ similar to $\hat{\mathfrak{g}}$ (see \eqref{aff1}). When restricted to $\tilde{\mathfrak{g}}_\sigma$, we obtain the Lie algebra $\hat{\mathfrak{g}}_\sigma=\tilde{\mathfrak{g}}_\sigma\oplus\mathbb{C} K$. It is easy to check that \begin{equation*} \gamma_\mathcal{N}(at^m,bt^n) = \delta_{m,-n} ((m+\mathcal{N})a|b) = \Res_t \bigl( (\partial_t+t^{-1}\mathcal{N})(at^m) \big| bt^n \bigr) \end{equation*} again defines a $2$-cocycle on $\tilde{\mathfrak{g}}$. If we use $\gamma_\mathcal{N}$ instead of $\gamma$, we obtain the Lie algebra $\hat{\mathfrak{g}}_\varphi=\tilde{\mathfrak{g}}_\sigma\oplus\mathbb{C} K$. \begin{definition}\label{daff1} The \emph{$\varphi$-twisted affinization} of ${\mathfrak{g}}$ is the Lie algebra $\hat{\mathfrak{g}}_\varphi$ spanned by a central element $K$ and elements $at^m$ ($a\in{\mathfrak{g}}_\alpha$, $m\in\alpha$), with the Lie bracket \begin{equation}\label{aff10} [at^m,bt^n] = [a,b]t^{m+n} + \delta_{m,-n} ((m+\mathcal{N})a|b) K \,.
\end{equation} \end{definition} \begin{proposition}\label{paff3} When the Lie algebra\/ ${\mathfrak{g}}$ is simple, there exists a Dynkin diagram automorphism\/ $\mu$ of\/ ${\mathfrak{g}}$ such that\/ $\hat{\mathfrak{g}}_\varphi\cong\hat{\mathfrak{g}}_\mu$ is a (possibly twisted) affine Kac--Moody algebra. \end{proposition} \begin{proof} Since ${\mathfrak{g}}$ is simple, every derivation of ${\mathfrak{g}}$ is inner; hence $\mathcal{N}=\ad_y$ for some $y\in{\mathfrak{g}}$. Then \begin{equation*} \Res_t ( t^{-1}\mathcal{N}(at^m) | bt^n ) = \Res_t ( y t^{-1} | [at^m,bt^n] ) \,, \end{equation*} and the cocycle $\gamma_\mathcal{N}$ is equivalent to $\gamma$. Hence, $\hat{\mathfrak{g}}_\varphi\cong\hat{\mathfrak{g}}_\sigma$. We can write $\sigma = \mu e^{2\pi\mathrm{i}\ad_x}$ for some semisimple $x\in{\mathfrak{g}}$ and a Dynkin diagram automorphism $\mu$, so that $\mu$ commutes with $\ad_x$. Then the map $t^{\ad_x}$, defined by $t^{\ad_x}(at^m) = at^{m+p}$ whenever $[x,a]=pa$, is an isomorphism from $\tilde{\mathfrak{g}}_\sigma$ to $\tilde{\mathfrak{g}}_\mu$ (cf.\ \cite[Proposition 8.5]{K1}). It lifts to an isomorphism $\hat{\mathfrak{g}}_\sigma\cong\hat{\mathfrak{g}}_\mu$, since the cocycle \begin{equation*} \Res_t \bigl( \partial_t( t^{\ad_x} at^m) \big| t^{\ad_x} bt^n \bigr) = \Res_t \bigl( (\partial_t + t^{-1} \ad_x) (at^m) \big| bt^n \bigr) = \gamma_{\ad_x} (at^m, bt^n ) \end{equation*} is equivalent to $\gamma$. Finally, $\hat{\mathfrak{g}}_\mu$ is an affine Kac--Moody algebra by \cite[Theorem 8.3]{K1}. \end{proof} A $\hat{\mathfrak{g}}_\varphi$-module $W$ is called \emph{restricted} if for every $a\in{\mathfrak{g}}_\alpha$, $m\in\alpha$, $v\in W$, there is an integer $L$ such that $(at^{m+i}) v = 0$ for all $i\in\mathbb{Z}$, $i\ge L$. For example, every highest-weight $\hat{\mathfrak{g}}_\varphi$-module is restricted (see \cite{K1}). We say that $W$ has \emph{level $\kappa$} if $K$ acts on it as $\kappa I$. Then we have the following correspondence of modules (cf.\ \cite{Li2,KRR}).
\begin{theorem}\label{taff2} Every\/ $\varphi$-twisted\/ $V^\kappa({\mathfrak{g}})$-module is a restricted\/ $\hat{\mathfrak{g}}_\varphi$-module of level\/ $\kappa$ and, conversely, every restricted\/ $\hat{\mathfrak{g}}_\varphi$-module of level\/ $\kappa$ uniquely extends to a\/ $\varphi$-twisted\/ $V^\kappa({\mathfrak{g}})$-module. \end{theorem} \begin{proof} In one direction the statement is obvious from the definitions. Conversely, suppose that $W$ is a restricted $\hat{\mathfrak{g}}_\varphi$-module of level $\kappa$. For $a\in{\mathfrak{g}}_\alpha$, we define the logarithmic field $Y(a,z)\in\LF_\alpha(W)$ by \begin{equation}\label{aff11} Y(a,z) = \sum_{m\in\alpha} z^{-m-1} \bigl( (e^{-\zeta\mathcal{N}}a)t^m \bigr) \,. \end{equation} Then \eqref{twlog2a} holds, which implies the $\varphi$-equivariance \eqref{twlog2}. By \eqref{twlog10}, the modes of $a$ are $a_{(m+\mathcal{N})}=at^m$. Comparing \eqref{aff7} and \eqref{aff10}, we see that the commutator formula \eqref{twlog16} holds for $a\in{\mathfrak{g}}_\alpha$, $b\in{\mathfrak{g}}_\beta$. By \prref{pltwlog9}, the fields $Y(a,z)$ are local and satisfy the $n$-th product identity \eqref{twlog3} for $n\ge0$. Let $\mathcal{W}$ be the local collection $\{Y(a,z)\}_{a\in{\mathfrak{g}}}$, and $\bar\mathcal{W}\subset\LF(W)$ be the vertex algebra generated by it (see \thref{tlogf5}). Since $V^\kappa({\mathfrak{g}}) \cong U({\mathfrak{g}}[t^{-1}]t^{-1})$, the map $Y$ can be extended uniquely to a vertex algebra homomorphism from $V^\kappa({\mathfrak{g}})$ to $\bar\mathcal{W}$. This endows $W$ with the structure of a $\varphi$-twisted $V^\kappa({\mathfrak{g}})$-module. \end{proof} As in \seref{svirac}, the modes of $Y(\omega,z)$ give a representation of the Virasoro Lie algebra on every $\varphi$-twisted $V^\kappa({\mathfrak{g}})$-module $W$. 
To state the explicit formula, let us define a linear operator $\mathcal{S}$ on ${\mathfrak{g}}$ by $\mathcal{S} a= \alpha_0\, a$ for $a\in{\mathfrak{g}}_\alpha$, $\alpha\in\mathbb{C}/\mathbb{Z}$ and $\alpha_0\in\alpha$ such that $-1<\mathrm{Re}\,\alpha_0\le0$. Recall that the normally ordered product of two logarithmic fields is given by \deref{dlogf4}. \begin{lemma}\label{laff5} In every\/ $\varphi$-twisted\/ $V^\kappa({\mathfrak{g}})$-module\/ $W$, we have \begin{equation*} 2(\kappa+h^\vee) Y(\omega,z) = \sum_{i=1}^{\dim{\mathfrak{g}}} {:} X(v^i,z) X(v_i,z) {:} -z^{-1} X( \bar\omega, z ) -z^{-2} \kappa\tr \binom{\mathcal{S}}2 I \,, \end{equation*} where \begin{equation*} \bar\omega = \sum_{i=1}^{\dim{\mathfrak{g}}} [(\mathcal{S}+\mathcal{N})v^i, v_i] \,, \end{equation*} using the notation from\/ \eqref{aff4}. \end{lemma} \begin{proof} Applying \leref{lltwlog11} and \eqref{aff3}, we obtain for $a,b\in{\mathfrak{g}}$: \begin{equation*} \begin{split} Y(a_{(-1)}b,z) &= {:} Y(a,z) Y(b,z) {:} \\ &-z^{-1} Y\bigl( \bigl[(\mathcal{S}+\mathcal{N})a, b\bigr], z \bigr) -z^{-2} \Bigl( \binom{\mathcal{S}+\mathcal{N}}2 a \Big| b \Bigr) \kappa I \,. \end{split} \end{equation*} Then we use this with \eqref{aff4} to find $Y(\omega,z)$. Note that $Y(\omega,z)$ is independent of $\zeta$, because $\varphi\omega=\omega$. Hence, we can set $\zeta=0$ and replace $Y$ with $X$ (see \leref{lltwlog3}). Finally, \begin{equation*} \sum_{i=1}^{\dim{\mathfrak{g}}} \Bigl( \binom{\mathcal{S}+\mathcal{N}}2 v^i \Big| v_i \Bigr) = \tr\binom{\mathcal{S}+\mathcal{N}}2 = \tr\binom{\mathcal{S}}2 \,, \end{equation*} since we can find a basis for ${\mathfrak{g}}$ in which $\mathcal{S}$ is diagonal and $\mathcal{N}$ is strictly upper triangular. \end{proof} Note that we can pick the dual bases for ${\mathfrak{g}}$ so that $v^i\in{\mathfrak{g}}_{\alpha^i}$ and $v_i\in{\mathfrak{g}}_{-\alpha^i}$ for some $\alpha^i\in\mathbb{C}/\mathbb{Z}$.
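The last equality in the proof can be seen directly: since $\mathcal{S}$ and $\mathcal{N}$ commute, we have
\begin{equation*}
\binom{\mathcal{S}+\mathcal{N}}2 = \frac{(\mathcal{S}+\mathcal{N})^2-(\mathcal{S}+\mathcal{N})}{2} = \binom{\mathcal{S}}2 + \mathcal{S}\mathcal{N} + \binom{\mathcal{N}}2 \,,
\end{equation*}
and in a basis in which $\mathcal{S}$ is diagonal and $\mathcal{N}$ is strictly upper triangular, the operators $\mathcal{S}\mathcal{N}$ and $\binom{\mathcal{N}}2$ are strictly upper triangular, hence traceless.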
Moreover, $\bar\omega\in{\mathfrak{g}}_0$ since $\varphi\bar\omega=\bar\omega$. Then from \leref{laff5}, we obtain for the modes \eqref{twlog19}: \begin{equation}\label{aff13o} 2(\kappa+h^\vee) L_n^W = \sum_{i=1}^{\dim{\mathfrak{g}}} \sum_{m\in\alpha^i} {:} (v^i t^m) (v_i t^{n-m}) {:} - \bar\omega t^n - \delta_{n,0} \kappa\tr \binom{\mathcal{S}}2 I \,, \end{equation} for any $n\in\mathbb{Z}$. \subsection{$\varphi$-twisted modules of the Heisenberg vertex algebra}\label{stwheis} Now assume that ${\mathfrak{g}}$ is abelian, and denote it by ${\mathfrak{h}}$ instead of ${\mathfrak{g}}$. The affine Lie algebra $\hat{\mathfrak{h}}$ is called the \emph{Heisenberg Lie algebra} and its irreducible highest-weight module $\mathcal{F} = M(\Lambda_0) = V^1({\mathfrak{h}})$ is known as the (bosonic) \emph{Fock space} or the \emph{Heisenberg vertex algebra}. Explicitly, the Lie bracket in $\hat{\mathfrak{h}}$ is given by \begin{equation}\label{aff10h} [at^m,bt^n] = \delta_{m,-n} ((m+\mathcal{N})a|b) K \,. \end{equation} Note that $V^\kappa({\mathfrak{h}}) \cong V^1({\mathfrak{h}})$ for any $\kappa\ne0$, so we can assume $\kappa=1$ without loss of generality. Let us split $\mathbb{C}$ as a disjoint union of subsets $\mathbb{C}^+$, $\mathbb{C}^-=-\mathbb{C}^+$ and $\{0\}$. We will take \begin{equation*} \mathbb{C}^+ = \{ \gamma\in\mathbb{C} \,|\, \mathrm{Re}\,\gamma>0 \} \cup \{ \gamma\in\mathbb{C} \,|\, \mathrm{Re}\,\gamma=0, \, \mathrm{Im}\,\gamma>0 \} \,. \end{equation*} Then the $\varphi$-twisted affinization $\hat\lieh_\ph$ has a triangular decomposition $\hat\lieh_\ph=\hat\lieh_\ph^- \oplus \hat\lieh_\ph^0 \oplus \hat\lieh_\ph^+$ (direct sum of vector spaces), where \begin{equation*} \hat\lieh_\ph^\pm = \Span\{ at^m \,|\, a\in{\mathfrak{h}}_\alpha, \, \alpha\in\mathbb{C}/\mathbb{Z}, \, m\in\alpha\cap\mathbb{C}^\pm\} \end{equation*} and \begin{equation*} \hat\lieh_\ph^0 = \Span\{ at^0 \,|\, a\in{\mathfrak{h}}_0\} \oplus\mathbb{C} K\,.
\end{equation*} It is clear from \eqref{aff10h} that $\hat\lieh_\ph^\pm$ are abelian subalgebras of $\hat\lieh_\ph$, and $\hat\lieh_\ph^0$ is a finite-dimensional subalgebra satisfying $[\hat\lieh_\ph^0,\hat\lieh_\ph^0] \subset \mathbb{C} K$, $[\hat\lieh_\ph^0,\hat\lieh_\ph^\pm] =\{0\}$. Let $W$ be an $\hat\lieh_\ph$-module. A \emph{highest-weight vector} (also called a \emph{vacuum vector}) in $W$ is $v\in W$ such that $\hat\lieh_\ph^+ v = 0$. All such vectors form an $\hat\lieh_\ph^0$-submodule $R$ of $W$. If $W$ is generated by $R$ as an $\hat\lieh_\ph$-module, we say that $W$ is a highest-weight module. As usual, examples can be constructed as induced modules. Starting from any $\hat\lieh_\ph^0$-module $R$ such that $K=I$, we define the (generalized) \emph{Verma module} \begin{equation*} M_\varphi(R) = \Ind^{\hat\lieh_\ph}_{\hat\lieh_\ph^+\oplus\hat\lieh_\ph^0} R \cong S(\hat\lieh_\ph^-) \otimes_\mathbb{C} R \,, \end{equation*} where $\hat\lieh_\ph^+$ acts trivially on $R$. It is a standard fact that the $\hat\lieh_\ph$-module $M_\varphi(R)$ is irreducible for any irreducible $\hat\lieh_\ph^0$-module $R$ (cf.\ \cite{FLM,KRR}). Therefore, all irreducible highest-weight $\hat\lieh_\ph$-modules have this form. In addition, all of them are restricted, so they give rise to $\varphi$-twisted $\mathcal{F}$-modules. We will present two explicit examples of linear operators $\sigma$ and $\mathcal{N}$ on ${\mathfrak{h}}$ satisfying \eqref{aff5}. Fix a positive integer $\ell$ and $\alpha_0\in\mathbb{C}$ such that $-1<\mathrm{Re}\,\alpha_0\le0$, and set $\lambda=e^{-2\pi\mathrm{i}\,\alpha_0}$.
\begin{example}[$\dim{\mathfrak{h}}=2\ell$]\label{eaff5} Consider a vector space ${\mathfrak{h}}$ with a basis $\{v_1,\dots,v_{2\ell}\}$ such that $(v_i|v_j)=\delta_{i+j,2\ell+1}$ and \begin{equation*} \sigma v_i = \begin{cases} \lambda v_i, \quad\; 1\le i\le\ell \,, \\ \lambda^{-1} v_i, \; \ell+1\le i\le 2\ell \,, \end{cases} \quad \mathcal{N} v_i = \begin{cases} v_{i+1}, \;\;\;\, 1\le i\le\ell-1 \,, \\ -v_{i+1}, \; \ell+1\le i\le 2\ell-1 \,, \\ 0, \qquad\;\, i=\ell, \, 2\ell \,. \end{cases} \end{equation*} Due to the symmetry $v_i \mapsto (-1)^i v_{\ell+i}$, $v_{\ell+i} \mapsto (-1)^{i+\ell+1} v_i$ $(1\le i\le\ell)$, we can assume that $\alpha_0\in\mathbb{C}^-\cup\{0\}$. \end{example} \begin{example}[$\dim{\mathfrak{h}}=2\ell-1$]\label{eaff6} Here $\lambda=\pm1$, so $\alpha_0=0$ or $-1/2$. Define ${\mathfrak{h}}$ as a vector space with a basis $\{v_1,\dots,v_{2\ell-1}\}$ such that $(v_i|v_j)=\delta_{i+j,2\ell}$ and \begin{equation*} \sigma v_i = \lambda v_i, \;\;\; 1\le i\le2\ell-1 \,, \qquad \mathcal{N} v_i = \begin{cases} (-1)^{i+1} v_{i+1}, \; 1\le i\le 2\ell-2 \,, \\ 0, \qquad\qquad\quad i=2\ell-1 \,. \end{cases} \end{equation*} \end{example} \begin{proposition}\label{paff3} Let\/ ${\mathfrak{h}}$ be a finite-dimensional vector space, equipped with a nondegenerate symmetric bilinear form\/ $(\cdot|\cdot)$ and with commuting linear operators\/ $\sigma$, $\mathcal{N}$ satisfying \eqref{aff5}, such that\/ $\sigma$ is invertible and semisimple and $\mathcal{N}$ is nilpotent. Then\/ ${\mathfrak{h}}$ is an orthogonal direct sum of subspaces as in Examples \ref{eaff5} and \ref{eaff6}. \end{proposition} \begin{proof} This follows from the well-known classification, up to conjugation, of orthogonal and skew-symmetric matrices over $\mathbb{C}$ (see \cite{Ga,HM}). \end{proof} In the next two subsections, we will consider separately the above two examples.
In each case, we will describe explicitly the $\varphi$-twisted affinization $\hat{\mathfrak{h}}_\varphi$ and its irreducible highest-weight modules $M_\varphi(R)$. We will also determine the action of the Virasoro algebra using \eqref{aff13o}. Note that in \eqref{aff13o}, we have $\bar\omega=0$ and the normally ordered product is needed only for $n=0$, since ${\mathfrak{h}}$ is abelian. \subsection{The case $\dim{\mathfrak{h}}=2\ell$}\label{stwheis1} First, let ${\mathfrak{h}}$ be as in \exref{eaff5}. Then $\hat{\mathfrak{h}}_\varphi$ is the Lie algebra spanned by a central element $K$ and elements $v_i t^{\alpha_0+n}$, $v_{\ell+i} t^{-\alpha_0+n}$ ($1\le i\le\ell$, $n\in\mathbb{Z}$), with Lie brackets given by \eqref{aff10h}. More explicitly, for $1\le i\le\ell$ and $1\le j\le 2\ell$, we have: \begin{align*} [v_i t^m, v_j t^k] &= m \delta_{m+k,0}\delta_{i+j,2\ell+1} K + \delta_{m+k,0} (1-\delta_{i,\ell}) \delta_{i+j,2\ell} K \,, \\ [v_{\ell+i} t^m, v_j t^k] &= m \delta_{m+k,0}\delta_{i+j,\ell+1} K - \delta_{m+k,0} (1-\delta_{i,\ell}) \delta_{i+j,\ell} K \,. \end{align*} It follows from \eqref{twlog10} that for $1\le j\le\ell$: \begin{align*} Y(v_j,z) &= \sum_{i=j}^\ell \sum_{m\in\alpha_0+\mathbb{Z}} \frac{(-\zeta)^{i-j}}{(i-j)!} \, (v_i t^m) z^{-m-1} \,, \\ Y(v_{\ell+j},z) &= \sum_{i=j}^\ell \sum_{m\in-\alpha_0+\mathbb{Z}} \frac{\zeta^{i-j}}{(i-j)!} \, (v_{\ell+i} t^m) z^{-m-1} \,. \end{align*} The action of the Virasoro algebra is determined by \eqref{aff13o} with $\bar\omega=0$. The dual basis $\{v^i\}$ to the basis $\{v_i\}$ is given by $v^i=v_{2\ell+1-i}$. As we already pointed out, the normally ordered product in \eqref{aff13o} is needed only for $L_0^W$. Therefore, \begin{equation}\label{aff30v} L_k^W = \sum_{i=1}^{\ell} \sum_{n\in\mathbb{Z}} (v_i t^{\alpha_0+n+k}) (v_{2\ell+1-i} t^{-\alpha_0-n}) \,, \qquad k\ne 0\,. \end{equation} The triangular decomposition of $\hat\lieh_\ph$ depends on whether $\alpha_0\in\mathbb{C}^-$ or $\alpha_0=0$. Suppose first that $\alpha_0\in\mathbb{C}^-$.
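In this case the triangular decomposition is explicit: since $-1<\mathrm{Re}\,\alpha_0\le0$ and $\alpha_0\in\mathbb{C}^-$, we have
\begin{equation*}
\alpha_0+n\in\mathbb{C}^+ \;\; (n\ge1) \,, \qquad \alpha_0+n\in\mathbb{C}^- \;\; (n\le0) \,, \qquad -\alpha_0+n\in\mathbb{C}^+ \;\; (n\ge0) \,, \qquad -\alpha_0+n\in\mathbb{C}^- \;\; (n\le-1) \,.
\end{equation*}
Thus $\hat\lieh_\ph^-$ is spanned by the elements $v_i t^{\alpha_0-n}$ and $v_{\ell+i} t^{-\alpha_0-n-1}$ with $1\le i\le\ell$ and $n\ge0$, which act as multiplication operators below, while the modes in $\hat\lieh_\ph^+$ act as first-order differential operators.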
Then $\hat\lieh_\ph^0=\mathbb{C} K$ and $R=\mathbb{C}$ with $K=I$ acting as the identity operator. We have: \begin{equation}\label{aff20} M_\varphi(R) \cong \mathbb{C}[x_{i,0}, x_{j,n}]_{1\le i\le\ell, \,1\le j\le 2\ell, \, n=1,2,3,\dots} \,, \end{equation} where for $1\le i\le\ell$, $n=0,1,2,\dots,$ \begin{equation*} v_i t^{\alpha_0-n} = x_{i,n} \,, \qquad v_{\ell+i} t^{-\alpha_0-n-1} = x_{\ell+i,n+1} \,, \end{equation*} and \begin{align*} v_i t^{\alpha_0+n+1} &= (\alpha_0+n+1) \partial_{x_{2\ell+1-i,n+1}} +(1-\delta_{i,\ell}) \partial_{x_{2\ell-i,n+1}} \,, \\ v_{\ell+i} t^{-\alpha_0+n} &= (-\alpha_0+n) \partial_{x_{\ell+1-i,n}} -(1-\delta_{i,\ell}) \partial_{x_{\ell-i,n}} \,. \end{align*} \begin{lemma}\label{laff6} For\/ $\alpha_0\in\mathbb{C}^-$ and\/ $W=M_\varphi(R)$ as in \eqref{aff20}, we have \begin{align*} L_0^W &= \sum_{i=1}^\ell \sum_{n=0}^\infty x_{i,n} \bigl( (-\alpha_0+n) \partial_{x_{i,n}} -(1-\delta_{i,1}) \partial_{x_{i-1,n}} \bigr) \\ &+ \sum_{i=1}^\ell \sum_{n=1}^\infty x_{\ell+i,n} \bigl( (\alpha_0+n) \partial_{x_{\ell+i,n}} +(1-\delta_{i,1}) \partial_{x_{\ell+i-1,n}} \bigr) \\ &- \frac\ell{2}(\alpha_0^2+\alpha_0) I \,. \end{align*} \end{lemma} \begin{proof} Let us apply \eqref{aff13o} with $v^i=v_{2\ell+1-i}$. We observe that there are only two cases in which the normally ordered product from \deref{dlogf4} differs from the one obtained by placing all $x_{i,n}$ to the left of all $\partial_{x_{i,n}}$. First, for $\ell+1\le i\le 2\ell$, \begin{align*} {:} (v^i t^{\alpha_0}) & (v_i t^{-\alpha_0}) {:} = (v_i t^{-\alpha_0}) (v^i t^{\alpha_0}) \\ &= x_{2\ell+1-i,0} \bigl( (-\alpha_0) \partial_{x_{2\ell+1-i,0}} -(1-\delta_{i,2\ell}) \partial_{x_{2\ell-i,0}} \bigr) -\alpha_0 I \,.
\end{align*} Second, for $\mathrm{Re}\,\alpha_0<0$ and $1\le i\le \ell$, \begin{align*} {:} (v^i t^{-\alpha_0-1}) & (v_i t^{\alpha_0+1}) {:} = (v_i t^{\alpha_0+1}) (v^i t^{-\alpha_0-1}) \\ &= x_{2\ell+1-i,1} \bigl( (\alpha_0+1) \partial_{x_{2\ell+1-i,1}} +(1-\delta_{i,\ell}) \partial_{x_{2\ell-i,1}} \bigr) + (\alpha_0+1) I \,. \end{align*} On the other hand, when $\mathrm{Re}\,\alpha_0<0$, we have \begin{equation*} \mathcal{S} v_i = \alpha_0 v_i \,, \quad \mathcal{S} v_{\ell+i} = (-\alpha_0-1) v_{\ell+i} \,, \qquad 1\le i\le \ell \,, \end{equation*} from which we find $\tr\binom{\mathcal{S}}2 = \ell (\alpha_0^2+\alpha_0+1)$. When $\mathrm{Re}\,\alpha_0=0$, we have \begin{equation*} \mathcal{S} v_i = \alpha_0 v_i \,, \quad \mathcal{S} v_{\ell+i} = -\alpha_0 v_{\ell+i} \,, \qquad 1\le i\le \ell \,, \end{equation*} which gives $\tr\binom{\mathcal{S}}2 = \ell \alpha_0^2$. In both cases, the combined contribution from the difference of the normally ordered products and $\tr\binom{\mathcal{S}}2$ is $- \frac\ell{2}(\alpha_0^2+\alpha_0)$. \end{proof} Now consider the case when $\alpha_0=0$ in \exref{eaff5}. Then $\hat\lieh_\ph^0={\mathfrak{h}} t^0\oplus\mathbb{C} K$ is a direct sum of a finite-dimensional Heisenberg Lie algebra and the central ideal $\Span\{v_\ell t^0,v_{2\ell} t^0\}$. We have the following irreducible $\hat\lieh_\ph^0$-modules $R$ with $K=I$: \begin{equation*} R_{a_1,a_2} = \mathbb{C}[x_{i,0}]_{1\le i\le\ell-1} \qquad (a_1,a_2 \in \mathbb{C}) \,, \end{equation*} where for $1\le i\le\ell-1$, \begin{equation*} v_i t^0 = x_{i,0} \,, \quad v_{\ell+i} t^0 = -\partial_{x_{\ell-i,0}} \,, \quad v_\ell t^0=a_1 I \,, \quad v_{2\ell} t^0=a_2 I \,.
\end{equation*} Then \begin{equation}\label{aff23} M_\varphi(R_{a_1,a_2}) \cong \mathbb{C}[x_{i,0}, x_{j,n}]_{1\le i\le\ell-1, \,1\le j\le 2\ell, \, n=1,2,3,\dots} \,, \end{equation} where for $1\le i\le\ell$ and $n=1,2,3,\dots,$ \begin{align*} v_i t^{-n} &= x_{i,n} \,, \qquad v_{\ell+i} t^{-n} = x_{\ell+i,n} \,, \\ v_i t^{n} &= n \partial_{x_{2\ell+1-i,n}} +(1-\delta_{i,\ell}) \partial_{x_{2\ell-i,n}} \,, \\ v_{\ell+i} t^{n} &= n \partial_{x_{\ell+1-i,n}} -(1-\delta_{i,\ell}) \partial_{x_{\ell-i,n}} \,. \end{align*} We can determine $L_0^W$ as in \leref{laff6}; however, now there is no problem with the normally ordered products because $v^i t^0$ commutes with $v_i t^0$. We obtain for $\alpha_0=0$ and $W=M_\varphi(R_{a_1,a_2})$: \begin{align*} L_0^W &= \sum_{i=1}^\ell \sum_{n=1}^\infty x_{i,n} \bigl( n \partial_{x_{i,n}} -(1-\delta_{i,1}) \partial_{x_{i-1,n}} \bigr) \\ &+ \sum_{i=1}^\ell \sum_{n=1}^\infty x_{\ell+i,n} \bigl( n \partial_{x_{\ell+i,n}} +(1-\delta_{i,1}) \partial_{x_{\ell+i-1,n}} \bigr) \\ &- \sum_{i=2}^{\ell-1} x_{i,0} \partial_{x_{i-1,0}} + a_2 x_{1,0} - a_1 \partial_{x_{\ell-1,0}} \,. \end{align*} \subsection{The case $\dim{\mathfrak{h}}=2\ell-1$}\label{stwheis2} Now let ${\mathfrak{h}}$ be as in \exref{eaff6}. Then $\hat{\mathfrak{h}}_\varphi$ is the Lie algebra spanned by a central element $K$ and elements $v_i t^{\alpha_0+n}$ ($1\le i\le 2\ell-1$, $n\in\mathbb{Z}$), with Lie brackets given by: \begin{equation*} [v_i t^m, v_j t^k] = m \delta_{m+k,0}\delta_{i+j,2\ell} K + (-1)^{i+1} \delta_{m+k,0} \delta_{i+j,2\ell-1} K \,. \end{equation*} It follows from \eqref{twlog10} that for $1\le j\le 2\ell-1$: \begin{equation*} Y(v_j,z) = \sum_{i=j}^{2\ell-1} \sum_{m\in\alpha_0+\mathbb{Z}} (-1)^{(i-j)(i+j-1)/2} \, \frac{\zeta^{i-j}}{(i-j)!} \, (v_i t^m) z^{-m-1} \,. \end{equation*} The action of the Virasoro algebra is again determined by \eqref{aff13o} with $\bar\omega=0$.
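As a consistency check of the signs in the formula for $Y(v_j,z)$, take $\ell=2$ and $j=1$, so that $\mathcal{N}v_1=v_2$, $\mathcal{N}v_2=-v_3$ and $\mathcal{N}v_3=0$. Then
\begin{equation*}
e^{-\zeta\mathcal{N}} v_1 = v_1 - \zeta\,\mathcal{N}v_1 + \frac{\zeta^2}{2}\,\mathcal{N}^2 v_1 = v_1 - \zeta\, v_2 - \frac{\zeta^2}{2}\, v_3 \,,
\end{equation*}
in agreement with the sign factor $(-1)^{(i-j)(i+j-1)/2}=-1$ for $(i,j)=(2,1)$ and $(3,1)$.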
The dual basis $\{v^i\}$ to the basis $\{v_i\}$ is given by $v^i=v_{2\ell-i}$; therefore \begin{equation}\label{aff32v} L_k^W = \frac12 \sum_{i=1}^{2\ell-1} \sum_{n\in\mathbb{Z}} (v_i t^{\alpha_0+n+k}) (v_{2\ell-i} t^{-\alpha_0-n}) \,, \qquad k\ne 0\,. \end{equation} Suppose first that $\alpha_0=-1/2$; then $\hat\lieh_\ph^0=\mathbb{C} K$ and $R=\mathbb{C}$ with $K=I$. We have: \begin{equation}\label{aff27} M_\varphi(R) \cong \mathbb{C}[x_{j,n}]_{1\le j\le 2\ell-1, \, n=0,1,2,\dots} \,, \end{equation} where for $1\le i\le 2\ell-1$ and $n=0,1,2,\dots,$ \begin{align*} v_i t^{-\frac12-n} &= x_{i,n} \,,\\ v_i t^{\frac12+n} &= \Bigl(\frac12+n\Bigr) \partial_{x_{2\ell-i,n}} + (-1)^{i+1} (1-\delta_{i,2\ell-1}) \partial_{x_{2\ell-1-i,n}} \,. \end{align*} As in \leref{laff6}, we find that for $W=M_\varphi(R)$, \begin{align*} L_0^W &= \sum_{i=1}^{2\ell-1} \sum_{n=0}^\infty x_{i,n} \Bigl( \Bigl(\frac12+n\Bigr) \partial_{x_{i,n}} + (-1)^{i+1} (1-\delta_{i,1}) \partial_{x_{i-1,n}} \Bigr) \\ &+ \frac1{16} (2\ell-1) I \,. \end{align*} Now let $\alpha_0=0$. Then $\hat\lieh_\ph^0={\mathfrak{h}} t^0\oplus\mathbb{C} K$ is a direct sum of a finite-dimensional Heisenberg Lie algebra and the central ideal $\Span\{v_{2\ell-1} t^0\}$. The following are irreducible $\hat\lieh_\ph^0$-modules with $K=I$: \begin{equation*} R_a = \mathbb{C}[x_{i,0}]_{1\le i\le\ell-1} \qquad (a\in \mathbb{C}) \,, \end{equation*} where \begin{equation*} v_i t^0 = x_{i,0} \,, \quad v_{\ell-1+i} t^0 = (-1)^{\ell-i} \partial_{x_{\ell-i,0}} \,, \quad v_{2\ell-1} t^0=a I \,, \end{equation*} for $1\le i\le\ell-1$. Then \begin{equation}\label{aff25} M_\varphi(R_a) \cong \mathbb{C}[x_{i,0}, x_{j,n}]_{1\le i\le\ell-1, \, 1\le j\le 2\ell-1, \, n=1,2,3,\dots} \,, \end{equation} where for $1\le i\le 2\ell-1$ and $n=1,2,3,\dots,$ \begin{align*} v_i t^{-n} &= x_{i,n} \,,\\ v_i t^{n} &= n \partial_{x_{2\ell-i,n}} + (-1)^{i+1} (1-\delta_{i,2\ell-1}) \partial_{x_{2\ell-1-i,n}} \,.
\end{align*} For $W=M_\varphi(R_a)$, we have \begin{align*} L_0^W &= \sum_{i=1}^{2\ell-1} \sum_{n=1}^\infty x_{i,n} \bigl( n \partial_{x_{i,n}} + (-1)^{i+1} (1-\delta_{i,1}) \partial_{x_{i-1,n}} \bigr) \\ &+ \sum_{i=2}^{\ell-1} (-1)^{i+1} x_{i,0} \partial_{x_{i-1,0}} + \frac12 \partial_{x_{\ell-1,0}}^2 + a x_{1,0} \,. \end{align*} \begin{remark}\label{raff} After a change of variables, the Virasoro operators \eqref{aff30v}, \eqref{aff32v} with $\alpha_0=a_1=a_2=a=0$ coincide with those of \cite{EHX, EJX} (see also \cite{DZ1, DZ2}). The detailed correspondence will be discussed elsewhere. \end{remark} The following special case of \exref{eaff6} is related to \cite{M}. \begin{example}\label{eaff7} Consider an affine Kac--Moody algebra of type $A_1^{(1)}$, and let ${\mathfrak{h}}$ be its Cartan subalgebra. The dual space ${\mathfrak{h}}^*$ has a basis $\{\alpha_1,\delta,\Lambda_0\}$ and a nondegenerate symmetric bilinear form $(\cdot|\cdot)$ given by: \begin{equation*} (\alpha_1|\alpha_1)=2 \,, \qquad (\delta|\Lambda_0)=(\Lambda_0|\delta)=1 \,, \end{equation*} where all other products of basis vectors are $0$ (see \cite[Chapter 6]{K1}). The affine Weyl group has an element $\varphi=t_{\alpha_1}$, which acts on ${\mathfrak{h}}^*$ by: \begin{equation*} \varphi(\alpha_1)=\alpha_1-2\delta \,, \qquad \varphi(\delta)=\delta \,, \qquad \varphi(\Lambda_0)=\Lambda_0+\alpha_1-\delta \,. \end{equation*} The bilinear form $(\cdot|\cdot)$ is $\varphi$-invariant. Introduce another basis \begin{equation*} v_1=-\frac{2\pi\mathrm{i}}{\sqrt2} \, \Lambda_0 \,, \qquad v_2=\frac{\alpha_1}{\sqrt2} \,, \qquad v_3=-\frac{\sqrt2}{2\pi\mathrm{i}} \, \delta \,, \end{equation*} so that $(v_i|v_j)=\delta_{i+j,4}$. Then $\varphi=e^{-2\pi\mathrm{i}\,\mathcal{N}}$ where $\mathcal{N}$ is the linear operator defined by $\mathcal{N}(v_1)=v_2$, $\mathcal{N}(v_2)=-v_3$ and $\mathcal{N}(v_3)=0$.
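One can check directly that this $\mathcal{N}$ reproduces the action of $\varphi$ on the new basis; for instance,
\begin{equation*}
e^{-2\pi\mathrm{i}\,\mathcal{N}} v_2 = v_2 - 2\pi\mathrm{i}\,\mathcal{N} v_2 = v_2 + 2\pi\mathrm{i}\, v_3 = \frac{\alpha_1-2\delta}{\sqrt2} = \varphi(v_2) \,,
\end{equation*}
and similarly $e^{-2\pi\mathrm{i}\,\mathcal{N}} v_1 = v_1 - 2\pi\mathrm{i}\, v_2 + 2\pi^2 v_3 = \varphi(v_1)$.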
\end{example} \section*{Acknowledgements} This paper was motivated by my joint work \cite{BM} with Todor Milanov and our ongoing collaboration. I would like to thank him for many stimulating discussions. I am grateful to Di Yang for sharing his unpublished manuscript \cite{LYZ}, and to Dra\v{z}en Adamovi\'c and Antun Milas for discussions on logarithmic CFT. This research was supported in part by a Simons Foundation grant. \bibliographystyle{amsalpha}
\section{Introduction} Star formation occurs on the borders of Galactic H\,{\sc{ii}}\ regions. Different physical processes may be at work to trigger this star formation \citep{elmegreen1977,deharveng2010}. This phenomenon has been studied in detail over the past ten years \citep{deharveng2003,zavagno2007,pomares2009,zavagno2010,paron2011,davies2012}. All these studies concentrate on specific Galactic H\,{\sc{ii}}\ regions where young stellar objects (YSOs) are observed on their borders. This phenomenon is also observed in nearby galaxies (see, e.g., the case of N11 in the Large Magellanic Cloud). The {\it Spitzer}\ satellite and the GLIMPSE and MIPSGAL surveys of the Galactic Plane have revealed that we are living in a bubbling galactic disk where H\,{\sc{ii}}\ regions have a clear impact on their environment. A study using {\it Spitzer}\ GLIMPSE and MIPSGAL data combined with ATLASGAL data on 102 bubbles has shown that the star formation triggered by H\,{\sc{ii}}\ regions is an important phenomenon in our Galaxy. Up to 25\% of the bubbles show triggered massive-star formation on their border \citep{deharveng2010}. The {\it Herschel}\ satellite offers a unique opportunity to study the star formation triggered by Galactic H\,{\sc{ii}}\ regions. Thanks to its sensitivity and its large wavelength coverage in the far infrared, {\it Herschel}\ is perfectly suited to the study of the earliest phases of star formation. We are engaged in several guaranteed-time (HOBYS, \citealt{motte2010}; ``Evolution of Interstellar Dust'', \citealt{abergel2010}) and open-time (Hi-GAL, \citealt{molinari2010}) key programs on {\it Herschel}\ that aim to characterize this way of forming stars.
We used PACS and SPIRE photometers to characterize the emission in the $60-500\,\mu{\rm m}$ range towards Galactic H\,{\sc{ii}}\ regions. This allows us to detect and characterize the properties of young stellar objects (YSOs) observed towards these regions. We also used the PACS and SPIRE spectrometers to derive the physical conditions towards these regions. Here we present results of {\it Herschel}\ SPIRE-FTS spectroscopy obtained towards the Galactic H\,{\sc{ii}}\ region RCW~120. This region is among the closest H\,{\sc{ii}}\ regions, with a distance of $\sim1.3\,$kpc \citep{russeil2003}, which, combined with its angular diameter of $\sim7.5'$ \citep{anderson2015}, results in a physical size of $\sim3\,$pc. RCW~120\ is ionized by the single star $CD-38^{\circ}11636$, with a spectral type O6-8V/III according to the latest measurements by \citet{martins2010}. \begin{figure*}[th!] \centering \includegraphics[width=\textwidth]{footprints_new.pdf} \caption{{\it Herschel}\ SPIRE-FTS pointings towards RCW~120: {\it Herschel}\ SPIRE $350\,\mu{\rm m}$ image (background) of RCW~120 on which the SLW (blue) and SSW (yellow) detectors are superimposed. The nine pointings observed are labeled according to their location (see text). } \label{fig-pointings} \end{figure*} Although RCW~120\ has been labeled ``the perfect bubble'' throughout the literature, recent single-dish observations of the lowest CO transitions \citep{anderson2015,torii2015} fail to detect an expanding shell of fore- and background material, which would indicate a 3-D structure. However, \citet{anderson2015} find discrete ``holes'' in the PDR, through which the ionizing radiation is escaping, and \citet{torii2015} propose an explanation of the observed morphology of RCW~120\ through the collision of two clouds, which formed the ionizing star.
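The quoted physical size follows directly from the distance and the angular diameter:
\begin{equation*}
7.5' \simeq 2.2\times10^{-3}\,\mathrm{rad} \,, \qquad 1.3\,\mathrm{kpc} \times 2.2\times10^{-3} \simeq 2.8\,\mathrm{pc} \approx 3\,\mathrm{pc} \,.
\end{equation*}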
Observations at mm and sub-mm wavelengths \citep{zavagno2007,deharveng2009} show a fragmented neutral layer along its PDR, which contains five of the eight mm-condensations found by \citet{zavagno2007}. According to \citet{deharveng2009} the mass of the layer is $\sim2000\,$M$_{\sun}$. The most massive condensation has been resolved into a chain of several Class~I objects \citep{deharveng2009}, likely an example of Jeans instability, while \citet{zavagno2010a} found a massive $8-10\,$M$_{\sun}$ YSO towards this condensation, which they suggest is the first detection of a massive Class~0 object formed by the collect and collapse process on the border of an H\,{\sc{ii}}\ region. \begin{table*}[] \renewcommand{\arraystretch}{1.2} \centering \caption{SPIRE-FTS observations on Operational Day 288 (February 26 2010).} \label{table-obs} \renewcommand{\footnoterule}{} \begin{tabular}{l|cc|cccc} \hline\hline Pointing & RA (J2000) & Dec (J2000) & Obs. ID & Start time (UT) & Int.
time (s) & Repetitions \\ \hline C6 & 17:11:47.89 & $-$38:19:48.2 & 1342191225 & 15:21:40.00 & 1353 & 8 \\ CLay\ & 17:12:04.91 & $-$38:24:03.5 & 1342191226 & 15:44:38.00 & 1353 & 8 \\ C1 & 17:12:08.93 & $-$38:30:43.1 & 1342191227 & 16:07:38.00 & 543 & 2 \\ C8 & 17:12:19.81 & $-$38:34:04.0 & 1342191228 & 16:17:04.00 & 543 & 2 \\ IRS2 & 17:12:20.77 & $-$38:18:21.1 & 1342191229 & 16:26:43.00 & 1353 & 8 \\ HII & 17:12:24.47 & $-$38:27:14.3 & 1342191230 & 16:49:45.00 & 543 & 2 \\ C2 & 17:12:34.15 & $-$38:30:51.9 & 1342191231 & 16:59:11.00 & 543 & 2 \\ IRS1 & 17:12:40.89 & $-$38:27:07.9 & 1342191232 & 17:08:37.00 & 543 & 2 \\ OFF & 17:13:01.35 & $-$38:27:13.3 & 1342191233 & 17:18:03.00 & 1353 & 8 \\ \hline\hline \end{tabular} \tablefoot{Coordinates are for the central pixels of the SLW and SSW arrays (pixels C3 and D4, respectively).} \end{table*} \section{Observations and data reduction} \label{obs} RCW~120\ was observed in spectroscopy with the SPIRE-Fourier Transform Spectrometer (SPIRE-FTS, \citealt{griffin2010}) on 26 February 2010 (OD 288), as part of the \textit{Evolution of Interstellar Dust} key program \citep{abergel2010}\footnote{The reduced data cubes are available on the HESIOD portal (\url{http://idoc-herschel.ias.u-psud.fr/sitools/client-user/})}. The aim of the program is to study the physical conditions that exist in regions of triggered star formation. For this purpose we selected eight representative positions towards RCW~120: towards the ionized gas (pointing HII), towards the photo-dissociation region (PDR) without any condensation or star formation (pointing CLay), and towards condensations with ongoing star formation (see Sec. \ref{sec-pointings}). In RCW~120, nine condensations suspected of harboring ongoing star formation were observed in the mm continuum by \citet{zavagno2007}. For the FTS observations, we selected six such condensations, labeled C1, C2, C6, C8, IRS1 and IRS2.
A position ``off'' the H\,{\sc{ii}}\ region was also observed (pointing OFF). All nine pointings are shown in Fig.~\ref{fig-pointings}. The short wavelength (SSW, $194-320\,\mu{\rm m}$) and long wavelength (SLW, $313-671\,\mu{\rm m}$) arrays of the SPIRE-FTS are overlaid on the $350\,\mu{\rm m}$ SPIRE image of RCW~120, in yellow and blue, respectively. The arrays are composed of a hexagonal grid of $19$ pixels for SLW and $37$ pixels for SSW. The pixel distribution and labels are shown in Fig.~\ref{fig-detectors}. There are two ``dead'' pixels in SSW, pixels F4 and D5, which are thus not shown in Fig.~\ref{fig-pointings} (missing yellow circles). All the observations were performed in the high-resolution mode of the SPIRE-FTS. In general, two scan repetitions were performed, except for positions expected to be fainter, where eight repetitions were taken. An observation summary is given in Table~\ref{table-obs}. \begin{figure}[h] \includegraphics[width=0.242\textwidth]{SLW.pdf} \includegraphics[width=0.242\textwidth]{SSW.pdf} \caption{Layout of the detectors (pixels) on the SSW (right) and SLW (left) arrays of the SPIRE-FTS instrument for our observations, as projected on the sky. The two gray pixels in the SSW array mark dead pixels.} \label{fig-detectors} \end{figure} The data reduction was performed using the {\it Herschel}\ Interactive Processing Environment (HIPE) version 9.0 build 3048 with the calibration tree 9\_1. An iterative spectral line fitting routine was used to extract the line parameters (line center, intensity and associated errors) from the unapodized FTS spectra. The routine used the unapodized instrumental line shape (the classical \textit{sinc} function), since the lines were verified to be unresolved by the FTS. The two scan directions were treated separately.
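Schematically, each unresolved line was therefore modeled on top of the local continuum with the classical \textit{sinc} profile; writing $C$ for the continuum level, $A$ for the line amplitude, $\sigma_0$ for the line center, and $\Delta\sigma$ for the width parameter set by the spectral resolution, the fitted model is of the form
\begin{equation*}
I(\sigma) = C + A\, \frac{\sin\bigl[ \pi(\sigma-\sigma_0)/\Delta\sigma \bigr]}{\pi(\sigma-\sigma_0)/\Delta\sigma} \,,
\end{equation*}
whose first zeros lie at $\sigma_0\pm\Delta\sigma$.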
As a conservative approach, the final retained error is the maximum of the fitting-procedure errors and the uncertainties obtained from the results of the two scan directions. \subsection{Pointings} \label{sec-pointings} The nine pointings were selected on the basis of what was known from the infrared and millimeter study of RCW~120\ by \citet{zavagno2007}. Their $1.3\,$mm continuum emission map revealed the presence of nine condensations, five of which are located on the PDR surrounding the ionized region of RCW~120. The aim of this SPIRE-FTS study was to probe the physical conditions that prevail in the different regions located around RCW~120, regions that sample different evolutionary stages of star formation or different typical media (ionized region, PDR). One FTS pointing was obtained on each of the main condensations (C1, C2, C6, C8 and IRS1). The stellar content of these condensations is described in detail in \citet{zavagno2007} and in \citet{deharveng2009}. The properties of the point sources observed in this region with {\it Herschel}\ PACS and SPIRE are described in \citet{zavagno2010}. An off position (OFF) was obtained to serve as a reference. A position was obtained towards the ionized gas at the center of the ionized region (HII). A pointing was obtained towards the PDR, on the northeastern side of the region (CLay). All these pointings are described in more detail below. C1: the densest and most massive condensation observed on the borders of RCW~120. This region contains a massive Class~0 source revealed with {\it Herschel}\ \citep{zavagno2010} and a chain of young, lower mass stars \citep{deharveng2009}. C2: this condensation contains a bright Class~I source and is, like C1, located on the southern (and densest) border of the PDR. C6: this condensation is located in the northeastern part of the region but not in direct contact with the main ionizing front.
However, a deep H$\alpha$ image shows that the ionized gas leaking from the H\,{\sc{ii}}\ region hits this zone and is in direct contact with it (see Fig.~3 in \citealt{deharveng2009}), possibly influencing the star formation there. This condensation is dense and contains many bright Class~I YSOs. C8: this condensation is located in the southern part of the ionized region and has probably been shaped by the leaking UV field (see Fig.~16 in \citealt{deharveng2009}). This region contains YSOs seen by {\it Herschel}\ and is a site of active star formation. IRS1: this region is located in the western part of the PDR and is labeled condensation 4 in \citet{zavagno2007}. It was renamed IRS1 for the FTS pointing because this region hosts bright IR YSOs observed with \textit{Spitzer}, while the peak of the $1.3\,$mm map was devoid of sources (see Fig. 8 in \citealt{zavagno2007}). This region is of interest because it samples very different physical conditions on a small spatial scale. IRS2: this region is located on the northern part of the PDR. The central pixel position was chosen to match the position of the source IRAS$\,17089-3814$. CLay: with the idea of sampling all the different physical conditions that exist towards RCW~120, we selected this zone, known to be free of both mm condensations and star formation. Therefore this region should be representative of the PDR. At the time of the selection, the only process envisioned to form the layer was the collect and collapse \citep{elmegreen1977}, hence the name Collected Layer (CLay). HII: this pointing is obtained towards the ionized region. Neither star formation nor $1.3\,$mm continuum emission is observed towards this region. \begin{figure} \includegraphics[width=0.47\textwidth]{spectrum_rcw120_lsw_high} \caption{SPIRE-FTS SLW spectra towards the richest pointings, at the position of the central pixels (see Sec.~\ref{sec-res}).
The spectra are normalized to the brightest line in each spectrum. The lines detected are marked and labeled.} \label{fig-spectra-lsw-h} \end{figure} \begin{figure} \includegraphics[width=0.47\textwidth]{spectrum_rcw120_lsw_low} \caption{Same as Fig.~\ref{fig-spectra-lsw-h} but for the poorest pointings.} \label{fig-spectra-lsw-l} \end{figure} \begin{figure} \includegraphics[width=0.47\textwidth]{spectrum_rcw120_ssw} \caption{Same as Fig.~\ref{fig-spectra-lsw-h} but for all SPIRE-FTS SSW pointings.} \label{fig-spectra-ssw} \end{figure} OFF: the off position was chosen by looking at the \textit{Spitzer} mid-IR map of RCW~120. The PACS and SPIRE maps confirm that this is a low-emission region. However, after the analysis we find that this position has similar emission to other parts of the region (e.g., CLay, see Sec. \ref{sec-res}). \begin{figure*} \includegraphics[width = 0.5\textwidth]{spectrum_rcw120_C1} \includegraphics[width = 0.5\textwidth]{spectrum_rcw120_IRS1} \caption{SPIRE-FTS SLW and SSW combined spectra for pointings C1 (left) and IRS1 (right), showing how in some cases the overlap between detector arrays is not perfect (see text).} \label{fig-gap} \end{figure*} \section{Results} \label{sec-res} \subsection{Measured spectra} \label{sec-measured} Each working pixel of the SLW and SSW arrays produces one spectrum; we therefore obtained $486$ spectra. Owing to space constraints, we will only show a few selected spectra in this paper, in Figs.~\ref{fig-spectra-lsw-h}, \ref{fig-spectra-lsw-l} and \ref{fig-spectra-ssw}. These are the spectra at the central pixels of SLW and SSW for each pointing. The central pixels are the only ones whose centers are spatially coincident on the sky, and thus the only ones for which a spectrum covering the whole wavelength range of the FTS can be obtained at a given pointing.
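The count of $486$ spectra follows from the array sizes, excluding the two dead SSW pixels:
\begin{equation*}
9\ \text{pointings} \times (19 + 37 - 2)\ \text{pixels} = 9 \times 54 = 486 \,.
\end{equation*}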
For the other pixels there is some spatial overlap between the SLW and SSW detectors (see Figs.~\ref{fig-pointings}~and~\ref{fig-detectors}), but their centers are shifted relative to each other. The lines detected are marked in Figs.~\ref{fig-spectra-lsw-h} to \ref{fig-spectra-ssw}. These are the main lines observed in the obtained spectra, and the detections are discussed in the next section. The overlap between the SLW and SSW portions of the spectra is not perfect. It can be seen in Fig.~\ref{fig-gap} that in some cases there is a vertical offset between the two portions of the spectra. This offset is due to the morphology of the regions mapped and the complex properties of the SPIRE beam (see, e.g., \citealt{makiwa2013, spinoglio2012, fletcher2012}, and references therein). The flux calibration for the FTS observations uses a Relative Spectral Response Function based on the telescope model emission. By default, it is assumed that the observed source is uniformly extended over the entire beam; any other source morphology (e.g., a compact source) affects the calibration. In our case, the result is that a relatively compact source (e.g., C1) produces an apparently broken continuum, while for an extended source (e.g., IRS1) the detected signals are similar in both detectors. New calibration tools have been developed for semi-extended sources (see \citealt{etxaluze2013, wu2013}). However, as we do not know the morphology of our source (which could vary with frequency), we are not able to use these tools. In Section~\ref{sec:radex} we address the consequences this has on our results. Because the FTS is a Fourier-transform spectrometer, the line profiles have the characteristic \textit{sinc} function shape. This is most noticeable in the strong lines, for example the CO($6-5$) line at pointing IRS1, shown in Figs.~\ref{fig-spectra-lsw-h} and \ref{fig-gap}. The spectral resolution is $0.048\,{\rm cm}^{-1}$ throughout the FTS band \citep{swinyard2010}.
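The sinc instrumental profile can be sketched as follows; this is a minimal Python illustration assuming the first sinc zeros fall one resolution element from the line center (the amplitude and line center below are illustrative values, not fitted ones):

```python
import numpy as np

def fts_sinc_profile(sigma, sigma0, amplitude, resolution=0.048):
    """Sketch of the unapodized FTS line shape: a sinc centered at the
    wavenumber sigma0 (all wavenumbers in cm^-1).

    Assumption: the first zeros of the sinc lie one resolution element
    (0.048 cm^-1 for the SPIRE-FTS) away from the line center.
    """
    # np.sinc(x) = sin(pi*x)/(pi*x), so the profile peaks at sigma0
    # and first crosses zero at sigma0 +/- resolution.
    return amplitude * np.sinc((sigma - sigma0) / resolution)

# Illustrative values only: CO(6-5) lies near 23.07 cm^-1 (433.6 um).
sigma = np.linspace(22.8, 23.3, 2001)
profile = fts_sinc_profile(sigma, 23.07, 1.0)
```

A fit of this profile (or an apodized variant) is the usual way to measure line intensities in FTS spectra.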
\begin{figure} \includegraphics[width=0.5\textwidth]{spectrum_rcw120_ch_all} \caption{Zoom of the normalized CH$^{+}(1-0)$ absorption feature averaged over all pixels for each pointing.} \label{fig-ch+} \end{figure} \subsection{Detected line intensities} \label{sec-lines} \begin{table*} \renewcommand{\arraystretch}{1.2} \caption{Properties of the CO lines detected in emission in the central spectrum of each pointing.} \label{table-co} \scriptsize \centering \begin{tabular}{@{}l@{}@{}c@{ }|*{10}{r@{$\,\pm\,$}l}} \hline\hline & & \multicolumn{20}{|c}{CO} \\ & & \multicolumn{2}{c}{4-3} & \multicolumn{2}{c}{5-4} & \multicolumn{2}{c}{6-5} & \multicolumn{2}{c}{7-6} & \multicolumn{2}{c}{8-7} & \multicolumn{2}{c}{9-8} & \multicolumn{2}{c}{10-9} & \multicolumn{2}{c}{11-10} & \multicolumn{2}{c}{12-11} & \multicolumn{2}{c}{13-12} \\ \hline $\lambda$ & ($\mu$m) & \multicolumn{2}{c}{650.3} & \multicolumn{2}{c}{520.3} & \multicolumn{2}{c}{433.6} & \multicolumn{2}{c}{371.7} & \multicolumn{2}{c}{325.2} & \multicolumn{2}{c}{289.1} & \multicolumn{2}{c}{260.2} & \multicolumn{2}{c}{236.6} & \multicolumn{2}{c}{216.9} & \multicolumn{2}{c}{200.3} \\ E$_{up}$ & (K) & \multicolumn{2}{c}{55.3} & \multicolumn{2}{c}{83} & \multicolumn{2}{c}{116.2} & \multicolumn{2}{c}{154.9} & \multicolumn{2}{c}{199.1} & \multicolumn{2}{c}{248.9} & \multicolumn{2}{c}{304.2} & \multicolumn{2}{c}{365} & \multicolumn{2}{c}{431.3} & \multicolumn{2}{c}{503.1} \\ FWHM & ($''$) & \multicolumn{2}{c}{40.4} & \multicolumn{2}{c}{32.6} & \multicolumn{2}{c}{29.4} & \multicolumn{2}{c}{34.8} & \multicolumn{2}{c}{36.8} & \multicolumn{2}{c}{19.2} & \multicolumn{2}{c}{17.7} & \multicolumn{2}{c}{17.6} & \multicolumn{2}{c}{17} & \multicolumn{2}{c}{16.8} \\ \hline C1 & \multirow{9}{*}{\rotatebox{90}{($10^{-3}\,$erg\,s$^{-1}$\,cm$^{-2}$\,sr$^{-1}$)}} & 2.40 & 0.32 & 6.33 & 0.62 & 16.04 & 0.47 & 25.59 & 1.77 & 37.20 & 0.64 & 51.76 & 3.88 & 72.53 & 4.30 & 77.53 & 4.73 & 66.87 & 5.17 & 39.96 & 5.60 \\ C2 & & 2.15 & 0.27 & 4.24 & 0.23 
& 8.21 & 0.16 & 12.60 & 1.38 & 17.43 & 0.29 & 17.47 & 0.76 & 16.56 & 1.58 & 11.47 & 0.92 & 6.08 & 1.01 & 5.90 & 1.29 \\ C6 & & 0.73 & 0.05 & 1.15 & 0.06 & 1.36 & 0.05 & 1.45 & 0.49 & 1.36 & 0.18 & \multicolumn{2}{c}{\ldots} & \multicolumn{2}{c}{\ldots} & \multicolumn{2}{c}{\ldots} & \multicolumn{2}{c}{\ldots} & \multicolumn{2}{c}{\ldots} \\ C8 & & 0.92 & 0.25 & 1.55 & 0.27 & 2.18 & 0.17 & 2.02 & 0.42 & 1.37 & 0.40 & \multicolumn{2}{c}{\ldots} & \multicolumn{2}{c}{\ldots} & \multicolumn{2}{c}{\ldots} & \multicolumn{2}{c}{\ldots} & \multicolumn{2}{c}{\ldots} \\ IRS1 & & 1.44 & 0.06 & 3.23 & 0.39 & 6.71 & 0.23 & 10.76 & 0.92 & 13.36 & 0.12 & 12.92 & 1.74 & 11.61 & 0.56 & 7.77 & 1.62 & 4.06 & 1.19 & \multicolumn{2}{c}{\ldots} \\ IRS2 & & 0.43 & 0.06 & 0.50 & 0.10 & 0.29 & 0.07 & \multicolumn{2}{c}{\ldots} & \multicolumn{2}{c}{\ldots} & \multicolumn{2}{c}{\ldots} & \multicolumn{2}{c}{\ldots} & \multicolumn{2}{c}{\ldots} & \multicolumn{2}{c}{\ldots} & \multicolumn{2}{c}{\ldots} \\ HII & & 0.37 & 0.05 & \multicolumn{2}{c}{\ldots} & \multicolumn{2}{c}{\ldots} & \multicolumn{2}{c}{\ldots} & \multicolumn{2}{c}{\ldots} & \multicolumn{2}{c}{\ldots} & \multicolumn{2}{c}{\ldots} & \multicolumn{2}{c}{\ldots} & \multicolumn{2}{c}{\ldots} & \multicolumn{2}{c}{\ldots} \\ CLay & & 0.22 & 0.02 & \multicolumn{2}{c}{\ldots} & \multicolumn{2}{c}{\ldots} & \multicolumn{2}{c}{\ldots} & \multicolumn{2}{c}{\ldots} & \multicolumn{2}{c}{\ldots} & \multicolumn{2}{c}{\ldots} & \multicolumn{2}{c}{\ldots} & \multicolumn{2}{c}{\ldots} & \multicolumn{2}{c}{\ldots} \\ OFF & & 0.53 & 0.06 & 0.51 & 0.04 & 0.30 & 0.07 & \multicolumn{2}{c}{\ldots} & \multicolumn{2}{c}{\ldots} & \multicolumn{2}{c}{\ldots} & \multicolumn{2}{c}{\ldots} & \multicolumn{2}{c}{\ldots} & \multicolumn{2}{c}{\ldots} & \multicolumn{2}{c}{\ldots} \\ \hline\hline \end{tabular} \end{table*} In the different spectra we detect the $^{12}$CO\ ladder from the $J=4\rightarrow3$ to the $J=13\rightarrow12$ transitions; the $^{13}$CO\ ladder 
between the $J=5\rightarrow4$ and the $J=8\rightarrow7$ transitions; the [CI] $^{3}P_{1}-^{3}P_{0}$ and $^{3}P_{2}-^{3}P_{1}$ transitions ([CI]($1-0$) and [CI]($2-1$) from now on); and the [NII] $^{3}P_{1}-^{3}P_{0}$ transition ([NII]($1-0$) from now on). Tables \ref{table-co}, \ref{table-13co}, and \ref{table-lines} give the details of the lines detected in emission in each of the central spectra, shown in Figs.~\ref{fig-spectra-lsw-h}, \ref{fig-spectra-lsw-l} and \ref{fig-spectra-ssw}. In the pointings towards a condensation (C1, C2, C6, C8, IRS1, and IRS2) or the PDR (CLay) of RCW~120, the number of lines detected in each pixel correlates with the position of that pixel on the sky, in such a way that the pixels towards the PDR material or the condensation show richer spectra than the pixels ``off'' the PDR or the condensation. This means, for example, that in pointing C1 pixels C1, D1, and E1 show fewer lines than the others. In the other pointings, towards diffuse gas (HII and OFF), the lines detected are mostly the same for all pixels of a given pointing. The only line that is detected in every pixel of every pointing is the [NII]($1-0$) transition. The [CI] transitions are present in almost all the pixels. This is also true for the first three CO transitions detected, except in pointings HII and CLay, where only the lowest CO transition detected, i.e., $J=4\rightarrow3$, is present. The $^{13}$CO lines are detected only in pointings C1, C2, and IRS1. Those pointings are also the only ones with detections of the higher CO transitions. The CH$^{+}(1-0)$ line has also been detected, but in all cases it appears in absorption. This CH$^{+}$ transition is seen in emission in most of the Orion Bar \citep{naylor2010}, while it is seen in absorption in two ultracompact H\,{\sc{ii}}\ (UCH\,{\sc{ii}}) regions located near the Galactic plane \citep{kirk2010}.
Therefore we can expect that the corresponding lines seen in the direction of RCW~120\ are the result of both emission and absorption mechanisms. Because of the low spectral resolution of the FTS these lines are not spectrally resolved and we cannot obtain their equivalent width, thus no quantitative results can be derived from the FTS spectra. Nevertheless, we can see from Fig.~\ref{fig-ch+} that the largest absorption seems to be in the direction of the OFF position, while the pointings towards the PDR of RCW~120\ show the weakest signatures. All this suggests that CH$^{+}(1-0)$ is tracing diffuse gas along the line of sight, while RCW~120\ is in fact emitting in this line, dampening the absorption in the pointings towards the H\,{\sc{ii}}\ region (see, e.g., \citealt{falgarone2010,nagy2013}). \begin{table} \renewcommand{\arraystretch}{1.2} \caption{Same as Table~\ref{table-co} but for $^{13}$CO.} \label{table-13co} \small \centering \begin{tabular}{@{}l@{}@{}c@{ }|*{4}{r@{$\,\pm\,$}l}} \hline\hline & & \multicolumn{8}{|c}{$^{13}$CO} \\ & & \multicolumn{2}{c}{5-4} & \multicolumn{2}{c}{6-5} & \multicolumn{2}{c}{7-6} & \multicolumn{2}{c}{8-7} \\ \hline $\lambda$ & ($\mu$m) & \multicolumn{2}{c}{544.2} & \multicolumn{2}{c}{453.5} & \multicolumn{2}{c}{388.7} & \multicolumn{2}{c}{340.2} \\ E$_{up}$ & (K) & \multicolumn{2}{c}{79.3} & \multicolumn{2}{c}{111.1} & \multicolumn{2}{c}{148.1} & \multicolumn{2}{c}{190.4} \\ FWHM & ($''$) & \multicolumn{2}{c}{32.9} & \multicolumn{2}{c}{30} & \multicolumn{2}{c}{34} & \multicolumn{2}{c}{36.1} \\ \hline C1 & \multirow{9}{*}{\rotatebox{90}{($10^{-3}\,$erg\,s$^{-1}$\,cm$^{-2}$\,sr$^{-1}$)}} & 1.48 & 0.37 & 2.35 & 0.45 & 3.74 & 0.52 & 3.97 & 0.60 \\ C2 & & 0.86 & 0.09 & 1.23 & 0.31 & 1.54 & 0.13 & 0.94 & 0.31 \\ C6 & & \multicolumn{2}{c}{\ldots} & \multicolumn{2}{c}{\ldots} & \multicolumn{2}{c}{\ldots} & \multicolumn{2}{c}{\ldots} \\ C8 & & \multicolumn{2}{c}{\ldots} &
\multicolumn{2}{c}{\ldots} & \multicolumn{2}{c}{\ldots} & \multicolumn{2}{c}{\ldots} \\ IRS1 & & 0.73 & 0.16 & 0.71 & 0.28 & 0.78 & 0.10 & 0.65 & 0.11 \\ IRS2 & & \multicolumn{2}{c}{\ldots} & \multicolumn{2}{c}{\ldots} & \multicolumn{2}{c}{\ldots} & \multicolumn{2}{c}{\ldots} \\ HII & & \multicolumn{2}{c}{\ldots} & \multicolumn{2}{c}{\ldots} & \multicolumn{2}{c}{\ldots} & \multicolumn{2}{c}{\ldots} \\ CLay & & \multicolumn{2}{c}{\ldots} & \multicolumn{2}{c}{\ldots} & \multicolumn{2}{c}{\ldots} & \multicolumn{2}{c}{\ldots} \\ OFF & & \multicolumn{2}{c}{\ldots} & \multicolumn{2}{c}{\ldots} & \multicolumn{2}{c}{\ldots} & \multicolumn{2}{c}{\ldots} \\ \hline\hline \end{tabular} \end{table} \begin{table} \renewcommand{\arraystretch}{1.2} \caption{Same as Table~\ref{table-co} but for \textbf{atomic} species} \label{table-lines} \centering \begin{tabular}{@{}l@{}@{}c@{ }|*{2}{r@{$\,\pm\,$}l}|r@{$\,\pm\,$}l} \hline\hline & & \multicolumn{4}{|c|}{[CI]} & \multicolumn{2}{c}{[NII]} \\ & & \multicolumn{2}{c}{1-0} & \multicolumn{2}{c|}{2-1} & \multicolumn{2}{c}{1-0} \\ \hline $\lambda$ & ($\mu$m) & \multicolumn{2}{c}{609.1} & \multicolumn{2}{c|}{370.4} & \multicolumn{2}{c}{205.2} \\ E$_{up}$ & (K) & \multicolumn{2}{c}{23.6} & \multicolumn{2}{c|}{62.5} & \multicolumn{2}{c}{70.1} \\ FWHM & ($''$) & \multicolumn{2}{c}{37.2} & \multicolumn{2}{c|}{34.8} & \multicolumn{2}{c}{16.9} \\ \hline C1 & \multirow{9}{*}{\rotatebox{90}{($10^{-3}\,$erg\,s$^{-1}$\,cm$^{-2}$\,sr$^{-1}$)}} & 1.12 & 0.34 & 6.80 & 0.57 & 84.99 & 5.46 \\ C2 & & 1.01 & 0.28 & 5.50 & 0.20 & 46.04 & 3.87 \\ C6 & & 0.61 & 0.06 & 1.92 & 0.06 & 10.02 & 0.39 \\ C8 & & 0.67 & 0.19 & 1.66 & 0.11 & 15.06 & 1.02 \\ IRS1 & & 0.95 & 0.14 & 3.72 & 0.26 & 26.33 & 0.72 \\ IRS2 & & 0.53 & 0.06 & 1.18 & 0.05 & 13.27 & 1.45 \\ HII & & 0.48 & 0.05 & 0.88 & 0.11 & 34.39 & 1.24 \\ CLay & & 0.29 & 0.03 & 0.87 & 0.09 & 13.13 & 1.20 \\ OFF & & 0.53 & 0.07 & 0.98 & 0.08 & 19.96 & 0.34 \\ \hline\hline \end{tabular} \end{table} 
\begin{figure*}[t] \includegraphics[width=0.99\textwidth]{RCW120-ILI-RADEX-error-50-NEU.pdf} \caption[]{Fit of the integrated line intensities of the observed $^{12}$CO and $^{13}$CO lines (symbols) with RADEX for three densities: $n=10^4$ (green lines), $n=10^5$ (blue lines) and $n=10^6$~cm$^{-3}$ (black lines). The resulting temperatures and CO column densities are shown in Table~\ref{tab:radex}.} \label{fig:denprof} \end{figure*} \begin{figure*}[t] \includegraphics[width=0.99\textwidth]{RCW120-Tau.pdf} \caption[]{Optical depth $\tau$ determined with RADEX for $^{12}$CO (top) and $^{13}$CO (bottom) for n$_{\rm H_2}=10^4\,{\rm cm^{-3}}$ (left), n$_{\rm H_2}=10^5\,{\rm cm^{-3}}$ (middle) and n$_{\rm H_2}=10^6\,{\rm cm^{-3}}$ (right).} \label{fig:tau} \end{figure*} \subsection{Determination of physical properties using RADEX} \label{sec:radex} In Fig.~\ref{fig:denprof} we present the integrated line intensities at the different positions for $^{12}$CO (crosses) and $^{13}$CO (plus signs). Comparing the curves of the integrated line intensities, four groups can be distinguished: \begin{enumerate} \item C1, C2 and IRS1 show highly excited $^{12}$CO and $^{13}$CO lines and a small $^{12}$CO/$^{13}$CO ratio for the low-excitation lines, \item C6 and C8 show no $^{13}$CO lines and no highly excited $^{12}$CO lines, and their low-excitation $^{12}$CO lines have a moderately high intensity, \item IRS2 and OFF show no $^{13}$CO lines and no highly excited $^{12}$CO lines, and their low-excitation $^{12}$CO lines have a rather low intensity, and \item CLay and HII show only one line, $^{12}$CO($4-3$). \end{enumerate} In order to assess the kinetic temperature, density, and column density of the gas in which the CO lines arise at the different positions, we fit the integrated line intensities of the observed $^{12}$CO and $^{13}$CO lines with the non-LTE (local) radiative transfer code RADEX\footnote{\url{http://www.sron.rug.nl/~vdtak/radex/index.shtml}.}.
We use a grid of input parameters, where the gas density varies between 10$^4$\,cm$^{-3}$ and 10$^6$\,cm$^{-3}$, the kinetic temperature between 10\,K and 500\,K, the CO column density between 10$^{15}$\,cm$^{-2}$ and 10$^{19}$\,cm$^{-2}$, and the beam filling factor $\eta$ between 0.01 and 1. We use the new set of collisional rate coefficients calculated by \citet{yang2010}, which includes energy levels up to $J=40$ for temperatures ranging from 2\,K to 3000\,K. A standard carbon isotopic ratio $^{12}$C/$^{13}$C of 70 \citep{wilson1999} is assumed, and the cosmic microwave background radiation at 2.73\,K is used as the background field. The effect introduced by the default assumption in the calibration is reflected in a small gap in Fig.~\ref{fig:denprof} between the intensities of the CO lines at the edge of the SSW and SLW receivers. However, this gap is smaller than the $ 30\% $ error bars assumed for the calculations. Therefore, the inability to use the semi-extended calibration tools mentioned in Section~\ref{sec-measured} has no measurable effect on the temperatures and densities we derive. We fit the slope of the CO cooling curve with different combinations of kinetic temperature and gas density, which reflects a degeneracy between these two parameters. For a given combination of kinetic temperature and gas density, we obtain the beam-averaged column density by fitting the line ratio $^{12}$CO/$^{13}$CO, which is sensitive to optical depth effects\footnote{In the cases where $^{13}$CO is not detected, we are only able to give a lower limit on the column density, assuming a filling factor of one.}. The optical depth at the line center depends on the ratio of the column density to the line width. For simplicity, we take a constant width for all the lines measured in the FTS cubes.
We assume $\Delta \rm{v} = 2~{\rm km~s^{-1}}$, since for the range and the physical conditions considered we find that the RADEX results do not vary significantly for $\Delta \rm{v} = 0.5 - 2~{\rm km~s^{-1}}$. Those values are in agreement with HIFI observations towards classical PDRs (e.g., NGC~7023; \citealt{ossenkopf2013}). The beam filling factor was obtained by fitting the mean absolute line intensities. The results for three density values, n$_{\rm H_2}=10^4$, n$_{\rm H_2}=10^5$ and n$_{\rm H_2}=10^6$\,cm$^{-3}$, are presented in Figs.~\ref{fig:denprof} and \ref{fig:tau}, and are summarized in Table~\ref{tab:radex}. For each set of parameters, we derive the length of the emission layer along the line of sight: $l\sim N_{\rm H_2}/n_{\rm H_2}$. We convert the CO into H$_2$ column densities assuming a standard $^{12}$CO abundance relative to H$_2$ of $8\times10^{-5}$ in PDRs \citep{johnstone2003}. \begin{table*}[t] \renewcommand{\arraystretch}{1.2} \centering \caption{The results with RADEX for the different positions.} \begin{tabular}{l|cccc|cccc|cccc} \hline\hline \T \B& \multicolumn{4}{c|}{n$_{\rm H_2}=10^6{\rm cm^{-3}}$} & \multicolumn{4}{c|}{n$_{\rm H_2}=10^5{\rm cm^{-3}}$} & \multicolumn{4}{c}{n$_{\rm H_2}=10^4{\rm cm^{-3}}$} \\ \cline{2-13} \T \B & T & ${\rm N_{\rm CO}}$ & l & $\eta$ & T & ${\rm N_{\rm CO}}$ & l & $\eta$ & T & ${\rm N_{\rm CO}}$ & l & $\eta$ \\ \B & (K) & ($ 10^{16}\,$cm$^{-2}$) & ($ 10^{-3}\,$pc) & & (K) & ($ 10^{16}\,$cm$^{-2}$) & ($ 10^{-3}\,$pc) & & (K) & ($ 10^{16}\,$cm$^{-2}$) & ($ 10^{-3}\,$pc) & \\ \hline C1 \T & 70 & $100$ & $4$ & 0.5 & 100 & $100$ & 40 & 0.5 & 250 & $300$ & 1200 & 0.3 \\ C2 & 45 & $100$ & $4$ & 0.5 & 60 & $100$ & 40 & 0.5 & 100 & $200$ & 800 & 0.6 \\ C6 & 40 & $>1$ & $>0.05$ & \ldots & 70 & $>0.5$ & $>0.2$ & \ldots & 200 & $>1$ & $>4$ & \ldots \\ C8 & 33 & $>3$ & $>0.1$ & \ldots & 42 & $>3$ & $>1$ & \ldots & 80 & $>5$ & $>20$ & \ldots \\ IRS1 & 45 & $70$ & $3$ & 0.5 & 60 & $70$ & 30 & 0.5 & 90 & $200$ & $800$ & 0.5
\\ IRS2 & 20 & $>2$ & $>0.08$ & \ldots & 25 & $>2$ & $>0.8$ & \ldots & 35 & $>5$ & $>20$ & \ldots \\ OFF & 17 & $>3$ & $>0.1$ & \ldots & 20 & $>3$ & $>1$ & \ldots & 30 & $>5$ & $>20$ & \ldots \\ \hline\hline \end{tabular} \label{tab:radex} \end{table*} Positions C1, C2, and IRS1 are rather similar as far as the gas line analysis is concerned. We obtain relatively high column densities, $N_{\rm CO}$, of around $1 \times 10^{18}\,{\rm cm^{-2}}$ of warm and dense gas for C1, C2 and IRS1. The CO temperatures are rather similar in these three positions, although pointing C1 might have slightly higher temperatures. It is rather difficult to fix the density, since we are able to fit the observations with all three assumed density values of n$_{\rm H_2}=10^4$, n$_{\rm H_2}=10^5$ and n$_{\rm H_2}=10^6$~cm$^{-3}$. The differences lie in the length along the line of sight obtained for these three assumptions: for n$_{\rm H_2}=10^4\,{\rm cm}^{-3}$ the length is rather large, around $1\,$pc, while for n$_{\rm H_2}=10^6\,{\rm cm}^{-3}$ it is rather small, around $0.003\,$ to $ 0.004\,$pc. The observed projected width and extension of the CO emission lines $(J\ge9)$ deduced from the detections in the detectors are $\le0.1\,$pc and $\sim0.5\,$pc, respectively. This roughly follows the emission of the dust as seen in Fig.~\ref{fig-pointings}. The length along the line of sight calculated from the RADEX fit is about two times larger than the extension for n$_{\rm H_2}=10^4\,{\rm cm}^{-3}$, three times smaller than the width for n$_{\rm H_2}=10^5\,{\rm cm}^{-3}$, and $25-30$ times smaller than the upper width limit for n$_{\rm H_2}=10^6\,{\rm cm}^{-3}$. The large length along the line of sight for n$_{\rm H_2}=10^4\,{\rm cm}^{-3}$ excludes lower gas densities. The small length along the line of sight derived for n$_{\rm H_2}=10^6\,{\rm cm}^{-3}$ could be the result of clumps.
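The line-of-sight lengths quoted above follow directly from $l\sim N_{\rm H_2}/n_{\rm H_2}$ with the assumed CO abundance; a minimal Python check, reproducing the order of magnitude of the tabulated values:

```python
PC_CM = 3.086e18   # cm per parsec
X_CO = 8e-5        # assumed CO abundance relative to H2 (see text)

def los_length_pc(N_CO, n_H2):
    """Line-of-sight length l ~ N_H2 / n_H2, in pc, with N_H2 derived
    from the CO column density via the standard PDR abundance CO/H2."""
    N_H2 = N_CO / X_CO          # cm^-2
    return N_H2 / n_H2 / PC_CM  # cm -> pc

# C1: N_CO = 3e18 cm^-2 at n_H2 = 1e4 cm^-3 gives ~1.2 pc, while
# N_CO = 1e18 cm^-2 at n_H2 = 1e6 cm^-3 gives ~0.004 pc, matching
# the 1200e-3 pc and 4e-3 pc entries of the RADEX table.
print(los_length_pc(3e18, 1e4), los_length_pc(1e18, 1e6))
```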
Regarding the comparison between the projected extension, $0.1-0.5\,$pc, and the length obtained from the model calculations, we would therefore tend towards a density between n$_{\rm H_2}=10^4\,{\rm cm}^{-3}$ and n$_{\rm H_2}=10^5\,{\rm cm}^{-3}$. Regarding the optical depth, we see from Fig.~\ref{fig:tau} that in pointing C1 all the detected $^{12}$CO lines are optically thick (E$_{up}\sim 55 - 500\,$K), while for pointings C2 and IRS1 the uppermost transitions detected have optical depths $ \tau < 1 $. The $ ^{13} $CO lines are mostly optically thin for all positions. For the positions C6, C8, IRS2 and OFF, no $^{13}$CO and no highly excited $^{12}$CO lines are detected, which suggests that the column density of warm gas is rather small. The lower limits obtained on the CO column densities in these four regions are similar, around $1 - 5 \times 10^{16}\,{\rm cm^{-2}}$. The density and temperature cannot be unambiguously determined. In position C6 a low density of n$_{\rm H_2}=10^4$\,cm$^{-3}$ results in a very high gas temperature. It might be more likely to find lower gas temperatures, which would imply slightly larger densities. For the positions C8, IRS2 and OFF, it is not possible to constrain the density. For all assumed densities the gas temperatures are lower than in the other positions. For the position OFF we find the lowest temperatures, which is not that surprising since this position is far from the star and no YSO is assumed to be in its vicinity. Whether this position is located in a dense molecular cloud or in a diffuse environment cannot be unambiguously determined. For positions CLay and HII only the $^{12}$CO $J=4-3$ line is detected, therefore we are not able to carry out a RADEX fit. The lack of more highly excited $^{12}$CO lines indicates a low temperature and/or a low density. It can further be assumed that the small integrated line intensity of this one line is the result of a rather small column density.
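The grid evaluation described in this section can be sketched as a brute-force search. In this Python toy version, the call to RADEX is replaced by a hypothetical placeholder function and the ``observed'' intensities are made-up numbers, so only the search structure, not the physics, is meaningful:

```python
import itertools
from math import exp

# Hypothetical stand-in for a RADEX run (a made-up scaling, NOT real
# radiative transfer); the real analysis tabulates RADEX output over
# the same (T, N_CO, eta) grid.
def model_intensity(J_up, T, N, eta):
    E_up = 55.3 * (J_up / 4.0) ** 2       # rough E_up(J) scaling, K
    return eta * (N / 1e17) * exp(-E_up / T)

observed = {4: 0.9, 5: 1.2, 6: 1.3, 7: 1.2, 8: 1.0}  # made-up values

grid_T = [10, 20, 50, 100, 200, 500]       # K
grid_N = [1e15, 1e16, 1e17, 1e18, 1e19]    # cm^-2
grid_eta = [0.01, 0.1, 0.5, 1.0]           # beam filling factor

# Pick the grid point minimizing the summed squared residuals.
best_T, best_N, best_eta = min(
    itertools.product(grid_T, grid_N, grid_eta),
    key=lambda p: sum((model_intensity(J, *p) - I) ** 2
                      for J, I in observed.items()),
)
```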
In summary, we obtain a relatively good estimate of the physical properties for positions C1, C2 and IRS1, while for the other positions we can only narrow down the physical properties. \begin{figure*} \includegraphics[width=0.5\textwidth]{ratios_RCW120_ci_new} \includegraphics[width=0.5\textwidth]{ratios_RCW120_co_new} \caption{Line intensity ratios for the central pixel (C3) of each pointing.} \label{fig-ratios} \end{figure*} \begin{table} \renewcommand{\arraystretch}{1.2} \centering \caption{Line intensity ratios for the central pixel (C3) of each pointing.} \label{table-ratios} \begin{tabular}{l|ccc} \hline\hline Pointing & CIr & COr & CICOtr \\ \hline C1 & 3.7 $\pm$ 1.2 & 7.7 $\pm$ 0.1 & 0.030 $\pm$ 0.004 \\ C2 & 3.3 $\pm$ 0.9 & 4.05 $\pm$ 0.08 & 0.08 $\pm$ 0.01 \\ C6 & 1.9 $\pm$ 0.2 & 0.9 $\pm$ 0.1 & 0.41 $\pm$ 0.03 \\ C8 & 1.5 $\pm$ 0.4 & 0.8 $\pm$ 0.4 & 0.29 $\pm$ 0.06 \\ IRS1 & 2.4 $\pm$ 0.4 & 4.64 $\pm$ 0.04 & 0.08 $\pm$ 0.01 \\ IRS2 & 1.3 $\pm$ 0.2 & \ldots & 1.1 $\pm$ 0.2 \\ HII & 1.1 $\pm$ 0.2 & \ldots & 2.6 $\pm$ 0.5 \\ CLay & 1.9 $\pm$ 0.3 & \ldots & 3.6 $\pm$ 0.6 \\ OFF & 1.1 $\pm$ 0.2 & \ldots & 0.9 $\pm$ 0.2 \\ \hline\hline \end{tabular} \end{table} \subsection{Line ratios} In Table~\ref{table-ratios} we give the line ratios \mbox{CICOtr\,=\,$\sum$[CI]/$\sum$CO}, \mbox{CIr\,=\,[CI]($ 2-1 $)/[CI]($ 1-0 $)}, and \mbox{COr\,=\,CO($ 8-7 $)/CO($ 4-3 $)} for the central pixel of each pointing. In the right panel of Fig.~\ref{fig-ratios} we see how the pointings can be separated into two groups according to the COr\ and CICOtr\ ratios, labeled GrA\ (C1, C2, IRS1) and GrB\ (C6, C8) in the figure.
A third group, GrC, not shown in the figure, can be defined containing those pointings that do not have CO($ 8-7 $) emission (IRS2, CLay, HII, OFF). For GrA\ the COr\ ratio is the highest, signaling a high density and/or temperature, while the CICOtr\ ratio is the lowest, suggesting that the cooling by CO lines is more important than by [CI] emission and therefore that the density is high. For GrB, the COr\ ratio is low and the CICOtr\ ratio is high. The high CICOtr\ ratio indicates a low density and the low COr\ ratio indicates a low density and/or temperature. The pointings in GrC\ have the lowest COr\ ratio, since the CO($ 8-7 $) line is not detected, and the highest CICOtr\ ratio. This shows that cooling by [CI] emission is larger than by CO, and much more important than in the other two groups. This medium should therefore be less dense than that of GrB. This behavior is largely consistent with the temperatures obtained with RADEX in Sec.~\ref{sec:radex}. Taking into account the position of each pointing (see Fig.~\ref{fig-pointings}), we see that the groups can also be related to a distinction between regions of RCW~120: GrA\ comprises the YSOs located in the PDR, GrB\ the dust condensations situated not in the PDR but next to it, and GrC\ the pointings targeting regions of RCW~120\ with diffuse gas. As seen in section \ref{sec:radex}, pointing IRS2 can be associated with the diffuse gas regions (GrC), despite targeting a known protostellar source. In the optically thin limit and assuming LTE, the ratio of the velocity-integrated emission of [CI]($ 2-1 $) and [CI]($ 1-0 $) is a sensitive function of the excitation temperature, of the form $T_{ex} = 38.3\,{\rm K}/ \ln[2.11/{\rm CIr}]$ (e.g., \citealt{kramer2004}). Using this equation, we calculated the temperatures for the central pixels of each pointing, shown in column 3 of Table~\ref{table-citemp}. These values differ from those derived with RADEX.
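The excitation-temperature relation above is straightforward to evaluate; a quick Python check against the tabulated central-pixel values:

```python
import math

def ci_excitation_temperature(ci_ratio):
    """LTE, optically thin excitation temperature from the
    [CI](2-1)/[CI](1-0) integrated intensity ratio (Kramer et al. 2004):
    T_ex = 38.3 K / ln(2.11 / CIr)."""
    return 38.3 / math.log(2.11 / ci_ratio)

# Central pixel of C1: ratio 1.6 -> ~138 K, as listed in the table.
print(round(ci_excitation_temperature(1.6)))  # -> 138
```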
The largest differences are for the pointings targeting regions off the PDR (IRS2 and OFF), which leads us to suggest that a large fraction of the [CI] emission likely originates from the warmer surface layers. This is also hinted at by the [CI] temperatures, which are all similar independent of the position in the cloud. \begin{table} \renewcommand{\arraystretch}{1.2} \centering \caption{Velocity-integrated [CI]($ 2-1 $)/[CI]($ 1-0 $) ratios and the associated excitation temperatures for pixel C3 of each pointing.} \label{table-citemp} \begin{tabular}{l|c|c} \hline\hline Pointing & Ratio & T$_{ex}$ (K) \\ \hline C1 & 1.6 $\pm$ 0.5 & 138 $\pm$ 12 \\ C2 & 1.6 $\pm$ 0.4 & 131 $\pm$ 11 \\ C6 & 1.5 $\pm$ 0.2 & 107 $\pm$ 4 \\ C8 & 1.4 $\pm$ 0.4 & 98 $\pm$ 11 \\ IRS1 & 1.5 $\pm$ 0.3 & 114 $\pm$ 6 \\ IRS2 & 1.4 $\pm$ 0.2 & 96 $\pm$ 4 \\ HII & 1.4 $\pm$ 0.2 & 91 $\pm$ 6 \\ CLay & 1.5 $\pm$ 0.2 & 109 $\pm$ 6 \\ OFF & 1.4 $\pm$ 0.2 & 91 $\pm$ 6 \\ \hline\hline \end{tabular} \end{table} \subsection{PDR model} \label{sec-model} In order to obtain a first estimate of the gas density and the UV radiation field, we applied the Meudon PDR model, developed by \citet{lepetit2006}. This is a 1-D radiative transfer model, which consists of a plane-parallel gas and dust slab of a given depth, illuminated on one or both sides by an ultraviolet (UV) radiation field and observed in the face-on direction. The slab depth is measured by its visual extinction $A_{{\rm v}}$, in magnitudes. The UV field is in units of the local interstellar value of $5.6\times10^{-14}\,{\rm erg\,cm}^{-3}$ (Habing field), scaled by a factor $\chi$. For a detailed description of the treatment of the UV radiation field, see Appendix C of \citet{lepetit2006}. We apply the model to pointing CLay\ since it is the one that best approximates the model assumptions regarding geometry and stellar content; the model was set to assume a constant density in the slab.
We produced models varying the parameters representing the slab depth ($A_{{\rm v}}$), the initial atomic hydrogen density ($n_{{\rm H}}$), and the UV radiation field strength ($\chi$), while leaving the remaining parameters at their default settings, as detailed in \citet{lepetit2006}. We ran a grid of models varying $\log \chi$ in the range $[1,6]$ in steps of $1$, $\log n_{{\rm H}}$ from $3$ to $6$ in steps of $1$, and $A_{{\rm v}}$ from $\sim0.5$\,mag to $\sim500$\,mag, in tenfold increases. We used the line ratios \mbox{CICOtr\,=\,$\sum$[CI]/$\sum$CO} and \mbox{CIr\,=\,[CI]($ 2-1 $)/[CI]($ 1-0 $)} to determine the best solution for $n_{{\rm H}}$ and $\chi$. CICOtr\ is indicative of the relative importance of the two main species ([CI] and CO) with regard to the cooling mechanisms. CIr\ is indicative of the temperature at the depth in the slab where neutral carbon is located and emitting. First, we compare each observed ratio against the corresponding line ratio estimated by the model as a function of $n_{{\rm H}}$ and $\chi$, obtaining the most likely solution $n_{{\rm H}} \sim 10^3$; $\chi \sim 10^4$. We then test these results, running a grid of models this time with $\chi=10^{4}$ fixed, and varying $n_{{\rm H}}$ and $A_{{\rm v}}$. The range of densities is the same as before, while $A_{{\rm v}}$ ranges between 1 and 10 in unit steps. Taking into account the relationship between $N_{{\rm H}}$ and $A_{{\rm v}}$ (e.g., \citealt{bohlin1978,rachford2002,lepetit2006}), \begin{eqnarray} A_{{\rm v}} & \sim & 5.34\times10^{-22} \left[\frac{N_{{\rm H}}}{{\rm cm}^{-2}}\right] \label{eq-avcd} \\ & \sim & 1.65\times 10^{-3}\left[\frac{n_{{\rm H}}}{{\rm cm}^{-3}}\right]\left[\frac{\Delta pdr}{{\rm pc}}\right], \label{eq-av} \end{eqnarray} \noindent this means that, by Eq.~\ref{eq-av}, the width of the slab varies between $\Delta pdr \sim 6\times 10^{-4}$\,pc (for $\log n_{{\rm H}}=6$; $A_{{\rm v}}=1$) and $\Delta pdr \sim 6$\,pc (for $\log n_{{\rm H}} =3$; $A_{{\rm v}} =10$).
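The slab widths quoted above follow from inverting Eq.~\ref{eq-av}; a minimal Python check:

```python
def slab_width_pc(A_v, n_H):
    """Invert A_v ~ 1.65e-3 * n_H [cm^-3] * Delta_pdr [pc] (Eq. 2)
    to get the slab width Delta_pdr in pc."""
    return A_v / (1.65e-3 * n_H)

# The two extremes of the grid quoted in the text:
print(slab_width_pc(1, 1e6))   # ~6e-4 pc  (log n_H = 6, A_v = 1)
print(slab_width_pc(10, 1e3))  # ~6 pc     (log n_H = 3, A_v = 10)
```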
This is illustrated in Fig.~\ref{fig-modres}. Each of the lines represents the locus of points for a given density and varying slab width in the [CIr; CICOtr] space. The square is the ratio obtained from the data of the central pixel of pointing CLay, plotted with its respective error bars. We see that the simulated values closest to the observed ratios are those for models with $n_{{\rm H}} = 10^{3-4}$ and $A_{{\rm v}} = 4-5$. From the model, then, we obtain the values $\chi\sim10^{4}$, $n_{{\rm H}}\sim10^{3}$ and $\Delta pdr$ between $0.25$ and $3.0\,$pc (with Eq.~\ref{eq-av}). \begin{figure} \includegraphics[width=0.5\textwidth]{rcw120_ratios} \caption{Behavior of $n_{{\rm H}}$ for different values of $A_{{\rm v}}$, from 1 to 10 (filled big squares) in steps of 1, for models with $\chi=10^{4}$, plotted against the CIr\ and CICOtr\ ratios. The line ratios observed at each SLW pixel of pointing CLay\ are plotted with their error bars. We see that the simulated values closest to the observed ratios are those for models with $n_{{\rm H}} = 10^{3-4}$ and $A_{{\rm v}} > 4-5$.} \label{fig-modres} \end{figure} \subsubsection*{Estimation of $\chi$ at the PDR surface} In order to estimate the UV radiation field impinging on the PDR and to compare it with the value derived from the model, we integrated the spectrum of the O8V ionizing star of RCW~120\ \citep{martins2010} between 912\,\AA\ and 2400\,\AA, following Appendix C of \citet{lepetit2006}. The stellar spectrum and radius ($R_{*}\sim 8.166\,$R$_{\odot}$) were computed and provided by F. Martins (priv. comm.). The distance of the PDR surface from the ionizing star was measured on the SPIRE 350$\,\mu{\rm m}$ image and estimated to be $d\sim 2.15\,$pc. A simple dilution factor $(R_{*}/d)^{2}$ was applied and the backscattering of the radiation by dust at the PDR surface was taken into account, as recommended by \citet{lepetit2006}. We obtain a value of $\sim 1925$ in Habing units.
This value is in good agreement with the $\sim 1000$ derived from our measurements using the PDR code, especially considering that dust present in the ionized region, as shown by the 24$\,\mu{\rm m}$ emission \citep{martins2010}, can absorb part of the radiation and diminish the amount reaching the PDR surface. \section{Discussion} \label{sec-disc} Combining the results from sections~\ref{sec:radex} and \ref{sec-model}, we see that the physical parameters of RCW~120\ most likely lie between those derived with RADEX for $n_{{\rm H}}=10^{4}-10^{6}\,$cm$^{-3}$, while for the diffuse regions CLay\ and HII the density might be lower. Assuming a standard $^{12}$CO\ abundance \mbox{${\rm ^{12}CO/H_{2}}\sim8\times10^{-5}$} \citep{frerking1982,roellig2011} and considering that \mbox{$N_{{\rm H}} = N({\rm H})+2N({\rm H_{2}})$}, we obtain the total hydrogen column density values shown in columns 3 to 5 of Table~\ref{table-Nh}. On the other hand, using the column density map obtained by \citet{anderson2012} for RCW~120, we averaged their column density values at the position of the different pointings within the beam of the central pixel of the SLW array ($\sim 36\arcsec$ at $350\,\mu{\rm m}$, \citealt{makiwa2013}), obtaining the total hydrogen column densities shown in column 2 of Table~\ref{table-Nh}. For pointings C1, C2, and IRS1 it was possible to obtain with RADEX an actual value for $ N_{{\rm H}} $ and not just a lower limit. Among these three positions, only for C1 does the column density obtained from the \citet{anderson2012} map lie between the RADEX values. Nevertheless, the column densities derived by the two methods are, to first order, in agreement. The assumptions on the dust properties play a crucial role when deriving the column density from dust observations, therefore we cannot make a detailed comparison.
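The conversion from the RADEX CO column densities to total hydrogen column densities can be checked directly; this sketch assumes, as the RADEX-derived entries imply, that the hydrogen along these lines of sight is essentially molecular, so that $N_{{\rm H}}\simeq2N({\rm H_{2}})$:

```python
X_CO = 8e-5   # assumed 12CO/H2 abundance (see text)

def total_hydrogen_column(N_CO):
    """N_H = N(H) + 2 N(H2) ~ 2 N(H2), with N(H2) = N_CO / (CO/H2)."""
    return 2.0 * N_CO / X_CO

# C1, n_H2 = 1e6 cm^-3 case: N_CO = 1e18 cm^-2 -> N_H ~ 2.5e22 cm^-2,
# i.e., ~250 in the 1e20 cm^-2 units of the table.
print(total_hydrogen_column(1e18) / 1e20)
```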
\begin{table} \renewcommand{\arraystretch}{1.2} \centering \caption{$ N_{{\rm H}} $ obtained for the different pointings.} \label{table-Nh} \begin{tabular}{l|c|ccc} \hline \hline & \multicolumn{4}{c}{$ N_{{\rm H}}\,(\times10^{20}\,{\rm cm}^{-2}) $} \\ \cline{2-5} Pointing & Anderson+\citeyear{anderson2012} & \multicolumn{3}{c}{RADEX ($\mathrm{n_{H_{2}}}$ in cm$ ^{-3} $)} \\ & $ N_{{\rm H}} $ map & $ 10^{6}$ & $ 10^{5}$ & $10^{4}$ \\ \hline C1 & $ 440 $ & $ 250 $ & $ 250 $ & $ 750 $ \\ C2 & $ 110 $ & $ 250 $ & $ 250 $ & $ 500 $ \\ C6 & $ 100 $ & $ >2.5 $ & $ >1.25 $ & $ >2.5 $ \\ C8 & $ 78 $ & $ >7.5 $ & $ >7.5 $ & $ >12.5 $ \\ CLay & $ 12 $ & \ldots & \ldots & \ldots \\ HII & $ 13 $ & \ldots & \ldots & \ldots \\ IRS1 & $ 51 $ & $ 175 $ & $ 175 $ & $ 500 $ \\ IRS2 & $ 34 $ & $ >5 $ & $ >5 $ & $ >12.5 $ \\ OFF & $ 28 $ & $ >7.5 $ & $ >7.5 $ & $ >12.5 $ \\ \hline \hline \end{tabular} \end{table} For the other pointings, RADEX only gives lower limits to the column density, and correspondingly, the average values from the \citet{anderson2012} map are higher than the RADEX values. In particular, pointing C6 shows a large difference between the two estimations, in line with the suggestion made in section~\ref{sec:radex} that its temperature is lower than that obtained with RADEX. Both methods reveal the differences in gas properties throughout RCW~120. The similar average column densities of pointings CLay\ and HII suggest that the PDR of RCW~120\ also extends in the plane-of-the-sky direction, supporting the bubble-shaped morphology proposed for it. Pointings C6 and C8 show temperatures and densities intermediate between the warm, dense regions of the PDR and its diffuse, cooler parts. As mentioned in section~\ref{sec:radex}, the relatively high temperature given by the code for pointing C6 is likely overestimated, leading to slightly larger densities.
\section{Summary} \label{sec-sum} We have obtained {\it Herschel}\ SPIRE-FTS spectra towards 9 positions in the RCW~120\ H\,{\sc{ii}}\ region, detecting the [CI] lines at $370$ and $609\,\mu{\rm m}$, the $205\,\mu{\rm m}$ [NII] transition, the $^{12}$CO\ ladder between the $J=4$ and $J=13$ levels and the $^{13}$CO\ ladder between the $J=5$ and $J=14$ levels. CH$ ^{+} $ was detected in absorption at all positions in the region; however, the low spectral resolution of the spectra does not allow us to obtain quantifiable information from that line. Nevertheless, its ubiquitous absorption suggests the presence of diffuse gas along the line of sight, while RCW~120\ may emit in this line with varying absorption depth. The [NII] emission line is strong and is also detected over the entire field. The [CI] lines are detected in almost all detectors with a ratio that shows little variation throughout the region. This suggests the presence of a low-density PDR over the entire RCW~120\ region. This is further supported by the temperatures obtained from the ratio of the two detected [CI] lines, which also show little variation throughout the region. The low-excitation $^{12}$CO\ lines are detected everywhere, while higher-excitation lines are only detected in the condensations C1, C2 and IRS1 together with $^{13}$CO. We use RADEX to derive the physical properties at these positions. The gas temperatures are $45-250\,$K for densities of $10^4-10^6\,{\rm cm}^{-3}$, with a high column density that is in agreement with the dust analysis. The excited CO could arise either from the edge of the dense irradiated structure or from small dense clumps containing young stellar objects. We see the excited CO emission in several detectors, partly in an elongated region, coming from the PDR and/or several young stellar objects. The analysis of the other condensations C6 and C8 reveals a less dense medium with still high gas temperatures.
The positions HII, CLay and OFF, where no condensations are observed, reveal the lowest densities together with the highest CICOtr\ ratio. We model the PDR of RCW~120\ (pointing CLay) with the Meudon PDR code. We obtain a hydrogen density of $n_{{\rm H}}\sim 10^{4.3}$\,cm$^{-3}$ and an ionizing radiation field $\chi\sim10^{3}$ in Habing units. The value for $\chi$ agrees with what is expected from the emission of an O8V star, such as the ionizing star of RCW~120. \begin{acknowledgements} We thank Dominique Benielli for assistance in the data calibration, Edward Polehampton for the useful discussion on the data calibration modes, and the referee for the helpful remarks that led to a vastly improved version of this manuscript. We thank the French Space Agency (CNES) for financial support. J.A.R. acknowledges support by CNES through his post-doctoral fellowship. \end{acknowledgements} \input{acronyms} \bibliographystyle{aa}
\section{Minicharged particles} Minicharged particles (MCPs) naturally arise in extensions of the Standard Model (SM) with an additional local hidden gauge group $U(1)_{\rm h}$. The hidden photon (HP) associated with this $U(1)_{\rm h}$ does not couple to any SM particle but mixes kinetically with the SM photon, $$ \mathcal{L} = -\frac{\chi}{2}F_{\mu \nu} F'^{\mu \nu}, $$ where $\chi$ is the kinetic mixing parameter, $F_{\mu \nu}$ is the photon field strength tensor and $F'^{\mu \nu}$ is the field strength tensor of the HP. We assume that the $U(1)_{\rm h}$ is unbroken; hence, the HP is massless. On its own, the HP would be unobservable because the mixing can be removed through a field redefinition, which renders the kinetic mixing unphysical.\\ The HP field has observable consequences in a minimal extension of this model with a (fermionic) field $f$, the hidden fermion. This fermion is charged only under the $U(1)_{\rm h}$ with a coupling strength $g'$. This makes $\chi$ a physical parameter, because transforming the kinetic mixing away induces an effective coupling of the hidden fermion to the SM photon with a coupling strength $g'\chi$. Since the hidden fermion couples to the SM photon, it carries an effective electric charge. It is useful to define $$ g'\chi=e\epsilon, $$ where $e$ is the electron charge and $\epsilon$ is the minicharge. Since $\epsilon$ can be small, $f$ is called a minicharged particle. It is fully characterized by $g'$, $\epsilon$ and its arbitrary mass $m_f$.\\ \section{Impact on the CMB and BBN} \begin{figure}[!ht] \centerline{\includegraphics[width=0.45\textwidth]{DRFig1.pdf}\includegraphics[width=0.45\textwidth]{DRFig2.pdf}} \caption{Isocontours of $N_{\rm eff}$ at the CMB epoch (left) and primordial helium yield $Y_p$ (right) as a function of $m_f$ and $\epsilon$, for $g'=0.1$. Numbers correspond to the value of the closest contour line.
\emph{Left:} Dark green: $N_{\text{eff}}\sim 3$; light green and yellow: $N_{\text{eff}}\sim 3.5-4.5$; orange and red: $N_{\text{eff}}>4.5$. The red dashed line shows the 95\% upper exclusion limit $N_{\text{eff}}=3.84$ (Planck+WP+highL+BAO) by Planck~\cite{Ade:2013zuv}. The blue dashed line gives the best fit value $N_{\text{eff}}= 4$ from a combination of BICEP2 and Planck data~\cite{Giusarma:2014zza}. \emph{Right:} Dark green coloring denotes regions far away from the upper limit $Y_p<0.263$~\cite{Mangano:2011ar}. The limit is given by the red dashed contour line. Orange and red regions are excluded at more than 95\% CL.}\label{Fig:CMB} \label{sec:figures1} \end{figure} Due to their small charge, HPs and MCPs will be produced in the early Universe to some extent. Both of them will then contribute to the energy density of radiation during the formation of the CMB anisotropies. The impact of light degrees of freedom on the CMB is parametrized by the effective number of neutrino degrees of freedom $N_{\rm eff}$, defined as $$ N_{\rm eff} = \frac{\rho_{\rm DR}}{\rho_{1 \nu*}}, $$ where $\rho_{\rm DR}$ is the energy density of dark radiation (DR), i.e. of all relativistic particles that are not strongly coupled to the SM photon at the CMB epoch, and $\rho_{1\nu*}$ is the energy density of one neutrino species that instantly decoupled from the electron-photon plasma before BBN. All densities are evaluated at times corresponding to the formation of the CMB anisotropies. $N_{\rm eff}$ scales like $(T_{\rm DR}/T_\gamma)^4$ and is hence very sensitive to temperature changes. \\ In the SM the three neutrinos contribute $N_{\rm eff} = 3.046$~\cite{Mangano:2005cc} after correcting for non-instantaneous decoupling. The most precise observations to date come from the Planck collaboration, which sets an upper limit of $N_{\rm eff}<3.84$ at 95\% CL on the number of relativistic degrees of freedom at the CMB epoch. This value is obtained by fitting a cosmological model to the data.
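The $(T_{\rm DR}/T_\gamma)^4$ scaling can be made concrete with a short sketch; the bookkeeping below is the standard degrees-of-freedom counting (not specific to this proceeding), and the function name is ours.

```python
def delta_neff(t_ratio, g_boson=0.0, g_fermion=0.0):
    """Extra effective neutrino species from dark radiation at
    T_DR = t_ratio * T_gamma, normalized to one SM neutrino species
    (which sits at T_nu = (4/11)^(1/3) T_gamma after e+e- annihilation).
    Energy densities are in units of (pi^2/30) T_gamma^4."""
    rho_dr = (g_boson + 0.875 * g_fermion) * t_ratio ** 4
    rho_one_nu = 0.875 * 2.0 * (4.0 / 11.0) ** (4.0 / 3.0)
    return rho_dr / rho_one_nu
```

For instance, a massless hidden photon ($g=2$ bosonic) at the neutrino temperature contributes $\Delta N_{\rm eff} = 8/7 \approx 1.14$, illustrating how quickly even a partially thermalized dark sector becomes observable.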
The limit is conservative since it does not use the controversial value for the Hubble rate today, $H_0$, obtained from local astrophysical measurements. Local measurements prefer a higher value for $N_{\rm eff}$. Also the disputed result obtained by BICEP2~\cite{Ade:2014xna} suggests a higher value, $N_{\rm eff}= 4$~\cite{Giusarma:2014zza}.\\ In order to compare the predictions of models with minicharged particles to the observations, we have to accurately compute the energy density of dark radiation at the CMB epoch. We numerically solve a set of Boltzmann-like equations that parametrize the energy transport between the SM particles and the dark sector (DS) consisting of HPs and MCPs, \begin{eqnarray} &\dot \rho_{\rm SM} + 3H\left(\rho_{\rm SM}+P_{\rm SM}\right)=-\mathcal{W}\nonumber,\\ &\dot \rho_{\rm DS} + 3H\left(\rho_{\rm DS}+P_{\rm DS}\right)=\mathcal{W}\nonumber, \end{eqnarray} where $\rho \ (P)$ is the energy density (pressure) of the DS and the SM particles, $H$ is the Hubble parameter, the dot denotes a derivative with respect to physical time and $\mathcal{W}$ is a generalized collision term that gives the energy transported from one sector to the other. \\ For the computation we assume that the DS is always in thermal equilibrium with itself. It is then fully characterized by a common temperature $T_{\rm DS}$ and $m_f$. Following the spirit of a dark, weakly coupled sector, we initially set $T_{\rm DS}=0$. The DS is then populated by the SM particles during the evolution of the Universe. In order to compute this thermalization as accurately as possible, we include all SM particles and light mesons and all relevant processes in our computations. It turns out that for $T_{\rm DS}\ll T_\gamma$ DS particles are most efficiently produced by SM particle pair annihilation ($e\bar e\to f\bar f$).
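A toy illustration of this coupled system may help fix ideas; it is emphatically not the paper's actual computation (which uses the full collision terms and all SM species). Here both sectors are relativistic ($P=\rho/3$, so the dilution term is $4H\rho$), the collision term is replaced by a placeholder $\mathcal{W}=\text{coupling}\times\rho_{\rm SM}$, and units are reduced so that $H=\sqrt{\rho_{\rm tot}}$.

```python
def evolve(rho_sm, rho_ds, coupling, t0, t1, steps=10000):
    """Toy forward-Euler integration of
        drho_SM/dt + 4 H rho_SM = -W,   drho_DS/dt + 4 H rho_DS = +W,
    with W = coupling * rho_sm (placeholder) and H = sqrt(rho_tot)
    in reduced units.  Returns the final (rho_sm, rho_ds)."""
    dt = (t1 - t0) / steps
    for _ in range(steps):
        h = (rho_sm + rho_ds) ** 0.5      # Hubble rate in reduced units
        w = coupling * rho_sm             # energy flow SM -> DS
        rho_sm += dt * (-4.0 * h * rho_sm - w)
        rho_ds += dt * (-4.0 * h * rho_ds + w)
    return rho_sm, rho_ds
```

The qualitative behavior matches the text: with zero coupling the dark sector stays empty and only redshifts, while any nonzero coupling populates it at the expense of the SM bath.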
When $T_{\rm DS}$ approaches the SM temperature, Coulomb-like scattering ($e f\to e f$), regularized by plasma screening, and (for large $g'$) Compton-like scattering ($\gamma f \to \gamma' f$) become the most important processes.\\ We scan over a wide range of parameters $m_f,\epsilon$ for $g'\in \{10^{-2},0.1,1\}$. The constraints for $g'=0.1$ are the most conservative. For $g'=10^{-2}$ the results are indistinguishable from the results with $g'=0.1$, because in this case the dominant processes are $\propto e\epsilon=g'\chi$: a change in $g'$ can be compensated by a corresponding change in $\chi$ and leaves $\epsilon$ unchanged. The bounds for $g'=1$ are stronger because Compton-like scattering ($\gamma f \to \gamma' f$), which is $\propto g'^4\chi^2 \sim g'^2 \epsilon^2$, becomes dominant for high $g'$. Increasing $g'$ can then no longer be compensated by a shift in $\chi$ if $\epsilon$ is fixed. For $g'=1$ the DS and SM particles equilibrate faster.\\ Our result for $g'=0.1$ is presented in Fig.~\ref{Fig:CMB} (left). The sudden increase at small minicharges $\epsilon \sim 10^{-8}$ corresponds to partial thermalization of the DS with the SM photon. The behavior for large $\epsilon$ can be understood analytically: the DS fully thermalizes, and after decoupling its temperature at the CMB epoch is given by entropy conservation. For more details see~\cite{Vogel:2013raa}.\\ Another primordial probe is BBN. The DS changes the formation of nuclei, and the most sensitive probe is the yield of primordial helium $Y_p$. By adjusting the BBN code of~\cite{Cadamuro:2011fd}, we computed $Y_p$ using the data of our CMB simulation. The result is shown in Fig.~\ref{Fig:CMB} (right). We use $Y_p<0.263$ at 95\% CL~\cite{Mangano:2011ar} as a conservative upper bound. \section{Conclusions} Figure~\ref{Fig:CMB} shows that, using Planck's limit, the CMB anisotropies allow for light minicharged particles in the range $m_f\sim 5$ MeV for a wide range of minicharges $\epsilon$.
This parameter space is, however, disfavored by the abundance of primordial helium. Using these two results we set a lower limit on the hidden fermion mass of $m_f \gtrsim 390$ MeV for $\epsilon > 10^{-7}$. A compilation of this result with earlier bounds from astrophysics, cosmology and laboratory experiments is shown in Fig.~\ref{Fig:CompleteResult}.\\ If BICEP2 has observed primordial gravitational waves, the limit on MCPs from the CMB anisotropies vanishes and BBN gives the most stringent bounds. Further research is needed to clarify the role of BICEP2's observations. Since the question of the existence of additional light degrees of freedom cannot be settled with the current data, we present our results in a flexible way. More accurate measurements of $N_{\rm eff}$ are expected in the future: using the predicted accuracy of CMBPol~\cite{Galli:2010it}, Planck's current mean value $N_{\rm eff}=3.30$~\cite{Ade:2013zuv} could be detected at $\sim 5\sigma$. \begin{figure}[!ht] \centerline{\includegraphics[width=0.6\textwidth]{DRFig3.pdf}} \caption{Exclusion plot for MCPs from various experiments and observations. The constraints from the amount of helium $Y_p$ produced during BBN (dark blue) and from light extra degrees of freedom $N_{\text{eff}}$ by Planck (light blue) described here were obtained in~\cite{Vogel:2013raa}.}\label{Fig:CompleteResult} \label{sec:figures3} \end{figure} \begin{footnotesize}
\section{Introduction} Sparse matrix-vector multiplication (SpMV) is perhaps the most widely-used non-trivial sparse basic linear algebra subprogram (BLAS) in computational science and modeling. The operation multiplies a sparse matrix $A$ of size $m\times n$ by a dense vector $x$ of size $n$ and gives a dense vector $y$ of size $m$. Despite its simplicity at the semantic level, an efficient SpMV implementation is generally hard, because $A$'s sparsity structure can be very irregular and unpredictable. Compared to CPUs, co-processors (e.g., GPUs and Xeon Phi) promise much higher peak floating-point performance and memory bandwidth, so a lot of research has focused on accelerating SpMV on co-processors. One straightforward way of utilizing co-processors is to develop all-new sparse matrix formats (e.g., HYB~\cite{Bell:Implementing}, Cocktail~\cite{Su:clSpMV}, JAD~\cite{Li:GPU}, ESB~\cite{Liu:Efficient}, BCCOO~\cite{Yan:yaSpMV} and BRC~\cite{Ashari:An}) for specific hardware architectures. Experimental results showed that these formats can provide performance improvements for various SpMV benchmarks. However, the completely new formats bring several new problems. The first one is backward compatibility. When the input data are stored in basic formats such as compressed sparse row (CSR), a format conversion is required before the new-format SpMV can be used. In practice, fusing a completely new format into well-established toolkits (e.g., PETSc~\cite{Balay:PETSc}) for scientific software is not a trivial task~\cite{Minden:Preliminary} because of the format conversion. Moreover, Kumbhar~\cite{Kumbhar:Performance} pointed out that once an application (in particular a non-linear solver) needs repeated format conversion after a fixed small number of iterations, the new formats may degrade overall performance. Furthermore, Langr and Tvrd\'{\i}k~\cite{Langr:Evaluation} demonstrated that isolated SpMV performance is insufficient to evaluate a new format.
Thus more evaluation criteria, such as format conversion cost and memory footprint, must be taken into consideration. Secondly, when the SpMV operation is used with other sparse building blocks (e.g., sparse matrix-matrix multiplication~\cite{Liu:An}) that require basic storage formats, using the all-new formats is less feasible. To leverage both SpMV performance and practicality, Liu and Vinter proposed the CSR5 format~\cite{Liu:CSR5} to extend the basic CSR format. The experimental results showed that the CSR5 format delivers excellent SpMV performance while needing only a very short format conversion time (a few SpMV operations) and a very small extra memory footprint (around 2\% of the CSR data). Because the CSR5 format shares data with the CSR format, the CSR-based sparse BLAS routines can efficiently work with the CSR5 format. However, when a solver only needs a few iterations, the CSR5 format may not deliver speedups compared to using the basic CSR data. Therefore, improving the performance of SpMV using the most widely supported CSR format has also gained plenty of attention~\cite{Bell:Implementing, Su:clSpMV, Williams:Optimization, Greathouse:Efficient, Ashari:Fast, Blelloch:Segmented, Harris:CUDPP, Baxter:Modern}. Most of the related work~\cite{Bell:Implementing, Su:clSpMV, Williams:Optimization, Greathouse:Efficient, Ashari:Fast, Liu:LightSpMV} has focused on improving the row block method for the CSR-based SpMV. However, these newly proposed approaches are not highly efficient. The main reason is that co-processors are designed for load-balanced high-throughput computation, which is not naturally offered by the row pointer information of the CSR format. On the other hand, using the segmented sum method for the CSR-based SpMV has been proposed in~\cite{Blelloch:Segmented} and has been implemented in the libraries cuDPP~\cite{Harris:CUDPP, Sengupta:Scan, Garland:Sparse} and Modern GPU~\cite{Baxter:Modern} for nVidia GPUs.
Unlike the row block methods, the segmented sum algorithms evenly partition an input matrix $A$ for nearly perfect load balancing, and thus may be suitable for a co-processor implementation. Unfortunately, this method cannot recognize empty rows and requires more costly global operations. These extra overheads may offset the performance gain of load-balanced segmented sum and degrade overall SpMV efficiency. Recently, heterogeneous processors (also known as heterogeneous chip multiprocessors) have been designed~\cite{Kumar:Heterogeneous, Keckler:GPUs} and implemented~\cite{Branover:Llano, AMD:Compute, Damaraju:22nm, nVidia:Tegra, Qualcomm:Snapdragon}. Compared to homogeneous processors such as CPUs or GPUs, heterogeneous processors can deliver improved overall performance and power efficiency~\cite{Chung:Single}, provided sufficient heterogeneous parallelism exists. The main characteristics of heterogeneous processors include unified shared memory and fast communication among different types of cores (e.g., CPU cores and GPU cores). Practically, the heterogeneous system architecture (HSA)~\cite{HSA:Manual}, OpenCL~\cite{Munshi:The} and CUDA~\cite{Negrut:Unified} have supplied toolkits for programming heterogeneous processors. Our work described in this paper particularly focuses on accelerating CSR-based SpMV on CPU-GPU heterogeneous processors. The main idea of our SpMV algorithm is to first speculatively execute SpMV on a heterogeneous processor's GPU cores, targeting high-throughput computation, and then locally re-arrange the resulting vector on the CPU cores of the same chip for low-latency memory access. To achieve load-balanced computation in the first step and to utilize both CPU and GPU cores, we improved the conventional segmented sum method by generating auxiliary information (e.g., the segment descriptor) at runtime and recognizing empty rows on-the-fly.
Compared to the row block methods for the CSR-based SpMV, our method delivers load-balanced computation to achieve higher throughput. Compared with the classic segmented sum method for the CSR-based SpMV, our approach decreases the overhead of global synchronization and removes pre- and post-processing regarding empty rows. This paper makes the following contributions: \begin{itemize} \item We propose a fast CSR-based SpMV algorithm that efficiently utilizes the different types of cores in emerging CPU-GPU heterogeneous processors. \item We develop a speculative segmented sum algorithm by generating auxiliary information on-the-fly and eliminating costly pre- and post-processing on empty rows. \item We evaluate our CSR-based SpMV algorithm on a widely-adopted benchmark suite and achieve stable SpMV performance independent of the sparsity structure of the input matrix. \end{itemize} On a benchmark suite composed of 20 matrices with diverse sparsity structures, our approach greatly outperforms the row block methods for the CSR-based SpMV running on GPU cores of heterogeneous processors. On an Intel heterogeneous processor, the experimental results show that our method obtains up to 6.90x and on average 2.57x speedup over an OpenCL implementation of the CSR-vector algorithm in CUSP running on its GPU cores. On an AMD heterogeneous processor, our approach delivers up to 16.07x (14.43x) and on average 5.61x (4.47x) speedup over the fastest single (double) precision CSR-based SpMV algorithm from PARALUTION and an OpenCL version of CUSP running on its GPU cores. On an nVidia heterogeneous processor, our approach delivers up to 5.91x (6.20x) and on average 2.69x (2.53x) speedup over the fastest single (double) precision CSR-based SpMV algorithm from cuSPARSE and CUSP running on its GPU cores. The paper is organized as follows. We first introduce background knowledge about the CSR format, the CSR-based SpMV algorithms and heterogeneous processors in Section 2.
Then we describe our CSR-based SpMV algorithm in Section 3. Moreover, we give and analyze our experimental results in Section 4. We review the related approaches in Section 5. \section{Background and Motivations} \subsection{CSR Format and CSR-based SpMV Algorithms} The CSR format of a sparse matrix consists of three separate arrays: (1) the row pointer array of size $m+1$, where $m$ is the number of rows of the matrix, (2) the column index array of size $nnz$, where $nnz$ is the number of nonzero entries of the matrix, and (3) the value array of size $nnz$. Hence the overall space complexity of the CSR format is $O(m+nnz)$. Below we show a sparse matrix $A$ of size $6\times 6$ and its CSR representation. \[ A = \begin{bmatrix} a & 0 & b & 0 & 0 & c \\ d & e & f & 0 & 0 & 0 \\ 0 & 0 & g & 0 & h & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & i & 0 \\ 0 & 0 & j & k & l & 0 \end{bmatrix} \] \begin{equation} \begin{split} row\; pointer & = \begin{bmatrix} 0, 3, 6, 8, 8, 9, 12\end{bmatrix} \\ column\; index & = \begin{bmatrix}0, 2, 5,\;\; 0, 1, 2,\;\; 2, 4,\;\; 4,\;\; 2, 3, 4\end{bmatrix} \\ value & = \begin{bmatrix}a, b, c,\;\; d, e, f,\;\; g, h,\;\; i,\;\; j, k, l\end{bmatrix}. \nonumber \end{split} \end{equation} Assume we have an input dense vector \begin{equation} \begin{split} x^T & = \begin{bmatrix}1, 2, 3, 4, 5, 6\end{bmatrix}.\nonumber \end{split} \end{equation} We can then obtain a dense vector $y$ by multiplying the sparse matrix $A$ by the vector $x$: \begin{equation} \begin{split} y^T & = \begin{bmatrix}a+3b+6c, \;\;d+2e+3f, \;\;3g+5h, \;\;0, \;\;5i, \;\;3j+4k+5l\end{bmatrix}.
\nonumber \end{split} \end{equation} The straightforward way to multiply $A$ with $x$ on a multicore processor is to assign a row block (i.e., multiple rows) to each core. Since the row pointer array records the offsets of the column indices and values of the nonzero entries, each core can easily locate its data in $A$ and $x$. Generating the corresponding entries of $y$ then merely needs some simple multiply and add operations. For example, assume that a six-core processor is used for the above SpMV operation and that each core is responsible for one row. We can notice that the cores calculating the first, the second and the sixth rows are busier than the other cores. Meanwhile, the core processing the fourth row is actually idle while the other cores are working. Therefore, the row block method cannot naturally handle load balance on multicore processors. On co-processors composed of a large number of lightweight single instruction, multiple data (SIMD) units, this problem can heavily degrade the performance of the SpMV operation. Even though many strategies, such as vectorization~\cite{Bell:Implementing, Su:clSpMV, Williams:Optimization}, data streaming~\cite{Greathouse:Efficient}, memory coalescing~\cite{Deng:Taming}, static or dynamic binning~\cite{Greathouse:Efficient, Ashari:Fast}, Dynamic Parallelism~\cite{Ashari:Fast} and dynamic row distribution~\cite{Liu:LightSpMV}, have been proposed for the row block method, it is still impossible to achieve nearly perfect load balancing in a general sense, simply because row sizes are irregular and unpredictable. The other method of computing the CSR-based SpMV is to utilize a segmented sum algorithm. This method first generates a segment descriptor of size $nnz$. The descriptor marks the first nonzero entry of each non-empty row as 1 (or equivalently \texttt{TRUE}) and the other nonzero entries as 0 (or equivalently \texttt{FALSE}).
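For concreteness, the CSR layout and the row-wise multiplication of the row block method described above can be sketched serially (names are illustrative, not from the paper's code):

```python
def spmv_row_block(row_ptr, col_idx, val, x):
    """Serial reference for the row block CSR SpMV: row i of A
    contributes y[i] = sum(val[j] * x[col_idx[j]]) over its nonzeros,
    with j ranging over [row_ptr[i], row_ptr[i+1])."""
    y = [0.0] * (len(row_ptr) - 1)
    for i in range(len(y)):
        for j in range(row_ptr[i], row_ptr[i + 1]):
            y[i] += val[j] * x[col_idx[j]]
    return y
```

Run on the $6\times 6$ example above (with $a,\dots,l$ replaced by $1,\dots,12$), rows 0, 1 and 5 each do three products while row 3 does none, which is exactly the per-row imbalance the text discusses.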
Using the above 6-by-6 sparse matrix as an example, we have \begin{equation} \begin{split} segment\; descriptor & = [1, 0, 0,\;\; 1, 0, 0,\;\; 1, 0,\;\; 1,\;\; 1, 0, 0]. \nonumber \end{split} \end{equation} Then an element-wise product array of size $nnz$ is allocated and filled by calculating \begin{equation} \begin{split} product[i] = x[column\; index[i]]\times value[i], i\in[0,nnz). \nonumber \end{split} \end{equation} The third step conducts a segmented sum operation on the product array by using the segment information stored in the segment descriptor. Finally, the sum of each segment is stored to a contiguous location in $y$. We can see that the segmented sum method achieves nearly perfect load balance in the nonzero entry space. However, this method has two obvious drawbacks: (1) since the segment descriptor is binary, this method is unable to recognize empty rows, thus a pre-processing step (squeezing out possible empty rows) is required for calculating a ``clean'' row pointer array, and a post-processing step (adding zeros to the proper locations) is needed for a correct resulting vector $y$, and (2) this method requires more expensive global synchronizations and global memory accesses than the row block method (which needs only a single kernel launch). Therefore, in practice, the segmented sum method is not necessarily faster than the row block methods. \subsection{Heterogeneous Processors} Compared to homogeneous chip multiprocessors such as CPUs and GPUs, heterogeneous processors combine different types of cores into one chip and thus offer more flexibility in the architecture design space. Because of the mature CPU and GPU architectures and applications, the CPU-GPU integrated heterogeneous processor with multiple instruction set architectures (ISAs) is the most widely adopted choice.
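The three steps of the segmented sum method described above (segment descriptor, element-wise products, segmented reduction) can be sketched serially as follows; the function name is ours, and the sketch deliberately keeps the method's empty-row blind spot:

```python
def spmv_segmented_sum(row_ptr, col_idx, val, x):
    """Serial sketch of the segmented sum CSR SpMV.  Returns one sum per
    non-empty row; empty rows are NOT represented in the output, which
    illustrates why the method needs pre-/post-processing."""
    nnz = len(val)
    # Step 1: binary segment descriptor (1 at the first nonzero of a row).
    descriptor = [0] * nnz
    for start in row_ptr[:-1]:
        if start < nnz:
            descriptor[start] = 1
    # Step 2: element-wise products.
    product = [val[i] * x[col_idx[i]] for i in range(nnz)]
    # Step 3: segmented sum over the product array.
    sums = []
    for i in range(nnz):
        if descriptor[i]:
            sums.append(product[i])
        else:
            sums[-1] += product[i]
    return sums
```

Run on the 6-by-6 example (with $a,\dots,l$ replaced by $1,\dots,12$), this returns only five sums: the zero entry for the empty fourth row is silently missing, which is exactly drawback (1) in the text.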
Representatives of this model include AMD Accelerated Processing Units (APUs)~\cite{Branover:Llano, AMD:Compute}, Intel multi-CPU and GPU system-on-a-chip (SoC) devices~\cite{Damaraju:22nm}, the nVidia Echelon heterogeneous GPU architecture~\cite{Keckler:GPUs}, and many mobile processors (e.g., nVidia Tegra~\cite{nVidia:Tegra} and Qualcomm Snapdragon~\cite{Qualcomm:Snapdragon}). Figure~\ref{spmv.parco.fig.hcmps} shows two block diagrams of the heterogeneous processors used as the experimental testbed in this paper. In general, a heterogeneous processor consists of four major parts: (1) a group of CPU cores with hardware-controlled caches, (2) a group of GPU cores with shared command processors, software-controlled scratchpad memory and hardware-controlled caches, (3) a shared memory management unit, and (4) shared global dynamic random-access memory (DRAM). The last level cache of the two types of cores can be separate, as shown in Figure~\ref{spmv.parco.fig.hcmps}(a), or shared, as shown in Figure~\ref{spmv.parco.fig.hcmps}(b). \begin{figure}[h!t] \centering \subfloat[Heterogeneous processor with separate last level cache]{\epsfig{file=hCMP_sepCache.eps, trim=0in 3.6in 0in 0in, width=3.8in}} \vspace{3mm} \qquad \subfloat[Heterogeneous processor with CPU-GPU shared last level cache]{\epsfig{file=hCMP_shaCache.eps, trim=0in 3.6in 0in 0in, width=3.8in}} \caption{Block diagrams of two representative heterogeneous processors.} \label{spmv.parco.fig.hcmps} \end{figure} The CPU cores have higher single-thread performance due to out-of-order execution, branch prediction and large caches. The GPU cores execute massively parallel lightweight threads on SIMD units for higher aggregate throughput. The two types of compute units have completely different ISAs and separate cache sub-systems.
In this paper, our experiments run on three different platforms (shown in Table~\ref{spmv.parco.tab.testbed}); the platforms from AMD and nVidia are based on the design of Figure~\ref{spmv.parco.fig.hcmps}(a), while the Intel platform uses the design of Figure~\ref{spmv.parco.fig.hcmps}(b). Note that in the current AMD APU architecture, although the two types of cores have separate last level caches, the GPU cores are able to snoop the last level cache on the CPU side. Compared to loosely-coupled CPU-GPU heterogeneous systems, the two types of cores in a heterogeneous processor share one single unified address space instead of using separate address spaces (i.e., system memory space and GPU device memory space). One obvious benefit is avoiding data transfer through connection interfaces (e.g., the PCIe link), which is one of the most well-known bottlenecks of co-processor computing~\cite{Gregg:Where}. Additionally, GPU cores can access more memory by paging memory to and from disk. Further, the consistent pageable shared virtual memory can be fully or partially coherent, meaning that much more efficient CPU-GPU interactions are possible due to the elimination of heavyweight synchronization (i.e., flushing and GPU cache invalidation). Currently, programming on the unified address space and low-overhead kernel launch are supported by HSA~\cite{HSA:Manual}, OpenCL~\cite{Munshi:The} and CUDA~\cite{Negrut:Unified}. \section{New Sparse Matrix-Vector Multiplication Algorithm} \subsection{Data Decomposition} We first evenly decompose the nonzero entries of the input matrix into multiple small tiles for load-balanced data parallelism. Here we define a tile as a 2D array of size $W\times T$. The width $T$ is the size of a thread-bunch, which is the minimum SIMD execution unit in a given vector processor. It is also known as a wavefront in AMD GPUs or a warp in nVidia GPUs. The height $W$ is the workload (i.e., the number of nonzero entries to be processed) of a thread.
A tile is a basic work unit in the matrix-based segmented sum method~\cite{Sengupta:Scan, Dotsenko:Fast}, which is used as a building block in our SpMV algorithm. Actually, the term ``tile'' is equivalent to the term ``matrix'' used in the original description of the segmented scan algorithms~\cite{Sengupta:Scan, Dotsenko:Fast}. Here we use ``tile'' to avoid confusion between a work unit of matrix shape and a sparse matrix in SpMV. Since a thread-bunch can be too small (e.g., as low as 8 in current Intel GPUs) to amortize scheduling cost, we combine multiple thread-bunches into one thread-group (i.e., a work-group in OpenCL terminology or a thread block in CUDA terminology) for possibly higher throughput. We define $B$ to denote the number of thread-bunches in one thread-group. Additionally, we let each thread-bunch compute $S$ contiguous tiles. Thus higher on-chip resource reuse and faster global synchronization are expected. Therefore, each thread-group deals with $BSWT$ nonzero entries, and the whole nonzero entry space of size $nnz$ can be evenly assigned to $\lceil nnz / (BSWT)\rceil$ thread-groups. Figure~\ref{spmv.parco.fig.datadecomposation} shows an example of the data decomposition. In this example, we set $B=2$, $S=2$, $W=4$, and $T=2$. Thus each thread-group is responsible for 32 nonzero entries, and $\lceil nnz / 32\rceil$ thread-groups are dispatched. \begin{figure}[ht] \centering \epsfig{file=datadecom.eps, trim=0in 5in 0.5in 0.1in, width=5in} \caption{Data decomposition on the nonzero entry space. $nnz$ nonzero entries are assigned to multiple thread-groups. In this case, each thread-group consists of 2 thread-bunches (i.e., $B=2$). The number of threads in each thread-bunch is equal to 2 (i.e., $T=2$). The workload per thread is 4 (i.e., $W=4$).
The number of iterative steps in each thread-bunch is 2 (i.e., $S=2$).} \label{spmv.parco.fig.datadecomposation} \end{figure} \subsection{Algorithm Description} Our CSR-based SpMV is based on the fundamental segmented sum algorithm, which guarantees load-balanced computation in the nonzero entry space. While utilizing segmented sum as a building block in our SpMV algorithm, we have three main performance considerations: (1) the segment descriptor needs to be generated in on-chip memory at runtime to reduce the overhead of global memory access, (2) empty rows must be recognized and processed without calling specific pre- and post-processing functions, and (3) both types of cores in a heterogeneous processor should be exploited. Hence we improve the basic segmented sum method to meet the above performance requirements. The algorithm framework includes two main stages: (1) a speculative execution stage, and (2) a checking prediction stage. The first stage speculatively executes the SpMV operation and generates a possibly incorrect resulting vector $y$. Here the term ``incorrect'' means that the layout of the entries in $y$ can be incorrect, but every entry is guaranteed to be numerically correct. Then in the second stage we check whether or not the speculative execution was successful. If the prediction is wrong, a data re-arrangement is launched to obtain a completely correct $y$. We first give an example of our algorithm and use it in the following algorithm description. Figure~\ref{spmv.parco.fig.example} plots this example. The input sparse matrix includes 12 rows (2 of them are empty) and 48 nonzero entries. We set $B$ to 1, $S$ to 2, $T$ to 4 and $W$ to 6. This setting means that one thread-group is composed of one thread-bunch of size 4; each thread-bunch runs 2 iteration steps.
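The decomposition and dispatch rule can be summarized in a few lines (a minimal serial sketch; the function name is ours, not part of the implementation):

```python
import math

def decompose(nnz, B, S, W, T):
    """Partition the nonzero-entry space [0, nnz) into W*T-entry tiles.

    Each thread-bunch covers S consecutive tiles and each thread-group
    contains B thread-bunches, so one thread-group processes B*S*W*T
    nonzero entries.  Returns the number of thread-groups dispatched
    and the tile boundaries in the nonzero-entry space.
    """
    tile_size = W * T
    num_groups = math.ceil(nnz / (B * S * tile_size))
    num_tiles = math.ceil(nnz / tile_size)
    boundaries = [min(i * tile_size, nnz) for i in range(num_tiles + 1)]
    return num_groups, boundaries

# The running example: B=1, S=2, W=6, T=4, nnz=48
# -> one thread-group and two tiles with boundaries [0, 24, 48].
groups, bounds = decompose(48, 1, 2, 6, 4)
```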
Before GPU kernel launch, three containers, \textit{synchronizer}, \textit{dirty\_counter} and \textit{speculator}, are pre-allocated in DRAM for global synchronization and speculative execution. Algorithm~\ref{spmv.parco.alg.spmv} in Appendix A lists the pseudocode of the first stage. \begin{figure}[!t] \centering \epsfig{file=example.eps, trim=0.1in 2.1in 1.5in 0in, width=5.2in} \caption{An example of our CSR-based SpMV algorithm. The input sparse matrix contains 48 nonzero entries in 12 rows (10 non-empty rows and 2 empty rows). One thread-bunch composed of 4 threads is launched in this 2-iteration process. The arrays \textit{synchronizer} and \textit{speculator} store tuples (shown with angular brackets).} \label{spmv.parco.fig.example} \end{figure} The \textbf{speculative execution stage} includes the following steps: (1) positioning a range of row indices for the nonzero entries in a given tile, (2) calculating the segment descriptor based on that range, (3) conducting segmented sum on the tile, and (4) saving the partial sums to the computed index range in vector $y$. This stage also has some input-triggered operations, such as labeling a tile with empty rows. First, each thread-bunch executes a binary search of its $S+1$ tile boundaries on the CSR row pointer array. Then we obtain the corresponding row indices and store them in a scratchpad array \textit{tile\_offset} of size $S+1$. The results of the binary search are the starting and ending row indices of the nonzero entries in each tile. Thus each tile knows the locations to store the generated partial sums. Lines 3--7 of Algorithm~\ref{spmv.parco.alg.spmv} give the pseudocode of this step. In our example shown in Figure~\ref{spmv.parco.fig.example}, the 2 tiles of size 24 have 3 boundaries \{0, 24, 48\}. The results of the binary search of \{0, 24, 48\} on the CSR row pointer array are \{0, 4, 12\}. Note that the binary search needs to return the rightmost match if multiple slots have the same value.
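The rightmost-match binary search of step (1) can be sketched with Python's \texttt{bisect} (an illustrative sketch; the first four interior row-pointer values below are assumptions, while the tail follows the example in the text):

```python
from bisect import bisect_right

def rightmost_search(row_ptr, key):
    """Index of the rightmost slot with row_ptr[i] <= key.

    When several slots hold the same value (empty rows), the
    rightmost one is returned, as the algorithm requires.
    """
    return bisect_right(row_ptr, key) - 1

# Row pointer of the 12-row example; the first four interior
# values are assumptions, the tail matches the text.
row_ptr = [0, 5, 11, 15, 19, 27, 29, 29, 34, 37, 37, 44, 48]
tile_boundaries = [0, 24, 48]
tile_offset = [rightmost_search(row_ptr, b) for b in tile_boundaries]
# -> [0, 4, 12]: tile 0 covers rows 0..4, tile 1 covers rows 4..12
```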
Then each thread-bunch executes an iteration of $S$ steps. Lines 8--59 of Algorithm~\ref{spmv.parco.alg.spmv} give the pseudocode of this step. Each iteration deals with one tile. By calculating the offsets between the left boundary of a tile and the covered row indices, a local segment descriptor is generated (lines 14--21 in Algorithm~\ref{spmv.parco.alg.spmv}). For example, the left boundary of the second tile is 24 and its row index range is 4--12. We need to compute the offsets between 24 and the row pointer values \{19, 27, 29, 29, 34, 37, 37, 44, 48\}. Then we obtain a group of offsets \{-5, 3, 5, 5, 10, 13, 13, 20, 24\}. After removing duplicate values and out-of-range values on the left and the right sides, the effective part \{3, 5, 10, 13, 20\} in fact implies the local segment descriptor of the current tile. We can easily convert it to a binary expression \{0, 0, 0, 1, 0, 1, 0, ... , 0, 0, 1, 0, 0, 0\} through a scatter operation in on-chip scratchpad memory. Moreover, since each tile is an independent work unit, the first bit of its segment descriptor should be \texttt{TRUE}. Thus the final expression becomes \{1, 0, 0, 1, 0, 1, 0, ... , 0, 0, 1, 0, 0, 0\}. In Figure~\ref{spmv.parco.fig.example}, the filled and empty circles are the heads (i.e., 1s or \texttt{TRUE}s) and bodies (i.e., 0s or \texttt{FALSE}s) of segments, respectively. While generating the segment descriptor, each thread detects whether or not its right neighbor wants to write to the same slot. If so (as with the duplicate offsets \{..., 5, 5, ...\} and \{..., 13, 13, ...\} in the above example), this tile must contain at least one empty row, since an empty row is expressed as two contiguous indices of the same value in the CSR row pointer array. We then mark this tile as ``dirty'' (line 19 in Algorithm~\ref{spmv.parco.alg.spmv}).
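The descriptor construction and empty-row detection for the second tile can be modeled serially as follows (an illustrative sketch, not the scatter-based kernel; the first four interior row-pointer values are assumptions):

```python
def segment_descriptor(row_ptr, first_row, last_row, tile_left, tile_size):
    """Build the local segment descriptor of one tile and detect empty rows.

    Offsets are row-pointer values shifted by the tile's left boundary;
    in-range offsets mark segment heads.  A duplicated offset means two
    contiguous equal row-pointer values, i.e., an empty row, so the tile
    is flagged as dirty.
    """
    offsets = [row_ptr[r] - tile_left for r in range(first_row, last_row + 1)]
    in_range = [o for o in offsets if 0 < o < tile_size]
    dirty = len(in_range) != len(set(in_range))
    descriptor = [False] * tile_size
    descriptor[0] = True                # each tile starts a new segment
    for o in set(in_range):
        descriptor[o] = True
    return descriptor, dirty

# Second tile of the running example: left boundary 24, rows 4..12.
row_ptr = [0, 5, 11, 15, 19, 27, 29, 29, 34, 37, 37, 44, 48]
desc, dirty = segment_descriptor(row_ptr, 4, 12, 24, 24)
# dirty is True: offsets 5 and 13 are duplicated (two empty rows)
```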
Further, the \textit{dirty\_counter} array stored in DRAM is incremented by an atomic operation, and this tile's offsets are recorded in the \textit{speculator} array (lines 53--58 in Algorithm~\ref{spmv.parco.alg.spmv}). In our example, \textit{dirty\_counter} is 1 and the \textit{speculator} array has a pair of offsets \{$\langle$4, 12$\rangle$\} (shown with angular brackets in Figure~\ref{spmv.parco.fig.example}). Then we calculate and save the element-wise products in scratchpad memory, based on the nonzero entries' column indices and values and the corresponding values in the vector $x$. Lines 22--26 of Algorithm~\ref{spmv.parco.alg.spmv} show the pseudocode of this step. When finished, we transmit the sum of the last segment to an intermediate space for the next iteration (lines 27--31 in Algorithm~\ref{spmv.parco.alg.spmv}). In our example, the first tile's last value $5e$ is transmitted to the next tile. Then we execute the matrix-based segmented sum (lines 32--33) on the tile. Because the segmented sum algorithm used here is very similar to the method described in~\cite{Blelloch:Segmented}, we refer the reader to~\cite{Blelloch:Segmented} and several previous GPU segmented sum algorithms~\cite{Sengupta:Scan, Dotsenko:Fast} for details. Note, however, that our method differs from~\cite{Blelloch:Segmented} in one respect: we store the partial sums in a compact pattern (i.e., values are arranged in order from the first location in the thread work space) rather than saving them to the locations of the corresponding segment heads. For this reason, we need to record the starting position and the number of partial sums. Then we can use an ordinary exclusive scan operation (lines 34--35) to obtain contiguous indices of the partial sums in $y$. In Figure~\ref{spmv.parco.fig.example}, we can see that the partial sums (expressed as filled hexagons) are aggregated in this compact fashion.
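The compact segmented sum over one tile can be modeled serially as follows (an illustrative sketch of the compact layout and the carry to the next tile, not the parallel matrix-based implementation):

```python
def segmented_sum_compact(products, heads, carry_in=0.0):
    """Serial model of the per-tile segmented sum in compact layout.

    Sums of closed segments are emitted in order from the first slot;
    the still-open sum of the last segment is returned as the carry for
    the next tile (like the value 5e in the text's example) instead of
    being emitted.  The last tile's carry must be flushed by the caller.
    """
    sums, acc = [], carry_in
    for i, (h, v) in enumerate(zip(heads, products)):
        if h and i > 0:          # a new head closes the previous segment
            sums.append(acc)
            acc = 0.0
        acc += v
    return sums, acc             # acc is the carry to the next tile

# Three segments: [1,2], [3,4,5] and the open segment [6] (the carry).
out = segmented_sum_compact([1., 2., 3., 4., 5., 6.],
                            [True, False, True, False, False, True])
```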
Note that empty hexagons are intermediate partial sums, which are already added to the correct positions of segment heads. Finally, we store the partial sums to the known locations in the resulting vector. Lines 36--52 of Algorithm~\ref{spmv.parco.alg.spmv} show the pseudocode. As an exception, the sum of the first segment in a thread-bunch is stored to the \textit{synchronizer} array (lines 40--43), since the first row of each thread-bunch may cross multiple thread-bunches. This is a well known issue while conducting basic primitives, such as reduction and prefix-sum scan, using more than one thread-group that cannot communicate with each other. In fact, an atomic add operation could be utilized to avoid the global synchronization. But we choose not to use relatively slow global atomic operations and instead let a CPU core finish the global synchronization later on. Lines 62--68 of Algorithm~\ref{spmv.parco.alg.spmv} show the corresponding pseudocode. Since the problem size (i.e., $\lceil nnz / (SWT)\rceil$) can be too small to saturate a GPU core, a CPU core is in fact faster for accessing short arrays linearly stored in DRAM. Taking the first tile in Figure~\ref{spmv.parco.fig.example} as an example, its first partial sum is $3a$, which is stored with its global index 0 to the \textit{synchronizer}. After that, the value $3a$ is added to position 0 of $y$. When the above steps are complete, the resulting vector contains numerically correct values, except that some values generated by dirty tiles are not in their correct locations. In Figure~\ref{spmv.parco.fig.example}, we can see that after synchronization, vector $y$ already holds its final values, but the entries $5g$, $3h$, $7i$ and $4j$ generated by the second tile are located in wrong slots. The \textbf{checking prediction stage} first checks the value of the \textit{dirty\_counter} array.
If it is zero, the prediction was correct and the result of the first stage is the final result; if it is not zero, the entries generated by dirty tiles are scattered to their correct positions in the resulting vector. In this procedure, the CSR row pointer array must be read to obtain the correct row distribution. Again, we use a CPU core for this irregular memory access, which is better suited to the cache sub-systems of CPUs. In our example, the entries $5g$, $3h$, $7i$ and $4j$ are moved to their correct positions. Then the SpMV operation is done. \iffalse \begin{algorithm} \caption{The pseudo code of checking prediction stage of the CSR-based SpMV} \begin{algorithmic}[1] \Function{checking\_prediction\_cpu}{$ $} \For {$i=0$ to $dirty\_counter[0]$} \State $start \gets speculator[i].start$ \State $stop \gets speculator[i].stop$ \State $y[synchronizer[i].index] \gets y[synchronizer[i].index] + synchronizer[i].value$ \EndFor \EndFunction \end{algorithmic} \end{algorithm} \fi \subsection{Complexity Analysis} Our CSR-based SpMV algorithm pre-allocates three auxiliary arrays, \textit{synchronizer}, \textit{dirty\_counter} and \textit{speculator}, in DRAM. The space complexity of \textit{synchronizer} is $\lceil nnz / (SWT)\rceil$, equivalent to the number of thread-bunches. The size of \textit{dirty\_counter} is a constant 1. The \textit{speculator} array needs a size of $\lceil nnz / (WT)\rceil$, equivalent to the number of tiles. Since $W$ and $T$ are typically set to relatively large values, the auxiliary arrays only slightly increase the overall space requirement. For each thread-bunch, we execute $S+1$ binary searches in the row pointer array of size $m+1$. Thus the work complexity of this part is $O(\lceil nnz / (SWT) \rceil \times (S+1)\times \log_2 (m+1))=O(nnz \log_2 (m) / WT)$. On the whole, generating the segment descriptors needs $O(m)$ time. Collecting the element-wise products needs $O(nnz)$ time.
For each tile, segmented sum needs $O(WT+\log_2 (T))$ time. Thus all segmented sum operations need $O(\lceil nnz / (WT) \rceil (WT+\log_2 (T)))=O(nnz + nnz \log_2 (T) / WT)$ time. Saving the entries to $y$ needs $O(m)$ time. Synchronization takes $O(\lceil nnz / (SWT)\rceil)=O(nnz/SWT)$ time. The possible re-arrangement needs $O(m)$ time in the worst case. Thus the overall work complexity of our CSR-based SpMV algorithm is $O(m+nnz+nnz(\log_2 (m)+\log_2 (T))/WT)$. \subsection{Implementation Details} Based on the above analysis, we can see that when the input matrix is fixed, the cost of our SpMV algorithm only depends on two parameters: $T$ and $W$. In our algorithm implementation, $T$ is set to the SIMD length of the processor used. Choosing $W$ requires considering the capacity of on-chip scratchpad memory. The other two parameters $B$ and $S$ are empirically chosen. Table~\ref{spmv.parco.tab.parameters} shows the selected parameters. Note that double precision is not currently supported in the Intel OpenCL implementation for its GPUs. \begin{table*}[!ht] \small \caption{The selected parameters} \label{spmv.parco.tab.parameters} \centering \begin{tabular}{ >{\centering}m{1.5cm} >{\centering}m{1.25cm} >{\centering}m{1.25cm} >{\centering}m{1.25cm} >{\centering}m{1.25cm} >{\centering}m{1.25cm} } \hline Processor & Intel & \multicolumn{2}{c}{AMD} & \multicolumn{2}{c}{nVidia} \tabularnewline \hline Precision & 32-bit single & 32-bit single & 64-bit double & 32-bit single & 64-bit double \tabularnewline \hline $T$ & 8 & 64 & 64 & 32 & 32 \tabularnewline $W$ & 16 & 16 & 8 & 8 & 4 \tabularnewline $B$ & 4 & 2 & 2 & 5 & 5 \tabularnewline $S$ & 6 & 2 & 5 & 7 & 7 \tabularnewline \hline \end{tabular} \end{table*} We implement the first stage of our algorithm in OpenCL for the Intel and AMD platforms (and CUDA for the nVidia platform) for GPU execution, and the second stage in standard C running on the CPU part.
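Abstracting away tiling and synchronization, the interplay of the two stages can be modeled serially (illustrative Python, not the OpenCL/CUDA implementation; straightforward per-segment sums stand in for the matrix-based segmented sum):

```python
def speculative_stage(m, row_ptr, col_idx, vals, x):
    """Stage 1 model: emit compact per-segment sums.  Empty rows emit
    nothing, so with empty rows present the layout of y is wrong,
    although every value is numerically correct."""
    prod = [vals[j] * x[col_idx[j]] for j in range(row_ptr[m])]
    sums = [sum(prod[row_ptr[r]:row_ptr[r + 1]])
            for r in range(m) if row_ptr[r + 1] > row_ptr[r]]
    dirty = len(sums) != m                 # at least one empty row
    return sums + [0.0] * (m - len(sums)), dirty

def checking_stage(y, dirty, m, row_ptr):
    """Stage 2 model: if the prediction failed, scatter the compact
    values to the rows the CSR row pointer actually prescribes."""
    if not dirty:
        return y
    fixed, k = [0.0] * m, 0
    for r in range(m):
        if row_ptr[r + 1] > row_ptr[r]:
            fixed[r], k = y[k], k + 1
    return fixed

# A 4x3 matrix with one empty row (row 1).
row_ptr, col_idx = [0, 2, 2, 3, 5], [0, 1, 1, 0, 2]
vals, x = [1., 2., 3., 4., 5.], [1.0, 1.0, 1.0]
y, dirty = speculative_stage(4, row_ptr, col_idx, vals, x)
y = checking_stage(y, dirty, 4, row_ptr)   # [3.0, 0.0, 3.0, 9.0]
```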
Since our algorithm requires the CPU and GPU to share some arrays, we allocate all arrays in the Shared Virtual Memory supported by OpenCL for the best performance. On the nVidia platform, we use the Unified Memory in the CUDA SDK. \section{Experimental Results} \subsection{Experimental Environments} We use three heterogeneous processors, Intel Core i3-5010U, AMD A10-7850K APU and nVidia Tegra K1, for evaluating the SpMV algorithms. Table~\ref{spmv.parco.tab.testbed} shows the specifications of the three processors. All of them are composed of multiple CPU cores and GPU cores. The two types of cores in the Intel heterogeneous processor share a 3 MB last level cache. In contrast, the GPU cores in the AMD heterogeneous processor can snoop the L2 cache of size 4 MB on the CPU side. Unlike those, the cache systems of the CPU part and the GPU part in the nVidia Tegra processor are completely separate. Note that currently the Intel GPU can run OpenCL programs only on the Microsoft Windows operating system. Also note that we use kB, MB and GB to denote $2^{10}$, $2^{20}$ and $2^{30}$ bytes, respectively, and use GFlop to denote $10^9$ flops.
\begin{table*}[!ht] \tiny \caption{The test environments used in our experiments} \label{spmv.parco.tab.testbed} \centering \begin{tabular}{ m{2.5cm} >{\centering}m{1.3cm} >{\centering}m{1.3cm} >{\centering}m{1.3cm} >{\centering}m{1.3cm} >{\centering}m{1.3cm} >{\centering}m{1.3cm} } \hline Processor & \multicolumn{2}{c}{Intel Core i3-5010U} & \multicolumn{2}{c}{AMD A10-7850K APU} & \multicolumn{2}{c}{nVidia Tegra K1} \tabularnewline \hline Core type & x86 CPU & GPU &x86 CPU &GPU &ARM CPU &GPU\tabularnewline Codename & Broadwell & HD 5500 & Steamroller &GCN & Cortex A15 &Kepler\tabularnewline Cores @ clock (GHz) & 2 @ 2.1 & 3 @ 0.9 & 4 @ 3.7 & 8 @ 0.72 & 4 @ 2.3 & 1 @ 0.85\tabularnewline SP flops/cycle & 2$\times$32 & 3$\times$128 & 4$\times$8 & 8$\times$128 & 4$\times$8 & 1$\times$384\tabularnewline SP peak (GFlop/s) & 134.4 & 345.6 & 118.4 & 737.3 & 73.6 & 327.2\tabularnewline DP flops/cycle & 2$\times$16 & 3$\times$32 & 4$\times$4 & 8$\times$8 & 2$\times$2 & 1$\times$16\tabularnewline DP peak (GFlop/s) & 67.2 & 86.4 & 59.2 & 46.1 & 18.4 & 13.6\tabularnewline L1 data cache & 4$\times$32 kB & 3$\times$4 kB & 4$\times$16 kB & 8$\times$16 kB & 4$\times$32 kB & 1$\times$16 kB\tabularnewline L2 cache & 4$\times$256 kB &3$\times$24 kB & 2$\times$2 MB & Unreleased & 2 MB & 128 kB\tabularnewline L3 cache & N/A &384 kB & N/A & N/A & N/A & N/A\tabularnewline Scratchpad & N/A & 3$\times$64 kB & N/A & 8$\times$64 kB & N/A & 1$\times$48 kB\tabularnewline Shared last level cache & \multicolumn{2}{c}{3 MB} & \multicolumn{2}{c}{N/A} & \multicolumn{2}{c}{N/A}\tabularnewline DRAM & \multicolumn{2}{c}{Dual-channel DDR3-1600} & \multicolumn{2}{c}{Dual-channel DDR3-1600} & \multicolumn{2}{c}{Single-channel DDR3L-1866} \tabularnewline DRAM capacity & \multicolumn{2}{c}{8 GB} & \multicolumn{2}{c}{8 GB} & \multicolumn{2}{c}{2 GB} \tabularnewline DRAM bandwidth & \multicolumn{2}{c}{25.6 GB/s} & \multicolumn{2}{c}{25.6 GB/s} & \multicolumn{2}{c}{14.9 GB/s}\tabularnewline \hline Operating 
system & \multicolumn{2}{c}{Microsoft Windows 64-bit} &\multicolumn{2}{c}{Ubuntu Linux 14.04 64-bit} &\multicolumn{2}{c}{Ubuntu Linux 14.04 32-bit} \tabularnewline GPU driver version & \multicolumn{2}{c}{15.36} & \multicolumn{2}{c}{14.501} & \multicolumn{2}{c}{r19.2} \tabularnewline Compiler & \multicolumn{2}{c}{icc 15.0.2} & \multicolumn{2}{c}{gcc 4.8.2} & \multicolumn{2}{c}{gcc 4.8.2, nvcc 6.0.1} \tabularnewline Toolkit version & \multicolumn{2}{c}{ OpenCL 2.0} & \multicolumn{2}{c}{OpenCL 2.0} & \multicolumn{2}{c}{CUDA 6.0} \tabularnewline \hline \end{tabular} \end{table*} \subsection{Benchmark Suite} To evaluate our method, we choose 20 unstructured matrices from the University of Florida Sparse Matrix Collection~\cite{Davis:The}. Table~\ref{spmv.parco.tab.benchmark} lists the main characteristics of the evaluated sparse matrices. The first 14 matrices of the benchmark suite have been widely used in previous work~\cite{Bell:Implementing, Su:clSpMV, Yan:yaSpMV, Ashari:An, Liu:CSR5, Williams:Optimization}. The last 6 matrices are chosen as representatives of irregular matrices extracted from graph applications, such as circuit simulation and optimization problems. The first 10 matrices are relatively regular, since the average and maximum values of $nnz$/row are close to each other. The other matrices are relatively irregular. In this context, `regular' describes a sparse matrix whose rows have roughly the same number of nonzero entries. In contrast, an `irregular' matrix can have some very long rows and many very short rows. For example, matrices generated from power-law graphs can have a few rows with $O(n)$ nonzero entries and many rows with $O(1)$ nonzero entries.
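The regularity criterion above can be computed directly from a CSR row pointer array (a small illustrative sketch):

```python
def nnz_per_row_stats(row_ptr):
    """Return (min, avg, max) nonzero entries per row, the quantities
    listed in the benchmark table; a matrix is 'regular' when avg and
    max are close, and an empty row shows up as a zero minimum."""
    counts = [row_ptr[i + 1] - row_ptr[i] for i in range(len(row_ptr) - 1)]
    return min(counts), sum(counts) / len(counts), max(counts)
```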
\begin{table}[!ht \tiny \renewcommand{\arraystretch}{1.3 \caption{Overview of evaluated sparse matrices} \label{spmv.parco.tab.benchmark} \centering \begin{tabular}{ c c c c c c } \hline \textbf{Name} & Dense & Protein & FEM/Spheres & FEM/Cantilever & Wind Tunnel \\ \textbf{Plot} & \begin{minipage}[c]{0.12\columnwidth} \centering \includegraphics[width=0.6in]{dense2} \end{minipage} & \begin{minipage}[c]{0.12\columnwidth} \centering \includegraphics[width=0.6in]{protein} \end{minipage} & \begin{minipage}[c]{0.12\columnwidth} \centering \includegraphics[width=0.6in]{spheres} \end{minipage} & \begin{minipage}[c]{0.12\columnwidth} \centering \includegraphics[width=0.6in]{cant} \end{minipage} & \begin{minipage}[c]{0.12\columnwidth} \centering \includegraphics[width=0.6in]{pwtk} \end{minipage} \\ \textbf{Dimensions} & 2 K $\times$ 2 K & 36 K $\times$ 36 K & 83 K $\times$ 83 K & 62 K $\times$ 62 K & 218 K $\times$ 218 K \\ \boldmath{$nnz$} & 4.0 M & 4.3 M & 6.0 M & 4.0 M & 11.6 M \\ \begin{minipage}[c]{0.15\columnwidth} \centering \boldmath{$nnz/row$} \textbf{(min, avg, max)} \end{minipage} & 2 K, 2 K, 2 K & 18, 119, 204 & 1, 72, 81 & 1, 64, 78 & 2, 53, 180 \\ \hline \textbf{Name} & FEM/Harbor & QCD & FEM/Ship & Economics & Epidemiology \\ \textbf{Plot} & \begin{minipage}[c]{0.12\columnwidth} \centering \includegraphics[width=0.6in]{harbor} \end{minipage} & \begin{minipage}[c]{0.12\columnwidth} \centering \includegraphics[width=0.6in]{QCD} \end{minipage} & \begin{minipage}[c]{0.12\columnwidth} \centering \includegraphics[width=0.6in]{ship} \end{minipage} & \begin{minipage}[c]{0.12\columnwidth} \centering \includegraphics[width=0.6in]{economics} \end{minipage} & \begin{minipage}[c]{0.12\columnwidth} \centering \includegraphics[width=0.6in]{epidemiology} \end{minipage} \\ \textbf{Dimensions} & 47 K $\times$ 47 K & 49 K $\times$ 49 K & 141 K $\times$ 141 K & 207 K $\times$ 207 K & 526 K $\times$ 526 K \\ \boldmath{$nnz$} & 2.4 M & 1.9 M & 7.8 M & 1.3 M & 2.1 M \\ 
\begin{minipage}[c]{0.15\columnwidth} \centering \boldmath{$nnz/row$} \textbf{(min, avg, max)} \end{minipage} & 4, 50, 145 & 39, 39, 39 & 24, 55, 102 & 1, 6, 44 & 2, 3, 4 \\ \hline \textbf{Name} & FEM/Accelerator & Circuit & Webbase & LP & ASIC\_680k \\ \textbf{Plot} & \begin{minipage}[c]{0.12\columnwidth} \centering \includegraphics[width=0.6in]{accelerator} \end{minipage} & \begin{minipage}[c]{0.12\columnwidth} \centering \includegraphics[width=0.6in]{scircuit} \end{minipage} & \begin{minipage}[c]{0.12\columnwidth} \centering \includegraphics[width=0.6in]{webbase-1M} \end{minipage} & \begin{minipage}[c]{0.12\columnwidth} \centering \includegraphics[width=0.6in]{lp} \end{minipage} & \begin{minipage}[c]{0.12\columnwidth} \centering \includegraphics[width=0.6in]{ASIC_680k} \end{minipage} \\ \textbf{Dimensions} & 121 K $\times$ 121 K & 171 K $\times$ 171 K & 1 M $\times$ 1 M & 4 K $\times$ 1.1 M & 683 K $\times$ 683 K \\ \boldmath{$nnz$} & 2.6 M & 959 K & 3.1 M & 11.3 M & 3.9 M \\ \begin{minipage}[c]{0.15\columnwidth} \centering \boldmath{$nnz/row$} \textbf{(min, avg, max)} \end{minipage} & 0, 21, 81 & 1, 5, 353 & 1, 3, 4.7 K & 1, 2.6 K, 56.2 K & 1, 6, 395 K \\ \hline \textbf{Name} & boyd2 & dc2 & ins2 & rajat21 & transient \\ \textbf{Plot} & \begin{minipage}[c]{0.12\columnwidth} \centering \includegraphics[width=0.6in]{boyd2} \end{minipage} & \begin{minipage}[c]{0.12\columnwidth} \centering \includegraphics[width=0.6in]{dc2} \end{minipage} & \begin{minipage}[c]{0.12\columnwidth} \centering \includegraphics[width=0.6in]{ins2} \end{minipage} & \begin{minipage}[c]{0.12\columnwidth} \centering \includegraphics[width=0.6in]{rajat21} \end{minipage} & \begin{minipage}[c]{0.12\columnwidth} \centering \includegraphics[width=0.6in]{transient} \end{minipage} \\ \textbf{Dimensions} & 466 K $\times$ 466 K & 117 K $\times$ 117 K & 309 K $\times$ 309 K & 412 K $\times$ 412 K & 179 K $\times$ 179 K \\ \boldmath{$nnz$} & 1.5 M & 766 K & 2.8 M & 1.9 M & 962 K \\ 
\begin{minipage}[c]{0.15\columnwidth} \centering \boldmath{$nnz/row$} \textbf{(min, avg, max)} \end{minipage} & 2, 3, 93 K & 1, 7, 114 K & 5, 9, 309 K & 1, 5, 119 K & 1, 5, 60 K \\ \hline \end{tabular} \end{table} \subsection{Experimental Setup} To analyze the efficiency of the proposed SpMV algorithm, we also benchmark parallel CSR-based SpMV using several other libraries or methods on CPUs and GPUs. On CPUs, we execute three CSR-based SpMV approaches: (1) an OpenMP-accelerated basic row block method, (2) the pOSKI library~\cite{Byun:pOSKI} using OSKI~\cite{Vuduc:OSKI} as a building block, and (3) Intel MKL v11.2 Update 2 in Intel Parallel Studio XE 2015 Update 2. The three approaches run on all CPU cores of the heterogeneous processors used. For the Intel CPU, we report results from MKL, since it always delivers the best performance and pOSKI is not supported on the Microsoft Windows operating system used. For the AMD CPU, we report the best results of the three libraries, since none of the three libraries outperforms all the others. For the ARM CPU included in the nVidia Tegra K1 platform, we only report results from OpenMP, since the current pOSKI and Intel MKL implementations do not support the ARM architecture. Moreover, a single-threaded na\"{\i}ve CPU implementation is included in our benchmark as well. On GPUs, we benchmark variants of the CSR-scalar and the CSR-vector algorithms proposed in~\cite{Bell:Implementing}. The OpenCL version of the CSR-scalar method is extracted from PARALUTION v1.0.0~\cite{Lukarski:PARALUTION} and evaluated on the AMD platform. The OpenCL implementation of the CSR-vector method is extracted from the semantically equivalent CUDA code in the CUSP library v0.4.0 and executed on both the Intel and the AMD platforms. On the nVidia platform, we run the CSR-based SpMV from the vendor-supplied cuSPARSE v6.0 and CUSP v0.4.0 libraries. For all tests, we run SpMV 200 times and record the averages.
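For reference, the throughput and bandwidth metrics defined below reduce to two small formulas (a sketch; the index and value type sizes are parameters, with 4-byte indices and 8-byte double precision values as assumed defaults):

```python
def gflops(nnz, runtime):
    """Throughput in GFlop/s: one multiply and one add per nonzero."""
    return 2.0 * nnz / runtime / 1e9

def gbs(m, nnz, runtime, idx_bytes=4, val_bytes=8):
    """Bandwidth in GB/s: row pointer and column indices moved, plus
    matrix values, x reads and y writes."""
    moved = (m + 1 + nnz) * idx_bytes + (nnz + nnz + m) * val_bytes
    return moved / runtime / 1e9
```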
The implicit data transfer (i.e., copying the matrix and vector data from their sources to OpenCL Shared Virtual Memory or CUDA Unified Memory) is not included in our evaluation, since the SpMV operation is normally one building block of more complex applications. All participating methods conduct general SpMV, meaning that symmetry is not considered although some input matrices are symmetric. The throughput (flops per second) is calculated by \begin{equation} \begin{split} \frac{2\times nnz}{runtime}. \nonumber \end{split} \end{equation} The bandwidth (bytes per second) is calculated by \begin{equation} \begin{split} \frac{(m + 1 + nnz) \times sizeof(idx\_type) + (nnz+nnz+m) \times sizeof(val\_type)}{runtime}. \nonumber \end{split} \end{equation} \subsection{Performance Analysis} \begin{figure}[!t] \captionsetup[subfigure]{labelformat=empty} \centering \subfloat[(a) Dense]{\epsfig{file=sp01-dense.eps, width=1.35in}} \subfloat[(b) Protein]{\epsfig{file=sp02-protein.eps, width=1.35in}} \subfloat[(c) FEM/Spheres]{\epsfig{file=sp03-spheres.eps, width=1.35in}} \subfloat[(d) FEM/Cantilever]{\epsfig{file=sp04-cant.eps, width=1.35in}} \vskip -6pt \qquad \subfloat[(e) Wind Tunnel]{\epsfig{file=sp05-windtunnel.eps, width=1.35in}} \subfloat[(f) FEM/Harbor]{\epsfig{file=sp06-harbor.eps, width=1.35in}} \subfloat[(g) QCD]{\epsfig{file=sp07-qcd.eps, width=1.35in}} \subfloat[(h) FEM/Ship]{\epsfig{file=sp08-ship.eps, width=1.35in}} \vskip -6pt \qquad \subfloat[(i) Economics]{\epsfig{file=sp09-economics.eps, width=1.35in}} \subfloat[(j) Epidemiology]{\epsfig{file=sp10-epidemiology.eps, width=1.35in}} \subfloat[(k) FEM/Accelerator]{\epsfig{file=sp11-accelerator.eps, width=1.35in}} \subfloat[(l) Circuit]{\epsfig{file=sp12-circuit.eps, width=1.35in}} \vskip -6pt \qquad \subfloat[(m) Webbase]{\epsfig{file=sp13-webbase.eps, width=1.35in}} \subfloat[(n) LP]{\epsfig{file=sp14-lp.eps, width=1.35in}} \subfloat[(o) ASIC\_680k]{\epsfig{file=sp15-asic680k.eps, width=1.35in}}
\subfloat[(p) boyd2]{\epsfig{file=sp16-boyd2.eps, width=1.35in}} \vskip -6pt \qquad \subfloat[(q) dc2]{\epsfig{file=sp17-dc2.eps, width=1.35in}} \subfloat[(r) ins2]{\epsfig{file=sp18-ins2.eps, width=1.35in}} \subfloat[(s) rajat21]{\epsfig{file=sp19-rajat21.eps, width=1.35in}} \subfloat[(t) transient]{\epsfig{file=sp20-transient.eps, width=1.35in}} \vspace{3mm} \qquad \subfloat[]{\epsfig{file=sp-legend.eps, trim=0in 4.1in 0in 0in, width=2.9in}} \subfloat[Harmonic mean]{\epsfig{file=sp-hmean.eps, width=2.35in}} \caption{Throughput (GFlop/s) of the single precision CSR-based SpMV algorithms running on the three platforms. ``CPU (Best, multi-threaded)'' shows the best results of OpenMP parallelization, pOSKI and Intel MKL on the AMD device. ``CSR-scalar'' and ``CSR-vector'' are variants of the row block algorithm on GPUs. ``bhSPARSE'' shows our CSR-based SpMV approach described in this paper.} \label{spmv.parco.fig.spthroughput} \end{figure} \begin{figure}[!t] \captionsetup[subfigure]{labelformat=empty} \centering \subfloat[(a) Dense]{\epsfig{file=dp01-dense.eps, width=1.35in}} \subfloat[(b) Protein]{\epsfig{file=dp02-protein.eps, width=1.35in}} \subfloat[(c) FEM/Spheres]{\epsfig{file=dp03-spheres.eps, width=1.35in}} \subfloat[(d) FEM/Cantilever]{\epsfig{file=dp04-cant.eps, width=1.35in}} \vskip -6pt \qquad \subfloat[(e) Wind Tunnel]{\epsfig{file=dp05-windtunnel.eps, width=1.35in}} \subfloat[(f) FEM/Harbor]{\epsfig{file=dp06-harbor.eps, width=1.35in}} \subfloat[(g) QCD]{\epsfig{file=dp07-qcd.eps, width=1.35in}} \subfloat[(h) FEM/Ship]{\epsfig{file=dp08-ship.eps, width=1.35in}} \vskip -6pt \qquad \subfloat[(i) Economics]{\epsfig{file=dp09-economics.eps, width=1.35in}} \subfloat[(j) Epidemiology]{\epsfig{file=dp10-epidemiology.eps, width=1.35in}} \subfloat[(k) FEM/Accelerator]{\epsfig{file=dp11-accelerator.eps, width=1.35in}} \subfloat[(l) Circuit]{\epsfig{file=dp12-circuit.eps, width=1.35in}} \vskip -6pt \qquad \subfloat[(m) Webbase]{\epsfig{file=dp13-webbase.eps, 
width=1.35in}} \subfloat[(n) LP]{\epsfig{file=dp14-lp.eps, width=1.35in}} \subfloat[(o) ASIC\_680k]{\epsfig{file=dp15-asic680k.eps, width=1.35in}} \subfloat[(p) boyd2]{\epsfig{file=dp16-boyd2.eps, width=1.35in}} \vskip -6pt \qquad \subfloat[(q) dc2]{\epsfig{file=dp17-dc2.eps, width=1.35in}} \subfloat[(r) ins2]{\epsfig{file=dp18-ins2.eps, width=1.35in}} \subfloat[(s) rajat21]{\epsfig{file=dp19-rajat21.eps, width=1.35in}} \subfloat[(t) transient]{\epsfig{file=dp20-transient.eps, width=1.35in}} \vspace{3mm} \qquad \subfloat[]{\epsfig{file=dp-legend.eps, trim=0in 4.1in 0in 0in, width=2.9in}} \subfloat[Harmonic mean]{\epsfig{file=dp-hmean.eps, width=2.35in}} \caption{Throughput (GFlop/s) of the double precision CSR-based SpMV algorithms running on the AMD and the nVidia platforms.} \label{spmv.parco.fig.dpthroughput} \end{figure} Figures~\ref{spmv.parco.fig.spthroughput} and~\ref{spmv.parco.fig.dpthroughput} show the throughput of the tested CSR-based approaches in single precision and double precision, respectively. In Figure~\ref{spmv.parco.fig.spthroughput}, we can see that on the Intel heterogeneous processor, our approach obtains up to 6.90x and on average 2.57x speedup over the CSR-vector method running on the GPU used. Although the speedup mainly comes from irregular matrices, our method generally does not noticeably lose performance on regular matrices. Further, compared to the CPU cores running MKL, both GPU SpMV algorithms are slower. For our algorithm, the main reason is that the integrated GPU implements scratchpad memory in its L3 cache, which has an order of magnitude higher latency than the fast scratchpad memory in nVidia or AMD GPUs. Our algorithm in fact heavily uses scratchpad memory for storing and reusing the segment descriptor, the element-wise products and other data shared by threads.
Thus even though the GPU part of the Intel heterogeneous processor has a higher single precision theoretical peak performance than its CPU part, the delivered SpMV throughput is lower than expected. For the CSR-vector method, the low performance has another reason: the small thread-bunch of size 8 dramatically increases loop overhead~\cite{Baskaran:A}, which is one of the well known bottlenecks~\cite{Fang:A} of GPU programming. In Figures~\ref{spmv.parco.fig.spthroughput} and~\ref{spmv.parco.fig.dpthroughput}, we can see that on the AMD heterogeneous processor, our method delivers up to 71.90x (94.05x) and on average 22.17x (22.88x) speedup over the single (double) precision CSR-scalar method running on the GPU used. Compared to the GPU CSR-vector method, our algorithm achieves up to 16.07x (14.43x) and on average 5.61x (4.47x) speedup. The CSR-scalar and the CSR-vector methods give very low throughput on the last 6 irregular matrices because of load imbalance. Further, we find that the Intel heterogeneous processor's GPU is actually faster than the AMD GPU on the last 6 matrices. The reason is that the shorter thread-bunch (8 on the Intel GPU vs. 64 on the AMD GPU) helps reduce SIMD idle cost, since a much shorter vector width better fits a dramatically imbalanced row distribution. On the other hand, for several very regular matrices with short rows, e.g., \textit{Epidemiology}, the CSR-scalar method offers the best performance because of almost perfect load balance and execution of short rows without loop cost. For most regular matrices, our method delivers performance comparable to the best CPU algorithm. In Figures~\ref{spmv.parco.fig.spthroughput} and~\ref{spmv.parco.fig.dpthroughput}, we can see that on the nVidia platform, our method delivers up to 5.91x (6.20x) and on average 2.69x (2.53x) speedup over the single (double) precision SpMV in the CUSP library running on the GPU used.
Compared to cuSPARSE, our method has even higher speedups. Since both libraries use the CSR-vector algorithm, these speedups are within expectations. Considering that the Tegra K1 platform contains only a single GPU core, load imbalance on this device is not as severe as on the AMD platform above. As a result, the speedups are not as high as those on the AMD processor. Here our method delivers on average 1.41x (1.42x) speedup over the OpenMP-accelerated SpMV on the quad-core ARM CPU in the single (double) precision benchmark. \begin{figure}[!ht] \centering \subfloat[Single precision SpMV on Intel Core i3-5010U]{\epsfig{file=bandwidth-intel.eps, width=5in}}\qquad \subfloat[Single precision and double precision SpMV on AMD A10-7850K]{\epsfig{file=bandwidth-amd.eps, width=5in}}\qquad \subfloat[Single precision and double precision SpMV on nVidia Tegra K1]{\epsfig{file=bandwidth-nvidia.eps, width=5in}} \caption{Bandwidth utilization (GB/s) of our CSR-based SpMV algorithm running on the three platforms. The theoretical bandwidth from the hardware specifications is marked with black lines.} \label{spmv.parco.fig.bandwidth} \end{figure} Figure~\ref{spmv.parco.fig.bandwidth} shows the bandwidth utilization of the algorithm proposed in this paper. We can see that the regular matrices use bandwidth more efficiently than the irregular ones. Considering the throughput speedups listed above, our method obtains higher bandwidth utilization than the other CSR-based SpMV algorithms running on GPUs. \subsection{Parameter Selection} We further conduct experiments to explore how the selected parameters influence the overall performance.
\begin{figure}[!ht] \centering \subfloat[Intel, SP]{\epsfig{file=segh-intel-sp.eps, width=1.05in}} \subfloat[AMD, SP]{\epsfig{file=segh-amd-sp.eps, width=1.05in}} \subfloat[nVidia, SP]{\epsfig{file=segh-nvidia-sp.eps, width=1.05in}} \subfloat[AMD, DP]{\epsfig{file=segh-amd-dp.eps, width=1.05in}} \subfloat[nVidia, DP]{\epsfig{file=segh-nvidia-dp.eps, width=1.05in}} \caption{Single precision (SP) and double precision (DP) SpMV performance of our algorithm on the three platforms as parameter $W$ changes and all the others are fixed to the best observed values (see Table~\ref{spmv.parco.tab.parameters}).} \label{spmv.parco.fig.parameterw} \end{figure} Figure~\ref{spmv.parco.fig.parameterw} shows the dependence of the overall performance (harmonic mean over the 20 benchmarks) on parameter $W$ (i.e., the workload per thread), with all the other parameters fixed. We can see that, in general, the overall performance goes up as parameter $W$ increases. This trend matches the algorithm complexity analysis described in Section 3.3. However, when $W$ exceeds a certain value, the overall performance degrades. The reason is that device occupancy may decrease as more on-chip scratchpad memory is allocated for the $WT$ work space of each thread-bunch.
\begin{figure}[!ht] \centering \subfloat[Intel, SP]{\epsfig{file=step-intel-sp.eps, width=1.05in}} \subfloat[AMD, SP]{\epsfig{file=step-amd-sp.eps, width=1.05in}} \subfloat[nVidia, SP]{\epsfig{file=step-nvidia-sp.eps, width=1.05in}} \subfloat[AMD, DP]{\epsfig{file=step-amd-dp.eps, width=1.05in}} \subfloat[nVidia, DP]{\epsfig{file=step-nvidia-dp.eps, width=1.05in}} \caption{Single precision (SP) and double precision (DP) SpMV performance of our algorithm on the three platforms as parameter $S$ changes and all the others are fixed to the best observed values (see Table~\ref{spmv.parco.tab.parameters}).} \label{spmv.parco.fig.parameters} \end{figure} Figure~\ref{spmv.parco.fig.parameters} shows the trend of the overall performance as we change parameter $S$ (i.e., the number of iterations of each thread-bunch) and fix all the other parameters. We can see that if we assign more work to each thread-bunch, better performance can be expected. The performance improvement mainly comes from higher on-chip resource reuse. \section{Comparison to Related Methods} In recent years, several new formats have been designed for the SpMV operation on various processor architectures. Because of less off-chip memory access and better on-chip memory locality, block-based formats or libraries, such as OSKI~\cite{Vuduc:OSKI, Vuduc:Fast, Vuduc:Automatic}, pOSKI~\cite{Byun:pOSKI}, CSB~\cite{Buluc:Parallel, Buluc:Reduced}, BELLPACK~\cite{Choi:Model}, BCCOO/BCCOO+~\cite{Yan:yaSpMV}, BRC~\cite{Ashari:An} and RSB~\cite{Martone:Efficient}, have attracted the most attention. However, block-based formats rely heavily on the sparsity structure: the input matrix is required to have a block structure that matches the assumed block layout. Therefore, block-based formats are mainly suitable for matrices generated from scientific computation problems, but may not fit irregular matrices generated from graph applications.
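For concreteness, the row-by-row traversal that row block CSR methods parallelize can be sketched serially as follows. This is a minimal Python illustration; \texttt{row\_ptr}, \texttt{col\_idx} and \texttt{vals} are the standard CSR arrays, and the matrix is a toy example rather than one of the benchmark inputs:

```python
import numpy as np

def csr_spmv(row_ptr, col_idx, vals, x):
    """Serial reference CSR SpMV (y = A @ x), one row at a time."""
    y = np.zeros(len(row_ptr) - 1)
    for i in range(len(y)):
        # Row i's nonzeros occupy vals[row_ptr[i]:row_ptr[i+1]].
        for j in range(row_ptr[i], row_ptr[i + 1]):
            y[i] += vals[j] * x[col_idx[j]]
    return y

# 3x3 matrix [[4,0,1],[0,2,0],[3,0,5]] in CSR form
row_ptr = np.array([0, 2, 3, 5])
col_idx = np.array([0, 2, 1, 0, 2])
vals    = np.array([4.0, 1.0, 2.0, 3.0, 5.0])
x       = np.array([1.0, 2.0, 3.0])
y = csr_spmv(row_ptr, col_idx, vals, x)  # -> [7., 4., 18.]
```

The work of the outer loop is proportional to each row's length, which is exactly why row block methods inherit load imbalance from matrices with skewed row distributions.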
The method proposed in this paper is insensitive to the sparsity structure of the input matrix and thus achieves generally better performance. Much research has focused on improving row block CSR-based SpMV. Williams et al.~\cite{Williams:Optimization} proposed multiple optimization techniques for SpMV on multi-core CPUs and the Cell B.E. processor. Nishtala et al.~\cite{Nishtala:When} designed a high-level data partitioning method for SpMV to achieve better cache locality on multicore CPUs. Pichel et al.~\cite{Pichel:Optimization} evaluated how reordering techniques influence the performance of SpMV on GPUs. Baskaran and Bordawekar~\cite{Baskaran:Optimizing} improved off-chip and on-chip memory access patterns of SpMV on GPUs. Reguly and Giles~\cite{Reguly:Efficient} improved thread cooperation for better GPU cache utilization. Ashari et al.~\cite{Ashari:Fast} utilized static reordering and the Dynamic Parallelism scheme offered by nVidia GPUs for fast SpMV operation. Greathouse et al.~\cite{Greathouse:Efficient} grouped contiguous rows for better runtime load balancing on GPUs. LightSpMV~\cite{Liu:LightSpMV} dynamically distributes matrix rows over warps for more balanced CSR-based SpMV without generating auxiliary data structures, using atomic operations and warp shuffle functions as the fundamental building blocks. However, again, the row block methods cannot achieve good performance for input matrices with dramatically imbalanced row distributions. In contrast, our method is independent of the sparsity structure of the input matrix. Using segmented sum as a building block is potentially a better generic method for CSR-based SpMV. An early segmented sum method for GPU SpMV was introduced by Sengupta et al.~\cite{Sengupta:Scan} and Garland~\cite{Garland:Sparse} and implemented in the cuDPP library~\cite{Harris:CUDPP}.
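The segmented sum formulation can be sketched in serial form as follows: multiply every nonzero by the matching vector entry into one flat array, then reduce each row's segment via prefix-sum differences. This is only a Python illustration of the idea on our own toy data; the GPU implementations replace the \texttt{cumsum} with a parallel scan and tile the work across thread-bunches:

```python
import numpy as np

def csr_spmv_segsum(row_ptr, col_idx, vals, x):
    """CSR SpMV recast as a segmented sum over the flat product array."""
    products = vals * x[col_idx]                   # one multiply per nonzero
    prefix = np.concatenate(([0.0], np.cumsum(products)))
    # Row i's sum is prefix[row_ptr[i+1]] - prefix[row_ptr[i]];
    # empty rows (equal offsets) correctly yield 0.
    return prefix[row_ptr[1:]] - prefix[row_ptr[:-1]]

# 3x3 matrix [[4,0,1],[0,0,0],[3,0,5]] with an empty middle row
row_ptr = np.array([0, 2, 2, 4])
col_idx = np.array([0, 2, 0, 2])
vals    = np.array([4.0, 1.0, 3.0, 5.0])
x       = np.array([1.0, 2.0, 3.0])
y = csr_spmv_segsum(row_ptr, col_idx, vals, x)     # -> [7., 0., 18.]
```

Because the flat product array is partitioned by nonzero count rather than by row, the multiply pass is load balanced regardless of the row distribution; the cost shifts to handling segment boundaries and empty rows.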
However, the cost of the segmented sum and global memory access degrades overall SpMV performance. Zhang~\cite{Zhang:A} improved backward segmented scan for better cache efficiency and implemented the CSR-based SpMV on multicore CPUs. Recently, nVidia's Modern GPU library~\cite{Baxter:Modern} implemented an improved reduction method, which has been used as a back-end of cuDPP. However, its performance still suffers from pre- and post-processing empty rows in global memory space. Our method, in contrast, uses scratchpad memory more efficiently and utilizes the two types of cores in a heterogeneous processor for better workload distribution. Compared with our recent work CSR5~\cite{Liu:CSR5}, a format designed for cross-platform SpMV on CPUs, GPUs and Xeon Phi, the SpMV approach presented in this paper does not need to perform any format conversion or generate any auxiliary data for the input CSR matrix. Considering that the format conversion from CSR to CSR5 merely costs a few SpMV operations, the CSR5-based SpMV and the CSR-based SpMV can find their own application scenarios, such as solvers with different numbers of iterations. \section{Conclusion and Future Work} We proposed an efficient method for SpMV on heterogeneous processors using the CSR storage format. On three mainstream platforms from Intel, AMD and nVidia, our method greatly outperforms row block CSR-based SpMV algorithms running on GPUs. The performance gain mainly comes from our newly developed speculative segmented sum strategy that efficiently utilizes the different types of cores in a heterogeneous processor. In this work, we assign different tasks to the CPU part and the GPU part of one heterogeneous processor. However, the heaviest workload (stage 1 in our method) currently only runs on the GPU cores, while the CPU cores may be idle. Obviously, it is possible to schedule tasks in the first stage on both CPU cores and GPU cores simultaneously for potentially higher throughput.
However, a heterogeneity-aware scheduling strategy is beyond the scope of the SpMV algorithm that is the focus of this paper. We refer the reader to~\cite{Lee:Transparent, Kaleem:Adaptive, Shen:An, Shen:Improving} for recent progress on utilizing both CPU cores and GPU cores in a heterogeneous environment. \section*{Acknowledgments} The authors would like to thank James Avery for his valuable feedback. The authors also thank the anonymous reviewers for their insightful suggestions and comments on this paper.
\section{Introduction} \label{sec:1Introduction} An important signature of the recently discovered quantum spin Hall (QSH) state of matter is the existence of electronic surface states which are robust to disorder (non-magnetic impurities) \cite{kane2005quantum,bernevig2006quantum}. This property arises because the spin of the electron is intrinsically locked to its direction of propagation (momentum), so the electrons cannot backscatter unless there is a spin-flip \cite{maciejko2011quantum}. Intriguingly, recent experiments have explored an analogous phenomenon in photonics, showing polarization-dependent directional propagation of optical modes in spontaneously emitted as well as scattered light \cite{neugebauer2014polarization,rodriguez2013near,lin2013polarization,petersen2014chiral,mitsch2014quantum,o2014spin,kapitanova2014photonic,sollner2015deterministic}. For example, experiments have shown that spontaneous emission from atomic transitions is preferentially directed along a fiber depending on the magnetic quantum number of the excited state \cite{mitsch2014quantum}. On the other hand, surface plasmon polaritons excited with circularly polarized light have also demonstrated unidirectional propagation \cite{rodriguez2013near,lin2013polarization}. One common thread in these experiments is the evanescent wave, a clear hint that the effect is tied to fundamental properties of decaying waves and not to the details of the nanophotonic structures. A quantum field theoretic treatment has also recently shed light on the interesting spin properties of evanescent waves \cite{bliokh2014extraordinary,bliokh2012transverse}. However, there is an urgent need for a unified theory of the inherent origin of this effect and its underlying connection to experiments. In analogy with the behavior of electrons in the quantum spin Hall effect, we call this phenomenon ``spin-momentum locking".
In this paper, our central contribution is the proof that spin-momentum locking is a universal behavior of electromagnetic waves which stems from the complex dispersion relation of evanescent waves and fundamental causality requirements. We introduce a universal triplet consisting of the momentum, decay and spin of evanescent waves. We show that the Stokes parameters of an evanescent wave unambiguously reveal that every fast decaying evanescent wave is inherently circularly polarized, irrespective of how it originates. Furthermore, this inherent handedness (spin) is locked to the direction of propagation (momentum). This information hidden in the Stokes parameters has been overlooked to date and is in stark contrast to the existing knowledge on propagating waves. The universality of this phenomenon is revealed by analyzing, in detail, the cases corresponding to a) total internal reflection (TIR), b) waveguides, c) optical fibers and d) surface electromagnetic waves. We also show the existence of a unique criterion in TIR (the ``golden ratio condition") at which propagating light is locally circularly polarized on total internal reflection. This effect can be used to verify our theory in near-field optical experiments. Lastly, we provide detailed insight into how spontaneous emission from a quantum emitter can couple to spin-momentum locked states in optical fibers. Our work explains various experimental observations and should open up future ways of exploiting this universal spin-momentum locking for practical applications. \section{Evanescent Waves} \label{sec:2ComplexkSpace} \subsection{Complex Dispersion Relation} \label{subsec:2.1ComplexDispersionRelation} We first construct a general basis vector for evanescent waves, independent of their origin, which reveals a universal electromagnetic right handed triplet consisting of momentum, decay and spin.
The wavevector of an evanescent plane wave necessarily has to be complex and can be written in the general form $\mathbf{k}=\pmb{\kappa}+i~\pmb{\eta}$. Here, $\pmb{\eta}$ is the imaginary part of $\mathbf{k}$ and is related to the decay, while $\pmb{\kappa}$ is the real part related to phase propagation (momentum). Starting from the dispersion relation in free space, the square of $\mathbf{k}$ is fixed via \begin{equation}\label{eq:1dispersion} \mathbf{k}^2=\mathbf{k}\cdot\mathbf{k}={k_0}^2 \end{equation} which implies, since $k_0=\omega/c$ is purely real, that the two components of $\mathbf{k}$ must satisfy \begin{subequations}\label{eq:2dispersion} \begin{equation} \kappa^2-\eta^2={k_0}^2 \end{equation} \begin{equation} \pmb{\kappa}\cdot\pmb{\eta}=0. \end{equation} \end{subequations} From Eq.~(\ref{eq:2dispersion}b), we make an important observation: the complex dispersion relation in free space necessarily requires that $\pmb{\kappa}$ and $\pmb{\eta}$ be orthogonal. This implies that the phase propagation of an evanescent wave (momentum) is perpendicular to its direction of decay. Furthermore, these orthogonal phase propagation and decay vectors always have a phase difference between them (a factor of $i=\sqrt{-1}$) which is imprinted on the orthogonal components of the electromagnetic field vectors through the transversality condition ($\mathbf{k}\cdot\mathbf{E}=0$). We now show that this is the intuitive reason for the inherent handedness (spin) of the evanescent wave. Like propagating plane waves, evanescent waves can have two orthogonal field polarizations which we denote by the $\mathbf{\hat{s}}$ and $\mathbf{\hat{p}}$ unit vectors. $\mathbf{\hat{s}}$ is defined to have the electric field perpendicular to the plane formed by the propagation vector ($\pmb{\kappa}$) and decay vector ($\pmb{\eta}$), while the electric field vector lies in this plane for $\mathbf{\hat{p}}$.
Without any loss of generality, an elegant choice of basis can be made to represent these unit vectors uniquely in terms of the evanescent wave wavevector. Our choice of basis is the triplet $\{\pmb{\kappa}$, $\pmb{\eta}$, $\pmb{\kappa}\times\pmb{\eta}\}$. We emphasize that this choice of basis alone fulfils the transversality condition imposed on electromagnetic waves in vacuum ($\mathbf{k}\cdot\mathbf{E}=0$). By defining $\mathbf{\hat{s}}$ and $\mathbf{\hat{p}}$ as, \begin{subequations}\label{eq:3polarization} \begin{equation} \mathbf{\hat{s}}=\frac{\pmb{\kappa}\times\pmb{\eta}}{|\pmb{\kappa}\times\pmb{\eta}|}=i~\frac{\mathbf{k}\times\mathbf{k}^*}{|\mathbf{k}\times\mathbf{k}^*|} \end{equation} \begin{equation} \mathbf{\hat{p}}=\frac{\mathbf{k}\times\mathbf{\hat{s}}}{|\mathbf{k}|}=i~\frac{\mathbf{k}\times(\mathbf{k}\times\mathbf{k}^*)}{|\mathbf{k}|~|\mathbf{k}\times\mathbf{k}^*|} \end{equation} \begin{equation} \mathbf{k}\cdot\mathbf{\hat{s}}=\mathbf{k}\cdot\mathbf{\hat{p}}=\mathbf{\hat{s}}\cdot\mathbf{\hat{p}}=0 \end{equation} \end{subequations} we express the evanescent field polarization entirely in terms of its momentum ($\mathbf{k}$). This form is robust enough that it can also be generalized to lossy media when $\pmb{\kappa}$ and $\pmb{\eta}$ are non-orthogonal. We emphasize that this unique form of evanescent wave basis vectors is universal and reduces to the case of plane wave basis vectors when $\eta\to 0$. The above representation reveals important aspects about the intrinsic ``spin" of an evanescent wave. We define this intrinsic ``spin" to be the inherent handedness (left/right circular/elliptical polarization) of the field basis vector. We rigorously justify this in the next section but make a note that electric fields in any specific scenario can be represented using these basis vectors. Hence, properties of field basis vectors will always be manifested in the electric and magnetic fields. 
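These relations are straightforward to verify numerically. The following sketch (our own check, assuming $k_0=1$ and an arbitrary choice of $\pmb{\kappa}$ along $\mathbf{\hat{x}}$ with $\pmb{\eta}$ along $\mathbf{\hat{z}}$) confirms the complex dispersion relation of Eq.~(\ref{eq:2dispersion}) and the basis-vector properties of Eq.~(\ref{eq:3polarization}):

```python
import numpy as np

# Arbitrary numerical choice (an assumption for illustration): k0 = 1,
# kappa = 1.5 k0 along x, and eta fixed by Eq. (2a) and chosen along z.
k0, kap = 1.0, 1.5
eta_mag = np.sqrt(kap**2 - k0**2)
kappa = np.array([kap, 0.0, 0.0])      # phase propagation (momentum)
eta = np.array([0.0, 0.0, eta_mag])    # decay direction, orthogonal to kappa
k = kappa + 1j * eta                   # complex wavevector

# Eq. (1): the unconjugated dot product recovers k0^2
assert np.isclose(k @ k, k0**2)

kxk = np.cross(k, np.conj(k))
s_hat = 1j * kxk / np.linalg.norm(kxk)          # Eq. (3a): spin direction
p_hat = np.cross(k, s_hat) / np.linalg.norm(k)  # Eq. (3b)

# s_hat is purely real and points along kappa x eta
ke = np.cross(kappa, eta)
assert np.allclose(s_hat.imag, 0.0)
assert np.allclose(s_hat.real, ke / np.linalg.norm(ke))
# Eq. (3c): transversality, k.s = k.p = s.p = 0
assert all(abs(d) < 1e-12 for d in (k @ s_hat, k @ p_hat, s_hat @ p_hat))
```

Here \texttt{np.linalg.norm(k)} computes $\sqrt{\kappa^2+\eta^2}=|\mathbf{k}|$, matching the normalization used in Eq.~(\ref{eq:3polarization}).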
Note first that $\mathbf{\hat{s}}$ is purely real so the orthogonal components comprising the field vector will be in phase. Thus evanescent waves with the electric field vector perpendicular to the plane formed by the decay vector and propagation vector will show no interesting polarization characteristics. However, the $\mathbf{\hat{p}}$ field basis vector is now complex. Using the properties from Eq.~(\ref{eq:2dispersion}) and a bit of manipulation we obtain, \begin{equation}\label{eq:5p-polarization} \mathbf{\hat{p}}=i\bigg[\frac{\eta}{|\mathbf{k}|}\bigg(\frac{\pmb{\kappa}}{\kappa}\bigg)+i\frac{\kappa}{|\mathbf{k}|}\bigg(\frac{\pmb{\eta}}{\eta}\bigg)\bigg] \end{equation} where we can clearly see that the $\mathbf{\hat{p}}$-polarization is just a linear combination of the $\pmb{\kappa}$ and $\pmb{\eta}$ unit vectors with a built-in phase difference between the orthogonal components. This immediately implies that an inherent elliptical polarization is imparted to the field. \begin{figure} \includegraphics{EvanescentBasis} \caption{Our result shows a fundamental right handed triplet formed by the momentum, decay and spin of evanescent waves. Note the locked triplets for waves propagating in two opposite directions. As we can see, the direction of the spin $\mathbf{\hat{s}}$ flips between the two cases. It is important to note that in general there are four degenerate solutions, but two of these correspond to growing evanescent waves which are forbidden by causality. This explains why the left handed triplet is not allowed and why the phenomenon of spin-momentum locking is universal to evanescent waves.} \label{fig:EvanescentBasis} \end{figure} \subsection{Stokes Parameters} \label{subsec:2.2StokesParameters} We now extend the concept of Stokes parameters \cite{mcmaster1961matrix} beyond propagating waves to fully characterize this interesting $\mathbf{\hat{p}}$-polarization state of an evanescent wave.
The complex $\mathbf{\hat{p}}$ is expressed as a linear combination of two basis vectors, which motivates us to consider spin-$\frac{1}{2}$ operators. The Stokes parameters of an evanescent wave can be written as the expectation values of the Pauli matrices and carry non-trivial information: \begin{subequations}\label{eq:6stokesparamaters} \begin{equation} S_0 = \langle\mathbf{\hat{p}}| 1 |\mathbf{\hat{p}}\rangle = 1 \end{equation} \begin{equation} S_1 = \langle \mathbf{\hat{p}}| \sigma_z |\mathbf{\hat{p}}\rangle = \frac{{k_0}^2}{|\mathbf{k}|^2} \end{equation} \begin{equation} S_2 = \langle \mathbf{\hat{p}}| \sigma_x |\mathbf{\hat{p}} \rangle = 0 \end{equation} \begin{equation} {S_3}^{\pm} = \langle \mathbf{\hat{p}}| \sigma_y |\mathbf{\hat{p}} \rangle = \pm 2\frac{\kappa~\eta}{~|\mathbf{k}|^2}. \end{equation} \end{subequations} $S_1$ and $S_3$ quantify the amount of spin, i.e., the degree of linear and circular polarization of an electromagnetic wave. Here, $\pm$ denotes the two possible directions of phase propagation of the evanescent wave. We see that the polarization state of the field basis vector $\mathbf{\hat{p}}$ depends only on the complex components of the wavenumber, while the actual electric and magnetic field elements are irrelevant in this instance. This means that there will be a certain degree of elliptical polarization \textit{intrinsic} to the electromagnetic field which is determined entirely by the real and imaginary components of the momentum ($\mathbf{k}$). In this sense, there is an inherent ``spin" associated with the evanescent wave since the unique basis vector $\mathbf{\hat{p}}$ itself imparts handedness to the wave. Note that the $\mathbf{\hat{s}}$ vector can now be interpreted as the ``spin direction" since it signifies the handedness of the electric field with $\mathbf{\hat{p}}$-polarization.
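These expectation values can be reproduced by treating $\mathbf{\hat{p}}$ as a two-component Jones vector in the $(\pmb{\eta},\pmb{\kappa})$ basis of Eq.~(\ref{eq:5p-polarization}). The following is our own numerical sketch; the basis ordering is an assumption chosen so that $S_1$ matches Eq.~(\ref{eq:6stokesparamaters}b), and the overall sign of $S_3$ depends on that ordering while its flip under $\pmb{\kappa}\to-\pmb{\kappa}$ does not. It also checks the fast-decay limit $\kappa\gg k_0$ treated below:

```python
import numpy as np

def p_stokes(kappa, eta):
    """Stokes parameters of the p-hat basis vector written as a Jones
    vector in the (eta-hat, kappa-hat) basis; a signed `kappa` encodes
    the propagation direction."""
    norm_k = np.sqrt(kappa**2 + eta**2)                  # |k|
    E_eta, E_kap = -kappa / norm_k, 1j * eta / norm_k    # components of Eq. (5)
    S0 = abs(E_eta)**2 + abs(E_kap)**2
    S1 = abs(E_eta)**2 - abs(E_kap)**2
    S2 = 2 * (np.conj(E_eta) * E_kap).real
    S3 = 2 * (np.conj(E_eta) * E_kap).imag
    return S0, S1, S2, S3

k0, kappa = 1.0, 1.5
eta = np.sqrt(kappa**2 - k0**2)
S0, S1, S2, S3 = p_stokes(kappa, eta)
assert np.isclose(S0, 1.0) and np.isclose(S2, 0.0)       # fully polarized, S2 = 0
assert np.isclose(S1, k0**2 / (kappa**2 + eta**2))       # Eq. (6b)
assert np.isclose(abs(S3), 2 * kappa * eta / (kappa**2 + eta**2))  # Eq. (6d)
assert np.isclose(p_stokes(-kappa, eta)[3], -S3)         # spin-momentum locking
# Fast-decay limit kappa >> k0: S1 -> 0 and |S3| -> 1 (circular polarization)
_, S1f, _, S3f = p_stokes(100.0, np.sqrt(100.0**2 - k0**2))
assert S1f < 1e-3 and abs(abs(S3f) - 1.0) < 1e-3
```

Note that $S_1^2+S_2^2+S_3^2=1$ here, consistent with a fully polarized state.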
This spin vector ($\mathbf{\hat{s}}$) is orthogonal to both $\pmb{\kappa}$ and $\pmb{\eta}$ which constitute the basis of $\mathbf{\hat{p}}$. Furthermore, the transformation $\pmb{\kappa}\to-\pmb{\kappa}$, for fixed decay direction ($\pmb{\eta})$, changes the handedness of $\mathbf{\hat{p}}$ (sign($S_3$)). This also flips the direction of $\mathbf{\hat{s}}$ which is consistent with an opposite direction of spin. This shows that the spin is fundamentally locked to the direction of propagation (momentum). The diagram in Fig.~(\ref{fig:EvanescentBasis}) shows the construction of a fundamentally locked triplet related to $\mathbf{\hat{p}}$-polarized evanescent waves formed by the phase propagation vector ($\pmb{\kappa}$), decay vector ($\pmb{\eta}$) and spin ($\mathbf{\hat{s}}$). \begin{figure} \includegraphics{BlochSphere} \caption{Poincar\'{e} spheres for propagating waves and $\mathbf{\hat{p}}$-polarized evanescent waves. Propagating waves can have any arbitrary polarization state for a given phase velocity. However, all fast decaying evanescent waves are circularly polarized and lie on the south or north pole of the Poincar\'{e} sphere ($S_3=\pm1$). Furthermore, the choice between these two points is locked to the direction of momentum ($\pm \pmb{\kappa}$).} \label{fig:BlochSphere} \end{figure} \subsection{Inherent Polarization} \label{subsec:2.3InherentPolarizationofEvanescentWaves} In this section, we prove that every fast decaying evanescent wave is inherently circularly polarized and its handedness is tied to the direction of phase propagation (momentum). We consider the case of an evanescent wave with very high momentum such that $\kappa \gg k_0$. The dispersion relation then implies $\kappa \approx \eta$ and the wave decays on a length scale much smaller than the wavelength. 
Simplifying the expression for the $\mathbf{\hat{p}}$-polarized basis vector, \begin{subequations}\label{eq:10reducedp} \begin{equation} \mathbf{\hat{p}}\to \frac{i}{\sqrt{2}}\bigg[\bigg(\frac{\pmb{\kappa}}{\kappa}\bigg)+i\bigg(\frac{\pmb{\eta}}{\eta}\bigg)\bigg] \end{equation} \begin{equation} S_1 \to 0 \end{equation} \begin{equation} {S_3}^{\pm} \to \pm 1 \end{equation} \end{subequations} which we can clearly see is a state of circular polarization. The above result implies that every fast decaying evanescent wave lies on the north or south pole of the Poincar\'{e} sphere while propagating waves can lie anywhere on the Poincar\'{e} sphere. Furthermore, the choice of the south and north pole ($S_3=\pm1$) is dictated by the direction of the phase velocity ($\pm \pmb{\kappa}$). Thus spin-momentum locking is a fundamental property of evanescent waves. To visually illustrate these polarization states, we compare the Poincar\'{e} spheres of propagating and $\mathbf{\hat{p}}$-polarized evanescent waves in Fig.~(\ref{fig:BlochSphere}). \section{Spin-momentum Locking From Causality} \label{sec:3SpinMomentumLocking} The ``spin-locking" characteristic of evanescent waves comes from the fact that $\pmb{\kappa}$ and $\pmb{\eta}$ are inherently orthogonal as dictated by the complex dispersion (Eq.~(\ref{eq:2dispersion})). Simultaneously, the unit field vector $\mathbf{\hat{p}}$ which is related to the wavevector possesses a $\pi/2$ phase difference between its orthogonal components. This phase is not an artifact of some particular combination of polarization vectors but is \textit{embedded into the vector field itself to guarantee that the transverse condition ($\mathbf{k}\cdot\mathbf{E}=0$) is satisfied}. Ultimately, evanescent waves require some sort of boundary condition to exist in a region of space, which usually involves a symmetry breaking or a change in material parameters. 
For an arbitrary plane wave ($\propto \exp(i~\mathbf{k}\cdot\mathbf{r})=\exp(i~\pmb{\kappa}\cdot\mathbf{r})\exp(-\pmb{\eta}\cdot\mathbf{r})$), this boundary condition, in general, opens up two possible propagation directions $\pm \pmb{\kappa}$ and two decay/growth directions $\pm \pmb{\eta}$, which allows up to four degenerate solutions. However, we know immediately that only one of the $\pmb{\eta}$ solutions can exist because the wave must be finite in the region of space that includes infinity, i.e., it must decay away from the boundary towards infinity. Exponential growth in a passive medium is non-physical because it would require a non-causal solution to the boundary condition. This causality requirement means that the handedness or ``spin" of the evanescent wave is now determined and locked to the propagation direction (the momentum). This is because while the decay vector ($\pmb{\eta}$) cannot change, the wave is free to propagate in both directions ($\pm \pmb{\kappa}$), flipping the handedness of $\mathbf{\hat{p}}$. In other words, due to this condition the set of allowed evanescent waves consists of only two possibilities: one with positive momentum $+\pmb{\kappa}$ and positive spin direction $+\mathbf{\hat{s}}$, and the other with negative momentum $-\pmb{\kappa}$ and negative spin direction $-\mathbf{\hat{s}}$. Hence, causality and transversality (or the complex dispersion relation) can be considered to be the fundamental origin of the universal spin-momentum locking of evanescent waves (see Fig.~(\ref{fig:EvanescentBasis})). \section{Universal Behavior} In this section, we show that evanescent waves possess this spin-momentum locking in various scenarios. It becomes imperative to revisit fundamental concepts of total internal reflection and waveguide modes to prove that evanescent waves indeed possess a property which has been overlooked. To analyze these textbook phenomena, we introduce the concept of a local handedness for inhomogeneous fields.
We specifically plot the spatial distribution of the Stokes parameter ($S_3$), which depends on the local electric fields and sheds light on the local handedness (polarization state) of the fields. We note that our approach is different from, but equivalent to, the historic concept of the light beam tensor introduced by Fedorov \cite{fedorov1965covariant} and the recently developed concept of the spin density \cite{cameron2012optical,bliokh2012transverse,bliokh2013dual}. \label{sec:4UniversalBehavior} \subsection{Circular Total Internal Reflection (Golden Ratio Condition)} \label{subsec:4.1CircularTIR} The simplest case where such a phenomenon can occur is when evanescent waves are generated at total internal reflection (TIR). We consider a wave $\mathbf{\hat{p}}$-polarized in the $\mathbf{\hat{x}}$-$\mathbf{\hat{z}}$ plane (TM) travelling from glass with index $n_{1}=\sqrt{\epsilon_1}$ into medium 2 with index $n_{2}=\sqrt{\epsilon_2}$, where we require $\epsilon_1>\epsilon_2$ for evanescent waves to be supported. The electric fields generated during TIR are well known and are depicted by white arrows in Fig.~(\ref{fig:1TIR1}) and (\ref{fig:2TIR2}). However, when overlaid against the local handedness of the field, an intriguing phenomenon comes to light: the direction of propagation of the wave alters the relative handedness of the evanescent field. The false colors in the same figures depict the spatial distribution of the normalized Stokes parameter ($S_3$) and quantify the polarization state of the field at each point. In region 2, it is evident that the evanescent wave possesses the same handedness at every point (orange region). Furthermore, comparing the counter-propagating cases between Fig.~(\ref{fig:1TIR1}) and (\ref{fig:2TIR2}), we clearly see that the polarization state of the evanescent wave is reversed and the Stokes parameter changes sign. The insets of Fig.~(\ref{fig:1TIR1}) and (\ref{fig:2TIR2}) elucidate this spin-momentum locking phenomenon for TIR.
We now show that the propagating waves inherit handedness from the evanescent waves due to boundary conditions at the interface. The phase between the perpendicular and parallel components of an arbitrary ($\mathbf{\hat{p}}$-polarized) electric field in region 1, interfaced with an evanescent wave in region 2 must satisfy \begin{equation}\label{eq:13electricfieldratio} \bigg[\frac{E_{\perp}}{E_{\parallel}}\bigg]_{1}=\pm i~\frac{\epsilon_{2}}{\epsilon_{1}}~\bigg[\frac{\kappa}{\eta}\bigg]_{2} \qquad @~\textrm{interface} \end{equation} where the $\pm$ indicates oppositely travelling evanescent waves and the subscripts designate the field components in their respective material regions. It should be stressed that this only applies \textit{locally} at the interface. However, this could have interesting consequences for near-field optics since it implies that there is a preferential handedness depending on the direction of propagation when we couple to evanescent waves. We make the important observation that perfect circular polarization is enforced (locally) in region 1 when \begin{equation}\label{eq:14perfectcp} \frac{\epsilon_{1}}{\epsilon_{2}}=\bigg[\frac{\kappa}{\eta}\bigg]_{2}. \end{equation} We can now solve for the momentum and decay of the evanescent wave which achieves this circular total internal reflection. They are \begin{equation} \kappa_2=\epsilon_1\sqrt{\frac{\epsilon_2}{{\epsilon_1}^{2}-{\epsilon_{2}}^2}}k_0 \end{equation} and \begin{equation} \eta_2=\epsilon_2\sqrt{\frac{\epsilon_2}{{\epsilon_1}^{2}-{\epsilon_{2}}^2}}k_0. \end{equation} In the case of TIR, this local circular polarization is generated in region 1 because there is a phase shift imparted to the reflected wave and the interference with the incident wave causes the combined field to be locally handed. Lastly, we need to determine the angle of incidence of the propagating wave that is required to accomplish this circular TIR condition. 
Using Snell's law, it can be shown that the CTIR angle of incidence $\theta_1=\theta_{\mathrm{CTIR}}$ is, \begin{equation}\label{ctirangle} \sin(\theta_{\mathrm{CTIR}})=1/\sqrt{\epsilon_{1}/\epsilon_{2}-\epsilon_{2}/\epsilon_{1}}. \end{equation} We note that in this instance, $\theta_{\mathrm{CTIR}}$ must necessarily be real which requires that $\sqrt{\epsilon_{1}/\epsilon_{2}-\epsilon_{2}/\epsilon_{1}}>1$. Therefore, there is an interesting limiting condition for local CTIR to exist which is when $\theta_{\mathrm{CTIR}}\to\pi/2$ (i.e. when the propagating wave in region 1 is parallel to the interface). This is equivalent to the limit when \begin{equation} \bigg[\frac{\epsilon_{1}}{\epsilon_{2}}\bigg]_{\mathrm{GR}}=\frac{1}{2}(1+\sqrt{5})\approx 1.618 \end{equation} which is the \textit{minimum} allowable ratio of the permittivities for CTIR to occur, and curiously, it can also be identified as the golden ratio \cite{livio2008golden}. We term this the ``golden ratio condition" for local circularly polarized total internal reflection. This induced CTIR in region 1 is visible clearly in Fig.~(\ref{fig:1TIR1}) and (\ref{fig:2TIR2}). Note our choice of refractive indices satisfies $\epsilon_1/\epsilon_2=4 > [\epsilon_1/\epsilon_2]_{\mathrm{GR}}$. The angle given by our analytical expression in Eq.~(\ref{ctirangle}) is $\theta_{\mathrm{CTIR}}=31.09^{\mathrm{o}}$ and we have plotted the fields for this incident angle. Close to the interface in region 1, the Stokes parameter takes the maximal values of $S_3=\pm 1$ (red and blue regions). Thus the fields are perfectly circular polarized close to the interface specifically for this angle of incidence. Although phase propagation normal to the interface ($\hat{\mathbf{z}}$) will alter the degree of this polarization, the \textit{relative} handedness between forward and backward propagating waves is maintained. 
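The numbers quoted above are easy to check. The following sketch is our own verification script (\texttt{theta\_ctir} is a hypothetical helper name); it reproduces the $31.09^{\mathrm{o}}$ angle for $\epsilon_1/\epsilon_2=4$, the golden ratio limit, and the consistency of the expressions for $\kappa_2$ and $\eta_2$ with Eq.~(\ref{eq:14perfectcp}):

```python
import numpy as np

def theta_ctir(eps1, eps2):
    """Incidence angle for circular TIR: sin(theta) = 1/sqrt(e1/e2 - e2/e1).
    The clip guards against floating-point rounding at the limiting angle."""
    s = 1.0 / np.sqrt(eps1 / eps2 - eps2 / eps1)
    return np.arcsin(np.clip(s, -1.0, 1.0))

# n1 = 2, n2 = 1 (eps1/eps2 = 4) reproduces the 31.09 degrees quoted above
assert abs(np.degrees(theta_ctir(4.0, 1.0)) - 31.09) < 0.01

# Limiting case theta -> 90 deg: eps1/eps2 - eps2/eps1 = 1, i.e.
# x^2 - x - 1 = 0 for x = eps1/eps2, whose positive root is the golden ratio
golden = (1.0 + np.sqrt(5.0)) / 2.0
assert np.isclose(golden - 1.0 / golden, 1.0)
assert np.isclose(theta_ctir(golden, 1.0), np.pi / 2)

# Consistency of the circular-TIR evanescent wave: kappa_2/eta_2 = eps1/eps2
# (Eq. 14) and the medium-2 dispersion kappa_2^2 - eta_2^2 = eps2 * k0^2
k0, eps1, eps2 = 1.0, 4.0, 1.0
root = np.sqrt(eps2 / (eps1**2 - eps2**2))
kappa2, eta2 = eps1 * root * k0, eps2 * root * k0
assert np.isclose(kappa2 / eta2, eps1 / eps2)
assert np.isclose(kappa2**2 - eta2**2, eps2 * k0**2)
```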
The maintained relative handedness can be seen from the blue and red contours in region 1, where the rotation of the electric field vectors is reversed at every point in space, in agreement with the differing signs of $S_3$. \begin{figure} \includegraphics[width=0.75\columnwidth]{TIR_1} \caption{CTIR at the interface between glass with $n_{1}=2$ and air with $n_{2}=1$ at the $\theta_{\mathrm{CTIR}}$ condition. For waves travelling in the $+x$ direction, the evanescent wave in region 2 has right handed spin-momentum locking (inset). Note that the wave in medium 1 has perfect circular polarization characteristics close to the interface at this angle of incidence. The overlaid false color plot is the spatial distribution of the normalized Stokes parameter ($S_3$) which characterizes the handedness of the wave (degree of circular polarization) from $-1$ to $1$ at each point.} \label{fig:1TIR1} \includegraphics[width=0.75\columnwidth]{TIR_2} \caption{CTIR at the interface between glass with $n_1=2$ and air with $n_2=1$ at the $\theta_{\mathrm{CTIR}}$ condition. For waves travelling in the $-x$ direction, the evanescent wave in region 2 has left handed spin-momentum locking (inset). The plot illustrates that the evanescent wave spin has the opposite sign as compared to the previous case because the momentum and spin are locked.} \label{fig:2TIR2} \end{figure} \subsection{Waveguides} \label{subsec:4.3Waveguides } Interesting spin-locking phenomena also occur when we consider confined light in waveguides and optical fibres. The confinement of light necessarily requires evanescent waves to be present, which implies that there will be handedness imparted on the waveguide and fibre modes through the boundary conditions.
For planar waveguides, there are even and odd solutions and the $\mathbf{\hat{p}}$-polarized electric field components (TM modes) inside the waveguide are proportional to \begin{equation}\label{eq:11planarwaveguides} \mathbf{E}\propto \bigg[k_z\bigg\{\begin{array}{c} \sin(k_z z)\\ -\cos(k_z z)\end{array}\bigg\}\mathbf{\hat{x}}+ik_x\bigg\{\begin{array}{c} \cos(k_z z)\\ \sin(k_z z)\end{array}\bigg\}\mathbf{\hat{z}}\bigg]e^{ik_x x} \end{equation} where the array inside the braces indicates the two separate solutions. Note that the electric field components along the x- and z-axes have a phase difference between them dictated solely by the boundary conditions. If we consider a wave propagating in the opposite direction, i.e. if we change $k_x \to -k_x$, the wave changes handedness. We see that there is spin-momentum locking in waveguides since $k_x$ now constitutes the momentum and also controls the relative phase between the orthogonal field components. The electric field vector plots in Figs.~(\ref{fig:3WG1}) and (\ref{fig:4WG2}) are overlaid on the spatial distribution of the $S_3$ Stokes parameter (false color plot) to illustrate the different spin (handedness) between two oppositely propagating waveguide modes. We note that a similar explanation can be extended to the case of metamaterials \cite{kapitanova2014photonic}. This is discussed briefly in the supplementary information and a detailed derivation will be presented elsewhere. \begin{figure} \includegraphics[width=0.75\columnwidth]{WG_1} \caption{Waveguide mode at interface between glass with $n_1=4$ and air with $n_2=1$. The width of the waveguide is $2k_{0}d=2$. For waveguide modes travelling in the $+x$ direction, the evanescent waves in region 2 lock the handedness (locally) to $+\mathbf{\hat{s}}$ at $k_{0}z=1$ and $-\mathbf{\hat{s}}$ at $k_{0}z=-1$.
The false color plot shows the spatial distribution of the normalized Stokes parameter ($S_3$) from $-1$ to $1$ for the waveguide and illustrates the intrinsic handedness of the evanescent waves. Furthermore, on comparison with the counter-propagating waveguide mode, we see that the handedness is reversed.} \label{fig:3WG1} \includegraphics[width=0.75\columnwidth]{WG_2} \caption{Waveguide mode at interface between glass with $n_1=4$ and air with $n_2=1$. The width of the waveguide is $2k_{0}d=2$. For waveguide modes travelling in the $-x$ direction, the evanescent waves in region 2 lock the handedness (locally) to $-\mathbf{\hat{s}}$ at $k_{0}z=1$ and $+\mathbf{\hat{s}}$ at $k_{0}z=-1$.} \label{fig:4WG2} \end{figure} \subsection{Optical Fibres} \label{subsec:4.4OpticalFibres} We now show that spin-momentum locking in optical fibres is the fundamental origin of recent experimental observations where scattered light and spontaneous emission were directed preferentially along the fiber \cite{mitsch2014quantum,petersen2014chiral}. The $\mathrm{HE_{11}}$ fundamental mode operation is the most important case, so we quantify its degree of polarization. To characterize our fibre mode we consider weakly guided waves, $\Delta=({n_1}^2-{n_2}^2)/(2{n_1}^2)\approx (n_1-n_2)/n_1 \ll 1$ with a numerical aperture, $\mathrm{NA}=\sqrt{{n_1}^2-{n_2}^2}\approx n_1 \sqrt{2\Delta}$~. For single mode $\mathrm{HE_{11}}$ operation, we require that $\mathrm{V}=2\pi(a/\lambda_0)\mathrm{NA}=\sigma_1 \sqrt{2\Delta}<2.405$, where $a$ is the radius of the fibre and $\sigma_1=k_1 a=2 n_1 \pi(a/\lambda_0)$ is the scaling parameter inside the core. The $\mathrm{HE_{11}}$ mode is doubly degenerate in that we have two counter-rotating angular momentum modes in the plane perpendicular to the fiber-optic axis. We denote the electric and magnetic fields as $\mathbf{E}_{m}$ and $\mathbf{H}_{m}$ respectively where the subscripts denote $m=+1$ or $m=-1$.
In the circular basis we define unit vectors $\mathbf{\hat{e}}_{m}=(\mathbf{\hat{r}}+im~\pmb{\hat{\phi}})/\sqrt{2}$ and clearly $\mathbf{\hat{e}_{-}}=\mathbf{\hat{e}_{+}}^*$. With a propagation factor of $\exp[i(\beta~z/a-\omega t)]$ omitted, the electric and magnetic fields can then be written as, \begin{subequations}\label{eq:HE11Inside} \begin{equation} \mathbf{E}_{m}=E_0[\sqrt{2}\beta J_{0}(\mathrm{X}~r/a)\mathbf{\hat{e}}_{m}+i~\mathrm{X}J_{1}(\mathrm{X}~r/a)\mathbf{\hat{z}}]e^{im\phi} \end{equation} \begin{equation} \mathbf{H}_{m}=-imH_0[\sqrt{2}(\sigma_{1})^2 J_{0}(\mathrm{X}~r/a)\mathbf{\hat{e}}_{m}+i~\beta\mathrm{X}J_{1}(\mathrm{X}~r/a)\mathbf{\hat{z}}]e^{im\phi} \end{equation} \end{subequations} for fields inside the fibre when $r<a$, where $H_0=E_0/(\omega\mu_{0}a)$ and \begin{subequations}\label{eq:HE11Outside} \begin{equation} \mathbf{E}_{m}=\mathcal{N}E_0[\sqrt{2}\beta K_{0}(\mathrm{Y}~r/a)\mathbf{\hat{e}}_{m}+i~\mathrm{Y}K_{1}(\mathrm{Y}~r/a)\mathbf{\hat{z}}]e^{im\phi} \end{equation} \begin{equation} \mathbf{H}_{m}=-im\mathcal{N}H_0[\sqrt{2}(\sigma_{2})^2 K_{0}(\mathrm{Y}~r/a)\mathbf{\hat{e}}_{m}+i~\beta\mathrm{Y}K_{1}(\mathrm{Y}~r/a)\mathbf{\hat{z}}]e^{im\phi} \end{equation} \end{subequations} outside the fibre when $r>a$ and $\mathcal{N}=(\mathrm{X}/\mathrm{Y})J_{1}(\mathrm{X})/K_{1}(\mathrm{Y})$. $J_{n}$ and $K_{n}$ are the Bessel and Modified Bessel functions of order $n$ respectively. The normalized propagation constants are defined as, $|\beta|=\sqrt{(\sigma_1)^2-\mathrm{X}^2}=\sqrt{(\sigma_2)^2+\mathrm{Y}^2}$ and $\mathrm{V}^2=\mathrm{X}^2+\mathrm{Y}^2$. The components of the $\mathbf{E}_{m}$ and $\mathbf{H}_{m}$ fields have identical forms (up to a proportionality constant) so we concentrate on the electric type. The above equations are commonplace in textbooks on fiber optics. However, the differentiation between the angular momentum and spin components of the $\mathrm{HE_{11}}$ mode has not been done before. 
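As a consistency check on these definitions, note that $|\beta|^2=(\sigma_1)^2-\mathrm{X}^2=(\sigma_2)^2+\mathrm{Y}^2$ together with $\mathrm{V}^2=\mathrm{X}^2+\mathrm{Y}^2$ forces $(\sigma_1)^2-(\sigma_2)^2=\mathrm{V}^2$, whatever $\mathrm{X}$ and $\mathrm{Y}$ turn out to be. The sketch below is ours; the values $n_1=1.50$, $n_2=1.46$, $a/\lambda_0=0.6$ are illustrative assumptions, and we take $\sigma_2=k_2 a$ by analogy with the core definition.

```python
import math

# Illustrative weakly guiding fibre (assumed values, not from the text)
n1, n2 = 1.50, 1.46
a_over_lam = 0.6                        # core radius in units of the wavelength

delta = (n1**2 - n2**2) / (2 * n1**2)   # profile height parameter
NA = math.sqrt(n1**2 - n2**2)           # numerical aperture
V = 2 * math.pi * a_over_lam * NA       # normalized frequency

sigma1 = 2 * math.pi * n1 * a_over_lam  # k1*a, core scaling parameter
sigma2 = 2 * math.pi * n2 * a_over_lam  # k2*a, cladding scaling (our assumption)

# |beta|^2 = sigma1^2 - X^2 = sigma2^2 + Y^2 and V^2 = X^2 + Y^2
# together imply sigma1^2 - sigma2^2 = V^2 for any X, Y.
identity_gap = sigma1**2 - sigma2**2 - V**2

single_mode = V < 2.405                 # HE11-only operation
```

With these values $\mathrm{V}\approx1.3<2.405$, so the assumed fibre is single mode; the identity gap vanishes to machine precision.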
This can be done unambiguously by extending our concept of local handedness of a wave to three dimensions. We consider the Stokes parameter ($S_3$) which characterizes circular polarization. However, for the optical fiber, it has to be evaluated for three-dimensional fields by considering pairs of orthogonal directions. This leads to Stokes parameters $S_3^z$ and $S_3^{\phi}$, which can be interpreted as the local circular polarization of the field with handedness along the $\mathbf{\hat{z}}$ direction or $\pmb{\hat{\phi}}$ direction. We concentrate on the field components inside the core when $r<a$, but similar expressions hold for $r>a$ where the Bessel functions are substituted with the Modified Bessel functions. For the two $m=\pm1$ angular momentum modes, the $S_3^z$ Stokes parameter evaluated with electric field components orthogonal to the propagation $\mathbf{\hat{z}}$ direction is \begin{equation}\label{eq:CircularIntensity} (I_{AM})_{m}=m~2|E_0|^2\beta^2 {J_0}^2(\mathrm{X}~r/a) \end{equation} which we denote as the angular momentum intensity. The handedness of this angular momentum is either positive or negative for the $m=\pm 1$ modes. This is valid even if we change the sign of the propagation constant, i.e. if the $\mathrm{HE_{11}}$ mode moves along $-\mathbf{\hat{z}}$. Thus, both forward and backward propagating waves can have either positive or negative angular momentum as is expected. However, a fundamental and intriguing asymmetry is noticed for the $S_3^{\phi}$ Stokes parameter evaluated using electric field components orthogonal to $\pmb{\hat{\phi}}$. It is given by the expression \begin{equation} (I_S)_{m}=\mathrm{sign}(\beta)~2|E_0|^2|\beta|\mathrm{X}J_{0}(\mathrm{X}~r/a)J_{1}(\mathrm{X}~r/a) \end{equation} which we denote as the spin polarization intensity. The direction of this ``spin'' is in the unique $\pmb{\hat{\phi}}$ direction and is seen to be independent of the sign of the angular momentum.
Furthermore, it is also locked to the momentum $\beta$ since $\mathrm{sign}(\beta)=\pm 1$, leading to fundamentally different behavior of forward and backward propagating $\mathrm{HE_{11}}$ modes along the fiber. For forward momentum $\mathrm{sign}(\beta)=+1$ we have $+\pmb{\hat{\phi}}$ transverse spin and for $\mathrm{sign}(\beta)=-1$ we have $-\pmb{\hat{\phi}}$, regardless of which angular momentum mode we are considering. Therefore, instead of four degenerate solutions, only two are allowed. We emphasize once again that the spin-momentum locking arises from the fact that growing solutions for evanescent waves outside the optical fiber are discarded. These growing solutions have the opposite spin direction for a given propagation direction (Sec.~\ref{sec:3SpinMomentumLocking}). This shows that we have spin-momentum locking even in standard optical fibres, which is directly linked to the evanescent fields necessary for confinement. Strictly speaking, we enforced spin-momentum locking from the outset by only permitting $K_n$ type Modified Bessel functions and discarding the $I_n$ type, since they grow exponentially as $r$ increases. This causality requirement with regard to fiber modes is the precise reason that we have handedness imparted to the optical fiber. The total electric field intensity is a sum of linear, angular momentum and spin intensities, which follows from the properties of Stokes parameters (${S_0}^2 = {S_1}^2 + {S_2}^2 + {S_3}^2$). We thus have ${I_E}^2={I_{AM}}^2+{I_S}^2+{I_L}^2$ where $I_E=|\mathbf{E}|^2=2|E_0|^2\beta^2{J_{0}}^2+|E_0|^2\mathrm{X}^2{J_{1}}^2$ is the total intensity of the electric field. Here, the linear polarization intensity is defined as $I_L=|E_0|^2\mathrm{X}^2{J_{1}}^2$, arising due to the electric field component in the $\mathbf{\hat{z}}$ direction. We can now analyze the fractional field intensity residing in the angular momentum, spin or linear polarization components.
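The quadrature relation ${I_E}^2={I_{AM}}^2+{I_S}^2+{I_L}^2$ is an algebraic identity in $J_0$ and $J_1$: expanding $(2\beta^2 J_0^2+\mathrm{X}^2 J_1^2)^2$ produces exactly those three squares. A short sketch (ours; the parameter values and the stand-in samples for the Bessel factors are purely illustrative) checks it numerically:

```python
def intensities(E0, beta, X, J0, J1):
    """Polarization intensities at one radius, with J0, J1 standing in
    for J0(X r/a) and J1(X r/a)."""
    I_AM = 2 * abs(E0)**2 * beta**2 * J0**2          # angular momentum
    I_S = 2 * abs(E0)**2 * abs(beta) * X * J0 * J1   # transverse spin
    I_L = abs(E0)**2 * X**2 * J1**2                  # linear (z component)
    I_E = 2 * abs(E0)**2 * beta**2 * J0**2 + abs(E0)**2 * X**2 * J1**2
    return I_E, I_AM, I_S, I_L

# A few (J0, J1) sample pairs; the identity holds for any values
samples = [(1.0, 0.0), (0.76, 0.37), (0.22, 0.58), (-0.18, 0.33)]
gaps = []
for J0, J1 in samples:
    I_E, I_AM, I_S, I_L = intensities(1.0, 1.2, 0.9, J0, J1)
    gaps.append(I_E**2 - (I_AM**2 + I_S**2 + I_L**2))
```

Every gap vanishes to machine precision, confirming that the total intensity decomposes exactly into the three components at every radius.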
The normalized polarization intensities for a weakly-guiding optical fibre are shown in the plot of Fig.~(\ref{fig:HE11Polarizations}). We also include a field vector plot in Fig.~(\ref{fig:HE11Transverse}) to help visualize the transverse spin component in the $\mathrm{HE_{11}}$ mode. \begin{figure} \includegraphics[width=0.75\columnwidth]{HE11Transverse} \caption{The evolution of the polarization vector as it propagates in an optical fibre with $\mathrm{V}=1.5$ and $\Delta=0.1$. We display the electric field at a single point at $r=a$ in the $m=+1$ $\mathrm{HE_{11}}$ mode to demonstrate the transverse spin near the core-cladding region. As we can see, the electric field rotates in the z-plane as well as in the x-y plane, hence there is a spin component directed around $\pmb{\hat{\phi}}$ (inset). Out of four possible degenerate solutions, only two are allowed because of the decaying condition on evanescent waves outside the core. Consequently, the $\mathrm{HE_{11}}$ mode of the optical fiber has spin-momentum locking.} \label{fig:HE11Transverse} \end{figure} \begin{figure} \includegraphics{HE11} \caption{Normalized $\mathrm{HE_{11}}$ polarization intensities ($I/I_E(0)$) for an optical fibre of $\mathrm{V}=1.5$ and $\Delta=0.1$. We see that the majority of the field is concentrated in the $I_{AM}$ angular momentum component but there is a significant component of spin intensity ($I_S$) in the $\pmb{\hat{\phi}}$ direction near the core-cladding interface at $r=a$.} \label{fig:HE11Polarizations} \end{figure} \subsection{Directional Quantum Emitter Coupling} \label{subsec:4.5quantum-emitter} All that being said, this intriguing symmetry breaking could be exploited for applications in the field of quantum photonics. One recent experiment has utilized cold atoms near optical fibers to demonstrate directional waveguiding of spin-polarized spontaneous emission \cite{mitsch2014quantum}.
We show how this phenomenon is related to spin-momentum locking of the $\mathrm{HE_{11}}$ mode. Note that our results can be expanded to an isotropic scatterer with circularly polarized incident light or a chiral scatterer with linearly polarized incident light. Let us consider a left and right handed circularly polarized source that has both electric and magnetic moments. Following the semiclassical theory of spontaneous emission \cite{ford1984electromagnetic,klimov2012engineering,lee2012role}, we approximate this chiral source to be \begin{equation}\label{eq:CurrentDensities} \left [ \begin{array}{c} \textbf{J}_\textsc{E}(\textbf{r}) \\ \textbf{J}_\textsc{H}(\textbf{r}) \end{array} \right]_{\pm} = -i~\omega~\delta^{3}(\textbf{r}-\textbf{r}_0)\left [ \begin{array}{c} \mathbf{p} \\ \mathbf{m} \end{array} \right]\\ =-i~\omega~\delta^{3}(\textbf{r}-\textbf{r}_0)\left [ \begin{array}{c} p_0 \\ -i~m_0 \end{array} \right]\mathbf{\hat{e}_{\pm}}~e^{\pm i \phi} \end{equation} where the $\pm$ indicates left or right handed circular polarization in the cylindrical coordinate basis of the optical fiber. The coupling strength (energy of interaction) into one of the $\mathrm{HE_{11}}$ modes is then proportional to $A_{m} \propto i\omega[\mathbf{p}^*\cdot\mathbf{E}_{m}(\mathbf{r}_0)+\mathbf{m}^*\cdot\mathbf{H}_{m}(\mathbf{r}_0)]$. Plugging in $\mathbf{r}_0=\mathbf{0}$, it can be shown that the magnitude of the coupling strength ($|A_{m}|^2$) for each $m=\pm 1$ mode is equal to \begin{equation}\label{eq:ChiralCoupling} |A_{m}|^2 = C_1\bigg|\mathrm{sign}(\beta)|\beta|\omega p_0 + m\frac{(\sigma_1)^2 m_0}{\mu_0 a}\bigg|^2 \end{equation} where $C_1$ is some positive proportionality constant. The angular momentum quantum number $m=\pm 1$ should not be confused with the magnitude of the magnetic dipole $|\mathbf{m}|=m_0$. Also note that the time-averaged power along the fiber axis for each mode is proportional to $P_m\propto |A_{m}|^2$.
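The channel structure of this coupling can be made explicit with a quick numerical sketch (ours; we work in units where both terms of the chiral coupling expression equal $g=1$ at the tuned dipole condition, and drop the overall positive constant $C_1$):

```python
import itertools

# Tuned condition: |beta|*omega*p0 = (sigma1^2/(mu0*a))*m0, both set to g = 1
# (our choice of units; C1 dropped as an overall positive constant).
g = 1.0

def coupling_mag2(m, sgn_beta):
    """|A_m|^2 up to C1 for angular momentum mode m = +/-1 and
    propagation direction sign(beta) = +/-1."""
    return abs(sgn_beta * g + m * g)**2

channels = {(m, s): coupling_mag2(m, s)
            for m, s in itertools.product((+1, -1), (+1, -1))}
# Only (m, sign(beta)) = (+1, +1) and (-1, -1) survive; the other two
# channels vanish, leaving two of the four naively degenerate solutions.
```

At the tuned condition the asymmetry is maximal: the disfavored direction does not merely weaken but vanishes entirely.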
We notice the striking fact that this coupling factor of the chiral emitter into the $\mathrm{HE_{11}}$ mode is direction dependent. The $+$ polarization chiral emitter couples only into the $m=+1$ mode and emits most strongly in the forward propagating $\mathrm{sign}(\beta)=+1$ direction while being weaker for backward propagation $\mathrm{sign}(\beta)=-1$. Conversely, the $-$ polarization chiral emitter couples only into the $m=-1$ mode and emits more strongly in the $\mathrm{sign}(\beta)=-1$ direction rather than $\mathrm{sign}(\beta)=+1$. This means we can control the directional propagation of waves \textit{and} the specific angular momentum mode ($m=\pm 1$) we couple into by choosing either left or right handed chiral emitters. This effect is maximal when the electric and magnetic dipole moments are tuned to have $|\beta|\omega p_0 = \frac{(\sigma_{1})^2}{\mu_{0}a}m_0$. For weakly guided waves, $|\beta|\approx\sigma_{1}$, and it can be shown that maximal coupling will occur when $m_0 \approx Z_1 p_0$ where $Z_1=Z_0/n_1=\sqrt{\mu_0/\epsilon_1\epsilon_0}$ is the wave impedance inside the fibre. We now propose an approach to couple strictly to the transverse spin components of the electric field with a transversely polarized electric source. This can have the advantage of not requiring magnetic dipoles or chirality. We achieve this by tuning the phase difference between two orthogonally oriented point dipole emitters $\mathbf{p}=p_x\mathbf{\hat{x}}+i~p_z\mathbf{\hat{z}}$. This emitter is placed at the location $\mathbf{r}_0=a\mathbf{\hat{x}}$ where the spin intensity is maximum (see Subsec.~\ref{subsec:4.4OpticalFibres}). The transverse spin is unchanged between angular momentum modes so they will both contribute to the propagation of the wave. 
The transverse coupling strength for both $m=\pm 1$ is equal to \begin{equation}\label{eq:TransverseCoupling} |A_m|^2 = C_2~\omega^2\bigg|\mathrm{sign}(\beta)|\beta| J_{0}(\mathrm{X})p_x+\mathrm{X}J_{1}(\mathrm{X})p_z\bigg|^2 \end{equation} where $C_2$ is another positive proportionality constant, and we see again that the coupling favors propagation in the $\mathrm{sign}(\beta)=+1$ direction over the $\mathrm{sign}(\beta)=-1$ direction. The asymmetry in coupling between the two directions is maximal when the dipole strengths are adjusted to have $|\beta| J_{0}(\mathrm{X})p_x=\mathrm{X}J_{1}(\mathrm{X})p_z$. We illustrate these two unique quantum emitter couplings and their orientation in the optical fiber in Fig.~(\ref{fig:QE}). \begin{figure} \includegraphics[width=0.75\columnwidth]{QE} \caption{Chiral emitter placed at $\mathbf{r}_0=0$ and transverse emitter placed at $\mathbf{r}_0=a\mathbf{\hat{x}}$ inside the optical fibre. The intrinsic chirality of the $\mathrm{HE_{11}}$ mode opens possibilities for spin-controlled quantum photonics. We emphasize that this intrinsic chirality is universal and arises from the evanescent waves outside the core.} \label{fig:QE} \end{figure} \subsection{Surface States} \label{subsec:4.6SurfaceStates} The last example is that of surface electromagnetic waves such as Zenneck waves \cite{jeon2006thz}, Dyakonov waves \cite{d1988new} and surface plasmon-polaritons (SPPs), which exist at the interface of two materials. The necessarily evanescent nature of the electromagnetic field will introduce very clear spin-momentum locking in all these waves. We emphasize that such polarization dependent transport has been observed for the particular case of surface plasmon polaritons \cite{rodriguez2013near,lin2013polarization,o2014spin,bliokh2012transverse}, but the universality and fundamental origin of the phenomenon has never been pointed out.
Note that surface waves are evanescent in both regions (see Fig.~(\ref{fig:6SPP})) and hence will have \textit{global} spin-locking, where the handedness of the wave will be invariant in each of the half-spaces. We explain this by taking the example of surface plasmon polaritons, which exist at the interface of a metal and a dielectric. Region 1 ($-z$) is metallic, with relative permittivity $\epsilon_1<0$, and the dielectric in region 2 ($+z$) has a relative permittivity $\epsilon_2>1$. This results in the familiar dispersion relation $\kappa=k_{0}\sqrt{\epsilon_{1}\epsilon_{2}/(\epsilon_{1}+\epsilon_{2})}$. We can now fully quantify the evanescent spin in terms of the permittivities. Utilizing the expression for the circular Stokes parameters ($S_{3}$) derived in Eq.~(\ref{eq:6stokesparamaters}), this leads to \begin{equation} -(S_{3})_{1}=(S_{3})_{2}=2\frac{\sqrt{|\epsilon_{1}|~\epsilon_{2}}}{|\epsilon_{1}|+\epsilon_{2}} \end{equation} where $(S_{3})_{1}$ and $(S_{3})_{2}$ are the $\mathbf{\hat{p}}$-polarization Stokes parameters in regions 1 and 2 respectively, and we are assuming the permittivities are purely real in this instance. As we can see, as $|\epsilon_{1}|\to\epsilon_{2}$, the momentum $\kappa\to\infty$ and the spin approaches perfect circular polarization, $-(S_{3})_{1}=(S_{3})_{2}\to1$, as expected. Also, to reiterate, the spin-momentum locking of evanescent waves means that these spins flip when the wave propagates in the opposite direction. To help visualize these phenomena, the electric field vector plot for an SPP is displayed in Fig.~(\ref{fig:6SPP}) along with the ``full'' SPP dispersion relation in Fig.~(\ref{fig:7SPPDisp}) that includes the handedness of the spin (in the dielectric region).
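Both the SPP momentum and the evanescent spin are simple functions of the permittivities; the following sketch (ours) evaluates them for the lossless values used in the figure, $\epsilon_1=-2$ and $\epsilon_2=1$:

```python
import math

def spp_momentum(eps1, eps2, k0=1.0):
    """SPP propagation constant kappa = k0*sqrt(eps1*eps2/(eps1+eps2)),
    assuming a lossless metal (eps1 < 0, |eps1| > eps2)."""
    return k0 * math.sqrt(eps1 * eps2 / (eps1 + eps2))

def spp_spin(eps1, eps2):
    """Stokes parameter (S3)_2 in the dielectric; region 1 carries -(S3)_2."""
    return 2.0 * math.sqrt(abs(eps1) * eps2) / (abs(eps1) + eps2)

kappa = spp_momentum(-2.0, 1.0)  # sqrt(2)*k0, matching the figure parameters
s3 = spp_spin(-2.0, 1.0)         # 2*sqrt(2)/3, already close to circular

# Toward the SPP resonance |eps1| -> eps2 the spin approaches unity
s3_near_resonance = spp_spin(-1.001, 1.0)
```

Even at the modest figure parameters the dielectric-side spin is $\approx0.94$, and it tends to perfect circular polarization as the resonance is approached.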
Our approach provides an intuitive explanation of this phenomenon observed in recent experiments where chiral emitters or near-field interference from electric and magnetic dipoles leads to unidirectional SPP propagation \cite{rodriguez2013near,lin2013polarization,o2014spin,lee2012role}. \begin{figure} \includegraphics[width=0.75\columnwidth]{SPP} \caption{All electromagnetic surface waves will show spin-momentum locking. We depict here an SPP excitation between metal with $\epsilon_1=-2$ and air with $\epsilon_2=1$ propagating in the $+x$ direction. The vector plot overlaid on the spatial distribution of the Stokes parameter ($S_3$) illustrates the inherent handedness of the two evanescent waves and how they couple with counter rotating spins.} \label{fig:6SPP} \includegraphics[width=0.75\columnwidth]{SPPDisp} \caption{SPP dispersion relation that also includes the handedness of the evanescent spin (in the dielectric region). As the momentum $\kappa$ increases, the SPP spin approaches perfect circular polarization (SPP resonance).} \label{fig:7SPPDisp} \end{figure} \section{Conclusion} \label{sec:5Conclusion} In conclusion, we have shown that evanescent waves possess an inherent local handedness (spin) which is tied to their phase velocity (momentum). We have proven that this spin-momentum locking is universal behavior, since it arises from causality and the complex dispersion relation of evanescent waves. It is interesting to note that recent work on topological photonics \cite{lu2014topological,khanikaev2013photonic,rechtsman2013photonic,gao2015topological} has shed light on the existence of surface states immune to disorder, and our work will surely lead to a better understanding of those surface states as well. The QSH surface state has electrons with spins locked to their direction of propagation, but it only occurs on the surface (interface) of materials with spin-orbit coupling (e.g., HgTe quantum wells).
The electromagnetic surface state, curiously, always possesses this property irrespective of the nature of the material. This warrants a deeper investigation and simultaneously opens up possibilities for practical applications. \section*{Funding Information} We acknowledge funding from the Helmholtz Alberta Initiative and the Natural Sciences and Engineering Research Council of Canada.
\chapter[Neutrino Masses and Flavor Oscillations] \begin{center} {\Large\bf Neutrino Masses and Flavor Oscillations} \end{center} \vspace{0.5cm} \author[Y.F. Wang $\&$ Z.Z. Xing]{{\bf Yifang Wang}\footnote{E-mail: yfwang@ihep.ac.cn}, ~ {\bf Zhi-zhong Xing}\footnote{E-mail: xingzz@ihep.ac.cn}} \address{Institute of High Energy Physics, Chinese Academy of Sciences, \\ P.O. Box 918, Beijing 100049, China} \begin{center} {\bf Abstract} \end{center} \begin{abstract} This essay is intended to provide a brief description of the peculiar properties of neutrinos within and beyond the standard theory of weak interactions. The focus is on the flavor oscillations of massive neutrinos, from which one has achieved some striking knowledge about their mass spectrum and flavor mixing pattern. The experimental prospects towards probing the absolute neutrino mass scale, possible Majorana nature and CP-violating effects will also be addressed. \end{abstract} \markright{Neutrino Masses and Flavor Oscillations} \body \vspace{1cm} \section{Neutrinos and Their Sources} \subsection{From Pauli's hypothesis to the discoveries of neutrinos} Soon after Henri Becquerel discovered the radioactivity of uranium in 1896 \cite{Becquerel}, many nuclear physicists started to pay attention to the beta decays $(A, Z) \to (A, Z+1) + e^-$, in which the energy spectrum of electrons was expected to be {\it discrete} owing to the laws of energy and momentum conservation. However, James Chadwick observed a {\it continuous} electron energy spectrum of the beta decay in 1914 \cite{Chadwick1}, and such a result was firmly confirmed by Charles Ellis and his colleagues in the 1920s \cite{Ellis}. At that time there were two different ideas to resolve this ``new physics'' phenomenon (i.e., the discrepancy between {\it observed} and {\it expected} energy spectra of electrons): one was to give up the energy conservation law and the other was to introduce a new particle.
Niels Bohr was the representative of the former idea, which turned out to be wrong. Wolfgang Pauli conjectured that an unobservable, light, spin-1/2 and neutral particle --- known as the electron antineutrino later --- appeared in the beta decay and carried away some energy and momentum, and thus the energy spectrum of electrons in the process $(A, Z) \to (A, Z+1) + e^- + \overline{\nu}^{}_e$ was continuous. Pauli first put forward the concept of neutrinos in his famous letter to the ``Dear radioactive ladies and gentlemen'' who had gathered in T$\rm\ddot{u}$bingen on 4 December 1930 \cite{Pauli}. Three years later he gave a talk on his neutrino hypothesis at the renowned Solvay Conference, where Enrico Fermi was in the audience and took this hypothesis seriously. At the end of 1933, Fermi published his most important theoretical work --- an effective theory of the beta decay \cite{Fermi}, which is actually a low-energy version of today's standard picture of weak charged-current interactions. Fermi's seminal work made it possible to calculate the reaction rates of nucleons and electrons (or positrons) interacting with neutrinos (or antineutrinos). In 1936, Hans Bethe pointed out that an inverse beta decay mode of the type $\overline{\nu}^{}_e + p \to n + e^+$ (or more generally, $\overline{\nu}^{}_e + (A, Z) \to (A, Z-1) + e^+$) could be a possible way to verify the existence of electron antineutrinos produced from either fission bombs or fission reactors \cite{Bethe0}. This preliminary idea was elaborated by Bruno Pontecorvo in 1946 \cite{Pon46}, and it became feasible with the development of the liquid scintillation counting techniques in the 1950s. Although the incident $\overline{\nu}^{}_e$ is invisible, it can trigger the inverse beta decay, in which the emitted positron annihilates with an electron and the emitted neutron is captured in the detector.
Both events are observable because they emit gamma rays, and the corresponding flashes in the liquid scintillator are separated by some microseconds. Frederick Reines and Clyde Cowan did the first reactor antineutrino experiment and obtained a positive result in 1956 \cite{RC}, and they reported a new result consistent with the parity-violating theory of weak interactions in 1960. The Nobel Prize finally came to Reines in 1995, 21 years after Cowan had passed away. The discovery of electron antineutrinos motivated Pontecorvo to speculate on the possibility of lepton number violation and neutrino-antineutrino transitions in 1957 \cite{Pontecorvo}. His argument was actually based on a striking conjecture made by Ettore Majorana in 1937: a massive neutrino could be its own antiparticle \cite{Majorana}. In 1962, the muon neutrino --- a sister of the electron neutrino --- was discovered by Leon Lederman, Melvin Schwartz and Jack Steinberger in an accelerator-based experiment \cite{Danby}. This discovery, which immediately motivated Ziro Maki, Masami Nakagawa and Shoichi Sakata to conjecture the $\nu^{}_e \leftrightarrow \nu^{}_\mu$ conversion \cite{MNS}, was also recognized by the Nobel Prize in 1988. The tau neutrino, another sister of the electron neutrino, was finally observed at Fermilab at the end of 2000 \cite{Kodama}. Within the standard model the complete lepton family consists of three charged members ($e$, $\mu$, $\tau$) and three neutral members ($\nu^{}_e$, $\nu^{}_\mu$, $\nu^{}_\tau$), and their corresponding antiparticles. \subsection{Where do neutrinos come from?} Neutrinos and antineutrinos may originate from many physical and astrophysical processes via weak interactions. Fig. 1 illustrates some typical examples of neutrino or antineutrino sources in the Universe.
\begin{figure}[t] \centerline{\includegraphics[width=11.5cm]{F1.eps}} \caption{ Some representative sources of neutrinos and (or) antineutrinos and their corresponding energies \cite{Source}. The cross sections of $\overline{\nu}^{}_e + e^- \to \overline{\nu}^{}_e + e^-$ scattering associated with different sources are also shown for comparison, where the peak around $6.3$ PeV is related to the Glashow resonance \cite{Glashow}.} \end{figure} {\it Example (1)}: Neutrinos and antineutrinos from the Big Bang. The standard cosmology predicts the existence of a cosmic neutrino (or antineutrino) background in the Universe. Today such {\it relic} neutrinos and antineutrinos should have an overall number density around $330 ~{\rm cm}^{-3}$, but their temperature is so low (only about $1.9$ K, or roughly $1.6 \times 10^{-4}$ eV) that there is currently no way to detect them. In the long run it might be possible to capture the relic electron neutrinos on some beta-decaying nuclei \cite{W}, as the PTOLEMY project is trying \cite{P}. {\it Example (2)}: Electron antineutrinos from the Earth. Since its birth, the Earth's interior has kept a number of radioactive nuclei (e.g., $^{40}{\rm K}$, $^{238}{\rm U}$ and $^{232}{\rm Th}$). That is why numerous electron antineutrinos can be produced from terrestrial ``natural radioactivity'' (i.e., the beta decays), at a rate of several million per square centimeter per second. So far such interesting geo-$\overline{\nu}^{}_e$ events have been observed at the $3\sigma$ level in the KamLAND \cite{GEO1} and Borexino \cite{GEO2} experiments. {\it Example (3)}: Electron neutrinos from the Sun. Solar electron neutrinos come along with a number of thermonuclear fusion reactions inside the Sun. One may understand why the Sun shines with the help of $4 p \to ~^4{\rm He} + 2 e^+ + 2 \nu^{}_e + 26.7 ~{\rm MeV}$: about $98\%$ of the energy radiates in the form of light and only $2\%$ of the energy is taken away by neutrinos \cite{Bethe}.
The only way to verify such a picture on the Earth is to detect the electron neutrinos emitted from the core of the Sun. In 1968 solar neutrinos were first observed by Raymond Davis in his radiochemical experiment (see section 4.1 for a more detailed description) \cite{Davis}. {\it Example (4)}: Neutrinos and antineutrinos from supernovae. The explosion of a supernova may release the gravitational binding energy of ${\cal O}(10^{53})$ erg in the form of neutrinos and antineutrinos \cite{Bethe2}. On 23 February 1987 the $\nu^{}_e$ and $\overline{\nu}^{}_e$ events from the Supernova 1987A explosion were observed by the Kamiokande-II \cite{Koshiba}, IMB \cite{IMB} and Baksan \cite{Baksan} detectors. This observation was a great milestone in neutrino astronomy. Davis and Masatoshi Koshiba received the Nobel Prize in 2002 for their pioneering detections of solar and supernova neutrinos, respectively. {\it Example (5)}: Neutrinos and antineutrinos from the Earth's atmosphere. When a cosmic ray (which is mainly composed of high-energy protons coming from somewhere in the galactic or extragalactic space) penetrates the atmosphere around the Earth, it may interact with the ambient nuclei and generate a particle shower containing charged pions and muons. The decays of $\pi^{\pm}$ and $\mu^{\pm}$ can therefore produce atmospheric $\nu^{}_\mu$, $\overline{\nu}^{}_\mu$, $\nu^{}_e$ and $\overline{\nu}^{}_e$ events, which have been observed in several experiments \cite{PDG}. In particular, the phenomenon of atmospheric neutrino oscillations was firmly established by the Super-Kamiokande (SK) Collaboration in 1998 \cite{SK2}. {\it Example (6)}: Ultrahigh-energy (UHE) cosmic neutrinos and antineutrinos from distant astrophysical sources, including the expected active galactic nuclei, gamma ray bursts, supernova remnants and the Greisen-Zatsepin-Kuzmin cutoff of cosmic rays \cite{XZ}. 
The UHE $\nu^{}_\mu$, $\overline{\nu}^{}_\mu$, $\nu^{}_e$ and $\overline{\nu}^{}_e$ events can be produced from UHE $p\gamma$ or $pp$ collisions via $\pi^{\pm}$ and $\mu^{\pm}$ decays, and thus they may serve as a unique cosmic messenger and provide us with useful information about the cosmos that cannot be extracted from the measurements of cosmic rays and gamma rays. So far the IceCube detector at the South Pole has observed 37 extraterrestrial neutrino candidate events with deposited energies ranging from 30 TeV to 2 PeV \cite{IC}. Among them, the three PeV events represent the highest-energy neutrino interactions ever observed, but their astrophysical origin remains mysterious. Of course, neutrinos and (or) antineutrinos can also be produced from some man-made facilities, especially the nuclear reactors and particle accelerators. They also play a crucial role in discovering neutrinos, observing flavor oscillations and measuring fundamental parameters, as one will see in sections 3---5. \section{Weak Interactions of Neutrinos in the Standard Theory} As an important part of the matter content in the standard electroweak model based on the $SU(2)^{}_{\rm L} \times U(1)^{}_{\rm Y}$ gauge group, neutrinos are assumed to be {\it massless} Weyl particles. Hence only the left-handed neutrinos and right-handed antineutrinos exist, and they take part in weak charged- and neutral-current interactions via \begin{eqnarray} -{\cal L}^{}_{\rm cc} & = & \frac{g}{2\sqrt{2}} \sum_\alpha \left[ \overline{\alpha} \ \gamma^\mu \left(1 - \gamma^{}_5\right) \nu^{}_\alpha W^-_\mu + {\rm h.c.} \right] \; , \nonumber \\ -{\cal L}^{}_{\rm nc} & = & \frac{g}{4\cos\theta^{}_{\rm w}} \sum_\alpha \left[ \overline{\nu^{}_\alpha} \ \gamma^\mu \left(1 - \gamma^{}_5\right) \nu^{}_\alpha \right] Z^{}_\mu \; , \end{eqnarray} where $\alpha = e, \mu, \tau$. Eq.
(1) allows one to calculate the cross sections of neutrino-electron, neutrino-neutrino and neutrino-nucleon scattering processes \cite{XZ}. Note that the reactions $\nu^{}_e + e^- \to \nu^{}_e + e^-$ and $\overline{\nu}^{}_e + e^- \to \overline{\nu}^{}_e + e^-$ can happen via both charged- and neutral-current interactions, but $\nu^{}_\mu + e^- \to \nu^{}_\mu + e^-$ (or $\nu^{}_\tau + e^- \to \nu^{}_\tau + e^-$) and $\overline{\nu}^{}_\mu + e^- \to \overline{\nu}^{}_\mu + e^-$ (or $\overline{\nu}^{}_\tau + e^- \to \overline{\nu}^{}_\tau + e^-$) can only occur via the neutral-current interactions. That is why the behavior of neutrino flavor conversion in a dense medium may be modified by the coherent forward $\nu^{}_e e^-$ or $\overline{\nu}^{}_e e^-$ scattering. This effect is referred to as the Mikheyev-Smirnov-Wolfenstein (MSW) matter effect \cite{MSW}. The simplest quasi-elastic neutrino-nucleon scattering processes are the inverse beta decays $\overline{\nu}^{}_e + p \to e^+ + n$ and $\nu^{}_e + n \to e^- + p$, which take place via the charged-current weak interactions. Their cross sections can be approximately expressed as $\sigma \left(\overline{\nu}^{}_e p\right) = \sigma \left(\nu^{}_e n\right) \simeq 9.1 \times 10^{-44} \left(E^{}_\nu/{\rm MeV}\right)^2 {\rm cm}^2$. In comparison, the elastic neutrino-nucleon scattering reaction $\nu^{}_\alpha + N \to \nu^{}_\alpha + N$ (for $\alpha = e, \mu, \tau$) is mediated by the neutral-current weak interactions. Historically, the existence of weak neutral currents was first established in the Gargamelle bubble chamber at CERN in 1973 \cite{NC}. This experiment, which observed the highly anticipated events of $\nu^{}_\mu + N \to \nu^{}_\mu + {\rm hadrons}$ and $\overline{\nu}^{}_\mu + N \to \overline{\nu}^{}_\mu + {\rm hadrons}$, crowned the long-running neutrino program initiated by CERN and brought CERN a leading role in the field of high energy physics.
It also provided an unprecedentedly strong support to the standard electroweak model formulated by Sheldon Glashow, Steven Weinberg and Abdus Salam in the 1960s \cite{GWS}. These three theorists received the Nobel Prize in 1979 for their contributions to the electroweak theory and especially for their prediction of the weak neutral current. Four years later, the three mediators of the weak force (i.e., the $W^\pm$ and $Z^0$ bosons) were finally discovered by Carlo Rubbia and his colleagues at CERN \cite{Rubbia}. The standard model was thoroughly tested in the 1990s with the help of the Large Electron-Positron Collider (LEP) running on the $Z^0$ resonance at CERN. In particular, the number of neutrino species was determined to be $N^{}_\nu = 2.984 \pm 0.008$ via the decay $Z^0 \to \nu^{}_\alpha + \overline{\nu}^{}_\alpha$ \cite{PDG}. Such a result agrees very well with the value of 3 required in the theory. Extra light neutrino species are not impossible, but they must be ``sterile" --- in the sense that they do not directly take part in the standard weak interactions, and hence their existence is not subject to the LEP measurement. Note that the structure of the standard model itself is too economical to allow the neutrinos to be massive. On the one hand, the particle content of the model is so limited that there are neither right-handed neutrinos nor any Higgs triplets. Hence a normal Dirac neutrino mass term is not allowed, nor is a gauge-invariant Majorana mass term. On the other hand, the model is a renormalizable quantum field theory. The renormalizability implies that an effective dimension-5 operator, which could give each neutrino a Majorana mass, is also forbidden.
\section{Neutrino Masses, Flavor Mixing and Oscillations} \subsection{Massive neutrinos and their electromagnetic properties} There are several ways to slightly extend the standard theory such that the neutrinos can acquire their masses with little influence on the great success of the theory itself \cite{Xing09}. Here let us take two typical examples for illustration. (1) If the renormalizability of the standard theory is relaxed, then the lowest-dimension operator that violates lepton number and generates neutrino masses must be the unique dimension-5 Weinberg operator $HH \ell\ell/\Lambda$, where $\Lambda$ denotes the cut-off energy scale in such an effective field theory, $H$ and $\ell$ are the Higgs and lepton doublets, respectively \cite{Weinberg}. After spontaneous gauge symmetry breaking, this operator yields the neutrino masses $m^{}_i \sim \langle H\rangle^2/\Lambda$ (for $i=1,2,3$), which can be sufficiently small ($\lesssim 1$ eV) provided $\Lambda \gtrsim 10^{13}$ GeV and $\langle H\rangle \sim 10^2$ GeV. In this sense the study of neutrino mass generation can serve as a striking low-energy window onto new physics at superhigh energy scales. (2) If two or more heavy right-handed neutrinos are added into the standard theory and lepton number is violated by their Majorana mass term, then the Lagrangian responsible for neutrino masses can be written as \begin{eqnarray} -{\cal L}^{}_{\rm mass} = \overline{\ell^{}_{\rm L}} Y^{}_\nu \tilde{H} N^{}_{\rm R} + \frac{1}{2} \overline{N^c_{\rm R}} M^{}_{\rm R} N^{}_{\rm R} + {\rm h.c.} \; , \end{eqnarray} in which the first term stands for the neutrino Yukawa interactions, and the second term is lepton-number-violating. 
After the $SU(2)^{}_{\rm L} \times U(1)^{}_{\rm Y}$ gauge symmetry is spontaneously broken to $U(1)^{}_{\rm em}$, one is left with the effective Majorana neutrino mass matrix $M^{}_\nu \simeq - \langle H\rangle^2 Y^{}_\nu M^{-1}_{\rm R} Y^T_\nu$, which is often referred to as the canonical {\it seesaw} formula \cite{SS}. Because $N^{}_{\rm R}$ is an $SU(2)^{}_{\rm L}$ singlet, the mass scale of $M^{}_{\rm R}$ can be much higher than the electroweak scale $\langle H\rangle$. Hence the mass scale of $M^{}_\nu$ is highly suppressed, providing a natural explanation of the smallness of neutrino masses. Instead of introducing the heavy right-handed neutrinos, one may also introduce a Higgs triplet or a few triplet fermions into the standard theory so as to explain why the three active neutrinos should have naturally small masses \cite{XZ}. Such seesaw mechanisms essentially share the same spirit: they attribute the smallness of neutrino masses to the heaviness of new degrees of freedom. Furthermore, they require massive neutrinos to be Majorana particles and thus allow some lepton-number-violating processes to happen. It is worth pointing out that a pure Dirac neutrino mass term, originating from the neutrino Yukawa interactions on the right-hand side of Eq. (2), is less convincing and less interesting from a theoretical point of view. The reason for this argument is two-fold: (a) such a scenario cannot explain why the neutrino masses are so small as compared with the charged-lepton masses; (b) given $N^{}_{\rm R}$, the lepton-number-violating term $\overline{N^c_{\rm R}} M^{}_{\rm R} N^{}_{\rm R}$ should not be absent because it is forbidden by neither gauge symmetry nor Lorentz invariance. If massive neutrinos really have the Majorana nature, they can trigger the neutrinoless double-beta ($0\nu\beta\beta$) decays and some other lepton-number-violating processes.
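As a quantitative aside, it is instructive to insert typical numbers into the canonical seesaw formula quoted above. For order-one Yukawa couplings $Y^{}_\nu$ one obtains the rough estimate
\[
m^{}_\nu \sim \frac{\langle H\rangle^2}{M^{}_{\rm R}} \sim \frac{\left(10^2 ~ {\rm GeV}\right)^2}{10^{14} ~ {\rm GeV}} = 10^{-10} ~ {\rm GeV} = 0.1 ~ {\rm eV} \; ,
\]
showing that a right-handed Majorana mass scale not far below the typical grand-unification scale naturally yields sub-eV light neutrino masses.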
In particular, they are likely to have something to do with the observed asymmetry of matter and antimatter in the Universe via the seesaw and leptogenesis \cite{FY} mechanisms. Hence the phenomenology of Majorana neutrinos is much richer and more interesting than that of Dirac neutrinos. Although a massive neutrino does not possess any electric charge, it can have electromagnetic interactions via quantum loops \cite{DM}. Since Dirac and Majorana neutrinos couple to the photon in different ways, their corresponding electromagnetic form factors must be different. Given the standard weak interactions, one finds that a massive Dirac neutrino has no electric dipole moment and its magnetic dipole moment is finite but extremely small: $\mu^{}_\nu \sim 3 \times 10^{-20} \left(m^{}_\nu/0.1 ~{\rm eV}\right) \mu^{}_{\rm B}$ with $\mu^{}_{\rm B}$ being the Bohr magneton. In contrast, a massive Majorana neutrino has neither electric nor magnetic dipole moments, simply because its antiparticle is just itself. But both Dirac and Majorana neutrinos can have {\it transition} dipole moments (i.e., from one mass eigenstate to another), which may result in neutrino decays, neutrino-electron scattering, neutrino interactions with external magnetic fields, etc.\ \cite{Giunti1}. In a realistic neutrino-electron scattering experiment, what can be constrained is actually an effective transition dipole moment $\mu^{}_{\rm eff}$ consisting of both electric and magnetic components. Hence it is practically impossible to distinguish between Dirac and Majorana neutrinos in such measurements. Current experimental upper bounds on $\mu^{}_{\rm eff}$ are at the level of $10^{-11} \mu^{}_{\rm B}$ \cite{Giunti1}, far above the aforementioned theoretical expectation $\mu^{}_\nu \sim 10^{-20} \mu^{}_{\rm B}$.
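For completeness, the estimate quoted above follows from the standard one-loop result for the magnetic dipole moment of a massive Dirac neutrino,
\[
\mu^{}_\nu = \frac{3 e G^{}_{\rm F} m^{}_\nu}{8 \sqrt{2} \ \pi^2} \simeq 3.2 \times 10^{-19} \left(\frac{m^{}_\nu}{1 ~ {\rm eV}}\right) \mu^{}_{\rm B} \; ,
\]
which reproduces $\mu^{}_\nu \sim 3 \times 10^{-20} \mu^{}_{\rm B}$ for $m^{}_\nu = 0.1$ eV and makes explicit that $\mu^{}_\nu$ vanishes in the massless limit.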
\subsection{Lepton flavor mixing and neutrino oscillations} In the basis where the flavor eigenstates of three charged leptons are identified with their mass eigenstates, one may diagonalize the Majorana neutrino mass matrix $M^{}_\nu$ by means of a unitary transformation. Then the leptonic charged-current interactions in Eq. (1) can be reexpressed in terms of the mass eigenstates: \begin{eqnarray} -{\cal L}^{}_{\rm cc} = \frac{g}{\sqrt{2}} \ \overline{\left( e ~ \mu ~ \tau\right)^{}_{\rm L}} \ \gamma^\mu \ U \left( \begin{matrix} \nu^{}_1 \cr \nu^{}_2 \cr \nu^{}_3 \end{matrix} \right)^{}_{\rm L} W^-_\mu + {\rm h.c.} \; , \end{eqnarray} where the $3\times 3$ unitary matrix $U$ describes the strength of lepton flavor mixing and can be parameterized by using three rotation angles and three CP-violating phases: \begin{eqnarray} U = \left( \begin{matrix} c^{}_{12} c^{}_{13} & s^{}_{12} c^{}_{13} & s^{}_{13} e^{-{\rm i} \delta} \cr -s^{}_{12} c^{}_{23} - c^{}_{12} s^{}_{13} s^{}_{23} e^{{\rm i} \delta} & c^{}_{12} c^{}_{23} - s^{}_{12} s^{}_{13} s^{}_{23} e^{{\rm i} \delta} & c^{}_{13} s^{}_{23} \cr s^{}_{12} s^{}_{23} - c^{}_{12} s^{}_{13} c^{}_{23} e^{{\rm i} \delta} & ~ -c^{}_{12} s^{}_{23} - s^{}_{12} s^{}_{13} c^{}_{23} e^{{\rm i} \delta} ~ & c^{}_{13} c^{}_{23} \cr \end{matrix} \right) P^{}_\nu \; , \end{eqnarray} where $c^{}_{ij} \equiv \cos\theta^{}_{ij}$, $s^{}_{ij} \equiv \sin\theta^{}_{ij}$ (for $ij = 12, 13, 23$), $\delta$ is referred to as the Dirac CP-violating phase, and $P^{}_\nu = {\rm Diag}\left\{e^{{\rm i}\rho}, e^{{\rm i}\sigma}, 1\right\}$ contains two extra phase parameters of the Majorana nature. The matrix $U$ is often called the Pontecorvo-Maki-Nakagawa-Sakata (PMNS) matrix, and its unitarity has been tested at the percent level \cite{Antusch} \footnote{Note that whether $U$ is unitary or not depends on the mechanism of neutrino mass generation. 
In the canonical seesaw mechanism \cite{SS}, for instance, the mixing between light and heavy Majorana neutrinos may lead to tiny unitarity-violating effects for the PMNS matrix $U$ itself.}. Eq. (3) tells us that a $\nu^{}_\alpha$ neutrino can be produced from the $W^+ + \alpha^- \to \nu^{}_\alpha$ interaction, and a $\nu^{}_\beta$ neutrino can be detected through the $\nu^{}_\beta + W^- \to \beta^-$ interaction (for $\alpha, \beta = e, \mu, \tau$). The $\nu^{}_\alpha \to \nu^{}_\beta$ oscillation may happen if the $\nu^{}_i$ beam with energy $E \gg m^{}_i$ travels a proper distance $L$ in vacuum. The probability of such a flavor oscillation is given by \cite{XZ} \begin{eqnarray} P(\nu^{}_\alpha \to \nu^{}_\beta) & = & \delta^{}_{\alpha\beta} - 4 \sum_{i<j} \left({\rm Re} \lozenge^{ij}_{\alpha\beta} \sin^2\Delta^{}_{ji} \right) + \ 8 {\rm Im} \lozenge^{ij}_{\alpha\beta} \prod_{i<j} \sin\Delta^{}_{ji} \; , \end{eqnarray} in which $\Delta^{}_{ji} \equiv \Delta m^2_{ji} L/\left(4 E\right)$ and $\lozenge^{ij}_{\alpha\beta} \equiv U^{}_{\alpha i} U^{}_{\beta j} U^*_{\alpha j} U^*_{\beta i}$ (for $i,j = 1,2,3$ and $\alpha, \beta = e, \mu, \tau$). The probability of the $\overline{\nu}^{}_\alpha \to \overline{\nu}^{}_\beta$ oscillation can easily be read off from Eq. (5) by making the replacement $U \to U^*$. There are two types of neutrino oscillation experiments: the ``appearance" one ($\alpha \neq \beta$) and the ``disappearance" one ($\alpha = \beta$). Both solar neutrino oscillations ($\nu^{}_e \to \nu^{}_e$) and reactor antineutrino oscillations ($\overline{\nu}^{}_e \to \overline{\nu}^{}_e$) are of the disappearance type. The atmospheric muon-neutrino (or muon-antineutrino) oscillations essentially belong to the disappearance type, and the accelerator neutrino oscillations can be of either type. At this point let us explain why it is extremely difficult to do a realistic neutrino-antineutrino oscillation experiment. 
We consider an $\overline{\nu}^{}_\alpha$ beam produced from the standard charged-current interaction $\alpha^+ + W^- \to \overline{\nu}^{}_\alpha$. After traveling a distance $L$, this beam is detected via the standard charged-current interaction $\nu^{}_\beta \to \beta^- + W^+$. Different from the normal $\nu^{}_\alpha \to \nu^{}_\beta$ or $\overline{\nu}^{}_\alpha \to \overline{\nu}^{}_\beta$ oscillations, the $\overline{\nu}^{}_\alpha \to \nu^{}_\beta$ oscillation involves a suppression factor $m^{}_i/E$ in its amplitude. This factor reflects the fact that the incoming $\alpha^+$ leads to an antineutrino $\overline{\nu}^{}_\alpha$ in a dominantly right-handed helicity state, whereas the standard charged-current interactions that produce the outgoing $\beta^-$ would prefer the incident neutrino $\nu^{}_\beta$ to be in a left-handed state \cite{S}. Since $m^{}_i \lesssim 1$ eV and $E \gtrsim 1$ MeV in a realistic experiment, this helicity suppression factor (i.e., $m^{}_i/E \lesssim 10^{-6}$) makes it essentially impossible to observe the phenomenon of neutrino-antineutrino oscillations. \section{Observations of Neutrino Oscillations} \subsection{Solar neutrino oscillations} In 1946 Pontecorvo put forward a radiochemical technique which can be used to measure solar electron neutrinos via the reaction $^{37}{\rm Cl} + \nu^{}_e \to ~^{37}{\rm Ar} + e^-$ \cite{Pon46}. The energy threshold for this reaction is $0.814$ MeV, low enough to make it sensitive to solar $^8{\rm B}$ neutrinos. In 1964 John Bahcall carefully calculated the solar neutrino flux and the capture rate of $^8{\rm B}$ neutrinos, demonstrating the experimental feasibility of Pontecorvo's idea \cite{Bahcall}. This motivated Davis to build a $10^5$-gallon chlorine-argon neutrino detector in the Homestake Gold Mine in the middle of the 1960s.
The final result of this experiment was published in 1968 and caused a big puzzle: the measured flux of solar $^8{\rm B}$ neutrinos was only about one third of the value predicted by the standard solar model (SSM) \cite{Davis}. Such a deficit was later confirmed in a number of solar neutrino experiments, including the Homestake \cite{H}, GALLEX/GNO \cite{G}, SAGE \cite{Sage}, SK \cite{SK} and SNO \cite{SNO} experiments. Among them, the SNO experiment was especially crucial because it model-independently demonstrated the flavor conversion of solar $\nu^{}_e$ neutrinos into $\nu^{}_\mu$ and $\nu^{}_\tau$ neutrinos. \begin{figure}[t] \centerline{\includegraphics[width=12.5cm]{F2.eps}} \caption{The $\nu^{}_\mu + \nu^{}_\tau$ flux versus the $\nu^{}_e$ flux determined from the SNO data. The total solar $^8{\rm B}$ neutrino flux predicted by the SSM is shown as dashed lines, parallel to the NC measurement. The narrowed band parallel to the SNO's ES measurement corresponds to the SK's ES result. The best-fit point is obtained by using only the SNO data \cite{SNO2}.} \end{figure} Given heavy water as the target material of the SNO detector, the solar $^8{\rm B}$ neutrinos were measured via the charged-current (CC) reaction $\nu^{}_e + {\rm D} \to e^- + p + p$, the neutral-current (NC) reaction $\nu^{}_\alpha + {\rm D} \to \nu^{}_\alpha + p + n$ and the elastic-scattering process $\nu^{}_\alpha + e^- \to \nu^{}_\alpha + e^-$ (for $\alpha = e, \mu, \tau$) \cite{SNO}. The observed neutrino fluxes in these three different channels are expected to satisfy $\phi^{}_{\rm CC} = \phi^{}_e$, $\phi^{}_{\rm NC} = \phi^{}_e + \phi^{}_{\mu\tau}$ and $\phi^{}_{\rm ES} = \phi^{}_e + 0.155 \phi^{}_{\mu \tau}$, where $\phi^{}_{\mu\tau}$ denotes a sum of the fluxes of $\nu^{}_\mu$ and $\nu^{}_\tau$ neutrinos. So $\phi^{}_{\rm CC} = \phi^{}_{\rm NC} = \phi^{}_{\rm ES}$ would hold if there were no flavor conversion (i.e., $\phi^{}_{\mu\tau} =0$). 
The SNO data $\phi^{}_{\rm CC} = 1.68^{+0.06}_{-0.06} ({\rm stat})^{+0.08}_{-0.09} ({\rm syst})$, $\phi^{}_{\rm NC} = 4.94^{+0.21}_{-0.21} ({\rm stat})^{+0.38}_{-0.34} ({\rm syst})$ and $\phi^{}_{\rm ES} = 2.35^{+0.22}_{-0.22} ({\rm stat})^{+0.15}_{-0.15} ({\rm syst})$, as illustrated in Fig. 2 \cite{SNO2}, definitely demonstrated $\phi^{}_{\mu\tau} \neq 0$. Now we are sure that the deficit of solar $^8{\rm B}$ neutrinos, whose typical energies are about 6 MeV to 7 MeV, is due to $\nu^{}_e \to \nu^{}_\mu$ and $\nu^{}_e \to \nu^{}_\tau$ oscillations modified by significant MSW matter effects in the Sun. A careful analysis shows that the observed survival probability of solar $^8{\rm B}$ neutrino oscillations is well approximated by $P(\nu^{}_e \to \nu^{}_e) \simeq \sin^2\theta^{}_{12} \simeq 0.32$ \cite{Kayser}, leading us to $\theta^{}_{12} \simeq 34^\circ$. Moreover, the Borexino experiment has accomplished a real-time measurement of the mono-energetic solar $^7{\rm Be}$ neutrinos with $E = 0.862$ MeV and observed a remarkable deficit corresponding to $P(\nu^{}_e \to \nu^{}_e) = 0.56 \pm 0.1$ \cite{B}. Such a result can roughly be explained as a vacuum oscillation effect, because the low-energy $^7{\rm Be}$ neutrino oscillation is not very sensitive to matter effects \cite{Kayser}. In this case we are left with the averaged survival probability $P(\nu^{}_e \to \nu^{}_e) \simeq 1 - \sin^2 2\theta^{}_{12}/2 \simeq 0.56$ as a reasonable approximation for solar $^7{\rm Be}$ neutrinos, and thus obtain $\theta^{}_{12} \simeq 35^\circ$. This result is essentially consistent with the one extracted from solar $^8{\rm B}$ neutrinos.
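The averaged survival probability used above can be checked explicitly. In the two-flavor vacuum oscillation approximation, averaging the oscillating factor over many periods gives $\langle \sin^2 \Delta^{}_{21} \rangle = 1/2$, and hence
\[
\overline{P}(\nu^{}_e \to \nu^{}_e) = 1 - \sin^2 2\theta^{}_{12} \ \langle \sin^2 \Delta^{}_{21} \rangle \simeq 1 - \frac{0.88}{2} \simeq 0.56
\]
for $\theta^{}_{12} \simeq 35^\circ$, in good agreement with the Borexino measurement.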
\subsection{Atmospheric neutrino oscillations} The atmospheric $\nu^{}_\mu$, $\overline{\nu}^{}_\mu$, $\nu^{}_e$ and $\overline{\nu}^{}_e$ events are produced in the Earth's atmosphere by cosmic rays, mainly via the decays $\pi^+ \to \mu^+ + \nu^{}_\mu$ with $\mu^+ \to e^+ + \nu^{}_e + \overline{\nu}^{}_\mu$ and $\pi^- \to \mu^- + \overline{\nu}^{}_\mu$ with $\mu^- \to e^- + \overline{\nu}^{}_e + \nu^{}_\mu$. So the ratio of $\nu^{}_\mu$ and $\overline{\nu}^{}_\mu$ events to $\nu^{}_e$ and $\overline{\nu}^{}_e$ events is expected to be nearly $2:1$ at low energies ($\lesssim 1$ GeV). But a smaller ratio was observed at the Kamiokande \cite{K2} and IMB \cite{IMB2} detectors in the late 1980s and early 1990s, indicating a preliminary deficit of atmospheric muon neutrinos and muon antineutrinos. If there were no neutrino oscillation, the atmospheric neutrinos that enter and excite an underground detector would have an almost perfect spherical symmetry. Namely, the downward-going and upward-going neutrino fluxes should be equal to each other, or equivalently $\Phi^{}_e (\theta^{}_z) = \Phi^{}_e (\pi -\theta^{}_z)$ and $\Phi^{}_\mu (\theta^{}_z) = \Phi^{}_\mu (\pi -\theta^{}_z)$ for the zenith angle $\theta^{}_z$. In 1998 the SK Collaboration observed an approximate up-down flux symmetry for atmospheric $\nu^{}_e$ and $\overline{\nu}^{}_e$ events and a significant up-down flux asymmetry for atmospheric $\nu^{}_\mu$ and $\overline{\nu}^{}_\mu$ events \cite{SK2}. \begin{figure}[t] \centerline{\includegraphics[width=7.6cm]{F3.eps}} \caption{ A brief view from inside the SK detector's water tank during filling \cite{SK2}.} \end{figure} The SK detector is a $5\times 10^4$-ton tank of ultra-pure water, located approximately 1 km underground in the Mozumi Mine in Kamioka. As illustrated in Fig. 3, the inside surface of the tank is lined with more than $1.1 \times10^4$ photo-multiplier tubes (PMTs). 
An additional layer of water, called the outer detector, is also instrumented with PMTs to detect any charged particles entering the central volume and to shield the inner detector by absorbing any neutrons produced in the nearby rock. A neutrino interacting with the electrons or nuclei of water can produce a charged particle that moves faster than the speed of light in water, creating a cone of light known as Cherenkov radiation. The Cherenkov light is projected as a ring on the wall of the detector and recorded by the PMTs. Hence the direction and flavor of an incident neutrino can be identified by using the details of the ring pattern. As shown in Fig. 4, the observed deficit of atmospheric upward-going $\nu^{}_\mu$ and $\overline{\nu}^{}_\mu$ events at SK could naturally be attributed to $\nu^{}_\mu \to \nu^{}_\tau$ and $\overline{\nu}^{}_\mu \to \overline{\nu}^{}_\tau$ oscillations, because the detector itself was insensitive to $\nu^{}_\tau$ and $\overline{\nu}^{}_\tau$ events. This was actually the first {\it model-independent} evidence for neutrino oscillations, and it marked the beginning of a new era in particle physics. Since 1998 a number of breakthroughs have been made in experimental neutrino physics. \begin{figure}[t] \centerline{\includegraphics[width=8.5cm]{F4.eps}} \caption{The SK zenith-angle distributions for fully contained 1-ring $e$-like and $\mu$-like events with visible energy $< 1.33$ GeV (sub-GeV) and $> 1.33$ GeV (multi-GeV). For multi-GeV $\mu$-like events, a combined distribution with partially contained events is illustrated.
The dotted histograms show the non-oscillation Monte Carlo events, and the solid histograms show the best-fit expectations for atmospheric $\nu^{}_\mu \to \nu^{}_\mu$ oscillations \cite{PDG}.} \end{figure} In 2004 the SK Collaboration carried out a careful analysis of the $\nu^{}_\mu$ (or $\overline{\nu}^{}_\mu$) disappearance probability as a function of the neutrino flight length $L$ over the neutrino energy $E$, and observed a dip in the $L/E$ distribution as the first {\it direct} evidence for atmospheric neutrino oscillations \cite{SK04}. This dip was consistent with the prediction from the sinusoidal flavor transition probability of neutrino oscillations, but inconsistent with the exotic neutrino decay and neutrino decoherence scenarios. Directly observing the atmospheric $\nu^{}_\mu \to \nu^{}_\tau$ appearance is quite difficult because it requires the neutrino energy to exceed a threshold of about 3.5 GeV, such that a tau lepton can be produced via the charged-current interaction of an incident $\nu^{}_\tau$ with the target nuclei in the detector. But the SK data are found to be best described by neutrino oscillations that include the $\nu^{}_\tau$ appearance in addition to the overwhelming signature of the $\nu^{}_\mu$ disappearance. A neural network analysis of the zenith-angle distribution of multi-GeV contained events has recently demonstrated this observation at the $3.8 \sigma$ level \cite{SK13}. \subsection{Accelerator neutrino oscillations} If the observed deficit of atmospheric $\nu^{}_\mu$ and $\overline{\nu}^{}_\mu$ events is ascribed to neutrino oscillations, then a fraction of the accelerator-produced $\nu^{}_\mu$ and $\overline{\nu}^{}_\mu$ events should also disappear on their way to a remote detector. This expectation has definitely been confirmed by two long-baseline neutrino oscillation experiments: K2K \cite{K2K} and MINOS \cite{MINOS}.
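These disappearance measurements all probe essentially the same two-flavor survival probability, which in practical units reads
\[
P(\nu^{}_\mu \to \nu^{}_\mu) \simeq 1 - \sin^2 2\theta^{}_{23} \ \sin^2 \left( 1.27 \ \frac{\Delta m^2_{32}}{{\rm eV}^2} \ \frac{L/{\rm km}}{E/{\rm GeV}} \right) \; .
\]
Its first minimum occurs at $L/E \simeq 500$ km/GeV for $\Delta m^2_{32} \simeq 2.4 \times 10^{-3} ~ {\rm eV}^2$, which is just the position of the dip observed in the SK $L/E$ analysis.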
The K2K experiment was designed in such a way that the $\nu^{}_\mu$ beam was produced at the KEK accelerator and measured 250 km away at the SK detector in Kamioka. In comparison, the baseline length of the MINOS experiment is 735 km, from the source of $\nu^{}_\mu$ neutrinos at Fermilab to the far detector in northern Minnesota. Both of them have observed a reduction of the $\nu^{}_\mu$ flux and a distortion of the $\nu^{}_\mu$ energy spectrum, implying $\nu^{}_\mu \to \nu^{}_\mu$ oscillations. The most striking result obtained from the atmospheric and accelerator neutrino oscillation experiments is $\sin^2 2\theta^{}_{23} \simeq 1$ or $\theta^{}_{23} \simeq 45^\circ$, which might hint at a special flavor structure or a certain flavor symmetry in the neutrino sector \cite{Vissani}. An especially important accelerator neutrino oscillation experiment is the T2K experiment with a $\nu^{}_\mu$ beam produced from the J-PARC Main Ring in Tokai and pointing to the SK detector at a distance of 295 km. Its main goal is to discover $\nu^{}_\mu \to \nu^{}_e$ appearance oscillations and perform a precision measurement of $\nu^{}_\mu \to \nu^{}_\mu$ disappearance oscillations. Since its preliminary data were first released in June 2011, the T2K experiment has proved to be very successful in establishing the $\nu^{}_e$ appearance out of a $\nu^{}_\mu$ beam at the $7.3 \sigma$ level and constraining the neutrino mixing parameters $\theta^{}_{13}$, $\theta^{}_{23}$ and $\delta$ \cite{T2K}. The point is that the leading term of $P(\nu^{}_\mu \to \nu^{}_e)$ is sensitive to $\sin^2 2\theta^{}_{13} \sin^2\theta^{}_{23}$, and its sub-leading term is sensitive to $\delta$ and terrestrial matter effects \cite{Freund}. Fig. 
5 shows the allowed region of $\sin^2 2\theta^{}_{13}$ changing with the CP-violating phase $\delta$ as constrained by the T2K data \cite{T2K}, from which one can see an unsuppressed value of $\theta^{}_{13}$ together with a preliminary hint $\delta \sim -\pi/2$ even though the neutrino mass ordering (i.e., the sign of $\Delta m^2_{32}$) remains undetermined. \begin{figure}[t] \centerline{\includegraphics[width=12.8cm]{F5.eps}} \caption{The allowed region of $\sin^2 2\theta^{}_{13}$ as a function of the CP-violating phase $\delta$, constrained by the present T2K neutrino oscillation data \cite{T2K}.} \end{figure} Different from the K2K, MINOS and T2K experiments, the OPERA experiment was designed to search for the $\nu^{}_\tau$ appearance in a $\nu^{}_\mu$ beam traveling from CERN to Gran Sasso at a distance of 730 km. After several years of data taking, the OPERA Collaboration reported four $\nu^{}_\tau$ candidate events in 2014. These events are consistent with $\nu^{}_\mu \to \nu^{}_\tau$ oscillations with the $4.2 \sigma$ significance \cite{O}. \subsection{Reactor antineutrino oscillations} Since the first discovery of electron antineutrinos with the help of the Savannah River reactor in 1956 \cite{RC}, reactors have been playing an important role in neutrino physics. In particular, two of the three neutrino mixing angles ($\theta^{}_{12}$ and $\theta^{}_{13}$) have been measured in the KamLAND \cite{KM} and Daya Bay \cite{DYB} reactor antineutrino oscillation experiments to an unprecedentedly good degree of accuracy. 
\begin{figure}[t] \centerline{\includegraphics[width=9.1cm]{F6.eps}} \caption{The allowed region for two-flavor neutrino oscillation parameters from the KamLAND and solar neutrino experiments, where $\Delta m^2_\odot \simeq \Delta m^2_{21}$ and $\tan^2\theta^{}_\odot \simeq \tan^2\theta^{}_{12}$ hold \cite{KM2}.} \end{figure} The average baseline length of the KamLAND experiment was $L =180$ km, and hence it was sensitive to the $\Delta m^2_{21}$-driven $\overline{\nu}^{}_e \to \overline{\nu}^{}_e$ oscillation and allowed a terrestrial test of the large-mixing-angle (LMA) MSW solution to the solar neutrino problem. Under CPT invariance the KamLAND measurement \cite{KM} firmly established the LMA solution for the first time, and pinned down the correct parameter space of solar $\nu^{}_e \to \nu^{}_e$ oscillations constrained by the SNO and SK experiments, as shown in Fig. 6 in the two-flavor scheme \cite{KM2}. A striking sinusoidal behavior of $P(\overline{\nu}^{}_e \to \overline{\nu}^{}_e)$ against $L/E$ was also demonstrated in the KamLAND experiment \cite{KM2}. While the CHOOZ \cite{CHOOZ} and Palo Verde \cite{PV} reactor antineutrino experiments tried to search for the $\Delta m^2_{31}$-driven $\overline{\nu}^{}_e \to \overline{\nu}^{}_e$ oscillations at the end of the 20th century, they found no indication in favor of such oscillations and thus set an upper bound on the smallest neutrino mixing angle $\theta^{}_{13}$. This situation has been changed by the Daya Bay \cite{DYB}, RENO \cite{RENO} and Double Chooz \cite{DC} experiments in the past few years. The Daya Bay experiment was designed to probe the smallest neutrino mixing angle $\theta^{}_{13}$ with an unprecedented sensitivity $\sin^2 2\theta^{}_{13} \sim 1\%$ by measuring the $\Delta m^2_{31}$-driven $\overline{\nu}^{}_e \to \overline{\nu}^{}_e$ oscillation with a baseline length $L \simeq 2$ km. 
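In the three-flavor framework the reactor antineutrino survival probability can be written as
\[
P(\overline{\nu}^{}_e \to \overline{\nu}^{}_e) = 1 - \sin^2 2\theta^{}_{13} \left( \cos^2 \theta^{}_{12} \sin^2 \Delta^{}_{31} + \sin^2 \theta^{}_{12} \sin^2 \Delta^{}_{32} \right) - \cos^4 \theta^{}_{13} \sin^2 2\theta^{}_{12} \sin^2 \Delta^{}_{21} \; ,
\]
with $\Delta^{}_{ji} \equiv \Delta m^2_{ji} L/(4E)$ as before. At the Daya Bay baseline ($L \simeq 2$ km) the $\Delta^{}_{21}$-driven term is negligible and the deficit is governed almost entirely by $\sin^2 2\theta^{}_{13}$, whereas at the KamLAND baseline ($L \simeq 180$ km) the $\Delta^{}_{21}$-driven term dominates.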
In this experiment the electron antineutrinos originate from the Daya Bay nuclear power complex located in Shenzhen, as shown in Fig. 7. The eight antineutrino detectors deployed at the near (two plus two) and far (four) sites are all liquid-scintillator detectors. In March 2012 the Daya Bay Collaboration announced a $5.2\sigma$ discovery of $\theta^{}_{13} \neq 0$, with $\sin^2 2\theta^{}_{13} = 0.092 \pm 0.016 ({\rm stat}) \pm 0.005 ({\rm syst})$ (see Fig. 8 for illustration) \cite{DYB}. A similar but slightly less significant result was later achieved in the RENO \cite{RENO} and Double Chooz \cite{DC} reactor antineutrino experiments. \begin{figure}[t] \centerline{\includegraphics[width=11.5cm]{F7.eps}} \caption{The layout of the Daya Bay reactor antineutrino experiment with three pairs of reactor cores (Daya Bay, Ling Ao I and Ling Ao II). Four detector modules are deployed at the far site, and two detector modules are deployed at each of the two near sites \cite{DYB}.} \end{figure} \begin{figure}[t] \centerline{\includegraphics[width=9.5cm]{F8.eps}} \caption{The survival probability of $\overline{\nu}^{}_e \to \overline{\nu}^{}_e$ oscillations observed at the near and far experimental halls (i.e., EH1, EH2 and EH3) in the Daya Bay experiment \cite{DYB}.} \end{figure} The Daya Bay Collaboration has also measured the energy dependence of $\overline{\nu}^{}_e$ disappearance and observed a nearly full oscillation cycle against $L/E$ \cite{DYB2}. An improved result for the oscillation amplitude, $\sin^2 2\theta^{}_{13} = 0.090^{+0.008}_{-0.009}$, has recently been obtained by using the observed $\overline{\nu}^{}_e$ rate and energy spectrum in the three-flavor framework \cite{DYB2}. The relatively large value of $\theta^{}_{13}$ is very encouraging for the next-generation precision neutrino experiments, which aim to determine the neutrino mass ordering and probe leptonic CP violation in the foreseeable future.
\subsection{Determination of oscillation parameters} The aforementioned neutrino or antineutrino oscillation experiments involve different sources, different flavors, different energies and different baseline lengths. But the relevant experimental data can all be explained in the scheme of three-flavor oscillations, which depend on two independent neutrino mass-squared differences ($\Delta m^2_{21}$, $\Delta m^2_{32}$), three flavor mixing angles ($\theta^{}_{12}$, $\theta^{}_{13}$, $\theta^{}_{23}$) and one CP-violating phase ($\delta$). A global fit of all the available experimental data is therefore needed in order to determine or constrain the six oscillation parameters. A global three-flavor analysis of current experimental data on solar (SNO, SK, Borexino), atmospheric (SK), accelerator (MINOS, T2K) and reactor (KamLAND, Daya Bay, RENO) neutrino or antineutrino oscillations has recently been done by several groups \cite{Fogli,Valle,Schwetz}. For the sake of simplicity, here we only quote the main results obtained by the Italian group \cite{Fogli} \footnote{In this reference the notations $\delta m^2 \equiv m^2_2 - m^2_1$ and $\Delta m^2 \equiv m^2_3 - (m^2_1 + m^2_2)/2$ are used. Their relations with $\Delta m^2_{21}$ and $\Delta m^2_{31}$ are rather simple: $\Delta m^2_{21} = \delta m^2$ and $\Delta m^2_{31} = \Delta m^2 + \delta m^2/2$.}, as listed in Table 1. 
\begin{table}[t] \tbl{The three-flavor neutrino oscillation parameters determined or constrained from a global analysis of current experimental data \cite{Fogli}.} {\begin{tabular}{ccccc} \hline \hline Parameter & Best fit & 1$\sigma$ range & 2$\sigma$ range & 3$\sigma$ range \\ \hline \multicolumn{5}{c}{Normal neutrino mass ordering $(m^{}_1 < m^{}_2 < m^{}_3$)} \\ \hline $\Delta m^2_{21}/10^{-5} ~{\rm eV}^2$ & $7.54$ & 7.32 --- 7.80 & 7.15 --- 8.00 & 6.99 --- 8.18 \\ $\Delta m^2_{31}/10^{-3} ~ {\rm eV}^2$ & $2.47$ & 2.41 --- 2.53 & 2.34 --- 2.59 & 2.26 --- 2.65 \\ $\sin^2\theta_{12}/10^{-1}$ & $3.08$ & 2.91 --- 3.25 & 2.75 --- 3.42 & 2.59 --- 3.59 \\ $\sin^2\theta_{13}/10^{-2}$ & $2.34$ & 2.15 --- 2.54 & 1.95 --- 2.74 & 1.76 --- 2.95 \\ $\sin^2\theta_{23}/10^{-1}$ & $4.37$ & 4.14 --- 4.70 & 3.93 --- 5.52 & 3.74 --- 6.26 \\ $\delta/180^\circ$ & $1.39$ & 1.12 --- 1.77 & 0.00 --- 0.16 $\oplus$ 0.86 --- 2.00 & 0.00 --- 2.00 \\ \hline \multicolumn{5}{c}{Inverted neutrino mass ordering $(m^{}_3 < m^{}_1 < m^{}_2$)} \\ \hline $\Delta m^2_{21}/10^{-5} ~{\rm eV}^2$ & $7.54$ & 7.32 --- 7.80 & 7.15 --- 8.00 & 6.99 --- 8.18 \\ $\Delta m^2_{13}/10^{-3} ~ {\rm eV}^2$ & $2.42$ & 2.36 --- 2.48 & 2.29 --- 2.54 & 2.22 --- 2.60 \\ $\sin^2\theta_{12}/10^{-1}$ & $3.08$ & 2.91 --- 3.25 & 2.75 --- 3.42 & 2.59 --- 3.59 \\ $\sin^2\theta_{13}/10^{-2}$ & $2.40$ & 2.18 --- 2.59 & 1.98 --- 2.79 & 1.78 --- 2.98 \\ $\sin^2\theta_{23}/10^{-1}$ & $4.55$ & 4.24 --- 5.94 & 4.00 --- 6.20 & 3.80 --- 6.41 \\ $\delta/180^\circ$ & $1.31$ & 0.98 --- 1.60 & 0.00 --- 0.02 $\oplus$ 0.70 --- 2.00 & 0.00 --- 2.00 \\ \hline\hline \end{tabular}} \end{table} Table 1 shows that the output values of $\theta^{}_{13}$, $\theta^{}_{23}$ and $\delta$ in such a global fit are sensitive to the sign of $\Delta m^2_{31}$. That is why it is crucial to determine the neutrino mass ordering in the upcoming neutrino oscillation experiments. 
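The two mass-squared conventions mentioned in the footnote above are related by a simple linear map. The following minimal Python sketch (the function names are ours, not from the reference) converts between them and checks the round trip against the Table 1 best-fit values:

```python
# Conversions between the (delta m^2, Delta m^2) convention of the global
# fit and the standard differences Dm2_ij = m_i^2 - m_j^2 (all in eV^2).
def fit_to_standard(delta_m2, Delta_m2):
    """delta_m2 = m2^2 - m1^2;  Delta_m2 = m3^2 - (m1^2 + m2^2)/2."""
    return {"Dm2_21": delta_m2,
            "Dm2_31": Delta_m2 + 0.5 * delta_m2,
            "Dm2_32": Delta_m2 - 0.5 * delta_m2}

def standard_to_fit(Dm2_21, Dm2_31):
    """Inverse map, starting from the standard differences."""
    return {"delta_m2": Dm2_21, "Delta_m2": Dm2_31 - 0.5 * Dm2_21}

# Best-fit values for the normal ordering from Table 1 (eV^2):
conv = standard_to_fit(Dm2_21=7.54e-5, Dm2_31=2.47e-3)
back = fit_to_standard(conv["delta_m2"], conv["Delta_m2"])
```

Note that the map also yields $\Delta m^2_{32} = \Delta m^2_{31} - \Delta m^2_{21}$ automatically.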
The hint $\delta \neq 0^\circ$ (or $180^\circ$) at the $1\sigma$ level is still preliminary but quite encouraging, because it implies a potential effect of leptonic CP violation which is likely to show up in some long-baseline neutrino oscillation experiments in the foreseeable future. The possibility $\theta^{}_{23} = 45^\circ$ cannot be ruled out at the $2\sigma$ level, and thus a more precise determination of $\theta^{}_{23}$ is required in order to resolve its octant. It is worth pointing out that $|U^{}_{\mu i}| = |U^{}_{\tau i}|$ (for $i=1,2,3$), the so-called $\mu$-$\tau$ permutation symmetry of the PMNS matrix $U$ itself, holds if either the conditions $\theta^{}_{13} = 0^\circ$ and $\theta^{}_{23} = 45^\circ$ or the conditions $\delta = 90^\circ$ (or $270^\circ$) and $\theta^{}_{23} = 45^\circ$ are satisfied \cite{Zhou}. Now that $\theta^{}_{13} = 0^\circ$ has definitely been excluded, it is imperative to know the values of $\theta^{}_{23}$ and $\delta$ as accurately as possible, so as to fix the strength of $\mu$-$\tau$ symmetry breaking associated with the structure of $U$. \section{Neutrino Mass Ordering and CP Violation} The neutrino mass ordering can be explored with either reactor electron antineutrinos or atmospheric muon neutrinos in the ``disappearance'' oscillation experiments, or with accelerator muon neutrinos in the ``appearance'' oscillation experiments. Let us take the JUNO \cite{JUNO}, PINGU \cite{PINGU} and LBNE \cite{LBNE} experiments as examples to illustrate the future prospects in this regard. The JUNO electron antineutrino detector is expected to be a 20-kiloton liquid-scintillator detector located in Jiangmen city, Guangdong province, in southern China, about 53 km away from the Yangjiang (17.4 $\rm GW^{}_{th}$) and Taishan (18.4 $\rm GW^{}_{th}$) reactor facilities which serve as the $\overline{\nu}^{}_e$ source. Given Eq. 
(5), the survival probability of $\overline{\nu}^{}_e \to \overline{\nu}^{}_e$ oscillations can be explicitly expressed as \begin{eqnarray} P(\overline{\nu}^{}_e \to \overline{\nu}^{}_e) & = & 1 - \sin^2 2\theta^{}_{12} \cos^4\theta^{}_{13} \sin^2 \Delta^{}_{21} - \frac{1}{2} \sin^2 2\theta^{}_{13} \left[ 1 - \cos\Delta^{}_{*} \cos\Delta^{}_{21} \right. \nonumber \\ && + \left. \cos 2\theta^{}_{12} \sin\Delta^{}_{*} \sin\Delta^{}_{21} \right] \; , \end{eqnarray} where $\Delta^{}_{*} \equiv \Delta^{}_{31} + \Delta^{}_{32}$. In Eq. (6) the oscillating argument $\Delta^{}_{21}$ is unambiguous, and the neutrino mass ordering is determined by the sign of $\Delta^{}_{*}$ (normal: positive; inverted: negative). To distinguish the inverted neutrino mass hierarchy from the normal one, it is necessary to measure the $\Delta^{}_{*}$-driven oscillations over many cycles on condition that $\Delta^{}_{21} \sim \pi/2$ is satisfied for $L \sim 53$ km as taken in the JUNO experiment \cite{Zhan}. Fig. 9 illustrates why this idea works. \begin{figure}[t] \centerline{\includegraphics[width=9.5cm]{F9.eps}} \caption{The reactor antineutrino spectrum changing with $L/E$ at a baseline $L \sim 53$ km, where the blue (normal) or red (inverted) fine structure can tell the neutrino mass hierarchy after a Fourier transformation of the spectrum \cite{Zhan}.} \end{figure} Now the JUNO experiment's civil construction is underway, and its detector assembly is planned for 2018 to 2019. Data taking will commence in 2020, with a target of about six years of operation to pin down the neutrino mass ordering at the $3 \sigma$ or $4 \sigma$ level \cite{JUNO}. The challenges for this experiment, which must be met successfully, are mainly technological, such as how to improve the scintillator light yield, attenuation length and PMT quantum efficiency \cite{Luk}. The PINGU experiment is a proposed low-energy infill extension of the IceCube experiment at the South Pole \cite{PINGU}. 
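The survival probability of Eq. (6) is easy to evaluate numerically. The following minimal Python sketch (helper names are ours; the factor 1.267 converts $\Delta m^2$ in ${\rm eV}^2$, $L$ in km and $E$ in GeV into the dimensionless phase $\Delta m^2 L/4E$) compares the two mass orderings at a JUNO-like baseline:

```python
import math

def phase(dm2_eV2, L_km, E_GeV):
    """Dimensionless oscillation phase Delta_ij = dm2 * L / (4E)."""
    return 1.267 * dm2_eV2 * L_km / E_GeV

def p_ee(E_GeV, L_km, dm2_21, dm2_31, s2_12, s2_13):
    """Vacuum survival probability of Eq. (6).
    dm2_31 > 0: normal ordering; dm2_31 < 0: inverted ordering."""
    th12 = math.asin(math.sqrt(s2_12))
    th13 = math.asin(math.sqrt(s2_13))
    d21 = phase(dm2_21, L_km, E_GeV)
    # Delta_* = Delta_31 + Delta_32, and dm2_32 = dm2_31 - dm2_21:
    dstar = phase(2.0 * dm2_31 - dm2_21, L_km, E_GeV)
    return (1.0
            - math.sin(2.0 * th12)**2 * math.cos(th13)**4 * math.sin(d21)**2
            - 0.5 * math.sin(2.0 * th13)**2
            * (1.0 - math.cos(dstar) * math.cos(d21)
               + math.cos(2.0 * th12) * math.sin(dstar) * math.sin(d21)))

# A 4 MeV reactor antineutrino at L = 53 km, Table 1 best-fit values:
p_nh = p_ee(0.004, 53.0, 7.54e-5, +2.47e-3, 0.308, 0.0234)
p_ih = p_ee(0.004, 53.0, 7.54e-5, -2.42e-3, 0.308, 0.0240)
```

The small numerical difference between `p_nh` and `p_ih` in the fast $\Delta^{}_{*}$-driven term is precisely the fine structure that the Fourier analysis of the spectrum exploits.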
The design of PINGU closely follows the one used for IceCube and DeepCore. The idea is to further infill the central DeepCore volume with 40 new strings of 60 optical modules each, so that the neutrino trigger energy threshold can be lowered to a few GeV and thus high-quality reconstructions of neutrino events can be achieved between 5 and 15 GeV. Such a detector geometry will be able to distinguish between the normal and inverted neutrino mass hierarchies at the $3 \sigma$ significance with an estimated 3.5 years of data taking. The survival probability of atmospheric muon neutrinos that reach the PINGU detector after propagation through the Earth (i.e., from below) depends on their beam energy $E$ and propagation length $L$. Thanks to interactions with electrons within the Earth, a resonant flavor conversion can happen for a specific pattern of neutrino energies and Earth-crossing paths. This matter-induced resonant conversion occurs only for neutrinos in the normal mass ordering or only for antineutrinos in the inverted mass ordering, as the behaviors of $\nu^{}_\mu \to \nu^{}_\mu$ and $\overline{\nu}^{}_\mu \to \overline{\nu}^{}_\mu$ oscillations depend respectively on $\Delta m^2_{31} \mp 2\sqrt{2} G^{}_{\rm F} N^{}_e E$, where $N^{}_e$ is the number density of electrons in matter and $E$ denotes the neutrino beam energy. The PINGU detector can discriminate between the cross sections and kinematics of neutrino and antineutrino interactions with nuclei, so the detected event rates can be used to distinguish between the two neutrino mass orderings. Given an accelerator-driven neutrino beam, the long-baseline oscillation experiments are also sensitive to the neutrino mass ordering. 
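The matter-resonance condition quoted above, $\Delta m^2_{31} \sim 2\sqrt{2} G^{}_{\rm F} N^{}_e E$, can be turned into a rough estimate of the resonant neutrino energy. In the sketch below, the mantle density and electron fraction are assumed round numbers (not PINGU specifications), and $7.63\times 10^{-14}$ eV is the standard numerical value of $\sqrt{2} G^{}_{\rm F} N^{}_e$ per unit $\rho Y^{}_e$:

```python
# Rough matter-resonance energy from dm2_31 * cos(2*theta_13) = 2*V*E,
# where V = sqrt(2)*G_F*N_e is the matter potential.
V_PER_RHO_YE = 7.63e-14  # eV per (rho*Y_e) in g/cm^3

def resonance_energy_GeV(dm2_31, rho, Y_e=0.5, s2_13=0.0234):
    """Neutrino energy (GeV) at which the 1-3 matter resonance occurs."""
    cos2th13 = 1.0 - 2.0 * s2_13
    V = V_PER_RHO_YE * rho * Y_e        # matter potential in eV
    return dm2_31 * cos2th13 / (2.0 * V) * 1e-9

# Upper-mantle-like density (assumed): ~4.5 g/cm^3
E_res = resonance_energy_GeV(2.47e-3, rho=4.5)
```

The resulting energy of several GeV falls squarely in the 5--15 GeV reconstruction window quoted above, which is why the infill geometry is chosen as it is.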
Because of the interaction of neutrinos with terrestrial matter as they pass through the Earth, the probability of $\nu^{}_\mu \to \nu^{}_e$ oscillations can be approximately expressed as \cite{Freund} \begin{eqnarray} P(\nu^{}_\mu \to \nu^{}_e) & \simeq & \sin^2 2\theta^{}_{13} \sin^2\theta^{}_{23} \frac{\sin^2 \left(x -1\right) \Delta^{}_{31}} {\left(x -1\right)^2} + \alpha \sin 2\theta^{}_{12} \sin 2\theta^{}_{13} \sin 2\theta^{}_{23} \nonumber \\ && \times \cos\left(\Delta^{}_{31} + \delta\right) \frac{\sin x \Delta^{}_{31} \sin\left(x -1\right) \Delta^{}_{31}}{x \left(x -1\right)} \nonumber \\ && + \alpha^2 \sin^2 2\theta^{}_{12} \cos^2\theta^{}_{23} \frac{\sin^2 x \Delta^{}_{31}}{x^2} \; , \end{eqnarray} where $x\equiv 2\sqrt{2} G^{}_{\rm F} N^{}_e E/\Delta m^2_{31}$ and $\alpha \equiv \Delta m^2_{21}/\Delta m^2_{31}$. One may easily obtain the expression of $P(\overline{\nu}^{}_\mu \to \overline{\nu}^{}_e)$ from Eq. (7) with the replacements $\delta \to -\delta$ and $x \to -x$. So the sign of $\Delta m^2_{31}$ affects the behaviors of neutrino oscillations via the signs of $x$ and $\alpha$. That is why the matter-induced resonant conversion can only occur for neutrinos in the normal mass hierarchy ($x >0$) or for antineutrinos in the inverted mass hierarchy ($x <0$), similar to the case of atmospheric neutrino or antineutrino oscillations. In practice the baseline length $L$ of an experiment is crucial for its sensitivity to the mass hierarchy. The LBNE experiment \cite{LBNE} with $L \simeq 1300$ km is therefore expected to be more promising than the T2K experiment \cite{T2K} with $L \simeq 295$ km and the NO$\nu$A experiment \cite{NOVA} with $L \simeq 810$ km in this respect. But the undetermined CP-violating phase $\delta$ may in general give rise to some uncertainties associated with a determination of the neutrino mass hierarchy in the long-baseline experiments. 
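A minimal Python sketch of Eq. (7) makes the neutrino-antineutrino asymmetry explicit; the LBNE-like baseline, beam energy, crust density and $\delta = -\pi/2$ below are illustrative assumptions, not experiment specifications:

```python
import math

def p_mu_e(E_GeV, L_km, dm2_21, dm2_31, s2_12, s2_13, s2_23,
           delta, rho=2.8, Y_e=0.5, antineutrino=False):
    """Approximate P(nu_mu -> nu_e) in matter, Eq. (7).
    For antineutrinos the replacements delta -> -delta, x -> -x apply."""
    th12 = math.asin(math.sqrt(s2_12))
    th13 = math.asin(math.sqrt(s2_13))
    th23 = math.asin(math.sqrt(s2_23))
    D31 = 1.267 * dm2_31 * L_km / E_GeV
    alpha = dm2_21 / dm2_31
    # x = 2*sqrt(2)*G_F*N_e*E / dm2_31, using the standard value
    # sqrt(2)*G_F*N_e = 7.63e-14 eV per (rho*Y_e) in g/cm^3:
    x = 2.0 * 7.63e-14 * rho * Y_e * (E_GeV * 1e9) / dm2_31
    if antineutrino:
        delta, x = -delta, -x
    f1 = math.sin((x - 1.0) * D31) / (x - 1.0)
    f2 = math.sin(x * D31) / x
    return (math.sin(2*th13)**2 * s2_23 * f1**2
            + alpha * math.sin(2*th12) * math.sin(2*th13) * math.sin(2*th23)
            * math.cos(D31 + delta) * f2 * f1
            + alpha**2 * math.sin(2*th12)**2 * (1.0 - s2_23) * f2**2)

# LBNE-like setup: L = 1300 km, E = 3 GeV, normal ordering, delta = -pi/2:
args = (3.0, 1300.0, 7.54e-5, 2.47e-3, 0.308, 0.0234, 0.437, -math.pi/2)
p_nu = p_mu_e(*args)
p_nubar = p_mu_e(*args, antineutrino=True)
```

For this choice of $\delta$ and a normal ordering, the neutrino channel is strongly enhanced over the antineutrino channel, illustrating the entanglement between the CP-violating phase and the matter effect.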
In particular, a careful analysis shows that the mass hierarchy sensitivity is most optimistic (or pessimistic) for $\delta \simeq -\pi/2$ in the normal (or inverted) hierarchy case, or for $\delta \simeq +\pi/2$ in the inverted (or normal) hierarchy case \cite{LBNE}. Regardless of possible values of $\delta$, LBNE in combination with T2K and NO$\nu$A promises to resolve the neutrino mass hierarchy with a significance of more than $3\sigma$ by 2030 \cite{Luk}. In addition, the proposed Hyper-Kamiokande (HK) detector will be a next-generation underground water Cherenkov detector serving as the far detector of the 295 km-baseline neutrino oscillation experiment for the J-PARC neutrino beam \cite{HK}. It is expected to be ten times larger than the SK detector and capable of probing the neutrino mass ordering, resolving the octant of the largest flavor mixing angle $\theta^{}_{23}$ and observing leptonic CP violation as well as proton decays and extraterrestrial neutrinos from distant astrophysical sources. CP violation in the lepton sector may have far-reaching impacts on our understanding of the origin of matter-antimatter asymmetries at both microscales and macroscales. The LBNE and HK experiments, together with other next-generation long-baseline neutrino oscillation experiments, aim at a determination of the CP-violating phase $\delta$. The latter can be extracted by comparing the probabilities of $\nu^{}_\mu \to \nu^{}_e$ and $\overline{\nu}^{}_\mu \to \overline{\nu}^{}_e$ oscillations, but the extraction is in general contaminated by terrestrial matter effects. 
In the leading-order approximation, \begin{eqnarray} {\cal A}^{}_{\rm CP} \equiv \frac{P(\nu^{}_\mu \to \nu^{}_e) - P(\overline{\nu}^{}_\mu \to \overline{\nu}^{}_e)} {P(\nu^{}_\mu \to \nu^{}_e) + P(\overline{\nu}^{}_\mu \to \overline{\nu}^{}_e)} \simeq -\frac{\sin 2\theta^{}_{12} \sin\delta} {\sin\theta^{}_{13} \tan\theta^{}_{23}} \Delta^{}_{21} + {\rm matter ~ effects} \; , \end{eqnarray} where the matter-effect term is more or less correlated with the neutrino mass ordering. To lower the matter contamination, one may therefore consider a low-energy neutrino (or antineutrino) beam with a much shorter baseline length \cite{Low}. A proposal of this kind is the MOMENT project with a neutrino beam energy $E \sim 300$ MeV and a baseline length $L \sim 120$ km \cite{MOMENT}, towards probing leptonic CP violation before a more powerful neutrino factory is built. \section{Two Non-oscillation Aspects} \subsection{Neutrinoless double-beta decays} Soon after Fermi developed an effective beta decay theory \cite{Fermi}, Maria Goeppert-Mayer pointed out that certain even-even nuclei should have a chance to decay into the second nearest neighbors via two simultaneous beta decays \cite{Mayer}: $(A, Z) \to (A, Z+2) + 2 e^- + 2 \overline{\nu}^{}_e$, where the kinematic conditions $m (A, Z) > m (A, Z+2)$ and $m (A, Z) < m (A, Z+1)$ must be satisfied. In 1939 Wendell Furry further pointed out that the $0\nu\beta\beta$ decays $(A, Z) \to (A, Z+2) + 2 e^-$ could happen via an exchange of {\it virtual} neutrinos between two associated beta decays \cite{Furry}, provided the neutrinos are massive and have the Majorana nature \cite{Majorana}. If such a $0\nu\beta\beta$ process is measured, does it definitely imply the existence of a Majorana mass term for neutrinos? The answer is affirmative according to the Schechter-Valle theorem \cite{SV}, no matter whether there are new physics contributions to the $0\nu\beta\beta$ decays. 
Hence the $0\nu\beta\beta$ transitions can serve as an experimentally feasible probe of the Majorana nature of massive neutrinos at low energies. The half-life of a $0\nu\beta\beta$-decaying nuclide can be expressed as follows: \begin{eqnarray} T^{0\nu}_{1/2} = \left(G^{0\nu}\right)^{-1} \left|M^{0\nu}\right|^{-2} \left|\langle m\rangle^{}_{ee}\right|^{-2} \; , ~~~~~~ \langle m\rangle^{}_{ee} \equiv \sum^{}_{i} \left( m^{}_i U^2_{e i} \right) \; , \end{eqnarray} where $G^{0\nu}$ is the phase-space factor, $M^{0\nu}$ stands for the relevant nuclear matrix element, and $\langle m\rangle^{}_{ee}$ denotes the effective Majorana neutrino mass in the absence of new physics contributions. Among them, the calculation of $|M^{0\nu}|$ relies on the chosen nuclear models, which can only approximately describe the many-body interactions of nucleons in nuclei, and thus it involves the largest theoretical uncertainty (e.g., a factor of two or three for some typical nuclei) \cite{Giunti}. This in turn causes a large uncertainty in the determination of $|\langle m\rangle^{}_{ee}|$. So far no convincing evidence for the occurrence of the $0\nu\beta\beta$ decay has been established, although a lot of experimental efforts have been made in the past few decades. Such an experiment is designed to observe the two electrons emitted in a given $0\nu\beta\beta$ decay, and its signature is based on the fact that the sum of the energies of the two emitted electrons is equal to the $Q$-value of this process. In contrast, the energy spectrum of the two emitted electrons in a normal double-beta decay must be continuous. At present the strongest upper bound on the effective mass term $|\langle m\rangle^{}_{ee}|$ can be set by the $^{76}_{32}{\rm Ge} \to ~^{76}_{34}{\rm Se} + 2 e^-$ and $^{136}_{~54}{\rm Xe} \to ~^{136}_{~56}{\rm Ba} + 2 e^-$ experiments \cite{Giunti}. 
In particular, the GERDA \cite{GERDA}, EXO-200 \cite{EXO} and KamLAND-Zen \cite{KZ} experiments have obtained $T^{0\nu}_{1/2} > 2.1 \times 10^{25} ~{\rm yr}$, $1.1 \times 10^{25} ~{\rm yr}$ and $1.9 \times 10^{25} ~{\rm yr}$ at the $90\%$ confidence level, respectively. These results lead to the constraints $|\langle m\rangle^{}_{ee}| < 0.22$---$0.64$ eV, $0.2$---$0.69$ eV and $0.15$---$0.52$ eV at the same confidence level, respectively, after the relevant uncertainties of nuclear matrix elements are taken into account \cite{Giunti}. \begin{figure}[t] \centerline{\includegraphics[width=7.5cm]{F10.eps}} \caption{The effective Majorana neutrino mass $m^{}_{\beta\beta} \equiv |\langle m\rangle^{}_{ee}|$ as a function of the lightest neutrino mass $m^{}_{\rm light} \equiv m^{}_1$ (normal hierarchy, red band) or $m^{}_3$ (inverted hierarchy, green band) \cite{GC}. Here the horizontally-excluded region comes from the $0\nu\beta\beta$ experiments \cite{GERDA,EXO,KZ}, and the vertically-excluded region is due to the cosmological bound \cite{Planck}.} \end{figure} The expected magnitude of $|\langle m\rangle^{}_{ee}|$ in the standard three-flavor case is illustrated in Fig. 10, where current neutrino oscillation data have been input and arbitrary values of the CP-violating phases have been taken \cite{GC}. It is clear that the inverted neutrino mass ordering or a near neutrino mass degeneracy may allow $|\langle m\rangle^{}_{ee}| \geq 0.01$ eV, which should be accessible in the next-generation $0\nu\beta\beta$-decay experiments. If the neutrino mass spectrum is normal and hierarchical, however, there will be little prospect of observing any $0\nu\beta\beta$ decays in the foreseeable future, simply because of $|\langle m\rangle^{}_{ee}| \sim {\cal O}(10^{-3})$ eV in this unfortunate case. 
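The bands in Fig. 10 follow from Eq. (9) once the unknown CP-violating phases are scanned. A minimal Python sketch (our own parametrization of the two effective Majorana-type phases as `phi2` and `phi3`) brackets $|\langle m\rangle^{}_{ee}|$ for the inverted ordering with a vanishing lightest mass:

```python
import cmath, math

def m_ee(m_light, dm2_21, dm2_atm, s2_12, s2_13, phi2, phi3, normal=True):
    """|<m>_ee| = |sum_i m_i U_ei^2| in the standard parametrization;
    phi2, phi3 stand for the unknown Majorana-type phase combinations.
    For the inverted ordering, dm2_atm is taken as m1^2 - m3^2 > 0."""
    if normal:                       # m1 < m2 < m3
        m1 = m_light
        m2 = math.sqrt(m1**2 + dm2_21)
        m3 = math.sqrt(m1**2 + dm2_atm)
    else:                            # m3 < m1 < m2
        m3 = m_light
        m1 = math.sqrt(m3**2 + dm2_atm)
        m2 = math.sqrt(m1**2 + dm2_21)
    c2_13 = 1.0 - s2_13
    return abs(m1 * (1.0 - s2_12) * c2_13
               + m2 * s2_12 * c2_13 * cmath.exp(1j * phi2)
               + m3 * s2_13 * cmath.exp(1j * phi3))

# Scan the phase extremes for the inverted ordering with m3 = 0,
# using Table 1 best-fit angles and mass-squared differences (eV^2):
vals = [m_ee(0.0, 7.54e-5, 2.42e-3, 0.308, 0.0240, p2, p3, normal=False)
        for p2 in (0.0, math.pi) for p3 in (0.0, math.pi)]
lo, hi = min(vals), max(vals)
```

The resulting interval of a few times $10^{-2}$ eV reproduces the inverted-ordering band of Fig. 10, above the $0.01$ eV reach quoted for the next-generation experiments.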
\subsection{The absolute neutrino mass scale} Since the flavor oscillations of massive neutrinos are only sensitive to the neutrino mass-squared differences, a determination of the absolute neutrino mass scale has to rely on some non-oscillation experiments. Searching for the $0\nu\beta\beta$ decay is one of the feasible ways for this purpose if massive neutrinos are the Majorana particles, because the magnitude of its effective mass term $\langle m\rangle^{}_{ee}$ is associated with $m^{}_i$ as shown in Eq. (9) and Fig. 10. Another way is to detect the beta decays, such as $^3_1 {\rm H} \to ~ ^3_2 {\rm He} + e^- + \overline{\nu}^{}_e$, whose effective neutrino mass term $\langle m\rangle^{}_e$ is defined via \begin{eqnarray} \left(\langle m\rangle^{}_e\right)^2 \equiv \sum_i \left(m^2_i |U^{}_{e i}|^2 \right) \; . \end{eqnarray} The most promising experiment of this kind is the KATRIN experiment \cite{KATRIN}, which may hopefully probe $\langle m\rangle^{}_e$ with a sensitivity of about $0.2$ eV in the near future. But up to now only $\langle m\rangle^{}_e < 2.05$ eV has been obtained at the $95\%$ confidence level from the Troitzk beta-decay experiment \cite{Beta}. Furthermore, one may get useful information on the mass scale of light neutrinos from cosmology. Based on the standard $\Lambda$CDM model, a global analysis of current cosmological data (especially those on the cosmic microwave background (CMB) radiation and large-scale structure (LSS) formation) can provide us with the most powerful sensitivity to the sum of light neutrino masses via the relation \begin{eqnarray} \Omega^{}_\nu h^2 = \frac{1}{\rm 93 ~ eV} \Sigma^{}_\nu \; , ~~~~~~ \Sigma^{}_\nu \equiv \sum_i m^{}_i \; , \end{eqnarray} in which $\Omega^{}_\nu$ denotes the light neutrino contribution to today's energy density of the Universe, and $h$ is the Hubble constant. 
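Eqs. (10) and (11) translate directly into a short numerical sketch; the nearly degenerate spectrum used below is purely illustrative:

```python
import math

def m_beta(m1, m2, m3, s2_12, s2_13):
    """Effective beta-decay mass of Eq. (10): <m>_e^2 = sum m_i^2 |U_ei|^2."""
    c2_13 = 1.0 - s2_13
    return math.sqrt(m1**2 * (1.0 - s2_12) * c2_13
                     + m2**2 * s2_12 * c2_13
                     + m3**2 * s2_13)

def omega_nu_h2(m1, m2, m3):
    """Eq. (11): Omega_nu h^2 = Sigma_nu / (93 eV), masses in eV."""
    return (m1 + m2 + m3) / 93.0

# A nearly degenerate spectrum at 0.2 eV per state (illustrative only),
# close to the KATRIN sensitivity quoted above:
mb = m_beta(0.2, 0.2, 0.2, 0.308, 0.0234)
oh2 = omega_nu_h2(0.2, 0.2, 0.2)
```

Because the $|U^{}_{ei}|^2$ weights sum to unity, a degenerate spectrum gives $\langle m\rangle^{}_e$ equal to the common mass, while the corresponding $\Sigma^{}_\nu = 0.6$ eV is already in tension with the Planck bound quoted below.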
For example, $\Sigma^{}_\nu < 0.23$ eV has recently been reported by the Planck Collaboration at the $95\%$ confidence level \cite{Planck}. If a combination of the next-generation CMB and LSS measurements can reach a sensitivity of about $0.02$ eV for the sum of three neutrino masses \cite{Abazajian}, then it will be possible to pin down the absolute neutrino mass scale via a definite measurement of $\Sigma^{}_\nu$ even if the neutrino mass ordering is normal. Note that it is also possible to determine or constrain the absolute neutrino mass scale $m^{}_\nu$ through the study of kinematic effects of supernova neutrinos, because their flight time from a supernova's core to a terrestrial detector will be more or less delayed as compared with massless particles \cite{Z}. A careful analysis of the $\overline{\nu}^{}_e$ events from the Supernova 1987A explosion led to an upper bound of about $6$ eV on $m^{}_\nu$ \cite{SN}. The prospects of this astrophysical approach depend on the emergence of new neutrino detectors or the existence of antineutrino pulses in the first instants of a supernova explosion \cite{V}. Given the JUNO liquid scintillator detector as an example, $m^{}_\nu < 0.83 \pm 0.24$ eV is expected to be achievable at the $95\%$ confidence level for a typical galactic supernova at a distance of $10$ kpc from the Earth \cite{Lu}. \section{Summary and Outlook} Since 1998, quite a lot of significant breakthroughs have been made in experimental neutrino physics. On the one hand, the exciting phenomena of atmospheric, solar, reactor and accelerator neutrino or antineutrino oscillations have all been observed, and the oscillation parameters $\Delta m^2_{21}$, $|\Delta m^2_{31}|$, $\theta^{}_{12}$, $\theta^{}_{13}$ and $\theta^{}_{23}$ have been determined to an impressive degree of accuracy. 
On the other hand, the geo-antineutrino events and extraterrestrial PeV neutrino events have been observed, and the sensitivities to neutrino masses in the beta decays, $0\nu\beta\beta$ decays and cosmology have been improved to a great extent. Furthermore, a lot of theoretical efforts have also been made towards understanding the origin of tiny neutrino masses and the flavor structure behind the observed neutrino mixing pattern, and towards studying possible implications of massive neutrinos for the cosmological matter-antimatter asymmetry, warm dark matter and many violent astrophysical processes \cite{XZ,Altarelli}. All these have demonstrated neutrino physics to be one of the most important frontiers of particle physics, astrophysics and cosmology. But a number of fundamental questions about massive neutrinos remain open. The burning ones include how small the absolute neutrino mass scale is, whether the neutrino mass spectrum is normal or inverted, whether massive neutrinos are the Majorana particles, how large the CP-violating phase $\delta$ is, which octant the largest flavor mixing angle $\theta^{}_{23}$ belongs to, whether there are light and (or) heavy sterile neutrinos, what the role of neutrinos is in dark matter, whether the observed matter-antimatter asymmetry of the Universe is related to CP violation in neutrino oscillations, etc. Motivated by so many questions, we are trying to discover a new physics world with the help of massive neutrinos in the coming decades. We would like to thank Luciano Maiani and Gigi Rolandi for inviting us to contribute to this book. We are also grateful to Yu-Feng Li, Jue Zhang, Zhen-hua Zhao, Shun Zhou and Ye-Ling Zhou for their helpful comments on this essay. This work is supported in part by the National Natural Science Foundation of China under grants No. 11135009 and No. 11390380; by the National Basic Research Program of China under grant No. 
2013CB834300; by the Strategic Priority Research Program of the Chinese Academy of Sciences (CAS) under grant No. XDA10000000; and by the CAS Center for Excellence in Particle Physics.
\section{System description} \label{sec:intro} The observed architectures of stellar systems depend on both the formation processes and the subsequent evolution. In many cases the orbits preserve information about the formation processes, and their study helps us to understand the physics of fragmentation, accretion, and early evolution of stars. This information is gleaned from observations of young binaries, from statistics of binary and multiple stars in different environments \citep{DK13}, and from unusual objects that, like ``Rosetta stones'', reveal the history of their formation. One such object is featured here. The 7th magnitude G1V star studied here is known as HD~91962, HIP~51966, WDS J10370$-$0850, or ADS~7854; the J2000 coordinates are 10:37:00.01, $-$08:50:23.7. It is located at a distance of 36\,pc from the Sun. HD~91962 is the X-ray source RX~J1036.9$-$0850 and a hierarchical quadruple system (Figure~\ref{fig:str}). \begin{figure} \epsscale{1.0} \plotone{fig1.eps} \caption{Structure of the hierarchical quadruple system HD 91962. Components are designated by letters and numbers; subsystems are identified by their components joined by a comma. Approximate spectral types are assigned to match the estimated masses of the stars. All orbits have small eccentricity and are possibly located in one plane. \label{fig:str} } \end{figure} The outer binary A,B (A~556) has been known since 1903 \citep{Aitken1904}. Its orbit with $P=283$\,yr by \citet{Pop1978} was recently revised by \citet{TMH14}. We show below that the period is close to 200\,yr. The original {\it Hipparcos} parallax is 27.5$\pm$1.3\,mas; the new {\it Hipparcos} reduction \citep{HIP2} revised it to 25.1$\pm$1.2\,mas. However, the {\it Hipparcos} parallax could be biased by the orbital motion, which was not taken into consideration in its data reduction. The subsystem Aa,Ab (TOK~44) was discovered by \citet{MH09} with adaptive optics in 2003.354, at 0\farcs142, 56\fdg2, $\Delta K=1.25$ mag. 
Independently, it was resolved in 2009 by speckle interferometry at SOAR \citep{TMH10} and has been measured several times since then. The pair Aa,Ab was seen in 2012.18 at the same position as in 2003.35, completing one full revolution. The orbital period is therefore well constrained by the speckle measurements. DL independently determined the spectroscopic orbit of Aa,Ab with a period of $3233 \pm 20$ days (8.85\,yr) and discovered the inner spectroscopic subsystem Aa1,Aa2 with $P=170.3$\,d. This is therefore a hierarchical quadruple system with a 3-tier ``planetary'' architecture, where all components revolve around the most massive central star Aa1. The two inner orbits have small and similar eccentricities and their apsidal angles are also similar. We show below that these orbits are locked in a 1:19 mutual resonance. The period of the outer visual orbit is about 20 times longer than the period of the middle orbit. The observational material is presented in Section~\ref{sec:obs}. It is used for calculation of the orbits in Section~\ref{sec:orb} and for the estimate of the components' masses and the distance to the system (Section~\ref{sec:mass}). In Section~\ref{sec:disc} we discuss the formation mechanism of such hierarchies and give examples of other 3-tier hierarchical systems with known orbits. \section{Observations} \label{sec:obs} \subsection{Speckle interferometry} \begin{figure} \plotone{fig2.eps} \caption{Fragments of speckle auto-correlation functions showing the resolved triple system. The scale and orientation are approximate (North up, East left). The two middle images (2010.86 and 2012.18) are taken in the $I_C$ band and contain the faint peak corresponding to the cross-correlation between B and Ab. In the remaining images (in the $y$ filter) this peak is lost in the noise. \label{fig:ACF} } \end{figure} All resolved measurements of the middle subsystem Aa,Ab except the first one come from the 4.1-m SOAR telescope. 
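The near-resonant period ratios quoted in Section 1 can be checked directly from the measured periods; the outer period of about 200 yr is the approximate value adopted above, so the outer ratio is correspondingly rough:

```python
# Period ratios of the three nested orbits of HD 91962 (periods in days).
P_inner = 170.3           # Aa1,Aa2 spectroscopic subsystem
P_middle = 3233.0         # Aa,Ab (8.85 yr)
P_outer = 200.0 * 365.25  # A,B visual pair, ~200 yr (approximate)

ratio_inner = P_middle / P_inner   # close to 19 (the 1:19 near-resonance)
ratio_outer = P_outer / P_middle   # "about 20 times longer"
```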
The instrument and data reduction are described by \citet{TMH10}. The observations were made with the 534/22\,nm interference filter close to the Str\"omgren $y$ band and the 788/132\,nm filter approximating the Cousins system $I_C$. Figure~\ref{fig:ACF} presents samples of the speckle auto-correlation functions (ACFs) of the resolved triple star. The relative position and brightness of the components are determined by least-squares fitting of the power spectrum to a triple-star model (not from the ACF). The orientation of the inner pair is determined without the usual $180^\circ$ ambiguity when the correlation peak between Ab and B is detectable (it is barely seen in the $y$ filter). Three observations made in 2014 are still unpublished. After submission of the manuscript, two more measurements made in 2015 were added, slightly reducing the errors of the middle orbit. We also use one unpublished measure of A,B made in 2001 by BM (the subsystem was not resolved). \subsection{Spectroscopy} \begin{deluxetable}{ r rrr l } \tabletypesize{\scriptsize} \tablecaption{Parameters of the star Aa1 \label{tab:Teff} } \tablewidth{0pt} \tablehead{ \colhead{$T_e$} & \colhead{$\log g$} & \colhead{[m/H]} & \colhead{$V \sin i$} & \colhead{Reference} \\ (K) & (cm~s$^{-2}$) & (Sun) & (km~s$^{-1}$) & } \startdata 5827 & 4.49 & $-$0.15 & 9.27 & This work \\ $\pm 17$ & $\pm 0.01$ & $\pm 0.01$ & $\pm 0.26$ & \\ 5818 & 4.85 & \ldots & 6.10 & \citet{Schroeder} \\ 5675 & 4.12 & $-$0.21 & \ldots & \citet{Casagrande2011} \enddata \end{deluxetable} The spectrum of HD 91962 was monitored for 23 years starting in 1991. Eighty-four observations were obtained with the CfA Digital Speedometers \citep{Latham1985,Latham1992}, initially using the 1.5-m Wyeth Reflector at the Oak Ridge Observatory in the town of Harvard, Massachusetts, and subsequently with the 1.5-m Tillinghast Reflector at the Whipple Observatory on Mount Hopkins, Arizona. 
Starting in 2009 the new fiber-fed Tillinghast Reflector Echelle Spectrograph \citep[TRES;][]{TRES} was used to obtain an additional ten observations. The spectral resolution was 44,000 for all three spectrographs, but the typical signal-to-noise ratio (SNR) per resolution element of 100 for the TRES observations was a few times higher than for the CfA Digital Speedometer observations. The total light of HD 91962 is dominated by the component Aa1, and at visible wavelengths the spectrum appears single-lined. Therefore we followed our standard procedure of using one-dimensional correlations of each observed spectrum against a synthetic template drawn from our library of calculated spectra. The radial velocity (RV) zero point for each spectrograph was monitored using observations of standard stars, of daytime sky, and of minor planets, and the velocities were all adjusted to the native system of the CfA Digital Speedometers. To get onto the absolute velocity system defined by our observations of minor planets, about 0.14 km~s$^{-1}$ should be added to the velocities reported in Table~3. These velocities are all based on correlations of just a single echelle order centered on the Mg b triplet near 519\,nm, with a wavelength window of 4.5\,nm for the CfA Digital Speedometers and 10.0\,nm for TRES. The ten TRES observations were analyzed with the Stellar Parameter Classification tool \citep[SPC;][]{Bucchave2014}. The results are given in Table~\ref{tab:Teff}. As they are mutually consistent, we list the average values of stellar parameters and their errors estimated from the scatter of ten measurements. The parameters found in the literature are given for comparison. The star Aa1 dominates in the visible light, with all other components together contributing only 14\%. Therefore their influence on the derived stellar parameters of Aa1 is minor. 
\subsection{Photometry} \begin{deluxetable}{c ccc ccc } \tablecaption{Photometry of components \label{tab:ptm1} } \tablewidth{0pt} \tablehead{ \colhead{Band} & \colhead{A+B} & \colhead{Ab$-$Aa} & \colhead{B$-$A} & \colhead{Aa} & \colhead{Ab} & \colhead{B} } \startdata $V$ (mag) & 7.03 & 2.86$\pm$0.10 & 2.67$\pm$0.10 & 7.19 & 10.05 & 9.80\\ $I_C$ (mag) & 6.34 & 2.29$\pm$0.10 & 2.09$\pm$0.05 & 6.62 & 8.91 & 8.55 \\ $K_s$ (mag) & 5.39 & 1.25$\pm$0.11 & 1.37$\pm$0.06 & 5.96 & 7.21 & 7.03 \\ Mass (${\cal M}_\odot$) & 2.74 & \ldots & \ldots & 1.46 & 0.64 & 0.64 \enddata \end{deluxetable} Table~\ref{tab:ptm1} gathers the photometric information. Its column (2) gives the combined magnitudes of AB, with $V$ from SIMBAD, $I_C$ calculated from the $V-I$ color given in \citet{HIP2}, and $K_s$ from 2MASS \citep{2MASS}. The magnitude differences between Ab and Aa and between B and A are given in columns (3) and (4). They are based on speckle interferometry at SOAR, assuming $\Delta V \approx \Delta y$, and on photometry from \citet{MH09}: $\Delta K_{\rm Aa,Ab} = 1.25 \pm 0.11$ mag and $\Delta K_{\rm AB} = 1.37 \pm 0.06$ mag. The scatter of the relative speckle photometry is about 0.1\,mag, and the magnitude difference between Aa and B is slightly over-estimated owing to anisoplanatism. This bias is overcome by using the relative photometry of the wide pair on the long-exposure images produced from the speckle data cubes: $\Delta y_{\rm AB} = 2.67 \pm 0.10$ mag and $\Delta I_{\rm AB} = 2.09 \pm 0.05$ mag, in good agreement with {\it Hipparcos} and {\it Tycho} \citep{FM2000}: $\Delta Hp_{\rm AB} = 2.69$, $\Delta V_{\rm AB} = 2.48$ and $\Delta B_{\rm AB} = 3.30$ mag. Note also the magnitude difference $\Delta V_{\rm AB} = 2.52$ mag measured by \citet{Horch2001}. The last three columns of Table~\ref{tab:ptm1} list individual magnitudes calculated from the combined and differential photometry. The component Aa is treated as a single star, while it is in fact a 170-d binary. 
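The decomposition of the combined magnitudes into individual ones follows from simple flux addition. A minimal Python sketch (helper name is ours) reproduces the $V$-band entries of the photometry table:

```python
import math

def split_pair(m_comb, dmag):
    """Split a combined magnitude into two components, given the
    magnitude difference dmag = m_faint - m_bright."""
    flux_ratio = 10.0 ** (-0.4 * dmag)      # faint-to-bright flux ratio
    m_bright = m_comb + 2.5 * math.log10(1.0 + flux_ratio)
    return m_bright, m_bright + dmag

# V band: combined A+B = 7.03, B - A = 2.67, then Ab - Aa = 2.86:
V_A, V_B = split_pair(7.03, 2.67)
V_Aa, V_Ab = split_pair(V_A, 2.86)
```

The recovered values agree with the tabulated individual magnitudes to within about 0.01 mag, i.e., within the rounding of the table.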
Masses given in the last line are discussed in Section~\ref{sec:mass}. \section{Orbits} \label{sec:orb} \begin{deluxetable}{l l cc c } \tablecaption{Orbital elements of HD 91962. \label{tab:orb} } \tablewidth{0pt} \tablehead{ \colhead{Element} & \colhead{A,B} & \colhead{Aa,Ab} & \colhead{Aa1,Aa2} } \startdata $P$ (yr) & 205 (fixed) & 8.84638 $\pm$ 0.025 & 0.46628 $\pm$ 0.00005 \\ $P$ (d) & 74875 (fixed) & 3230.8 $\pm$ 9 & 170.304 $\pm$ 0.013 \\ $T$ (yr) & 2048.6 $\pm$ 2.5 &2009.816 $\pm$ 0.072 & 2003.292 $\pm$ 0.005 \\ $T$ (JD + 2,400,000) & 69290 $\pm$ 873 &55129 $\pm$ 26 & 52746.80 $\pm$ 1.48 \\ $e$ & 0.301 $\pm$ 0.016 &0.125 $\pm$ 0.010 & 0.135 $\pm$ 0.008 \\ $a$ (arcsec) & 1.334 $\pm$ 0.016 &0.1501 $\pm$ 0.0021 & (0.0184) \\ $\Omega_A$ (deg) & 63.2 $\pm$ 2.0 &50.4 $\pm$ 1.0 & \ldots \\ $\omega_A$ (deg) & 219.7 $\pm$ 1.3 &263.9 $\pm$ 2.6 & 297.6 $\pm$ 3.2 \\ $i$ (deg) & 54.2 $\pm$ 0.8 &56.6 $\pm$ 0.9 & (57) \\ $K_1$ (km~s$^{-1}$) & (0.21) & 4.71$\pm$ 0.07 & 8.08 $\pm$ 0.07 \\ $V_0$ (km~s$^{-1}$) & \ldots & \ldots & 21.16 $\pm$ 0.04 \enddata \end{deluxetable} \begin{deluxetable}{ cc cr cc cr } \tabletypesize{\scriptsize} \tablecaption{Radial velocities and residuals \label{tab:RV} } \tablewidth{0pt} \tablehead{ \colhead{JD} & \colhead{RV} & \colhead{Err} & \colhead{O$-$C} & \colhead{JD} & \colhead{RV} & \colhead{Err} & \colhead{O$-$C} \\ \colhead{+2,400,000} & \colhead{(km~s$^{-1}$)} & \colhead{(km~s$^{-1}$)} & \colhead{(km~s$^{-1}$)} & \colhead{+2,400,000} & \colhead{(km~s$^{-1}$)} & \colhead{(km~s$^{-1}$)} & \colhead{(km~s$^{-1}$)} } \startdata 48290.835 & 12.11 & 0.44 & 0.35 & 53838.722 & 19.32 & 0.21 & --0.07 \\ 49051.810 & 28.68 & 0.43 & --0.35 & 53866.741 & 13.20 & 0.19 & --0.32 \\ 49165.488 & 27.95 & 0.39 & 0.22 & 53871.656 & 12.16 & 0.25 & --0.63 \\ 49384.843 & 32.55 & 0.32 & 0.64 & 54071.034 & 10.39 & 0.23 & --0.39 \\ 49391.711 & 31.52 & 0.39 & 1.09 & 54077.023 & 11.65 & 0.20 & 0.05 \\ 49479.349 & 19.67 & 0.41 & --0.03 & 54100.952 & 18.56 & 0.25 & 
--0.22 \\ 49480.348 & 19.75 & 0.42 & --0.14 & 54107.933 & 21.17 & 0.22 & --0.21 \\ 49483.360 & 20.42 & 0.37 & --0.12 & 54127.920 & 25.35 & 0.36 & --0.56 \\ 49708.698 & 34.25 & 0.49 & 0.68 & 54135.897 & 26.03 & 0.21 & 0.07 \\ 49748.613 & 25.63 & 0.41 & --0.05 & 54158.887 & 22.25 & 0.18 & --0.03 \\ 49750.574 & 25.13 & 0.37 & --0.07 & 54166.821 & 20.25 & 0.25 & --0.15 \\ 49775.520 & 20.33 & 0.55 & 0.48 & 54187.787 & 15.59 & 0.28 & 0.22 \\ 49807.431 & 16.69 & 0.41 & --0.48 & 54192.724 & 13.92 & 0.42 & --0.36 \\ 49813.412 & 17.03 & 0.45 & --0.56 & 54196.739 & 12.68 & 0.27 & --0.77 \\ 50070.693 & 27.17 & 0.41 & --1.03 & 54221.689 & 10.57 & 0.24 & 0.76 \\ 50128.539 & 16.18 & 0.43 & 0.21 & 54251.721 & 11.73 & 0.28 & 0.08 \\ 50138.507 & 15.10 & 0.49 & --0.04 & 54277.656 & 20.74 & 0.29 & 0.35 \\ 50161.454 & 16.20 & 0.36 & --0.37 & 54457.972 & 22.99 & 0.21 & --0.12 \\ 50163.476 & 17.20 & 0.38 & 0.23 & 54461.968 & 25.02 & 0.23 & 1.05 \\ 50170.394 & 18.79 & 0.49 & 0.05 & 54481.934 & 24.54 & 0.23 & 0.08 \\ 50183.390 & 22.96 & 0.38 & --0.29 & 54486.927 & 24.67 & 0.22 & 0.88 \\ 50185.384 & 23.89 & 0.43 & --0.12 & 54516.861 & 17.48 & 0.23 & 0.26 \\ 50189.385 & 25.06 & 0.45 & --0.44 & 54521.883 & 16.28 & 0.35 & 0.24 \\ 50458.658 & 15.34 & 0.37 & 0.27 & 54543.743 & 11.82 & 0.29 & 0.30 \\ 50460.619 & 15.59 & 0.43 & 0.84 & 54549.829 & 10.93 & 0.25 & 0.37 \\ 50484.559 & 12.56 & 0.44 & --0.02 & 54576.693 & 8.38 & 0.22 & --0.62 \\ 50514.480 & 17.80 & 0.42 & 0.41 & 54601.702 & 14.80 & 0.32 & 0.91 \\ 50516.461 & 18.47 & 0.43 & 0.42 & 54605.617 & 15.23 & 0.35 & --0.04 \\ 52997.868 & 21.85 & 0.28 & --0.67 & 54808.035 & 25.21 & 0.22 & --0.53 \\ 53037.771 & 17.90 & 0.32 & 0.83 & 54839.907 & 22.61 & 0.29 & --0.02 \\ 53054.679 & 18.59 & 0.27 & 0.29 & 54846.883 & 20.91 & 0.22 & --0.16 \\ 53087.635 & 28.91 & 0.35 & 0.21 & 54868.897 & 16.35 & 0.22 & 0.28 \\ 53106.657 & 33.38 & 0.34 & 0.65 & 54878.903 & 13.67 & 0.26 & --0.41 \\ 53353.876 & 17.92 & 0.28 & 0.46 & 54898.830 & 10.77 & 0.28 & --0.48 \\ 
53388.852 & 15.65 & 0.36 & 0.29 & 54924.741 & 11.15 & 0.30 & --0.46 \\ 53403.862 & 18.33 & 0.22 & 0.29 & 54957.690 & 22.20 & 0.30 & 0.38 \\ 53418.705 & 23.45 & 0.23 & 0.43 & 54962.655 & 23.28 & 0.31 & --0.35 \\ 53433.695 & 27.45 & 0.27 & --0.78 & 55169.033 & 28.08 & 0.10 & --0.25 \\ 53448.694 & 30.38 & 0.26 & --0.21 & 55172.031 & 27.66 & 0.10 & --0.18 \\ 53465.621 & 28.12 & 0.46 & --1.14 & 56743.750 & 19.17 & 0.10 & --0.31 \\ 53485.588 & 24.59 & 0.26 & --0.31 & 56790.638 & 13.60 & 0.10 & --0.23 \\ 53510.706 & 19.19 & 0.19 & 0.29 & 56804.634 & 15.39 & 0.10 & --0.14 \\ 53541.647 & 14.13 & 0.26 & 0.05 & 56811.653 & 17.16 & 0.10 & --0.09 \\ 53721.004 & 12.61 & 0.52 & 0.07 & 56817.658 & 19.05 & 0.10 & --0.10 \\ 53746.976 & 16.04 & 0.33 & --0.27 & 56820.637 & 19.99 & 0.10 & --0.21 \\ 53776.914 & 26.13 & 0.45 & --0.34 & 56825.656 & 22.02 & 0.10 & --0.06 \\ 53836.847 & 19.84 & 0.21 & 0.00 & 56827.656 & 22.60 & 0.10 & --0.23 \\ \enddata \end{deluxetable} \begin{deluxetable}{ rrr c rr } \tabletypesize{\scriptsize} \tablecaption{Measurements and residuals of A\rm{a},A\rm{b} \label{tab:Aa} } \tablewidth{0pt} \tablehead{ \colhead{Date} & \colhead{$\theta$} & \colhead{$\rho$} & \colhead{$\sigma_\rho$} & \colhead{(O$-$C)$_\theta$} & \colhead{(O$-$C)$_\rho$} \\ & \colhead{(deg)} & \colhead{(arcsec)} & \colhead{(arcsec)} & \colhead{(deg)} & \colhead{(arcsec)} } \startdata 2003.354 & 56.2 & 0.142 & 0.005 & $-$2.5 & $-$0.009 \\ 2009.263 & 268.8 & 0.097 & 0.005 & 0.1 & 0.000 \\ 2010.969 & 29.0 & 0.122 & 0.005 & 1.6 & 0.003 \\ 2010.969 & 30.0 & 0.125 & 0.005 & 2.5 & 0.006 \\ 2012.184 & 58.0 & 0.151 & 0.005 & $-$0.3 & 0.000 \\ 2012.184 & 58.1 & 0.151 & 0.005 & $-$0.2 & 0.000 \\ 2013.132 & 81.9 & 0.124 & 0.005 & 0.1 & $-$0.005 \\ 2013.132 & 83.8 & 0.135 & 0.105 & 1.9 & 0.006 \\ 2014.043 & 117.7 & 0.097 & 0.002 & $-$1.1 & 0.000 \\ 2014.186 & 126.6 & 0.097 & 0.002 & 0.1 & 0.003 \\ 2014.300 & 133.0 & 0.097 & 0.002 & 0.0 & 0.004 \\ 2015.029 & 172.1 & 0.098 & 0.002 & $-$0.9 & $-$0.005 \\ 
2015.169 & 181.0 & 0.108 & 0.002 & 1.6 & 0.001 \enddata \end{deluxetable} \begin{deluxetable}{ rrr c rr } \tabletypesize{\scriptsize} \tablecaption{Measurements and residuals of A,B \label{tab:AB} } \tablewidth{0pt} \tablehead{ \colhead{Date} & \colhead{$\theta$} & \colhead{$\rho$} & \colhead{$\sigma_\rho$} & \colhead{(O$-$C)$_\theta$} & \colhead{(O$-$C)$_\rho$} \\ & (deg) & (arcsec) & (arcsec) & (deg) & (arcsec) } \startdata 1903.040 & 54.0 & 1.340 & 0.500 & $-$5.3 & $-$0.185 \\ 1909.320 & 65.2 & 1.800 & 0.100 & 1.4 & 0.214 \\ 1909.320 & 65.0 & 1.640 & 0.100 & 1.2 & 0.054 \\ 1911.100 & 60.4 & 1.600 & 0.100 & $-$4.6 & 0.001 \\ 1912.046 & 66.2 & 1.650 & 0.100 & 0.6 & 0.045 \\ 1915.210 & 69.6 & 1.480 & 0.100 & 1.8 & $-$0.140 \\ 1916.210 & 70.0 & 1.700 & 0.100 & 1.6 & 0.076 \\ 1921.360 & 73.1 & 1.610 & 0.100 & 1.3 & $-$0.023 \\ 1925.340 & 76.8 & 1.680 & 0.100 & 2.4 & 0.050 \\ 1926.260 & 77.0 & 1.550 & 0.100 & 2.0 & $-$0.078 \\ 1928.160 & 81.3 & 1.520 & 0.100 & 5.0 & $-$0.102 \\ 1929.510 & 74.2 & 1.540 & 0.100 & $-$3.0 & $-$0.077 \\ 1933.280 & 78.0 & 1.560 & 0.100 & $-$1.7 & $-$0.038 \\ 1933.960 & 76.5 & 1.330 & 0.500 & $-$3.7 & $-$0.264 \\ 1936.250 & 81.4 & 1.350 & 0.100 & $-$0.4 & $-$0.228 \\ 1939.300 & 81.5 & 1.520 & 0.100 & $-$2.4 & $-$0.034 \\ 1944.010 & 86.4 & 1.450 & 0.100 & $-$1.0 & $-$0.059 \\ 1944.290 & 92.4 & 1.400 & 0.100 & 4.7 & $-$0.106 \\ 1947.930 & 89.0 & 1.280 & 0.100 & $-$1.5 & $-$0.184 \\ 1956.340 & 97.6 & 1.280 & 0.100 & $-$0.3 & $-$0.074 \\ 1957.760 & 106.2 & 1.520 & 0.100 & 6.9 & 0.187 \\ 1958.000 & 97.0 & 1.420 & 0.100 & $-$2.5 & 0.091 \\ 1958.040 & 103.1 & 1.380 & 0.100 & 3.5 & 0.051 \\ 1959.160 & 110.8 & 1.600 & 1.100 & 10.1 & 0.285 \\ 1959.300 & 97.7 & 1.140 & 0.100 & $-$3.1 & $-$0.170 \\ 1962.500 & 110.3 & 1.180 & 0.100 & 6.1 & $-$0.082 \\ 1966.380 & 108.4 & 1.200 & 0.100 & $-$0.3 & $-$0.002 \\ 1972.112 & 116.5 & 1.140 & 0.100 & 0.4 & 0.026 \\ 1972.120 & 113.1 & 0.920 & 0.100 & $-$3.0 & $-$0.194 \\ 1977.280 & 121.3 & 1.160 & 0.100 & $-$2.6 & 
0.123 \\ 1982.260 & 127.9 & 1.010 & 0.100 & $-$4.1 & 0.039 \\ 1991.250 & 151.7 & 0.886 & 0.010 & 0.7 & 0.001 \\ 1996.350 & 141.7 & 0.723 & 1.100 & $-$21.0 & $-$0.138 \\ 1997.123 & 165.6 & 0.833 & 0.020 & 1.1 & $-$0.027 \\ 1997.300 & 141.4 & 0.753 & 1.100 & $-$23.5 & $-$0.106 \\ 2001.077 & 172.4 & 0.842 & 0.005 & $-$1.5 & $-$0.017 \\ 2002.167 & 177.3 & 0.873 & 0.005 & 0.9 & 0.012 \\ 2009.263 & 192.6 & 0.906 & 0.005 & 0.0 & 0.013 \\ 2010.969 & 196.0 & 0.899 & 0.005 & $-$0.3 & $-$0.006 \\ 2010.969 & 196.0 & 0.904 & 0.005 & $-$0.3 & $-$0.001 \\ 2012.184 & 198.7 & 0.909 & 0.005 & $-$0.2 & $-$0.005 \\ 2013.132 & 200.7 & 0.924 & 0.005 & $-$0.0 & 0.004 \\ 2013.132 & 200.9 & 0.926 & 0.005 & 0.1 & 0.006 \\ 2014.043 & 203.1 & 0.923 & 0.005 & 0.4 & $-$0.005 \\ 2014.186 & 202.9 & 0.925 & 0.005 & $-$0.1 & $-$0.003 \\ 2014.300 & 203.5 & 0.933 & 0.005 & 0.3 & 0.004 \\ 2015.029 & 204.7 & 0.939 & 0.005 & $-$0.0 & 0.004 \\ 2015.169 & 205.4 & 0.937 & 0.005 & 0.4 & 0.001 \enddata \end{deluxetable} \begin{figure*} \epsscale{2.0} \plotone{fig3.eps} \caption{Spectroscopic orbits of the middle (left) and the inner (right) subsystems of HD~91962.\label{fig:sborb} } \end{figure*} \begin{figure*} \epsscale{2.0} \plotone{fig4.eps} \caption{Visual orbits of the outer system A,B (left) and the middle system Aa,Ab (right). The scale is in arcseconds. The small dotted ellipses show the inner orbits to scale (Aa1,Aa2 is assumed to be co-planar with Aa,Ab). The scheme in the left panel illustrates the correction of the resolved measurements of Aa,B using the vector formula $\vec{Aa,B} = \vec{Aa,A} + \vec{A,B}$. \label{fig:vborb} } \end{figure*} Table~\ref{tab:orb} lists the orbital elements and their errors for all three orbits: outer, middle, and inner. In purely visual orbits, the ascending node is not known, meaning that both elements $\Omega$ and $\omega$ can be changed by $180^\circ$. Radial velocities help to select the correct node. 
Here the longitude of periastron $\omega_A$ corresponds to the primary component, and the position angle of the node $\Omega_A$ is chosen in such a way as to represent the motion of the secondary on its visual orbit. Estimated or assumed quantities are given in brackets for reference. The orbital elements and their errors are determined by the unconstrained least-squares fit to the data (RVs, positional measurements, or both) with weights inversely proportional to the squares of the measurement errors. The center-of-mass velocity is common to all three orbits, but we do not yet know the spectroscopic elements of A,B and meanwhile ascribe $V_0$ arbitrarily to the inner pair Aa1,Aa2. The RVs used in the orbit calculation and common residuals to the orbits of Aa1,Aa2 and Aa,Ab are presented in Table~\ref{tab:RV}. The speckle measurements of Aa,Ab and residuals to its orbit are listed in Table~\ref{tab:Aa}. The first measurement was made by \citet{MH09}; all remaining measurements were made at SOAR. The orbit of the middle pair Aa,Ab was initially determined independently from both speckle interferometry and RVs. We used the initial spectroscopic orbits of Aa,Ab and Aa1,Aa2, fitted jointly to the RV data alone, as a first approximation to the combined orbit of Aa,Ab (the 170-d orbit was then subtracted from the RVs). With this combined solution, another iteration on the inner system was made, giving essentially the same elements. The weighted rms RV residual to both orbits is 0.42\,km~s$^{-1}$. Figure~\ref{fig:sborb} shows the RV curves, while Figure~\ref{fig:vborb} shows the visual orbits of Aa,Ab and A,B. The final elements of Aa,Ab and their errors are determined by the least-squares fit to both RVs and resolved measures. The speckle errors are assumed to be 5\,mas prior to 2014 (except for one less precise measure) and 2\,mas for 2014--2015. The rms residuals are 1\fdg2 and 3.4\,mas in $\theta$ and $\rho$, respectively. 
We had to adopt realistic speckle errors (hence weights) to reach the correct balance between positional measurements and RVs. The formal errors delivered by the speckle data processing are smaller, typically under 1\,mas. We updated the visual orbit of the outer system A,B = A~556 (see the elements in Table~\ref{tab:orb}, observations and residuals in Table~\ref{tab:AB}). The speckle measures of Aa,B (all data from SOAR where the triple is resolved) are translated into the positions of A,B, where A refers to the center of mass of Aa,Ab. The position of the inner pair Aa,Ab was calculated from its known orbit, and the vector directed from Aa to Ab was added to the vector Aa,B with a coefficient $\alpha = -q/(1+q) = -0.305$ ($q = 0.44$ is the mass ratio in Aa,Ab). The unresolved accurate measurements (1991--2001) were corrected with $\alpha = -(q -r)/[(1+q)(1+r)] = -0.237$, considering that they refer to the photo-center of Aa,Ab and not to A or Aa ($r =0.07$ is the light ratio Ab/Aa). This correction substantially reduces the scatter of the accurate speckle measurements (see the scheme and the zoomed portion of the orbit in Figure~\ref{fig:vborb}, left). The visual micrometer measurements are left uncorrected, as the effect of Aa,Ab is $<36$\,mas. They are assigned errors of 0\farcs1, except for the four highly deviant micrometer measurements, which were given larger errors to cancel their influence on the orbit. It turns out that the data do not yet constrain all elements of the outer pair A,B. Equally good solutions can be obtained by fixing the period at different values within a certain range; longer periods correspond to a smaller mass sum. The orbit given here assumes $P=205$\,yr and gives a mass sum of 2.68 ${\cal M}_\odot$ for a parallax of 27.6\,mas. The unconstrained fit gives $P = 240 \pm 35$ years. Using the masses estimated below, we calculate that the RV amplitude in the A,B orbit is $K_1 = 0.2$\,km~s$^{-1}$. 
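The correction coefficients $\alpha$ quoted above follow directly from the mass ratio $q$ and the light ratio $r$; a quick numerical check (the rounded inputs reproduce the quoted values to within a few thousandths):

```python
q = 0.44   # mass ratio Ab/Aa in the pair Aa,Ab
r = 0.07   # light ratio Ab/Aa

# Resolved measurements of Aa,B: shift Aa to the center of mass A
alpha_resolved = -q / (1 + q)                       # about -0.306
# Unresolved measurements: they refer to the photo-center of Aa,Ab
alpha_photocenter = -(q - r) / ((1 + q) * (1 + r))  # about -0.240
```

The small offsets from the quoted $-0.305$ and $-0.237$ presumably reflect rounding of $q$ and $r$ in the text.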
The ephemeris predicts that the RV(A) should change by $-0.18$\,km~s$^{-1}$ during the period 1991--2014 covered by the observations. We fitted the RV residuals by a linear function and found a coefficient of $-0.007 \pm 0.008$\,km~s$^{-1}$~yr$^{-1}$, or a total change of $-0.17$\,km~s$^{-1}$ during the 23-yr period of RV observations. The coincidence of those numbers is accidental, given that the RV trend is not formally significant. However, the {\it sign} of the emerging RV trend tells us that the node of the orbit of A,B is probably chosen correctly. Therefore, the orbits of A,B and Aa,Ab are nearly co-planar (the angle between the angular momenta is $\phi = 10.8^\circ \pm 1.9^\circ$). If the node of the outer orbit is changed by $180^\circ$, then $\phi = 110^\circ$. In that case, the Kozai-Lidov cycles would have made the middle orbit highly eccentric and would have destroyed the architecture of this multiple system. The calculated semi-major axis of Aa1,Aa2 is 18.4\,mas. The ``wobble'' of Aa due to the inner subsystem should have an amplitude of 4\,mas. The orientation of the inner orbit Aa1,Aa2 can be established by frequent and precise speckle measurements of Aa,Ab. The measurements of Aa,Ab made in 2014--2015 seem to deviate from the middle orbit in a systematic way, but we could not yet use the residuals for determining the elements $\Omega$ and $i$ of the inner orbit. The reason is that the number of measurements of Aa,Ab is still modest, leading to cross-talk between the elements of the middle and inner orbits. The co-planarity of those orbits thus remains hypothetical. However, the moderate eccentricity of the inner orbit implies the absence of Kozai-Lidov cycles, hence a mutual inclination $\phi < 39^\circ$. An attempt was made to measure the RVs of the faint components Ab and B using two-dimensional correlation, TODCOR \citep{TODCOR}. 
Three well-exposed spectra from TRES (JD 2455169 to 2456743) were processed using synthetic templates with effective temperatures $T_e$ of 5750 and 4500\,K. A second maximum was seen at velocities of 21.74, 21.44, and 25.07 km~s$^{-1}$. All three dates are close to the node of the middle orbit Aa,Ab, so the measured velocities of the secondary, if real, correspond to a blend between Ab and B. The measurements should be repeated in a couple of years, at a different phase of the middle orbit. \section{Masses and distance} \label{sec:mass} \begin{figure} \epsscale{1.0} \plotone{fig5.eps} \caption{Components Aa, Ab, and B on the $(K_s, V - K_s)$ CMD with a parallax of 27.6\,mas. The line shows the 1-Gyr Dartmouth isochrone with solar metallicity \citep{Dotter2008}. \label{fig:cmd} } \end{figure} The masses and distance were determined iteratively, using photometry and orbital elements. Several assumptions are made: (i) the stars are not evolved and follow standard mass-luminosity relations for main-sequence stars; (ii) the component Aa2 contributes little light in all bands; and (iii) the inner orbit Aa1,Aa2 has an inclination of $57^\circ$. Using the initial estimates of masses from the absolute $V$-band magnitudes, we determine the dynamical parallax from the middle orbit Aa,Ab, then refine the masses and other parameters. The numbers given below were obtained in the last iteration. The mass of Aa1 is 1.14~${\cal M}_\odot$, as determined from the isochrone in Figure~\ref{fig:cmd}. Then the orbit of Aa1,Aa2 and the assumption (iii) lead to a mass of 0.32~${\cal M}_\odot$ for Aa2. The mass of Aa is therefore 1.46~${\cal M}_\odot$. The orbit of Aa,Ab with known inclination leads to a mass of 0.64~${\cal M}_\odot$ for Ab. The mass of A=(Aa+Ab) is therefore established at 2.10~${\cal M}_\odot$ and its semi-major axis is 5.48\,AU. The semi-major axis of Aa,Ab is measured at $150 \pm 2$\,mas and leads to the dynamical parallax of $27.4 \pm 0.6$\,mas, i.e. 
a 2\% accuracy on the distance. The distance is proportional to the mass sum to the 1/3 power, so a revision of the mass estimates would have a minor effect on the dynamical parallax. Using the distance and photometry, we place the components Aa, Ab, and B on the color-magnitude diagram (Figure~\ref{fig:cmd}). The absolute magnitude of B corresponds to a main-sequence star of 0.74~${\cal M}_\odot$. However, the components Ab and B have similar luminosity, so we adopt the mass of 0.64~${\cal M}_\odot$ for B, close to the measured value for Ab. The mass sum of A+B is therefore 2.74~${\cal M}_\odot$. The orbit of A,B with a fixed period of 205\,yr and the parallax of 27.4\,mas corresponds to such a mass sum. \section{Discussion} \label{sec:disc} The four known components of the quadruple system HD~91962 are normal dwarfs that match the standard mass-luminosity relation. We cannot exclude additional low-mass satellites revolving around Ab or B, as their RVs were not measured directly, while the constraints from the photometry and orbits are not tight enough. The system appears to be young. \citet{White07} measured a lithium 6707\,\AA\ line strength of 73\,m\AA\ and an axial rotation of 17\,km~s$^{-1}$, and detected chromospheric emission in a spectrum with a resolution of 16,000. \citet{Schroeder} confirmed the chromospheric emission, explaining the X-ray flux. The star does not belong to any known kinematic group. No excess far-infrared emission was found with {\em Spitzer} \citep{Carpenter2009}. Indeed, a debris disk would not survive inside this multiple system. If the dust exists outside the orbit of A,B, it would be too cold to be detectable. All three orbits have low eccentricity. The orbits of A,B and Aa,Ab have a mutual inclination of $11^\circ$. Small eccentricities imply the absence of the Kozai-Lidov cycles, hence moderate mutual orbit inclinations at all hierarchical levels. 
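The numbers above can be cross-checked with Kepler's third law in solar units (AU, years, ${\cal M}_\odot$); a minimal sketch using rounded values from Table~\ref{tab:orb} and this section:

```python
# Middle pair Aa,Ab: dynamical parallax from the measured angular
# semi-major axis and the adopted mass sum
P_mid = 8.846      # orbital period, yr
a_mid_mas = 150.1  # angular semi-major axis, mas
M_A = 2.10         # mass of Aa + Ab, solar masses

# Kepler's third law: a_AU**3 = M * P**2, and a_AU = a'' / parallax''
a_mid_au = (M_A * P_mid**2) ** (1 / 3)   # about 5.48 AU
plx_mas = a_mid_mas / a_mid_au           # about 27.4 mas

# Outer pair A,B: mass sum implied by the adopted period and this parallax
P_out = 205.0      # yr (fixed)
a_out_mas = 1334.0
M_AB = (a_out_mas / plx_mas) ** 3 / P_out**2   # about 2.74 solar masses

# Wobble of Aa due to the inner 170-d pair, with the masses derived above
a_in_mas = 18.4                            # semi-major axis of Aa1,Aa2
wobble = a_in_mas * 0.32 / (1.14 + 0.32)   # about 4 mas
```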
Moreover, the longitudes of periastron $\omega$ in all three orbits are also similar, showing that the lines of apsides have similar orientation. One cannot help noting the similarities of the orbits while comparing the left and right parts of Figures~\ref{fig:sborb} and \ref{fig:vborb}. The period ratio between the middle and inner systems is $18.97 \pm 0.06$. It appears that these orbits are in a weak 1:19 resonance. The period ratio of the outer and middle systems is about 23 (it is not accurate enough to check for a resonance). The quadruple system is dynamically stable and is organized in a regular way, reminiscent of the Solar system (Figure~\ref{fig:str}). Similar eccentricities and apsidal angles, as well as the resonance, suggest that the companions interacted with each other during their formation and early dynamical evolution. This quadruple system could have originated in an isolated rotating core. Rotation prevented immediate collapse. The gas formed a massive and unstable disk which fragmented into a companion. Continuing accretion onto the companion increased its mass and caused inward migration, while dissipative gas friction maintained the low orbital eccentricity. The first companions could have merged with the central body Aa1, while other companions were formed on the periphery and migrated inwards. The process stopped when the gas reservoir was exhausted or lost, leaving the three surviving companions Aa2, Ab, and B. This scenario, although speculative, matches the observed properties of HD~91962. Most quadruple systems consist of two close pairs in a 2+2 hierarchy, while the 2-tier hierarchies of 3+1 ``planetary'' type are less typical; they are found in about 1\% of solar-type stars \citep{Tok2014}. 
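The period ratios quoted above follow directly from the periods in Table~\ref{tab:orb}:

```python
# Orbital periods in days, from the orbital-elements table
P_in, P_mid, P_out = 170.304, 3230.8, 74875.0

ratio_mid_in = P_mid / P_in    # about 18.97, close to a 1:19 commensurability
ratio_out_mid = P_out / P_mid  # about 23 (the outer period is only assumed)
```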
The sample of 4847 solar-type stars within 67\,pc contains only 24 multiple systems with a 3-tier hierarchy, but for none of them except HD~91962 are all three orbits known, because the outer orbits have estimated periods of several thousand years. In the current version\footnote{\url{http://www.ctio.noao.edu/\~{}atokovin/stars/index.php}} of the Multiple Star Catalog \citep{MSC} we found four 3-tier hierarchies where all three orbits are known (Table~\ref{tab:msc}). In those systems, the inner pairs have short orbital periods, presumably produced by inward migration. In the first system, HD~5408 (HR~266, ADS~784), the two outer orbits are nearly co-planar with periods of 83.1\,yr and 4.85\,yr (period ratio 17.1) and eccentricities of 0.24 and 0.22. The architecture of HD~91962 is therefore rare, but not unique. It may belong to a class of multiple systems that evolved in a viscous disk. The distinguishing features of this class are approximate co-planarity of the orbits, a moderate period ratio on the order of 20 (possibly in resonance), and small eccentricities. Other members of this class, quadruple as well as triple, may be found among known multiple systems and discovered in the future. Determination of accurate orbital elements will be essential for checking the co-planarity and resonance. \begin{deluxetable}{ cc lll } \tabletypesize{\scriptsize} \tablecaption{Three-tier hierarchies with all known orbits in the MSC \label{tab:msc} } \tablewidth{0pt} \tablehead{ \colhead{HD} & \colhead{Sp. type} & \colhead{Inner} & \colhead{Middle} & \colhead{Outer} } \startdata 5408 & B9IVn & SB2 4.24\,d & SB1,VB 4.84\,yr & VB 83.1\,yr \\ 9770 & K4V & Ecl. 0.477\,d & VB 4.56\,yr & VB 123.5\,yr \\ 12376 & G9V & SB2 3.08\,d & VB,SB2 12.9\,yr & VB 330\,yr \\ 21364 & B9Vn & SB2 7.15\,d & SB 145\,d & VB 212\,yr \enddata \end{deluxetable} This is a case when a common visual binary turns into a unique object worthy of further detailed study. 
The rare quadruple system HD~91962 gives interesting insights into its origin and, by extension, into the origin of multiple stars in general. Further RV monitoring will help to confirm the sign of the long-term RV trend, hence the co-planarity of A,B and Aa,Ab. Precise speckle measurements of Aa,Ab with high cadence can be used to infer the orientation of the inner orbit Aa1,Aa2. This can be done even better with long-baseline interferometers. Direct resolution of the inner pair, for which we estimate $\Delta K_{\rm Aa1,Aa2} \sim 3.8$ mag and a separation on the order of 20\,mas, will be difficult but not impossible. The weak signatures of B and Ab might be detectable in high-resolution spectra with good SNR. Such observations could establish the absence of additional close companions in this system and would provide accurate measurements of stellar masses. Future precise astrometry with {\it Gaia} will add new constraints. \acknowledgements The data used in this work were obtained at the Southern Astrophysical Research (SOAR) telescope, which is a joint project of the Minist\'{e}rio da Ci\^{e}ncia, Tecnologia, e Inova\c{c}\~{a}o da Rep\'{u}blica Federativa do Brasil, the U.S. National Optical Astronomy Observatory, the University of North Carolina at Chapel Hill, and Michigan State University. This work used the SIMBAD service operated by Centre des Donn\'ees Stellaires (Strasbourg, France), bibliographic references from the Astrophysics Data System maintained by SAO/NASA, data products of the Two Micron All-Sky Survey (2MASS), and the Washington Double Star Catalog maintained at USNO.
\section{Introduction} It is well known that multi-parton interactions in general, and double-parton scattering processes in particular, become more and more important at high energies. In the present short review we concentrate on double-parton scattering (DPS) processes which can be described perturbatively, i.e. processes where the hard scale is well defined (production of heavy objects, or objects with large transverse momenta). In general, the cross section for double-parton scattering grows faster with energy than the corresponding single-parton scattering (SPS) cross section for the same final state. Double-parton scattering was recognized already in the seventies and eighties \cite{LP78,T1979,GSH80,H1983,PT1984,PT1985,M1985,HO1985,SZ1987}. The activity stopped when it was realized that its contribution at the center-of-mass energies available at that time was negligible. Several estimates of the cross section for different processes have been presented in recent years \cite{DH1996,KS2000,FT2002,BJS2009,GKKS2010, SV2011,BDFS2011,KKS2011,BSZ2011}. The theory of double-parton scattering is quickly developing (see e.g. \cite{S2003,KS2004,SS2004,GS2010,GS2011,DS2011,RS2011,DOS2011}). In Ref.~\cite{LMS2012} we showed that the production of $c \bar c c \bar c$ is a very good place to study DPS effects. Here, the quark mass is small enough to ensure that the cross section for DPS is very large, and large enough that each of the scatterings can be treated within pQCD. The calculations performed in Ref.~\cite{LMS2012} were done in the leading-order (LO) collinear approximation. This may not be sufficient when comparing the results of the calculation with real experimental data. In the meantime the LHCb collaboration presented new interesting data on the simultaneous production of two charmed mesons \cite{Aaij:2012dz}. 
They observed a large percentage of events with two mesons both containing a $c$ quark, relative to the typical production of the corresponding meson/antimeson pair ($\sigma_{D_{i}D_{j}}/ \sigma_{D_{i}\bar{D_{j}}} \sim 10\%$). In Ref.~\cite{MS2013} we argued that this large ratio is a footprint of double-parton scattering. In that paper each step of the double-parton scattering was considered in the $k_t$-factorization approach. In Ref.~\cite{Berezhnoy2012} the authors estimated the DPS contribution based on the experimental inclusive $D$ meson spectra measured at the LHC. In their approach fragmentation was included only in terms of the branching fractions for the quark-to-meson transition $c \to D$. In our approach in Ref.~\cite{MS2013} we included the full kinematics of the hadronization process. We also showed there the first differential distributions on the hadron level that can be confronted with the recent LHCb experimental data \cite{Aaij:2012dz}. \begin{figure} \begin{center} \includegraphics[width=5cm]{double_parton_scattering.eps} \includegraphics[width=4cm]{diff7.eps} \end{center} \caption{ SPS and DPS production mechanisms of $c \bar c c \bar c$. } \label{fig:diagrams:ccbarccbar} \end{figure} Twenty-five years ago Mueller and Navelet predicted a strong decorrelation in the relative azimuthal angle of jets with large rapidity separation \cite{Mueller:1986ey}, due to the exchange of a BFKL ladder between the quarks. The generic picture is presented in the left diagram of Fig.~\ref{fig:diagrams}. In a somewhat simplified picture, quarks/antiquarks are emitted forward and backward, whereas gluons emitted along the ladder populate the rapidity regions in between. Due to diffusion along the ladder, the correlation between the most forward and the most backward jets is small. This was the picture obtained within the leading-logarithmic BFKL formalism \cite{Mueller:1986ey,DelDuca:1993mn,Stirling:1994he,DelDuca:1994ng,Kim96,Andersen2001}. 
Calculations of higher-order BFKL effects slightly modified this simple picture \cite{Bartels-MNjets,Vera:2007kn,Marquet:2007xx,Colferai:2010wu,Caporale:2011cc,Ivanov:2012ms, Caporale:2012ih,Ducloue:2013hia,Ducloue:2013bva,DelDuca2014}, leading to a smaller decorrelation in rapidity. Recently the NLL corrections were calculated both to the Green's function and to the jet vertices. The effect of the NLL corrections is large and leads to a significant lowering of the cross section. So far only values of $\langle \cos(n \phi_{jj}) \rangle$ averaged over the available phase space, or even their ratios, were studied experimentally \cite{CMS_MN1}. More detailed studies are necessary to verify this type of calculation. In particular, the approach should reproduce the dependence on the rapidity distance between the jets emitted in opposite hemispheres. Large-rapidity-distance jets can be produced only at high energies where the rapidity span is large. A first experimental search for the Mueller-Navelet jets was made by the D0 collaboration. In their study the rapidity distance between jets was limited to 5.5 units only. In spite of this, they observed a broadening of the $\phi_{jj}$ distribution with growing rapidity distance between the jets. The dijet azimuthal correlations were also studied in the collinear next-to-leading order approximation \cite{Aurenche:2008dn}. The LHC opens a new possibility to study the decorrelation effect. First experimental data measured at $\sqrt{s}$ = 7 TeV are expected soon \cite{CMS_private}. \begin{figure}[!h] \begin{center} \includegraphics[width=4.0cm]{MN-jets-diagram.eps} \includegraphics[width=4.0cm]{DPS-diagram-jets.eps} \end{center} \caption{ \small A diagrammatic representation of the Mueller-Navelet jet production (left diagram) and of the double parton scattering mechanism (right diagram). } \label{fig:diagrams} \end{figure} The double parton scattering mechanism of $W^+ W^-$ production was discussed e.g. in Refs.~\cite{KS2000,Kulesza2010,GKKS2011,LSR2015,GL2014}. 
The $W^+ W^-$ final state constitutes a background to Higgs production. It was discussed recently that double-parton scattering could explain a large part of the observed signal \cite{KP2013}. We shall also discuss the double parton scattering mechanism of $W^+W^-$ production in the present paper. \section{Formalism used in the calculations} \subsection{$c \bar c c \bar c$ production} Let us first consider the production of the $c \bar c c \bar c$ final state within the DPS framework. In a simple probabilistic picture the cross section for double-parton scattering can be written as: \begin{equation} \sigma^{DPS}(p p \to c \bar c c \bar c X) = \frac{1}{2 \sigma_{eff}} \sigma^{SPS}(p p \to c \bar c X_1) \cdot \sigma^{SPS}(p p \to c \bar c X_2). \label{basic_formula} \end{equation} This formula assumes that the two subprocesses are not correlated. At low energies one has to impose parton momentum conservation, i.e. the extra constraints $x_1+x_3 <$ 1 and $x_2+x_4 <$ 1, where $x_1$ and $x_3$ are the longitudinal momentum fractions of the gluons emitted from one proton, and $x_2$ and $x_4$ their counterparts for the gluons emitted from the second proton. Experimental data \cite{Tevatron} provide an estimate of $\sigma_{eff}$ in the denominator of formula (\ref{basic_formula}). In the studies presented here we usually take $\sigma_{eff}$ = 15 mb. The simple formula (\ref{basic_formula}) can be generalized to address differential distributions. In the leading-order approximation the differential distribution can be written as \begin{equation} \frac{d \sigma}{d y_1 d y_2 d^2 p_{1t} d y_3 d y_4 d^2 p_{2t}} = \frac{1}{ 2 \sigma_{eff} } \frac{ d \sigma } {d y_1 d y_2 d^2 p_{1t}} \cdot \frac{ d \sigma } {d y_3 d y_4 d^2 p_{2t}}, \label{differential_distribution} \end{equation} which by construction reproduces the formula for the integrated cross section (\ref{basic_formula}). 
The cross section (\ref{differential_distribution}) is formally differential in 8 dimensions but can easily be reduced to 7 dimensions by noting that the physics of unpolarized scattering cannot depend on the azimuthal angle of the pair or on the azimuthal angle of one of the produced $c$ ($\bar c$) quark (antiquark). The differential distributions for each single scattering step can be written in terms of collinear gluon distributions with longitudinal momentum fractions $x_1$, $x_2$, $x_3$ and $x_4$ expressed in terms of the rapidities $y_1$, $y_2$, $y_3$, $y_4$ and the transverse momenta of the quark (or antiquark) for each step (in the LO approximation identical for quark and antiquark). A more general formula for the cross section can be written formally in terms of double-parton distributions, e.g. $F_{gg}$, $F_{qq}$, etc. In the case of heavy quark (antiquark) production at high energies: \begin{eqnarray} d \sigma^{DPS} &=& \frac{1}{2 \sigma_{eff}} F_{gg}(x_1,x_2,\mu_1^2,\mu_2^2) F_{gg}(x'_{1},x'_{2},\mu_1^2,\mu_2^2) \nonumber \\ &&d \sigma_{gg \to c \bar c}(x_1,x'_{1},\mu_1^2) d \sigma_{gg \to c \bar c}(x_2,x'_{2},\mu_2^2) \; dx_1 dx_2 dx'_1 dx'_2 \, . \label{cs_via_doublePDFs} \end{eqnarray} It is instructive to write the double-parton distributions in impact parameter space as $F_{gg}(x_1,x_2,b) = g(x_1) g(x_2) F(b)$, where $g$ are the conventional parton distributions and $F(b)$ is an overlap of the matter distribution in the transverse plane, with $b$ the distance between the two gluons in the transverse plane \cite{CT1999}. The effective cross section in (\ref{basic_formula}) is then given by $1/\sigma_{eff} = \int d^2b F^2(b)$ and in this approximation is energy independent. Even if the factorization is valid at some scale, QCD evolution may lead to factorization breaking. The evolution is known only in the case when the scale of both scatterings is the same \cite{S2003,KS2004,GS2010}, i.e. for heavy objects, as in double gauge boson production.
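The geometric relation $1/\sigma_{eff} = \int d^2b \, F^2(b)$ can be checked numerically for a toy Gaussian overlap; the transverse radius $R$ used below is a hypothetical illustration, not a fitted value:

```python
import math

def sigma_eff_gaussian(R, b_max=10.0, n=4000):
    """sigma_eff (units of R^2) for the Gaussian overlap
    F(b) = exp(-b^2 / (4 R^2)) / (4 pi R^2),
    i.e. the self-overlap of a Gaussian matter density of width R.
    Evaluates 1/sigma_eff = int 2 pi b F(b)^2 db by the trapezoidal rule."""
    db = b_max / n
    integral = 0.0
    for i in range(n + 1):
        b = i * db
        F = math.exp(-b * b / (4.0 * R * R)) / (4.0 * math.pi * R * R)
        weight = 0.5 if i in (0, n) else 1.0
        integral += weight * 2.0 * math.pi * b * F * F * db
    return 1.0 / integral

# For this profile the integral is analytic: sigma_eff = 8 pi R^2,
# a pure geometry number, hence energy independent as stated above.
R = 0.6  # fm, hypothetical
print(sigma_eff_gaussian(R), 8.0 * math.pi * R * R)
```

Whatever the assumed profile, $\sigma_{eff}$ in this approximation is set entirely by transverse geometry, which is why any observed energy or scale dependence signals physics beyond the factorized ansatz.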
In Ref.~\cite{LMS2012} we applied the factorized model commonly used in the literature to $p p \to c \bar c c \bar c$ and predicted that at LHC energies the cross section for the production of two $c \bar c$ pairs starts to be of the same size as that for single $c \bar c$ production. In the LO collinear approximation the differential distributions for $c\bar{c}$ production depend e.g. on the rapidity of the quark, the rapidity of the antiquark and the transverse momentum of one of them \cite{LMS2012}. In the next-to-leading order (NLO) collinear approach or in the $k_t$-factorization approach the situation is more complicated, as more kinematical variables are needed to describe the kinematics. In the $k_t$-factorization approach the differential cross section for DPS production of the $c \bar c c \bar c$ system, assuming factorization of the DPS model, can be written as: \begin{eqnarray} \frac{d \sigma^{DPS}(p p \to c \bar c c \bar c X)}{d y_1 d y_2 d^2 p_{1,t} d^2 p_{2,t} d y_3 d y_4 d^2 p_{3,t} d^2 p_{4,t}} = \nonumber \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\\ \frac{1}{2 \sigma_{eff}} \cdot \frac{d \sigma^{SPS}(p p \to c \bar c X_1)}{d y_1 d y_2 d^2 p_{1,t} d^2 p_{2,t}} \cdot \frac{d \sigma^{SPS}(p p \to c \bar c X_2)}{d y_3 d y_4 d^2 p_{3,t} d^2 p_{4,t}}. \end{eqnarray} Again, when integrating over the kinematical variables one recovers Eq.~(\ref{basic_formula}). The effective cross section is given by \begin{equation} \sigma_{eff} = \left[ \int d^{2}b \; (T(\vec{b}))^{2} \right]^{-1}, \end{equation} where the overlap function \begin{equation} T ( \vec{b} ) = \int f( \vec{b}_{1} ) f(\vec{b}_{1} - \vec{b} ) \; d^2 b_{1}, \end{equation} if the impact-parameter dependent double-parton distributions (dPDFs) are written in the following factorized approximation \cite{GS2010,Gustaffson2011}: \begin{equation} \Gamma_{i,j}(x_1,x_2;\vec{b}_{1},\vec{b}_{2};\mu_{1}^{2},\mu_{2}^{2}) = F_{i,j}(x_1,x_2;\mu_{1}^{2},\mu_{2}^{2}) \; f(\vec{b}_{1}) \; f(\vec{b}_{2}).
\end{equation} Without losing generality the impact-parameter distribution can be written as \begin{equation} \Gamma(b,x_1,x_2;\mu_1^2,\mu_2^2) = F(x_1,\mu_1^2) \; F(x_2,\mu_2^2) \; F(b;x_1,x_2,\mu_1^2,\mu_2^2), \; \label{correlation_function} \end{equation} where $b$ is the parton separation in impact parameter space. In the formula above the function $F(b;x_1,x_2,\mu_1^2,\mu_2^2)$ contains all information about correlations between the two partons (two gluons in our case). The dependence was studied numerically in Ref.~\cite{Gustaffson2011} within the Lund Dipole Cascade model. The biggest discrepancy was found in the small-$b$ region, particularly for large $\mu_1^2$ and/or $\mu_2^2$. We shall return to this issue when commenting on our results. In general the effective cross section may depend on many kinematical variables: \begin{equation} \sigma_{eff}(x_1,x_2,x'_1,x'_2,\mu_1^2,\mu_2^2) = \left( \int d^2 b \; F(b;x_1,x_2,\mu_1^2,\mu_2^2) \; F(b;x'_1,x'_2,\mu_1^2,\mu_2^2) \right)^{-1}. \label{generalized_sigma_eff} \end{equation} We shall return to these dependences when discussing the role of perturbative parton splitting. \begin{figure} \begin{center} \includegraphics[width=7cm]{inclusive-talk2.eps} \end{center} \caption{ Production of a $c \bar c$ quark-antiquark pair via fusion of virtual gluons. } \label{gg_ccbar} \end{figure} \subsection{Parton splitting} In Fig.~\ref{fig:diagrams_ccbarccbar} we illustrate the conventional and the perturbative DPS mechanisms for $c \bar c c \bar c$ production. The 2v1 mechanism (the second and third diagrams) was considered first in \cite{GMS2014}. \begin{figure}[!h] \begin{center} \includegraphics[width=4cm]{DPS-diagram_ccbar.eps} \includegraphics[width=4cm]{DPS-diagram_ccbar_splitA.eps} \includegraphics[width=4cm]{DPS-diagram_ccbar_splitB.eps} \end{center} \caption{ \small The diagrams for DPS production of $c \bar c c \bar c$.
} \label{fig:diagrams_ccbarccbar} \end{figure} In the case of $c \bar c c \bar c$ production the cross section for conventional DPS can be written as: \begin{eqnarray} \sigma(2v2) &=& \frac{1}{2} \frac{1}{\sigma_{eff,2v2}} \int dy_1 dy_2 d^2 p_{1t} dy_3 dy_4 d^2 p_{2t} \frac{1}{16 \pi {\hat s}^2} \overline{ |{\cal M}(gg \to c \bar c)|^2} \; x_1 x_1' x_2 x_2' \nonumber \\ && \times \; D^{gg}(x_1, x_2, \mu_1^2, \mu_2^2) \; D^{gg}(x_1', x_2', \mu_1^2, \mu_2^2) \label{DPS_ccbarccbar_2v2} \end{eqnarray} while that for the perturbative parton-splitting DPS takes a very similar form (see e.g. \cite{GMS2014}) \begin{eqnarray} \sigma(2v1) &=& \frac{1}{2} \frac{1}{\sigma_{eff,2v1}} \int dy_1 dy_2 d^2 p_{1t} dy_3 dy_4 d^2 p_{2t} \frac{1}{16 \pi {\hat s}^2} \overline{ |{\cal M}(gg \to c \bar c)|^2} \; x_1 x_1' x_2 x_2' \nonumber \\ && \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \times \; \left({\hat D}^{gg}(x_1', x_2', \mu_1^2, \mu_2^2) D^{gg}(x_1, x_2, \mu_1^2, \mu_2^2) + \; D^{gg}(x_1', x_2', \mu_1^2, \mu_2^2) {\hat D}^{gg}(x_1, x_2, \mu_1^2, \mu_2^2)\right) \nonumber \\ \label{DPS_ccbarccbar_2v1} \end{eqnarray} \subsection{Four-jet production in DPS} In the calculations performed in Ref.~\cite{MS2014} all partonic cross sections are calculated only in leading order. The cross section for dijet production can then be written as: \begin{equation} \frac{d \sigma(i j \to k l)}{d y_1 d y_2 d^2p_t} = \frac{1}{16 \pi^2 {\hat s}^2} \sum_{i,j} x_1 f_i(x_1,\mu^2) \; x_2 f_j(x_2,\mu^2) \; \overline{|\mathcal{M}_{i j \to k l}|^2} \;, \label{LO_SPS} \end{equation} where $y_1$, $y_2$ are the rapidities of the two jets and $p_t$ is the transverse momentum of one of them (identical at leading order). In our calculations we include all leading-order $i j \to k l$ partonic subprocesses (see e.g. \cite{Ellis-Stirling-Webber,Barger-Phillips}). The $K$-factor for dijet production is rather small, of the order of $1.1 - 1.3$ (see e.g.
\cite{K-factor1,K-factor2}), and can easily be incorporated in our calculations. Below we shall show that already the leading-order approach gives results in reasonable agreement with recent ATLAS \cite{ATLASjets} and CMS \cite{CMSjets} data. This simplified leading-order approach was used in our first estimate of DPS differential cross sections for jets widely separated in rapidity \cite{MS2014}. Similarly as for $c \bar c c \bar c$ production one can write: \begin{eqnarray} \frac{d \sigma^{DPS}(p p \to \textrm{4jets} \; X)}{d y_1 d y_2 d^{2} p_{1t} d y_3 d y_4 d^{2} p_{2t}} &=& \nonumber \\ && \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \sum\nolimits_{i_1,j_1,k_1,l_1,i_2,j_2,k_2,l_2} \; \frac{\mathcal{C}}{\sigma_{eff}} \; \frac{d \sigma(i_1 j_1 \to k_1 l_1)}{d y_1 d y_2 d^{2} p_{1t}} \; \frac{d \sigma(i_2 j_2 \to k_2 l_2)}{d y_3 d y_4 d^{2} p_{2t}}\;, \label{DPS} \end{eqnarray} where $\mathcal{C} = \left\{ \begin{array}{ll} \frac{1}{2}\;\; & \textrm{if} \;\;i_1 j_1 = i_2 j_2 \wedge k_1 l_1 = k_2 l_2\\ 1\;\; & \textrm{if} \;\;i_1 j_1 \neq i_2 j_2 \vee k_1 l_1 \neq k_2 l_2 \end{array} \right\} $ and the partons $i_1,j_1,k_1,l_1$ and $i_2,j_2,k_2,l_2$ run over $g, u, d, s, \bar u, \bar d, \bar s$. The combinatorial factor accounts for the identity of the two subprocesses. Each step of the DPS is calculated in the leading-order approach (see Eq.~(\ref{LO_SPS})). Above, $y_1$, $y_2$ and $y_3$, $y_4$ are the rapidities of the partons in the first and second partonic subprocesses, respectively. The $p_{1t}$ and $p_{2t}$ are the respective transverse momenta. Experimental data from the Tevatron \cite{Tevatron} and the LHC \cite{Aaij:2011yc,Aaij:2012dz,Aad:2013bjm} provide an estimate of $\sigma_{eff}$ in the denominator of formula (\ref{DPS}). As in our recent paper \cite{Hameren2014} we take $\sigma_{eff}$ = 15 mb. A detailed analysis of $\sigma_{eff}$ based on various experimental data can be found in Refs.~\cite{Seymour:2013qka,Bahr:2013gkj}.
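The combinatorial factor $\mathcal{C}$ of Eq.~(\ref{DPS}) can be spelled out explicitly; the tuples below are just illustrative labels for $i j \to k l$ subprocesses:

```python
def dps_combinatorial_factor(sub1, sub2):
    """C = 1/2 if the two partonic subprocesses i j -> k l are identical
    (same initial state AND same final state), C = 1 otherwise."""
    return 0.5 if sub1 == sub2 else 1.0

gg_to_gg = ("g", "g", "g", "g")
ug_to_ug = ("u", "g", "u", "g")
print(dps_combinatorial_factor(gg_to_gg, gg_to_gg))  # -> 0.5
print(dps_combinatorial_factor(gg_to_gg, ug_to_ug))  # -> 1.0
```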
\subsection{$W^+ W^-$ production} The diagram representing the double parton scattering process is shown in Fig.~\ref{fig:DPS}. The cross section for double parton scattering is often modelled with the factorized ansatz, which in our case means: \begin{figure*} \begin{center} \includegraphics[width=5cm]{diag_DPS.eps} \caption{Diagram representing the double parton scattering mechanism of production of $W^+ W^-$ pairs. } \label{fig:DPS} \end{center} \end{figure*} \begin{equation} \sigma_{W^+ W^-}^{DPS} = \frac{1}{\sigma_{qq}^{eff}} \sigma_{W^{+}} \sigma_{W^{-}} \; . \label{factorized_model} \end{equation} In general, the parameter $\sigma_{qq}^{eff}$ does not need to be the same as the one for gluon-gluon initiated processes, $\sigma_{gg}^{eff}$. In the present, rather conservative, calculations we take $\sigma_{qq}^{eff} = \sigma_{gg}^{eff}$ = 15 mb. The latter value is known to within about 10 \% from the systematics of gluon-gluon initiated processes at the Tevatron and the LHC. The factorized model (\ref{factorized_model}) can be generalized to more differential distributions. For example, in our case of $W^{+} W^{-}$ production the cross section differential in the $W$ boson rapidities can be written as: \begin{equation} \frac{d \sigma_{W^+ W^-}^{DPS}}{d y_{+} d y_{-}} = \frac{1}{\sigma_{qq}^{eff}} \frac{d\sigma_W^{+}}{d y_{+}} \frac{d\sigma_W^{-}}{d y_{-}} \; . \label{generalized_factorized_model} \end{equation} In particular, in the leading-order approximation the cross section for quark-antiquark annihilation reads: \begin{eqnarray} \frac{d\sigma}{dy} &=& \sum_{ij} \left( x_1 q_{i/1}(x_1,\mu^2) x_2 {\bar q}_{j/2}(x_2,\mu^2) + x_1 {\bar q}_{i/1}(x_1,\mu^2) x_2 q_{j/2}(x_2,\mu^2) \right) \nonumber \\ && \; \times \; \overline{|{\cal M}_{ij \to W^{\pm}}|^2} \label{rapidity_of_W} \end{eqnarray} where the matrix element for quark-antiquark annihilation to $W$ bosons (${\cal M}_{ij \to W^{\pm}}$) contains Cabibbo-Kobayashi-Maskawa matrix elements.
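A minimal sketch of the factorized ansatz (\ref{factorized_model}); the single-$W$ cross sections below are hypothetical round numbers, and a Drell-Yan $K$-factor applied to each scattering enters the DPS result squared:

```python
def sigma_dps_ww(sigma_wp_nb, sigma_wm_nb, K=1.0, sigma_eff_mb=15.0):
    """DPS W+W- cross section in nb from the factorized ansatz.
    No 1/2 symmetry factor here: W+ and W- are distinct final states.
    A K-factor multiplying each single-W cross section enters as K**2."""
    sigma_eff_nb = sigma_eff_mb * 1.0e6  # 1 mb = 1e6 nb
    return (K * sigma_wp_nb) * (K * sigma_wm_nb) / sigma_eff_nb

# Hypothetical single-W cross sections: 6000 nb (W+), 4000 nb (W-).
print(sigma_dps_ww(6000.0, 4000.0))         # -> 1.6
print(sigma_dps_ww(6000.0, 4000.0, K=1.2))  # -> 2.304
```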
When calculating the cross section for single $W$ boson production in the leading-order approximation, the well-known Drell-Yan $K$-factor can be included. The double-parton scattering cross section would then be multiplied by $K^2$. \section{Results for different processes} \subsection{$c \bar c c \bar c$ production} We start the presentation of our results with the production of two $c \bar c$ pairs. In Fig.~\ref{fig:single_vs_double_LO} we compare the cross sections for single and double-parton scattering as a function of the proton-proton center-of-mass energy. At low energies the single-parton scattering dominates. For reference we show the proton-proton total cross section as a function of collision energy as parametrized in Ref.~\cite{DL92}. At low energy the $c \bar c$ or $c \bar c c \bar c$ cross sections are much smaller than the total cross section. At higher energies both contributions dangerously approach the expected total cross section. This shows that the inclusion of unitarity effects and/or saturation of parton distributions may be necessary. The effects of saturation in $c \bar c$ production were included e.g. in Ref.~\cite{enberg1} but were not checked against experimental data. The presence of double-parton scattering changes the situation. Double-parton scattering is a potentially very important ingredient in the context of high-energy neutrino production in the atmosphere \cite{GIT96, MRS2003, enberg1} or of cosmogenic origin \cite{enberg2}. At LHC energies the cross sections for both terms become comparable. This is a completely new situation. \begin{figure}[!h] \begin{center} \includegraphics[width=5.0cm]{sig_tot_LO.eps} \includegraphics[width=5.0cm]{sig_tot_GRV94_LO_scale.eps} \end{center} \caption{ \small Total LO cross section for $c \bar c$ and double-parton scattering production of $c \bar c c \bar c$ as a function of center-of-mass energy (left panel) and uncertainties due to the choice of (factorization, renormalization) scales (right panel).
We show in addition a parametrization of the total cross section in the left panel. } \label{fig:single_vs_double_LO} \end{figure} So far we have concentrated on DPS production of $c \bar c c \bar c$ and completely ignored SPS production of $c \bar c c \bar c$. In Refs.~\cite{SS2012,Hameren2014} we calculated the SPS contribution in the high-energy approximation \cite{SS2012} and including all diagrams in the collinear-factorization approach \cite{Hameren2014}. In Fig.~\ref{fig:SPSccbar_vs_SPSccbarccbar} we show the cross section from Ref.~\cite{Hameren2014}. The corresponding cross section at LHC energies is more than two orders of magnitude smaller than that for $c \bar c$ production, i.e. much smaller than the DPS contribution discussed in the previous figure. \begin{figure}[!h] \begin{center} \includegraphics[width=7cm]{sigma_pp_ccbarccbar_W.eps} \end{center} \caption{Cross section for SPS production of $c \bar c c \bar c$ compared to that for standard $c \bar c$ production as a function of collision energy. } \label{fig:SPSccbar_vs_SPSccbarccbar} \end{figure} In experiments one measures $D$ mesons instead of charm quarks/antiquarks. In Fig.~\ref{fig:DPS_ydiffandphid} we show the resulting distributions in the rapidity distance between two $D^0$ mesons (left panel) and the corresponding distribution in relative azimuthal angle (right panel). The DPS contribution (dashed line) dominates over the single-parton scattering one (dash-dotted line). The sum of the two contributions is represented by the solid line. We get reasonable agreement with the LHCb experimental data \cite{Aaij:2012dz}. \begin{figure} \begin{center} \includegraphics[width=6cm]{dsig_dydiff_lhcb_D0D0_vH.eps} \includegraphics[width=6cm]{dsig_dphid_lhcb_D0D0_vH.eps} \end{center} \caption{Rapidity distance between two $D^0$ mesons (left panel) and corresponding azimuthal correlations (right panel).
} \label{fig:DPS_ydiffandphid} \end{figure} The distribution in the invariant mass of the $D^0 D^0$ pair is shown in Fig.~\ref{fig:dsig_dMinv}. Again reasonable agreement is obtained. Some strength is missing in the interval 10 GeV $< M_{D^0 D^0} <$ 16 GeV. \begin{figure} \begin{center} \includegraphics[width=6cm]{dsig_dminv_lhcb_D0D0_vH.eps} \end{center} \caption{Distribution of the invariant mass of two $D^0$ mesons.} \label{fig:dsig_dMinv} \end{figure} At the LHC the cross section for $p p \to c \bar c$ is still bigger than that for $p p \to c \bar c c \bar c$ \cite{MS2013}. As shown in Fig.~\ref{fig:single_vs_double_LO}, the latter cross section grows fast and at high energy it may become even larger than that for single-pair production. The situation at the Future Circular Collider (FCC) is shown in Fig.~\ref{fig:charm_FCC}. Now the situation reverses and the cross section for $c \bar c c \bar c$ is bigger than that for single-pair production. We predict rather flat distributions in the charm quark/antiquark rapidity. The shapes in the quark/antiquark transverse momentum are almost identical, which can be easily understood within the formalism presented in the previous section. \begin{figure} \begin{center} \includegraphics[width=5cm]{dsig_dy_100TeV_dps_charm.eps} \includegraphics[width=5cm]{dsig_dpt_100TeV_dps_charm.eps} \end{center} \caption{Cross section for one $c$ or one $\bar c$ from the $c \bar c c \bar c$ final state at the FCC.} \label{fig:charm_FCC} \end{figure} \subsection{Parton splitting} As described in the Formalism section, the splitting contributions are calculated in leading order only. In the calculations performed in Ref.~\cite{GMS2014} we either assumed $\mu_1^2 = m_{1t}^2$ and $\mu_2^2 = m_{2t}^2$, or $\mu_1^2 = M_{c\bar c,1}^{2}$ and $\mu_2^2 = M_{c\bar c,2}^{2}$. The quantity $m_{it}$ is the transverse mass of either parton emerging from subprocess $i$, whilst $M_{c \bar c,i}$ is the invariant mass of the pair emerging from subprocess $i$.
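The two scale choices can be made concrete with a small kinematics helper; the charm mass and the momenta below are illustrative values only:

```python
import math

M_C = 1.5  # GeV, illustrative charm-quark mass

def transverse_mass_sq(pt):
    """Scale choice mu^2 = m_t^2 = m_c^2 + p_t^2 for one outgoing quark."""
    return M_C ** 2 + pt ** 2

def pair_mass_sq(pt1, y1, phi1, pt2, y2, phi2):
    """Scale choice mu^2 = M_{ccbar}^2, built from (p_t, y, phi)
    of the quark and the antiquark."""
    mt1 = math.sqrt(M_C ** 2 + pt1 ** 2)
    mt2 = math.sqrt(M_C ** 2 + pt2 ** 2)
    E = mt1 * math.cosh(y1) + mt2 * math.cosh(y2)
    pz = mt1 * math.sinh(y1) + mt2 * math.sinh(y2)
    px = pt1 * math.cos(phi1) + pt2 * math.cos(phi2)
    py = pt1 * math.sin(phi1) + pt2 * math.sin(phi2)
    return E * E - px * px - py * py - pz * pz

print(transverse_mass_sq(5.0))  # -> 27.25
# Back-to-back pair at equal rapidity: M^2 = 4 m_t^2
print(pair_mass_sq(5.0, 0.0, 0.0, 5.0, 0.0, math.pi))
```

For exactly back-to-back LO kinematics $M_{c\bar c} = 2 m_t$, so the two scale choices differ by a fixed factor; away from that configuration they probe genuinely different scales.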
In Fig.~\ref{fig:dsig_dy} we show the rapidity distribution of the charm quark/antiquark for different choices of the scale at $\sqrt{s}$ = 7 TeV. The conventional and splitting terms are shown separately. The splitting contribution (lowest curve, red online) is smaller, but has almost the same shape as the conventional DPS contribution. We can observe asymmetric (in rapidity) shapes for the $1v2$ and $2v1$ contributions. \begin{figure} \begin{center} \includegraphics[width=7cm]{dsig_dy_charm_2v1vs2v2_mt_new.eps} \end{center} \caption{ Rapidity distribution of charm quark/antiquark for $\sqrt{s}$ = 7 TeV for $\mu_1^2 = m_{1t}^2$, $\mu_2^2 = m_{2t}^2$. } \label{fig:dsig_dy} \end{figure} The corresponding ratio of the 2v1-to-2v2 contributions as a function of rapidity is shown in Fig.~\ref{fig:ratio_y_charm_2v1vs2v2}. \begin{figure} \begin{center} \includegraphics[width=6cm]{ratio_y_charm_2v1vs2v2.eps} \end{center} \caption{ Ratio of the 2v1 to 2v2 cross sections as a function of quark/antiquark rapidity.} \label{fig:ratio_y_charm_2v1vs2v2} \end{figure} In Fig.~\ref{fig:ratio_sqrtS_charm} we show the energy dependence of the ratio of the 2v1 to 2v2 cross sections. The ratio systematically decreases with the collision energy. \begin{figure} \begin{center} \includegraphics[width=6cm]{ratio_sqrtS_charm.eps} \end{center} \caption{ Ratio of the 2v1 to 2v2 cross sections as a function of collision energy.} \label{fig:ratio_sqrtS_charm} \end{figure} Finally, in Fig.~\ref{fig:sig_eff_charm} we show the empirical $\sigma_{eff}$ for double charm production. Again $\sigma_{eff}$ rises with the centre-of-mass energy. A sizeable difference of the results for different choices of scales can be observed. \begin{figure} \begin{center} \includegraphics[width=6cm]{sigeff_sqrtS_charm.eps} \end{center} \caption{ Energy and factorization scale dependence of $\sigma_{eff}$ for $c \bar c c \bar c$ production as a consequence of the existence of the two DPS components.
In this calculation $\sigma_{eff,2v2}$ = 30 mb and $\sigma_{eff,2v1}$ = 15 mb. } \label{fig:sig_eff_charm} \end{figure} \subsection{Jets with large rapidity separation} In Fig.~\ref{fig:pt-and-y-spectra-CMSjets} we compare our calculation for inclusive jet production with the CMS data \cite{CMSjets}. In addition, we show the contributions of different partonic mechanisms. In all rapidity intervals the gluon-gluon and quark-gluon (gluon-quark) contributions clearly dominate over the other contributions, and in practice it is sufficient to include only these subprocesses in further analysis. \begin{figure}[!h] \begin{center} \includegraphics[width=4cm]{dsig_dpt_CMS_rap1.eps} \includegraphics[width=4cm]{dsig_dpt_CMS_rap2.eps} \includegraphics[width=4cm]{dsig_dpt_CMS_rap3.eps} \includegraphics[width=4cm]{dsig_dpt_CMS_rap4.eps} \includegraphics[width=4cm]{dsig_dpt_CMS_rap5.eps} \includegraphics[width=4cm]{dsig_dpt_CMS_rap6.eps} \end{center} \caption{ \small Our results for inclusive jet production against the CMS experimental data \cite{CMSjets}. In addition we show the decomposition into different partonic components, as explained in the legend. } \label{fig:pt-and-y-spectra-CMSjets} \end{figure} Now we proceed to jets with large rapidity separation. In Fig.~\ref{fig:Deltay1} we show the distribution in the rapidity distance between the two jets in the leading-order collinear calculation and between the jets most distant in rapidity in the case of four DPS jets. In this calculation we have included cuts for the CMS experiment \cite{CMS_private}: $y_1, y_2 \in$ (-4.7,4.7), $p_{1t}, p_{2t} \in$ (35 GeV, 60 GeV). For comparison we also show results of the BFKL calculation from Ref.~\cite{Ducloue:2013hia}. For this kinematics the DPS jets give a sizeable contribution only at large rapidity distances. The NLL BFKL cross section (long-dashed line) is smaller than that for the LO collinear approach (short-dashed line).
\begin{figure}[!h] \begin{center} \includegraphics[width=5cm]{dsig_dDeltay_pt35_7TeV.eps} \includegraphics[width=5cm]{dsig_dDeltay_pt35_14TeV.eps} \end{center} \caption{ \small Distribution in rapidity distance between jets (35 GeV $< p_t <$ 60 GeV) with maximal (the most positive) and minimal (the most negative) rapidities. The collinear pQCD result is shown by the short-dashed line and the DPS result by the solid line for $\sqrt{s}$ = 7 TeV (left panel) and $\sqrt{s}$ = 14 TeV (right panel). For comparison we also show results for the BFKL Mueller-Navelet jets in the leading-logarithm (LL) and next-to-leading logarithm (NLL) approaches from Ref.~\cite{Ducloue:2013hia}. } \label{fig:Deltay1} \end{figure} In Fig.~\ref{fig:Deltay-2} we show the rapidity-distance distribution for an even smaller lower cut on the transverse momentum of the ``jet''. A measurement of such minijets may, however, be difficult. Now the DPS contribution may even exceed the standard SPS dijet contribution, especially at the nominal LHC energy. How to measure such (mini)jets is an open issue. In principle, one could measure correlations of semihard ($p_t \sim$ 10 GeV) neutral pions with the help of the so-called zero-degree calorimeters (ZDC) installed at all major LHC experiments. \begin{figure}[!h] \begin{center} \includegraphics[width=5cm]{dsig_dDeltay_pt20_7TeV.eps} \includegraphics[width=5cm]{dsig_dDeltay_pt20_14TeV.eps} \\ \includegraphics[width=5cm]{dsig_dDeltay_pt10_7TeV.eps} \includegraphics[width=5cm]{dsig_dDeltay_pt10_14TeV.eps} \end{center} \caption{ \small The same as in the previous figure but now for a somewhat smaller lower cut on the minijet transverse momentum. } \label{fig:Deltay-2} \end{figure} \subsection{Production of $W^+ W^-$ pairs} It was argued that the DPS contribution to inclusive $W^+ W^-$ production could be large \cite{KP2013}. Here we partly report results from Ref.~\cite{LSR2015}.
In this analysis we have assumed $\sigma_{eff}$ = 15 mb, as is the phenomenological standard for many other, mostly gluon-gluon induced, processes. A similar value was also used in another recent analysis \cite{GL2014}, where in addition evolution effects of the dPDFs were discussed. In our opinion the normalization of the cross section may be an open issue \cite{LSR2015}. Therefore below we rather compare the shapes of a few distributions. In Fig.~\ref{fig:WW_maps} we show two-dimensional distributions in the rapidity of the $W^+$ and of the $W^-$. For reference we also show distributions for the $\gamma \gamma$ and $q \bar q$ components (see a detailed discussion in Ref.~\cite{LSR2015}). The DPS contribution seems broader in the $(y_{W^{+}},y_{W^{-}})$ space than the other two contributions. \begin{figure} \begin{center} \includegraphics[width=4.1cm]{map_y1y2_DPS_mp.eps} \includegraphics[width=4.1cm]{map_y1y2_in_in.eps} \includegraphics[width=4.1cm]{map_y1y2_qq.eps} \end{center} \caption{ Two-dimensional distributions in rapidity of $W^+$ and rapidity of $W^-$ for the DPS mechanism (left), $\gamma \gamma$ (middle) and $q \bar q$ (right) mechanism for $\sqrt{s}$ = 8 TeV. } \label{fig:WW_maps} \end{figure} In Fig.~\ref{fig:dsig_dM_DPS} we show the invariant mass $M_{WW}$ distribution for $\sqrt{s}$ = 8 TeV. The DPS contribution seems to dominate at very large invariant masses.
\begin{figure} \begin{center} \includegraphics[width=7.0cm]{dsig_dM_DPS.eps} \end{center} \caption{$M_{WW}$ invariant mass distribution for the different mechanisms discussed in Ref.~\cite{LSR2015}.} \label{fig:dsig_dM_DPS} \end{figure} \begin{table}[tb] \caption{Cross section for $W^+ W^-$ production at different collision energies for the dominant $q \bar q$ and DPS contributions.} \label{table1} \begin{center} \begin{tabular}{|l|c|c|} \hline $\sqrt{s}$ (GeV) & $q \bar q$ & DPS \\ \hline 8000 & 0.032575 & $0.1775 \times 10^{-3}$ \\ 14000 & 0.06402 & $0.6367 \times 10^{-3}$ \\ 100000 & 0.53820 & 0.03832 \\ \hline \end{tabular} \end{center} \end{table} How the situation may look at future high-energy experiments at the LHC and FCC is shown in Table~\ref{table1} and Fig.~\ref{fig:dsig_dy_DPS_future}. Now the DPS (a conservative estimate) is relatively larger compared to the other contributions. \begin{figure} \begin{center} \includegraphics[width=6.0cm]{dsig_dy_W_14.eps} \includegraphics[width=6.0cm]{dsig_dy_W_100.eps} \end{center} \caption{Our predictions for future experiments.} \label{fig:dsig_dy_DPS_future} \end{figure} In experiments one can measure charged leptons, not $W^{\pm}$ bosons. Therefore a detailed study of lepton distributions is needed. As an example we show (see Fig.~\ref{fig:dsig_dMWW_lplm}) the distribution of the invariant mass of the charged leptons compared with that for the gauge bosons. Only a relatively small shift towards smaller invariant masses is observed. More detailed studies are necessary to answer whether the DPS $W^+ W^-$ contribution can be identified experimentally. Several background contributions have to be considered. We leave such detailed studies for the future. \begin{figure} \begin{center} \includegraphics[width=5.0cm]{dsig_dMWW_lplm.eps} \end{center} \caption{Invariant mass distribution of the $W^{+}W^{-}$ system (thick solid line) and the corresponding distribution for the $\mu^{+}\mu^{-}$ system.
No branching fractions are included.} \label{fig:dsig_dMWW_lplm} \end{figure} \section{Conclusions} We have briefly reviewed some double-parton scattering processes considered by us recently. First we have shown, within the leading-order collinear-factorization approach, that the cross section for $c \bar c c \bar c$ production grows much faster than the cross section for $c \bar c$, making the production of two $c \bar c$ pairs very attractive in the context of exploring double-parton scattering processes. We have also discussed the production of $c \bar c c \bar c$ in double-parton scattering in the factorized Ansatz with each step calculated in the $k_t$-factorization approach, i.e. effectively including higher-order QCD corrections. The cross section for the same process calculated in the $k_t$-factorization approach turned out to be larger than its counterpart calculated in the LO collinear approach. We have also calculated cross sections for the production of $D_i D_j$ (both containing a $c$ quark) and $\bar D_i \bar D_j$ (both containing a $\bar c$ antiquark) pairs of mesons. The results of the calculation have been compared to recent results of the LHCb collaboration. The total rates of meson pair production depend on the unintegrated gluon distributions (UGDFs). The best agreement with the LHCb data has been obtained for the Kimber-Martin-Ryskin UGDF. This approach, as discussed already in the literature, effectively includes higher-order QCD corrections. As an example we have shown some differential distributions for $D^0 D^0$ pair production. Rather good agreement has been obtained for the transverse momentum distribution of $D^0$ $(\bar D^0)$ mesons and the $D^0 D^0$ invariant mass distribution. The distribution in the azimuthal angle between both $D^0$'s suggests that some contributions may still be missing. The single-parton scattering contribution, calculated in the high-energy approximation, turned out to be rather small.
In the meantime we have checked that the $2 \to 4$ ($gg \to c \bar c c \bar c$) $k_t$-factorization approach leads to results similar to those of the collinear approach discussed here \cite{HMS2015}. We have also discussed a new type of mechanism, called parton splitting, in the context of $c \bar c c \bar c$ production. Our calculation showed that the parton-splitting mechanism gives a sizeable contribution and has to be included when analysing experimental data. However, it is too early at the moment for precise predictions of the corresponding contributions, as our results strongly depend on the values of the poorly known parameters $\sigma_{eff,2v2}$ and $\sigma_{eff,2v1}$. Some examples inspired by a simple geometrical model of colliding partons have been shown. A better understanding of the two nonperturbative parameters is a future task. We have shown that almost all differential distributions for the conventional and the parton-splitting contributions have essentially the same shape. This makes their model-independent separation extremely difficult. This also shows why the analyses performed so far could describe different experimental data sets in terms of the conventional 2v2 contribution alone. The sum of the 2v1 and 2v2 contributions behaves almost exactly like the 2v2 contribution, albeit with a smaller $\sigma_{eff}$ that depends only weakly on energy, scale and momentum fractions. With the perturbative 2v1 mechanism included, $\sigma_{eff}$ increases as $\sqrt{s}$ is increased, and decreases as $Q$ is increased. We have also discussed how the double-parton scattering effects may contribute to large-rapidity-distance dijet correlations. The presented calculations were performed in the leading-order approximation only, i.e. each step of the DPS was calculated in leading-order collinear pQCD. Already the leading-order calculation provides a quite adequate description of inclusive jet production when confronted with recent results obtained by the ATLAS and CMS collaborations.
We have identified the dominant partonic pQCD subprocesses relevant for the production of jets with a large rapidity distance. We have shown distributions in the rapidity distance between the jets most distant in rapidity. The results of the dijet SPS mechanism have been compared to those of the DPS mechanism. We have performed calculations relevant for a planned CMS analysis. The contribution of the DPS mechanism increases with increasing rapidity distance between the jets. We have also shown some recent predictions for Mueller-Navelet jets in the LL and NLL BFKL framework from the literature. For the CMS configuration our DPS contribution is smaller than the dijet SPS contribution even at high rapidity distances and only slightly smaller than that of the NLL BFKL calculation known from the literature. The DPS final-state topology is clearly different from that of the dijet SPS (four versus two jets), which may help to disentangle the two mechanisms experimentally. We have shown that the relative effect of DPS can be increased by lowering the transverse momenta. Alternatively, one could study correlations of semihard pions distant in rapidity. Correlations of two neutral pions could be measured, at least in principle, with the help of the so-called zero-degree calorimeters present at all main LHC detectors. The DPS effects are interesting not only in the context of how they contribute to the distribution in rapidity distance but also per se. One could make use of correlations in jet transverse momenta, jet imbalance and azimuthal correlations to enhance the contribution of DPS. Further detailed Monte Carlo studies are required to settle a real experimental program for such studies. An analysis of four-jet final states, with distributions in rapidity distance and other kinematical observables, was performed by us very recently \cite{MS2015}. Finally, we have discussed DPS effects in inclusive production of $W^+ W^-$ pairs.
We have shown that the relative contribution of DPS grows with collision energy. In experiments one measures electrons or muons rather than the gauge bosons themselves. Whether an experimental identification of the DPS contribution is possible in this case requires detailed Monte Carlo studies. \vspace{1cm} {\bf Acknowledgments} This presentation is based on common work mostly with Rafa{\l} Maciu{\l}a and partially with Jonathan Gaunt, Marta {\L}uszczak and Wolfgang Sch\"afer. I am very indebted to Rafa{\l} Maciu{\l}a for help in preparing this manuscript.
\section{Introduction} Radio galaxies are a subclass of AGN that display jets observed at a large angle of $\theta>10^{\circ}$ with respect to the line of sight, enabling a view of both the jet and the core. In the unification model of AGN, they are thought to be the radio-loud counterparts of Seyfert galaxies \citep{Antonucci1993,Urry1995,2012agn}. In the case of blazars, $\gamma$-ray emission is expected to be detected due to the small jet angle and the resulting relativistic beaming towards the observer. The relativistic jet dominates the emission, and due to Doppler boosting this emission can reach into the $\gamma$-ray and TeV range. However, several non-blazar AGN, and in particular radio galaxies, have also been detected in $\gamma$-rays \citep[see for example the third {\it Fermi}/LAT source catalogue;][]{thirdsourcecat2015}, and a few of these sources have been observed at energies $>100 \rm \, GeV$ \citep{Perkins2012}. The mechanism driving this high-energy emission in radio galaxies is still under discussion. M~87 is an FR-I radio galaxy \citep{FanaroffRiley1974MNRAS,Laing1983} with a central supermassive black hole of mass \mbox{$\rm M_{BH}=(3-6)\times 10^{9}\, \rm M_{\odot}$} \citep{Macchetto1997,Gebhardt2009, Batcheldor2010}, at a distance of 16 Mpc \citep{Tonry1991}. The jet angle has been estimated at $\theta=15^{\circ}$ with respect to the line of sight, based on proper motions of the jet features \citep{Biretta1999}. M~87 has been detected by {\it Fermi}/LAT \citep{Abdo2009}, making it the third radio galaxy to be detected in $\gamma$-rays, after Centaurus~A and NGC~1275. M~87 has also been detected in the VHE range during flares, e.g. by {\it HEGRA} \citep{Aharonian2003}. 
This source shows variable emission in the soft ($< 10\rm \, keV$) X-ray regime \citep{Harris2009}, and extrapolating from high flux states observed by {\it Chandra} ($f \simeq 2 \times 10^{-12} \rm \, erg \, cm^{-2} \, s^{-1}$), a flux of \mbox{$f \simeq 10^{-12} \rm \, erg\, cm^{-2} \, s^{-1}$} would be expected between 20 and 60 keV. \citet{Walter2008} reported a detection with \mbox{$f=(8.6 \pm 1.8)\times 10^{-12} \rm \, erg \, cm^{-2} \, s^{-1}$} between 20--60 keV using {\it INTEGRAL}/ISGRI data. However, we will show that this detection cannot be confirmed using the latest software and instrument calibration. The proximity of M~87 allows us to image the core separately from the jet and the diffuse extended emission at several wavelengths, such as radio, optical, and soft X-rays, but not in hard X-rays ($> 10 \rm\, keV$) nor at higher energies. At energies above $10 \rm\, keV$ the resolution of commonly used coded-mask instruments (e.g. {\it INTEGRAL}, {\it Swift}/BAT) is of the order of 10'. Only recently, with the narrow-field instrument of {\it NuSTAR}, has a resolution of $\sim$10'' been achieved in hard X-rays using grazing-incidence optics. By studying the spectral energy distribution (SED) it is possible to test whether the high-energy emission originates from the core, the jet, or from an extended region. Earlier SEDs of M~87 have been represented by a synchrotron self-Compton (SSC) type model \citep[see for example][]{Abdo2009}. SSC models are often used to represent blazar spectra, but they have also been applied to other $\gamma$-ray detected radio galaxies, such as Centaurus~A \citep{AbdoCenA2010}. 
The latter study was inconclusive, though, on the question whether the X-ray domain is dominated by the non-thermal jet emission, or arises rather from a Seyfert-type core \citep[see also][]{Beckmann2011}. In this paper we present an upper limit on the average long-term hard X-ray emission of M~87, using 1.7 Ms of {\it INTEGRAL} IBIS/ISGRI data and different techniques for performance enhancement of {\it INTEGRAL} IBIS/ISGRI. We have also analysed {\it Suzaku} data from November 2006, in which we have detected M~87 for the first time between 20--60 keV. This is combined with other data, namely soft X-rays provided by {\it INTEGRAL}/JEM-X, $\gamma$-rays by {\it Fermi}/LAT, historical radio, infrared and optical emission of the core, and VHE data from H.E.S.S., to create an average SED. To understand the 2006 {\it Suzaku} observations we create two simultaneous SEDs: one for the M~87 core and one for the bright jet knot HST-1. Error values quoted in this paper are at the 1-$\sigma$ level unless indicated otherwise. \section{Data analysis} \subsection{\it INTEGRAL} In this study we used all available data on M~87 taken by the {\it INTEGRAL} mission \citep{INTEGRALmission} since its launch. Observations have been performed in dithering mode with pointed observations, so-called science windows, lasting between 2000 s and 4000 s. The data cover the time from 2003 to 2011, with most of the observations taken after 2008. We analysed data from the Joint European X-ray Monitor (JEM-X) and from the IBIS/ISGRI imager. JEM-X is a coded-mask instrument that consists of two identical co-aligned telescopes and operates in the 3 to 35 keV band \citep{Lund2003}. The field of view is circular with a diameter of 4.8 degrees (fully coded) and the angular resolution is 3.25 arcminutes. We created images in the energy ranges of 3--10 keV and 10--25 keV for each pointing and combined these into a single mosaic image for both JEM-X detectors. 
In the 3--10 keV energy band we detect M~87 with a significance of $15 \sigma$ and a flux of $f=1.6 \times10^{-11} \rm \, erg \, cm^{-2} \, s^{-1}$. In the 10--25 keV energy band M~87 is not detectable, with a $3\sigma$ upper limit of $f \la 1.2\times10^{-11} \rm \, erg \, cm^{-2} \, s^{-1}$. The {\it INTEGRAL} Soft Gamma-Ray Imager (ISGRI) is part of the Imager on Board the {\it INTEGRAL} Satellite (IBIS), which is also a coded-mask instrument. ISGRI is sensitive between 15 keV and 1 MeV \citep{Lebrun2003}. The field of view is $9^\circ \times 9^\circ$ (fully coded), and the angular resolution, limited by the coded-mask technology, is \mbox{12 arcminutes}. The total IBIS/ISGRI data set reaches an effective on-source exposure time of 1.7 Ms (see Fig.~\ref{mosaic_features}). Using the standard \mbox{Offline Scientific Analysis} (OSA) package version 9 provided by the ISDC \citep{Courvoisier03}, we created images for each pointing in the energy range 20--60 keV and combined them into a mosaic. At the position of M~87 we derived a detection significance of $3.8 \sigma$ and a flux of \mbox{$f \simeq 3\times 10^{-12}$ $\rm \, erg \, cm^{-2} \, s^{-1}$}. However, the mosaic image shows strong noise features in the vicinity of M~87, and thus the detection cannot be deemed trustworthy. To improve the quality of the image we applied several techniques, a summary of which can be found in Table~\ref{isgri_res}. In order to quantify the quality of the mosaic images, we determined histograms of the significance image and the root mean square (rms, $s_{\rm rms}$) of each histogram; in the ideal case the mean should be $0\sigma$ and $s_{\rm rms}=1$. Using the $s_{\rm rms}$ we can track the improvement of the mosaic quality. For the total mosaic created using OSA~9 we derived $s_{\rm rms}=1.75$ (see Tab.~\ref{isgri_res}). To improve the image we started by excluding from the mosaic analysis those science windows that showed a high noise level, i.e. 
we removed all science windows with $s_{\rm rms} >1.2$ ($\sim 4\%$ of the total). The global image quality improved, and the M~87 detection is no longer significant. We then processed the good science windows with the more recent OSA~10 software. We found a detection significance of $1.32 \sigma$ at the position of M~87 and derived an upper limit of $f \la 3.3 \times 10^{-12} \rm \, erg \, cm^{-2} \, s^{-1}$, within a mosaic with an rms of $s_{\rm rms}=1.38$ (OSA~10 in Table~\ref{isgri_res}). We then produced mosaics per revolution (a revolution lasts about 3 days and has an effective exposure time of $\sim 200 \rm \, ks$) to evaluate their quality based on the $s_{\rm rms}$ value of the significance map. For these mosaics we set a lower rms threshold, 1.1, because the fluctuations average out more in a mosaic than in a single science window. Most of the revolutions that have $s_{\rm rms}<1.1$ are within the first 6.5 years of the mission (rev. $ < 800$, see Fig.~\ref{mosaic_features}). This is probably due to the evolution of the instrumental background (private communication). \begin{figure} \includegraphics[width=8cm, keepaspectratio=true]{Mosaic_features_new.pdf} \caption{The evolution of the $s_{\rm rms}$ of the significance image of the mosaic per revolution. The line shows the cut at $s_{\rm rms}=1.1$. Later revolutions, starting around revolution 800 (2009), have on average a higher rms.} \label{mosaic_features} \end{figure} We then combined the images from revolutions with $s_{\rm rms}<1.1$. The rms of the combined mosaic is $s_{\rm rms}=1.18$, and we derived for M~87 a $3\sigma$ upper limit of $f < 4.2 \times 10^{-12} \rm \, erg \, cm^{-2} \, s^{-1}$ (Tab.~\ref{isgri_res}: OSA 10, selected revolutions). We also made a mosaic combining the revolutions with $s_{\rm rms}>1.1$, which resulted in an upper limit to the flux of $4.9 \times 10^{-12} \rm \, erg \, cm^{-2} \, s^{-1}$ and a mosaic rms of $s_{\rm rms}=1.39$. 
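The histogram-based quality metric can be illustrated with a short numerical sketch; the simulated significance maps, the source-masking threshold and the cut values below are illustrative assumptions, not part of the OSA pipeline:

```python
import numpy as np

def significance_rms(sig_map, exclude_above=5.0):
    """Mean and rms of the per-pixel significance histogram.

    For pure statistical noise the mean should be ~0 and the rms ~1;
    systematics inflate the rms. Bright pixels are masked (assumed cut).
    """
    pix = sig_map[np.isfinite(sig_map)]
    pix = pix[np.abs(pix) < exclude_above]  # mask sources / strong artefacts
    return pix.mean(), pix.std()

rng = np.random.default_rng(0)
clean = rng.normal(0.0, 1.0, size=(200, 200))  # well-behaved mosaic
noisy = rng.normal(0.0, 1.4, size=(200, 200))  # mosaic with excess systematics
m_clean, s_clean = significance_rms(clean)
m_noisy, s_noisy = significance_rms(noisy)

# Science windows (cut at 1.2) or revolutions (cut at 1.1) with a high
# s_rms are rejected from the combined mosaic.
keep_clean = s_clean < 1.2
keep_noisy = s_noisy < 1.2
```

The same statistic serves both selection steps; only the threshold differs between single science windows and per-revolution mosaics.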
Lastly we used a technique that required changes to the program {\tt ii\_shadow\_ubc}, which is part of the OSA 10 pipeline. This routine performs the background correction using the detector images (shadowgrams). To decrease the noise in the images we removed pixels from the borders of the shadowgrams, since the borders of the detector contain the most unpredictable instrumental background. After testing several configurations on single revolutions we used a cut of the 3 outermost pixels to create 3 mosaics containing all available observations: one containing the non-modified science windows (control), one in which all science windows were modified, and one in which the cuts were applied only to those revolutions with a high rms value. To evaluate these mosaics quantitatively we calculated the rms $s_{\rm rms}$ of the significance map, where we now considered only the inner $10^\circ \times 10^\circ$, since the borders of the mosaics are unnaturally smooth due to the cutting procedure of the shadowgrams, and excluded the area around bright sources in the field. The rms is lowest for the mosaic with all science windows modified, $s_{\rm rms}=1.40$, and highest for the unmodified mosaic, $s_{\rm rms}=1.54$. However, we found that these techniques did not further alter the upper limit on the flux of M~87. In {\it INTEGRAL}/ISGRI observations the systematic errors depend on the exposure time and on the number and brightness of the sources in the field. In the M~87 field the number of sources is relatively low and the sources present are only moderately bright, thereby limiting the influence of systematic errors. In addition, the systematic errors are averaged out due to the broad energy range and long observation time. In this field it has been verified that the systematic errors are negligible compared to the statistical errors. 
\begin{table*} \centering \caption{Results of the IBIS/ISGRI analysis of the field of M~87, applying different data-selection criteria. The detection found with the OSA 9 analysis is spurious. The count rates have been converted into fluxes assuming a Crab-like spectrum.} {\footnotesize \begin{tabular*}{\textwidth}{lccccc} \hline \hline \noalign{\smallskip} Method & Mosaic rms $s_{\rm rms}$ & Detection significance & Count rate & M~87 $3 \sigma \,$ upper limit & Detection significance \\ & & M~87 [$\sigma$] & [s$^{-1}$] & [$10^{-12} \rm \, erg \, cm^{-2} \, s^{-1}$] & NGC~4388 [$\sigma$]\\ \hline \noalign{\smallskip} OSA 9 & 1.75 & 3.77 & 2.34$\pm$0.02 & 3.0 & 148.0 \\ OSA 10 & 1.38 & 1.35 & 2.20$\pm$0.02 & 3.3 & 113.5\\ OSA 10: selected revolutions & 1.18 & 1.70 & 2.33$\pm$0.03 & 4.2 & 89.4\\ OSA 10: borders cut & 1.40 & 1.16 & 2.20$\pm$0.02 & 3.2 & 111.3\\ \noalign{\smallskip} \hline \end{tabular*} \label{isgri_res} } \end{table*} \subsection{\it Suzaku} {\it Suzaku} observed M~87 from November 29 to December 2, 2006, with an elapsed time of 187~ks in HXD nominal pointing mode. We analysed data from both the X-ray Imaging Spectrometer \citep[XIS,][]{Koyama2007} and the Hard X-ray Detector \citep[HXD,][]{Takahashi2007}. The XIS instrument operates between $\sim$0.2--12.0 keV and consists of three separate CCD detectors with a field of view of $18' \times 18'$ and a spatial resolution of 1.6--2.0 arcmin. The XIS data were reprocessed applying the standard event cuts, and a spectrum was extracted from a circular region with a 60'' radius around the core. Due to calibration issues the observations between 0.4 and $\sim$3 keV show strong residuals. The 3--7 keV band displays thermal signatures, such as a tentative iron K$\alpha$ line at 6.7 keV (upper limit to the equivalent width of 200 eV), indicating that the emission originates from a hot diffuse gas. This is due to the extraction region used, which includes both the core of M~87 and the surrounding hot gas. 
Since we are interested in the non-thermal core emission, we model only the 7--10 keV band, where the thermal emission from the hot gas is not dominant. An absorbed power-law model yielded a fit with $\chi_\nu^2=1.03$ (for 150 d.o.f.), with a fixed Galactic hydrogen column density $N_{\rm H}=2 \times 10^{\rm 20} \rm \, cm^{\rm -2}$ and a power-law index of $\Gamma=2.3\pm0.3$ (90\% error). The extrapolated 2--10 keV power-law flux is $f=(2.5^{+0.2}_{-0.4})\times 10^{-11} \rm \, erg \, cm^{-2} \, s^{-1}$. The HXD is a collimated detector with a field of view of $34' \times 34'$. The detector consists of two independent systems: silicon PIN diodes that operate in the range $\sim$10--60 keV, and GSO scintillation counters that function between $\sim$30--600$ \rm \, keV$. We reprocessed the data applying the standard event cuts. Due to the low count rate in the GSO band no significant detection could be extracted from this detector. For the PIN detector we extracted a significant spectrum between 15 and 70~keV (Fig.~\ref{pinspec}). The spectrum can be represented by an absorbed power law with a fixed column density $N_{\rm H}=2 \times 10^{\rm 20} \rm \, cm^{\rm -2}$ and a power-law index of $\Gamma=2.8^{+0.5}_{-0.4}$ (90\% errors), giving a reduced $\chi_\nu^2=1.17$ (for 12 degrees of freedom). The model flux is $f = (1.04^{+0.03}_{-0.19})\times 10^{-11} \rm \, erg \, cm^{-2} \, s^{-1} $ between 20 and 60 keV. The combined XIS and HXD spectrum modelled with an absorbed power law shows a best fit of $\chi_\nu^2=1.03$ (150 d.o.f.) and a power-law index of $\Gamma=2.6\pm0.2$ (90\% error). While the spectral indices of the XIS and HXD differ, they are consistent within the 90\% confidence intervals. \begin{figure} \includegraphics[width=8cm, keepaspectratio=true]{m87pin_delchi.pdf} \caption{{\it Suzaku}/PIN count spectrum between 15 and 70 keV with an absorbed power-law model fit. 
The bottom panel shows the residuals of the fit in units of the standard deviation, with 1$\sigma$ error bars.} \label{pinspec} \end{figure} \subsection{{\it Fermi}/LAT} The Large Area Telescope \citep[LAT,][]{LAT} aboard the {\it Fermi} satellite is a pair-conversion instrument that is sensitive between 20 MeV and 300 GeV. We used all available data taken between August 2008 and May 2012, with a total effective exposure of 30 Ms. We selected source-class events (P7SOURCE\_V6) between 100 MeV and 100 GeV in a circular region of $30^{\circ}$ around M~87. The large extraction radius is necessary for the binned analysis we performed. Events with a zenith angle of more than $100^{\circ}$ were excluded, and we used the standard cuts proposed by the {\it Fermi} team, based on the data quality of the events and on the instrument configuration. In order to obtain a reliable result for the source of interest, we included in the maximum-likelihood fitting procedure all sources reported in the second {\it Fermi} catalogue (2FGL) within $45^{\circ}$ of the position of M~87. Between 100 MeV and 100 GeV we derived a flux for M~87 of $f = (2.2 \pm 0.3)\times 10^{-8} \rm \, ph \, cm^{-2} \, s^{-1}$ and a power-law index of $2.16 \pm 0.06$, with a test statistic $TS=370$, which corresponds to a detection significance of $\sim 19 \sigma$. Since the source is bright in $\gamma$-rays, we divided the energy range 100 MeV to 100 GeV into 5 logarithmic bins to track the spectral evolution with energy. The results are summarized in Table~\ref{fermi}, where we give the significance, flux and power-law index per energy range. \begin{table} \caption{Results of the binned likelihood analysis of the {\it Fermi}/LAT data in 5 bins between 0.1--100 GeV, fitted with a power law of photon index $\Gamma$. 
The errors are given at the $1\sigma$ confidence level.} {\footnotesize \begin{tabular}{c c c r c} \hline \hline \noalign{\smallskip} bin & energy range & flux & $TS$ & $\Gamma$\\ & [GeV] & $[10^{-12} \rm \, erg \, cm^{-2} \, s^{-1}]$ & & \\ \hline \noalign{\smallskip} 1 & 0.1--0.4 & $4.4\pm1.5 $ & 36 & 1.9$\pm$0.6 \\ 2 & 0.4--1.6 & $3.9\pm0.5$ & 126 & 1.9 $\pm$ 0.3 \\ 3 & 1.6--3.0 & $2.1\pm0.3$ & 105 & 2.7$\pm$0.5 \\ 4 & 3.0--10 & $2.8 \pm 0.3$ & 87 & 1.47$\pm$ 0.05 \\ 5 & 10--100 & $1.6\pm0.4$ & 12 & 2.29 $\pm$0.06 \\ \noalign{\smallskip} \hline \end{tabular} \label{fermi} } \end{table} \section{Spectral energy distribution modelling} In order to derive a spectral energy distribution (SED) over the whole electromagnetic waveband, we included radio to UV data from the literature. We used the {\it INTEGRAL} IBIS/ISGRI upper limit between 20--60 keV, the {\it INTEGRAL}/JEM-X detection between 3--10 keV and the 10--25 keV upper limit, together with the {\it Fermi}/LAT data, to represent the long-term average emission, where we assume that the majority of the low-flux hard X-ray emission is due to the core rather than to the jet knots. We combined these data with core detections from radio to infrared from NED\footnote{http://ned.ipac.caltech.edu/} and VHE data from H.E.S.S., which observed M~87 between 2003 and 2006 \citep{Aharonian2006}. To model the 2006 {\it Suzaku} detection we use simultaneous observations, as described in Section 3.2. To represent the broad-band SED we applied a one-zone synchrotron self-Compton (SSC) model. This model assumes an isotropic population of high-energy electrons that emit synchrotron radiation, followed by inverse Compton scattering of the synchrotron photons to higher energies \citep{Maraschi1992}. In this simplified model the electron population is contained in a spherical volume with radius $R$ and a randomly oriented magnetic field $B$. 
The volume moves relativistically with a bulk Lorentz factor $\Gamma_b$ towards the observer in a jet with an angle $\theta$ to the line of sight. The emission is Doppler-shifted with a Doppler factor $\delta=[\Gamma_b (1-\beta \cos\theta)]^{-1}$. The electron energy distribution in the jet frame is assumed to follow a broken power law, with index $p_1$ between the minimum energy $E_{\rm min}$ and the break energy $E_{\rm br}$, and index $p_2$ between $E_{\rm br}$ and the maximum energy $E_{\rm max}$: \begin{equation} N(E)= \begin{cases} k E^{-p_1} & \text{if } E_{\rm min} <E< E_{\rm br}, \\ k E^{-p_2} & \text{if } E_{\rm br} <E< E_{\rm max}. \end{cases} \end{equation} Here, $k$ is the electron normalisation factor, and $p_1<3$ and $p_2>3$. The peak frequencies depend on the break energy via \begin{equation} \nu_s=\frac{4}{3}\gamma_{br}^2\frac{eB}{2\pi m_e c}\frac{\delta}{1+z} \end{equation} for the synchrotron peak, where \begin{equation} \gamma_{br}=\frac{E_{br}}{m_e c^2} \end{equation} is the Lorentz factor of the relativistic electrons at the break energy, and for the frequency of the inverse Compton peak (in the Thomson regime) \begin{equation} \nu_{IC}=\frac{4}{3} \gamma_{br}^2 \nu_s, \end{equation} where we assume that the dominant synchrotron power is emitted at the peak of the synchrotron branch. For some objects, such as bright flat-spectrum radio quasars (FSRQ), the SSC model does not properly represent the SED. In addition to the inverse Compton scattering of synchrotron photons, external seed photons, e.g. from the broad-line region, that are Compton-upscattered to higher energies can contribute to the inverse Compton emission \citep[external Compton component,][]{Dermer1993}. Otherwise, multiple scatterings should be considered to explain the Compton dominance of FSRQs \citep[e.g.][]{Georganopoulos2006}. For the modelling of M~87 we have used a single-zone SSC code developed by A. 
Tramacere\footnote{http://www.isdc.unige.ch/sedtool/} (to be released soon), in which the least-squares method is used to find the best fit of the numerical model \citep{Massaro2006,Tramacere2009,Tramacere2011}. This code allows one to apply several different shapes of the electron energy distribution, and we chose the broken power-law shape to compare the result of the fitting with previous works. In addition to the SSC emission there is also a host-galaxy component in the model, which shows a peak in the optical that is consistent with the data. The flux contribution of the host galaxy is $\nu f_{host} = (4\pm2)\times 10^{-12} \rm \, erg \, cm^{-2} \, s^{-1}$ for all fits presented below. In the following sections the results of the modelling are described (see Table~\ref{ssc_param} for the best-fit results). \begin{table*} \centering \caption{The results of the SED fitting of M~87. The first three columns show the results of fitting the average low-flux state using several different configurations for the jet angle. The columns ``2006 core'' and ``2006 HST-1 knot'' refer to the 2006 SEDs of the M~87 core and the HST-1 jet knot, respectively. The last column shows the result of the SED model presented by \citet{Abdo2009}. 
} {\footnotesize \begin{tabular*}{\textwidth}{l|l|l|l|l|l|l} \hline \hline \noalign{\smallskip} Parameter & $\theta=15^{\circ}$& Fixed beam & $\theta=10^{\circ}$ & 2006 core& 2006 HST-1 knot & Abdo 2009$^{c}$\\ \hline \noalign{\smallskip} $\theta$ & $15^{\circ \, a}$ & -$^{b}$ & 10$^{\circ \, a}$ & $15^{\circ \, a}$ & $15^{\circ \, a}$ & $10^{\circ \, a}$ \\ $B$ [G] & $2.0\times10^{-3}$ & $1.7\times 10^{-2}$ & $2.2\times 10^{-2}$ & $3.3\times10^{-3}$& $6.4\times10^{-1}$ & $5.5\times 10^{-2}$ \\ $R$ [cm] &$5.6\times10^{17} $ & $3.3\times 10^{16}$ & $2.4\times10^{16}$ & $5.0\times10^{17}$& $1.0\times10^{16}$ & $1.4 \times 10^{16}$ \\ $\Gamma_b$ & $3.8 $ & -$^{b}$ & $3.4 $ & $3.9$ & $1.2$ & $2.3$\\ $E_{\rm min}$ [eV] & $2.8\times10^7$ & $2.6\times10^{8}$ & $2.1\times10^8$ & $6.7\times10^7$ & $6.0\times10^7$ & $5\times 10^{5}$ \\ $E_{\rm max}$ [eV] & $2.3\times10^{13}$ & $6.5\times10^{13}$ & $5.1\times10^{13}$ & $2.5\times10^{13}$ $^{a}$& $2.5\times10^{13}$ $^{a}$ & $5 \times 10^{12}$\\ $E_{\rm br}$ [eV] & $2.0\times10^8 $ & $1.3\times 10^9$ & $1.3\times 10^9$ & $5.0\times10^8$ & $1.0\times10^{8}$ & $2\times 10^{9}$\\ $p_1$ & $-1.8$ & $1.1$ & $1.1$ & $2.8$ & $2.5$ &$1.6$ \\ $p_2$ & $3.4$ & $3.5$ & $3.5$ & $3.6$ & $3.4$ &$3.6$\\ $\chi_\nu^2$ & 1.9\, (17 d.o.f.) & 2.8\, (17 d.o.f.) & 3.2\, (16 d.o.f.) & 3.1\, (2 d.o.f.) & 1.9\, (2 d.o.f.) &-\\ \noalign{\smallskip} \hline \hline \end{tabular*} {\it a)} parameter fixed;\\ {\it b)} beaming factor $\delta$ fixed to 5, see text;\\ {\it c)} see \citet{Abdo2009} \label{ssc_param} } \end{table*} \subsection{SSC model for the average low-flux state} At first, we kept the angle $\theta$ and the bulk Lorentz factor $\Gamma_b$ free with a fixed beaming factor $\delta = 5$. This value is consistent with the apparent motion of $\sim 0.5c$ observed in the jet \citep{Kellermann2007}. The fit has a reduced $\chi_\nu^2$ of 2.8 (17 d.o.f., Table~\ref{ssc_param}). 
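The Doppler factor and the peak-frequency relations of the previous section can be evaluated numerically for a given parameter set. The sketch below uses the $\theta=15^{\circ}$ fit values from Table~\ref{ssc_param} and the redshift of M~87 ($z\simeq0.0043$) as assumed inputs; it is an illustration of the formulas, not the fitting code itself:

```python
import math

# Constants for the formulas above (cgs-based shortcuts).
M_E_C2_EV = 5.10999e5      # electron rest energy m_e c^2 [eV]
NU_CYC_HZ_PER_G = 2.799e6  # e B / (2 pi m_e c) for B = 1 G [Hz]

def doppler_factor(gamma_b, theta_deg):
    """delta = [Gamma_b (1 - beta cos theta)]^-1."""
    beta = math.sqrt(1.0 - 1.0 / gamma_b**2)
    return 1.0 / (gamma_b * (1.0 - beta * math.cos(math.radians(theta_deg))))

def ssc_peaks(e_br_ev, b_gauss, delta, z=0.0043):
    """Synchrotron and Thomson-regime inverse Compton peak frequencies [Hz]."""
    gamma_br = e_br_ev / M_E_C2_EV          # Lorentz factor at the break
    nu_s = (4.0 / 3.0) * gamma_br**2 * NU_CYC_HZ_PER_G * b_gauss * delta / (1.0 + z)
    nu_ic = (4.0 / 3.0) * gamma_br**2 * nu_s
    return nu_s, nu_ic

# theta = 15 deg fit: Gamma_b = 3.8, B = 2.0e-3 G, E_br = 2.0e8 eV
delta = doppler_factor(gamma_b=3.8, theta_deg=15.0)   # close to 3.9
nu_s, nu_ic = ssc_peaks(e_br_ev=2.0e8, b_gauss=2.0e-3, delta=delta)
```

The same helper evaluated with $\Gamma_b$ and $\theta$ from the other columns shows how strongly the peak positions react to the assumed geometry.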
This gives a value of $E_{\rm min}=2.6\times 10^{8}\rm\, eV$ for the minimum energy of the electrons, $E_{\rm max}=6.5\times10^{13} \rm \,eV$ for the maximum energy, and $E_{\rm br}=1.3\times 10^9\, \rm eV$ for the break energy. The magnetic field has a value of $B=1.7 \times 10^{-2}\rm\,G$ and the radius of the emission region is $R=3.3\times 10^{16}\rm\,cm$. The indices of the broken power law are $p_1=1.1$ and $p_2=3.5$. Next we fixed the jet angle at $\theta=10^{\circ}$, a value that is closer to those of blazars, although the angle seen in the large-scale radio jet appears to be rather $\theta=15^{\circ}$. This allows us to compare the results of our SED fitting with the previous work of \citet{Abdo2009}, who also used $\theta=10^{\circ}$, although this value does not reflect the true orientation of the jet \citep[e.g.][]{Meyer2013}. The fit gives a reduced $\chi_\nu^2 = 3.2$ (16 d.o.f., Table~\ref{ssc_param}). The magnetic field is $B= 2.2\times 10^{-2} \rm \, G$ and the radius of the emission region is $R=2.4\times10^{16} \, \rm cm$. A bulk Lorentz factor of $\Gamma_b =3.4$ is found, which is slightly higher than the value of $\Gamma_b=2.3$ derived by \citet{Abdo2009}. The minimum energy is $E_{\rm min}=2.1\times 10^8\rm\,eV$, the maximum energy is $E_{\rm max}=5.1\times 10^{13} \rm \, eV$, the break energy is $E_{\rm br}=1.3\times 10^9 \rm \, eV$, and the power-law indices found are $p_1=1.1$ and $p_2= 3.5$, all consistent with the previous result using a free angle. Except for the minimum energy of the electrons, these values are also consistent with those used by \citet{Abdo2009}. After this we fitted the data with a more conservative angle of $\theta=15^{\circ}$, consistent with the proper-motion observations by \citet{Biretta1999} and \citet{Meyer2013}. The SED is presented in Fig.~\ref{averageSED}. 
The fit, with a reduced $\chi_\nu^2 = 1.9$ (17 d.o.f., Table~\ref{ssc_param}), yields a lower magnetic field, \mbox{$B=2.0\times10^{-3} \rm \, G$}, and a larger emitting region, \mbox{$R=5.6\times 10^{17}\rm\,cm$}, compared to the previous fits. While the second index of the broken power law is similar, with $p_2=3.6$, the first index changes to $p_1=-1.8$. The bulk Lorentz factor increased slightly to $\Gamma_b =3.8$. The minimum energy of the electrons decreased to $E_{\rm min}=2.8 \times 10^7\rm\,eV$ and the maximum energy decreased slightly to $E_{\rm max}=2.3 \times 10^{13} \rm \,eV$. The break energy of the electron energy distribution also decreased, to $E_{\rm br}=2.0 \times10^8\, \rm eV$. \begin{figure} \includegraphics[width=9cm, keepaspectratio=true]{M87average_crop_UL_err.pdf} \caption{SED of the average low-flux state of M~87, using a jet angle of $\theta=15^{\circ}$. The arrows show the {\it INTEGRAL} upper limits. The solid line from the radio to the VHE domain shows the SSC fit. The synchrotron contribution (dashed) is dominant at low frequencies and the inverse Compton contribution (also dashed) is dominant at high frequencies. The host-galaxy component is shown with a solid line in the range $10^{14}-10^{15}\rm\, Hz$. The {\it Chandra}/ACIS observation of the M~87 core at $2.4\times 10^{17}\rm\, Hz$ was taken at the end of 2008, when M~87 was in a quiescent state \citep{Abdo2009}.} \label{averageSED} \end{figure} \subsection{SSC model for the high-flux state in 2006} Since the {\it Suzaku}/PIN data taken in 2006 indicate a flux level three times as high as the upper limit determined from the average 2003--2011 data provided by {\it INTEGRAL} IBIS/ISGRI, and similarly the {\it Suzaku}/XIS data show a 3--10 keV flux about four times the {\it INTEGRAL}/JEM-X core flux, we also collected additional 2006 data in order to derive a simultaneous SED for this period. 
Because we are not able to resolve the core and the jet with {\it Suzaku}/PIN, it is not clear whether the flux increase originates in the core or in one of the jet knots (see e.g. the light curves presented by \citealt{Abramowski2012}). Therefore we create two different SEDs: one for the M~87 core, and one for the bright jet knot HST-1, which is known to flare. We add simultaneous observations for both components from the {\it HST} \citep{Perlman2011}, the {\it VLA} \citep{Harris2009} and the {\it VLBA} \citep{Cheung2007}. In addition we add simultaneous VHE data from {\it MAGIC} \citep{Berger2011}; however, in $\gamma$-rays it is likewise not possible to distinguish between the core and the HST-1 knot. Lastly, we also include the H.E.S.S. observations from the average SED \citep{Aharonian2006}. While these observations were not taken simultaneously, no increased emission was observed from M~87 in 2006. \begin{figure} \includegraphics[width=9cm, keepaspectratio=true]{M872006core_crop_ed.pdf} \caption{SED of the core of M~87 during the high-flux state in 2006, using a jet angle of $\theta=15^{\circ}$. The SSC fit is shown with a solid line. The synchrotron branch is shown with a dashed line at low frequencies, and the inverse Compton contribution is shown with a dashed line at high frequencies. The {\it Suzaku}/PIN data do not seem to match the SED.} \label{2006core} \end{figure} The SED for the M~87 core is presented in Fig.~\ref{2006core}. The fit has a reduced $\chi_\nu^2 = 3.1$ (2 d.o.f., Table~\ref{ssc_param}), where the maximum energy has been fixed to $E_{\rm max}=2.5\times10^{13} \rm \,eV$. Using an angle of $\theta=15^{\circ}$, the bulk Lorentz factor, $\Gamma_b =3.6$, is consistent with that of the low-flux average state. The radius of the emitting region, $R=5.0\times 10^{17}\rm\,cm$, is consistent with the low-flux state modelled with the same jet angle, whereas the magnetic field increased slightly to $B=3.3\times10^{-3} \rm \, G$. 
Both the minimum energy and the break energy of the electron energy distribution increased slightly, to $E_{\rm min}=6.7\times10^7\rm\,eV$ and $E_{\rm br}=5.0\times10^8\, \rm eV$. The power law describing the electron energy distribution has indices $p_1=2.8$ and $p_2=3.6$. As can be seen in Fig.~\ref{2006core}, the model is not consistent with the {\it Suzaku} spectrum, which dictates a steep power law in the X-ray regime. The {\it Suzaku} spectrum therefore implies that the hard X-ray data describe the tail of the synchrotron branch rather than the inverse Compton branch. \begin{figure} \includegraphics[width=9cm, keepaspectratio=true]{M872006knot_crop_ed.pdf} \caption{SED of the HST-1 knot of M~87 during the high-flux state in 2006, using a jet angle of $\theta=15^{\circ}$. The SSC fit is shown with the solid line. The synchrotron branch, dominant at low frequencies, and the inverse Compton contribution, dominant at high frequencies, are shown with dashed lines. The {\it Suzaku}/PIN data match the SED.} \label{2006knot} \end{figure} Since the jet knot HST-1 is known to flare, we also model it as a potential origin of the hard X-ray emission detected by {\it Suzaku} (see Fig.~\ref{2006knot}). A jet angle of $\theta=15^{\circ}$ is used and the fit has a reduced $\chi_\nu^2 = 1.9$ (2 d.o.f., Table~\ref{ssc_param}). Also in this case the maximum energy is fixed to \mbox{$E_{\rm max}=2.5\times10^{13} \rm \,eV$} to allow convergence. The bulk Lorentz factor, $\Gamma_b =1.2$, is lower than in the 2006 core fit and the average low-flux state. The size of the emitting region is also smaller, with $R=1.0 \times 10^{16} \rm \, cm$. The magnetic field strength increased to $B=6.4\times10^{-1} \rm \, G$. The minimum energy of the electrons, $E_{\rm min}= 6.0 \times 10^7 \rm \, eV$, is similar to that of the core fit, and the break energy decreased to $E_{\rm br}=1.0\times10^8\rm\,eV$. 
The power law indices that characterise the electron energy distribution are similar to the 2006 core fit with $p_1=2.5$ and $p_2=3.4$. As can be seen in Fig.~\ref{2006knot}, the {\it Suzaku} data points lie on the inverse Compton branch, rather than on the synchrotron branch, consistent with the hard X-ray spectrum. \section{Discussion} In the following we first discuss the possible origin site of the emission seen in {\it INTEGRAL}/JEM-X, before we turn to the hard X-ray variability and the spectral energy distribution. Finally, we discuss the possible source type at the origin of the high-energy emission in M~87. \subsection{JEM-X: core emission} Due to the angular resolution of JEM-X (FWHM = 3 arcmin), the flux of M~87 observed by JEM-X is the sum of core, jet and extended diffuse emission. Here we are interested only in the core emission, because we want to investigate the origins of the high-energy emission, which is emitted from, or close to, the core. In addition, the measured JEM-X flux is an average over several years. The jet knot HST-1 is known to be variable \citep{Harris2009}, and was in outburst between 2003 and 2007, with the luminosity of the knot peaking in 2005. For a light curve of the X-ray emission of HST-1 see Fig.~1 in \cite{Abramowski2012}. Since the JEM-X observations we used are taken mostly after 2008, and none during 2005, we assume that we did not include data during which the knot was in outburst. Using literature values from observations of the knot HST-1 made with {\it Chandra}/ACIS by \citet{Perlman2005} and by \citet{Wilson2002} for other jet knots, we find that the combined jet flux is about \mbox{$f(3-10 \rm \, keV) = 10^{-12} \rm \, erg \, cm^{-2} \, s^{-1}$}. In quiescence the jet emission is dominated by the jet knots HST-1, A, and D. For the extended emission we used observations from {\it Chandra}/ACIS, taken at the beginning of May 2005.
We analysed the data, with a total exposure of 123 ks, and extracted a spectrum in a circular region with a radius of 2.5 arcminutes centred around the core of M~87. The core and jet were excluded in the analysis. This spectrum yielded a flux of $f(3-10 \rm \, keV) = 9\times 10^{-12} \rm \, erg \, cm^{-2} \, s^{-1}$. This implies that the core emission is about $f= 6 \times 10^{-12} \rm \, erg \, cm^{-2} \, s^{-1}$, with a variability by a factor of two in X-rays \citep{Hilburn2012}. The spectrum extracted from the 2006 {\it Suzaku}/XIS observation also showed a thermal and non-thermal component due to the extraction radius covering both the core and surrounding medium. Since a spectrum is available, disentangling these components is a more straightforward task and we find a 3--10 keV flux of \mbox{$f= 2 \times 10 ^{-11}\rm \, erg \, cm^{-2} \, s^{-1}$} for the non-thermal component, which is almost four times the JEM-X core flux. The strongly increased flux derived from the XIS data implies a flare when compared to the average flux. \subsection{Hard X-ray variability} We found an overall 3$\sigma$ upper limit to the M~87 flux of $f(20 - 60 \rm \, keV) < 3 \times 10^{-12} \rm \, erg \, cm^{-2} \, s^{-1}$ using all available {\it INTEGRAL} data for a total effective exposure time of 1.7~Ms. Apart from a likely spurious detection by \mbox{{\it INTEGRAL}} at 5.1$\sigma$, corresponding to a flux of \mbox{$f(20-60 \rm \, keV) = (9 \pm 2) \times 10^{-12} \rm \, erg \, cm^{-2} \, s^{-1}$} \citep{Walter2008}, M~87 had not previously been detected in hard X-rays. Using the same data set as \citet{Walter2008} but applying the latest analysis software OSA~10 we derived a $3\sigma$ upper limit flux of $f(20-60 \rm \, keV) < 8 \times 10^{-12} \rm \, erg \, cm^{-2} \, s^{-1}$, inconsistent with the earlier detection claim. This does not mean, however, that the hard X-ray flux of M~87 never exceeds this average value. 
We found that, during a {\it Suzaku}/PIN observation at the end of November 2006, M~87 was detected with a flux of \mbox{$f(20-60 \rm \, keV) = 10^{-11}\rm \, erg \, cm^{-2} \, s^{-1} $}. There is no \mbox{{\it INTEGRAL}} IBIS/ISGRI observation at this time to confirm the detection, but in July 2006 {\it INTEGRAL} observed M~87 for 23 ks, yielding a $3\sigma$ upper limit of $f (20 - 60 \rm \, keV) < 3\times 10^{-11}$ $\rm \, erg \, cm^{-2} \, s^{-1}$, consistent with the flux measured by {\it Suzaku}/PIN. \subsection{Spectral energy distribution} To characterise the broad-band emission of M~87 we have produced three SEDs; one representing the average low-flux state and two representing the increased X-ray emission as observed by {\it Suzaku} at the end of 2006. The data used in the average low-flux SED have been taken at different times, which might result in a biased measurement in case of variability since several spectral states might contribute to the data. To investigate whether there is contamination of different spectral states in the M~87 SED, we consult light curves of M~87 from 2001 to 2011 based on radio, optical, X-rays and VHE observations \citep{Abramowski2012}. The VHE band shows increased activity in 2005, 2008 and 2010, during which the X-ray emission also increased by a factor of 3. During the VHE flare in 2008 the radio emission from the jet base increased as well, indicating the VHE emission is likely produced near the SMBH. However, during the VHE flare in 2010, no enhanced radio emission was observed. The VHE observation used in the average low-flux SED was taken in 2004 when M~87 was in a low state, so there is no contamination from any of the observed VHE flares. The radio and optical detections of the core used in the SED are averaged over several years, and the light curves show variations by a factor of $\sim 2$. Variability at this level will not significantly alter the physical parameters derived from modelling the SED.
A similar approach of using time-averaged SEDs has been applied already successfully to M~87 \citep{Abdo2009} and other radio galaxies, for example Pictor~A \citep{Brown2012} and 3C~111 \citep{deJong2012}. The high flux detected by {\it Suzaku}/PIN indicates a flaring state of the source. The phenomenology of the flaring activity of M~87 hints at a behaviour similar to that observed in blazars, and in particular in BL~Lac objects. During a flare an increase of the peak energies of the SED correlates with an increase in the corresponding peak fluxes \citep[see for example Mrk~501,][]{Tavecchio2001}. In 2005, the increase in VHE, X-ray and optical emission coincided with the expulsion of the jet knot HST-1 from the core. During the {\it Suzaku} observations at the end of 2006 the light curves presented in \citet{Abramowski2012} do not show increased activity in the VHE or optical band. Even though the simultaneous observations are sparse, based on the 2006 SED modelling the HST-1 knot is the more likely candidate for the hard X-ray emission. The SED models show that the hard X-ray spectrum cannot be reconciled with the simultaneous core observations. If the hard X-ray band were on the tail of the synchrotron branch for the 2006 core observations, this would indicate a synchrotron peak frequency shift from $\sim 10^{12}\rm\,Hz$ for the low-flux average state to $>10^{16}\rm\, Hz$ for the flaring state. A peak frequency shift this large has not been observed even in the most extreme blazar flares. {\it Chandra} had also observed M~87 on 29 November 2006, but due to pile-up no useful spectra could be extracted during this time \citep{Harris2009}. Since {\it Chandra}/ACIS has a high angular resolution of 0.5'', it is possible to monitor the core and jet knot separately. Because of the pile-up during the observation, the intensity is expressed in detector-based units of $\, \rm keV\,s^{-1}$ where the entire energy band from 0.2--17 keV is integrated.
For the core an intensity of $0.51\pm0.02 \, \rm keV\,s^{-1}$ was measured, whereas the HST-1 knot has an intensity of $4.03\pm0.04 \, \rm keV\,s^{-1}$. This supports our conclusion that the X-ray flare detected by {\it Suzaku} originates in the HST-1 knot rather than the core. Since similar SED modelling has been applied to other $\gamma$-ray bright radio galaxies, comparing the physical parameters derived from these models with the average fit of M~87 will help us put the results in a broader framework. Due to the degeneracy of several SED parameters, we can qualitatively compare the bolometric luminosity with the magnetic field $B$, the radius of the emitting region $R$ and the Doppler factor $\delta$, as these parameters influence the overall SED power. The FR-I radio galaxy Cen~A has also been observed in the gamma-ray band, and similar to M~87 the overall SED can also be modelled by a simple SSC process \citep{Abdo2010CenAcore}, although a strong Seyfert contribution is visible in the X-ray and optical domain \citep{Beckmann2013}. \citet{Abdo2010CenAcore} present several SED fits to the core of Cen~A. Comparing the results for M~87 (jet angle $\theta=15\,^{\circ}$) to the model for Cen~A (jet angle $\theta=30\,^{\circ}$), M~87 displays a higher Doppler factor of $\delta=3.9$ than the one used for Cen~A ($\delta = 1$), owing to the smaller jet angle $\theta$. A lower Doppler factor causes the emission to appear less boosted. The magnetic field $B$ and radius $R$ of the emitting region are quite different for the two sources: for Cen~A a value of $B = 6.2 \rm \, G$ was found for the magnetic field, and a much lower value of $B=2.0\times 10^{-3}\rm \, G$ for M~87. Since the synchrotron emission depends on $B$, a stronger magnetic field will result in a higher synchrotron flux. The radius of the emitting region is reported to be $R=5.6 \times 10^{17}\rm \, cm$ for M~87 and $R=3 \times 10^{15} \rm \, cm$ for Cen~A \citep{Abdo2010CenAcore}.
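The Doppler factors compared above follow from the standard relativistic beaming relation $\delta = [\Gamma_b(1-\beta\cos\theta)]^{-1}$; as a quick consistency check (a sketch only, assuming this standard relation together with the fitted values $\Gamma_b = 3.6$ and $\theta = 15^{\circ}$ quoted in the text):

```python
import math

def doppler_factor(gamma_b, theta_deg):
    """Standard beaming relation: delta = 1 / (Gamma_b * (1 - beta*cos(theta)))."""
    beta = math.sqrt(1.0 - 1.0 / gamma_b**2)   # bulk velocity in units of c
    theta = math.radians(theta_deg)            # jet viewing angle
    return 1.0 / (gamma_b * (1.0 - beta * math.cos(theta)))

# Values from the M~87 fits discussed in the text (Gamma_b = 3.6, theta = 15 deg)
delta_m87 = doppler_factor(3.6, 15.0)   # ~3.9, matching the quoted Doppler factor
```

For $\Gamma_b = 3.6$ and $\theta = 15^{\circ}$ this yields $\delta \approx 3.9$, in line with the value quoted for M~87.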
The radius $R$ determines the number of radiating particles; therefore a larger emitting region results in a higher flux. The lower magnetic field strength $B$ used to model M~87, combined with the larger radius $R$ compared to Cen~A, then results in a similar overall power in the SED, consistent with the comparable bolometric luminosity of about $L_{\rm bol}\sim10^{42}\rm\, erg \,s^{-1}$ of the two sources. Comparing the SED parameters with those of the luminous ($L_{\rm bol}=5\times10^{44}\rm\, erg \,s^{-1}$) FR-II galaxy 3C~111 shows the differences between these two types of sources. A larger Doppler factor $\delta=14$, a smaller emitting region \mbox{($R=2\times10^{16}\rm\,cm$)} and a larger magnetic field ($B= 0.04\rm\,G$) are used to model 3C~111 \citep{deJong2012}. The smaller volume of the emitting region used to model 3C~111 compared to M~87 yields a lower flux, but the strong Doppler factor $\delta$ and larger magnetic field of 3C~111 increase the flux strongly, consistent with the larger bolometric luminosity of this source compared to M~87. As FR-I radio galaxies like M~87 are the parent population of BL~Lacs, comparing the resulting SED of M~87 with the BL~Lac object Mrk~421 will illustrate the differences between these two source classes. Using a one-zone SSC model, \citet{Blazejowski2005} found that Mrk~421 ($L_{\rm bol}=3\times10^{45}\rm\, erg \,s^{-1}$) requires a beaming factor of $\delta=10$, an emitting region with a radius of $R=7.0\times10^{15}\rm\,cm$ and a magnetic field of $B = 0.405\rm\, G$. The stronger beaming, due to the small jet angle of Mrk~421, boosts the intrinsic emission strongly. While the emitting region is smaller compared to that used to model M~87, the magnetic field is much stronger, in combination increasing the flux. \subsection{M~87, a radio galaxy with a low luminosity BL Lac core} In the overall context of SED models of $\gamma$-ray bright sources, we can compare the derived values with the average ones for {\it Fermi}/LAT detected BL~Lacs and FSRQs.
As the FR-I radio galaxies can be understood as the parent population of the BL Lacs (e.g. \citealt{Urry1995}), their SED parameters are also closer to the ones found for BL~Lacs than for the brighter FSRQ class. As pointed out by \cite{Ghisellini2010}, the BL~Lacs on average appear to have similar masses as the FSRQs, with values around $10^8 - 10^9 \, \rm M_\odot$. The SMBH mass in M~87 exceeds this average value, and the bolometric luminosity $L_{\rm bol} \simeq 10^{41}- 10^{42} \rm \, erg \, s^{-1}$ results in an Eddington ratio of only $\lambda = L_{\rm bol}/L_{\rm Edd} \simeq 10^{-3}- 10^{-2}$ \citep{Owen2000,DiMatteo2003}. Nevertheless, a jet model similar to BL~Lacs can be applied to the M~87 SED. Only 15 radio galaxies and a few misaligned blazars have been detected with {\it Fermi}/LAT, compared to more than 1500 observed blazars \citep{thirdagncat}. In the case of blazars the $\gamma$-ray emission is postulated to originate in the relativistic jet, which is pointed towards the observer. For M~87 the observed jet angle is $\theta=15\,^{\circ}$, causing the emission to be much less boosted than in blazars, as shown by the modest Doppler factor ($\delta \sim 3.9$) of the emission. Even though the origin of the $\gamma$-ray emission is not completely clear, the modelling with a SSC model has shown that the high-energy emission is likely to arise in the jet. BL~Lac sources are thought to have an advection dominated accretion flow (ADAF; \citealt{Reynolds1996,DiMatteo2003}) rather than an accretion disk, which is present in FSRQs. Due to the ADAF, the accretion is less efficient, and as such causes the source to be less luminous compared to FSRQs. Since in the unified scheme FR-I radio galaxies, such as M~87, are considered to be misaligned BL Lacs, FR-I galaxies are thought to be powered by an ADAF as well. In the case of M~87 the broad-band emission is shown to be jet-dominated. The properties of M~87's central engine are closer to that of BL~Lacs than to FSRQs.
The strength of the magnetic field, the bulk Lorentz factor of the jet and also the electrons' Lorentz factor distribution indicate rather a BL~Lac type emission. Also the overall bolometric luminosity of \mbox{$L_{\rm bol} \simeq 10^{41} - 10^{42} \rm \, erg \, s^{-1}$} \citep{Owen2000, DiMatteo2003}, giving an Eddington ratio of only $\lambda \simeq 10^{-3} - 10^{-2}$, points in this direction. The observed physical properties of this AGN put it in a low-luminosity BL~Lac class. The low power jet can be explained by the low accretion rate, rather than by the large mass, although the finding by \citet{Laor2000} that AGN with black hole masses larger than $M_{\rm BH} > 10^9 \rm \, M_\odot$ are radio-loud seems to hold also in the case of M~87. 3C~120, another FR-I radio galaxy, was not included in the first, second or third {\it Fermi} catalogue, but using a 15-month data set \citet{Abdo2010misaligned} derived a significance of $\sim 5.6\sigma$ between 0.1--100 GeV for this source. This source undergoes a series of flares with a low long-term average flux \citep[e.g.][]{Sahakyan2014}. 3C~120 has been monitored in the radio, UV and X-ray band \citep{Lohfink2013}. From the X-ray observations by {\it Suzaku} and {\it XMM-Newton} a 0.4--10 keV spectrum is derived, where a fit with a composite jet+accretion disk model is favoured. The broad-band SED of this source has shown that the jet dominates the radio and $\gamma$-ray emission, and contributes only $\sim$10\% in the optical, UV and X-ray bands \citep{Kataoka2011}. The $\gamma$-ray spectrum is soft, with $\Gamma_{\rm \gamma}\sim2.7$, and the source is not detected in the TeV band, implying that the inverse Compton peak of this source is located in the X-ray to MeV band. In the case of 3C~120, as the $\gamma$-ray emission is variable on the time scale of months,
\citet{Sahakyan2014} argue that the emission comes from a compact, relativistically moving emission region, but exclude the jet knots as the origin as these are deemed too large. Only a few of the $\gamma$-ray detected radio galaxies have been observed in VHE. Recently the MAGIC telescopes have detected another radio galaxy, the FR-I IC~310, at energies $E>300\rm\,GeV$ \citep{IC310}. The SED of this source shows an inverse Compton peak in the TeV range. Combined with the low luminosity of this source, it was argued that IC~310 is an extreme case within the blazar sequence, with an extremely low accretion rate. Another possibility is that IC~310 is rather a misaligned version of an extreme BL~Lac, where the low luminosity is connected to the large viewing angle. This does, however, not explain the observed TeV variability. M~87 is also observed in the VHE band, and shows an inverse Compton peak in the hard X-ray/soft $\gamma$-ray band. With its low bolometric luminosity, M~87 is closer to IC~310 than to 3C~120, since both IC~310 and M~87 have been detected firmly in both the $\gamma$-ray and TeV band. The low Eddington ratio can be understood considering that the M~87 core is likely not fed by an accretion disk but by a radiatively inefficient accretion flow (RIAF) or advection dominated accretion flow (ADAF; \citealt{Reynolds1996,DiMatteo2003}). The assumption that the accretion is radiatively inefficient also explains why in the case of M~87 we do not see a significant thermal inverse Compton component in the X-rays, and why the optical spectrum is consistent with that of a LINER. This is different from other $\gamma$-ray bright radio galaxies, which show an optical Seyfert core, such as Cen~A \citep{Beckmann2011} and 3C~111 \citep{deJong2012}. \section{Conclusion} We report, for the first time, a hard X-ray detection of the FR-I radio galaxy M~87 using {\it Suzaku}/PIN data.
The observations were made between November 29 and December 2, 2006 with an elapsed time of 187 ks, resulting in a flux of $f = 10^{-11}\rm \, erg \, cm^{-2} \, s^{-1} $ between 20 and 60 keV. In addition, we derive a 3$\sigma$ upper limit of \mbox{$f(20-60 \rm \, keV) < 3\times 10^{-12} \rm \, erg \, cm^{-2} \, s^{-1}$} for the multi-year time averaged emission, based on 1.7 Ms of {\it INTEGRAL} IBIS/ISGRI data. By modelling the broad-band energy distribution with a one-zone SSC model we connect the average hard X-ray upper limit and the $\gamma$-ray emission to the core emission. The SED parameters show that M~87 can be considered to be a weak BL~Lac object, consistent with the advection dominated accretion flow model for the core \citep{Ptak1998,Reynolds1996} and the overall low-luminous FR-I nature of this galaxy. The high X-ray flux detected with {\it Suzaku} at the end of 2006 seems to indicate the source was undergoing an outburst or flaring episode. The steep slope of the spectrum, with a power law index of \mbox{$\Gamma=2.8^{+0.5}_{-0.4}$} between 20--60 keV, indicates that the emission was likely the high-energy tail of the synchrotron branch. Using simultaneous observations we created an SED for both the core and the jet knot HST-1, which is known to flare in other wavebands such as the radio and optical. Due to the non-imaging character of the {\it Suzaku}/PIN observations, we are not able to determine whether the enhanced emission originates in the core of M~87 or in the jet based on the X-ray observations alone, but from the SED modelling we conclude that the jet knot is the more likely candidate for the hard X-ray emission detected in 2006. In the unification model for AGN, radio galaxies are the counterparts of blazars, where the lower luminosity BL~Lacs are linked to FR-I galaxies and the more powerful flat spectrum radio quasars (FSRQ) to the FR-II sources.
One way to account for the differences in luminosity and Compton dominance between the two source classes is to model the SED of BL Lacs with a simple SSC model and the FSRQ with a more complex model, for example using an external Compton component in addition to the SSC \citep{Ghisellini1998}. The simple SSC model for the overall SED appears to be valid for this class of {\it Fermi}/LAT detected radio galaxies. Also in the cases of the brighter FR-II objects, like 3C~111, no external Compton component seems to be necessary to represent the SED, which is not in line with the unification model. Thus, in all these cases the dominating emission region is either far from a strong field of external seed photons, like the broad line region (as in the case of 3C~111), or the broad line region itself is weak because of radiatively inefficient accretion, as might be the case in M~87. The hypothesis that an external Compton component is not significant in $\gamma$-ray detected radio galaxies should be tested in the case of the {\it Fermi}/LAT detected steep spectrum radio quasar 3C~207.0, which hosts a Seyfert 1.2 core. At a redshift of $z =0.68$ this object has a luminosity of $L(2-10 \rm \, keV) = 2.3 \times 10^{45} \rm \, erg \, s^{-1}$, and the strong Seyfert core that displays an iron K$\alpha$ line with $EW \simeq 60 \rm \, eV$ should give rise to a significant photon field able to provide ample seed photons for inverse Compton processes in this case. \section*{Acknowledgments} The authors thank Juan Antonio Zurita Heras, Fabio Mattana and Volodymyr Savchenko for their support in the {\it INTEGRAL} analysis and Katja Pottschmidt for her advice on the {\it Suzaku}/XIS data analysis. We also thank the referee Eric Perlman for the fruitful discussion and the advice that helped us to improve the manuscript.
This research is based on data provided by {\it INTEGRAL}, an ESA project funded by ESA member states (especially the PI countries: Denmark, France, Germany, Italy, Spain, Switzerland), Czech Republic, Poland, and with the participation of Russia and the USA. This research has also used data obtained from the Suzaku satellite, a collaborative mission between the space agencies of Japan (JAXA) and the USA (NASA). This research has made use of NASA's Astrophysics Data System Bibliographic Services. We acknowledge the financial support from the UnivEarthS Labex program of Sorbonne Paris Cit\'e (ANR-10-LABX-0023 and ANR-11-IDEX-0005-02) within the project ``Impact of black holes on their environment''. S.S. acknowledges the Centre National d'\'{E}tudes Spatiales (CNES) for financial support. \bibliographystyle{mn2e_fix}
\section{Introduction} Newton's gravitational constant, $G$, is one of a handful of universal constants that underpin our understanding of fundamental physical processes \cite{Mohr2012} and plays an essential role in our understanding of gravitation, whether previously in Newton's attractive gravitational force between two massive bodies $m_1,m_2$ of magnitude \cite{Newton1687} \begin{equation} F = \frac{Gm_1 m_2}{r^2}, \end{equation} where $r$ is their separation distance, or currently as the proportionality constant in the interaction between energy-momentum content $T_{ab}$ (the stress-energy tensor) and space-time curvature $G_{ab}$ (Einstein tensor) in Einstein's general relativity \cite{Einstein1916, Wald1984} \begin{equation} G_{ab} = R_{ab} - \frac{1}{2}g_{ab}R= 8\pi G T_{ab}, \end{equation} in units where the local speed of light in vacuum $c=1$. Yet, experimental determination of Newton's gravitational constant remains a challenging endeavor. As reviewed in \cite{Speake2014}, several measurements over the last thirty years appear to give inconsistent values for $G$, which is of course an issue for our understanding of this universal constant. Our purpose with this letter is to inform the reader of a one-to-one correlation between an apparent temporal periodicity in measurements of $G$, generally thought to result from inconsistency in measurements, and recently reported oscillatory variations in measurements of LOD \cite{Holme2013}. LOD refers to the excess of the duration of the day (observed period of rotation of the Earth) relative to a standard unit and is calculated by taking the difference between atomic time (TAI) and universal time (UT1) divided by the aforementioned standard unit of $86400$~SI s \cite{Ray1996}. Variations in LOD can be used to determine changes in the Earth's rotation rate, effectively providing a means to examine geophysical and atmospheric processes \cite{Peltier2007}.
For the following discussion, we emphasize that our $G$ analysis and LOD analysis (a verification of the procedures employed in \cite{Holme2013}) are very much independent of one another with the determined fitting parameters for both the period and phase of the periodicities in these measurements coinciding in near perfect agreement. Although we recognize that the one-to-one correlation between the fit to the $G$ measurements and the LOD periodicity of 5.9 years could be fortuitous, we think this is unlikely, given the striking agreement shown in Fig.~\ref{PlotL1}. Furthermore, after taking into account this fitted oscillatory trend in the $G$ measurements, we obtain agreement amongst the different experiments mentioned in \cite{Speake2014} with a weighted mean value for $G$ of $( 6.673899 \pm 0.000069 ) \times 10^{-11}$~m$^3$~kg$^{-1}$~s$^{-2}$. \begin{figure} \includegraphics[width=8.0cm]{PlotL1.eps} \caption{Result of the comparison of the CODATA set of $G$ measurements with a fitted sine wave (solid curve) and the 5.9 year oscillation in LOD daily measurements (dashed curve), scaled in amplitude to match the fitted $G$ sine wave. The acronyms for the measurements follow the convention used by CODATA, with the inclusion of a relatively new BIPM result from Quinn {\it et al.}~\cite{Quinn2013} and another measurement LENS-14 from the MAGIA collaboration \cite{Rosi2014} that uses a new technique of laser-cooled atoms and quantum interferometry, rather than the macroscopic masses of all the other experiments. The green filled circle represents the weighted mean of the included measurements, along with its one-sigma error bar, determined by minimizing the L1 norm for all 13 points and taking into account the periodic variation.} \label{PlotL1} \end{figure} \section{Methods} In the July 2014 issue of Physics Today, Speake and Quinn \cite{Speake2014} lay out the problem and review the history of seemingly inconsistent measurements of the gravitational constant $G$. 
They plot twelve $G$ determinations, along with one-sigma error bars, extending from an experiment by Luther and Towler at the National Bureau of Standards (NBS) in 1982 \cite{Luther1982} to their own at BIPM in 2001 and 2007 (the latter of which was published in 2013) \cite{Quinn2001, Quinn2013}, two measurements in good agreement with each other, but not with the other 10 measurements. Though the vertical scale of years when the measurements were made is not linear, there is a striking appearance of a periodicity running through these values, characterized by a linear drift which suddenly reverses direction and then repeats more than once. With this pattern in mind, we compute a periodogram for the measured $G$ values versus estimated dates of when the experiments were run. A single clear period of 5.9 years emerges. The data for our $G$ analysis were obtained directly from Table XVII in the 2010 CODATA report published in 2012 \cite{Mohr2012}. There are 11 classical measurements made at the macroscopic level. To those we added two more recent data points, another macroscopic measurement, which we label BIPM-13, and the first ever quantum measurement with cold atoms, labeled LENS-14. Next we used our best estimates of when the experiments were run, not the publication dates, for purposes of generating a measured $G$ value versus date data file, with one-sigma errors included too. These dates were obtained from the respective articles. This gives us the best data set possible, defined by the measured $G$ values used for the CODATA recommendation plus two more published after 2012. We fit with the raw standard errors, $\sigma_i$, provided with each of the $G$ measurements and used a numerical minimization of the L1 and L2 norms of the weighted residuals, $r_i/\sigma_i$, where the residuals are about a fitting model of a single sine wave, $a_0 + a_1\cos{\omega t}+b_1\sin{\omega t}$, four parameters in all with 13 measurements. 
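The fitting procedure described above can be sketched numerically. The following is an illustrative sketch, not the authors' actual code: it generates synthetic "measurements" from the stated model $a_0 + a_1\cos{\omega t}+b_1\sin{\omega t}$ at a fixed 5.9 yr period (the real analysis also fits $\omega$) and recovers the linear parameters by weighted least squares, i.e. the L2 case; the L1 fit would instead minimize $\sum_i |r_i|/\sigma_i$ numerically. All amplitudes and errors are invented for the demonstration, in units of $10^{-11}$~m$^3$~kg$^{-1}$~s$^{-2}$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "G measurements": the model a0 + a1*cos(w t) + b1*sin(w t)
# at a fixed 5.9 yr period, sampled at 13 irregular epochs with errors.
# (Values in units of 1e-11 m^3 kg^-1 s^-2; amplitudes are invented.)
t = np.sort(rng.uniform(1981.0, 2014.0, 13))      # observation epochs (yr)
w = 2.0 * np.pi / 5.9                             # fixed angular frequency
a0_true, a1_true, b1_true = 6.6739, 1.2e-3, 0.9e-3
sigma = np.full_like(t, 0.5e-3)                   # one-sigma errors
y = (a0_true + a1_true * np.cos(w * t) + b1_true * np.sin(w * t)
     + rng.normal(0.0, sigma))

# Weighted least squares: minimizing the L2 norm of r_i/sigma_i is done by
# scaling each row of the design matrix and the data vector by 1/sigma_i.
A = np.column_stack([np.ones_like(t), np.cos(w * t), np.sin(w * t)])
coef, *_ = np.linalg.lstsq(A / sigma[:, None], y / sigma, rcond=None)
a0, a1, b1 = coef
amplitude = np.hypot(a1, b1)   # amplitude of the fitted periodic term
```

With the mean and amplitude recovered, the weighted mean quoted in the text corresponds to $a_0$ after the periodic variation is taken into account.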
Results for the fit to the 13 measured $G$ values are summarized in Fig.~\ref{PlotL1}. The L2 minimization is equivalent to a weighted least squares fit, yet the L1 minimization (solid line in Fig.~\ref{PlotL1}) is a more robust estimator that discriminates against outliers. Both yield excellent fits with a suggestion that two measurements at Moscow \cite{Karagioz1999} and from the MAGIA collaboration \cite{Rosi2014} are outliers. However, the Moscow value is known to suffer from an unexplained temporal drift \cite{Karagioz1999} and the cold-atom value could be fundamentally different ($G$ at the quantum level). Still, we refrain from speculating further on the cold-atom outlier until more microscopic measurements of $G$ are obtained by different experimental groups. The other 11 measurements are consistent with the L1 fitting curve at the one-sigma level or better. Figure~\ref{PlotL1} appears to provide convincing evidence that there exists a 5.9 year periodicity in the macroscopic determinations of $G$ in the laboratory with variations at the level of $\Delta G/G \sim 2.4 \times 10^{-4}$ about a mean value of $6.673899 \times 10^{-11}$~m$^3$~kg$^{-1}$~s$^{-2}$, close to the value recommended by CODATA in 2010 \cite{Mohr2012} but with a much smaller standard error of 10.3 ppm instead of the CODATA recommended error of 120 ppm. The most accurate determination by the Washington group \cite{Gundlach2000} with a standard error of 14 ppm now falls squarely on the fitting curve. Because the two BIPM measurements were made at the peak of the fitting curve, they now not only agree, but they are consistent with all other measurements. Notably, the measurement with a simple pendulum gravity gradiometer at JILA is no longer biased to an unacceptably small value; like the BIPM measurements, it falls right on the fitting curve, though at the minimum of the sine wave. The Huazhong measurement is also at the minimum of the curve.
\section{Results} With the 5.9 year periodicity in the $G$ measurements accepted, the question arises as to what could be the cause and what does it mean. The only thing we can think of is a correlation with a 5.9 year periodicity in the Earth's LOD, published by Holme and de Viron last year \cite{Holme2013}. The International Earth Rotation and Reference Systems Service (IERS), established in 1987, maintains downloadable data files containing daily values of several parameters related to Earth orientation and rotation. The files extend from 1962 January 01, when the Consultative Committee on International Radio (CCIR) established Universal Time Coordinated (UTC) as the standard for time keeping, to the most current date available. We extract two rotation files: the first contains the difference UT1-UTC in seconds, and the second the LOD, also expressed in seconds, along with daily estimates of standard errors for both. There is also a piecewise constant file in integer seconds for the standard of atomic time TAI minus UTC. By differencing these two files the phase of the Earth rotation is obtained as measured against a uniform atomic time. This difference can be thought of as a continuous phase function $\phi (t)$ in radians sampled once per day at the beginning of the day. It can be expressed in SI seconds, the units on the IERS files, by multiplying by the conversion factor $86400/2 \pi $. It essentially provides the time gained or lost over the years by a poor mechanical clock, the Earth, which runs slow with a loss of about 33 s over the 52 years of the downloaded file. Because of its name and units of seconds only, the second file, LOD, is more difficult to interpret. It is also the gain or loss of time by the Earth, but only over the current day, and because of definitions there is a reversal in sign.
When expressed as a continuous function of the Earth's rotational frequency $\nu ( t )$, it is simply $\nu_0 - \dot{\phi} / 2 \pi$, where $ \nu_0$ is an adopted frequency of rotation with sidereal period of 86164.098903697 s. The quantity $\dot{\phi}/ ( 2 \pi \nu_0 )$ is small and can be taken to the first order in all calculations. Formally, the spectral density of frequency is related to the spectral density of phase by $S_{\mathrm{LOD}} ( f) = ( 2 \pi f )^2 S_{\mathrm{UT1}} ( f )$, where $f$ is the Fourier frequency. However, a separate computation of the spectrum for each file shows that before 1994 either file can be used for analysis, but after the introduction of Global Positioning (GPS) data in 1993, the LOD data become more accurate by a factor of seven or more. This conclusion is consistent with the standard errors included with the data files of LOD and UT1-UTC. We show our estimate of the spectral density for the LOD data in Fig.~\ref{PlotLOD}, obtained by weighted least squares and SVD, but this time with 850 Fourier coefficients, 430 degrees of freedom, and 19169 observations. The spectral resolution is $0.019$~yr$^{-1}$, which we oversample by a factor of four, and the frequency cut off is 2 yr$^{-1}$, far short of the Nyquist frequency of 0.5 d$^{-1}$. A window function is not applied to the data. It introduces undesirable artifacts into the low-frequency noise spectrum of interest and does little to isolate spectral lines. The Gaussian window produces a hint of a line at 5.9 yr, but only a hint. We proceed to an analysis of the data in the time domain. \begin{figure} \includegraphics[width=8.0cm]{PlotLOD.eps} \caption{One-sided power spectral density per unit frequency for LOD data over the years 1962 to 2014. 
The white-noise floor is indicated by the horizontal solid line and corresponds to a standard deviation of 0.54~ms~d$^{-1}$, achieved by the introduction of GPS data in 1993 and consistent with the daily estimates of standard error archived with the LOD data. The upper dashed curve corresponds to the mean spectral density for the numerical time derivative of the UT1 data, dependent on VLBI data from radio sources on the sky. For the low end of the spectrum the LOD and UT1 data both indicate an $f^{-2}$ random walk, which with only 52 years of data can be confused with a drift in the Earth's rotation. At the high end, the underlying spectrum indicates white LOD noise, but with a rich spectrum from tidal torques and atmospheric loading at higher frequencies not plotted. Although there is power in this region, there is no suggestion of a single spectral line from the 5.9 year oscillation, a term which must be extracted by analysis in the time domain \cite{Holme2013}.} \label{PlotLOD} \end{figure} The 5.9 year periodicity in the LOD data is plotted by Holme and de Viron in Figure 2 of their paper \cite{Holme2013}. Their plot looks in phase with the fit to the 13 $G$ values, but in order to obtain an independent check on the reality of the signal, and for purposes of having a numerical sine wave extending into 2014, we first smooth the LOD data with a Gaussian filter with a radius of 600 days and a standard deviation of 200 days. As a result, the high-frequency noise at periods of one year and shorter is practically eliminated, with little effect on the low-frequency noise spectrum. Next we fit a cubic spline to the smoothed data, with the knots or segments for the cubic polynomials selected by eye such that the fitting curve is sufficiently smooth but has a negligible effect on the 5.9 year periodicity. The resulting LOD residuals are fit with a sine wave of fixed 5.9 year period, which is then subtracted from the smoothed data.
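A fixed-period sine fit of this kind reduces to linear least squares on sine and cosine components. A minimal sketch (synthetic data built from the amplitude and noise values quoted in the text, not the actual IERS series):

```python
import numpy as np

# Sketch of the fixed-period sine fit: synthetic smoothed-LOD residuals
# containing a 5.9 yr sinusoid of amplitude 92.64 microseconds/d plus
# noise at the 4.8 microseconds/d residual level quoted in the text.
P = 5.9                                   # fixed period, years
t = np.linspace(1962.0, 2014.0, 2000)     # time grid, years
rng = np.random.default_rng(0)

true_amp = 92.64e-6                       # s/d
data = true_amp * np.sin(2 * np.pi * t / P + 1.0) \
       + 4.8e-6 * rng.standard_normal(t.size)

# Linear model a*sin + b*cos + const, solved by least squares
A = np.column_stack([np.sin(2 * np.pi * t / P),
                     np.cos(2 * np.pi * t / P),
                     np.ones_like(t)])
coef, *_ = np.linalg.lstsq(A, data, rcond=None)
amp = np.hypot(coef[0], coef[1])          # recovered amplitude, s/d
print(amp)                                # close to true_amp
```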
The same procedure is applied to the new smoothed data and repeated four times, with the knots for the spline at closer spacing with each iteration. The final result is the pure sine wave plotted as a dashed curve in Fig.~\ref{PlotL1}. It agrees with the periodic signal found by Holme and de Viron. Removing the fitted spline representation of the random walk, along with the sine wave, from the smoothed data reduces the LOD residuals about the fit to a one-sigma noise level of $4.8$~$\mu$s~d$^{-1}$. The amplitude of the fitted periodic signal is $92.64\pm 0.18$~$\mu$s~d$^{-1}$, reduced from the amplitude of $150$~$\mu$s~d$^{-1}$ \cite{Holme2013} by the Gaussian smoothing, but with a well-determined period of $5.90076 \pm 0.00074$~yr. With 99\% confidence the period lies between 5.898 and 5.903~yr. The phasing of the sine wave is as shown in Fig.~\ref{PlotL1}, with a standard error of 0.25~yr. \begin{figure} \includegraphics[width=8.0cm]{Plotssn.eps} \caption{Result of the comparison of our $G$ data set with the monthly mean of the total sunspot number, appropriately scaled. The black curves represent solar activity as reflected in the international sunspot number.} \label{Plotssn} \end{figure} The correlation between LOD and $G$ measurements in Fig.~\ref{PlotL1} is most likely of terrestrial origin, but the period of 5.9 years is also close to one-half the principal period of solar activity. References \cite{Djurovic1996} and \cite{Rio2003} discuss in greater detail why a possible correlation between solar activity and LOD measurements is not unexpected. Solar activity has an effect on mass distribution in the atmosphere, which ultimately affects the Earth's axial moment of inertia. It is plausible that this effect occurs at longer periods in the 5.9-year range, as well as at much shorter periods, on the order of days, for which models exist \cite{Holme2013}.
Consequently, we plot in Fig.~\ref{Plotssn} the monthly mean of the total sunspot number and also a 13-month smoothing curve, both shown in black. The two curves, again scaled to the magnitude of the $G$ data, are taken directly from freely available downloads of data archived at www.sidc.be by WDC-SILSO, Royal Observatory of Belgium, Brussels. The smoothing is done by a standard tapered-boxcar approach, and the smoothed number is generally regarded as a good measure of solar activity. Although the $G$ measurements show a general agreement with solar cycle 23, which peaked around 2002, the long and unexpected minimum that followed, and lasted until about 2010, is at odds with the rise in $G$ values during that minimum. There is also a negative correlation between the measurement from 1982 at the National Bureau of Standards, labeled NIST-82, and the sunspot number. It seems that solar activity can be disregarded as a cause of the variations in $G$ measurements. \section{Conclusions} Over the relatively short time span of 34 years considered here, variations in the rotation of the Earth can be considered either a random walk or possibly a drift. Over much longer time scales the rotation must be slowing because of the transfer of spin angular momentum to orbital angular momentum caused by tidal friction of the Moon. Similarly, a real increase in $G$ should pull the Earth into a tighter ball with an increase in angular velocity and a shorter day due to conservation of angular momentum, contrary to the correlation shown in Fig.~\ref{PlotL1}. Thus, we do not expect that this behavior necessarily points to a real variation in $G$ but instead to some yet-to-be-determined mechanism affecting both measurements in a similar manner. Importantly, if the observed effect is connected with a centrifugal force acting on the experimental apparatus, changes in LOD are too small by a factor of about $10^5$ to explain the changes in $G$, for the following reason.
The Earth's angular velocity $\omega_E$ is by definition \begin{equation} \omega_E = \omega_0 ( 1 - \mathrm{LOD} ), \end{equation} where $\omega_0$ is an adopted sidereal frequency equal to $72921151.467064$ picoradians per second and the LOD is expressed as a fraction of the day (www.iers.org). The total centrifugal acceleration is given by \begin{equation} a_c = r_s \omega_0^2 \bigg[ 1 - 2 A \sin\bigg(\frac{2\pi}{P} (t-t_0)\bigg) \bigg], \end{equation} where $A$ is the fractional amplitude $0.000150/86400$ of the 5.9 year sinusoidal LOD variation and $r_s$ is the distance of the apparatus from the Earth's spin axis. The maximum fractional variation of the LOD term is $ 3.47 \times 10^{-9}$ of the steady-state acceleration, while $\Delta G/G$ is $2.4 \times 10^{-4}$, hence even the full effect of the acceleration with no experimental compensation changes $G$ by only $10^{-5}$ of the amplitude in Fig.~\ref{PlotL1}. Perhaps instead, the effect is connected with changing torques on the Earth's mantle due to changing motions in the core. Changes of circulation in the core must be accompanied by changes in its density distribution, causing variations in the gravitational acceleration $g$ in the laboratory. At least this mechanism links both LOD and gravitational changes to changes in the core, although we do not immediately see how either of these mechanisms could affect measurements of $G$ in the laboratory given the torsion balance schemes employed. The least likely explanation is a new-physics effect that could make a difference in the macroscopic and microscopic determinations of $G$. Perhaps a repetition of the single 2014 quantum measurement over the next decade or so can show consistency with a constant value, although if the variations in $G$ measurements are caused by an unknown inertial or frame effect, not by systematic experimental error, it likely applies at both the macroscopic and the microscopic levels.
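The factor-of-$10^5$ estimate above can be checked directly (a back-of-the-envelope sketch using only the numbers quoted in the text):

```python
# Fractional amplitude of the 5.9 yr LOD term (0.150 ms per 86400 s day)
A = 0.000150 / 86400
frac_centrifugal = 2 * A          # peak fractional change of the centrifugal term
dG_over_G = 2.4e-4                # fractional amplitude of the G variations

print(frac_centrifugal)               # ~3.47e-9
print(frac_centrifugal / dG_over_G)   # ~1.4e-5: the LOD effect is ~10^5 too small
```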
The gravitational parameter for the Sun, $GM_{\odot}$, is known to ten significant figures from orbital motions in the Solar System (ssd.jpl.nasa.gov/?constants). The universal constant $G$ does not vary at that scale, although Krasinsky and Brumberg \cite{Krasinsky2004,Anderson2010} report a detection of an unexplained secular increase in the astronomical unit (AU) over the years 1976 to 2008, which can be interpreted as an increase in $GM_{\odot}$ proportional to the cube of the AU. However, the effect on $G$, if real, is at the level of an increase of 3 parts in $10^{12}$ per year, undetectable with laboratory measurements of $G$. Nevertheless, the increase in $GM_{\odot}$ is not explainable as an increase of the solar mass by accretion, as opposed to the mass radiated away by solar luminosity \cite{Anderson2010}. Apparently, there does seem to be a secular or very-long-period (greater than 20000 years) $G$ variation in the Solar System, but of order $10^{-6}$ smaller than the variation shown in Fig.~\ref{PlotL1}. \acknowledgments
\section{INTRODUCTION} Consider a half-filled band of electrons that interact through a short-range potential $U$ on a lattice with bandwidth $W$. As one increases interactions, the ground state can undergo a transition from a Fermi liquid to a magnetically ordered state for $U$ less than $W$, but if there is enough frustration, quantum fluctuations may prohibit long-range order. In that case, upon increasing $U$ further there may be a Fermi liquid to insulator transition (a Mott transition) where the insulator, a spin liquid, does not exhibit long-range order. Spin liquids have been extensively searched for since Anderson's proposal in the context of high-temperature superconductors~\cite{Anderson:1987}. The pyrochlore spin ices with fractionalized excitations are the best candidates to date for spin liquid ground states in three dimensions~\cite{GingrasReview:2013}. Since quantum fluctuations are large in low dimension, two-dimensional lattices are especially good candidates for spin liquid ground states. There is experimental evidence for such a state of matter in layered organic materials of the BEDT family that form a highly frustrated triangular lattice~\cite{PowellMcKenzieReview:2011}. The first theoretical proposal for a spin liquid state, in the 1970s, was in fact for the triangular lattice~\cite{Anderson:1973}. Theoretically, evidence for a spin liquid state has also been found on the kagome lattice~\cite{yan_spin-liquid_2011}. The honeycomb lattice stands as a particularly interesting candidate for a spin liquid ground state because it has the smallest possible coordination for a two-dimensional lattice, leading to large quantum fluctuations. In addition, the Hubbard model on the honeycomb lattice may be relevant for a number of real systems, including graphene, carbon nanotubes, MgB$_2$, etc., as mentioned in Ref.~\onlinecite{paiva2005}.
Much recent work has focused on this model ever since very large scale quantum Monte Carlo simulations made the exciting prediction of a spin liquid over a small range of values of $U$, beyond which antiferromagnetism sets in.~\cite{meng2010} This claim has been confirmed by further numerical work~\cite{hohenadler_correlation_2011, hohenadler_erratum:_2012,hohenadler_quantum_2012, zheng_particle-hole_2011} but was later disputed by Sorella \emph{et al.}~\cite{sorella2012} using even larger lattices. Since methods based on dynamical mean-field theory (DMFT) and its extensions~\cite{Maier:2005,KotliarRMP:2006,LTP:2006}--so-called quantum cluster approaches--are particularly suited to find Mott transitions, they have been used to look for a spin liquid phase between the semimetal and the antiferromagnet. After early single-site DMFT studies~\cite{tran_finite-temperature_2009,jafari_dynamical_2009,ebrahimkhas_exact_2011}, quantum cluster calculations confirmed the existence of the intermediate spin liquid phase~\cite{wu_quantum_2012,budich_fluctuation-driven_2013,yu_mott_2011} or of a Mott transition~\cite{WuDirac:2010}. However, Hassan \emph{et al.}~\cite{hassan2013}, using the cluster dynamical impurity approximation (CDIA), found that the Mott transition necessary for a spin liquid ground state is in fact pre-empted by antiferromagnetic long-range order. Careful analysis~\cite{LiebschHoneycomb:2011,LiebschWu:2013} of the influence of the cluster shape and of the various implementations of cluster extensions of dynamical mean-field theory~\cite{he_cluster_2012,seki_variational_2013} suggests that it is important to apply other quantitative methods to find the precise critical values of $U/W$ for the phase transitions~\cite{Note_1_Arya:2013}. Other approaches that have been applied to this problem are briefly summarized in Refs.~\onlinecite{hohenadler_correlation_2013,LiebschWu:2013}.
Since the Mott transition towards a spin liquid occurs in the absence of long-range order, one can expect that quantum cluster methods give a good upper bound for the occurrence of this transition~\cite{Note_2_Arya:2013}. However, to locate the precise value of $U_c/W$ where antiferromagnetism sets in, it is crucial that the method correctly include long-wavelength quantum fluctuations in the thermodynamic limit. Given that the critical value of $U_c/W$ for antiferromagnetism is of order $2/3$~\cite{meng2010}, a semianalytical, nonperturbative technique valid from weak to intermediate coupling, the two-particle self-consistent (TPSC) approach, is especially suited for this problem~\cite{Vilk:1997,TremblayMancini:2011}. This is the approach we use in this paper. Unlike RPA or Hartree-Fock theory, this method satisfies not only conservation laws, but also the Pauli principle, the Mermin-Wagner theorem, and important sum rules for spin and charge fluctuations. TPSC allows us to locate the crossover to the renormalized classical regime where the correlation length for antiferromagnetic fluctuations exceeds the thermal de Broglie wavelength. The extrapolation of that crossover line to zero temperature is one of the methods that can be used to find the value of $U_c/W$ where antiferromagnetism sets in. While TPSC has so far been used only in a single-band context, here we generalize it to the two-band case to find $U_c/W$.
Previous estimates of $U_c$ for the antiferromagnetic transition of the Hubbard model with a nearest-neighbor hopping $t$ on the honeycomb lattice at half-filling and $T=0$ are in the range~\cite{sorella_semi-metal-insulator_1992,martelo:1997,furukawa_antiferromagnetism_2001,paiva2005,sorella2012,meng2010,hohenadler_correlation_2011, hohenadler_erratum:_2012,hohenadler_quantum_2012, zheng_particle-hole_2011,wu_quantum_2012,budich_fluctuation-driven_2013,yu_mott_2011,hassan2013} $3.5t$ to $5t$, much larger than the Hartree-Fock RPA mean-field result~\cite{sorella2012} $2.23t$. A number of numerical lattice-field theory solutions of the continuum problem (see Ref.~24 of Liebsch and Wu~\cite{LiebschWu:2013}) also suggest an antiferromagnetic phase (more precisely chiral symmetry breaking) at strong coupling. The most accurate estimate for $U_c$ should be the recent large scale quantum Monte Carlo calculation of Sorella \emph{et al.}~\cite{sorella2012}, $U_c/t=3.869\pm 0.013$, close to estimates from high temperature series expansion~\cite{paiva2005}, $U_{c}\approx 4t$, and from a projection Monte Carlo method with an optimized initial state by Furukawa~\cite{furukawa_antiferromagnetism_2001}, $U_{c} \sim 3.6t$. Another accurate recent result, $U_c=3.78t$, is provided by the pinning field approach to quantum Monte Carlo of Assaad and Herbut~\cite{pinning_assaad_2013}. Other quantum Monte Carlo calculations generally find higher values of $U_{c}$. This includes the early ones by Sorella and Tosatti~\cite{sorella_semi-metal-insulator_1992} that yielded $U_{c}=4.5t$, those of Paiva {\emph{et al.}}~\cite{paiva2005} that found $U_{c}\approx 5t$, and those of Meng {\emph{et al.}}~\cite{meng2010} with $U_{c} > 4.3t$. The most accurate weak-coupling method that can be compared with TPSC, namely, the functional renormalization group~\cite{honerkamp_density_2008,raghu_topological_2008}, gives $U_{c}\approx 3.8t$, close to the best estimates mentioned above.
The paper is organized as follows. In Sec.~\ref{sec:2}, we introduce the model and the notation for the Green function formalism. We generalize the TPSC approach to graphene in Sec.~\ref{sec:3}, obtaining the spin and charge fluctuations with a functional derivative approach. The scaling for the susceptibility is obtained in Sec.~\ref{Sec:Scaling}. The numerical procedure is explained in Sec.~\ref{sec:4} and the numerical results are presented in Sec.~\ref{sec:5}. Three appendices contain analytical results that can be obtained for the spin susceptibility. \section{Model and Green function}\label{sec:2} The Hamiltonian is given by \begin{align} \mathcal{H} &= H_0 + U \sum_i n_{i\uparrow}n_{i\downarrow} \label{graphene} \\ H_0 &= -t \sum_{<ij>\sigma}a_{i\sigma}^{\dagger} b_{j\sigma} + \textrm{H.c.}\label{nih} \end{align} where $H_0$ is the noninteracting hopping Hamiltonian. Creation operators for a particle on sublattice $A$ and $B$ are represented by $a^{\dagger}$ and $b^{\dagger}$, respectively, $\sigma$ is the spin of the particle and $<ij>$ represents nearest-neighbor sites on the honeycomb lattice. Here, $t$ is the hopping parameter and $U$ is the strength of the on-site Coulomb interaction. In Fourier space, $H_0$ takes the form \begin{align} H_0= \begin{pmatrix} -\mu&-tf(\mathbf{k})\\ -tf^{*}(\mathbf{k})&-\mu \end{pmatrix} \end{align} where \begin{align} f(\mathbf{k})=1 + e^{i \mathbf{k}\cdot \textbf{a}_1} + e^{i\mathbf{k}\cdot \textbf{a}_2}\label{f_k} \end{align} with $\textbf{a}_1=\frac{\sqrt{3}}{2}\hat{\mathbf{x}}+\frac{1}{2}\hat{\mathbf{y}}$ and $\textbf{a}_2=\frac{\sqrt{3}}{2}\hat{\mathbf{x}}-\frac{1}{2}\hat{\mathbf{y}}$ the basis vectors of length unity for the underlying triangular Bravais lattice. We take the nearest-neighbor hopping $t$ equal to unity. Similarly, Planck's constant $\hbar$ and the Boltzmann constant $k_B$ are set to unity.
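As a quick numerical illustration (a sketch with $t=1$ and $\mu=0$), the magnitude of $f(\mathbf{k})$ fixes the two noninteracting bands $\pm|f(\mathbf{k})|$; it equals 3 at the zone center and vanishes at the Dirac points:

```python
import numpy as np

# Structure factor f(k) of Eq. (f_k) on the honeycomb lattice, with the
# basis vectors a1, a2 of the underlying triangular Bravais lattice.
# The noninteracting bands are -mu +/- t|f(k)|; they touch (Dirac point)
# where f(k) = 0, at the Brillouin-zone corners.
a1 = np.array([np.sqrt(3) / 2, 1 / 2])
a2 = np.array([np.sqrt(3) / 2, -1 / 2])

def f(k):
    """Structure factor f(k) = 1 + exp(i k.a1) + exp(i k.a2)."""
    return 1 + np.exp(1j * np.dot(k, a1)) + np.exp(1j * np.dot(k, a2))

gamma = np.array([0.0, 0.0])                           # zone center
K = np.array([2 * np.pi / np.sqrt(3), 2 * np.pi / 3])  # a zone corner

print(abs(f(gamma)))   # 3.0: band extrema at -mu -/+ 3t
print(abs(f(K)))       # ~0: the two bands touch at the Dirac point
```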
The Green function for the Hamiltonian in Eq.~\eqref{graphene} is a $4\times 4$ matrix since there are two sublattices and two spin indices. The Green function matrix $\mathbf{G}$ is diagonal in spin-space because of the spin rotational invariance of the Hamiltonian~\eqref{graphene}. Introducing the notation $1 = (\vec{r_{1}},\tau_{1})$, where $1$ stands for the position on the triangular lattice $\vec{r_{1}}$ and imaginary time $\tau_{1}$, the matrix elements of $\mathbf{G}$ are defined by \begin{align} G_{\alpha\beta}^{\sigma \sigma'}(1,2) & = -\langle T_{\tau} \alpha_{\sigma}(1) \beta_{\sigma}^\dagger(2) \rangle \delta_{\sigma \sigma'}, \label{gf_el} \end{align} where $\alpha=a,b$ and $\beta=a,b$ denote sublattice indices and $\sigma,\sigma' = \uparrow, \downarrow$ are the spin indices. The equation of motion for $G_{\alpha\beta}^{\sigma \sigma'}(1,2)$ in Eq.~\eqref{gf_el} is \begin{align} \frac{\partial G_{\alpha\beta}^{\sigma\sigma'}(1,2)}{\partial \tau_1} &= -\delta(\tau_1 - \tau_2) \delta_{\mathbf{r}_1\mathbf{r}_2} \delta_{\sigma \sigma'} \delta_{\alpha \beta} \nonumber \\ &- \langle T_{\tau} \frac{\partial }{\partial \tau_1}\alpha_{\sigma}(1) \beta_{\sigma}^\dagger(2) \rangle \delta_{\sigma \sigma'} . \end{align} The Heisenberg equation of motion in the grand canonical ensemble yields \begin{align} \frac{\partial }{\partial \tau_1}\alpha_{\sigma}(1) & = [\mathcal{H} - \mu \mathcal{N} ,\alpha_{\sigma}(1)] , \end{align} where $\mu$ is the chemical potential and $\mathcal{N}$ is the total-number operator.
Defining \begin{align} h_{\alpha \beta}^{\sigma \sigma'}(1,2) & = -t\sum_{\Delta}\,\delta_{\mathbf{r_{1}}+\Delta,\mathbf{r_{2}}} \, \delta(\tau_1 - \tau_2) \zeta_{\alpha \beta}^{x} \delta_{\sigma \sigma'}, \label{nih} \end{align} where $\alpha,\beta = a,b$ are the sublattice indices, $\Delta$ runs over the nearest neighbors, and $\zeta^{x}$ is the Pauli matrix \begin{align} \zeta^{x} = \begin{pmatrix} 0 &1 \\ 1 &0 \end{pmatrix}, \end{align} the equation of motion for the Green function takes the form \begin{align} \left( -\frac{\partial }{\partial \tau_1} + \mu\right)&\delta_{\mathbf{r}_1\mathbf{r}_{\bar{3}}} G_{\alpha \beta}^{\sigma \sigma'}(\bar{3},2) - h_{\alpha \eta}^{\sigma \sigma''}(1,\bar{3}) G_{\eta \beta}^{\sigma'' \sigma}(\bar{3},2) \nonumber \\ &= \delta(\tau_1 - \tau_2) \delta_{\mathbf{r}_1\mathbf{r}_2}\delta_{\sigma \sigma'} \delta_{\alpha \beta} \nonumber \\ &- U \langle T_{\tau} \alpha_{\bar{\sigma}}^{\dagger}(1)\alpha_{\bar{\sigma}}(1)\alpha_{\sigma}(1) \beta_{\sigma}^\dagger(2) \rangle \delta_{\sigma \sigma'}, \end{align} where a bar over an index like $\bar{3}$ implies summation over the corresponding lattice positions and an integral over imaginary time, while the Einstein summation convention applies to repeated spin or sublattice indices. 
Using \begin{align} \mathbf{G}_0^{-1}(1,2) & = (-\partial_{\tau_1} + \mu)\mathbf{I} - \mathbf{h}(1,2), \label{nigf} \end{align} for the noninteracting Green function, a shorthand for the above equation of motion is \begin{align} \mathbf{G}_{0}^{-1}(1,\bar{3}) \mathbf{G}(\bar{3},2)& = \delta(\tau_1 - \tau_2) \delta_{\mathbf{r}_1\mathbf{r}_2}\mathbf{I} - \mathbf{u}(1,2) \end{align} where the four-point correlation matrix $\mathbf{u}$ is \begin{align} u^{\sigma \sigma'}_{\alpha \beta}(1,2)& = -U\langle T_{\tau} \alpha_{\bar{\sigma}}^\dagger(1^+)\alpha_{\bar{\sigma}}(1)\alpha_{\sigma}(1) \beta_{\sigma}^\dagger(2) \rangle \delta_{\sigma \sigma'}.\label{u} \end{align} The correlation matrix can be rewritten in terms of the self-energy using Dyson's equation \begin{align} \mathbf{G}^{-1}(1,2) & = \mathbf{G}_{0}^{-1}(1,2) - \boldsymbol{\Sigma}(1,2) \end{align} where from the spin-symmetry of the Hamiltonian, the self-energy is block-diagonal in spin subspace. This leads to \begin{align} \boldsymbol{\Sigma}(1,\bar{3}) \mathbf{G} (\bar{3},2) & = \mathbf{u}(1,2). \label{full_se_pe_tpc} \end{align} This clearly shows how the self-energy is related to two-particle correlation functions and to the potential energy in the special case where the first and last indices are equal (with a small positive shift in imaginary time for proper time ordering). This well-known relation is obtained without any approximations. This is the multi-band generalization of an important consistency requirement between the self-energy and the double occupancy in the Hubbard model~\cite{Vilk:1997}. \section{Generalization of TPSC}\label{sec:3} The two-particle self-consistent (TPSC) approach was developed to study the single-band Hubbard model~\cite{Vilk:1994,Vilk:1996,Vilk:1997,Allen:2003,TremblayMancini:2011}. It has been benchmarked through detailed comparisons with quantum Monte Carlo calculations. This is a nonperturbative method that works best from weak to intermediate values of coupling $U/W$.
The key features of this approach are that it satisfies conservation laws, the Pauli principle and the Mermin-Wagner theorem. Perturbative methods, which obey conservation laws (like FLEX~\cite{Bickers:1989}), tend to violate the Pauli principle, while those that satisfy the Pauli principle (like the parquet resummations) usually violate conservation laws~\cite{Bickers:1991}. Methods like RPA give a finite-temperature transition to an antiferromagnetic state with long-range order, a scenario prevented by the Mermin-Wagner theorem in two dimensions~\cite{Mermin:1966}. Although the spin and charge susceptibilities in TPSC are similar in form to those appearing in RPA, the two methods are fundamentally different. In TPSC, the irreducible spin and charge vertices are not equal. They are assumed to be momentum and frequency independent and are computed self-consistently at the two-particle level in such a way that local sum rules for spin and charge are satisfied. With TPSC, one can study the antiferromagnetic fluctuations in two-dimensional lattices without the unphysical finite-temperature phase transition. The renormalized classical regime, where the fluctuations are large and the correlation length becomes greater than the thermal de Broglie wavelength, can be studied using this theory. This crossover to a renormalized classical regime at a finite temperature is a precursor of the zero-temperature instability to long-range order. Note, however, that TPSC is not valid deep inside the renormalized classical regime. TPSC has been used to demonstrate, for example, that antiferromagnetic fluctuations can induce a pseudogap in two dimensions~\cite{Vilk:1996,Kyung:2004,Hassan:2008} and that $d$-wave superconductivity mediated by these fluctuations is possible~\cite{Kyung:2003}. The method has been generalized to the attractive Hubbard model,~\cite{Allen:2001} and has been extended to the case where one includes a near-neighbor repulsion $V$.
This is called extended TPSC, or ETPSC~\cite{davoudi:2006,davoudi:2007,davoudi:2008}. Here we generalize the method to two bands with identical atoms within the unit cell. We do not consider the second step of the theory, which gives an improved formula for the self-energy~\cite{Moukouri:2000}. The final form of the theory is very natural but we give a detailed derivation below. The reader may skip to the results section without loss of continuity. The relevant equations are the spin and charge susceptibilities~\eqref{sp_susc} and~\eqref{ch_susc} and the local spin and charge sum rules~\eqref{sr_s} and~\eqref{sr_c} that have to be solved self-consistently with the ansatz~\eqref{A} and~\eqref{sp_vtx}. \subsection{TPSC ansatz for two bands} The renormalized interactions for spin and charge can be obtained from functional derivatives of the self-energy $\Sigma$ \cite{Vilk:1997,Allen:2003}. To obtain $\boldsymbol{\Sigma}$ from its definition in terms of the four-point function $\mathbf{u}$ in Eq.~\eqref{full_se_pe_tpc}, we assume that a Hartree-Fock factorization as a product of two-point correlation functions is justified when the four points in $\mathbf{u}$ do not coincide~\cite{Vilk:1994,Vilk:1997}. But when all four points in $\mathbf{u}$, Eq.~\eqref{u}, are identical, we impose the exact relation given by \begin{align} \left[ \mathbf{\Sigma}(1,\bar{3})\mathbf{G}(\bar{3},1^+)\right]^{\sigma \sigma'}_{\alpha \beta} \; \delta_{\alpha \beta} \delta_{\sigma \sigma'} = U\langle n_{\alpha \bar{\sigma}}(1) n_{\alpha{\sigma}}(1)\rangle \delta_{\alpha \beta} \delta_{\sigma \sigma'},\label{let} \end{align} obtained from Eq.~\eqref{full_se_pe_tpc} when $2\rightarrow1^{+}$ and $\alpha=\beta$, {\em i.e.}, the positions coincide and the times are such that $\tau_{2} = \tau_{1}+\epsilon$, where $\epsilon$ is positive and infinitesimal.
Using spin-rotational invariance, since~\eqref{let} is diagonal in spin indices, we focus on the diagonal elements and then, the above Hartree-Fock-like factorization of Eq.~\eqref{full_se_pe_tpc} can be written as \begin{align} \Sigma^\sigma_{\alpha\gamma}(1,\bar{3}) G^\sigma_{\gamma\beta}(\bar{3},2) &= \mathcal{A} G^{\bar{\sigma}}_{\alpha\alpha}(1,1^{+}) G^\sigma_{\alpha\beta}(1,2). \label{hf_like_factor} \end{align} For $2\rightarrow1^{+}$ and $\alpha=\beta$, we find that the exact result~\eqref{let} is recovered if \begin{align} \mathcal{A} & = U \frac{\langle n_{\alpha\bar{\sigma}} n_{\alpha \sigma}\rangle}{\langle n_{\alpha \bar{\sigma}} \rangle \langle n_{\alpha \sigma} \rangle}. \label{A} \end{align} This expression for $\mathcal{A}$ involves double occupancy $\langle n_{\alpha \bar{\sigma}} n_{\alpha{\sigma}}\rangle$, which is obtained by the self-consistent calculations explained in the next subsection. Substituting for $\mathcal{A}$ in Eq.~\eqref{hf_like_factor} and right-multiplying by $G^{-1}$, we obtain \begin{align} \Sigma^\sigma_{\alpha\beta}(1,2) = U \frac{\langle n_{\alpha\bar{\sigma}} n_{\alpha \sigma}\rangle}{\langle n_{\alpha \bar{\sigma}} \rangle \langle n_{\alpha \sigma} \rangle} G^{\bar{\sigma}}_{\alpha\alpha}(1,1^{+})\delta(1-2)\delta_{\alpha\beta}. \label{approx_first_se} \end{align} This is our first approximation for the self-energy. It is local and frequency independent. A better approximation can be obtained by including the effects of fluctuations but this is not needed here~\cite{Vilk:1997,Moukouri:2000,TremblayMancini:2011}. As explained in the next section, the functional derivatives of the self-energy obtained above lead to the renormalized vertices for spin and charge. \subsection{Spin and charge susceptibilities} The spin and charge susceptibilities are calculated to understand the competing spin and charge ordering transitions in the model.
The value of $\mathcal{A}$ in Eq.~\eqref{A} is obtained from these susceptibilities. Unlike RPA, where vertices are the bare $U$ in both spin and charge susceptibilities, in TPSC~\cite{Vilk:1997,TremblayMancini:2011}, spin and charge vertices differ. The renormalized irreducible vertex for spin is denoted by $\mathcal{U}_s$ and for charge by $\mathcal{U}_c$. They will clearly both depend on $U$. The spin and charge vertices in the longitudinal spin channel are computed from the local particle-hole irreducible vertices $\Gamma^{\sigma\sigma}$ and $\Gamma^{\sigma \bar{\sigma}}$. These vertices are given by functional derivatives of the self-energy \begin{align} \Gamma^{\sigma\sigma'}_{\alpha \beta ,\gamma \zeta}(1,2;3,4) &= \frac{\delta\Sigma^{\sigma}_{\alpha \beta}(1,2)}{\delta G^{\sigma'}_{\gamma \zeta}(3,4)}. \end{align} In matrix notation for the sublattice indices, the irreducible spin vertex is given by \begin{align} \mathbf{\Gamma}_{s}(1,2;3,4) &= \frac{\delta\Sigma^{\uparrow}(1,2)}{\delta G^{\downarrow}(3,4)} - \frac{\delta\Sigma^{\uparrow}(1,2)}{\delta G^{\uparrow}(3,4)}, \end{align} where $\alpha \beta$ corresponds to the row index and $\gamma \zeta$ corresponds to the column index with $\alpha,\, \beta,\,\gamma,\,\zeta = a,b$. From our first approximation for the self-energy, Eq.~\eqref{approx_first_se}, we can calculate these functional derivatives. 
We check that the functional derivatives of the terms $\frac{\langle n_{\alpha \uparrow} n_{\alpha \downarrow}\rangle}{\langle n_{\alpha \uparrow} \rangle \langle n_{\alpha \downarrow} \rangle}$ cancel by spin-rotational invariance, and we obtain \begin{align} \mathbf{\Gamma}_{s}(1,2;3,4) &= \mathcal{U}_s \,\delta(1-3)\delta(1^{+}-4)\delta(1-2), \end{align} where the only nonzero elements of $\mathcal{U}_s$ are diagonal in sublattice index and are given by \begin{align} \mathcal{U}_s^{\alpha \alpha, \alpha \alpha} &= \mathcal{A} = U \frac{\langle n_{\alpha\bar{\sigma}} n_{\alpha \sigma}\rangle}{\langle n_{\alpha \bar{\sigma}} \rangle \langle n_{\alpha \sigma} \rangle}. \label{sp_vtx} \end{align} The irreducible charge vertex is \begin{align} \mathbf{\Gamma}_{c}(1,2;3,4) &= \frac{\delta\Sigma^{\uparrow}(1,2)}{\delta G^{ \downarrow}(3,4)} + \frac{\delta\Sigma^{\uparrow}(1,2)}{\delta G^{\uparrow}(3,4)}. \end{align} From the functional derivative of the terms $\frac{\langle n_{\alpha \uparrow} n_{\alpha \downarrow}\rangle}{\langle n_{\alpha \uparrow} \rangle \langle n_{\alpha \downarrow} \rangle}$, we obtain correlation functions of higher order. TPSC makes the assumption that the irreducible charge vertex, like the irreducible spin vertex, is constant and diagonal in sublattice index: \begin{align} \mathbf{\Gamma}_{c}(1,2;3,4)&= \mathcal{U}_c\,\delta(1-3)\delta(1^{+}-4)\delta(1-2) . \end{align} Introducing the shorthand $q = (\mathbf{q},i\nu)$, which stands for the momentum space coordinate $\mathbf{q}$ and the bosonic Matsubara frequency $\nu$, we find a straightforward generalization of the particle-hole Bethe-Salpeter equation \cite{TremblayMancini:2011} to the case of a matrix susceptibility.
The corresponding spin susceptibility $\boldsymbol\chi^s$ and the charge susceptibility $\boldsymbol\chi^c $ are given by \begin{align} \boldsymbol\chi^s(q) & = \left(\mathbf{I} - \frac{1}{2}\boldsymbol\chi^0 (q)\mathcal{U}_s\right)^{-1} \boldsymbol\chi^0 (q)\label{sp_susc},\\ \boldsymbol\chi^c(q) & = \left(\mathbf{I} + \frac{1}{2}\boldsymbol\chi^0 (q)\mathcal{U}_c\right)^{-1} \boldsymbol\chi^0 (q)\label{ch_susc}, \end{align} where $\boldsymbol\chi^0$ is the noninteracting susceptibility (Lindhard function) defined by \begin{align} \chi^0_{\alpha\beta,\gamma\zeta}(q) & = -\frac{T}{N^{2}}\sum_{k \sigma \sigma'} {G}^{\sigma \sigma}_{0 \; \gamma\alpha}(k){G}^{\sigma' \sigma'}_{0 \; \beta\zeta}(k+q) \delta_{\sigma \sigma'}. \label{lf} \end{align} The summation is over the momentum space $\mathbf{k}$ as well as over the fermionic Matsubara frequencies, and the lattice size is $N\times N$. The sum rules~\cite{TremblayMancini:2011} needed for self-consistency are obtained by summing susceptibilities over all momenta and frequencies to recover local equal-time correlation functions. In the spin channel, we find \begin{align} \frac{T}{N^{2}} \sum_{q} \chi_{\alpha \alpha, \alpha \alpha}^s(q) & = \langle n_{\alpha \uparrow} \rangle + \langle n_{\alpha \downarrow} \rangle -2\langle n_{\alpha \uparrow} n_{\alpha \downarrow} \rangle. \label{sr_s} \end{align} On the right-hand side, we have used the fact that the Pauli principle must be satisfied in the form $n_{\sigma}^{2} = n_{\sigma}$. The corresponding sum rule for the charge susceptibility is \begin{align} \frac{T}{N^{2}} \sum_{q} \chi_{\alpha \alpha, \alpha \alpha}^c(q) & = \langle n_{\alpha \uparrow} \rangle + \langle n_{\alpha \downarrow} \rangle + 2\langle n_{\alpha \uparrow} n_{\alpha \downarrow} \rangle - \langle n_{\alpha}\rangle^2 . \label{sr_c} \end{align} We already have an expression, Eq.~\eqref{sp_vtx}, for $\mathcal{U}_s$ in terms of double occupancy.
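The structure of this closure, the ansatz~\eqref{sp_vtx} inserted in the spin susceptibility~\eqref{sp_susc} and fixed by the sum rule~\eqref{sr_s}, can be illustrated with a toy scalar version; the $\chi^0$ values below are illustrative stand-ins, not the honeycomb Lindhard function of Eq.~\eqref{lf}:

```python
import numpy as np

# Toy sketch of the TPSC self-consistency (scalar, single-band-like form
# with illustrative chi0 values -- NOT the honeycomb Lindhard function):
# the ansatz Us = U <n_up n_dn>/(<n_up><n_dn>) is inserted into the spin
# susceptibility and the double occupancy D is fixed by the local spin
# sum rule  sum_q chi_s(q) = n - 2 D  at half filling.
U, n = 2.0, 1.0
rng = np.random.default_rng(1)
chi0 = 0.3 + 0.65 * rng.random(400)        # toy noninteracting susceptibility

def sum_rule_mismatch(D):
    Us = U * D / (n / 2) ** 2              # TPSC ansatz for the spin vertex
    chi_s = chi0 / (1 - 0.5 * Us * chi0)   # scalar analog of the spin susceptibility
    return chi_s.mean() - (n - 2 * D)      # zero when the sum rule is satisfied

# The root is bracketed between D = 0 and the Hartree-Fock value n^2/4
lo, hi = 1e-6, (n / 2) ** 2
for _ in range(60):                        # bisection
    mid = 0.5 * (lo + hi)
    if sum_rule_mismatch(lo) * sum_rule_mismatch(mid) <= 0:
        hi = mid
    else:
        lo = mid
D = 0.5 * (lo + hi)
print(D)   # self-consistent double occupancy, between 0 and 0.25
```

The mismatch is monotonic in $D$ here (the susceptibility grows while the right-hand side of the sum rule shrinks), so the bisection converges to the unique self-consistent double occupancy.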
By substituting this in Eq.~\eqref{sp_susc} for the spin susceptibility, we can evaluate the sum rules given by Eq.~\eqref{sr_s} and obtain the values of the double occupancies $\langle n_{a\uparrow} n_{a\downarrow} \rangle$ and $\langle n_{b\uparrow} n_{b\downarrow} \rangle$, and hence $\mathcal{U}_s$, in a self-consistent manner. By symmetry, here $\langle n_{a\uparrow} n_{a\downarrow} \rangle$ and $\langle n_{b\uparrow} n_{b\downarrow} \rangle$ are equal. We can determine the constant charge vertex $\mathcal{U}_c$ from the sum rules given by Eq.~\eqref{sr_c} once we know the values of double occupancies. Now that we have $\mathcal{U}_s$ and $\mathcal{U}_c$, the susceptibilities can be calculated from Eqs.~\eqref{sp_susc} and~\eqref{ch_susc}. We can study the fluctuations in the system as a function of temperature $T$ and on-site interaction $U$. The correlation lengths corresponding to various channels in the spin and charge susceptibilities give us an estimate of the magnitudes of the fluctuations and hence let us determine which ordering transition is dominant in the system. The crossover to a renormalized classical regime at lower temperatures can be detected from the corresponding correlation length. \section{Scaling form for the susceptibilities}\label{Sec:Scaling} The correlation length is useful to find the renormalized classical regime and the zero-temperature critical value of $U$. In the limit where the correlation lengths are large, a simple analytical form is useful. First, we introduce the notation \begin{align} \chi^{0}_{aa,aa}=\chi^{0}_{aa} ~;~ \chi^{0}_{aa,bb}=\chi^{0}_{ab}~ ;~ \chi^{0}_{bb,aa}=\chi^{0}_{ba}. \end{align} Since the $a$ and $b$ sublattices are equivalent, we will use \begin{align} \chi^{0}_{aa,aa}=\chi^{0}_{bb,bb}=\chi^{0}_{aa}. 
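The self-consistency loop described above can be condensed into a few lines. The sketch below uses a flat array of illustrative $\chi^{0}(q)$ values and a bisection on the double occupancy; it is a toy stand-in for the actual calculation, which uses the honeycomb Lindhard function and sums over Matsubara frequencies as well:

```python
import numpy as np

# Toy noninteracting susceptibility on a small q-grid (illustrative values,
# not the actual honeycomb Lindhard function).
chi0 = np.linspace(0.2, 0.9, 64)
U = 1.0
n_up = n_dn = 0.5  # half filling, per spin and sublattice

def sum_rule_mismatch(D):
    """f(D) = (q-sum of chi_s) - (n_up + n_dn - 2 D), cf. Eq. (sr_s)."""
    Us = U * D / (n_up * n_dn)                 # ansatz of Eq. (sp_vtx)
    chi_s = chi0 / (1.0 - 0.5 * Us * chi0)     # scalar form of Eq. (sp_susc)
    return chi_s.mean() - (n_up + n_dn - 2.0 * D)

# Bisection for the double occupancy D = <n_up n_dn> in (0, 0.25]
lo, hi = 1e-6, n_up * n_dn
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if sum_rule_mismatch(lo) * sum_rule_mismatch(mid) <= 0:
        hi = mid
    else:
        lo = mid
D = 0.5 * (lo + hi)
Us = U * D / (n_up * n_dn)
print(D, Us)  # interaction suppresses D below the U = 0 value of 0.25
```

With the converged double occupancy in hand, $\mathcal{U}_c$ would then be fixed from the charge sum rule in the same way.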
\end{align} Quite generally, we also have the following equality \begin{align} \left(\chi^{0}_{ab}(i\nu)\right)^*=\chi^{0}_{ba}(-i\nu), \label{chi_ab-Identity} \end{align} where $\nu$ is a bosonic Matsubara frequency. The spin susceptibilities can conveniently be rewritten in terms of susceptibilities that are either ferromagnetic or antiferromagnetic within a unit cell. First, rewrite the determinant entering the spin susceptibility~(\ref{sp_susc}) as \begin{widetext} \begin{align} \det\left(\mathbf{I} - \frac{1}{2}\boldsymbol\chi^0 (q)\mathcal{U}_s\right)= \left[1-\frac{U_s}{2}\left(\chi^{0}_{aa}(q) - \sqrt{\chi^{0}_{ab}(q)\chi^{0}_{ba}(q)}\right) \right]\left[1-\frac{U_s}{2}\left(\chi^{0}_{aa}(q) + \sqrt{\chi^{0}_{ab}(q)\chi^{0}_{ba}(q)}\right) \right], \end{align} \end{widetext} with an analogous result for the determinant entering the charge susceptibility. Clearly, the location of the poles is determined by the combinations of noninteracting susceptibilities \begin{align} \chi^{s,0}_{fm} & = \left(\chi^{0}_{aa} - \sqrt{\chi^{0}_{ab}\chi^{0}_{ba}}\right), \label{fm} \\ \chi^{s,0}_{afm} & = \left(\chi^{0}_{aa} + \sqrt{\chi^{0}_{ab}\chi^{0}_{ba}}\right). \label{afm} \end{align} These can be associated with the noninteracting ferromagnetic and antiferromagnetic spin susceptibilities, respectively. Note that the usual definition of antiferromagnetism, which we adopt here, corresponds to alternating spin directions on $a$ and $b$ sublattices but occurs at $\mathbf{q}=0$ as far as wave vectors are concerned. Explicit expressions for the noninteracting susceptibilities appear in Appendix~\ref{non-interacting}. Intraband terms contribute to the ferromagnetic susceptibility while the antiferromagnetic susceptibility involves interband transitions.
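The factorization of the determinant can be checked numerically for any matrix with this sublattice structure; the numbers below are illustrative and obey $\chi^{0}_{ba}=(\chi^{0}_{ab})^{*}$ at $i\nu=0$, as required by Eq.~\eqref{chi_ab-Identity}:

```python
import numpy as np

# Illustrative values of the susceptibility matrix at one (q, i*nu = 0);
# sublattice structure is [[chi_aa, chi_ab], [chi_ba, chi_aa]].
chi_aa, chi_ab, chi_ba, Us = 0.7, 0.3 + 0.1j, 0.3 - 0.1j, 1.2
chi0 = np.array([[chi_aa, chi_ab], [chi_ba, chi_aa]])

lhs = np.linalg.det(np.eye(2) - 0.5 * Us * chi0)
root = np.sqrt(chi_ab * chi_ba)
rhs = (1 - 0.5 * Us * (chi_aa - root)) * (1 - 0.5 * Us * (chi_aa + root))
print(np.allclose(lhs, rhs))  # True
```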
Taking the analogous definition for the interacting case, we find, after some algebra detailed in Appendix~\ref{appendix_AFM}, the following scalar equations \begin{align} \chi^{s}_{fm}(\mathbf{q},i\nu) &= \frac{\chi^{0}_{fm}(\mathbf{q},i\nu)}{1-\frac{U_{s}}{2}\chi^{0}_{fm}(\mathbf{q},i\nu)}\label{fm_simple}\\ \chi^{s}_{afm}(\mathbf{q},i\nu) &= \frac{\chi^{0}_{afm}(\mathbf{q},i\nu)}{1-\frac{U_{s}}{2}\chi^{0}_{afm}(\mathbf{q},i\nu)} \label{afm_simple}. \end{align} They resemble the expressions in the single-band case. Analogous definitions can be made for the charge susceptibilities. \begin{figure*}[ht] \begin{center} \includegraphics[scale = 0.4]{susc_cone} \caption{ (Color online) Plot of $\chi_{afm}^{0}(\mathbf{q},i\nu = 0)$ for $T = 0.005$.} \label{fig:susc_cone} \end{center} \end{figure*} The correlation length becomes large when the denominator of the interacting susceptibilities is close to zero at vanishing Matsubara frequency. Taking the antiferromagnetic susceptibility as an example, in that situation the numerator $\chi^{0}_{afm}(\mathbf{q},i\nu)$ can be replaced by the maximum value $\chi^{0}_{afm}(\mathbf{q} = 0,i\nu=0)$ while in the denominator $\chi^{0}_{afm}(\mathbf{q},i\nu)$ must be expanded about the maximum at $\mathbf{q} = 0, i\nu=0$. Because of the Dirac cones, the noninteracting susceptibility in the denominator does not have a derivative at $\mathbf{q} = 0,i\nu=0$. The left derivative and the right derivative as we approach $\mathbf{q} = 0$ are different. In Appendix~\ref{Appendix_analytic_dirac}, we estimate the derivatives in the Dirac approximation. We can proceed numerically to confirm the orders of magnitude obtained in Appendix~\ref{Appendix_analytic_dirac}. The conical shape of the surface plot of the antiferromagnetic susceptibility (see Fig.~\ref{fig:susc_cone}) shows that the dependence is on $q = \sqrt{q_{x}^{2} + q_{y}^{2}}$ only.
The derivative is obtained by fitting the data for $\chi^{0}_{afm}(\mathbf{q},0)$ about $\mathbf{q} = 0$, using a form given by $ \chi^{0}_{afm}(\mathbf{q},0) = a + b\,\sqrt{q_{x}^{2}+q_{y}^{2}}+c (q_{x}^{2}+q_{y}^{2})$. The coefficient $b=\partial \chi_{afm}^{0}/\partial q$ is found to be one order of magnitude greater than the coefficient $c$. Finally, when the correlation length is large, the above procedure leads to the approximate scaling form for the retarded function \begin{equation} \chi_{afm}^{s}(\mathbf{q},\omega+i\delta)= \frac{2 \xi}{U_{s}\xi_{0}} \frac{1}{1 + q\xi + \frac{i\omega \xi}{\Gamma_{0}}} \label{eq:scaling_form} \end{equation} where the correlation length is given by \begin{equation} \xi = \xi_{0} \frac{U_{s}}{\delta U} .\label{xi(U)} \end{equation} In these equations we have used the following definitions: the microscopic length scale \begin{equation} \xi_{0} = -\frac{1}{\chi_{afm}^{0}(\mathbf{q} = 0,i\nu = 0)}\;\frac{\partial \chi_{afm}^{0}(\mathbf{q},i\nu)}{\partial q} \Bigg{|}_{\mathbf{q}=0,i\nu = 0}, \label{Def_xi0} \end{equation} the mean field $U$ for a phase transition \begin{align} U_{mf} &= \frac{2}{\chi_{afm}^{0}(\mathbf{q} = 0,i\nu = 0)} , \end{align} the deviation from the mean field $U$, \begin{equation} \delta U = U_{mf} - U_{s}, \end{equation} and \begin{equation} \frac{1}{\Gamma_{0}} = \frac{1}{\xi_{0}\chi_{afm}^{0}(\mathbf{q} = 0,i\nu = 0)} \frac{\partial \chi_{afm}^{0''}}{\partial \omega} \Bigg{|}_{\mathbf{q}=0,\omega=0} \label{Def_Gamma0} \end{equation} with $\chi_{afm}^{0''}$ the imaginary part of the retarded susceptibility. 
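The least-squares extraction of the slope $b$ can be sketched as follows; the synthetic cone data below stand in for the computed $\chi^{0}_{afm}(\mathbf{q},0)$, and the coefficients are placeholders rather than fitted values:

```python
import numpy as np

# Synthetic chi0_afm(q, 0) data near q = 0 mimicking the conical shape
# (a_true, b_true, c_true are placeholders, not the computed coefficients).
qx, qy = np.meshgrid(np.linspace(-0.2, 0.2, 21), np.linspace(-0.2, 0.2, 21))
q = np.hypot(qx, qy).ravel()
a_true, b_true, c_true = 1.5, -0.4, 0.03
chi = a_true + b_true * q + c_true * q**2

# Fit chi = a + b q + c q^2 by linear least squares
A = np.column_stack([np.ones_like(q), q, q**2])
(a, b, c), *_ = np.linalg.lstsq(A, chi, rcond=None)
print(b)  # slope d(chi0_afm)/dq as q -> 0, which enters xi_0 via Eq. (Def_xi0)
```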
For practical calculations, it is convenient to define the spin correlation lengths for the ferromagnetic and antiferromagnetic channels as \begin{align} \xi^{s}_{fm} & = \frac{\chi^{s}_{fm}(\mathbf{q} = \mathbf{0},i\nu = 0)}{\chi^0_{fm}(\mathbf{q} = \mathbf{0},i\nu = 0)} \label{corr_len_fm}, \\ \xi^{s}_{afm} & = \frac{\chi^{s}_{afm}(\mathbf{q} = \mathbf{0},i\nu = 0)}{\chi^0_{afm}(\mathbf{q} = \mathbf{0},i\nu = 0)}. \label{corr_len_afm} \end{align} Indeed, using the scaling form Eq.~\eqref{eq:scaling_form}, the above definition corresponds to \begin{align} \xi^{s}_{afm}& =\frac{2 \xi}{U_{s}\xi_{0}}\frac{1}{\chi^0_{afm}(\mathbf{q} = \mathbf{0},i\nu = 0)}\\ &=\frac{\xi}{\xi_{0}}\frac{U_{mf}}{U_s}. \label{xi} \end{align} Since $U_{mf}/U_s\sim 1$ when $\xi$ is large, the two definitions of correlation lengths essentially agree in that limit. Although similar definitions of correlation lengths can be adopted in the charge channel, these lengths never become large so they are not really useful. We end with a note on the critical exponents. TPSC gives us a good estimate of the zero-temperature critical value of $U$, although the exponents usually take values associated with the spherical model~\cite{Dare:1996}. Accurate values of exponents are usually found with the renormalization group approach. This is complementary to our approach since the latter methods do not give nonuniversal numbers such as the critical $U$. For graphene, the universality class is that of the Gross-Neveu model~\cite{herbut_interactions_2006,pinning_assaad_2013} with $0.88$ as the value of the correlation length exponent to leading order in $\epsilon$. Instead, we have the value $1$, as follows from Eq.~\eqref{xi(U)}. From the scaling form~\eqref{eq:scaling_form}, we see that the dynamical critical exponent defined by $\omega\sim \xi^{-z}$ is $z = 1$.
Lorentz invariance suggests that $\Gamma_0$ equals the Fermi velocity $v_F$, while a better formula for the $q$ and $\omega$ dependence in the denominator of Eq.~\eqref{eq:scaling_form} would probably replace $q + \frac{i\omega }{\Gamma_{0}}$ by $\sqrt{q^2-(\omega/v_F)^2}$. Further details appear in Appendix~\ref{Appendix_analytic_dirac}. \section{Numerical Procedure}\label{sec:4} We first evaluate the noninteracting susceptibility (Lindhard function) $\boldsymbol\chi^0(q)$ in Eq.~\eqref{lf}. We then make an initial guess for $\langle n_{\alpha \uparrow} n_{\alpha \downarrow} \rangle$ to initialize the irreducible spin vertex $\mathcal{U}_s$, Eq.~\eqref{sp_vtx}. Using Eq.~\eqref{sp_susc}, we compute $\boldsymbol\chi^s$, which, when substituted in the spin sum rule, Eq.~\eqref{sr_s}, allows us to update the variables $\langle n_{a\uparrow} n_{a\downarrow} \rangle$ and $\langle n_{b\uparrow} n_{b\downarrow} \rangle$ since we know the filling $\langle n_{\alpha \sigma} \rangle = 0.5$ on the right-hand side. We repeat the procedure until we obtain self-consistent solutions for $\langle n_{a\uparrow} n_{a\downarrow} \rangle$ and $\langle n_{b\uparrow} n_{b\downarrow} \rangle$ and thereby obtain the irreducible spin vertex $\mathcal{U}_s$. A C++ code was written to calculate the noninteracting susceptibilities $\chi^{0}_{aa}$, $\chi^{0}_{ab}$, and $\chi^{0}_{ba}$. FFTs are used in the computations to exploit the convolutions in the definitions of the susceptibilities~\cite{Bergeron:2011}. First, the susceptibilities are computed in the position--imaginary-time representation, where the convolution is just a product. An FFT in position space and a combination of cubic splines and FFT in imaginary time are then used to obtain the final result in the momentum--bosonic-Matsubara-frequency representation. The real (and momentum) space grid is $N \times N$; $N = 50$, $100$, and $200$ were used.
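The gain from the FFT route can be seen in a minimal one-dimensional, single-frequency toy (the random $G(k)$ is a stand-in for the actual Green function): the $\mathcal{O}(N^{2})$ momentum-space convolution defining the bubble reduces to a pointwise product in the conjugate representation, evaluated in $\mathcal{O}(N\log N)$:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64
G = rng.standard_normal(N)  # stand-in for G_0(k) at one Matsubara frequency

# Direct O(N^2) evaluation of the bubble chi(q) ~ sum_k G(k) G(k+q)
chi_direct = np.array([np.sum(G * np.roll(G, -q)) for q in range(N)])

# Same bubble in O(N log N): pointwise product in the conjugate
# (position) representation, then transform back
chi_fft = np.fft.ifft(np.fft.fft(G) * np.conj(np.fft.fft(G))).real
print(np.allclose(chi_direct, chi_fft))  # True
```

The production calculation applies the same idea in two spatial dimensions, with cubic splines assisting the imaginary-time transform.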
Since the noninteracting susceptibility obeys \begin{align} \frac{T}{N^{2}}\sum_{q}\chi^{0}_{\alpha \alpha}(q) &= \langle n_{\alpha} \rangle = \frac{1}{2}, \label{sum_lf} \end{align} we fixed the optimum value for the number of Matsubara frequencies $N_{\omega}$ by requiring that the above be satisfied to $1\%$ accuracy. Accordingly, the range of imaginary time from $0$ to $\beta$ was divided into $N_T = 2 \; N_{\omega}$ slices. Further comments on finite-size effects and the computational procedure may be found at the end of Appendix~\ref{non-interacting}. \begin{figure} \begin{center} \includegraphics[width=7cm]{fig_nud} \caption{ (Color online) Plot of $\langle n_{\uparrow}n_{\downarrow}\rangle$ as a function of $U$ for given temperatures ($N = 100$). The temperature dependence is very small, as can be checked from the inset.} \label{fig:nupdown} \end{center} \end{figure} \begin{figure}[ht] \begin{center} \includegraphics[width=7cm]{fig_Us} \caption{ (Color online) Plot of the irreducible vertex for the spin $\mathcal{U}_s$ as a function of $U$ for given temperatures ($N = 100$). The temperature dependence is extremely small, as can be seen from the inset.} \label{fig:Us} \end{center} \end{figure} \section{Results and Discussion}\label{sec:5} Following the numerical procedure detailed above, we computed the double occupancy $\langle n_{a \uparrow} n_{a \downarrow} \rangle$ self-consistently. This allowed us to obtain the TPSC spin susceptibility, Eq.~\eqref{sp_susc}, as well as the correlation length in the antiferromagnetic channel. {\em{Double Occupancy}}$\;\langle n_{\uparrow}n_{\downarrow}\rangle$. Due to bipartite symmetry, $\langle n_{a\uparrow} n_{a\downarrow} \rangle = \langle n_{b\uparrow} n_{b\downarrow} \rangle$, which we define as $\langle n_{\uparrow}n_{\downarrow}\rangle$. Figure~\ref{fig:nupdown} shows $\langle n_{\uparrow}n_{\downarrow}\rangle$ plotted as a function of the interaction $U$ for given temperatures.
In the noninteracting case $U = 0$, double occupancy factors into a product of the occupations for up and down electrons. At half-filling, $\langle n_{\uparrow,\downarrow} \rangle = 0.5$, so that $\langle n_{\uparrow}n_{\downarrow}\rangle = \langle n_{\uparrow} \rangle \langle n_{\downarrow} \rangle = 0.25$ for $U = 0$. As $U$ increases, the energy cost for two electrons occupying a single site increases, thereby leading to a decreasing value of $\langle n_{\uparrow} n_{\downarrow} \rangle$. {\em{Spin vertex}} $\mathcal{U}_s$. Figure~\ref{fig:Us} shows the spin vertex $\mathcal{U}_s$ as a function of the interaction $U$ for given temperatures. For very small values of $U$, $\mathcal{U}_s$ is almost equal to $U$. As $U$ increases, $\mathcal{U}_s$ becomes less than $U$ and it shows a tendency to saturate to a constant value. This is a result of Kanamori-Brueckner~\cite{kanamori_electron_1963,Vilk:1994,Vilk:1997} screening: the physics reflects the fact that, as $U$ increases, the two-body wave-function becomes smaller when electrons are on the same site to reduce the probability of double occupancy, thereby decreasing the value of the effective on-site interaction. The maximum energy this can cost is the bandwidth so that at large values of $U$, $\mathcal{U}_s$ saturates to a value of the order of the bandwidth.\cite{Vilk:1997,TremblayMancini:2011} {\em{Correlation lengths for spin and charge susceptibilities}}. With the irreducible spin and charge vertices, we can calculate the spin and charge susceptibilities using the particle-hole Bethe-Salpeter equations \eqref{sp_susc} and \eqref{ch_susc}. From the definitions Eqs.~\eqref{fm} and \eqref{afm} we obtain the spin susceptibilities in the ferromagnetic and antiferromagnetic channels and deduce the correlation lengths in the respective channels from Eqs.~\eqref{corr_len_fm} and \eqref{corr_len_afm}. 
The charge correlation lengths are not physically relevant since the irreducible charge vertex is generally larger than $U$ and suppresses the charge susceptibility compared with its noninteracting value, meaning that the correlation lengths are always small and ill-defined. \begin{figure}[ht] \begin{center} \includegraphics[width=7cm]{fig_xisafm} \caption{ (Color online) Semi-logarithmic plot of the spin correlation length in the antiferromagnetic channel $\xi$ as a function of $U$ for various temperatures ($N=100$).} \label{fig:xisafm} \end{center} \end{figure} \begin{figure}[ht] \begin{center} \includegraphics[width=7cm]{fig_xisfm} \caption{ (Color online) Plot of the ferromagnetic correlation length $\xi_{fm}^{s}$ as a function of $U$ for given values of temperature ($N = 100$).} \label{fig:xisfm} \end{center} \end{figure} Figure~\ref{fig:xisafm} shows the variation of the spin correlation length in the antiferromagnetic channel $\xi$ as a function of $U$ for various temperatures. We first obtain the ratio of the interacting susceptibility to the noninteracting susceptibility in the antiferromagnetic channel $\xi_{afm}^{s}$ using Eq.~\eqref{corr_len_afm} and multiply it by the microscopic length $\xi_{0}$ (Eq.~\eqref{xi}) to obtain the correlation length $\xi$ in units of the lattice spacing. The figure clearly indicates that the spin susceptibility in the antiferromagnetic channel increases steadily with increasing $U$ and with decreasing $T$, as expected, with a clear tendency to diverge at sufficiently large $U$ and low $T$. The quantitative accuracy of the results cannot be trusted for correlation lengths smaller than unity or larger than about half the system size. Figure~\ref{fig:xisfm} shows the plots for the ferromagnetic correlation length $\xi_{fm}^{s}$, estimated from the ratio of the interacting susceptibility to the noninteracting susceptibility, Eq.~\eqref{corr_len_fm}, as a function of $U$ for various temperatures for $N = 100$.
We can see that the ratio decreases as temperature decreases. Thus in the ferromagnetic channel, the correlation length never becomes larger than the lattice spacing and hence we do not focus on that case. \begin{figure}[ht] \begin{center} \includegraphics[width=8cm]{fig_xi0_G0_2} \caption{ (Color online) Plot of $\xi_{0}$ (dashed blue line with circles) and of $\Gamma_{0}$ (dot-dashed green line with squares) as a function of $\ln T$ for the range $T = 0.001$ to $T = 0.1$. Each quantity has its own vertical axis: left for $\xi_{0}$ and right for $\Gamma_{0}$.} \label{fig:xi0_G0inv} \end{center} \end{figure} {\em{$\xi_{0}$ and $\Gamma_{0}^{-1}$}}. We numerically determine $\xi_{0}$ and $\Gamma_{0}$ for the spin susceptibility in the antiferromagnetic channel from the relations given in Eqs.~\eqref{Def_xi0} and \eqref{Def_Gamma0} respectively. Figure~\ref{fig:xi0_G0inv} shows the resulting temperature dependence of $\Gamma_{0}$ and of $\xi_{0}$. The microscopic length scale $\xi_{0}$ is almost $T$ independent and of order 0.25. $\Gamma_{0}$ converges to the expected value $v_F = \frac{\sqrt{3}}{2}$ only at very low $T$ and for large values of $N \sim 2000$. Further discussion and numerical estimates appear in Appendix~\ref{Appendix_analytic_dirac}. \begin{figure}[ht] \begin{center} \includegraphics[width=8cm]{fig_crossover} \caption{ (Color online) Plots of the crossover temperature as a function of interaction strength for given values of $N$. The crossover temperature was determined from $\xi = {v_{F}}/{T}$ scanning $U$ at fixed $T$, and scanning $T$ at fixed $U$ to obtain an estimate of the error in finding the intersection $\xi(U,T) = {v_{F}}/{T}$. } \label{fig:crossover} \end{center} \end{figure} {\em{Crossover temperature and $U_{c}$}}. For $U>U_c$ there is antiferromagnetism at $T=0$. 
We expect then that, when $U>U_c$, below a crossover temperature $T_{X}$, the antiferromagnetic correlation length becomes so large that one enters the renormalized-classical regime where the characteristic spin fluctuation frequency $\omega_{sf}$ is less than temperature. And indeed, since the scaling form~\eqref{eq:scaling_form} implies that $\omega_{sf}\sim \xi^{-1}$ and $\xi$ increases faster than $T$ at sufficiently low $T$ when $U$ is larger than $U_c$ (Fig.~\ref{fig:log_xi_log_T_U}), this implies that for $U>U_c$ there is necessarily a temperature below which the condition $\omega_{sf}<T$ is realized. In the more standard case where the dynamical critical exponent satisfies $z=2$, a pseudogap in the single-particle density of states appears at a temperature smaller than that where $\omega_{sf}\sim T$. At that temperature, $\xi$ becomes larger than the thermal de Broglie wavelength $v_F/T$ (with $v_F$ the Fermi velocity).~\cite{Vilk:1995,Vilk:1997,TremblayMancini:2011} The question of the appearance of a pseudogap in the present case remains to be investigated, but it is expected as a precursor since there is a real gap in the antiferromagnetic state. The pseudogap should appear basically when we enter the renormalized classical regime since here frequency and wavevector scale in the same way. We thus define the crossover temperature to the renormalized classical regime by the condition $\xi = {v_{F}}/{T}$, with $v_F$ at the Dirac point. In order to extract the crossover temperature for a fixed value of $U$, we plot the correlation length as a function of temperature and pick the value of temperature ($T_{X}$) where this plot intersects the plot of $v_{F}/T$ as a function of temperature. Similarly, for a fixed value of $T$, we can pick the value of interaction ($U_{X}$) where the correlation length exceeds ${v_{F}}/{T}$. 
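The numerical determination of $T_{X}$ amounts to a root search for $\xi(T)=v_{F}/T$. A sketch follows, with an illustrative renormalized-classical model $\xi(T)=\xi_{0}\,e^{A/T}$ (the values of $\xi_0$ and $A$ are placeholders) standing in for the computed correlation length:

```python
import numpy as np

vF = np.sqrt(3) / 2        # Fermi velocity at the Dirac point (t = 1, a = 1)
xi0, A = 0.25, 0.5         # illustrative scale and exponent, not fitted values

def xi(T):
    """Model renormalized-classical growth of the correlation length."""
    return xi0 * np.exp(A / T)

# Crossover temperature T_X solves xi(T) = vF / T; bisection on f(T)
f = lambda T: xi(T) - vF / T
lo, hi = 0.05, 0.5         # f(lo) > 0 and f(hi) < 0 for these parameters
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
T_X = 0.5 * (lo + hi)
print(T_X)
```

Scanning $U$ at fixed $T$ proceeds identically, with the root search carried out in $U$.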
Figure~\ref{fig:crossover} shows the plots of the crossover temperature as a function of interaction determined using both approaches detailed above, for $N = 50$, $100$, and $200$. By quadratic and linear extrapolations of the curves to zero temperature, one obtains the results for $U_c$ that appear in Table~\ref{table:crossover}. \begin{table} \begin{center} \begin{tabular}{|c|c|c|c|c|}\cline{3-5} \multicolumn{2}{c|}{}&$N = 50$&$N = 100$&$N = 200$\\ \hline {$T$ vs. $U_{X}$}&linear&$3.8$&$3.794$&$3.793$\\ \cline{2-5} &quadratic&$3.825$&$3.806$&$3.808$\\ \cline{1-5} {$T_{X}$ vs. $U$}&linear&$3.779$&$3.775$&$3.775$\\ \cline{2-5} &quadratic&$3.809$&$3.795$&$3.789$\\ \cline{1-5} \end{tabular} \end{center} \caption{Values of the critical interaction strength $U_c$ obtained from the linear and quadratic extrapolations of the crossover plots in Fig.~\ref{fig:crossover} for various values of $N$.} \label{table:crossover} \end{table} {\em{Critical exponent $z$ and an alternate determination of $U_{c}$}}. We can find the critical value $U_c$ using another approach, which also lets us estimate the dynamical critical exponent $z$. In Fig.~\ref{fig:log_xi_log_T_U} we plot $\ln \xi$ as a function of $\ln T$ for $N = 100$ and $200$, where the correlation length is sufficiently small that finite-size errors are not important (except far from $U_c$). For $U < U_{c}$, $\ln \xi$ saturates at low temperatures, while for $U > U_{c}$, $\ln \xi$ diverges; finally, at $U_{c}$, $\xi$ has a pure power-law behavior. In order to determine $U_c$, we fit $\ln \xi$ versus $\ln T$ for various values of $U$ with straight lines. The value of $U$ that gives the best fit is taken as $U_{c}$. It is the slope of $\ln \xi$ vs $\ln T$ that gives us the numerical estimate of the dynamical critical exponent $z$. Despite the fact that we are not in the asymptotic regime for $\xi_{0}$ and $\Gamma_{0}$, the value so obtained is $z = 1.00$ for $U_{c} = 3.8 \pm 0.005$.
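The extraction of $z$ from the slope of $\ln\xi$ versus $\ln T$ can be sketched as follows, with synthetic power-law data in place of the computed correlation lengths:

```python
import numpy as np

# Synthetic ln(xi) vs ln(T) data at the critical U: pure power law xi ~ T^(-1/z)
z_true = 1.0
T = np.logspace(-2.5, -1, 12)
ln_xi = -(1.0 / z_true) * np.log(T) + 0.3   # the offset is arbitrary

slope, _ = np.polyfit(np.log(T), ln_xi, 1)
z = -1.0 / slope
print(round(z, 6))  # -> 1.0
```

In practice one repeats the fit for a range of $U$ values; the $U$ with the smallest straight-line residual is taken as $U_c$, and its slope gives $z$.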
For high temperatures, all curves have the same slope as the case $U = U_{c}$. For $N = 50$, where we saw finite-size effects in Fig.~\ref{fig:crossover}, the largest correlation length is close to $N/2$ at the smallest temperature for $U=U_c$, invalidating the estimate. Indeed, in that case $U_{c} = 3.85 \pm 0.005$ but $z = 0.87$, which is clearly incorrect. Taking the average value of $U_c$ obtained for $N=100$ and $N=200$ in Table \ref{table:crossover} and estimating the error from the range of values obtained, we find that $U_c=3.79\pm0.01$, consistent with the result obtained from the estimate of the previous paragraph with $z=1$. \begin{figure}[ht] \begin{center} \includegraphics[width=7cm]{fig_log_xi_log_T_U} \caption{ (Color online) Plots of $\ln \xi$ as a function of $\ln T$ for various values of $U$ ($N = 100$ and $200$). The straight dashed magenta line corresponds to a pure power law, $z=1$, hence to the value $U_c$ for the quantum critical point.} \label{fig:log_xi_log_T_U} \end{center} \end{figure} \section{CONCLUSION}\label{sec:6} The nonperturbative TPSC theory has been extended to a multi-band case, namely the Hubbard model on the honeycomb lattice. In TPSC, valid from weak to intermediate coupling, charge and spin irreducible interactions are determined self-consistently in such a way that conservation laws and the Pauli principle are satisfied. The Mermin-Wagner theorem is also automatically satisfied and the physics of Kanamori-Brueckner screening that renormalizes the spin and charge irreducible vertices is taken into account. On the honeycomb lattice, nearest-neighbor antiferromagnetic fluctuations are dominant. 
The TPSC value of $U_{c}/t$ for the quantum-critical semimetallic to antiferromagnetic transition is $U_c/t=3.79\pm0.01$, consistent with~\cite{sorella2012} $U_c/t=3.869 \pm 0.013$ and~\cite{pinning_assaad_2013} $U_c/t=3.78$ obtained from large-scale quantum Monte Carlo calculations, and also consistent with the functional renormalization group~\cite{honerkamp_density_2008,raghu_topological_2008} $U_c/t=3.8$. These results rule out the existence of a spin-liquid phase in the ground state of the graphene Hubbard model at intermediate couplings since estimates for the Mott transition yield a $U_{Mott}$ larger than $U_c$. We have also estimated the crossover line in the $T$-$U$ plane where one enters the renormalized classical regime and where a pseudogap is expected to open up. Generalized extensions of TPSC to multiband cases of the type presented here and in Ref.~\onlinecite{AritaTPSC:2013} have the potential to open up the study of interacting systems, and to improve realistic materials calculations. In the latter case, TPSC offers the possibility to include long-wavelength spin fluctuations in addition to the long-wavelength charge fluctuations already present in these approaches. \acknowledgments We are grateful to Dominic Bergeron and Wei Wu for illuminating discussions. S.A. would like to thank P. Mangalapandi for his expert advice on parallel computation. This work was supported by the Natural Sciences and Engineering Research Council of Canada (NSERC), and by the Tier I Canada Research Chair Program (A.-M.S.T.). \vspace{0.5cm}
\section{Introduction}\label{sec:intro} Electronic structure calculations have played an important role in understanding the properties of a wide range of materials systems~\cite{Martin}. In particular, the Kohn-Sham formalism of density functional theory~\cite{kohn64,kohn65} has been the workhorse of ground-state electronic structure calculations. However, the Kohn-Sham approach requires the computation of single-electron wavefunctions to compute the kinetic energy of non-interacting electrons, whose computational complexity typically scales as $\mathcal{O}(N^3)$ for an $N$-electron system, thus limiting standard calculations to materials systems containing a few hundred atoms. While there has been progress in developing close to linear-scaling algorithms for the Kohn-Sham approach~\cite{goedecker99,bow2012}, these are still limited to a few thousand atoms, especially for metallic systems~\cite{Motamarri2014}. The orbital-free approach to DFT~\cite{ParrYang}, on the other hand, models the kinetic energy of non-interacting electrons as an explicit functional of the electron density, thus circumventing the computationally intensive step of computing the single-electron wavefunctions. Further, the computational complexity of orbital-free DFT scales linearly with the system size, as the ground-state DFT problem reduces to a minimization problem in a single field---the electron density. The past two decades have seen considerable progress in the development of accurate models for orbital-free kinetic energy functionals~\cite{Teter1992,Madden1994,Yan1998,Yan1999,Karasiev2012,Ke2013,Karasiev2014,Ke2014}, in particular for systems whose electronic structure is close to that of a free electron gas (e.g., Al and Mg).
Also, orbital-free DFT calculations are being increasingly used in the simulations of warm dense matter where the electronic structure is close to that of a free electron gas at very high temperatures~\cite{Flavian2006,Flavian2008,Collins2009,Collins2013,Sheppard2014}. As the reduced computational complexity of orbital-free DFT enables consideration of larger computational domains, recent studies have also focused on studying extended defects in Al and Mg, and have provided important insights into the energetics of these defects~\cite{Gavini2007c,Ho2007,Gang2008,Shin2009,Shin2013,Mrinal2015}. The widely used numerical implementation of orbital-free DFT is based on a Fourier space formalism using a plane-wave discretization~\cite{PROFESS2,PROFESS3}. A Fourier space formulation provides an efficient computation of the extended interactions arising in orbital-free DFT---electrostatics and kinetic energy functionals---through Fourier transforms. Further, the plane-wave basis is a complete basis and provides variational convergence in ground-state energy with exponential convergence rates. However, the Fourier space formulations are restricted to periodic geometries and boundary conditions that are suitable for perfect bulk materials, but not for materials systems containing extended defects. Also, the extended spatial nature of the plane-wave basis affects the parallel scalability of the numerical implementation and is also not suitable for multi-scale methods that rely on coarse-graining. In order to address the aforementioned limitations of Fourier space techniques, recent efforts have focused on developing real-space formulations for orbital-free DFT and numerical implementations based on finite-element~\cite{Gavini2007,Bala2010,Mrinal2012} and finite-difference discretizations~\cite{Carlos,Phanish,Phanish2}.
In the present work, we build on these prior efforts to develop an efficient real-space formulation of orbital-free DFT employing the widely used non-local Wang-Govind-Carter (WGC)~\cite{Yan1999} kinetic energy functional. As in prior efforts~\cite{Gavini2007,Bala2010}, we reformulate the extended interactions in electrostatics and the non-local terms in the WGC kinetic energy functionals as local variational problems in auxiliary potential fields. However, the proposed reformulation of electrostatic interactions is notably different from previous works, and enables the evaluation of variational configurational forces corresponding to both internal atomic relaxations as well as external cell relaxation under a single framework. Further, the proposed formulation naturally extends to all-electron orbital-free DFT calculations of warm dense matter~\cite{Flavian2006,Flavian2008}. In the proposed real-space formulation, the ground-state orbital-free DFT problem is reformulated as an equivalent saddle point problem of a local functional in electron density, electrostatic potential and the auxiliary potential fields (kernel potentials) accounting for the extended interactions in the kinetic energy functional. We employ a higher-order finite-element basis to discretize the formulation, and demonstrate the optimal numerical convergence of both the ground-state energy and configurational forces with respect to the discretization. Further, we propose an efficient numerical approach to compute the saddle point problem in electron density, electrostatic potential and kernel potentials by expressing the saddle point problem as a fixed point iteration problem, and using a self-consistent field approach to solve the fixed point iteration problem. We subsequently investigate the accuracy and transferability of the proposed real-space formulation of orbital-free DFT for Al and Mg materials systems. 
To this end, we compute the bulk properties of Al, Mg and Al-Mg intermetallics, and compare them with Kohn-Sham DFT. As orbital-free DFT only admits local pseudopotentials, the Kohn-Sham DFT calculations are conducted using both local and non-local pseudopotentials. Our studies indicate that the bulk properties computed using orbital-free DFT for Al, Mg and Al-Mg intermetallics are in good agreement with Kohn-Sham DFT. We further investigate the accuracy of orbital-free DFT by computing the interatomic forces in Al and Mg, which are also in good agreement with Kohn-Sham DFT calculations. Our studies demonstrate that orbital-free DFT is accurate and transferable across a wide range of properties for Al, Mg and Al-Mg intermetallics, and can be used to study properties of these materials systems that require computational domains that are not accessible using Kohn-Sham DFT. For instance, in the present study we computed the formation energy of the $\beta'$ Al-Mg alloy containing $879$ atoms in a unit cell employing the proposed real-space formulation of orbital-free DFT, but the same system was found to be prohibitively expensive using Kohn-Sham DFT. We finally investigate the cell-size effects in the electronic structure of point defects, in particular a mono-vacancy in Al. Prior studies using Fourier-based formulations of orbital-free DFT have suggested that the formation energy of a mono-vacancy in Al is well converged by 108-256 atom cell-sizes~\cite{Ho2007}. However, coarse-grained real-space calculations have suggested that much larger cell-sizes of the order of 1,000 atoms are required for convergence of vacancy formation energies~\cite{Bala2010}, which was also supported by asymptotic estimates~\cite{Gavini2011}.
In order to understand the underpinnings of this discrepancy, we use the finite-element discretized real-space formulation of orbital-free DFT and compute the vacancy formation energy using two boundary conditions: (i) periodic boundary conditions, equivalent to Fourier-space based formulations; (ii) bulk Dirichlet boundary conditions, where the perturbations in the electronic structure arising due to the vacancy vanish on the boundary of the computational domain. Our study suggests that while the vacancy formation energy is well converged by a 108-atom cell-size using periodic boundary conditions, the electronic fields are not well converged by this cell-size. On the other hand, the bulk Dirichlet boundary conditions show a well-converged formation energy as well as electronic fields by cell-sizes of $\sim$1,000 atoms, which is consistent with prior real-space calculations. This study reveals that while periodic boundary conditions show a superior convergence in formation energies owing to the variational nature of the formalism, the true cell-size effects, which also measure the convergence of the electronic fields, are captured by the bulk Dirichlet boundary conditions. We note that the proposed real-space formulation with finite-element discretization is crucial for employing bulk Dirichlet boundary conditions, which enable the study of isolated defects in the bulk. The remainder of the paper is organized as follows. Section II provides a description of the orbital-free DFT problem. Section III presents the proposed real-space formulation of the orbital-free DFT problem, the configurational forces associated with structural relaxations, and the finite-element discretization of the formulation. Section IV discusses the numerical implementation of the formulation and presents an efficient numerical approach for the solution of the saddle point real-space variational problem. 
Section V presents the numerical convergence results of the finite-element discretization of the real-space formulation, the accuracy and transferability of the real-space orbital-free DFT formalism for the Al-Mg materials system, and the study of the role of boundary conditions on the cell-size effects in electronic structure calculations of point defects. We finally conclude with a summary and outlook in Section VI. \section{Orbital-free density functional theory}\label{sec:OFDFT} The ground-state energy of a charge-neutral materials system containing $M$ nuclei and $N$ valence electrons in density functional theory is given by~\cite{ParrYang,Martin} \begin{equation}\label{eq:DFT} E(\rho,\boldsymbol{\textbf{R}})=T_s(\rho)+E_{xc}(\rho)+E_H(\rho)+E_{ext}(\rho,\boldsymbol{\textbf{R}})+E_{zz}(\boldsymbol{\textbf{R}})\,, \end{equation} where $\rho$ denotes the electron density and $\boldsymbol{\textbf{R}}=\{\boldsymbol{\textbf{R}}_{1},\boldsymbol{\textbf{R}}_{2},\ldots,\boldsymbol{\textbf{R}}_{M}\}$ denotes the vector containing the positions of the $M$ nuclei. In the above, $T_s$ denotes the kinetic energy of non-interacting electrons, $E_{xc}$ is the exchange-correlation energy, $E_{H}$ is the Hartree energy or classical electrostatic interaction energy between electrons, $E_{ext}$ is the classical electrostatic interaction energy between electrons and nuclei, and $E_{zz}$ denotes the electrostatic repulsion energy between nuclei. We now discuss the various contributions to the ground-state energy, beginning with the exchange-correlation energy. The exchange-correlation energy, denoted by $E_{xc}$, incorporates all the quantum-mechanical interactions in the ground-state energy of a materials system. 
While the existence of a universal exchange-correlation energy as a functional of the electron density has been established by Hohenberg, Kohn and Sham~\cite{kohn64,kohn65}, its exact functional form has been elusive to date, and various models have been proposed over the past decades. For solid-state calculations, the local density approximation (LDA)~\cite{alder,perdew} and the generalized gradient approximation~\cite{gga1,gga2} have been widely adopted across a range of materials systems. In particular, the LDA exchange-correlation energy, which is adopted in the present work, has the following functional form: \begin{equation}\label{eq:exc} E_{\text{xc}}(\rho) = \int \varepsilon_{\text{xc}}(\rho)\rho(\boldsymbol{\textbf{x}}) \,d\bx\,\,, \end{equation} where $\varepsilon_{\text{xc}}(\rho)=\varepsilon_x(\rho)+\varepsilon_c(\rho)$, and \begin{equation} \varepsilon_x(\rho) = -\frac{3}{4}\left(\frac{3}{\pi}\right)^{1/3}\rho^{1/3}(\boldsymbol{\textbf{x}}) \,\,, \end{equation} \begin{equation}\label{eq:corr} \varepsilon_c(\rho) = \begin{cases} \dfrac{\gamma}{1 + \beta_1\sqrt{r_s} + \beta_2\,r_s}\,, & r_s\geq 1\,,\\ A\,\log r_s + B + C\,r_s\log r_s + D\,r_s\,, & r_s < 1\,, \end{cases} \end{equation} and $r_s = (3/4\pi\rho)^{1/3}$. In the present work, we use in equation~\eqref{eq:corr} the constants of the Perdew-Zunger parametrization of the Ceperley-Alder data~\cite{perdew}. The last three terms in equation~\eqref{eq:DFT} represent electrostatic interactions between electrons and nuclei. 
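As an illustration, the LDA expressions above can be evaluated pointwise. The following Python sketch uses the commonly quoted Perdew-Zunger constants for the unpolarized Ceperley-Alder correlation energy (Hartree atomic units); the constants are an assumption of this sketch and should be checked against the parametrization actually employed.

```python
import numpy as np

# Commonly quoted Perdew-Zunger constants (unpolarized, Hartree units);
# assumed values for illustration -- verify against the reference used.
GAMMA, BETA1, BETA2 = -0.1423, 1.0529, 0.3334   # r_s >= 1 branch
A, B, C, D = 0.0311, -0.048, 0.0020, -0.0116    # r_s <  1 branch

def eps_x(rho):
    """LDA exchange energy per electron: -(3/4)(3/pi)^(1/3) rho^(1/3)."""
    return -0.75 * (3.0 / np.pi) ** (1.0 / 3.0) * np.asarray(rho) ** (1.0 / 3.0)

def eps_c(rho):
    """Correlation energy per electron with r_s = (3/(4 pi rho))^(1/3)."""
    rs = (3.0 / (4.0 * np.pi * np.asarray(rho, dtype=float))) ** (1.0 / 3.0)
    low_density = GAMMA / (1.0 + BETA1 * np.sqrt(rs) + BETA2 * rs)
    high_density = A * np.log(rs) + B + C * rs * np.log(rs) + D * rs
    return np.where(rs >= 1.0, low_density, high_density)

def eps_xc(rho):
    """Total LDA exchange-correlation energy per electron."""
    return eps_x(rho) + eps_c(rho)
```

Both branches are continuous near $r_s=1$ by construction of the fit, and both quantities are negative for physically relevant densities.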
The Hartree energy, or the electrostatic interaction energy between electrons, is given by \begin{equation} E_{H}(\rho) = \frac{1}{2}\int\int\frac{\rho(\boldsymbol{\textbf{x}})\rho(\boldsymbol{\textbf{x}}')}{|\boldsymbol{\textbf{x}} - \boldsymbol{\textbf{x}}'|}\,d\bx\,d\bx'\,.\label{eq:hartree} \end{equation} The interaction energy between electrons and nuclei, in the case of the local pseudopotentials adopted in the present work, is given by \begin{eqnarray}\label{eq:external} E_{ext}(\rho,\boldsymbol{\textbf{R}}) &=& \int \rho(\boldsymbol{\textbf{x}}) V_{ext}(\boldsymbol{\textbf{x}},\boldsymbol{\textbf{R}}) \,d\bx \notag\\ &=&\sum_{J}\int \rho(\boldsymbol{\textbf{x}}) V^{J}_{ps}(|\boldsymbol{\textbf{x}}-\boldsymbol{\textbf{R}}_J|) d\boldsymbol{\textbf{x}}\,, \end{eqnarray} where $V^{J}_{ps}$ denotes the pseudopotential corresponding to the $J^{th}$ nucleus, which, beyond a core radius, is the Coulomb potential corresponding to the effective nuclear charge on the $J^{th}$ nucleus. The nuclear repulsive energy is given by \begin{equation} E_{zz}(\boldsymbol{\textbf{R}}) = \frac{1}{2}\sum_{I}\sum_{J,J \neq I} \frac{Z_I Z_J}{|\boldsymbol{\textbf{R}}_I-\boldsymbol{\textbf{R}}_J|}\,, \label{eq:repulsive} \end{equation} where $Z_I$ denotes the effective nuclear charge on the $I^{th}$ nucleus. The above expression assumes that the core radius of the pseudopotential is smaller than the internuclear distances, which is often the case in most solid-state materials systems. We note that in a non-periodic setting, representing a finite atomic system, all the integrals in equations~\eqref{eq:hartree}-\eqref{eq:external} are over $ {\mathbb{R}^{3}} $ and the summations in equations~\eqref{eq:external}-\eqref{eq:repulsive} include all the atoms. 
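As a concrete illustration of the pairwise sum in equation~\eqref{eq:repulsive}, the following Python sketch evaluates $E_{zz}$ for a finite (non-periodic) cluster of effective point charges in atomic units; the function name and the direct double loop are purely illustrative.

```python
import numpy as np

def nuclear_repulsion(Z, R):
    """E_zz = (1/2) sum_I sum_{J != I} Z_I Z_J / |R_I - R_J| for a finite
    cluster (Hartree atomic units). Z: effective charges, R: (M, 3) positions.
    The factor 1/2 compensates for counting every pair twice."""
    Z = np.asarray(Z, dtype=float)
    R = np.asarray(R, dtype=float)
    E = 0.0
    for I in range(len(Z)):
        for J in range(len(Z)):
            if J != I:
                E += 0.5 * Z[I] * Z[J] / np.linalg.norm(R[I] - R[J])
    return E

# Two unit charges separated by one Bohr repel with energy 1 Hartree.
E_pair = nuclear_repulsion([1.0, 1.0], [[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
```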
In the case of an infinite periodic crystal, all the integrals over $\boldsymbol{\textbf{x}}$ in equations~\eqref{eq:hartree}-\eqref{eq:external} are over the unit cell, whereas the integrals over $\boldsymbol{\textbf{x}}'$ are over $ {\mathbb{R}^{3}} $. Similarly, in equations~\eqref{eq:external}-\eqref{eq:repulsive}, the summation over $I$ is over the atoms in the unit cell, and the summation over $J$ extends over all lattice sites. Henceforth, we will adopt these conventions for the domains of integration and summation. The remaining contribution to the ground-state energy is the kinetic energy of non-interacting electrons, denoted by $T_s$, which is computed exactly in the Kohn-Sham formalism by computing the single-electron wavefunctions (eigenfunctions) in the mean field~\cite{Martin}. The conventional solution of the Kohn-Sham eigenvalue problem, which entails the computation of the lowest $N$ eigenfunctions and eigenvalues of the Kohn-Sham Hamiltonian, scales as $O(N^3)$, which becomes prohibitively expensive for materials systems containing a few thousand atoms. While efforts have focused on reducing the computational complexity of the Kohn-Sham eigenvalue problem~\cite{goedecker99,bow2012}, this remains a significant challenge, especially in the case of metallic systems. In order to avoid the computational complexity of solving for the wavefunctions to compute $T_s$, the orbital-free approach to DFT models the kinetic energy of non-interacting electrons as an explicit functional of the electron density~\cite{ParrYang}. These models are based on theoretically known properties of $T_{s}$ for the uniform electron gas, perturbations of the uniform electron gas, and the linear response of the uniform electron gas~\cite{ParrYang,Teter1992,Madden1994,Yan1998,Yan1999}. 
As the orbital-free models for the kinetic energy functional are based on properties of the uniform electron gas, their validity is often limited to materials systems whose electronic structure is close to a free electron gas, in particular, the alkali and alkaline earth metals. Further, as the orbital-free approach describes the ground-state energy as an explicit functional of the electron density, it restricts pseudopotential calculations to local pseudopotentials. While these restrictions constrain the applicability of the orbital-free approach, numerical investigations~\cite{Yan1999,Huang2008} indicate that recently developed orbital-free kinetic energy functionals and local pseudopotentials can provide good accuracy for Al and Mg, which are technologically important materials systems. Further, there are ongoing efforts in developing orbital-free kinetic energy models for covalently bonded systems and transition metals~\cite{Xia2012,Huang2012}. In the present work, we restrict our focus to the Wang-Govind-Carter (WGC) density-dependent orbital-free kinetic energy functional~\cite{Yan1999}, which is a widely used kinetic energy functional for ground-state calculations of materials systems with an electronic structure close to a free electron gas. 
In particular, the functional form of the WGC orbital-free kinetic energy functional is given by \begin{equation}\label{eqn:KE} T_s(\rho)=C_F\int \rho^{5/3}(\boldsymbol{\textbf{x}})\,d\boldsymbol{\textbf{x}} + \frac{1}{2}\int |\nabla \sqrt{\rho(\boldsymbol{\textbf{x}})}|^2\,d\boldsymbol{\textbf{x}} + T_{K}(\rho)\,, \end{equation} where \begin{eqnarray} T_{K}(\rho)=C_F\int\int \rho^{\alpha}(\boldsymbol{\textbf{x}})\,K(\xi_{\gamma}(\boldsymbol{\textbf{x}},\boldsymbol{\textbf{x}}'),|\boldsymbol{\textbf{x}}-\boldsymbol{\textbf{x}}'|)\,\rho^{\beta}(\boldsymbol{\textbf{x}}')\,d\boldsymbol{\textbf{x}}\,d\boldsymbol{\textbf{x}}'\,,\notag\\ \xi_{\gamma}(\boldsymbol{\textbf{x}},\boldsymbol{\textbf{x}}')=\Big(\frac{k_F^{\gamma}(\boldsymbol{\textbf{x}})+k_F^{\gamma}(\boldsymbol{\textbf{x}}')}{2}\Big)^{1/\gamma}, \quad k_F(\boldsymbol{\textbf{x}})=\big(3\pi^2\rho(\boldsymbol{\textbf{x}})\big)^{1/3}\,.\notag \end{eqnarray} In equation~\eqref{eqn:KE}, the first term denotes the Thomas-Fermi energy with $C_F=\frac{3}{10}(3\pi^2)^{2/3}$, and the second term denotes the von Weizs\"{a}cker correction~\cite{ParrYang}. The last term denotes the density-dependent kernel energy, $T_{K}$, where the kernel $K$ is chosen such that the linear response of a uniform electron gas is given by the Lindhard response~\cite{Finnis}. In the WGC functional~\cite{Yan1999}, the parameters are chosen to be $\{\alpha,\beta\}=\{5/6+\sqrt{5}/6,5/6-\sqrt{5}/6\}$ and $\gamma=2.7$. 
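The quantities entering the WGC functional are straightforward to evaluate pointwise. The following Python sketch collects the stated parameters and the definitions of $k_F$ and $\xi_{\gamma}$, together with a uniform-grid quadrature of the Thomas-Fermi term; the grid quadrature is an assumption made purely for illustration.

```python
import numpy as np

# WGC parameters as quoted in the text
ALPHA = 5.0 / 6.0 + np.sqrt(5.0) / 6.0
BETA = 5.0 / 6.0 - np.sqrt(5.0) / 6.0
GAMMA_WGC = 2.7
C_F = 0.3 * (3.0 * np.pi ** 2) ** (2.0 / 3.0)   # Thomas-Fermi constant

def k_fermi(rho):
    """Local Fermi wavevector k_F(x) = (3 pi^2 rho(x))^(1/3)."""
    return (3.0 * np.pi ** 2 * rho) ** (1.0 / 3.0)

def xi_gamma(rho_x, rho_xp, gamma=GAMMA_WGC):
    """Two-point average xi_gamma = ((k_F^gamma(x) + k_F^gamma(x'))/2)^(1/gamma);
    it reduces to k_F when the two densities coincide."""
    return (0.5 * (k_fermi(rho_x) ** gamma + k_fermi(rho_xp) ** gamma)) ** (1.0 / gamma)

def thomas_fermi_energy(rho, dV):
    """C_F * integral of rho^(5/3), approximated on a uniform grid with
    volume element dV (illustrative quadrature)."""
    return C_F * np.sum(np.asarray(rho) ** (5.0 / 3.0)) * dV
```

Note that the stated parameters satisfy $\alpha+\beta=5/3$, consistent with the uniform-gas limit of the kernel term.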
For materials systems whose electronic structure is close to a free-electron gas, the Taylor expansion of the density dependent kernel about a reference electron density ($\rho_0$), often considered to be the average electron density of the bulk crystal, is employed and is given by \begin{widetext} \begin{eqnarray}\label{eq:ker} \begin{split} K(\xi_{\gamma}(\boldsymbol{\textbf{x}},\boldsymbol{\textbf{x}}'),|\boldsymbol{\textbf{x}}-\boldsymbol{\textbf{x}}'|) =\,& K_0(|\boldsymbol{\textbf{x}}-\boldsymbol{\textbf{x}}'|)+K_1(|\boldsymbol{\textbf{x}}-\boldsymbol{\textbf{x}}'|)\big(\Delta\rho(\boldsymbol{\textbf{x}})+\Delta\rho(\boldsymbol{\textbf{x}}')\big)+\frac{1}{2}K_{11}(|\boldsymbol{\textbf{x}}-\boldsymbol{\textbf{x}}'|)\big((\Delta\rho(\boldsymbol{\textbf{x}}))^2+(\Delta\rho(\boldsymbol{\textbf{x}}'))^2\big)\\ & + K_{12}(|\boldsymbol{\textbf{x}}-\boldsymbol{\textbf{x}}'|)\Delta\rho(\boldsymbol{\textbf{x}})\Delta\rho(\boldsymbol{\textbf{x}}')+\ldots \,\,. \end{split} \end{eqnarray} \end{widetext} In the above equation, $\Delta\rho(\boldsymbol{\textbf{x}}) = \rho(\boldsymbol{\textbf{x}})-\rho_0$ and the density independent kernels resulting from the Taylor expansion are given by \begin{eqnarray} K_0(|\boldsymbol{\textbf{x}}-\boldsymbol{\textbf{x}}'|) = K(\xi_{\gamma},|\boldsymbol{\textbf{x}}-\boldsymbol{\textbf{x}}'|)\Big|_{\rho=\rho_0}\notag\\ K_1(|\boldsymbol{\textbf{x}}-\boldsymbol{\textbf{x}}'|) = \frac{\partial K(\xi_{\gamma},|\boldsymbol{\textbf{x}}-\boldsymbol{\textbf{x}}'|)}{\partial \rho(\boldsymbol{\textbf{x}})}\Big|_{\rho=\rho_0}\notag\\ K_{11}(|\boldsymbol{\textbf{x}}-\boldsymbol{\textbf{x}}'|) = \frac{\partial^2 K(\xi_{\gamma},|\boldsymbol{\textbf{x}}-\boldsymbol{\textbf{x}}'|)}{\partial \rho^2(\boldsymbol{\textbf{x}})}\Big|_{\rho=\rho_0}\notag\\ K_{12}(|\boldsymbol{\textbf{x}}-\boldsymbol{\textbf{x}}'|) = \frac{\partial^2 K(\xi_{\gamma},|\boldsymbol{\textbf{x}}-\boldsymbol{\textbf{x}}'|)}{\partial \rho(\boldsymbol{\textbf{x}}) \partial 
\rho(\boldsymbol{\textbf{x}}')}\Big|_{\rho=\rho_0}\notag\\ \ldots \end{eqnarray} Numerical investigations have suggested that the Taylor expansion to second order provides a good approximation of the density-dependent kernel for materials systems with an electronic structure close to a free electron gas~\cite{Choly2002,Yan1999}. In particular, in the second-order Taylor expansion, the contribution from $K_{12}$ has been found to dominate the contributions from $K_{11}$. Thus, in practical implementations, only the contribution from $K_{12}$ among the second-order terms is often retained for computational efficiency. \section{Real-space formulation of orbital-free DFT}\label{sec:RS-OFDFT} In this section, we present the local variational real-space reformulation of orbital-free DFT, the configurational forces associated with internal ionic relaxations and cell relaxation, and the finite-element discretization of the formulation. \subsection{Local real-space formulation}~\label{sec:RS-formulation} We recall that the various components of the ground-state energy of a materials system (cf. section~\ref{sec:OFDFT}) are local in real-space, except the electrostatic interaction energy and the kernel energy component of the WGC orbital-free kinetic energy functional, which are extended in real-space. Conventionally, these extended interactions are computed in Fourier space to take advantage of the efficient evaluation of convolution integrals using Fourier transforms. For this reason, Fourier-space formulations have been the most popular and widely used in orbital-free DFT calculations~\cite{PROFESS2,PROFESS3}. However, Fourier-space formulations employing the plane-wave basis result in some significant limitations. Foremost of these is the severe restriction to periodic geometries and boundary conditions. While this is not a limitation in the study of bulk properties of materials, it is a significant limitation in the study of defects in materials. 
For instance, the geometry of a single isolated dislocation in bulk is not compatible with periodic geometries, and, thus, prior electronic structure studies have mostly been limited to artificial dipole and quadrupole arrangements of dislocations. Further, numerical implementations of Fourier-space formulations also suffer from limited scalability on parallel computing platforms. Moreover, the plane-wave discretization employed in a Fourier-space formulation provides a uniform spatial resolution, which is not suitable for the development of coarse-graining techniques---such as the quasi-continuum method~\cite{QCOFDFT}---that rely on an adaptive spatial resolution of the basis. We now propose a real-space formulation that is devoid of the aforementioned limitations of a Fourier-space formulation. The proposed approach, in spirit, follows along the same lines as recent efforts~\cite{Gavini2007, Bala2010}, but the proposed formulation differs importantly in the way the extended electrostatic interactions are treated. In particular, the proposed formulation provides a unified framework to compute the configurational forces associated with both internal ionic and cell relaxations, discussed in Section~\ref{sec:ConfigurationalForces}. We begin by considering the electrostatic interactions that are extended in real space. We denote by $\tilde{\delta}(\boldsymbol{\textbf{x}}-\boldsymbol{\textbf{R}}_I)$ a regularized Dirac distribution located at $\boldsymbol{\textbf{R}}_I$, and the $I^{th}$ nuclear charge is given by the charge distribution $-Z_I \tilde{\delta}(\boldsymbol{\textbf{x}}-\boldsymbol{\textbf{R}}_I)$. 
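The text does not fix the shape of the regularized Dirac distribution $\tilde{\delta}$. The following Python sketch uses a normalized Gaussian as one convenient choice (an illustrative assumption, not the paper's prescription) and checks the two properties the formulation relies on: unit total charge, and a nuclear potential that reduces to $-Z/r$ outside the smearing region.

```python
import numpy as np
from math import erf

def delta_reg(r, sigma):
    """Regularized Dirac distribution: a normalized 3D Gaussian of width
    sigma (illustrative choice); integrates to one over all space."""
    return np.exp(-r ** 2 / (2.0 * sigma ** 2)) / (2.0 * np.pi * sigma ** 2) ** 1.5

def nuclear_potential(r, Z, sigma):
    """Potential of the smeared nuclear charge -Z*delta_reg at scalar radius r;
    for a Gaussian this is -Z erf(r/(sqrt(2) sigma))/r, tending to -Z/r."""
    return -Z * erf(r / (np.sqrt(2.0) * sigma)) / r

# Normalization check: 4*pi * int r^2 delta_reg(r) dr = 1 (trapezoidal rule).
r = np.linspace(1e-6, 8.0, 40001)
y = 4.0 * np.pi * r ** 2 * delta_reg(r, 0.4)
norm = np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(r))
```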
Defining $\rho_{nu} (\boldsymbol{\textbf{x}})=-\sum_{I} Z_I \tilde{\delta}(|\boldsymbol{\textbf{x}}-\boldsymbol{\textbf{R}}_{I}|)$ and $\rho_{nu} (\boldsymbol{\textbf{x}}')=-\sum_{J} Z_J \tilde{\delta}(|\boldsymbol{\textbf{x}}'-\boldsymbol{\textbf{R}}_J|)$, the repulsive energy $E_{zz}$ can subsequently be reformulated as \begin{equation}\label{eq:repulsiveEnergy} E_{zz} = \frac{1}{2}\int\int \frac{\rho_{nu}(\boldsymbol{\textbf{x}})\rho_{nu}(\boldsymbol{\textbf{x}}')}{|\boldsymbol{\textbf{x}}-\boldsymbol{\textbf{x}}'|} d\boldsymbol{\textbf{x}} d\boldsymbol{\textbf{x}}' - E_{self} \,, \end{equation} where $E_{self}$ denotes the self energy of the nuclear charges and is given by \begin{equation}\label{eq:selfEnergy} E_{self}=\frac{1}{2}\sum_{I}\int\int \frac{Z_I\tilde{\delta}(|\boldsymbol{\textbf{x}}-\boldsymbol{\textbf{R}}_{I}|)Z_I\tilde{\delta}(|\boldsymbol{\textbf{x}}'-\boldsymbol{\textbf{R}}_{I}|)}{|\boldsymbol{\textbf{x}}-\boldsymbol{\textbf{x}}'|} d\boldsymbol{\textbf{x}} d\boldsymbol{\textbf{x}}' \,. \end{equation} We denote the electrostatic potential corresponding to the $I^{th}$ nuclear charge ($-Z_I\tilde{\delta}(|\boldsymbol{\textbf{x}}'-\boldsymbol{\textbf{R}}_{I}|)$) as $\bar{V}_{\tilde{\delta}}^{I}(\boldsymbol{\textbf{x}})$, which is given by \begin{equation}\label{eq:selfPotential} \bar{V}^{I}_{\tilde{\delta}}(\boldsymbol{\textbf{x}}) = -\int \frac{Z_I\tilde{\delta}(|\boldsymbol{\textbf{x}}'-\boldsymbol{\textbf{R}}_{I}|)}{|\boldsymbol{\textbf{x}}-\boldsymbol{\textbf{x}}'|} d\boldsymbol{\textbf{x}}'\,. \end{equation} The self energy, thus, can be expressed as \begin{equation}\label{eq:selfEnergy2} E_{self} = -\frac{1}{2}\sum_{I}\int Z_I\tilde{\delta}(|\boldsymbol{\textbf{x}}-\boldsymbol{\textbf{R}}_{I}|)\bar{V}^{I}_{\tilde{\delta}}(\boldsymbol{\textbf{x}}) d\boldsymbol{\textbf{x}}\,. 
\end{equation} Noting that the kernel corresponding to the extended electrostatic interactions in equations~\eqref{eq:selfEnergy}-\eqref{eq:selfPotential} is the Green's function of the Laplace operator, the electrostatic potential and the electrostatic energy can be computed by taking recourse to the solution of a Poisson equation, or, equivalently, the following local variational problem: \begin{widetext} \begin{subequations}\label{eq:selfEnergyLocal} \begin{equation} E_{self} = -\sum_{I} \min_{V^{I}\in H^1( {\mathbb{R}^{3}} )} \Big\{\frac{1}{8\pi}\int |\nabla V^{I}(\boldsymbol{\textbf{x}})|^2 d\boldsymbol{\textbf{x}} + \int Z_I\tilde{\delta}(|\boldsymbol{\textbf{x}}-\boldsymbol{\textbf{R}}_{I}|) V^I(\boldsymbol{\textbf{x}}) d\boldsymbol{\textbf{x}} \Big\}\,, \end{equation} \begin{equation} \bar{V}^{I}_{\tilde{\delta}}(\boldsymbol{\textbf{x}}) = arg\,\min_{V^{I}\in H^1( {\mathbb{R}^{3}} )} \Big\{\frac{1}{8\pi}\int |\nabla V^{I}(\boldsymbol{\textbf{x}})|^2 d\boldsymbol{\textbf{x}} + \int Z_I\tilde{\delta}(|\boldsymbol{\textbf{x}}-\boldsymbol{\textbf{R}}_{I}|) V^I(\boldsymbol{\textbf{x}}) d\boldsymbol{\textbf{x}} \Big\}\,. \end{equation} \end{subequations} \end{widetext} In the above, $H^1( {\mathbb{R}^{3}} )$ denotes the Hilbert space of functions such that the functions and their first-order derivatives are square integrable on $ {\mathbb{R}^{3}} $. We next consider the electrostatic interaction energy corresponding to both electron and nuclear charge distribution. We denote this by $J(\rho,\rho_{nu})$, which is given by \begin{equation} J(\rho,\rho_{nu}) = \frac{1}{2}\int\int \frac{\big(\rho(\boldsymbol{\textbf{x}})+\rho_{nu}(\boldsymbol{\textbf{x}})\big)\big(\rho(\boldsymbol{\textbf{x}}')+\rho_{nu}(\boldsymbol{\textbf{x}}')\big)}{|\boldsymbol{\textbf{x}}-\boldsymbol{\textbf{x}}'|} d\boldsymbol{\textbf{x}} d\boldsymbol{\textbf{x}}'\,. 
\end{equation} We denote the electrostatic potential corresponding to the total charge distribution (electron and nuclear charge distribution) as $\bar{\phi}$, which is given by \begin{equation} \bar{\phi}(\boldsymbol{\textbf{x}}) = \int \frac{\rho(\boldsymbol{\textbf{x}}')+\rho_{nu}(\boldsymbol{\textbf{x}}')}{|\boldsymbol{\textbf{x}}-\boldsymbol{\textbf{x}}'|}d\boldsymbol{\textbf{x}}' \,. \end{equation} The electrostatic interaction energy of the total charge distribution, in terms of $\bar{\phi}$, is given by \begin{equation} J(\rho,\rho_{nu}) = \frac{1}{2}\int (\rho(\boldsymbol{\textbf{x}})+\rho_{nu}(\boldsymbol{\textbf{x}}))\bar{\phi}(\boldsymbol{\textbf{x}}) d \boldsymbol{\textbf{x}} \,. \end{equation} As before, the electrostatic interaction energy as well as the potential of the total charge distribution can be reformulated as the following local variational problem: \begin{widetext} \begin{subequations}\label{eq:TotElecEnergyLocal} \begin{equation} J(\rho,\rho_{nu}) = - \min_{\phi\in \mathcal{Y}} \Big\{\frac{1}{8\pi}\int |\nabla \phi(\boldsymbol{\textbf{x}})|^2 d\boldsymbol{\textbf{x}} - \int (\rho(\boldsymbol{\textbf{x}})+\rho_{nu}(\boldsymbol{\textbf{x}}))\phi(\boldsymbol{\textbf{x}}) d\boldsymbol{\textbf{x}} \Big\}\,, \end{equation} \begin{equation} \bar{\phi}(\boldsymbol{\textbf{x}}) = arg\, \min_{\phi\in \mathcal{Y}} \Big\{\frac{1}{8\pi}\int |\nabla \phi(\boldsymbol{\textbf{x}})|^2 d\boldsymbol{\textbf{x}} - \int (\rho(\boldsymbol{\textbf{x}})+\rho_{nu}(\boldsymbol{\textbf{x}}))\phi(\boldsymbol{\textbf{x}}) d\boldsymbol{\textbf{x}} \Big\}\,. \end{equation} \end{subequations} \end{widetext} In the above, $\mathcal{Y}$ is a suitable function space corresponding to the boundary conditions of the problem. In particular, for non-periodic problems, such as an isolated cluster of atoms, $\mathcal{Y}=H^1( {\mathbb{R}^{3}} )$. 
For periodic problems, $\mathcal{Y}=H^1_{per}(Q)$, where $Q$ denotes the unit cell and $H^1_{per}(Q)$ denotes the space of periodic functions on $Q$ such that the functions and their first-order derivatives are square integrable. The electrostatic interaction energy in DFT, comprising $E_H$, $E_{ext}$ and $E_{zz}$ (cf. equations~\eqref{eq:hartree}-\eqref{eq:repulsive}), can be rewritten in terms of $J(\rho,\rho_{nu})$ and $E_{self}$ as \begin{widetext} \begin{equation} E_{H}(\rho)+E_{ext} (\rho, \boldsymbol{\textbf{R}}) +E_{zz}(\boldsymbol{\textbf{R}}) = J(\rho,\rho_{nu})+\sum_{J}\int (V^{J}_{ps}(|\boldsymbol{\textbf{x}}-\boldsymbol{\textbf{R}}_{J}|)-\bar{V}^{J}_{\tilde{\delta}}(|\boldsymbol{\textbf{x}}-\boldsymbol{\textbf{R}}_{J}|))\rho(\boldsymbol{\textbf{x}}) d\boldsymbol{\textbf{x}} - E_{self} \,. \end{equation} \end{widetext} For convenience of representation, we will denote by $\mathcal{V}=\{V^1,V^2,\ldots,V^M\}$ the vector containing the electrostatic potentials corresponding to all the nuclear charges in the simulation domain. Using the local reformulation of $J(\rho,\rho_{nu})$ and $E_{self}$ (cf. 
equations~\eqref{eq:selfEnergyLocal} and \eqref{eq:TotElecEnergyLocal}), the electrostatic interaction energy in DFT can now be expressed as the following local variational problem: \begin{widetext} \begin{subequations}\label{eq:ElecRSreformulation} \begin{equation} E_{H} + E_{ext} +E_{zz} = \max_{\phi\in \mathcal{Y}} \,\, \min_{V^{I}\in H^1( {\mathbb{R}^{3}} )} \mathcal{L}_{el} (\phi,\mathcal{V},\rho,\mathbf{R}) \end{equation} \begin{equation} \begin{split} \mathcal{L}_{el }(\phi,\mathcal{V},\rho,\mathbf{R}) = & - \frac{1}{8\pi}\int |\nabla \phi(\boldsymbol{\textbf{x}})|^2 d\boldsymbol{\textbf{x}} + \int (\rho(\boldsymbol{\textbf{x}})+\rho_{nu}(\boldsymbol{\textbf{x}}))\phi(\boldsymbol{\textbf{x}}) d\boldsymbol{\textbf{x}} + \sum_{J}\int (V^{J}_{ps}(|\boldsymbol{\textbf{x}}-\boldsymbol{\textbf{R}}_{J}|)-\bar{V}^{J}_{\tilde{\delta}}(|\boldsymbol{\textbf{x}}-\boldsymbol{\textbf{R}}_{J}|))\rho(\boldsymbol{\textbf{x}}) d\boldsymbol{\textbf{x}} \\ & + \sum_{I} \left\{\frac{1}{8\pi}\int |\nabla V^{I}(\boldsymbol{\textbf{x}})|^2 d\boldsymbol{\textbf{x}} + \int Z_I\tilde{\delta}(|\boldsymbol{\textbf{x}}-\boldsymbol{\textbf{R}}_{I}|) V^{I}(\boldsymbol{\textbf{x}}) d\boldsymbol{\textbf{x}} \right\} \,. \end{split} \end{equation} \end{subequations} \end{widetext} In the above, the minimization over $V^{I}$ represents a simultaneous minimization over all electrostatic potentials corresponding to $I=1,2,\ldots,M$. We note that, while the above reformulation of electrostatic interactions has been developed for pseudopotential calculations, this can also be extended to all-electron calculations in a straightforward manner by using $V^{J}_{ps} = \bar{V}^J_{\tilde{\delta}}$ and $Z_I$ to be the total nuclear charge in the above expressions. Thus, this local reformulation provides a unified framework for both pseudopotential as well as all-electron DFT calculations. 
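The equivalence between the local variational problems above (e.g. equation~\eqref{eq:TotElecEnergyLocal}) and the corresponding Poisson equation $\nabla^2\bar{\phi}=-4\pi(\rho+\rho_{nu})$ can be checked numerically. The following Python sketch does so on a periodic one-dimensional grid; the FFT solve is only a convenient stand-in for the finite-element solution adopted in the paper, and the source term is illustrative.

```python
import numpy as np

def solve_poisson(b, L):
    """Solve phi'' = -4*pi*b on a periodic 1D grid of length L via FFT.
    b must have zero mean; the arbitrary constant in phi is fixed to zero."""
    n = len(b)
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)
    b_hat = np.fft.fft(b)
    phi_hat = np.zeros_like(b_hat)
    phi_hat[1:] = 4.0 * np.pi * b_hat[1:] / k[1:] ** 2
    return np.fft.ifft(phi_hat).real

def electrostatic_functional(phi, b, L):
    """Discrete -(1/8 pi) int |phi'|^2 + int b*phi, which is maximized
    by the solution of the Poisson equation."""
    n = len(phi)
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)
    dphi = np.fft.ifft(1j * k * np.fft.fft(phi)).real
    return (-np.sum(dphi ** 2) / (8.0 * np.pi) + np.sum(b * phi)) * (L / n)

n, L = 128, 10.0
x = np.arange(n) * L / n
b = np.cos(2.0 * np.pi * x / L)   # mean-zero stand-in for rho + rho_nu
phi = solve_poisson(b, L)
```

For this single-mode source the solution is analytic, $\phi = (4\pi/k_0^2)\cos(k_0 x)$ with $k_0=2\pi/L$, and any perturbation of $\phi$ lowers the functional, illustrating the max characterization.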
We now consider the local reformulation of the extended interactions in the kernel energy component of the WGC orbital-free kinetic energy functional (cf. equation~\eqref{eq:ker}). Here we adopt the recently developed local real-space reformulation of the kernel energy~\cite{Bala2010,Mrinal2012}, and recall the key ideas and the local reformulation for the sake of completeness. We present the local reformulation for $K_0$; the local reformulations for the other kernels ($K_1$, $K_{11}$, $K_{12}$) follow along similar lines. Consider the kernel energy corresponding to $K_0$ given by \begin{equation}\label{eq:ker0_energy} T_{K_0}(\rho)=C_F\int\int \rho^{\alpha}(\boldsymbol{\textbf{x}})\,K_0(|\boldsymbol{\textbf{x}}-\boldsymbol{\textbf{x}}'|)\,\rho^{\beta}(\boldsymbol{\textbf{x}}')\,d\boldsymbol{\textbf{x}}\,d\boldsymbol{\textbf{x}}'\,. \end{equation} We define the potentials $v^0_{\alpha}$ and $v^0_{\beta}$ as \begin{eqnarray}\label{eq:kerpotential_v} v^0_{\alpha}(\boldsymbol{\textbf{x}})=\int K_0(|\boldsymbol{\textbf{x}}-\boldsymbol{\textbf{x}}'|)\rho^{\alpha}(\boldsymbol{\textbf{x}}') d\boldsymbol{\textbf{x}}' \,,\notag \\ v^0_{\beta}(\boldsymbol{\textbf{x}})=\int K_0(|\boldsymbol{\textbf{x}}-\boldsymbol{\textbf{x}}'|)\rho^{\beta}(\boldsymbol{\textbf{x}}') d\boldsymbol{\textbf{x}}' \,. \end{eqnarray} Taking the Fourier transform of the above expressions, we obtain \begin{eqnarray}\label{eq:kerFT} \widehat{{v}^0_{\alpha}}(\mathbf{k})=\widehat{K_0}(|\mathbf{k}|)\widehat{\rho^{\alpha}}(\mathbf{k})\,,\notag \\ \widehat{{v}^0_{\beta}}(\mathbf{k})=\widehat{K_0}(|\mathbf{k}|)\widehat{\rho^{\beta}}(\mathbf{k})\,. 
\end{eqnarray} Following the ideas developed by Choly \& Kaxiras~\cite{Choly2002}, $\widehat{K_0}$ can be approximated to very good accuracy by a sum of partial fractions of the following form: \begin{equation}\label{eq:kernelAprrox} \widehat{K_0}(|\mathbf{k}|)\approx \sum_{j=1}^{m}\frac{A_j|\mathbf{k}|^2}{|\mathbf{k}|^2+B_j}, \end{equation} where $A_j$, $B_j$, $j=1\ldots m$ are constants, possibly complex, that are determined using a best-fit approximation. Using this approximation and taking the inverse Fourier transform of equation~\eqref{eq:kerFT}, the potentials in equation~\eqref{eq:kerpotential_v} reduce to \begin{eqnarray} v^0_{\alpha}(\boldsymbol{\textbf{x}})=\sum\limits_{j=1}^m\,[\omega^0_{\alpha_j}(\boldsymbol{\textbf{x}})+A_j \rho^{\alpha}(\boldsymbol{\textbf{x}})]\,,\notag \\ v^0_{\beta}(\boldsymbol{\textbf{x}})=\sum\limits_{j=1}^m\,[\omega^0_{\beta_j}(\boldsymbol{\textbf{x}})+A_j \rho^{\beta}(\boldsymbol{\textbf{x}})]\,, \end{eqnarray} where $\omega^0_{\alpha_j}(\boldsymbol{\textbf{x}})$ and $\omega^0_{\beta_j}(\boldsymbol{\textbf{x}})$ for $j=1\ldots m$ are given by the following Helmholtz equations: \begin{eqnarray}\label{eq:Helmholtz} &&-\nabla^2\omega^0_{\alpha_j}+B_j\omega^0_{\alpha_j}+A_jB_j \rho^{\alpha}=0\,,\notag\\ &&-\nabla^2\omega^0_{\beta_j}+B_j\omega^0_{\beta_j}+A_jB_j \rho^{\beta}=0\,. \end{eqnarray} We refer to these auxiliary potentials, $\omega^0_{\alpha}=\{\omega^0_{\alpha_1},\ldots,\omega^0_{\alpha_m}\}$ and $\omega^0_{\beta}=\{\omega^0_{\beta_1},\ldots,\omega^0_{\beta_m}\}$, introduced in the local reformulation of the kernel energy, as \emph{kernel potentials}. 
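The role of the kernel potentials can be verified on a toy problem: applying a single partial-fraction term of $\widehat{K_0}$ directly in Fourier space must agree with forming $\omega + A\,\rho^{\alpha}$ from the solution of the corresponding Helmholtz equation. The following Python sketch demonstrates this on a periodic one-dimensional grid; the constants $A$, $B$, the grid, and the spectral Helmholtz solve are all illustrative (the paper's formulation solves the Helmholtz problem variationally in real space).

```python
import numpy as np

def kernel_term_direct(f, A, B, L):
    """Apply K_hat(k) = A*k^2/(k^2 + B) to f by convolution in Fourier space
    (periodic 1D grid of length L; A, B are illustrative fit constants)."""
    n = len(f)
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)
    return np.fft.ifft(A * k ** 2 / (k ** 2 + B) * np.fft.fft(f)).real

def kernel_term_via_helmholtz(f, A, B, L):
    """Same result via the kernel potential omega solving
    -omega'' + B*omega + A*B*f = 0, so that (K * f)(x) = omega(x) + A*f(x)."""
    n = len(f)
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)
    omega = np.fft.ifft(-A * B * np.fft.fft(f) / (k ** 2 + B)).real
    return omega + A * f

n, L = 64, 8.0
x = np.arange(n) * L / n
f = np.exp(np.cos(2.0 * np.pi * x / L))    # smooth periodic stand-in for rho^alpha
v_direct = kernel_term_direct(f, 1.3, 0.7, L)
v_helm = kernel_term_via_helmholtz(f, 1.3, 0.7, L)
```

The identity behind the agreement is $A k^2/(k^2+B) = A - AB/(k^2+B)$, which is exactly the split into the local term $A f$ and the Helmholtz-screened kernel potential.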
Expressing the Helmholtz equations in a variational form, we reformulate $T_{K_0}$ in equation~\eqref{eq:ker0_energy} as the following local variational problem in the kernel potentials: \begin{subequations}\label{eq:kernel_variational} \begin{equation} T_{K_0}(\rho)= \min_{\omega^0_{\alpha_j}\in \mathcal{Y}}\max_{\omega^0_{\beta_j}\in \mathcal{Y}}\, \mathcal{L}_{K_0} (\omega^0_{\alpha}, \omega^0_{\beta}, \rho)\,, \end{equation} \begin{equation} \begin{split} &\mathcal{L}_{K_0} (\omega^0_{\alpha}, \omega^0_{\beta}, \rho) = \sum_{j=1}^{m}C_F\Big\{\int\big[ \frac{1}{A_jB_j}\nabla\omega^0_{\alpha_j}(\boldsymbol{\textbf{x}}) \cdot\nabla\omega^0_{\beta_j}(\boldsymbol{\textbf{x}}) \\ & + \frac{1}{A_j}\omega^0_{\alpha_j}(\boldsymbol{\textbf{x}})\omega^0_{\beta_j}(\boldsymbol{\textbf{x}}) + \omega^0_{\beta_j}(\boldsymbol{\textbf{x}})\rho^{\alpha}(\boldsymbol{\textbf{x}}) +\omega^0_{\alpha_j}(\boldsymbol{\textbf{x}})\rho^{\beta}(\boldsymbol{\textbf{x}}) \\ & +A_j\rho^{(\alpha+\beta)}(\boldsymbol{\textbf{x}})\big]d\boldsymbol{\textbf{x}}\Big\}\,. \end{split} \end{equation} \end{subequations} The variational problem in equation~\eqref{eq:kernel_variational} represents a simultaneous saddle point problem on the kernel potentials $\omega^0_{\alpha_j}$ and $\omega^0_{\beta_j}$ for $j=1,\ldots,m$. Following a similar procedure, we construct the local variational reformulations for the kernel energies $T_{K_1}$, $T_{K_{11}}$ and $T_{K_{12}}$ corresponding to the kernels $K_{1}$, $K_{11}$ and $K_{12}$, respectively. We denote by $\mathcal{L}_{K_1}(\omega^1_{\alpha}, \omega^1_{\beta}, \rho)$, $\mathcal{L}_{K_{11}}(\omega^{11}_{\alpha}, \omega^{11}_{\beta}, \rho)$ and $\mathcal{L}_{K_{12}}(\omega^{12}_{\alpha}, \omega^{12}_{\beta}, \rho)$ the Lagrangians, with their respective kernel potentials, corresponding to the kernel energies of $K_{1}$, $K_{11}$ and $K_{12}$. We refer to the supplemental material for the numerical details of the approximations for each of the kernels used in the present work. 
Finally, using the local variational reformulations of the extended electrostatic and kernel energies, the problem of computing the ground-state energy for given positions of the atoms is given by the following local variational problem in the electron density, electrostatic potentials, and kernel potentials: \begin{widetext} \begin{equation}\label{eq:locVar1} \begin{split} E_0(\mathbf{R})= \min_{\sqrt{\rho} \in \mathcal{X}} \max_{\phi \in \mathcal{Y}} \min_{\omega^{s}_{\alpha_j}\in \mathcal{Y}}\max_{\omega^{s}_{\beta_j}\in \mathcal{Y}} \, \Big\{ & C_F \int {\rho (\boldsymbol{\textbf{x}})^{5/3}}\,d\boldsymbol{\textbf{x}} + \frac{1}{2}\int |\nabla \sqrt{\rho(\boldsymbol{\textbf{x}})}|^2\,d\boldsymbol{\textbf{x}} + \int \varepsilon_{\text{xc}}(\rho)\rho(\boldsymbol{\textbf{x}})\,d\bx \\ & + \sum_{s}\mathcal{L}_{K_s} (\omega^s_{\alpha}, \omega^s_{\beta}, \rho) + \min_{V^{I}\in H^1( {\mathbb{R}^{3}} )} \mathcal{L}_{el} (\phi,\mathcal{V},\rho,\mathbf{R}) \Big\}\,. \end{split} \end{equation} \end{widetext} In the above, $s$ denotes the index corresponding to a kernel, and $\mathcal{X}$ and $\mathcal{Y}$ are suitable function spaces corresponding to the boundary conditions of the problem. In particular, for periodic problems, $\mathcal{Y}=H^1_{per}(Q)$ and $\mathcal{X}=\{\sqrt{\rho}\,|\,\sqrt{\rho}\in H^1_{per}(Q), \int \rho=N\}$. It is convenient to use the substitution $u(\boldsymbol{\textbf{x}})=\sqrt{\rho(\boldsymbol{\textbf{x}})}$, and enforce the integral constraint in $\mathcal{X}$ using a Lagrange multiplier. Also, for the sake of notational simplicity, we will denote by $\omega_{\alpha}$ and $\omega_{\beta}$ the arrays of kernel potentials $\{\omega^0_{\alpha}, \omega^{1}_{\alpha}, \omega^{11}_{\alpha}, \omega^{12}_{\alpha} \}$ and $\{\omega^0_{\beta}, \omega^{1}_{\beta}, \omega^{11}_{\beta}, \omega^{12}_{\beta} \}$, respectively. 
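The interplay of the substitution $u=\sqrt{\rho}$, the normalization constraint, and the Lagrange multiplier can be illustrated on a one-dimensional model problem: gradient descent on a model energy with renormalization after every step, for which the converged multiplier is the lowest eigenvalue of the associated operator. All details of the following Python sketch (model potential, grid, step size) are illustrative and unrelated to the paper's actual solver.

```python
import numpy as np

def minimize_normalized(V, h, N=1.0, dt=2e-3, steps=6000):
    """Minimize a model energy E(u) = int (0.5|u'|^2 + V u^2) dx subject to
    int u^2 dx = N, by steepest descent with renormalization after each
    step; the renormalization realizes the Lagrange-multiplier constraint.
    Periodic finite differences on a uniform grid of spacing h."""
    u = np.ones_like(V)
    u /= np.sqrt(np.sum(u ** 2) * h / N)
    for _ in range(steps):
        lap = (np.roll(u, 1) - 2.0 * u + np.roll(u, -1)) / h ** 2
        u = u - dt * (-lap + 2.0 * V * u)        # descent step on E
        u /= np.sqrt(np.sum(u ** 2) * h / N)     # project onto the constraint
    lap = (np.roll(u, 1) - 2.0 * u + np.roll(u, -1)) / h ** 2
    lam = np.sum(u * (-0.5 * lap + V * u)) / np.sum(u ** 2)  # multiplier
    return u, lam

# Harmonic model potential in atomic units; the walls confine u so the
# periodic wrap of np.roll is immaterial.
x = np.linspace(-8.0, 8.0, 161)
h = x[1] - x[0]
u, lam = minimize_normalized(0.5 * x ** 2, h)
```

For this harmonic model the converged multiplier should approach $1/2$ (the lowest eigenvalue of $-\tfrac{1}{2}\partial_x^2 + \tfrac{1}{2}x^2$), up to discretization error.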
Subsequently, the variational problem in equation~\eqref{eq:locVar1} can be expressed as \begin{widetext} \begin{eqnarray}\label{eq:locVar2} E_0(\mathbf{R}) = \min_{u \in \mathcal{Y}} \max_{\phi \in \mathcal{Y}} \min_{\omega^s_{\alpha_j}\in \mathcal{Y}}\max_{\omega^s_{\beta_j}\in \mathcal{Y}}\,\, \mathcal{L} (u,\phi,\omega_{\alpha},\omega_{\beta};\mathbf{R}) \qquad \mbox{subject to}: \int u^2(\boldsymbol{\textbf{x}}) \,d\bx = N \,,\\ \mathcal{L}(u,\phi,\omega_{\alpha},\omega_{\beta};\mathbf{R}) = \tilde{\mathcal{L}} (u) + \mathcal{L}_{K} (\omega_{\alpha}, \omega_{\beta}, u^2) + \mathcal{L}_{c} (u,\lambda) +\min_{V^{I}\in H^1( {\mathbb{R}^{3}} )} \mathcal{L}_{el} (\phi,\mathcal{V},u^2,\mathbf{R}) \,,\notag \\ \tilde{\mathcal{L}} (u) = C_F \int {u^{10/3}(\boldsymbol{\textbf{x}})}\,d\boldsymbol{\textbf{x}} + \frac{1}{2}\int |\nabla u(\boldsymbol{\textbf{x}})|^2\,d\boldsymbol{\textbf{x}} + \int \varepsilon_{\text{xc}}(u^2) u^2(\boldsymbol{\textbf{x}})\,d\bx \,, \notag \\ \mathcal{L}_{K} (\omega_{\alpha}, \omega_{\beta}, u^2) = \sum_{s} \mathcal{L}_{K_s} (\omega^s_{\alpha}, \omega^s_{\beta}, u^2) \,, \notag \\ \mathcal{L}_{c} (u,\lambda) = \lambda\left(\int u^2(\boldsymbol{\textbf{x}}) \,d\bx -N \right) \, .\notag \end{eqnarray} \end{widetext} \subsection {Configurational forces}~\label{sec:ConfigurationalForces} We now turn our attention to the configurational forces corresponding to geometry optimization. To this end, we employ the approach of inner variations, where we evaluate the generalized forces corresponding to perturbations of the underlying space, which provides a unified expression for the generalized force corresponding to the geometry of the simulation cell---the internal atomic positions as well as the external cell domain.
We consider infinitesimal perturbations of the underlying space $\psi_{\epsilon}: {\mathbb{R}^{3}} \to {\mathbb{R}^{3}} $ corresponding to a generator $\Gamma(\boldsymbol{\textbf{x}})$ given by $\Gamma=\frac{d\psi_{\epsilon}(\boldsymbol{\textbf{x}})}{d\epsilon}|_{\epsilon=0}$ such that $\psi_{0}=I$. We constrain the generator $\Gamma$ such that it only admits rigid body deformations in the compact support of the regularized nuclear charge distribution $\rho_{nu}$ in order to preserve the integral constraint $\int\tilde{\delta}(\boldsymbol{\textbf{x}}-\boldsymbol{\textbf{R}}_I)d\boldsymbol{\textbf{x}}=1$. Let $\boldsymbol{\textbf{x}}$ denote a point in $Q$, whose image in $Q'=\psi_{\epsilon}(Q)$ is $\boldsymbol{\textbf{x}}'=\psi_{\epsilon}(\boldsymbol{\textbf{x}})$. The ground-state energy on $Q'$ is given by \begin{eqnarray} E_{0}(\psi_{\epsilon})= \mathcal{L_{\epsilon}} (u_{\epsilon},\phi_{\epsilon},{\omega_{\alpha}}_{\epsilon},{\omega_{\beta}}_{\epsilon};{\mathbf{R}}_{\epsilon}) \end{eqnarray} where $u_{\epsilon}$, $\phi_{\epsilon}$, ${\omega_{\alpha}}_{\epsilon}$ and ${\omega_{\beta}}_{\epsilon}$ are solutions of the saddle point variational problem given by equation~\eqref{eq:locVar2} evaluated over the function space $\mathcal{Y'}=H^1_{per}(Q')$. The subscript $\epsilon$ on $\mathcal{L}$ is used to denote that the variational problem is solved on $Q'=\psi_{\epsilon}(Q)$. For the sake of convenience, we will represent the integrand of the Lagrangian $\mathcal{L}$ in equation~\eqref{eq:locVar2} by $f(u,\nabla{u},\phi,\nabla\phi,\omega_{\alpha}, \nabla \omega_{\alpha}, \omega_{\beta}, \nabla\omega_{\beta};V_{ps},\bar{V}_{\tilde{\delta}},\boldsymbol{\textbf{R}})$ and $g(\bar{V}_{\tilde{\delta}}^{I},\nabla \bar{V}_{\tilde{\delta}}^{I};\boldsymbol{\textbf{R}})$, where $f$ denotes the integrand whose integrals are over $Q$ and $g$ denotes the integrand whose integrals are over $ {\mathbb{R}^{3}} $. 
The ground-state energy on $Q'$ in terms of $f$ and $g$ can be expressed as \begin{eqnarray} &&E_{0}(\psi_{\epsilon}) = \int_{Q'} f(u_{\epsilon}(\boldsymbol{\textbf{x}}'),\nabla_{\boldsymbol{\textbf{x}}'}{u}_{\epsilon}(\boldsymbol{\textbf{x}}'),\phi_{\epsilon}(\boldsymbol{\textbf{x}}'),\nabla_{\boldsymbol{\textbf{x}}'}\phi_{\epsilon}(\boldsymbol{\textbf{x}}'), {\omega_{\alpha}}_{\epsilon}(\boldsymbol{\textbf{x}}'), \notag\\ && \nabla_{\boldsymbol{\textbf{x}}'} {\omega_{\alpha}}_{\epsilon}(\boldsymbol{\textbf{x}}'), {\omega_{\beta}}_{\epsilon}(\boldsymbol{\textbf{x}}'), \nabla_{\boldsymbol{\textbf{x}}'}{\omega_{\beta}}_{\epsilon}(\boldsymbol{\textbf{x}}'); V_{ps}(\boldsymbol{\textbf{x}}'), \bar{V}_{\tilde{\delta}}(\boldsymbol{\textbf{x}}'), \psi_{\epsilon}(\boldsymbol{\textbf{R}})) d\boldsymbol{\textbf{x}}' \notag\\ &&+ \sum_{I}\int_{ {\mathbb{R}^{3}} } g(\bar{V}^{I}_{\tilde{\delta}_{\epsilon}}(\boldsymbol{\textbf{x}}'),\nabla_{\boldsymbol{\textbf{x}}'} \bar{V}^{I}_{\tilde{\delta}_{\epsilon}}(\boldsymbol{\textbf{x}}');\psi_{\epsilon}(\boldsymbol{\textbf{R}})) d\boldsymbol{\textbf{x}}' \,. 
\end{eqnarray} Transforming the above integral to domain $Q$, we obtain \begin{eqnarray} &&E_{0}(\psi_{\epsilon})= \int_{Q}f(u_{\epsilon}(\psi_{\epsilon}(\boldsymbol{\textbf{x}})), \nabla_{\boldsymbol{\textbf{x}}}u_{\epsilon}(\psi_{\epsilon}(\boldsymbol{\textbf{x}})).\frac{\partial \boldsymbol{\textbf{x}}}{\partial \boldsymbol{\textbf{x}}'}, \phi_{\epsilon}(\psi_{\epsilon}(\boldsymbol{\textbf{x}})), \notag\\ &&\nabla_{\boldsymbol{\textbf{x}}}\phi_{\epsilon}(\psi_{\epsilon}(\boldsymbol{\textbf{x}})).\frac{\partial \boldsymbol{\textbf{x}}}{\partial \boldsymbol{\textbf{x}}'}, {\omega_\alpha}_{\epsilon}(\psi_{\epsilon}(\boldsymbol{\textbf{x}})), \nabla_{\boldsymbol{\textbf{x}}}{\omega_{\alpha}}_{\epsilon}(\psi_{\epsilon}(\boldsymbol{\textbf{x}})).\frac{\partial \boldsymbol{\textbf{x}}}{\partial \boldsymbol{\textbf{x}}'}, {\omega_\beta}_{\epsilon}(\psi_{\epsilon}(\boldsymbol{\textbf{x}})), \notag\\ && \nabla_{\boldsymbol{\textbf{x}}}{\omega_{\beta}}_{\epsilon}(\psi_{\epsilon}(\boldsymbol{\textbf{x}})).\frac{\partial \boldsymbol{\textbf{x}}}{\partial \boldsymbol{\textbf{x}}'}; V_{ps}(\psi_{\epsilon}(\boldsymbol{\textbf{x}})), \bar{V}_{\tilde{\delta}}(\psi_{\epsilon}(\boldsymbol{\textbf{x}})), \psi_{\epsilon}(\boldsymbol{\textbf{R}})) \det(\frac{\partial \boldsymbol{\textbf{x}}'}{\partial \boldsymbol{\textbf{x}}})\,d\boldsymbol{\textbf{x}} \notag\\ &&+ \sum_{I} \int_{ {\mathbb{R}^{3}} } g(\bar{V}^{I}_{\tilde{\delta}_{\epsilon}}(\psi_{\epsilon}(\boldsymbol{\textbf{x}})),\nabla_{\boldsymbol{\textbf{x}}} \bar{V}^{I}_{\tilde{\delta}_{\epsilon}}(\psi_{\epsilon}(\boldsymbol{\textbf{x}})).\frac{\partial \boldsymbol{\textbf{x}}}{\partial \boldsymbol{\textbf{x}}'};\psi_{\epsilon}(\boldsymbol{\textbf{R}})) \det(\frac{\partial \boldsymbol{\textbf{x}}'}{\partial \boldsymbol{\textbf{x}}})\,d\boldsymbol{\textbf{x}} \notag \\ \end{eqnarray} We now evaluate the configurational force given by the G\^{a}teaux derivative of $E_{0}(\psi_{\epsilon})$: \begin{widetext} 
\begin{eqnarray}\label{eq:GateuxDerivative} &&\frac{d E_{0}(\psi_{\epsilon})}{d\epsilon}\Big{|}_{\epsilon=0} = \int_{Q} f(u_{0}(\boldsymbol{\textbf{x}}),\nabla{u}_{0}(\boldsymbol{\textbf{x}}),\phi_{0}(\boldsymbol{\textbf{x}}),\nabla\phi_{0}(\boldsymbol{\textbf{x}}), {\omega_{\alpha}}_{0}(\boldsymbol{\textbf{x}}),\nabla{\omega_{\alpha}}_{0}(\boldsymbol{\textbf{x}}), {\omega_{\beta}}_{0}(\boldsymbol{\textbf{x}}), \nabla{\omega_{\beta}}_{0}(\boldsymbol{\textbf{x}});V_{ps}(\boldsymbol{\textbf{x}}), \bar{V}_{\tilde{\delta}}(\boldsymbol{\textbf{x}}),\boldsymbol{\textbf{R}}) \frac{d}{d\epsilon}(\det(\frac{\partial \boldsymbol{\textbf{x}}'}{\partial \boldsymbol{\textbf{x}}}))\Big{|}_{\epsilon=0} d\boldsymbol{\textbf{x}} \notag \\ &&+ \int_{Q}\left(\frac{\partial f}{\partial \nabla u}(\nabla u_{0})\otimes\nabla u_{0} + \frac{\partial f}{\partial \nabla \phi}(\nabla \phi_{0})\otimes\nabla \phi_{0} + \sum_{s} \Big( \frac{\partial f}{\partial \nabla \omega^s_{\alpha}}(\nabla{\omega^s_{\alpha}}_{0})\otimes\nabla {\omega^s_{\alpha}}_{0} + \frac{\partial f}{\partial \nabla \omega^s_{\beta}}(\nabla{\omega^s_{\beta}}_{0})\otimes\nabla {\omega^s_{\beta}}_{0} \Big)\right) : \left(\frac{d}{d \epsilon}\frac{\partial \boldsymbol{\textbf{x}}}{\partial \boldsymbol{\textbf{x}}'}\Big{|}_{\epsilon=0}\right) d\boldsymbol{\textbf{x}} \notag\\ &&+\sum_{J}\int_{Q} u^2_{0}(\boldsymbol{\textbf{x}})\left(\nabla V^{J}_{ps}(|\boldsymbol{\textbf{x}}-\boldsymbol{\textbf{R}}_{J}|) - \nabla \bar{V}^{J}_{\tilde{\delta}}(|\boldsymbol{\textbf{x}}-\boldsymbol{\textbf{R}}_{J}|) \right) . 
\left(\frac{d \psi_{\epsilon}(\boldsymbol{\textbf{x}})}{d\epsilon}\Big{|}_{\epsilon=0}-\frac{d\psi_{\epsilon}(\boldsymbol{\textbf{R}}_J)}{d\epsilon}\Big{|}_{\epsilon=0} \right) d\boldsymbol{\textbf{x}} \quad \notag \\ &&+ \sum_{I}\int_{ {\mathbb{R}^{3}} } g(\bar{V}^{I}_{\tilde{\delta}_{0}}(\boldsymbol{\textbf{x}}),\nabla \bar{V}^{I}_{\tilde{\delta}_{0}}(\boldsymbol{\textbf{x}});\boldsymbol{\textbf{R}}) \frac{d}{d\epsilon}(\det(\frac{\partial \boldsymbol{\textbf{x}}'}{\partial \boldsymbol{\textbf{x}}}))\Big{|}_{\epsilon=0} d\boldsymbol{\textbf{x}} + \sum_{I}\int_{ {\mathbb{R}^{3}} } \frac{\partial g}{\partial \nabla \bar{V}^{I}_{\tilde{\delta}}}(\nabla\bar{V}^{I}_{\tilde{\delta}_{0}})\otimes\nabla \bar{V}^{I}_{\tilde{\delta}_{0}}: \left(\frac{d}{d \epsilon}\frac{\partial \boldsymbol{\textbf{x}}}{\partial \boldsymbol{\textbf{x}}'}\Big{|}_{\epsilon=0}\right) d\boldsymbol{\textbf{x}} \,. \end{eqnarray} \end{widetext} In the above, we denote by `$\otimes$' the outer product between two vectors, by `$.$' the dot product between two vectors and by `$:$' the dot product between two tensors. We note that in the above expression there are no terms involving the explicit derivatives of $f$ and $g$ with respect to $\boldsymbol{\textbf{R}}$ as $\tilde{\delta}(|\boldsymbol{\textbf{x}}'-\psi_{\epsilon}(\boldsymbol{\textbf{R}})|)=\tilde{\delta}(|\boldsymbol{\textbf{x}}-\boldsymbol{\textbf{R}}|)$, which follows from the restriction that $\psi_{\epsilon}$ corresponds to rigid body deformations in the compact support of $\rho_{nu}$. We further note that terms arising from the inner variations of $E_{0}(\psi_{\epsilon})$ with respect to $u_{\epsilon}$, $\phi_{\epsilon}$, ${\omega_{\alpha}}_{\epsilon}$, ${\omega_{\beta}}_{\epsilon}$ and $\bar{V}^{I}_{\tilde{\delta}_{\epsilon}}$ vanish as $u_{0}$, $\phi_{0}$, ${\omega_{\alpha}}_{0}$, ${\omega_{\beta}}_{0}$ and $\bar{V}^{I}_{\tilde{\delta}_{0}}$ are the solutions of the saddle point variational problem corresponding to $E_{0}(\psi_{0})$.
We now note the following identities \begin{equation} \begin{split} \frac{d}{d\epsilon}\left\{\frac{\partial x_i}{\partial x'_j}\right\}\Big{|}_{\epsilon=0}=&-\frac{\partial x_i}{\partial x'_k}\Big(\frac{d}{d\epsilon}\left\{\frac{\partial{\psi_\epsilon}_k}{\partial x_l}\right\}\Big)\frac{\partial x_l}{\partial x'_j}\,\Big{|}_{\epsilon=0}\\ =&-\frac{\partial\Gamma_i}{\partial x_j}\,, \end{split} \end{equation} \begin{equation} \begin{split} \frac{d}{d\epsilon}\left\{\det\big(\frac{\partial x'_l}{\partial x_m}\big)\right\}\Big{|}_{\epsilon=0}=& \det\big(\frac{\partial x'_l}{\partial x_m}\big)\frac{\partial x_j}{\partial x'_i} \Big(\frac{d}{d\epsilon}\left\{\frac{\partial {\psi_\epsilon}_i}{\partial x_j}\right\}\Big)\Big{|}_{\epsilon=0}\\ =&\frac{\partial\Gamma_j}{\partial x_j}. \end{split} \end{equation} Using these identities in equation~\eqref{eq:GateuxDerivative}, and rearranging terms, we arrive at \begin{eqnarray}\label{eq:Eshelby} &&\frac{d E_{0}(\psi_{\epsilon})}{d\epsilon}\Big{|}_{\epsilon=0} = \int_{Q}\mathbf{E}:\nabla\Gamma(\boldsymbol{\textbf{x}}) \,d\boldsymbol{\textbf{x}} + \sum_{I}\int_{ {\mathbb{R}^{3}} }{\mathbf{E}^{'}_I}:\nabla\Gamma(\boldsymbol{\textbf{x}}) \,d\boldsymbol{\textbf{x}} \notag\\ &&+\sum_{J}\int_{Q} u^2_{0}(\boldsymbol{\textbf{x}})\left(\nabla \big(V^{J}_{ps}-\bar{V}^{J}_{\tilde{\delta}}\big) \right).\left( \Gamma(\boldsymbol{\textbf{x}}) -\Gamma(\boldsymbol{\textbf{R}}_{J})\right) d\boldsymbol{\textbf{x}} \end{eqnarray} where $\mathbf{E}$ and $\mathbf{E}^{'}_{I}$ denote the Eshelby tensors corresponding to $f$ and $g$, respectively.
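For $\psi_{\epsilon}(\boldsymbol{\textbf{x}})=\boldsymbol{\textbf{x}}+\epsilon\,\Gamma(\boldsymbol{\textbf{x}})$, these identities reduce to $\frac{d}{d\epsilon}(\mathbf{I}+\epsilon\nabla\Gamma)^{-1}\big|_{\epsilon=0}=-\nabla\Gamma$ and, by Jacobi's formula, $\frac{d}{d\epsilon}\det(\mathbf{I}+\epsilon\nabla\Gamma)\big|_{\epsilon=0}={\rm tr}(\nabla\Gamma)$. A quick finite-difference check in Python, with a random matrix standing in for $\nabla\Gamma$:

```python
import numpy as np

rng = np.random.default_rng(0)
G = rng.standard_normal((3, 3))      # stands in for the gradient of Gamma
eps = 1e-6
F = np.eye(3) + eps * G              # Jacobian dx'/dx of psi_eps = id + eps*Gamma

dinv = (np.linalg.inv(F) - np.eye(3)) / eps   # ~ d/d eps (dx/dx') at eps = 0
ddet = (np.linalg.det(F) - 1.0) / eps         # ~ d/d eps det(dx'/dx) at eps = 0
```

Both finite differences agree with $-\nabla\Gamma$ and ${\rm tr}(\nabla\Gamma)$ up to the $O(\epsilon)$ truncation error.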
The expressions for the Eshelby tensors $\mathbf{E}$ and $\mathbf{E}'_{I}$ explicitly in terms of $u$, $\phi$, $\omega_{\alpha}$, $\omega_{\beta}$, $V_{ps}$ and $\bar{V}_{\tilde{\delta}}$ are given by \begin{widetext} \begin{eqnarray} \mathbf{E} = &&\left(C_Fu^{10/3}+\frac{1}{2}|\nabla{u}|^2+\varepsilon_{xc}(u^2)u^2+\lambda u^2 -\frac{1}{8\pi}|\nabla\phi|^2+u^2\phi +\sum_{J}\big(V^{J}_{ps}-\bar{V}^{J}_{\tilde{\delta}}\big)u^2+\sum_{s}f_{K_s}(\omega_{\alpha}^{s}, \nabla\omega_{\alpha}^{s},\omega_{\beta}^{s}, \nabla\omega_{\beta}^{s}, u^2)\right)\mathbf{I}\notag\\ &&-\nabla{u}\otimes\nabla{u} + \frac{1}{4\pi}\nabla{\phi}\otimes\nabla{\phi} - \sum_{s} \left(\frac{\partial f_{K_s}}{\partial\nabla\omega_{\alpha}^{s}}\otimes\nabla\omega_{\alpha}^{s} + \frac{\partial f_{K_s}}{\partial\nabla\omega_{\beta}^{s}}\otimes\nabla\omega_{\beta}^{s}\right)\\ \mathbf{E}^{'}_{I} = && \frac{1}{8\pi} |\nabla\bar{V}^{I}_{\tilde{\delta}}|^2\mathbf{I}-\frac{1}{4\pi}\nabla\bar{V}^{I}_{\tilde{\delta}}\otimes\nabla\bar{V}^{I}_{\tilde{\delta}} \end{eqnarray} \end{widetext} In the above, for the sake of brevity, we represent by $f_{K_s}$ the integrand corresponding to $\mathcal{L}_{K_s}$. We also note that the terms $\phi\rho_{nu}$ and ${V}^{I}_{\tilde{\delta}}\tilde{\delta}(\boldsymbol{\textbf{x}}-\boldsymbol{\textbf{R}}_{I})$ do not appear in the expressions for $\mathbf{E}$ and $\mathbf{E}^{'}_{I}$, respectively, as $\nabla.\Gamma=0$ on the compact support of $\rho_{nu}$ owing to the restriction that $\Gamma$ corresponds to rigid body deformations in these regions. It may appear that the evaluation of the second term in equation~\eqref{eq:Eshelby} is not tractable, as it involves an integral over $ {\mathbb{R}^{3}} $. To this end, we split this integral into an integral over a bounded domain $\Omega$ containing the compact support of $\tilde{\delta}(\boldsymbol{\textbf{x}}-\boldsymbol{\textbf{R}}_I)$ and an integral over its complement. The integral on $ {\mathbb{R}^{3}} /\Omega$ can be computed as a surface integral.
Thus, \begin{eqnarray} &&\int_{ {\mathbb{R}^{3}} }\mathbf{E}^{'}_{I}:\nabla\Gamma \,d\boldsymbol{\textbf{x}}=\int_{\Omega}\mathbf{E}^{'}_{I}:\nabla\Gamma \,d\boldsymbol{\textbf{x}}+ \int_{ {\mathbb{R}^{3}} /\Omega}\mathbf{E}^{'}_{I}:\nabla\Gamma \,d\boldsymbol{\textbf{x}}\notag\\ &&= \int_{\Omega}\mathbf{E}^{'}_{I}:\nabla\Gamma \,d\boldsymbol{\textbf{x}} - \int_{\partial\Omega} \mathbf{E}^{'}_{I}:\hat{\mathbf{n}}\otimes\Gamma \,d\mathbf{s}\,, \end{eqnarray} where $\hat{\mathbf{n}}$ denotes the outward normal to the surface $\partial \Omega$. The last equality follows from the fact that $\nabla^2\bar{V}^{I}_{\tilde{\delta}}=0$ on $ {\mathbb{R}^{3}} /\Omega$: a direct computation gives $\nabla.\mathbf{E}^{'}_{I}=-\frac{1}{4\pi}\big(\nabla^2\bar{V}^{I}_{\tilde{\delta}}\big)\nabla\bar{V}^{I}_{\tilde{\delta}}$, so that $\mathbf{E}^{'}_{I}:\nabla\Gamma=\nabla.\big(\mathbf{E}^{'}_{I}\Gamma\big)$ there, and the contribution from the surface at infinity vanishes since $\mathbf{E}^{'}_{I}$ decays as $1/r^{4}$. The configurational force in equation~\eqref{eq:Eshelby} provides the generalized variational force with respect to both the internal positions of the atoms and the external cell domain. In order to compute the force on any given atom, we restrict the compact support of $\Gamma$ to only include the atom of interest. In order to compute the stresses associated with cell relaxation (keeping the fractional coordinates of atoms fixed), we restrict $\Gamma$ to affine deformations. Thus, this provides a unified expression for geometry optimization corresponding to both internal ionic relaxations and cell relaxation. We further note that, while we derived the configurational force for the case of pseudopotential calculations, the derived expression is equally applicable to all-electron calculations by setting $V^{J}_{ps}=\bar{V}^{J}_{\tilde{\delta}}$. \subsection{Finite-element discretization}~\label{sec:FE-discretization} Among numerical discretization techniques, the plane-wave discretization has been the most popular and widely used in orbital-free DFT~\cite{PROFESS2,PROFESS3}, as it naturally lends itself to the evaluation of the extended interactions in the electrostatic energy and kernel kinetic energy functionals using Fourier transforms.
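The Fourier-transform evaluation alluded to here treats an extended kernel interaction as a periodic convolution, computed with two forward FFTs and one inverse FFT. A minimal one-dimensional sketch (the kernel and density profiles are illustrative), checked against the direct $O(n^2)$ real-space sum:

```python
import numpy as np

# Periodic convolution (K * rho)(x) via FFTs, as in a plane-wave setting.
n, L = 64, 10.0
x = np.linspace(0.0, L, n, endpoint=False)
rho = np.exp(np.sin(2.0 * np.pi * x / L))   # illustrative periodic density
K = np.exp(-np.minimum(x, L - x))           # illustrative periodic kernel

conv_fft = np.real(np.fft.ifft(np.fft.fft(K) * np.fft.fft(rho))) * (L / n)

# Direct real-space evaluation of the same periodic convolution, O(n^2).
conv_dir = (L / n) * np.array(
    [np.sum(K[(i - np.arange(n)) % n] * rho) for i in range(n)]
)
```

The two evaluations agree to machine precision by the circular convolution theorem; the FFT route costs $O(n\log n)$ instead of $O(n^2)$.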
Further, the plane-wave basis offers systematic convergence, with exponential convergence in the number of basis functions. However, as noted previously, the plane-wave basis also suffers from notable drawbacks. Importantly, plane-wave discretization is restricted to periodic geometries and boundary conditions, which introduces a significant limitation, especially in the study of defects in bulk materials~\cite{Mrinal2015}. Further, the plane-wave basis has a uniform spatial resolution, and thus is not amenable to adaptive coarse-graining. Moreover, the use of plane-wave discretization involves the numerical evaluation of Fourier transforms, whose scalability is limited on parallel computing platforms. In order to circumvent these limitations of the plane-wave basis, there is an increasing focus on developing real-space discretization techniques for orbital-free DFT based on finite-difference discretization~\cite{Carlos, Phanish, Phanish2} and finite-element discretization~\cite{Gavini2007,Mrinal2012}. In particular, the finite-element basis~\cite{Brenner-Scott}, which is a piecewise continuous polynomial basis, has many features of a desirable basis in electronic structure calculations. While being a complete basis, the finite-element basis naturally allows for the consideration of complex geometries and boundary conditions, is amenable to unstructured coarse-graining, and exhibits good scalability on massively parallel computing platforms. Moreover, the adaptive nature of the finite-element discretization also enables the consideration of all-electron orbital-free DFT calculations that are widely used in studies of warm dense matter~\cite{Flavian2006,Flavian2008,Collins2013}.
Further, recent numerical studies have shown that by using a higher-order finite-element discretization, significant computational savings can be realized for both orbital-free DFT~\cite{Mrinal2012} and Kohn-Sham DFT calculations~\cite{Motamarri2013,Motamarri2014}, effectively overcoming the degree of freedom disadvantage of the finite-element basis in comparison to the plane-wave basis. Let $\mathcal{Y}_h$ denote the finite-element subspace of $\mathcal{Y}$, where $h$ represents the finite-element mesh size. The discrete problem of computing the ground-state energy for given positions of the atoms, corresponding to equation~\eqref{eq:locVar2}, is given by the constrained variational problem: \begin{eqnarray}\label{eq:VarFE} E_0(\mathbf{R}) = &&\min_{u_h \in \mathcal{Y}_h} \max_{\phi_h \in \mathcal{Y}_h} \min_{{{\omega^s_{\alpha_j}}_h}\in \mathcal{Y}_h}\max_{{{\omega^s_{\beta_j}}_h}\in \mathcal{Y}_h}\,\, \mathcal{L} (u_h,\phi_h,{\omega_{\alpha_h}},{\omega_{\beta_h}};\mathbf{R}) \notag \\ &&\mbox{subject to}: \int u_h^2(\boldsymbol{\textbf{x}}) \,d\bx = N\,. \end{eqnarray} In the above, $u_h$, $\phi_h$, ${\omega_{\alpha}}_h$ and ${\omega_{\beta}}_h$ denote the finite-element discretized fields corresponding to square-root electron-density, electrostatic potential, and kernel potentials, respectively. We restrict our finite-element discretization such that atoms are located on the nodes of the finite-element mesh. In order to compute the finite-element discretized solution of $\bar{V}^{J}_{\tilde{\delta}}$, we represent $\tilde{\delta}(\boldsymbol{\textbf{x}}-\boldsymbol{\textbf{R}}_J)$ as a point charge on the finite-element node located at $\boldsymbol{\textbf{R}}_{J}$, and the finite-element discretization provides a regularization for $\bar{V}^{J}_{\tilde{\delta}}$. Previous investigations have suggested that such an approach provides optimal rates of convergence of the ground-state energy (cf.~\cite{Mrinal2012,Motamarri2013} for a discussion).
The finite-element basis functions also provide the generator of the deformations of the underlying space in the isoparametric formulation, where the same finite-element shape functions are used to discretize both the spatial domain and the fields prescribed over the domain. Thus, the configurational force associated with the location of any node in the finite-element mesh can be computed by substituting for $\Gamma$, in equation~\eqref{eq:Eshelby}, the finite-element shape function associated with the node. In particular, the configurational force on a finite-element node located at an atom position corresponds to the variational ionic force, which is used to drive the internal atomic relaxation. The forces on the finite-element nodes that do not correspond to an atom location represent the generalized force of the energy with respect to the location of the finite-element nodes, and these can be used to obtain the optimal location of the finite-element nodes---a basis adaptation technique. We note that the local real-space variational formulation in section~\ref{sec:RS-formulation}, where the extended interactions in the electrostatic energy and kernel functionals are reformulated as local variational problems, is essential for the finite-element discretization of the formulation. \section{Numerical Implementation}\label{sec:Numerics} In this section, we present the details of the numerical implementation of the finite-element discretization of the real-space formulation of orbital-free DFT discussed in section~\ref{sec:RS-OFDFT}. Subsequently, we discuss the solution procedure for the resulting discrete coupled equations in square-root electron-density, electrostatic potential and kernel potentials. \subsection{Finite-element basis}~\label{sec:FE-basis} A finite-element discretization using linear tetrahedral finite-elements has been the most widely used discretization technique for a wide range of partial differential equations.
Linear tetrahedral elements are well suited for problems involving complex geometries and moderate levels of accuracy. However, in electronic structure calculations, where the desired accuracy is commensurate with chemical accuracy, linear finite elements are computationally inefficient, requiring on the order of a hundred thousand basis functions per atom to achieve chemical accuracy. A recent study~\cite{Mrinal2012} has demonstrated the significant computational savings---of the order of 1000-fold compared to linear finite-elements---that can be realized by using higher-order finite-element discretizations. Thus, in the present work we use higher-order hexahedral finite elements, where the basis functions are constructed as a tensor product of basis functions in one dimension~\cite{Brenner-Scott}. \subsection{Solution procedure}~\label{sec:NumerSoln} The discrete variational problem in equation~\eqref{eq:VarFE} involves the computation of the following fields---square-root electron-density, electrostatic potential and kernel potentials. Two solution procedures, suggested in prior efforts~\cite{Mrinal2012}, for solving this discrete variational problem are: (i) a simultaneous solution of all the discrete fields in the problem; (ii) a nested solution procedure, where for every trial square-root electron-density the discrete electrostatic and kernel potential fields are computed. Given the non-linear nature of the problem, the simultaneous approach is very sensitive to the starting guess and often suffers from a lack of robust convergence, especially for large-scale problems. The nested solution approach, on the other hand, while constituting a robust solution procedure, is computationally inefficient due to the substantial computational cost incurred in computing the kernel potentials, which involves the solution of a series of Helmholtz equations (cf. equation~\eqref{eq:Helmholtz}).
Thus, in the present work, we recast the local variational problem in equation~\eqref{eq:VarFE} as the following fixed point iteration problem: \begin{subequations}\label{eq:fixedPoint} \begin{eqnarray}\label{eq:fixedPoint_a} \{\bar{u}_h,\bar{\phi}_{h}\} = && \,\,arg\, \min_{u_h}\, arg\, \max_{\phi_h} \mathcal{L}(u_h,\phi_h,\bar{\omega}_{{\alpha}_h}, \bar{\omega}_{{\beta}_h};\boldsymbol{\textbf{R}}) \notag\\ &&\mbox{subject to}: \int u_h^2(\boldsymbol{\textbf{x}}) \,d\bx = N. \end{eqnarray} \begin{equation}\label{eq:fixedPoint_b} \{\bar{\omega}_{{\alpha}_h}, \bar{\omega}_{{\beta}_h}\} = \,\, arg\, \min_{\omega_{{\alpha}_h}}\, arg\, \max_{\omega_{{\beta}_h}} \mathcal{L}(\bar{u}_h,\bar{\phi}_h,\omega_{{\alpha}_h}, \omega_{{\beta}_h};\boldsymbol{\textbf{R}}) \,. \end{equation} \end{subequations} We solve this fixed point iteration problem using a mixing scheme; in particular, we employ the Anderson mixing scheme~\cite{Anderson} with full history in this work. Our numerical investigations suggest that the fixed point iteration typically converges in fewer than ten self-consistent iterations even for large-scale problems, thus providing a numerically efficient and robust solution procedure for the solution of the local variational orbital-free DFT problem. We note that this idea of fixed point iteration has independently and simultaneously been investigated by another group in the context of finite difference discretization~\cite{Phanish2}, and has resulted in similar findings. In the fixed point iteration problem, we employ a simultaneous solution procedure to solve the non-linear saddle point variational problem in $u_h$ and $\phi_h$ (equation~\eqref{eq:fixedPoint_a}). We employ an inexact Newton solver provided by the PETSc package~\cite{PETSC} with field-split preconditioning and the generalized minimal residual method (GMRES)~\cite{GMRES} as the linear solver.
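Anderson mixing with full history can be sketched for a generic fixed point $x=g(x)$ as follows; here $g$ stands in for one outer cycle of equations~\eqref{eq:fixedPoint_a} and~\eqref{eq:fixedPoint_b}, and the linear contraction used to exercise the routine is purely illustrative:

```python
import numpy as np

def anderson(g, x0, max_iter=100, tol=1e-10):
    """Anderson acceleration with full history for the fixed point x = g(x)."""
    x = np.asarray(x0, dtype=float).copy()
    Fh, Gh = [], []                       # residual and g-value history
    for _ in range(max_iter):
        gx = g(x)
        f = gx - x                        # residual of the fixed point map
        if np.linalg.norm(f) < tol:
            break
        Fh.append(f); Gh.append(gx)
        if len(Fh) == 1:
            x = gx                        # plain fixed-point step to start
            continue
        # Minimize || f + sum_k alpha_k (f_k - f) || over the stored history.
        D = np.column_stack([fk - f for fk in Fh[:-1]])
        alpha, *_ = np.linalg.lstsq(D, -f, rcond=None)
        x = gx + sum(a * (gk - gx) for a, gk in zip(alpha, Gh[:-1]))
    return x

# Illustrative linear contraction g(x) = A x + b, fixed point (I - A)^{-1} b.
rng = np.random.default_rng(1)
M = rng.standard_normal((5, 5))
A = 0.4 * M / np.linalg.norm(M, 2)
b = rng.standard_normal(5)
x = anderson(lambda y: A @ y + b, np.zeros(5))
```

In the actual discrete problem, $x$ would collect the nodal values being mixed between the two sub-solves of the fixed point iteration.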
The discrete Helmholtz equations in equation~\eqref{eq:fixedPoint_b} are solved by employing block Jacobi preconditioning and using GMRES as the linear solver. An efficient and scalable parallel implementation of the solution procedure has been developed to take advantage of the parallel computing resources for conducting the large-scale simulations reported in this work. \section{Results and Discussion}\label{sec:Results} In this section, we discuss the numerical studies on Al, Mg and Al-Mg intermetallics to investigate the accuracy and transferability of the real-space formulation of orbital-free DFT (RS-OFDFT) proposed in section~\ref{sec:RS-OFDFT}. Wherever applicable, we benchmark the real-space orbital-free DFT calculations with plane-wave based orbital-free DFT calculations conducted using PROFESS~\cite{PROFESS2}, and compare with Kohn-Sham DFT (KS-DFT) calculations conducted using the plane-wave based ABINIT code~\cite{abinit1,abinit2}. Further, we demonstrate the usefulness of the proposed real-space formulation in studying the electronic structure of isolated defects. 
\subsection{General calculation details}~\label{sec:calc} \begin{figure}[htbp] \includegraphics[width=0.46\textwidth]{alfccEnergyConv.eps} \caption{\label{fig:energyConv}\small{Convergence of the finite-element approximation in the energy of a fcc Al unit cell with lattice constant $a=7.2$ Bohr.}} \end{figure} \begin{figure}[htbp] \includegraphics[width=0.46\textwidth]{alfccStressConv.eps} \caption{\label{fig:stressConv}\small{ Convergence of the finite-element approximation in the hydrostatic stress of a fcc Al unit cell with lattice constant $a=7.2$ Bohr.}} \end{figure} In all the real-space orbital-free DFT calculations reported in this section, we use the local reformulation of the density-dependent WGC~\cite{Yan1999} kinetic energy functional proposed in section~\ref{sec:RS-formulation}, the local density approximation (LDA)~\cite{perdew} for the exchange-correlation energy, and bulk-derived local pseudopotentials (BLPS)~\cite{Huang2008} for Al and Mg. Cell stresses and ionic forces are calculated using the unified variational formulation of configurational forces developed in section~\ref{sec:ConfigurationalForces}. In the second-order Taylor expansion of the density-dependent WGC functional about the bulk electron density (cf. Section~\ref{sec:OFDFT}), we only retain the $K_{12}$ term for the computation of bulk properties, as the contributions from $K_{12}$ dominate those of $K_{11}$ for bulk materials systems. However, in the calculations involving mono-vacancies, where significant spatial perturbations in the electronic structure are present, we use the full second-order Taylor expansion of the density-dependent WGC functional.
We recall from section~\ref{sec:RS-formulation} that in order to obtain a local real-space reformulation of the extended interactions in the kinetic energy functionals, the kernels ($K_{0}$, $K_{1}$, $K_{11}$, $K_{12}$) are approximated using a sum of $m$ partial fractions, where the coefficients of the partial fractions are computed using a best fit approximation (cf. equation~\eqref{eq:kernelAprrox}). These best fit approximations for $m=4,5,6$ that are employed in the present work are given in the supplemental material. It has been shown in recent studies that $m=4$ suffices for Al~\cite{Bala2010,Phanish2}. However, we find that $m=6$ is required to obtain the desired accuracy in the bulk properties of Mg, and Table~\ref{tab:bulkTrf2} shows the comparison between the kernel approximation with $m=6$ and plane-wave based orbital-free DFT calculations conducted using PROFESS~\cite{PROFESS2} for Mg. Thus, we use the best fit approximation of the kernels with $m=4$ for Al, and employ the approximation with $m=6$ for Mg and Al-Mg intermetallics. Henceforth, we refer to the real-space orbital-free DFT calculations conducted by employing the local formulation and finite-element discretization proposed in section~\ref{sec:RS-OFDFT} as RS-OFDFT-FE. The KS-DFT calculations used to assess the accuracy and transferability of the proposed real-space orbital-free DFT formalism are performed using the LDA exchange-correlation functional~\cite{perdew}. The KS-DFT calculations are conducted using both local BLPS as well as the non-local Troullier-Martins pseudopotential (TM-NLPS)~\cite{NLPS} in order to assess the accuracy and transferability of both the model kinetic energy functionals in orbital-free DFT as well as the local pseudopotentials to which the orbital-free DFT formalism is restricted. The TM-NLPS for Al and Mg are generated using the fhi98PP code~\cite{FHI98PP}.
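To convey the flavor of such a best-fit construction, the sketch below fits an illustrative profile by $\sum_{j=1}^{m} a_j\,t/(t+b_j)$ with preset poles $b_j$ via linear least squares; the target, the poles, and the fitting procedure are all illustrative stand-ins, not the actual kernels or coefficients of equation~\eqref{eq:kernelAprrox}:

```python
import numpy as np

# Illustrative target standing in for a kernel profile; the true kernels and
# their fitted coefficients are tabulated in the supplemental material.
t = np.linspace(0.0, 25.0, 400)
target = t / (t + 1.0) + 0.5 * t / (t + 5.0)

b = np.array([0.5, 2.0, 8.0, 32.0])              # preset poles (m = 4)
P = np.column_stack([t / (t + bj) for bj in b])  # partial-fraction basis
a, *_ = np.linalg.lstsq(P, target, rcond=None)   # best-fit coefficients
max_err = np.max(np.abs(P @ a - target))
```

Because the poles are held fixed, the coefficients follow from a single linear least-squares solve.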
Within the fhi98PP code, we use the following inputs: the $3d$ angular momentum channel as the local pseudopotential component for both Al and Mg, default core cutoff radii for the $3s$, $3p$, and $3d$ angular momentum channels, which are $\left \{ 1.790,\, 1.974,\, 2.124 \right \}$ Bohr and $\left\{2.087,\, 2.476,\, 2.476\right\}$ Bohr for Al and Mg, respectively, and the LDA exchange-correlation functional~\cite{perdew}. For brevity, henceforth, we refer to the KS-DFT calculations with BLPS and TM-NLPS as KS-BLPS and KS-NLPS, respectively. In all the RS-OFDFT-FE calculations reported in this work, the finite-element discretization, order of the finite-elements, numerical quadrature rules and stopping tolerances are chosen such that we obtain 1 meV/atom accuracy in energies, $1\e{-7} \, \rm{Hartree}\,\, \rm{Bohr}^{-3}$ accuracy in cell stresses and $1\e{-5} \, \rm{Hartree}\,\, \rm{Bohr}^{-1}$ accuracy in ionic forces. Similar accuracies in energies, stresses and ionic forces are achieved for KS-DFT calculations by choosing the appropriate k-point mesh, plane-wave energy cutoff, and stopping tolerances within ABINIT's framework. All calculations involving geometry optimization are conducted until cell stresses and ionic forces are below threshold values of $5\e{-7} \, \rm{Hartree}\,\, \rm{Bohr}^{-3}$ and $5\e{-5} \, \rm{Hartree}\,\, \rm{Bohr}^{-1}$, respectively. \subsection{Convergence of finite-element discretization}~\label{sec:Convergence} We now study the convergence of energy and stresses with respect to the finite-element discretization of the proposed real-space orbital-free DFT formulation. In a prior study on the computational efficiency afforded by higher-order finite-element discretization in orbital-free DFT~\cite{Mrinal2012}, it was shown that second- and third-order finite-elements offer an optimal choice between accuracy and computational efficiency.
Thus, in the present study, we limit our convergence studies to HEX27 and HEX64 finite-elements, which correspond to second- and third-order finite-elements. As a benchmark system, we consider a stressed fcc Al unit cell with a lattice constant $a=7.2$ Bohr. We first construct a coarse finite-element mesh and subsequently perform a uniform subdivision to obtain a sequence of increasingly refined meshes. We denote by $h$ the measure of the size of a finite element. For this sequence of meshes, we hold the cell geometry fixed and compute the discrete ground-state energy, $E_{h}$, and hydrostatic stress, $\sigma_{h}$. The extrapolation procedure proposed in Motamarri et al.~\cite{Mrinal2012} allows us to estimate the ground-state energy and hydrostatic stress in the limit as $h\to 0$, denoted by $E_0$ and $\sigma_0$. To this end, the energy and hydrostatic stress computed from the sequence of meshes using HEX64 finite-elements are fitted to expressions of the form \begin{eqnarray} \left|E_{0}-E_h \right| = C_{e} \left(\frac{1}{N_{el}}\right)^{\frac{q_e}{3}} \,, \notag\\ \left|\sigma_{0}-\sigma_h \right| = C_{\sigma} \left(\frac{1}{N_{el}}\right)^{\frac{q_{\sigma}}{3}} \,, \end{eqnarray} to determine $E_{0}$, $q_e$, $\sigma_{0}$, and $q_{\sigma}$. In the above expressions, $N_{el}$ denotes the number of elements in a finite-element mesh. We subsequently use $E_0$ and $\sigma_0$ as the exact values of the ground-state energy and hydrostatic stress, respectively, for the benchmark system. Figures~\ref{fig:energyConv} and~\ref{fig:stressConv} show the relative errors in energy and hydrostatic stress plotted against $\left(\frac{1}{N_{el}}\right)^{\frac{1}{3}}$, which represents a measure of $h$. We note that the slopes of these curves provide the rates of convergence of the finite-element approximation for energy and stresses.
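The rate extraction amounts to a straight-line fit of $\log\left|E_{0}-E_h\right|$ against $\log h$; a sketch with synthetic error data of known rate (all values illustrative, not the measured benchmark data):

```python
import numpy as np

# Synthetic errors with a known rate, standing in for measured |E_0 - E_h|;
# each uniform subdivision multiplies the element count by 8, halving h.
q_true, C = 6.0, 2.5                      # e.g. O(h^{2k}) with k = 3 (HEX64)
N_el = np.array([64, 512, 4096, 32768])   # element counts of the mesh sequence
h = (1.0 / N_el) ** (1.0 / 3.0)
err = C * h**q_true

# Fit log|E_0 - E_h| = log C_e + q * log h: the slope recovers the rate q.
q_fit, logC_fit = np.polyfit(np.log(h), np.log(err), 1)
```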
These results show that we obtain close to optimal rates of convergence in energy of $\mathcal{O}(h^{2k})$, where $k$ is the polynomial interpolation order ($k=2$ for HEX27 and $k=3$ for HEX64). Further, we obtain close to $\mathcal{O}(h^{2k-1})$ convergence in the stresses, which represents optimal convergence for stresses. The results also suggest that higher accuracies in energy and stress are obtained with HEX64 in comparison to HEX27. Thus, we will employ HEX64 finite-elements for the remainder of our study. \begin{table}[htbp] \caption{\label{tab:bulkTrf1} \small{The energy difference in eV between a stable phase and the most stable phase for Al and Mg computed using RS-OFDFT-FE and KS-DFT with TM-NLPS.}} \begin{ruledtabular} \begin{tabular}{cccccc} {Al} & fcc & hcp & bcc & sc & dia \\ \hline RS-OFDFT-FE & 0\footnotemark[1] & 0.016 & 0.075 & 0.339 & 0.843 \\ KS-NLPS & 0 & 0.038 & 0.106 & 0.400 & 0.819 \\ \hline {Mg} & hcp & fcc & bcc & sc & dia \\ \hline RS-OFDFT-FE & 0 & 0.003 & 0.019 & 0.343 & 0.847 \\ KS-NLPS & 0 & 0.014 & 0.030 & 0.400 & 0.822 \\ \end{tabular} \end{ruledtabular} \footnotetext[1]{The zero in the first column indicates that the most stable phase serves as the reference against which the energies of the other phases are determined.} \end{table} \begin{table*}[!htpb] \caption{\label{tab:bulkTrf2} \small{Bulk properties of Al and Mg: Equilibrium ground-state energy per atom ($E_{\rm min}$ in eV), volume per atom ($V_{0}$ in $\angstrom^3$) and bulk modulus ($B_0$ in GPa) computed using RS-OFDFT-FE, PROFESS, and KS-DFT with BLPS and TM-NLPS.}} \begin{ruledtabular} \begin{tabular}{ccccc} Al\footnotemark[1] & RS-OFDFT-FE & PROFESS & KS-BLPS & KS-NLPS\\ \hline $E_{\rm min}$ & -57.935 & -57.936 &-57.954 & -57.207\\ $V_{0}$ & 15.68 & 15.68 & 15.62 & 15.55\\ $B_{0}$ & 81.7 & 81.5 & 84.1 & 83.6\\ \hline Mg\footnotemark[2] & RS-OFDFT-FE & PROFESS & KS-BLPS & KS-NLPS\\ \hline $E_{\rm min}$ & -24.647 & -24.647 & -24.678 & -24.514\\ $V_{0}$ & 21.40 & 21.43 & 21.18 & 21.26\\
$B_{0}$ & 36.8 & 36.6 & 38.5 & 38.6\\ \end{tabular} \end{ruledtabular} \footnotetext[1]{Cell-relaxed lattice constant for fcc Al using RS-OFDFT-FE is $a_0=\, 7.51$ Bohr.} \footnotetext[2]{Cell-relaxed lattice constants for hcp Mg using RS-OFDFT-FE are $a_0=\, 5.89$ Bohr,\, $c_{0}=\, 9.62$ Bohr.} \end{table*} \subsection{Bulk properties of Al, Mg and Al-Mg intermetallics}~\label{sec:BulkTransferability} We now study the accuracy and transferability of the proposed real-space formulation of orbital-free DFT for bulk properties of Al, Mg and Al-Mg intermetallics. To this end, we begin with the phase stability study of Al and Mg, where we compute the difference between the ground-state energy of a stable phase and the ground-state energy of the most stable phase. The results for Al and Mg are shown in Table~\ref{tab:bulkTrf1}, and are compared against those obtained with KS-DFT employing TM-NLPS. We note that RS-OFDFT-FE correctly predicts the most stable phases of Al and Mg to be fcc and hcp, respectively. Further, the stability ordering of the various phases computed using RS-OFDFT-FE is consistent with the KS-DFT TM-NLPS calculations. Moreover, the energy differences between the various stable phases and the most stable phase computed using RS-OFDFT-FE are in close agreement with the KS-DFT calculations. We next consider bulk properties of Al, Mg and Al-Mg intermetallics. To this end, for each system, we first optimize the cell geometry and ionic positions to determine the equilibrium cell structure, equilibrium volume ($V_{0}$) and ground-state energy ($E_{\rm min}$). We subsequently compute the bulk modulus given by~\cite{Finnis} \begin{equation}\label{eq:bulkModulus} B= \left.V \frac{\partial^{2} E}{\partial V^2}\right|_{V=V_{0}}\, , \end{equation} where $E$ denotes the ground-state energy of a unit-cell with volume $V$.
To compute the bulk modulus, we vary the cell volume by applying a volumetric deformation to the relaxed (equilibrium) unit-cell, which transforms the equilibrium cell vectors $\left\{ {\bf c}_1 \, ,{\bf c}_2 \, , {\bf c}_{3}\right\}$ to $\left\{ {\bf c}^{\prime}_1 \, ,{\bf c}^{\prime}_2 \, , {\bf c}^{\prime}_{3}\right\}$ according to \begin{equation} c^{\prime}_{ij}=c_{ij} \, (1+ \eta) \,. \end{equation} While keeping the cell structure fixed, we calculate the ground-state energy for each $\eta$ from $-0.01$ to $0.01$ in steps of 0.002 and fit a cubic polynomial to the $E-V$ data. We subsequently compute the bulk modulus, using equation~\eqref{eq:bulkModulus}, at the equilibrium volume, $V_0$. The computed bulk properties---ground-state energy, equilibrium volume and bulk modulus at equilibrium---for Al and Mg are given in Table~\ref{tab:bulkTrf2}, and those of Al-Mg intermetallics (${\rm Al}_{3}\rm{Mg}$, ${\rm Mg}_{13}{\rm Al}_{14}$, ${\rm Mg}_{17}{\rm Al}_{12}$, and ${\rm Mg}_{23}{\rm Al}_{30}$) are given in Table~\ref{tab:bulkTrf3}. These results suggest that the bulk properties of Al, Mg and Al-Mg intermetallics computed using RS-OFDFT-FE are in good agreement with PROFESS and KS-DFT calculations.
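The bulk-modulus evaluation described above reduces to differentiating a cubic $E$--$V$ fit twice at $V_0$. A minimal sketch with synthetic harmonic data (the function name and the numbers are illustrative, not taken from the tables):

```python
import numpy as np

def bulk_modulus(volumes, energies, v0):
    """Fit a cubic polynomial to E-V data (centered for conditioning)
    and return B = V * d^2E/dV^2 evaluated at V = v0."""
    v = np.asarray(volumes, dtype=float)
    vm = v.mean()
    p = np.polyfit(v - vm, np.asarray(energies, dtype=float), 3)
    d2 = np.polyder(p, 2)  # second derivative of the fitted cubic
    return v0 * np.polyval(d2, v0 - vm)

# Harmonic test data E(V) = E_min + 0.5*k*(V - v0)^2, for which B = v0*k.
v0, k = 16.0, 0.3
vols = v0 * (1.0 + np.arange(-5, 6) * 0.002)  # eta from -0.01 to 0.01
ens = -57.9 + 0.5 * k * (vols - v0) ** 2
b = bulk_modulus(vols, ens, v0)
```

Centering the volumes before fitting keeps the Vandermonde system well conditioned over the narrow $\pm 1\%$ volume range.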
\begin{table*}[htbp] \caption{\label{tab:bulkTrf3}\small{Bulk properties of Al-Mg intermetallics: Equilibrium ground-state energy per primitive cell ($E_{\rm min}$ in eV), volume of primitive cell ($V_{0}$ in $\angstrom^3$), and bulk modulus ($B_0$ in GPa) computed using RS-OFDFT-FE, PROFESS, and KS-DFT with BLPS and TM-NLPS.}} \begin{ruledtabular} \begin{tabular}{ccccc} ${\rm Al}_{3}\rm{Mg}$ & RS-OFDFT-FE & PROFESS &KS-BLPS & KS-NLPS\\ \hline $E_{\rm min}$ & -198.492 & -198.496& -198.575 & -196.162\\ $V_{0}$ & 67.23 & 67.31&67.13 & 66.52\\ $B_{0}$ & 69.2 & 67.0& 67.6 & 71.0\\ \hline ${\rm Mg}_{13}{\rm Al}_{14}$ & RS-OFDFT-FE & PROFESS & KS-BLPS & KS-NLPS\\ \hline $E_{\rm min}$ & -1130.083 & -1130.100& -1130.972 & -1117.936\\ $V_{0}$ & 494.77 & 494.73& 498.19 & 492.73\\ $B_{0}$ & 53.1 &52.1 & 54.7 & 54.8\\ \hline ${\rm Mg}_{17}{\rm Al}_{12}$ & RS-OFDFT-FE & PROFESS & KS-BLPS & KS-NLPS\\ \hline $E_{\rm min}$ & -1114.446 & -1114.526& -1116.185 & -1104.012\\ $V_{0}$ & 545.32 & 544.85 & 543.67 & 544.21\\ $B_{0}$ & 51.1 & 52.3 & 55.2 & 54.4\\ \hline ${\rm Mg}_{23}{\rm Al}_{30}$ & RS-OFDFT-FE & PROFESS & KS-BLPS & KS-NLPS\\ \hline $E_{\rm min}$ & -2306.785 & -2306.762 &-2307.989 & -2281.082\\ $V_{0}$ & 953.87 & 952.55& 963.72 & 957.46\\ $B_{0}$ & 64.2 & 60.9 & 60.5 & 60.5\\ \end{tabular} \end{ruledtabular} \end{table*} Finally, we consider the formation energies of Al-Mg intermetallics. In addition to the Al-Mg intermetallics for which we computed the bulk properties, we also compute the formation energy of the $\beta^{\prime}$ alloy. The $\beta^{\prime}$ alloy has a disorder in 10 out of 879 sites with each site having 0.5 chance of being occupied by either Al or Mg~\cite{SampsonPhase}. In our simulations, we consider the two limits where all 10 sites are occupied by either Al or Mg and refer to these as $\beta^{\prime}$(Al) and $\beta^{\prime}$(Mg), respectively. For these two systems, we do not provide KS-DFT results as they are computationally prohibitive. 
The formation energies for the range of Al-Mg intermetallics are reported in Table~\ref{tab:bulkTrf4}. Our results suggest that the formation energies predicted by RS-OFDFT-FE are in good agreement with PROFESS calculations, and in close agreement with KS-DFT calculations. \begin{table*}[htbp] \caption{\label{tab:bulkTrf4}\small{ Formation energy per atom (eV/atom) of Al-Mg intermetallics calculated using RS-OFDFT-FE, PROFESS, and KS-DFT with TM-NLPS.}} \begin{ruledtabular} \begin{tabular}{ccccccc} Method & ${\rm Al}_{3}\rm{Mg}$ & ${\rm Mg}_{13}{\rm Al}_{14}$& ${\rm Mg}_{17}{\rm Al}_{12}$ & ${\rm Mg}_{23}{\rm Al}_{30}$ & $\beta^{\prime} ({\rm Al})$ & $\beta^{\prime} ({\rm Mg})$\\ \hline RS-OFDFT-FE & -0.010 & 0.053 & -0.008 & -0.035 & -0.026 & -0.020\\ PROFESS & -0.011 & 0.052 & -0.011 & -0.034 & -0.029 & -0.023\\ KS-NLPS & -0.007 & 0.061 &-0.027 & -0.019 & - & -\\ \end{tabular} \end{ruledtabular} \end{table*} \subsection{Configurational forces and atomic displacements}~\label{sec:ForceTransferability} As a next step in our study of the accuracy and transferability of RS-OFDFT-FE, we compute the configurational forces on atoms that are perturbed from their equilibrium positions and compare these with Kohn-Sham DFT calculations. We investigate the accuracy of the forces in both fcc Al and hcp Mg. We begin by considering the relaxed Al fcc unit cell, and the relaxed Mg hcp unit cell. In the relaxed Al fcc unit cell, we perturb the face-centered atom with fractional coordinates $ 0 ,\,\,\frac{1}{2}, \,\,\frac{1}{2} $ by 0.1 Bohr in the [0 1 0] direction. In the relaxed Mg hcp unit cell, we perturb the atom with fractional coordinates $\frac{2}{3}, \,\,\frac{1}{3}, \,\,\frac{1}{2}$ by 0.1 Bohr in the [$\bar{2}$ $\bar{1}$ 3 0] direction (directions in hcp Mg are represented using \textit{Miller-Bravais} indices). The configurational forces on the perturbed atoms are computed using RS-OFDFT-FE, and compared against KS-DFT calculations. 
The computed restoring forces, along [0 $\bar{1}$ 0] for the Al system and along [2 1 $\bar{3}$ 0] for the Mg system, are reported in Table~\ref{tab:tableForceTrf1}. We note that the computed restoring forces from RS-OFDFT-FE are in good agreement with PROFESS and KS-DFT calculations. As a more stringent test of accuracy and transferability, we consider the atomic relaxations around a mono-vacancy in fcc Al and hcp Mg. In the case of a mono-vacancy in Al, we consider a supercell containing $3 \times 3 \times 3$ fcc Al unit cells and remove an atom to create a mono-vacancy. We calculate the forces on the neighboring atoms of the mono-vacancy, and their relaxation displacements upon ionic relaxation, using both RS-OFDFT-FE and KS-DFT calculations. Periodic boundary conditions are employed in these calculations. Table~\ref{tab:tableForceTrf3} reports the computed force and relaxation displacement in Al on the nearest neighboring atom, which experiences the largest ionic force and relaxation. In the case of a mono-vacancy in Mg, we consider a supercell containing $3\times 3\times 2$ hcp unit cells, and Table~\ref{tab:tableForceTrf4} reports the ionic force and relaxation displacement on the neighboring atom that has the largest force in the presence of the vacancy. As is evident from the results, the ionic forces and relaxation displacements for a mono-vacancy in Al and Mg computed using RS-OFDFT-FE are in good agreement with PROFESS, and in close agreement with KS-DFT calculations. These results suggest that the proposed real-space orbital-free DFT formulation provides a good approximation to KS-DFT for Al-Mg materials systems.
\begin{table}[htbp] \caption{\label{tab:tableForceTrf1}\small{ Restoring force (eV/Bohr) on the perturbed atom in fcc Al and hcp Mg unit cells computed using RS-OFDFT-FE, PROFESS, and KS-DFT calculations.} } \begin{ruledtabular} \begin{tabular}{ccccc} & RS-OFDFT-FE & PROFESS & KS-BLPS & KS-NLPS \\ \hline Al & 0.148 & 0.137 & 0.134 & 0.126 \\ Mg & 0.019 & 0.019 & 0.018 & 0.019 \\ \end{tabular} \end{ruledtabular} \end{table} \begin{table}[htbp] \caption{\label{tab:tableForceTrf3} \small{ Ionic forces (eV/Bohr) and relaxation displacement (Bohr) on the nearest neighboring atom to a mono-vacancy in a periodic $3\times 3\times 3$ fcc Al supercell, calculated using RS-OFDFT-FE, PROFESS, and KS-DFT. $f$ and $d$ denote the magnitudes of ionic force and relaxation displacement. $\angle {\bf f}$ and $\angle {\bf d}$ denote the angles (in degrees) of the force and displacement vectors with respect to the KS-NLPS force and displacement vectors.}} \begin{ruledtabular} \begin{tabular}{ccccc} & RS-OFDFT-FE & PROFESS & KS-BLPS & KS-NLPS \\ \hline $f$ & 0.141 & 0.146 & 0.130 & 0.119 \\ $d$ & 9.90\e{-2} & 9.75\e{-2} & 9.47\e{-2} & 8.90\e{-2} \\ $\angle {\bf f}$ & 0.00 & 0.00 & 0.00 & 0.00\\ $\angle {\bf d}$ & 0.15 & 0.00 & 0.00 & 0.00\\ \end{tabular} \end{ruledtabular} \end{table} \begin{table}[htbp] \caption{\label{tab:tableForceTrf4}\small{Ionic forces (eV/Bohr) and relaxation displacement (Bohr) on the nearest neighboring atom to a mono-vacancy in a periodic $3\times 3\times 2$ hcp Mg supercell, calculated using RS-OFDFT-FE, PROFESS, and KS-DFT.}} \begin{ruledtabular} \begin{tabular}{ccccc} & RS-OFDFT-FE & PROFESS & KS-BLPS & KS-NLPS \\ \hline $f$ & 0.059 & 0.060 & 0.053 & 0.046 \\ $d$ & 8.26\e{-2} & 8.64\e{-2} & 7.00\e{-2} & 5.83\e{-2} \\ $\angle {\bf f}$ & 5.11 & 4.73 & 2.75 & 0.0\\ $\angle {\bf d}$ & 5.66 & 5.27 & 3.58 & 0.0\\ \end{tabular} \end{ruledtabular} \end{table} \subsection{Cell-size studies on a mono-vacancy in Al }~\label{sec:Mono-vacancy} Prior Fourier-space 
calculations using OF-DFT with the WGC functional~\cite{Ho2007}, and KS-DFT calculations~\cite{Chetty1995}, have suggested that cell-sizes containing $\sim 256$ lattice sites are sufficient to obtain a well-converged (to within 3 meV) mono-vacancy formation energy in fcc Al. These Fourier-space calculations, which employ periodic boundary conditions, compute the properties of a periodic array of vacancies. On the other hand, real-space calculations on isolated mono-vacancies in bulk, computed using the recently developed coarse-graining techniques for orbital-free DFT~\cite{Bala2010,QCOFDFT}, suggest that cell-size effects in mono-vacancy calculations are present up to cell-sizes of $\sim 10^3$ atoms. Although both approaches give similar converged vacancy formation energies, this discrepancy in the cell-size effects has thus far remained an open question. In order to understand the source of this discrepancy, we conduct a cell-size study of the mono-vacancy formation energy in Al using RS-OFDFT-FE with two types of boundary conditions: (i) periodic boundary conditions on the electronic fields; (ii) Dirichlet boundary conditions on the electronic fields with values corresponding to those of a perfect crystal. These Dirichlet boundary conditions, which we refer to as bulk Dirichlet boundary conditions, correspond to the scenario where perturbations in the electronic structure due to the mono-vacancy vanish on the boundary of the computational domain, and the electronic structure beyond the computational domain corresponds to that of the bulk. We note that periodic boundary conditions mimic the widely used Fourier-space calculations on point defects, whereas the bulk Dirichlet boundary conditions correspond to simulating an isolated point defect embedded in bulk. We note that the local real-space formulation of orbital-free DFT and the finite-element basis are key to being able to consider these boundary conditions.
We compute the vacancy formation energy at constant volume as~\cite{Finnis,Gillan1989} \begin{equation} E_{vf}= E\left(N-1,1,\frac{N-1}{N}\Omega\right)-\frac{N-1}{N}E\left(N,0,\Omega\right)\,, \end{equation} where $E\left(N,0,\Omega\right)$ denotes the energy of a perfect crystal containing $N$ atoms occupying a volume $\Omega$, and $E(N-1,1,\frac{N-1}{N}\Omega)$ denotes the energy of a computational cell containing $N-1$ atoms and one vacancy occupying a volume $\frac{N-1}{N}\Omega$. For both periodic boundary conditions and bulk Dirichlet boundary conditions, the lattice site where the vacancy is created is chosen to be the farthest site from the domain boundary. As we are primarily interested in the cell-size effects of the electronic structure, we do not consider ionic relaxations in this part of our study. Table~\ref{tab:tablemonovac1} shows the unrelaxed mono-vacancy formation energies for different cell sizes computed using RS-OFDFT-FE with both periodic boundary conditions and bulk Dirichlet boundary conditions. We note that the mono-vacancy formation energies using both sets of boundary conditions converge to the same value, which is also in good agreement with PROFESS and KS-DFT calculations (cf. Table~\ref{tab:tablemonovac2}). However, it is interesting to note that the mono-vacancy formation energies with periodic boundary conditions are well converged (to within 10 meV) by the $3\times 3\times 3$ cell-size (108 atoms), whereas a $6\times 6\times 6$ cell-size (864 atoms) is required to achieve a converged formation energy with bulk Dirichlet boundary conditions.
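The formation-energy expression above is a one-line computation; as a sketch (the function name and the numerical values are illustrative, not taken from the tables):

```python
def vacancy_formation_energy(e_defect, e_perfect, n):
    """E_vf = E(N-1, 1, (N-1)/N * Omega) - ((N-1)/N) * E(N, 0, Omega),
    with e_defect and e_perfect the two total energies and n = N."""
    return e_defect - (n - 1) / n * e_perfect

# Illustrative 108-atom cell, constructed so that E_vf = 0.9 eV.
e_perfect = 108 * (-57.9)
e_defect = 107 * (-57.9) + 0.9
e_vf = vacancy_formation_energy(e_defect, e_perfect, 108)
```

The $(N-1)/N$ prefactor removes the trivial energy difference due to the missing atom, isolating the cost of creating the vacancy at constant volume per atom.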
\begin{table}[htbp] \caption{\label{tab:tablemonovac1}\small{Unrelaxed mono-vacancy formation energies for Al computed using RS-OFDFT-FE with periodic boundary conditions ($E^{p}_{vf}$ in eV) and bulk Dirichlet boundary conditions ($E^{bD}_{vf}$ in eV).}} \begin{ruledtabular} \begin{tabular}{cccc} Cell size & N & $E^{bD}_{vf}$ & $E^{p}_{vf}$ \\ \hline $2\times 2\times 2$ & 32 & -0.390 & 0.955 \\ $3\times 3\times 3$ & 108 & 0.864 & 0.915 \\ $4\times 4\times 4$ & 256 & 0.971 & 0.908 \\ $5\times 5\times 5$ & 500 & 0.944 & - \\ $6\times 6\times 6$ & 864 & 0.918 & - \\ $7\times 7\times 7$ & 1372 & 0.914 & - \\ \end{tabular} \end{ruledtabular} \end{table} \begin{table}[htbp] \caption{\label{tab:tablemonovac2}\small{Unrelaxed mono-vacancy formation energies ($E_{vf}$ in eV) for Al computed using PROFESS~\cite{PROFESS2}, and KS-DFT on a $3 \times 3 \times 3$ computational cell. }} \begin{ruledtabular} \begin{tabular}{cc} & $E_{vf}$ \\ \hline PROFESS & 0.903\\ KS-DFT-BLPS & 0.815\\ KS-DFT-NLPS & 0.811\\ \end{tabular} \end{ruledtabular} \end{table} In order to understand this boundary-condition dependence of the cell-size effects, we compute the perturbations in the electronic fields due to the presence of the mono-vacancy by subtracting the electronic fields of a perfect crystal from those of the cell containing the mono-vacancy.
To this end, we define the normalized perturbations in the electronic fields computed on the finite-element mesh to be \begin{align}\label{eq:correctorFields} u^{c}_{h}= & \left(u_{h} -u^{p}_{h}\right)/ {\rm v}_{\rm av}\left( u^{p}_{h}\right)\,,\notag \\ \phi^{c}_{h}=& \left(\phi_{h} -\phi^{p}_{h}\right)/{\rm v}_{\rm av}\left( \phi^{p}_{h}\right)\,,\notag\\ k^{c}_{\alpha,h}=& \left(\sum\limits_{j=1}^m\,\omega_{\alpha_j,h}- \sum\limits_{j=1}^m\,\omega^p_{\alpha_j,h}\right) /{\rm v}_{\rm av}\left( \sum\limits_{j=1}^m\,\omega^p_{\alpha_j,h}\right)\,,\notag\\ k^{c}_{\beta,h}= & \left(\sum\limits_{j=1}^m\,\omega_{\beta_j,h}- \sum\limits_{j=1}^m\,\omega^p_{\beta_j,h}\right) /{\rm v}_{\rm av}\left( \sum\limits_{j=1}^m\,\omega^p_{\beta_j,h}\right)\,. \end{align} In the above, $\{u_{h}, \phi_{h}, \omega_{\alpha_j,h}, \omega_{\beta_j,h} \}$ and $\{u^{p}_{h}, \phi^{p}_{h}, \omega^{p}_{\alpha_j,h}, \omega^{p}_{\beta_j,h} \}$ denote the electronic fields in the computational domain with the vacancy and those without the vacancy (perfect crystal), respectively. ${\rm v}_{\rm av}(\cdot)$ denotes the volume average of an electronic field over the computational cell. As a representative metric, in the definitions of $k^{c}_{\alpha,h}$ and $k^{c}_{\beta,h}$ we only consider the kernel potentials corresponding to $K_0$. Figures~\ref{fig:monovacCf1} and~\ref{fig:monovacCf2} show the normalized corrector fields for the mono-vacancy, computed using periodic boundary conditions, along the face-diagonal of the periodic boundary. It is interesting to note from these results that the perturbations in the electronic structure due to the vacancy are significant up to $6\times 6\times 6$ computational cells. Thus, although the vacancy formation energy appears converged by a $3\times 3\times 3$ computational cell while using periodic boundary conditions, the electronic fields are not converged until a cell-size of $6\times 6\times 6$.
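Each corrector field in Eq.~\eqref{eq:correctorFields} is formed by subtracting the perfect-crystal field and normalizing by its volume average. A minimal sketch, assuming the fields are sampled at quadrature points with associated weights (the function name and the data are ours, not from this work):

```python
import numpy as np

def normalized_corrector(field, field_perfect, quad_weights):
    """u_c = (u - u_p) / v_av(u_p), where v_av is the quadrature-weighted
    volume average of the perfect-crystal field over the cell."""
    w = np.asarray(quad_weights, dtype=float)
    u_p = np.asarray(field_perfect, dtype=float)
    v_av = np.sum(w * u_p) / np.sum(w)
    return (np.asarray(field, dtype=float) - u_p) / v_av

# Constant perfect-crystal field of 2.0: the corrector is half the perturbation.
u_p = np.full(5, 2.0)
w = np.ones(5)
u = u_p + np.array([0.0, 0.2, 0.4, 0.2, 0.0])
corr = normalized_corrector(u, u_p, w)
```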
On the other hand, the cell-size convergence in mono-vacancy formation energy suggested by the bulk Dirichlet boundary conditions is in line with the convergence of electronic fields. These results unambiguously demonstrate that the cell-size effects in the electronic structure of defects are larger than those suggested by a cell-size study of defect formation energies employing periodic boundary conditions. Using bulk Dirichlet boundary conditions for the cell-size study of defect formation energies provides a more accurate estimate of the cell-size effects in the electronic structure of defects, and of the extent of electronic structure perturbations due to a defect. Further, while periodic boundary conditions are limited to the study of point defects, bulk Dirichlet boundary conditions can also be used to study defects such as isolated dislocations~\cite{Mrinal2015}, whose geometry does not admit periodic boundary conditions. \begin{figure*}[htbp] \begin{center} \includegraphics[width=0.9\textwidth]{monovacancy_cf234.eps} \caption{\label{fig:monovacCf1}\small{Normalized corrector fields for a mono-vacancy, computed with periodic boundary conditions, along the face diagonal on the computational domain boundary. The abscissa $\bar{d}$ represents a normalized coordinate along the face diagonal. Results for computational cell sizes from $2 \times 2 \times 2$ to $4 \times 4 \times 4$ are shown.
}} \end{center} \end{figure*} \begin{figure*}[htbp] \begin{center} \includegraphics[width=0.9\textwidth]{monovacancy_cf567.eps} \caption{\label{fig:monovacCf2}\small{Normalized corrector fields for a mono-vacancy, computed with periodic boundary conditions, along the face diagonal on the computational domain boundary, for cell sizes ranging from $5 \times 5 \times 5$ to $7 \times 7 \times 7$.}} \end{center} \end{figure*} \section{Summary}\label{sec:Summary} We have developed a local real-space formulation of orbital-free DFT with WGC kinetic energy functionals by reformulating the extended interactions in the electrostatic and kinetic energy functionals as local variational problems in auxiliary potentials. The proposed real-space formulation readily extends to all-electron orbital-free DFT calculations that are commonly employed in warm dense matter calculations. Building on the proposed real-space formulation, we have developed a unified variational framework for computing configurational forces associated with both ionic and cell relaxations. Further, we have also proposed a numerically efficient approach for the solution of the ground-state orbital-free DFT problem, by recasting the local saddle point problem in the electronic fields---electron density and auxiliary potential fields---as a fixed point iteration problem and employing a self-consistent iteration procedure. We have employed a finite-element basis for the numerical discretization of the proposed real-space formulation of orbital-free DFT. Our numerical convergence studies indicate that we obtain close to optimal rates of convergence in both ground-state energy and configurational forces with respect to the finite-element discretization. We subsequently investigated the accuracy and transferability of the proposed real-space formulation of orbital-free DFT for the Al-Mg materials system.
To this end, we conducted a wide range of studies on Al, Mg and Al-Mg intermetallics, including computation of bulk properties for these systems, formation energies of Al-Mg intermetallics, and ionic forces in bulk and in the presence of point defects. Our studies indicate that orbital-free DFT with the proposed real-space formulation is in good agreement with Kohn-Sham DFT calculations using both local as well as non-local pseudopotentials, thus providing an alternate linear-scaling approach for electronic structure studies of the Al-Mg materials system. We finally investigated the cell-size effects in the electronic structure of a mono-vacancy in Al, and demonstrated that the cell-size convergence in the vacancy formation energy computed by employing periodic boundary conditions is not commensurate with the convergence of the electronic fields. On the other hand, the true cell-size effects in the electronic structure are revealed by employing the bulk Dirichlet boundary conditions, where the perturbations in the electronic fields due to the defect vanish on the boundary of the computational domain. Our studies indicate that the true cell-size effects are much larger than those suggested by periodic calculations, even for simple defects like point defects. We note that the proposed real-space formulation and the finite-element basis are crucial to employing the bulk Dirichlet boundary conditions, which are otherwise inaccessible using Fourier-based formulations. The proposed formulation, besides being amenable to complex geometries and boundary conditions and providing excellent scalability on parallel computing platforms, also enables coarse-graining techniques like the quasi-continuum reduction~\cite{QCOFDFT,Mrinal2011} to conduct large-scale electronic structure calculations on the energetics of extended defects in the Al-Mg materials system, which is an important direction for future studies.
\begin{acknowledgments} We gratefully acknowledge the support from the U.S. Department of Energy, Office of Basic Energy Sciences, Division of Materials Science and Engineering under Award No. DE-SC0008637 that funds the Predictive Integrated Structural Materials Science (PRISMS) center at University of Michigan, under the auspices of which this work was performed. V.G. also gratefully acknowledges the hospitality of the Division of Engineering and Applied Sciences at the California Institute of Technology while completing this work. We also acknowledge Advanced Research Computing at University of Michigan for providing the computing resources through the Flux computing platform. \end{acknowledgments} \clearpage
\section{Introduction} A lateral density function (LDF) describes the density as a function of the radius with respect to the core of a shower. For vertical air showers the horizontal plane coincides with the plane of the front of the shower and the iso-density contours in the horizontal plane are circles. A polar symmetric LDF applies to vertical showers and to polar-averaged densities of inclined showers. For inclined showers the iso-density contours are instead ellipses \cite{Dova1999,Pryke}. As is known, the centers of the elliptic iso-density contours do not coincide with the shower core, see Fig. \ref{corecentervis}. \begin{figure}[htbp] \begin{center} \includegraphics[width=8cm]{corecentervis.pdf} \caption{Impression of the lateral density by means of iso-density contours, showing the different positions of the shower core and the center of an iso-density contour (the point of intersection of its major and minor axes).} \label{corecentervis} \end{center} \end{figure} \\ The distance between the shower core and the center of an elliptic contour will be denoted as the `shift'. The application of an elliptic LDF instead of a polar symmetric LDF increases the accuracy of the reconstruction of an inclined air shower observed with detectors in a horizontal plane. The accuracy of the reconstruction can be increased further if the shift is taken into account. To give a first impression, an LDF-A based solely on the projection, thus without a shift, and an LDF-A including the shift are both plotted, for the polar density of an average 100 PeV shower with zenith angle $45^\circ$ at a distance of 100 m from the core, in Fig. \ref{shiftellcomp}. We see that the additional angular density variations caused by the shift are of the same order as the angular density variations caused by the projection.
This suggests that if the ellipticity is taken into account for reconstruction purposes, then the shift might be taken into consideration as well. \begin{figure}[htbp] \begin{center} \includegraphics[width=7cm]{shiftellcomp.pdf} \caption{The polar density according to projection with a shift (solid) and projection without a shift (dashed) for an arbitrary 100 PeV shower with zenith angle $45^\circ$ at a distance 100 m from the core. The horizontal line (dotted) is the mean density.} \label{shiftellcomp} \end{center} \end{figure} \\ The main purpose of this paper is the construction of an asymmetric density function which includes the shift. To this end the shift will be investigated for different primary energies and different zenith angles. The shift is caused by the attenuation of the shower. For the electron part this is the atmospheric attenuation, which can be modeled to a certain extent. For the muon part the attenuation is mainly due to decay. The decay of the muons can be modeled. However, a decaying muon contributes an electron to the electron part. The two distributions are therefore intertwined. It therefore does not make much sense to consider the shift for electrons and muons separately. Besides, it would require considering the ratio of the densities of the electrons and muons in a model. Although we are interested in the shift of the combined density of electrons and muons together, we restrict the model to atmospheric attenuation. Even with this restriction the accuracy of the model is limited for several reasons, of which the neglect of local attenuation is the most important. Nevertheless, the model result gives an indication of the way the shift depends on zenith angle and on distance to the core. Accurate values for the shift are determined from lateral densities of MC showers. In plots of the determined shifts the model prediction will also be plotted for comparison.
Furthermore it will be shown how the asymmetric LDF including the shift is constructed from a polar symmetric LDF. For brevity we will denote a polar symmetric LDF simply as LDF and an asymmetric LDF as LDF-A. For a clear distinction we will denote the density in the front plane as $\rho$ and the asymmetric density in the horizontal plane as $\nu$. \\ \\ The contents of the paper can be divided into three parts: the modeling of the shift (Sections 2--5), the determination of the shift (Sections 6--8) and the construction of a polar density function including the shift (Sections 9--10). In Section 2 we consider a cylinder model for the shower in a suitable coordinate system. The consequences of the cylindrical projection will be considered for the situations without and with a shift. In Section 3 we model the effect of atmospheric attenuation of the shower on the lateral density. An analytical approximation for the shift is derived in Section 4. In Section 5 we obtain a comparable result on the basis of a cone model for the shower. In Section 6 a description is given of the method used for the investigation of the shift on the basis of horizontal densities of simulated showers. In Section 7 some general results will be presented as obtained from simulated showers. In Section 8 we focus on the behavior of the shift for the combined density of electrons and muons together. The muon energy deposit in scintillators is practically similar to the electron energy deposit \cite{Sima2011}. A combined lateral density of electrons and muons is therefore of interest for scintillator based observatories. The detection efficiency of scintillator detectors becomes small for densities below 0.5 m$^{-2}$. We will therefore focus on combined densities larger than 0.5 m$^{-2}$. We will see that in this region the shift is independent of shower size. Moreover, the relation between the shift and the size of the elliptic contour is almost linear.
The shift of the combined density will be compared with a proposed linear approximation. In Section 9 the polar density is considered. It is shown how to convert an LDF to the LDF-A including the shift, as a function of radius $r$ and polar angle $\alpha$ and parameterized by the zenith angle $\theta$. For the outline of the procedure we conveniently restrict ourselves to a proposed linear approximation for the shift, since it allows for an analytical solution for the LDF-A. To illustrate the procedure the LDF-A will be constructed explicitly for example LDFs of three simulated showers in Section 10. The predictions of the constructed LDF-A will be compared with the polar density of the simulated showers. In Section 11 the paper is concluded with a brief summary. \section{Cylinder model} Atmospheric attenuation has a large effect on inclined showers \cite{Dova2003}. One of its consequences is that it shifts the center of an elliptic iso-density contour. To model this we will first assume that all the particles run parallel to the shower core at the moment of arrival. Furthermore, we assume that contours of equal density are circles in the plane perpendicular to the shower direction. For the coordinate system we take the $x$ and $y$ axes in the horizontal plane and the $z$ axis in the upward vertical direction. The origin is taken at the position where the shower core axis intersects the horizontal plane. The azimuth angle of the shower is measured anti-clockwise with respect to the positive $x$-axis. Without loss of generality we consider inclined showers with zero azimuthal angle, thus with the shower core in the $x$,$z$-plane. The situation is schematically shown in Fig. \ref{contour1}. The tilted circle is perpendicular to the shower direction. At the moment the core reaches the surface in the origin of the coordinate system the shower front intersects the horizontal plane at the $y$-axis. We take a point $N$ on the tilted circle. Its distance from the origin is $r$.
If the direction of the shower particles is conveniently assumed parallel to the shower core, the projection of $N$ on the horizontal plane is point $P$. $M$ is a point on the $y$-axis with the same $y$-coordinate as $N$ and $P$. The angle between $NM$ and $PM$ and the angle between $NP$ and the vertical axis both are equal to the zenith angle $\theta$. From the geometry it follows \begin{equation}\label{1} MN^2=ON^2-OM^2=r^2-y^2 \ , \end{equation} \begin{equation}\label{2} r^2=x^2 \cos^2 \theta +y^2 \end{equation} and \begin{equation}\label{3} NP^2=x^2 \sin^2 \theta \ . \end{equation} Here and in the sequel ($x$,$y$) denote the coordinates of $P$ in the horizontal plane. \\ \begin{figure} \begin{center} \tdplotsetmaincoords{70}{20} \begin{tikzpicture}[scale=4.4,tdplot_main_coords] \tikzstyle{grid}=[thin,color=red,tdplot_rotated_coords] \draw[thick,->] (0,0,0) -- (1.2,0,0) node[anchor=west]{$x$}; \draw[thick] (0,0,0) -- (-1.2,0,0) ; \draw[thick,->] (0,0,0) -- (0,1.2,0) node[anchor=east]{$y$}; \draw[thick] (0,0,0) -- (0,-1,0); \draw[thick,->] (0,0,0) -- (0,0,.8) node[anchor=south]{$z$}; \draw (0,0,0) circle [x radius=.924,y radius=.8]; \draw[dashed] (0,.48,0) -- (-.739,.48,0) node[anchor=south east]{$P$}; \draw[ultra thick] (0,.0,0) -- (.5,0,.866); \draw[ultra thick,->] (.5,0,.866) -- (.375,0,.6495); \draw[dashed] (-.554,.48,.32) -- (-.739,.48,0); \draw[thin] (0,-.1,0) -- (0,0,0) node[anchor= north]{$O$};; \draw[dashed] (-.554,.48,.30) -- (-.554,.48,0); \draw[thin] (0,0,0) -- (0,.48,0) node[anchor=west]{$M$}; \node[rotate=60] at (.34,0,.73) {core axis}; \draw[<->] (.25,0,.433) arc (30:70:0.7) ; \node[] at (-.153,.553,0.01) {$\theta$}; \node[] at (-.14,.75,.30) {$\theta$}; \node[] at (-.588,.48,.2) {$\theta$}; \node[] at (-.51,.34,0) {$Q$}; \node[] at (.06,.94,0) {$K$}; \node[] at (-.32,.34,.2) {$r$}; \tdplotsetrotatedcoords{0}{30}{0} \draw[tdplot_rotated_coords] (0,0,0) circle [x radius=.8,y radius=.8]; \draw[tdplot_rotated_coords] (0,.48,0) -- (-.64,.48,0) 
node[anchor=south]{$N$}; \draw[tdplot_rotated_coords] (0,0,0) -- (-.64,.48,0); \end{tikzpicture} \caption{Front of inclined air shower falling on a horizontal surface.} \label{contour1} \end{center} \end{figure} \\ Without attenuation the asymmetry would be solely caused by the projection of the shower plane onto the horizontal observation plane. Equivalently, the intersection of a slant cylinder with a horizontal plane is an ellipse. The projection along the shower core axis means that the density along $NM$ is projected to the larger $PM$. As a consequence the density $\nu$ at the horizontal plane is smaller than the density $\rho$ of the inclined shower front by a factor $\cos \theta$: \begin{equation}\label{4} \nu(x,y)=\rho (r) \cos \theta \ . \end{equation} At the same time the iso-density contours are stretched to ellipses satisfying Eq. (\ref{2}). The horizontal ellipse and the inclined circle intersect each other and the positive $y$-axis at $K$. Denoting the $y$-coordinate of $K$ as $k$ we obtain \begin{equation}\label{5} x^2 \cos^2 \theta +y^2 = k^2 \ . \end{equation} This is an ellipse whose semi-major axis $a$ and semi-minor axis $b$ are related to each other via $b=a \cos \theta$ and where the center of the ellipse coincides with the shower core. \\ \\ Denoting the $x$-coordinate of the shifted center as $x_M$ the general equation for a shifted ellipse is \begin{equation}\label{5a} (x-x_M)^2 \cos^2 \theta +y^2 = b^2 \ , \end{equation} where $b$ is the size of the semi-minor axis. Since $y=k$ if $x=0$ we also obtain \begin{equation}\label{5b} x_M^2 \cos^2 \theta +k^2 = b^2 \ . \end{equation} From the latter two equations we can write the equation of the shifted ellipse also as \begin{equation}\label{5c} (x^2-2x x_M) \cos^2 \theta +y^2 = k^2 \ . \end{equation} The equation can be solved for $k$ after we have determined $x_M$ as a function of $k$ and $\theta$. By means of the solution the LDF-A can be constructed. 
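The projection relations above can be checked numerically. The following sketch (with toy values for $\theta$ and $k$ that are not taken from the paper) verifies that every point of the ground ellipse of Eq. (\ref{5}) maps back to the shower-plane radius $r=k$ of Eq. (\ref{2}), and that the eccentricity of the projected ellipse equals $\sin \theta$:

```python
import math

def shower_plane_radius(x, y, theta):
    """Eq. (2): radius r in the inclined shower front of the point whose
    projection along the core axis lands at (x, y) in the horizontal plane."""
    return math.hypot(x * math.cos(theta), y)

def path_NP(x, theta):
    """Eq. (3): length of the projection path NP."""
    return abs(x) * math.sin(theta)

theta = math.radians(45.0)   # toy zenith angle
k = 100.0                    # toy y-intercept of the contour, in m

# Every point of the ground ellipse x^2 cos^2(theta) + y^2 = k^2 (Eq. 5)
# maps back to the same shower-plane radius r = k.
a = k / math.cos(theta)      # semi-major axis
for t in range(0, 360, 30):
    x = a * math.cos(math.radians(t))
    y = k * math.sin(math.radians(t))
    assert abs(shower_plane_radius(x, y, theta) - k) < 1e-9

# With b = a cos(theta) the eccentricity of the projected ellipse is sin(theta).
ecc = math.sqrt(a * a - k * k) / a
print(ecc, math.sin(theta))
```

The assertion inside the loop confirms that the unshifted contour is exactly the projection of a circle of radius $k$ in the shower front.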
\section{Modeling attenuation} At the early stages of the longitudinal development the size of a shower increases. After the shower size has reached a maximum it approximately falls off exponentially with atmospheric depth. The attenuation length $\lambda$ is about 185 g cm$^{-2}$ \cite{CiampaClay,Antoni2003}. A consequence of the attenuation of the shower during the traverse from $N$ to $P$ is that the density of shower particles is decreased by a factor $e^{-\Delta X / \lambda}$, where $\Delta X$ is the additional atmospheric depth met by shower particles between $N$ and $P$. The atmospheric depth decreases exponentially with altitude with a characteristic length of about 8 km. Except for showers with very large energy and very large inclination the atmospheric depth between $N$ and $P$ is approximately constant. At the surface of the earth the increase $\Delta X$ is approximately equal to 0.13 g cm$^{-2}$ for every meter travelled through the air. Hence, \begin{equation}\label{6} \nu(x,y)=\rho (r) \cdot e^{- \xi \cdot NP} \cdot \cos \theta \ , \end{equation} where $\xi = 0.13 / 185 \approx 7.0 \cdot 10^{-4}$ m$^{-1}$. With the substitution of Eq. (\ref{3}) for $NP$ this becomes: \begin{equation}\label{7} \nu(x,y)=\rho (r) \cdot e^{\xi x \sin \theta} \cdot \cos \theta \ , \end{equation} where, according to Eq. (\ref{2}), $r=\sqrt{x^2 \cos^2 \theta +y^2}$. This is the basic equation for the analysis. It accounts for the attenuation at the late part of the inclined shower and for the reverse at the early part. As a consequence it leads to a shift of the elliptic density in the horizontal plane. The performance of the reconstruction of shower core positions should improve if the polar density function is modified for the shift. In our coordinate system the late part of the shower is at the negative $x$-axis. Notice that a negative value for $x$ leads to a decrease of the density. 
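As a quick numerical illustration of Eq. (\ref{7}), the sketch below (using an assumed toy front density purely for demonstration) evaluates the horizontal density on the early ($x>0$) and late ($x<0$) side of the core and confirms that the attenuation factor enhances the early side:

```python
import math

XI = 0.13 / 185.0   # xi of Eq. (6): extra attenuation per meter, ~7.0e-4 m^-1

def horizontal_density(rho, x, y, theta):
    """Eq. (7): horizontal density nu(x, y) for a polar symmetric
    shower-front density rho(r), with r from Eq. (2)."""
    r = math.hypot(x * math.cos(theta), y)
    return rho(r) * math.exp(XI * x * math.sin(theta)) * math.cos(theta)

# Assumed toy front density, used here only to make the example concrete.
rho = lambda r: math.exp(-((r / 0.025) ** 0.25))

theta = math.radians(45.0)
early = horizontal_density(rho, +200.0, 0.0, theta)  # early part (x > 0)
late = horizontal_density(rho, -200.0, 0.0, theta)   # late part (x < 0)
print(early > late)   # the early side is denser than the late side
```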
For $x=0$, $y=k$ we have \begin{equation}\label{8} \nu(0,k)=\rho (k) \cdot \cos \theta \ . \end{equation} To obtain the iso-density contour in the horizontal plane through $(0,k)$ we have to solve the equation $\nu (x,y) = \nu (0,k)$. That is, we have to solve the equation \begin{equation}\label{8a} \nu(x,y)=\rho (k) \cdot \cos \theta \end{equation} or, more explicitly, \begin{equation}\label{9} \rho(r) \cdot e^{\xi x \sin \theta} = \rho(k) \ , \end{equation} where $r=\sqrt{x^2 \cos^2 \theta +y^2}$. In the next section we will derive analytically a first order solution for this equation. \section{Analytical approximation} The key in the following analysis is the observation that the lateral density can be roughly described by the following exponential function: \begin{equation}\label{10} \rho(r) \propto e^{-(r/r_0)^w} \ . \end{equation} In Fig. \ref{vwkfit} the polar averaged combined density, binned with bin-width 1 m, and its approximation by the exponential function are plotted for three showers: one with energy $10^{16}$ eV and zenith angle $30^\circ$, one with energy $10^{17}$ eV and zenith angle $45^\circ$ and one with energy $10^{18}$ eV and zenith angle $52.5^\circ$, denoted as showers \textbf{a}, \textbf{b} and \textbf{c}, respectively. \begin{figure}[htbp] \begin{center} \includegraphics[width=8.5cm]{vwkfit.pdf} \caption{Polar averaged lateral density of electrons and muons together for the three showers \textbf{a}, \textbf{b} and \textbf{c} as given in the text. The dashed curves are the approximations by the exponential function as given in the text.} \label{vwkfit} \end{center} \end{figure} \\ It is evident that the approximation is not particularly good. The exponential function is solely intended as a toy function. The values of the parameters for showers \textbf{a}, \textbf{b} and \textbf{c} are 0.022, 0.026 and 0.017 m for $r_0$, and 0.25, 0.25 and 0.22 for $w$, respectively. 
In the remainder of the analysis we will solely use $r_0 = 0.025$ m and $w = 0.25$. By means of the `toy' function for the lateral density the equation $\nu (x,y) = \nu (0,k)$ can be written as \begin{equation}\label{11} e^{-(r/r_0)^w} e^{ \xi x \sin \theta} = e^{-(k/r_0)^w} \ . \end{equation} Hence, \begin{equation}\label{12} r^2 = k^2 \left(1+ \frac{\xi x \sin \theta}{(k/r_0)^w} \right)^{2/w} \ . \end{equation} The latter can be expressed as \begin{equation}\label{13} r^2 = k^2 \left(1+x w \eta \cos \theta \right)^{2/w} \ , \end{equation} where \begin{equation}\label{14} \eta = \frac{\xi}{w(k/r_0)^w} \tan \theta \ . \end{equation} Since $xw\eta \cos \theta <1$ for $x$ smaller than $10^4$ m we will take a first order approximation for the right hand side of Eq. (\ref{13}): \begin{equation}\label{15} r^2 \approx k^2 \left(1+2 x \eta \cos \theta \right) \ . \end{equation} With the substitution of Eq. (\ref{2}) for $r$ and some rearrangement we obtain \begin{equation}\label{16} \left(x \cos \theta - k^2 \eta \right)^2 +y^2 \approx k^2 \left(1+k^2 \eta^2 \right) \ . \end{equation} From the comparison with Eqs. (\ref{5a}) and (\ref{5b}) we find \begin{equation}\label{16a} b^2 \approx k^2+k^4 \eta^2 \end{equation} and \begin{equation}\label{17} x_M \approx \frac{k^2 \eta}{\cos \theta} \ . \end{equation} With the substitution of the Eq. (\ref{14}) for $\eta$ we obtain the following model prediction for the shift: \begin{equation}\label{18} x_M = \frac{\xi \cdot r_0^w}{w} \cdot k^{2-w} \cdot \frac{\tan \theta}{\cos \theta} \ . \end{equation} Since $x_M >0$ the shower attenuation does shift the center of the ellipse towards the early part of the shower. The shower attenuation did not change the eccentricity, which is equal to $\sin \theta$. 
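The quality of the first-order approximation can be verified numerically. The sketch below (using the toy parameters $r_0 = 0.025$ m, $w = 0.25$, $\xi = 7.0 \cdot 10^{-4}$ m$^{-1}$ from the text, together with an illustrative contour of $k = 500$ m at $\theta = 45^\circ$) solves Eq. (\ref{11}) along $y=0$ by bisection and compares the midpoint of the two crossings with the first-order shift of Eq. (\ref{18}):

```python
import math

XI, R0, W = 7.0e-4, 0.025, 0.25    # toy-LDF parameters used in the text
THETA = math.radians(45.0)         # illustrative zenith angle
K = 500.0                          # illustrative contour intercept, in m

def contour_residual(x):
    """Eq. (11) along y = 0: (r/r0)^w - xi*x*sin(theta) - (k/r0)^w,
    with r = |x| cos(theta)."""
    r = abs(x) * math.cos(THETA)
    return (r / R0) ** W - XI * x * math.sin(THETA) - (K / R0) ** W

def bisect(f, lo, hi, n=100):
    """Simple bisection for a sign change of f on [lo, hi]."""
    for _ in range(n):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

x_plus = bisect(contour_residual, 1.0, 5000.0)      # early-side crossing
x_minus = bisect(contour_residual, -5000.0, -1.0)   # late-side crossing
x_center = 0.5 * (x_plus + x_minus)                 # numeric center of the contour

# First-order prediction of Eq. (18).
x_M = (XI * R0 ** W / W) * K ** (2 - W) * math.tan(THETA) / math.cos(THETA)
print(x_center, x_M)   # the two agree to within a few percent
```

The small residual difference between the numeric center and the analytic $x_M$ reflects the higher-order terms dropped in Eq. (\ref{15}).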
Substituting $\xi=7.0 \cdot 10^{-4}$ m$^{-1}$, $r_0 = 0.025$ m and $w = 0.25$, we obtain as the `cylinder' model prediction for the shift in m: \begin{equation}\label{19} x_M =1.1 \cdot 10^{-3} \cdot k^{1.75} \cdot \frac{\tan \theta}{\cos \theta} \ . \end{equation} \section{Cone model} Another model is the cone model. That is, we consider paths from apex $A$ through the horizontal plane at ground level. The differences in experienced atmospheric depth, due to the different path lengths, will be translated into attenuation. To this end we consider a shower cone with apex $A$ in the same coordinate system as in Fig. \ref{contour1}. As before, we regard inclined showers with zero azimuthal angle, thus with the shower core in the $x$,$z$-plane. For a position inside the cone, the angle at the apex between the core axis and the line through that position is denoted as $\delta$. The situation is schematically shown in Fig. \ref{contour2}. \begin{figure}[htbp] \begin{center} \tdplotsetmaincoords{70}{20} \begin{tikzpicture}[scale=4,tdplot_main_coords] \tikzstyle{grid}=[thin,color=red,tdplot_rotated_coords] \draw[thick,->] (0,0,0) -- (1.0,0,0) node[anchor=west]{$x$}; \draw[thick] (0,0,0) -- (-1.2,0,0) ; \draw[thick,->] (0,0,0) -- (0,1.2,0) node[anchor=east]{$y$}; \draw[thick] (0,0,0) -- (0,-1,0); \draw[thick,->] (0,0,0) -- (0,0,.95) node[anchor=south]{$z$}; \draw (-1.1,.0,0) arc (180:360:0.935 ); \draw (-1.1,.0,0) arc (180:77:0.805 ); \draw (.767,0,0) arc (10:87:0.95); \draw[dashed] (0,0,0) -- (-.55,.48,0); \draw[dashed] (-.55,.48,.32) -- (-.88,.56,0); \draw[dashed] (-.55,.48,0) -- (-.88,.56,0) node[anchor=south ]{$P$}; \draw[ultra thick] (0,.0,0) -- (1.,0,1.732); \draw[thin] (1,0,1.732) -- (-.55,.48,.32); \draw[thin] (1,0,1.732) -- (.72,0,-.42); \draw[thin] (0,0,0) -- (.72,0,-.42); \draw[ultra thick,->] (.5,0,.866) -- (.375,0,.6495); \draw[thin] (0,-.1,0) -- (0,0,0) node[anchor= north]{$O$};; \draw[dashed] (-.554,.48,.30) -- (-.554,.48,0); \draw[thin] (0,0,0) -- (0,.48,0) node[anchor=west]{$M$}; \node[rotate=60] at (.34,0,.73) {core
axis}; \draw[<->] (.25,0,.433) arc (30:70:0.7) ; \node[] at (0.8,0,1.5) {$\delta$}; \node[] at (0.92,0,1.5) {$\delta$}; \node[] at (-.14,.75,.30) {$\theta$}; \node[] at (-.035,.0,.13) {$\beta$}; \node[] at (-.51,.34,0) {$Q$}; \node[] at (-.25,.34,.12) {$r$}; \node[] at (.73,-.8,.0) {$r$}; \node[] at (.5,.0,.7) {$L$}; \node[] at (.07,.91,0) {$K$}; \node[] at (1.01,0,1.8) {$A$}; \tdplotsetrotatedcoords{0}{30}{0} \draw[tdplot_rotated_coords] (0,-.07,0) circle [x radius=.84,y radius=.84]; \draw[tdplot_rotated_coords] (0,.48,0) -- (-.64,.48,0) node[anchor=south]{$N$}; \draw[tdplot_rotated_coords] (0,0,0) -- (-.64,.48,0); \end{tikzpicture} \caption{Front of inclined air shower falling on a horizontal surface.} \label{contour2} \end{center} \end{figure} \\ As for the cylinder model, the tilted circle is perpendicular to the shower direction. We take a point $N$ on the tilted circle as shown in Fig. \ref{contour2}. Its distance with respect to the origin is denoted as $r$. The projection of $N$, along the cone, on the horizontal plane is point $P$. $M$ is a point on the $y$-axis with identical $y$-coordinate as $N$. The angle between $NM$ and $QM$ is equal to the zenith angle $\theta$. The angle between $NP$ and the vertical axis is equal to $\theta +\delta$. From the geometry it follows \begin{equation}\label{20} (x_A, y_A, z_A)= \left( L \sin \theta, 0, L \cos \theta \right) \end{equation} and \begin{equation}\label{21} (x_N, y_N, z_N)= L \tan \delta \left( - \sin \beta \cos \theta, \cos \beta, \sin \beta \sin \theta \right) \ , \end{equation} where \begin{equation}\label{22} \sin \beta = \frac{MN }{ON} \end{equation} and $ON=r$. The line through $A$ and $N$ intersects the $z=0$ plane in point $P$ with ($x$,$y$) coordinates \begin{equation}\label{23} (x, y)= \frac{L \tan \delta}{1-\tan \delta \sin \beta \tan \theta} \left( - \frac{\sin \beta}{\cos \theta}, \cos \beta \right) \ . 
\end{equation} As before we let $k$ be the $y$-coordinate where the projected contour intersects the positive $y$-axis. Using the coordinates as given before we find, to first order in $\delta$, the following approximate lengths of paths $AP$ and $AK$: \begin{equation}\label{24} AP\approx L(1+\delta \tan \theta \sin \beta) \end{equation} and \begin{equation}\label{25} AK\approx L \ . \end{equation} Taking 1030 g cm$^{-2}$ for the atmospheric depth at ground level the slant atmospheric depth experienced in these paths is \begin{equation}\label{26} X_{AP} \approx \frac{1030 \cdot (1+\delta \tan \theta \sin \beta )}{\cos \theta} \end{equation} and \begin{equation}\label{27} X_{AK} \approx \frac{1030}{\cos \theta} \ . \end{equation} The difference $\Delta X$ between the atmospheric depth experienced by path $AP$ and path $AK$ is \begin{equation}\label{28} \Delta X =X_{AP}-X_{AK} \approx \frac{1030 \cdot \delta \tan \theta \sin \beta}{\cos \theta} \ . \end{equation} This corresponds to an additional attenuation given by \begin{equation}\label{29} e^{- \Delta X / \lambda } \ , \end{equation} which can be elaborated to \begin{equation}\label{30} e^{- \Delta X / \lambda }=e^{- 7.9 \cdot 10^3 \xi \delta \sin \beta \tan \theta \cos^{-1} \theta} \ . \end{equation} For $\delta$ we have \begin{equation}\label{31} \delta \approx \frac{r}{L} = \frac{r \cos \theta}{h} \ , \end{equation} where $h$ is the altitude of the apex in m. Hence \begin{equation}\label{32} e^{- \Delta X / \lambda }=e^{- 7.9 \cdot 10^3 h^{-1} \xi r \sin \beta \tan \theta } \ . \end{equation} Since $r \sin \beta =MN \approx -x \cos \theta$ the attenuation can be written as \begin{equation}\label{33} e^{- \Delta X / \lambda }=e^{ 7.9 \cdot 10^3 h^{-1} \xi x \sin \theta} \ . \end{equation} Equating $\nu(x,y)$ with $\nu(0,k)$ we obtain \begin{equation}\label{34} e^{-(r/r_0)^w} e^{ 7.9 \cdot 10^3 h^{-1} \xi x \sin \theta} = e^{-(k/r_0)^w} \ . 
\end{equation} The exponent of the attenuation differs only by a factor $7.9 \cdot 10^3 h^{-1}$ from the one in the previous model. Proceeding in a similar way as in the previous section will therefore lead to a shift which is $7.9 \cdot 10^3 h^{-1}$ times as large as the one of the previous model: \begin{equation}\label{35} x_M = \frac{7.9 \cdot 10^3}{h} \cdot \frac{\xi \cdot r_0^w}{w} \cdot k^{2-w} \cdot \frac{\tan \theta}{\cos \theta} \ . \end{equation} For a shower with zenith angle $60^\circ$ the difference in atmospheric depth between the late and early part of the shower is found to be $2\Delta X =370$ g cm$^{-2}$ at a distance of 1000 m from the core \cite{valino2010}. From equations (\ref{28}) and (\ref{31}) it follows for the difference between the late ($\beta = \pi /2$) and early ($\beta = - \pi /2$) part: \begin{equation}\label{36} 2\Delta X \approx \frac{2 \cdot 1030 \cdot r \tan \theta }{h} \ . \end{equation} For $2 \Delta X =370$ g cm$^{-2}$, $r=1000$ m and $\theta=60^\circ$ the latter equation is satisfied if $h=9.6 \cdot 10^3$ m. If we substitute $h=9.6 \cdot 10^3$ m, $r_0=0.025$ m, $w=0.25$ and $\xi = 7.0 \cdot 10^{-4}$ m$^{-1}$, the `cone' model prediction for the shift in m is \begin{equation}\label{37} x_M = 9.2 \cdot 10^{-4} \cdot k^{1.75} \cdot \frac{\tan \theta}{\cos \theta} \ . \end{equation} The latter is only 16\% smaller than the cylinder model prediction. It can be imagined that the energetic particles near the core point on average to an apex at a larger altitude. That would correspond to an even smaller prediction. We return to this in Section 7 where we discuss the situation for $h$ being a function of $k$. Anyway, in the following we will compare the shift solely with the cylinder model prediction (\ref{19}), which is equal to the cone model prediction (\ref{35}) if $h \approx 7.9 \cdot 10^3$ m. \section{Method} In this section we describe the method of investigation of the shift in the lateral density. 
The method can be applied for the electron density, muon density and the combined density of electrons and muons together. We restricted ourselves to lateral densities of proton initiated showers. The showers were generated with CORSIKA-v7.4 \cite{corsika1}, with QGSJET-II-04 \cite{ostap1,ostap2} + GHEISHA \cite{gheisha1} for the hadronic interactions. The showers were generated \emph{without thinning}. The horizontal observation level was set to 10 m. The energy cuts are 0.3 GeV for hadrons and muons and 3 MeV for electrons and photons. For each shower the lateral distribution is binned with bin size equal to $10 / \sqrt{\rho}$ m. As an illustration the binned lateral combined density of an arbitrary $10^{17}$ eV shower with $45^\circ$ zenith angle and $0^\circ$ azimuth angle is shown in Fig. \ref{density17ev45}. \\ \begin{figure}[htbp] \begin{center} \includegraphics[width=10cm]{density17ev45deggray.pdf} \caption{The asymmetric lateral density of a $10^{17}$ eV proton shower with zenith angle $45^\circ$ and azimuth angle $0^\circ$.} \label{density17ev45} \end{center} \end{figure} \\ From the binned density, smoothed by means of a Gaussian filter with a sigma of one bin, the iso-density contours are determined. By means of minimization of the sum of squares the contours are fitted by an ellipse with equation \begin{equation}\label{38} \left(\frac{x -x_M}{a} \right)^2 + \left(\frac{y -y_M}{b} \right)^2 =1 \ , \end{equation} where $a$ and $b$ are the semi-major and semi-minor axes respectively. For the example shower of Fig. \ref{density17ev45} the final contour with density $< \rho > = 1$ m$^{-2}$ is shown in Fig. \ref{contourellips} together with the ellipse resulting from the fit. In this way we obtain values for the semi-major axis, the semi-minor axis and $x_M$. The center of the ellipse is denoted as $M$. The shower core is at the origin $O$. The focal points $F_1$ and $F_2$ are at distance $c$ from the center $M$. 
This distance is related to the semi-major and semi-minor axes via $c^2=a^2-b^2$. \\ \begin{figure}[htbp] \begin{center} \includegraphics[width=10cm]{contourellips.pdf} \caption{The iso-density contour for density $\rho = 1$ m$^{-2}$ of a $10^{17}$ eV proton shower with zenith angle $45^\circ$ and $0^\circ$ azimuth angle (solid). It is with good approximation equal to an ellipse (dashed).} \label{contourellips} \end{center} \end{figure} \\ Next to $x_M$ the fit also delivers a value for the $y_M$ coordinate. Its value, which is close to zero as it should be, will be left out of the analysis. Ignoring $y_M$, Eq. (\ref{38}) can be written either as \begin{equation}\label{39} \left( (x -x_M)\frac{b}{a} \right)^2 + y^2 =b^2 \end{equation} or as \begin{equation}\label{40} \left( x^2 - 2 x \cdot x_M \right) \frac{b^2}{a^2} + y^2 =k^2 \ , \end{equation} where \begin{equation}\label{41} k^2 = b^2-\frac{b^2}{a^2} x_M^2 \ . \end{equation} So, having determined the semi-major axis $a$, the semi-minor axis $b$ and the shift $x_M$, we also know the corresponding value of $k$. \\ \\ From the model analyses, we expect $b=a \cos \theta$. In Fig. \ref{ellipticity} the value of $b/a$ is plotted against zenith angle for the combined density of the simulated showers. \begin{figure}[htbp] \begin{center} \includegraphics[width=8.5cm]{ellipticity.pdf} \caption{The ratio of the semi-minor and semi-major axis of elliptic lateral combined density of simulated showers for different zenith angles. The dashed line is $\cos \theta$.} \label{ellipticity} \end{center} \end{figure} \\ We see that $b/a$ follows $\cos \theta$ for small zenith angles. For larger zenith angles a small deviation shows up. The deviation increases with zenith angle. Since the expectation $b=a \cos \theta$ is based on polar symmetric iso-density contours in the front plane, the deviation suggests that the iso-density contours in the front plane are slightly elliptic with the major axis perpendicular to the azimuth direction. 
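Equation (\ref{41}) gives the contour intercept $k$ directly from the fitted ellipse parameters. A minimal sketch (with illustrative values for $a$, $b$ and $x_M$ that are invented, not taken from an actual fit):

```python
import math

def k_from_fit(a, b, x_M):
    """Eq. (41): intercept k of the contour with the y-axis, from the
    fitted semi-major axis a, semi-minor axis b and shift x_M."""
    return math.sqrt(b * b - (b * b / (a * a)) * x_M * x_M)

def on_ellipse(x, y, a, b, x_M):
    """Eq. (38) with y_M = 0."""
    return math.isclose(((x - x_M) / a) ** 2 + (y / b) ** 2, 1.0)

# Illustrative fit values (invented, not from a simulated shower).
a, b, x_M = 700.0, 500.0, 80.0
k = k_from_fit(a, b, x_M)
assert on_ellipse(0.0, k, a, b, x_M)   # the contour indeed passes through (0, k)
print(k)
```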
With the substitution of $\cos \theta$ for $b/a$ the Eqs. (\ref{39}) through (\ref{41}) reduce to the Eqs. (\ref{5a}) through (\ref{5c}). \\ \\ The method has been tested with artificial Poisson randomized shifted elliptic densities in order to check if the imposed shift is returned. The deviations between the imposed and returned shifts were small, around 1 m for a density of 1 m$^{-2}$ or less. Next to the inaccuracy of the method there are also contributions to the deviations due to the fluctuations of the densities. As a measure for the uncertainty the deviations of $y_M$ with respect to the expected value 0 are taken. For the simulated showers we found the standard deviation of $y_M$ to depend on density roughly as $\sigma_y \approx \rho^{-1/3}$. Assuming the standard deviation of $x_M$ to be comparable, we use it for the size of the error bars in the diagrams in the next section. \section{General Monte Carlo results} In this section we consider the lateral densities of two proton initiated showers with energies 100 PeV and 10 PeV, both with zenith angle $45^\circ$. For both showers the shifts were determined for the electron density, the muon density and the combined density. For the combined density and the muon density the shift was determined for densities 0.001, 0.002, 0.003, 0.004, 0.006, 0.008, 0.01, 0.02, 0.03, 0.04, 0.06, 0.08, 0.10, 0.20, 0.30, 0.40, 0.50, 0.64, 0.81, 1.00, 1.44, 2.00 and 5.0 m$^{-2}$. For the electron density the same densities were used, with the densities 0.0002, 0.0003, 0.0004, 0.0006 and 0.0008 m$^{-2}$ added. The shifts are shown in Figs. \ref{explainshift45} and \ref{explainshift2}, respectively. The dashed curve in Figs. \ref{explainshift45} and \ref{explainshift2} is the model prediction (\ref{19}). \begin{figure}[htbp] \begin{center} \includegraphics[width=8cm]{explainshift45.pdf} \caption{The shifts of the electron, muon and combined density of a 100 PeV shower with zenith angle $45^\circ$. 
The data points are connected with line pieces to guide the eye. The dashed curve is the model prediction. The densities 0.01, 0.001 and 0.0002 are depicted.} \label{explainshift45} \end{center} \end{figure} \\ Comparing the model prediction with the determined shifts of the electron density, we see that it follows them to a certain extent. The model prediction is too low for $k<700$ m and $k<500$ m for the 100 PeV and the 10 PeV shower, respectively. \\ \\ There are many reasons for the model to deviate from the determined shifts. To begin with, the plotted model prediction for the shift was based on a constant value for $h$. In reality $h$ will depend on the distance to the shower core, as visualized in Fig. 2 of \cite{garcia-pinto2009}. This suggests a large value for $h$ near the core and a decreasing value for $h$ for increasing $k$. According to the cone model a larger value for $h$ implies a smaller value for the shift. As a consequence it will enhance the underestimation near the core. The exponential decrease of the atmospheric depth with altitude has a small effect. It will flatten the model curve for large $k$. We just took a constant value for the atmospheric depth in the model, which is sufficient for our region of interest: $k<1000$ m and $\theta < 60^\circ$. \begin{figure}[htbp] \begin{center} \includegraphics[width=8cm]{explainshift2.pdf} \caption{The shifts of the electron (triangle), muon (diamond) and combined (circle) density of a 10 PeV shower with zenith angle $45^\circ$. The dashed curve is the model prediction. The numbers 0.01, 0.001 and 0.0002 depict the density.} \label{explainshift2} \end{center} \end{figure} \\ The most important reason for the poor prediction is probably the local variation of the attenuation. It can be imagined that the attenuation is large near the core and decreases for increasing $k$. 
The latter would enhance the shift near the core and flatten the model curve further away from the core. In other words, it might bring the model curve more in agreement with the shift curve of the combined density. Modeling this requires the knowledge of $\xi(k)$ as a function of $k$. The local $h(k)$ can possibly be obtained from simulated showers by inspection of the directions of the electrons when they arrive at the observation plane. The knowledge of $\xi(k)$ seems more difficult: besides the inspection of the local energy distribution of electrons it also requires a relation between the distributions and the local attenuation. On the other hand, if one succeeds in describing and modeling the atmospheric depth and $h$ as functions of $k$, parameterized by zenith angle, there is an opportunity to retrieve the local attenuation $\xi(k)$ from the shift $x_M(k)$ as determined from the simulated electron density. For a model which predicts the shifts of the muon density one has to consider the decay of muons to electrons and the subsequent atmospheric attenuation of the electrons. For the combined density the shift then follows from \begin{equation}\label{41a} x_{M,e+\mu}(k) =\frac{\rho_e (k) x_{M,e}(k)+\rho_\mu (k) x_{M,\mu}(k)}{\rho_e (k)+\rho_\mu (k)} \ . \end{equation} For the aforementioned reasons it is difficult to derive a precise model for the shifts of the electron density, let alone for the muon density and the combined density. Therefore we will not proceed in that direction. Instead, we will focus our attention on the behavior of the determined shift for relatively large densities. \\ \\ In Fig. \ref{shiftbranche} the shift curves of the combined density of a 10 PeV and a 100 PeV shower with zenith angle $45^\circ$ are once more plotted. They are identical to the ones in Fig. \ref{explainshift45} and Fig. \ref{explainshift2}, except that the dots and error bars are left out. 
\begin{figure}[htbp] \begin{center} \includegraphics[width=9cm]{shiftbranche.pdf} \caption{The shifts of the combined density of a 10 PeV shower and a 100 PeV shower both with zenith angle $45^\circ$.} \label{shiftbranche} \end{center} \end{figure} \\ The slopes of both curves show some curious irregularities. For the 100 PeV curve these are around $k=200$, 580 and 1400 m. For the 10 PeV curve we see them around $k=250$ and 650 m. The question arises whether these irregularities are the remnants of consecutive hadronic interactions. \\ \\ The two shift curves fall on top of each other for $k<200$ m. At $k=200$ m they branch. Beyond the fork the difference between the curves slightly increases for increasing $k$. In general this means that the shift curves are not independent of shower size. The density of the 10 PeV shower at the branching point is about 0.06 m$^{-2}$. In Fig. \ref{explainshift3} the shifts of the combined density as found for the 10 PeV shower are plotted on top of the ones for the 100 PeV shower. For the 10 PeV shower the plotted densities (white) are 0.2, 0.3, 0.4, 0.5, 0.64, 0.81, 1.00, 1.44, 2.0, 5.0 and 10 m$^{-2}$. For the 100 PeV shower the plotted densities (black) are 0.3, 0.4, 0.5, 0.64, 0.81, 1.00, 1.44, 2.0, 5.0, 10, 20 and 50 m$^{-2}$. \begin{figure}[htbp] \begin{center} \includegraphics[width=10cm]{explainshift3.pdf} \caption{The shifts of the combined density for a 10 PeV shower (white) on top of the ones for a 100 PeV shower (black), both showers with zenith angle $45^\circ$.} \label{explainshift3} \end{center} \end{figure} \\ We see that the shift curves practically fall on top of each other within the given density domains. This means that we can try to find a relation between $x_M$ and $k$ independent of shower size (or energy) similar to the model prediction. In addition, the curves are almost linear. As we will see further on, the latter allows for an analytical solution for the shifted polar density. 
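The density-weighted combination of Eq. (\ref{41a}) can be sketched as follows (the numbers are invented for illustration only):

```python
def combined_shift(rho_e, x_M_e, rho_mu, x_M_mu):
    """Eq. (41a): shift of the combined electron plus muon density as the
    density-weighted mean of the individual shifts (all at the same k)."""
    return (rho_e * x_M_e + rho_mu * x_M_mu) / (rho_e + rho_mu)

# Invented illustrative numbers: where the electron density dominates,
# the combined shift stays close to the electron shift.
x = combined_shift(rho_e=0.9, x_M_e=40.0, rho_mu=0.1, x_M_mu=10.0)
print(x)   # 37.0
```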
\section{Specific Monte Carlo results} In this section we will investigate the shift in combined densities of a set of simulated showers. The energies of the showers are $10^{15}$, $10^{16}$, $10^{17}$ and $10^{18}$ eV. The zenith angles of the showers range from $7.5^\circ$ through $60^\circ$, in steps of $7.5^\circ$. As an illustration the ratio $N_e / ( N_e + N_\mu )$ is plotted against energy for several zenith angles in Fig. \ref{eldivelmu}. The energy - zenith angle entries are shown as black dots. \begin{figure}[htbp] \begin{center} \includegraphics[width=10cm]{eldivelmu.pdf} \caption{The ratio $N_e / (N_e + N_\mu )$ against energy for zenith angles $0^\circ$ through $60^\circ$, in steps of $7.5^\circ$.} \label{eldivelmu} \end{center} \end{figure} \\ Not all possible combinations of energies and zenith angles are applicable for investigation. For showers with relatively low energy and relatively large zenith angle, shown as the gray region in Fig. \ref{eldivelmu}, the small shower size at observation level in general does not allow for a determination of the shift. The choice for generating showers without thinning is made to avoid possible deviations caused by thinning. The consumption of computer time and of storage space grows exponentially with the size of the simulated shower \cite{apcworkshop}. This is especially the case for shower simulation without thinning. As a consequence the library of showers generated without thinning is limited, in particular for large energies. For the largest energy considered, $10^{18}$ eV, the library is currently limited to 10 showers for zenith angle $60^\circ$, 10 for zenith angle $52.5^\circ$, 8 for zenith angle $45^\circ$, 5 for zenith angle $37.5^\circ$ and none for zenith angles $30^\circ$, $22.5^\circ$ and $15^\circ$. To obtain a roughly equal share in our diagrams we take 10 showers for each of the other energy - zenith angle entries. 
For each shower we determine the iso-density contours for combined densities 0.50, 0.64, 0.81, 1.00, 1.44, 2.0, 5.0, 10, 20 and 50 m$^{-2}$, insofar as these densities occur in a shower, thus at most 10 data points per shower. For the densities considered the shift of the combined density is close to the one we would have obtained from the electron density, except for regions where the muon component dominates: for large distances to the core and for zenith angles in the neighborhood of $60^\circ$ and larger. As depicted in Fig. \ref{eldivelmu}, for zenith angles 7.5$^\circ$ through 30$^\circ$ showers were used with energies $10^{15}$, $10^{16}$ and $10^{17}$ eV, for zenith angles 37.5$^\circ$ through 52.5$^\circ$ showers were used with energies $10^{16}$, $10^{17}$ and $10^{18}$ eV, and for zenith angle 60$^\circ$ showers were used with energies $10^{17}$ and $10^{18}$ eV. For 10 showers at 3 energy decades we obtained a maximum of 300 data points for each zenith angle. \\ \\ As we will see, and as suggested by the model results, for each zenith angle the data points follow a curve independent of energy. The curves are fitted with a function similar to the one resulting from the models. To be specific, for each zenith angle we plot the $x_M$ against $k$ and fit the result by the equation \begin{equation}\label{43} x_M = Ak^B \cdot \frac{\tan \theta}{\cos \theta} \ . \end{equation} As an illustration the $x_M$ are plotted against $k$ for zenith angle $30^\circ$ and fitted with Eq. (\ref{43}), see Fig. \ref{xmvsk30}. \begin{figure}[htbp] \begin{center} \includegraphics[width=8cm]{xmvsk30.pdf} \caption{The shift $x_M$ against $k$ (unfilled circles) and the best fit curve for zenith angle $30^\circ$.} \label{xmvsk30} \end{center} \end{figure} \\ In the latter figure the error bars are omitted since they are in most cases smaller than the plot markers. The $x_M$ grows with $k$ along a curve independent of the primary energy of the showers. 
For the 1 PeV showers the data points are in the region $k<80$ m. For the 10 PeV and 100 PeV showers the regions are $k<250$ m and $k<500$ m respectively. For the parameters of the fit we find $A=0.020$ and $B=1.28$. For the goodness of fit the Pearson $\chi^2$ test yields a $p$-value of 0.98. These figures hold for zenith angle $30^\circ$. For zenith angle 7.5$^\circ$ through 30$^\circ$ the diagrams are shown in Fig. \ref{xmvsklinalla}. For zenith angle 37.5$^\circ$ through 60$^\circ$ the diagrams are shown in Fig. \ref{xmvsklinallb}. \begin{figure}[htbp] \begin{center} \includegraphics[width=10.5cm]{xmvsklinalla.pdf} \caption{The shift $x_M$ against $k$ (unfilled circles), the best fit curve (solid), the model prediction (dotted) and a linear curve (dashed) for zenith angle 7.5$^\circ$ through 30$^\circ$.} \label{xmvsklinalla} \end{center} \end{figure} \begin{figure}[htbp] \begin{center} \includegraphics[width=10.5cm]{xmvsklinallb.pdf} \caption{The shift $x_M$ against $k$ (unfilled circles), the best fit curve (solid), the model prediction (dotted) and a linear curve (dashed) for zenith angle 37.5$^\circ$ through 60$^\circ$.} \label{xmvsklinallb} \end{center} \end{figure} \begin{table}[htbp] \renewcommand{\arraystretch}{1.4} \setlength{\tabcolsep}{6pt} \begin{center} \begin{tabular}{ | c | c | c | c | c | c |} \hline $\theta$ & $A$ & $B$ & $\chi^2$ & $p$-value & datasize \\ \hline 7.5$^\circ$ & 0.010 & 1.41 & 25.7 & 0.11 & 248 \\ \hline 15$^\circ$ & 0.013 & 1.36 & 8.74 & 0.95 & 220 \\ \hline 22.5$^\circ$ & 0.019 & 1.29 & 11.7 & 0.86 & 274 \\ \hline 30$^\circ$ & 0.020 & 1.28 & 7.79 & 0.98 & 253 \\ \hline 37.5$^\circ$ & 0.037 & 1.17 & 12.5 & 0.82 & 250 \\ \hline 45$^\circ$ & 0.043 & 1.13 & 10.0 & 0.93 & 267 \\ \hline 52.5$^\circ$ & 0.034 & 1.13 & 9.21 & 0.91 & 199 \\ \hline 60$^\circ$ & 0.019 & 1.09 & 11.1 & 0.74 & 156 \\ \hline \end{tabular} \caption{Parameters $A$ and $B$ of the fits, the $\chi^2$ and $p$-values and the number of inspected contours for zenith 
angles as given in the first column.} \label{abparam} \end{center} \end{table} \\ In both diagrams the model predictions for the shift of the electron densities are plotted as well for reasons of comparison; for small zenith angles the combined distribution is dominated by electrons. \\ \\ For small zenith angles, 7.5$^\circ$ and 15$^\circ$, the spread of the shifts is mainly governed by the uncertainty of the measurement. For large zenith angles, 52.5$^\circ$ and 60$^\circ$, the spread of the shifts is mainly due to shower-to-shower variations. For each zenith angle the values $A$, $B$, $\chi^2$ and the $p$-value have been tabulated, see Table \ref{abparam}. \\ \\ The values of $B$ as given in Table \ref{abparam} decrease for increasing zenith angle. The values of $B$ being close to unity suggest considering a linear relation between $x_M$ and $k$. From fits with \begin{equation}\label{44a} x_M =C \cdot k \cdot \frac{\tan \theta}{\cos \theta} \end{equation} it is found that $C$ scales as $\cos \theta$. On average $C\approx 0.116 \cos \theta$, except for $\theta=60^\circ$ where $C \approx 0.075 \cos \theta$. Writing $C$ as $2S \cos \theta$, the proposed linear relation is as follows \begin{equation}\label{44} x_M = 2S \cdot k \tan \theta \ , \end{equation} where $S=0.058$. For $\theta=60^\circ$ the value of $S$ is about 35\% smaller. The linear relation is shown in Figs. \ref{xmvsklinalla} and \ref{xmvsklinallb} as a dashed curve. For small zenith angles the linear approximation overestimates the shift for $k \approx 150$ m, the difference being just about 1 m. For zenith angles 37.5$^\circ$ and 45$^\circ$ it underestimates by 20\% in the region where the density is small, $\rho <1$ m$^{-2}$. Accepting these inaccuracies, a linear relation is advantageous since, as we will see further on, it allows for an analytical solution for the description of the LDF-A. 
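The two shift prescriptions are easy to evaluate numerically. The following sketch (Python, illustrative only, not part of the analysis code) compares the fitted power law (\ref{43}), with the $30^\circ$ fit values $A=0.020$ and $B=1.28$ from Table \ref{abparam}, to the linear approximation (\ref{44}) with $S=0.058$:

```python
import math

# Illustrative sketch (not the paper's analysis code): compare the
# power-law shift of Eq. (43) with the linear approximation of Eq. (44)
# at zenith angle theta = 30 degrees.
A, B, S = 0.020, 1.28, 0.058
theta = math.radians(30.0)

def shift_power(k):
    # Eq. (43): x_M = A * k**B * tan(theta) / cos(theta)
    return A * k**B * math.tan(theta) / math.cos(theta)

def shift_linear(k):
    # Eq. (44): x_M = 2 * S * k * tan(theta)
    return 2.0 * S * k * math.tan(theta)

for k in (50.0, 100.0, 150.0):
    print(k, round(shift_power(k), 2), round(shift_linear(k), 2))
```

At $k=150$ m and $\theta=30^\circ$ the linear form gives a shift of about 10 m against roughly 8 m for the power law, of the same order as the deviations quoted above.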
At the end of Section 10 a remark will be made about the possible application of the more accurate power law (\ref{43}). \section{The polar density} In this section we perform the conversion of an LDF to an LDF-A. Substitution of the shift (\ref{44}) in Eq. (\ref{5c}) gives \begin{equation}\label{58} \left( x^2 - 4 x \cdot S \cdot k \tan \theta \right) \cos^2 \theta + y^2 =k^2 \ . \end{equation} Solving for $k$ we obtain \begin{equation}\label{58a} k=-S x \sin (2\theta) + \sqrt{y^2 + x^2 (1+4S^2 \sin^2 \theta) \cos^2 \theta } \ . \end{equation} For $S=0.058$ the term $4S^2 \sin^2 \theta$ is negligible with respect to 1. With good approximation we therefore have \begin{equation}\label{58b} k=-S x \sin (2\theta) + \sqrt{y^2 + x^2 \cos^2 \theta } \ . \end{equation} According to Eq. (\ref{8}) the polar density $\nu (x,y)$ including the shift is obtained in Cartesian coordinates by substituting expression (\ref{58b}) for $k$ in the polar symmetric density $\rho$ and by multiplying it by $\cos \theta$. The horizontal polar density $\nu$ can also be written in polar coordinates. With the substitution of $x=r\cos\alpha$ and $y=r\sin\alpha$ the Eqs. (\ref{58}) and (\ref{58b}) respectively read \begin{equation}\label{59} r^2 \cos^2 \alpha \cos^2 \theta - 2 r \cdot S \cdot k \cos \alpha \sin (2 \theta) + r^2 \sin^2 \alpha =k^2 \end{equation} and \begin{equation}\label{61} k=-S r \cos \alpha \sin (2\theta) + r \sqrt{1-\cos^2 \alpha \sin^2 \theta } \ . \end{equation} The second term, the square root part, is due to the ellipticity of the density as caused by the projection \cite{cillis}. The first term on the right hand side of Eq. (\ref{61}) is due to the shift. The polar density including the shift is obtained in horizontal polar coordinates in the same way as for Cartesian coordinates: \begin{equation}\label{62} \nu(r,\alpha)=\rho(k) \cos \theta \end{equation} with $k$ as given by (\ref{61}). 
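That dropping the $4S^2 \sin^2 \theta$ term is harmless can be checked numerically. The sketch below (Python, illustrative; the point $(x,y)$ is an arbitrary choice) evaluates the exact solution (\ref{58a}) and the approximation (\ref{58b}); the two agree to well below the percent level:

```python
import math

# Illustrative check: exact solution (58a) of Eq. (58) versus the
# approximation (58b), for S = 0.058 and an arbitrarily chosen point.
S = 0.058
theta = math.radians(45.0)

def k_exact(x, y):
    # Eq. (58a)
    return (-S * x * math.sin(2 * theta)
            + math.sqrt(y**2 + x**2 * (1 + 4 * S**2 * math.sin(theta)**2)
                        * math.cos(theta)**2))

def k_approx(x, y):
    # Eq. (58b): the 4*S^2*sin^2(theta) term is dropped
    return (-S * x * math.sin(2 * theta)
            + math.sqrt(y**2 + x**2 * math.cos(theta)**2))

x, y = 200.0, 100.0
print(k_exact(x, y), k_approx(x, y))  # relative difference well below 1%
```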
\\ \\ To obtain the polar density $\nu$ we need the polar symmetric density $\rho$. A good approximation for $\rho$ is found by polar averaging the horizontal density and fitting it with a suitable LDF. For electromagnetic showers a well known LDF is the one of Nishimura, Kamata and Greisen (NKG) \cite{K&N,Greisen}. Most LDFs are modifications of the NKG function \cite{N&W, Apel}. For muons a well known lateral density function is the one of Vernov \cite{Vernov1968}. However, it can also be described by an NKG type of function \cite{Greisen1960, Antoni2001}. For radii smaller than about 300 m the combined density of electrons and muons can also be described by an NKG type of LDF: \begin{equation}\label{63} \rho_{e+\mu} (r)=N_{e+\mu} \cdot c \cdot f(r) \ , \end{equation} where \begin{equation}\label{64} f(r)=\left(\frac{r}{r_0}\right)^{s_1}\left(1+\frac{r}{r_0}\right)^{s_2} \end{equation} is the structure function and where $c$ is usually the normalization constant. Formally this LDF is similar to the one used for the KASCADE experiment \cite{Apel}. There the quantities $s-\alpha$ and $s-\beta$, with $s$ the shape parameter (a remnant of the age parameter), play a similar role as $s_1$ and $s_2$, respectively. The parameter $r_0$ plays a similar role as the Moli\`ere radius in the original NKG function. From the simulated showers it is found that $r_0$ is close to $30$ m. Fixing $r_0$ to 30 m has only a marginal effect on the fit values for $s_1$ and $s_2$. \\ \\ For radii larger than about 300 m this LDF underestimates the combined density. The deviation is caused by the relatively large muon component. To adjust for the muon component we take our motivation from the Greisen function \cite{Greisen1960}. That is, we multiply the LDF by $(1+\frac{r}{11.4 \cdot r_0})$. The latter multiplication complicates the normalization. We therefore take $r_0=30$ m and use $c$ as a fit parameter like $s_1$ and $s_2$. 
Thus \begin{equation}\label{67} \rho_{e+\mu}=N_{e+\mu} \cdot c \cdot \left( \frac{r}{30} \right)^{s_1} \left( 1+ \frac{r}{30} \right)^{s_2} \left( 1+\frac{r}{340} \right) \ . \end{equation} \section{Comparison with simulated densities} The performance will be illustrated by means of the same three showers \textbf{a}, \textbf{b} and \textbf{c} as already used in Section 4. Their polar averaged combined densities and their fitting curves are plotted in Fig. \ref{ldftripelmu}. As before the polar averaged densities are binned with bin-width 1 m. \begin{figure}[htbp] \begin{center} \includegraphics[width=10cm]{ldftripelmu.pdf} \caption{Polar averaged combined densities for the three showers \textbf{a}, \textbf{b} and \textbf{c} as given in the text. The dashed curves are the fits with the LDF as given in the text.} \label{ldftripelmu} \end{center} \end{figure} \\ We see that the fit curves follow the combined densities also beyond 300 m. The number of electrons and muons and the values found for the parameters are shown in Table \ref{elmuparam}. \begin{table}[htbp] \renewcommand{\arraystretch}{1.4} \setlength{\tabcolsep}{9pt} \begin{center} \begin{tabular}{ | c | c | c | c | c | c |} \hline shower & $\theta$ &$N_{e+\mu}$ & $c$ & $s_1$ & $s_2$ \\ \hline \textbf{a} & $30^\circ$ & 490538 & 0.000283 & -0.441 & -2.787 \\ \hline \textbf{b} & $45^\circ$ &1967956 & 0.000205 & -0.355 & -2.638 \\ \hline \textbf{c} & $52.5^\circ$ & 5834026 & 0.0000903 & -0.309 & -2.253 \\ \hline \end{tabular} \caption{The zenith angle, the shower size and the fit values for $c$, $s_1$ and $s_2$ for the combined density of the showers \textbf{a}, \textbf{b} and \textbf{c} as given in the text.} \label{elmuparam} \end{center} \end{table} \\ To obtain the LDF-A of the three showers we multiply the polar averaged LDF by $\cos \theta$ and replace $r$ by $k$. 
The final prediction for the LDF-A including the shift is \begin{equation}\label{68} \nu_{e+\mu}=N_{e+\mu} \cdot \cos \theta \cdot c \cdot \left( \frac{k}{30} \right)^{s_1} \left( 1+ \frac{k}{30} \right)^{s_2} \left( 1+\frac{k}{340} \right) \ , \end{equation} where $k$ is as given by (\ref{61}) and with the parameter values as given in Table \ref{elmuparam}. \\ \\ For the three simulated showers we inspected the polar variation of the combined density at different radii. To this end the density was binned with bin size $\pi r / 12$ in the angular direction. For the bin size in the radial direction we took 1, 1, 2, 3, 4 and 5 m for radii 10, 20, 50, 100, 200 and 500 m respectively. The bins were chosen just large enough to smooth out to some extent the Poisson fluctuations in the density. For showers \textbf{a}, \textbf{b} and \textbf{c} the results are plotted in Figs. \ref{polar1630} through \ref{polar1852} together with the LDF-A prediction. \begin{figure}[htbp] \begin{center} \includegraphics[width=12cm]{polarelmu1630.pdf} \caption{The binned combined density of shower \textbf{a} (10$^{16}$ eV , zenith angle $30^\circ$) against polar angle for different radii (dots) and the constructed LDF-A prediction (solid).} \label{polar1630} \end{center} \end{figure} \begin{figure}[htbp] \begin{center} \includegraphics[width=12cm]{polarelmu1745.pdf} \caption{The binned combined density of shower \textbf{b} (10$^{17}$ eV , zenith angle $45^\circ$) against polar angle for different radii (dots) and the constructed LDF-A prediction (solid).} \label{polar1745} \end{center} \end{figure} \begin{figure}[htbp] \begin{center} \includegraphics[width=12cm]{polarelmu1852.pdf} \caption{The binned combined density of shower \textbf{c} (10$^{18}$ eV , zenith angle $52.5^\circ$) against polar angle for different radii (dots) and the constructed LDF-A prediction (solid).} \label{polar1852} \end{center} \end{figure} \\ We see that the constructed LDF-A follows the polar density nicely. 
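The construction of Eq. (\ref{68}) can be sketched in a few lines. The snippet below (Python, illustrative; it assumes the parameter values of shower \textbf{a} from Table \ref{elmuparam} and $S=0.058$) evaluates the LDF-A and exhibits the polar asymmetry between $\alpha=0$ and $\alpha=\pi$:

```python
import math

# Sketch of the LDF-A of Eq. (68) with k from Eq. (61).
# Assumed parameters: shower a (N = 490538, c = 0.000283, s1 = -0.441,
# s2 = -2.787, theta = 30 degrees) and S = 0.058. Illustrative only.
N, c, s1, s2 = 490538, 0.000283, -0.441, -2.787
S = 0.058
theta = math.radians(30.0)

def k_of(r, alpha):
    # Eq. (61): shift term plus elliptic (projection) term
    return (-S * r * math.cos(alpha) * math.sin(2 * theta)
            + r * math.sqrt(1 - math.cos(alpha)**2 * math.sin(theta)**2))

def nu(r, alpha):
    # Eq. (68): nu = N * cos(theta) * c * (k/30)^s1 * (1+k/30)^s2 * (1+k/340)
    k = k_of(r, alpha)
    return N * math.cos(theta) * c * (k / 30)**s1 * (1 + k / 30)**s2 * (1 + k / 340)

# The shift makes k, and hence the density, depend on the polar angle:
print(nu(100.0, 0.0), nu(100.0, math.pi))
```

Because $k$ is smaller at $\alpha=0$ than at $\alpha=\pi$, the density is larger there, which is the asymmetry visible in the panels of Figs. \ref{polar1630} through \ref{polar1852}.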
Angle-independent deviations, such as in the upper left panel of Fig. \ref{polar1745}, are caused by the inaccuracy of $\rho$. Any inaccuracy in the underlying LDF will be reflected in the LDF-A. Nevertheless, an LDF-A including the shift still follows the polar variation of the density better than an LDF-A without the shift. \\ \\ The accuracy of the LDF-A for the three example showers suggests that it is probably sufficient to consider a single linear relation between $x_M$ and $k$ independent of zenith angle. If a better accuracy is desired one has to apply the power law (\ref{43}) with fit values for $A$ and $B$ as given in Table \ref{abparam}. The latter approach requires the fit coefficients either to be tabulated for different $\theta$ or to be parameterized in $\theta$ by means of a suitable function. In addition, for a shift as given by the power law, Eq. (\ref{59}) becomes \begin{equation}\label{69} r^2 \cos^2 \alpha \cos^2 \theta - 2 r \cos \alpha \cdot A \cdot k^B \sin \theta + r^2 \sin^2 \alpha =k^2 \ . \end{equation} For non-integer $B$ the latter equation has to be solved numerically for $k$. \section{Summary} For electron densities the shift of the center of elliptic iso-density contours is modeled. For combined densities of simulated showers the shift is determined for different zenith angles. An approximate linear relation between the shift and $k$ allows for an analytical solution for the asymmetric lateral density. The conversion of an LDF to an LDF-A including the shift is described. The conversion consists of two steps corresponding to the effects of the projection and of the attenuation. The first step is the multiplication of the LDF by $\cos \theta$, where $\theta$ is the zenith angle. The second step is to replace $r$ by $k$, where $k$ is given by Eq. (\ref{61}). The result is an LDF-A for the situation where the azimuth angle is equal to zero. 
The LDF-A for a non-zero azimuth angle $\phi$ requires a third, trivial step: replacing the polar angle $\alpha$ by $\alpha - \phi$. \\ \\ Zenith angle $60^\circ$ is a transition point of sorts. Below this point there is a relatively large shift while the effect of the geomagnetic field is negligible. Above the transition point the muon component becomes dominant. As a consequence the relative shift is smaller. At the same time the influence of the geomagnetic field rapidly grows with zenith angle. Above the transition point the consequences of the shift for the asymmetry of the density will be overwhelmed by the effect of the geomagnetic field. The influence of the geomagnetic field on very inclined air showers requires a different modeling \cite{valino2010, rodriguez2008, ave2000, Dembinski2010}. \\ \\ The aim of the paper was to consider the situation for zenith angles smaller than $60^\circ$, where the shift is mainly governed by the attenuation of the electron component and therefore to a certain extent substantial. The inclusion of the shift leads to a more accurate description of an asymmetric polar density. It therefore may be worthwhile to take the shift into account for reconstruction purposes. It should be emphasized that Eq. (\ref{44}) and the shift part of Eq. (\ref{61}) were derived for observation at sea level. For an observation level at a different altitude, the shift will be different. Even if the inclusion of the shift improves the accuracy of reconstruction only marginally, the contents of the paper may still contribute to the description and understanding of horizontal polar densities of inclined cosmic air showers. \section{Acknowledgements} I wish to thank Dr. J.J.M. Steijger for his comments on an earlier draft of this paper. I am grateful to A. P. L. S. de Laat for his efforts in shower simulations. The work is supported by a grant from NWO (Netherlands Organization for Scientific Research).
\section*{Results} {\bf Squashed entanglement.} Before discussing our main results, we briefly review the squashed entanglement, which plays an important role in our work. The secret-key agreement capacity assisted by public communication was defined for a classical channel $p_{Y,Z|X}$ ($X=$ Alice, $Y=$ Bob, $Z=$ Eve), independently by Maurer~\cite{M93}, and Ahlswede and Csisz{\'{a}}r~\cite{AC93}, who proved lower and upper bounds on the capacity. Maurer and Wolf later introduced the intrinsic information $I(X;Y{\downarrow}Z) \equiv \min\left\{I(X;Y|Z^\prime): P_{X,Y,Z,Z^\prime} = P_{X,Y,Z}P_{Z^\prime |Z}\right\}$, and proved that this quantity optimized over all channel input distributions is a sharp upper bound on the secret key agreement capacity of $p_{Y,Z|X}$ \cite{MW99}. Leveraging strong parallels discovered between secrecy and quantum coherence~\cite{SW98,LC99,SP00}, Christandl and Winter extended the intrinsic information quantity to the realm of quantum information theory. They defined the squashed entanglement $E_{\text{sq}}\left(A;B\right) _{\rho}$\ of a bipartite quantum state $\rho_{AB}$, and proved it to be an upper bound on the rate at which two parties can distill maximally entangled (Bell) states $\left(\left\vert 0\right\rangle \left\vert 0\right\rangle +\left\vert 1\right\rangle\left\vert 1\right\rangle \right) /\sqrt{2}$ from many copies of $\rho_{AB}$ using local operations and classical communication (LOCC)~\cite{CW04}. Using a similar technique, the squashed entanglement was proved to upper bound the distillable secret key rate~\cite{CEHHOR07,C06}. 
The squashed entanglement of a bipartite state $\rho_{AB}$ is defined as \begin{equation} E_{\text{sq}}\left( A;B\right) _{\rho}\equiv\tfrac{1}{2}\inf_{\mathcal{S}% _{E\rightarrow E^{\prime}}}I\left( A;B|E^{\prime}\right) , \label{eq:squashed-ent-state}% \end{equation} where $I(A;B|E^{\prime}) \equiv H(AE') + H(BE') - H(E') - H(ABE')$ is the conditional quantum mutual information and the infimum is taken over all noisy `squashing channels' $\mathcal{S}_{E\rightarrow E^{\prime}}$ taking the $E$ system of a purification $\left\vert \phi^{\rho}\right\rangle_{ABE}$\ of $\rho_{AB}$ to a system $E^{\prime}$ of arbitrary dimension. In related work, Tucci has defined a functional bearing some similarities to squashed entanglement \cite{T99,T02}. We can interpret $E_{\text{sq}}\left( A;B\right) _{\rho}$ as quantifying the minimum remnant quantum correlations between $A$ and $B$ after an adversary possessing the purifying system $E$ performs a quantum operation on it with the intent of `squashing down' the correlations that $A$ and $B$ share. It should also be noted that among the many entanglement measures, squashed entanglement is the only one known to satisfy all eight desirable properties that have arisen in the axiomatization of entanglement theory \cite{CW04,KW04,AF04,BCY11}. {\bf Squashed entanglement of a quantum channel.} The upper bound from \cite{CEHHOR07} on the distillable key rate applies to the scenario in which Alice and Bob share many copies of some bipartite state $\rho_{AB}$. 
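The definition in (\ref{eq:squashed-ent-state}) can be made concrete with a minimal example. For a maximally entangled two-qubit state with a trivial (product) purifying system $E$, all reduced states are diagonal, so their von Neumann entropies reduce to Shannon entropies of the eigenvalue lists, and the identity squashing channel already yields $\tfrac{1}{2}I(A;B|E)=1=\log_{2}d$, the known value of the squashed entanglement of a Bell state. A short numerical sketch (Python, illustrative):

```python
import math

# Minimal sketch: conditional mutual information I(A;B|E) for the
# purification |phi>_ABE = ((|00>+|11>)/sqrt(2))_AB (x) |0>_E.
# Every reduced state here is diagonal in the computational basis.

def H(eigenvalues):
    # von Neumann entropy in bits for a state with the given spectrum
    return -sum(p * math.log2(p) for p in eigenvalues if p > 0.0)

p_AE  = [0.5, 0.5]   # rho_AE = rho_A (x) |0><0|_E, maximally mixed on A
p_BE  = [0.5, 0.5]   # likewise for rho_BE
p_E   = [1.0]        # E is in a pure product state
p_ABE = [1.0]        # the global state is pure

I_cond = H(p_AE) + H(p_BE) - H(p_E) - H(p_ABE)
print(I_cond / 2)  # 1.0 = log2(2), the squashed entanglement of a Bell pair
```

For this state no squashing channel can do better, so the infimum is attained by the identity channel.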
In order to upper bound the key agreement capacity of a channel, we define the squashed entanglement of a quantum channel $\mathcal{N}_{A^{\prime}\rightarrow B}$ as the maximum squashed entanglement that can be registered between a sender and receiver with access to the input $A^{\prime}$\ and output $B$\ of this channel, respectively:% \begin{equation} E_{\text{sq}}\left( \mathcal{N}\right) \equiv\max_{\left\vert \phi \right\rangle _{AA^{\prime}}}E_{\text{sq}}\left( A;B\right) _{\rho}, \label{eq:squashed-ent-channel}% \end{equation} where $\rho \equiv \rho_{AB}\equiv\mathcal{N}_{A^{\prime}\rightarrow B}(\left\vert \phi\right\rangle \left\langle \phi\right\vert _{AA^{\prime}})$. Note that, in the above formula, we can take a maximum rather than a supremum if the input space is finite-dimensional because in this case, the input space is compact and the squashed entanglement measure is continuous \cite{AF04}. Also, we can restrict the optimization to be taken over pure bipartite states, due to the convexity of squashed entanglement \cite{CW04}. We now prove that $E_{\text{sq}}\left( \mathcal{N}\right)$ plays an operational role analogous to intrinsic information, i.e., it upper bounds the secret-key agreement capacity $P_{2}\left( \mathcal{N}\right) $. {\it Theorem 1}: $E_{\operatorname{sq}}\left( \mathcal{N}\right) $ is an upper bound on $P_{2}\left( \mathcal{N}\right) $, the private capacity of $\mathcal{N}$\ assisted by unlimited forward and backward classical communication:% \begin{equation} \label{eq:squashed_ent_upper_bound} P_{2}\left( \mathcal{N}\right) \leq E_{\operatorname{sq}}\left( \mathcal{N}\right) . 
\end{equation} {\it Proof}: First recall that the squashed entanglement is a secrecy monotone, that is, it does not increase under local operations and public classical communication (LOPC) in the sense that $E_{\text{sq}}\left(A;B\right)_{\rho}\geq E_{\text{sq}}\left(A;B\right)_{\sigma}$ if Alice and Bob can obtain the state $\sigma_{AB}$ from $\rho_{AB}$ by LOPC \cite{C06,CEHHOR07}. The method for doing so was to exploit the fact that LOPC distillation of secret key is equivalent to LOCC distillation of private states \cite{HHHO05,HHHO09}. A private state has the following form \cite{HHHO05,HHHO09}: \[ \gamma_{ABA'B'} = U_{ABA'B'} \left( |\Phi\rangle\langle\Phi|_{AB} \otimes \rho_{A'B'} \right) U_{ABA'B'}^\dagger , \] where $U_{ABA'B'}=\sum_{i,j} |i \rangle\langle i|_A \otimes |j \rangle\langle j|_B \otimes U^{ij}_{A'B'}$ is a global unitary operation, $|\Phi\rangle_{AB} = \sum_i |i\rangle_A |i\rangle_B / \sqrt{d}$ is a maximally entangled state of Schmidt rank $d$, and $\{|i\rangle_{A}\}$ and $\{|i\rangle_{B}\}$ are complete orthonormal bases for quantum systems $A$ and $B$, respectively. Furthermore, the squashed entanglement is normalized, in the sense that $E_{\text{sq}}\left(A;B\right)_{\gamma} \ge \log d$ (see Proposition 4.19 of \cite{C06}). Finally, the squashed entanglement satisfies the following continuity inequality \cite{AF04,C06}:% \begin{multline} \label{eq:continuity} \text{if \ \ \ }\left\Vert \rho_{AB}-\sigma_{AB}\right\Vert _{1}% \leq\varepsilon,\text{ \ \ \ then}\\ \left\vert E_{\text{sq}}\left( A;B\right) _{\rho}-E_{\text{sq}}\left( A;B\right) _{\sigma}\right\vert \leq16\sqrt{\varepsilon}\log d'+4h_{2}\left( 2\sqrt{\varepsilon}\right) , \end{multline} where $d'=\min\left\{ \left\vert A\right\vert ,\left\vert B\right\vert \right\} $ and $h_{2}\left( x\right) $ is the binary entropy function with the property that $\lim_{x\rightarrow0}h_{2}\left( x\right) =0$. 
The most general $(n,R,\varepsilon)$ protocol in this setting is described as follows, where $n$ is the number of channel uses, $R$ is the key generation rate (measured in secret key bits per channel use), and $\varepsilon$ is a parameter quantifying the security (see below for their formal definitions). The protocol begins with Alice preparing a state $\rho_{AA_{1}\cdots A_{n}}^{\left( 1\right) }$ on $n+1$ systems. She then transmits the system $A_{1}$ through one use of the channel $\mathcal{N}$, and considering its isometric extension $U_{A_{1}% \rightarrow B_{1}E_{1}}^{\mathcal{N}}$, we write the output state as $\sigma_{AB_{1}E_{1}A_{2}\cdots A_{n}}^{\left( 1\right) }$. Let $R^{(1)}$ be a system that purifies this state. There is then a round of an arbitrary amount of LOPC\ between Alice and Bob, resulting in a state $\rho_{AB_{1}% E_{1}A_{2}\cdots A_{n}}^{\left( 2\right) }$. This procedure continues, with Alice transmitting system $A_{2}$ through the channel, leading to a state $\sigma_{AB_{1}E_{1}B_{2}E_{2}A_{3}\cdots A_{n}}^{\left( 2\right) }$, etc. After the $n$th channel use, the state is $\sigma_{AB_{1}E_{1}B_{2}E_{2}\cdots B_{n}E_{n}}^{\left( n\right) }$ (note that the dimension of the system $A$ might change throughout the protocol). Let $R^{(n)}$ be a reference system that purifies this state. There is a final round of LOPC, producing a state $\omega _{ABE_{1}\cdots E_{n}}$, whose reduction $\omega_{AB}$ satisfies \[ \left\Vert \omega_{AB}-\gamma_{AB} \right\Vert _{1}\leq\varepsilon, \] where $\gamma_{AB}$ is a private state of $nR$ bits. Note that we are implicitly including the systems $A'$ and $B'$ in $A$ and $B$, respectively. We can now proceed by bounding the secret key generation rate of any such protocol as follows:% \begin{align*} nR & \leq E_{\text{sq}}\left( A;B\right) _{\gamma}\\ & \leq E_{\text{sq}}\left( A;B\right) _{\omega}+nf\left( \varepsilon \right) . 
\end{align*} The first inequality follows from the normalization of the squashed entanglement on private states (as mentioned above). The second inequality follows from the continuity of squashed entanglement, with an appropriate choice of $f\left( \varepsilon\right)$ so that $\lim_{\varepsilon\rightarrow0}f\left( \varepsilon\right)=0$ (see the Methods section for more details). To continue, we introduce the following new subadditivity inequality for the squashed entanglement: {\it Lemma 2}: For any five-party pure state $\psi_{AB_1E_1B_2E_2}$, \begin{equation} E_{\operatorname{sq}}\left( A;B_{1}B_{2}\right) _{\psi} \leq E_{\operatorname{sq}}\left( AB_{2}E_{2};B_{1}\right) _{\psi} + E_{\operatorname{sq}}\left( AB_{1}E_{1};B_{2}\right) _{\psi}.\nonumber \end{equation} {\it Proof}: See the Supplementary Note 1 for a proof. With this new inequality in hand, we can establish the following chain of inequalities: \begin{align*} E_{\text{sq}}\left( A;B\right) _{\omega} & \leq E_{\text{sq}}\left( A;B_{1}\cdots B_{n}\right) _{\sigma^{\left( n\right) }}\\ & \leq E_{\text{sq}}(AB_{1}E_{1}\cdots B_{n-1}E_{n-1}R^{(n)};B_{n}% )_{\sigma^{\left( n\right) }}\\ & \ \ \ \ \ +E_{\text{sq}}\left( AB_{n}E_{n};B_{1}\cdots B_{n-1}\right) _{\sigma^{\left( n\right) }}\\ & \leq E_{\text{sq}}\left( \mathcal{N}\right) +E_{\text{sq}}\left( AB_{n}E_{n};B_{1}\cdots B_{n-1}\right) _{\sigma^{\left( n\right) }}\\ & =E_{\text{sq}}\left( \mathcal{N}\right) +E_{\text{sq}}\left( AA_{n};B_{1}\cdots B_{n-1}\right) _{\rho^{\left( n\right) }}\\ & \leq nE_{\text{sq}}\left( \mathcal{N}\right) . \end{align*} The first inequality follows from monotonicity of the squashed entanglement under LOCC. The second inequality is an application of the subadditivity inequality in Lemma 2. 
The third inequality follows because $E_{\text{sq}}(AB_{1}E_{1}\cdots B_{n-1}E_{n-1}R^{(n)};B_{n})_{\sigma^{\left( n\right) }}\leq E_{\text{sq}}\left( \mathcal{N}\right) $ (there is a particular input to the $n$th channel, while the systems $AB_{1}E_{1}\cdots B_{n-1}E_{n-1}R^{(n)}$ purify the system being input to the channel). The sole equality follows because the squashed entanglement is invariant under local isometries (the isometry here being the isometric extension of the channel). The last inequality follows by induction, i.e., repeating this procedure by using secrecy monotonicity and subadditivity, \textquotedblleft peeling off\textquotedblright\ one term at a time. Putting everything together, we arrive at \[ nR\leq nE_{\text{sq}}\left( \mathcal{N}\right) +nf\left( \varepsilon \right) , \] which we can divide by $n$ and take the limit as $\varepsilon\rightarrow0$ to recover the result that $P_{2}\left(\mathcal{N}\right)\leq E_{\text{sq}} \left( \mathcal{N}\right)$. This completes the proof of Theorem 1. It should be stressed that the right-hand side of \eqref{eq:squashed_ent_upper_bound} is a `single-letter' expression, meaning that the expression is a function of a single channel use. This is in spite of the fact that the quantity serves as an upper bound on the secret key agreement capacity, which involves using the channel many independent times, entangled input states, and/or measurements over many channel outputs. Lemma 2 is critical for establishing the `single-letterization.' The simple expression in \eqref{eq:squashed_ent_upper_bound} allows us to apply the bound to various channels, including the optical channel as shown below. Also, as mentioned in the introduction, Theorem 1 states that $E_{\rm sq}(\mathcal{N})$ is a weak converse upper bound which bounds the key rate in the asymptotic limit of many channel uses. 
However, our bound is also valid for any finite number of channel uses, in the sense that the key rate is upper bounded in terms of $E_{\rm sq}(\mathcal{N})$ and the reliability and security of the protocol. It might be possible to improve upon our upper bound, by establishing a so-called strong converse theorem (see, e.g., \cite{ON99,W99}) or a refined second-order analysis, along the lines of \cite{TH12}, which is left as an important open question. We point the reader to the Methods section for a quantitative discussion and an example scenario involving a pure-loss optical channel. A variation of this setting is one in which there is a forward quantum channel $\mathcal{N}$\ from Alice to Bob and a backward quantum channel $\mathcal{M}$\ from Bob to Alice. In the most general protocol for generating a shared secret, Alice and Bob each prepare a state on $n$ systems; Alice sends one system through the forward channel, they conduct a round of LOPC, Bob sends one of his systems through the backward channel, they conduct another round of LOPC, and so on. Using essentially the same proof technique as above, it follows that $E_{\text{sq}}\left( \mathcal{N}\right) +E_{\text{sq}}\left(\mathcal{M}\right) $ serves as an upper bound on the total rate at which Alice and Bob can generate a shared secret key using these two channels many independent times. It is also worth noting that $E_{\text{sq}}(\mathcal{N})$ is an upper bound on the two-way assisted quantum capacity $Q_2(\mathcal{N})$ (defined in \cite{BDS97}) because $P_2(\mathcal{N}) \ge Q_2(\mathcal{N})$ holds in general. {\bf Pure-loss optical channel.} Now we are in a position to derive a limit on the key generation rate of any QKD protocol which uses a lossy optical channel. 
The following input-output relation models linear loss in the propagation of an optical mode, such as through a lossy fiber or free space:% \[ \hat{b}=\sqrt{\eta}\,\hat{a}+\sqrt{1-\eta}\,\hat{e}, \] where $\hat{a}$, $\hat{b}$, and $\hat{e}$ are bosonic mode operators corresponding to the sender Alice's input, the receiver Bob's output, and the environmental input, respectively. For the pure-loss bosonic channel, the environment mode is in its vacuum state. The transmittance of the channel, $\eta\in\left[ 0,1\right] $, is the fraction of input photons that makes it to the output on average. Let $\mathcal{N}_{\eta}$ denote the channel from Alice to Bob. For a secret-key agreement protocol assisted by two-way classical communication over this channel, we assume that it begins and ends with finite-dimensional states, but the processing between the first and final step can be conducted with infinite-dimensional systems (see the Methods section for further discussion of this point). Furthermore, as is common in bosonic channel analyses \cite{GGLMSY04}, we begin by imposing a mean input power constraint. That is, for each input mode, we require that $\left\langle \hat{a}^{\dag}\hat{a}\right\rangle \leq N_{\rm S}$, with $0\leq N_{\rm S}<\infty$. Thus, $E_{\text{sq}}\left( \mathcal{N}_{\eta}\right) $, with the additional photon number constraint on the channel input, is an upper bound on $P_{2}\left(\mathcal{N}_{\eta}\right)$. 
By taking the squashing channel for the environment Eve to be another pure-loss bosonic channel of transmittance $\eta_{1}\in\left[0,1\right]$ (see Fig.~\ref{fig:bosonic_setup}), noting that the resulting conditional mutual information can be written as a sum of two conditional entropies, and applying the extremality of Gaussian states with respect to conditional entropies~\cite{EW07,WGC06}, we find that the following quantity serves as an upper bound on $E_{\text{sq}}\left( \mathcal{N}_{\eta }\right) $ for all $\eta_{1}\in\left[ 0,1\right] $ (see Supplementary Note 2 for a detailed proof):% \begin{multline} \tfrac{1}{2}\Big[g\left( \left( 1-\eta_{1}+\eta\eta_{1}\right) N_{\rm S}\right) +g\left( \left( \eta_{1}+\eta\left( 1-\eta_{1}\right) \right) N_{\rm S}\right) \label{eq:bosonic-upper-bounds}\\ -g\left( \eta_{1}\left( 1-\eta\right) N_{\rm S}\right) -g\left( \left( 1-\eta_{1}\right) \left( 1-\eta\right) N_{\rm S}\right) \Big], \end{multline} where $g\left( x\right) \equiv\left( x+1\right) \log_{2}\left( x+1\right) -x\log_{2}x$ is the Shannon entropy of a geometric distribution with mean $x$. The function $g(x)$ is also equal to the von Neumann entropy of a zero-mean circularly-symmetric thermal state with mean photon number $x$. Since the function in (\ref{eq:bosonic-upper-bounds}) is symmetric and convex in $\eta_{1}$, its minimum occurs at $\eta _{1}=1/2$, leading to the following simpler upper bound:% \[ g\left( \left( 1+\eta\right) N_{\rm S}/2\right) -g\left( \left( 1-\eta\right) N_{\rm S}/2\right) . \] By taking the limit of this upper bound as $N_{\rm S}\rightarrow\infty$, we obtain the photon-number independent expression, \[ \log_{2}\left( \left( 1+\eta\right) /\left( 1-\eta\right) \right) , \] which recovers the upper bound stated in \eqref{eq:good-upper-bound}. 
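The bound in (\ref{eq:bosonic-upper-bounds}) at $\eta_{1}=1/2$ and its infinite-energy limit are simple to evaluate. The sketch below (Python, illustrative only; the choices of $\eta$ and $N_{\rm S}$ are arbitrary) shows the finite-$N_{\rm S}$ bound increasing towards $\log_2\left((1+\eta)/(1-\eta)\right)$:

```python
import math

# Illustrative evaluation of the eta_1 = 1/2 upper bound
# g((1+eta)*N_S/2) - g((1-eta)*N_S/2) and its N_S -> infinity limit.

def g(x):
    # entropy (bits) of a thermal state with mean photon number x
    return (x + 1) * math.log2(x + 1) - x * math.log2(x) if x > 0 else 0.0

def bound(eta, NS):
    return g((1 + eta) * NS / 2) - g((1 - eta) * NS / 2)

def bound_limit(eta):
    return math.log2((1 + eta) / (1 - eta))

eta = 0.5
for NS in (1.0, 10.0, 1000.0):
    print(NS, bound(eta, NS))
print(bound_limit(eta))  # log2(3), approached from below as N_S grows
```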
As mentioned in the introduction, any excess noise in the channel can only reduce the squashed entanglement of a quantum channel, and thus \eqref{eq:good-upper-bound} serves as a fundamental upper limit on the secret key agreement capacity of a lossy optical channel. This statement follows from a quantum data processing argument, i.e., the quantum conditional mutual information does not increase under processing (including noise additions) of one of the systems that is not the conditioning system (see Proposition 3 of \cite{CW04}). Note that this statement does not prohibit improving the key rates of specific existing QKD protocols such as BB84 by applying `noisy processing', as proposed in \cite{RGK05,RS07}. However, such an improved key rate is always equal to or lower than our bound in \eqref{eq:good-upper-bound}. \begin{figure} \centering \includegraphics[width=0.6\columnwidth]{Figure-2takeoka.pdf} \caption{{\bf Setup for calculating the upper bound on the secret key rate of the pure-loss optical channel}.} \label{fig:bosonic_setup} \end{figure} \begin{figure}[!t] \centering \includegraphics[width=3.0in]{Figure-3takeoka.pdf} \caption{ {\bf Upper and lower bounds on $P_2$ for a pure-loss bosonic channel}. The magenta solid curve is the squashed entanglement upper bound in (\ref{eq:good-upper-bound}). The blue solid curve is the reverse coherent information lower bound. The black solid curve is the efficient BB84 protocol with perfect devices and single photons, and the gray dotted curve is the decoy BB84 protocol including experimental imperfections \cite{SBCDLP09}. The orange solid curve is the Gaussian modulated coherent state continuous variable protocol (CV-GG02) \cite{GG02} with perfect devices. The brown dashed and pink dash-dotted curves are the CV-GG02 protocol including experimental imperfections with the uncalibrated- and calibrated-device scenarios, respectively. 
The details of the protocols and device parameters are described in the Methods section. } \label{lossybosonic} \end{figure} \section*{Discussion} It is instructive to compare our upper bound to the known lower bounds on the secret-key agreement capacity given by optical QKD protocols. BB84 is the most widely examined QKD protocol. When operating under polarization encoding and ideal conditions over a lossy channel (perfect single-photon sources and detectors, and with the efficient BB84 protocol as proposed in~\cite{LCA06}), and with no excess noise (i.e., Eve can mount only passive attacks consistent with the channel loss alone), the key rate is simply given by $\eta/2$ secret key bits per mode~\cite{SBCDLP09,LCA06}. The best known lower bound on the secret-key agreement capacity of the pure-loss channel ${\mathcal N}_\eta$ was established in Ref.~\cite{PGBL09}: \begin{equation} P_2(\mathcal{N}_\eta) \ge \log_{2}\left( 1/\left( 1-\eta\right) \right)\;{\text{key bits per mode}}. \label{eq:lower-bound-p2}% \end{equation} These lower bounds and our upper bound are compared in Fig.~\ref{lossybosonic}, where we also plot the rate achievable with the coherent-state continuous-variable protocol (CV-GG02) \cite{SBCDLP09,GG02}, another major protocol, without any excess noise or imperfections. Also, as examples of the practical performance of QKD, we plot the decoy-state BB84 protocol including device imperfections as well as the imperfect CV-GG02 with the uncalibrated- and calibrated-device scenarios (see the Methods section for the details of the protocols and parameters). We note the following important observations. First, the two bounds in (\ref{eq:good-upper-bound}) and (\ref{eq:lower-bound-p2}) become close for $\eta\ll1$ (the high-loss regime, relevant for long-distance QKD). Thus, for small $\eta$, our upper bound demonstrates that the protocol from \cite{PGBL09} achieving the lower bound in (\ref{eq:lower-bound-p2}) is nearly optimal. 
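The closeness of the two bounds in the high-loss regime can be checked numerically (our sketch, not from the paper): for $\eta\ll1$, the upper bound $\log_2\left((1+\eta)/(1-\eta)\right)\approx 2\eta/\ln 2$ and the lower bound $\log_2\left(1/(1-\eta)\right)\approx \eta/\ln 2$, so their ratio approaches 2:

```python
import numpy as np

eta = 1e-4  # high-loss regime, relevant for long-distance QKD
upper = np.log2((1 + eta) / (1 - eta))  # squashed-entanglement upper bound
lower = np.log2(1 / (1 - eta))          # reverse coherent information bound
# Small-eta approximations: 2*eta/ln2 and eta/ln2, respectively
assert abs(upper - 2 * eta / np.log(2)) / upper < 1e-3
assert abs(lower - eta / np.log(2)) / lower < 1e-3
print(upper / lower)  # close to 2 for small eta
```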
To be precise, the upper and the lower bounds are well approximated by $2\eta/\ln 2$ and $\eta/\ln 2$ key bits per mode, when $\eta \ll 1$ (see Fig.~\ref{lossybosonic}). Furthermore, the ideal BB84 rate ($\eta/2$ key bits per mode) is worse than the reverse coherent information lower bound only by a constant factor ($2/\ln 2 \approx 2.885$) in the high-loss regime, where the factor of 2 reflects the fact that BB84 uses two polarization (or other) modes to send one bit, and the factor of $\ln 2$ reflects the gap between qubit-based and continuous-variable protocols. On the other hand, the protocol described in~\cite{PGBL09} that attains the reverse coherent information rate requires an ideal SPDC source and collective quantum operations and measurements, structured realizations of which are not known. In addition, even with detector impairments, both the decoy-state BB84 and the CV-GG02 protocols (with or without the assumption of calibrated devices) can achieve secret key rates that scale linearly with the channel transmittance, up to a maximum channel loss threshold beyond which the rate plummets to zero. For BB84, the loss value where this rate cliff occurs is driven by the detector dark counts, whereas for CV-GG02 it is driven primarily by the electronic noise in Bob's homodyne detector. Hence, given the comparisons shown in Fig.~\ref{lossybosonic}, and since the BB84 and CV protocols are realizable with currently available resources, it does not seem very worthwhile to pursue alternative repeater-less QKD protocols for a higher key generation rate over a lossy channel. Second, our bound significantly advances one of the long-standing open problems in quantum information theory: finding a good upper bound on $P_2(\mathcal{N})$, as well as on the two-way assisted quantum capacity $Q_2(\mathcal{N})$ (the number of qubits that can be sent perfectly per use of a quantum channel with two-way classical-communication assistance). 
This is true for a general quantum channel ${\mathcal N}$, and in particular for optical quantum channels such as $\mathcal{N}_\eta$. One of the important open questions is whether or not the true $P_2(\mathcal{N}_\eta)$ is additive. In other words, the question is whether the protocol that attains $P_2(\mathcal{N}_\eta)$ requires an input state that is entangled over several channel uses, or if a product input state suffices. In general, it is likely that $P_2(\mathcal{N})$ is super-additive for some channels, as is the case for the unassisted secret-key agreement capacity $P(\mathcal{N})$~\cite{LWZG09} and the classical capacity of quantum channels~\cite{H09}. On the other hand, it is known that the classical capacity and the unassisted quantum capacity of the lossy optical (bosonic) channel are additive~\cite{GGLMSY04,WPG07}. As mentioned above, our upper bound on $P_2(\mathcal{N})$ is a single-letter expression for any channel, i.e., the input optimization to evaluate our upper bound needs to be performed over a single channel use. The lower bound~\eqref{eq:lower-bound-p2} is the single-letter reverse coherent information evaluated for the lossy bosonic channel, whose operational interpretation is an entanglement distribution rate achievable with a product input (a two-mode squeezed vacuum state) and collective quantum operations, assisted by classical feedback, in the subsequent steps of the protocol~\cite{GPLS09}. Thus, despite the fact that the additivity question for $P_2(\mathcal{N}_\eta)$ remains open, any super-additive gain cannot be very large in the high-loss regime, and $P_2(\mathcal{N}_\eta)$ must scale as $\sim \eta$, when $\eta \ll 1$. As a final point, consider a two-way QKD protocol, i.e., when Alice and Bob may use the lossy optical channel ${\mathcal N}_\eta$ in both directions, and also communicate freely over a two-way public channel. 
In such a case, the secret-key agreement capacity is upper bounded by $2\log_2\left((1+\eta)/(1-\eta)\right)$ secret key bits per mode transmitted in both directions. In summary, we have established in \eqref{eq:good-upper-bound} an upper bound on the rate at which any QKD protocol can generate a shared secret key over a point-to-point lossy optical channel. This upper bound is a function of the channel loss only, and it does not increase with increasing transmit power. We compared our upper bound with the best known lower bound on the key rate and with the key rates attainable in principle by the BB84 and CV-GG02 protocols under ideal operating conditions. This comparison reveals that there is essentially no scaling gap between the rates of known protocols and the ultimate secret-key agreement capacity, and thereby that there is no escaping the fundamental exponential decay of key rate with distance. On the one hand, the result of this paper provides a powerful new tool for upper bounding the private capacity with two-way classical-communication assistance for a general quantum channel. On the other hand, the application to QKD over optical channels strongly suggests the need for quantum repeaters to perform QKD at high rates over long distances, no matter which actual QKD protocol one may choose to use. Some important open questions remain. Although our bound applies for any finite number of channel uses, one might be able to improve upon our result by establishing a strong converse theorem or, even better, by considering a second-order analysis, along the lines discussed in \cite{TH12}. For establishing a strong converse theorem, some combination of the ideas presented in \cite{BBCW13,Oppenheim08} might be helpful. 
Another important open question is whether our upper bound in (\ref{eq:good-upper-bound}) could be achievable using some QKD protocol, or whether the bound can be further tightened by choosing a squashing channel for Eve other than the pure-loss channel (as shown in Fig.~\ref{fig:bosonic_setup}) or by investigating upper bounds other than $E_{\rm sq}(\mathcal{N})$. For this last question, some recent results in classical information theory \cite{GA10-1,GA10-2} might be helpful. \section*{Methods} {\bf Weak converse and the key rate upper bound for a finite number of channel uses.} Although our main result establishes only a weak converse theorem, it is possible to estimate the effect of a finite number of channel uses, which is always the case in any practical setting. We carefully estimate $f(\varepsilon)$ discussed in the proof of Theorem 1. From the continuity inequality in (\ref{eq:continuity}), we can explicitly describe the additional term $f(\varepsilon)$: \begin{equation*} nR \le n E_{\rm sq}(\mathcal{N}) + 16\sqrt{\varepsilon} \log d + 4 h_2\left(2\sqrt{\varepsilon}\right) , \end{equation*} where $d = 2^{nR}$. This implies the following bound: \begin{equation*} R \le \frac{1}{1-16\sqrt{\varepsilon}} \left( E_{\rm sq}(\mathcal{N}) + 4 h_2\left(2\sqrt{\varepsilon}\right)/n \right). \end{equation*} In the limit of large $n$, the second term $4 h_2\left(2\sqrt{\varepsilon}\right)/n$ vanishes and the upper bound becomes \begin{equation*} R \le \frac{1}{1-16\sqrt{\varepsilon}} \left( E_{\rm sq}(\mathcal{N})\right), \end{equation*} which suggests that there might be room for a trade-off between communication rate and error probability/security as quantified by $\varepsilon$. If one could establish a strong converse theorem, this would eliminate the implied trade-off given above, in the ideal case showing that the bound $R \le E_{\rm sq}(\mathcal{N})$ would hold in the large $n$ limit regardless of the value of $\varepsilon$. 
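The finite-$n$ bound above is easy to evaluate numerically; the following sketch (ours, assuming the standard binary entropy $h_2(x)=-x\log_2 x-(1-x)\log_2(1-x)$) uses the parameter values of the quantitative example that follows:

```python
import numpy as np

def h2(x):
    """Binary entropy function h2(x) = -x log2(x) - (1-x) log2(1-x)."""
    return -x * np.log2(x) - (1 - x) * np.log2(1 - x)

eps, n, eta = 1e-10, 1e4, 1e-4
e_sq_bound = np.log2((1 + eta) / (1 - eta))  # upper bound on E_sq(N_eta)
prefactor = 1 / (1 - 16 * np.sqrt(eps))
correction = 4 * h2(2 * np.sqrt(eps)) / n
r_bound = prefactor * (e_sq_bound + correction)
print(prefactor)   # approximately 1.0002
print(correction)  # approximately 1.36e-7
print(r_bound)     # approximately 2.887e-4
```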
Let us consider a quantitative example, consisting of a pure-loss channel with $\varepsilon = 10^{-10}$ and $n=10^4$ (these are not too far from realistic QKD parameters \cite{TOKYO_QKD}). For these values, we get \begin{align} 1 / (1-16\sqrt{\varepsilon}) & \approx 1.0002 , \\ 4 h_2\left(2\sqrt{\varepsilon}\right)/n & \approx 1.36 \times 10^{-7}. \end{align} Furthermore, a 200 km fiber with $0.2$ dB km$^{-1}$ loss corresponds to $\eta =10^{-4}$ and $\log_2 ((1+\eta)/(1-\eta)) \approx 2.885 \times 10^{-4}$. Replacing $E_{\rm sq}(\mathcal{N})$ with our upper bound $\log_2 ((1+\eta)/(1-\eta))$ (see \eqref{eq:good-upper-bound}) and plugging in the above values of $\varepsilon$ and $n$, we find that \begin{equation*} R \leq 2.887 \times 10^{-4}, \end{equation*} which is rather close to $\log_2 ((1+\eta)/(1-\eta)) \approx 2.885 \times 10^{-4}$. However, one can improve upon our upper bound by establishing a strong converse theorem or, even better, by providing a refined second-order analysis along the lines discussed in \cite{TH12}. {\bf Infinite-dimensional system.} An optical channel can transmit infinite-dimensional (i.e., continuous-variable) quantum states, whereas Theorem 1 implicitly assumes finite-dimensional systems. We can circumvent this subtlety by assuming that the protocol between Alice and Bob begins and ends with finite-dimensional states, but that the processing between the first and final steps can be conducted with infinite-dimensional systems. That is, their objective is to generate a maximally entangled state $\left\vert \Phi\right\rangle _{AB}$\ or a finite number of secret key bits, and they do so by having Alice encode a finite-dimensional quantum state into an infinite-dimensional system, while the final step of the protocol has them truncate their systems to be of finite dimension. 
In this way, the continuity inequality in the proof of Theorem 1 safely applies, and all of the other steps in between involve only the quantum data processing inequality, which has been proven to hold in the general infinite-dimensional setting \cite{U77}. {\bf Decoy-state BB84 and CV-GG02 protocols with experimental imperfections.} The asymptotic secret key rates of the decoy-state BB84 protocol and the CV-GG02 protocol in Fig.~\ref{lossybosonic} are calculated from the theoretical models including imperfections summarized in \cite{SBCDLP09}. The parameters used for the plots are as follows. For the decoy-state BB84, the visibility of interference at Bob's receiver is 0.99, the transmittance of Bob's device is unity, the detector efficiency is 0.2, the dark count rate is $10^{-6}$, and the information leakage parameter due to the practical error-correcting code is set to 1.2. For the CV-GG02 protocol, the optical noise is 0.005, the detector efficiency is 0.5, the electronic noise of the detector is 0.01, and the efficiency of the error-correcting code is set to 0.9. These parameters are chosen to reflect state-of-the-art device technologies. In the `uncalibrated-device' scenario, Eve is able to access Bob's homodyne detector imperfections, e.g., to entangle her system with the loss and noise of the detector. The `calibrated-device' scenario is based on the assumption that the homodyne detector is calibrated in a secure laboratory, such that Eve cannot entangle her system with the detector imperfections. This assumption allows Alice and Bob to significantly extend the key rate and the loss tolerance. Note that the purpose of these plots is to compare these protocols under imperfections with our fundamental upper bound, not to compare the practical aspects of these protocols (for example, our model does not include important practical considerations such as phase stability, the repetition rate of the system, actual coding strategies, etc.). 
For completeness, the key rate formulae for each protocol and scenario are described in Supplementary Note 3. {\bf Acknowledgements}. We are grateful to Francesco Buscemi, Mikio Fujiwara, Min-Hsiu Hsieh, Seth Lloyd, Cosmo Lupo, Kiyoshi Tamaki, and Andreas Winter for insightful discussions. We also acknowledge Mark Byrd, Eric Chitambar, and the other participants of the Boris Musulin Workshop on Open Quantum Systems and Information Processing for helpful feedback. Finally, we thank Bob Tucci for kindly pointing us to his related work on squashed entanglement. This research was supported by the DARPA Quiness Program through US Army Research Office award W31P4Q-12-1-0019. MT was partially supported by Open Partnership Joint Projects of JSPS Bilateral Joint Research Projects. SG acknowledges partial support from the SeaKey program, through the US Office of Naval Research contract number N00014-14-C-0002. {\bf Author contributions}. All authors contributed extensively to this work and to the writing of the manuscript. {\bf Competing financial interests}. The authors declare no competing financial interests.
\section{Introduction} \subsection{Osculating flags} Consider the following construction: let $f : \mathbb{P}^1 \to \mathbb{P}^{n-1}$ be the Veronese embedding $t \mapsto (t,t^2,\ldots, t^{n-1})$, and consider the Grassmannian \[G(k,n) = G(k,H^0(\mathcal{O}_{\mathbb{P}^1}(n-1)))\] of linear series $V$ of rank $k$ and degree $n-1$ on $\mathbb{P}^1$. At each point $f(p) \in \mathbb{P}^{n-1}$, there is the \emph{osculating flag} $\mathscr{F}(p)$ of planes intersecting $f(\mathbb{P}^1)$ at $f(p)$ with the highest possible multiplicity. In this paper, we consider Schubert conditions with respect to such flags. Let ${\scalebox{.3}{\yng(3,3)}}$ denote the $k \times (n-k)$ rectangular partition. For a partition $\lambda \subseteq {\scalebox{.3}{\yng(3,3)}}$, we denote by $\Omega(\lambda,p)$ the Schubert variety for $\lambda$ with respect to $\mathscr{F}(p)$, and for a collection of distinct points $p_\bullet = (p_1, \ldots, p_r)$ and partitions $\lambda_\bullet = (\lambda_1, \ldots, \lambda_r)$, we set \[S(\lambda_\bullet, p_\bullet) = \bigcap_{i=1}^r \Omega(\lambda_i, p_i).\] Note that the codimension of $\Omega(\lambda,p_i)$ is $|\lambda|$. We call the quantity $\rho(\lambda_\bullet) := k(n-k) - \sum |\lambda_i|$ the \emph{expected dimension} of $S(\lambda_\bullet, p_\bullet)$. Geometrically, such Schubert conditions describe linear series $V \subset H^0(\mathcal{O}_{\mathbb{P}^1}(n-1))$ satisfying specified \emph{vanishing conditions} at each $p_j$. That is, the finite set $\{\mathrm{ord}_{p_j}(s) : s \in V\} \subset \mathbb{Z}_{\geq 0}$ of orders of vanishing at $p_j$ of sections $s\in V$ is specified. These conditions first arose in the study of limit linear series \cite{EH86}. There it was shown that a collection of linear series on the components of a reducible nodal curve $C$ occurs as the limit of a single linear series on a smoothing of $C$ if and only if the collection satisfies `complementary' vanishing conditions with respect to the nodes of $C$. 
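For example, when $n = 4$, so that $f$ parametrizes the twisted cubic in $\mathbb{P}^3$, the osculating flag $\mathscr{F}(p)$ at a point $p \in \mathbb{P}^1$ with coordinate $t$ consists of the row spaces of the matrices of divided derivatives of $f$, in homogeneous coordinates
\[
\begin{pmatrix} 1 & t & t^2 & t^3 \end{pmatrix}, \qquad
\begin{pmatrix} 1 & t & t^2 & t^3 \\ 0 & 1 & 2t & 3t^2 \end{pmatrix}, \qquad
\begin{pmatrix} 1 & t & t^2 & t^3 \\ 0 & 1 & 2t & 3t^2 \\ 0 & 0 & 1 & 3t \end{pmatrix},
\]
namely the point $f(p)$ itself, the tangent line, and the osculating plane, which meet $f(\mathbb{P}^1)$ at $f(p)$ with multiplicity at least $1$, $2$, and $3$, respectively.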
On the other hand, such Schubert conditions have been of interest in intersection theory and real Schubert calculus, thanks to such transversality theorems as: \begin{thm}\emph{\cite{EH86}} \label{thm:EH86} For any choice of points $p_i$ and partitions $\lambda_i$, the intersection $S(\lambda_\bullet,p_\bullet)$ is dimensionally transverse. (It is empty if $\rho(\lambda_\bullet) < 0$.) \end{thm} and \begin{thm} \emph{\cite{MTV09}} \label{thm:MTV09} If the $p_i$ are all in $\mathbb{RP}^1$ and $\rho(\lambda_\bullet) = 0$, then $S(\lambda_\bullet, p_\bullet)$ is reduced and consists entirely of real points. \end{thm} Theorem \ref{thm:MTV09} (originally known as the \emph{Shapiro-Shapiro Conjecture}) has inspired work relating the real structure of $S(\lambda_\bullet, p_\bullet)$, as the points $p_\bullet$ vary, to combinatorial Schubert calculus and the theory of Young tableaux. An excellent survey of this material is \cite{Sottile}. The key observation is that the cardinality of $S(\lambda_\bullet, p_\bullet)$ is the Littlewood-Richardson coefficient $c_{\lambda_1, \ldots, \lambda_r}^{{\scalebox{.3}{\yng(3,3)}}}$. We may then ask if there is a canonical bijection between the points of $S$ and the corresponding combinatorial objects. First, we allow the points to vary. The construction above gives a family $S(\lambda_\bullet)$ over the space $\mathcal{U}_r$ of $r$-tuples of distinct points of $\mathbb{P}^1$ or, working up to automorphism, the moduli space $\Mo{r}$. Speyer \cite{Sp} extended the family to allow the points $p_i$ to collide, working over the compactification $\Mbar{r}$. \begin{thm}\emph{\cite{Sp}} There are flat, Cohen-Macaulay families $\mathcal{S}(\lambda_\bullet) \subset \mathcal{G}(k,n) \to \Mbar{r}$, whose restriction to $\Mo{r}$ is $S(\lambda_\bullet) \subset G(k,n) \times \Mo{r} \to \Mo{r}$. 
The boundary fibers of $\mathcal{G}(k,n)$ consist of limit linear series and the boundary fibers of $\mathcal{S}(\lambda_\bullet)$ consist of limit linear series satisfying the conditions $\lambda_\bullet$ at the marked points. \end{thm} In the case of zero-dimensional Schubert problems, the real locus of $\mathcal{S}(\lambda_\bullet)$ has the following remarkable structure: \begin{thm}\emph{\cite{Sp}} \label{thm:Sp14} When $\rho(\lambda_\bullet) = 0$, the map of manifolds $\mathcal{S}(\lambda_\bullet)(\mathbb{R}) \to \Mbar{r}(\mathbb{R})$ is a smooth covering map. The fibers of $\mathcal{S}(\lambda_\bullet)(\mathbb{R})$ are indexed by certain collections of Young tableaux; the monodromy of the cover is then given by operations from Sch\"{u}tzenberger's jeu de taquin. \end{thm} In addition to giving the desired bijection from points of $S$ to tableaux, Theorem \ref{thm:Sp14} provides a geometrical interpretation of jeu de taquin as the result of lifting arcs from $\Mbar{r}(\mathbb{R})$ to $\mathcal{S}(\lambda_\bullet)(\mathbb{R})$, i.e. varying the Schubert problem in real 1-parameter families. Related operations such as promotion and evacuation also have geometrical meanings in $\mathcal{S}(\lambda_\bullet)$, and will play a starring role in the main content of this paper, which is the study of one-dimensional Schubert problems (below). The most important type of marked stable curve $C$ in this setting consists of $r-2$ components connected in a chain. We will call $C$ a \newword{caterpillar curve} (see Figure \ref{fig:caterpillar-curve}). Such a curve is automatically defined over $\mathbb{R}$. The statement of Theorem \ref{thm:Sp14} is simpler for caterpillar curves. For example, let $\mathcal{S} = \mathcal{S}({\scalebox{.5}{\yng(1)}}, \ldots, {\scalebox{.5}{\yng(1)}})$, with $k(n-k)$ copies of ${\scalebox{.5}{\yng(1)}}$, and let $C$ be a caterpillar curve. 
Let $SYT({\scalebox{.3}{\yng(3,3)}})$ be the set of standard Young tableaux of shape ${\scalebox{.3}{\yng(3,3)}}$. Then: \newtheorem*{thmsp}{Theorem 1.4 (special case)} \begin{thmsp} The fiber of $\mathcal{S}$ over $C$ is in bijection with $SYT({\scalebox{.3}{\yng(3,3)}})$. If we follow an arc to another caterpillar curve $C'$, the tableau is either unchanged or altered by a Bender-Knuth involution. \end{thmsp} Purbhoo in \cite{Pu} has analogous results regarding the real monodromy of the Wronski map $G(k,n) \to \mathrm{Hilb}_{k(n-k)}(\mathbb{P}^1)$. This map associates to a linear series $V$ its higher ramification locus, as a subscheme of $\mathbb{P}^1$. Here also, the monodromy (over the locus where the map is unramified) is described in terms of jeu de taquin, yielding a geometrical interpretation of JDT and the Littlewood-Richardson rule. The primary difference is that the Wronski map is not a covering map: the fibers collide over the boundary of the Hilbert scheme. \subsection{The case of curves; results of this paper} We now study the case $\rho(\lambda_\bullet) = 1$, so that $\mathcal{S}(\lambda_\bullet) \to \Mbar{r}$ is a family of curves. We are interested in both the geometry of the family and a combinatorial description of $\mathcal{S}(\lambda_\bullet)(\mathbb{R})$ as a CW-complex. We state our main geometrical result first: \begin{thm} \label{thm:main-geom} There is a finite, flat, surjective morphism $\mathcal{S}(\lambda_\bullet) \to \mathcal{C}$ of varieties over $\Mbar{r}$, where $\mathcal{C} \to \Mbar{r}$ is the universal curve. This map is defined over $\mathbb{R}$, is \'{e}tale over the real points of $\mathcal{C}$, and the preimage of every real point consists entirely of real points. In particular, for $[C] \in \Mbar{r}(\mathbb{R})$, the map of curves $\mathcal{S}(\lambda_\bullet)|_{[C]}(\mathbb{R}) \to C(\mathbb{R})$ is a covering map. \end{thm} The key idea behind Theorem \ref{thm:main-geom} is the following. 
A point $s \in S(\lambda_\bullet, p_\bullet)$ is a solution to an `underspecified' Schubert problem, and it is not hard to show that, for generic $s$, there is a unique $(r+1)$-st point $z \in \mathbb{P}^1 - \{p_\bullet\}$, such that $s$ satisfies the single-box Schubert condition $\Omega({\scalebox{.5}{\yng(1)}}, z)$. We show that the assignment $s \mapsto z$ extends to a morphism $S(\lambda_\bullet, p_\bullet) \to \mathbb{P}^1$. We then extend this construction to the boundary fibers. (We think of $z$ as an additional `moving ${\scalebox{.5}{\yng(1)}}$ condition'.) In particular, thinking of $\mathcal{C} \to \Mbar{r}$ as the `forgetting map' $\Mbar{r+1} \to \Mbar{r}$, we have a diagram \[ \xymatrix{ \mathcal{S}(\lambda_\bullet;\ {\scalebox{.5}{\yng(1)}}_{r+1}) \ar[r] \ar[d]_{\pi} & \Mbar{r+1} \ar[d] \\ \mathcal{S}(\lambda_\bullet) \ar[r] \ar@{-->}[ur]^f & \Mbar{r} } \] and we show that $\pi$ is an isomorphism of total spaces. The map $f$ is the map of Theorem \ref{thm:main-geom}. We then use the description of the total space of the zero-dimensional Schubert problem $\mathcal{S}(\lambda_\bullet;\ {\scalebox{.5}{\yng(1)}}_{r+1})$ to study the (one-dimensional) fibers of $\mathcal{S}(\lambda_\bullet)$.\\ Over $\Mo{r}$, this result leads to the following: \newtheorem*{cor:as-real-as-poss}{Corollary \ref{cor:as-real-as-poss}} \begin{cor:as-real-as-poss} If the $p_i$ are all in $\mathbb{RP}^1$, the curve $S = S(\lambda_\bullet, p_\bullet) \subset G(k,n)$ has smooth real points. Moreover, $S(\mathbb{C}) - S(\mathbb{R})$ is disconnected. \end{cor:as-real-as-poss} We think of Corollary \ref{cor:as-real-as-poss} as saying that $S$ is `almost as real as possible' when all the $p_j$ are real. We say `almost' because while it is often desirable for a real integral algebraic curve of genus $g$ to have $g+1$ real connected components, this is not the case for $S$ (see Example \ref{exa:large-k} for a smooth curve of genus $2$ with one real connected component). 
Instead, the real connected components of $S$ are determined by combinatorial data, which we state below. Note that we do not assert in general that $S$ is smooth or integral, though it is reduced. In fact there are cases where $\chi(\mathcal{O}_S) > 1$ (see Example \ref{exa:disconnected}), from which we observe: \newtheorem*{cor*}{Corollary} \begin{cor*} A one-dimensional Schubert problem in $G(k,n)$ need not be a connected curve. \end{cor*} We remark that our other results primarily concern \emph{fibers} of $\mathcal{S}(\lambda_\bullet)$, not its total space. The latter is isomorphic to the total space of $\mathcal{S}(\lambda_\bullet;\ {\scalebox{.5}{\yng(1)}}_{r+1})$, hence has a description from Theorem \ref{thm:Sp14}. We do note that Theorem \ref{thm:main-geom} implies that the topology of the fiber $S(\lambda_\bullet, p_\bullet)(\mathbb{R})$ does not change over a connected component $X \subset \Mo{r}(\mathbb{R})$. In particular: \begin{cor*} Each real connected component of $S(\lambda_\bullet)(\mathbb{R})|_X$ is homeomorphic to a cylinder $S^1 \times X$. \end{cor*} We now describe the real topology of the fibers of $\mathcal{S}(\lambda_\bullet)(\mathbb{R})$ in terms of Young tableaux. Our description extends that of Theorem \ref{thm:Sp14} via the isomorphism $\pi$ above, and is in terms of orbits of Young tableaux and dual equivalence classes under operations related to Sch\"{u}tzenberger promotion and evacuation. We define a \newword{chain of dual equivalence classes from $\alpha$ to $\beta$} to be a sequence ${\bf D} = (D_1, \ldots, D_r)$ of dual equivalence classes of skew standard Young tableaux, such that $\mathrm{sh}(D_1)$ extends $\alpha$, $\mathrm{sh}(D_{i+1})$ extends $\mathrm{sh}(D_i)$ for each $i$, and $\beta$ extends $\mathrm{sh}(D_r)$. We say the chain has \newword{type} $(\lambda_1, \ldots, \lambda_r)$ if $\lambda_i$ is the rectification shape of $D_i$. 
Let $X_\alpha^\beta(\lambda_\bullet) := X_\alpha^\beta(\lambda_1, \ldots, \lambda_r)$ denote the set of such chains. In section \ref{sec:shuffling-ops}, we define noncommuting involutions $\mathrm{sh}_i$ and $\mathrm{esh}_i$, called {\bf shuffling} and {\bf evacuation-shuffling}, both of which switch $\lambda_i$ and $\lambda_{i+1}$ in the type of the chain. We note that $X_\varnothing^\lambda({\scalebox{.5}{\yng(1)}}, \ldots, {\scalebox{.5}{\yng(1)}})$, with $|\lambda|$ copies of ${\scalebox{.5}{\yng(1)}}$, is just $ SYT(\lambda)$, and under this identification $\mathrm{esh}_i$ is the identity function and $\mathrm{sh}_i$ is the $i$-th Bender-Knuth involution. We note that Sch\"{u}tzenberger promotion on $SYT(\lambda)$ then corresponds to the composition \[\mathrm{sh}_{|\lambda|-1} \circ \cdots \circ \mathrm{sh}_2 \circ \mathrm{sh}_1 : X_\varnothing^\lambda({\scalebox{.5}{\yng(1)}}, \ldots, {\scalebox{.5}{\yng(1)}}) \to X_\varnothing^\lambda({\scalebox{.5}{\yng(1)}}, \ldots, {\scalebox{.5}{\yng(1)}}).\] We think of chains of dual equivalence classes as generalizations of standard tableaux. \\ Our main combinatorial result is the following: \newtheorem*{thm:DE-caterpillar-covering}{Theorem \ref{thm:DE-caterpillar-covering}} \begin{thm:DE-caterpillar-covering} Let $C$ be a caterpillar curve with marked points $p_1, \ldots, p_r$ from left to right. Let $S = \mathcal{S}(\lambda_\bullet)|_{[C]}$. The covering map $S(\mathbb{R}) \to C(\mathbb{R})$ is as follows: \begin{enumerate} \item[(i)] If $q$ is the node between $p_i$ and $p_{i+1}$, the fiber of $S$ over $q$ is indexed by the set \[X_\varnothing^{{\scalebox{.3}{\yng(3,3)}}}(\lambda_1, \ldots, \lambda_i,{\scalebox{.5}{\yng(1)}},\lambda_{i+1}, \ldots, \lambda_r).\] The fibers over $p_1$ and $p_r$ are analogous, with ${\scalebox{.5}{\yng(1)}}$ in the second and second-to-last positions, respectively. 
\end{enumerate} Then, for $i = 2, \ldots, r-1$, we have: \begin{enumerate} \item[(ii)] The arc \emph{through $p_i$} lifts to an arc from ${\bf D}$ to $\mathrm{esh}_i({\bf D})$, where \[\mathrm{esh}_i : X_\varnothing^{{\scalebox{.3}{\yng(3,3)}}}(\lambda_1, \ldots, {\scalebox{.5}{\yng(1)}},\lambda_i, \ldots, \lambda_r) \to X_\varnothing^{{\scalebox{.3}{\yng(3,3)}}}(\lambda_1, \ldots, \lambda_i,{\scalebox{.5}{\yng(1)}}, \ldots, \lambda_r)\] is the $i$-th evacuation-shuffle. \item[(iii)] The arc \emph{opposite $p_i$} lifts to an arc from ${\bf D}$ to $\mathrm{sh}_i({\bf D})$, where \[\mathrm{sh}_i : X_\varnothing^{{\scalebox{.3}{\yng(3,3)}}}(\lambda_1, \ldots, {\scalebox{.5}{\yng(1)}},\lambda_i, \ldots, \lambda_r) \to X_\varnothing^{{\scalebox{.3}{\yng(3,3)}}}(\lambda_1, \ldots, \lambda_i,{\scalebox{.5}{\yng(1)}}, \ldots, \lambda_r)\] is the $i$-th shuffle. \end{enumerate} \end{thm:DE-caterpillar-covering} \begin{figure}[h] \centering \includegraphics[scale=0.7]{covering-space.pdf} \\ \caption{Center: a covering map $\pi : S(\mathbb{R}) \to C(\mathbb{R})$, where $C$ is a caterpillar curve with marked points $p_1, \ldots, p_4$. Left, right: two nearby desingularizations, obtained by smoothing the node in two different ways. Note that the number of connected components may change.} \label{fig:covering-space} \end{figure} By passing to a nearby desingularization, we obtain a description for fibers over $\Mo{r}$ in terms of orbits of tableau promotion: \newtheorem*{cor:promotion}{Corollary \ref{cor:promotion}} \begin{cor:promotion} If the $p_i$ are all in $\mathbb{RP}^1$ and $S = S({\scalebox{.5}{\yng(1)}}, \ldots, {\scalebox{.5}{\yng(1)}} \ ; p_\bullet) \subset G(k,n)$, there is a bijection \[\left\{\begin{split}\text{components\ } \\ \text{ of } S(\mathbb{R}) \hspace{0.4cm} \end{split}\right\} \longleftrightarrow SYT({\scalebox{.3}{\yng(3,3)}})/\omega,\] where $\omega : SYT({\scalebox{.3}{\yng(3,3)}}) \to SYT({\scalebox{.3}{\yng(3,3)}})$ is Sch\"{u}tzenberger promotion. 
Similarly, if $S = S(\lambda_\bullet, p_\bullet) \subset G(k,n)$, and the circular ordering of the points is $p_1, \ldots, p_r$, there is a bijection \[\left\{\begin{split}\text{components\ } \\ \text{ of } S(\mathbb{R}) \hspace{0.4cm} \end{split}\right\} \longleftrightarrow X_{{\scalebox{.3}{\yng(1)}}}^{{\scalebox{.3}{\yng(3,3)}}}(\lambda_\bullet)/\omega',\] where $\omega'$ is the composition \[\omega' = \iota^{-1} \circ \mathrm{esh}_1 \cdots \mathrm{esh}_{r-1} \mathrm{sh}_{r-1} \cdots \mathrm{sh}_{1} \circ \iota\] and $\iota$ is the natural bijection $X_{{\scalebox{.3}{\yng(1)}}}^{{\scalebox{.3}{\yng(3,3)}}}(\lambda_\bullet) \to X_\varnothing^{{\scalebox{.3}{\yng(3,3)}}}({\scalebox{.5}{\yng(1)}}, \lambda_\bullet).$ \end{cor:promotion} We emphasize that while our proofs rely crucially on degenerations over $\Mbar{r}$, Corollary \ref{cor:promotion} describes Schubert problems contained in a single Grassmannian. We also note that the operator $\omega'$ depends on the circular ordering of the points $p_\bullet$. If two circular orderings degenerate to the same caterpillar curve $C$, we may view the operators $\omega'_1, \omega'_2$ as different sequences of shuffles and evacuation shuffles applied to $X_{{\scalebox{.3}{\yng(1)}}}^{{\scalebox{.3}{\yng(3,3)}}}(\lambda_1, \ldots, \lambda_r)$. See Corollary \ref{cor:connected-components-desing}. In general the orbit structure may differ; see Example \ref{exa:minimal-counterex} for an example in $G(3,8)$. A necessary condition, however, is that certain Littlewood-Richardson numbers be greater than 1: \newtheorem*{cor:mult-free}{Corollary \ref{cor:mult-free}} \begin{cor:mult-free} Suppose the pairwise products $\lambda_i \cdot \lambda_j$ in $H^*(G(k,n))$ are multiplicity-free. Then the operators $\omega$ for different circular orderings are all conjugate. In particular, the number of real connected components of $S(\mathbb{R})$ does not depend on the ordering of the $p_\bullet$. 
\end{cor:mult-free} We note that the condition above holds for any Schubert problem on $G(2,n)$, and for any Schubert problem in which every $\lambda_i$ is a rectangular partition. \subsection{The genus of $S$ and K-theory} A smooth, integral algebraic curve $S$ defined over $\mathbb{R}$ that is disconnected by its real points has the property \[\#\left\{\begin{split}\text{components\ } \\ \text{ of } S(\mathbb{R}) \hspace{0.4cm} \end{split}\right\} \equiv g(S) + 1 \equiv \chi(\mathcal{O}_S) \ (\mathrm{mod}\ 2).\] In fact, as long as $S(\mathbb{R})$ is smooth, the above equation holds (with $\chi(\mathcal{O}_S)$) even if $S$ is singular or reducible, since its singularities then occur in complex conjugate pairs. For our curves $S(\lambda_\bullet, p_\bullet)$, we have described the left-hand side of this identity in terms of objects from $H^*(G(k,n))$; on the other hand, we may compute $\chi(\mathcal{O}_S)$ in the K-theory ring $K(G(k,n))$. Let $[\mathcal{O}_\lambda]$ denote the class of the structure sheaf of the Schubert variety for $\lambda$, and let $k_{\lambda_\bullet}^\nu$ be the absolute value of the coefficient of $[\mathcal{O}_\nu]$ in the K-theory product $\prod_i [\mathcal{O}_{\lambda_i}]$. This is zero unless $|\nu| \geq \sum |\lambda_i|$, and if equality holds then $k_{\lambda_\bullet}^\nu = c_{\lambda_\bullet}^\nu$. We have the following: \newtheorem*{cor:parity-eqn}{Corollary \ref{cor:parity-eqn}} \begin{cor:parity-eqn} Let $\alpha, \beta, \gamma$ be partitions with $|\alpha| + |\beta| + |\gamma| = k(n-k) - 1$. Let $\omega' = \mathrm{esh}_2 \circ \mathrm{sh}_2$, where \[\mathrm{esh}_2, \mathrm{sh}_2 : X_\varnothing^{{\scalebox{.3}{\yng(3,3)}}}(\alpha, {\scalebox{.5}{\yng(1)}}, \beta, \gamma) \to X_\varnothing^{{\scalebox{.3}{\yng(3,3)}}}(\alpha, \beta, {\scalebox{.5}{\yng(1)}}, \gamma)\] are the shuffle and evacuation-shuffle operators. 
Then \[\#\mathrm{orbits}(\omega') \equiv c_{\alpha {\scalebox{.3}{\yng(1)}} \beta \gamma}^{{\scalebox{.3}{\yng(3,3)}}} - k_{\alpha \beta}^{\gamma^c} \ (\mathrm{mod} \ 2) \ \ \text{ and }\ \ \mathrm{sign}(\omega') \equiv k_{\alpha \beta}^{\gamma^c} \ (\mathrm{mod} \ 2),\] where we think of $\omega'$ as a permutation with sign $0$ or $1$. We also have an inequality \[ c_{\alpha {\scalebox{.3}{\yng(1)}} \beta \gamma}^{{\scalebox{.3}{\yng(3,3)}}} \leq k_{\alpha \beta}^{\gamma^c} + \#\mathrm{orbits}(\omega').\] \end{cor:parity-eqn} Similar statements hold for products of more than three Schubert classes. Corollary \ref{cor:parity-eqn} has intriguing connections to Thomas and Yong's K-theoretic jeu de taquin for increasing tableaux: \begin{thm}\emph{\cite{ThYo}} \label{thm:thomas-yong-kjdt} Let $\alpha, \beta, \gamma$ be partitions satisfying $|\alpha| + |\beta| \leq |\gamma^c|$. Then $k_{\alpha \beta}^{\gamma^c}$ is the number of increasing tableaux of shape $\gamma^c/\alpha$ that rectify to the highest-weight tableau of shape $\beta$ under K-theoretic jeu de taquin. \end{thm} When $|\alpha| + |\beta| + |\gamma| = k(n-k) - 1$, any such tableau is standard except for a single repeated entry. An element of $X_\varnothing^{{\scalebox{.3}{\yng(3,3)}}}(\alpha, {\scalebox{.5}{\yng(1)}}, \beta, \gamma)$ is represented by similar data: a filling of $\gamma^c/\alpha$ by, first, a single box extending $\alpha$, say to $\alpha^+$, followed by a standard tableau $T$ of shape $\gamma^c/\alpha^+$, rectifying to $\beta$. The operator $\omega'$ slides the ${\scalebox{.5}{\yng(1)}}$ through $T$; if we view ${\scalebox{.5}{\yng(1)}}$ as an `extra' entry for $T$, we obtain a sequence of increasing tableaux. Despite this similarity, we do not know a direct combinatorial proof of Corollary \ref{cor:parity-eqn} in general.
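As a small illustration of Corollary \ref{cor:promotion}: for $G(2,4)$ with four real marked points each labeled ${\scalebox{.5}{\yng(1)}}$, the relevant rectangle is the $2 \times 2$ square, there are exactly two standard Young tableaux of that shape, and promotion swaps them; the corollary then says that $S(\mathbb{R})$ is connected for the real four-lines problem. The brute-force sketch below (our own illustrative code, not from the paper; all helper names are ours) verifies the orbit count:

```python
from itertools import permutations

def syt(shape):
    """Brute-force the standard Young tableaux of a partition shape."""
    n = sum(shape)
    tabs = []
    for perm in permutations(range(1, n + 1)):
        rows, k = [], 0
        for r in shape:
            rows.append(list(perm[k:k + r])); k += r
        if all(row[j] < row[j + 1] for row in rows for j in range(len(row) - 1)) \
           and all(rows[i][j] < rows[i + 1][j] for i in range(len(rows) - 1)
                   for j in range(len(rows[i + 1]))):
            tabs.append(tuple(map(tuple, rows)))
    return tabs

def promote(T):
    """Schuetzenberger promotion: delete the entry 1, jeu-de-taquin slide the
    hole to an outer corner, subtract 1 everywhere, and refill with n."""
    rows = [list(r) for r in T]
    n = sum(len(r) for r in rows)
    i = j = 0  # the hole starts where 1 was, i.e. at the corner (0, 0)
    while True:
        right = rows[i][j + 1] if j + 1 < len(rows[i]) else None
        below = rows[i + 1][j] if i + 1 < len(rows) and j < len(rows[i + 1]) else None
        if right is None and below is None:
            break
        if below is None or (right is not None and right < below):
            rows[i][j] = right; j += 1
        else:
            rows[i][j] = below; i += 1
    rows[i][j] = n + 1  # becomes n after the global shift
    return tuple(tuple(x - 1 for x in r) for r in rows)

def orbit_count(tabs):
    """Number of promotion orbits on the given set of tableaux."""
    seen, count = set(), 0
    for T in tabs:
        if T not in seen:
            count += 1
            while T not in seen:
                seen.add(T); T = promote(T)
    return count

tabs = syt((2, 2))
assert len(tabs) == 2 and orbit_count(tabs) == 1
```

On the $2 \times 3$ rectangle the same code finds five tableaux in two orbits, consistent with the fact that promotion on a rectangular shape has order dividing the number of boxes.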
We do obtain an explicit description of $\omega'$ when $\beta$ is a horizontal or vertical strip (the `Pieri case'): \newtheorem*{thm:pieri-case}{Theorem \ref{thm:pieri-case}} \begin{thm:pieri-case} Let $\beta$ be a horizontal strip and let $X = X_\varnothing^{{\scalebox{.3}{\yng(3,3)}}}(\alpha,{\scalebox{.5}{\yng(1)}},\beta,\gamma)$. There is a natural indexing of $X$ by the numbers $1, \ldots, |X|$, and under this indexing, the action of $\omega'$ is given by $\omega'(i) = i+1 \ (\mathrm{mod} \ |X|).$ In K-theory, $k_{\alpha \beta}^{\gamma^c} = |X| - 1$, and each increasing tableau corresponds to a successive pair $(X_i, X_{i+1})$ in the orbit, excluding the final pair $(X_{|X|},X_1)$. \end{thm:pieri-case} In this and certain other cases, the equations of Corollary \ref{cor:parity-eqn} in fact hold over $\mathbb{Z}$, and the inequality is an equality. In general, however, the quantity $c_{\alpha {\scalebox{.3}{\yng(1)}} \beta \gamma}^{{\scalebox{.3}{\yng(3,3)}}} - k_{\alpha \beta}^{\gamma^c}$ may be negative, and equality only holds mod 2. The author would be interested in combinatorial explanations of these facts. \subsection{Acknowledgments} I am indebted first and foremost to David Speyer for introducing me to Schubert calculus (with and without osculating flags) and for many helpful conversations. Thanks also to Rohini Ramadas and Maria Gillespie, and to Oliver Pechenik for pointing out Example \ref{exa:disconnected}. Part of this work was done while supported by NSF grant DMS-1361789. \subsection{Structure of this paper} The paper is organized as follows. In Section \ref{sec:schubert-mbar}, we give background on $\Mbar{r}$ and geometric Schubert calculus; we then prove Theorem \ref{thm:main-geom}. In Section \ref{sec:tableau-combinatorics}, we give background on tableau combinatorics and dual equivalence. In Section \ref{sec:schubert-real}, we prove Theorem \ref{thm:DE-caterpillar-covering} and Corollary \ref{cor:promotion}.
Finally, we discuss connections to K-theory in Section \ref{sec:k-theory}. \section{Schubert problems over $\Mbar{r}(\mathbb{C})$} \label{sec:schubert-mbar} \subsection{Grassmannians and Schubert varieties} \label{sec:grass-schubert} We write $G(k,n)$ for the Grassmannian of dimension-$k$ vector subspaces of $\mathbb{C}^n$, or $G(k,V)$ if we wish to specify an ambient vector space $V$. Let $\lambda = (\lambda^{(1)} \geq \lambda^{(2)} \geq \cdots \geq \lambda^{(k)} \geq 0)$ be a partition with $k$ parts, each of size at most $n-k$. We write $\lambda \subseteq {\scalebox{.3}{\yng(3,3)}}$ to denote this. Let $\mathscr{F}$ be a complete flag; let $F^j$ be the codimension-$j$ part of $\mathscr{F}$. We define the \newword{Schubert variety} \[\Omega(\lambda, \mathscr{F}) = \{V \in G(k,n) : \dim(V \cap F^{k-j+\lambda^{(j)}}) \geq j \text{ for } 1 \leq j \leq k\};\] this is an integral subvariety of codimension $|\lambda| = \sum \lambda^{(j)}.$ The cohomology class of $\Omega(\lambda,\mathscr{F})$ does not depend on the choice of $\mathscr{F}$; we write $[\Omega(\lambda)]$ for this class. It is well-known that the classes $\{ [\Omega(\lambda)]\}_{\lambda \subseteq {\scalebox{.3}{\yng(3,3)}}}$ form an additive basis for $H^*(G(k,n))$. Given partitions $\lambda_1, \ldots, \lambda_r$, we may write \[[\Omega(\lambda_1)] \cdots [\Omega(\lambda_r)] = \sum_\nu c_{\lambda_\bullet}^\nu [\Omega(\nu)],\] and we call the structure constants $c_{\lambda_\bullet}^\nu$ the Littlewood-Richardson numbers. (Note that $c_{\lambda_\bullet}^\nu = 0$ unless $|\nu| = \sum |\lambda_j|$.)
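For instance (a standard computation in $H^*(G(2,4))$, included for orientation): \[[\Omega(1)]^2 = [\Omega(2)] + [\Omega(1,1)], \qquad [\Omega(1)]^4 = 2\,[\Omega(2,2)],\] so $c_{(1),(1),(1),(1)}^{(2,2)} = 2$: four general lines in $\mathbb{P}^3$ have exactly two common transversals.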
We will occasionally use the identities \[c_{\lambda_1, \ldots, \lambda_r}^{{\scalebox{.3}{\yng(3,3)}}} = c_{\lambda_1, \ldots, \lambda_{r-1}}^{\lambda_r^c},\] where $\lambda_r^c$ denotes the complementary partition with respect to ${\scalebox{.3}{\yng(3,3)}}$, \[\lambda^c = (n-k-\lambda^{(k)} \geq n-k- \lambda^{(k-1)} \geq \cdots \geq n-k- \lambda^{(1)}),\] and the Pieri rule: \[c_{\lambda, {\scalebox{.3}{\yng(1)}}}^{\mu} = \begin{cases} 1 &\text{if } |\mu| = |\lambda| + 1 \text{ and } \mu \supset \lambda, \\ 0 & \text{otherwise.} \end{cases} \] \subsection{Linear systems and higher ramification} \label{subsec:lin-sys} We fix the following notation: for an integral projective curve $X$ of genus 0, let $G(k,n)_X = G(k,H^0(\mathcal{O}_X(n-1)))$. The points $V \in G(k,n)_X$ parametrize projections from the degree $(n-1)$ Veronese embedding, \[X \hookrightarrow \mathbb{P}(H^0(\mathcal{O}_X(n-1))^\vee) \dashrightarrow \mathbb{P}(V^\vee) = \mathbb{P}^{k-1},\] that is, morphisms $X \to \mathbb{P}^{k-1}$ of degree at most $n-1$. For $p \in X$, we define the \newword{osculating flag} $\mathscr{F}(p)$ in $H^0(\mathcal{O}_X(n-1))$ by \[\mathscr{F}(p)^j = \{D \in H^0(\mathcal{O}_X(n-1)) : D - j[p] \text{ is effective}\}.\] Geometrically, $\mathscr{F}(p)$ is dual to the unique flag $\mathscr{H}$ whose projectivizations intersect $X$ at $p$ with the highest possible multiplicity. Explicitly, $\mathscr{H}$ is given by the projective planes \[\mathbb{H}_j = \mathbb{P}\left(\left(H^0(\mathcal{O}_X(n-1))/\mathscr{F}(p)^{j+1}\right)^\vee\right).\] This is the unique plane of (projective) dimension $j$ that intersects $X$ at $p$ with multiplicity $j+1$ in the Veronese embedding. Thus $\mathbb{H}_0 = p$, $\mathbb{H}_1$ is the tangent line to $X$ at $p$, and so on.
In coordinates, the embedding is \[[z:1] \mapsto [1 : z : z^2 : \cdots : z^{n-1}]\] and $\mathscr{H}$ is given by the top rows of the $n \times n$ matrix \begin{equation} \label{eqn:flag-matrix} \begin{bmatrix} \frac{d^i}{dz^i}(z^{j-1}) \end{bmatrix} = \begin{bmatrix} 1 & z & z^2 & \cdots & z^{n-1} \\ 0 & 1 & 2z & \cdots & (n-1) z^{n-2} \\ 0 & 0 & 2 & \cdots & (n-1)(n-2) z^{n-3} \\ \vdots & \vdots & \vdots &\ddots & \vdots \\ 0 & 0 & 0 & \cdots & (n-1)! \end{bmatrix}. \end{equation} Schubert conditions with respect to $\mathscr{F}(p)$ or $\mathscr{H}$ are called \emph{higher ramification conditions} at $p$ for the map $X \to \mathbb{P}^{k-1}$. We will only consider Schubert varieties with respect to osculating flags, so by abuse of notation we will write $\Omega(\lambda,p)$ for the Schubert variety with respect to $\mathscr{F}(p)$ in $G(k,n)_X$. Given points $p_1, \ldots, p_r$ on $X$ and partitions $\lambda_1, \ldots, \lambda_r$, we define the Schubert problem \[S(\lambda_\bullet, p_\bullet) = \bigcap_{i=1}^r \Omega(\lambda_i,p_i).\] We will sometimes think of a point $x \in S(\lambda_\bullet,p_\bullet)$ as a morphism $X \to \mathbb{P}^{k-1}$ with prescribed ramification conditions $\lambda_i$ at $p_i$ for each $i$. We have the \newword{Pl\"{u}cker Formula}, which says that the total amount of ramification of a linear series $V$ is always equal to $\dim_\mathbb{C} G(k,n)$: \begin{thm}[Pl\"{u}cker formula] Let $V \in G(k,n)_X$. For each $x \in X$, let $\lambda_x$ be the largest Schubert condition such that $V \in \Omega(\lambda_x,x)$. Then $\lambda_x = \varnothing$ for all but finitely-many $x$, and \[\sum_{x \in X} |\lambda_x| = k(n-k).\] \end{thm} See \cite{GrHaBook} for a proof. When $k=2$, the Pl\"{u}cker formula reduces to the Riemann-Hurwitz formula for ramification points of maps $\mathbb{P}^1 \to \mathbb{P}^1$ of degree $n-1$. 
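As a concrete check (standard, not specific to this paper): with $k = 2$ and $n = 3$, a generic $V \in G(2,3)_X$ defines a degree-$2$ map $\mathbb{P}^1 \to \mathbb{P}^1$, and Riemann-Hurwitz gives \[2 \cdot 0 - 2 = 2\,(2 \cdot 0 - 2) + \sum_{x} (e_x - 1),\] so $\sum_x (e_x - 1) = 2$: there are two simple ramification points, each contributing a single box $\lambda_x = {\scalebox{.5}{\yng(1)}}$, in agreement with $k(n-k) = 2$.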
The Pl\"{u}cker formula is essentially equivalent to Theorem \ref{thm:EH86} of \cite{EH86}, that the dimension of $S(\lambda_\bullet,p_\bullet)$ is always $k(n-k) - \sum |\lambda_i|$. Here it is helpful to note that $\Omega({\scalebox{.5}{\yng(1)}}, \mathscr{F})$ is an ample divisor. Finally, we have the following formula for minors of the matrix \eqref{eqn:flag-matrix} above and the Schubert condition ${\scalebox{.5}{\yng(1)}}$: \begin{lemma} \label{lem:local-equation-box} For a subset $J \subset [n]$, let \begin{align*} \Delta_J &= \prod_{j_1 < j_2 \in J} (j_2 - j_1), \\ e_J &= \sum_{j \in J} j - {|J| + 1 \choose 2}. \end{align*} Note that $e_J + e_{J^c} = |J|(n-|J|).$ We have \[\Delta_J(z) = \Delta_J \cdot z^{e_J},\] where $\Delta_J(z)$ is the determinant of the top-justified square minor of the matrix \eqref{eqn:flag-matrix} using the columns $J$. A point $V \in G(k,n)_X$, with Pl\"{u}cker coordinates $pl_I$, satisfies the Schubert condition ${\scalebox{.5}{\yng(1)}}$ with respect to the flag \eqref{eqn:flag-matrix} if and only if \[\sum_{I \in {[n] \choose k}} (-1)^{e_{I_c}}\Delta_{I^c}(z) \cdot pl_I = \sum_{I \in {[n] \choose k}} \Delta_{I^c} \cdot (-z)^{e_{I_c}} \cdot pl_I = 0.\] \end{lemma} \begin{proof} See \cite{Pu}. \end{proof} \subsection{Curves with marked points} \label{sec:M0r} A \newword{nodal curve} is a connected, reduced projective variety $C$ of dimension 1, all of whose singularities are simple nodes. We define the \newword{dual graph} of $C$ to be the graph $G = (V,E)$, where \[V = \{\text{irreducible components of } C\},\ E = \{\text{nodes of } C\}.\] We say $C$ has arithmetic genus $g = \dim_\mathbb{C} H^1(C,\mathcal{O}_C)$. We are interested in curves of genus zero, and we note that $C$ is genus zero if and only if every irreducible component of $C$ is isomorphic to $\mathbb{P}^1$ (in particular, is smooth) and the dual graph of $C$ is a tree. 
We select distinct smooth points $p_1, \ldots, p_r$ on $C$, and consider $C$ up to automorphisms $\phi$ that fix the $p_i$. We say $C$ is \newword{stable} if the only automorphism of $C$ fixing the marked points is the identity. Since $\mathrm{Aut}(\mathbb{P}^1)$ is simply 3-transitive, $C$ is stable if and only if every component of $C$ has $\geq3$ nodes and/or marked points. We say $p \in C$ is a \newword{special point} if it is a node or a marked point. We define \begin{align*} \mathcal{U}_r &= \{(p_1, \ldots, p_r) : p_i \ne p_j \text{ for all } i \ne j\} \subset \mathbb{P}^1 \times \cdots \times \mathbb{P}^1, \\ \Mo{r} &= \mathcal{U}_r / \mathrm{Aut}(\mathbb{P}^1), \end{align*} where the action of $\mathrm{Aut}(\mathbb{P}^1)$ is the diagonal. We think of $\Mo{r}$ as the moduli space of \emph{irreducible} stable curves with $r$ distinct marked points. We have an open immersion \[\Mo{r} \hookrightarrow \Mbar{r} = \{\text{stable, genus-0 curves with $r$ distinct smooth marked points}\} / \sim,\] where two curves $(C, p_\bullet)$ and $(C', p'_\bullet)$ are equivalent if there is an isomorphism $\phi : C \to C'$ such that $\phi(p_i) = p'_i$. The space $\Mbar{r}$ is a smooth projective variety of dimension $r-3$, with a universal family $\mathcal{C} \to \Mbar{r}$, flat and of relative dimension 1, where the fiber over the point $[C]$ is the curve $C$ itself. We note that $\Mbar{r}(\mathbb{C})$ has a stratification into locally closed cells indexed by trees $T$ with $r$ labeled leaves, such that every internal vertex has degree $\geq 3$, up to graph isomorphism preserving the leaf labels. The cell corresponding to $T$ is the set of stable curves whose dual graph is $T$; it has dimension \[\sum_{\text{internal vertices } v \in T} (\deg(v) - 3).\] The unique maximal cell, corresponding to the graph with only one internal vertex, is $\Mo{r}$. 
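For a concrete instance of this dimension count: when $r = 6$, the tree with two internal vertices joined by an edge, each internal vertex also meeting three leaves, indexes the locus of two-component stable curves with a single node; both internal vertices have degree $4$, so the cell has dimension \[(4-3) + (4-3) = 2 = \dim \Mbar{6} - 1,\] as expected for a one-nodal degeneration.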
The 0-dimensional cells of $\Mbar{r}$ correspond to `minimally stable' curves $C$, where each component has exactly 3 special points. If $C$ is minimally stable and the internal vertices of its dual graph form a path (i.e. there are no components having 3 nodes), we say $C$ is a \newword{caterpillar curve}. Caterpillar curves will play a special combinatorial role in this paper. \begin{figure}[h] \centering \includegraphics[scale=0.5]{caterpillar-curve.pdf} \caption{A caterpillar curve with 6 marked points.} \label{fig:caterpillar-curve} \end{figure} \subsubsection{Forgetting maps} Let $T = \{1, \ldots, r\}$ and let $T' \subseteq T$. We define the \newword{forgetting map} $\varphi_{T'} : \Mbar{T} \to \Mbar{T - T'}$ as follows: given $[C] \in \Mbar{T}$, we forget the points with labels in $T'$; then we contract any irreducible component of $C$ that is left with fewer than three special points. This gives a stable curve with marked points labeled by $T - T'$. See Figure \ref{fig:forgetful}. \begin{figure}[h] \centering \includegraphics[scale=0.5]{forgetful.pdf} \\ \caption{If we forget the points labeled 3 and 5, we contract their components.} \label{fig:forgetful} \end{figure} The simplest forgetting map $\varphi_{r+1} : \Mbar{r+1} \to \Mbar{r}$ is of special importance: the fiber over $[C] \in \Mbar{r}$ is a copy of $C$ itself. We may think of the $(r+1)$-st marked point as moving around $C$, bubbling off new components when it collides with the existing special points. Thus $\Mbar{r+1}$ is isomorphic to the universal family $\mathcal{C} \to \Mbar{r}$. \subsubsection{Topology of $\Mbar{r}$ over $\mathbb{R}$} The locus $\Mbar{r}(\mathbb{R})$ is a smooth manifold of (real) dimension $r-3$ with the structure of a CW-complex, which we now describe. We refer to \cite{Devadoss} for this material.
First, a \newword{dihedral ordering} of a finite set $X$ is an equivalence class of circular orderings $<$ on $X$, where $<_1$ and $<_2$ are equivalent if they are opposites, that is \[a <_1 b \text{ if and only if } b <_2 a.\] In other words, a dihedral ordering is a circular ordering, up to reflection. The cells of $\Mbar{r}(\mathbb{R})$ are indexed by the following data: \begin{enumerate} \item An unrooted tree $T$ with $r$ labeled leaves, up to isomorphism, as above; \item For each internal vertex $v \in T$, a dihedral ordering of the edges incident to $v$. \end{enumerate} The dihedral ordering arises from the fact that $\mathrm{Aut}(\mathbb{R}\mathbb{P}^1)$ acts only by rotating and reflecting the marked points. There are $\tfrac{1}{2}(r-1)!$ maximal cells of real dimension $r-3$, corresponding to the dihedral orderings of $r$ points on $\mathbb{R}\mathbb{P}^1$. The codimension-one cells correspond to curves with exactly one node; we will speak of \newword{wall-crossing} from one maximal cell to an adjacent one, which results in reversing the order of a consecutive sequence of points. \subsection{Node labelings.} Now let $C$ be a stable curve with components $C_i$ and marked points $p_1, \ldots, p_r$. A \newword{(strict) node labeling} $\nu$ of $C$ is a function \[\nu : \big\{(q,C_i) : q \text{ is a node on } C_i\big\} \to \big\{ \text{partitions } \lambda \subseteq {\scalebox{.3}{\yng(3,3)}} \big\},\] such that if $q$ is the node between $C_i$ and $C_j$, then \[\nu(q,C_i) = \nu(q,C_j)^c,\] where $\nu^c$ denotes the complementary partition. This is a choice of a pair of complementary partitions on opposite sides of each node. We will also consider \newword{excess node labelings}, where instead we allow $\nu(q,C_i) \supseteq \nu(q,C_j)^c$, i.e. the partitions may be more than complementary. 
Given a node labeling $\nu$, we define the space \[\Phi_\nu = \prod_{\text{components } C_i}\ \bigcap_{\text{nodes } q \in C_i} \Omega(\nu(q,C_i),q) \subseteq \prod_{i} G(k,n)_{C_i},\] so $\Phi_\nu$ applies the Schubert conditions from $\nu$ separately in each $G(k,n)_{C_i}$. All our Schubert problems on $C$ take place in the ambient space \begin{equation} \label{eqn:node-labeling-space} \mathcal{G}(k,n)_C = \bigcup_{\text{strict node labelings } \nu} \Phi_\nu \ \ \subset\ \prod_i G(k,n)_{C_i}. \end{equation} We note that this is the space of limit linear series on $C$, in the sense of Eisenbud-Harris \cite{EH86}. Let $\lambda_i\ (1 \leq i \leq r)$ be a choice of partition for each marked point of $C$. Let $C_{p_i}$ be the component containing $p_i$ and $\Omega(\lambda_i,p_i) \subset G(k,n)_{C_{p_i}}$ be the Schubert variety in the appropriate Grassmannian. We define the Schubert problem on $C$, \[\mathcal{S}(\lambda_\bullet)_C = \mathcal{G}(k,n)_C \cap \bigg( \bigcap_{i=1}^r \Omega(\lambda_i,p_i) \bigg).\] Thus the components of $\mathcal{S}(\lambda_\bullet)_C$ are precisely the components of \[\Phi_\nu \cap \bigg( \bigcap_{i=1}^r \Omega(\lambda_i,p_i) \bigg),\] for all (strict) node labelings $\nu$ of $C$. Our Schubert problems therefore describe collections of morphisms $C_i \to \mathbb{P}^{k-1}$ with prescribed ramification at the nodes and marked points of $C_i$. Speyer has shown that the above space moves in families of marked stable curves: \begin{thm}[Theorem 1.1 of \cite{Sp}] \label{thm:recall-Sp14} There exist flat, Cohen-Macaulay families $\mathcal{G}(k,n)$ and $\mathcal{S}(\lambda_\bullet)$ over $\Mbar{r}$, with an inclusion \[ \xymatrix{ \mathcal{S}(\lambda_\bullet) \ar[dr] \ar@{^{(}->}[r] & \mathcal{G}(k,n) \ar[d] \\ & \Mbar{r}.
}\] The relative dimensions of $\mathcal{G}(k,n)$ and $\mathcal{S}(\lambda_\bullet)$ are $k(n-k)$ and $k(n-k) - \sum|\lambda_i|$, and for each point $[C] \in \Mbar{r}$, the fibers are the spaces $\mathcal{G}(k,n)_C$ and $\mathcal{S}(\lambda_\bullet)_C$ described above. \end{thm} We sketch the construction. First, for each subset $T \subseteq [r]$ of size 3, we have a forgetting map $\varphi_T : \Mbar{r} \to \Mbar{T}$, and a Grassmannian $G(k,n)_T$. We pull these back to $\Mbar{r}$ and form a large fiber product \[\mathscr{B} = \prod_{\varphi_T} G(k,n)_T.\] This is a trivial bundle over $\Mbar{r}$, with fibers isomorphic to products of ${r \choose 3}$ copies of $G(k,n)$. Over $\Mo{r}$, we have the trivial bundle \[G(k,n)_{\mathbb{P}^1} \times \Mo{r} \to \Mo{r},\] with a diagonal embedding \[\Delta: G(k,n)_{\mathbb{P}^1} \times \Mo{r} \hookrightarrow \mathscr{B}\] commuting with the projection to $\Mo{r}$. Then $\mathcal{G}(k,n)$ is the closure of the image of $\Delta$. A detailed analysis of the factors in $\mathscr{B}$ then establishes that $\mathcal{G}(k,n)$ is flat and Cohen-Macaulay and the boundary fibers of $\mathcal{G}(k,n)$ have the desired form. An important element of the construction is the following. Let $[C] \in \Mbar{r}$ be a stable curve. For each irreducible component $C_i \subset C$, there is a factor $G(k,n)_{T_i}$, such that, over a neighborhood $U$ of $[C]$, the projection \[\pi : \mathscr{B} \to \prod_i G(k,n)_{T_i}\] gives an isomorphism of $\mathcal{G}(k,n)$ onto its image. Moreover, this isomorphism identifies $G(k,n)_{T_i}$ with the Grassmannian $G(k,n)_{C_i}$ defined above. In particular, this embeds the fiber of $\mathcal{G}(k,n)$ over $[C]$ into $\prod_{C_i} G(k,n)_{C_i}$, where it is the space $\mathcal{G}(k,n)_C$ of Equation \eqref{eqn:node-labeling-space}. (The same is true of $\mathcal{S}(\lambda_\bullet)$.) We will use this fact in our proof of Theorem \ref{thm:main-geom}.
\subsubsection{Excess node labelings} The irreducible components of $\mathcal{S}(\lambda_\bullet)$ are described by strict node labelings, but we must also consider excess node labelings. They will arise in two ways: \begin{itemize} \item by intersecting components of $\mathcal{S}(\lambda_\bullet)$ described by different node labelings, and \item by the forgetting maps $\Mbar{r} \to \Mbar{s}$ with $s < r$. \end{itemize} For the first, consider two node labelings $\nu$ and $\nu'$ and the corresponding subsets $S_\nu, S_{\nu'} \subseteq \mathcal{S}(\lambda_\bullet,p_\bullet)$. Then we have \[S_\nu \cap S_{\nu'} = S_{\nu \cup \nu'},\] where $\nu \cup \nu'$ is the excess node labeling obtained by taking the union of the labels of $\nu$ and $\nu'$. Note that this intersection is nonempty if and only if, for each component $C_i \subset C$, the excess Schubert problem from $\nu \cup \nu'$ on $C_i$ is nonempty. For the second, let $\nu$ be a node labeling on $C$. Consider a forgetting map $\Mbar{r} \to \Mbar{s}$. Let $C'$ be the image of $C$. \begin{lemma}Assume that $S_\nu$ is nonempty. Then the labeling of the nodes of $C'$ by the same labels as $C$ (on the remaining components) is an excess node labeling. \end{lemma} \begin{proof} By forgetting points one at a time, it is sufficient to consider the case $s = r-1$. In this case, at most one component is contracted. Call it $Z$, and assume $Z$ is connected to two other components $X,Y$. (If $Z$ is connected to only one other component, the node vanishes when we contract $Z$, so there is nothing to prove.) Let $q_X, q_Y$ be the pair of nodes connecting $Z$ to $X,Y$. So we have \[\nu(q_X,X) \supseteq \nu(q_X,Z)^c \text{ and } \nu(q_Y,Z) \supseteq \nu(q_Y,Y)^c.\] By definition, in $C'$, the node between $X$ and $Y$ has labels $\nu(q_X,X)$ and $\nu(q_Y,Y)$.
Since we assumed $S_\nu$ was nonempty (for $C$), the Schubert problem on $Z$ must be nonempty, so \[\nu(q_X,Z)^c \supseteq \nu(q_Y,Z),\] which gives the desired containment $\nu(q_X,X) \supseteq \nu(q_Y,Y)^c.$ \end{proof} \begin{rmk} In the case where $Z$ is connected to only one other component $X$, the second special point on $Z$ must be a marked point $p$. If $p$ is labeled by $\lambda$, the same proof shows that $\lambda \subseteq \nu(q_X,X)$, so our contraction procedure also produces an excess Schubert condition at $p$. \end{rmk} Finally, we will use the fact that any excess node labeling on $C$ comes from contracting a strict node labeling with additional marked points: \begin{lemma} Let $\nu$ be an excess node labeling. Assume there is only one node $q = X \cap Y$ with excess labels. Then there is a unique curve $\tilde C$ with $r+1$ marked points, and a unique \emph{strict} node labeling $\tilde \nu$ of $\tilde C$, such that forgetting the $(r+1)$-st point takes $\tilde C$ to $C$ and $\tilde \nu$ to $\nu$. \end{lemma} \begin{proof} Let $\tilde C$ be the curve in which $q$ is replaced by an extra component $Q$, having nodes $q_X$ and $q_Y$, and mark $Q$ with an $(r+1)$-st marked point $p_{r+1}$. Define $\tilde \nu$ to be the same as $\nu$ for nodes other than $q_X$ and $q_Y$, and set \begin{align*} \tilde \nu(q_X,X) = \nu(q,X) &\text{ and } \tilde\nu(q_X,Q) = \nu(q,X)^c, \\ \tilde \nu(q_Y,Y) = \nu(q,Y) &\text{ and } \tilde\nu(q_Y,Q) = \nu(q,Y)^c. \end{align*} (See Figure \ref{fig:excess-pop}.) It is clear that $\tilde C$ contracts to $C$ and $\tilde \nu$ to $\nu$ under the forgetting map $\varphi_{r+1}$, and that this construction is unique. 
\end{proof} \begin{figure}[h] \centering \includegraphics[scale=0.5]{excess-pop.pdf} \caption{Turning an excess node labeling into a strict node labeling (of a larger curve).} \label{fig:excess-pop} \end{figure} \subsubsection{The dimension-1 case} We now assume $\sum |\lambda_i| = k(n-k)-1$, so $\mathcal{S}(\lambda_\bullet,p_\bullet)$ has dimension 1. For each node labeling $\nu$, precisely one component $C_\nu$ of $C$ has labels that sum to $k(n-k)-1$; all other components have labels that sum to $k(n-k)$. We will call $C_\nu$ the \emph{main component} of $C$ and the other $C_i$'s the \emph{frozen components} for the node labeling $\nu$. We have the following description of the connectivity between $S_\nu$ for different $\nu$'s: \begin{lemma} \label{excess-labels-dim1} Let $\nu$ and $\nu'$ be distinct strict node labelings, and suppose $S_\nu \cap S_{\nu'}$ is nonempty. Then the main components $C_\nu$ and $C_{\nu'}$ of $C$ are distinct and adjacent, $\nu$ and $\nu'$ agree everywhere except at the node $q = C_\nu \cap C_{\nu'}$, and $\nu'(q,C_\nu)$ is an extension of $\nu(q,C_\nu)$ by exactly 1 box (and vice versa for the labels on $C_{\nu'}$.) \end{lemma} \begin{proof} If $C_i$ is a ``frozen'' component, the labels on it cannot change: otherwise the Schubert problem on $C_i$ will be overdetermined and $S_\nu \cap S_{\nu'}$ will be empty. Hence $\nu$ and $\nu'$ agree on any component which is frozen for both. Moreover, if $q$ is a node and one side of $q$ is frozen for both labelings, then $\nu$ and $\nu'$ agree on the frozen side, hence on both sides (since the labelings are strict). In particular, if the main components $C_\nu, C_{\nu'}$ are equal or non-adjacent, every node must have at least one side on a shared frozen component, hence $\nu = \nu'$, a contradiction. The only case remaining is where there exists a node $q$ between the main components. 
Since $C_\nu$ is frozen for $\nu'$, we see that $\nu(q,C_\nu) \cup \nu'(q,C_\nu) = \nu'(q,C_\nu)$; by counting, the latter is exactly one box larger than $\nu(q,C_\nu)$. \end{proof} \subsection{Lifting to $\Mbar{r+1}$.} Consider the following observation: let $p_1, \ldots, p_r$ be distinct points on $\mathbb{P}^1$, and let $x \in S(\lambda_\bullet,p_\bullet) \subset G(k,n)$ be a point of the solution set of the corresponding Schubert problem. So $x$ induces a morphism $f : \mathbb{P}^1 \to \mathbb{P}^{k-1}$ with higher ramification as specified by $\lambda_\bullet$. We have prescribed only $k(n-k)-1$ boxes' worth of higher ramification of $f$. Hence, by the Pl\"{u}cker formula, there exists a unique point with one additional box of ramification. (Either some $p_i$ satisfies a one-box-larger Schubert condition $\lambda_i'$, or some unmarked point $z \in \mathbb{P}^1$ is simply ramified and $x \in S(\lambda_\bullet, p_\bullet) \cap S({\scalebox{.5}{\yng(1)}}, z)$ for a unique $z$.) Let $S'$ be the incidence correspondence \begin{equation} \label{incidence-correspondence} S' = \{(x,z) : x \in S(\lambda_\bullet,p_\bullet) \cap S({\scalebox{.5}{\yng(1)}}, z)\} \subset G(k,n) \times (\mathbb{P}^1 - \{p_1, \ldots, p_r\}). \end{equation} The projection to $G(k,n)$ induces a map $\pi: S' \to S(\lambda_\bullet, p_\bullet)$; by the above remarks, $\pi$ is injective. In fact, letting $\overline{S'} \subset G(k,n) \times \mathbb{P}^1$ be the closure, we will show that $\pi : \overline{S'} \to S(\lambda_\bullet,p_\bullet)$ is an isomorphism, and remains so when the $p_i$ (and $z$) are allowed to collide. (Note that we are not assuming $S(\lambda_\bullet,p_\bullet)$ to be smooth.) We will need the following lemma on simple nodes: \begin{lemma} \label{nodal-iso} Let $X,Y \subseteq Z$ be subschemes such that $X \cup Y = Z$ and the scheme-theoretic intersection $X \cap Y$ is one reduced point.
Let $f : A \to Z$ be a morphism whose restrictions $f^{-1}(X) \to X$ and $f^{-1}(Y) \to Y$ are isomorphisms. Then $f$ is an isomorphism. \end{lemma} \begin{proof} See Corollary \ref{nodal-iso-appendix} in the appendix. \end{proof} \begin{thm} \label{box-lift} With notation as above, let $z$ be an $(r+1)$-st marked point; label $z$ with a single box. Composing the ``forgetting" map $\varphi_{r+1} : \Mbar{r+1} \to \Mbar{r}$ with the structure map for $\mathcal{S}(\lambda_\bullet;{\scalebox{.5}{\yng(1)}}_z)$ yields the following diagram: \[ \xymatrix{ \mathcal{S}(\lambda_\bullet;{\scalebox{.5}{\yng(1)}}_z) \ar[r] \ar[d]_{\pi} & \Mbar{r+1} \ar[d]^{\varphi_{r+1}} \\ \mathcal{S}(\lambda_\bullet) \ar[r] & \Mbar{r} } \] (This diagram is not Cartesian.) Then $\pi$ is an isomorphism. \end{thm} For $\tilde x \in \mathcal{S}(\lambda_\bullet;\ {\scalebox{.5}{\yng(1)}}_z)$ lying over $\tilde C \in \Mbar{r+1}$, the map $\pi$ consists of forgetting the marked point $z$, then possibly contracting the component $Z \subset \tilde C$ containing $z$. In the latter case, $\pi(\tilde x)$ also forgets the morphism $Z \to \mathbb{P}^{k-1}$, which had ramification exactly ${\scalebox{.5}{\yng(1)}}$ at $z$. Thus we must recover both $z$ and, when necessary, the additional morphism. \begin{proof} We first construct the set-theoretic inverse for $\pi$. Let $x \in \mathcal{S}(\lambda_\bullet)$, lying over a stable curve $C$, and let $\nu$ be a node labeling of $C$ with $x \in S_\nu$. Let $C_\nu \subseteq C$ be the main component of $\nu$, so $x$ gives a morphism $C_\nu \to \mathbb{P}^{k-1}$ for which all but one point of ramification has been specified. Let $t \in C_\nu$ be the point with additional ramification. (Note that $t$ does not depend on the choice of $\nu$: if $x \in S_\nu \cap S_{\nu'}$, then the main components of $\nu$ and $\nu'$ are adjacent by Lemma \ref{excess-labels-dim1}, and $t$ must be the node between them, where the excess labels occur.)
The assignment $x \mapsto t$ gives a (set-theoretic) diagonal map $\mathcal{S}(\lambda_\bullet) \to \Mbar{r+1}$ that commutes with the diagram (thinking of $\Mbar{r+1}$ as the universal family over $\Mbar{r}$.) Let $\tilde C$ be the curve corresponding to $t \in \Mbar{r+1}$. If $t$ is not a special point, then $\tilde C$ is the same curve as $C$, with $z=t$ as the $(r+1)$-st marked point. The morphisms $C_i \to \mathbb{P}^{k-1}$ corresponding to $x \in \mathcal{S}(\lambda_\bullet)$ already satisfy ${\scalebox{.5}{\yng(1)}}$ at $t$, so they recover the point $\tilde x \in \mathcal{S}(\lambda_\bullet;{\scalebox{.5}{\yng(1)}}_z)$ lying over $\tilde C$. But if $t$ is a special point, $\tilde C$ has an additional component $Z$ bubbled off at $t$; we must recover the morphism $Z \to \mathbb{P}^{k-1}$. There are two cases. \emph{Case 1.} Suppose $t = p_i$. Then $Z$ has one node and two marked points $p_i$ and $z$. Let $\lambda_i^+$ be the (stricter) Schubert condition satisfied at $t$ for the map $C_\nu \to \mathbb{P}^{k-1}$, so $\lambda_i^+/\lambda_i$ is one box. The morphism $Z \to \mathbb{P}^{k-1}$ must satisfy $\lambda_i$ at $p_i$, ${\scalebox{.5}{\yng(1)}}$ at $z$ and the strict node labeling condition $(\lambda_i^+)^c$ at the node with $C_\nu$. The Littlewood-Richardson coefficient $c_{\lambda_i{\scalebox{.5}{\yng(1)}}(\lambda_i^+)^c}^{{\scalebox{.3}{\yng(3,3)}}}$ is 1, so there is a unique such morphism. \emph{Case 2.} Suppose $t$ is a node between components $A,B$. Then $x$ satisfied an excess node labeling $\nu \cup \nu'$; let its excess labels at $t$ be $\alpha$ on $A$ and $\beta$ on $B$, so by Lemma \ref{excess-labels-dim1}, $\alpha \supset \beta^c$ and $\alpha/\beta^c$ is one box. Now $Z$ has two nodes and the marked point $z$, and the morphism $Z \to \mathbb{P}^{k-1}$ must satisfy ${\scalebox{.5}{\yng(1)}}$ at $z$, along with the strict node labeling conditions $\alpha^c$ at the node with $A$ and $\beta^c$ at the node with $B$. 
The Littlewood-Richardson coefficient $c_{\beta^c {\scalebox{.5}{\yng(1)}} \alpha^c}^{{\scalebox{.3}{\yng(3,3)}}} = 1$, so again the morphism exists and is unique. We now show that $\pi$ is an isomorphism. Specifically, we show that for every point $[C] \in \Mbar{r}$, the restriction of $\pi$ in the diagram \[ \xymatrix{ \mathcal{S}(\lambda_\bullet; {\scalebox{.5}{\yng(1)}}_z)\big|_C \ar[r] \ar[d]_{\pi} & C \ar[d]^{\varphi_{r+1}} \\ \mathcal{S}(\lambda_\bullet)\big|_{[C]} \ar[r] & [C] } \] is an isomorphism. (Recall that the fiber of the forgetting map over $[C]$ is $C$ itself.) In particular, it follows that for every $x \in \mathcal{S}(\lambda_\bullet)$, the scheme-theoretic fiber $\pi^{-1}(x)$ is one reduced point; hence $\pi$ will be a (global) isomorphism. \emph{Reduction to the case where $C$ has one component}. Let $\nu$ be a node labeling and let $S_\nu$ be the corresponding subscheme of $\mathcal{S}(\lambda_\bullet)\big|_{[C]}$. For any frozen component $C_i$, the Schubert problem in $G(k,n)_{C_i}$ has a finite set of solutions. So, in the containment $S_\nu \subset \prod_i G(k,n)_{C_i}$, the coordinates in the $G(k,n)_{C_i}$ factors corresponding to frozen components are locally constant. In particular, projection to $G(k,n)_{C_\nu}$, where $C_\nu$ is the main component, is locally an isomorphism. Let $\nu'$ be any other node labeling. We claim that the scheme-theoretic intersection $S_\nu \cap S_{\nu'}$ is reduced. By Lemma \ref{excess-labels-dim1}, if the intersection is nonempty, the main components $C_\nu$ and $C_{\nu'}$ are distinct and adjacent. Let $x \in S_\nu \cap S_{\nu'}$. We project to $G(k,n)_{C_\nu} \times G(k,n)_{C_{\nu'}}$; this is an isomorphism on a neighborhood of $x$. But locally, the projections $S_\nu$ and $S_{\nu'}$ are contained in transverse fibers: $S_\nu \subseteq G(k,n)_{C_\nu} \times \{\text{pt}\}$ and $S_{\nu'} \subseteq \{\text{pt}\} \times G(k,n)_{C_{\nu'}}$.
Thus the scheme-theoretic intersection $S_\nu \cap S_{\nu'}$ is reduced at $x$. It follows from Lemma \ref{nodal-iso} that $\pi$ is an isomorphism if and only if it is an isomorphism when restricted to each $S_\nu$. So, by forgetting all the marked points on the frozen components for $\nu$, and contracting down to the main component, we may assume that $C$ has only one component. Thus $C = \mathbb{P}^1$ with distinct marked points $p_1,\ldots, p_r$, and $\mathcal{S}(\lambda_\bullet)$ lives in the single Grassmannian $G(k,n)_C$. \emph{Factoring $\pi$}. For each marked point $p \in C \cong \mathbb{P}^1$, let $C_p$ be the component obtained when $z$ collides with $p$ and bubbles off. We have containments \[\mathcal{S}(\lambda_\bullet; {\scalebox{.5}{\yng(1)}}_z)\big|_C \subset \mathcal{G}(k,n) \subset G(k,n)_C \times \prod_p G(k,n)_{C_p} \times \mathbb{P}^1_z,\] and the map $\pi$ is the projection that forgets the $G(k,n)_{C_p}$ factors and the $z$ coordinate. Note that the projection from $G(k,n)_{C_p}$ gives an isomorphism everywhere except possibly at $z=p$. We factor $\pi$ into two projections, \[\xymatrix{ \mathcal{S}(\lambda_\bullet; {\scalebox{.5}{\yng(1)}}_z)\big|_C \ar[r]^-\alpha & S' \ar[r]^-\beta & \mathcal{S}(\lambda_\bullet), }\] where $S' \subset G(k,n)_C \times \mathbb{P}^1$ is obtained by forgetting the $G(k,n)_{C_p}$ factors, but not the $z$ coordinate. The map $\beta : S' \to \mathcal{S}(\lambda_\bullet)$ is the closure of the incidence correspondence \eqref{incidence-correspondence}. \\ \emph{The map $\beta$ is an isomorphism}. Choose coordinates on $\mathbb{P}^1_z$ so that $p_1 = 0$ and $\infty$ is not a marked point; we restrict to the set $\AA^1 = \mathbb{P}^1 - \{\infty\}$.
With notation from Lemma \ref{lem:local-equation-box}, the equation for $S'$ is then \[f(z) = \sum_{I \in {[n] \choose k}} pl_I \Delta_{I^c} \cdot (-z)^{e_{I^c}}.\] The leading term of $f(z)$ is $pl_{[k]} \Delta_{[n] \setminus [k]}\cdot z^{k(n-k)}$; note that $pl_{[k]}$ is a unit since (over $\AA^1$) the Schubert condition ${\scalebox{.5}{\yng(1)}}$ is never satisfied at $\infty$. Now, since $S'$ satisfies the Schubert condition $\lambda_1$ at $z=0$, all the Pl\"{u}cker coordinates $pl_I$ are zero for $I > I(\lambda_1)$, where \[I(\lambda) = (n-k+1, \ldots, n-1, n) - \lambda.\] In particular, the lowest-degree term (corresponding to $I = I(\lambda_1)$) has degree $|\lambda_1|$, so we see that $z^{|\lambda_1|}$ divides $f(z)$. Our choice of coordinates (placing $p_1$ at $0$) was arbitrary, so by the same logic applied to the other marked points, we see that $(z-p_i)^{|\lambda_i|}$ divides $f(z)$ for each $i = 1, \ldots, r$. This gives \[f(z) = z^{|\lambda_1|}(z-p_2)^{|\lambda_2|} \cdots (z-p_r)^{|\lambda_r|}g(z),\] and by inspection we see that $g(z)$ is linear, with leading term $pl_{[k]} \Delta_{[n] \setminus [k]} \ z$. Now, on the open set $\mathbb{P}^1 - \{p_1, \ldots, p_r\}$, we may invert the $(z-p_i)$ factors. Hence the equation for $S'$, the closure over this open set, is just $g(z)$. Since $pl_{[k]}$ is a unit, the equation $g(z) = 0$ gives an isomorphism. \\ \emph{The map $\alpha$ is an isomorphism}. We consider the maps \[\xymatrix{ \mathcal{S}(\lambda_\bullet; {\scalebox{.5}{\yng(1)}}_z)\big|_C \ar[r]^-\alpha & S' \ar[r]^-{\pi_2} & \mathbb{P}^1_z. }\] We know that $\alpha$ is an isomorphism except possibly over the points $z = p_i$ for each $i$. We restrict to $z = p_i$, so $z$ and $p_i$ bubble off on the component $C_{p_i}$. We may project away from all the $G(k,n)_{C_p}$ factors except the one corresponding to $C_{p_i}$, so the map $\alpha$ is the projection of $G(k,n)_C \times G(k,n)_{C_{p_i}}$ onto its first component.
The fiber of $\mathcal{S}(\lambda_\bullet; {\scalebox{.5}{\yng(1)}}_z)$ at $z=p_i$ is now a union of the form \[\bigcup_{\nu} A_\nu \times B_\nu,\] where $\nu$ is a node labeling and $A_\nu, B_\nu$ are the corresponding subschemes of $G(k,n)_C$ and $G(k,n)_{C_{p_i}}$. Let $q$ be the node; then $\nu(q,C)$ is an extension of $\lambda_i$ by one box; in particular, the Littlewood-Richardson coefficient $c_{\lambda_i, {\scalebox{.5}{\yng(1)}}}^{\nu(q,C_{p_i})}$ is 1, so \[B_\nu = \Omega(\lambda_i,p_i) \cap \Omega({\scalebox{.5}{\yng(1)}},z) \cap \Omega(\nu(q,C_{p_i}),q) \subset G(k,n)_{C_{p_i}}\] is one reduced point. Thus the fiber is in fact of the form \[\bigcup_\nu A_\nu \times \{\text{pt}\},\] so the projection to the first factor is an isomorphism. \end{proof} Theorem \ref{thm:main-geom} now follows from Speyer's description of $\mathcal{S}(\lambda_\bullet;{\scalebox{.5}{\yng(1)}}_z) \to \Mbar{r+1}$. We obtain, as a corollary, our theorem on reality of curves over $\Mo{r}(\mathbb{R})$: \begin{cor} \label{cor:as-real-as-poss} If the $p_i$ are all in $\mathbb{R}\mathbb{P}^1$, the curve $S = S(\lambda_\bullet, p_\bullet) \subset G(k,n)$ has smooth real points. Moreover, $S(\mathbb{C}) - S(\mathbb{R})$ is disconnected. \end{cor} \begin{proof} We have a map $f: S \to \mathbb{P}^1$. By Theorem \ref{thm:Sp14}, $S(\mathbb{R}) \to \mathbb{R}\mathbb{P}^1$ is a covering map; in particular $S(\mathbb{R})$ is smooth. Also, since the preimage of every point $z \in \mathbb{R}\mathbb{P}^1$ consists of real points, we have $f^{-1}(\mathbb{R}\mathbb{P}^1) = S(\mathbb{R})$. Let $H^+, H^- \subset \mathbb{C}$ be the (strict) upper and lower half-planes. Then $S$ is disconnected by its real points since \[S(\mathbb{C}) - S(\mathbb{R}) = f^{-1}(H^+) \sqcup f^{-1}(H^-).
\qedhere\] \end{proof} For our applications to K-theory, we also need the following slightly stronger statement, in the case where $S$ is singular or reducible: \begin{cor} \label{cor:normalization-disconnected} Let $S' \subset S$ be any irreducible component, and let $\pi : \widetilde{S'} \to S'$ be its normalization. Then $\widetilde{S'}(\mathbb{R})$ is nonempty and $\widetilde{S'}(\mathbb{C}) - \widetilde{S'}(\mathbb{R})$ is disconnected. \end{cor} \begin{proof} Since $f : S \to \mathbb{P}^1$ is flat, the map $S' \to \mathbb{P}^1$ is surjective, and the fibers over $\mathbb{R}\mathbb{P}^1$ are all smooth real points of $S'$. Thus $\widetilde{S'}(\mathbb{R}) = S'(\mathbb{R}) \ne \varnothing$. The argument above, applied to $f \circ \pi$, shows that $\widetilde{S'}(\mathbb{C}) - \widetilde{S'}(\mathbb{R})$ is disconnected. \end{proof} \section{Tableau combinatorics}\label{sec:tableau-combinatorics} \subsection{Young tableaux and growth diagrams} \label{sec:combo-gds} We recall the notion of a growth diagram of partitions. Let $G$ be the directed grid graph with vertices $\mathbb{Z} \times \mathbb{Z}$ and edges pointing up and to the right. We use the Cartesian convention for coordinates, so $(i,j)$ is $i$ steps to the right and $j$ steps up from the origin. An induced subgraph $D \subset G$ is \emph{convex} if whenever $(a,b),(c,d) \in D$, the rectangle $(a,b) \times (c,d) \subseteq D$. A \newword{growth diagram on $D$} is a labeling $\lambda_{ij}$ of the vertices of $D$ by partitions, such that \begin{enumerate} \item[(i)] For each directed edge $\alpha \to \beta$, $\beta$ is an extension of $\alpha$ by a single box; \item[(ii)] For each square \[\xymatrix{ \alpha \ar[r] & \beta \\ \gamma \ar[u] \ar[r] & \delta, \ar[u] }\] if the two boxes of $\beta/\gamma$ are nonadjacent, then $\alpha$ and $\delta$ are the two distinct intermediate partitions between $\gamma$ and $\beta$. 
\end{enumerate} We think of (i) as the `growth condition' and (ii) as a `recurrence condition', for the following reason: \begin{lemma} Let $D$ be the rectangle $(a,b) \times (c,d)$ and let $\lambda_{ij}$ be a choice of partitions along a single path connecting $(a,b)$ to $(c,d)$. Then $\lambda_{ij}$ extends to a unique growth diagram on $D$. \end{lemma} \begin{proof} Repeated application of condition (ii) uniquely specifies the remaining entries. \end{proof} Growth diagrams encode the \emph{jeu de taquin} (JDT) algorithm, as follows. Let $S,T$ be skew standard tableaux such that $T$ extends $S$. Let $\mathrm{sh}(S) = \beta/\alpha$ and $\mathrm{sh}(T) = \gamma/\beta$. We think of $S$ as a sequence $\alpha \subset \alpha_1 \subset \cdots \subset \alpha_n = \beta$ of partitions, where for each $i$, $\alpha_i/\alpha_{i-1}$ is the box of $S$ labeled $i$. Likewise, we will think of $T$ as a sequence of partitions $\beta_j$ growing from $\beta$ to $\gamma$. Let $D$ be a rectangular grid of size $|\gamma/\beta| \times |\beta/\alpha|$. Label the left side of $D$ with the partitions $\alpha_i$ for $S$ and the top with the $\beta_j$ for $T$. Let $\widetilde{T}$ (resp. $\widetilde{S}$) be the bottom (resp. right) edges of the resulting growth diagram, thought of as skew standard tableaux. \begin{lemma} The tableau $\widetilde{S}$ is the result of applying forward JDT slides to $S$ in the order indicated by the entries of $T$ (starting with the smallest entry). The tableau $\widetilde{T}$ is the result of applying reverse slides to $T$ in the order indicated by the entries of $S$ (starting with the largest entry). \end{lemma} \begin{proof} See \cite{Hai}. \end{proof} In this case we say $(S,T)$ \newword{shuffled} to $(\widetilde{T},\widetilde{S})$. We say $T$ is \newword{slide equivalent} to $\widetilde{T}$, and likewise $S$ is slide equivalent to $\widetilde{S}$. \begin{lemma} Shuffling is an involution.
\end{lemma} \begin{proof} The transpose of the growth diagram $D$ used to shuffle $(S,T)$ is again a growth diagram, with left and top edges $(\widetilde{T},\widetilde{S})$ and bottom and right edges $(S,T)$. \end{proof} We will be interested in growth diagrams on the downwards-slanting diagonal region \[D = \{(i,j) : 0 \leq i + j \leq r\},\] where every vertex on the main diagonal is labeled $\varnothing$, and every vertex on the outer diagonal is labeled by the rectangle ${\scalebox{.3}{\yng(3,3)}}$. We call these \newword{cylindrical growth diagrams}, for the following reason: \begin{lemma}\label{cylindrical-recurrence} Let $\lambda_{ij}$ be a cylindrical growth diagram. Then \begin{itemize} \item[(i)] $\lambda_{(i+r)(j-r)} = \lambda_{ij}$, and \item[(ii)] $\lambda_{(r-j)(-i)} = \lambda_{ij}^c$. \end{itemize} Here $\lambda^c$ denotes the complementary partition with respect to ${\scalebox{.3}{\yng(3,3)}}$. \end{lemma} \begin{proof} See \cite[Chapter 7, Appendix 1]{StanEC2}. Note that this fact is often attributed to Sch\"{u}tzenberger. \end{proof} Thus the rows of a cylindrical growth diagram repeat with period $r$; we may think of them as wrapping around a cylinder. \subsection{Dual equivalence} Let $S,S'$ be skew standard tableaux of the same shape. We say $S$ is \newword{dual equivalent} to $S'$ if the following is always true: let $T$ be a skew standard tableau whose shape extends, or is extended by, $\mathrm{sh}(S)$. Let $\widetilde{T}, \widetilde{T}'$ be the results of shuffling $T$ with $S$ and with $S'$. Then $\widetilde{T} = \widetilde{T}'$. In other words, $S$ and $S'$ are dual equivalent if they have the same shape, and they transform \emph{other} tableaux the same way under JDT. \begin{lemma} \label{lem-dual-def2} Let $S,S'$ be skew standard tableaux of the same shape. Then $S$ is dual equivalent to $S'$ if and only if the following is always true: \begin{itemize} \item Let $T$ be a tableau whose shape extends, or is extended by, $\mathrm{sh}(S)$. 
Let $\widetilde{S}$ and $\widetilde{S}'$ be the results of shuffling $S,S'$ with $T$. Then $\mathrm{sh}(\widetilde{S}) = \mathrm{sh}(\widetilde{S}')$. \end{itemize} Additionally, in this case $\widetilde{S}$ and $\widetilde{S}'$ are also dual equivalent. \end{lemma} Thus $S$ and $S'$ are dual equivalent if \emph{their own} shapes evolve the same way under any sequence of slides. See \cite{Hai} for these and other properties of dual equivalence. Following Speyer \cite{Sp}, we extend the definition of \newword{shuffling} to dual equivalence classes: \begin{lemma} Let $S,T$ be skew tableaux, with $\mathrm{sh}(T)$ extending $\mathrm{sh}(S)$, and let $(S,T)$ shuffle to $(\widetilde{T},\widetilde{S})$. The dual equivalence classes of $\widetilde{T}$ and $\widetilde{S}$ depend only on the dual equivalence classes of $S$ and $T$. \end{lemma} The fact that rectification of skew tableaux is well-defined, regardless of the rectification order (the `fundamental theorem of JDT'), is the following statement: \begin{thm} \label{thm-dual-jdt} Any two tableaux of the same straight shape are dual equivalent. \end{thm} We will write $D_\lambda$ for the unique dual equivalence class of straight shape $\lambda$. \\ Since we may use any tableau of straight shape $\beta$ to rectify a skew tableau $S$ of shape $\alpha/\beta$, we may speak of the \newword{rectification tableau} of a slide equivalence class. Similarly, by Lemma \ref{lem-dual-def2} and Theorem \ref{thm-dual-jdt} we may speak of the \newword{rectification shape of a dual equivalence class} $\mathrm{rsh}(D)$: this is the shape of any rectification of any representative of the class $D$. \begin{lemma}\label{slide-dual} Let $D,S$ be a dual equivalence class and a slide equivalence class, with $\mathrm{rsh}(D) = \mathrm{sh}(\mathrm{rect}(S))$. There is a unique tableau in $D \cap S$. \end{lemma} \begin{proof} Uniqueness is clear. To produce the tableau, pick any $T_D \in D$.
Rectify $T_D$ using an arbitrary tableau $X$, so $(X,T_D)$ shuffles to $(\widetilde{T_D},\widetilde{X})$ (and $X$ and $ \widetilde{T_D}$ are of straight shape). Replace $\widetilde{T_D}$ by the rectification tableau $R_S$ for the class $S$, and let $(R_S,\widetilde{X})$ shuffle back to $(X,T)$. Then $T$ and $R_S$ are slide equivalent, and by Theorem \ref{thm-dual-jdt} and Lemma \ref{lem-dual-def2}, $T$ and $T_D$ are dual equivalent. \end{proof} The dual equivalence classes of a given shape and rectification shape are counted by a Littlewood-Richardson coefficient: \begin{lemma} \label{lem-dual-LRcoeff} Let $\beta/\alpha$ be a skew shape and let \[X_\alpha^\beta(\lambda) = \{\text{dual equivalence classes } D \text{ with } \mathrm{sh}(D) = \beta/\alpha \text{ and } \mathrm{rsh}(D) = \lambda \}.\] Then $|X_\alpha^\beta(\lambda)| = c_{\alpha \lambda}^\beta.$ \end{lemma} \begin{proof} It is well-known that $c_{\alpha \lambda}^\beta$ counts tableaux $T$ of shape $\beta/\alpha$ whose rectification is the highest-weight tableau of shape $\lambda$. This specifies the slide equivalence class of $T$; by Lemma \ref{slide-dual}, such tableaux are in bijection with $X_\alpha^\beta(\lambda)$. \end{proof} We remark that tableau shuffling commutes with rotation by $180^\circ$. Let $T$ be a tableau of skew shape $\alpha/\beta$, and write $T^R$ for the tableau of shape $\beta^c/\alpha^c$ obtained by rotating $T$ by $180^\circ$, then reversing the numbering of its entries. Then the dual equivalence class of $T^R$ depends only on the dual equivalence class of $T$. This gives an involution of dual equivalence classes \[D \mapsto D^R : X_\alpha^\beta(\lambda) \to X_{\beta^c}^{\alpha^c}(\lambda).\] In particular, it follows that any tableaux $T, T'$ of `anti-straight-shape' ${\scalebox{.3}{\yng(3,3)}}/\lambda^c$ are dual equivalent, and their rectifications have shape $\lambda$. 
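The rotation operation $T \mapsto T^R$ is concrete enough to sketch in code. The following Python snippet is our own illustration (the dictionary encoding of tableaux is a hypothetical choice, not code from this paper): it rotates a skew standard tableau by $180^\circ$ inside the $k \times (n-k)$ rectangle, reverses the entries, and checks that the result is again standard and that the operation is an involution.

```python
# Rotation T -> T^R of a skew standard tableau, as described in the text:
# rotate 180 degrees inside the k x (n-k) rectangle and reverse the entries.
# Tableaux are encoded as dicts {(row, col): entry}; this encoding is our own.

def rotate(T, k, ncols):
    """Send the cell (r, c) to (k-1-r, ncols-1-c) and the entry e to m+1-e."""
    m = len(T)
    return {(k - 1 - r, ncols - 1 - c): m + 1 - e for (r, c), e in T.items()}

def is_standard(T):
    """Entries must increase along rows and down columns."""
    return all(T[(r, c + 1)] > e for (r, c), e in T.items() if (r, c + 1) in T) \
       and all(T[(r + 1, c)] > e for (r, c), e in T.items() if (r + 1, c) in T)

# The standard tableau of shape (3,1), viewed inside the 2 x 3 rectangle:
#   1 2 4
#   3
T = {(0, 0): 1, (0, 1): 2, (0, 2): 4, (1, 0): 3}
TR = rotate(T, k=2, ncols=3)

assert is_standard(TR)          # T^R has skew shape (3,3)/(2) and is standard
assert rotate(TR, 2, 3) == T    # rotation is an involution
```

Here $T$ has shape $(3,1)/\varnothing$, so $T^R$ has shape $\varnothing^c/(3,1)^c = (3,3)/(2)$, matching the statement in the text.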
We define a \newword{chain of dual equivalence classes} to be a sequence $(D_1, \ldots, D_r)$ of dual equivalence classes, such that $\mathrm{sh}(D_{i+1})$ extends $\mathrm{sh}(D_i)$, for each $i$. We say the chain has \newword{type} $(\lambda_1, \ldots, \lambda_r)$ if for each $i$, $\mathrm{rsh}(D_i) = \lambda_i$. Let $X_\alpha^\beta(\lambda_1, \ldots, \lambda_r)$ denote the set of chains of dual equivalence classes of type $(\lambda_1, \ldots, \lambda_r)$, such that $\mathrm{sh}(D_1)$ extends $\alpha$ and $\beta$ extends $\mathrm{sh}(D_r)$. This has cardinality equal to the Littlewood-Richardson coefficient $c_{\alpha, \lambda_1, \ldots, \lambda_r}^\beta$. Note that there is a natural identification $X_\alpha^\beta({\scalebox{.5}{\yng(1)}}, \ldots, {\scalebox{.5}{\yng(1)}})$ (with $|\beta/\alpha|$ boxes) with the set $\mathrm{SYT}(\beta/\alpha)$ of skew standard tableaux. We will think of chains of dual equivalence classes as generalizations of standard tableaux. \subsubsection{Operations on chains of dual classes} \label{sec:shuffling-ops} We define the \newword{shuffling} operation \[\mathrm{sh}_i : X_\alpha^\beta(\lambda_1, \ldots, \lambda_i, \lambda_{i+1}, \ldots, \lambda_r) \to X_\alpha^\beta(\lambda_1, \ldots, \lambda_{i+1}, \lambda_i, \ldots, \lambda_r)\] by shuffling $(D_i,D_{i+1})$. These satisfy the relations $\mathrm{sh}_i^2 = \mathrm{id}$ and $\mathrm{sh}_i \mathrm{sh}_j = \mathrm{sh}_j \mathrm{sh}_i$ when $|i-j| > 1$. Note, however, that $\mathrm{sh}_i \mathrm{sh}_{i+1} \mathrm{sh}_i \ne \mathrm{sh}_{i+1} \mathrm{sh}_i \mathrm{sh}_{i+1}$ in general. (In the case where $\lambda_i = {\scalebox{.5}{\yng(1)}}$ for all $i$, $\mathrm{sh}_i$ reduces to the Bender-Knuth involution for standard tableaux.)
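In the single-box case these relations can be tested directly, since $\mathrm{sh}_i$ becomes the Bender-Knuth swap of the entries $i$ and $i+1$ of a standard tableau. The following minimal Python sketch (our own encoding of tableaux, not code from this paper) verifies $\mathrm{sh}_i^2 = \mathrm{id}$ and exhibits the failure of the braid relation on a small example.

```python
# Bender-Knuth involution on a standard tableau, encoded as {entry: (row, col)}.
# sh_i swaps the entries i and i+1 unless they lie in the same row or column;
# this is the single-box case of the shuffling operation sh_i in the text.

def bk(T, i):
    (r1, c1), (r2, c2) = T[i], T[i + 1]
    if r1 == r2 or c1 == c2:
        return dict(T)          # i and i+1 are adjacent in the tableau: fixed
    U = dict(T)
    U[i], U[i + 1] = U[i + 1], U[i]
    return U

# The standard tableau 1 2 / 3 4 of shape (2,2):
T = {1: (0, 0), 2: (0, 1), 3: (1, 0), 4: (1, 1)}

assert bk(bk(T, 2), 2) == T                      # sh_i is an involution
left  = bk(bk(bk(T, 1), 2), 1)                   # sh_1 sh_2 sh_1
right = bk(bk(bk(T, 2), 1), 2)                   # sh_2 sh_1 sh_2
assert left != right                             # the braid relation fails
```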
We next define the $i$-th \newword{evacuation} operation \[\mathrm{ev}_i : X_\alpha^\beta(\lambda_1, \ldots, \lambda_r) \to X_\alpha^\beta(\lambda_i, \ldots, \lambda_1, \lambda_{i+1}, \ldots, \lambda_r)\] by $\mathrm{ev}_i = \mathrm{sh}_1 (\mathrm{sh}_2 \mathrm{sh}_1) \cdots (\mathrm{sh}_{i-2} \cdots \mathrm{sh}_1) (\mathrm{sh}_{i-1} \cdots \mathrm{sh}_1)$. This results in reversing the first $i$ parts of the chain's type, by first shuffling $D_1$ outwards past $D_i$, then shuffling the $D_2'$ (now the first element of the chain) out past $D_i'$, and so on. In the case where $\alpha = \varnothing$ and $\lambda_i = {\scalebox{.5}{\yng(1)}}$ for all $i$, the operation $\mathrm{ev}_i$ reduces to evacuation of the standard tableau formed by the first $i$ entries. In general, $\mathrm{ev}_i$ is an involution: \begin{lemma} \label{evac-involution} The operation $\mathrm{ev}_i$ is an involution. \end{lemma} \begin{proof} By definition, $\mathrm{ev}_i = \mathrm{ev}_{i-1} (\mathrm{sh}_{i-1} \cdots \mathrm{sh}_1)$. On the other hand, observe that $(\mathrm{sh}_{i-1} \cdots \mathrm{sh}_1)\mathrm{ev}_i = \mathrm{ev}_{i-1}$. (Each extra $\mathrm{sh}_j$ cancels the leftmost instance of $\mathrm{sh}_j$ in $\mathrm{ev}_i$.) Thus we have \[\mathrm{ev}_i^2 = \mathrm{ev}_{i-1}(\mathrm{sh}_{i-1} \cdots \mathrm{sh}_1) \mathrm{ev}_i = \mathrm{ev}_{i-1}^2,\] and the claim follows by induction. \end{proof} In the case $\alpha = \varnothing$ and $\beta = {\scalebox{.3}{\yng(3,3)}}$, the operation $\mathrm{ev}_r$ is just reversal: \begin{lemma} \label{DE-evac-involution} The operation \[\mathrm{ev}_r : X_\varnothing^{{\scalebox{.3}{\yng(3,3)}}}(\lambda_1, \ldots, \lambda_r) \to X_\varnothing^{{\scalebox{.3}{\yng(3,3)}}}(\lambda_r, \ldots, \lambda_1)\] is given by $\mathrm{ev}_r(D_1, \ldots, D_r) = (D_r^R, \ldots, D_1^R)$. \end{lemma} We will give a proof below, using growth diagrams of dual equivalence classes. 
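In the standard-tableau case ($\alpha = \varnothing$ and every $\lambda_i$ a single box), the factorization above expresses $\mathrm{ev}_i$ as a composition of Bender-Knuth swaps, and the involution property can be checked by machine. The Python sketch below uses our own encoding of tableaux and is an illustration under that single-box assumption, not the paper's code.

```python
# Evacuation of a standard tableau via Bender-Knuth swaps, following the
# factorization ev_i = sh_1 (sh_2 sh_1) ... (sh_{i-1} ... sh_1) in the text.
# Tableaux are dicts {entry: (row, col)} -- a hypothetical encoding.

def bk(T, i):
    """Bender-Knuth swap of i and i+1 (fixed if they share a row or column)."""
    (r1, c1), (r2, c2) = T[i], T[i + 1]
    if r1 == r2 or c1 == c2:
        return dict(T)
    U = dict(T)
    U[i], U[i + 1] = U[i + 1], U[i]
    return U

def ev(T, i):
    """Apply ev_i = sh_1 (sh_2 sh_1) ... (sh_{i-1} ... sh_1) to T."""
    for a in range(1, i):
        for j in range(a, 0, -1):
            T = bk(T, j)
    return T

# The standard tableau 1 2 3 / 4 of shape (3,1):
T = {1: (0, 0), 2: (0, 1), 3: (0, 2), 4: (1, 0)}
E = ev(T, 4)
assert E == {1: (0, 0), 2: (1, 0), 3: (0, 1), 4: (0, 2)}  # 1 3 4 / 2
assert ev(E, 4) == T                                      # ev_i is an involution
```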
Finally, we define the $i$-th \newword{evacuation-shuffle} operation \[\mathrm{esh}_i : X_\varnothing^{{\scalebox{.3}{\yng(3,3)}}}(\lambda_1, \ldots, \lambda_i, \lambda_{i+1}, \ldots, \lambda_r) \to X_\varnothing^{{\scalebox{.3}{\yng(3,3)}}}(\lambda_1, \ldots, \lambda_{i+1}, \lambda_i, \ldots, \lambda_r)\] by \[\mathrm{esh}_i = \mathrm{ev}_{i+1}^{-1} \mathrm{sh}_1 \mathrm{ev}_{i+1}.\] This operation is simpler than it appears: it only affects the $i$-th and $(i+1)$-th entries of the chain, and its effect is local. Moreover, it does not depend on the other dual equivalence classes in the chain. We have the following: \begin{lemma} \label{upper-shuffle} Let ${\bf D} = (D_1,\ldots, D_r) \in X_\varnothing^{{\scalebox{.3}{\yng(3,3)}}}(\lambda_1, \ldots, \lambda_r)$ and write \[\mathrm{esh}_i({\bf D}) = (D_1', \ldots, D'_{i+1}, D'_i, \ldots, D_r').\] \begin{itemize} \item[(i)] For $j \ne i, i+1$, we have $D_j = D_j'$. \item[(ii)] The remaining two classes $D_i', D_{i+1}'$ are computed as follows: Let $\tau, \sigma$ be, respectively, the inner shape of $D_i$ and the outer shape of $D_{i+1}$. Let ${\bf D}^* = (D_\tau, D_{i}, D_{i+1}) \in X_\varnothing^\sigma(\tau, \lambda_i, \lambda_{i+1})$, with $D_\tau$ the unique dual equivalence class of straight shape $\tau$. Then \[\mathrm{esh}_2({\bf D}^*) = \mathrm{sh}_1 \mathrm{sh}_2 \mathrm{sh}_1 \mathrm{sh}_2 \mathrm{sh}_1({\bf D}^*) = (D_\tau, D_{i+1}', D_i').\] \end{itemize} \end{lemma} We will also prove this using growth diagrams. For now, we note that from the definition, $\mathrm{esh}_i^2 = \mathrm{id}$, and by Lemma \ref{upper-shuffle}, when $|i-j|>1$, $\mathrm{esh}_i \mathrm{esh}_j = \mathrm{esh}_j \mathrm{esh}_i$ and $\mathrm{esh}_i \mathrm{sh}_j = \mathrm{sh}_j \mathrm{esh}_i$. \\ Let $G$ and $D$ be as in the definition of (ordinary) growth diagrams.
Let $\lambda_{ij}$ be an assignment of a partition to each vertex $(i,j)$ of $D$, and let $H_{ij}, V_{ij}$ be assignments of dual equivalence classes to the horizontal and vertical edges beginning at $(i,j)$. We say the triple $\Gamma = (\lambda_{ij}, H_{ij}, V_{ij})$ is a \newword{dual equivalence growth diagram} if: \begin{enumerate} \item[(i)] For each directed edge $\xymatrix{\alpha \ar[r]^T & \beta}$, the dual equivalence class $T$ has shape $\beta/\alpha$, \item[(ii)] For each square \[\xymatrix{ \alpha \ar[r]^T & \beta \\ \gamma \ar[u]^S \ar[r]_{\widetilde{T}} & \delta, \ar[u]_{\widetilde{S}} }\] the dual equivalence classes $(S,T)$ shuffle to $(\widetilde{T},\widetilde{S})$. \end{enumerate} By definition, shuffling of dual equivalence classes is computed by choosing representatives, then computing the shuffles using an ordinary growth diagram with edges described by the square shown above. Thus each \emph{square} in a dual equivalence growth diagram is an equivalence class of ordinary growth diagrams. (A dual equivalence growth diagram in which adjacent partitions differ by one box is the same as an ordinary growth diagram.) We will again only consider dual equivalence growth diagrams on the downwards-slanting diagonal region \[D = \{(i,j) : 0 \leq i + j \leq r\},\] with every vertex on the main diagonal labeled $\varnothing$, and every vertex on the outer diagonal labeled ${\scalebox{.3}{\yng(3,3)}}$. We omit the leftmost and rightmost edge labels. We call such a diagram a \newword{dual equivalence cylindrical growth diagram}, or \newword{decgd}. Decgds inherit the periodicity and symmetry of ordinary cylindrical growth diagrams: \begin{lemma}\label{DE-cylindrical-recurrence} Let $\Gamma = (\lambda_{ij}, H_{ij}, V_{ij})$ be a decgd. 
Then: \begin{itemize} \item[(i)] $\lambda_{(i+r)(j-r)} = \lambda_{ij}, H_{(i+r)(j-r)} = H_{ij}$, and $V_{(i+r)(j-r)} = V_{ij}$; \item[(ii)] $\lambda_{(r-j)(-i)} = \lambda_{ij}^c, H_{(r-1-j)(-i)} = V_{ij}^R$, and $V_{(r-j)(-i-1)} = H_{ij}^R$. \end{itemize} \end{lemma} \begin{proof} Choose a fixed path across $\Gamma$ and choose tableau representatives for each dual equivalence class along the path. Consider a new diagram obtained by replacing each edge in the path by a sequence of edges encoding the chosen tableau. This extends to a unique \emph{ordinary} growth diagram $\Gamma'$, using the recurrence rule. Then the result for decgds follows from Lemma \ref{cylindrical-recurrence} for $\Gamma'$. \end{proof} We say the decgd has \newword{type} $(\lambda_1, \ldots, \lambda_r)$ if the entries of the first superdiagonal are the partitions $\lambda_1, \ldots, \lambda_r$. In particular, the type of the decgd is the same as the type of the chain of dual equivalence classes in its first row. Any path from the main diagonal to the rightmost diagonal gives a chain of dual equivalence classes; on the other hand, by the recurrence condition and the uniqueness of the outermost edge labels, this uniquely specifies the remaining entries of the growth diagram. \begin{figure}[h] \centering \[\xymatrix{ \varnothing \ar[r] & \lambda_1 \ar[r]^-{D_2} & \cdot \ar@{..}[r] &\cdot \ar[r]^-{D_{r-1}} & \cdot \ar[r] & {\scalebox{.3}{\yng(3,3)}} \\ & \varnothing \ar[r]\ar[u] & \lambda_2 &&&\lambda_1^c \ar[u] \ar[r] & {\scalebox{.3}{\yng(3,3)}} \\ && \ar@{}[ul]|{\ddots} \ar@{}[u]|{\vdots} & \lambda_{r-2} \ar@{}[ul]|{\ddots} &&& \ar@{}[ul]|{\ddots} \\ &&& \ar@{}[ul]|{\ddots} \varnothing \ar[r] \ar[u]& \lambda_{r-1} \\ &&&& \varnothing \ar[u] \ar[r] & \lambda_r \\ &&&&& \varnothing \ar[u] }\] \caption{The top row of this decgd is a chain of dual equivalence classes of type $(\lambda_1, \ldots, \lambda_r)$. 
It uniquely determines the remaining entries of the diagram.} \label{fig:decgd} \end{figure} We now prove Lemmas \ref{DE-evac-involution} and \ref{upper-shuffle}. We first describe $\mathrm{ev}_{i+1}$ in terms of decgds. Let $\Gamma$ be the decgd whose $j=0$ row is given by ${\bf D} = (D_1, \ldots, D_r)$. By the definition of evacuation, $\mathrm{ev}_{i+1}({\bf D})$ is the concatenation ${\bf AB}$, where ${\bf A}$ is the chain of labels on the vertical path from $(i+1,-i-1)$ to $(i+1,0)$ and ${\bf B} = (D_{i+2}, \ldots, D_r)$ is the chain of labels on the horizontal path from $(i+1,0)$ to $(r,0)$. \begin{proof}[Proof of Lemma \ref{DE-evac-involution}] Setting $i+1=r$, we have $\mathrm{ev}_r({\bf D}) = {\bf A}$, the path from $(r,-r)$ to $(r,0)$. By Lemma \ref{DE-cylindrical-recurrence}(ii), this sequence is $(D_r^R, \ldots, D_1^R)$. \end{proof} \begin{proof}[Proof of Lemma \ref{upper-shuffle}] Let ${\bf D} = ({\bf W}, D_i, D_{i+1}, {\bf B})$ and ${\bf A} = (D_{\lambda_{i+1}},X,{\bf A'})$. We build a new decgd $\Gamma'$ as follows: we replace ${\bf A}$ by $\mathrm{sh}_1({\bf A})$ (in the same location) and keep ${\bf B}$ unchanged. By the recurrence condition, the remaining entries of the decgd are uniquely determined from $\mathrm{sh}_1({\bf A})$ and ${\bf B}$; by definition, the first row of $\Gamma'$ is $\mathrm{esh}_i({\bf D})$. (See Figure \ref{fig:upper-shuffle}.) Since ${\bf B}$ and ${\bf A'}$ are unchanged in $\Gamma'$, so is ${\bf W}^R$ and therefore (by Lemma \ref{DE-cylindrical-recurrence}(ii)) ${\bf W}$. In particular, we see that ${\bf D}$ and $\mathrm{esh}_i({\bf D})$ agree outside the $i,i+1$ spots. \begin{figure}[h] \centering \includegraphics[scale=0.7]{upper-shuffle-edit.pdf} \hspace{0.5in} \includegraphics[scale=0.7]{upper-shuffle-2} \caption{Computing $\mathrm{esh}_i$ using a pair of decgds $\Gamma, \Gamma'$. Only the central portion changes. Left: the decgd $\Gamma$.
Right: the central portion of the decgd $\Gamma'$.} \label{fig:upper-shuffle} \end{figure} With notation as in Figure \ref{fig:upper-shuffle}, we have $\mathrm{sh}_1({\bf A}) = (D_{\lambda_i}, Y, {\bf A}')$. Thus the central portion of $\Gamma'$ is the second decgd pictured. To compute $(D'_i, D'_{i+1})$, let $A'$ be the dual equivalence class obtained by concatenating the classes of the chain ${\bf A'}$ and let $\tau$ be its rectification shape. We have: \begin{align*} \mathrm{sh}_2\mathrm{sh}_1(D_\tau, D_i, D_{i+1}) &= (D_{\lambda_i},Y,A'), \hspace{0.5cm} \text{(from the first decgd)}\\ \mathrm{sh}_2\mathrm{sh}_1(D_\tau, D'_{i+1}, D'_i) &= (D_{\lambda_{i+1}},X,A') \hspace{0.3cm} \text{(from the second decgd)} \end{align*} This gives the desired relation $(D_\tau, D'_{i+1}, D'_i) = \mathrm{sh}_1 \mathrm{sh}_2 \mathrm{sh}_1 \mathrm{sh}_2 \mathrm{sh}_1(D_\tau, D_i, D_{i+1})$. \end{proof} \section{Schubert problems over $\Mbar{r}(\mathbb{R})$} \label{sec:schubert-real} By Theorem \ref{box-lift}, we may think of $\mathcal{S}(\lambda_\bullet)$ as having an extra marked point $z$, labeled by a single box, parametrizing the last point of ramification, which gives a map $\mathcal{S}(\lambda_\bullet) \to \mathcal{C}$. We recall our results for stable curves defined over $\mathbb{R}$: \begin{cor} \label{basic-real-topology} Let $[C] \in \Mbar{r}(\mathbb{R})$ and let $S = \mathcal{S}(\lambda_\bullet)\big|_{[C]}$ be the fiber over $[C]$. We have a finite flat map $S \to C$. \begin{itemize} \item[(i)] The map $S \to C$ is unramified over the real points of $C$. In particular, the only real singular points of $S$ are irreducible components meeting at simple nodes. \item[(ii)] The map $S(\mathbb{R}) \to C(\mathbb{R})$ is a covering map. \item[(iii)] For every irreducible component $S' \subseteq S$, $S'(\mathbb{R})$ is a smooth manifold of (real) dimension 1. In particular, $S'(\mathbb{R})$ is nonempty. 
\end{itemize} \end{cor} If $C$ has a single component, $S(\mathbb{R})$ is smooth. In particular, as $C$ varies over a maximal cell of $\Mbar{r}(\mathbb{R})$, the real topology of $S(\mathbb{R})$ (notably the number of connected components) does not change. We give a combinatorial interpretation of the connected components of $S(\mathbb{R})$ below. We remark that $S(\mathbb{C})$ need not be connected (see Example \ref{exa:disconnected}). Also, we have not ruled out the possibility that $S$ may have complex conjugate pairs of singularities. We note that (if the generic fiber is smooth) a generic singular fiber of $\mathcal{S}(\lambda_\bullet)$ over a complex point $[C] \in \Mo{r}$ should have only one singularity. But if $[C] \in \Mo{r}(\mathbb{R})$, there must be at least two distinct singular points. We have the following conjecture: \begin{conj} Let $X \subseteq \Mo{r}$ be the closure of the locus where the fiber of $\mathcal{S}(\lambda_\bullet)$ has at least 2 singularities. Then $\mathrm{codim}(X) \geq 2$. In particular, $\Mo{r}(\mathbb{R}) - X(\mathbb{R})$ is connected, so every fiber of $\mathcal{S}(\lambda_\bullet)$ over $\Mo{r}(\mathbb{R}) - X(\mathbb{R})$ has the same \emph{complex} topology. \end{conj} In certain cases, there are no singularities: \begin{exa} Let $\lambda_\bullet = \{{\scalebox{.5}{\yng(1)}}, {\scalebox{.5}{\yng(1)}}, {\scalebox{.5}{\yng(1)}}, {\scalebox{.5}{\yng(1)}}, {\scalebox{.5}{\yng(1)}}\}$ and consider $\mathcal{S}(\lambda_\bullet) \subseteq \mathcal{G}(2,5) \to \Mbar{5}$. Let $[C] \in \Mo{5}(\mathbb{R})$; then the (complex) curve $S = \mathcal{S}(\lambda_\bullet)|_{[C]}$ is smooth. To show this, we compute in coordinates: we set $p_1, p_2, p_3, p_4, p_5$ to be $0,1,\infty,z,w$ and work over $\Mo{5} \cong \mathbb{A}^2_{z,w} - \mathbb{V}(zw(z-1)(w-1)(w-z))$. 
For real $z,w$, a singular point $s \in S$ cannot satisfy a stricter Schubert condition at any marked point, since the covering $S \to \mathbb{P}^1$ must send $s$ to a non-real point. So we may work in the $\lambda = {\scalebox{.5}{\yng(1)}}$ open Schubert cell for the flag $\mathscr{F}(\infty)$: \[ \left( \begin{array}{ccccc} 1 & a & 0 & b & c \\ 0 & 0 & 1 & d & e \end{array} \right) \subseteq G(2,5).\] We may eliminate $b,c,e$ from the saturated ideal for the remaining four Schubert conditions, and are left with a single equation $f_{z,w}(a,d) = 0$, giving us a plane curve in $\mathbb{A}^2_{a,d}$. We consider the locus \[X = \{\mathrm{disc}_a(\mathrm{disc}_d(f_{z,w})) = 0 \} \subseteq \mathbb{A}^2_{z,w}.\] The discriminant $\mathrm{disc}_d(f)$ gives the ramification locus of $S$ under the projection $\mathbb{A}^2_{a,d} \to \mathbb{A}^1_a$; then the $a$-discriminant gives the locus where the ramification index is at least $2$. In particular, this includes any singularity, so $X$ includes any $(z,w)$ for which $S$ is a singular curve. The equation for $X$ is: \begin{align*} &2415919104 \big(z(z-1)w(w-1)(w-z)\big)^4\\ & (w^2-w+1)(z^2-z+1)(z^2-zw+w^2) \\ &(1 - w + w^2 - z - w z + z^2) (w^2 - w z - w^2 z + z^2 - w z^2 + w^2 z^2) = 0. \end{align*} The factor on the first line is a unit; the remaining factors have real solutions only at $z=w=0$ and $z=w=1$ (which are not in the open set $\Mo{5}$). \end{exa} \begin{rmk} The discriminant above is a sum of squares. For example, the last two factors are \begin{align*} 1 - w + w^2 - z - w z + z^2 &= \tfrac{1}{2}\big( (w-z)^2 + (w-1)^2 + (z-1)^2 \big), \\ w^2 - w z - w^2 z + z^2 - w z^2 + w^2 z^2 &= \tfrac{1}{2}\big( (w-z)^2 + (wz - w)^2 + (wz-z)^2\big). \end{align*} Sottile has conjectured that for zero-dimensional Schubert problems, the discriminants are always sums of squares of this form (see e.g. Conjecture 7.8 in \cite{Sottile}), and are in fact strictly positive.
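The two sum-of-squares identities displayed above can be verified mechanically with exact integer arithmetic. The following Python sketch is our own check (not part of the paper): each side is a polynomial of degree at most $2$ in $z$ and in $w$ separately, so agreement on a $7 \times 7$ integer grid proves the identity.

```python
# Verify the two sum-of-squares identities by comparing both sides on an
# integer grid. We clear the factor 1/2 by comparing 2*LHS with the sum of
# squares; both sides have degree <= 2 in z and in w, so 7 values of each
# variable are more than enough to determine them.

def check(lhs, squares):
    for z in range(-3, 4):
        for w in range(-3, 4):
            if 2 * lhs(z, w) != sum(s(z, w) ** 2 for s in squares):
                return False
    return True

first = check(
    lambda z, w: 1 - w + w**2 - z - w*z + z**2,
    [lambda z, w: w - z, lambda z, w: w - 1, lambda z, w: z - 1],
)
second = check(
    lambda z, w: w**2 - w*z - w**2*z + z**2 - w*z**2 + w**2*z**2,
    [lambda z, w: w - z, lambda z, w: w*z - w, lambda z, w: w*z - z],
)
assert first and second
```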
If the same holds for one-dimensional Schubert problems, it would follow that the fibers of $\mathcal{S}(\lambda_\bullet)$ over $\Mo{r}(\mathbb{R})$ are smooth algebraic curves. We also note that each quadratic factor is the pullback, by one of the five possible forgetting maps $\Mbar{5} \to \Mbar{4}$, of the pair of points on $\Mbar{4}$ having symmetry group $A_4$. It is interesting to note that the nonreduced (complex) fibers of the zero-dimensional family $\mathcal{S}({\scalebox{.5}{\yng(1)}}^4) \subset \mathcal{G}(2,4)$ over $\Mbar{4}$ also occur over this pair of points. \end{rmk} \begin{conj} Let $[C] \in \Mo{r}(\mathbb{R})$. Then the (complex) fiber $\mathcal{S}(\lambda_\bullet)|_{[C]}$ is smooth, for any $\lambda_\bullet$. \end{conj} In this case the complex topology of $\mathcal{S}(\lambda_\bullet)|_{[C]}$ will not change over any maximal cell of $\Mo{r}(\mathbb{R})$. \subsection{Connected components of real fibers} We now recall Speyer's description of the topology of zero-dimensional Schubert problems $\mathcal{S}(\lambda_\bullet)(\mathbb{R})$, as covering spaces of $\Mbar{r}(\mathbb{R})$. Let $X$ be a maximal cell of $\Mbar{r}(\mathbb{R})$, corresponding to a circular ordering $\sigma(1), \ldots, \sigma(r)$ of the marked points. Let $Y \subset \mathcal{S}(\lambda_\bullet)(\mathbb{R})$ be a cell lying over $X$. Consider an arc in $\overline{X}$ corresponding to a degeneration of $\mathbb{P}^1$ to a curve $C_1 \cup C_2$, where $C_1$ contains $\sigma(i), \ldots, \sigma(j)$, and $C_2$ contains $\sigma(j+1), \ldots, \sigma(i-1)$ (in circular order). Let $S$ be the limit fiber of $\mathcal{S}(\lambda_\bullet)$ and $y = S \cap \overline{Y}$ the point obtained by lifting the arc to $\overline{Y}$. By Theorem \ref{thm:recall-Sp14}, $y$ corresponds to some node labeling on $C_1 \cup C_2$; we denote by $\lambda_{j,-i}$ the partition on the $C_2$ side. These partitions turn out to be organized in a dual equivalence growth diagram. 
\begin{thm}[Theorem 1.6 and Proposition 7.6 of \cite{Sp}] \label{speyer-covering-space} Let $\sum |\lambda_i| = k(n-k)$. Then $\mathcal{S}(\lambda_\bullet)(\mathbb{R})$ is a covering space of $\Mbar{r}(\mathbb{R})$, so we may lift the CW-complex structure of $\Mbar{r}(\mathbb{R})$ to $\mathcal{S}(\lambda_\bullet)(\mathbb{R})$. In particular: \begin{itemize} \item[(i)] Let $X$ be a maximal cell of $\Mbar{r}(\mathbb{R})$, corresponding to a circular ordering $(\sigma(1), \ldots, \sigma(r))$ of the marked points. The cells $Y$ of $\mathcal{S}(\lambda_\bullet)(\mathbb{R})$ lying over $X$ are indexed by decgds $\Gamma$ of type $(\lambda_{\sigma(1)}, \ldots, \lambda_{\sigma(r)})$. \item[(ii)] (Wall-crossing) Let $Y$ be a cell lying over $X$ and $\Gamma$ the corresponding decgd. Let $X'$ be the cell obtained by reversing the interval $\sigma(i)\sigma(i+1) \cdots \sigma(j)$ in the circular ordering, and $Y'$ the corresponding cell in $\mathcal{S}(\lambda_\bullet)(\mathbb{R})$. Let $A$ be the triangular region of $\Gamma$ with vertices $(i,-i), (j,-j), (j,-i)$ and $B$ be the ``opposite'' triangle with vertices $(r+j,-i),(j,r-i),(j,-i)$ (see Figure \ref{fig:wall-crossing}). The decgd $\Gamma'$ for $Y'$ is obtained by transposing $A$, leaving $B$ unchanged, deleting all other entries, and refilling them using the decgd recurrence condition. \end{itemize} \end{thm} \begin{figure}[h] \centering \includegraphics[scale=1.1]{wall-crossing.pdf} \caption{The wall-crossing rule for decgds (shown truncated at top and bottom): Transpose $A$ and leave $B$ unchanged; refill using the decgd recurrence rule.} \label{fig:wall-crossing} \end{figure} We note that a path from the left edge to the right edge of $\Gamma$ corresponds to a choice of caterpillar curve $[\widetilde{C}]$ in the boundary of $X$. 
The resulting chain of partitions forms the node labeling corresponding to the point $y \in \overline{Y}$ lying over $[\widetilde{C}]$; in fact Theorem \ref{speyer-covering-space} says $y$ has the additional data of the chain of dual equivalence classes. We now return to the case of curves. By Theorem \ref{box-lift}, when $\sum |\lambda_i| = k(n-k) - 1$, the total space of $\mathcal{S}(\lambda_\bullet)$ over $\Mbar{r}$ is isomorphic to the total space of $\mathcal{S}(\lambda_\bullet\ ; \ \ybox_z)$ over $\Mbar{r+1}$. Since we wish to think of this space as fibered in curves over $\Mbar{r}$, we adapt the description from Theorem \ref{speyer-covering-space}. For simplicity, we take the circular ordering $\sigma(i) = i$. Let $\mathrm{DECGD}({\scalebox{.5}{\yng(1)}}, \lambda_1, \ldots, \lambda_r)$ be the set of decgds of type $({\scalebox{.5}{\yng(1)}}, \lambda_1, \ldots, \lambda_r)$. Let \[\pi : \mathrm{DECGD}({\scalebox{.5}{\yng(1)}}, \lambda_1, \ldots, \lambda_r) \to \mathrm{DECGD}({\scalebox{.5}{\yng(1)}}, \lambda_1, \ldots, \lambda_r)\] be the result of successively wall-crossing ${\scalebox{.5}{\yng(1)}}$ past each of the $\lambda_i$'s ($i=1, \ldots, r$). \begin{thm} \label{thm:decgd-curve-orbits} Let $\sum |\lambda_i| = k(n-k)-1$. Let $X$ be the maximal cell of $\Mo{r}(\mathbb{R})$ corresponding to the circular ordering $1, 2, \ldots, r$, and let $S = \mathcal{S}(\lambda_\bullet)|_X$. The connected components of $S(\mathbb{R})$ are in bijection with the orbits of $\pi$; each component is homeomorphic to $S^1 \times X$. \end{thm} \begin{proof} Let $[C] \in X$. The fiber $C \subseteq \Mbar{r+1}$ passes through $r$ maximal cells of $\Mbar{r+1}$, corresponding to the possible placements of the $(r+1)$-st marked point. The decgds labeling these cells for the covering space $\mathcal{S}(\lambda_\bullet\ ; \ \ybox_z)(\mathbb{R}) \to \Mbar{r+1}(\mathbb{R})$ have type $(\lambda_1, \ldots, \lambda_i, {\scalebox{.5}{\yng(1)}}, \lambda_{i+1}, \ldots, \lambda_r)$. 
When the ${\scalebox{.5}{\yng(1)}}$ switches places with $p_i$, we apply the wall-crossing procedure. Thus, when ${\scalebox{.5}{\yng(1)}}$ travels around the $\mathbb{R}\mathbb{P}^1$, the decgd changes by $\pi$. Since $S(\mathbb{R})|_{[C]}$ is a union of circles and the topology does not change as $[C]$ varies over $X$, the homeomorphism follows. \end{proof} A natural question is whether $S(\mathbb{R})$ has the same number of connected components over every cell $X$. We address this and related questions in the next section. \subsection{Caterpillar curves and desingularizations} We give a different combinatorial description with two advantages: first, it is more amenable to computation; second, it makes it easier to compare $\mathcal{S}(\lambda_\bullet)$ over different cells of $\Mo{r}(\mathbb{R})$. It will also connect the operator $\pi$ of Theorem \ref{thm:decgd-curve-orbits} to promotion and evacuation of tableaux. The idea is to pass to a caterpillar curve $\widetilde{C}$ in the boundary of the maximal cell. We describe the covering space $\widetilde{S}(\mathbb{R}) \to \widetilde{C}(\mathbb{R})$ in terms of chains of dual equivalence classes. For the remainder of this section, let $\widetilde{C}$ be the caterpillar curve with marked points, from left to right, $p_1, \ldots, p_r$. Let the nodes be $q_{23}, \ldots, q_{(r-2)(r-1)}$, and let $q_{12} = p_1$ and $q_{(r-1)r} = p_r$. For $i = 2, \ldots, r-1$, let $\ell_i$ be the arc from $q_{(i-1)i}$ to $q_{i(i+1)}$ \emph{through} $p_i$, and let $u_i$ be the arc \emph{opposite} $p_i$. We define a covering space $S_{DE} \to \widetilde{C}(\mathbb{R})$ as follows: \begin{enumerate} \item[(i)] The fiber of $S_{DE}$ over $q_{i(i+1)}$ is indexed by the set $X_\varnothing^{{\scalebox{.3}{\yng(3,3)}}}(\lambda_1, \ldots, \lambda_i,{\scalebox{.5}{\yng(1)}},\lambda_{i+1}, \ldots, \lambda_r)$. 
\item[(ii)] The arcs covering $\ell_i$ connect ${\bf D}$ to $\mathrm{esh}_i({\bf D})$, where \[\mathrm{esh}_i : X_\varnothing^{{\scalebox{.3}{\yng(3,3)}}}(\lambda_1, \ldots, {\scalebox{.5}{\yng(1)}},\lambda_i, \ldots, \lambda_r) \to X_\varnothing^{{\scalebox{.3}{\yng(3,3)}}}(\lambda_1, \ldots, \lambda_i,{\scalebox{.5}{\yng(1)}}, \ldots, \lambda_r)\] is the $i$-th evacuation-shuffle. \item[(iii)] The arcs covering $u_i$ connect ${\bf D}$ to $\mathrm{sh}_i({\bf D})$, where \[\mathrm{sh}_i : X_\varnothing^{{\scalebox{.3}{\yng(3,3)}}}(\lambda_1, \ldots, {\scalebox{.5}{\yng(1)}},\lambda_i, \ldots, \lambda_r) \to X_\varnothing^{{\scalebox{.3}{\yng(3,3)}}}(\lambda_1, \ldots, \lambda_i,{\scalebox{.5}{\yng(1)}}, \ldots, \lambda_r)\] is the $i$-th shuffle. \end{enumerate} (Note: we do not explicitly label the fibers over $p_2, \ldots, p_{r-1}$.) \\ See Figure \ref{fig:covering-space} for a possible such covering, along with the smooth curves obtained by desingularizing the caterpillar curve. \begin{thm} \label{thm:DE-caterpillar-covering} Let $\widetilde{S} = \mathcal{S}(\lambda_\bullet)|_{[\widetilde{C}]}$. Then $S_{DE} \cong \widetilde{S}(\mathbb{R})$ as covering spaces of $\widetilde{C}(\mathbb{R})$. \end{thm} \begin{proof} Let $X$ be the cell of $\Mo{r}(\mathbb{R})$ containing $\widetilde{C}$ in its boundary, corresponding to the circular ordering $12\cdots r$. For $i = 1, \ldots, r-1$, let $X_{i(i+1)}$ be the cell of $\Mo{r+1}(\mathbb{R})$ over $X$ in which ${\scalebox{.5}{\yng(1)}}$ is between $\lambda_i$ and $\lambda_{i+1}$. Finally, let $X_{r1}$ be the cell in which ${\scalebox{.5}{\yng(1)}}$ is between $\lambda_r$ and $\lambda_1$. We first describe the indexing of the fiber over $q_{i(i+1)}$. Let $s \in \pi^{-1}(q_{i(i+1)})$. There are many cells of $\Mo{r+1}(\mathbb{R})$ containing $q_{i(i+1)}$ in their boundary (for example, $X_{i(i+1)}$ and $X_{r1}$). Let $X^*$ be any such cell and $Y$ the unique cell lying over $X^*$ containing $s$ in its boundary.
Let $\Gamma$ be the decgd corresponding to $Y$. There is a unique path through $\Gamma$ that yields a chain of dual equivalence classes ${\bf D}$ of type $(\lambda_1, \ldots, \lambda_i, {\scalebox{.5}{\yng(1)}}, \lambda_{i+1}, \ldots, \lambda_r)$. We label $s$ by ${\bf D}$. (By Theorem \ref{speyer-covering-space}, ${\bf D}$ does not depend on our choice of cell.) This gives (i). We next compute the effect of lifting an arc from $q_{(i-1)i}$ to $q_{i(i+1)}$, starting from $s$. Let $Y_{(i-1)i}$ be the cell covering $X_{(i-1)i}$ corresponding to the decgd $\Gamma$ whose first row is ${\bf D}$. Let $Y_{i(i+1)}$ be the cell obtained by following the arc $\ell_i$ (crossing a wall when ${\scalebox{.5}{\yng(1)}}$ collides with $\lambda_i$). Let the decgd for $Y_{i(i+1)}$ be $\Gamma'$. From the wall-crossing rule of Theorem \ref{speyer-covering-space}, $\Gamma'$ is obtained by transposing the portion of $\Gamma$ consisting of ${\scalebox{.5}{\yng(1)}}, \lambda_i$ and the partition covering the two. By Lemma \ref{upper-shuffle}, the $\varnothing \longrightarrow \lambda_1 \longrightarrow \cdots$ row of $\Gamma'$ is $\mathrm{esh}_i({\bf D})$. This gives (ii). Now let $Y_{r1}$ be the unique cell covering $X_{r1}$ containing $s$ in its boundary, and let $\Gamma$ be its decgd. Let ${\bf D}'$ be the label on the point obtained by lifting the arc $u_i$ to $s$. 
The lift of $u_i$ does not cross a wall (it lies entirely in $\overline{Y_{r1}}$), so ${\bf D, D'}$ both appear in $\Gamma$ as the paths: \[\xymatrix{ \varnothing \ar[r] & {\scalebox{.5}{\yng(1)}} \ar@{..}[r] &\ar@{..}[r] &\ar@{..}[r] &\ar@{..}[r] & \cdot \ar[r]^{D_i} & \cdot \ar@{--}[r] &\cdot \ar[r]^-{D_{r-1}} & \lambda_r^c \ar[r] & {\scalebox{.3}{\yng(3,3)}} \\ & \varnothing \ar[u] \ar[r] & \lambda_1 \ar[r]^-{D_2} & \cdot \ar@{--}[r] &\cdot \ar[r]^-{D_{i-1}} & \cdot \ar[u]^{D_{\scalebox{.5}{\yng(1)}}} \ar[r]^{D_i'} & \cdot \ar[u]_{D'_{\scalebox{.5}{\yng(1)}}} \ar@{..}[r] & \ar@{..}[r] & \ar@{..}[r] & {\scalebox{.5}{\yng(1)}}^c\ar[u] \ar[r] & {\scalebox{.3}{\yng(3,3)}}, }\] so we see that ${\bf D'} = \mathrm{sh}_i({\bf D})$, which is (iii). \end{proof} We next compare $\widetilde{\pi} : \widetilde{S}(\mathbb{R}) \to \widetilde{C}(\mathbb{R})$ to a nearby desingularization $\pi : S(\mathbb{R}) \to C(\mathbb{R}) \cong \mathbb{R}\mathbb{P}^1$, with $C \in \Mo{r}(\mathbb{R})$. Let $\gamma$ be the loop around the circle $C(\mathbb{R})$, starting from $p_1$ and traversing $p_2$ last. Inside $\Mbar{r+1}$, $\gamma$ is homotopic to a unique sequence of arcs around $\widetilde{C}(\mathbb{R})$, as in Figure \ref{fig:desings}. Let $\omega$ be the corresponding composition of shuffles and evacuation-shuffles. The monodromy action of $\pi_1(C(\mathbb{R}))$ on $\pi^{-1}(p_1)$ is equivalent to the action of $\omega$ on $\widetilde{\pi}^{-1}(p_1) \subset \widetilde{S}(\mathbb{R})$. \begin{figure}[h] \centering \includegraphics[scale=0.6]{desing-1.pdf} \includegraphics[scale=0.6]{desing-2.pdf} \caption{Two desingularizations of a caterpillar curve, with the associated operations $\omega$. For $i=2, \ldots, 5$, the marked point $p_i$ is on the arc labeled $\mathrm{esh}_i$.} \label{fig:desings} \end{figure} It is convenient to reindex the fiber of $\widetilde{S}$ over $p_1$ by $X_{{\scalebox{.3}{\yng(1)}}}^{{\scalebox{.3}{\yng(3,3)}}}(\lambda_1, \ldots, \lambda_r)$. 
Note that there is a canonical bijection \[\iota : X_{{\scalebox{.3}{\yng(1)}}}^{{\scalebox{.3}{\yng(3,3)}}}(\lambda_1, \ldots, \lambda_r) \to X_\varnothing^{{\scalebox{.3}{\yng(3,3)}}}({\scalebox{.5}{\yng(1)}},\lambda_1, \ldots, \lambda_r),\] and that the two operations \[\mathrm{sh}_1, \mathrm{esh}_1 : X_\varnothing^{{\scalebox{.3}{\yng(3,3)}}}({\scalebox{.5}{\yng(1)}}, \lambda_1, \ldots, \lambda_r) \to X_\varnothing^{{\scalebox{.3}{\yng(3,3)}}}(\lambda_1, {\scalebox{.5}{\yng(1)}}, \ldots, \lambda_r)\] are the same. We deduce: \begin{cor} \label{cor:connected-components-desing} Let $X$ be a maximal cell of $\Mo{r}(\mathbb{R})$, containing $[\widetilde{C}]$ in its boundary. Let $[C] \in X$ be any desingularization and let $S = S(\lambda_\bullet)|_{[C]}$ be the Schubert curve over $[C]$. Let \[\omega_X : X_{{\scalebox{.3}{\yng(1)}}}^{{\scalebox{.3}{\yng(3,3)}}}(\lambda_1, \ldots, \lambda_r) \to X_{{\scalebox{.3}{\yng(1)}}}^{{\scalebox{.3}{\yng(3,3)}}}(\lambda_1, \ldots, \lambda_r)\] be the composition of shuffles and evacuation-shuffles corresponding to the loop around $C(\mathbb{R})$. There is a bijection \[\left\{\begin{array}{c}\text{components} \\ \text{of } S(\mathbb{R})\end{array}\right\} \longleftrightarrow X_{{\scalebox{.3}{\yng(1)}}}^{{\scalebox{.3}{\yng(3,3)}}}(\lambda_1, \ldots, \lambda_r) / \omega_X.\] \end{cor} \begin{cor} \label{cor:promotion} If $X$ is the cell corresponding to the circular ordering $p_1, \ldots, p_r$, the connected components of $\mathcal{S}(\lambda_\bullet)(\mathbb{R})|_X$ are the orbits of \[\omega_X = \iota^{-1} \circ \mathrm{esh}_1 \cdots \mathrm{esh}_{r-1} \mathrm{sh}_{r-1} \cdots \mathrm{sh}_{1} \circ \iota.\] In particular, the connected components of $\mathcal{S}({\scalebox{.5}{\yng(1)}}, \ldots, {\scalebox{.5}{\yng(1)}})(\mathbb{R})|_X$ are in bijection with the orbits of tableau promotion on $\mathrm{SYT}({\scalebox{.3}{\yng(3,3)}})$.
\end{cor} \begin{proof} Tableau promotion is the composition $\mathrm{sh}_{r-1} \cdots \mathrm{sh}_1$. (Recall that under the identification of $X_{\scalebox{.3}{\yng(1)}}^{\scalebox{.3}{\yng(3,3)}}({\scalebox{.5}{\yng(1)}}, \ldots, {\scalebox{.5}{\yng(1)}})$ with $SYT({\scalebox{.3}{\yng(3,3)}})$, $\mathrm{esh}_i$ becomes trivial.) \end{proof} We remark that the identification above was contingent on the choice of caterpillar curve $\widetilde{C}$ in the boundary of $X$. (The statement in Theorem \ref{thm:decgd-curve-orbits} is canonical for the entire cell, though the connection to tableau promotion is less apparent.) A different caterpillar curve $\widetilde{C}'$ in the boundary of $X$, with points $p_{\sigma(1)}, \ldots, p_{\sigma(r)}$ from left to right, yields an operator $\omega'$ that differs from $\omega$ by an intertwining operator \[\psi : X_{{\scalebox{.3}{\yng(1)}}}^{{\scalebox{.3}{\yng(3,3)}}}(\lambda_1, \ldots, \lambda_r) \to X_{\scalebox{.3}{\yng(1)}}^{{\scalebox{.3}{\yng(3,3)}}}(\lambda_{\sigma(1)}, \ldots, \lambda_{\sigma(r)}).\] The function $\psi$ is a sequence of shuffles and evacuation-shuffles, corresponding to changing paths in a decgd. We do not describe $\psi$ explicitly. The advantage of Corollary \ref{cor:connected-components-desing} is that we may compare different cells $X_i$ by desingularizing $\widetilde{C}$ in different ways. We have: \begin{cor} Let $\eta(X)$ be the number of connected components of $\mathcal{S}(\lambda_\bullet)(\mathbb{R})|_X$. For any two maximal cells $X, X'$ of $\Mo{r}(\mathbb{R})$, $\eta(X) \equiv \eta(X')\ \mathrm{mod} \ 2.$ \end{cor} \begin{proof} We may assume $X$ and $X'$ share a wall and that $\widetilde{C}$ is in the closure of this wall. Then the operations $\omega_X, \omega_{X'}$ are reorderings of the same set of bijections (each $\mathrm{esh}_i$ and $\mathrm{sh}_i$ appears once). 
Thus, as permutations of $X_{{\scalebox{.3}{\yng(1)}}}^{{\scalebox{.3}{\yng(3,3)}}}(\lambda_1, \ldots, \lambda_r)$, $\omega_X$ and $\omega_{X'}$ have the same sign. This determines the parity of the number of orbits, since a permutation of an $N$-element set with $c$ orbits has sign $(-1)^{N-c}$. \end{proof} In general, $\eta(X)$ and $\eta(X')$ need not be equal, as the following example shows: \begin{exa} \label{exa:minimal-counterex} Let $\lambda_\bullet = \big\{{\tiny \yng(2), \yng(2,1), \yng(3,1), \yng(3,2) }\big\}$ and consider $\mathcal{S}(\lambda_\bullet) \subseteq \mathcal{G}(3,8)$ over $\Mbar{4}$. Let $X, X', X'' \subseteq \Mo{4}(\mathbb{R})$ be the cells corresponding to the circular orderings $1234, 1243, 1324$. Then $\eta(X) = 3$, but $\eta(X') = \eta(X'') = 1$. \end{exa} The absence of smaller examples is explained in part by the following. \begin{lemma} \label{lem:mult-free-transpositions} Let the circular orderings for $X, X'$ differ by exchanging two adjacent points $p_i, p_j$, and suppose the product $\lambda_i \cdot \lambda_j$ in $H^*(G(k,n))$ is multiplicity-free, that is, $c_{\lambda_i \lambda_j}^\nu \leq 1$ for all $\nu$. Then $\eta(X) = \eta(X')$. \end{lemma} \begin{proof} We may assume $i=1$, $j=2$ and the circular ordering for $X$ is $123\cdots r$. Let $\omega_X, \omega_{X'}$ be the bijections as in Corollary \ref{cor:connected-components-desing} corresponding to the loops for $X,X'$: \begin{align*} \omega_X &= \mathrm{esh}_2 \circ \mathrm{esh}_3 \circ \cdots \circ \mathrm{esh}_{r-1} \circ \mathrm{sh}_{r-1} \circ \cdots \circ \mathrm{sh}_3 \circ \mathrm{sh}_2, \\ \omega_{X'} &= \mathrm{sh}_2 \circ \mathrm{esh}_3 \circ \cdots \circ \mathrm{esh}_{r-1} \circ \mathrm{sh}_{r-1} \circ \cdots \circ \mathrm{sh}_3 \circ \mathrm{esh}_2. \end{align*} We see that $\omega_X$ is conjugate to $\omega_{X'} (\mathrm{esh}_2 \mathrm{sh}_2)^2$, and $\mathrm{esh}_2 \mathrm{sh}_2$ corresponds to the loop around the first component of the caterpillar curve.
Let $s \in \widetilde{\pi}^{-1}(p_1) \subset \widetilde{S}(\mathbb{R})$ and ${\bf D}$ the corresponding chain of dual equivalence classes. Let $\nu$ be the node labeling with $s \in S_\nu$ and $\nu(q_{23}, C_1)$ the node label of $q_{23}$ on the first component. Since $\mathrm{esh}_2 \mathrm{sh}_2$ only affects the ${\scalebox{.5}{\yng(1)}}, \lambda_1, \lambda_2$ dual equivalence classes, we truncate ${\bf D}$ and work in the set $X_\varnothing^{{\scalebox{.3}{\yng(3,3)}}}({\scalebox{.5}{\yng(1)}},\lambda_1, \lambda_2,\nu(q_{23},C_1))$. By the Pieri rule and our assumption on $\lambda_1$ and $\lambda_2$, $X_\varnothing^{{\scalebox{.3}{\yng(3,3)}}}({\scalebox{.5}{\yng(1)}},\lambda_1, \lambda_2,\nu(q_{23},C_1))$ has cardinality $\leq 2$, so $(\mathrm{esh}_2 \mathrm{sh}_2)^2 = \mathrm{id}$. This holds for all points $s$, so $\omega_X$ and $\omega_{X'}$ are conjugate, hence have the same orbit structure. \end{proof} \begin{cor}\label{cor:mult-free} Suppose every pairwise product $\lambda_i \cdot \lambda_j$ in $H^*(G(k,n))$ is multiplicity-free. Then the operators $\omega$ for different circular orderings are all conjugate. In particular, the number of real connected components of $S(\mathbb{R})$ does not depend on the ordering of the $p_\bullet$. \end{cor} As an example, if $\alpha, \beta$ are rectangular partitions, then $\alpha \cdot \beta$ is known to be multiplicity-free. We may make a slightly stronger statement: \begin{cor} Let the circular orderings of $X, X'$ differ by any permutation $\sigma$, where $\sigma$ fixes all non-rectangular partitions and does not move any partition past a non-rectangular partition. Then $\eta(X) = \eta(X')$. \end{cor} \begin{proof} $X$ and $X'$ are connected by a sequence of transposition wall-crossings as in Lemma \ref{lem:mult-free-transpositions}. \end{proof} \begin{cor} If all or all but one $\lambda_i$ are rectangular, $\eta(X)$ is the same for all $X$.
\end{cor} We also note that Corollary \ref{cor:mult-free} applies to any Schubert problem on $G(2,n)$. Certain other cases also trivially have $\eta(X) = \eta(X')$, such as when two identical partitions switch places. It is interesting to point out a smaller candidate counterexample $\lambda_\bullet = \big\{{\tiny \yng(2), \yng(2), \yng(2,1), \yng(3,1)}\big\}$, with $\mathcal{S}(\lambda_\bullet) \subset \mathcal{G}(3,7)$. Here, $\eta(X) = 2$ for all circular orderings, but the permutations $\omega_X$ and $\omega_{X'}$ for the circular orderings $1234$ and $1324$ are not conjugate: $X_{{\scalebox{.3}{\yng(1)}}}^{{\scalebox{.3}{\yng(3,3)}}}(\lambda_\bullet)$ has $8$ elements, which are partitioned into two orbits of sizes $3,5$ by $\omega_X$ and $4,4$ by $\omega_{X'}$. \section{Connections to K-theory} \label{sec:k-theory} \subsection{Basic facts} The classes of the Schubert structure sheaves $[\mathcal{O}_\lambda] := [\mathcal{O}_{\Omega(\lambda)}]$ form an additive basis for the K-theory of $G(k,n)$. We write $k_{\lambda_\bullet}^\nu$ for the absolute value of the coefficient of $[\mathcal{O}_\nu]$ in the product $\prod_i [\mathcal{O}_{\lambda_i}]$. This is zero unless $|\nu| \geq \sum |\lambda_i|$, and the leading terms agree with cohomology: \[c_{\lambda_\bullet}^\nu = k_{\lambda_\bullet}^\nu \text{ when } |\nu| = \sum |\lambda_i|.\] The coefficients alternate in sign: \begin{thm}\emph{\cite{Bu02}} The structure constant $k_{\lambda_\bullet}^\nu$ appears with sign $\displaystyle{(-1)^{|\nu| - \sum |\lambda_i|}}.$ \end{thm} \noindent We note that a Schubert variety for ${\scalebox{.5}{\yng(1)}}^c$ is isomorphic to $\mathbb{P}^1$; in particular, the Euler characteristic is $\chi(\mathcal{O}_{{\scalebox{.3}{\yng(1)}}^c}) = 1$. \begin{lemma} \label{lem:complementary-ktheory} Suppose $|\mu| + |\lambda| = k(n-k) - 1$. 
Then $k_{\lambda \mu}^{{\scalebox{.3}{\yng(3,3)}}} = 0$ and $[\mathcal{O}_\lambda] \cdot [\mathcal{O}_\mu] = k_{\lambda \mu}^{{\scalebox{.3}{\yng(1)}}^c} [\mathcal{O}_{{\scalebox{.3}{\yng(1)}}^c}].$ \end{lemma} \begin{proof} We may write \[[\mathcal{O}_\lambda] \cdot [\mathcal{O}_\mu] = k_{\lambda \mu}^{{\scalebox{.3}{\yng(1)}}^c} [\mathcal{O}_{{\scalebox{.3}{\yng(1)}}^c}] - k_{\lambda \mu}^{{\scalebox{.3}{\yng(3,3)}}} [\mathcal{O}_{{\scalebox{.3}{\yng(3,3)}}}], \text{ and so } \chi(\mathcal{O}_S) = k_{\lambda \mu}^{{\scalebox{.3}{\yng(1)}}^c} - k_{\lambda \mu}^{{\scalebox{.3}{\yng(3,3)}}},\] where $S$ is the corresponding intersection of Schubert varieties. There are two cases. If $\mu^c \not\supset \lambda$, $S$ is empty and both coefficients are zero. Otherwise, $S$ is a reduced curve, whose degree in the Pl\"{u}cker embedding is 1 because (by the Pieri rule) $k_{\lambda \mu}^{{\scalebox{.3}{\yng(1)}}^c} = 1$. Hence $S$ must be isomorphic to $\mathbb{P}^1$ and have Euler characteristic 1. \end{proof} \begin{lemma} Let $\alpha, \beta, \gamma$ be partitions such that $|\alpha| + |\beta| + |\gamma| = k(n-k) - 1$. Then \[[\mathcal{O}_\alpha] \cdot [\mathcal{O}_\beta] \cdot [\mathcal{O}_\gamma] = k_{\alpha {\scalebox{.3}{\yng(1)}} \beta \gamma}^{{\scalebox{.3}{\yng(3,3)}}} [\mathcal{O}_{{\scalebox{.3}{\yng(1)}}^c}] - k_{\alpha \beta}^{\gamma^c} [\mathcal{O}_{{\scalebox{.3}{\yng(3,3)}}}].\] \end{lemma} \begin{proof} We note that $k_{\alpha \beta \gamma}^{{\scalebox{.3}{\yng(1)}}^c} = k_{\alpha {\scalebox{.3}{\yng(1)}} \beta \gamma}^{{\scalebox{.3}{\yng(3,3)}}}$ by the Pieri rule (in cohomology). For the coefficient of $[\mathcal{O}_{{\scalebox{.3}{\yng(3,3)}}}]$, by definition, we have \[k_{\alpha \beta \gamma}^{{\scalebox{.3}{\yng(3,3)}}} = \sum_\nu \pm k_{\alpha \beta}^\nu k_{\nu \gamma}^{{\scalebox{.3}{\yng(3,3)}}}.\] If $|\nu| + |\gamma| = k(n-k) - 1$, then $k_{\nu \gamma}^{{\scalebox{.3}{\yng(3,3)}}} = 0$ by Lemma \ref{lem:complementary-ktheory}. 
But if $|\nu| + |\gamma| = k(n-k)$, we know from cohomology that $k_{\nu \gamma}^{{\scalebox{.3}{\yng(3,3)}}}$ is $1$ if $\nu = \gamma^c$ and $0$ otherwise. \end{proof} The coefficient $k_{\alpha \beta}^{\gamma^c}$ counts increasing tableaux of shape $\gamma^c/\alpha$ whose rectification, under K-theoretic jeu de taquin, is the highest-weight standard tableau of shape $\beta$ (see \cite{ThYo}). When $|\gamma^c / \alpha| = |\beta| + 1$, any such tableau is standard except for a single repeated entry. \subsection{Schubert curves in K-theory} Our key connection to K-theory comes from the following: \begin{lemma}\label{lem:euler-char-integral} Let $S$ be a smooth, integral projective curve, defined over $\mathbb{R}$, and suppose $S(\mathbb{C}) - S(\mathbb{R})$ is disconnected. Let $\eta(S)$ be the number of connected components of $S(\mathbb{R})$. Then $\eta(S) \equiv \chi(\mathcal{O}_S)\ (\mathrm{mod}\ 2).$ \end{lemma} \begin{proof} This is well-known (see, for example, \cite{GrHa}). \end{proof} Our curves may not be smooth or integral, but the identity holds nonetheless. \begin{lemma}\label{lem:recall-euler-char} Let $S = S(\lambda_\bullet,p_\bullet)$ be the Schubert curve, with $p_i \in \mathbb{R}\mathbb{P}^1$ for each $i$. Let $\eta(S)$ be the number of connected components of $S(\mathbb{R})$. Then $\eta(S) \equiv \chi(\mathcal{O}_S)\ (\mathrm{mod}\ 2).$ \end{lemma} \begin{proof} Let $S$ have irreducible components $S_i$, and let $\widetilde{S} = \bigsqcup \widetilde{S_i}$, where $\widetilde{S_i} \to S_i$ is the normalization. We have a birational morphism $\pi: \widetilde{S} \to S$, and an exact sequence \[0 \to \mathcal{O}_S \to \pi_* \mathcal{O}_{\widetilde{S}} \to \mathcal{F} \to 0,\] with cokernel supported at the singular points of $S$. By Theorem \ref{cor:as-real-as-poss}, $S$ has smooth real points. 
The singularities of $S$ therefore occur in (isomorphic) complex conjugate pairs, so $\chi(\mathcal{F}) = \dim_\mathbb{C} H^0(\mathcal{F})$ is even and $\chi(\mathcal{O}_S) \equiv \chi(\mathcal{O}_{\widetilde{S}})\ \mathrm{mod}\ 2$. By Corollary \ref{cor:normalization-disconnected}, each $\widetilde{S_i}$ is disconnected by its real points, so our conclusion follows by summing over the $\widetilde{S_i}$. \end{proof} We also have the following inequality: \begin{lemma} With notation as above, let $\iota(S)$ be the number of irreducible components of $S$. Then $\chi(\mathcal{O}_S) \leq \iota(S) \leq \eta(S)$. \end{lemma} \begin{proof} Since $S$ is reduced, $h^0 = \dim_\mathbb{C} H^0(\mathcal{O}_S)$ is the number of connected components of $S(\mathbb{C})$. We have $\chi(\mathcal{O}_S) \leq h^0$. We have shown (Corollary \ref{cor:normalization-disconnected}) that every irreducible component of $S$ contains a real point, and $S(\mathbb{R})$ is smooth, so $h^0 \leq \iota(S) \leq \eta(S)$. \end{proof} For the remainder of this section, we specialize to the case of three partitions $\alpha, \beta, \gamma$ whose sizes sum to $k(n-k)-1$. By Corollary \ref{cor:connected-components-desing}, the connected components of $S(\mathbb{R})$ are in bijection with the orbits of $\omega = \mathrm{esh}_2 \circ \mathrm{sh}_2$, where \[\mathrm{esh}_2, \mathrm{sh}_2 : X_\varnothing^{{\scalebox{.3}{\yng(3,3)}}}(\alpha, {\scalebox{.5}{\yng(1)}}, \beta, \gamma) \to X_\varnothing^{{\scalebox{.3}{\yng(3,3)}}}(\alpha, \beta, {\scalebox{.5}{\yng(1)}}, \gamma)\] are the shuffle and evacuation-shuffle on chains of dual equivalence classes. Note that the cardinality of $X_\varnothing^{{\scalebox{.3}{\yng(3,3)}}}(\alpha, {\scalebox{.5}{\yng(1)}}, \beta, \gamma)$ is $k_{\alpha {\scalebox{.3}{\yng(1)}} \beta \gamma}^{{\scalebox{.3}{\yng(3,3)}}}$. 
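The sign statement in Corollary \ref{cor:parity-eqn} below controls the parity of the number of orbits via an elementary fact: a permutation of an $N$-element set with $c$ orbits (fixed points included) has sign $(-1)^{N-c}$. The following brute-force check (Python; helper names ours) verifies this identity for small $N$, computing the sign independently by counting inversions:

```python
from itertools import permutations

def orbit_count(perm):
    """Number of cycles of perm (fixed points included), where perm is a
    tuple with perm[i] = image of i."""
    seen, count = set(), 0
    for i in range(len(perm)):
        if i not in seen:
            count += 1
            while i not in seen:
                seen.add(i)
                i = perm[i]
    return count

def sign(perm):
    """Sign of perm, computed independently via its inversion count."""
    n = len(perm)
    inversions = sum(1 for i in range(n) for j in range(i + 1, n)
                     if perm[i] > perm[j])
    return (-1) ** inversions

# sign(perm) = (-1)^(N - #orbits), so the sign of a permutation
# determines the parity of its number of orbits.
for n in range(1, 7):
    for perm in permutations(range(n)):
        assert sign(perm) == (-1) ** (n - orbit_count(perm))
```

In particular, two permutations of the same set with the same sign, such as $\omega_X$ and $\omega_{X'}$ in the proofs above, have orbit counts of the same parity.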
We have proven the following combinatorial facts: \begin{cor} \label{cor:parity-eqn} We have \begin{equation} \label{eqn:recall-parity-eqn} \#\mathrm{orbits}(\omega) \equiv k_{\alpha {\scalebox{.3}{\yng(1)}} \beta \gamma}^{{\scalebox{.3}{\yng(3,3)}}} - k_{\alpha \beta}^{\gamma^c} \ (\mathrm{mod}\ 2) \ \ \text{ and }\ \ \mathrm{sign}(\omega) \equiv k_{\alpha \beta}^{\gamma^c}\ (\mathrm{mod}\ 2), \end{equation} where $\mathrm{sign}(\omega) = 0$ or $1$, and the inequality \begin{equation} \label{eqn:ktheory-inequality} k_{\alpha {\scalebox{.3}{\yng(1)}} \beta \gamma}^{{\scalebox{.3}{\yng(3,3)}}} \leq \#\mathrm{orbits}(\omega) + k_{\alpha \beta}^{\gamma^c}. \end{equation} \end{cor} We note that if $k_{\alpha \beta}^{\gamma^c} = 0$, then inequality \eqref{eqn:ktheory-inequality} forces $\omega$ to have at least $k_{\alpha {\scalebox{.3}{\yng(1)}} \beta \gamma}^{{\scalebox{.3}{\yng(3,3)}}}$ orbits, the cardinality of the underlying set, so $\omega$ is the identity permutation. In this case $[\mathcal{O}_S] = k_{\alpha {\scalebox{.3}{\yng(1)}} \beta \gamma}^{{\scalebox{.3}{\yng(3,3)}}} \cdot [\mathcal{O}_{{\scalebox{.3}{\yng(1)}}^c}]$ in K-theory, and it is easy to see that $S$ must then be a disjoint union of $k_{\alpha {\scalebox{.3}{\yng(1)}} \beta \gamma}^{{\scalebox{.3}{\yng(3,3)}}}$ copies of $\mathbb{P}^1$. \begin{exa}[A disconnected Schubert curve] \label{exa:disconnected} Let $\alpha = \beta = \gamma = {\tiny \yng(3,1,1)}$, and let $S = S(\alpha, \beta, \gamma; p_\bullet) \subseteq G(4,8)$. Then $k_{\alpha \beta}^{\gamma^c} = 0$ and $k_{\alpha \beta {\scalebox{.3}{\yng(1)}} \gamma}^{{\scalebox{.3}{\yng(3,3)}}} = 2$, so $S \cong \mathbb{P}^1 \sqcup \mathbb{P}^1$.
\end{exa} On the other hand, there are examples where $S$ is integral and $\eta(S) < g(S) + 1$: \begin{exa}[A Schubert curve with fewer than $g+1$ components]\label{exa:large-k} Let $S = S(\alpha, \beta, \gamma; p_\bullet) \subseteq G(4,9)$, with \[\alpha = \gamma = {\tiny \yng(3,2,1)} \text{ and } \beta = {\tiny \yng(4,2,1)}.\] Then $\eta(S) = 1$, $k_{\alpha \beta {\scalebox{.3}{\yng(1)}} \gamma}^{{\scalebox{.3}{\yng(3,3)}}} = 12$ and $k_{\alpha \beta}^{\gamma^c} = 13$, so $S$ is integral with arithmetic genus $2$. A computation in coordinates shows that $S$ is smooth. \end{exa} We do not know a combinatorial explanation in general for equations \eqref{eqn:recall-parity-eqn} or \eqref{eqn:ktheory-inequality} or their analogs for products of more than three partitions. Below, we prove equation \eqref{eqn:recall-parity-eqn} in the case where $\beta$ is a horizontal or vertical strip (the `Pieri case') and $\alpha, \gamma$ are arbitrary. By the associativity of Littlewood-Richardson numbers, this gives an independent combinatorial proof of the analog of equation \eqref{eqn:recall-parity-eqn} for arbitrary products of horizontal and vertical strips. \begin{rmk} In the Pieri case, \eqref{eqn:ktheory-inequality} is actually an equality, and \eqref{eqn:recall-parity-eqn} holds over $\mathbb{Z}$. Example \ref{exa:large-k} shows that this is not the case in general. \end{rmk} We also give a simple proof of the parity identity for the product of $k(n-k)-1$ copies of ${\scalebox{.5}{\yng(1)}}$ (the `promotion case'). \subsection{The Pieri Case} Let $\beta$ be a horizontal strip of length $d$ and $\alpha, \gamma$ be arbitrary. (The proof for vertical strips is entirely analogous.) Assume $c_{\alpha {\scalebox{.3}{\yng(1)}} \beta \gamma}^{{\scalebox{.3}{\yng(3,3)}}} \ne 0$. There are two cases to consider: \emph{Case 1}. Suppose $\gamma^c/\alpha$ is not a horizontal strip. 
Then $\gamma^c/\alpha$ must contain a single vertical domino ${\tiny \yng(1,1)}$, but be a horizontal strip otherwise. Then $X_{\varnothing}^{{\scalebox{.3}{\yng(3,3)}}}(\alpha, {\scalebox{.5}{\yng(1)}}, \beta, \gamma)$ has only one element, since the ${\scalebox{.5}{\yng(1)}}$ must go in the top box of the domino, and $k_{\alpha \beta}^{\gamma^c} = 0$. \\ \emph{Case 2}. Suppose $\gamma^c/\alpha$ is a horizontal strip of $d+1$ boxes; let $r$ be the number of nonempty rows of the skew shape $\gamma^c/\alpha$. Then there is a natural ordering of the chains \[X_{\varnothing}^{{\scalebox{.3}{\yng(3,3)}}}(\alpha, {\scalebox{.5}{\yng(1)}}, \beta, \gamma) = \{{\bf D}_1, \ldots, {\bf D}_r\},\] where ${\bf D}_i$ is the chain where the ${\scalebox{.5}{\yng(1)}}$ is at the start of the $i$-th lowest row of $\gamma^c/\alpha$. (The other dual equivalence classes are all determined by this choice.) \begin{thm} \label{thm:pieri-case} Let $\omega = \mathrm{esh}_2 \circ \mathrm{sh}_2$. Then $\omega({\bf D}_i) = {\bf D}_{i+1 \ \mathrm{mod}\ r}.$ \end{thm} \begin{proof} We first show that $\omega({\bf D}_r) = {\bf D}_1$. Observe that $\mathrm{sh}_2({\bf D}_r)$ has the ${\scalebox{.5}{\yng(1)}}$ at the end of the top row of $\gamma^c/\alpha$. We think of the filling of $\gamma^c/\alpha$ as a single skew tableau $T$, with ${\scalebox{.5}{\yng(1)}}$ as its largest entry. Then $\mathrm{sh}_2\mathrm{sh}_1(\mathrm{sh}_2{\bf D}_r)$ rectifies $T$, and since the entries of $T$ strictly increase from left to right, the rectification is a horizontal strip of length $d+1$, with ${\scalebox{.5}{\yng(1)}}$ at the end. Then $\mathrm{sh}_1$ slides the ${\scalebox{.5}{\yng(1)}}$ to the beginning of the strip, so $\mathrm{sh}_1\mathrm{sh}_2$ must move the ${\scalebox{.5}{\yng(1)}}$ to the leftmost space of $\gamma^c/\alpha$, i.e. the beginning of the lowest row. (See Figure \ref{fig:pieri-r1}.) Thus $\omega({\bf D}_r) = {\bf D}_1$. 
Next, we show that, for all $i$, $\omega({\bf D}_i) = {\bf D}_j$ with $j \leq i+1$. Since we know $\omega({\bf D}_r) = {\bf D}_1$, this forces $\omega$ to be the desired permutation. We may assume $i + 1 < r$. By definition, $\mathrm{sh}_2({\bf D}_i)$ has the ${\scalebox{.5}{\yng(1)}}$ at the end of the $i$-th lowest row of $\gamma^c/\alpha$. Let $X \subset \gamma^c/\alpha$ be the subtableau consisting only of the entries in the $(i+2)$-th row and above. We analyze the rectification ${\bf R} = \mathrm{sh}_2\mathrm{sh}_1(\mathrm{sh}_2{\bf D}_i)$, using the highest-weight tableau $T$ of shape $\alpha$. Note that the ${\scalebox{.5}{\yng(1)}}$ must end up as the first entry in the second row in ${\bf R}$, and that $\mathrm{sh}_1({\bf R})$ slides the ${\scalebox{.5}{\yng(1)}}$ upwards. We claim the following: no square of the rectification path of the ${\scalebox{.5}{\yng(1)}}$ is immediately south or east of any square on the rectification path of any box of $X$. So, when computing ${\bf D}_j := \mathrm{sh}_1\mathrm{sh}_2(\mathrm{sh}_1{\bf R})$, the rectified squares of $X$ must return to their original locations. It follows that $j \leq i+1$. Let $a_j$ be the number of boxes in the $j$-th lowest row of $\gamma^c/\alpha$. To prove the claim, we observe the following: when we compute $\mathrm{sh}_1(\mathrm{sh}_2 {\bf D}_i)$, the squares in the $j$-th lowest row of $\gamma^c/\alpha$ first slide left until the leftmost is in column $a_1 + \cdots + a_{j-1}+1$, then directly upwards. In particular, the leftmost box of $X$ lands in column $c = a_1 + \cdots + (a_i - 1) + a_{i+1} + 1$. Similarly, when we compute $\mathrm{sh}_2\mathrm{sh}_1(\mathrm{sh}_2{\bf D}_i)$, the ${\scalebox{.5}{\yng(1)}}$ slides left to column $c' = a_1 + \cdots + a_i$, then up to row 2, then left to column 1. (See Figure \ref{fig:pieri-i}.) 
For the first set of slides, ${\scalebox{.5}{\yng(1)}}$ is at least two rows lower than any square of $X$; afterwards it is strictly left of any square of $X$, since $c' < c$. \end{proof} \begin{figure}[h] \centering \includegraphics[scale=0.7]{pieri-r1.pdf} \caption{Applying $\mathrm{esh}_2$ to the chain $\mathrm{sh}_2({\bf D_r})$. The result is ${\bf D}_1$.} \label{fig:pieri-r1} \end{figure} \begin{figure}[h] \centering \includegraphics[scale=0.5]{pieri-i.pdf} \caption{Showing that $\omega(2) \leq 3$. The rectification path taken by the black box is never immediately south or east of the path taken by the highest strip.} \label{fig:pieri-i} \end{figure} Finally, we recall the following description of $k_{\alpha \beta}^{\gamma^c}$: \begin{prop}[\cite{ThYo}] Let $\gamma^c/\alpha$ be a horizontal strip of size $d+1$ and $\beta = (d)$. Let $r$ be the number of nonempty rows in $\gamma^c / \alpha$. Then $k_{\alpha \beta}^{\gamma^c} = r-1$. The corresponding increasing skew tableaux are $K_{\alpha \beta}^{\gamma^c} = \{T_{12}, \ldots, T_{r-1,r}\}$, where the entries of $T_{i,i+1}$ are strictly increasing from left to right, except that the last entry of the $i$-th lowest row equals the first entry of the row above it. \end{prop} \begin{exa} Let $\gamma^c/\alpha = {\tiny \young(::::\hfil\hfil,::\hfil\hfil,\hfil)}$ and let $\beta = {\tiny \yng(4)}$. The corresponding tableaux are \[X_\varnothing^{\scalebox{.3}{\yng(3,3)}}(\alpha, {\scalebox{.5}{\yng(1)}}, \beta, \gamma) = \{{\bf D}_1, {\bf D}_2, {\bf D}_3\} = \bigg\{{\tiny \young(::::34,::12,\hfil), \young(::::34,::\hfil2,1), \young(::::\hfil4,::23,1)} \bigg\},\] \[K_{\alpha \beta}^{\gamma^c} = \{T_{12}, T_{23}\} = \bigg\{{\tiny \young(::::34,::12,1), \young(::::34,::23,1)} \bigg\}.\] We think of the tableau $T_{i,i+1}$ in $K$-theory as corresponding to the equation $\omega({\bf D}_i) = {\bf D}_{i+1}$ (for $i < r$).
\end{exa} \begin{cor} If $\gamma^c/\alpha$ is a horizontal strip of size $d+1$ and $\beta = (d)$, then $\omega$ has one orbit on $X_{\varnothing}^{{\scalebox{.3}{\yng(3,3)}}}(\alpha, {\scalebox{.5}{\yng(1)}}, \beta, \gamma)$. In particular, the corresponding Schubert curve is irreducible, hence isomorphic to $\mathbb{P}^1$ (since $\chi(\mathcal{O}_S) = k_{\alpha \beta {\scalebox{.3}{\yng(1)}} \gamma}^{{\scalebox{.3}{\yng(3,3)}}} - k_{\alpha \beta}^{\gamma^c} = r - (r-1) = 1$). \end{cor} \subsection{The promotion case} We consider the case of $N = k(n-k)-1$ copies of ${\scalebox{.5}{\yng(1)}}$. In this case $\omega$ is given by tableau promotion on $X_\varnothing^{{\scalebox{.3}{\yng(3,3)}}}({\scalebox{.5}{\yng(1)}}, \ldots, {\scalebox{.5}{\yng(1)}}) = SYT({\scalebox{.3}{\yng(3,3)}})$, that is, \[\omega = \mathrm{sh}_N \circ \cdots \circ \mathrm{sh}_1.\] Note that, under the identification with $SYT({\scalebox{.3}{\yng(3,3)}})$, the evacuation-shuffles correspond to the identity map. We claim the following: \begin{prop} We have $\displaystyle{\mathrm{sign}(\omega) = \sum_i \mathrm{sign}(\mathrm{sh}_i) = k_{\underbrace{{\scalebox{.3}{\yng(1)}}, \ldots, {\scalebox{.3}{\yng(1)}}}_N}^{{\scalebox{.3}{\yng(3,3)}}} \ (\mathrm{mod}\ 2).}$ \end{prop} \begin{proof} Let $S \in SYT({\scalebox{.3}{\yng(3,3)}})$. Then $\mathrm{sh}_i$ is the $i$-th Bender-Knuth involution, so $\mathrm{sh}_i$ acts by swapping the $i$-th and $(i+1)$-th entries of $S$ if they are nonadjacent. Let $Y_i$ be the set of (unordered) pairs $\{S, S'\}$ of standard tableaux exchanged by $\mathrm{sh}_i$. Note that $\mathrm{sign}(\mathrm{sh}_i) = |Y_i| \ (\mathrm{mod}\ 2).$ In $K$-theory, it follows from the K-theoretic Pieri rule that $k_{({\scalebox{.3}{\yng(1)}}^N)}^{{\scalebox{.3}{\yng(3,3)}}}$ is the number of increasing tableaux $T$ of shape ${\scalebox{.3}{\yng(3,3)}}$ with entries $1, \ldots, N$.
In particular, any such $T$ has a single repeated entry $i$, which occurs exactly twice in nonadjacent boxes. Let $X_i$ be the set of tableaux for which the repeated entry is $i$. Given $T \in X_i$, let $T'$ be the tableau in which the $i$'s are replaced by $*$ and each entry $j > i$ is replaced by $j+1$: \[T = \young(123,345) \ \ \longrightarrow \ \ T' = \young(12*,*56).\] Replacing the two $*$'s of $T'$ by $i$ and $i+1$, in either order, yields a pair of \emph{standard} tableaux $S, S'$, and it is clear that $\mathrm{sh}_i(S) = S'$. This gives a bijection $X_i \to Y_i$. \end{proof}
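The parity identity above can be checked by brute force for the $2\times 3$ rectangle ${\scalebox{.3}{\yng(3,3)}}$ used throughout this section. The following sketch (plain Python, written for this note rather than taken from the paper) generates $SYT(3,3)$, implements the Bender-Knuth involutions, computes the sign of promotion $\omega = \mathrm{sh}_5\circ\cdots\circ\mathrm{sh}_1$, and compares it with the parity of the count of increasing tableaux:

```python
from itertools import permutations, product

# Shape (3,3): a 2x3 rectangle, as in SYT((3,3)) above.
CELLS = [(r, c) for r in range(2) for c in range(3)]

def standard_tableaux():
    """All standard Young tableaux of shape (3,3), as tuples of rows."""
    tabs = []
    for perm in permutations(range(1, 7)):
        T = (perm[0:3], perm[3:6])
        rows_ok = all(T[r][c] < T[r][c + 1] for r in range(2) for c in range(2))
        cols_ok = all(T[0][c] < T[1][c] for c in range(3))
        if rows_ok and cols_ok:
            tabs.append(T)
    return tabs

def bk(T, i):
    """Bender-Knuth involution sh_i: swap i, i+1 unless they share a row/column."""
    pos = {T[r][c]: (r, c) for r, c in CELLS}
    (r1, c1), (r2, c2) = pos[i], pos[i + 1]
    if r1 == r2 or c1 == c2:
        return T
    M = [list(row) for row in T]
    M[r1][c1], M[r2][c2] = i + 1, i
    return tuple(tuple(row) for row in M)

def omega(T):
    """Promotion omega = sh_5 ... sh_1 (here N = 5)."""
    for i in range(1, 6):
        T = bk(T, i)
    return T

def sign(perm):
    """Sign of a permutation given as a dict, via its cycle type."""
    seen, s = set(), 1
    for T in perm:
        if T in seen:
            continue
        length, U = 0, T
        while U not in seen:
            seen.add(U)
            U = perm[U]
            length += 1
        s *= (-1) ** (length - 1)
    return s

tabs = standard_tableaux()
sgn = sign({T: omega(T) for T in tabs})

# K-theory side: increasing tableaux of shape (3,3) with entries 1..5,
# each value used at least once (so exactly one value repeats).
k = 0
for filling in product(range(1, 6), repeat=6):
    T = (filling[0:3], filling[3:6])
    if set(filling) == set(range(1, 6)) \
       and all(T[r][c] < T[r][c + 1] for r in range(2) for c in range(2)) \
       and all(T[0][c] < T[1][c] for c in range(3)):
        k += 1

print(len(tabs), sgn, k)   # prints: 5 -1 5 -- sign(omega) and k agree mod 2
assert (sgn == -1) == (k % 2 == 1)
```

Here $\omega$ is an odd permutation of the five tableaux, and $k = 5$ is odd, in agreement with the proposition.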
\section{Introduction} Accurate characterization of control gates is an essential task for developing any quantum computing device. Quantum process tomography (QPT)~\cite{chuang97,poyatos97,chow09} has been the standard method for characterizing quantum gates because, ideally, it produces a full reconstruction of the quantum process. In practice, however, QPT suffers from many drawbacks, the most inimical being its exponential scaling in the number of quantum bits (qubits) comprising the system and its susceptibility to state preparation and measurement (SPAM) errors. Various methods such as randomized benchmarking (RB)~\cite{emerson05, knill08,magesan11,martinis} and gate set tomography (GST)~\cite{Merkel13,blumekohout} have recently been developed to help overcome these limitations. RB is both insensitive to SPAM errors and efficient~\cite{magesan12}. However, it only extracts a single piece of information, the average gate fidelity. GST, on the other hand, helps to overcome limitations from SPAM errors by reconstructing an entire library of gates in a self-consistent manner. The price paid for this self-consistent reconstruction is an even worse scaling than QPT. As control calibration techniques continue to improve and quantum gates approach the fidelity required for fault tolerant quantum computation, it becomes both important and difficult to verify the presence of increasingly small errors. Error verification constitutes a critical first step in a debugging routine since different physical mechanisms can lead to different error types. QPT and GST are often poor choices for error verification since they are time consuming and contain so much information that backing out the presence of specific error types on small scales can be a challenge in itself. In addition, SPAM errors in QPT set a lower limit on the detectable error strengths \cite{Merkel13}.
At the other end of the spectrum, while standard RB is efficient, the information it contains about the gate is typically not enough to perform any sort of useful error verification. An extension of standard RB, interleaved randomized benchmarking, consists of interleaving a target gate in a benchmarking sequence and provides bounds on the error for the gate of interest~\cite{magesanIRB,gaebler}. Interleaved benchmarking can identify gates that are poorly calibrated, but does not reveal if the errors are due to decoherence, over-/underrotations, or off-resonance effects, amongst other error types. Thus, fast and reliable routines that determine the presence of specific error types are required. Others have proposed to use RB for measuring the unital part of a quantum map \cite{wallman}, correlated errors on a multi-qubit space \cite{corcoles}, and recently Ref.~\cite{kimmel} has proposed an alternative method for measuring unitary errors. In this paper we propose and experimentally implement a protocol, largely based on the ideas of RB, that verifies the presence of unitary versus non-unitary errors. A major source of unitary errors in transmon qubits originates from the presence of higher levels, an error that can be removed by the derivative removal via adiabatic gate (DRAG) protocol \cite{motzoi}. To quantify this error source, we compare experimental randomized benchmarking fidelities for several gate times with two simulations, one assuming a DRAG-corrected pulse shape and the other without DRAG (Fig. \ref{fig:tickPlot}). The measurements described here are performed on a two-qubit sample consisting of two transmon qubits coupled by a coplanar waveguide resonator, with independent readout resonators for each qubit. The qubit of interest has a transition frequency of $5.0154\,\text{GHz}$ and anharmonicity of $-323\,\text{MHz}$. $T_1$ and $T_2$ are $45\pm6\,\text{\textmu s}$ and $53\pm10\,\text{\textmu s}$, respectively.
These characteristic times are the mean values from 500 measurements taken over 14 hours, and the error bars are the standard deviation of this data; each independent experiment is well fit by an exponential decay. The pulses used in the RB sequence are truncated Gaussian pulses having total length equal to four times the standard deviation of the Gaussian and with the DRAG correction applied to the quadrature component. A typical benchmarking sequence consists of a set of random Clifford gates that together compose to an identity operation~\cite{magesan11}. Under realistic assumptions on the noise, the fidelity of the implemented sequence with respect to the identity operation decays exponentially as a function of the number of Clifford gates \cite{magesan12}. When the fidelity decay is averaged over many realizations of the random sequence, the decay constant serves as the single metric for the average noise in the system. The weak anharmonicity, $\delta$, of the transmon limits the gate fidelity as $1/\delta$, as can be seen for short gate times in Fig. \ref{fig:tickPlot}. The experimental data falls below the non-DRAG curve (brown dotted line in Fig. \ref{fig:tickPlot}), showing that we have partially removed unitary errors due to the presence of higher levels in the transmon. At the gate length $t_g = 16.7\,\text{ns}$, the error rate corresponds to an average fidelity per gate of $99.95\%$ but is not yet limited by $T_1$ and $T_2$ with the DRAG correction (blue solid line). With the current level of control, we can calibrate pulses to within a factor of four of the limit set by $T_1$ and $T_2$, but it is clear that there are still errors remaining in the system. (The remaining simulations in Fig. \ref{fig:tickPlot} will be described later in this text).
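As a concrete illustration of the decay model just described, the sketch below fits synthetic (not experimental) RB data to $F(m)=A\alpha^m+B$ by a grid search over $\alpha$ with a closed-form least-squares solve for $A$ and $B$; the decay constant and noise level are invented for illustration:

```python
import random

# Synthetic RB data (invented decay constant and noise; not the experiment):
# average sequence fidelity F(m) = A*alpha^m + B over sequence length m.
random.seed(0)
alpha_true, A, B = 0.999, 0.5, 0.5
lengths = list(range(1, 400, 20))
data = [A * alpha_true ** m + B + random.gauss(0, 1e-3) for m in lengths]

def fit_rb(ms, fs):
    """Grid search over alpha; for each candidate, (A, B) has a closed-form
    linear least-squares solution via the 2x2 normal equations."""
    best = None
    for step in range(1000):
        a = 0.99 + 1e-5 * step
        x = [a ** m for m in ms]
        n = len(ms)
        sx, sxx = sum(x), sum(v * v for v in x)
        sy, sxy = sum(fs), sum(v * f for v, f in zip(x, fs))
        det = n * sxx - sx * sx
        A_ = (n * sxy - sx * sy) / det
        B_ = (sy - A_ * sx) / n
        res = sum((A_ * v + B_ - f) ** 2 for v, f in zip(x, fs))
        if best is None or res < best[0]:
            best = (res, a, A_, B_)
    return best[1:]

a_fit, A_fit, B_fit = fit_rb(lengths, data)
avg_fidelity = (1 + a_fit) / 2   # single-qubit average fidelity per Clifford
assert abs(a_fit - alpha_true) < 5e-4
```

The fitted decay constant recovers the true $\alpha$ well below the noise level; for a single qubit the average fidelity per Clifford is $(1+\alpha)/2$.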
\begin{figure} \begin{align} &\mathrm{(a)}\nonumber\\&\includegraphics[width=\columnwidth]{tick2}\nonumber\\ &\mathrm{(b)}\nonumber\\ &\includegraphics[width=\columnwidth]{interleavedPicture.png} \nonumber \end{align} \caption{(color online) (a) Randomized benchmarking fidelity as a function of gate length. Simulated fidelity with a DRAG correction in solid blue and without in dotted brown. Experimental data (points), with the highest fidelity of 0.9995 occurring at $16.7\,\text{ns}$. Dashed black line: simulated fidelity when all gates are overrotated by $\pi/64$ (which would be detectable by IRB). Green dot-dashed line: simulated fidelity with gate-dependent dephasing proportional to the drive amplitude $\gamma_\phi=k\Omega$. (b)~The iterative benchmarking sequence with target gate $C$ repeated $n$ times between random Clifford gates, $C_i$. The case $n=0$ corresponds to a regular randomized benchmarking sequence as used for the data in (a).} \label{fig:tickPlot} \end{figure} For longer pulses the fidelity is limited by the finite coherence time of the qubit. The tradeoff between decoherence and unitary errors shown in Fig.~\ref{fig:tickPlot} is generic across quantum computing hardware. For optimal fidelity, any quantum processor will be operating with fidelity at least partially limited by unitary errors: if this were not the case, then the fidelity could surely be improved by shortening the gate time. We extend interleaved randomized benchmarking by repeating a target Clifford $n$ times between the random Clifford gates and measuring the fidelity as a function of $n$ repetitions [Fig. \ref{fig:tickPlot}(b)]. If the gate errors are non-unitary, then the fidelity will only depend on the total length of the interleaved segment, and the resulting error per segment will thus be linear with $n$. If there are unitary errors of an over-/underrotation type, they will add coherently with $n$, and the fidelity decay will be quadratic to leading order.
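As a quick numerical check of this claim, the sketch below evaluates the exact benchmarking parameter for $n$ repetitions of an overrotation about a fixed axis, using the exact relation $\mathrm{tr}(U^n)=2\cos(n\epsilon/2)$, and compares it with a purely quadratic approximation in $n$ (the error angle is illustrative):

```python
import math

def alpha_after_n(eps, n):
    """Exact benchmarking parameter alpha = 2F - 1 for n repetitions of
    U = exp(-i*(eps/2)*r.sigma), using tr(U^n) = 2*cos(n*eps/2)."""
    tr = 2.0 * math.cos(n * eps / 2.0)
    F = (abs(tr) ** 2 + 2.0) / 6.0
    return 2.0 * F - 1.0

eps = math.pi / 128
for n in range(1, 17):
    exact = alpha_after_n(eps, n)
    approx = 1.0 - (n * eps) ** 2 / 3.0   # coherent errors add: quadratic in n
    assert abs(exact - approx) < 1e-3
```

The quadratic approximation tracks the exact value closely over the range of repetitions used in the experiment, in contrast with the linear-in-$n$ decay expected for incoherent errors.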
To see this, suppose we have a single-qubit unitary error of the form \begin{equation} U=\exp\left(-i\frac{\epsilon}{2} \hat{r}\cdot \vec{\sigma}\right), \end{equation} where $\epsilon$, $\hat{r}$, and $\vec{\sigma}$ are the error angle, axis of rotation, and vector of Pauli operators, respectively. Assuming $\epsilon \ll 1$ we can write $U^n$ to second order in $\epsilon$ as \begin{align} U^n &= \openone - i n \frac{\epsilon}{2}\hat{r}\cdot \vec{\sigma} - \frac{n^2\epsilon^2}{8} \left(\hat{r}\cdot \vec{\sigma}\right)^2 + O\left(\epsilon^3\right). \end{align} The average fidelity $F$ of the error gate compared to the identity is given by $F= \left(\left|\text{tr}\left(U^n\right)\right|^2 + 2\right)/6$ and writing $F$ in terms of the benchmarking parameter $\alpha = 2F-1$ gives \cite{magesan11} \begin{align} \alpha &= 1-\frac{n^2\epsilon^2}{3}, \end{align} which shows the quadratic dependence in $n$. A similar analysis finds that errors due to a $T_1$ or $T_2$ process decay linearly in $n$. We use single sideband~(SSB) modulation of our control pulses and calibrate the in-phase/quadrature (IQ) mixers (MITEQ IRM0408LC2Q) for the chosen intermediate frequency (IF) to ensure only the correct sideband is produced with minimal leakage at the carrier frequency. We then calibrate the in-phase control pulse amplitude and the amplitude of the quadrature component for the DRAG correction. The pulse amplitudes for a $\pi$-pulse ($X_{\pi}$) and a $\pi/2$-pulse ($X_{\pi/2}$) about the $x$-axis are tuned up by repeating the pulses in the sequence $X_{\pi/2} - (X_{\{\pi,\pi/2\}})^{2n}$ in order to amplify the errors. The evolution of the qubit's Bloch vector during the first three points of this sequence is depicted in Fig. \ref{fig:cals_example}(a). We correct for over- or under-rotations by fitting to the measured population of the qubit ground state, $P(\ket{0})$ [see Fig. \ref{fig:cals_example}(b)].
Under the assumption that the error is only an over- or underrotation, it is simple to derive a fitting formula for the amplitude calibration sequences. The fit function for the $X_{\pi/2}$ pulse in this sequence is \begin{equation} P(\ket{0}) = a+\left(\frac{1}{2}(-1)^n \cos(\pi/2+2 n \epsilon)\right), \label{eq:calFit} \end{equation} where $a$ is left as a fit parameter and goes to $1/2$ for perfect $X_{\pi/2}$ pulses. For $X_{\pi}$ the fit function is \begin{equation} P(\ket{0}) = a+\left(\frac{1}{2} \cos(\pi/2+2 n \epsilon)\right). \label{eq:calFitPi} \end{equation} The angle error, $\epsilon$, found by this fit corresponds to a gate error $r \approx \epsilon^2/6$. After fitting the error, we update the pulse amplitude accordingly. Lastly, we determine the DRAG correction by applying the sequence $(X_{\pi/2}-X_{-\pi/2})$ while varying the amplitude of the derivative pulse on the quadrature channel [Fig. \ref{fig:cals_example}(c)]. The final state of the qubit traces a cosine as a function of this DRAG amplitude, and we select the value that returns the qubit to the ground state, $\lvert0\rangle$. \begin{figure} \includegraphics[width = 0.9\columnwidth]{blochSphere} \includegraphics[width=0.48\columnwidth]{pi2_ampCal_withError} \hspace{1mm}\includegraphics[width=0.49\columnwidth]{drag_example} \caption{Calibrations of the control pulses: (a) Bloch sphere depiction of the qubit for the first three points of the error amplification sequence whose fit function is given in Eq.~(\ref{eq:calFit}). (b) The amplitude calibration for an $X_{\pi/2}$ pulse. The initial guess for the pulse amplitude has some error, which the sequence amplifies so the deviation from 1/2 grows with $n$, the number of repeated pulses. (c) The calibration of the DRAG parameter performs the $X_{\pi/2}-X_{-\pi/2}$ sequence while varying $\lambda$, the amplitude of the derivative pulse on the quadrature channel. The correct derivative amplitude corresponds to the point where the qubit returns to the ground state.
} \label{fig:cals_example} \end{figure} The calibrated pulses are used for iterative randomized benchmarking (IRB), in which we interleave each target sequence zero to 16 times within random sequences of up to 365 Clifford gates [as depicted in Fig.~\ref{fig:tickPlot}(b)]. We average over 35 instances of each sequence and fit the decay to $A_n\alpha_n^i+B_n$, where $i$ is the number of Clifford gates, and $n$ is the number of interleaved gates. Error bars are equal to the 95$\%$ confidence interval of this fit. We perform this protocol with a $16.7\,\text{ns}$ gate time [the time producing the minimum error per gate, Fig.~\ref{fig:tickPlot}(a)] and interleave the targets $I$, $X_\pi$, and $X_{\pi/2}$. For these three gates, the decay in $\alpha$ versus the number of interleaved gates is linear [Fig.~\ref{fig:iterativeRB}(a)]. This is consistent with the RB data that suggests the unitary errors at this gate time are small. We then intentionally add overrotation errors to the $X_{\pi/2}$ gate to determine a bound on the sensitivity of this procedure to amplitude errors. We repeat the iterative benchmarking procedure with the $X_{\pi/2}$ pulse replaced with $X_{\pi/2+\epsilon}$, where $\epsilon = \{\pi/64$, $\pi/128$, $\pi/256\}$. The $\pi/64$ and $\pi/128$ overrotations lead to fidelities that fall off quadratically and are clearly distinguishable from gates approaching the coherence limit. The $\pi/256$ overrotation produces errors similar to those of the calibrated gates, giving a bound on the sensitivity to overrotation errors. Note that with infinite $T_1$ we could increase the sensitivity of this scheme by repeating a larger number of interleaved gates. In order to quantify the amount of unitary versus non-unitary errors in the iterative randomized benchmarking data, we fit the data to both quadratic and linear models. Using the Akaike information criterion~(AIC), we determine which model most accurately describes the data \cite{akaike,burnham}.
The AIC is a useful tool for model selection and has been applied to quantum information previously \cite{vanenk}. For $n$ data points and $k$ fitting parameters, the AIC is given by \begin{equation} C = n \ln\Bigl(\frac{R}{n}\Bigr) +2k+\frac{2k(k+1)}{n-k-1}, \end{equation} where $R$ is the residual sum of squares for the fit. The final term in this expression is a correction under the condition that $n < 40k$. This correction increases the penalty for overfitting when the sample size is small. We compute $C$ for three models: linear, quadratic with no linear component, and combined linear and quadratic (see Table \ref{tab:AICvalues}). The relative probability that the $i$th model is correct is \begin{equation} P_i = \exp\left[\frac{1}{2}\bigl(C_{\mathrm{min}}-C_i\bigr)\right], \end{equation} with $C_{\mathrm{min}}$ the smallest AIC value for the set of models. The model with the best fit to the data will have $P_i = 1$. We calculate the relative probabilities for the three models for iterative randomized benchmarking data with $X_{\pi/2}$ pulses with no overrotation, and with $\pi/128$ and $\pi/256$ overrotations. As detailed in Table \ref{tab:AICvalues}, the calibrated gate with no added error is best fit by a linear model, as expected when there is little unitary error present. The gate with $\pi/256$ overrotation is fit best by the combined model. The preferred model according to the AIC for the gate with $\pi/128$ error is the quadratic model, but this is in part due to the penalty placed on adding extra parameters to the fit function.
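A minimal implementation of this model-selection step, with hypothetical residual sums of squares standing in for the experimental fits:

```python
import math

def aicc(rss, n, k):
    """Corrected AIC for a least-squares fit: n data points, k parameters,
    residual sum of squares rss (the formula used in the text)."""
    return n * math.log(rss / n) + 2 * k + 2 * k * (k + 1) / (n - k - 1)

def akaike_weights(models):
    """models: name -> (rss, n, k).  Returns exp((C_min - C_i)/2)."""
    C = {name: aicc(*v) for name, v in models.items()}
    cmin = min(C.values())
    return {name: math.exp((cmin - c) / 2) for name, c in C.items()}

# Hypothetical residuals for 17 settings (n = 0..16 interleavings);
# the numbers are illustrative, not the experimental fits.
models = {"linear":           (2.0e-6, 17, 2),
          "quadratic":        (1.5e-6, 17, 2),
          "linear+quadratic": (1.4e-6, 17, 3)}
probs = akaike_weights(models)
assert max(probs.values()) == 1.0   # the preferred model gets P = 1
```

With these invented residuals, the extra parameter of the combined model is penalized enough that the pure quadratic model is preferred, mirroring the behavior described for the $\pi/128$ data.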
\begin{table}[tb] \vspace{5mm} \centering \begin{tabular}{c|ccc} \hline\hline Fit Function&0&$\pi/256$&$\pi/128$\\ \hline $ax+b$&1&$1.3\times 10^{-3}$&$2.2\times 10^{-3}$\\ $ax^2+b$&$2.0\times 10^{-7}$&0.18&1\\ $ax^2+bx+c$&0.29&1&0.16\\ \hline\hline \end{tabular} \caption{Relative probabilities $P_i$, computed from the AIC, for gates with no overrotation, $\pi/256$ overrotation, and $\pi/128$ overrotation, for linear and quadratic model functions.} \label{tab:AICvalues} \end{table} \begin{figure} \centering \includegraphics[width=0.8\columnwidth]{iterativeData_16pts} \includegraphics[width=0.8\columnwidth]{iterativeData_8pts} \caption{(color online) Iterative benchmarking data for (a)~a $16.7\,\text{ns}$ gate and (b)~a $10.0\,\text{ns}$ gate. The interleaved gates are the identity (blue squares), $X_{\pi/2}$ (red circles), $Y_{\pi/2}$ (magenta diamonds), and $X_{\pi/2} Y_{\pi/2}$ (black stars). The product of $\alpha$ for $X_{\pi/2}$ and $Y_{\pi/2}$ is shown (dashed black stars) for comparison to the $X_{\pi/2} Y_{\pi/2}$ gate. Also in (a) are interleaved overrotations on an $X_{\pi/2}$ by $\pi/256$ (aqua triangles), $\pi/128$ (dotted green triangles), and $\pi/64$ (dashed orange triangles). The error bars here are the 95$\%$ confidence interval of the fit to the IRB data averaged over 35 instances.} \label{fig:iterativeRB} \end{figure} From this analysis it follows that a $\pi/128$ overrotation is detectable with this method and that consequently coherent rotation errors must be smaller than this value. We therefore simulate RB in the presence of a systematic $\pi/64$ overrotation (easily detectable by IRB were it present), demonstrating that this is not sufficient to explain the deviation of the experiment from the simulated RB [dashed black in Fig. \ref{fig:tickPlot}(a)]. We conclude that there is an additional source of decoherence that is present under the continuous-driving conditions of an RB experiment.
One possible form for such a non-unitary error would be a dephasing proportional to the Rabi rate of the drive, as would result from amplitude fluctuations in the local oscillator, an amplifier, or other microwave electronics along the control line. Simulated RB in the presence of such noise (green dot-dashed) shows reasonable agreement with the experimental data. Drive noise with a $1/f$ dependence has been measured in flux qubits \cite{yoshihara}, and such low frequency noise has been studied in the context of randomized benchmarking \cite{fogarty,epstein}. We notice that there is still a deviation from the best fit at the shortest gate time in Fig. \ref{fig:tickPlot}(a). To understand the origin of this larger error rate we calibrate gates of length $10\,\text{ns}$ and apply IRB. For interleaved $I$, $X_{\pi/2}$, and $Y_{\pi/2}$ the iterative benchmarking data appears to decay linearly [Fig.~\ref{fig:iterativeRB}(b)]. First, we notice that the error of a $Y_{\pi/2}$ gate is larger than the $X_{\pi/2}$ gate error. We attribute this to our calibration procedure, in which the amplitude of the $Y_{\pi/2}$ is assumed to be equal to the $X_{\pi/2}$ pulse amplitude, but sampling errors in the pulse generation are not taken into account. Second, when the interleaved sequence is $X_{\pi/2} Y_{\pi/2}$ (black stars) a larger decay is observed. This cannot be accounted for by multiplying (dashed black stars) the individual benchmarking parameters, $\alpha$, for the $X_{\pi/2}$ (red circles) and $Y_{\pi/2}$ (magenta diamonds), implying an additional error on the $X_{\pi/2} Y_{\pi/2}$ gate. (Note that, in contrast, no additional error for the $X_{\pi/2} Y_{\pi/2}$ sequence is observed for the 16.67 ns gate, for which the product of $X_{\pi/2}$ and $Y_{\pi/2}$ matches the error for $X_{\pi/2} Y_{\pi/2}$.)
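The reasoning behind comparing the product of the individual $\alpha$'s with the composite $\alpha$ can be illustrated with a toy calculation: for purely depolarizing errors the benchmarking parameters multiply exactly under composition, whereas coherent overrotations about a common axis add, and therefore violate the product rule (the angles below are illustrative, not the experimental values):

```python
import math

def alpha_rot(eps):
    """Exact single-qubit benchmarking parameter of a rotation by angle eps."""
    return (4.0 * math.cos(eps / 2.0) ** 2 - 1.0) / 3.0

# Depolarizing (non-unitary) errors: alpha parameters multiply exactly,
# so the composite gate shows no "extra" error beyond the product.
a1, a2 = 0.999, 0.998
assert math.isclose(a1 * a2, 0.997002)

# Coherent overrotations about a common axis add, so the composite alpha
# is smaller (worse) than the product of the individual alphas.
eps1 = eps2 = math.pi / 64            # illustrative angles
composite = alpha_rot(eps1 + eps2)
product = alpha_rot(eps1) * alpha_rot(eps2)
assert composite < product
```

To leading order the composite error is $4\epsilon^2/3$ versus $2\epsilon^2/3$ for the product, which is the signature of a coherent error on the composite gate.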
The $X_{\pi/2}Y_{\pi/2}$ sequence is not directly calibrated, and the presence of unitary errors here indicates a phase error, despite the fact that SSB modulation ensures the orthogonality of X and Y pulses by imposing a $\pi/2$ phase shift on the IF signal. After identifying the phase error, we have developed an error amplification sequence similar to those of Fig. \ref{fig:cals_example} in order to quantify an X-Y axes error. The sequence is a repetition of $X_\pi Y_\pi$ within a Ramsey experiment: $$X_{\pi/2} - \left(X_{\pi} - Y_{\pi} \right)^n - Y_{-\pi/2}.$$ The fit function for the error case when $X$ and $Y$ are not orthogonal is the same function as for a $\pi/2$ amplitude error given in Eq. \ref{eq:calFit}. The gate error measured by this sequence is $2\epsilon^2/3$. We measure this error as a function of the buffer time between pulses for three different pulse lengths, as shown in Fig. \ref{fig:bufferVSgateLength}. The IRB data was taken with a 3.33 ns buffer indicated by the vertical line [with pulse length of 13.33 ns for the data in Fig. \ref{fig:iterativeRB}(a) and 6.67 ns for Fig. \ref{fig:iterativeRB}(b)]. The gate error is $2\times 10^{-5}$ for the pulse length corresponding to the 16.67 ns gate, and $3\times 10^{-3}$ for the 10 ns gate. This is consistent with the IRB data that demonstrates an axis error is present for the 6.67 ns pulse (red squares in Fig. \ref{fig:bufferVSgateLength}) but is not detected for 13.33 ns (violet triangles). The gate error decreases as the buffer time is increased until it levels off by around 15 ns, at which point the resolution of the fit is not better than $1\times 10^{-5}$. Because the error decreases with longer buffer time, it is likely due to distortions that cause successive pulses to overlap when the time between them is insufficient. Note that this effect is not typically considered in RB, in which it is assumed a pulse knows no history of previous pulses in the sequence.
This pulse distortion may be alleviated by further pulse shaping (as shown in \cite{gustavsson} with pulse distortions on flux qubits) and will be the subject of future investigations. \begin{figure} \centering \vspace{5mm} \includegraphics[width = \columnwidth]{bufferPlot_log} \caption{The gate error measured as a fit to the error amplification sequence $X_{\pi/2} - \left(X_{\pi} - Y_{\pi} \right)^n - Y_{-\pi/2}$. The gate error is plotted versus buffer length for three pulse lengths: $6.67\,\text{ns}$ in red squares, $10\,\text{ns}$ in blue circles, and $13.33\,\text{ns}$ in violet triangles. The buffer length used for the data taken in Fig. \ref{fig:iterativeRB} was the shortest one shown here, $3.33\,\text{ns}$ (indicated by the solid vertical line).} \label{fig:bufferVSgateLength} \end{figure} We have introduced a variation of randomized benchmarking, useful for distinguishing non-unitary from unitary errors, and have validated this method on a superconducting qubit experiment. IRB will work for most physical unitaries without knowledge of the type of error present. Once a unitary error is discovered, one can develop a calibration sequence to reduce the error. By pushing gate lengths down and paying careful attention to calibrating the resulting unitary errors, we have achieved a benchmarked single-qubit gate fidelity of $99.95\%$. The error rate corresponding to this fidelity still deviates from the expected coherence limit by about a factor of four, but our iterative randomized benchmarking data indicates that we are \textit{not} limited by unitary errors at this point. We now seek to identify sources of drive-activated non-unitary errors (beyond $T_1$ and $T_2$) that must be limiting our fidelity at this time. We acknowledge discussions and contributions from Oliver E. Dial, Matthias Steffen, George A. Keefe, and Mary B. Rothwell. This work was supported by ARO under contract W911NF-14-1-0124.
\section{Introduction}\label{sec:introduction} The Kantorovich norm \cite{kan} plays a major role in various areas of mathematics, economics and computer science (see \cite{DengDu,GaoPest,MPV,pest-wh,RachRush,Sip,Ver,Villani}), for instance, in the Monge-Kantorovich transportation problem. The seminorms that determine the topology of the free real locally convex space are in fact Kantorovich seminorms (see \cite{GaoPest,MPV,pest-wh,Sip}). Uspenskij \cite{Us-free} provided a simplified formula for these seminorms. In this paper we deal with discrete transportation problems. In Subsection \ref{sub:dem} we present a slightly more flexible (``democratic'') approach to the classical Kantorovich problem. This approach is related to the \emph{transshipment problem}. Continuing in this direction, in Section \ref{s:naTP} we study \emph{non-archimedean transportation problems}. Since the term \emph{non-archimedean} appears many times in this work, we abbreviate it as NA. In Section \ref{sec:kanult} we present an NA version of the Arens-Eells embedding (Theorem \ref{t:AE}). We introduce the naturally arising Kantorovich ultra-(semi)norms, defined for an ultra-(pseudo)metric space $(X,d)$ on free vector spaces $L_\mathbb{F}(X)$ via a min-max formula. Theorem \ref{t:AE} shows that for an arbitrary NA valued field $\mathbb F$ and for any $u\in L_{\mathbb F}(X)$ the value of the Kantorovich ultra-(semi)norm $||u||$ can be approximated as $$||u||=\inf\bigg \{\max \limits_{1\leq i\leq k}| s_i|d(x_i,y_i): u=\sum\limits_{i=1}^{k} s_i(x_i-y_i), \ x_i,y_i\in \operatorname{supp}(u), \ s_i\in \mathbb F \bigg \},$$ where $\operatorname{supp}(u)$ is the support of $u$ (see Notation \ref{notation}). Note that the analogous property in the archimedean case does not hold in general. Indeed, it is no longer true when $\mathbb F=\Bbb C$, the field of complex numbers, in contrast to the case $\mathbb F=\Bbb R$ (see \cite{Flood,Wea} and Remark \ref{r:3}.3 below).
The infimum in Theorem \ref{t:AE} is, in fact, a minimum. This refinement, which comes from the Min-attaining Theorem \ref{t:attaining}, provides another contrast to the archimedean case. Indeed, in the Appendix we give an example in which the infimum is not attained for $\mathbb F=\Bbb Q(i).$ Another refinement concerns the coefficients (the \emph{$G$-value property}), that is, it is enough to take the coefficients from the additive subgroup $G_u$ of $\mathbb F$ generated by the normal coefficients $\lambda_i$ of $u=\sum\limits_{i=1}^n \lambda_i x_i.$ Namely, we show that $$||u||=\min \bigg \{\max \limits_{1\leq i,j \leq m}|c_{ij}|d(x_i,x_j): c_{ij} \in \mathbb F, \ \forall i: 1\leq i\leq n \ \sum\limits_{j=1}^{m}c_{ij}- \sum\limits_{j=1}^{m}c_{ji}=\lambda_i \bigg\},$$ such that all coefficients $c_{ij}$ belong to $G_u$. Note that a matrix $(c_{ij})\in \mathbb F^{m\times m}$ satisfies the equations $\sum\limits_{j=1}^{m}c_{ij}- \sum\limits_{j=1}^{m}c_{ji}=\lambda_i \ \forall i: 1\leq i\leq n$ if and only if $u=\sum\limits_{i=1}^{m}\sum\limits_{j=1}^{m}c_{ij}(x_i-x_j).$ As a particular case we get an NA generalization of the so-called \emph{integer value property} (well known in the case $\mathbb F=\Bbb R$). A variety of min-max optimization problems naturally arise when dealing with Kantorovich ultra-norms; it is worth noting that different algorithms for solving such problems are known (see \cite{BDM} for example). In Section \ref{s:freeLCS} we introduce the \emph{free NA locally convex spaces} for NA uniform spaces. We describe their topologies in terms of Kantorovich ultra-seminorms (Theorem \ref{t:freeLCS}). We show that for an ultra-metric space $(X,d)$ and a trivially valued field $\mathbb F$, the free NA locally convex space $ L_\mathbb{F}(X,\mathcal{U}(d))$ (of the uniformity $\mathcal{U}(d)$ of $d$) is normable by the Kantorovich ultra-norm induced by $d$ (Theorem \ref{t:normable}).
By the Tkachenko-Uspenskij theorem (in the archimedean case $\mathbb F=\Bbb R$) the free abelian topological group $A(X)$ is a topological subgroup of $L(X)$. Using Ostrowski's classical theorem we prove that if $\mathbb F$ is an NA valued field of characteristic zero, then the uniform free NA abelian topological group $A_{\scriptscriptstyle\mathcal{NA}}(X,\mathcal{U})$ is a topological subgroup of $ L_\mathbb{F}(X,\mathcal U)$ if and only if the restricted valuation on $\Bbb Q$ is trivial (Theorem \ref{t: Tk-Usp}). For example, this is the case for the Levi-Civita field (Example \ref{Levi-Civita}). \section{Kantorovich norm}\label{sec:cla} For a nonempty set $X$ and a field $\mathbb F$ denote by $L_\mathbb F(X)$ the free $\mathbb F$-vector space on the set $X.$ We simply write $L(X)$ when $\mathbb F=\Bbb R$. Define $\overline{X}:=X\cup \{\textbf{0}\}$ where $\mathbf{0}\notin X$ is the zero element of $L_\mathbb F(X).$ The zero element of the field $\mathbb F$ is denoted by $0_\mathbb F.$ Denote by $L^0_\mathbb F(X)$ the kernel of the linear functional $$L_\mathbb F(X)\to \mathbb F, \ \sum\limits_{i=1}^{n}\lambda_i x_i\mapsto \sum\limits_{i=1}^{n}\lambda_i.$$ \begin{notation} \label{notation} Every non-zero vector $u \in L_\mathbb{F}(X)$ has a \emph{normal form} as follows: $u= \sum\limits_{i=1}^{n}\lambda_i x_i\in L_\mathbb{F}(X)$, where $x_i\in X, \ \lambda_i\in \mathbb F\setminus \{0_{\mathbb F}\}$ $\forall i: 1\leq i\leq n$ and $x_i\neq x_j$ whenever $i\neq j.$ If $u\in L^0_\mathbb F(X)$ then define the {\it support of $u$} as $\operatorname{supp}(u):=\{x_1, \ldots, x_n\}.$ Otherwise, let $\operatorname{supp}(u):=\{x_1, \ldots, x_n,x_{n+1}\} $ where $x_{n+1}=\mathbf{0}.$ We denote by $m:=|\operatorname{supp}(u)|$ the length of the support, so $m$ is either $n$ or $n+1$. The support of $\mathbf{0}$ is $\{\mathbf{0}\}$. In what follows, by writing $u= \sum\limits_{i=1}^{n}\lambda_i x_i\in L_\mathbb{F}(X)$ we mean that it is a normal form.
\end{notation} \subsection{Classical transportation problem} \label{ss:classical} Recall the following transportation problem from the historical work of Kantorovich \cite{kan}. Let $(X,d)$ be a metric space and denote by $\mathbb R_{\geq 0}$ the set of non-negative reals. Suppose that a network of railways connects a number of production locations $x_1,\ldots, x_n\in X$ with daily output of $\lambda_1,\ldots, \lambda_n$ carriages of certain goods, respectively, to a number of consumption locations $y_1,\ldots, y_m\in X$ with daily demand of $\mu_1,\ldots, \mu_m$ carriages. So, we have $\sum\limits_{i=1}^n\lambda_i=\sum\limits_{j=1}^m\mu_j,$ where $\lambda_i, \mu_j$ are positive. Let $c_{ij}$ denote the real number transferred from point $x_i$ to point $y_j.$ We view the metric $d$ as a cost function, and we want to minimize our total sum-cost. The value we are seeking is \begin{equation} \label{classform} \inf\bigg \{\sum\limits_{i=1}^{n}\sum\limits_{j=1}^{m}c_{ij}d(x_i,y_j): c_{ij}\in \mathbb R_{\geq 0}, \sum\limits_{i=1}^{n}c_{ij}=\mu_j, \sum\limits_{j=1}^{m}c_{ij}=\lambda_i\bigg \}. \end{equation} This infimum is known as the \emph{Kantorovich distance} in $L(X) $ between $\sum\lambda_ix_i$ and $\sum\mu_jy_j.$ It coincides with $||u||$ where $u=\sum\lambda_ix_i-\sum\mu_jy_j\in L^0(X) $ and $||\cdot||$ is the norm defined on $L^0(X)$ as follows. For every $v=\sum\limits_{i=1}^n \lambda_ix_i \in L^0(X) $ \begin{equation} \label{secform} ||v||=\inf\bigg\{\sum\limits_{i=1}^{l}|\rho_i|d(a_i,b_i):v=\sum\limits_{i=1}^{l}\rho_i(a_i-b_i), \ \rho_i\in \mathbb R, a_i,b_i\in X\bigg\}. \end{equation} This norm on $L^0(X)$ is called the \emph{Kantorovich norm}, \cite{MPV}. If $(X,d)$ is a pseudometric space then (\ref{classform}) and (\ref{secform}) define the \emph{Kantorovich pseudometric} and the \emph{Kantorovich seminorm} respectively. Let $X$ be a Tychonoff space. Denote by $D$ the family of all continuous pseudometrics on $\overline{X}:=X\cup \{\textbf{0}\}$. 
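For intuition, the infimum in (\ref{classform}) can be computed exhaustively for tiny instances: when all $\lambda_i,\mu_j$ are integers, an optimal plan may be taken with integer entries (a standard integrality property of the transportation polytope). The following sketch is ours, for illustration only, and the function name is hypothetical.

```python
from itertools import product

def kantorovich_bruteforce(xs, ys, lam, mu, d):
    """Minimize sum_ij c_ij * d(x_i, y_j) over nonnegative integer plans
    (c_ij) with row sums lam_i and column sums mu_j."""
    n, m = len(xs), len(ys)

    def rows(total, length):
        # all tuples of `length` nonnegative integers summing to `total`
        if length == 1:
            yield (total,)
            return
        for first in range(total + 1):
            for rest in rows(total - first, length - 1):
                yield (first,) + rest

    best = None
    for plan in product(*(list(rows(lam[i], m)) for i in range(n))):
        if all(sum(plan[i][j] for i in range(n)) == mu[j] for j in range(m)):
            cost = sum(plan[i][j] * d(xs[i], ys[j])
                       for i in range(n) for j in range(m))
            best = cost if best is None else min(best, cost)
    return best

# Two producers at 0 and 10, two consumers at 1 and 11, unit supplies/demands:
dist = lambda a, b: abs(a - b)
print(kantorovich_bruteforce([0, 10], [1, 11], [1, 1], [1, 1], dist))  # prints 2
```

The optimal plan ships each unit to the nearest consumer, with total cost $1+1=2$; the crossed plan would cost $11+9=20$.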
For each $d \in D$ there exists a maximal seminorm $p_d$ on $L(X)$ which extends $d$. We retain the name \emph{Kantorovich seminorm} for $p_d$ (and for its restriction to $L^0(X)$), although several authors use the name \emph{Kantorovich-Rubinstein seminorm}. The vector $\Bbb R$-space $L(X)$ and the family of seminorms $\{p_d: d \in D\}$ determine the free locally convex space over $X$. See Pestov \cite{pest-wh}, for example, and compare with Raikov \cite{MPV} in the case of \emph{pointed uniform spaces}. In Section \ref{s:freeLCS} we study the free NA locally convex $\mathbb{F}$-space $L_\mathbb{F}(X)$ of an NA uniform space $(X,{\mathcal U}).$ \vskip 0.3cm Equation (\ref{secform}) has a natural generalization. Let $(\mathbb F, | \cdot|)$ be an archimedean valued field and $(X,d)$ be a pseudometric space. For every $v=\sum\limits_{i=1}^n \lambda_ix_i \in L_{\mathbb F}^0(X)$ define the \emph{Kantorovich seminorm} as follows: \begin{equation} \label{arcgen} ||v||=\inf\bigg\{\sum\limits_{i=1}^{l}|\rho_i|d(a_i,b_i):v=\sum\limits_{i=1}^{l}\rho_i(a_i-b_i), \ \rho_i\in \mathbb F, a_i,b_i\in X\bigg\}. \end{equation} Note that every archimedean valued field $(\mathbb F, | \cdot|)$ is essentially a subfield of $\Bbb C$ and the valuation is equivalent to the usual valuation on $\Bbb C$ (see \cite[p. 4]{ro} for example). \subsection{``Democratic'' reformulation}\label{sub:dem} We wish to highlight a point that will become important in the sequel. In the problem described above two disjoint sets $A=\{x_1,\ldots, x_n\} $ and $B=\{y_1,\ldots, y_m\}$ are considered. The distances between the elements within each set are irrelevant: every distance which appears in Formula (\ref{classform}) is between an element of $A$ and an element of $B$. Now we consider a more flexible form of the transportation problem (see also \cite[p. 44]{Wea}).
Let $\lambda_1,\ldots, \lambda_n\in \mathbb R$ with $\sum\limits_{i=1}^{n}\lambda_i=0.$ We have to transfer real numbers between the points $x_1,\ldots, x_n\in X$ in the following way. The sum of numbers transferred from $x_i$ minus the sum of numbers transferred to $x_i$ is $\lambda_i.$ Let $c_{ij}$ denote the real number transferred from point $x_i$ to point $x_j.$ We want to minimize our cost, that is, the value of $\sum\limits_{i=1}^{n}\sum\limits_{j=1}^{n}|c_{ij}|d(x_i,x_j)$. Clearly, one may assume that $c_{ii}=0.$ \vskip 0.3cm As the following lemma suggests, the Kantorovich norm serves both of the approaches described above. \begin{lemma}[Democratic reformulation] \label{lem:dem} If $v=\sum\limits_{i=1}^n \lambda_ix_i\in L^0 (X),$ then \begin{equation} \label{lastfor} ||v||=\inf\bigg \{\sum\limits_{i=1}^{n}\sum\limits_{j=1}^{n}|c_{ij}|d(x_i,x_j):\sum\limits_{j=1}^{n}c_{ij}- \sum\limits_{j=1}^{n}c_{ji}=\lambda_i \ \forall i \bigg\}.\end{equation} \end{lemma} \begin{proof} Denote by $||v||'$ the expression on the right hand side of Equation (\ref{lastfor}). We want to show that $||v||=||v||'.$ Let $(c_{ij})\in \mathbb R^{n\times n}$ such that $\sum\limits_{j=1}^{n}c_{ij}- \sum\limits_{j=1}^{n}c_{ji}=\lambda_i. 
$ The coefficient of $x_i$ in $\sum\limits_{i=1}^{n}\sum\limits_{j=1}^{n}c_{ij}(x_i-x_j)$ is just $\sum\limits_{j=1}^{n}c_{ij}- \sum\limits_{j=1}^{n}c_{ji}.$ It follows that $$v=\sum\limits_{i=1}^{n}\sum\limits_{j=1}^{n}c_{ij}(x_i-x_j).$$ So, by Equation (\ref{secform}), $$||v||\leq ||v||'.$$ On the other hand, using reductions from \cite{Us-free}, we show that if $v=\sum\limits_{i=1}^{l}\rho_i(a_i-b_i)$ then there exists a decomposition $v=\sum\limits_{i=1}^{n}\sum\limits_{j=1}^{n}c_{ij}(x_i-x_j)$ with $$\sum\limits_{i=1}^{n}\sum\limits_{j=1}^{n}|c_{ij}|d(x_i,x_j)\leq \sum\limits_{i=1}^{l}|\rho_i|d(a_i,b_i).$$ To see this, first observe that we may assume that $\rho_i>0 \ \forall i.$ Consider the following reductions, which do not increase the value of the corresponding sum: \begin{enumerate} \item Delete any term of $v$ of the form $\rho_i(x-x).$ \item If there exist two terms $\lambda(x_i-x_j)$ and $\mu(x_i-x_j)$ with $\lambda, \mu>0$ replace them with the single term $(\lambda+\mu)(x_i-x_j).$ \item If the decomposition contains a term $\lambda(x-z)$ where $z\notin \operatorname{supp}(v),$ then it necessarily also contains a term of the form $\mu(z-y)$, where $\lambda,\mu>0.$ We have three subcases to consider, in each case replacing the terms $\lambda(x-z)$ and $\mu(z-y)$. \vskip 0.2cm \begin{enumerate} [(a)] \item If $\lambda= \mu$ then replace the pair of terms above with one term $\lambda(x-y).$ This is possible since $\lambda d(x,y)\leq \lambda d(x,z)+\lambda d(z,y).$ \item If $\lambda<\mu$ then replace the terms with $\lambda(x-y)$ and $(\mu-\lambda)(z-y).$ The value of the sum does not increase since \begin{align*} \lambda d(x,z)+\mu d(z,y) &=\lambda(d(x,z)+d(z,y))+(\mu-\lambda)d(z,y)\geq \\ &\geq \lambda d(x,y)+(\mu-\lambda)d(z,y).
\end{align*} \item If $\lambda>\mu$ then replace the terms with $(\lambda-\mu)(x-z)$ and $\mu(x-y).$ This time we have \begin{align*} \lambda d(x,z)+\mu d(z,y) &=(\lambda-\mu)d(x,z)+\mu(d(x,z)+d(z,y))\geq \\ &\geq (\lambda-\mu)d(x,z)+\mu d(x,y). \end{align*} \end{enumerate} \end{enumerate} Each application of reduction $(3)$ decreases the number of terms containing $z$. Applying finitely many substitutions of this form and taking into account that the sum of $z$'s coefficients in any decomposition of $v$ is equal to zero, we obtain a decomposition of $v$ with only two terms containing $z$: $\lambda(x-z)$ and $\lambda(z-y).$ Now use reduction $(3.a).$ Therefore, we may assume that the decomposition contains only terms involving support elements, that is, terms of the form $\lambda(x_i-x_j)$ where $\lambda\geq 0.$ At this point we use reduction $(2)$ if necessary. We obtain a decomposition $v=\sum\limits_{i=1}^{n}\sum\limits_{j=1}^{n}c_{ij}(x_i-x_j)$ with $$\sum\limits_{i=1}^{n}\sum\limits_{j=1}^{n}|c_{ij}|d(x_i,x_j)\leq \sum\limits_{i=1}^{l}|\rho_i|d(a_i,b_i).$$ It follows that $||v||'\leq ||v||$ and we conclude that $||v||=||v||'.$ \end{proof} \begin{remark} \label{r:3} \ \begin{enumerate} \item Every non-zero element $v\in L^0(X)$ has the form $v=\sum\limits_{i=1}^{n} a_ix_i-\sum\limits_{j=1}^{m} b_jy_j $ where $ \ \sum\limits_{i=1}^n a_i=\sum\limits_{j=1}^m b_j$ and $\forall i: 1\leq i\leq n \ \forall j: 1\leq j \leq m \ a_i,b_j>0.$ Using this fact one can move back from the democratic approach to the classical one as in Section \ref{ss:classical}. \item Using compactness arguments one can prove that the infimum in Formula (\ref{classform}) is attained.
By the proof of Lemma \ref{lem:dem}, for any minimizing matrix $(c_{ij})$ from (\ref{classform}) there exists a matrix $(t_{ij})$ from (\ref{lastfor}) such that $$\sum\limits_{i=1}^{n}\sum\limits_{j=1}^{n}|t_{ij}|d(x_i,x_j)\leq \sum\limits_{i=1}^{n}\sum\limits_{j=1}^{n}|c_{ij}|d(x_i,x_j).$$ It follows that the infimum in (\ref{lastfor}) is attained at $(t_{ij})$. \item Replacing $\Bbb R$ with $\Bbb C$ completely changes the situation. As it follows from \cite{Flood,Wea}, in the latter case of $\mathbb F=\Bbb C$ we cannot even guarantee that the infimum in (\ref{secform}) can be approximated by computations on support elements of a vector $u \in L_{\Bbb C}^0(X)$. A detailed example is provided in the Appendix (Theorem \ref{main} and Example \ref{e:outside}). \end{enumerate} \end{remark} \section{Non-archimedean transportation problem} \label{s:naTP} In this section we discuss the main object of our work: a {\it non-archimedean transportation problem} (NATP). First we recall some definitions. \subsection{Preliminaries} A metric space $(X,d)$ is an \emph{ultra-metric space} if $d$ is an \emph{ultra-metric}, i.e., it satisfies {\it the strong triangle inequality} $$d(x,z) \leq \max \{d(x,y), d(y,z)\}.$$ Allowing the distance between distinct elements to be zero we obtain the definition of an \emph{ultra-pseudometric}. It is well known that if $d(x,y) \neq d(y,z)$ then $d(x,z) = \max \{d(x,y), d(y,z)\}.$ A uniform space $(X,{\mathcal U})$ is NA if it has a base $B$ consisting of equivalence relations on $X.$ For every ultra-pseudometric $d$ on $X$ the open balls of radius $\varepsilon >0$ form a clopen partition of $X.$ So, the uniformity induced by any ultra-pseudometric $d$ on $X$ is NA. A uniformity is NA if and only if it is generated by a system $\{d_i\}_{i \in I}$ of {\it ultra-pseudometrics}. Recall that a topological group is {\it non-archimedean} if it has a base at the identity consisting of open subgroups. 
For some properties of this class of topological groups see for example \cite{MS1,MS}. We say that a topological ring (or field or vector space) is NA if its additive group is NA. Note that Lyudkovskii \cite{Lyu} studied NA free Banach spaces. A {\it valuation} on a field $\mathbb F$ is a function $|\cdot|: \mathbb F\to [0,\infty)$ which satisfies the following ($x,y\in \mathbb F$): \begin{enumerate} \item $|x|\geq 0$; \item $|x|=0$ if and only if $x=0_{\mathbb F}$; \item $|x+y|\leq |x|+|y|$; \item $|xy|=|x||y|$. \end{enumerate} Replacing condition $(3)$ with $|x+y|\leq \max\{|x|,|y|\}$ we obtain a {\it non-archimedean valuation}. In this case the metric $d$ defined by $d(x,y)=|x-y|$ is an ultra-metric. An (NA) {\it valued field} is a field $\mathbb F$ with a (resp., NA) valuation $|\cdot|$. Every NA valued field is NA as a topological group because every open ball $\{x \in \mathbb F: |x| <r\}$ is a (clopen) additive subgroup. A valuation which is not NA is called an {\it archimedean valuation}. Let $(\mathbb F, | \cdot|)$ be a valued field. A \emph{seminorm} on an $\mathbb F$-vector space $V$ is a map $||\cdot||: V \to [0,\infty)$ such that ($x,y\in V, \alpha\in\mathbb F$): \begin{enumerate} \item $||0_{V}||=0$; \item $||x+y|| \leq ||x||+||y||;$ \item $||\alpha x||=|\alpha| ||x||.$ \end{enumerate} If instead of condition $(1)$ we have: $||x||=0$ if and only if $x=0_{V}$, then $||\cdot||$ is called a {\it norm}. If the valuation on $\mathbb F$ is NA and condition $(2)$ is replaced by $||x+y|| \leq \max \{||x||,||y||\}$, then the norm (seminorm) $||\cdot||$ is an {\it ultra-norm} (respectively, ultra-seminorm). Let $(\mathbb F,|\cdot|)$ be an NA valued field. The set $\{|x|: |x|\neq 0\}$ is a subgroup of the multiplicative group $\Bbb R_{>0}$ of all positive reals and is said to be the {\it value group} of the valuation $|\cdot|.$ The value group is either discrete or dense in $\Bbb R_{>0}$. Accordingly the valuation is called {\it discrete} or {\it dense}. 
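As a concrete illustration (ours, not part of the paper), the axioms above and the isosceles property of the induced ultra-metric are easy to verify for the $p$-adic valuation restricted to the integers; exact rational arithmetic avoids floating-point rounding issues.

```python
from fractions import Fraction
from itertools import product

def val_p(x, p):
    """p-adic absolute value |x|_p = p^(-v_p(x)) on the integers, exactly."""
    if x == 0:
        return Fraction(0)
    v, x = 0, abs(x)
    while x % p == 0:
        x //= p
        v += 1
    return Fraction(1, p ** v)

p = 3
sample = range(-30, 31)
# axiom (4), multiplicativity: |xy| = |x| |y|
assert all(val_p(x * y, p) == val_p(x, p) * val_p(y, p)
           for x, y in product(sample, repeat=2))
# the strong triangle inequality: |x + y| <= max(|x|, |y|)
assert all(val_p(x + y, p) <= max(val_p(x, p), val_p(y, p))
           for x, y in product(sample, repeat=2))
# the induced ultra-metric d(x, y) = |x - y|_p is "isosceles":
# d(x, y) != d(y, z) forces d(x, z) = max(d(x, y), d(y, z))
d = lambda x, y: val_p(x - y, p)
assert all(d(x, z) == max(d(x, y), d(y, z))
           for x, y, z in product(range(-9, 10), repeat=3)
           if d(x, y) != d(y, z))
print("p-adic checks passed")
```

On the integers all the values are of the form $p^{-v}$ with $v\geq 0$; on all of $\Bbb Q$ the value group of $|\cdot|_p$ is the discrete group $\{p^k: k\in\Bbb Z\}$.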
If the value group is the trivial subgroup $\{1\}$ then the valuation is said to be {\it trivial}. For any non-trivial discrete valuation the value group is the infinite cyclic closed subgroup $\{a^k: k \in \Bbb Z\}$ of $\Bbb R_{>0}$, where $a:=\max\{|x|: |x| <1 \}$. Note that discretely valued fields form a major subclass of the class of NA valued fields. This subclass is closed under taking arbitrary subfields, completions and finite extensions. The $p$-adic valuation on the field $\Bbb Q$ of rationals is a classical particular case (for every prime $p$). The completion is the field of $p$-adic numbers $\mathbb Q_{p}$, a locally compact NA valued field. The valuation of every locally compact NA valued field is discrete (see \cite{ro}). The natural valuation on the field $\Bbb C\{\{T\}\}$ of formal Laurent series (which is not locally compact) is discrete \cite{Schn}. Below we use several times the following well-known theorem of Ostrowski (see for example \cite[Theorem 1.2]{ro}), which shows that the $p$-adic valuation, up to a natural equivalence, is the only non-trivial NA valuation on $\Bbb Q$. In particular, any NA valuation on $\Bbb Q$ is discrete. \begin{thm} \label{Ostrowski} \emph{(Ostrowski's Theorem)} Let $|\cdot|$ be a non-trivial NA valuation on the field $\Bbb Q$ of rationals. Then there exists a prime $p$ such that $|\cdot|$ is equivalent to the $p$-adic valuation $|\cdot|_p$ (namely, there exists $c>0$ such that $|x|=|x|_p^c \ \ \forall x \in \Bbb Q$). \end{thm} The following is an important example of a densely valued NA field. \begin{example} \label{Levi-Civita1} Recall that the elements of the Levi-Civita field $\mathcal R$ (see \cite{Sham} for example) are real functions $f: \Bbb Q \to \Bbb R$ with left-finite support. That is, for every rational number $q$ the set $A_q:=\{a<q:\ f(a)\neq 0\}$ is finite. The field operations are addition and convolution.
$\Bbb R$ is (algebraically) isomorphic to a subfield of $\mathcal R.$ Indeed, the map $a \mapsto f_a$ from $\Bbb R$ to $\mathcal R$, where $f_a(0)=a$ and $f_a(x)=0 \ \forall x\neq 0,$ is a field embedding. For every non-zero element $f\in \mathcal R$, the support of $f$ (notation: $\operatorname{supp}(f)$) has a minimum, due to its left-finiteness. Recall that $\mathcal R$ admits a natural NA valuation defined by $|f|=e^{-\min \operatorname{supp}(f)}$ for non-zero $f$. It is easy to see that this valuation is dense. At the same time the restricted valuation on $\Bbb Q$ is trivial. \end{example} \subsection{Formulation of NATP} \label{s:NATP} We formulate here a non-archimedean transportation problem using a democratic approach (compare Lemma \ref{lem:dem}). Let $\mathbb F$ be an NA valued field, $(X,d)$ be an ultra-(pseudo)metric space and $x_i\in X$ for every $1\leq i\leq n$. We have to transfer field elements between these points in the following way. The sum of elements transferred from $x_i$ minus the sum of elements transferred to $x_i$ is $\lambda_i,$ where $\lambda_1,\ldots, \lambda_n$ are given elements in $\mathbb F$ with $\sum\limits_{i=1}^{n}\lambda_i=0_{\mathbb F}.$ Let $c_{ij} \in \mathbb F$ denote the element transferred from $x_i$ to $x_j.$ Note that by the setting of NATP we have $\forall i \ \sum\limits_{j=1}^{n}c_{ij}- \sum\limits_{j=1}^{n}c_{ji}=\lambda_i.$ We want to minimize our max-cost, that is, the value of $$\max\limits_{1\leq i,j\leq n}|c_{ij}|d(x_i,x_j).$$ A natural question arises: \begin{quest} \label{quest:na} Is the infimum \begin{equation} \label{mul:inf} \inf\bigg \{\max \limits_{1\leq i, j \leq n}|c_{ij}|d(x_i,x_j): \forall i \ \sum\limits_{j=1}^{n}c_{ij}- \sum\limits_{j=1}^{n}c_{ji}=\lambda_i \bigg\} \end{equation} attained?
\end{quest} The Min-attaining Theorem \ref{t:attaining} implies that the answer to Question \ref{quest:na} is positive for every NA valued field $\mathbb F$ (e.g., $\mathbb Q_{p}$) and any ultra-(pseudo)metric space $(X,d).$ In fact, we will show in Theorem \ref{t:AE} that (\ref{mul:inf}) can be studied via a special ultra-(semi)norm $||\cdot||_d$ on $L_{\mathbb F}(X)$. We call it the \emph{Kantorovich ultra-(semi)norm} associated with $d$ (Definition \ref{d:KantUltraNorm}) because its role is similar to the role of the Kantorovich (semi)norm in the classical transportation problem (with $\mathbb F=\Bbb R$). Indeed, the infimum in (\ref{mul:inf}) coincides with $||u||_d,$ where $u=\sum\limits_{i=1}^{n}\lambda_ix_i \in L^0_\mathbb F(X).$ \section{Kantorovich ultra-norms} \label{sec:kanult} Let $(X,d)$ be an ultra-pseudometric space. Consider the set $\overline{X}:=X\cup \{\textbf{0}\},$ where $\textbf{0} \notin X$. In the sequel we repeatedly use the following simple lemma. \begin{lemma} \label{l:extend} For every ultra-pseudometric $d$ on $X$ there exists an ultra-pseudometric (denoted also by $d$) which extends $d$ to $\overline{X}:=X\cup \{\mathbf{0}\},$ such that $\mathbf{0}$ is an isolated point in $(\overline{X},d)$. \end{lemma} \begin{proof} Fix $x_0\in X$ and extend the definition of $d$ from $X$ to $\overline{X}$ by letting $d(x,\mathbf{0})=\max\{d(x,x_0),1\}.$ For more details see Claim 1 of \cite[Theorem 8.2]{MS}. \end{proof} \begin{defi} \label{d:KantUltraNorm} Let $(\overline{X},d)$ be an ultra-pseudometric space and $\mathbb{F}$ be an NA valued field. Let us say that an ultra-seminorm $p$ on $L_{\mathbb F}(X)$ is \emph{$d$-compatible} if the pseudometric induced on $\overline{X}$ by $p$ is $d$. We say that $p$ is a \emph{Kantorovich ultra-seminorm for $d$} if $p$ is the maximal $d$-compatible ultra-seminorm on $L_{\mathbb F}(X)$.
\end{defi} The maximal property of the Kantorovich norm in the classical non-discrete transportation problem was proved in \cite{MPV}, and this justifies Definition \ref{d:KantUltraNorm}. The Kantorovich ultra-norm $||\cdot||$ in Theorem \ref{t:AE} serves the NA transportation problem described in Section \ref{s:NATP}. To see this observe that one of the reformulations of this ultra-norm ($m=n$ in Claim $3$ below) coincides with the infimum in Formula (\ref{mul:inf}) above. Moreover, using the description of the Kantorovich ultra-norm one can obtain an \emph{unbalanced} version of NATP, that is, the case $u=\sum\limits_{i=1}^{n}\lambda_ix_i \notin L^0_\mathbb F(X).$ The classical analogue of the following Theorem \ref{t:AE} is the Arens-Eells embedding \cite{AE}. Its usual verification is based on the dual space, involving the space of Lipschitz functions \cite{Flood,Wea,GaoPest}. In our case the approach is different. If $d$ is a metric on $X$, then the Kantorovich seminorm defined on $L^0(X)$ is, in fact, a norm. This fact relies on the classical Hahn-Banach theorem (see \cite[Corollary 2.2.3]{Wea}) which does not always hold for general NA Banach spaces, \cite{Schn,PS}. The proof that the ultra-seminorm in the following theorem is an ultra-norm uses only the fact that the valuation of $\mathbb F$ is NA. Below, in Corollary \ref{c:min}, we show that the infimum in this theorem is, in fact, a minimum. \begin{thm} \label{t:AE} \emph{(Non-archimedean Arens-Eells embedding)} Let $(\overline{X},d)$ be an ultra-pseudometric space and $\mathbb{F}$ be an NA valued field. \begin{enumerate} \item There exists a Kantorovich ultra-seminorm $||\cdot||:=||\cdot||_d$ on $L_\mathbb{F}(X)$ for $d$. Furthermore, if $d$ is an ultra-metric then $||\cdot||_d$ is an ultra-norm. \item $||u||$ can be computed on the support of $u$ for every $u \in L_\mathbb{F}(X)$. 
That is, $$||u||=\inf\bigg \{\max \limits_{1\leq i\leq k}| s_i|d(x_i,y_i): u=\sum\limits_{i=1}^{k} s_i(x_i-y_i), \ x_i,y_i\in \operatorname{supp}(u), \ s_i\in \mathbb F \bigg \}.$$ \item Moreover, if $u=\sum\limits_{i=1}^n \lambda_i x_i$ (normal form) then $$||u||=\inf\bigg \{\max \limits_{1\leq i,j \leq m}|c_{ij}|d(x_i,x_j): c_{ij} \in \mathbb F, \ \forall i: 1\leq i\leq n \ \sum\limits_{j=1}^{m}c_{ij}- \sum\limits_{j=1}^{m}c_{ji}=\lambda_i \bigg\},$$ where $c_{ii}=0_{\mathbb F}$ and $m=|\operatorname{supp}(u)|$ (see Notation \ref{notation}). \end{enumerate} \end{thm} \begin{proof} For $u\in L_\mathbb{F}(X)$ define $$||u||:=\inf\bigg \{\max \limits_{1\leq i\leq n}|\lambda_i|d(x_i,y_i): u=\sum\limits_{i=1}^{n}\lambda_i(x_i-y_i), \ x_i,y_i\in \overline{X}, \ \lambda_i\in \mathbb F \bigg \}.$$ \vskip 0.3cm \noindent \textbf{Claim 1:} $||\cdot||$ is an ultra-seminorm on $L_\mathbb{F}(X).$ \begin{proof} Clearly, $||u||\geq 0$ for every $u\in L_\mathbb{F}(X).$ Since $\textbf{0}=\textbf{0}-\textbf{0}$ we also have $||\textbf{0}||\leq d(\textbf{0},\textbf{0})=0$ and thus $||\textbf{0}||=0.$ The equality $||\lambda u||=|\lambda|\, ||u||$ follows from the fact that for every $\lambda\neq 0_\mathbb F$, if $u=\sum\limits_{i=1}^n \lambda_i(x_i-y_i)$ then $\lambda u=\sum\limits_{i=1}^n \lambda\lambda_i(x_i-y_i)$ and, conversely, if $\lambda u=\sum\limits_{i=1}^n \lambda_i(x_i-y_i)$ then $ u=\sum\limits_{i=1}^n \lambda^{-1}\lambda_i(x_i-y_i).$ Of course, we also use axiom $(4)$ in the definition of a valuation.
\\ Finally, observe that $$||u+v|| \leq \max \{||u|| , ||v|| \} \ \ \ \forall \ u,v \in L_\mathbb{F}(X).$$ Indeed, assuming the contrary, there exist decompositions $$u=\sum\limits_{i=1}^{k} \lambda_i(x_i-y_i), \ v=\sum\limits_{i=k+1}^{l} \lambda_i(x_i-y_i)$$ such that $$||u+v||>c:=\max\{\max\limits_{1\leq i\leq k}|\lambda_i|d(x_i,y_i),\max\limits_{k+1\leq i\leq l}|\lambda_i|d(x_i,y_i)\}.$$ This contradicts the definition of $||u+v||$, since $u+v=\sum\limits_{i=1}^{l} \lambda_i(x_i-y_i)$ is a decomposition with $$\max\limits_{1\leq i\leq l}|\lambda_i|d(x_i,y_i)=c<||u+v||.$$ \end{proof} \noindent \textbf{Claim 2:} For every $u \in L_\mathbb{F}(X)$ the value of $||u||$ can be computed on the support of $u.$ That is, $$||u||=\inf\bigg \{\max \limits_{1\leq i\leq k}| s_i|d(x_i,y_i): u=\sum\limits_{i=1}^{k} s_i(x_i-y_i), \ x_i,y_i\in \operatorname{supp}(u), \ s_i\in \mathbb F \bigg \}.$$ \begin{proof} Let $u=\sum\limits_{i=1}^{k} s_i(x_i-y_i)$ be a decomposition of $u\in L_\mathbb{F}(X). $ Consider the following steps, which do not increase the value of $\max \limits_{1\leq i\leq k}|s_i|d(x_i,y_i)$: \begin{enumerate} \item Delete any term of $u$ of the form $0_\mathbb F(x-y)$ or $s_i(x-x).$ \item Replace the term $s_i(x_i-y_i)$ with $-s_i(y_i-x_i).$ \item Assume there exist $1\leq i_0\leq k$ and $\mathbf{0}\neq z\notin \operatorname{supp}(u)$ such that $z=x_{i_0}$ or $z=y_{i_0}.$ Using steps $(1)-(2)$ we may assume without loss of generality that the terms $\lambda(x-z)$ and $\mu(z-y)$ appear in the decomposition of $u= \sum\limits_{i=1}^{k}s_i (x_i-y_i)$ with $|\lambda|\leq |\mu|.$ Replace them with $\lambda(x-y)$ and $(\mu-\lambda)(z-y).$ This way the number of terms in which the element $z$ appears decreases.
The value of the corresponding maximum $\max \limits_{1\leq j\leq k}|\mu_j|d(x_j,y_j)$ does not increase under such a substitution, because $$\max \{|\lambda|d(x,y),|\mu-\lambda|d(z,y)\}\leq \max \{|\lambda|d(x,z),|\mu|d(z,y)\}.$$ Indeed, using the strong triangle inequality and the fact that $|\lambda|\leq |\mu|$ we obtain $$|\lambda|d(x,y)\leq \max \{|\lambda|d(x,z),|\lambda|d(z,y)\}\leq \max \{|\lambda|d(x,z),|\mu|d(z,y)\}.$$ Also, assuming that $|\mu-\lambda|d(z,y)>|\mu|d(z,y)$ we obtain $|\mu-\lambda|>\max\{|\lambda|,|\mu|\},$ which contradicts the strong triangle inequality. Thus, $|\mu-\lambda|d(z,y)\leq |\mu|d(z,y).$ Applying finitely many substitutions of this form and taking into account that the sum of $z$'s coefficients in any decomposition of $u$ is equal to zero, we obtain a decomposition of $u$ with only two terms in which $z$ appears: $\lambda(x-z)$ and $\lambda(z-y).$ These terms can be replaced by the single term $\lambda(x-y)$ since $\lambda(x-z)+\lambda(z-y)=\lambda(x-y)$ and $|\lambda|d(x,y)\leq \max \{|\lambda|d(x,z),|\lambda|d(z,y)\}.$ Now the term $\lambda(x-y)$ and all other terms in the new decomposition do not contain the element $z.$ \item Assume there exist $1\leq i_0\leq k$ and $z=\mathbf{0}\notin \operatorname{supp}(u)$ such that $z=x_{i_0}$ or $z=y_{i_0}.$ We claim that, similarly to case $(3)$, it suffices to consider decompositions $u= \sum\limits_{i=1}^{k}s_i (x_i-y_i)$ that contain terms of the form $\lambda(x-z)$ and $\mu(z-y)$ with $|\lambda|\leq |\mu|.$ Indeed, since $z=\mathbf{0}\notin \operatorname{supp}(u)$ it follows that $u=\sum\limits_{i=1}^{n}\lambda_it_i \in L^{0}_\mathbb F(X)$ (normal form) and $\sum\limits_{i=1}^{n} \lambda_i=0_{\mathbb F}.$ If there exists only one term $\lambda(x-z)$ in which $z$ appears then by the previous steps we can assume, without loss of generality, that we are dealing with a decomposition of $u$ of the form $u=\sum\limits_{j=1}^{k}\mu_j(a_j-b_j)+\lambda(t_1-z),$ where
$a_j,b_j\in\operatorname{supp}(u).$ On the one hand, $u'=\sum\limits_{j=1}^{k}\mu_j(a_j-b_j)\in L^{0}_\mathbb F(X).$ On the other hand, for every $i\neq 1$ the sum of the coefficients of $t_i$ in this decomposition of $u'$ is equal to $\lambda_i.$ It follows that the sum of $t_1$'s coefficients in $u'$ is $\lambda_1$ and this implies that $\lambda=0_{\mathbb F}.$ So we may assume that the terms $\lambda(x-z)$ and $\mu(z-y)$ appear in the decomposition of $u= \sum\limits_{i=1}^{k}s_i (x_i-y_i)$ with $|\lambda|\leq |\mu|.$ If $\lambda=\mu$ we simply replace these terms with the single term $\lambda(x-y).$ Otherwise, replace these terms with $\lambda(x-y)$ and $(\mu-\lambda)(z-y).$ In any case, exactly as in reduction $(3)$, the number of terms in which the element $z$ appears decreases. The value of the corresponding maximum $\max \limits_{1\leq j\leq k}|\mu_j|d(x_j,y_j)$ does not increase under such a substitution. We apply finitely many substitutions of this form and obtain a decomposition of $u$ in which no term contains the element $z.$ \end{enumerate} Using reductions $(3)$ and $(4)$ we complete the proof of Claim 2. \end{proof}\vskip 0.2cm \noindent \textbf{Claim 3:} For $u=\sum\limits_{i=1}^{n}\lambda_ix_i \in L_\mathbb F(X)$ let $m=|\operatorname{supp}(u)|$ (by Notation \ref{notation} we have $m=n$ or $m=n+1$). Then,\begin{equation}\label{eq:claim3} ||u||=\inf\bigg \{\max \limits_{1\leq i,j \leq m}|c_{ij}|d(x_i,x_j): c_{ij} \in \mathbb F, \ \forall i: 1\leq i\leq n \ \sum\limits_{j=1}^{m}c_{ij}- \sum\limits_{j=1}^{m}c_{ji}=\lambda_i \bigg\},\end{equation} where $c_{ii}=0_{\mathbb F}.$ \begin{proof} By Notation \ref{notation}, $\sum\limits_{i=1}^{n}\lambda_ix_i$ is a normal form of $u$.
It follows that a matrix $(c_{ij})\in \mathbb F^{m\times m}$ satisfies the equations $$\sum\limits_{j=1}^{m}c_{ij}- \sum\limits_{j=1}^{m}c_{ji}=\lambda_i \ \forall i: 1\leq i\leq n$$ if and only if $u=\sum\limits_{i=1}^{m}\sum\limits_{j=1}^{m}c_{ij}(x_i-x_j).$ Indeed, on the one hand, the coefficient of $x_i$ in the expression on the right-hand side is $\sum\limits_{j=1}^{m}c_{ij}- \sum\limits_{j=1}^{m}c_{ji}$ for all $1\leq i\leq n$. On the other hand, the coefficient of $x_i$ in $u$ is $\lambda_i.$ Note that by our convention if $m=n+1$ then $x_{n+1}=\mathbf{0}.$ Since $d(x_i,x_i)=0$ and $c_{ii}-c_{ii}=0_{\mathbb F}$ we may assume without loss of generality that $c_{ii}=0_{\mathbb F}.$ By Claim $2$, $||u||$ can be computed on the support of $u.$ If we have two terms of the form $\lambda(x_i-x_j),\ \mu(x_i-x_j)$ we can replace them with the single term $(\lambda+\mu)(x_i-x_j)$ since $|\lambda+\mu|\leq \max\{|\lambda|,|\mu|\}.$ Thus, we may consider only decompositions of the form $u=\sum\limits_{i=1}^{m}\sum\limits_{j=1}^{m}c_{ij}(x_i-x_j),$ where $\sum\limits_{j=1}^{m}c_{ij}- \sum\limits_{j=1}^{m}c_{ji}=\lambda_i$ and $c_{ii}=0_{\mathbb F}$. \end{proof} \vskip 0.2cm \noindent \textbf{Claim 4:} For $u=\sum\limits_{i=1}^{n}\lambda_ix_i \in L_\mathbb F(X)$ let $m=|\operatorname{supp}(u)|.$ Then, $$||u||\geq r\cdot l_0 $$ where $r=\max\{|\lambda_i|: 1\leq i\leq n\}$ and $l_0=\min\{d(x_i,x_j):\ 1\leq i\neq j\leq m\}.$ \begin{proof} Assuming the contrary, let $||u||<r\cdot l_0.$ By Claim 3 there exists a matrix $(c_{ij})\in\mathbb F^{m\times m}$ such that $ \sum\limits_{j=1}^{m}c_{ij}- \sum\limits_{j=1}^{m}c_{ji}=\lambda_i \ \forall i: 1\leq i\leq n \ $ and in addition $r\cdot l_0>\max\limits_{1\leq i, j\leq m}|c_{ij}|d(x_i,x_j)$.
Taking into account the definition of $l_0$ we get $r>|c_{ij}| \ \forall i,j.$ By the definition of $r$ there exists $1\leq i_0\leq n$ such that $r=|\lambda_{i_0}|.$ Thus, $|\lambda_{i_0}|>|c_{ij}| \ \forall i, j.$ In particular, $$|\lambda_{i_0}|>\max\{\max\limits_{1\leq j\leq m} |c_{i_{0}j}|,\max\limits_{1\leq j\leq m}|c_{ji_0}|\}.$$ Applying the strong triangle inequality to the equation $\sum\limits_{j=1}^{m}c_{i_0j}- \sum\limits_{j=1}^{m}c_{ji_0}=\lambda_{i_0}$ we obtain the contradiction $$|\lambda_{i_0}|\leq \max\{\max\limits_{1\leq j\leq m} |c_{i_{0}j}|,\max\limits_{1\leq j\leq m}|c_{ji_0}|\}.$$ \end{proof} \noindent \textbf{Claim 5:} $\iota: (\overline{X},d) \hookrightarrow (L_\mathbb{F}(X),||\cdot||), \ \iota(x)=x$ is an isometric embedding, i.e. $$||x-y||=d(x,y) \ \ \ \forall \ x,y \in \overline{X}.$$ \begin{proof} If $x=y$ the assertion is trivial, so we may assume that $u=x-y\neq \textbf{0}.$ By Claim $2$ the value $||x-y||$ can be computed on the support $\{x,y\}.$ Using also some of the reductions we mentioned above, it suffices to consider only the trivial decomposition $u=x-y.$ It follows that $||x-y||=d(x,y).$ \end{proof} \noindent \textbf{Claim 6:} $||u||=0$ if and only if $u$ admits a presentation $u=\sum\limits_{k=1}^{t} s_k(x_k-y_k)$ such that $x_k,y_k \in \operatorname{supp}(u)$ and $d(x_k,y_k)=0$ for every $k \in \{1, \dots, t\}$. In particular, the ultra-seminorm $||\cdot||$ is an ultra-norm on $L_\mathbb{F}(X)$ if and only if $d$ is an ultra-metric on $X$. \begin{proof} The ``if'' part is trivial. The ``only if'' part is obvious for $u=\textbf{0}$. Suppose that $u \neq \textbf{0}$ and let $u=\sum\limits_{i=1}^{n}\lambda_ix_i$ be a normal form of $u.$ First suppose that $u$ is \emph{$d$-irreducible} in the following sense: there are no $1\leq i \neq j\leq m$ such that $d(x_i,x_j)=0$, where $m=|\operatorname{supp}(u)|.$ We claim that $||u|| >0$.
Indeed, the corresponding $l_0$ defined in Claim 4 is positive and we have $||u||\geq r\cdot l_0$. Clearly, $r >0$ because $u \neq \textbf{0}$. So, we get that $||u|| >0$. Now we can suppose that $u$ is $d$-reducible. We describe a certain reduction for $u$. Choose a pair $i \neq j$ such that $d(x_i,x_j)=0$. Without loss of generality we may assume that $x_i\neq\textbf{0}.$ Denote $w_1:= \lambda_i(x_i-x_j), u_1:=u-w_1$. By Claims 1 and 5 we know that $||w_1||=0$. Hence, $||u_1||=||u-w_1||=0$. $\blacktriangleright$ In case $x_j=\textbf{0}$ delete the term $\lambda_ix_i$ in the presentation $u=\sum\limits_{i=1}^{n}\lambda_ix_i$ to obtain a normal form of $u-w_1.$ $\blacktriangleright$ In case $x_j\neq \textbf{0}$ observe that $\lambda_ix_i + \lambda_j x_j=\lambda_i (x_i-x_j) + (\lambda_i+\lambda_j)x_j$. Replacing the terms $\lambda_ix_i, \lambda_j x_j$ in the presentation $u=\sum\limits_{i=1}^{n}\lambda_ix_i$ with the single term $(\lambda_i+\lambda_j)x_j$ we get a normal form of $u-w_1$. In both cases we can then use the same reductions for $u_1$ to obtain $u_2:=u_1-w_2$, etc. Continuing in this manner we get, after finitely many steps, a vector $u_t$ such that $||u||=||u_t||=0$ and in the normal presentation of $u_t$ we have no pair of distinct elements $a,b \in X$ such that $d(a,b)=0$. That is, $u_t$ is $d$-irreducible. Then necessarily $u_t=\textbf{0}$; indeed, otherwise, as above, we would obtain $||u_t||>0$. Hence, $u=\sum\limits_{k=1}^{t} w_k$. By the definition of $w_k$ this proves Claim 6. \end{proof} \vskip 0.3cm \noindent \textbf{Claim 7:} (Maximality property) Let $\sigma$ be an ultra-seminorm on $L_\mathbb{F}(X)$ such that \begin{equation} \label{eq:sigma} \sigma(x-y) \leq d(x,y)\ \ \forall x,y\in \overline{X}. \end{equation} Then $\sigma\leq ||\cdot||.$ \begin{proof} Let $u$ be a non-zero element of $L_\mathbb{F}(X)$ and $\sigma$ be an ultra-seminorm which satisfies (\ref{eq:sigma}).
Then for every decomposition $u=\sum\limits_{i=1}^{n}\lambda_i(x_i-y_i), \ x_i,y_i\in \overline{X}$ we obtain $$\sigma(u)=\sigma(\sum\limits_{i=1}^{n}\lambda_i(x_i-y_i))\leq \max_{1 \leq i \leq n}|\lambda_i|\sigma(x_i-y_i) \leq \max_{1 \leq i \leq n}|\lambda_i|d(x_{i},y_{i}).$$ It follows from the definition of the ultra-seminorm $||\cdot||$ that $\sigma(u) \leq ||u||.$ \end{proof} Combining the claims we complete the proof of Theorem \ref{t:AE}. \end{proof} \begin{example} Let $\mathbb F:=\mathbb Z_2$ be the discrete field of two elements. Note that in this case $(L_\mathbb{F}(X), ||\cdot||)$, as a topological group, coincides with $B_{\scriptscriptstyle\mathcal{NA}}$, the \emph{uniform free NA Boolean group} over $(X,d).$ Indeed, this follows from the fact that $B_{\scriptscriptstyle\mathcal{NA}}$ is metrizable by a Graev type ultra-norm (see \cite{MS}). \end{example} \begin{remark} \label{rem:addthird} Theorem \ref{t:attaining} shows that in Theorem \ref{t:AE} we can assume, in addition, that: \begin{enumerate} \item The infimum in Theorem \ref{t:AE} is attained. \item The coefficients $c_{ij}$ (in Theorem \ref{t:AE}.3) belong to the additive subgroup $G_u$ of $\mathbb F,$ generated by the normal coefficients $\lambda_i$ of $u$. \end{enumerate} \end{remark} \begin{remark} Using Claim $3$ and additional computations we obtain a simplified version of Equation (\ref{eq:claim3}): $$||u||=\min \bigg \{\max \limits_{1\leq i<j\leq m}|c_{ij}|d(x_i,x_j): \forall i\geq j \ c_{ij}=0 \ , \ \forall i: 1\leq i\leq n \ \sum\limits_{j=i+1}^{m}c_{ij}- \sum\limits_{j=1}^{i-1}c_{ji}=\lambda_i \bigg\}.$$ \end{remark} \section{Generalized integer value property} \subsection{$G$-value property for subgroups $G \subseteq \Bbb R$} First recall the {\it integer value property} for the case $\mathbb F=\Bbb R$. Let $d$ be a (pseudo)metric on $X$ and $||\cdot||$ be its Kantorovich (semi)norm.
For an element of $L^0 (X)$ with integer coefficients the inf-sum cost Formula (\ref{classform}) achieves its infimum at an integer matrix $(c_{ij}).$ See, for example, Sakarovitz \cite[p. 179]{Sak}, and Uspenskij \cite{Us-free}. Replacing the group of integers $\Bbb Z$ with any other additive subgroup $G$ of $\Bbb R$ we obtain a natural generalization. We call it the \emph{$G$-value property}. It means that whenever we have an element of $L^0 (X)$ with coefficients from $G$, the minimum in the formula is obtained at a matrix with elements from $G.$ This generalized version can be proved using the tools of convex analysis as in \cite{Us-free}. In the sequel we prove the $G$-value property for the NA case. \vskip 0.3cm \subsection{$G$-value property in the non-archimedean case} In this subsection let $\mathbb F$ be an NA valued field and $(X,d)$ be an ultra-(pseudo)metric space. \begin{lemma} \label{l:GenDigital} Let $G$ be an additive subgroup of an NA valued field $\mathbb F.$ Let $u= \sum\limits_{i=1}^{n}\lambda_i x_i\in L_\mathbb{F}(X)$ with $\lambda_i\in G \ \forall i.$ Then the ultra-seminorm $||u||$ can be computed using only the coefficients from $G$. That is, in the formula of Theorem \ref{t:AE}.2 we get $$||u||:=\inf\bigg \{\max \limits_{1\leq k\leq l} |\rho_k| d(s_k,t_k): u=\sum\limits_{k=1}^{l}\rho_k(s_k-t_k), \ s_k,t_k\in \overline{X}, \ \rho_k\in G \bigg \} .$$ \end{lemma} \begin{proof} It is equivalent to show that for every decomposition $u=\sum\limits_{j=1}^{m}\mu_j (a_j-b_j)$ there exists a decomposition $u=\sum\limits_{k=1}^{l}\rho_k(s_k-t_k)$ with $\rho_k\in G \ \forall k: 1\leq k \leq l$ such that $$\max \limits_{1\leq k\leq l} |\rho_k| d(s_k,t_k) \leq \max \limits_{1\leq j\leq m} |\mu_j| d(a_j,b_j).$$ By deleting any term of $u$ of the form $\mu_j(x-x)$ we may assume that $a_j\neq b_j \ \forall j.$ If $\mu_j\in G \ \forall j: 1\leq j\leq m$ there is nothing to prove. So, without loss of generality, we may assume that $\mu_1\notin G$. 
Moreover we can suppose that $a_1\neq \mathbf{0}$ (otherwise, write the summand $(-\mu_1)(b_1-a_1)$ instead of $\mu_1(a_1-b_1)$). Consider the set of indices $$A:=\{j\neq 1: a_j=a_1 \vee b_j=a_1\}.$$ We show that there exists $j\in A$ such that $\mu_j\notin G.$ If $a_1\in \operatorname{supp}(u)$ then there exists $1\leq i\leq n$ such that $a_1=x_i.$ Hence, $$\mu_1+\sum\limits_{j\in A}{k_j}\mu_j=\lambda_i$$ where $k_j=1$ if $a_j=a_1$ and $k_j=-1$ if $b_j=a_1.$ If $a_1\notin \operatorname{supp}(u)$ then $$\mu_1+\sum\limits_{j\in A}{k_j}\mu_j=0_\mathbb{F}.$$ Since $G$ is an additive subgroup of $\mathbb F$, $\mu_1\notin G$ and $\{0_{\mathbb{F}},\lambda_i\}\subseteq G$, we conclude that there exists $j\in A$ such that $\mu_j\notin G.$ Since $|\mu_j|=|-\mu_j|$ and $|\mu_1|=|-\mu_1|$, we may assume, without loss of generality (interchanging the roles of the first and the $j$-th summands if necessary), that there exists $j\neq 1$ such that $b_j=a_1, \ \mu_j\notin G$ and $|\mu_j|\leq |\mu_1|.$ Replace the terms $\mu_1(a_1-b_1)$ and $\mu_j (a_j-a_1)$ with $\mu_j(a_j-b_1)$ and $(\mu_1-\mu_j)(a_1-b_1).$ We show that $$\max \{|\mu_j|d(a_j,b_1),|\mu_1-\mu_j|d(a_1,b_1)\}\leq \max \{|\mu_j|d(a_j,a_1),|\mu_1|d(a_1,b_1)\}.$$ This substitution decreases the number of terms in which the element $a_1$ appears with a scalar coefficient not from $G.$ Since $|\mu_j|\leq |\mu_1|$ it follows from the strong triangle inequality of the valuation $|\cdot|$ that \begin{align*} |\mu_1-\mu_j|d(a_1,b_1) &\leq \max \{|\mu_1|d(a_1,b_1),|\mu_j|d(a_1,b_1)\}=|\mu_1|d(a_1,b_1)\leq \\ &\leq \max \{|\mu_j|d(a_j,a_1),|\mu_1|d(a_1,b_1)\}.
\end{align*} From the strong triangle inequality of $d$ we obtain $$|\mu_j|d(a_j,b_1)\leq \max \{|\mu_j|d(a_j,a_1),|\mu_j|d(a_1,b_1)\}\leq\max \{|\mu_j|d(a_j,a_1),|\mu_1|d(a_1,b_1)\}.$$ Therefore, $$\max \{|\mu_j|d(a_j,b_1),|\mu_1-\mu_j|d(a_1,b_1)\}\leq \max \{|\mu_j|d(a_j,a_1),|\mu_1|d(a_1,b_1)\}.$$ Applying finitely many substitutions of this form to terms in which the element $a_1$ appears and in which the coefficients are not taken from $G,$ we obtain a decomposition in which all coefficients of $a_1$ (if there are any) are from $G.$ Repeating this algorithm for other elements, if necessary, we obtain a decomposition of the form $$u=\sum\limits_{k=1}^{l}\rho_k(s_k-t_k)$$ with $\rho_k\in G \ \forall k: 1\leq k \leq l$ such that $$\max \limits_{1\leq k\leq l} |\rho_k| d(s_k,t_k) \leq \max \limits_{1\leq j\leq m} |\mu_j| d(a_j,b_j).$$ \end{proof} \begin{notation} For every $u= \sum\limits_{i=1}^{n}\lambda_i x_i\in L_\mathbb{F}(X)$ (normal form) denote by $G_u$ the additive subgroup of $\mathbb F$ generated by the coefficients $\lambda_i$ of $u$. \end{notation} Observe that by the strong triangle inequality for every $c \in G_u$ we have \begin{equation} \label{eq:r} |c| \leq r:=\max\{|\lambda_i|: 1\leq i\leq n\}. \end{equation} \begin{lemma} \label{cor:fromg} \emph{(NA local $G_u$-value property)} For every $u= \sum\limits_{i=1}^{n}\lambda_i x_i\in L_\mathbb{F}(X)$ we have $$||u||=\inf\bigg \{\max \limits_{1\leq i,j \leq m}|c_{ij}|d(x_i,x_j): c_{ij} \in G_u, \ \ \forall i: 1\leq i\leq n \ \sum\limits_{j=1}^{m}c_{ij}- \sum\limits_{j=1}^{m}c_{ji}=\lambda_i \bigg\}.$$ \end{lemma} \begin{proof} Combine Lemma \ref{l:GenDigital} with Claims $2,3$ of Theorem \ref{t:AE} taking into account the following observation. Let $u=\sum\limits_{k=1}^{l}\rho_k(s_k-t_k)$ with $\rho_k\in G \ \forall k: 1\leq k \leq l.$ Since $G$ is an additive subgroup of $\mathbb F$, each reduction appearing in the proof of Claim $2$ yields a decomposition of the same form. 
That is, the coefficients in the resulting decomposition are from $G.$ \end{proof} \begin{lemma} \label{thm:min} Let $u=\sum\limits_{i=1}^{n}\lambda_i x_i\in L_\mathbb{F}(X)$. Suppose that for all positive reals $a \leq b$ the set $A_{ab}:=\{|x|: x\in G_u, \ a\leq |x|\leq b \}$ is finite. Then $$||u||=\min \bigg \{\max \limits_{1\leq i,j\leq m}|c_{ij}|d(x_i,x_j): c_{ij} \in G_u, \forall i: 1\leq i\leq n \ \sum\limits_{j=1}^{m}c_{ij}- \sum\limits_{j=1}^{m}c_{ji}=\lambda_i \bigg \}.$$ \end{lemma} \begin{proof} In case $||u||=0$ we do not need the finiteness assumption. Indeed, by the proof of Claim 6 of Theorem \ref{t:AE} there exists a matrix $(c_{ij})\in G_u^{m\times m}$ such that $$u=\sum\limits_{i=1}^{m}\sum\limits_{j=1}^{m}c_{ij}(x_i-x_j),$$ where for every $i,j$ either $d(x_i,x_j)=0$ or $c_{ij}=0_{\mathbb F}$. It follows that $$\sum\limits_{j=1}^{m}c_{ij}- \sum\limits_{j=1}^{m}c_{ji}=\lambda_i \ \forall i: 1\leq i\leq n$$ and thus the infimum in Lemma \ref{cor:fromg} is attained. So, without loss of generality, we may assume that $||u|| > 0$. We have to show that the infimum in Lemma \ref{cor:fromg} is attained.
Assuming the contrary and taking into account Formula (\ref{eq:r}), there exists a sequence of matrices $$\{(c_{ij}^k):k\in \Bbb N \}\subseteq G_u^{m\times m}$$ with the following properties: \begin{enumerate} \item $\forall i,j,k \ \ |c_{ij}^k| \leq r$; \item $\forall k\in \Bbb N \ \forall i: 1\leq i\leq n \ \ \sum\limits_{j=1}^m c_{ij}^k-\sum\limits_{j=1}^m c_{ji}^k=\lambda_i;$ \item $\max\limits_{1\leq i,j\leq m}|c_{ij}^k|d(x_i,x_j)>\max\limits_{1\leq i,j\leq m}|c_{ij}^{k+1}|d(x_i,x_j)>||u||.$ \end{enumerate} Passing to a subsequence, if necessary, we can also assume that there exists a pair of indices $(i_0,j_0)$ such that $$ \forall k\in \Bbb N \ \max\limits_{1\leq i,j\leq m}|c_{ij}^k|d(x_i,x_j)=|c_{i_0j_0}^k|d(x_{i_0},x_{j_0}).$$ It follows that $$\forall k\in \Bbb N \ \ \ r\geq |c_{i_0j_0}^k|>|c_{i_0j_0}^{k+1}|>\frac{||u||}{d(x_{i_0},x_{j_0})}>0.$$ By our assumption the set $$A=\bigg\{|x|: x\in G_u, \ r\geq |x| \geq \frac{||u||}{d(x_{i_0},x_{j_0})}\bigg\}$$ is finite. This contradicts the fact that the set $\{|c_{i_0j_0}^k|: k\in \Bbb N\}$, being the range of a strictly decreasing sequence, is infinite. \end{proof} By $\operatorname{char}(\mathbb F)$ we denote the characteristic of the field $\mathbb F$. Recall that if $\operatorname{char}(\mathbb F)=0$ then the field $\Bbb Q$ of rationals is naturally embedded in $\mathbb F$. \begin{lemma}\label{lem:fin} Let $(\mathbb{F},|\cdot|)$ be an NA valued field with $\operatorname{char}(\mathbb F)=0$. Then, for all positive reals $a \leq b$ the set $\{|q|: \ a\leq |q|\leq b, \ q\in\Bbb Q \}$ is finite. \end{lemma} \begin{proof} By Ostrowski's Theorem \ref{Ostrowski} the restricted valuation on $\Bbb Q \subseteq \mathbb F$ is discrete. Hence, the set $\{|q|: q\in \Bbb Q\setminus \{0_\mathbb F\}\}$ is a closed and discrete subset of $(0,\infty)$. It follows that for any positive reals $a \leq b$ the set $\{|q|: \ a\leq |q|\leq b, \ q\in \Bbb Q \}$ is compact and discrete and thus finite.
\end{proof} \begin{thm}[{Min-attaining Theorem}] \label{t:attaining} Let $(\mathbb{F},|\cdot|)$ be an NA valued field. Let $u=\sum\limits_{i=1}^{n}\lambda_i x_i\in L_\mathbb{F}(X)$. Then, $$||u||=\min \bigg \{\max \limits_{1\leq i,j\leq m}|c_{ij}|d(x_i,x_j): c_{ij} \in G_u, \ \forall i: 1\leq i\leq n \ \sum\limits_{j=1}^{m}c_{ij}- \sum\limits_{j=1}^{m}c_{ji}=\lambda_i \bigg \}.$$ \end{thm} \begin{proof} We show that Lemma \ref{thm:min} can be applied to every NA valued field $(\mathbb{F},|\cdot|)$ and to every $u=\sum\limits_{i=1}^{n}\lambda_i x_i\in L_\mathbb{F}(X)$. $\blacktriangleright$ In case $\operatorname{char}(\mathbb F)>0$ the subgroup $G_u$ is finite, being a finitely generated additive subgroup of a field of positive characteristic. So, it is trivial that the set $A_{ab}$ from Lemma \ref{thm:min} is finite. $\blacktriangleright$ Now assume that $\operatorname{char}(\mathbb F)=0.$ Instead of showing directly that the set $$A_{ab}:=\{|x|: x\in G_u, \ a\leq |x|\leq b \}$$ is finite for all positive reals $a \leq b$, we will show that it is contained in a finite subset $B_{ab}$ of $\Bbb R$. Let $$B_{ab}:=\{|x|: x\in \widetilde{G_u}, \ a\leq |x|\leq b\}$$ where $\widetilde{G_u}:=\{\sum\limits_{i=1}^{n}m_i\lambda_i : \ m_i\in \Bbb Q\}.$ Since $G_u\subseteq \widetilde{G_u}$ we also have $A_{ab}\subseteq B_{ab}.$ We prove the finiteness of the set $B_{ab}$ using induction on $n,$ the number of scalar coefficients $\lambda_i$ in the normal form of $u.$ First, for the case $n=1$ let $u=\lambda x.$ We show that the set $\{|m\lambda|: a\leq |m\lambda|\leq b, \ m\in \Bbb Q \}$ is finite. It is equivalent to show that the set $\{|m|: c\leq |m|\leq d,\ m\in \Bbb Q \}$ is finite, where $c=\frac{a}{|\lambda|}, d=\frac{b}{|\lambda|}.$ This set is finite by Lemma \ref{lem:fin}. For the induction step, let $u=\sum\limits_{i=1}^{n+1}\lambda_ix_i$ and $v=\sum\limits_{i=1}^{n}\lambda_ix_i.$ By the induction hypothesis the set $$C_{ab}:=\{|x|: x\in \widetilde{G_v}, \ a\leq |x|\leq b\}$$ is finite.
If $$\{|x|: x\in \widetilde{G_u}\setminus \widetilde{G_v}, \ a\leq |x|\leq b\}=\emptyset$$ there is nothing to prove. So we may assume that there exists an element of $\widetilde{G_u}$ of the form $t=\sum\limits_{i=1}^{n+1}t_i\lambda_i,$ where $t_i\in \Bbb Q \ \forall i,$ $\ t_{n+1}\neq 0, \ a\leq |t|\leq b$ and $|t|\notin C_{ab}.$ It follows from Lemma \ref{lem:fin} that the set $$D:=\{|qt|:q\in \Bbb Q, \ a\leq |qt|\leq b \}$$ is finite. It suffices to show that $$\{|x|: x\in \widetilde{G_u} \setminus \widetilde{G_v}, \ a\leq |x|\leq b\}\subseteq C_{ab}\cup D.$$ Let $s=\sum\limits_{i=1}^{n+1}s_i\lambda_i\in \widetilde{G_u}\setminus \widetilde{G_v}$ and $a\leq |s|\leq b$. We will show that $|s|\in C_{ab}\cup D.$ Since $s\in \widetilde{G_u}\setminus \widetilde{G_v}$, we have $s_{n+1}\neq 0.$ Since $t_{n+1}\neq 0$, there exists $q\in \Bbb Q \setminus \{0\}$ such that $qt_{n+1}=s_{n+1}.$ Thus there exists $r\in \widetilde{G_v}$ such that $s=qt+r.$ Clearly $|qt|\neq |r|.$ Indeed, otherwise we would have $|t|=|\frac{1}{q}r|$ with $\frac{1}{q}r\in \widetilde{G_v}$ and $a\leq |t|\leq b$, so $|t|\in C_{ab}$, contradicting the fact that $|t|\notin C_{ab}.$ So, by the basic properties of the strong triangle inequality, either $|s|=|qt|\in D$ or $|s|=|r|\in C_{ab}.$ Therefore $B_{ab}\subseteq C_{ab}\cup D$, as needed. \end{proof} \begin{corol} \label{c:min} The infimum in Theorem \ref{t:AE} is, in fact, a minimum.
\end{corol} \begin{prop} \label{p:upper} For every $u=\sum\limits_{i=1}^{n}\lambda_ix_i \in L_\mathbb F(X)$ we have $$r\cdot l_0 \leq ||u||\leq r\cdot l_1 $$ where $r=\max\{|\lambda_i|: 1\leq i\leq n\}$, $l_1=\max\{d(x_i,x_j):\ 1\leq i,j\leq m\}$, $l_0=\min\{d(x_i,x_j):\ 1\leq i \neq j\leq m\}$ and $m=|\operatorname{supp}(u)|.$ \end{prop} \begin{proof} Claim 4 of Theorem \ref{t:AE} provides a lower bound $r\cdot l_0 \leq ||u||.$ By Lemma \ref{cor:fromg} $$||u||=\inf \bigg \{\max \limits_{1\leq i,j\leq m}|c_{ij}|d(x_i,x_j): c_{ij} \in G_u, \forall i: 1\leq i\leq n \ \sum\limits_{j=1}^{m}c_{ij}- \sum\limits_{j=1}^{m}c_{ji}=\lambda_i \bigg\},$$ while $|c_{ij}| \leq r$ by (\ref{eq:r}) for every such matrix $(c_{ij})$. Therefore $||u|| \leq \max \limits_{1\leq i,j\leq m}|c_{ij}|d(x_i,x_j)\leq r\cdot l_1.$ \end{proof} \begin{corol} \label{c:special} Let $u=\sum\limits_{i=1}^{n}\lambda_ix_i \in L_\mathbb F(X)$. Suppose that $l=d(x_i,x_j)$ for every $x_i \neq x_j \in \operatorname{supp}(u)$. Then $||u||=r \cdot l$ where $r=\max\{|\lambda_i|: 1\leq i\leq n\}$. \end{corol} \section{Free NA locally convex space}\label{s:freeLCS} For the free locally convex $\mathbb{F}$-spaces (where $\mathbb F= \Bbb R$ or $\Bbb C$) on uniform spaces we refer to Raikov \cite{MPV}. Here we consider their NA analogue. Let $\mathbb{F}$ be an NA valued field. Recall \cite{Schn,PS} that a Hausdorff NA $\mathbb{F}$-vector space $V$ is said to be \emph{locally convex} if its topology can be generated by a family of ultra-seminorms. Assigning to every NA locally convex $\mathbb{F}$-space $V$ its uniform space $(V,{\mathcal U}),$ we define a forgetful functor from the category $_\mathbb{F}$LCS$_{\scriptscriptstyle\mathcal{NA}}$ of all Hausdorff NA locally convex spaces to the category of all NA Hausdorff uniform spaces $\mathbf{Unif}_{\scriptscriptstyle\mathcal{NA}}$. \begin{defi} \label{d:FreeLCS} Let $\mathbb{F}$ be an NA valued field and $(X,{\mathcal U}) \in \mathbf{Unif}_{\scriptscriptstyle\mathcal{NA}}$ be an NA uniform space.
By a \emph{free NA locally convex $\mathbb{F}$-space} of $(X,{\mathcal U})$ we mean a pair $(L_\mathbb{F}(X,{\mathcal U}),i)$ (or, simply, $L_\mathbb{F}(X,{\mathcal U})$ or $L_\mathbb{F}(X)$ when $i$ and ${\mathcal U}$ are understood), where $L_\mathbb{F}(X,{\mathcal U})$ is a locally convex $\mathbb{F}$-space and $i: X \to L_\mathbb{F}(X,{\mathcal U})$ is a uniform map satisfying the following universal property. For every uniformly continuous map \textbf{$\varphi: (X,{\mathcal U}) \to V$} into a locally convex $\mathbb{F}$-space $V$, there exists a unique continuous linear homomorphism \textbf{$\Phi: L_\mathbb{F}(X,{\mathcal U}) \to V$} for which the following diagram commutes: \begin{equation*} \label{equ:ufn} \xymatrix { (X,{\mathcal U}) \ar[dr]_{\varphi} \ar[r]^{i} & L_\mathbb{F}(X,{\mathcal U}) \ar[d]^{\Phi} \\ & V } \end{equation*} \end{defi} A categorical reformulation of this definition is that $i: X \to L_\mathbb{F}(X,{\mathcal U})$ is a universal arrow from $(X,{\mathcal U})$ to the forgetful functor $_\mathbb{F}$LCS$_{\scriptscriptstyle\mathcal{NA}} \to \mathbf{Unif}_{\scriptscriptstyle\mathcal{NA}}$. The uniformity $\overline{\mathcal U }$ in the following theorem is obtained from the uniformity $\mathcal{U}$ by adding to $X$ the element $\mathbf{0}$ as an isolated point. In particular, if $\mathcal U$ is metrizable and $d$ is the corresponding ultra-metric, one can extend $d$ from $X$ to $\overline{X}$ such that $d$ induces the uniformity $\overline{\mathcal U}$ (apply Lemma \ref{l:extend}). \begin{thm} \label{t:freeLCS} For every Hausdorff NA uniform space $(X,{\mathcal U})$ the uniform NA free locally convex $\mathbb{F}$-space exists. Its structure can be defined as follows. Let $D$ be the set of all $\overline{\mathcal U}$-uniformly continuous ultra-pseudometrics on $\overline{X}:=X\cup \{\mathbf{0}\}$. 
For every $d \in D$ we have the corresponding Kantorovich ultra-seminorm $||\cdot||_d$ on $L_\mathbb{F}(X).$ Then $L_\mathbb{F}(X)$ endowed with the family $\Gamma:=\{||\cdot||_d: d \in D\}$ of Kantorovich ultra-seminorms defines the desired uniform NA free locally convex $\mathbb{F}$-space which we denote by $L_\mathbb{F}(X,{\mathcal U})$. The corresponding arrow $i: (X,{\mathcal U})\to L_\mathbb{F}(X,{\mathcal U})$ is a uniform embedding. \end{thm} \begin{proof} First of all, observe that $L_\mathbb{F}(X,{\mathcal U})$ is Hausdorff. Indeed, this follows by analyzing Claims 4 and 6 of Theorem \ref{t:AE} (or, Proposition \ref{p:upper}). Next, for a uniformly continuous map $\varphi: (X,{\mathcal U})\to V$ into a locally convex $\mathbb{F}$-space $V$, let $\Phi$ be the unique linear extension of $\varphi$, so that we have the following commutative diagram \begin{equation*} \label{e:K1} \xymatrix { (X,{\mathcal U}) \ar[dr]_{\varphi} \ar[r]^{i} & L_\mathbb{F}(X,{\mathcal U}) \ar[d]^{\Phi} \\ & V } \end{equation*} Now we only have to show that $\Phi$ is continuous. Since $\mathbf{0}$ is isolated in $(\overline{X},\overline{\mathcal U } )$ and $\varphi: (X,{\mathcal U})\to V$ is uniformly continuous, so is the natural extension $\varphi:(\overline{X},\overline{\mathcal U })\to V.$ By our assumption $V$ has a family $\Gamma_V$ of ultra-seminorms which generate its topology. Every $\rho \in \Gamma_V$ induces an ultra-seminorm $\sigma_{\rho}$ on $L_\mathbb{F}(X)$ and an ultra-pseudometric $d_{\rho}$ on $\overline{X}$ defined by $$ \sigma_{\rho}(u):=\rho(\Phi(u)), \ \ \ d_{\rho}(x,y):=\rho(\varphi(x)-\varphi(y)), $$ respectively. Since $\varphi:(\overline{X},\overline{\mathcal U })\to V$ is uniformly continuous we have $d_{\rho}\in D.$ Consider the corresponding Kantorovich ultra-seminorm $||\cdot||_{d_{\rho}}$ on $L_\mathbb{F}(X)$. Then $\sigma_{\rho}(x-y)={d_{\rho}}(x,y)$ for every $x,y \in \overline {X}$.
By the maximality property (Definition \ref{d:KantUltraNorm} and Theorem \ref{t:AE}) we obtain $||\cdot||_{d_{\rho}} \geq \sigma_{\rho}.$ This guarantees that $\rho(\Phi(u)) \leq ||u||_{d_{\rho}}$ for every $u \in L_\mathbb{F}(X)$, which implies the continuity of $\Phi$. Finally, note that by Lemma \ref{l:extend} and Theorem \ref{t:AE} the family $\Gamma$ of Kantorovich ultra-seminorms generates the original uniform structure ${\mathcal U}$ on $X=i(X) \subseteq L_\mathbb{F}(X)$. Hence $i$ is a uniform embedding. \end{proof} \begin{prop} \label{p:emb} Let $\mathbb F$ be an NA valued field and $K$ a subfield of $\mathbb F.$ Then for every Hausdorff NA uniform space $(X,{\mathcal U})$ the natural algebraic inclusion $j: L_K(X) \to L_{\mathbb F}(X)$ of $K$-vector spaces is a topological embedding. \end{prop} \begin{proof} Let $d$ be a uniformly continuous ultra-pseudometric on $\overline{X}:=X\cup \{\mathbf{0}\}$. Denote by $||\cdot||^K$ and $||\cdot||^{\mathbb F}$ the corresponding Kantorovich ultra-seminorms of $d$ in $L_K(X)$ and $L_{\mathbb F}(X)$ respectively. Let $u= \sum\limits_{i=1}^{n}\lambda_i x_i\in L_K(X) \subseteq L_{\mathbb F}(X)$. Then clearly $G_u$ is an additive subgroup of $K$ and of $\mathbb F.$ Therefore by Lemma \ref{cor:fromg} we have $||u||^K=||u||^{\mathbb F}$. Now Theorem \ref{t:freeLCS} guarantees that $j: L_K(X) \to L_{\mathbb F}(X)$ is a topological embedding. \end{proof} As in the classical case of the fields $\Bbb R$ or $\Bbb C$ (see \cite{Raikov}) we have the following property for the NA case. \begin{prop} \label{closed} The universal arrow $i: (X,{\mathcal U})\to L_\mathbb{F}(X,{\mathcal U})$ is a closed embedding for any NA valued field $\mathbb{F}$. \end{prop} \begin{proof} We have to show that $X=i(X)$ is closed in $L_\mathbb{F}(X)$. Let $v \in L_\mathbb{F}(X)$ be a vector such that $v \notin X$.
It is enough to find a locally convex space $V$ and a continuous linear morphism $\Phi: L_\mathbb{F}(X) \to V$ such that $\Phi(v) \notin cl(\Phi(X))$. For $v=\lambda x$ with $\lambda \neq 1$ and $x \in X$ consider the continuous functional $$\Phi: L_\mathbb{F}(X) \to \mathbb{F}, \ \sum_{k=1}^m \lambda_k x_k \mapsto \sum_{k=1}^m \lambda_k.$$ Then $\Phi(v) = \lambda \notin cl(\Phi(X))=\{1\}$. The same $\Phi$ works for the case of $v={\mathbf 0}$. Now we may suppose that $v=\sum_{i=1}^n \lambda_i x_i$ with non-zero coefficients $\lambda_i$ and that $\operatorname{supp}(v)$ contains at least two elements from $X$. That is, $\operatorname{supp}(v)=\{x_1,x_2,x_3, \ldots, x_n\}$, where $x_1, x_2 \in X$ and $n \geq 2$. Define $V$ as the 2-dimensional NA normed $\mathbb{F}$-space $\mathbb{F}^2$ (with the $\max$ ultra-norm). Since the uniform space $(X,{\mathcal U})$ is NA and Hausdorff, one may partition it into three pairwise disjoint clopen subsets $$ X = X_1 \cup X_2 \cup X_3 $$ such that $$x_1 \in X_1, x_2 \in X_2, x_k \in X_3 \ \ \forall \ 3\leq k \leq n.$$ Now define $$ \varphi: X \to V=\mathbb{F}^2, \ \ \ \ \varphi(x) = \begin{cases} (1,0) & {\text{for}} \ x \in X_1\\ (0,1) & {\text{for}} \ x \in X_2\\ (0,0) & {\text{for}} \ x \in X_3. \end{cases} $$ This map is uniformly continuous and $\mathbb{F}^2$ is a locally convex NA $\mathbb{F}$-space. Hence, by the universal property, there exists a continuous linear extension $\Phi: L_\mathbb{F}(X) \to V$. Now observe that $$\Phi(v)=(\lambda_1,\lambda_2) \notin cl(\Phi(X))=\{(1,0), (0,1), (0,0)\}.$$ \end{proof} \subsection{Normability and metrizability} \begin{thm} \label{t:normable} Let $\mathbb F$ be an NA valued field with a trivial valuation, $(X,d)$ be an ultra-metric space and ${\mathcal U}(d)$ be the uniformity of $d$. Then the free NA locally convex space $L_\mathbb{F}(X,{\mathcal U}(d))$ is normable by the Kantorovich ultra-norm $||\cdot||_d$.
\end{thm} \begin{proof} As in Lemma \ref{l:extend} consider the extension of $d$ on $\overline{X}$. Next, by Theorem \ref{t:AE}, we have the corresponding Kantorovich ultra-norm $||\cdot||.$ It suffices to show that if $\varphi: (X,d) \to V$ is a uniformly continuous map to a locally convex space $V,$ then the linear extension $ \Phi: (L_{\mathbb F}(X),||\cdot||)\to V$ is continuous. Since $V$ is a locally convex space, its topology is defined by a collection of ultra-seminorms $\{\rho_i\}_{i\in I}.$ Clearly, $\varphi: (\overline{X},d)\to V$ is uniformly continuous. Fix $\varepsilon>0$ and $i_0\in I.$ It follows that there exists $\delta>0$ such that $\rho_{i_0} (\varphi(x)-\varphi(y))<\varepsilon \ \forall x,y\in \overline{X} $ with $d(x,y)<\delta.$ Now assume that $u\in L_{\mathbb F}(X)$ with $||u||<\delta.$ We prove the continuity of $\Phi$ by showing that $\rho_{i_0}(\Phi(u))<\varepsilon.$ By the definition of the ultra-norm $||\cdot||$ there exists a decomposition $u=\sum\limits_{i=1}^{n}\lambda_i(x_i-y_i)$ such that $\max\limits_{1\leq i\leq n}|\lambda_i|d(x_i,y_i)<\delta.$ Discarding the terms with $\lambda_i=0$ and using that the valuation $|\cdot|$ is trivial, we obtain $\max\limits_{1\leq i\leq n}d(x_i,y_i)<\delta.$ It follows that \begin{align*} &\rho_{i_0}(\Phi(u))=\rho_{i_0}(\Phi(\sum\limits_{i=1}^{n}\lambda_i(x_i-y_i)))=\rho_{i_0}(\sum\limits_{i=1}^{n}\lambda_i(\varphi(x_i)-\varphi(y_i)))\leq \\ &\leq\max\limits_{1\leq i\leq n}|\lambda_i|\rho_{i_0}(\varphi(x_i)-\varphi(y_i))=\max\limits_{1\leq i\leq n}\rho_{i_0}(\varphi(x_i)-\varphi(y_i))<\varepsilon. \end{align*} \end{proof} It is known that if a Tychonoff space $X$ is non-discrete, then $A(X)$ is not metrizable (see \cite[Theorem 7.1.20]{AT}). This result inspired us to obtain the following. \begin{prop} Let $(X,\mathcal U)$ be a non-discrete NA uniform space. Let $\mathbb F$ be a complete NA valued field with a non-trivial valuation. Then $L_\mathbb{F}(X,{\mathcal U})$ is not metrizable.
\end{prop} \begin{proof} Assuming the contrary, there exists a decreasing sequence $\{U_n\}_{n\in \mathbb N}$ which forms a local base at $\mathbf 0\in L_\mathbb{F}(X,{\mathcal U}).$ Since the valuation $|\cdot|$ is non-trivial, there exists $\lambda\in\mathbb F$ with $|\lambda|>1.$ In view of Theorem \ref{t:freeLCS} $(X,\mathcal U)$ is a uniform subspace of $L_\mathbb{F}(X,{\mathcal U}).$ By the continuity of the scalar multiplication it follows that there exists a sequence of entourages $\varepsilon_n\in \mathcal U$ such that $\lambda^n(x-y)\in U_n \ \forall (x,y)\in \varepsilon_n.$ Since $\mathcal U$ is non-discrete and Hausdorff we can find a sequence $(x_n,y_n)\in \varepsilon_n$ such that $x_n\neq y_n \ \forall n\in\Bbb N$ and $\forall i<n \ \ x_n \notin \{x_i,y_i\}.$ Clearly, the sequence $u_n=\lambda^n(x_n-y_n) \in U_n$ converges to $\mathbf{0}.$ Let us show that this leads to a contradiction. Since $(X,\mathcal U)$ is NA it is easy to define, by induction on $n,$ a sequence $\{f_n:n\in\Bbb N\}$ of uniformly continuous functions on $(X,\mathcal U)$ with values in $\mathbb F$ such that for every $n\geq 1$: \begin{enumerate} \item $ \ |f_n(x)|\leq |\lambda|^{-n} \ \forall x\in X;$ \item $f_n(x_k)=f_n(y_k)=f_n(y_n)=0_{\mathbb F} \ \ \forall k<n;$ \item $f_n(x_n)=\lambda^{-n}-\sum\limits_{k=1}^{n-1}(f_k(x_n)-f_k(y_n))$ if $|\sum\limits_{k=1}^{n-1}(f_k(x_n)-f_k(y_n))|\leq |\lambda|^{-n}$ and $f_n(x_n)=\lambda^{-n}$ otherwise. \end{enumerate} By $(3)$ and the strong triangle inequality we have $|f_n(x_n)+\sum\limits_{k=1}^{n-1}(f_k(x_n)-f_k(y_n))|\geq |\lambda|^{-n}.$ By $(1)$ for every $x\in X$ the sequence of partial sums $\bigg\{\sum\limits_{k=1}^{n}f_k(x) \bigg \}_{n\in \Bbb N}$ is Cauchy. Since the field $\mathbb F$ is complete, the function $f=\sum\limits_{n=1}^{\infty}f_n$ is well defined.
From $(1)$ it follows that $f$ is uniformly continuous, and thus it admits an extension to a linear continuous map $\widetilde{f}: L_\mathbb{F}(X,{\mathcal U})\to \mathbb F.$ For every $n\in \Bbb N$, taking into account that $f_k(x_n)=f_k(y_n)=0_{\mathbb F}$ for every $k>n$ and $f_n(y_n)=0_{\mathbb F}$ (by $(2)$), we have \begin{align*} &|\widetilde{f}(u_n)|=|\lambda|^{n}\cdot |\sum\limits_{k=1}^{\infty}(f_k(x_n)-f_k(y_n))|= \\ &=|\lambda|^{n}\cdot |f_n(x_n)+\sum\limits_{k=1}^{n-1}(f_k(x_n)-f_k(y_n))|\geq |\lambda|^{n}\cdot |\lambda|^{-n}=1. \end{align*} It follows that the sequence $\{\widetilde{f}(u_n)\}$ does not converge to $\mathbf{0},$ contradicting the continuity of $\widetilde{f}.$ \end{proof} In contrast, note that the uniform free NA abelian topological group $A_{\scriptscriptstyle\mathcal{NA}}$ (Definition \ref{d:FreeGr}) is metrizable for every metrizable NA uniform space $(X,{\mathcal U})$ (see \cite{MS} and also Remark \ref{rem:spmet}). \subsection{Free abelian NA groups and NA Tkachenko-Uspenskij theorem} Recall the following definition from \cite{MS}. \begin{defi} \label{d:FreeGr} Let $(X,{\mathcal U})$ be an NA uniform space. The \emph{uniform free NA abelian topological group of $(X,{\mathcal U})$} is denoted by $A_{\scriptscriptstyle\mathcal{NA}}$ and defined as follows: $A_{\scriptscriptstyle\mathcal{NA}}$ is an NA abelian topological group for which there exists a universal uniform map $i: X \to A_{\scriptscriptstyle\mathcal{NA}}$ satisfying the following universal property. For every uniformly continuous map $\varphi: (X,{\mathcal U}) \to G$ into an abelian NA topological group $G$ there exists a unique continuous homomorphism \textbf{$\Phi: A_{\scriptscriptstyle\mathcal{NA}} \to G $} for which the following diagram commutes: \begin{equation*} \label{equ:ufn} \xymatrix { (X,{\mathcal U}) \ar[dr]_{\varphi} \ar[r]^{i} & A_{\scriptscriptstyle\mathcal{NA}} \ar[d]^{\Phi} \\ & G } \end{equation*} \end{defi} Let $(X,{\mathcal U})$ be an NA uniform space and $Eq(\mathcal U)$ be the set of all equivalence relations from ${\mathcal U}$.
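Before describing the topology of $A_{\scriptscriptstyle\mathcal{NA}}$, it may help to see why an NA uniformity has a base of equivalence relations in the ultra-metric case: the strong triangle inequality makes $\varepsilon$-closeness transitive, so each entourage $\{(x,y): d(x,y)<\varepsilon\}$ is an equivalence relation. The following Python sketch (an illustration of ours, not part of the paper; the helper names `v3` and `d` are our own) verifies this for the $3$-adic ultra-metric on a finite set of integers.

```python
from fractions import Fraction

def v3(n):
    """3-adic valuation of a nonzero integer (assumed helper)."""
    v = 0
    while n % 3 == 0:
        n //= 3
        v += 1
    return v

def d(x, y):
    """The 3-adic ultra-metric d(x, y) = 3^(-v3(x - y)) on the integers."""
    if x == y:
        return Fraction(0)
    return Fraction(1, 3 ** v3(x - y))

X = range(27)

# Strong triangle inequality: d(x, z) <= max(d(x, y), d(y, z)).
assert all(d(x, z) <= max(d(x, y), d(y, z))
           for x in X for y in X for z in X)

# Consequently each relation {(x, y) : d(x, y) < eps} is an equivalence
# relation: reflexivity and symmetry are clear, and transitivity follows
# from the strong triangle inequality checked above.
for eps in (Fraction(1), Fraction(1, 3), Fraction(1, 9)):
    rel = {(x, y) for x in X for y in X if d(x, y) < eps}
    assert all((x, x) in rel for x in X)             # reflexive
    assert all((y, x) in rel for (x, y) in rel)      # symmetric
    assert all((x, z) in rel
               for (x, y) in rel for (w, z) in rel if w == y)  # transitive
```

For an ultra-metric, the classes of such a relation are exactly the open balls of radius $\varepsilon$, which is what makes the subgroups $<\varepsilon>$ in the theorem below a natural candidate for a base at zero.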
\begin{thm} \label{thm:desna} \cite[Theorem 4.14]{MS} Let $(X,{\mathcal U})$ be NA and let $\mathcal{B}\subseteq Eq(\mathcal U)$ be a base of ${\mathcal U}$. For every $\varepsilon\in \mathcal{B}$ denote by $<\varepsilon>$ the subgroup of $A(X)$ algebraically generated by the set $\{x-y\in A(X) : (x,y) \in \varepsilon\}.$ Then $\{<\varepsilon>\}_{\varepsilon \in \mathcal{B} }$ is a local base at the zero element of $A_{\scriptscriptstyle\mathcal{NA}}(X,{\mathcal U}).$ \end{thm} \begin{remark} \label{rem:spmet} It is easy to see from the description above that if $(X,d)$ is an ultra-metric space, then $A_{\scriptscriptstyle\mathcal{NA}}$ is metrizable. The following lemma provides a specific metrization which can be viewed as a Graev type ultra-norm. \end{remark} \begin{lemma}\label{lem:met} Let $(X,d)$ be an ultra-metric space treated as an ultra-metric subspace of $(\overline{X},d)$ as in Lemma \ref{l:extend}. Then $A_{\scriptscriptstyle\mathcal{NA}}$ is metrizable by the Graev type ultra-norm defined as follows. For $u\in A(X)$ let $$||u||:=\inf\bigg \{\max \limits_{1\leq i\leq n}d(x_i,y_i): u=\sum\limits_{i=1}^{n}(x_i-y_i), \ x_i,y_i\in \overline{X} \bigg \}.$$ \end{lemma} \begin{proof} Observe that for $\varepsilon<1$ we have $B_d(\mathbf{0},\varepsilon)=<\varepsilon>$, where $B_d(\mathbf{0},\varepsilon)$ is the open $\varepsilon$-ball and $<\varepsilon>$ is the subgroup from Theorem \ref{thm:desna} corresponding to the equivalence relation $\{(x,y): d(x,y)<\varepsilon\}$. \end{proof} \begin{remark} Suppose that $(X,\mathcal U)$ is an NA uniform space generated by a collection of ultra-pseudometrics $\{d_i\}_{i\in I}.$ Then using the idea of Lemma \ref{lem:met} one can show that the topology of $A_{\scriptscriptstyle\mathcal{NA}}$ is generated by the set of the corresponding Graev type ultra-seminorms $\{||\cdot||_{d_i}\}_{i\in I}.$ So we have an analogy with Theorem \ref{t:freeLCS}. At the same time we have one key difference.
In the description of $A_{\scriptscriptstyle\mathcal{NA}}$ it is enough to consider any set of ultra-pseudometrics $\{d_i\}_{i\in I}$ which generate the uniformity $\mathcal U$ on $X.$ \end{remark} By the Tkachenko-Uspenskij theorem \cite{Tk,Us-free}, the free abelian topological group $A(X)$ is a topological subgroup of $L(X)$ (here $\mathbb F=\Bbb R$). This can be derived (as in \cite{Us-free}) using the usual integer value property and descriptions of Graev's extension. Consider an NA valued field $\mathbb F$ of characteristic zero. It is clear that, algebraically, $A_{\scriptscriptstyle\mathcal{NA}}(X)$ is a natural subgroup of $L_{\mathbb F} (X)$ since $\Bbb Q$ is embedded in $\mathbb F$ as a subfield. So, it is natural to ask for which NA valued fields $\mathbb F$ we have an analogue of the Tkachenko-Uspenskij theorem. Theorem \ref{t: Tk-Usp} shows that this is true if and only if the valuation of $\mathbb F$ is trivial on $\Bbb Q$. First we give a particular example. \begin{example} \label{r:Tk-Usp} The Tkachenko-Uspenskij theorem fails for the field $\mathbb F=\mathbb Q_{p}$ of $p$-adic numbers (with its standard valuation). Clearly, $\lim p^n =0_{\mathbb F}$ in ${\mathbb F}$. Now, let $x,y \in X$ be a pair of distinct points in an ultra-metric space $X$. By the continuity of the operations, the sequence $u_n:=p^n(x-y)$ converges to zero in the free locally convex space $L_{\mathbb F}(X)$. At the same time, it does not converge to zero in the free NA abelian group $A_{\scriptscriptstyle\mathcal{NA}}(X),$ as follows from the internal description of the topology of $A_{\scriptscriptstyle\mathcal{NA}}(X)$ (see Theorem \ref{thm:desna} or \cite{MS}). \end{example} \begin{thm} \label{t: Tk-Usp} Let $\mathbb F$ be an NA valued field and $(X,\mathcal U)$ be an NA uniform space.
Suppose also that $\operatorname{char}(\mathbb F)=0$ and consider $A_{\scriptscriptstyle\mathcal{NA}} (X)$ as an algebraic subgroup of $L_\mathbb F (X).$ The following conditions are equivalent: \begin{enumerate} \item $A_{\scriptscriptstyle\mathcal{NA}}$ is a topological subgroup of $L_\mathbb{F}(X,{\mathcal U})$. \item The valuation of $\mathbb F$ is trivial on $\Bbb Q$. \end{enumerate} \end{thm} \begin{proof} (1) $\Rightarrow$ (2): If the valuation on $\Bbb Q$ is not trivial, then by Ostrowski's Theorem \ref{Ostrowski} this restricted valuation is equivalent to the $p$-adic valuation for some prime $p$. Now the proof is reduced to the concrete case of Example \ref{r:Tk-Usp}. (2) $\Rightarrow$ (1): By Proposition \ref{p:emb} we know that $L_\mathbb{Q}(X,{\mathcal U})$ is a topological subgroup of $L_\mathbb{F}(X,{\mathcal U})$. So it suffices to show that $A_{\scriptscriptstyle\mathcal{NA}}$ is a topological subgroup of $L_\mathbb{Q}(X,{\mathcal U})$. Let $\{d_i\}_{i\in I}$ be a family of ultra-pseudometrics generating the uniformity $\mathcal U.$ For every $i$ extend $d_i$ to $\overline X$ as in Lemma \ref{l:extend}. Then consider the Kantorovich ultra-seminorm (Theorem \ref{t:AE}) $||\cdot||_{d_i}$ on $L_{\Bbb Q} (X).$ Since the restricted valuation $|\cdot|$ on $\Bbb Q$ is trivial, the topology of $L_\mathbb{Q}(X,{\mathcal U})$ is generated by the family $\{||\cdot||_{d_i}\}_{i\in I}.$ It suffices to prove the following claim. \vskip 0.3cm \noindent \textbf{Claim:} Let $(\overline{X},d)$ be an ultra-pseudometric space, $||\cdot||^L$ be the corresponding Kantorovich ultra-seminorm on $L_{\Bbb Q}(X)$ and $ ||\cdot||^A $ be the corresponding Graev type ultra-seminorm on $A_{\scriptscriptstyle\mathcal{NA}}$ (from Lemma \ref{lem:met}). Then $||u||^L=||u||^A$ for every $u\in A(X)$.
\begin{proof} Since $\Bbb Z$ is an additive subgroup of $\Bbb Q,$ it follows by Lemma \ref{l:GenDigital} that \begin{align*} &||u||^L=\inf\bigg \{\max \limits_{1\leq i\leq n}|\lambda_i|d(x_i,y_i): u=\sum\limits_{i=1}^{n}\lambda_i(x_i-y_i), \ x_i,y_i\in \overline{X}, \ \lambda_i\in \Bbb Z \bigg \} =\\ &=\inf\bigg \{\max \limits_{1\leq i\leq n}d(x_i,y_i): u=\sum\limits_{i=1}^{n}\lambda_i(x_i-y_i), \ x_i,y_i\in \overline{X}, \ \lambda_i\in \Bbb Z \bigg \}=\\ &=\inf\bigg \{\max \limits_{1\leq i\leq n}d(x_i,y_i): u=\sum\limits_{i=1}^{n}(x_i-y_i), \ x_i,y_i\in \overline{X} \bigg \}= ||u||^A. \end{align*} \end{proof} \end{proof} \begin{example} \label{Levi-Civita} Theorem \ref{t: Tk-Usp} can be applied to the Levi-Civita field $\mathcal R$. Indeed, as was noted in Example \ref{Levi-Civita1}, $ \mathcal R$ admits a natural dense valuation. Its restriction to $\Bbb Q$ is trivial. We conclude, by Theorem \ref{t: Tk-Usp}, that $A_{\scriptscriptstyle\mathcal{NA}}$ is a topological subgroup of $L_\mathcal{R}(X,{\mathcal U})$ for every NA uniform space $(X,{\mathcal U}).$ \end{example} \section{Pointed version and the dual space} Using techniques similar to those of the previous sections, one can study the pointed version of NATP. However, its connection to the dual space is a unique feature which we present below. Let $(X,d,e) $ be a pointed ultra-pseudometric space with a base point $e.$ Let $L_\mathbb F(X)$ be the free pointed $\mathbb F$-vector space on the pointed set $(X,e).$ As before let $$L^{0}_\mathbb F(X):=\bigg \{ \sum\limits_{i=1}^{n}\lambda_i x_i\in L_\mathbb{F}(X)| \ \sum\limits_{i=1}^{n}\lambda_i=0_{\mathbb F} \bigg\}.$$ \begin{defi} The {\it Kantorovich ultra-seminorm} is the ultra-seminorm on $L^{0}_\mathbb F(X)$ given by the following formula.
For $u\in L^{0}_\mathbb{F}(X)$ let $$||u||:=\inf\bigg \{\max \limits_{1\leq i\leq n}|\lambda_i|d(x_i,y_i): u=\sum\limits_{i=1}^{n}\lambda_i(x_i-y_i), \ x_i,y_i\in X, \ \lambda_i\in \mathbb F \bigg \}.$$ \end{defi} It follows from the definition of the Kantorovich ultra-seminorm that $||x-y|| \leq d(x,y)$ for every $x,y \in X$. As in the non-pointed case one can show that $||x-y|| = d(x,y)$ and that $||\cdot||$ is an ultra-norm whenever $d$ is an ultra-metric. It is well known that the map $x \mapsto x-e$ defines an isometric embedding of a metric space $(X,d)$ into the classical Arens-Eells space. See, for example, \cite[Section 2.2]{Wea}. One may show that the same rule defines an isometric embedding of a pointed ultra-metric space $(X,d,e)$ into $(L^{0}_\mathbb F(X), ||\cdot||)$. For every pointed Lipschitz function $f: X \to \mathbb F$ we have a canonically defined continuous functional $L^{0}_\mathbb F(X) \to \mathbb F.$ Moreover, for a nontrivially valued NA field $\mathbb F,$ the dual NA Banach space of $L^{0}_\mathbb F(X)$ can be identified with the NA Banach space ${\rm Lip}_0$ of all pointed Lipschitz functions $f: X \to \mathbb F$. We omit the verification, which is essentially very similar to the arguments of \cite[Theorem 2.2.2]{Wea}. Note that the nontriviality of the valuation is important in order to ensure that every continuous functional $L^{0}_\mathbb F(X) \to \mathbb F$ is a Lipschitz function. See \cite[Prop. 3.1]{Schn}. \section{Appendix} \label{s:ap} Let $(X,d)$ be a pseudometric space and $\Bbb C$ be the field of complex numbers. As in the case of the reals (Equation (\ref{secform})), define the Kantorovich seminorm on $L^0_\Bbb C(X)$ as follows. For every $v \in L^0_\Bbb C(X) $ \begin{equation} \label{form} ||v||=\inf\bigg\{\sum\limits_{i=1}^{l}|\rho_i|d(a_i,b_i):v=\sum\limits_{i=1}^{l}\rho_i(a_i-b_i), \ \rho_i\in \Bbb C, a_i,b_i\in X\bigg\}. \end{equation} The following was mentioned in \cite{Flood, Wea, GaoPest}.
\begin{thm} \label{main} Support elements do not determine the Kantorovich norm for the field $\mathbb F:=\Bbb C$ of complex numbers. \end{thm} The following example (which appears in \cite[p. 90]{Flood} and \cite[Ex. 1.5.7, p. 18]{Wea} without details) implies Theorem \ref{main}. That is, in general, the infimum in (\ref{form}) cannot be achieved or even approximated by support elements. This was mentioned also in \cite{GaoPest}. As we show below, one may say even more: in this example the infimum is attained outside the support. \begin{example} \label{e:outside} Let $X=\{e,p,q,r\}$ and $d$ be a metric on $X$ defined as follows: $d(p,q)=d(p,r)=d(q,r)=1$ and $d(e,p)=d(e,q)=d(e,r)=\frac{1}{2}.$ Let $\lambda=1\cdot p+\mu q+\nu r\in L^0_\Bbb C(X),$ where $1,\mu,\nu$ are the three complex cube roots of unity. We show that the infimum in the definition of $||\lambda||$ cannot be achieved or even approximated by support elements. We also show that the infimum is attained outside the support and that $||\lambda||=\frac{3}{2}.$ \vskip 0.3cm \begin{enumerate}[(a)] \item Since $1+\mu+\nu=0$ we have $\lambda= (p-e)+\mu(q-e)+\nu(r-e).$ It follows that $||\lambda||\leq d(p,e)+|\mu|d(q,e)+|\nu|d(r,e)=\frac{3}{2}.$ We will show that the minimal sum-cost coming from presentations of $\lambda$ that use only support elements is strictly larger than $\frac{3}{2}.$ When dealing with support elements it suffices to consider presentations of $\lambda$ of the form $\lambda=c_{pq}(p-q)+ c_{pr}(p-r)+ c_{qr}(q-r)$ where $c_{pq},c_{pr},c_{qr}\in \Bbb C.$ Indeed, this follows from the reduction rules: \begin{enumerate} [(1)]\item Replace $m(x-y)$ with $-m(y-x).$ \item Replace the terms $m(x-y), \ n(x-y)$ with $(m+n)(x-y).$ \end{enumerate} If $\lambda=c_{pq}(p-q)+ c_{pr}(p-r)+ c_{qr}(q-r)$ then $c_{pq}+c_{pr}=1,$ $-c_{pq}+c_{qr}=\mu,$ $ -c_{pr}-c_{qr}=\nu.$ So, the infimum is $$\inf\{|c_{pq}|d(p,q)+|c_{pr}|d(p,r)+|c_{qr}|d(q,r):c_{pq}+c_{pr}=1, -c_{pq}+c_{qr}=\mu,
-c_{pr}-c_{qr}=\nu \}.$$ Taking into account that $d(p,q)=d(p,r)=d(q,r)=1,$ we solve the system of linear equations and see that the latter expression is equal to $\inf \limits_{t\in \Bbb C}(|\mu-t|+|0-t|+|-\nu-t|).$ Finding this infimum is a simple geometrical problem since $0,\mu,-\nu$ are three vertices of an equilateral triangle in the complex plane. It follows that the infimum is equal to $\sqrt{3}.$ Clearly $\sqrt{3}>\frac{3}{2}$ as needed. \vskip 0.5cm \item We will show that the infimum is attained outside the support and that $||\lambda||=\frac{3}{2}.$ We already know that there exists a presentation of $\lambda$ for which the value of the sum-cost is $\frac{3}{2}.$ So $||\lambda||\leq\frac{3}{2}$ and it suffices to show that $||\lambda||\geq \frac{3}{2}.$ This is done by showing that for every presentation of $\lambda$ of the form $\lambda=c_{ep}(e-p)+ c_{eq}(e-q)+ c_{er}(e-r)+c_{pq}(p-q)+ c_{pr}(p-r)+ c_{qr}(q-r),$ where $c_{ep},c_{eq},c_{er},c_{pq},c_{pr},c_{qr}\in \Bbb C,$ we have $$|c_{ep}|d(e,p)+ |c_{eq}|d(e,q)+|c_{er}|d(e,r) +|c_{pq}|d(p,q)+|c_{pr}|d(p,r)+|c_{qr}|d(q,r)=$$$$=\frac{1}{2}(|c_{ep}|+|c_{eq}|+|c_{er}|)+|c_{pq}|+|c_{pr}|+|c_{qr}|\geq \frac{3}{2}.$$ We compare the coefficients of $e,p,q,r$ in the original presentation of $\lambda$ and in the ``new'' presentation and obtain \begin{enumerate} [(1)]\item $c_{ep}+c_{eq}+c_{er}=0,$\item $-c_{ep}+c_{pq}+c_{pr}=1,$ \item $-c_{eq}-c_{pq}+c_{qr}=\mu,$ \item $-c_{er}-c_{pr}-c_{qr}=\nu.$ \end{enumerate} Now, using the triangle inequality and properties $(2)-(4),$ we obtain $$\frac{1}{2}(|c_{ep}|+|c_{eq}|+|c_{er}|)+|c_{pq}|+|c_{pr}|+|c_{qr}|=\frac{1}{2}(|-c_{ep}|+|c_{pq}|+|c_{pr}|)+\frac{1}{2}(|-c_{eq}|+|-c_{pq}|+|c_{qr}|)+$$$$+\frac{1}{2}(|-c_{er}|+|-c_{pr}|+|-c_{qr}|)\geq \frac{1}{2}(|-c_{ep}+c_{pq}+c_{pr}|+|-c_{eq}-c_{pq}+c_{qr}|+|-c_{er}-c_{pr}-c_{qr}|)=$$$$=\frac{1}{2}(|1|+|\mu|+|\nu|)=\frac{3}{2}.$$ \end{enumerate} \end{example} We showed that in the archimedean case the infimum can be attained
outside the support. In fact, as the following example shows, sometimes the infimum is not attained at all. \begin{example} Let $X=\{p,q,r\}$ and $d$ be a metric on $X$ such that $d(p,q)=d(p,r)=d(q,r)=1.$ Let $\mathbb F=\Bbb Q(i)$ be the subfield of $\Bbb C,$ where $\Bbb Q(i):=\{a+bi:\ a,b\in\Bbb Q\}.$ We will show that the infimum in the definition of $||u||$ is not attained in $\mathbb F$ for $u=(1-i)p+iq-r.$ It suffices to show that the infimum $$\inf\{|c_{pq}|d(p,q)+|c_{pr}|d(p,r)+|c_{qr}|d(q,r): c_{pq}+c_{pr}=1-i, \ -c_{pq}+c_{qr}=i, \ -c_{pr}-c_{qr}=-1 \}= $$$$= \inf \limits_{t\in \mathbb F}(|t|+|t-i|+|t-1|)$$ is not attained. Since $\mathbb F=\Bbb Q(i)$ is a dense subfield of $\Bbb C$ it follows that the latter expression is equal to $\inf \limits_{t\in \Bbb C}(|t-i|+|1-t|+|t|).$ This infimum is attained at a unique point $t_0\in \Bbb C$ that is the Fermat-Torricelli point of the triangle in the complex plane with vertices $0,1,i.$ One can show that $t_0\notin \Bbb Q(i).$ By the uniqueness of $t_0$ it follows that the infimum in the definition of $||u||$ is not attained in $\mathbb F.$ \end{example} \section{Some possible developments and problems} \begin{enumerate} \item One of the most attractive directions is the study of concrete applications of NATP (the non-archimedean transportation problem). \item A natural perspective is to extend the discrete version of NATP to a \emph{continuous} one (which in the classical case is based on measures). \item It would be interesting to look for additional properties of the free NA locally convex $\mathbb F$-space. \end{enumerate} \vskip 0.3cm \noindent {\bf Acknowledgments.} We thank R. Ben-Ari, A.M. Brodsky, G. Luk\'{a}cs, V. Pestov, L. Polev and S.T. Rachev for valuable suggestions. \bibliographystyle{amsalpha}
\section{Introduction} Quantum systems subject to high magnetic fields are known to acquire nontrivial characteristics such as the Hofstadter butterfly~\cite{PhysRevB.14.2239} and the quantum Hall effect \cite{prange1987quantum} in two-dimensional systems. Recently, the so-called synthetic gauge fields~\cite{RevModPhys.83.1523,goldman2014light,2014arXiv1410.8425G} available in ultracold atomic systems have paved the way to realizations of such systems. An advantage of cold atoms is that one can control the geometry, dimension, or quantum statistics of systems and the parameters of microscopic Hamiltonians at unprecedented levels, which has been used to explore non-trivial quantum states. Along these lines of research, the Hofstadter Hamiltonian~\cite{PhysRevLett.111.185301,PhysRevLett.111.185302} and the Haldane topological model~\cite{jotzu2014experimental} have been realized in cold atoms. Beyond such non-trivial features of non-interacting quantum matter, interactions are, as is well known, a key ingredient in the diversity of nature. Indeed, various phenomena such as superconductivity, superfluidity, and the Mott transition are understood as consequences of interactions. One can thus expect interaction effects in such topological matter to produce further non-trivial physics, and effective approaches incorporating interactions are required in theory. In one dimension, field-theoretical approaches can successfully incorporate interaction effects in a non-perturbative manner. Thus, reduction of dimensionality is one way to understand physics involving both magnetic fields and correlations. The minimal model showing non-trivial effects in the presence of magnetic fields is a two-leg ladder, and the bosonic version was first discussed in Ref.~\cite{PhysRevB.64.144515} in the context of Josephson junction arrays. In that study, it was predicted that two different phases show up: the Meissner and vortex phases.
While in the former phase a chiral current analogous to a Meissner edge current is induced on the legs by a magnetic flux, in the latter it is significantly reduced due to the penetration of vortices, analogous to field-induced vortices in type-II superconductors. Later on, theoretical interest was devoted to the strongly correlated regime, and it has been demonstrated that commensurability of the particle filling gives rise to a Mott insulator with chirality and interesting critical properties~\cite{PhysRevA.85.041602,PhysRevB.87.174501,PhysRevLett.111.150601,tokuno,PhysRevA.91.013629}. Remarkably, the two-leg bosonic ladder subject to a magnetic flux has been successfully realized in an experiment~\cite{atala2014observation} in a weakly-interacting regime. In this experiment, it has been confirmed that the behavior of the chiral current is consistent with what was predicted in Ref.~\cite{PhysRevB.64.144515}. Thus it seems that a basic consensus in the weak-coupling regime has been reached. More recently, however, it has been argued in Ref.~\cite{PhysRevA.89.063617} that in a weak-coupling regime there should exist an additional phase in which a spontaneous population imbalance between the legs occurs. This additional phase, named the biased ladder phase, was obtained with the theory of weakly-interacting Bose gases normally used in higher dimensions~\cite{pethick2002bose}. At the same time, in one dimension quantum fluctuations should be non-negligible in most cases. Thus it is worth considering whether the biased ladder phase is robust against quantum fluctuations. \begin{figure}[t] \begin{center} \includegraphics[width=1\linewidth]{fig1.pdf} \caption{A schematic figure of a two-leg Bose-Hubbard model with a flux $\phi$.
In the gauge chosen here, the flux effect appears only in the rung hoppings.} \label{fig1} \end{center} \end{figure} In this paper, we examine the two-leg Bose-Hubbard model in the presence of a magnetic flux in a weakly-interacting regime by means of a couple of effective-theory approaches. We show that a spontaneous population imbalance indeed occurs and is stable against quantum fluctuation effects. In particular, we point out that the effective theory has a similarity to a ferromagnetic $XXZ$ quantum spin model. This implies that a Heisenberg point exists in the phase diagram, where $SU(2)$ symmetry shows up in the low-energy effective theory although the original Hamiltonian does not possess that symmetry. This situation is somewhat similar to the two-leg extended Bose-Hubbard system analyzed in Ref.~\cite{PhysRevA.81.053606}, where the low-energy effective theory possesses an emergent symmetry. We also point out that umklapp processes coming from the commensurability of the flux should be seriously considered at the mean-field level, which has been overlooked in previous studies. The umklapp processes present at $\phi=\pi$ destabilize the biased ladder phase, and as a consequence, only the commensurate vortex phase is allowed. The structure of the paper is as follows. In Sec.~\ref{sec:formulation}, we review the structure of the single-particle bands as a function of $\phi$, and the low-energy effective Hamiltonian reflecting the band structure. The bottom of the single-particle band shows different topologies: a single minimum for a small flux, and double minima for a large flux. In Secs.~\ref{sec:single-minimum} and~\ref{sec:double-minimum} we discuss the physics separately for the single-minimum and for the double-minimum band structure, in which the Meissner, vortex, and biased ladder phases are allowed depending on $\phi$ and $K/J$.
In Sec.~\ref{sec:summary} a summary and perspectives on the phase transition between the Meissner and biased ladder phases, and on stronger interaction effects, are provided. Technical details on the renormalization group equations are addressed in the Appendix. \section{Formulation of the Problem} \label{sec:formulation} Following the setup in Ref.~\cite{atala2014observation}, we consider the following two-leg Bose-Hubbard ladder Hamiltonian: \beq H &&=-J\sum_{l=1}^L\sum_{p=1,2}(e^{iA^{\parallel}_{l,p}}b^{\dagger}_{l+1,p}b_{l,p}+\mathrm{H.c.}) \nonumber\\ &&\quad -K\sum_{l}(e^{iA^{\perp}_{l}}b^{\dagger}_{l,1}b_{l,2}+\mathrm{H.c.}) +\frac{U}{2}\sum_{l,p}n_{l,p}(n_{l,p}-1), \label{eq:hamiltonian} \eeq where $J$ and $K$ are the hopping amplitudes along the leg and rung directions, respectively. The applied flux is introduced via the Peierls substitution, and the corresponding gauge fields along the chain and rung directions are denoted by $A^{\parallel}_{l,p}$ and $A^{\perp}_{l}$, respectively. The flux $\phi$ is then given as $\phi=A^{\parallel}_{l,1}-A^{\perp}_{l+1}-A^{\parallel}_{l,2}+A^{\perp}_{l}$. The technology of laser-assisted tunneling~\cite{PhysRevLett.107.255301,PhysRevLett.108.225303,PhysRevLett.108.225304,PhysRevLett.111.185301,PhysRevLett.111.185302} generates a spatially-dependent phase in the rung hoppings, which leads to the flux $\phi$ per plaquette as described in Fig.~\ref{fig1}. Namely, in this paper we choose the gauge $A^{\parallel}_{l,p}=0$ and $A^{\perp}_{l}=\phi l$. Taking into account the fact that the interatomic interaction is given by an $s$-wave scattering length, as well as the stability of bosonic systems, we restrict ourselves to a local repulsive interaction, $U>0$. Clearly, the Hamiltonian~\eqref{eq:hamiltonian} is invariant under the simultaneous transformations $b_{l,1(2)}\to b_{l,2(1)}$ and $\phi\to-\phi$. Thus, we can safely take the domain of definition in $\phi$ as $0<\phi\le\pi$.
By using the general relation between current and Hamiltonian, $j=-\frac{\partial H}{\partial A}$ where $A$ is the gauge field, we can define current operators along the legs and rungs as \beq &&j^{\parallel}_{l,p}=iJ(b^{\dagger}_{l+1,p}b_{l,p} -b^{\dagger}_{l,p}b_{l+1,p}),\label{eq:leg-current}\\ &&j^{\perp}_{l}=iK\left(e^{il\phi}b^{\dagger}_{l,1}b_{l,2}- e^{-il\phi}b^{\dagger}_{l,2}b_{l,1}\right). \label{eq:rung-current} \eeq For the sake of convenience, we also introduce the chiral current along the legs: \beq j_c=j^{\parallel}_{l,1}-j^{\parallel}_{l,2}. \label{eq-chiral-current} \eeq We will see that $j_c$ and $j^{\perp}$ play important roles in characterizing each phase. Since we are interested in the regime $J,K\gg U$, we start by diagonalizing the single-particle Hamiltonian. To this end, we perform gauge and Fourier transformations as $b_{l,1}=\frac{1}{\sqrt{L}}\sum_k e^{i(k+\frac{\phi}{2})l}b_{k,1}$ and $b_{l,2}=\frac{1}{\sqrt{L}}\sum_k e^{i(k-\frac{\phi}{2})l}b_{k,2}$. Then, by considering a unitary transformation for $b_{k,1}$ and $b_{k,2}$ \beq \begin{pmatrix} b_{k,1}\\ b_{k,2} \end{pmatrix} = \begin{pmatrix} \cos\left(\frac{\xi_k}{2}\right) & -\sin\left(\frac{\xi_k}{2}\right)\\ \sin\left(\frac{\xi_k}{2}\right) & \cos\left(\frac{\xi_k}{2}\right) \end{pmatrix} \begin{pmatrix} \alpha_k\\ \beta_k \end{pmatrix}, \eeq where \beq \sin\left(\frac{\xi_k}{2}\right) =-\sqrt{\frac{1}{2}\left(1- \frac{\sin\left(\frac{\phi}{2}\right)\sin k}{ \sqrt{\left(\frac{K}{2J}\right)^2+\sin^2\left(\frac{\phi}{2}\right)\sin^2k}} \right)},\nonumber\\ \\ \cos\left(\frac{\xi_k}{2}\right) =\sqrt{\frac{1}{2}\left(1+ \frac{\sin\left(\frac{\phi}{2}\right)\sin k}{ \sqrt{\left(\frac{K}{2J}\right)^2+\sin^2\left(\frac{\phi}{2}\right)\sin^2k}} \right)},\nonumber\\ \eeq the single-particle Hamiltonian can be diagonalized~\cite{PhysRevB.73.195114,PhysRevB.76.195105} as \beq H_0=\sum_k(E_+(k)\alpha^{\dagger}_k\alpha_k+E_{-}(k)\beta^{\dagger}_k \beta_k).
\eeq Here, the single-particle spectrum is given by \beq E_{\pm}(k)=2J\left[-\cos\left(\frac{\phi}{2}\right)\cos k \pm\sqrt{\left(\frac{K}{2J}\right)^2 +\sin^2\left(\frac{\phi}{2}\right)\sin^2k}\right], \label{eq:band} \nonumber\\ \eeq which shows the two-band structure as well as the $2\pi$ periodicity reflecting the two-leg ladder geometry. As long as the interaction is weak, the single-particle low-energy states, i.e., the bottoms of the lowest band, play important roles in the low-energy many-body states. With this understanding, we neglect effects of the higher band $\alpha_k$, keeping in mind the condition $J,K\gg U$. \begin{figure}[h] \begin{center} \includegraphics[width=1\linewidth]{fig2.pdf} \caption{Changes of the topology of the lower band at a certain $\phi\ne\pi$. The arrows indicate decreasing $K/J$. For large enough $K/J$ the band has a single minimum, while in the opposite limit a double-well structure forms. In between, there exists a critical point at which the band bottom becomes quartic in $k$. The double-well structure is always maintained at $\phi=\pi$ regardless of the value of $K/J$.} \label{fig2} \end{center} \end{figure} Let us look into the behavior of the lower band in more detail. We first obtain extrema via $\frac{\partial E_-(k)}{\partial k}=0$, which leads to \beq \sin k\left[\cos\left(\frac{\phi}{2}\right)-\frac{\sin^2\left(\frac{\phi}{2} \right)\cos k} {\sqrt{\left(\frac{K}{2J}\right)^2+\sin^2\left(\frac{\phi}{2}\right)\sin^2k}} \right]=0.\nonumber\\ \label{eq:minim} \eeq If $\phi\ne\pi$ and $K/J\gg1$, then $k=0$ and $\pm \pi$ are the solutions of Eq.~\eqref{eq:minim}, and $k=0$ gives the minimum of the band. As in the case of the normal cosine band, the dispersion near $k=0$ is quadratic in $k$.
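As a sanity check, one can verify numerically that the rotation defined by $\xi_k$ diagonalizes the $2\times2$ Bloch matrix with the eigenvalues $E_{\pm}(k)$ quoted above. The following sketch is only an illustration; the parameter values ($J=1$, $K=1.3$, $\phi=2$) and sample momenta are chosen arbitrarily.

```python
import math

def band_data(k, phi, J, K):
    """Bloch matrix H(k) in the (b_{k,1}, b_{k,2}) basis, the rotation
    (sin(xi_k/2), cos(xi_k/2)) defined in the text, and the bands E_pm(k)."""
    s = math.sin(phi / 2) * math.sin(k)
    root = math.sqrt((K / (2 * J)) ** 2 + s ** 2)
    sin_half = -math.sqrt(0.5 * (1 - s / root))
    cos_half = math.sqrt(0.5 * (1 + s / root))
    H = [[-2 * J * math.cos(k + phi / 2), -K],
         [-K, -2 * J * math.cos(k - phi / 2)]]
    E_p = 2 * J * (-math.cos(phi / 2) * math.cos(k) + root)
    E_m = 2 * J * (-math.cos(phi / 2) * math.cos(k) - root)
    return H, sin_half, cos_half, E_p, E_m

def rotate(H, s, c):
    """R^T H R for R = [[c, -s], [s, c]]; returns the (alpha,alpha),
    (alpha,beta) and (beta,beta) entries."""
    a, off, d = H[0][0], H[0][1], H[1][1]
    return (a * c * c + 2 * off * s * c + d * s * s,
            (d - a) * s * c + off * (c * c - s * s),
            a * s * s - 2 * off * s * c + d * c * c)

J, K, phi = 1.0, 1.3, 2.0
errors = []
for k in (-2.5, -0.7, 0.3, 1.9):
    H, s, c, E_p, E_m = band_data(k, phi, J, K)
    e_pp, e_pm, e_mm = rotate(H, s, c)
    errors.append(max(abs(e_pp - E_p), abs(e_pm), abs(e_mm - E_m)))
print(errors)
```

For every sampled $k$ the rotated matrix comes out diagonal, with entries matching $E_{\pm}(k)$ to machine precision.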
It is straightforwardly shown that the condition for the single-minimum structure can be expressed as~\cite{PhysRevB.73.195114,PhysRevB.76.195105} \beq \left(\frac{K}{2J}\right)^2 >\frac{\sin^4\left(\frac{\phi}{2}\right)}{1-\sin^2\left(\frac{\phi}{2}\right)}. \label{eq:single-min} \eeq As $K/J$ decreases, on the other hand, the double-well structure starts to show up. The critical point between the single- and double-minimum structures is given by \beq \left(\frac{K}{2J}\right)^2 =\frac{\sin^4\left(\frac{\phi}{2}\right)}{1-\sin^2\left(\frac{\phi}{2}\right)}, \label{eq:critical_K/J} \eeq i.e., equality in Eq.~\eqref{eq:single-min}. In this case the dispersion near the bottom becomes quartic in $k$. In the double-well case, the two separated minima $\pm Q$ can be obtained from the factor in the square bracket of Eq.~\eqref{eq:minim}~\cite{PhysRevB.73.195114,PhysRevB.76.195105}, \beq Q=\sin^{-1}\left[\sqrt{\sin^2\left(\frac{\phi}{2}\right)-\left(\frac{K}{ 2J}\right)^2\cot^2\left(\frac{\phi}{2}\right)}\right]. \eeq In addition, the dispersions around the minima are quadratic in $k$ as in the single-minimum case. We emphasize that $\phi=\pi$ is special because the symmetric double-well structure is strongly protected and its minima are located at $Q=\pm\frac{\pi}{2}$ regardless of $K/J$. Similar behavior is also found when $\phi$ is varied at fixed $K/J$. In that case, for small enough $\phi$ the band has a single-minimum structure, and the double-well structure shows up when $\phi$ goes through a critical value $\phi_c$. The critical flux $\phi_c$ between these two structures is shown to satisfy~\cite{PhysRevB.73.195114,PhysRevB.76.195105} \beq \sin\left(\frac{\phi_c}{2}\right) =\sqrt{\frac{\sqrt{\left(\frac{K}{J}\right)^4 +16\left(\frac{K}{J}\right)^2} -\left(\frac{K}{J}\right)^2}{8}}.
\eeq Based on the change of the band structure discussed above, let us incorporate interaction effects within the weakly-interacting regime, $J,K\gg U$. In this regime, the bottoms of the band are also important for bosonic systems, which is an essential difference from fermionic systems. Therefore, we first truncate all the effects involving the upper band ($\alpha_k$). The Hamiltonian~(\ref{eq:hamiltonian}) is reduced to \beq H=\sum_kE_-(k)\beta^{\dagger}_k\beta_k+\frac{1}{2L}\sum_{k_1,k_2,k_3,k_4} \Gamma_{k_1,k_2,k_3,k_4}\beta^{\dagger}_{k_1}\beta^{\dagger}_{k_2} \beta_{k_3}\beta_{k_4}, \nonumber\\ \label{eq:original-h} \eeq where \beq \Gamma_{k_1,k_2,k_3,k_4} &=&U\sum_{n'\in\mathbb{Z}}\delta_{k_1+k_2-k_3-k_4,2\pi n'} \nonumber \\ && \times \Big[\sin\left(\frac{\xi_{k_1}}{2}\right) \sin\left(\frac{\xi_{k_2}}{2}\right) \sin\left(\frac{\xi_{k_3}}{2}\right) \sin\left(\frac{\xi_{k_4}}{2}\right) \nonumber\\ && +\cos\left(\frac{\xi_{k_1}}{2}\right)\cos\left(\frac{\xi_{k_2}}{2}\right) \cos\left(\frac{\xi_{k_3}}{2}\right) \cos\left(\frac{\xi_{k_4}}{2}\right) \Big].\nonumber\\ \label{eq:gamma} \eeq Note that, in contrast with a system in continuum space, we need to consider scattering processes involving a finite momentum transfer equal to a reciprocal lattice vector. In our model, a finite momentum transfer of $2\pi n'$ with an integer $n'$ is allowed, which is nothing but an umklapp process, and turns out to play a crucial role in the $\phi=\pi$ case. In what follows, we separately look into the many-body ground states for each topology of the single-particle band. \section{Band with a single minimum} \label{sec:single-minimum} For weakly-interacting bosons in higher dimensions, the Gross-Pitaevskii (GP) approach, as one of the mean-field theories, is known to provide good results~\cite{pethick2002bose}. As a consequence, the system forms a Bose-Einstein condensate (BEC), which also implies spontaneous breaking of $U(1)$ symmetry.
Thereby, a gapless excitation mode known as the Nambu-Goldstone (NG) mode is obtained. However, it is also well known that for an interacting one-dimensional bosonic system, there exists neither BEC nor an NG mode in the thermodynamic limit. Namely, the mean-field analysis underestimates fluctuation effects, and thus cannot capture correct results in the one-dimensional case. However, one-dimensional superfluids in the weakly-interacting regime show a very slow power-law decay of correlations, such that the ordering effect competes almost evenly with the quantum fluctuations. In addition, there is also an acoustic phonon mode similar to the NG mode, although it does not correspond to spontaneous symmetry breaking. From these facts, the GP approach, while not correct in a strict sense, provides a practically reasonable starting point to discuss the ground state and the low-energy excitation structure. In addition, the advantage of the GP approach is that both kinetic and interaction energies can be simultaneously taken into account at the mean-field level~\footnote{In the single-minimum case, however, the kinetic energy vanishes due to the occupation at $k=0$. In Sec. \ref{sec:double-minimum}, we see an example where the kinetic energy gives a nonzero contribution.}. Based on the above observations, let us consider the system with the single-minimum band~\cite{tokuno}. As far as weakly-interacting bosons are concerned, the bosons dominantly populate the minimum of the lower energy band, and one can perform an approximation such that all the energy states except for those in the vicinity of the minimum are projected out. Thus the low-energy single-particle spectrum is approximated as \beq E_{-}(k)\approx -E_0+\frac{k^2}{2M}, \eeq where $E_0=K+2J\cos\left(\frac{\phi}{2}\right)$ and $\frac{1}{M}=\left.\frac{d^2E_-(k)}{dk^2}\right|_{k=0}$.
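The quadratic approximation can be checked against the exact band by estimating $1/M$ with a finite difference. The snippet below is a small illustration; the parameters $J=1$, $K=2$, $\phi=\pi/2$ are chosen arbitrarily inside the single-minimum regime and are not values used in the paper.

```python
import math

def E_minus(k, phi, J, K):
    # Exact lower band from Eq. (band).
    return 2 * J * (-math.cos(phi / 2) * math.cos(k)
                    - math.sqrt((K / (2 * J)) ** 2
                                + (math.sin(phi / 2) * math.sin(k)) ** 2))

J, K, phi = 1.0, 2.0, math.pi / 2      # single-minimum regime
E0 = K + 2 * J * math.cos(phi / 2)

# The band offset: E_-(0) = -E0 exactly.
offset_err = abs(E_minus(0.0, phi, J, K) + E0)

# 1/M from a central finite difference of the second derivative at k = 0.
h = 1e-4
inv_M = (E_minus(h, phi, J, K) - 2 * E_minus(0.0, phi, J, K)
         + E_minus(-h, phi, J, K)) / h ** 2

# Accuracy of E_-(k) ~ -E0 + k^2/(2M) at a small but finite k.
k = 0.05
quad_err = abs(E_minus(k, phi, J, K) - (-E0 + inv_M * k ** 2 / 2))

print(offset_err, 1.0 / inv_M, quad_err)
```

The offset matches $-E_0$ to machine precision, the curvature is positive (a genuine minimum), and the quadratic form reproduces the exact band at small $k$ up to quartic corrections.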
Since all the wave numbers are restricted to $|k_j|\ll 1$ due to the long-wavelength approximation, the effective interaction parameter $\Gamma_{k_1,k_2,k_3,k_4}$ in Eq.~\eqref{eq:gamma} is approximated in the following way: the small wave number $k$ leads to $\sin(\xi_{k}/2)\approx -1/\sqrt{2}$ and $\cos(\xi_{k}/2)\approx 1/\sqrt{2}$, and by substituting the approximated form of $\xi_{k}$ into Eq.~\eqref{eq:gamma}, the interaction parameter is approximated as $\Gamma_{k_1,k_2,k_3,k_4}\approx U/2$. Thus, the Hamiltonian~(\ref{eq:original-h}) reduces to the following low-energy effective form: \beq H\approx\int dx\Big[-\beta^{\dagger}(x)\frac{\nabla^2}{2M}\beta(x) +\frac{U}{4} \beta^{\dagger}(x)\beta^{\dagger}(x) \beta(x)\beta(x)\Big],\nonumber\\ \label{eq:continuum-h} \eeq where $\beta(x)=\frac{1}{\sqrt{L}}\sum_k\beta_ke^{ikx}$. We note that this is essentially identical to the Lieb-Liniger model~\cite{PhysRev.130.1605}. As far as the weak-coupling limit is concerned, one may consider the following GP ground state: \beq |GS\rangle=\frac{1}{\sqrt{N!}}(\beta^{\dagger}_{k=0})^N|0\rangle, \eeq where $N$ is the number of bosons. Let us next incorporate long-wavelength fluctuations, which play crucial roles in the low-energy properties. To this end, we adopt the hydrodynamic approach, also known as bosonization for bosons~\cite{haldane1981luttinger,giamarchi2003quantum,cazalilla2004bosonizing}: \beq \beta(x)\sim\Big[n-\frac{\nabla\varphi(x)}{\pi}\Big]^{\frac{1}{2}}\sum_{m\in\mathbb{Z}} e^{2im[\pi nx-\varphi(x)]}e^{-i\theta(x)}, \label{eq:bosonized-form} \eeq where $n=N/L$ is the mean density. We have introduced the density and phase fluctuations, $\varphi(x)$ and $\theta(x)$, respectively, whose commutation relation is given by \beq [\theta(x),\frac{1}{\pi}\nabla\varphi(x')]=i\delta(x-x').
\eeq Applying the above bosonization formula~\eqref{eq:bosonized-form} to Eq.~\eqref{eq:continuum-h}, we obtain \beq H_{\text{eff}}=\frac{v_0}{2\pi}\int dx \left[\frac{1}{K_0}(\nabla\varphi)^2+K_0(\nabla\theta)^2\right], \eeq where $v_0=\sqrt{\frac{nU}{2M}}$ and $K_0=\pi\sqrt{\frac{2n}{MU}}$. This is the Hamiltonian of the celebrated Tomonaga-Luttinger liquid (TLL), which corresponds to a $c=1$ conformal field theory. It is remarkable that in this TLL Hamiltonian, the long-range order (LRO) of the single-particle density matrix, incorrectly predicted by the GP mean-field theory, is directly confirmed to be replaced by the correct quasi-LRO with algebraic decay~\cite{giamarchi2003quantum}: $\langle\beta^{\dagger}(x)\beta(0)\rangle\sim\left(\frac{1}{x}\right)^{1/2K_0}$. Let us next look into the rung current~\eqref{eq:rung-current} and the chiral current~\eqref{eq-chiral-current} by translating them into the effective theory derived above. By using the bosonization formula~\eqref{eq:bosonized-form}, the current operators, Eqs.~\eqref{eq:rung-current} and~\eqref{eq-chiral-current}, are expressed as \beq &&j^{\perp}(x)\sim0, \\ &&j_{c}(x)\sim 2nJ\sin\left(\frac{\phi}{2}\right)+O(\nabla^2\theta), \label{eq:chiral-m} \eeq where the rung current vanishes identically under bosonization, while the chiral current has a nonzero constant term and corrections starting at $\nabla^2\theta$. Note that at the level of the long-wavelength approximation, the fluctuation of the rung current $j^{\perp}$ disappears. On the other hand, the fluctuation of the chiral current, $\delta{j_c}\equiv j_c-2nJ\sin\left(\phi/2\right)\sim \nabla^2 \theta$, behaves as \beq \langle{\delta{j_c}(x)\delta{j_c}(0)}\rangle \sim 1/x^4, \eeq which follows from the Gaussian property of the TLL Hamiltonian, namely $\langle\nabla^2\theta(x)\nabla^2\theta(0)\rangle \sim 1/x^4$. In addition, Eq.~\eqref{eq:chiral-m} shows that the chiral current increases with $\phi$.
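The TLL parameters above and the resulting decay exponent can be evaluated directly. The sketch below uses illustrative values of $n$, $U$, and $M$ and also checks the identity $v_0 K_0=\pi n/M$, which follows immediately from the two definitions.

```python
import math

# Illustrative parameters (weakly interacting: U small, hence K_0 large).
n, U, M = 1.0, 0.1, 1.0

v0 = math.sqrt(n * U / (2 * M))            # sound velocity
K0 = math.pi * math.sqrt(2 * n / (M * U))  # Luttinger parameter

# Consistency check: the product v0*K0 = pi*n/M is interaction-independent.
assert abs(v0 * K0 - math.pi * n / M) < 1e-12

# Quasi-LRO: <beta^+(x) beta(0)> ~ x^{-1/(2 K0)}; the exponent is tiny
# at weak coupling, i.e., the decay is very slow.
exponent = 1 / (2 * K0)
print(f"v0={v0:.4f}, K0={K0:.2f}, decay exponent={exponent:.4f}")
```

At weak coupling $K_0\gg1$, so the computed exponent $1/2K_0$ is small, consistent with the near-LRO behavior discussed above.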
These properties correspond to the Meissner phase introduced in Ref.~\cite{PhysRevB.64.144515}, which was derived there under the condition $J\gg K,U$, different from the present case. \section{Band with double minima} \label{sec:double-minimum} We next examine low-energy properties of the system with the double-well band structure, where we must distinguish a commensurate wave number $Q$, giving the lowest-energy single-particle states, from an incommensurate one. In the commensurate cases, $Q$ can be represented as $Q=\pi p/q$, where $p,q$ are coprime integers. The effect of commensurability is related to the types of interactions. As pointed out in Ref.~\cite{PhysRevB.64.144515}, a $q$-body interaction produces a serious effect for $Q=\pi p/q$. Thus, if arbitrary multi-body interactions enter the low-energy effective theory, every commensurability should be taken care of. Note that this does not mean that multi-body interactions are required at the microscopic level. Namely, even if only two-body interactions are assumed in the microscopic Hamiltonian, multi-body interactions are generated as virtual multiple-scattering processes when we integrate out irrelevant high-energy degrees of freedom, e.g., in deriving a low-energy effective Hamiltonian or implementing a perturbative renormalization group analysis. However, such virtual multiple-scattering processes are suppressed in the weakly-interacting case, and the only relevant case is then $Q=\pi/2$, in which the two-body interaction yields the strong commensurability effect. Here we first consider an incommensurate $Q$ case in Sec.~\ref{sec:IC}. Next, in Sec.~\ref{sec:Q=pi/2}, we move on to the discussion of the $Q=\frac{\pi}{2}$ ($\phi=\pi$) case as one of the commensurate cases. The other commensurabilities are also briefly discussed in Sec.~\ref{sec:otherC}.
\subsection{Incommensurate $Q$ case}\label{sec:IC} In contrast with the single-minimum case, the mean ground-state densities for the double-well structure depend on the couplings of the Hamiltonian. Following the analysis for a BEC in a double-well potential~\cite{pethick2002bose}, we assume the following ansatz, first introduced in Ref.~\cite{PhysRevA.89.063617}: \beq |GS\rangle=\frac{1}{\sqrt{N!}}(e^{i\theta_+} \cos\gamma\beta^{\dagger}_{Q}+e^{i\theta_-}\sin\gamma \beta^{\dagger}_{-Q})^N|0\rangle, \label{eq:gp} \eeq where $\gamma$ and $\theta_{\pm}$ are variational parameters. By taking the expectation value of $H$ in Eq.~\eqref{eq:original-h} with the above ansatz, one obtains~\cite{PhysRevA.89.063617} \beq &&\frac{E_0(\gamma,\theta_{\pm})}{N}=E_-(Q)+\frac{Un}{4}\Big[ \left(\frac{3}{2}\sin^2\xi_{Q}-1\right) \sin^22\gamma\nonumber\\ && \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ -\sin^2\xi_{Q}+2\Big], \label{eq:mf-energy}\\ &&\langle n_{+}\rangle=\langle \beta^{\dagger}_+(x)\beta_+(x)\rangle =n\cos^2\gamma,\\ &&\langle n_{-}\rangle=\langle \beta^{\dagger}_{-}(x)\beta_{-}(x)\rangle =n\sin^2\gamma, \eeq where $\beta_{\pm}(x)=\frac{1}{\sqrt{L}}\sum_k \beta_{\pm Q+k}e^{ikx}$. We note that these mean-field values have no dependence on $\theta_{\pm}$, which implies that the problem reduces to the optimization of the single variational parameter $\gamma$. The optimal $\gamma$ is then determined by whether~\cite{PhysRevA.89.063617} \beq \frac{3}{2}\sin^2\xi_{Q}<1, \label{eq:case1} \eeq or \beq \frac{3}{2}\sin^2\xi_{Q}>1. \label{eq:case2} \eeq In the former case~\eqref{eq:case1}, the energy is minimized by $\gamma=\pi/4$~\cite{PhysRevA.89.063617}, where the populations at $k=\pm Q$ are the same: $\langle n_+\rangle=\langle n_-\rangle$. Thus, at the mean-field level, we expect that there are two independent BECs in the ground state.
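The minimization of the variational energy~\eqref{eq:mf-energy} over $\gamma$ can be verified by brute force. In the sketch below, $\sin^2\xi_Q$ is treated as a free parameter (the two values are illustrative only), since only the sign of the coefficient $\frac{3}{2}\sin^2\xi_Q-1$ matters.

```python
import numpy as np

def gamma_optimal(sin2_xi, num=20001):
    """Minimize the gamma-dependent part of Eq. (mf-energy),
    (3/2 sin^2 xi_Q - 1) * sin^2(2 gamma), over gamma in [0, pi/2]."""
    gamma = np.linspace(0.0, np.pi / 2, num)
    energy = (1.5 * sin2_xi - 1.0) * np.sin(2 * gamma) ** 2
    return gamma[np.argmin(energy)]

# Case (3/2) sin^2 xi_Q < 1: balanced condensates, gamma = pi/4.
g_balanced = gamma_optimal(sin2_xi=0.4)

# Case (3/2) sin^2 xi_Q > 1: all bosons at one minimum, gamma = 0 or pi/2.
g_biased = gamma_optimal(sin2_xi=0.9)

print(g_balanced, g_biased)
```

The grid search returns $\gamma\approx\pi/4$ in the first case and one of the two degenerate solutions $\gamma\in\{0,\pi/2\}$ in the second, reproducing the two mean-field ground states.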
We note that this is different from a BEC in a double-well potential, where the ground-state energy depends on the relative phase via a hopping term between the condensates~\cite{pethick2002bose}. By contrast, such a hopping does not exist in our system, and thus the mean-field ground state is independent of the relative phase. We will come back to this point in the analysis of the $\phi=\pi$ case, where a relative-phase dependence shows up via the umklapp process in a nontrivial manner. In the latter case~\eqref{eq:case2}, the ground state is characterized by $\gamma=0$ or $\pi/2$~\cite{PhysRevA.89.063617}, where the mean densities become $(\langle n_+\rangle,\ \langle n_-\rangle)=(n,0)$ or $(0,n)$. This is the solution such that all the bosons occupy either $k=Q$ or $k=-Q$. Thus, the mean-field theory shows that the $Z_2$ symmetry is spontaneously broken in the ground state, and a single BEC forms. The transition between these mean-field ground states occurs at $\frac{3}{2}\sin^2\xi_{Q}=1$, which can be rewritten as \beq \left(\frac{K}{2J}\right)^2= \frac{\sin^4(\phi/2)}{\frac{3}{2}-\sin^2(\phi/2)}. \label{eq:boundary-v-z2} \eeq What is important is that the above critical $K/J$ is smaller than the critical $K/J$ between the single- and double-minimum band topologies, given in Eq.~\eqref{eq:critical_K/J}. Namely, the solution~\eqref{eq:boundary-v-z2} always exists in the regime of $K/2J$ and $\phi$ in which the double-minimum band structure appears. Therefore, the transition between the mean-field ground states always occurs at some $K/2J$ for any given $\phi>0$. Let us next look into fluctuation effects based on the above mean-field analyses.
As in the case of the single-minimum band, we approximate the Hamiltonian as~\cite{tokuno} \beq &&H\approx \int dx\Big[- \sum_{j=\pm }\beta^{\dagger}_{j}(x)\frac{\nabla^2}{2M^*}\beta_{j}(x) \nonumber\\ &&+\frac{U(2+\sin^2\xi_{Q})}{8}(n_++n_-)^2 +\frac{U(2-3\sin^2\xi_Q)}{8}(n_+-n_-)^2 \Big],\nonumber\\ \label{eq:hamiltonian-dw} \eeq where $\frac{1}{M^*}=\frac{d^2E_-(\pm Q)}{dk^2}$ is the effective mass. We first incorporate fluctuation effects in the case of the balanced mean densities $\langle n_+\rangle=\langle n_-\rangle=n/2$, which is stable when Eq.~\eqref{eq:case1} is obeyed. By applying the bosonization formula~\eqref{eq:bosonized-form} to $\beta_{\pm}$, \beq \beta_{\pm}(x)\sim\Big[\frac{n}{2}-\frac{\nabla\varphi_{\pm}(x)}{\pi}\Big]^{\frac{1}{2}}\sum_{m\in\mathbb{Z}} e^{2im[\pi nx/2-\varphi_{\pm}(x)]}e^{-i\theta_{\pm}(x)},\nonumber\\ \eeq with the density and phase fluctuation fields in the vicinity of the bottoms $k=\pm Q$, $\varphi_{\pm}$ and $\theta_{\pm}$, we obtain \beq H_{\text{eff}}=\sum_{\mu=s,a}\frac{v_{\mu}}{2\pi}\int dx \Big[K_{\mu}(\nabla\theta_{\mu})^2+\frac{(\nabla\varphi_{\mu})^2}{K_{\mu}} \Big]\nonumber\\ +\frac{2g}{(2\pi\alpha)^2}\int dx \cos(\sqrt{8}\varphi_a), \label{eq:bosonized-vx} \eeq where we have introduced the symmetric (anti-symmetric) fields, $\varphi_{s(a)}=\frac{1}{\sqrt{2}}(\varphi_{+}+(-)\varphi_-)$ and $\theta_{s(a)}=\frac{1}{\sqrt{2}}(\theta_{+}+(-)\theta_-)$. For convenience, we have also introduced the cutoff parameter $\alpha=1/(\pi n)$ \cite{giamarchi2003quantum}. The velocities, $v_{s}$ and $v_{a}$, and TLL parameters, $K_{s}$ and $K_{a}$, appearing in the quadratic parts of the Hamiltonian are specified by \beq &&v_s=\sqrt{\frac{nU(2+\sin^2\xi_Q)}{4M^*}}, \label{eq:vs}\\ &&v_a=\sqrt{\frac{nU(2-3\sin^2\xi_Q)}{4M^*}}, \label{eq:va}\\ &&K_s=\sqrt{\frac{n}{M^*U(2+\sin^2\xi_Q)}}, \label{eq:Ks}\\ &&K_a=\sqrt{\frac{n}{M^*U(2-3\sin^2\xi_Q)}}, \label{eq:Ka} \eeq and the coupling of the cosine term is given by \beq g=U\sin^2\xi_Q.
\eeq In the effective Hamiltonian~\eqref{eq:bosonized-vx}, the symmetric and anti-symmetric fields are decoupled. In particular, the symmetric part is the conventional TLL Hamiltonian. On the other hand, the anti-symmetric part is not of TLL form due to the presence of the cosine term. Thus, the low-energy properties of the system are determined by the relevance of this cosine term in the sense of the renormalization group. To this end, we implement a perturbative renormalization group analysis. By treating the coupling constant $g$ as a perturbative parameter, we obtain (see Appendix) \beq \frac{d(g/v_a)}{dl}=2(1-K_a)g/v_a, \label{eq:rg-invortex} \eeq where $l$ is the scaling parameter. In general, $K_s,K_a\gg1$ for weakly-coupled bosons with a contact interaction. This means that $g$ is an irrelevant coupling, and the cosine term goes away in the low-energy limit. Therefore, this phase is found to be characterized by two independent TLLs. Let us next look into the currents. The bosonized expressions of the operators are summarized as \begin{widetext} \beq j_c(x)&\sim& nJ\Big[4\sin^2\left(\frac{\xi_Q}{2}\right)\sin \left(\frac{\phi}{2}+Q\right) -\sqrt{2}\sin^2\left(\frac{\xi_Q}{2}\right)\cos \left(\frac{\phi}{2}+Q\right)\nabla\theta_a\nonumber\\ &&+\sqrt{2}\cos^2\left(\frac{\xi_Q}{2}\right)\cos \left(\frac{\phi}{2}-Q\right)\nabla\theta_a -4\sin\left(\frac{\xi_Q}{2}\right)\cos\left(\frac{\xi_Q}{2}\right) \sin\left(\frac{\phi}{2}\right)\cos\left(Q(2x+1)-\sqrt{2}\theta_a\right) \Big],\\ j^{\perp}(x)&\sim&nK\left(\sin^2\left(\frac{\xi_Q}{2}\right) -\cos^2\left(\frac{\xi_Q}{2}\right)\right)\sin(2Qx-\sqrt{2}\theta_a).
\eeq \end{widetext} By taking the averages of these quantities, we obtain \beq \langle j_c(x)\rangle&\sim&4nJ\sin^2\left(\frac{\xi_Q}{2}\right)\sin \left(\frac{\phi}{2}+Q\right), \label{eq:jc_incommQ} \\ \langle j^{\perp}(x)\rangle&\sim&0, \label{eq:jperp_incommQ} \eeq where we used $\langle\nabla\theta_a\rangle=\langle\cos(Q(2x+1)-\sqrt{2}\theta_a)\rangle =\langle\sin(2Qx-\sqrt{2}\theta_a)\rangle=0$. The form of Eqs.~\eqref{eq:jc_incommQ} and~\eqref{eq:jperp_incommQ} is the same as what Wei and Mueller \cite{PhysRevA.89.063617} derived within the mean-field analysis for the net chiral current~\footnote{We note an important difference between the mean-field and bosonization approaches. In the mean-field approach, since $\theta_a$ (and $\theta_s$) is ordered, the local currents oscillate in space. In the bosonization approach, since the anti-symmetric sector is described by the TLL, the local currents do not show such an oscillation as long as the commensurability effect does not show up.}. As shown in Refs.~\cite{PhysRevB.64.144515,PhysRevA.89.063617}, the chiral current monotonically decreases with $\phi$, and goes to zero as $\phi\to\pi$. This phase corresponds to the (incommensurate) vortex phase first introduced in Ref.~\cite{PhysRevB.64.144515}, in which the reduction of the chiral current has been attributed to the penetration of vortices. Indeed, a signature of the vortices is found in the correlations of the current fluctuations, $\delta j_c\equiv j_c-\langle j_c\rangle$ and $\delta j^{\perp}\equiv j^{\perp}-\langle j^{\perp}\rangle$.
They are calculated as \begin{widetext} \beq \langle\delta j_c(x)\delta j_c(0)\rangle&\sim& \frac{n^2J^2}{K_a} \Big[\sin^2\left(\frac{\xi_Q}{2}\right)\cos\left(\frac{\phi}{2}+ Q\right)+\cos^2\left(\frac{\xi_Q}{2}\right)\cos\left(\frac{\phi}{2}- Q\right)\Big]^2\frac{1}{x^2}\nonumber\\ && +8n^2J^2\sin^2\left(\frac{\xi_Q}{2}\right) \cos^2\left(\frac{\xi_Q}{2}\right)\sin^2\left(\frac{\phi}{2}\right) \cos(2Qx)\frac{1}{x^{1/K_a}}, \\ \langle\delta j^{\perp}(x)\delta j^{\perp}(0)\rangle&\sim& n^2K^2\left[\sin^2\left(\frac{\xi_Q}{2}\right) -\cos^2\left(\frac{\xi_Q}{2}\right)\right]^2 \cos(2Qx)\frac{1}{x^{1/K_a}}, \eeq \end{widetext} where we used the facts that $\langle\nabla\theta(x)\nabla\theta(0)\rangle\sim\frac{1}{x^2}$, $\langle e^{Ai\theta(x)}e^{-Ai\theta(0)} \rangle \sim\frac{1}{x^{\frac{A^2}{2K_a}}}$ with a constant $A$, and that the cross terms such as $\langle\nabla\theta(x)\cos(Q-\sqrt{2}\theta_a(0))\rangle$ vanish. The correlations now contain oscillating components decaying with a power law. Recalling $K_a\gg1$, these power-law decays are extremely slow. We next consider fluctuation effects in the case of the biased mean densities $(\langle n_+\rangle,\ \langle n_-\rangle)=(n,0)$ or $(0,n)$. For the sake of simplicity, let us take the case of $(\langle n_+\rangle,\ \langle n_-\rangle)=(n,0)$~\footnote{We can also discuss the case of $(\braket{n_+},\braket{n_-})=(0,n)$ in exactly the same manner, and the same result is obtained; only the magnetization has the opposite sign to Eq.~\eqref{eq:magnetization}.}. Namely, all the bosons populate only the vicinity of $k=Q$. The degrees of freedom around $k=-Q$ are completely suppressed as long as the interaction is weak enough, and we may consider only the degrees of freedom around $k=Q$. Then the Hamiltonian simplified by the long-wavelength approximation is given by \beq H=\int dx\Big[ -\beta^{\dagger}(x)\frac{\nabla^2}{2M^*}\beta(x)+ \frac{U(2-\sin^2\xi_Q)}{4}n^2\Big].
\eeq Furthermore, by the bosonization formula~\eqref{eq:bosonized-form}, we obtain \beq H_{\text{eff}}=\frac{\bar{v}}{2\pi}\int dx \Big[\bar{K}(\nabla\theta)^2+\frac{(\nabla\varphi)^2}{\bar{K}} \Big], \label{eq:Heff-BLP} \eeq where the velocity and TLL parameter are, respectively, \beq &&\bar{v}=\sqrt{\frac{nU(2-\sin^2\xi_Q)}{2M^*}},\\ &&\bar{K}=\sqrt{\frac{2n}{M^*U(2-\sin^2\xi_Q)}}. \eeq Unlike the equal-density case, the effective theory is described by a single TLL. Let us next look at the currents. They are bosonized as \begin{widetext} \beq j_c(x)&\sim&4nJ\Big[\sin^2\left(\frac{\xi_Q}{2}\right) \sin\left(\frac{\phi}{2}+Q\right) \Big]-2nJ\Big[\sin^2\left(\frac{\xi_Q}{2}\right) \cos\left(\frac{\phi}{2}+Q\right) -\cos^2\left(\frac{\xi_Q}{2}\right) \cos\left(\frac{\phi}{2}-Q\right) \Big]\nabla\theta, \\ j^{\perp}(x)&\sim&0, \eeq where the rung current is shown to be zero at the level of the long-wavelength approximation, as in the case of the Meissner phase. Their averages are calculated as \beq \langle j_c(x)\rangle&\sim& 4nJ\sin^2\left(\frac{\xi_Q}{2}\right) \sin\left(\frac{\phi}{2}+Q\right),\\ \langle j^{\perp}(x)\rangle&\sim&0, \eeq which are identical to the expressions for the net currents obtained in Ref. \cite{PhysRevA.89.063617} and look the same as those of the $\braket{n_+}=\braket{n_-}$ case, i.e., Eqs.~\eqref{eq:jc_incommQ} and~\eqref{eq:jperp_incommQ}. However, this does not mean that all the low-energy properties coincide with those of the equal-density case. Indeed, we find that a difference occurs in the current fluctuations: \beq &&\langle\delta j_c(x)\delta j_c(0)\rangle\sim \frac{2n^2J^2}{\bar{K}}\Big[\sin^2\left(\frac{\xi_Q}{2}\right) \cos\left(\frac{\phi}{2}+Q\right)-\cos^2\left(\frac{\xi_Q}{2}\right) \cos\left(\frac{\phi}{2}-Q\right) \Big]^2\frac{1}{x^2} , \\ &&\langle\delta j^{\perp}(x)\delta j^{\perp}(0)\rangle\sim0.
\eeq \end{widetext} Thus, in contrast with the vortex phase in the $\langle n_+\rangle=\langle n_-\rangle$ case, the current fluctuations in this phase do not have an oscillating component. A peculiarity of this phase is seen in the densities of the two legs, $n_1$ and $n_2$. To see this, we define a magnetization, i.e., the population imbalance between the legs, as $m\equiv\langle n_{1} \rangle-\langle n_{2} \rangle$. According to the mean-field theory~\eqref{eq:gp}, the magnetization can be calculated as \cite{PhysRevA.89.063617} \beq m=- n\cos\xi_{Q}. \label{eq:magnetization} \eeq We note that the magnetization value $m$ can be proved to be robust even when quantum fluctuations up to the bosonization level are incorporated. This is due to the fact that the fluctuation of the magnetization is given by $\delta{m}\sim\langle{\nabla\varphi}\rangle=0$. Thus, the mean-field result is applicable. As shown in Fig.~\ref{fig3}, $m$ takes a nonzero value as long as the stability condition of the phase is met, which means that a spontaneous imbalance of the populations between the legs occurs. This phase corresponds to the biased ladder phase introduced in Ref.~\cite{PhysRevA.89.063617}, which was first demonstrated within the GP mean-field theory. What is addressed here is that even in the presence of quantum fluctuations, the $Z_2$ symmetry breaking between the populations of the twofold-degenerate lowest-energy states of the double-well band is maintained, and the biased ladder phase is thus stable at the full quantum level. This is reasonable once one recalls that even though a continuous $U(1)$ symmetry cannot be broken in one-dimensional quantum systems, spontaneous breaking of a discrete symmetry is possible. \begin{figure}[t] \begin{center} \includegraphics[width=1\linewidth]{fig3.pdf} \caption{The absolute value of the density difference at $\phi=\frac{\pi}{2}$ in the biased ladder phase.
At the boundary between the biased ladder ($m\ne 0$) and Meissner ($m=0$) phases, the density difference disappears. On the other hand, at the boundary between the incommensurate vortex and biased ladder phases, an infinite degeneracy in the density difference emerges due to the emergent symmetry, as shown in Sec. \ref{sec:ferroXXZ}. } \label{fig3} \end{center} \end{figure} \subsubsection{Analogy with ferromagnetic XXZ model} \label{sec:ferroXXZ} Let us now examine the nature of the phases and the transitions between them from the viewpoint of symmetry. To this end, we focus on the low-energy Hamiltonian~\eqref{eq:hamiltonian-dw}, which is invariant under the continuous transformations $\beta_{\pm}\to e^{i\theta_{\pm}}\beta_{\pm}$ and the discrete transformation $\beta_{\pm}\to\beta_{\mp}$. This implies that the symmetry of the low-energy Hamiltonian~\eqref{eq:hamiltonian-dw} is $U(1)_{+}\times U(1)_{-}\times Z_2$, where the subscript $\pm$ of $U(1)$ represents the corresponding symmetry in $\beta_{\pm}$. We also note that the above symmetry can be represented as $U(1)_V\times U(1)_A \times Z_2$, where $U(1)_V$ and $U(1)_A$ represent the vector $U(1)$ symmetry, $\beta_{\pm}\to e^{i\theta}\beta_{\pm}$, and the axial $U(1)$ symmetry, $\beta_{\pm}\to e^{\pm i\theta}\beta_{\pm}$, respectively~\footnote{The terms vector and axial $U(1)$ symmetry are employed in analogy to the chiral symmetries used in elementary particle physics.}. In the vortex phase, the low-energy properties are captured by the two independent TLLs, in which the two independent $U(1)$ symmetries are preserved; namely, no symmetry breaking occurs in this phase. In the biased ladder phase, on the other hand, the low-energy properties are described by the single TLL reflecting the acoustic phonon excitation around one of the two minimum-energy states in the double-well band.
The $Z_2$ symmetry turns out to be broken, since the minima of the band are degenerate and one of the minima is spontaneously chosen. It is important to clarify the transition point between the vortex and biased ladder phases, which requires special attention. First, we note that at this transition point, the symmetry of the low-energy Hamiltonian is enlarged. To see this, it is convenient to introduce a two-component spinor, \beq \vec{\beta}= \begin{pmatrix} \beta_+\\ \beta_- \end{pmatrix}. \eeq In this spinor representation, we can define $U(1)_V\times SU(2)$ transformations as $\vec{\beta}\to e^{i\alpha_1}e^{i\alpha_2 \sigma_z}e^{i\alpha_3\sigma_y} e^{i\alpha_4\sigma_z}\vec{\beta}$, where $\sigma_i$ $(i=x,y,z)$ is the Pauli matrix, and $\alpha_{i}$ ($i=1,\cdots,4$) represents an angle of $U(1)_V\times SU(2)$. Then, the low-energy Hamiltonian~\eqref{eq:hamiltonian-dw} is shown to be invariant under the above transformations at the transition point, where the term proportional to $(n_+-n_-)^2$ disappears, i.e., the Hamiltonian has $U(1)_V\times SU(2)$ symmetry. This implies that we can define the following conserved charges: \beq &&S_+=\int dx \beta^{\dagger}_+\beta_-, \label{eq:S+}\\ &&S_-=\int dx \beta^{\dagger}_-\beta_+, \label{eq:S-}\\ &&S_z=\int dx [n_+-n_-], \label{eq:Sz}\\ &&N_V=\int dx [n_++n_-]. \label{eq:Nv} \eeq Here, Eqs.~\eqref{eq:S+}-\eqref{eq:Sz} constitute the $SU(2)$ charges and Eq.~\eqref{eq:Nv} originates from the $U(1)_V$ symmetry. In addition, the Hamiltonian~\eqref{eq:hamiltonian-dw} at the transition point is identical to the two-component bosonic Yang-Gaudin model, for which the Bethe ansatz solution is available~\cite{li2003exact,PhysRevLett.95.150402,batchelor2006collective}. The following facts are known about the two-component bosonic Yang-Gaudin model: As usual, there is a TLL associated with the $U(1)_V$ symmetry. In addition, the $SU(2)$ symmetry corresponding to Eqs.
\eqref{eq:S+}-\eqref{eq:Sz} is spontaneously broken in the ground state, which essentially corresponds to the physics of the Heisenberg ferromagnet~\cite{li2003exact,PhysRevLett.95.150402,batchelor2006collective}. Due to the spontaneous breaking of the $SU(2)$ symmetry, we expect that there is an NG mode whose dispersion is quadratic in $k$, and that there exists an infinite number of degenerate ground states, one of which is selected spontaneously. This is a significant difference from the biased ladder phase, where only a double degeneracy exists. The scenario discussed above suggests a similarity to the ferromagnetic $XXZ$ model. When we bosonize the $XXZ$ model, the low-energy effective theory is described by the sine-Gordon model. When the $XY$ in-plane anisotropy is strong, the theory is renormalized to the TLL, which corresponds to the so-called $XY$ phase, and upon approaching the isotropic point the velocity and Luttinger parameter of the $XY$ phase, respectively, go to zero and infinity, which implies a quadratic dispersion of an excitation~\cite{giamarchi2003quantum}. Furthermore, beyond the isotropic point, i.e., for Ising anisotropy, the $XXZ$ model enters the so-called ferromagnetic Ising phase, where the excitations are massive and the $Z_2$ spin-inversion symmetry is spontaneously broken. In our case, the anti-symmetric sector in the vortex phase~\eqref{eq:bosonized-vx} exhibits the same effective theory as the $XXZ$ model, and the velocity~\eqref{eq:va} and Luttinger parameter~\eqref{eq:Ka} show the same behavior as those in the isotropic limit of the ferromagnetic $XXZ$ model. Going beyond the Heisenberg point, we encounter the biased ladder phase described by a single TLL, which corresponds to the ferromagnetic Ising phase of the ferromagnetic $XXZ$ model.
It is interpreted that in the biased ladder phase the anti-symmetric sector moves away into the high-energy regime, which would correspond to the massive excitation in the ferromagnetic Ising phase. Namely, such a massive excitation should describe the change of the populations at the band bottoms and is regarded as a high-energy one in our approach, which is excluded in Eq.~\eqref{eq:Heff-BLP}. In Sec.~\ref{sec:bogoliubov}, it is shown that such a massive excitation may be incorporated within the Bogoliubov theory. \subsubsection{Bogoliubov spectrum in biased ladder phase} \label{sec:bogoliubov} To gain some insight into the biased ladder phase, let us here review the Bogoliubov theory given in Ref.~\cite{PhysRevA.89.063617}. Since the Bogoliubov theory is based on an expansion around the GP solution, it must underestimate fluctuation effects. However, this does not mean that all the results obtained by means of the Bogoliubov theory are incorrect, as pointed out in Sec. \ref{sec:single-minimum}. At least, the excitation spectrum in the low-energy limit of weakly interacting one-dimensional bosons is expected to be reproduced correctly. Indeed, it is known that the linear excitation spectrum of the Lieb-Liniger model \cite{PhysRev.130.1605} and the quadratic excitation spectrum of the two-component Yang-Gaudin model \cite{PhysRevLett.95.150402} can be captured by the Bogoliubov theory.
As usual, by applying $\beta_k=\sqrt{N_0}\delta_{k,Q}+\bar{\beta}_k$, where $\bar{\beta}_k$ is the fluctuation field and $N_0$ is the number of particles in the condensate, to Eq.~\eqref{eq:original-h}, we obtain the Bogoliubov Hamiltonian, correct up to second order in the fluctuation field, \beq H_{\mathrm{Bog}}&\sim&\sum_{k>0}\vec{\bar{\beta}}^{\dagger} M\vec{\bar{\beta}}\nonumber\\ &=& (\bar{\beta}^{\dagger}_{Q+k},\bar{\beta}_{Q-k}) \begin{pmatrix} \zeta(k) & \eta(k) \\ \eta(k) & \zeta(-k) \end{pmatrix} \begin{pmatrix} \bar{\beta}_{Q+k}\\ \bar{\beta}^{\dagger}_{Q-k} \end{pmatrix},\nonumber\\ \label{eq:Bog-h} \eeq where the matrix elements are defined as \beq &&\zeta(k)=E_{-}(Q+k)-E_{-}(Q)+2Un \Big[\sin^2\frac{\xi_{Q}}{2}\sin^2\frac{\xi_{Q+k}}{2} \nonumber\\ && \ \ \ +\cos^2\frac{\xi_{Q}}{2}\cos^2\frac{\xi_{Q+k}}{2} \Big] -Un\Big[\sin^4\frac{\xi_{Q}}{2} +\cos^4\frac{\xi_{Q}}{2} \Big],\\ &&\eta(k)=Un\Big[\sin^2\frac{\xi_{Q}}{2}\sin\frac{\xi_{Q+k}}{2} \sin\frac{\xi_{Q-k}}{2}\nonumber\\ && \ \ \ \ \ \ \ +\cos^2\frac{\xi_{Q}}{2}\cos\frac{\xi_{Q+k}}{2} \cos\frac{\xi_{Q-k}}{2} \Big]. \eeq Here we have made the approximation $N\approx N_0$. To diagonalize the above Hamiltonian~\eqref{eq:Bog-h}, a simple scheme is to consider the following eigenvalue problem~\cite{blaizot1986quantum,kawaguchi2012spinor}: \beq \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} \begin{pmatrix} \zeta(k) & \eta(k) \\ \eta(k) & \zeta(-k) \end{pmatrix} \begin{pmatrix} u\\ v \end{pmatrix} =\epsilon \begin{pmatrix} u\\ v \end{pmatrix}. \eeq Then, the Bogoliubov spectra $\pm\epsilon$ are obtained as eigenvalues of the above matrix equation. The Bogoliubov spectrum is shown to be~\cite{PhysRevA.89.063617} \beq \epsilon(k)=\pm\left(\frac{\zeta(k)-\zeta(-k)}{2}\right) +\sqrt{\frac{(\zeta(k)+\zeta(-k))^2}{4}-\eta^2(k)}.\nonumber\\ \label{eq:Bogoliubov} \eeq A behavior of the Bogoliubov spectrum is shown in Fig.~\ref{fig4}.
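The closed form~\eqref{eq:Bogoliubov} can be cross-checked against a direct diagonalization of the non-Hermitian matrix $\sigma_z M$. The sketch below uses arbitrary illustrative values of $\zeta(\pm k)$ and $\eta(k)$ rather than the physical matrix elements, and only verifies the linear algebra.

```python
import numpy as np

# Illustrative (hypothetical) matrix elements at a fixed k; the physical
# zeta and eta depend on the band structure and are not reproduced here.
zeta_p, zeta_m, eta = 3.0, 2.0, 1.0   # zeta(k), zeta(-k), eta(k)

# Bogoliubov-de Gennes matrix sigma_z * M.
A = np.array([[zeta_p, eta],
              [-eta, -zeta_m]])
eigs = np.sort(np.linalg.eigvals(A).real)

# Closed form, Eq. (Bogoliubov): the two signs give the particle branch
# eps_plus and the hole branch eps_minus; the eigenvalues of sigma_z*M
# are then {eps_plus, -eps_minus}.
root = np.sqrt((zeta_p + zeta_m) ** 2 / 4 - eta**2)
eps_plus = (zeta_p - zeta_m) / 2 + root
eps_minus = -(zeta_p - zeta_m) / 2 + root

print(eigs, eps_plus, eps_minus)
```

The two eigenvalues of $\sigma_z M$ reproduce $\{\epsilon_+, -\epsilon_-\}$, confirming that the $\pm$ in Eq.~\eqref{eq:Bogoliubov} labels the two excitation branches.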
In the limit $k\to0$, we have a linear spectrum, which can be regarded as that of the TLL. In addition, there is a local minimum around $k=-Q$. This roton-like behavior would be interpreted as the massive excitation originating from the $Z_2$ symmetry breaking. Indeed, a similar situation occurs in a BEC in a shaken optical lattice, recently realized in Ref.~\cite{PhysRevLett.114.055301}, in which a $Z_2$ symmetry is broken. The energy gap around $k\approx -Q$, $\epsilon(k\approx-Q)$, is found to close as one approaches the point where the symmetry is enlarged to $SU(2)$. This scenario is reminiscent of the ferromagnetic transition in the $XXZ$ model: when approaching the Heisenberg point from the Ising-anisotropic side, the Ising gap collapses and the dispersion becomes quadratic in $k$. \begin{figure}[h] \begin{center} \includegraphics[width=1\linewidth]{fig4.pdf} \caption{Schematic behavior of the Bogoliubov spectrum predicted by Eq.~\eqref{eq:Bogoliubov}. The excitation is linear at small $k$, which should be interpreted as the TLL spectrum. On the other hand, a local minimum exists in the vicinity of $k=-Q$, originating from the $Z_2$ symmetry breaking.} \label{fig4} \end{center} \end{figure} \subsection{Commensurate Q case ($Q=\pi/2$)}\label{sec:Q=pi/2} Let us next consider the commensurability effect, i.e., the $Q=\pi/2$ ($\phi=\pi$) case, which requires considerable attention. In this case, the double-well band structure is always maintained and its minima are located at $k=\pm\pi/2$ regardless of the ratio $K/J$. A peculiarity is the emergence of the umklapp scattering process between the two energy minima of the band, which has been overlooked so far. To consider this, we return to the mean-field ansatz~\eqref{eq:gp}. The form of this ansatz includes the couplings of the far-separated states $k=\pm Q$, and thus automatically allows us to take into account the umklapp process involving the large momentum transfer.
Based on the mean-field ansatz, the ground-state energy is straightforwardly calculated as \beq &&\frac{E_0(\gamma,\theta_{\pm})}{N}=E_-(Q)+ \frac{Un}{4}\Big[-\sin^2\xi_{Q}+2\nonumber\\ &&+\left\{\left(\frac{3}{2}+\frac{\cos(2\theta_+-2\theta_-)}{2}\right) \sin^2\xi_{Q}-1\right\} \sin^22\gamma \Big]. \label{eq:gs-energy-with-ukp} \eeq Note that it differs from Eq.~\eqref{eq:mf-energy} due to the presence of the umklapp scattering. As is clear from the above expression, the energy is minimized when the relative phase satisfies $\theta_+-\theta_-=\pm\frac{\pi}{2}$. We notice that this is different from the case of a BEC in a double-well potential, where the relative phase is zero in the ground state~\cite{pethick2002bose}. This difference originates from the fact that the relative-phase dependence in a BEC in a double-well potential is caused by a hopping (kinetic) term such as $-J\cos(\theta_{+}-\theta_-)$, while that in our model originates from the interaction term. By substituting the relative phase $\theta_{+}-\theta_{-}=\pi/2$ or $-\pi/2$ into Eq.~\eqref{eq:gs-energy-with-ukp}, the ground-state energy becomes \beq \frac{E_0(\gamma)}{N}=E_-(Q)+ \frac{Un}{4}\Big[ \left(\sin^2\xi_{Q}-1\right) \sin^22\gamma\nonumber\\ -\sin^2\xi_{Q}+2\Big]. \eeq Since $\sin^2\xi_{Q}\le1$, the ground state is uniquely determined by $\gamma=\pi/4$, independently of $K/J$. This ground state leads to the balanced densities $\langle n_+\rangle=\langle n_-\rangle$. Thus, the biased ladder phase is washed out in the presence of the umklapp process. Let us next consider quantum fluctuations around the mean-field solution. In this case, we need to retain the following process in the effective Hamiltonian~\eqref{eq:hamiltonian-dw}: \beq H_{\mathrm{umklapp}}=U\sin^2\xi_Q\int dx [ \beta^{\dagger}_+\beta^{\dagger}_+\beta_-\beta_- +h.c.].
\label{eq:umklapp} \eeq Note that due to this term~\eqref{eq:umklapp}, the symmetry of the Hamiltonian is lowered from $U(1)\times U(1)\times Z_2$ to $U(1)_V\times Z_2$. Thus, the axial $U(1)$ symmetry ($\beta_{\pm}\to e^{\pm i\theta}\beta_{\pm}$) disappears from the low-energy Hamiltonian, and the remaining continuous symmetry turns out to be only the vector $U(1)$ symmetry ($\beta_{\pm}\to e^{i\theta}\beta_{\pm}$). On the other hand, the $Z_2$ symmetry ($\beta_{\pm}\to \beta_{\mp}$) remains in the presence of the umklapp term~\eqref{eq:umklapp}. Let us next perform the bosonization as follows: \beq &&H=\sum_{\mu=s,a}\frac{v_{\mu}}{2\pi}\int dx \Big[K_{\mu}(\nabla\theta_{\mu})^2+\frac{(\nabla\varphi_{\mu})^2}{K_{\mu}} \Big]\nonumber\\ &&-\frac{g_1}{(2\pi\alpha)^2}\int dx \cos(\sqrt{8}\varphi_a) - \frac{g_2}{(2\pi\alpha)^2}\int dx \cos(\sqrt{8}\theta_a),\nonumber\\ \label{eq:Heff-Q=pi/2} \eeq where $g_1=\sin^2\xi_Q/2$, $g_2=\sin^2\xi_Q/4$, and \beq &&v_s=\sqrt{\frac{nU}{M^*}},\\ &&v_a=\sqrt{\frac{nU(1-\sin^2\xi_Q)}{M^*}},\\ &&K_s=\sqrt{\frac{n}{4M^*U}},\\ &&K_a=\sqrt{\frac{n}{4M^*U(1-\sin^2\xi_Q)}}. \eeq An essential difference from the incommensurate $\phi$ case is the presence of $\cos\sqrt{8}\theta_a$, which comes from $H_{\mathrm{umklapp}}$. We then move on to the renormalization group analysis to see the low-energy properties of the system. The parameters in Eq.~\eqref{eq:Heff-Q=pi/2} are found to obey the following renormalization group equations (see Appendix): \beq \frac{d(g_1/v_a)}{dl}=2(1-K_a)g_1/v_a,\label{eq:RG1}\\ \frac{d(g_2/v_a)}{dl}=2\left(1-\frac{1}{K_a}\right)g_2/v_a \label{eq:rg-commensurate}. \eeq Since $K_a\gg1$ in the weakly interacting case assumed here, it follows from Eqs.~\eqref{eq:RG1} and~\eqref{eq:rg-commensurate} that $g_1/v_a$ and $g_2/v_a$ rapidly flow to zero and to strong coupling, respectively, as $l$ increases.
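The opposite fates of $g_1$ and $g_2$ under Eqs.~\eqref{eq:RG1} and~\eqref{eq:rg-commensurate} can be illustrated by integrating the flows at fixed $K_a$; this is a sketch that neglects the feedback of the couplings on $K_a$ and $v_a$, and the parameter values are illustrative. It also checks the identity $v_\mu K_\mu=n/(2M^*)$ satisfied by both sectors.

```python
import math

# Illustrative parameters; sin^2 xi_Q < 1 so that v_a, K_a are real.
n, U, Mstar, sin2_xi = 1.0, 0.01, 1.0, 0.5

v_a = math.sqrt(n * U * (1 - sin2_xi) / Mstar)
K_a = math.sqrt(n / (4 * Mstar * U * (1 - sin2_xi)))
v_s = math.sqrt(n * U / Mstar)
K_s = math.sqrt(n / (4 * Mstar * U))

# Both sectors obey v * K = n / (2 M*).
assert abs(v_a * K_a - n / (2 * Mstar)) < 1e-12
assert abs(v_s * K_s - n / (2 * Mstar)) < 1e-12

# Euler integration of the flows at fixed K_a:
# d(g1/v_a)/dl = 2(1 - K_a) g1/v_a,  d(g2/v_a)/dl = 2(1 - 1/K_a) g2/v_a.
g1, g2, dl = 0.1, 0.1, 1e-4
for _ in range(int(3 / dl)):          # flow up to l = 3
    g1 += dl * 2 * (1 - K_a) * g1
    g2 += dl * 2 * (1 - 1 / K_a) * g2

print(f"K_a={K_a:.2f}, g1(l=3)={g1:.3e}, g2(l=3)={g2:.3e}")
```

For $K_a\approx7$ the coupling $g_1$ collapses by many orders of magnitude while $g_2$ grows by two orders of magnitude over a modest flow range, visualizing the irrelevance of $\cos\sqrt{8}\varphi_a$ and the relevance of $\cos\sqrt{8}\theta_a$.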
Namely, $\cos(\sqrt{8}\theta_a)$ in Eq.~\eqref{eq:Heff-Q=pi/2} is retained as a highly relevant term in the effective Hamiltonian in the low-energy limit, while $\cos(\sqrt{8}\varphi_a)$ is irrelevant and flows away, as in the case of $Q\ne\pi/2$. The renormalized effective theory is \beq H=\sum_{\mu=s,a}\frac{v_{\mu}}{2\pi}\int dx \Big[K_{\mu}(\nabla\theta_{\mu})^2+\frac{(\nabla\varphi_{\mu})^2}{K_{\mu}} \Big]\nonumber\\ -\frac{g_2}{(2\pi\alpha)^2}\int dx \cos(\sqrt{8}\theta_a). \label{eq:Heff-Q=pi/2--2} \eeq Thus, the anti-symmetric sector becomes gapful due to the fixed relative phase $\langle{\cos(\sqrt{8}\theta_a)}\rangle=1$, which is consistent with the mean-field scenario with umklapp scattering deduced from Eq.~\eqref{eq:gs-energy-with-ukp}. Let us also look at the currents in this phase. By means of the bosonization formula, the averaged currents are calculated as \cite{tokuno} \beq \langle j_c(x)\rangle&\sim& 2nJ\sin{\xi_Q}\, (-1)^x,\\ \langle j^{\perp}(x)\rangle&\sim& nK\cos{\xi_{Q}}\,(-1)^x. \eeq Thus, the current pattern shows up due to the umklapp effects. In a similar manner, we can evaluate the current correlations, which turn out to be zero (or to decay exponentially in $x$). This phase is called the (commensurate) vortex or chiral superfluid phase. The possible phase diagram in the weak coupling regime $J,K\gg U$ is summarized in Fig.~\ref{fig5}. \begin{figure}[t] \begin{center} \includegraphics[width=1\linewidth]{fig5.pdf} \caption{Schematic phase diagram (a) for $Q\ne\pi/2$ and (b) for $Q=\pi/2$. An emergent $SU(2)$ symmetry shows up at the boundary between incommensurate vortex and biased ladder phases, which is marked with the red circle in (a).} \label{fig5} \end{center} \end{figure} \subsubsection{Transition between commensurate and incommensurate fluxes} Now we discuss properties in the vicinity of the commensurate flux $\phi=\pi$.
To this end, we first estimate the gap coming from the coupling $\cos(\sqrt{8}\theta_a)$ in Eq.~\eqref{eq:Heff-Q=pi/2--2} based on the renormalization group equation~\cite{giamarchi2003quantum}. As mentioned above, this coupling $g_2$ is highly relevant for $K_a\gg1$ and is therefore expected to flow to the strong coupling regime ($g_2 \rightarrow\infty$). Thus, we introduce a typical length scale $l^*$ estimated from Eq.~\eqref{eq:rg-commensurate} as \beq e^{l^*}\sim\left(\frac{v_a}{g_2}\right)^{\frac{1}{2-\frac{2}{K_a}}}, \eeq and stop the renormalization group flow at the length scale $l^{*}$. In the strong-coupling regime, we may approximate the cosine term as $g_2\cos(\sqrt{8}\theta_a)\approx g_2 [1 - 4\theta_a^2(x)]$. Applying this expansion to the renormalized effective theory~\eqref{eq:Heff-Q=pi/2--2}, the Hamiltonian is easily diagonalized, and the gap $\Delta(l^*)$ estimated for the renormalized coupling constant $g_2(l^{*})$ at the termination of the renormalization group flow is found to be $\Delta(l^*)\sim\sqrt{\frac{g_2(l^*)v_a}{K_a}}\sim\sqrt{\frac{v_a}{K_a}}$. Taking into account that the gap is renormalized as $\Delta(l)=e^l\Delta$, we obtain \beq \Delta\sim g^{\frac{1}{2-\frac{2}{K_a}}}_2\sqrt{\frac{v_a}{K_a}}. \eeq Based on this estimate, we next consider the situation in which $Q$ deviates slightly from the commensurate point $Q=\pi/2$, and write $Q=\frac{\pi}{2}+\frac{\delta }{4}$, where $\delta$ is assumed to be small. Then, the umklapp term~\eqref{eq:umklapp} is bosonized as \beq H_{\mathrm{umklapp}} \sim - \frac{g_2}{(2\pi\alpha)^2}\int dx \cos(\sqrt{8}\theta_a+\delta x), \eeq which is an oscillating term as a function of $x$. We may safely make the replacement $\delta \rightarrow 0$ if $\delta$ is irrelevant. However, if $\delta$ is relevant, the umklapp term oscillates strongly and averages out of the renormalized Hamiltonian.
In general, the transition between irrelevant and relevant $\delta$ is known to occur around $\Delta\sim \delta$, and is called a commensurate-incommensurate transition~\cite{giamarchi2003quantum}. In our case, the transition between the chiral superfluid and vortex phases corresponds to such a commensurate-incommensurate transition~\cite{giamarchi2003quantum}, because once $\delta$ becomes relevant, the effective theory reduces to that of two independent TLLs, which is identical to that of the vortex phase. On the other hand, the transition between the commensurate vortex and biased ladder phases may not be captured by this commensurate-incommensurate scenario, because the effective theory in the biased ladder phase is not that of two independent TLLs. It is also interesting to consider the transition between the commensurate vortex and Meissner phases. However, it is difficult to describe such a transition by means of our approach. Thus, it would be worthwhile to examine the nature of these transitions in numerical simulations. \subsection{Other commensurability effects}\label{sec:otherC} So far, we have discussed only the $\phi=\pi$ case as a commensurate case. The key point of the $\phi=\pi$ case is that the biased ladder phase is suppressed by the presence of the umklapp scattering. Here we consider other commensurate cases in order to clarify the role of general commensurability, and we will see that the $\phi=\pi$ commensurability is special. As before, let us first consider the mean-field approximation for the Hamiltonian~\eqref{eq:original-h}. It then turns out that, as far as such a Hamiltonian is concerned, we always obtain the energy expression~\eqref{eq:mf-energy} regardless of the commensurability of $Q$. Thus, the mean-field phase diagram is identical to that of the incommensurate $Q$ case. Let us next consider the bosonization to see quantum fluctuation effects.
For the incommensurate vortex phase at the mean-field level, higher-order perturbation theory is expected to generate cosine terms in $\theta_a$ at a commensurate $Q$, as shown in Ref.~\cite{PhysRevB.64.144515}. Since many such cosine terms are relevant for $K_a\gg1$, we find that the incommensurate vortex phase is replaced by the commensurate vortex phase through quantum fluctuation effects if the coupling of the cosine term is larger than the temperature~\cite{giamarchi2003quantum}. On the other hand, for the biased ladder phase at the mean-field level, no cosine term fixing $\theta_a$ appears in the bosonization, since the mean density in one of the wells is equal to zero. Namely, the biased ladder phase at such a commensurate $Q$ is robust. \section{Summary and Perspective} \label{sec:summary} We have examined the two-leg Bose-Hubbard ladder model subject to a magnetic field flux. In particular, we have revealed the structure of the phase diagram in the weak-coupling regime by using a couple of effective-theory methods. We stress that the so-called biased-ladder phase, first predicted by the GP mean-field approach, is found to be robust against quantum fluctuations. It has also been shown that the transition between the biased-ladder and vortex phases is similar to that of the ferromagnetic $XXZ$ model, with an emergent $SU(2)$ symmetry appearing at the transition. In the case of the ladder system subject to the magnetic flux, commensurability acts on the phase degrees of freedom, which produces a kind of umklapp process. By incorporating such an umklapp process at the mean-field level, we have shown that the biased-ladder state tends to be destabilized by the umklapp process, and turns out to be forbidden for the case of $\phi=\pi$.
\subsection{Transition between Meissner and biased ladder phases} As seen in Sec.~\ref{sec:formulation}, the dispersion becomes quartic at the critical point between the single- and double-minimum band structures. In the absence of an interaction, one can naively expect that all the bosons condense at the lowest-energy state and simply form a BEC, just as in the quadratic-dispersion case. The question is what happens in the presence of an interaction. Here we briefly discuss a possible scenario. We first point out that a similar situation can also be considered for one-dimensional two-component bosons with spin-orbit couplings, where the bare single-particle dispersion becomes quartic at a certain value of the spin-orbit coupling and of the biased chemical potential between the two species. By employing a hydrodynamic approach and a Gaussian approximation, the low-energy effective theory is shown to be a non-TLL~\cite{PhysRevA.90.011602}. The excitation is then still gapless, but it is no longer an acoustic phonon; rather, it is a quadratic-dispersion mode, like that of the Heisenberg ferromagnet and of the two-component bosonic Yang-Gaudin model, where spontaneous symmetry breaking and an NG mode show up. Interestingly, however, the off-diagonal density matrix is shown to decay exponentially as \beq \langle b^{\dagger}_{x,p}b_{0,p'} \rangle\sim ne^{-|x|/\xi_c}, \eeq where $\xi_c$ is the correlation length given by $\xi_c=\sqrt{2\rho_0/(mg\lambda^2)}$ with mean density $\rho_0$, atomic mass $m$, density-density interaction $g$, and spin-orbit coupling $\lambda$. This means that even one-dimensional superfluidity is destroyed. From the above example, we can expect the same physics in our model. Namely, the system might form such a non-TLL when it passes from the Meissner to the biased ladder state.
Even then, however, the Meissner current is predicted to remain protected, because its presence is guaranteed by the two-band structure arising from the ladder geometry and the flux~\eqref{eq:band}~\cite{tokuno}. In addition to the physical properties, this problem of interacting bosons with a quartic bare dispersion has another interesting aspect. In general, the existence of gapless modes is expected to support LRO or quasi-LRO, while our model is an exception to this statement. Thus, a more profound understanding of the role of gapless modes in one dimension remains an open question. \subsection{Stronger $U$ effect} In this paper, we have analysed the system under the condition $K,J\gg U$. A natural question is then what happens for a stronger interaction $U$. In many one-dimensional systems, the weak- and strong-coupling analyses are continuously connected to each other~\cite{giamarchi2003quantum}, but we expect that the strong-coupling regime involves different physics in our model. This is because, while at weak coupling the low-energy properties are well captured by assuming quasi-condensates at the well-separated lowest-energy single-particle states of the double-well band structure, such a picture is no longer applicable at strong coupling due to a hard-core feature analogous to \textit{Fermi statistics}. Namely, the double-well band feature is no longer important for the low-energy physics if we naively assume fermion-like occupation of the particles in the band picture in the strong-coupling limit, and properties different from those at weak coupling should then be found. Indeed, a recent numerical analysis at strong coupling~\cite{PhysRevB.91.140406} does not find the biased-ladder phase, while the presence of the Meissner and vortex phases is confirmed. Let us further consider the physics at intermediate interaction strength.
It is then convenient to take the generalized mean-field ansatz~\cite{PhysRevA.89.063617}, \beq |GS'\rangle=\frac{1}{\sqrt{N!}}\left( e^{i\theta_+}\cos\gamma\,\beta_k^{\dagger}+e^{i\theta_-}\sin\gamma\, \beta_{-k}^{\dagger}\right)^N|0\rangle, \eeq where the wave vector $k$, pointing at the bottoms of the band, is now also treated as a variational parameter. This generalization means that the modification of the band structure by the interaction is taken into account. As shown in Ref.~\cite{PhysRevA.89.063617}, the variational approach shows that the optimized value of $k$ decreases with $U$ and approaches zero at a certain $U_c$. It is not clear whether this mean-field ansatz correctly captures the physics in the regime $U\sim U_c$, but from this discussion we can at least naively infer that the interaction works so as to collapse the double-well band structure. Combining the mean-field and numerical results, the following scenario can be deduced: the biased-ladder state of the weak-interaction regime becomes unstable as $U$ increases; at $U=U_c$ it eventually transits to the Meissner state, and this Meissner state continues to that of the strong-coupling regime found in Ref.~\cite{PhysRevB.91.140406}. To test this scenario, or to precisely estimate the critical $U_c$, an unbiased numerical simulation would be necessary. \textit{Note added:} Recently, we noticed a paper \cite{2015arXiv150406564G}, which found the biased ladder phase in the regime $J\simeq K\simeq U$ by means of the density matrix renormalization group. \begin{acknowledgements} S.U. is supported by the Swiss National Science Foundation under Division II. \end{acknowledgements}
\section{Model description} \label{sec:model} The graphical model of the FSLDS is depicted in Figure \ref{fig:DFSLDS} (top). It operates on three different sets of variables: the observed variables, $\mathbf{y}_t \in \mathbb{R}^{d_{y}}$, represent the patient's vital signs obtained from the monitoring devices at time $t$, and act as the input to our model. The continuous latent variables, $\mathbf{x}_t \in \mathbb{R}^{d_{x}}$, track the evolution of the dynamics of a patient's underlying physiology. The discrete variable, $s_t$, represents the switch setting or regime which the patient is currently in (e.g.\ stable, a blood sample being taken, etc.). The switch variable can be factorised according to the cross-product of $M$ factors, so that $s_t = f_{t}^{1} \otimes f_{t}^{2} \otimes ... \otimes f_{t}^{M}$. Each factor variable, $f_{t}^{m}$, is usually a binary variable indicating the presence or absence of a factor, but in general it can take on $L^{(m)}$ different values, and $K = \prod_{m=1}^M L^{(m)}$ is the total number of possible configurations of the switch variable, $s_t$. Also, $s_t$ depends explicitly on the previous time step, so that $p(s_t|s_{t-1}) = \prod_{m=1}^{M} p(f_{t}^{m}|f_{t-1}^{m})$. Conditioned on a particular regime, the FSLDS is equivalent to an LDS. The FSLDS can then be seen as a collection of LDSs, where each LDS models the dynamics of a patient's underlying physiology under a particular regime, and can also be used to generate a patient's observed vital signs. An LDS provides a generative framework for modelling our belief over the state space, given observations. We can alternatively adopt a discriminative view. We start by modelling $p(s_t|\mathbf{y}_{t-l:t+r})$ with a discriminative classifier, where (features of) observations from the previous $l$ and future $r$ time steps affect the belief of the model about $s_t$.
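As an illustration, the $K=\prod_{m=1}^M L^{(m)}$ configurations of the factorised switch variable can be enumerated as a cross-product of per-factor settings. A minimal sketch (the factor names below are hypothetical, not the actual factors used in our experiments):

```python
from itertools import product

# Hypothetical factors: each entry maps a factor name to its L^(m) values.
factors = {
    "blood_sample":  [0, 1],   # binary: absent / present
    "probe_dropout": [0, 1],
    "stable":        [0, 1],
}

# The switch variable s_t ranges over the cross-product of all factor settings.
switch_settings = list(product(*factors.values()))
K = len(switch_settings)  # K = prod_m L^(m)
```

Each tuple in `switch_settings` corresponds to one value of $s_t$; with three binary factors, $K=2^3=8$.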
The inclusion of $r$ frames of future context is analogous to fixed-lag smoothing in an FSLDS (see e.g.\ \citealp[sec.\ 10.5]{sarkka2013bayesian}). We note that the inclusion of future observations in the conditioning set means that the DSLDS operates with a delay of $r$ seconds, since an output of the model at time $t$ can be produced only after time $t+r$. Provided that $r$ is small enough ($r\leq 10$ in our experiments), this delay is negligible compared to the increase in performance. The LDS can also be regarded from a similarly discriminative viewpoint, which allows us to model $p(\mathbf{x}_t|\mathbf{x}_{t-1},\mathbf{y}_t)$. This is similar to the Maximum Entropy Markov Model (MEMM) \citep{mccallum2000maximum}, with the difference that the latent variable is continuous rather than discrete. The main advantage of this discriminative view is that it allows a rich number of (potentially highly correlated) features to be used without having to explicitly model their distribution or the interactions between them, as is the case in a generative model. A combination of these two discriminative viewpoints gives rise to the DSLDS graphical model in Figure \ref{fig:DFSLDS} (bottom). The DSLDS, conditioned on $s_t$, can then be seen as a collection of MEMMs, where each MEMM in the DSLDS plays a role equivalent to that of each LDS in the FSLDS. The DSLDS can be defined as \begin{align} \label{eq:model} \nonumber p(\mathbf{s},\mathbf{x}|\mathbf{y}) = \hspace{1mm} &p(s_1|\mathbf{y}_1)p(\mathbf{x}_1|s_1,\mathbf{y}_1) \times \\ \hspace{-2mm}&\prod_{t=2}^T p(s_t|\mathbf{y}_{t-l:t+r})p(\mathbf{x}_t|\mathbf{x}_{t-1},s_t,\mathbf{y}_t) \hspace{2mm}.
\end{align} \begin{figure}[ht] \hspace{-7mm} \begin{subfigure}{.5\textwidth} \begin{center} \begin{tikzpicture}[->,>=stealth',shorten >=1pt,auto,node distance=2cm, thick,state node/.style={circle,fill=white!20,minimum size = 30pt,draw,font=\fontsize{10}{1.5}\selectfont}, switch node/.style={rectangle,fill=white!20,minimum size = 30pt,draw,font=\fontsize{10}{1.5}\selectfont}, obs node/.style={circle,fill=blue!20,minimum size = 30pt,draw,font=\fontsize{10}{1.5}\selectfont}] \node[switch node] (1) {$s_{t-1}$}; \coordinate [left of =1] (h1); \node[switch node] (2) [right of=1] {$s_{t}$}; \node[switch node] (3) [right of=2] {$s_{t+1}$}; \coordinate [right of=3] (h3); \node[state node] (4) [above of=1] {$x_{t-1}$}; \coordinate [left of =4] (h4); \node[state node] (5) [above of=2] {$x_{t}$}; \node[state node] (6) [above of=3] {$x_{t+1}$}; \coordinate [right of=6] (h6); \node[obs node] (7) [below of=1] {$y_{t-1}$}; \node[obs node] (8) [below of=2] {$y_{t}$}; \node[obs node] (9) [below of=3] {$y_{t+1}$}; \draw [->,shorten <=0.7cm] (h1) to node {} (1); \draw [->,shorten <=0.7cm] (h4) to node {} (4); \draw [->,shorten >=0.7cm] (3) to node {} (h3); \draw [->,shorten >=0.7cm] (6) to node {} (h6); \path[every node/.style={font=\fontsize{1}{1.5}\selectfont}] (1) edge node [right] {} (2) (1) edge [below] node {} (7) (1) edge node [above] {} (4) (2) edge node [right] {} (3) (2) edge node [above] {} (5) (2) edge [below] node {} (8) (3) edge node [above] {} (6) (3) edge [below] node {} (9) (4) edge node [right] {} (5) (4) edge [bend right] node [below] {} (7) (5) edge node [right] {} (6) (5) edge [bend right] node [below] {} (8) (6) edge [bend right] node [below] {} (9); \end{tikzpicture} \end{center} \end{subfigure}% \vspace{4mm} \hspace{-24mm} \begin{subfigure}{.5\textwidth} \begin{center} \begin{tikzpicture}[->,>=stealth',shorten >=1pt,auto,node distance=2cm, thick,state node/.style={circle,fill=white!20,minimum size = 30pt,draw,font=\fontsize{10}{1.5}\selectfont}, switch 
node/.style={rectangle,fill=white!20,minimum size = 30pt,draw,font=\fontsize{10}{1.5}\selectfont}, obs node/.style={circle,fill=blue!20,minimum size = 30pt,draw,font=\fontsize{10}{1.5}\selectfont}] \node[switch node] (1) {$s_{t-1}$}; \node[switch node] (2) [right of=1] {$s_{t}$}; \node[switch node] (3) [right of=2] {$s_{t+1}$}; \coordinate [right of =3] (h3); \node[state node] (4) [above of=1] {$x_{t-1}$}; \coordinate [left of =4] (h4); \node[state node] (5) [above of=2] {$x_{t}$}; \node[state node] (6) [above of=3] {$x_{t+1}$}; \coordinate [right of =6] (h6); \node[obs node] (7) [below of=1] {$y_{t-1}$}; \coordinate [left of =7] (h7); \coordinate [left of =h7] (hh7); \node[obs node] (8) [below of=2] {$y_{t}$}; \node[obs node] (9) [below of=3] {$y_{t+1}$}; \draw [->,shorten <=0.7cm] (h4) to node {} (4); \draw [->,shorten <=0.7cm] (h7) to node {} (1); \draw [->,shorten <=2.7cm] (hh7) to node {} (1); \draw [->,shorten <=0.7cm] (h7) to node {} (2); \draw [->,shorten >=0.7cm] (6) to node {} (h6); \draw [->,shorten >=0.7cm] (7) to node {} (h3); \draw [->,shorten >=0.7cm] (8) to node {} (h3); \draw [->] (8) to node {} (1); \draw [->,shorten >=0.8cm] (9) to node {} (h3); \draw [->] (9) to node {} (1); \draw [->] (9) to node {} (2); \path[every node/.style={font=\fontsize{1}{1.5}\selectfont}] (7) edge node [below] {} (1) (7) edge [bend left] node {} (4) (1) edge node [below] {} (4) (2) edge node [below] {} (5) (8) edge [bend left] node {} (5) (3) edge node [below] {} (6) (9) edge [bend left] node {} (6) (4) edge node [right] {} (5) (7) edge node [below] {} (2) (7) edge node [below] {} (3) (5) edge node [right] {} (6) (8) edge node [below] {} (2) (8) edge node [below] {} (3) (9) edge node [below] {} (3); \end{tikzpicture} \end{center} \end{subfigure} \caption{Graphical model of the FSLDS (top) and the DSLDS (bottom). The state-of-health and underlying physiological values of a patient are represented by $s_t$ and $\mathbf{x}_t$ respectively. 
The shaded nodes correspond to the observed physiological values, $\mathbf{y}_t$. Note that in the case of the DSLDS the conditional probability $p(s_{t}|\mathbf{y}_{t-l:t+r})$ is modelled directly.} \label{fig:DFSLDS} \end{figure} The simplest assumption we can make for the DSLDS is that $p(s_t|\mathbf{y}_{t-l:t+r})$ factorises, so that \begin{align} \label{eq:factorised_switch} p(s_t|\mathbf{y}_{t-l:t+r}) = \prod_{m=1}^M p({f}_{t}^{(m)}|\mathbf{y}_{t-l:t+r}) \hspace{2mm} . \end{align} However, one could use a structured output model to predict the joint distribution of different factors. \subsection{Predicting s$_{t}$} \label{sec:s_t} Our belief about the state of health of a patient at time $t$ is modelled by $p(s_t|\mathbf{y}_{t-l:t+r})$, the conditional probability of the switch variable given the observed vital signs. Following the factorisation of the switch variable in eq.\ \ref{eq:factorised_switch}, we model the conditional probability of each factor being active at time $t$ given the observations with a probabilistic discriminative binary classifier, so that $p({f}_{t}^{(i)} = 1|\mathbf{y}_{t-l:t+r}) = G(\phi(\mathbf{y}_{t-l:t+r}))$, where $G(\cdot)$ is a classifier-specific function, and $\phi(\mathbf{y}_{t-l:t+r})$ is the feature vector that acts as input to our model at each time step as described in Section \ref{sec:features}. As is evident from Figure \ref{fig:DFSLDS} (bottom) there is no explicit temporal dependence on the switch variable sequence. However, temporal continuity is implicitly incorporated in our model through the construction of the features. \subsubsection{An $\alpha$-mixture of s$_{t}$} The DSLDS model can be seen as complementary to the FSLDS, and they can be run in parallel. One way of combining the two outputs is to maintain an $\alpha$-mixture over $s_t$. 
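To make this combination concrete, the $\alpha$-mixture $p_{\alpha}(s_t)=c\,\big(p_{g}(s_t)^{(1-\alpha)/2}+p_{d}(s_t)^{(1-\alpha)/2}\big)^{2/(1-\alpha)}$ can be sketched in a few lines of numpy (valid for $\alpha\neq 1$; the $\alpha\to 1$ limit is the normalised geometric mean):

```python
import numpy as np

def alpha_mixture(p_g, p_d, alpha):
    """alpha-mixture of two discrete distributions (alpha != 1).

    p_alpha is proportional to
    (p_g^((1-alpha)/2) + p_d^((1-alpha)/2))^(2/(1-alpha)),
    normalised so that the result sums to one.
    """
    e = (1.0 - alpha) / 2.0
    mix = (p_g**e + p_d**e) ** (1.0 / e)
    return mix / mix.sum()
```

For $\alpha=-1$ the exponents reduce to one and the function returns the equally weighted mixture $(p_g+p_d)/2$, i.e.\ the mixture-of-experts special case.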
If $p_{g}(s_t)$ and $p_{d}(s_t)$ are the outputs for the switch variable at time $t$ from the FSLDS and the DSLDS respectively, then their $\alpha$-mixture is given by: $p_{\alpha}(s_t) = c \left( {p_{g}(s_t) }^{(1-\alpha)/2} + {p_{d}(s_t)}^{(1-\alpha)/2} \right)^{2/(1-\alpha)}$, where $c$ is a normalisation constant which ensures that $p_{\alpha}(s_t)$ is a probability distribution. The family of $\alpha$-mixtures subsumes various known mixtures of distributions and defines a continuum across them via the $\alpha$ parameter. For example, for $\alpha=-1$ we retrieve the mixture-of-experts framework (with equally weighted experts), while for $\alpha \rightarrow 1$ the formula yields $p_{1}(s_t) = c\sqrt{p_{g}(s_t)p_{d}(s_t)}$, rendering it equivalent to a product-of-experts viewpoint. In general, as $\alpha$ increases, the $\alpha$-mixture assigns more weight to the smaller elements of the mixture (with $\alpha\rightarrow\infty$ giving $p_{\infty}(s_t) = \min\{p_{g}(s_t),p_{d}(s_t)\}$), while as $\alpha$ decreases, more weight is assigned to the larger elements (with $\alpha\rightarrow - \infty$ giving $p_{-\infty}(s_t) = \max\{p_{g}(s_t),p_{d}(s_t)\}$). A thorough treatment is given in \citet{amari2007integration}. \subsection{Predicting x$_{t}$} \label{sec:x_t} The model of the patient's physiology should capture the underlying temporal dynamics of their observed vital signs under their current health state. The idea is that the current latent continuous state of a patient should be dependent on (a) the latent continuous state at the previous time step, (b) the current state of health and (c) the current observed values.
We model these assumptions as follows \begin{align} \label{eq:state_space} \nonumber &p(\mathbf{x}_t|\mathbf{x}_{t-1},s_t,\mathbf{y}_t) \propto \\ \nonumber &\exp\lbrace \scalebox{0.75}[1.0]{\( - \)}\dfrac{1}{2}(\mathbf{x}_t \scalebox{0.75}[1.0]{\( - \)} \mathbf{A}^{(s_{t})}\mathbf{x}_{t-1})^{\top} (\mathbf{Q}^{(s_{t})})^{-1} (\mathbf{x}_t \scalebox{0.75}[1.0]{\( - \)} \mathbf{A}^{(s_{t})}\mathbf{x}_{t-1})\rbrace \times\\ &\exp \lbrace \scalebox{0.75}[1.0]{\( - \)}\dfrac{1}{2}(\mathbf{C}^{(s_{t})}\mathbf{x}_t \scalebox{0.75}[1.0]{\( - \)} \mathbf{y}_t)^{\top} (\mathbf{R}^{(s_{t})})^{-1} (\mathbf{C}^{(s_{t})}\mathbf{x}_{t} \scalebox{0.75}[1.0]{\( - \)} \mathbf{y}_t)\rbrace \hspace{2mm}. \end{align} The first term on the RHS of eq.\ \ref{eq:state_space} is the {\bf system model} for an LDS and captures the dynamics of a patient's latent physiology under state $s_{t}$. The second term can be seen as the discriminative counterpart of the {\bf observation model} of an LDS. In our condition monitoring setting, the observed vital signs are considered to be noisy realisations of the true, latent physiology of a patient and thus, the observation model encodes our belief that $\mathbf{x}_{t}$ is a noisy version of $\mathbf{y}_{t}$. Under this assumption, $\mathbf{C}^{s_t}$ consists of 0/1 entries, which are set based on our knowledge of whether the observations $\mathbf{y}_{t}$ are artifactual or not under state $s_t$. In the FSLDS, the corresponding observation model encodes the belief that the generated $\mathbf{y}_{t}$ should be normally distributed around $\mathbf{x}_{t}$ with covariance $\mathbf{R}^{s_t}$, whereas in our discriminative version, the observation model encodes our belief that $\mathbf{x}_{t}$ should be normally distributed around $\mathbf{y}_{t}$ with covariance $\mathbf{R}^{s_t}$. 
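Both factors in Eq.~\eqref{eq:state_space} are Gaussian in $\mathbf{x}_t$, so the update has a closed form obtained by completing the square. A minimal numpy sketch (the matrices passed in are illustrative placeholders, not learned parameters):

```python
import numpy as np

def dslds_x_update(x_prev, y, A, Q, C, R):
    """Combine the system model N(x_t; A x_{t-1}, Q) with the
    discriminative observation term N(C x_t; y, R) by completing
    the square in x_t: a product of two Gaussian 'experts'."""
    Q_inv = np.linalg.inv(Q)
    R_inv = np.linalg.inv(R)
    prec = Q_inv + C.T @ R_inv @ C            # posterior precision
    cov = np.linalg.inv(prec)                 # posterior covariance
    mean = cov @ (Q_inv @ (A @ x_prev) + C.T @ R_inv @ y)
    return mean, cov
```

Setting the relevant rows of `C` to zero drops the observation term for artifact-affected channels, so the state evolves under the system dynamics alone, as described below.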
The idea behind this model is that at each time step we update our belief about $\mathbf{x}_t$ conditioned on its previous value, $\mathbf{x}_{t-1}$, and the current observation, $\mathbf{y}_t$, under the current regime $s_t$. For example, under an artifactual process, the observed signals do not convey useful information about the underlying physiology of a patient. In that case, we drop the connection between $\mathbf{y}_t$ and $\mathbf{x}_t$ (for the artifact-affected channels) which translates into setting the respective entries of $\mathbf{C}^{s_t}$ to zero. Then, the latent state $\mathbf{x}_t$ evolves only under the influence of the appropriate system dynamics parameters $(\mathbf{A}^{(s_{t})},\mathbf{Q}^{(s_{t})})$. Conversely, operation under a non-artifactual regime incorporates the information from the observed signals, effectively transforming the inferential process for $\mathbf{x}_t$ into a product of two ``experts'', one propagating probabilities from $\mathbf{x}_{t-1}$ and one from the current observations. We note that the step of conditioning on the current regime $s_t$ in order to predict $\mathbf{x}_t$ is required for our task, as we do not have training data for the $\mathbf{x}$-state. Otherwise, one could imagine building a simpler model such as a conditional random field \citep{lafferty2001conditional}, to predict the $\mathbf{x}$-state directly from the observations. However, in our case, where only labels about the patient's regime are available, this is not possible. \subsection{Learning} \label{sec:learning} We first describe learning in the general SLDS setting. The parameters that need to be learned are: \{$\mathbf{A}^{s}$, $\mathbf{Q}^{s}$, $\mathbf{C}^{s}$, $\mathbf{R}^{s}$\}. Given training data for each switch setting, these can be learned independently as LDS parameters for each configuration of $s$. Following \citet{quinn2009factorial} we use an independent ARIMA model with added observation noise for each channel. 
Casting such a model into state space form is a standard procedure as described in \citet[sec.\ 12.1]{brockwell2009time}, and amounts to reformulating the parameters of the ARIMA model into the parameters of a state-space model. Once the model is in state space form, $\mathbf{A}^{s}$, $\mathbf{Q}^{s}$, $\mathbf{C}^{s}$, $\mathbf{R}^{s}$ can be fit according to the maximum likelihood criterion by using numerical optimisation methods (such as Newton-Raphson or Gauss-Newton), as presented in \citet[sec.\ 2.6]{shumway2000time}, or expectation maximisation (EM), as presented in \citet{ghahramani1996parameter}. We note that the vector ARMA (VARMA) representation is used, where for example a one-dimensional AR($p$) process can be encoded as a $(p+1)$-dimensional VAR(1) process by maintaining a latent state representation of the form $\mathbf{x}_{t} = [{x}_{t}\;{x}_{t-1}\;...\;{x}_{t-p}]$. In the DSLDS, the same set of parameters needs to be learned. As mentioned in Section \ref{sec:x_t}, the assumptions for the DSLDS observation model constrain $\mathbf{C}^{s}$ to be a binary matrix, whose values are set so as to pick the most recent value ${x}_{t}$ under the VARMA representation. For example, if we model one channel, under a physiological regime, as an AR(2) process, then $\mathbf{C}^{s} = [1\;0\;0]$. Under this constrained form of $\mathbf{C}^{s}$, we obtain the remaining parameters, $\mathbf{A}^{s}$, $\mathbf{Q}^{s}$ and $\mathbf{R}^{s}$, using the same learning process as the one already described for the general SLDS. The task of determining the order of the respective ARIMA models is less straightforward. We have followed a practical approach as suggested in \citet[sec.\ 6.2]{diggle1990time}.
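The AR($p$)-to-VAR(1) encoding mentioned above amounts to building a companion-form transition matrix. A minimal sketch following the $(p+1)$-dimensional stacked-state convention used here:

```python
import numpy as np

def ar_companion(coeffs):
    """Companion matrix turning AR(p) x_t = sum_i a_i x_{t-i} + e_t
    into a VAR(1) on the stacked state [x_t, x_{t-1}, ..., x_{t-p}]
    (the (p+1)-dimensional convention used in the text)."""
    p = len(coeffs)
    A = np.zeros((p + 1, p + 1))
    A[0, :p] = coeffs          # new value from the p most recent lags
    A[1:, :p] = np.eye(p)      # shift the remaining entries down
    return A
```

With this convention the observation row $\mathbf{C}^{s} = [1\;0\;\ldots\;0]$ simply reads off the newest entry of the stacked state.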
The autocorrelation and partial autocorrelation functions (ACF and PACF respectively) of the stationary data (if a time series is not stationary, we make it stationary by successive differencing) were examined to provide an initial estimate of the appropriate model order. A clear cut-off at lag $q$ in the ACF plot is suggestive of an MA($q$) process, while a clear cut-off at lag $p$ in the PACF plot is suggestive of an AR($p$) process. Clear cut-offs are rare in a real-world application, in which case we looked for less clear tail-offs in the PACF and ACF plots. After establishing a small number of potential model orders suggested by these tail-offs, we explored the model order around these initial estimates by calculating the Akaike Information Criterion (AIC) score \citep{akaike1972information} for each candidate, and finally the one with the smallest AIC value was chosen. \subsection{Inference} \label{sec:inference} In this paper we are concerned with the task of computing the distribution $p(s_t,\mathbf{x}_t|\mathbf{y}_{1:t+r})$. According to our proposed model, $p(s_t|\mathbf{y}_{t-l:t+r})$ can be inferred at each time step via a classifier, as described in Section \ref{sec:s_t}. However, exact inference for $\mathbf{x}_t$ is still intractable. The same limitation as in the case of a standard SLDS applies \citep{lerner2001inference}: in order to maintain an exact belief over the posterior distribution of $\mathbf{x}_t$ we need to keep track of all the potential combinations of switch variable settings that could have led us from $\mathbf{x}_{t-1}$ to $\mathbf{x}_t$, making inference scale exponentially with time. An approximation of this distribution can be maintained via the Gaussian Sum algorithm\footnote{The Gaussian Sum algorithm is also known as the Generalised Pseudo Bayesian (GPB) algorithm, as mentioned in \citet{murphy1998switching}.} \citep{alspach1972nonlinear}.
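The collapse step at the heart of the Gaussian Sum approximation, replacing a mixture of Gaussians by a single Gaussian with the same first two moments, can be sketched as follows (the example mixture in the usage below is illustrative):

```python
import numpy as np

def collapse_mixture(weights, means, covs):
    """Moment-match a mixture of Gaussians to a single Gaussian:
    the collapsed mean is the weighted mean of the component means,
    and the collapsed covariance adds the spread of those means."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    mu = sum(wi * mi for wi, mi in zip(w, means))
    cov = sum(wi * (Ci + np.outer(mi - mu, mi - mu))
              for wi, mi, Ci in zip(w, means, covs))
    return mu, cov
```

For $J=1$ this is exactly the second-order moment matching applied per switch setting.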
The idea is that at each time step $t$ we maintain an approximation of $p(\mathbf{x}_t|s_t,\mathbf{y}_{1:t+r})$ as a mixture of $J$ Gaussians. Moving one time step forward will result in the posterior $p(\mathbf{x}_{t+1}|s_{t+1},\mathbf{y}_{1:t+r+1})$ having $KJ$ components, which are again collapsed to $J$ components. In our experiments we use $J=1$, which translates into matching moments (up to second order) of the distribution for each setting of $s_t$, as shown in \citet{murphy1998switching}. Therefore, inference in the DSLDS can be seen as a two-step process, where $p(s_t|\mathbf{y}_{t-l:t+r})$ is inferred by our discriminative classifier, and $p(\mathbf{x}_t|s_t,\mathbf{y}_{1:t+r})$ is inferred according to the Gaussian Sum algorithm. \subsection{Related work} \label{sec:review} In terms of methodology, our proposed model bears some similarities to the one used by \citet{lu2009hybrid}. However, their model was used to model spatial relationships, and they were only concerned with a binary discrete latent space. In our case, we are concerned with modelling temporal structure, and we have a richer and more complex discrete latent space. More importantly, in their work the distribution maintained over the continuous latent space is a single multivariate Gaussian, whereas in our model, as described in the previous section, the belief over the continuous latent space is modelled as a mixture of $KJ$ Gaussians. This allows us to keep track of multiple modes of the belief over a patient's underlying physiology, since this is potentially affected by multiple factors. In terms of application, our work is most similar to that presented in \citet{quinn2009factorial}, where the same task of inferring artifactual and physiological processes was considered. However, a generative approach was taken there via the use of an FSLDS. In our case, we take a discriminative approach, which performs better in the experiments considered below.
Also, in \citet{lehman2014physiological}, a switching vector autoregressive model was used on minute-by-minute heart rate and blood pressure vital signs to provide inputs for a logistic regression classifier with the goal of patient outcome prediction. In our work, we use a more expressive model, capable of modelling both discrete and continuous latent states under a unified framework, for the purposes of detecting patients' state-of-health and inferring their underlying physiology. \section{Experiments} \label{sec:experiments} In this section we describe experiments on two challenging datasets comprising patients admitted to ICUs in two different hospitals, namely a neonatal ICU and an adult ICU. We emphasise that it is highly non-trivial to obtain annotations for medical datasets, as it requires the very scarce resource of experienced clinicians. Indeed, for the adult ICU, the annotated data are the product of a one-year collaboration with that ICU. Physionet \citep{goldberger2000physiobank}, a freely available medical dataset, is not suitable for our task since the only available time-series annotations are a limited set of life-threatening/terminal events, whose identification would not be of practical use in the ICU. For both datasets, we evaluate the performance of the DSLDS compared to the FSLDS. We also report the performance of an $\alpha$-mixture of the two models. Note that the FSLDS has been shown in \citet{quinn2009factorial} to achieve superior results compared to more basic models such as a factorial hidden Markov model (FHMM) for the task of condition monitoring in ICUs. We first provide a short description of the various features that were used as input to the state-of-health model as described in Section \ref{sec:s_t}, followed by an outline of the main characteristics of the two datasets.
We conclude this section by providing results on two tasks: a) inferring a patient's state of health and b) inferring a patient's underlying physiology in the presence of artifact corruption. \subsection{Features \& Classifiers} \label{sec:features} As described in Section \ref{sec:s_t}, the estimate of $s_t$ is the output of a discriminative classifier. For both datasets, we found that using a random forest \citep{breiman2001random} as our classification method yields the best performance. Suggestions for judicious selection of various tree-construction parameters can be found in \citet[Ch.\ 15]{hastie2009elements}. The Gini index was used as the criterion for splitting nodes for each tree in the random forest. The output of the random forest for a new test point is an average of the predictions produced by each tree, where the prediction of each tree is the proportion of observations belonging to the positive class in the leaf node to which the test point belongs. Apart from their high performance, another appealing property of random forests is that they can handle missing observations via the construction of surrogate variables and splits within each decision tree, as explained in \citet[sec.\ 9.2.4]{hastie2009elements}. We use a variety of features to capture interesting temporal structure between successive observations. At each time step, a sliding window of length $l+r+1$ is computed. For some features we also divide the window into further sub-windows and extract additional features from them.
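As a concrete illustration of this windowing scheme (the exact feature set is enumerated next in the text), the sketch below extracts a few such summary features from a single window; function and parameter names are our own, not the paper's:

```python
import numpy as np

def window_features(window, n_segments=3, alpha=0.3):
    """Extract simple summary features from one sliding window of raw values."""
    w = np.asarray(window, dtype=float)
    feats = [w.min(), float(np.median(w)), w.max()]
    # Least-squares slope of each (roughly) equal-length sub-segment.
    for seg in np.array_split(w, n_segments):
        slope = np.polyfit(np.arange(len(seg)), seg, 1)[0]
        feats.append(slope)
    # Exponentially weighted moving average of the window (final value kept).
    ewma = w[0]
    for v in w[1:]:
        ewma = alpha * v + (1 - alpha) * ewma
    feats.append(ewma)
    # First-order differences, summarised here by mean absolute change.
    feats.append(float(np.mean(np.abs(np.diff(w)))))
    return np.array(feats)

# On a perfectly linear window, every sub-segment slope equals 1.
f = window_features(np.arange(12.0))
```

In practice one such feature vector would be computed per channel per time step and concatenated before being fed to the random forest.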
More precisely, the full set of features used is the following: (i) the observed, raw values of the previous $l$ and future $r$ time steps ($\mathbf{y}_{t-l:t+r}$); (ii) the slopes (calculated by least squares fitting) of segments of that sliding window that are obtained by dividing it into segments of length $(l+r+1)/k$; (iii) an exponentially weighted moving average of this window of raw values (with a kernel of width smaller than $l+r+1$); (iv) the minimum, median and maximum of the same segments; (v) the first order differences of the original window; and (vi) differences of the raw values between different channels. \subsection{Neonatal ICU} \label{sec:NICU_data} The first dataset is the one used in \citet{quinn2009factorial}\footnote{The dataset has been anonymised and is available at: \emph{www.cit.mak.ac.ug/staff/jquinn/software.html}}. It comprises 24-hour periods from fifteen neonates admitted to the ICU of the Edinburgh Royal Infirmary, with events of interest annotated by two clinical experts. These annotations include: i) blood sample events (BS), ii) periods during which an incubator is open (IO), iii) core temperature probe disconnections (TD), iv) bradycardias (BR), and v) periods that are clearly not stable but no further identification was made by the clinicians (X). These last cases can be collectively considered as a ``none-of-the-above'' factor, which is referred to as the X-factor by \citet{quinn2009factorial}. More details about the events of interest can be found in the aforementioned work. We used the same parameters for the underlying physiology model as the ones used there. \subsection{Adult ICU} \label{sec:SGH_data} The second dataset comprises data collected from nine adults admitted to the neuro ICU of the Southern General Hospital in Glasgow.
An average of 33 hours of data was collected from each of these patients, consisting of measurements recorded on a second-by-second basis for four different channels: heart rate (HR), systolic and diastolic blood pressure (BPsys, BPdia), and systolic intracranial pressure (ICPsys). These data were then annotated by a clinical expert. We give a brief description of the learning process for stability periods and modelled factors, which include blood samples, damped traces (DT), suction events (SC), and the X-factor. {\bf Stable periods} correspond to time periods when no annotation occurred from the experts, suggesting that the patient is in a stable condition. In \citet{williams2011automating} it was found that in a similar setting a 15 minute period of stability provides an adequate amount of training data. We use the same time interval for our experiments. We found that ARIMA(2,1,0) models were adequate for all channels. An example of a {\bf blood sample} is shown in Figure \ref{fig:SGH_damped_BS} (bottom). Changes in BPsys and BPdia can be modelled as a four-stage process: i) the blood is diverted to a syringe for blood sampling, which causes an artifactual ramp in the observed measurements. This is similar to the blood sample model described in \citet{quinn2009factorial} and we follow the same approach here. ii) A recalibration stage follows, causing measurements to drop to zero, which can be modelled similarly to a dropout event as in \citet{quinn2009factorial}. iii) BP measurements briefly resume stable dynamics. iv) The blood sample is concluded with a flushing event for hygiene purposes, which causes a sharp increase in measurements. This stage is modelled as an AR(3) process for both the BPsys and BPdia channels. A total of 64 blood sample events were annotated, with an average duration of 1.6 minutes.
During a {\bf suction event}, a flexible catheter is inserted into the airway of the patient to remove secretions that have accumulated over time in their pulmonary system. This event is observed as a significant increase in the values of all observed channels. An AR(2) process models the HR channel, while AR(3) processes were used to model the remaining channels. A total of 53 suction events were annotated, with an average duration of 4.3 minutes. A {\bf damped trace}, an example of which is shown in Figure \ref{fig:SGH_damped_BS} (top), is usually observed due to blood residue accumulating in the line used for measuring the blood pressure channel, which leads both BPsys and BPdia to converge to a similar mean value while at the same time the measurements exhibit high variability. Both channels were modelled with AR(3) processes. A total of 32 damped trace events were annotated, with an average duration of 14 minutes. Apart from the aforementioned factors, which we explicitly model, a multitude of other factors are present in our training data, corresponding to either known but not yet modelled factors (such as hygiene events, tachycardias, etc.) or to unknown factors (clear abnormalities which however have not been identified by the clinicians). We collectively treat those events as unknown and model them according to the X-factor model proposed in \citet{quinn2009factorial}. A total of 278 X-factor events were annotated, with an average duration of 7.5 minutes. Channels which are unaffected by an artifactual process (as shown in Table \ref{tab:channels}) are modelled as in the stable case. In every case the parameters of the $\mathbf{x}$-state models were further optimised by EM.
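The AR orders quoted above were chosen with the AIC procedure described in the learning section. A toy sketch of that selection, using least-squares AR fits and the Gaussian AIC up to an additive constant (the paper fits full ARIMA models by maximum likelihood, so this is illustrative only):

```python
import numpy as np

def ar_fit(y, p):
    """Least-squares AR(p) fit; returns (coefficients, residual sum of squares, n)."""
    Y = y[p:]
    # Row t of the design matrix holds the p lagged values [y[t-1], ..., y[t-p]].
    X = np.column_stack([y[p - k:len(y) - k] for k in range(1, p + 1)])
    coeffs, _, _, _ = np.linalg.lstsq(X, Y, rcond=None)
    rss = float(np.sum((Y - X @ coeffs) ** 2))
    return coeffs, rss, len(Y)

def ar_aic(y, p):
    """Gaussian AIC up to a constant: n*log(RSS/n) + 2*(number of parameters)."""
    _, rss, n = ar_fit(y, p)
    return n * np.log(rss / n) + 2 * (p + 1)   # +1 for the noise variance

# Simulate an AR(2) series and pick the order with the smallest AIC.
rng = np.random.default_rng(0)
y = np.zeros(600)
for t in range(2, 600):
    y[t] = 0.75 * y[t - 1] - 0.5 * y[t - 2] + rng.normal()
orders = range(1, 6)
scores = {p: ar_aic(y, p) for p in orders}
best = min(scores, key=scores.get)
```

With a strong second-lag coefficient, AIC rules out AR(1) decisively, while the $2(p+1)$ penalty discourages needlessly high orders.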
\renewcommand{\tabcolsep}{0.11cm} \begin{table}[ht] \caption{Channels affected by different processes for the adult ICU are marked by $\bullet$.} \label{tab:channels} \begin{center} \begin{tabular}{c l l l l} \multicolumn{1}{c}{} &\multicolumn{1}{c}{HR} &\multicolumn{1}{c}{BPsys} &\multicolumn{1}{c}{BPdia} &\multicolumn{1}{c}{ICPsys} \\ \hline \\ Blood sample &${}$ &$\bullet$ &$\bullet$ &${}$ \\ Damped trace &${}$ &$\bullet$ &$\bullet$ &${}$ \\ Suction &$\bullet$ &$\bullet$ &$\bullet$ &$\bullet$ \\ X-factor &$\bullet$ &$\bullet$ &$\bullet$ &$\bullet$ \\ \end{tabular} \end{center} \end{table} \renewcommand{\tabcolsep}{0.11cm} \begin{table}[ht] \caption{Comparison of DSLDS, FSLDS and $\alpha$-mixture performance for the Neonatal ICU dataset. The optimal value of the $\alpha$ parameter is shown in parentheses.} \label{tab:NICU} \begin{center} \begin{tabular}{c l l l l l} \multicolumn{1}{c}{\bf AUC} &\multicolumn{1}{c}{BS} \hspace{-1mm} &\multicolumn{1}{c}{IO} &\multicolumn{1}{c}{TD} &\multicolumn{1}{c}{BR} &\multicolumn{1}{c}{X} \\ \hline \\ DSLDS \hspace{2mm} &$0.98$ \hspace{1mm} &$0.83$ \hspace{1mm} &$0.90$ \hspace{1mm} &$0.94$ \hspace{1mm} &$0.57$ \\ FSLDS \hspace{2mm} &$0.92$ &$0.87$ &$0.88$ &$0.85$ &$0.66$ \\ $\alpha$-mixture$^{(0.5)}$ \hspace{2mm} &$0.98$ &$0.89$ &$0.93$ &$0.92$ &$0.67$ \\ \end{tabular} \end{center} \end{table} \subsection{Results} For both datasets we compare the performance of the DSLDS and the FSLDS for the task of inferring a patient's state of health. We measure the performance of the models by reporting the Area under the Receiver Operating Characteristic curve (AUC). Also, in Figures \ref{fig:ROC_nicu} and \ref{fig:ROC_sgh}, we provide plots of the Receiver Operating Characteristic curves (ROC) for the classification of the factors of interest, comparing the DSLDS, the FSLDS, and an $\alpha$-mixture of the two models.
In the case of the DSLDS, the features described in Section \ref{sec:features} involve a number of hyperparameters that need to be chosen. Fitting them with a standard cross-validation (CV) scheme when data are not abundant poses a non-negligible risk of overfitting. As is shown in \citet{varma2006bias}, using CV to evaluate performance of a model when the model's hyperparameters have been themselves tuned using CV can lead to an optimistic bias of the estimate of the true performance. In that same work, a nested CV approach is shown to yield an almost unbiased estimate of the true performance, which we also follow in our experiments. In the outer loop the data are partitioned into $P$ disjoint test sets. After choosing one of these partitions, the rest of the data are used in the inner loop in a standard CV setup to select the hyperparameters. The hyperparameters which yielded the highest performance (average cross-validated AUC across factors in our case) in the inner loop are then used to estimate the performance of the model on the partition (test set) in the outer loop. This process is repeated $P$ times, once for each partition in the outer loop. For both datasets, we use leave-one-patient-out CV for the inner loop and 3-fold CV for the outer loop. In the inner loop, we perform a grid search over hyperparameters in the following sets: a) number of trees of random forest classifiers in \{10, 25, 50, 100, 200\}; b) $l$ in \{4, 9, 14, 19, 29, 49\}; c) $r$ in \{0, 5, 10\}. The sub-segment lengths (for slope features) and the kernel widths (for moving average features) were always set to max\{5, ($l+r+1$)/5\}. In the case of the FSLDS, it is not necessary to follow the same procedure.
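The nested CV scheme just described can be sketched generically as follows; `fit_score` stands in for training a classifier and computing its (average) AUC, and all names are illustrative rather than taken from the paper:

```python
import numpy as np

def nested_cv(y, groups, param_grid, fit_score, n_outer=3, seed=0):
    """Nested CV: an outer loop over disjoint test partitions, and an inner
    leave-one-group-out loop (one group per patient) to pick hyperparameters."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    outer_scores = []
    for test_idx in np.array_split(idx, n_outer):
        train_idx = np.setdiff1d(idx, test_idx)
        best_params, best_inner = None, -np.inf
        for params in param_grid:                   # grid search
            inner = []
            for g in np.unique(groups[train_idx]):  # leave one patient out
                val = train_idx[groups[train_idx] == g]
                tr = np.setdiff1d(train_idx, val)
                inner.append(fit_score(params, tr, val))
            if np.mean(inner) > best_inner:
                best_inner, best_params = float(np.mean(inner)), params
        # Winning hyperparameters are scored once on the held-out partition.
        outer_scores.append(fit_score(best_params, train_idx, test_idx))
    return outer_scores

# Dummy scorer that ignores the data and returns the hyperparameter itself,
# so the grid's maximum (0.7) must win in every outer fold.
y = np.zeros(12)
groups = np.repeat(np.arange(6), 2)
scores = nested_cv(y, groups, [0.1, 0.7, 0.3], lambda p, tr, te: p)
```

The key point is that the outer test partition never influences hyperparameter selection, which is what removes the optimistic bias discussed above.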
Using the AIC score, as shown in Section \ref{sec:learning}, for choosing the orders of the ARIMA processes (which constitute the model's hyperparameters) avoids potential overfitting by penalising the model's likelihood as the number of parameters grows. We therefore use 3-fold CV to evaluate the FSLDS's performance. To evaluate the $\alpha$-mixture model, we have chosen the optimal $\alpha$ value as the one that maximises the average AUC across factors, via 3-fold CV. This also allowed us to explore the behaviour of the model as a function of $\alpha$ for both datasets. \begin{figure}[ht] \begin{center} \begin{subfigure}[b]{0.25\textwidth} \centering \includegraphics[width=\textwidth, height = 6.5cm, keepaspectratio=true]{figures/roc_adfslds_nicu_BS.eps} \end{subfigure}% ~ \begin{subfigure}[b]{0.25\textwidth} \centering \includegraphics[width=\textwidth, height = 6.5cm, keepaspectratio=true]{figures/roc_adfslds_nicu_IO.eps} \end{subfigure} \begin{subfigure}[b]{0.25\textwidth} \centering \includegraphics[width=\textwidth, height = 6.5cm, keepaspectratio=true]{figures/roc_adfslds_nicu_TD.eps} \end{subfigure}% ~ \begin{subfigure}[b]{0.25\textwidth} \centering \includegraphics[width=\textwidth, height = 6.5cm, keepaspectratio=true]{figures/roc_adfslds_nicu_brady.eps} \end{subfigure} \begin{subfigure}[b]{0.25\textwidth} \centering \includegraphics[width=\textwidth, height = 6.5cm, keepaspectratio=true]{figures/roc_adfslds_nicu_X.eps} \end{subfigure}% ~ \begin{subfigure}[b]{0.25\textwidth} \centering \end{subfigure} \caption{ROC curves per modelled factor in the case of the neonatal ICU.} \label{fig:ROC_nicu} \end{center} \end{figure} \subsubsection{Neonatal ICU} \label{sec:NICU_res} In the case of the neonatal ICU we compare the two models on the full set of annotated factors reported in \citet{quinn2009factorial}.
The results are shown in Table \ref{tab:NICU}\footnote{The FSLDS results were obtained using code provided by \citet{quinn2009factorial} with the same parameters as the ones mentioned there. The results are very close with the exception of the core temperature disconnection factor (for which the reported AUC in \citet{quinn2009factorial} was 0.79, while we obtained a value of 0.88), and the blood sample factor (for which the reported AUC in \citet{quinn2009factorial} was 0.96, while we obtained a value of 0.92).}. The DSLDS outperforms the FSLDS in three out of the four clinically identified factors. The difference in favour of the DSLDS is clear for bradycardias and blood samples, but less pronounced for core temperature disconnections. The FSLDS achieves slightly higher performance in the case of the incubator open factor, and clearly outperforms the DSLDS in the case of the X-factor. The FSLDS models the presence of outliers by the inclusion of an extra factor, which is essentially governed by the same parameters as stability with the only difference being that the system noise covariance is an inflated version of the respective covariance of the stability dynamics (for more details, see \citealp{quinn2009factorial}). Such an approach has the potential to address the issue of outlier detection in a more general and thus more satisfactory way. In the case of the DSLDS, our approach is to collectively treat all abnormal events, other than the ones attributed to known factors, as an ``X-class'' and build a binary classifier to distinguish that class. As the training datapoints for this class are highly inhomogeneous in terms of shared discriminative features, and test points belonging to the X-class may not exhibit a high degree of similarity to the training set, it is not surprising that the DSLDS may perform rather poorly for the X-factor. 
However, by considering an $\alpha$-mixture of the two models, we can combine the discriminative power of the DSLDS for known factors with the increased performance of the FSLDS for the X-factor, thus achieving a higher performance (bottom line of Table \ref{tab:NICU}) compared to considering the two models separately. The behaviour of the $\alpha$-mixture model as a function of $\alpha$ is shown in Figure \ref{fig:alpha_vs_AUC} (top). The optimal $\alpha$-mixture ($\alpha = 0.5$) yields the best average AUC across factors (in fact, $\alpha=0.5$ yields optimal performance for each factor separately except bradycardia, where it is almost optimal) compared to all other considered $\alpha$ values and also outperforms the DSLDS and the FSLDS in all cases except for the bradycardia factor, where the DSLDS performs slightly better. \begin{figure}[ht] \begin{center} \begin{subfigure}[b]{0.25\textwidth} \centering \includegraphics[width=\textwidth, height = 6.5cm, keepaspectratio=true]{figures/roc_adfslds_sgh_BS.eps} \end{subfigure}% ~ \begin{subfigure}[b]{0.25\textwidth} \centering \includegraphics[width=\textwidth, height = 6.5cm, keepaspectratio=true]{figures/roc_adfslds_sgh_DT.eps} \end{subfigure} \begin{subfigure}[b]{0.25\textwidth} \centering \includegraphics[width=\textwidth, height = 6.5cm, keepaspectratio=true]{figures/roc_adfslds_sgh_SC.eps} \end{subfigure}% ~ \begin{subfigure}[b]{0.25\textwidth} \centering \includegraphics[width=\textwidth, height = 6.5cm, keepaspectratio=true]{figures/roc_adfslds_sgh_X.eps} \end{subfigure} \caption{ROC curves per modelled factor in the case of the adult ICU.} \label{fig:ROC_sgh} \end{center} \end{figure} \begin{figure}[ht] \begin{center} \begin{subfigure}[b]{0.5\textwidth} \centering \includegraphics[width=\textwidth, height = 4cm]{figures/dfslds_damped.eps} \end{subfigure}% \begin{subfigure}[b]{0.5\textwidth} \centering \includegraphics[width=\textwidth, height = 4cm]{figures/dfslds_BS.eps} \end{subfigure} \caption{Example of
DSLDS and FSLDS inferences for a damped trace event (top) and a blood sample event (bottom).} \label{fig:SGH_damped_BS} \end{center} \end{figure} \begin{figure}[ht] \begin{center} \begin{subfigure}[b]{0.5\textwidth} \centering \hspace{0.2\textwidth} {Neonatal ICU} \includegraphics[width=\textwidth, height = 5cm]{figures/alpha_vs_AUC_NICU.eps} \end{subfigure}% \begin{subfigure}[b]{0.5\textwidth} \centering \hspace{0.2\textwidth} {Adult ICU} \includegraphics[width=\textwidth, height = 5cm]{figures/alpha_vs_AUC_SGH.eps} \end{subfigure} \caption{Performance of the $\alpha$-mixture models as a function of $\alpha$ ($step=0.25$) for the neonatal ICU (top) and the adult ICU dataset (bottom). The asterisk marks the optimal value for $\alpha$.} \label{fig:alpha_vs_AUC} \end{center} \end{figure} \subsubsection{Adult ICU} \label{sec:SGH_res} \begin{table} \caption{Comparison of DSLDS, FSLDS and $\alpha$-mixture performance for the Adult ICU dataset. The optimal value of the $\alpha$ parameter is shown in parentheses.} \label{tab:SGH} \begin{center} \begin{tabular}{c l l l l} \multicolumn{1}{c}{\bf AUC} &\multicolumn{1}{c}{BS} &\multicolumn{1}{c}{DT} &\multicolumn{1}{c}{SC} &\multicolumn{1}{c}{X} \\ \hline \\ DSLDS \hspace{2mm} &$0.96$ \hspace{1mm} &$0.93$ \hspace{1mm} &$0.67$ \hspace{1mm} &$0.65$\\ FSLDS \hspace{2mm} &$0.95$ &$0.79$ &$0.57$ &$0.74$\\ $\alpha$-mixture$^{(0)}$ \hspace{2mm} &$0.99$ &$0.94$ &$0.70$ &$0.71$\\ \end{tabular} \end{center} \end{table} In the case of the adult ICU, inferences for two example events are shown in Figure \ref{fig:SGH_damped_BS}. In the top panel, a damped trace event is shown, which lasts for almost one hour before being resolved by a flushing event (spiking of both channels). The DSLDS accurately identifies the damped trace event, while the FSLDS completely fails to detect it, instead hypothesising several incorrect blood sample events. In the bottom panel a blood sample event is shown, where the multiple stages are clearly visible.
The event starts with two artifactual ramps, followed by a flushing, a zeroing, and finally another flushing. This is slightly different from the description we have already given, but slight deviations from the standard protocol due to human error are to be expected. In this case, both models manage to capture the event in a generally satisfactory manner. Summary results are reported in Table \ref{tab:SGH}. The DSLDS outperforms the FSLDS on all of the known factors. The damped trace and suction events in particular are characterised by high variability, which is hard to capture with a generative process; however, simple discriminative features are able to capture them with higher accuracy. As was expected, the FSLDS achieves a higher AUC for the X-factor. Again, the optimal $\alpha$-mixture ($\alpha = 0$) outperforms the DSLDS and the FSLDS in all cases except for the X-factor, where the FSLDS achieves a slightly higher AUC. Contrary to the neonatal ICU dataset, as shown in Figure \ref{fig:alpha_vs_AUC} (bottom), there are alternative $\alpha$ values which can yield higher AUC for different factors. For example, an X-factor AUC value of 0.76 can be obtained by setting $\alpha=5$. However, apart from the superior (on average) performance of the $\alpha$-mixture, another appealing property is that $\alpha$ could be treated as a user-tunable parameter. In a practical setting, the model could be preset with the optimal $\alpha$ value, but a clinician could decide, for example, to make the model focus on maximising its predictive performance on the X-factor (or some important physiological factor like bradycardia) to the potential detriment of other factors. The model could then adjust its $\alpha$ parameter in real-time, based on training data results, to maximise its performance on the desired factor.
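The $\alpha$-mixture itself is defined earlier in the paper and is not restated in this section. A common instantiation of such a family is Amari's $\alpha$-family of means, which we sketch below under that assumption: for $\alpha=-1$ it reduces to the arithmetic mean and, in the limit $\alpha\to 1$, to the geometric mean; the combination is generally unnormalised, so renormalisation across classes may be needed.

```python
import numpy as np

def alpha_mixture(p, q, alpha):
    """Amari-style alpha-mix of two probability estimates.

    NOTE: this particular formula is an assumption for illustration;
    the paper's exact definition appears in an earlier section."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    if np.isclose(alpha, 1.0):
        return np.sqrt(p * q)            # limiting case: geometric mean
    e = (1.0 - alpha) / 2.0
    return ((p ** e + q ** e) / 2.0) ** (1.0 / e)

# alpha = -1 recovers the arithmetic mean of the two model outputs.
mixed = alpha_mixture(0.2, 0.8, -1.0)
```

In a deployment, $p$ and $q$ would be the per-factor DSLDS and FSLDS estimates, with $\alpha$ tuned by cross-validation or by the clinician as described above.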
\subsubsection{Inference for $\mathbf{x}$-state} \label{sec:x_state_results} Finally, Figure \ref{fig:x_state} shows the inferred distribution of underlying physiology during a blood sample taken from a neonate for both models. In both cases, estimates are propagated with increased uncertainty under the correctly inferred artifactual event. Note a small difference at the start of the event: The DSLDS partially identifies the event causing an increase in uncertainty, while the FSLDS (incorrectly) identifies this part as stable and thus its $\mathbf{x}$-state update exhibits lower uncertainty. Maintaining an estimate of the underlying vital signs in the presence of artifacts can then be used for data imputation. Another use, which has been deemed important by our clinical experts, is that such an estimate can help doctors maintain an approximate view of a patient's underlying physiology during artifactual events that would otherwise completely obscure a patient's vital signs. This can be crucial during treatment of a patient under critical conditions, such as the ones found in an ICU. \begin{figure}[ht] \begin{center} \hspace{-5mm} \begin{subfigure}[b]{0.5\textwidth} \centering \hspace{0.2\textwidth} {DSLDS} \includegraphics[width=\textwidth, height = 5cm]{figures/dslds_xstate.eps} \end{subfigure}% \hspace{-5mm} \begin{subfigure}[b]{0.5\textwidth} \centering \hspace{0.2\textwidth} {FSLDS} \includegraphics[width=\textwidth, height = 5cm]{figures/fslds_xstate.eps} \end{subfigure} \caption{Example of the inferred underlying physiology in the presence of a blood sample in the case of the DSLDS (top) and the FSLDS (bottom). 
The solid line corresponds to the actual observations, while the estimated true physiology is plotted as a dashed line with the shaded area indicating two standard deviations.} \label{fig:x_state} \end{center} \end{figure} \section{Discussion} \label{sec:discussion} We have presented a discriminative approach for the very important application of patient monitoring in ICUs. We show that our new approach is able to outperform the previous generative approach used for the same task in most of the investigated cases. We also show that an $\alpha$-mixture of the two approaches yields better results than either model separately. In our approach we have assumed that the prediction of the switching variable factorises over the state space. However, one could use a structured output model to predict the joint distribution of different factors. Finally, another issue is the lack of explicit temporal continuity in the $s$-chain. Implicitly, this is handled by the feature construction process. However, a future direction could be to establish a Markovian connection on the $s$-chain too and compare with our current approach. \subsubsection*{Acknowledgements} We extend our thanks to Ian Piper and Christopher Hawthorne for their expert insight and annotated data, and to Martin Shaw and Partha Lal for preprocessing code and valuable discussions. Author KG was funded by the Scottish Informatics and Computer Science Alliance. This research was funded in part by the Chief Scientist Office (Scotland) ref.\ CZH/4/801. \subsubsection*{References} \bibliographystyle{natbib} \renewcommand{\section}[2]{}
\section{Introduction} The notion of {\sl kernel\/}, as introduced by Von Neumann in 1944 \cite{VN}, has been generalised in several directions, see for example \cite{H,LS,R}. Here, we study yet another generalisation, introduced by Sands et al. \cite{SSW}, which arises naturally when the arcs of a digraph are coloured (see also \cite{GS}); this generalisation was also studied in depth by Arpin and Linek \cite{AL}. Following Arpin and Linek \cite{AL}, let $\mathcal{B}_3$ be the class of digraphs $H$ with the property that for every digraph $D$ and every $H$-colouring of $D$ (to be defined) there exists an $H$-kernel by walks in $D$. We call the digraphs in $\mathcal{B}_3$ {\sl panchromatic patterns\/}. The aim of this paper is to characterise panchromatic patterns. For, we first prove a technical lemma (see Section~3) that allows us to add, under controlled circumstances, an arc to a panchromatic pattern while preserving this property. This allows us to settle a question raised by Arpin and Linek \cite{AL} which characterises all panchromatic patterns of order 3. Then, we introduce the notion of a {\it bicomplete\/} digraph and prove (see Lemma~\ref{bicompletas}) that such digraphs are panchromatic patterns. The rest of the paper, after some preliminaries in Section~2, is devoted to proving the necessity of such a property for being a panchromatic pattern, settling the desired characterisation. \section{Preliminaries} By a {\it digraph\/} $D=(V,A)$ we mean a finite non-empty set of vertices $V$ and a set of (directed) arcs $A\subseteq V\times V$. Given another digraph $H$, by an {\it $H$-colouring of $D$\/} we mean a map $\varsigma\colon A(D)\to V(H)$ from the arcs of $D$ to the vertices of $H$ --- we think of the vertices of $H$ as colours assigned to the arcs of $D$, hence the name.
Given such a colouring, a walk $W=x_0,x_1,\dots,x_k$ in $D$ is called an {\it $H$-walk\/} if $\varsigma(W)=\varsigma(x_0,x_1),\varsigma(x_1,x_2),\dots,\varsigma(x_{k-1},x_k)$ is a walk in $H$. A subset $K\subset V(D)$ is called an {\it $H$-kernel\/} if it is both $H$-independent and $H$-absorbent; viz., there are no $H$-walks between any pair of different vertices in it, and given any vertex out of it, there is an $H$-walk from it into such a subset. Suppose that $H$ is a looped digraph with the following properties: \begin{enumerate} \item $V(H) = \bigsqcup_{i=1}^n C_i$, \item $(x,y)\in A(H)$, whenever $x\not=y$ and $x,y\in C_i$ for some $i$, and \item $C_i\times C_j\subseteq A(H)$ whenever $i\not=j$ and $(C_i\times C_j)\cap A(H)\not=\emptyset$, or $i=j$ and $(x,x)\in A(H)$ for some $x\in C_i$. \end{enumerate} Then, the digraph $H'$ with vertices $V(H')=\{C_1,\dots,C_n\}$ and arcs $(C_i,C_j)\in A(H')$ if and only if $(C_i\times C_j)\cap A(H)\not=\emptyset$ is called a {\it contraction\/} of $H$, and $H$ is an {\it expansion\/} of $H'$. A digraph $H$ is called a {\it panchromatic pattern\/} if given any digraph $D$ and any $H$-colouring of $D$, we can find an $H$-kernel of $D$. The aim of this paper is to characterise the class $\mathcal{B}_3$ of panchromatic patterns. For, we will use the following results of Arpin and Linek \cite{AL}, without further reference: \begin{lemma} \label{inducidas}\ \\ \begin{enumerate} \item If $H\in\mathcal{B}_3$, then every vertex of $H$ has a loop (is looped), \item If $H\in\mathcal{B}_3$ and $H'$ is an induced subdigraph of $H$, then $H'\in\mathcal{B}_3$, \item Let $H'$ be a contraction of $H$. Then $H\in\mathcal{B}_3$ if and only if $H'\in\mathcal{B}_3$. \end{enumerate} \end{lemma} \begin{lemma} \label{caminos} Let $W=x_0,x_1,\dots,x_k$ be a walk in $H$ such that \begin{enumerate} \item for all $x_j\in W$, with $0\leq j\leq k-1$, there is a colour $c_j\in V(H)$ such that $(x_j,c_j)\not\in A(H)$, \item $(x_k,x_0)\not\in A(H)$.
\end{enumerate} Then, $H\not\in\mathcal{B}_3$. \end{lemma} \begin{lemma} The digraphs depicted in Figure~\ref{enb3} are all elements of $\mathcal{B}_3$. \end{lemma} \begin{figure}[htbp] \centering \includegraphics[width=4in]{enb3.pdf} \caption{Panchromatic patterns of order 3} \label{enb3} \end{figure} \begin{lemma} None of the digraphs depicted in Figure~\ref{noenb3} are elements of $\mathcal{B}_3$. \end{lemma} \begin{figure}[htbp] \centering \includegraphics[width=4in]{noenb3.pdf} \caption{Some non-panchromatic patterns of order 3} \label{noenb3} \end{figure} \begin{lemma} \label{impares} If $H\in\mathcal{B}_3$, then $H$ does not contain odd directed cycles in its complement. \end{lemma} \section{Main Lemmas} \begin{lemma} \label{p2} Let $H$ be a digraph with all its vertices looped, and let $a=(u,v)\not\in A(H)$ be a pair of vertices at distance 2 from $u$ to $v$. If $H\cup a\not\in\mathcal{B}_3$, then $H\not\in\mathcal{B}_3$. \end{lemma} \noindent {\bf Proof.} Let $H$ be a looped digraph and $u,z,v$ a path in $H$ from $u$ to $v$, two non-adjacent vertices of $H$. Let $H' = H\cup(u,v)$. First we will show that for every $D$ and every $H'$-colouring of $D$, there exists a digraph $D'$ and an $H$-colouring of $D'$, such that $D$ admits an $H'$-kernel by walks if (and only if) $D'$ admits an $H$-kernel by walks. For, let $D'$ be constructed from $D$ as follows: for each walk $(x,y,w)$ in $D$ whose arcs are coloured $u$ and $v$, in that order, we add a new vertex $\hat y$ to $D'$ and the symmetric arrows $(y,\hat y)$ and $(\hat y,y)$, each of them coloured with $z$. We denote by $\hat Y = \{\hat y\}$ the set of all those new vertices in $D'$ and by $Y$ their corresponding neighbours in $D$ (see Figure~\ref{construccion}). \begin{figure}[htbp] \centering \includegraphics[width=3in]{f1.jpg} \caption{The construction} \label{construccion} \end{figure} Let $K'\subset V(D')$ be an $H$-kernel by walks.
We can construct an $H'$-kernel by walks in $D$ with the following set $$K:=K'\cup\{y\in V(D) : \hat y\in K'\}\setminus\hat Y.$$ To see the independence of $K$, consider a pair of vertices $\alpha,\beta\in K$. If there is an $H'$-walk from $\alpha$ to $\beta$ that never uses a vertex in $Y$, then it is an $H$-walk and would contradict the independence of $K'$ in $D'$. So, let us suppose that such a walk passes through a vertex $y\in Y$. Then, we can construct an $H$-walk in $D'$ by adding the neighbour $\hat y$ of $y$ and the arrows $(y,\hat y)$ and $(\hat y,y)$ in the position of the vertex $y$ (see Figure~\ref{prueba}). \begin{figure}[htbp] \centering \includegraphics[width=1.3in]{f2.jpg} \caption{The proof} \label{prueba} \end{figure} We have to be careful and note that, if $\alpha$ or $\beta$ are vertices of $Y$, we need only add the arcs $(\hat y,y)$ or $(y,\hat y)$, respectively, at the beginning or the end of the $\alpha\beta$ walk to obtain an $H$-walk in $D'$, contradicting the independence of $K'$. Analogously, we can show the absorbency of $K$ by omitting the occurrences of vertices in $\hat Y$ in the $H$-walks of $D'$ and using the $H$-absorbency of $K'$. Finally, seeking a contradiction, suppose that $H\in\mathcal{B}_3$. Since $H'\not\in\mathcal{B}_3$, there exist a digraph $D$ and an $H'$-colouring of $D$ that do not admit an $H'$-kernel by walks. Let $D'$ be constructed and coloured as before. Since $H\in\mathcal{B}_3$, $D'$ admits an $H$-kernel by walks. Therefore, by the previous argument, $D$ would admit an $H'$-kernel by walks, contradicting the hypothesis of the lemma and concluding the proof. \hfill$\bullet$ As a consequence of this lemma, we can now show that the two digraphs in Figure~\ref{nuevas} are not in $\mathcal{B}_3$, which was previously unknown (see again \cite{AL}).
\begin{figure}[htbp] \centering \includegraphics[width=2in]{indefinido.pdf} \caption{Patterns previously not known to be panchromatic} \label{nuevas} \end{figure} \begin{corollary} The patterns in Figure~\ref{nuevas} are not panchromatic. \end{corollary} \noindent {\bf Proof.} We first concentrate on Figure~\ref{nuevas}.a. For, due to Lemma~\ref{inducidas}.3, it is enough to show that the solid arrows in Figure~\ref{auxiliara} induce a non-panchromatic digraph. Consider such a digraph and add the dotted arc $(x,w)$. Due to Lemma~\ref{p2}, it is enough now to show that this new digraph is not a panchromatic pattern; this follows from the fact that the subdigraph induced by $x,w$ and $z$ is not a panchromatic pattern (see Figure~\ref{noenb3}.d). Analogously, to show that Figure~\ref{nuevas}.b is not a panchromatic pattern, we extend it with a vertex $w$ (see Figure~\ref{auxiliarb}), add the dotted arc, and find the subdigraph induced by $x,w$ and $z$, which we know is not in $\mathcal{B}_3$. \hfill$\bullet$ \begin{figure}[htbp] \centering \includegraphics[width=1in]{exta.jpg} \caption{Extension of Figure \ref{nuevas}.a} \label{auxiliara} \end{figure} \begin{figure}[htbp] \centering \includegraphics[width=1in]{extb.jpg} \caption{Extension of Figure \ref{nuevas}.b} \label{auxiliarb} \end{figure} \bigskip We say that a looped digraph $H$ is {\sl bicomplete} if it has the following three properties: \begin{enumerate} \item $V(H) = X\sqcup Y$, \item $H[X]$ and $H[Y]$ are complete digraphs, and \item for all $x\in X$ and $y\in Y$, $(y,x)\in A(H)$. \end{enumerate} Note that in a bicomplete digraph some of the $(x,y)$ arcs may belong to $A(H)$. \begin{lemma} \label{bicompletas} If $H$ is bicomplete, then $H\in\mathcal{B}_3$. \end{lemma} \noindent {\bf Proof.} Let $H$ be a bicomplete digraph. On the one hand, let us suppose that there is no arc $(x,y)\in A(H)$, with $x\in X$ and $y\in Y$.
Consider the digraph $$H':=\left(V(H)\cup\{z\},A(H)\cup\{(x,z),(z,x),(y,z),(z,y),(z,z) : x\in X, y\in Y\}\right).$$ It is easy to see that $H'$ can be contracted to the digraph in Figure \ref{enb3}.a, which is in $\mathcal{B}_3$, and therefore, by Lemma~\ref{inducidas}.3, $H'$ and all its induced subdigraphs (Lemma~\ref{inducidas}.2) belong to $\mathcal{B}_3$ --- in particular, $H\in\mathcal{B}_3$. On the other hand, consider a new pair $a=(x,y)$. We can add the arc $a$ to $H$ preserving its panchromaticity (i.e., $H\cup a\in\mathcal{B}_3$). For, observe that there is a path $(x,z,y)$ of length 2 in $H'$ and, by the previous Lemma~\ref{p2}, we guarantee that $H'\cup(x,y)\in\mathcal{B}_3$. Since $H\cup a$ is an induced subdigraph of $H'\cup a$, by Lemma~\ref{inducidas}.2 we guarantee its panchromaticity. Thus, we can recursively add arcs from $X$ to $Y$, preserving at each step the panchromaticity of the pattern.\hfill$\bullet$ \section{Main Theorem} Due to Sands et al. \cite{SSW} we know that the digraph $2K_1$ consisting of two looped vertices belongs to $\mathcal{B}_3$; moreover, due to Lemma~\ref{inducidas}.3, any expansion of $2K_1$ belongs to $\mathcal{B}_3$. Furthermore, due to Lemma~\ref{bicompletas} we know that every bicomplete digraph is also in $\mathcal{B}_3$. We will now show that these are all the digraphs in $\mathcal{B}_3$. \begin{theorem} $H$ is a panchromatic pattern if and only if $H$ is bicomplete or $H$ can be contracted to $2K_1$. \end{theorem} \noindent {\bf Proof.} The sufficiency is the content of Lemma~\ref{bicompletas} and of Sands et al. \cite{SSW}, respectively. For the necessity, let us suppose that $H\in\mathcal{B}_3$ is a panchromatic pattern and that it is not an expansion of $2K_1$. Let us denote by $G=H^c$ the complementary digraph of $H$. {\bf Claim.} {\sl If $(u,v)\in A(G)$ is an asymmetric arc, then $d_G^+(v)=0$.\/} For, suppose that there exists a $y\not=u$ such that $a=(v,y)\in A(G)$.
If $a$ is an asymmetric arc (i.e., $(y,v)\not\in A(G)$), then, depending on the relationship between $u$ and $y$, one of the digraphs in Figure~\ref{noenb3}.b,e,f,g appears as an induced subdigraph of $H$, contradicting its panchromaticity. If $a$ is symmetric (i.e., $(y,v)\in A(G)$), depending on the relationship between $u$ and $y$, one of the digraphs in Figure~\ref{noenb3}.a,b,c,d appears as an induced subdigraph of $H$, contradicting again its panchromaticity. In any case, $a$ cannot belong to $G$, concluding the proof of the claim. \hfill$\circ$ As an immediate consequence of this claim we conclude that: {\sl if $H\in\mathcal{B}_3$ then every cycle of length at least 4 is symmetric}. Recall that Arpin and Linek \cite{AL} proved that no digraph in $\mathcal{B}_3$ has odd {\sl directed\/} cycles in its complement. {\bf Claim.} {\sl The underlying graph of $G$ has no odd cycles; i.e., $G$ is bipartite.} Searching for a contradiction, let us suppose that the underlying graph of $G$ has an odd cycle. Such a cycle cannot be symmetric, since it would contain a directed odd cycle, contradicting Lemma~\ref{impares}; therefore it has an asymmetric arc. Due to the previous claim, it is easy to see that Figure~\ref{nuevas}.a, without its loops, must be part of the supposed cycle. From here we conclude that one of the digraphs in Figure~\ref{nuevas}.a or Figure~\ref{noenb3}.b,c,h must be an induced subdigraph of $H$, contradicting its panchromaticity. \hfill$\circ$ From here, we have two cases to analyse; namely, whether $G$ has directed cycles of length at least 4 or not. \bigskip \noindent {\bf Case 1.} {\sl If $G$ contains a directed cycle of length at least 4, then $G$ is a complete bipartite digraph, and therefore $H$ is an extension of $2K_1$. \/} For, recall that every directed cycle of length at least 4 is symmetric. Therefore every non-trivial strongly connected component of $G$ is symmetric (since in a strongly connected digraph each arc is in a directed cycle).
{\bf Claim.} {\sl If $S$ is a strongly connected component of $G$ with a directed cycle of length at least 4, then $S$ is a complete bipartite digraph (i.e., all arcs are symmetric and it has all arcs between two independent sets of vertices).\/} By the previous claim, $G$ is bipartite and therefore the subdigraph induced by $S$ is bipartite too. Let $V(S)=A\sqcup B$ be the bipartition of $S$. We first show that $S$ has a cycle of length exactly 4. Let $\gamma$ be a shortest directed cycle of $S$. Searching for a contradiction, suppose $|\gamma|>4$; since there are no odd cycles, we have that $|\gamma|\geq6$. This induces a $P_4=(x,w,v,u)$ subgraph in $G$. Then, we have the path $(w,u,x,v)$ induced in $H$ and by Lemma~\ref{caminos} we have that $H\not\in\mathcal{B}_3$. Thus, let $(u,v,w,x)$ be the cycle of length 4 guaranteed by the previous argument. We further suppose that $u$ and $w$ are in $A$. Observe that every vertex in $B$ is adjacent to $u$ (by a symmetric arc); for, if there exists a vertex $z$ in $B$ non-adjacent to $u$, then the subdigraph induced by $u$, $z$ and $v$ is isomorphic to Figure~\ref{nuevas}.b, contradicting the panchromaticity of $H$. Analogously, $w$ is adjacent to each vertex in $B$; moreover, $v$ and $x$ are adjacent to all vertices in $A$. Furthermore, each vertex $y\in A\setminus\{u,w\}$ is adjacent to all vertices in $B$; for, observe that if there is a vertex $z\in B\setminus\{v,x\}$ non-adjacent to $y$, then the subdigraph induced by $w$, $y$ and $z$ is isomorphic to Figure~\ref{nuevas}.b, which contradicts the panchromaticity of $H$, concluding the proof of the claim.\hfill$\circ$ {\bf Claim.} {\sl Every connected component of $G$ is strongly connected.\/} For, let $S$ be as before with its 4-cycle $(u,v,w,x)$ and bipartition $V(S)=A\sqcup B$, and suppose that the connected component of $S$, in the weak sense, is bigger. Then either an asymmetric arc goes into $S$ from its complement, or there is an asymmetric arc from $S$ to its complement.
Due to our first claim, there cannot be an ``incoming'' arc $(y,s)$, with $s\in S$ and $y\in G\setminus S$, so let us suppose there is an ``outgoing'' arc $(s,y)$. Without loss of generality, suppose that $s\in B$. Then, the subdigraph induced by $w$, $s$ and $y$ in $H$, depending on the relationship between $w$ and $y$, is isomorphic to one of the Figures~\ref{noenb3}.b,c,h or Figure~\ref{nuevas}.a, contradicting the panchromaticity of $H$ and concluding the claim.\hfill$\circ$ {\bf Claim.} {\sl The underlying graph of $G$ is connected.\/} Let $S$ be as before, suppose there is another component of $G$, and let $y$ be a vertex in $G\setminus S$. Then the subdigraph of $H$ induced by $w$, $x$ and $y$ is isomorphic to Figure~\ref{nuevas}.b, which contradicts the panchromaticity of $H$.\hfill$\circ$ Therefore, we have that $G$ is isomorphic to $S$, which we have shown to be complete bipartite, and $H$ is an extension of $2K_1$, concluding Case~1. \bigskip \noindent {\bf Case 2.} {\sl If $G$ does not have a directed cycle of length at least 4, then $H$ is a bicomplete digraph.\/} Recall that we have proved that the underlying graph of $G$ is bipartite; we will work with its bipartition $V(G)=A\sqcup B$. {\bf Claim.} {\sl If $G$ contains cycles (viz., symmetric arcs), then all of them pass through a single vertex $x\in V(G)$.\/} If $G$ is acyclic, we are done; so, let us suppose that $G$ has a symmetric arc $\{u,v\}$, where $u\in A$ and $v\in B$. We will show that either every cycle of $G$ contains $u$ or all of them contain $v$. First of all, suppose there is a cycle (of length 2 --- or a symmetric arc, if you will) that contains neither $u$ nor $v$; call such an arc $\{z,w\}$ where $z\in A$ and $w\in B$. Then the symmetric arc $\{v,z\}$ must exist since the complement of $v,w$ and $z$ must be in Figure~\ref{enb3}.
Analogously, the pair $\{u,w\}$ forms a symmetric arc; therefore we have the cycle $(u,v,z,w)$, which contradicts the fact that every cycle is of length 2. Thus, we can suppose now that every cycle either contains $u$, or it contains $v$. Suppose there are cycles $\{z,v\}$ and $\{u,w\}$ in $G$. Since we do not have cycles of length 4, $z$ and $w$ must be independent; therefore we have that the complement of $u,z$ and $w$ is Figure~\ref{nuevas}.b, contradicting the panchromaticity of $H$ and concluding the proof of the claim. \hfill$\circ$ {\bf Claim.} {\sl There is a partition of $V(G\setminus x)=A\sqcup B$ (possibly degenerate; i.e., $B=\emptyset$), where $x$ is that vertex contained in all symmetric arcs, such that all arcs go from $A$ to $B$.\/} For, simply observe that $G\setminus x$ is acyclic and that, as proved earlier, the final vertex of every asymmetric arc has out-degree zero; so, let $A$ be the set of initial vertices of all arcs, and $B$ its complement. \hfill$\circ$ We end the proof by showing that such a vertex $x\in V(G)$, indeed, does not exist. {\bf Claim.} {\sl If $H$ is not an extension of $2K_1$, then the digraph $G$ is acyclic.\/} For, suppose we have such a vertex $x\in V(G)$ and the partition $V(G\setminus x)=A\sqcup B$, where all arcs go from $A$ to $B$. By definition of $x$, there is a symmetric arc $\{x,y\}$ in $G$. If $y$ is a vertex in $A$, then there are symmetric arcs $\{x,a\}$ for all vertices $a$ in $A$, since otherwise the complement of $x,y$ and the non-adjacent $a\in A$ would induce a digraph isomorphic to Figure~\ref{noenb3}.d or one of the two digraphs in Figure~\ref{nuevas}. Furthermore, $B$ has to be empty, since otherwise $x,a$ and $b$ would induce a subdigraph isomorphic to Figure~\ref{noenb3}.b,c,h or Figure~\ref{nuevas}.a,b. So, $H$ would be the union of an isolated vertex and a complete digraph, which is an extension of $2K_1$.
Analogously, there are no symmetric arcs from $x$ to $B$.\hfill$\circ$ Finally, since $G$ is bipartite and acyclic, and every final vertex has out-degree 0, either $H$ is bicomplete or an extension of $2K_1$, which completes the proof.\hfill$\bullet$
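The characterization in the Main Theorem can be tested mechanically on small looped digraphs. The following is a minimal brute-force sketch (not from the paper; all function names are ours), under the interpretation that a pattern is an expansion of $2K_1$ exactly when its vertex set splits into two nonempty parts, each inducing a complete looped digraph, with no arcs between the parts, and that bicompleteness is as defined above: both parts complete and looped, all arcs from $Y$ to $X$ present, with $X\to Y$ arcs unrestricted.

```python
# A digraph is given by its vertex count n (vertices 0..n-1) and a set of
# ordered arc pairs, loops included. We check all bipartitions V = X u Y.

def is_complete_looped(arcs, S):
    """Every ordered pair within S (loops included) is an arc."""
    return all((u, v) in arcs for u in S for v in S)

def bipartitions(n):
    """All ordered splits of {0,...,n-1} into X and its complement Y."""
    for mask in range(2 ** n):
        X = {i for i in range(n) if mask >> i & 1}
        Y = set(range(n)) - X
        yield X, Y

def is_expansion_of_2K1(n, arcs):
    # Two nonempty complete looped parts, no arcs between them.
    for X, Y in bipartitions(n):
        if not X or not Y:
            continue
        if (is_complete_looped(arcs, X) and is_complete_looped(arcs, Y)
                and all((u, v) not in arcs and (v, u) not in arcs
                        for u in X for v in Y)):
            return True
    return False

def is_bicomplete(n, arcs):
    # Both parts complete and looped; every arc from Y to X present.
    for X, Y in bipartitions(n):
        if (is_complete_looped(arcs, X) and is_complete_looped(arcs, Y)
                and all((y, x) in arcs for y in Y for x in X)):
            return True
    return False
```

For instance, $2K_1$ itself (two looped vertices and nothing else) is recognized as an expansion of $2K_1$ but is not bicomplete, while a complete looped digraph is bicomplete (take $Y=\emptyset$).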
\section{Introduction} \subsection{Background and motivation} Nonlinear dispersive equations have solutions with various types of behavior in time, typically {\it scattering} (globally dispersive), {\it blow-up}, and solitary waves, i.e., {\it solitons}. In recent years, especially since the work of Kenig and Merle \cite{KM}, global dynamics leading to those different types have been revealed among large general solutions, so that one can partially predict the evolution of each solution from the initial data. Kenig and Merle \cite{KM} studied the energy-critical NLS \EQ{ i\dot u - \De u = |u|^4u, \pq u(t,x):\R^{1+3}\to\C,} and proved that all solutions with energy less than the ground state $W$ \EQ{ \pt E(u)=\int_{\R^3}\frac{|\na u|^2}{2}-\frac{|u|^6}{6}dx<E(W), \pr W(x):=(1+|x|^2/3)^{-1/2},\pq -\De W = W^5,} either scatter or blow up, and that the two types of behavior are distinguished by some explicit functionals of the initial data. For example, \EQ{ K(u(0))=\int_{\R^3}|\na u(0)|^2-|u(0)|^6dx \CAS{\ge 0 \implies \text{scattering,} \\ <0 \implies \text{blow-up.}} } The distinction is essentially the same as that in the classical result for the nonlinear Klein-Gordon equation by Payne and Sattinger \cite{PS} into global existence vs.~blow-up, but the crucial aspect of Kenig-Merle's work is to reveal and exploit the global dispersion in the scattering part.
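As a sanity check (a direct computation, not part of the original argument), the stated ground-state equation can be verified with the radial Laplacian $\De=\p_r^2+\frac{2}{r}\p_r$: writing $W(r)=(1+r^2/3)^{-1/2}$,
\EQ{ W'=-\frac{r}{3}\Big(1+\frac{r^2}{3}\Big)^{-3/2}, \pq \De W=W''+\frac{2}{r}W'=\Big(1+\frac{r^2}{3}\Big)^{-5/2}\Big[\frac{r^2}{3}-\Big(1+\frac{r^2}{3}\Big)\Big]=-W^5,}
so $-\De W=W^5$ as claimed.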
It was extended to the threshold energy $E(u)\le E(W)$ by Duyckaerts and Merle \cite{DM}, and then to slightly above the ground state by Schlag and the author \cite{NSK}, for the nonlinear Klein-Gordon equation \EQ{ \label{NLKG} \pt \ddot u - \De u + u = u^3, \pq u(t,x):\R^{1+3}\to\R, \pr E(u)=\int_{\R^3}\frac{|\dot u|^2+|\na u|^2+|u|^2}{2}-\frac{|u|^4}{4}dx<E(Q)+\e^2,} where $Q\in H^2(\R^3)$ is the unique positive radial solution, or the ground state, of \EQ{ \label{eq Q} -\De Q+Q=Q^3.} The types of behavior in that case are separated into 9 sets of solutions by center-stable and center-unstable manifolds of the ground state, and the mechanism of transition between scattering and blow-up is revealed. Furthermore, Duyckaerts, Kenig and Merle \cite{DKM} established a complete classification of the asymptotic behavior of solutions, for the energy-critical wave equation \EQ{ \ddot u - \De u = u^5, \pq u(t,x):\R^{1+3}\to\R,} in terms of resolution into solitons (i.e.~rescaled $W$), without any size restriction on the initial data. All of these works have been extended to several equations and settings including the above examples, except for the soliton resolution, which is still limited to variants of energy-critical wave equations. General dynamics are, however, far more complicated for more general or physical equations. In particular, many equations, especially of the NLS type, have many solitons, differing in shape, energy, stability, etc. Heuristically, unstable solitons are expected to collapse into stable solitons, radiating dispersive waves. For small solitons of the NLS with a decaying potential $V(x)$, Tsai and Yau \cite{TY1,TY2,TY3,TY4} first proved such a phenomenon, as well as asymptotic stability of the ground soliton, in the case where $-\De+V$ has two well-positioned negative eigenvalues.
Since then, there have been intensive studies (cf.~\cite{TS,SW,GS,NPT,CM}) on the global behavior of small solutions including many solitons, but very little is rigorously known about the dynamical relation between solitons which are neither close nor similar to each other. It seems hard in such cases to construct or control solutions in a precise way along some anticipated evolution. A more natural strategy is to deal altogether with general solutions including or at least close to those solitons, with less precise information on individual trajectories. \subsection{Setting and the main result} As a first step toward the above problem, we consider the NLS with a potential \EQ{ \label{NLSP} \pt i\dot u + H u = \sg|u|^2u, \pq H:=-\De+V, \pq \sg=\pm, \prq u(t,x):\R^{1+3}\to\C, \pq V(x):\R^3\to\R,} in the simplest non-trivial setting, namely the case with the unique eigenvalue \EQ{ \label{def phi0} \pt H\phi_0 = e_0\phi_0,\pq e_0<0, \pq 0<\phi_0\in H^2(\R^3), \pq \|\phi_0\|_{L^2(\R^3)}=1,} with $spec(H|_{\phi_0^\perp})=[0,\I)$ absolutely continuous, and the radial symmetry restriction \EQ{ u(t,x)=u(t,|x|), \pq V(x)=V(|x|).} Hence the initial data set is the radial subspace of the Sobolev space \EQ{ \label{def H1r} H^1_r(\R^3):=\{u\in L^2(\R^3)\mid \na u\in L^2(\R^3),\ u(x)=u(|x|)\}.} The nonlinearity can be either defocusing $\sg=-$ or focusing $\sg=+$. In the focusing case $\sg=+$, the above equation is one of the simplest equations with both stable and unstable solitons, where the former are small and the latter are large. The goal of this study is a complete description of the global dynamics in a fairly large solution space, containing both the stable and the unstable solitons. In this paper, we consider the region of small mass and an upper energy constraint which eliminates the unstable solitons.
An implication of the main result is that if an unstable (large) soliton with small mass and the second largest energy is perturbed so as to decrease its energy and mass, then it either blows up or collapses into a (small) ground state soliton, radiating most of the energy (which is large) into a dispersive wave. The two types of behavior are distinguished by a functional of the initial data, similarly to Kenig-Merle or Payne-Sattinger. In order to state the main result, we need a few more assumptions on $V$. A simple sufficient condition is that $V$ is in the Schwartz class and $H$ has no resonance: \EQ{ \dot H^1(\R^3)\ni\fy,\pq (-\De+V)\fy=0 \implies \fy=0.} The existence of small solitons is well known in the above setting. The function $u(t,x)=e^{-it\om}\fy(x)$ is a solution of \eqref{NLSP} iff \EQ{ \label{sNLSP} (H+\om)\fy=\sg|\fy|^2\fy.} In this paper we call a solution of \eqref{sNLSP} a soliton, denoting the set of solitons by \EQ{ \label{def Soli} \Soli:=\{\fy\in H^1_r(\R^3) \mid \exists\om\in\R\ s.t.\ \eqref{sNLSP}\}} and the energy (Hamiltonian) and the mass (charge) by \EQ{ \label{def EM} \bE(u):=\int_{\R^3} \frac{|\na u|^2+V|u|^2}{2}-\frac{\sg|u|^4}{4}dx, \pq \bM(u):=\int_{\R^3} \frac{|u|^2}{2}dx,} which are continuous on $H^1(\R^3)$ and conserved for \eqref{NLSP}. For each fixed mass $\bM(\fy)=\mu>0$, we can define the energy levels of solitons by induction on $j=0,1,2\etc$ \EQ{ \label{def Ej} \sE_j(\mu):=\inf\{\bE(\fy)\mid \fy\in\Soli,\ \bM(\fy)=\mu,\ \bE(\fy)>\sE_{j-1}(\mu)\},} where $\sE_{-1}(\mu):=-\I$ and $\inf\emptyset:=\I$, and then classify the solitons \EQ{ \label{def Solij} \Soli_j:=\{\fy\in\Soli \mid \bE(\fy)=\sE_j(\bM(\fy))\}.} $\Soli_0$ is the set of least energy solitons, namely {\it the ground states}, while $\Soli_j$ is the set of $j$-th {\it excited states} for $j\ge 1$. In this paper, we are concerned only with $\Soli_0$ and $\Soli_1$.
It is easy to observe that the ground states for small mass arise by bifurcation from $0$, generated by the linear ground state $\phi_0$ in \eqref{def phi0}. More precisely, there exist $0<z_b\ll 1$ and a $C^1$ map \EQ{ (\Phi,\Om):D_b:=\{z\in\C\mid |z|^2<2z_b\} \to H^1_r(\R^3)\times\R} such that $(\fy,\om)=(\Phi[z],\Om[z])$ solves \eqref{sNLSP} for each $z\in D_b$ and \EQ{ \Phi[z] = z\phi_0 + \ga, \pq \ga \perp \phi_0, \pq \|\ga\|_{H^1}\lec|z|^3.} See \cite{gnt} for a proof in a more general setting. We can prove that $\Phi(D_b)=\Soli_0$ under the small mass constraint $\bM<z_b^2$, while the first excited energy satisfies \EQ{ \sE_1(\mu)=\CAS{\bE^0(Q)\bM(Q)\mu^{-1}(1+o(1)) &(\sg=+) \\ \I &(\sg=-),}} as $\mu\to 0$, where $\bE^0$ denotes the energy without the potential, namely \EQ{ \label{def E0} \bE^0(\fy):=\int_{\R^3}\frac{|\na \fy|^2}{2}-\sg\frac{|\fy|^4}{4}dx.} In fact, in the defocusing case $\sg=-$, the soliton \eqref{sNLSP} is unique for each fixed $\bM(\fy)=\mu>0$ modulo the gauge symmetry $e^{i\te}$. In the focusing case $\sg=+$, the first excited states are generated by scaling of $Q$: \EQ{ \Soli_1\ni\fy=\om^{1/2}(Q+o(1))(\om^{1/2}x) \pq(\bM(\fy)\to 0).} We do not need the above characterizations of $\Soli_1$, but the variational property with respect to the virial-type functional \EQ{ \label{def K2} \bK_2(u):=\int_{\R^3}|\na u|^2-\frac{rV_r|u|^2}{2}-\sg\frac{3|u|^4}{4}dx=\p_{\al=1}\bE(\al^{3/2}u(\al x))} plays a crucial role, as in the case $V=0$. Henceforth $\p_{\al=a}$ denotes the partial derivative with respect to $\al$ at $\al=a$, namely \EQ{ \p_{\al=a}f :=\lim_{\e\to 0}\frac{f|_{\al=a+\e}-f|_{\al=a}}{\e}.} The following is the main result of this paper.
\begin{thm} \label{main} There exists $0<\mu_\star\ll 1$ such that for any $u(0)\in H^1_r(\R^3)$ satisfying $\bM(u(0))\le\mu_\star$ and $\bE(u(0))<\sE_1(\bM(u(0)))$, the corresponding solution of \eqref{NLSP} either blows up in finite time both in $t>0$ and in $t<0$, or scatters as $t\to\pm\I$ to the ground states $\Soli_0$. More precisely, in the former case, there are $T_\pm\in(0,\I)$ such that the unique solution $u\in C((-T_-,T_+);H^1_r)$ exists and \EQ{ \lim_{t\to\pm(T_\pm-0)}\|\na u(t)\|_{L^2(\R^3)}=\I=\limsup_{t\to\pm(T_\pm-0)}\|u(t)\|_{L^\I(\R^3)}.} In the latter case, there are a $C^1$ function $z:\R\to D_b\subset\C$ and $u_\pm\in H^1_r(\R^3)$ such that $|z(t)|$ converges as $t\to\pm\I$ and \EQ{ \label{scatt to S_0} \lim_{t\to\pm\I}\|u(t)-\Phi[z(t)]-e^{-it\De}u_\pm\|_{H^1(\R^3)}=0.} Moreover, the blow-up occurs if and only if \EQ{ \label{bup cond} \sg=+, \pq \|\na u(0)\|_{L^2(\R^3)}>1, \text{ and } \bK_2(u(0))<0,} which persists in $t$ as long as the solution $u$ exists. \end{thm} The above theorem contains the asymptotic stability of the ground state $\Soli_0$ for small $H^1$ radial solutions. This part is contained in the asymptotic stability for small solutions in \cite{gnt} by Gustafson, Tsai and the author, which does not need the radial symmetry restriction. If the potential $V=0$, then there is no small soliton such as $\Soli_0$, but the ground state $Q$ as in \eqref{NLKG} exists and is unstable. In that case, the above result regarding $\Soli_0=\{0\}$ and $\Soli_1=\{\al Q(\al x)\}_{\al>0}$ was obtained by Holmer and Roudenko \cite{HR}, extended to the non-radial case by Duyckaerts, Holmer and Roudenko \cite{DHR}, to the threshold energy by Duyckaerts and Roudenko \cite{DR}, and to slightly above the threshold (in the radial case) by Schlag and the author \cite{NSS}.
In these works there is no small-mass constraint as above, but this is not an essential difference, because the scale invariance in the case $V=0$ allows one to freely add or remove such a restriction. \subsection{Difficulties and ideas in the proof} \label{ss:diff} The proof follows the strategy of Kenig and Merle \cite{KM}, which consists of a stationary part, based on the classical variational argument for the elliptic equation \eqref{sNLSP}, and a dynamical (or scattering) part, based on the variational argument in space-time: the profile decomposition by Bahouri and G\'erard \cite{BG}. The problem caused by the potential in the stationary variational argument can be read immediately from the virial identity \EQ{ \p_t\LR{i\dot u|x\cdot\na u}=-2\bK_2(u).} In the absence of $V$, the functional $\bK_2$ cannot vanish under the energy constraint except at $0$, and so is sign-definite along each trajectory. This leads to monotonicity in the virial identity, which has been the crucial starting point for $V=0$, including the case slightly above the ground state \cite{NSK}, where the possible change of $\sign\bK_2$ was controlled by using the linearized operator around $Q$. In the presence of $V$, the functional $\bK_2$ changes sign around the ground solitons $\Soli_0$. Note that this problem does not arise in the elliptic equation \eqref{sNLSP} using the Nehari functional \EQ{ \p_{\al=1}(\bE+\om\bM)(\al u)=\int_{\R^3}|\na u|^2+(V+\om)|u|^2-\sg|u|^4dx,} because the excited states $\Soli_1$ can be distinguished from the ground states $\Soli_0$ by the time frequency $\om$. Indeed, $\om\to-e_0$ on $\Soli_0$ while $\om\to\I$ on $\Soli_1$ as $\bM\to 0$. In contrast, the virial functional $\bK_2$ is independent of $\om$, since it is derived by the $L^2$-preserving dilation.
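For concreteness (a routine computation spelling out \eqref{def K2}, included here for the reader's convenience), with the $L^2$-preserving dilation $u_\al(x):=\al^{3/2}u(\al x)$, each term of $\bE$ scales explicitly:
\EQ{ \|\na u_\al\|_2^2=\al^2\|\na u\|_2^2, \pq \|u_\al\|_4^4=\al^3\|u\|_4^4, \pq \int_{\R^3}V|u_\al|^2dx=\int_{\R^3}V(y/\al)|u(y)|^2dy,}
and differentiating at $\al=1$ gives
\EQ{ \bK_2(u)=\p_{\al=1}\bE(u_\al)=\int_{\R^3}|\na u|^2-\frac{(x\cdot\na V)|u|^2}{2}-\sg\frac{3|u|^4}{4}dx,}
which is \eqref{def K2}, since $x\cdot\na V=rV_r$ for radial $V$; in particular, no $\om$-dependence appears.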
The above problem in the virial identity is, however, easily solved using the fact that the disturbance of $\sign\bK_2$ occurs only in a small region of $H^1(\R^3)$, where we have the asymptotic stability of $\Soli_0$ from \cite{gnt}. In fact, the region $\bK_2\lec \bM\ll 1$ splits into two sets far from each other in $H^1(\R^3)$: one around $0$ satisfying \EQ{ \|\na\fy\|_{L^2(\R^3)}^2+\|\fy\|_{L^4(\R^3)}^2 \lec \bM(\fy) \ll 1,} and the other with large energy satisfying \EQ{ \min(\|\na\fy\|_{L^2(\R^3)}^2,\|\fy\|_{L^4(\R^3)}^4) \gec \bM(\fy)^{-1} \gg 1.} See Lemma \ref{lem:SMD} for a more general statement with a proof. In \eqref{bup cond}, the condition $\|\na u\|_{L^2}>1$ is imposed only to distinguish the above two cases, so there are many alternative conditions, such as $\|u\|_{L^4}>1$. The problems in the space-time variational argument, caused by the potential, or more precisely by the stable solitons $\Soli_0$, appear more fundamental. First, we should obviously remove the stable soliton part from the solution in order to apply the profile decomposition, as it aims at global dispersion or space-time integrability of the solutions. Second, the linear terms of the dispersive part, namely the interaction with the small soliton, cannot be treated as part of the nonlinear perturbation, since this would require smallness in $L^2_t$ of the remainder of the profile decomposition, which is impossible as long as we take the initial data from the $L^2_x$ Sobolev space. Therefore, we have to consider the linearized equation around the small soliton as the reference equation in the profile decomposition for the dispersive component. Since the modulation in time, namely $z(t)$ in \eqref{scatt to S_0}, depends on the solution, this means that we have to consider a sequence of equations corresponding to the sequence of solutions to which we apply the concentration compactness. Another problem is that we have very poor control on the global or asymptotic behavior of $z(t)$.
For example, the convergence of $|z(t)|$ as $t\to\I$ becomes arbitrarily slow by choosing small $H^1$ data; see \cite[Theorem 1.9]{gnt}. This causes difficulties in at least the following two places. First, the nonlinear profile decomposition is a method to approximate solutions globally in time, but we cannot do this for the soliton part $z(t)$. Therefore we have to distinguish two time regimes: around and away from the profiles, approximating $z$ only in the former, while relying on the smallness of the dispersive component in the latter. Second, the nonlinear profiles moving to $t\to\pm\I$ were defined in Kenig-Merle \cite{KM} by the wave operator, i.e., by solving the final state problem with the linear profile as the scattering state. The existence of a solution to the final state problem in the current setting, namely around the ground states $\Soli_0$, was proved in \cite{gnt}, but we do not even know the uniqueness, while we would need some continuity estimate. Hence we have to define the nonlinear profiles in another way, namely as the weak limit along a time sequence, proving afterward that the linear profile is the scattering state. The drawback of this definition is that we cannot construct a global approximation in one stroke as in Kenig-Merle, but have to proceed step by step over each profile. The approach in this paper can be roughly regarded as a hybrid between Bahouri-G\'erard \cite{BG} and Kenig-Merle \cite{KM}. The former used the scattering to describe the limit of a sequence of solutions, while the latter used the limit of a sequence to obtain the scattering. We need to proceed from both sides. Yet another complication in the estimates is due to the quadratic nonlinearity in the equation after linearization, to which we cannot directly apply the Strichartz estimate to obtain a Lipschitz estimate in the energy space for the global perturbation, together with the smallness of the remainder in the profile decomposition.
To solve this problem, we follow the idea in \cite{CNLKG}, using non-admissible Strichartz norms and measuring the initial data by the Strichartz norm. Such estimates are derived for the linearized equation, treating the time-dependent potential by the double endpoint Strichartz estimate as in \cite{gnt}, but this requires the non-admissible version, obtained independently by Foschi \cite{F} and by Vilela \cite{V}. Extension of the result in this paper to lower space dimensions would require a modification similar to the argument by Mizumachi \cite{M1,M2}, who extended the small data result of \cite{gnt} by replacing the endpoint Strichartz estimate with Kato's weighted $L^2$ space-time estimate. Apart from that issue, it should be rather straightforward to extend it to general space dimensions and general power nonlinearities between the mass and the energy critical exponents, namely \EQ{ \pt i\dot u + H u = \sg|u|^\al u, \pq u(t,x):\R^{1+d}\to\C, \pq \frac{4}{d}<\al<\frac{4}{d-2},} even though the 3D-cubic setting is exploited for minor simplifications in several places of this paper. \subsection{Notation} \label{ss:nota} $L^p_x$, $B^s_{p,q}$, and $H^s_p$ denote respectively the standard Lebesgue, inhomogeneous Besov, and inhomogeneous Sobolev spaces on $\R^3$. The $L^2$-based Sobolev space is denoted by $H^s=H^s_2$. The $L^p_x$ norm is often denoted by $\|\cdot\|_p$. For any function space $X$ on $\R^3$, the subspace of radial functions is denoted by $X_r$, and the $L^p$ space in $t\in\R$ with values in $X$ is denoted by $L^p_tX$. For any function space $Z$ on $\R^{1+3}$ and $I\subset\R$, $Z(I)$ denotes the restriction onto $I\times\R^3$. The $L^2$ inner products on $\R^3$ are denoted by \EQ{ \label{def inner} (f|g):=\int_{\R^3}f(x)\ba{g(x)}dx, \pq \LR{f|g}:=\re(f|g).} \subsection{Assumptions on $V$} \label{ss:asm V} Let $L^\I_0(\R^3)=\{\fy \in L^\I(\R^3) \mid \Lim_{R\to\I}\|\fy\|_{L^\I(|x|>R)}=0\}$. The precise assumption on $V$ is as follows.
\begin{enumerate} \item[(i)] $V:\R^3\to\R$ is radially symmetric. \item[(ii)] $V,\ x\na V,\ x^2\na^2 V\in (L^2+L^\I_0)(\R^3)$ and $V/|x| \in L^1(\R^3)$. \item[(iii)] $-\De+V$ on $L^2_r(\R^3)$ has a unique and negative eigenvalue. \item[(iv)] The wave operator $W=\Lim_{t\to\I}e^{itH}e^{it\De}$ and its adjoint $W^*$ are bounded on the Sobolev space $W^{k,\pp}(\R^3)$ for some $\pp>6$ and $k=0,1$. \end{enumerate} The above assumption (ii) implies that \EQ{ \label{V decay} \lim_{|x|\to\I} |V(x)|+|x\na V(x)|= 0} by the radial Sobolev inequalities, cf.~Appendix \ref{app:dec V}. By Beceanu \cite[Corollary 1.5]{B}, the assumption (iv) is fulfilled if $0$ is neither an eigenvalue nor a resonance of $H$, and $V\in (L^\pp \cap \F\dot B^{1/2}_{2,1})(\R^3)$. For example, if $\psi\in\cS(\R^3)$ is a radial positive function, the above assumptions (i)-(iv) are satisfied by $V=-a\psi$ for $a\in (1/a_1,1/a_2)$, where $a_1>a_2>0$ are the largest and the second largest eigenvalues of the compact self-adjoint operator $(-\De)^{-1/2}\psi(-\De)^{-1/2}$ on $L^2_r(\R^3)$. \section*{Acknowledgements} This work originally started from intensive discussions with Stephen Gustafson and Tai-Peng Tsai. The author would like to thank them for useful comments on the manuscript. He is also grateful to Scipio Cuccagna, Masaya Maeda, Yoshio Tsutsumi, and the anonymous referee for their comments and for pointing out some errors and missing references in the first version. \section{Standing waves} This section collects some properties of the solutions of \eqref{sNLSP}, namely solitons. It is easy to see that $\om>0$ for $\fy\in H^1_r(\R^3)$, using the asymptotic behavior of the ODE as $r=|x|\to\I$. We will see that in the defocusing case $\sg=-$, there is a unique soliton $\fy$ for each $\om\in(0,-e_0)$ and nothing else. In the focusing case $\sg=+$, there is a soliton for each $\om\in(-e_0,\I)$, among which we can specify the ground state and the first excited state for each fixed small mass under the radial constraint.
\subsection{Energy functionals} For any $V:\R^3\to\R$, we define the following functionals on $H^1(\R^3)$. \EQ{ \label{def funct} \pt \ml{V}(\fy):=\LR{V\fy|\fy}/2, \pq \bM(\fy):=\ml{1}(\fy)=\|\fy\|_2^2/2, \pq \bG(\fy):=\sg\|\fy\|^4_4/4, \pr \bH(\fy):=\LR{H\fy|\fy}/2=\|\na\fy\|_2^2/2+\ml{V}(\fy), \pq \bE:=\bH-\bG.} The energy $\bE$ and the mass $\bM$ are conserved in time for \eqref{NLSP}. The corresponding quantities without the potential $V$ are denoted by $\bH^0$, $\bE^0$, etc. \EQ{ \label{def H0} \pt \bH^0(\fy):=\|\na\fy\|_2^2/2, \pq \bE^0(\fy):=\|\na\fy\|_2^2/2-\sg\|\fy\|_4^4/4, \pr \bK_2^0(\fy):=\p_{\al=1}\bE^0(\al^{3/2}\fy(\al x)).} For the variational property in the focusing case, we need the dilation operator \EQ{ \label{def Stp} \cS^t_p\fy(x):=e^{3t/p}\fy(e^tx), \pq \cS'_p\fy(x)=(x\cdot\na+3/p)\fy(x),} which preserves the $L^p_x$ norm. The same notation is used for the functional \EQ{ \label{def S'} (\cS'_p F)(\fy):=\p_{t=0}F(\cS^t_p\fy).} Then we have \EQ{ \pt \cS'_p\bM=(6/p-3)\bM, \pq \cS'_p\bH^0=(6/p-1)\bH^0, \pr \cS'_p\bG=(12/p-3)\bG, \pq \cS'_p\ml{V}=-\ml{\cS'_{p/(p-2)}V}.} The $L^2$-scaling derivative plays a crucial role via the virial identity \EQ{ \bK_2 :=\cS'_2\bE = 2\bH^0-3\bG-\ml{\cS'_\I V}.} The following functional is used for the convexity of $\bE$ in $\cS^t_p$: \EQ{ \label{def I} \bI:=\bE-\bK_2/2=\bG/2+\ml{\cS'_{3/2}V}/2.} If there is a family of solitons $\om\mapsto\fy_\om\in H^1$ differentiable in $\om\in\R$, then we have \EQ{ \label{E-M diff} \p_\om \bE(\fy_\om) = \LR{\bE'(\fy_\om)|\fy_\om'} = \LR{-\om\fy_\om|\fy'_\om} = -\om \p_\om\bM(\fy_\om).} For the potential part, we will frequently use the following bound. \begin{lem} \label{decop V} Let $V\in(L^2+L^\I_0)(\R^3)$.
Then for any $\e>0$, there is $C>0$ such that \EQ{ \label{est decop V} H^1(\R^3)\ni\forall\fy,\pq |\ml{V}(\fy)| \le \min(\e\|\fy\|_4^2+C\|\fy\|_2^2,\e\|\fy\|_2^2+C\|\fy\|_4^2).} \end{lem} \begin{proof} Let $V=V_2+V_\I$ where $V_2\in L^2$ and $V_\I\in L^\I_0$. For any $h>0$, we have \EQ{ |\ml{V_2}(\fy)| \le \|V_2\|_{L^2(V>h)}\|\fy\|_4^2+h\|\fy\|_2^2,} where $\|V_2\|_{L^2(V>h)}\to 0$ as $h\to\I$, so the right hand side is in the form of \eqref{est decop V}, choosing $h>0$ such that $\|V_2\|_{L^2(V>h)}\le\e$ or $h\le\e$. For any $R>0$, \EQ{ |\ml{V_\I}(\fy)| \lec \|V_\I\|_{L^\I(|x|<R)}\|\fy\|_4^2R^{3/2}+\|V_\I\|_{L^\I(|x|>R)}\|\fy\|_2^2,} where $\|V_\I\|_{L^\I(|x|>R)}\to 0$ as $R\to\I$, so the right hand side is also in the form of \eqref{est decop V}, choosing $R>0$ such that $\|V_\I\|_{\I}R^{3/2}\le\e$ or $\|V_\I\|_{L^\I(|x|>R)}\le\e$. Adding the above two estimates yields the conclusion. \end{proof} \subsection{Small solitons} For small mass, the ground state is the bifurcation from zero, generated by the ground state $\phi_0$ of $H$. The following precise statement can be extracted from \cite[Lemma 2.1]{gnt}. \begin{lem} \label{lem:Phi} There exist $0<z_b\ll 1$ and a $C^1$ map \EQ{ (\Phi,\Om):D_b:=\{z\in\C\mid |z|^2<2z_b\} \to H^1_r(\R^3)\times\R,} such that $(\fy,\om)=(\Phi[z],\Om[z])$ is a soliton for each $z\in D_b$, with a decomposition \EQ{ \Phi[z]=z\phi_0+\ga[z], \pq \Om[z]=-e_0+o(z),\pq \ga[z]\perp\phi_0, \pq \|\ga[z]\|_{H^1}=o(|z|^2),} satisfying the gauge covariance $(\Phi[e^{i\te}z],\Om[e^{i\te}z])=(e^{i\te}\Phi[z],\Om[z])$, and $\bM(\Phi[z])=|z|^2/2+o(|z|^4)$ is an increasing function of $|z|$. Moreover, there is an open set in $H^1_r(\R^3)$ which contains $\Phi[D_b]$ but no other soliton. \end{lem} Let $\mu_b>0$ be the maximal mass among those solitons: \EQ{ \label{def mub} \mu_b := \sup \bM(\Phi[D_b]).} Then the monotonicity implies that \EQ{ [0,z_b)\ni z \mapsto \bM(\Phi[z])\in [0,\mu_b)} is an increasing bijection.
Let $z_0:[0,\mu_b)\to[0,z_b)$ be the inverse function, so that \EQ{ \bM(\Phi[z_0(\mu)])=\mu.} The following lemma shows that the above solitons are the ground states for small mass. It will be crucial also for identifying the first excited state. \begin{lem}[Small mass dichotomy] \label{lem:SMD} For any $\fy\in H^1(\R^3)$ satisfying $\bK_2(\fy)\ll \bM(\fy)^{-1}$ and $\bM(\fy)\ll 1$, we have one of the following {\rm (i)-(iii)} \begin{enumerate} \item $\bH^0(\fy) \lec \bM(\fy)$, \item $\bM(\fy) \lec \bH^0(\fy) \sim \bE(\fy) \sim \bK_2(\fy)$, \item $\sg=+$ and $\bG(\fy) \gec \bH^0(\fy)\gec \bM(\fy)^{-1}$, \label{large case} \end{enumerate} Moreover, in the case {\rm(\ref{large case})}, we have \EQ{ \label{G dom V} |\ml{V}(\fy)|+|\ml{\cS'_pV}(\fy)|+|\ml{\cS'_p\cS'_qV}(\fy)| \ll \bH^0(\fy)} for any $p,q>0$. In particular, $\bG$ in {\rm(\ref{large case})} can be replaced with $2\bI$. \end{lem} Note that the first two regions overlap each other, but the last one is separated. Thus the above lemma gives a dichotomy into (i)-(ii) and (iii). The case (ii) can be removed if $\bK_2(\fy)\ll\bM(\fy)$, which is mostly satisfied when the above lemma is used. \begin{proof} Let $\mu:=\bM(\fy)$ and $h:=\bH^0(\fy)$. Using Gagliardo-Nirenberg, we have \EQ{ \label{h-K2} \pt |\bG(\fy)| \lec \|\fy\|_4^4 \lec h^{3/2}\mu^{1/2}, \pr |\ml{W}(\fy)| \lec \|W\|_{L^2+L^\I}(\mu + h^{3/4}\mu^{1/4})\pq(W=V,\cS_p'V,\cS_p'\cS_q'V).} Splitting into three cases: (1) $h\lec \mu$, (2) $\mu\ll h\ll\mu^{-1}$, and (3) $\mu^{-1}\lec h$, we may first dispose of (1)=(i). In the case of (2), the above estimates imply \EQ{ |2h-\bK_2(\fy)|=|\ml{\cS'_\I V}(\fy)-3\bG(\fy)| \lec \mu+h^{3/4}\mu^{1/4}+h^{3/2}\mu^{1/2} \ll h,} and the same estimate works for $|\bH^0(\fy)-\bE(\fy)|=|\ml{V}(\fy)-\bG(\fy)|$, leading to (ii). In the case of (3), we have $h\gec\mu^{-1}\gg 1$. 
Then using $\bK_2(\fy)\ll\mu^{-1}\lec h$ and $\mu+h^{3/4}\mu^{1/4} \ll h$ in \eqref{h-K2}, we obtain \eqref{G dom V} and \EQ{ h \lec 2h-\bK_2(\fy)-\ml{\cS'_\I V}(\fy)=3\bG(\fy),} leading to (iii). \end{proof} The above lemma enables us to identify the solitons in Lemma \ref{lem:Phi} with $\Soli_0$: \begin{prop} There exist $0<\mu_d\le\mu_b$ and $0<z_d\le z_b$ such that \EQ{ \label{small ground} \pt \{\fy\in\Soli_0\mid \bM(\fy)<\mu_d\}=\{\Phi[z]\mid |z|<z_d\}, \pr 0<\mu<\mu_d \implies \sE_0(\mu)=\bE(\Phi[z_0(\mu)])\sim e_0\mu<0.} \end{prop} \begin{proof} Since $\bK_2=0$ on $\Soli$, we can apply the above lemma to any $\fy\in\Soli$ with $\bM(\fy)<\mu_d\ll 1$, leading to either (i) or (iii). Taking $\mu_d>0$ small ensures that the region (i) is in the uniqueness region of $\Phi[D_b]$ in Lemma \ref{lem:Phi}, and that the region (iii) is far away. Then every $\fy\in\Soli$ with $\bM(\fy)<\mu_d$ is either in $\Phi[D_b]$ or in the region (iii). In the latter case, $\fy$ is an excited state, as $\Phi$ gives a soliton with the same mass and negative energy. Thus we obtain the first identity in \eqref{small ground}. The second one is its obvious consequence. The behavior of $\sE_0$ follows from \eqref{E-M diff} together with the differentiability of $\Phi$ from Lemma \ref{lem:Phi}. \end{proof} \subsection{Focusing case $\sg=+$} Next we investigate the first excited state of small mass in the focusing case. The small mass dichotomy Lemma \ref{lem:SMD} allows us to ignore the potential effect, leading to the same variational characterization as $V=0$ in the higher energy region. \begin{prop} \label{Soli foc} Let $\sg=+$.
There exists $0<\mu_e\le\mu_d$ such that for $0<\mu<\mu_e$ \EQ{ \label{char E1} \sE_1(\mu)\pt=\inf\{\bE(\fy) \mid \fy\in H^1_r,\ \bM(\fy)=\mu,\ \bK_2(\fy)=0,\ \bG(\fy)\ge 1\} \pr=\inf\{\bI(\fy) \mid \fy\in H^1_r,\ \bM(\fy)\le\mu,\ \bK_2(\fy)\le 0,\ \bG(\fy)\ge 1\} \pr= \mu^{-1}\bM(Q)\bE^0(Q)(1+o(1)) \gg 1 \pq (\mu\to 0),} where $\inf$ is attained by some $\fy\in\Soli_1$, and $\sE_1(\mu)$ is decreasing in $\mu$. Moreover, there is a continuous function $\ka:(0,\mu_e)\times(0,\I)\to(0,1/2)$ such that for any $\de>0$ and any $\fy\in H^1_r$ satisfying $\bM(\fy)<\mu_e$, $\bE(\fy)\le \sE_1(\bM(\fy))-\de$ and $\|\na\fy\|_2\ge 1$, we have \EQ{ \label{bd K2} |\bK_2(\fy)| \ge \ka(\bM(\fy),\de).} \end{prop} The above minimization is well known in the case $V=0$ without the restriction on $\bG$. Some restriction to higher energy is needed in the case $V\not=0$, since those $\inf$ in \eqref{char E1} would become $\sE_0(\mu)$ without $\bG\ge 1$. \begin{proof} First, the $\ge$ part of \eqref{char E1} is obvious from $\bK_2=0$ on $\Soli$, the dichotomy Lemma \ref{lem:SMD}, and $\bI=\bE-\bK_2/2$. The second infimum is obviously decreasing in $\mu$. To show the $\le$ part of the second equality, let $\fy\in H^1_r$, $\bM(\fy)<\mu$, $\bK_2(\fy)\le 0$ and $\bG(\fy)\ge 1$. Consider the one-parameter scaling $v(t):=\cS_{3.5}^t\fy$ for $t\le 0$. The dichotomy implies $\bG(\fy)\gec\mu^{-1}\gg 1$. Since $\cS'_{3.5}\bM=-9\bM/7<0$, there exists $T<0$ such that $\bM(v(T))=\mu$. Moreover, since \EQ{ \cS_{3.5}'\bI=\frac{3}{14}\bG-\frac{1}{2}\ml{\cS_{7/3}'\cS_{3/2}'V}, \pq \cS_{3.5}'\bK_2=\frac{5}{7}\bK_2+\frac{6}{7}\bG+\ml{\cS'_{3/2}V},} we have, using Lemma \ref{decop V}, \EQ{ \label{I' K2' bd} \cS_{3.5}'\bI(v)\gec \mu^{-1}, \pq \cS_{3.5}'\bK_2(v)>5(\bK_2(v)+1)/7} as long as $\bK_2(v)\le\bM(v)\le \mu$. The second inequality of \eqref{I' K2' bd} implies that if $\bK_2(v)\ge -1$, $\bK_2(v)$ is decreasing as $t$ decreases.
Therefore $\bK_2(v)<0$ and \eqref{I' K2' bd} are preserved for $0>t>T$, hence the infimum is reduced to the case $\bM(\fy)=\mu$. Next consider the one-parameter scaling $v(t):=\cS_2^t\fy$ for $t\le 0$. Since \EQ{ \label{S2'I} \cS'_2\bI=3\bG/2-\ml{\cS'_\I\cS'_{3/2}V}/2, \pq \cS'_2\bK_2=2\bK_2-2\cS'_2\bI,} a similar argument as above implies that $\bI(v)$ is decreasing and $\bK_2(v)$ is increasing as $t$ decreases, as long as $\bK_2(v)<0$, while $\bG(v)\ge 1$ is preserved by the dichotomy. Thus the infimum is further reduced to the case $\bK_2(\fy)=0$, which means the second equality in \eqref{char E1}. To prove the existence of minimizer as well as the lower bound \eqref{bd K2} on $|\bK_2|$, take any sequence $\fy_n\in H^1_r$ satisfying $\bM(\fy_n)\to\mu\in(0,\mu_e)$, $\bK_2(\fy_n)\to 0$, $\bG(\fy_n)+\|\na \fy_n\|_2\ge 1$ and $\bE(\fy_n)\to E_\I\le E_*(\mu)$, where $E_*$ is the infimum in \eqref{char E1}. Using Lemma \ref{decop V} with Gagliardo-Nirenberg, we have \EQ{ \label{bd by K2} \bH^0=3\bE-\bK_2-\ml{\cS'_1V} \le 3\bE-\bK_2+C\bM+\bH^0/2.} Hence $\fy_n$ is bounded in $H^1_r$, and so, we may assume, passing to a subsequence, that $\fy_n\to\exists\fy\in H^1_r$ weakly. Since $\bK_2=2\bH^0-3\bG-\ml{\cS'_\I V}$, disposing of the potential part as above, we deduce that $\bG(\fy_n)\gec 1$ and $\|\na\fy_n\|_2\gec 1$ are equivalent for large $n$, and then the dichotomy implies $\bH^0(\fy_n)\sim \bG(\fy_n)\gec \mu^{-1}$. Since $\bG$ and the potential functionals are weakly continuous on $H^1_r$, we have \EQ{ \bI(\fy)=E_\I, \pq \bG(\fy) \ge 1, \pq \bM(\fy)\le\mu, \pq \bK_2(\fy)\le 0,} hence $E_\I=E_*$ and $\fy$ is a minimizer of \eqref{char E1}. Moreover, the above argument implies that $\bM(\fy)=\mu$ and $\bK_2(\fy)=0$. 
Since the dichotomy implies $\bG(\fy)\gec\mu^{-1}\gg 1$, we have Lagrange multipliers $\om,\al\in\R$ such that \EQ{ \bE'(\fy) + \om \bM'(\fy) = \al \bK_2'(\fy).} Differentiation along the curve $\cS_2^t\fy$ at $t=0$ yields \EQ{ 0 = \bK_2(\fy) = \cS'_2\bE(\fy) = \al \cS'_2\bK_2(\fy)=-2\al\cS_2'\bI(\fy),} where $\cS'_2\bI(\fy)\not=0$ by \eqref{S2'I} with the dichotomy, hence $\al=0$. This means that $\fy\in\Soli$ and so $\sE_1(\mu)=E_*(\mu)$, as well as the lower bound \eqref{bd K2} on $|\bK_2|$. Finally, we prove the asymptotic formula. In the second infimum in \eqref{char E1}, put $\psi(x):=\mu\fy(\mu x)$ and $V_\mu(x):=\mu^2V(\mu x)$. Then \EQ{ \label{muE1 rescaled} \mu \sE_1(\mu)\pt= \min_{\psi\in A_\mu}\left\{\bG(\psi)/2+\ml{(\cS'_{3/2}V)_\mu}(\psi)\right\}, \pr A_\mu:=\{\psi\in H^1_r \mid \bM(\psi)\le 1,\ \bK_2^0(\psi)\le\ml{(\cS'_\I V)_\mu}(\psi),\ \bG(\psi)\ge \mu\}.} Since $\|V_\mu\|_{L^2+L^\I}\le \mu^{1/2}\|V\|_{L^2+L^\I}$, we have for $p>0$ \EQ{ |\ml{(\cS'_pV)_\mu}(\psi)| \lec \mu^{1/2}(\|\psi\|_4^2+\|\psi\|_2^2).} Hence if $\psi\in H^1_r$ satisfies $\bM(\psi)\le 1$ and $\bK_2^0(\psi)\le -1$, then $\psi\in A_\mu$ for $0<\mu\ll 1$. Therefore as $\mu\to 0$, the minimizer $\psi$ is bounded in $L^4_x$. Since $\bG(\fy)\gec\mu^{-1}$, we obtain $\bG(\psi)\sim 1$ and $|\ml{(\cS'_pV)_\mu}(\psi)|\le O(\mu^{1/2})$. On the other hand, for any $\psi\in H^1_r$ satisfying $\bM(\psi)\le 1\sim\bG(\psi)$ and $\bK_2^0(\psi)=0$, we deduce from \eqref{S2'I} that for $0<\mu\ll 1$ there exists $t=O(\mu^{1/2})$ such that $\cS_2^t\psi\in A_\mu$, using the implicit function theorem around $t=0$.
Therefore \EQ{ \label{Q mini} \lim_{\mu\to 0}\mu \sE_1(\mu)=\inf\{\bG(\psi)/2 \mid \psi\in H^1_r,\ \bM(\psi)\le 1,\ \bK_2^0(\psi)\le 0,\ \psi\not=0\}.} To see that the above equals $\bM(Q)\bE^0(Q)$, we may first replace $\bK_2^0(\psi)\le 0$ with $\bK_2^0(\psi)=0$, since on the curve $\R\ni t\mapsto \cS_2^t\psi\in H^1_r$ for any $\psi\in H^1_r\setminus\{0\}$, $\bM$ is constant, $\bG$ is increasing, and $\bK_2^0$ changes its sign exactly once and from positive to negative. Next, since $\bG(\cS_3^t\psi)=e^t\bG(\psi)$, $\bM(\cS_3^t\psi)=e^{-t}\bM(\psi)$ and $\bK_2^0(\cS_3^t\psi)=e^t\bK_2^0(\psi)$, we may remove $\bM(\psi)\le 1$ by replacing the minimized quantity $\bG/2$ with $\bM\bG/2$, which may further be replaced with $(\bG/2+\bM)^2/4$, because \EQ{ \inf_{t\in\R}(\bG/2+\bM)(\cS_3^t\psi)=\inf_{t\in\R}[e^t\bG(\psi)/2+e^{-t}\bM(\psi)]=\sqrt{2\bM(\psi)\bG(\psi)}.} Since $\bG/2=\bE^0$ on $\bK_2^0=0$, we thus obtain \EQ{ \eqref{Q mini}=\left[\inf\{(\bE^0+\bM)(\psi)\mid \psi\in H^1_r\setminus\{0\},\ \bK_2^0(\psi)=0\}/2\right]^2.} It is well known that the above infimum is attained by the ground state $Q$ (see, e.g., \cite[Lemma 2.1]{NSK}). Using that $\bK_2^0(Q)=0=\p_{\al=1}(\bE^0+\bM)(\al Q)$, it is elementary to see that the above equals $\bM(Q)\bE^0(Q)$. \end{proof} \subsection{Defocusing case $\sg=-$} In the defocusing case, the variational structure is much simpler, and so we can determine the entire set of solitons $\Soli$ without the mass constraint. In this subsection, we prove the following \begin{prop} \label{Soli defoc} Let $\sg=-$. Under the assumptions on $V$ in Section \ref{ss:asm V}, the equation \eqref{sNLSP} has a unique positive solution $\fy_\om$ for each $\om\in(0,-e_0)$, and \EQ{ \Soli=\{e^{i\te}\fy_\om \mid 0<\om<-e_0,\ \te\in\R\}.} The function $(0,-e_0)\ni\om\mapsto\bM(\fy_\om)\in(0,\I)$ is $C^1$, decreasing and bijective. Let $\om_0(\mu)$ be its inverse function.
Then for all $\mu>0$, \EQ{ e_0\mu < \sE_0(\mu)=\bE(\fy_{\om_0(\mu)}) < 0 < \I=\sE_1(\mu),\pq \om_0'(\mu)<0.} \end{prop} Since $H\ge e_0$, multiplying \eqref{sNLSP} with $\fy$ \EQ{ 0=\LR{(H+\om)\fy|\fy}+\|\fy\|_4^4 = \|\na\fy\|_2^2+\om\|\fy\|_2^2+\LR{V\fy|\fy}+\|\fy\|_4^4} implies that $\om<-e_0$ is necessary for the existence of a non-trivial solution. If $\om\le 0$, then putting $\psi:=r\fy$ we have from \eqref{sNLSP} \EQ{ \psi_{rr}+\om\psi=(V+|\fy|^2)\psi,\pq\liminf_{r\to\I}\BR{|\psi(r)|+|\psi_r(r)|}=0.} Rewriting the above into an integral equation from $r=\I$, we obtain \EQ{ |\psi(s)| \le \int_s^\I(r-s)|(V+|\fy|^2)\psi|dr \le \|\psi\|_{L^\I(r>s)}\|r(V+|\fy|^2)\|_{L^1(r>s)},} where the last norm vanishes as $s\to\I$ by the assumption $V/|x|\in L^1(\R^3)$ and $\fy \in L^2(\R^3)$. Hence taking $s\to\I$ and then solving the ODE, we deduce that $\fy=0$. Therefore $0<\om<-e_0$ for every non-trivial $\fy\in\Soli$. Moreover, using Lemma \ref{decop V} with $\e=\om/2$, we deduce that $\fy\in\Soli$ is uniformly bounded in $H^1(\R^3)$ on any interval of $\om$ away from $0$, while $\fy_\om\to 0$ in $H^1(\R^3)$ as $\om\to-e_0-0$. For each $\om\in(0,-e_0)$, we have a solution $\fy_\om$ of \eqref{sNLSP} which is a global minimizer: \EQ{ \label{global min} (\bE+\om\bM)(\fy_\om)=\inf\{(\bE+\om\bM)(\fy)\mid \fy\in H^1\}<0.} The proof is easy and omitted. The positivity is also standard. The uniqueness of the solution $\fy_\om$ for each $\om$, modulo the phase $e^{i\te}$, follows from a general argument: \begin{lem} Let $H$ be a self-adjoint operator on $L^2$ with a non-degenerate eigenvalue $e_0$, and assume the rest of the spectrum of $H$ is contained in $[e_1, \infty)$ for some $e_1 > e_0$. Let $f:[0,\I)\to[0,\I)$ be a strictly monotone function such that $f(a)a$ is non-decreasing. Then the nonlinear eigenvalue problem \[ (H + f(|\fy|)) \fy = \om \fy \] can have at most one non-trivial solution (up to the phase symmetry) for each $\om < e_1$.
The same conclusion holds for $\om=e_1$ if $f(a)a$ is strictly increasing. \end{lem} The above lemma may be known, but a proof is given below for the sake of completeness. \begin{proof} Let $f(z):=f(|z|)$ for $z\in\C$. Let $\fy$ and $\psi$ be two non-zero solutions, and let $\phi_0$ be an eigenfunction of $H$ for $e_0$. We must have $(\phi_0|\fy) \not= 0$, or else \EQ{ (e_1 - \om) \| \fy \|_2^2 \le (\fy|(H - \om) \fy) = - (|\fy|^2|f(\fy)),} which contradicts either $\om<e_1$ or $\om=e_1$ with strictly increasing $f(a)a$. Thus we can find $\be \in \C\setminus\{0\}$ so that $(\phi_0|\be \fy + \psi) = 0$. Using the invariance of the equation for $\fy\mapsto e^{i\te}\fy$, we may take $\be>0$ by appropriate complex rotation of $\fy$. Then \EQ{ \label{min comb} \pn(e_1-\om) \| \be \fy + \psi \|_2^2 \pt\le \LR{\be \fy + \psi| (H-\om)(\be \fy + \psi) } \pr= -\be^2\LR{f(\fy)||\fy|^2}-\LR{f(\psi)||\psi|^2} + 2\be\LR{\fy|(H-\om)\psi} \pr\le 2\be\left[|\LR{\fy|(H-\om)\psi}|-\sqrt{\LR{f(\fy)||\fy|^2}\LR{f(\psi)||\psi|^2}}\right].} First consider the case where $f$ is non-decreasing. Then using Schwarz, \EQ{ |\LR{\fy|(H-\om)\psi}| =\CAS{|\LR{\fy|f(\psi)\psi}|\le\sqrt{\LR{f(\psi)||\fy|^2}\LR{f(\psi)||\psi|^2}}, \\ |\LR{f(\fy)\fy|\psi}|\le\sqrt{\LR{f(\fy)||\fy|^2}\LR{f(\fy)||\psi|^2}}.}} So we arrive at \EQ{ \pt \LR{f(\psi)||\fy|^2} \ge \LR{f(\fy)||\fy|^2}, \pq \LR{f(\fy)||\psi|^2} \ge \LR{f(\psi)||\psi|^2},} and hence $\LR{f(\fy)-f(\psi)||\fy|^2-|\psi|^2} \le 0$. Since $f$ is non-decreasing, this implies that $f(\fy)=f(\psi)$ (a.e.). Next consider the case where $f$ is non-increasing. 
By Schwarz, we have \EQ{ \pt|\LR{\fy|f(\psi)\psi}| \le\sqrt{\LR{f(\fy)||\fy|^2}\LR{f(\psi)^2|\psi|^2|1/f(\fy)}}, \pr|\LR{f(\fy)\fy|\psi}|\le\sqrt{\LR{f(\fy)^2|\fy|^2|1/f(\psi)}\LR{f(\psi)||\psi|^2}}.} Hence \eqref{min comb} implies that \EQ{ \pt \LR{f(\psi)^2|\psi|^2|1/f(\fy)} \ge \LR{f(\psi)^2|\psi|^2|1/f(\psi)}, \pr \LR{f(\fy)^2|\fy|^2|1/f(\psi)} \ge \LR{f(\fy)^2|\fy|^2|1/f(\fy)},} and so, $\LR{f(\fy)^2|\fy|^2-f(\psi)^2|\psi|^2|1/f(\fy)-1/f(\psi)} \le 0$. Since $1/f(a)$ and $f(a)a$ are both non-decreasing, we have $f(\fy)=f(\psi)$ or $f(\fy)|\fy|=f(\psi)|\psi|$ (a.e.). If $e_1>\om$, then $\psi=-\be\fy$, since otherwise the above must be a strict inequality, contradicting the monotonicity of $f(\fy)$ and $f(\fy)|\fy|$. If $e_1=\om$, then $f(z)|z|$ is strictly increasing, so we get $f(\fy)=f(\psi)$, and then going back to \eqref{min comb}, \EQ{ 0= (e_1-\om) \|\be \fy + \psi\|_2^2 \pt\leq \LR{\be \fy + \psi|(H-\om)(\be \fy + \psi)} \pr= - \LR{f(\fy)||\be \fy + \psi|^2}\le 0. } The strict monotonicity of $f(a)a$ implies that at each $x$, $f(\fy(x))=0$ implies $f(\psi(x))=0$ and $\fy(x)=0=\psi(x)$. Hence $\psi=-\be\fy$ (a.e.). Thus we obtain $\psi=-\be\fy$ anyway. Then the equation for $\fy$ and $\psi$ implies that \EQ{ (\om-H)\fy=f(\fy)\fy=-f(\psi)\psi/\be=f(\be\fy)\fy.} Since $f$ is strictly monotone, this implies $\be=1$ or $\fy=0$ a.e. \end{proof} Once we have the uniqueness of $\fy_\om$ for each $\om$, it is easy to prove continuity and then differentiability in $\om$. Differentiating the equation in $\om$, \EQ{ (H+\om+3\fy_\om^2)\fy_\om'+\fy_\om=0,\pq \fy_\om':=\p_\om\fy_\om} and multiplying it with $\fy_\om'$ yield \EQ{ \p_\om\bM(\fy_\om)=\LR{\fy_\om|\fy_\om'}=-\LR{(H+\om+3\fy_\om^2)\fy_\om'|\fy_\om'} \le -2\|\fy_\om\fy_\om'\|_2^2<0,} where we used $H+\om+\fy_\om^2\ge 0$, because $\fy_\om>0$ is the ground state in the kernel of this Schr\"odinger operator. Hence $\bM(\fy_\om)$ is decreasing in $\om$.
Moreover, $\bM(\fy_\om)\to\I$ as $\om\to+0$, since otherwise the weak limit yields $0\not=\fy_0\in H^1_r$ satisfying $H\fy_0+\fy_0^3=0$, which is impossible. Using \eqref{E-M diff} as well, we conclude the proof of Proposition \ref{Soli defoc}. \section{Blow-up below the excited energy} \label{ss:bup} We are now ready to prove the blow-up part of Theorem \ref{main}, using the above characterization of $\Soli_1$ together with the estimate on $\bK_2$, namely Proposition \ref{Soli foc}. Let $u$ be a solution of \eqref{NLSP} with $\sg=+$, satisfying \eqref{bup cond} as well as $\bM(u)=:\mu<\mu_e$ and $\bE(u)<\sE_1(\mu)$, where $\mu_e$ is as in the small mass condition of Proposition \ref{Soli foc}. Fix $\de>0$ such that $\bE(u)\le \sE_1(\mu)-\de$. Suppose for contradiction that $u$ exists on $0<t<\I$. Then Proposition \ref{Soli foc} and Lemma \ref{lem:SMD} together with the continuity of $u(t)$ in $H^1_x$ imply that \eqref{bup cond} is preserved for all $t>0$, and also from \eqref{bd K2} \EQ{ \bK_2(u(t)) \le -\ka(\mu,\de).} We have the saturated virial identity from \cite{OT} \EQ{ \p_t\LR{R f_Ru|iu_r} \pt= 2\bK_2(u) - \int[2|u_r|^2f_{0,R}+R^{-2}|u|^2f_{1,R}+|u|^4f_{2,R}]dx \prQ-\int(1-f_RR/r)|u|^2 x\cdot\na V dx,} where $f_R(x)=f(x/R)$ with $R\gg 1$, and $f_{j,R}(x)=f_j(x/R)$ are derived from $f$ by \EQ{ f_0=1-f_r\ge 0, \pq f_1=\De(\p_r/2+1/r)f, \pq f_2=-3/2+(\p_r/2+1/r)f,} while $f(x)=f(|x|)$ is chosen to be a smooth radial function satisfying \EQ{ f(r)=\CAS{r &(r\le 1),\\ 3/2 &(r\ge 2).}} The $|u|^4$ integral is bounded by the radial Sobolev inequality \EQ{ \|u\|_{L^4(|x|>R)}^4 \lec R^{-2}\|u\|_{L^2(|x|>R)}^3\|u_r\|_{L^2(|x|>R)}.} Then we obtain \EQ{ \pt-\int[2|u_r|^2f_{0,R}+R^{-2}|u|^2f_{1,R}+|u|^4f_{2,R}+(1-f_RR/r)|u|^2 x\cdot\na V]dx \pr\lec R^{-4}\|u\|_{L^2(|x|>R)}^6 + o(1)\|u\|_{L^2(|x|>R)}^2,} as $R\to\I$. See \cite{OT}, \cite[\S 4.1]{NSS}, for the details. Note that the potential part was treated by \eqref{V decay}, using $1-f_RR/r=0$ for $|x|<R$.
Hence, for large $R$, we have \EQ{ \p_t\LR{R f_Ru|iu_r} < -\ka(\mu,\de)<0.} Since $\|u\|_{L^2_x}$ is conserved, this implies that $\|u_r\|_2\to \I$ as $t\to\I$. Then as $t\to\I$, \EQ{ \bK_2(u) \pt=3\bE(u)-\bH^0(u)-\ml{\cS'_1 V}(u) \sim -\|\na u\|_2^2.} The rest of the proof is the same as in the case without the potential, see \cite{OT}. Thus we obtain the ``if"-part for \eqref{bup cond} of the blow-up in Theorem \ref{main}. Next we show the ``only if"-part of \eqref{bup cond}, namely the global existence when it is not satisfied. If $\sg=-$, then we have an a priori $H^1$ bound by conservation of the energy and mass, disposing of the potential part by Lemma \ref{decop V}, which leads to the global well-posedness in $H^1(\R^3)$. Hence we may restrict to the case $\sg=+$, $\bM(u)=\mu<\mu_e$ and $\bE(u)<\sE_1(\mu)$. By the persistence of \eqref{bup cond} proved above, if \eqref{bup cond} is initially not satisfied, neither is it at any other time. If $\bK_2(u(t))<0$ and $\|\na u(t)\|_2\le 1$, then Lemma \ref{lem:SMD} implies that $\|u(t)\|_{H^1}^2\lec\mu\ll 1$. If $\bK_2(u(t))\ge 0$, then \eqref{bd by K2} yields an a priori bound on $H^1_x$ by the mass-energy conservation. Hence the solution $u$ is global and bounded in $H^1_x$ for all $t\in\R$. Moreover, we have the scattering to the ground states by \cite{gnt} if $\bH^0(u(t))\lec\mu$, and it is preserved for all $t\in\R$. Thus we obtain the global existence part of Theorem \ref{main}. \begin{lem} \label{lem:gs} For any $u(0)\in H^1_r$ satisfying $\bM(u(0))=\mu\le \mu_e$ and $\bE(u(0))=\e<\sE_1(\mu)$, the corresponding solution $u$ of \eqref{NLSP} is global in time iff \eqref{bup cond} fails, in which case it is never satisfied at any $t\in\R$. Moreover, the global solution $u$ satisfies one of the following \begin{enumerate} \item $\|u(t)\|_{H^1_x}^2\lec\mu$ for all $t\in\R$, scattering to $\Soli_0$.
\item $\mu \lec \|u(t)\|_{H^1_x}^2 \lec \e+\mu$ and $\bK_2(u(t))\ge \ka(\mu,\sE_1(\mu)-\e)$ for all $t\in\R$, \end{enumerate} where $\ka>0$ is the same as in \eqref{bd K2}. \end{lem} The rest of this paper is devoted to the scattering in (ii). \section{Modulation and linearized equations around the ground state} Here we recall the coordinate in \cite{gnt} around the small ground state, and observe that it can be applied to large solutions as long as the mass $\bM(u)$ is small, including the excited solitons. For any $\mu>0$, denote \EQ{ \label{def H1mu} H^1[\mu]:=\{\fy \in H^1_r(\R^3) \mid \bM(\fy)< \mu\}.} Let $\Phi:D_b\to H^1_r$ be the small ground states as in Lemma \ref{lem:Phi}. We have the following nonlinear projection to them. \begin{lem} \label{lem:decop2Phi} There exist $0<\mu_p<\mu_b$ and a unique mapping $H^1[\mu_p]\ni\fy\mapsto (z,\y)\in D_p\times H^1[\mu_p]$, where $D_p:=\{z\in\C\mid |z|^2<2\mu_p\}$, such that \EQ{ \label{def decop} \pt \fy=\Phi[z]+\y, \pr \y\in \cH_c[z]:=\{\psi\in H^1(\R^3)\mid \LR{i\psi|\p_j\Phi[z]}=0\ (j=1,2)\},} where $\p_j$ denotes the derivative with respect to the real and imaginary parts of $z=z_1+iz_2$. Moreover, the map $\fy\mapsto(z,\y)$ is smooth and injective from $H^1[\mu_p]$ to $\C\times H^1_r$. Furthermore, the orthogonal projection $P_c$ to the continuous spectrum subspace \EQ{ \label{def Pc} P_c\fy:=\fy-\phi_0(\fy|\phi_0)} is bijective from $\cH_c[z]$ onto \EQ{ \cH_c[0]=P_c H^1(\R^3)=\{\fy\in H^1(\R^3) \mid \LR{i\fy|\phi_0}=0=\LR{i\fy|i\phi_0}\},} for any $z\in D_p$, and \EQ{ \label{def Rz} R[z]:=(P_c|_{\cH_c[z]})^{-1}} is a compact and continuous perturbation of the identity in the operator norm on any space between $H^2\cap W^{1,1}$ and $H^{-2}+L^\I$. \end{lem} In \cite{gnt}, the above coordinate was defined on a small ball of $H^1(\R^3)$.
However, it is easy to see that the smallness in $L^2$ suffices, since $z$ is determined by solving the orthogonality conditions \EQ{ \LR{\fy-\Phi[z]|i\p_j\Phi[z]}=0 \pq (j=1,2),} by the implicit function theorem. The derivative of the left hand side equals \EQ{ \LR{\fy-\Phi[z]|i\p_k\p_j\Phi[z]}-\LR{\p_k\Phi[z]|i\p_j\Phi[z]},\pq (j,k\in\{1,2\}).} The second term is a non-degenerate $2\times 2$ matrix of $O(1)$, while the first term is bounded by $\|\fy-\Phi[z]\|_2\lec\sqrt{\mu_p}\ll 1$, so the implicit function theorem applies, leading to the conclusion. The $L^2$ bound follows from the orthogonality \EQ{ \bM(\fy)=\bM(\Phi[z])+\bM(\y).} The operator $R[z]$ is linear, so it does not need any smallness condition. Actually, the above lemma holds without even assuming that the function is in $H^1$. Hence every solution $u$ in $H^1[\mu_p]$ can be written as \EQ{ u(t) = \Phi[z(t)] + R[z(t)]\x(t) = \Phi[z(t)]+\y(t),} uniquely, and the equation for $u$ can be rewritten for $(z,\x)$ as, regarding $\C\simeq\R^2$, \EQ{ \label{eq zxi} \CAS{ \dot z-i\Om[z]z =\U{N}(z,R[z]\x):=M(z,R[z]\x)^{-1}\LR{N(z,R[z]\x)|D\Phi[z]}, \\ i\dot\x+H\x = B[z]\x+\ti N(z,\x),} } where $M(\cdot)$ is a $2\times 2$ matrix and $N(\cdot,\cdot)$ is a scalar defined by \EQ{ \label{def MN} \pt M_{j,k}(z,\y):=\LR{i\p_j\Phi[z]|\p_k\Phi[z]}-\LR{i\y|\p_j\p_k\Phi[z]}, \pr N(z,\y):=\sg\BR{2\Phi[z]|\y|^2+\overline{\Phi[z]}\y^2+|\y|^2\y},} $B[z]$ is the ``potential part" by the small soliton, namely \EQ{ \label{def Bz} B[z]\x:=\sg P_c\{2|\Phi[z]|^2R[z]\x+\Phi[z]^2\ba{R[z]\x}\},} which is $\R$-linear but not $\C$-linear, and $\ti N(\cdot,\cdot)$ is the quadratic part \EQ{ \label{def tiN} \ti N(z,\x):=P_c\{N(z,R[z]\x)-iD\Phi[z]\U{N}(z,R[z]\x)\}.} We introduce some notation for the linearized solutions.
For any $s\in\R$ and any set $X$, the set of $X$-valued functions defined around $s$ is denoted by \EQ{ \label{def germ} X\{s\}:=\{u:I\to X \mid s\in \exists I\subset\R\}.} For any interval $I\subset\R$, $z\in C(I;\C)$, $s_0\in I$ and $u\in H^1\{s_0\}$, the linear solution $v$ of \EQ{ \label{Leqxi} i\dot v + Hv = B[z]v, \pq v(s_0)=u(s_0)} is denoted by \EQ{ \label{def propa} u[z,s_0]:=v \in C(I;H^1).} Note that this depends on $u(s_0)$ but not on $u(t)$ at other times $t\not=s_0$. Indeed, there is no need for $u$ to depend on $t$ in the definition, but this convention avoids writing the same time $s_0$ twice. We can apply it to time-independent $u$ as well. Obviously \EQ{ \forall s_1\in I,\pq u[z,s_0][z,s_1]=u[z,s_0],} while the solution without the potential $B[z]$ is given by \EQ{ u[0,s_0]=e^{i(t-s_0)H}u(s_0).} The associated Duhamel integral is denoted by \EQ{ \label{def Duh} \D f[z,s_0](t):=-i\int_{s_0}^t f[z,s](t)ds,} so that $v:=\D f[z,s_0]$ satisfies \EQ{ i\dot v+Hv = B[z]v+ f, \pq v(s_0)=0.} Hence for any $\fy\in H^1$, $z\in\C\{s_0\}$ and $f\in H^1\{s_0\}$, the solution of \EQ{ i\dot\x+H\x=B[z]\x+ f,\pq \x(s_0)=\fy} is uniquely given by \EQ{ \x=\fy[z,s_0]+\D f[z,s_0].} Another notation \EQ{ \label{def turnoff} u[z,s_0]_> :=\CAS{ u(t) &(t<s_0) \\ u[z,s_0] &(t>s_0)}} is convenient to ``turn off the nonlinearity" after some time. Indeed if $u$ solves \EQ{ \label{Lequf} i\dot u + Hu = B[z]u + f} and $s_0<s_1$, then we have \EQ{ \CAS{ u = u[z,s_0] + \D f[z,s_0], \\ u[z,s_1]_> = u[z,s_0] + \D 1_{t<s_1} f[z,s_0].}} Next a few (semi)norms are introduced for space-time functions. For $s\in\R$, put \EQ{ \label{def Stz} \Stz^s := L^\I_t H^s_x \cap L^2_t B^s_{6,2}, \pq \Stz^{*s} := L^1_t H^s_x + L^2_t B^s_{6/5,2}, \pq \ST:= L^4_t L^6_x.} $\Stz^s$ is the full Strichartz norm for $H^s$ solutions, and $\Stz^{1/2} \subset L^4_t B^{1/2}_{3,2}\subset \ST$ by interpolation and Sobolev. The next semi-norm is a bit more involved.
It is needed for the long-time perturbation argument for the radiation part $\x$, whose equation contains quadratic terms. For $-\I\le T_0<T_1\le T_2\le\I$, $z\in C((T_0,T_2);\C)$ and $u\in C((T_0,T_1);H^1)$, \EQ{ \label{def wavenorm} \|u\|_{[z;T_0,T_1;T_2]}:=\sup_{T_0<S<T<T_1}\|u[z,T]_>-u[z,S]_>\|_{\ST(S,T_2)}} is a semi-norm vanishing exactly for solutions of the linearized equation with the parameter $z$, namely \EQ{ \label{ker [z]} \|u\|_{[z;T_0,T_1;T_2]}=0 \iff i\dot u+Hu=B[z]u \pq \text{on $(T_0,T_1)$.}} If $T_0>-\I$, we can send $S\to T_0$ to get an equivalent semi-norm \EQ{ \|u\|_{[z;T_0,T_1;T_2]'}\pt:=\sup_{T_0<T<T_1}\|u[z,T]_>-u[z,T_0]_>\|_{\ST(T_0,T_2)} \pr\le \|u\|_{[z;T_0,T_1;T_2]} \le 2\|u\|_{[z;T_0,T_1;T_2]'},} where the first inequality follows from the continuity as $S\to T_0+0$, while the second one is obvious by the triangle inequality. This semi-norm measures how much $u$ deviates from the linear evolution between $T_0$ and $T_1$ and its influence until $t<T_2$. If $u$ solves \eqref{Lequf} on $(T_0,T_1)$, then \EQ{ \pt \|u\|_{[z;T_0,T_1;T_2]}=\sup_{T_0<S<T<T_1}\|\D 1_{t<T}f[z,S]\|_{\ST(S,T_2)}, \pr \|u\|_{[z;T_0,T_1;T_2]'}=\sup_{T_0<T<T_1}\|\D 1_{t<T}f[z,T_0]\|_{\ST(T_0,T_2)}.} Since we use only the Strichartz-type estimates, i.e.~$L^p_t$ norms, the right hand side will be estimated in the same way as $\|\D f[z,T_0]\|_{\ST(T_0,T_1)}$. It will be used mostly to bound (for $T_0,T_1\in\R$) \EQ{ \label{use wave norm} \|u\|_{[z;T_0,T_1;T_2]} \ge \max(\|u-u[z,T_0]\|_{\ST(T_0,T_1)},\|u[z,T_1]-u[z,T_0]\|_{\ST(T_1,T_2)}).} The idea of long-time perturbation in this type of norms, together with the use of non-admissible Strichartz (as in Lemma \ref{lem:nonad} below), was introduced in \cite{CNLKG} to treat quadratic and sub-quadratic nonlinearities.
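For the record, \eqref{use wave norm} is a direct check from the definition: sending $S\to T_0$ and $T\to T_1$ in \eqref{def wavenorm}, using the continuity in $S$ and $T$, and noting that $u[z,T_1]_>=u$ on $(T_0,T_1)$, $u[z,T_1]_>=u[z,T_1]$ on $(T_1,T_2)$, while $u[z,T_0]_>=u[z,T_0]$ on $(T_0,T_2)$, we obtain \EQ{ \|u\|_{[z;T_0,T_1;T_2]} \ge \|u[z,T_1]_>-u[z,T_0]_>\|_{\ST(T_0,T_2)} \ge \max(\|u-u[z,T_0]\|_{\ST(T_0,T_1)},\|u[z,T_1]-u[z,T_0]\|_{\ST(T_1,T_2)}),} by restricting the norm to $(T_0,T_1)$ and to $(T_1,T_2)$, respectively.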
An advantage of \eqref{def wavenorm} compared with the equivalent form is the monotonicity: \EQ{ \label{[z] mono} T_0\le T_0'<T_1'\le T_1,\ T_2'\le T_2 \implies \|u\|_{[z;T_0',T_1';T_2']} \le \|u\|_{[z;T_0,T_1;T_2]},} which is obvious by the definition. It is also subadditive for gluing intervals. \begin{lem} \label{lem:subadd} For $T_0<T_1<T_2<T_3$, $z\in C((T_0,T_3);\C)$ and $u\in C((T_0,T_3);H^1)$, \EQ{ \|u\|_{[z;T_0,T_2;T_3]} \le \|u\|_{[z;T_0,T_1;T_3]}+\|u\|_{[z;T_1,T_2;T_3]}.} \end{lem} The subadditivity holds also for the equivalent form, which is left to the reader. \begin{proof} The left side is the supremum of $\|u[z,T]_>-u[z,S]_>\|_{\ST(S,T_3)}$ over $T_0<S<T<T_2$. If $T_0<S<T<T_1$ or $T_1<S<T<T_2$, then it is trivially bounded by the first or the second term on the right. If $T_0<S\le T_1\le T<T_2$, then we have \EQ{ \|u[z,T]_>-u[z,S]_>\|_{\ST(S,T_3)} \pt\le \|u[z,T]_>-u[z,T_1]_>\|_{\ST(S,T_3)} \prQ+\|u[z,T_1]_>-u[z,S]_>\|_{\ST(S,T_3)} \pr\le \|u\|_{[z;T_0,T_1;T_3]}+\|u\|_{[z;T_1,T_2;T_3]},} using the continuity of $u[z,R]_>$ in $\ST(S,T_3)$ as $R\to T_1$. \end{proof} Now we derive the standard Strichartz estimates for the linearized equation, with uniformly small $z$. \begin{lem} \label{lem:Stz} Let $I=(T_0,T_1)$ be an interval and $z\in C(I;\C)$ with $\|z\|_{L^\I(I)}\ll 1$. Then for any $s_0\in I$, $\fy\in P_cH^1(\R^3)$, $f\in C(I;\cS')$, $T\in(T_0,T_1)$ and $\te\in[0,1]$, \EQ{ \label{lin Stz est} \pt \|\fy[z,s_0]\|_{\Stz^\te(I)} \lec \|\fy(s_0)\|_{ H^\te} \sim \inf_{t\in I}\|\fy[z,s_0](t)\|_{H^\te} \sim \sup_{t\in I}\|\fy[z,s_0](t)\|_{ H^\te}, \pr \|\D P_cf[z,s_0]\|_{\Stz^\te(I)} \lec \|f\|_{\Stz^{*\te}(I)}, \pq \|\D P_cf[z,s_0]\|_{[z;T_0,T;T_1]} \lec \|f\|_{\Stz^{*1/2}(T_0,T)}.} Moreover, if $\sup I=\I$ then we have the scattering of $u:=\fy[z,s_0]+\D P_cf[z,s_0]$ as $t\to\I$, namely the strong convergence of $e^{-itH}u(t)$ in $H^1$, and furthermore, $u(t)\to 0$ in $L^6_x$. The same holds for $t\to-\I$.
\end{lem} \begin{proof} Let $\x:=\fy[z,s_0]+\D P_cf[z,s_0]$. Then the Strichartz estimate for $e^{itH}P_c$ yields \EQ{ \|\x\|_{\Stz^\te(I)} \pt\lec \|\fy(s_0)\|_{ H^\te}+\|B[z]\x+P_cf\|_{\Stz^{*\te}(I)},} where the term $B[z]\x$ is bounded by \EQ{ \|B[z]\x\|_{L^2_t B^\te_{6/5,2}(I)} \lec \|z\|_{L^\I_t(I)}^2\|\x\|_{L^2_t B^\te_{6,2}(I)} \ll \|\x\|_{L^2_t B^\te_{6,2}(I)},} and absorbed by the left. This yields the three inequalities in \eqref{lin Stz est}. If $\sup I=\I$, for any increasing sequence $t_n\to\I$, we have \EQ{ [e^{-itH}\x(t)]_{t=t_m}^{t=t_n}=\int_{t_m}^{t_n}e^{-isH}(B[z]\x+P_cf)ds,} and using the Strichartz as above, \EQ{ \|[e^{-itH}\x(t)]_{t=t_m}^{t=t_n}\|_{H^1_x} \lec \|\x\|_{L^2_tB^1_{6,2}(t_m,t_n)}+\|f\|_{\Stz^{*1}(t_m,t_n)},\pq(t_m<t_n)} where the right side tends to $0$ as $t_m\to\I$, since the norms consist of $L^p_t$ with $p<\I$. Hence $e^{-itH}\x(t)$ converges as $t\to\I$ strongly in $H^1$. Then the decay in $L^6_x$ follows from the $L^{6/5}\to L^6$ decay estimate, the Sobolev embedding $H^1_x\subset L^6_x$ and a density argument. For the norm equivalence, let $u:=\fy[z,s_0]$ and $u_0:=\fy[0,s_0]$. Then by the same Strichartz as above, \EQ{ \|u-u_0\|_{\Stz^\te(I)} \lec \|B[z]u\|_{L^2_t B^\te_{6/5,2}(I)} \ll \|u\|_{L^2_t B^\te_{6,2}(I)},} and so the right hand side is equivalent to (since $\fy\in P_cH^1$) \EQ{ \|u_0\|_{L^2_t B^\te_{6,2}(I)} \lec \|u_0(s_0)\|_{ H^\te} \sim \|\LR{HP_c}^{\te/2}u_0(s_0)\|_{L^2} \sim \|u_0\|_{L^\I_t H^\te_x(I)},} which implies the first estimate in the lemma.
\end{proof} As an immediate consequence, the semi-norm $[z;\cdots]$ is bounded by the full Strichartz norm \EQ{ \label{[z] bd Stz} \|u\|_{[z;T_0,T_1;T_2]} \lec \|u\|_{\Stz^1(T_0,T_1)},} since we have for any $S,T\in(T_0,T_1)$, using the Strichartz estimate for the linearized equation, \EQ{ \pt\|u[z,T]_>-u[z,S]_>\|_{\ST(S,T_2)} \pr\le \|u-u[z,S]\|_{\ST(S,T)}+\|u[z,T]-u[z,S]\|_{\ST(T,T_2)} \lec \|u\|_{\Stz^1(T_0,T_1)}.} We also need non-admissible Strichartz estimates. \begin{lem} \label{lem:nonad} Let $(p_0,q_0),(p_1,q_1)\in(1,\I)\times(2,6]$ and $\s_j:=2/p_j+3(1/q_j-1/2)$ satisfy \EQ{ \s_0+\s_1=0>\s_j-1/p_j, \pq |\s_j|\le 2/3.} Then there exists $C>0$ such that under the assumptions of the above lemma, \EQ{ \|\D P_c f[z,s_0]\|_{L^{p_0}_tL^{q_0}_x(I)} \le C\|f\|_{L^{p_1'}_tL^{q_1'}_x(I)}.} If $(p_0,q_0)=(4,6)$, then for $T_0<T\le T_1$, \EQ{ \|\D P_c f[z,s_0]\|_{[z;T_0,T;T_1]}\le C\|f\|_{L^{p_1'}_tL^{q_1'}_x(T_0,T)}.} \end{lem} \begin{proof} This set of estimates for the free Schr\"odinger equation was proved by Kato \cite{K} for $q_j<2^*$, and by Foschi \cite{F} and Vilela \cite{V} for $q_j=2^*$. It is transferred to the time-independent equation $e^{itH}P_c$ by Yajima's argument of bounded wave operators \cite{Y}. More precisely, the condition $|\s_j|\le 2/3$ is not needed in Kato, but only in the double endpoint case $q_0=q_1=6$ by Foschi and Vilela, in the form $p_0=p_1\in[6/5,6]$. The above lemma inherits this condition even in the non-endpoint cases, because we use the double endpoint estimate to treat the time-dependent potential part as a small perturbation. Let $u:=\D P_c f[z,s_0]$. Then by the above estimates on $e^{itH}P_c$, \EQ{ \|u\|_{(L^{p_0}_tL^{q_0}_x\cap L^{p_2}_tL^6_x)(I)} \lec \|B[z]u\|_{L^{p_2}_tL^{6/5}_x(I)}+\|f\|_{L^{p_1'}_tL^{q_1'}_x(I)},} where $p_2:=\frac{2}{\s_0+1}\in[6/5,6]$.
The potential term is bounded by \EQ{ \|B[z]u\|_{L^{p_2}_tL^{6/5}_x(I)} \lec \|z\|_{L^\I_t(I)}^2\|u\|_{L^{p_2}_tL^6_x(I)} \ll \|u\|_{L^{p_2}_tL^6_x(I)},} and thus we obtain the desired result as in the previous lemma. \end{proof} \section{Linearized profile decomposition} Now we develop a profile decomposition for the linearized equation of the radiation part in \eqref{eq zxi}. For that purpose, we need notation for sequences similar to the above. For any sequences $a,b,c\etc$ the sequence of the form \EQ{ \N\ni n\mapsto \sX(a_n,b_n,c_n\etc)} for {\it any expression} $\sX$ (as long as it is well defined), is denoted by \EQ{ \label{conve seq} \sX(a,b,c\etc):=\{\sX(a_n,b_n,c_n\etc)\}_n.} The same convention applies when the sequence is defined only for large $n\in\N$. When $\sX=\{\sX_n\}_n$ is a sequence of sets, then it is regarded as the product set: \EQ{ \label{conve prod} x \in \sX \iff \forall n\in\N,\ x_n\in \sX_n.} The same convention is applied to $\lim$, $\sup$, etc., for any sequence $X=\{X_n\}_n$: \EQ{ \label{conve lim} \pt X\to \lim X:=\lim_{n\to\I}X_n,\pq \Sup X:=\sup_n X_n, \pr \limsup X:=\limsup_{n\to\I}X_n,\pq \liminf X:=\liminf_{n\to\I}X_n,} unless the limit is explicitly associated with another parameter. ``$\Sup$'' is capitalized to avoid possible confusion. The set of open intervals is denoted by \EQ{ \label{def sI} \pt \sI:=\{(a,b)\subset\R \mid a<b\}.} For any $I\in\sI^\N$, the set $\SBC(I)$ of sequences of uniformly small, bounded and continuous functions is defined by the following. For any $z\in C(I;\C)$ \EQ{ \label{def SBC} z\in \SBC(I)} iff $\sup_{n\in\N}\sup_{t\in I_n}|z_n(t)|\ll 1$ and \EQ{ \label{SBC} \forall\e>0,\ \exists\de>0,\ \forall s,\forall t\in I,\pq \Sup|s-t|<\de \implies \Sup|z(s)-z(t)|<\e.} The smallness requirement may depend on $V$. Since it can be fixed throughout the paper and plays no further role, we leave it unspecified.
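As a toy illustration of these conventions (a side sketch, not part of the paper: finite Python lists stand in for sequences indexed by $n$, and all concrete numbers are illustrative), an expression applied to sequences acts componentwise, $\Sup$ takes the supremum over $n$, and a sequence of sets is read as a product set:

```python
# Toy model of the sequence conventions: lists stand in for sequences {a_n}_n.
a = [1.0, 0.5, 0.25, 0.125]
b = [2.0, 2.0, 2.0, 2.0]

# X(a,b) := {X(a_n, b_n)}_n : expressions act componentwise.
X = [an * bn for an, bn in zip(a, b)]

# Sup X := sup_n X_n ("Sup" capitalized, as opposed to a sup in other variables).
Sup_X = max(X)

# A sequence of sets S = {S_n}_n is read as the product set:
# x in S iff x_n in S_n for every n.
S = [(0.0, 3.0)] * len(X)
x_in_S = all(lo < xn < hi for xn, (lo, hi) in zip(X, S))
```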
For any $I\in\sI^\N$, any $z\in C(I;\C)$ and any $z_\I\in C(\R;\C)$, we say that $z\to z_\I$ \U{locally uniformly on $I$} iff for all $0<T<\I$ \EQ{ \label{def LUC} \lim_{n\to\I} \sup_{t\in[-T,T]\cap I_n}|z_n(t)-z_\I(t)|=0.} Let $\sI^\N\ni I\ni s$, $C(I;\C)\ni z$ and $H^1\{s\}\ni u$. Note that they are all sequences by the above convention. Suppose that $P_cu(s)$ is weakly convergent in $H^1$. Then the sequence $v \in C(I;H^1)$ solving \EQ{ \forall n\in\N,\pq i\dot v_n + Hv_n = B[z_n]v_n \text{ (on $I_n$)}, \pq v_n(s_n)=\lim_{k\to\I}P_cu_k(s_k) } is denoted by \EQ{ \label{def limlin} u[z,s]\II:=\{v_n\}_n=\BR{\lim_{k\to\I}P_cu_k(s_k)}[z,s].} If $s'\in I$ is another sequence of times, then \EQ{ u[z,s]\II[z,s']=u[z,s]\II.} In the autonomous case $z\equiv 0$, the above object can be defined by translation: \EQ{ \forall t\in\R,\pq u[0,s]\II(t)=e^{i(t-s)H}\lim_{k\to\I}P_cu_k(s_k),} which trivializes the limiting behavior as $n\to\I$. The presence of $z$ precludes such a precise description. However, we do not need that much to prove the scattering result; uniform integrability in the Strichartz norms suffices, as given by the following lemma. \begin{lem} \label{lem:Stzlim} Let $s\in I\in \sI^\N$, $z\in\SBC(I)$ and $u\in P_cH^1\{s\}$ be sequences such that $\Sup\|u(s)\|_{H^1}<\I$. Then after extracting a subsequence, there exist $\fy\in P_cH^1$ and $z_\I\in C(\R;\C)$ such that $u(s)\weakto\fy$ weakly in $H^1$, $z(s+t)\to z_\I$ locally uniformly on $I-s$, and \EQ{ \label{aproxprof} \|u[z,s]\II-\fy[z_\I,0](t-s)\|_{\Stz^1(I)}\to 0.} Moreover, for any $0<T<\I$, \EQ{ \label{profaprox} \|u[z,s]-u[z,s]\II\|_{L^\I_t(|t-s|<T;L^4_x)}\to 0.} If the convergence $u(s)\to\fy$ is strong in $H^1$, then \EQ{ \label{profaproxstrong} \|u[z,s]-u[z,s]\II\|_{\Stz^1(I)}\to 0.} \end{lem} \begin{proof} First, the uniform continuity of $\SBC$ allows us to extend $z_n$ to $\R$ so that we may assume $I=\R^\N$ without loss of generality.
The uniform boundedness in $H^1$ allows us to pass to a subsequence such that $u(s)\weakto\exists\fy$. Let \EQ{ \pt t\mapsto \z(t):=z(t+s)\in \SBC(\R^\N), \pr t\mapsto v(t):=\fy[\z,0](t)=u[z,s]\II(t+s) \in C(\R;P_cH^1)^\N.} Since $\z\in\SBC$, the Ascoli--Arzel\`a theorem implies that, after extracting a subsequence, $\z\to\exists\z_\I$ in $C(\R;\C)$ with $\|\z_\I\|_{L^\I(\R)}\ll 1$. Let $v_\I:=\fy[\z_\I,0]\in C(\R;P_cH^1)$. Then we have \EQ{ i\dot v+Hv=B[\z]v, \pq i\dot v_\I+Hv_\I=B[\z_\I]v_\I, \pq \forall n,\ v_n(0)=\fy=v_\I(0).} We have the full Strichartz estimates on $v_\I$ by Lemma \ref{lem:Stz}, and \EQ{ \lim_{T\to\I}\|v_\I\|_{L^2(|t|>T;B^1_{6,2})}=0.} Since $\z\to\z_\I$ in $L^\I(|t|\le T)$ for any $T<\I$, $B[\z]\to B[\z_\I]$ in the operator norm of $B^1_{6,2}\to B^1_{6/5,2}$, uniformly on $|t|\le T$. Hence by the Strichartz estimate on $e^{itH}P_c$, and using $H^1_{6/5}\subset B^1_{6/5,2}$, \EQ{ \|v-v_\I\|_{\Stz^1(|t|<T)} \pt\lec \|B[\z]v-B[\z_\I]v_\I\|_{L^2_t(|t|<T;H^1_{6/5})} \pr\lec \|\z-\z_\I\|_{L^\I(|t|<T)}\|v_\I\|_{L^2_t(|t|<T;H^1_6)} \prQ + \|(\z,\z_\I)\|_{L^\I(|t|<T)}\|v-v_\I\|_{L^2_t(|t|<T;H^1_6)}.} Thus, the last term being absorbed by the left, we obtain \EQ{ \|v-v_\I\|_{\Stz^1(|t|<T)} \to 0.} Applying the same estimate to the Duhamel formula with $e^{itH}P_c$ from $t=\pm T$, we obtain \EQ{ \pt\|v-v_\I\|_{\Stz^1(|t|>T)} \pr\lec \|v(\pm T)-v_\I(\pm T)\|_{H^1_x} + \|B[\z]v-B[\z_\I]v_\I\|_{L^2_t(|t|>T;H^1_{6/5})} \pr\lec o(1) + \|(\z,\z_\I)\|_{L^\I(|t|>T)}(\|v_\I\|_{L^2_t(|t|>T;H^1_6)}+\|v-v_\I\|_{L^2_t(|t|>T;H^1_6)}),} where the last term is absorbed by the left. Thus we obtain \EQ{ \limsup \|v-v_\I\|_{\Stz^1(\R)} \ll \|v_\I\|_{L^2_t(|t|>T;H^1_6)}.} Sending $T\to\I$, we see that the right side is zero, namely \eqref{aproxprof}. To prove \eqref{profaprox}, let $w(t):=u[z,s](t+s)$ and $\ga:=(v-w)[0,0]$.
Then by the Strichartz estimate on $e^{itH}P_c$, \EQ{ \pt\|(v-w)-\ga\|_{(L^\I_t L^2_x\cap L^2_tL^6_x)(|t|<T)} \pr\lec \|B[\z](v-w)\|_{L^2_tL^{6/5}_x(|t|<T)} \ll \|v-w\|_{L^2_tL^6_x(|t|<T)},} so $\|v-w\|_{L^2_tL^6_x(|t|<T)}$ is bounded by $\|\ga\|_{L^2_tL^6_x(|t|<T)}$, which tends to $0$, because $\ga\to 0$ in $L^\I_tL^4_x(|t|<T)$ and is bounded in $L^2_tH^1_6$. Interpolating with the uniform bounds on $v$ and $w$, we get $v-w-\ga\to 0$ also in $L^\I_tL^4_x(|t|<T)$, hence for $v-w$ as well. The proof of \eqref{profaproxstrong} is similar but easier. We add one derivative to the Strichartz norms and extend them to the real line, such as $L^2_tH^1_6(\R)$. Then $\ga=e^{itH}\ga(0)\to 0$ in this norm, since $\ga(0)\to 0$ strongly in $H^1$. So $v-w$ is also vanishing. \end{proof} The linearized equation preserves neither the mass nor the energy, because $B[z]$ is not even $\C$-linear, but the next lemma suffices for the profile decomposition. Since $H>0$ on $P_cH^1$, its fractional powers are defined. For any $\te\in[0,1]$, the inner product is defined by \EQ{ \label{def Hte} \bH_\te(\fy,\psi):=\frac12\LR{H^\te P_c\fy|\psi}, \pq \bH_\te(\fy):=\bH_\te(\fy,\fy),} such that $\bM(\fy)=\bH_0(\fy)$ and $\bH(\fy)=\bH_1(\fy)$ for all $\fy \in P_cH^1$. \begin{lem} \label{lem:uniforth} Let $s\in I\in\sI^\N$ and $z\in\SBC(I)$. Let $v^0,v^1\in C(I;P_cH^1)$ be two sequences of linearized solutions, i.e.~$v^j=v^j[z,s]$ for $j=0,1$. Suppose that $v^0(s)$ strongly converges in $H^1$ and that $v^1(s)\weakto 0$ weakly in $H^1$. Then \EQ{ \forall\te\in[0,1],\pq \|\bH_\te(v^0,v^1)\|_{L^\I_t(I)} \to 0.} \end{lem} \begin{proof} It suffices to show $\bH_\te(v^0,v^1)(s')\to 0$ for any $s'\in I$, along a subsequence. We use the unitarity \EQ{ \ti v^j:=e^{i(s-t)H} v^j \implies \bH_\te( v^0, v^1)=\bH_\te(\ti v^0,\ti v^1)} and the Duhamel formula \EQ{ \pt \ti v^j(t) = v^j(s)+\D^j(t), \pr \D^j(t):= \int_{0}^{t-s}e^{-it'H}B[z(s+t')] v^j(s+t')dt'.} $v^0(s)\to\exists\fy$ strongly in $H^1$ by assumption.
Extracting a subsequence, we may assume $s'-s\to\exists s'_\I\in[-\I,\I]$ and $z(t+s)\to\exists z_\I(t)$ locally uniformly on $I-s$. Then \EQ{ \D^0(s')\to \int_0^{s'_\I}e^{-itH}B[z_\I]\fy[z_\I,0]dt,} strongly in $H^1$, by Lemma \ref{lem:Stzlim}, \eqref{aproxprof}, after passing to a subsequence. For $\D^1$, we use the $L^p$ decay estimate on $e^{itH}P_c$. Fix $0<\de\ll 1$ such that $1/q_\pm := 1/6 \pm \de \in [1/\pp,1/2)$. Then for any $0<T<\I$, \EQ{ \|\D^1\|_{L^\I_t(I;L^{q_+}+L^{q_-})_x} \pt\lec \int_{|t-s|>T}|t-s|^{-1-3\de}\|B[z] v^1(t)\|_{L^{q_-'}_x}dt \prQ+ \int_{|t-s|<T}|t-s|^{-1+3\de}\|B[z] v^1(t)\|_{L^{q_+'}_x}dt \pr\lec T^{-3\de}\| v^1\|_{L^\I_tL^2_x}+\| v^1\|_{L^\I_t(|t-s|<T;L^4_x)},} and the last term is vanishing by Lemma \ref{lem:Stzlim}, \eqref{profaprox}. Since $T>0$ is arbitrary, we deduce that $\D^1(s')\weakto 0$. Hence $\ti v^1(s')\weakto 0$, while $\ti v^0(s')$ is strongly convergent. Thus we obtain $\bH_\te( v^0, v^1)(s')\to 0$ as desired. \end{proof} Using the above lemmas, we are ready to prove the profile decomposition for the linearized equation for $\x$. \begin{lem} \label{lem:Lprof} Let $s\in I\in\sI^\N$ and $z\in\SBC(I)$. Let $\psi\in(P_cH^1)^\N$ be a bounded sequence. Then passing to a subsequence, there exist $J^*\in\N\cup\{\I\}$ and $s^j\in I$ for each $\N_0\ni j<J^*$ with the following properties. Let $\nu:=\psi[z,s]\in C(I;P_cH^1)$. \begin{enumerate} \item $s^0=s$ and $|s^j-s^k|\to\I$ for each $j\not=k<J^*$. \item For $j<J^*$, $\nu(s^j)\weakto\exists\fy^j_\I\in H^1$ weakly. Put $\la^j:=\nu[z,s^j]\II=\fy^j_\I[z,s^j]$. Then $\la^j(s^k)\weakto 0$ weakly in $H^1$ for $j\not=k$, and $\fy^j_\I\not=0$ for $j>0$. \item For each finite $J\le J^*$, put $\ga^{J}:=\nu-\sum_{0\le j<J} \la^j$. 
For $j< J$, $\ga^{J}(s^j)\weakto 0$ weakly in $H^1$, and for all $\te\in[0,1]$, \EQ{ \label{energy decop} \sum_{0\le j<J}\|\la^j\|_{L^\I_t(I;\dot H^\te_x)}^2 + \|\ga^{J}\|_{L^\I_t(I;\dot H^\te_x)}^2 \sim \|\psi\|_{\dot H^\te}^2+o(1).} $\bH_\te(\la^j,\la^k)$, $\bH_\te(\la^j,\ga^J)\to 0$ for $k\not=j< J$ and $\te\in[0,1]$, uniformly on $I$. \item For $0\le\te<1$, \EQ{ \label{Stz vanish} \lim_{J\to J^*}\limsup\|\ga^J\|_{[L^\I_tL^4_x,\Stz^1]_\te(I)}=0.} \end{enumerate} \end{lem} We call the decomposition given by the above lemma \EQ{ \label{lin decop} \psi[z,s]= \sum_{0\le j<J} \la^j + \ga^J, \pq \la^j:=\psi[z,s][z,s^j]\II,} \U{the linearized profile decomposition}. \begin{proof} The sequence $s^j\in I$ is defined inductively as follows. First, let $s^0:=s$. Passing to a subsequence, we may assume $\nu(s^0)\weakto\exists\fy^0_\I$. Then $\la^0:=\nu[z,s^0]\II$ and $\ga^1:=\nu-\la^0$ are defined with $\ga^1(s^0)\weakto 0$. The boundedness (in $H^1$) of $\psi$ implies that $\nu$ and $\la^0$ are uniformly bounded, and so is $\ga^1$. Let $J\in\N$ and suppose that uniformly bounded $\ga^{J}\in C(I;H^1)$ and $s^j\in I$ have been defined for $j<J$ such that $\ga^{J}(s^j)\weakto 0$. Since $\|\ga^{J}\|_{L^\I_t(I;L^4_x)}$ is bounded, we can define $s^{J}\in I$ such that \EQ{ \label{ga L4 limit} \|\ga^{J}\|_{L^\I_t(I;L^4_x)} = \|\ga^{J}(s^{J})\|_{L^4_x} + o(1).} If the left-hand sequence tends to $0$, we put $J^*=J$ and the construction terminates. Otherwise, put $\la^{J}:=\ga^{J}[z,s^{J}]\II$ and $\ga^{J+1}:=\ga^{J}-\la^{J}$. Passing to a subsequence, we may assume $\ga^{J}(s^{J})\weakto\exists\fy^J_\I\not=0$ in $H^1$ and $z(t+s^{J})\to\exists z_\I^{J}(t)\in C(\R;\C)$ locally uniformly on $I-s^J$. Then $\ga^{J+1}(s^{J})\weakto 0$ is obvious. Since $\ga^{J}(s^j)\weakto 0$ for $j<J$, we have $\ga^{J}\to 0$ in $L^\I_t(|t-s^j|<T;L^4_x)$ for any $T<\I$, by Lemma \ref{lem:Stzlim}, \eqref{profaprox}. Hence $|s^J-s^j|\to\I$.
Then from Lemma \ref{lem:Stzlim}, \eqref{aproxprof}, together with the Strichartz bound on $\fy^J_\I[z_\I^J,0]$, we deduce that $\la^J(s^j)\weakto 0$, and so $\ga^{J+1}(s^j)\weakto 0$. The same argument implies that $\la^j(s^J)\weakto 0$ as well. Hence we can iterate the same procedure. In this way, after the diagonalization argument, we obtain the sequences $s^j$ with the properties that $|s^j-s^k|\to\I$ for $j\not=k$, $\ga^J(s^j)\weakto 0$ for $j< J$, $\la^j(s^k)\weakto 0$ for $j\not=k$, $z(t+s^j)\to z_\I^j(t)$, and the decomposition \eqref{lin decop}. Since $\la^j(s^j)=\fy^j_\I$ while $\la^k(s^j)\weakto 0$ for $k\not=j$, Lemma \ref{lem:uniforth} implies $\bH_\te(\la^j,\la^k)\to 0$, $\bH_\te(\la^j,\ga^k)\to 0$ for $j<k$ and $\te\in[0,1]$. Hence \EQ{ \pt \bH_\te(\psi)=\bH_\te(\sum_{0\le j<J}\la^j(s)+\ga^J(s)) =\sum_{0\le j<J}\bH_\te(\la^j(s))+\bH_\te(\ga^J(s))+o(1).} The equivalence $\bH_\te(\fy)\sim\|\fy\|_{\dot H^\te}^2$ on $P_cH^1$ and Lemma \ref{lem:Stz} imply \eqref{energy decop}. It remains to prove \eqref{Stz vanish}. By the definition of $s^j$, cf.~\eqref{ga L4 limit}, we have \EQ{ \|\ga^{J}\|_{L^\I_tL^4_x} = \|\fy^J_\I\|_{L^4_x}+o(1) \lec \|\la^J\|_{L^\I_t H^1_x}+o(1).} Since the right hand side is vanishing by \eqref{energy decop}, \EQ{ \lim_{J\to J^*}\limsup\|\ga^J\|_{L^\I_tL^4_x}=0,} and then by interpolation with the uniform Strichartz bound, we obtain \eqref{Stz vanish}. \end{proof} \section{Nonlinear perturbation estimates} In order to use the linearized profile decomposition to approximate the nonlinear solutions, we need a few perturbation lemmas for the nonlinear equation of $\x$ \EQ{ \label{eqxi} i\dot \x + H\x = B[z]\x + \ti N(z,\x)} regarding $z$ as a given time-dependent function. Since our global knowledge of $z$ is very poor (cf.~Section \ref{ss:diff}), we should avoid perturbing $z$ over long time intervals.
It leads us to prepare the following two lemmas for perturbation: Lemma \ref{lem:over river} for long time intervals where $\x$ is small, and Lemma \ref{lem:over mountain} for bounded time intervals where $\x$ may be large. The first lemma is a perturbation of $0$, or construction of dispersed solutions. \begin{lem} \label{lem:smallsol} Let $-\I<T_0<T_1\le\I$, $z\in C([T_0,T_1);\C)$ and $\fy\in H^1(\R^3)$. Put \EQ{ \cN_\te:=\|z\|_{L^\I(T_0,T_1)}+\|\fy\|_{H^\te}} for $\te\in[0,1]$ and assume $\cN_0\ll 1$. \rm{(I)} If $\|\fy[z,T_0]\|_{\ST(T_0,T_1)}\cN_{1/2}^3\ll 1$, then \eqref{eqxi} has a unique solution $\x$ satisfying \EQ{ \label{small sol cond} \x\in C([T_0,T_1);H^1),\pq \x(T_0)=\fy, \pq \|\x\|_{\ST(T_0,T_1)}\lec\|\fy[z,T_0]\|_{\ST(T_0,T_1)}.} \rm{(II)} Let $\x\in C([T_0,T_1);H^1)$ be a solution of \eqref{eqxi} with $\x(T_0)=\fy$ satisfying \EQ{ \|\x\|_{\ST(T_0,T_1)}\cN_{1/2}^3\ll 1.} Then for any $T\in(T_0,T_1)$ and all $\te\in[0,1]$, \EQ{ \|\x[z,T]_>-\fy[z,T_0]\|_{\Stz^\te(T_0,T_1)} \lec \cN_\te\|\fy[z,T_0]\|_{\ST(T_0,T)}(\cN_0+\|\fy[z,T_0]\|_{\ST(T_0,T)}),} and $\|\x\|_{[z;T_0,T;T_1]} \ll \|\x[z,T_0]\|_{\ST(T_0,T)} \sim \|\x\|_{\ST(T_0,T)}$. \end{lem} \begin{proof} Let $\x_0:=\fy[z,T_0]$ and $\|\x_0\|_{\ST(T_0,T_1)}=:\de_0$. The solution $\x$ is obtained by the iteration argument. If $\x$ is a solution, then by the Strichartz estimate: Lemma \ref{lem:Stz}, \EQ{ \label{est by Stz} \|\x-\x_0\|_{\Stz^\te} \lec \|\x\|_{\Stz^\te}\|\x\|_{\ST}(\|\x\|_{\ST}+\|z\|_{L^\I})} for $\te\in[0,1]$. Using the non-admissible Strichartz: Lemma \ref{lem:nonad}, \EQ{ \|\x\|_{\ST} \pt\le \|\x_0\|_{\ST} + C\|\ti N(z,\x)\|_{L^{8/3}_tL^{4/3}_x} \pr \lec \de_0 + \|\x\|_{\ST}\|\x\|_{L^8_tL^4_x}(\|\x\|_{L^\I_tL^3_x}+\|\Phi[z]\|_{L^\I_tL^3_x}) \pr \lec \de_0 + \|\x\|_{\ST}^{3/2}\|\x\|_{\Stz^{1/2}}^{1/2}(\|\x\|_{\Stz^{1/2}}+\cN_0).} Suppose that $\|\x\|_{\Stz^{1/2}}\le C\cN_{1/2}$ and $\|\x\|_{\ST}\le C\de_0$ for some constant $C\gg 1$ on some shorter interval. 
Then \EQ{ \pt\|\x\|_{\Stz^{1/2}} \lec \cN_{1/2} + C^2\de_0(\de_0+\cN_0)\|\x\|_{\Stz^{1/2}}, \pr\|\x\|_{\ST} \lec \de_0 + C^2(\de_0\cN_{1/2}^3)^{1/2} \|\x\|_{\ST}.} Since $\de_0+\cN_0\lec\cN_{1/2}$, we have $\de_0(\de_0+\cN_0)\lec(\de_0\cN_{1/2}^3)^{1/2}$. Hence, if $\de_0\cN_{1/2}^3\ll 1$ then $\|\x\|_{\ST} \lec \de_0$ and $\|\x\|_{\Stz^{\te}} \lec \cN_{\te}$ for all $\te\in[0,1]$. Then by continuity in extending the interval, these bounds hold on the whole interval $(T_0,T_1)$. If we assume $\|\x\|_{\ST(T_0,T_1)}\cN_{1/2}^3\ll 1$ instead of the corresponding condition on $\x_0$, then we obtain $\|\x_0\|_{\ST(T_0,T)}\lec\|\x\|_{\ST(T_0,T)}$ in the same way, starting from $T=T_0+0$. In both cases, repeating the Strichartz estimate on $H^\te$ as above, we obtain (II). \end{proof} \begin{rem} Since $\dot H^{1/2}(\R^3)$ is the scaling invariant norm for the NLS without the potential, $\cN_{1/2}$ is in general large when we use the above lemma. In the focusing case, $\cN_{1/2}\sim 1$ on $\Soli_1$, while there is no upper bound on $\cN_{1/2}$ in the defocusing case. However, we can expect that $\ST$ is small for dispersive solutions, by which the assumptions in the above lemma can be satisfied. Note also that the estimate cannot be closed if we use only the admissible Strichartz $\Stz^{1/2}$ as in \eqref{est by Stz}, because of the quadratic terms. \end{rem} If the above solution is obtained for $t\to\I$, then it scatters. \begin{lem} \label{lem:scatxi} Let $T_0\in\R$, $(z,\x)\in C([T_0,\I);\C\times P_cH^1)$ solve \eqref{eqxi} on $(T_0,\I)$ satisfying \EQ{ \|z\|_{L^\I(T_0,\I)}+\|\x\|_{L^\I_t(T_0,\I;L^2_x)} \ll 1, \pq \|\x\|_{L^\I_t(T_0,\I;H^1_x)}<\I.} Then the following {\rm(i)} and {\rm(ii)} are equivalent. \begin{enumerate} \item $\exists\fy\in H^1(\R^3)$ such that $\|\fy[z,T_0]-\x\|_{H^1_x}\to 0$ as $t\to\I$. \item $\|\x\|_{\ST(T_0,\I)}<\I$. \end{enumerate} In this case, we say that \U{$\x$ scatters with $z$ as $t\to\I$}.
Moreover, as $T\to\I$, \EQ{ \|\x[z,T]-\fy[z,T_0]\|_{\Stz^1(T_0,\I)} +\|\x-\x[z,T]\|_{\Stz^1(T,\I)} \to 0, } and for any $\ti z\in C([T_0,\I);\C)$ satisfying $\|\ti z\|_{L^\I_t}\ll 1$, \EQ{ \label{scatt decay} \pt \|\x\|_{[\ti z;T,\I;\I]}+\|\x[\ti z,T]\|_{L^2B^1_{6,2}(T,\I)} +\|\x\|_{L^2B^1_{6,2}(T,\I)} \to 0,} uniformly with respect to $\ti z$. A sufficient condition of scattering is \EQ{ \label{suf cond scat} \|\x[z,T_0]\|_{\ST(T_0,\I)}\BR{\|z\|_{L^\I(T_0,\I)}+\|\x(T_0)\|_{H^{1/2}_x}}^3 \ll 1.} \end{lem} The scattering with $z$ as $t\to-\I$ is defined in the same way, which has the same property as above. \begin{proof} Assume (i) and let $\x_+:=\fy[z,T_0]$. Then by Lemma \ref{lem:Stz}, $\|\x_+\|_{\Stz^1(T_0,\I)}<\I$ and so in particular $\|\x_+\|_{\ST(T,\I)}\to 0$ as $T\to\I$. By (i) and the Strichartz estimate, \EQ{ \|\x_+-\x[z,T]\|_{\ST(T,\I)} \lec \|\x_+(T)-\x(T)\|_{H^{1/2}} \to 0.} Hence for sufficiently large $T$, the previous lemma implies $\|\x\|_{\ST(T,\I)}\lec\|\x[z,T]\|_{\ST(T,\I)}$. Assume (ii) and let $T>T_0$ so large that we can apply the previous lemma on $(T,\I)$ to get $\|\x\|_{\Stz^1(T,\I)}\lec\|\x[z,T]\|_{\Stz^1(T,\I)}$. Then for any $T_2>T_1>T$, by the Strichartz estimate: Lemma \ref{lem:Stz}, \EQ{ \|\x[z,T_2]-\x[z,T_1]\|_{\Stz^1(T_0,\I)} \pt\lec \|\ti N(z,\x)\|_{\Stz^{1*}(T_1,T_2)} \pr\lec \|\x\|_{\Stz^1(T_1,\I)}[\|\x\|_{\ST(T_1,\I)}+\|\x\|_{\ST(T_1,\I)}^2]\to 0} as $T_1\to\I$. In particular $\x[z,T](T_0)$ is Cauchy in $H^1_x$ as $T\to\I$, hence convergent to some $\x_+\in H^1_x$. Then \EQ{ \|\x_+[z,T_0](T)-\x(T)\|_{H^1_x} \pt\le \|\x_+[z,T_0]-\x[z,T]\|_{\Stz^1(T_0,\I)} \pr\lec \|\x_+-\x[z,T](T_0)\|_{H^1_x} \to 0 \pq (T\to\I)} and so (i). Thus in either case, we have $\|\x[z,T]\|_{\ST(T,\I)}\to 0$ as $T\to\I$, hence the previous lemma implies \EQ{ \|\x-\x[z,T]\|_{\Stz^1(T,\I)} \lec \BR{\|z\|_{L^\I_t(T_0,\I)}+\|\x\|_{L^\I_t(T_0,\I;H^1_x)}}\|\x[z,T]\|_{\ST(T,\I)}\to 0} as $T\to\I$. Hence $\|\x\|_{L^2_tB^1_{6,2}(T,\I)}\to 0$. 
Let $\x_0:=\x[z,T]$ and $\x_1:=\x[\ti z,T]$. Then \EQ{ \x_1 = \x_0 + \D(B[\ti z]-B[z])\x_1[z,T]} and by the Strichartz estimate: Lemma \ref{lem:Stz}, \EQ{ \|\x_1\|_{\Stz^1(T,\I)} \le \|\x_0\|_{\Stz^1(T,\I)}+C\|z\|_{L^\I_t(T_0,\I)}^2\|\x_1\|_{L^2_tB^1_{6,2}(T,\I)}.} After the last term is absorbed by the left, we obtain, as $T\to\I$, \EQ{ \|\x[\ti z,T]\|_{L^2_tB^1_{6,2}(T,\I)} \le 2\|\x[z,T]\|_{L^2_tB^1_{6,2}(T,\I)}\to 0,} which is uniform with respect to $\ti z$. By the previous lemma, we also have \EQ{ \|\x\|_{[\ti z;T,\I;\I]} \ll \|\x[\ti z,T]\|_{\ST(T,\I)} \sim \|\x\|_{\ST(T,\I)}\to 0.} The sufficiency of \eqref{suf cond scat} for scattering is now obvious by Lemma \ref{lem:smallsol}. \end{proof} The next two lemmas are concerned with the difference of two solutions. For the sake of brevity, the following notation is introduced for differences. For any expressions $\sX$ and $a,b,c\etc$ \EQ{ \label{conve diff} \diff{\sX(a_\pa,b_\pa,c_\pa\etc)}:=\sX(a_0,b_0,c_0\etc)-\sX(a_1,b_1,c_1\etc).} The first lemma of difference estimates treats perturbation of dispersed solutions. It will be used either with $z_0=z_1$ or on a short interval. We need the non-admissible Strichartz for the difference of quadratic terms.
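As a sanity check on the exponent bookkeeping (a side computation, not part of the argument), one can verify in exact rational arithmetic that the pairs $(p_j,q_j)$ chosen in the proof below satisfy the hypotheses of Lemma \ref{lem:nonad}, and that the interpolation exponents used to return to $\ST=L^4_tL^6_x$ add up correctly:

```python
from fractions import Fraction as F

def sigma(p, q):
    # sigma_j = 2/p_j + 3(1/q_j - 1/2), the scaling exponent of the lemma.
    return 2 / F(p) + 3 * (1 / F(q) - F(1, 2))

# Exponent pairs chosen in the proof of the difference estimate.
pairs = {0: (F(4), F(24, 7)), 1: (F(2), F(24, 5)), 2: (F(4), F(24, 9))}
sig = {j: sigma(p, q) for j, (p, q) in pairs.items()}
assert sig[0] == F(-1, 8) and sig[1] == F(1, 8) and sig[2] == F(1, 8)

# Hypotheses of the non-admissible Strichartz lemma for the pairs (0,1), (0,2):
for j in (1, 2):
    assert sig[0] + sig[j] == 0
for j, (p, q) in pairs.items():
    assert sig[j] - 1 / p < 0 and abs(sig[j]) <= F(2, 3) and 2 < q <= 6

# Hoelder/interpolation exponents used afterwards:
assert F(1, 6) == F(4, 7) * F(7, 24)                      # L^6 from L^{24/7} and B^0_{oo,2}
assert F(7, 24) == F(1, 4) * F(1, 6) + F(3, 4) * F(1, 3)  # L^{24/7} between L^6 and L^3
# Final power count: (de^{1/4})^{4/7} = de^{1/7} and (N0^{3/4})^{4/7} = N0^{3/7}.
assert F(1, 4) * F(4, 7) == F(1, 7) and F(3, 4) * F(4, 7) == F(3, 7)
```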
\begin{lem} \label{lem:over river} Let $-\I<T_0<T_1<T_2\le\I$, $z_0\in C([T_0,T_2);\C)$, $z_1\in C([T_0,T_1];\C)$, and $\x_0,\x_1\in C([T_0,T_1];H^1)$ solve \EQ{ \forall t\in(T_0,T_1),\pq i\dot \x_j + H \x_j = B[z_j]\x_j + \ti N(z_j,\x_j).} Put $\cN_\te:=\|z_j\|_{L^\I_t(T_0,T_1)}+\|\x_j\|_{L^\I_t(T_0,T_1;H^\te_x)}$ for $\te\in[0,1]$, and suppose that for some $0<\de\lec\ti\de$ satisfying $\cN_0+\ti\de\cN_{1/2}^3 \ll 1$, \EQ{ \pt \|\x_0[z_0,T_0]\|_{\ST(T_0,T_1)} \le \ti\de, \pq \|\diff\x_\pa[z_0,T_0]\|_{\ST(T_0,T_1)}+ \|\diff z_\pa\|_{L^4(T_0,T_1)} \le \de.} Then we have \EQ{ \|\diff \x_\pa\|_{[z;T_0,T_1;T_2]} \lec (\cN_0\cN_1)^{3/7}\ti\de^{4/7}\de^{1/7}.} \end{lem} \begin{proof} The previous lemma \ref{lem:smallsol} applies to both $(z_j,\x_j)$, which implies \EQ{ \|\x_j\|_{\Stz^\te(T_0,T_1)}\lec \cN_\te, \pq \|\x_j\|_{\ST(T_0,T_1)}\lec \ti\de.} Now apply the non-admissible Strichartz estimate, Lemma \ref{lem:nonad} to Duhamel \EQ{ \diff\x_\pa = \diff\x_\pa[z_0,T_0] + \D\{\diff{\ti N}(z_\pa,\x_\pa)+\diff B[z_\pa]\x_1\}[z_0,T_0].} Choose \EQ{ (p_0,q_0)=(4,24/7),\ (p_1,q_1)=(2,24/5),\ (p_2,q_2)=(4,24/9)} so that $\s_0=-1/8$, $\s_1=\s_2=1/8$ and we can apply the lemma. Then \EQ{ \pt\|\diff\x_\pa - \diff\x_\pa[z_0,T_0]\|_{L^{4}_tL^{24/7}_x} \lec \|\diff{\ti N}(z_\pa,\x_\pa)+\diff B[z_\pa]\x_1\|_{L^{2}_tL^{24/19}_x+L^{4/3}_tL^{24/15}_x} \pr\lec \{\|\diff\x_\pa\|_{L^{4}_tL^{24/7}_x}+\|\diff z_\pa\|_{L^4_t}\}(\|\x_0\|_{\ST}+\|\x_1\|_{\ST})\{1+\|\x_0\|_{\ST}+\|\x_1\|_{\ST}\},} where the linear and quadratic (in $\x_0,\x_1$) terms are estimated in $L^2_tL^{24/19}_x$, while the cubic terms are in $L^{4/3}_tL^{24/15}_x$. The factor $1$ comes from the term $\diff B[z]\x_1$, and it also includes the smallness factor $\cN_0\ll 1$. 
For the cubic term with $\diff z_\pa$, we used \EQ{ \|\diff R[z_\pa]\x_j\|_{H^1_x} \lec |\diff z_\pa|\|\x_j\|_{L^2_x} \le |\diff z_\pa| \cN_0.} Thus, using the smallness of $\|\x_j\|_{\ST(T_0,T_1)}$ and $\|\diff z_\pa\|_{L^4(T_0,T_1)}$, we obtain \EQ{ \|\diff\x_\pa\|_{L^4_tL^{24/7}_x(T_0,T_1)} \lec \|\diff\x_\pa[z_0,T_0]\|_{L^{4}_tL^{24/7}_x(T_0,T_1)}+\ti\de\de.} Applying the same estimate to the Duhamel formula \EQ{ \diff\x_\pa[z_0,T_1]_> = \diff\x_\pa[z_0,T_0] + \D 1_{T_0<t<T_1}\{\diff{\ti N}(z_\pa,\x_\pa)+\diff B[z_\pa]\x_1\}[z_0,T_0],} we also obtain \EQ{ \|\diff\x_\pa[z_0,T_1]_> - \diff\x_\pa[z_0,T_0]\|_{L^{4}_tL^{24/7}_x(T_0,T_2)} \lec \ti\de\|\diff\x_\pa[z_0,T_0]\|_{L^{4}_tL^{24/7}_x(T_0,T_1)}+\ti\de\de,} and this norm is related to $\ST=L^4_tL^6_x$ by interpolation and Sobolev as \EQ{ \pt \|f\|_{\ST} \lec \|f\|_{L^{4}_tL^{24/7}_x}^{4/7}\|f\|_{L^4B^0_{\I,2}}^{3/7} \lec \|f\|_{L^{4}_tL^{24/7}_x}^{4/7}\|f\|_{\Stz^1}^{3/7}, \pr \|f\|_{L^{4}_tL^{24/7}_x} \le \|f\|_{\ST}^{1/4}\|f\|_{L^{4}_tL^{3}_x}^{3/4} \lec \|f\|_{\ST}^{1/4}\|f\|_{\Stz^0}^{3/4}.} Injecting these to the above and using the Strichartz bound, we obtain \EQ{ \|\diff\x_\pa[z_0,T_1]_> - \diff\x_\pa[z_0,T_0]\|_{\ST(T_0,T_2)} \pt\lec [\ti\de\de^{1/4}\cN_0^{3/4}]^{4/7}\cN_1^{3/7} \pr= (\cN_0\cN_1)^{3/7}\ti\de^{4/7}\de^{1/7},} as desired. \end{proof} The second lemma of difference estimates treats perturbation of large solutions with finite Strichartz. It will be used only on a bounded interval of time. \begin{lem} \label{lem:over mountain} Let $-\I<T_0<T_1<T_2\le\I$, $z_0\in C([T_0,T_2);\C)$, $z_1\in C([T_0,T_1];\C)$, and $\x_0,\x_1\in C([T_0,T_1];H^1)$ solve \EQ{ \forall t\in(T_0,T_1),\pq i\dot \x_j + H \x_j = B[z_j]\x_j + \ti N(z_j,\x_j) .}Put for $\te\in[0,1]$ \EQ{ \cN_\te:=\|z_j\|_{L^\I_t(T_0,T_1)}+\|\x_j\|_{L^\I_t(T_0,T_1;H^\te_x)}, \pq \cN_2:=\|\x_0\|_{\ST(T_0,T_1)},} and assume $\cN_0\ll 1$. 
For any $\e>0$, there is $\de_*(\cN_1,\cN_2,\e)>0$, continuous and decreasing for each $\cN_j$, such that if \EQ{ \|\diff\x_\pa[z_0,T_0]\|_{\ST(T_0,T_1)}+ \|\diff z_\pa\|_{L^4(T_0,T_1)} \le \de_*} then we have \EQ{ \|\diff \x_\pa\|_{[z_0;T_0,T_1;T_2]} \le \e.} \end{lem} \begin{proof} For any $N\in\N$, $(T_0,T_1)$ is decomposed into subintervals $I_0\etc I_N$ such that \EQ{ \forall j=0\etc N,\pq \|\x_0\|_{\ST(I_j)} \le 2N^{-1/4}\cN_2=:\ti\de.} Let $I_j=:(S_j,S_{j+1})$. If $\ti\de\cN_{1/2}^3\ll 1$, then we can apply Lemma \ref{lem:smallsol} starting from $S_j$, and we obtain \EQ{ \|\x_0[z_0,S_j]\|_{\ST(I_j)} \sim \|\x_0\|_{\ST(I_j)} \le \ti\de.} Suppose that for some $\de_0>0$, \EQ{ \|\diff\x_\pa[z_0,S_0]\|_{\ST(S_0,T_1)}+ \|\diff z_\pa\|_{L^4(S_0,T_1)} \le \de_0 \le \ti\de, \pq \ti\de \cN_{1/2}^3 \ll 1.} Then $\|\x_1[z_0,T_0]\|_{\ST(S_0,S_1)} \lec \ti\de$ and we can apply Lemma \ref{lem:over river} on $I_0$. Then \EQ{ \|\diff\x_\pa\|_{[z_0;S_0,S_1;T_2]} \le C(\cN_0\cN_1)^{3/7}\ti\de^{4/7}\de_0^{1/7}.} Let $\de_1:=\de_0+C(\cN_0\cN_1)^{3/7}\ti\de^{4/7}\de_0^{1/7}$, then \EQ{ \|\diff\x_\pa[z_0,S_1]\|_{\ST(S_1,T_1)}+ \|\diff z_\pa\|_{L^4(S_1,T_1)} \le \de_1.} Hence if $\de_1\le\ti\de$ then we can repeat the same thing on $I_1$. Define the sequence $\de_j$ for $j=0\etc N$ inductively from $\de_0$ by \EQ{ \de_{j+1}:=\de_j + C(\cN_0\cN_1)^{3/7}\ti\de^{4/7}\de_j^{1/7}.} Given $\cN_1$ and $\cN_2$, we can determine $\ti\de$ and $N$ such that \EQ{ 2N^{-1/4}\cN_2 \le \ti\de \ll \cN_1^{-3} \le \cN_{1/2}^{-3}.} Then there is $\de_*=\de_*(\cN_1,\cN_2,\e)>0$ such that \EQ{ \de_0\le\de_* \implies \de_{N+1} < \min(\ti\de,\e).} Then for $\de_0\le\de_*$, we can iterate the above estimate for all $j$ to get \EQ{ \|\diff\x_\pa\|_{[z_0;T_0,T_1;T_2]} \pt\le \sum_{j=0}^N \|\diff\x_\pa\|_{[z_0;S_j,S_{j+1};T_2]} \pr\le \sum_{j=0}^N C(\cN_0\cN_1)^{3/7}\ti\de^{4/7}\de_j^{1/7}=\de_{N+1}-\de_0<\e,} where we used the subadditivity for consecutive intervals by Lemma \ref{lem:subadd}. 
\end{proof} \section{Nonlinear profile decomposition} We are now ready to develop a profile decomposition for NLS \eqref{NLSP} in the $(z,\x)$ coordinate, i.e. the equation \eqref{eq zxi}. Let $I\in\sI^\N$ and $u\in C(I;H^1[\mu_p])$ be a sequence of solutions of \eqref{NLSP} in the mass region of the $(z,\x)$-coordinate. Put \EQ{ I_n=:(\U T_n,\ba T_n)} for each $n\in\N$. We can uniquely write $u=\Phi[z]+R[z]\x$ by Lemma \ref{lem:decop2Phi}, where $(z,\x)\in C(I;\C\times H^1)$ is a sequence of solutions for \eqref{eq zxi}. Suppose \EQ{ \label{asm on un} \cN_1:=\Sup \BR{\|z\|_{L^\I_t(I)}+\|u\|_{L^\I_t(I;H^1)}}<\I,} where the $z$ part is uniformly bounded by $\sqrt{\mu_p}$, so that we can omit it. Then we have $z\in\SBC(I)$, since the smallness of $|z|$ is already in Lemma \ref{lem:decop2Phi}, while \eqref{SBC} follows from a uniform bound on $|\dot z|$ (depending on $\cN_1$), easily observed in the equation \eqref{eq zxi} using $\|\y\|_{H^1}\lec\cN_1$, $H^1_x\subset L^6_x$, and the compactness of $\ba{D_p}\subset D_b$ for the $z$ dependence. By the $L^2$ conservation \EQ{ \cN_0:=\Sup \BR{\|z\|_{L^\I_t(I)}+\|u\|_{L^\I_t(I;L^2)}} \lec \sqrt{\mu_p} \ll 1.} Similarly for $\te\in[0,1]$, put \EQ{ \cN_\te:=\Sup\BR{\|z\|_{L^\I_t(I)}+\|u\|_{L^\I_t(I;H^\te)}} \le \cN_0^{1-\te}\cN_1^\te.} For any $s\in I$, we can apply the linearized profile decomposition: Lemma \ref{lem:Lprof} to the sequence $\x(s)$. Passing to a subsequence, we have for each $J<J^*$, \EQ{ \label{def LPD} \x[z,s]=\la^{[0,J)}+\ga^J, \pq \pq \la^j:=\x[z,s][z,s^j]\II,} where the following abbreviation is used: for any interval $I$, \EQ{ \label{def laI} \la^I:=\sum_{j\in I\cap \Z}\la^j.} Extracting a subsequence if necessary, we may assume \EQ{ \label{sj-s} s^j-s\to\s^j\I,\pq \exists\s^j\in\{+,-\}} for each $0<j<J^*$, and, since \eqref{eq zxi} implies that $(z,\x)$ is weakly equicontinuous, \EQ{ (z,\x)(s^j+t) \to \exists (z_\I^j,\x_\I^j)(t) \IN{\C\times\weak{H^1}},} locally uniformly on $I-s^j$. 
Put \EQ{ I^j_\I:=\liminf(I-s^j), \pq I^j:=I^j_\I+s^j.} \eqref{sj-s} implies $I^j_\I\supset\{-\I<\s^j t\le 0\}$ for $j>0$. For $j=0$, we have $I^0_\I\supset[0,\I)$ if $\ba T-s\to\I$ and $I^0_\I\supset(-\I,0]$ if $\U T-s\to-\I$. Note that if $|I_n|$ is bounded then the decomposition is trivial, i.e. $J^*=1$. In either case, $I^j\ni s^j,s^0=s$. By the property of $\SBC$, we can extend $z_\I^j$ to $\SBC(\R)$. By the subcritical nature of NLS, it is easy to see that the weak limit $(z_\I^j,\x_\I^j)$ is a solution of \eqref{eq zxi} in $C(I_\I^j;\C\times P_cH^1)$. In other words, \EQ{ u_\I^j:=\Phi[z_\I^j]+R[z_\I^j]\x_\I^j} is a solution of \eqref{NLSP} on $I_\I^j$. Using that $R[z]-1$ is compact on $H^1$, together with the weak convergence of the linearized profiles, we have \EQ{ u(s)\pt=\Phi[z_\I^0(0)]+R[z_\I^0(0)]\x(s)+o(1) \pr=u_\I^0(0)+\la^{(0,J)}(s)+\ga^J(s)+o(1)\pq\IN{H^1},} and, using the orthogonality as well, \EQ{ \label{ME decop} \pt \bM(u)=\bM(u_\I^0)+\sum_{0<j<J}\bM(\la^j(s))+\bM(\ga^J(s))+o(1), \pr \bE(u)=\bE(u_\I^0)+\sum_{0<j<J}\bH^0(\la^j(s))+\bH^0(\ga^J(s))+o(1).} \U{The nonlinear profile} $\La^j\in C(I^j;P_cH^1)$ is defined by \EQ{ \label{def La} \La^j(t):=\x_\I^j(t-s^j).} Also put \EQ{ \pt \fy^j_\I:=\lim \nu(s^j)=\la^j(s^j)\in P_cH^1,\pq \la_\I^j:=\fy^j_\I[z_\I^j,0]\in C(\R;P_cH^1), \pr z_j(t):=z(s^j+t), \pq z\II^j(t):=z_\I^j(t-s^j), \pq \la\II^j(t):=\la_\I^j(t-s^j),} such that $z_j=z_\I^j+o(1)$ in $C(\R)$, and using Lemma \ref{lem:Stzlim}, \eqref{aproxprof}, \EQ{ \label{zej stz} \la^j=\fy^j_\I[z,s^j]=\la\II^j+o(1) \IN{\Stz^1(I)},} while $(z\II^j,\La^j)$ is a sequence of solutions of \eqref{eq zxi} on $I^j$. 
\U{The nonlinear remainder} $\Ga^J\in C(I;H^1)$ is defined for each $J$ by the same sequence of equations as $\x$, with the same initial data as $\ga^J$: \EQ{ \label{eq Ga} i\dot \Ga^J + H \Ga^J = B[z]\Ga^J + \ti N(z,\Ga^J), \pq \Ga^J(s)=\ga^J(s).} Since $\Lim_{J\to J^*}\limsup\|\ga^J\|_{\ST(I)}=0$, Lemma \ref{lem:smallsol} ensures the unique existence of $\Ga_n^J$ for large $J$ and large $n$, satisfying \EQ{ \label{Ga bd} \|\Ga_n^J\|_{\Stz^\te(I_n)}\lec \cN_\te, \pq \|\Ga_n^J-\ga_n^J\|_{\ST(I_n)} + \|\Ga_n^J\|_{[z_n;s_n,\ba T_n;\ba T_n]} \ll\|\ga_n^J\|_{\ST(I_n)}.} We have $\ga^J(s^j)\weakto 0$ for each $j< J$. Moreover, for large $J$ and for $0\le j<J$, \EQ{ \label{Ga vanish st} \Ga^J(s^j)\weakto 0,\pq 0<\forall \ta<\I, \pq \|\Ga^J\|_{\ST(|t-s^j|<\ta)}+\|\ga^J\|_{\ST(|t-s^j|<\ta)}\to 0.} \begin{proof}[Proof of \eqref{Ga vanish st}] Let $X^j$ be a sequence of weighted norms on $I$ defined by \EQ{ \|f\|_{X^j} := \sup_{t\in I} \LR{t-s^j}^{-\de}\|f(t)\|_{(L^4+L^\pp)_x(\R^3)},} where $\de>0$ is fixed such that $0<\de<1/2-3/\pp$. By Lemma \ref{lem:Stzlim}, \eqref{profaprox}, we have $\|\ga^J(t)\|_{L^4_x}\to 0$ locally uniformly around $t=s^j$, which implies $\|\ga^J\|_{X^j}\to 0$, thanks to the decaying weight. Suppose that $\s^j=+$, namely $s^j-s^0\to\I$. Put $F^J:=B[z]\Ga^J+\ti N(z,\Ga^J)$. 
Then by the $L^p$ decay estimate on $e^{itH}P_c$, we have at any $t\in I_n$ satisfying $t>s_n$, \EQ{ \label{weighted Ga est} \|\Ga_n^J(t)-\ga_n^J(t)\|_{L^4_x+L^\pp_x} \pt\lec \int_{s_n^0}^{t}f(t-t')\|F_n^J(t')\|_{L^{4/3}_x\cap L^{\pp'}_x}dt', \prq f(t):=\min(|t|^{-3(1/2-1/\pp)},|t|^{-3/4}), } and, by H\"older and Sobolev, \EQ{ \|F_n^J\|_{L^{4/3}_x\cap L^{\pp'}_x} \pt\lec \|\Ga_n^J\|_{L^4_x+L^\pp_x}(|z_n|^2+\|\Ga_n^J\|_{L^4_x\cap L^2_x}^2) \pr\lec \{\cN_0^2+\|\Ga_n^J\|_{L^\I_tL^4_x(I_n)}^2\}\|\Ga_n^J\|_{L^4_x+L^\pp_x}.} By Lemma \ref{lem:smallsol}, we have \EQ{ \|\Ga_n^J\|_{L^\I_tL^4_x(I_n)} \le \|\ga^J_n\|_{L^\I_tL^4_x(I_n)}+C\cN_{3/4}\|\ga^J_n\|_{\ST(I_n)}(\cN_0+\|\ga_n^J\|_{\ST(I_n)}) } Taking $J$ and $n$ larger if necessary, we may assume that the right hand side is bounded by $\cN_0 \ll 1$. Inserting this to the above estimate and then to \eqref{weighted Ga est} yields \EQ{ \pn\|\Ga_n^J(t)-\ga_n^J(t)\|_{L^4_x+L^\pp_x} \pt\lec \cN_0^2 \|\Ga_n^J\|_{X_n^j}\int_{\R}f(t-t')\LR{t'-s_n^j}^{\de}dt' \pr\lec \cN_0^2\|\Ga_n^J\|_{X_n^j}\LR{t-s_n^j}^{\de},} where in estimating the integral in $t'$, our choice of $\de$ implies $-3(1/2-1/\pp)+\de<-1$, which ensures the integrability for $t'\to\pm\I$. The estimate in the cases $t<s_n$ and $\s^j=-$ is the same, as well as for $j=0$. Thus we obtain \EQ{ \lim\|\Ga^J\|_{X^j} \lec \lim\|\ga^J\|_{X^j}= 0,} for large $J$. By the uniform $H^1$ bound, this is equivalent to $\Ga^J(t)\weakto 0$ weakly in $H^1$, locally uniformly around $s^j$. Interpolation with \eqref{Ga bd} yields the other part. \end{proof} Let us now concentrate on the estimate on the time interval $t>s_n$, assuming \EQ{ \ba T_n-s_n \to\I,\pq(n\to\I)} since otherwise uniform Strichartz bound for $\x_n$ on $t>s_n$ is trivial. The restriction to $t>s_n$ allows us to ignore the profiles with $s^j-s^0\to-\I$. Fix a finite $J\le J^*$, so large that \eqref{Ga vanish st} holds. 
After neglecting those profiles with $s^j-s^0\to-\I$, and reordering the profiles\footnote{This reordering cannot be performed before fixing $J<\I$, since more and more linear profiles may well appear between the previous profiles as $J\to\I$, which is a typical dispersive behavior of the remainder $\ga^J$.}, we may assume for $0<j<J$ \EQ{ s^j-s^{j-1}\to\I.} Since $J$ is now fixed, we can no longer gain a small factor by sending $J\to\I$. Instead, another parameter $0<\ta\to\I$ is introduced, decomposing the time intervals \EQ{ (s,\ba T)=\Cu_{0\le j<J}(s^j_-,s^j_+)\cup(s^j_+,s^{j+1}_-),} where $s^j_\pm\in\R^\N$ are defined for each $j$ by \EQ{ \label{def sjpm} \pt s^j_-:=\max(s^j-\ta,s), \pq s^j_+:=\min(s^j+\ta,\ba T), \pq s^J_\pm:=\ba T.} Henceforth, $o_\ta$ denotes any sequence of real numbers satisfying \EQ{ \label{def ota} \pt X(\ta) = o_\ta \iff \lim_{\ta\to\I}\limsup_{n\to\I} X_n(\ta)=0.} By the uniform integrability \eqref{zej stz} of the linearized profiles, and their separation $s^{j}-s^{j-1}\to\I$, we have for $0\le j< J$ \EQ{ \label{zej st away} \pt \sum_{j<k<J} \|\la^k\|_{\ST(s^0,s^j_+)}+\sum_{0\le k<j}\|\la^k\|_{\ST(s^j_-,\ba T)}=o(1), \pr \|\la^j\|_{\ST(s^0,s^j_-)}+ \|\la^j\|_{\ST(s^j_+,\ba T)}=o_\ta.} The following is the main property of the nonlinear profile decomposition. \begin{lem} \label{lem:Nprofile} In the above setting, let $0\le l\le J$ and suppose that $\x_\I^j$ is scattering with $z^j_\I$ as $t\to\I$ for $0\le j<l$. Let $\ell:=\min(l,J-1)$.
Then \begin{enumerate} \item For $0\le j<J$, we have \EQ{ \|\x-\La^j\|_{[z;s^j_-,s^j_+;\ba T]}+\|\Ga^J\|_{[z;s^j_-,s^j_+;\ba T]}=o(1).} \item For $0\le j\le\ell$, we have \EQ{ \label{laga before} \|(\x-\Ga^J)[z,s^j_-]-\la^{[j,J)}\|_{\ST(s^j_-,\ba T)}+\|\La^j[z,s^j_-]_>-\la^j\|_{\Stz^1(I)}=o_\ta,} \EQ{ \label{La before} \|\La^j\|_{[z;s^0,s^j_-;\ba T]}=o_\ta.} \item For $0\le j<l$, we have \EQ{ \|\x-\Ga^J\|_{[z;s^j_+,s^{j+1}_-;\ba T]}+\|\La^j\|_{[z;s^j_+,\ba T;\ba T]}=o_\ta.} \item For $0<j\le\ell$, we have $\|\x^j_\I-\la^j_\I\|_{\Stz^1(-\I,-\ta)}\to 0$ as $\ta\to\I$. In other words, $\x^j_\I$ scatters as $t\to-\I$ and the scattering profile is $\la^j_\I$. \end{enumerate} Moreover, $\x$ is bounded in $\Stz^1(s,s^l_+)$. \end{lem} \begin{proof} For the first term of (i), the locally uniform convergence of $(z,\x)(t+s^j)\to(z^j_\I,\x^j_\I)$ implies, using Lemma \ref{lem:Stzlim}, \eqref{profaprox}, \EQ{ \|z-z^j_{(\I)}\|_{L^4_t(s^j_-,s^j_+)}+\|\x[z,s^j_-]-\La^j[z,s^j_-]\|_{\ST(s^j_-,s^j_+)}=o(1).} Then by Lemma \ref{lem:over mountain}, we obtain \EQ{ \|\x-\La^j\|_{[z;s^j_-,s^j_+;\ba T]}=o(1).} For the second term of (i), using \eqref{use wave norm}, Lemma \ref{lem:smallsol} and \eqref{Ga vanish st}, we obtain \EQ{ \label{Ga s-2s+} \|\Ga^J[z,s^j_-]-\Ga^J[z,s^j_+]\|_{\ST(s^j_+,\ba T)} \le \|\Ga^J\|_{[z;s^j_-,s^j_+;\ba T]} \ll \|\Ga^J\|_{\ST(s^j_-,s^j_+)}=o(1).} \eqref{La before} follows from \eqref{laga before}, since using \eqref{ker [z]} and \eqref{[z] bd Stz}, we have \EQ{ \|\La^j\|_{[z;s^0,s^j_-;\ba T]} \pt\le \|\La^j-\la^j\|_{[z;s^0,s^j_-;\ba T]} + \|\la^j\|_{[z;s^0,s^j_-;\ba T]} \pr\lec \|\La^j-\la^j\|_{\Stz^1(s^0,s^j_-)} \pn\le\|\La^j[z,s^j_-]_>-\la^j\|_{\Stz^1(I)}.} The second term of (iii) is bounded using Lemma \ref{lem:scatxi}, \eqref{scatt decay} with the scattering of $\x_\I^j$ as $t\to\I$ \EQ{ \|\La^j\|_{[z;s^j_+,\ba T;\ba T]}\le\|\x^j_\I\|_{[z_j;\ta,\I;\I]}=o_\ta.} The remaining estimates are proved by induction on $j$. 
For $j=0$, $\eqref{laga before}=0$ by the definition and $s^0_-=s^0$. Assume \eqref{laga before} for some $j<l$ as an induction hypothesis. By the scattering of $\x^j_\I$ for $t\to\I$, Lemma \ref{lem:scatxi} implies \EQ{ \label{xI at sj+} \|\La^j[z,s^j_+]\|_{\ST(s^j_+,\ba T)} =\|\x_\I^j[z_j,\ta]\|_{\ST(\ta,\ba T-s^j)} = o_\ta.} Combining it with \eqref{laga before}, \eqref{Ga s-2s+} and (i), using \eqref{use wave norm}, we obtain \EQ{ \label{xn at sj+} \pt\|(\x-\Ga^J)[z,s^j_+]-\la^{[j+1,J)}\|_{\ST(s^j_+,\ba T)} \pr\le \|(\x-\Ga^J)[z,s^j_-]-\la^{[j,J)}\|_{\ST(s^j_+,\ba T)} +\|\La^j[z,s^j_-]-\la^j\|_{\ST(s^j_+,\ba T)} \prQ+\|\Ga^J[z,s^j_-]-\Ga^J[z,s^j_+]\|_{\ST(s^j_+,\ba T)} + \|\La^j[z,s^j_+]\|_{\ST(s^j_+,\ba T)} \prQ+\|(\x-\La^j)[z,s^j_-]-(\x-\La^j)[z,s^j_+]\|_{\ST(s^j_+,\ba T)} \pr\le o_\ta+\|\x-\La^j\|_{[z;s^j_-,s^j_+;\ba T]} = o_\ta.} Restricting it and using \eqref{zej st away}, we obtain \EQ{ \pt\|(\x-\Ga^J)[z,s^j_+]\|_{\ST(s^j_+,s^{j+1}_-)} = o_\ta.} This and the smallness of $\Ga_n^J$ in \eqref{Ga vanish st} allow us to apply Lemma \ref{lem:over river} to the difference of $\x_n$ and $\Ga_n^J$ for large $\ta$ and large $n$, with the same soliton part $z_n$. Then the above decay of the linearized solutions leads to the estimate on the first term of (iii): \EQ{ \|\x-\Ga^J\|_{[z;s^j_+,s^{j+1}_-;\ba T]}=o_\ta.} If $k:=j+1<J$, then combining the above with \eqref{xn at sj+}, using \eqref{use wave norm}, we obtain \EQ{ \pt \|(\x-\Ga^J)[z,s^k_-]-\la^{[k,J)}\|_{\ST(s^k_-,\ba T)} \pr\le \|(\x-\Ga^J)[z,s^j_+]-\la^{[k,J)}\|_{\ST(s^k_-,\ba T)}+\|\x-\Ga^J\|_{[z;s^j_+,s^k_-,\ba T]} = o_\ta,} which is the first term of \eqref{laga before} for $k$. Restricting the interval to $(s^k_-,s^k)$, we may discard $\la^{[k+1,J)}$ by \eqref{zej st away}, as well as $\Ga^J[z,s^k_-]$ by \eqref{Ga vanish st}, where we are allowed to linearize $\Ga^J$ by $[z,s^k_-]$ thanks to Lemma \ref{lem:smallsol}(II). 
Thus we obtain \EQ{ \|\x[z,s^k_-]-\la^k\|_{\ST(s^k_-,s^k)}=o_\ta.} Since $\x(s^k_-)-\la^k(s^k_-) \weakto \x_\I^k(-\ta)-\la_\I^k(-\ta)$, by Lemma \ref{lem:Stzlim}, \eqref{profaprox}, we obtain \EQ{ o_\ta \pt=\|(\x_\I^k(-\ta)-\la_\I^k(-\ta))[z,s^k-\ta]\|_{\ST(s^k-\ta,s^k)} \pr=\|(\x_\I^k-\la_\I^k)[z_k,-\ta]\|_{\ST(-\ta,0)}.} Taking the limit and using Lemma \ref{lem:Stzlim}, \eqref{aproxprof} with $z_k\to z_\I^k$, \EQ{ \label{x1-z1 fini} \lim_{\ta\to\I} \|\x_\I^k[z_\I^k,-\ta]-\la_\I^k\|_{\ST(-\ta,0)}=0.} Since $\la_\I^k\in\ST(-\I,0)$ by Lemma \ref{lem:Stz}, there is $\ta_*>0$ such that \EQ{ \|\la_\I^k\|_{\ST(-\I,-\ta_*)} + \sup_{\ta>\ta_*}\|\x_\I^k[z_\I^k,-\ta]-\la_\I^k\|_{\ST(-\ta,0)} \ll\cN_{1/2}^{-3}.} For $\ta>\ta_*$, we can apply Lemma \ref{lem:smallsol} to $\x_\I^k$ from $t=-\ta$, thereby obtain \EQ{ \|\x_\I^k\|_{\ST(-\ta,-\ta_*)} \le 2\|\x_\I^k[z_\I^k,-\ta]\|_{\ST(-\ta,-\ta_*)} \ll \cN_{1/2}^{-3}.} Sending $\ta\to\I$ implies $\|\x_\I^k\|_{\ST(-\I,-\ta_*)}<\I$, so by Lemma \ref{lem:scatxi}, $\x_\I^k$ scatters with $z_\I^k$ as $t\to-\I$. Hence there exists $\fy_-^k\in H^1$ such that \EQ{ \lim_{\ta\to\I}\|\x_\I^k[z_\I^k,-\ta]-\fy_-^k[z_\I^k,0]\|_{\Stz^1(-\I,0)}= 0.} Adding this and \eqref{x1-z1 fini} yields \EQ{ \|(\fy_-^k-\la_\I^k)[z_\I^k,0]\|_{\ST(-\I,0)}=0,} which implies $\fy_-^k=\la_\I^k(0)$, hence $\|\x_\I^k-\la_\I^k\|_{\Stz^1(-\I,-\ta)}\to 0$ as $\ta\to\I$. Thus we obtain (iv). Since $\x_\I^k=\La^k(t+s^k)$ and $\la_\I^k=\la^k(t+s^k)+o(1)$ in $\Stz^1(I-s^k)$, we obtain \EQ{ \|\La^k[z,s^k_-]_>-\la^k\|_{\Stz^1(I)} \lec \|\La^k-\la^k\|_{\Stz^1(s^0,s^k_-)}=o_\ta,} which is the second term of \eqref{laga before} for $k=j+1$, hence the induction is complete, finishing the proof for (ii)-(iv). 
Since the profiles $\La^j$ and the remainder $\Ga^J$ are of size $o_\ta$ on each other's intervals, we obtain, using the subadditivity: Lemma \ref{lem:subadd}, as well as the monotonicity \eqref{[z] mono}, \EQ{ \pt\|\x-\La^{[0,\ell]}-\Ga^J\|_{[z;s,s^l_+;\ba T]} \pr\le \sum_{0\le j\le\ell} \|\x-\La^j\|_{[z;s^j_-,s^j_+;\ba T]} + \sum_{0\le j<l} \|\x-\Ga^J\|_{[z;s^j_+,s^{j+1}_-;\ba T]} + o_\ta \pn\le o_\ta.} Since the left hand side is non-decreasing in $\ta$, we deduce that \EQ{ \label{xn approx} \ti\x^\ell:=\La^{[0,\ell]}+\la^{(\ell,J)}+ \Ga^J \implies \|\x-\ti\x^\ell\|_{[z;s,s^l_+;\ba T]}=o(1),} where the linearized solution $\la^{(\ell,J)}$ is added for free, thanks to \eqref{ker [z]}. Using (iv) together with \eqref{zej stz}, as well as the definition of $\Ga^J$ and $\ga^J$, we have \EQ{ \ti\x^\ell(s)\pt=\la^{[0,\ell]}(s)+o(1)+\la^{(\ell,J)}(s)+\ga^J(s)=\x(s)+o(1)\IN{H^1}.} Hence \eqref{xn approx} with \eqref{use wave norm} implies \EQ{ \label{xn approx ST} \|\x-\ti\x^\ell\|_{\ST(s,s^l_+)} \le \|\x-\ti\x^\ell\|_{[z;s,s^l_+ ,\ba T]}+o(1) = o(1),} so we obtain, using \eqref{zej st away} as well, \EQ{ \pt\|\x\|_{\ST(s,s^l_+)} \le \sum_{0\le j\le\ell} \|\La^j\|_{\ST(s,s^l_+)}+\|\Ga^J\|_{\ST(s,\ba T)}+o(1),} where each term on the right is bounded by \EQ{ \pt \|\La^0\|_{\ST(s,s^l_+)}\le \|\x^0_\I\|_{\ST(0,\I)}<\I, \pr 1\le j<\ell \implies \|\La^j\|_{\ST(s,s^l_+)} \le \|\x^j_\I\|_{\ST(\R)}<\I, \pr j=\ell=l<J \implies \|\La^j\|_{\ST(s,s^l_+)} \le \|\x^l_\I\|_{\ST(-\I,\ta)}<\I, \pr \|\Ga^J\|_{\ST(s,\ba T)} \le 2\|\ga^J\|_{\ST(s,\ba T)} \ll 1.} Therefore $\x$ is bounded in $\ST(s,s^l_+)$. It is easily upgraded to a uniform bound in $\Stz^1(s,s^l_+)$ as follows. Let $s=t_0<t_1<\cdots<t_N=s^l_+$ be such that $\|\x\|_{\ST(t_{a-1},t_a)}\le \de$ and $N\de\le \|\x\|_{\ST(s,s^l_+)}+1$ for some small $\de>0$.
By the Strichartz estimate, we have for each $a$ \EQ{ \|\x\|_{\Stz^1(t_a,t_{a+1})} \lec \|\x(t_a)\|_{H^1}+\|\ti N(z,\x)\|_{L^2_tH^1_{6/5}(t_a,t_{a+1})},} and the nonlinear term is estimated as before by H\"older \EQ{ \|\ti N(z,\x)\|_{L^2_tH^1_{6/5}(t_a,t_{a+1})} \pt\lec \|\Phi[z]\|_{L^\I_tL^3_x}\|\x\|_{L^4_tH^1_3}\|\x\|_{\ST}+ \|\x\|_{L^\I_tH^1_x}\|\x\|_{\ST}^2 \pr\lec (\cN_0\de+\de^2)\|\x\|_{\Stz^1(t_a,t_{a+1})}.} Hence choosing $\de>0$ small enough, we obtain \EQ{ \|\x\|_{\Stz^1(t_a,t_{a+1})} \le C\|\x(t_a)\|_{H^1} \le C\|\x\|_{\Stz^1(t_{a-1},t_a)}} for some absolute constant $C>1$, which leads by induction to \EQ{ \|\x\|_{\Stz^1(s,s^l_+)} \le C^N\|\x(s)\|_{H^1} \le C^{C\|\x\|_{\ST(s,s^l_+)}}\cN_1} where the right hand side is bounded as shown above. \end{proof} The same argument works in the other time direction $(\U T,s)$, under the assumption that $\x^j_\I$ scatters with $z^j_\I$ as $t\to-\I$ for $\s^j=-$ and for $j=0$. In order to consider the whole interval $(\U T,\ba T)$, we should assume the scattering of $\x^j_\I$ as $t\to\s^j\I$ for $\s^j=\pm$, and of $\x^0_\I$ as $t\to\pm\I$. A more precise statement is as follows. \begin{thm} \label{thm:NPD} Let $s\in I\in\sI^\N$ and $C(I;H^1[\mu_p])\ni u=\Phi[z]+R[z]\x$ be a sequence of solutions for \eqref{NLSP}, written in the coordinate in Lemma \ref{lem:decop2Phi}. Suppose that $u(s)$ is bounded in $H^1_x$ and let \EQ{ \x[z,s]=\sum_{0\le j<J}\la^j + \ga^J, \pq \la^j=\x[z,s][z,s^j]\II} be the linearized profile decomposition in Lemma \ref{lem:Lprof} (for a subsequence). If a finite $J\le J^*$ is fixed large enough, then we have the following. Suppose that $\sup_n\sup_{t\in I_n'}\|u_n(t)\|_{H^1_x}<\I$ for a sequence of subintervals $I'_n\subset I_n$ satisfying $s\in I'$, and let (after passing to a further subsequence if necessary) \EQ{ t\in I^j_\I:=\Cu_{n\in\N}\Ca_{m\ge n}(I'_m-s^j_m) \implies (z^j_\I,\x^j_\I)(t):=\Lim_{n\to\I}(z_n,\x_n)(s^j_n+t)} be the weak limit in $\C\times H^1_r$.
Assume that $\x^j_\I$ scatters with $z^j_\I$ as $t\to\s\I$ for each $j<J$ and $\s\in\{+,-\}$ satisfying $\s I^j_\I\supset[0,\I)$ and $\lim\s(s-s^j)\le 0$. Then $\sup_n\|\x_n\|_{\Stz^1(I'_n)}<\I$. Moreover, for each $j<J$ and $\s\in\{+,-\}$ satisfying $0\in I^j_\I$ and $\s(s-s^j)\to\I$, $\x^j_\I$ scatters with $z^j_\I$ as $t\to\s\I$ and \EQ{ \label{backscat} \lim_{n,T\to\I}\|\x^j_\I-\la^j_n(t+s^j_n)\|_{\Stz^1(\s(T,\I)\cap(I_n'-s^j_n))}=0.} \end{thm} The above statement has nothing to do with the excited state energy, and it is applicable even if some nonlinear profile is not scattering, provided the subintervals $I'_n$ are chosen appropriately. Note also that $I'_n$ can be chosen depending on the linearized profile decomposition, after fixing $J$. See the next section. \section{Scattering below the excited energy} We are now ready to prove the scattering to the ground states. For each $\mu>0$ and $A\in\R$, let $\GS(\mu,A)$ be the set of all global solutions $u$ of \eqref{NLSP} satisfying \EQ{ \label{def GS} \bM(u)\le\mu,\pq \bE(u)\le A.} Let \EQ{ \label{def STX} \pt ST(\mu,A):=\sup\{\|\x\|_{\ST(0,\I)}\mid \Phi[z]+R[z]\x\in \GS(\mu,A)\}, \pr \X:=\{(\mu,A)\mid ST(\mu,A) <\I\}.} Introduce the following partial orders in $\R^2$ \EQ{ \label{def orders} \pt(\mu_1,A_1)\le(\mu_2,A_2) \iff \mu_1\le\mu_2 \tand A_1\le A_2, \pr(\mu_1,A_1)\ll(\mu_2,A_2) \iff \mu_1<\mu_2 \tand A_1<A_2.} The definition of $\X$ implies that for any $(\mu_j,A_j)\in(0,\I)\times\R$, \EQ{ (\mu_1,A_1)\le(\mu_2,A_2) \text{ and } (\mu_2,A_2)\in\X \implies (\mu_1,A_1)\in\X.} The goal of this section is to prove that for $0<\mu\ll 1$ and $A\in\R$, \EQ{ (\mu,A)\in\X \iff A<\sE_1(\mu).} The implication $\implies$ is trivial in the defocusing case, and obvious from the excited states $\Soli_1$ in the focusing case. So the question is the $\follows$ part. For small $H^1$ data, we have the scattering to $\Soli_0$ by \cite{gnt}, together with a uniform bound on the Strichartz norms of $\x$ in terms of $\|u(0)\|_{H^1_x}$.
In fact, Lemma \ref{lem:scatxi} implies that $H^{1/2}_x$ smallness is enough. In particular, using Lemma \ref{lem:gs} and interpolation, we deduce that $(\mu,A)\in\X$ for sufficiently small $A$ for each fixed $\mu$, and for sufficiently small $\mu$ for each fixed $A$. Hence $\X$ contains a neighborhood of both $\{\mu=0\}$ and $\{A=0\}$. Suppose that there exists $(\mu_0,A_0)\in(0,\I)^2\setminus \X$ satisfying $A_0<\sE_1(\mu_0)$ and $\mu_0\ll 1$. Put \EQ{ \label{def EM*} E_*:=\sup \{A<A_0 \mid (\mu_0,A)\in\X\}, \pq M_*:=\sup\{\mu<\mu_0 \mid (\mu,E_*)\in\X\}.} Then \EQ{ \pt 0<M_*\le \mu_0,\pq 0<E_*\le A_0<\sE_1(\mu_0)\le \sE_1(M_*),} and $(M_*,E_*)$ is minimal on $\p\X$ in the sense that \EQ{ (\mu_1,A_1)<(M_*,E_*)\ll (\mu_2,A_2) \implies (\mu_1,A_1)\in\X,\pq (\mu_2,A_2)\not\in\X.} In particular, there is a sequence $(\R^2)^\N\ni(M,E)\to(M_*,E_*)$ and a sequence of solutions $u=\Phi[z]+R[z]\x\in\GS(M,E)$ such that \EQ{ \label{minimizing xn} M \le \mu_0+o(1),\pq E<\sE_1(M),\pq \|\x\|_{\ST(0,\I)}\to \I.} See \eqref{conve seq} for the notation of sequences without index. The mass-energy constraint together with Lemma \ref{lem:gs} implies that $u$ is bounded in $H^1_x$, and so is $\x$, while $|z|\lec\mu_0\ll 1$. The linearized profile decomposition: Lemma \ref{lem:Lprof} yields \EQ{ \pt \x[z,0]=\sum_{0\le j<J} \la^j + \ga^J,\pq \la^j=\x[z,0][z,s^j]\II,} for each $J<J^*$. Let \EQ{ (z_\I^j,\x_\I^j):=\lim (z,\x)(t+s^j), \pq u_\I^j:=\Phi[z_\I^j]+R[z_\I^j]\x_\I^j} be the weak limits, solving respectively \eqref{eq zxi} and \eqref{NLSP}. The weak convergence implies \EQ{ \label{limit ME} \bM(u_\I^j)\le M_*, \pq \bE(u_\I^j)\le E_*.} Fix a finite $J\le J^*$ so large that we can use Theorem \ref{thm:NPD}. Since $\|\x\|_{\ST(0,\I)}\to\I$, the assumption of Theorem \ref{thm:NPD} must fail for $I':=[0,\I)^\N\ni s:=0$. Hence there exists $l<J$ such that $s^l\ge 0$ and $\|\x^l_\I\|_{\ST(0,\I)}=\I$.
We may choose the minimal $l$ in the sense that $s^j-s^l\to\I$ for all $j\not=l$ satisfying $s^j\ge 0$ and $\|\x^j_\I\|_{\ST(0,\I)}=\I$. Then \eqref{limit ME} together with the minimality of $(M_*,E_*)$ implies that $u_\I^l$ is a minimal solution which does not scatter to $\Soli_0$ as $t\to\I$, \EQ{ (M_*,E_*)=(\bM(u_\I^l),\bE(u_\I^l)),} and so the convergence is strong in $H^1_x$ for $\x(t+s^l)\to\x_\I^l(t)$ and $u(t+s^l)\to u_\I^l(t)$. In particular, if $l=0$ then $u(0)\to u_\I^0(0)$ strongly in $H^1_x$. If $l>0$, then $s^l\to\I$ and the minimality of $l$ implies that for each $j\not=l$, either $s^j\to-\I$, $s^j-s^l\to\I$ or $\|\x^j_\I\|_{\ST(0,\I)}<\I$, so that we can apply Theorem \ref{thm:NPD} to $I':=[0,s^l]$. Then by \eqref{backscat}, we have \EQ{ \pt M_*=\bM(u^l_\I)=\bM(\Phi[z^l_\I(-s^l)])+\bM(\la^l(0))+o(1), \pr E_*=\bE(u^l_\I)=\bE(\Phi[z^l_\I(-s^l)])+\bH^0(\la^l(0))+o(1),} using that $R[z]-1$ is compact on $H^1$ and that the scattering solution $\x^l_\I$ is weakly vanishing in $H^1$ as $t\to-\I$. Then the smallness of the ground states implies \EQ{ \label{min ele ME} \bH^0(\la^l(0))\ge E_*-C\mu_0+o(1).} The same argument as above works if the assumption $\|\x\|_{\ST(0,\I)}\to\I$ is replaced with $\|\x\|_{\ST(0,T)}\to\I$ for some sequence $T\to\I$. Similarly, if it is replaced with $\|\x\|_{\ST(T,0)}\to\I$ for some sequence $T\to-\I$, then the same argument works in the negative time direction. Next we prove the precompactness of the orbit of a minimal solution. Henceforth, the index $n$ of sequences is made explicit in order to avoid confusion. Let $u=\Phi[z]+R[z]\x\in\GS(M_*,E_*)$ be a global solution satisfying \EQ{ (\bM(u),\bE(u))=(M_*,E_*), \pq \|\x\|_{\ST(0,\I)}=\I.} Then for any sequence $0<t_n\to\I$, the above argument applies to $u_n:=u(t+t_n)$ on $I_n:=(-t_n,\I)\to\R$, both with $I_n':=(-t_n,0]$ and with $I_n':=[0,\I)$, because $\|\x_n\|_{\ST(-t_n,0)}=\|\x\|_{\ST(0,t_n)}\to\I$ and $\|\x_n\|_{\ST(0,\I)}=\|\x\|_{\ST(t_n,\I)}=\I$.
If $u_\I^0$ is the minimal element in either case, then $u_n(0)=u(t_n)$ is strongly convergent. Otherwise, we get \eqref{min ele ME} for some $l=l_0>0$ in $I_n'=(-t_n,0]$ and for another $l=l_1>0$ in $I_n'=[0,\I)$, while $u^0_\I$ is scattering to $\Soli_0$ as $t\to\pm\I$. Then $\bE(u^0_\I)$ can be negative only through the soliton component, hence $\bE(u^0_\I)\gec -\mu_0$. Putting them into \eqref{ME decop} yields \EQ{ E_* \ge \bE(u^0_\I)+\bH^0(\la^{l_0}(0))+\bH^0(\la^{l_1}(0))+o(1) \ge 2E_*-C\mu_0+o(1),} so $E_*\lec\mu_0$, contradicting the small data scattering if $\mu_0$ is small enough. Hence $u(t_n)$ converges strongly in $H^1_x$ after extracting a subsequence. In other words, \EQ{ \{u(t) \mid t\ge 0\} \subset H^1_r(\R^3)} is precompact for such a minimal solution $u$. By Lemma \ref{lem:gs}, we have a lower bound $\bK_2(u(t))\ge\ka_*:= \ka(M_*,\sE_1(M_*)-E_*)>0$. The precompactness implies that there is $R\gg 1$ such that \EQ{ \sup_{t>0}\int_{|x|>R}[|\na u|^2+|u|^2+|u|^4]dx \ll \ka_*.} Then the saturated virial identity as in Section \ref{ss:bup} implies \EQ{ \p_t\LR{R f_Ru|iu_r}> \ka_* > 0,} for all $t>0$, which obviously contradicts the boundedness of $\LR{R f_Ru|iu_r}$ in $t>0$. This concludes the scattering to the ground states $\Soli_0$ in (ii) of Lemma \ref{lem:gs}, and thus the proof of Theorem \ref{main}.
\section{Introduction \label{intro}} The quantum walk (QW) is the quantum counterpart of the classical random walk. QWs have been widely investigated for the last decade, mainly in connection with quantum information science. Reviews and books on QWs include, for example, Kempe \cite{Kempe2003}, Kendon \cite{Kendon2007}, Venegas-Andraca \cite{VAndraca2008, Venegas2013}, Konno \cite{Konno2008b}, Cantero et al. \cite{CanteroEtAl2013}, Manouchehri and Wang \cite{MW2013}, and Portugal \cite{P2013}. The properties of QWs on graphs, including cycles, were studied by Aharonov et al. \cite{AharonovEtAl2001}. In this paper, we consider two-state QWs on a cycle $C_N$ with $N$ vertices, where $C_N = \{0,1, \ldots, N-1 \}$. In particular, we focus on the periodicity of the Hadamard walk on $C_N$. We now present a brief definition of the general two-state QW on $C_N$, which includes the Hadamard walk as a special case. The discrete-time QW is a quantum version of the classical random walk with an additional degree of freedom called chirality. The chirality takes the values left and right, and it indicates the direction of the motion of the walker. At each time step, if the walker has the left chirality, it moves one step to the left, and if it has the right chirality, it moves one step to the right. Let us define \begin{align*} \ket{L} = \begin{bmatrix} 1 \\ 0 \end{bmatrix}, \qquad \ket{R} = \begin{bmatrix} 0 \\ 1 \end{bmatrix}, \end{align*} where $L$ and $R$ refer to the left and right chirality states, respectively. The time evolution of the walk is determined by $U \in \mbox{\boldmath{U}}(2)$, where $\mbox{\boldmath{U}}(n)$ denotes the set of $n \times n$ unitary matrices and \begin{align*} U = \begin{bmatrix} a & b \\ c & d \end{bmatrix}. \end{align*} To define the dynamics of our model, we divide $U$ into two matrices: \begin{eqnarray*} P = \begin{bmatrix} a & b \\ 0 & 0 \end{bmatrix}, \quad Q = \begin{bmatrix} 0 & 0 \\ c & d \end{bmatrix}, \end{eqnarray*} with $U =P+Q$.
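As a quick numerical illustration of this splitting (a minimal sketch in NumPy; the helper name \texttt{split\_coin} is ours, not from the paper), one can build $P$ and $Q$ from any coin $U$ and check $U=P+Q$:

```python
import numpy as np

def split_coin(U):
    """Split a 2x2 coin U into P (keeps the top row, left-moving part)
    and Q (keeps the bottom row, right-moving part), so that U = P + Q."""
    P = np.zeros_like(U)
    Q = np.zeros_like(U)
    P[0, :] = U[0, :]  # P = [[a, b], [0, 0]]
    Q[1, :] = U[1, :]  # Q = [[0, 0], [c, d]]
    return P, Q

# a generic real unitary coin as an example
a, b = np.cos(0.3), np.sin(0.3)
U = np.array([[a, b], [b, -a]])
assert np.allclose(U @ U.T, np.eye(2))  # U is unitary
P, Q = split_coin(U)
assert np.allclose(P + Q, U)
assert np.allclose(P[1], 0) and np.allclose(Q[0], 0)
```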
The important point is that $P$ (resp. $Q$) represents the walker moving to the left (resp. right) from any position at each time step. The QW considered here is \begin{align} U = H = \frac{1}{\sqrt{2}} \begin{bmatrix} 1 & 1 \\ 1 & - 1 \end{bmatrix} \label{akina}. \end{align} This model is called {\it the Hadamard walk}, which has been extensively and deeply investigated in the study of QWs. Let $\Psi_n$ denote the amplitude at time $n$ of the QW on $C_N$: \begin{align*} \Psi_{n} &= {}^T\! \left[\Psi_{n}^{L}(0),\Psi_{n}^{R}(0),\Psi_{n}^{L}(1),\Psi_{n}^{R}(1), \ldots, \Psi_{n}^{L}(N-1),\Psi_{n}^{R}(N-1) \right], \\ &= {}^T\!\left[\Psi_{n}(0),\Psi_{n}(1), \cdots, \Psi_{n}(N-1) \right], \\ &= {}^T\!\left[ \begin{bmatrix} \Psi_{n}^{L}(0)\\ \Psi_{n}^{R}(0)\end{bmatrix},\begin{bmatrix} \Psi_{n}^{L}(1)\\ \Psi_{n}^{R}(1)\end{bmatrix}, \ldots, \begin{bmatrix} \Psi_{n}^{L}(N-1)\\ \Psi_{n}^{R}(N-1)\end{bmatrix} \right] \in(\mathbb{C}^{2})^{N}, \end{align*} where $\mathbb{C}$ denotes the set of complex numbers, $T$ denotes the transpose operation, and $\Psi_n(x)={}^T \> [\Psi_n^{L}(x),\> \Psi_n^{R}(x)] \>\> (x \in C_N)$ is the amplitude at time $n$ and position $x$. Now we introduce the following $2N \times 2N$ unitary matrix: \begin{align*}\ U_N ^{(s)}=\begin{bmatrix} O&P&O&O&\cdots&O&Q\\ Q&O&P&O&\cdots&O&O\\ O&Q&O&P&\cdots&O&O\\ O&O&Q&O&\cdots&O&O\\ \vdots&\vdots&\vdots&\vdots&\ddots&\vdots&\vdots\\ O&O&O&O&\cdots&O&P\\ P&O&O&O&\cdots&Q&O \end{bmatrix}\;\;\; \mbox{with} \;\;\; O=\begin{bmatrix} 0&0\\ 0&0 \end{bmatrix}. \end{align*} For $N=2$, following Dukes \cite{Dukes2014}, we put \begin{align*} U_2 ^{(s)} = \begin{bmatrix} O & U \\ U & O \end{bmatrix}. \end{align*} Then the state of the QW at time $n$ is given by \begin{align} \Psi_{n}=(U_N^{(s)})^{n}\Psi_{0}, \label{sankeien} \end{align} for any $n\geq0$. Put $\mathbb{R}_{+}=[0,\infty)$.
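The block-circulant structure above is easy to assemble numerically. The following sketch (NumPy; the function name \texttt{walk\_matrix} is our own) builds $U_N^{(s)}$ for the Hadamard walk and checks that it is unitary; for $N=2$ the two wrap-around blocks coincide, so the same construction reproduces Dukes' $U_2^{(s)}$ as well:

```python
import numpy as np

H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)  # Hadamard coin
P = np.vstack([H[0], [0.0, 0.0]])                     # left-moving part
Q = np.vstack([[0.0, 0.0], H[1]])                     # right-moving part

def walk_matrix(N):
    """Assemble the 2N x 2N evolution matrix U_N^(s) on the cycle C_N:
    block (x, x+1 mod N) is P and block (x, x-1 mod N) is Q."""
    U = np.zeros((2 * N, 2 * N))
    for x in range(N):
        U[2*x:2*x+2, 2*((x + 1) % N):2*((x + 1) % N) + 2] += P
        U[2*x:2*x+2, 2*((x - 1) % N):2*((x - 1) % N) + 2] += Q
    return U

U4 = walk_matrix(4)
assert np.allclose(U4 @ U4.T, np.eye(8))         # unitarity (real entries)
assert np.allclose(walk_matrix(2)[0:2, 2:4], H)  # N=2: blocks merge to U = H
```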
Here we introduce a map $\phi:(\mathbb{C}^{2})^{N} \rightarrow \mathbb{R}_{+}^{N}$ such that if \begin{align*} \Psi= {}^T\!\left[ \begin{bmatrix} \Psi^{L}(0)\\ \Psi^{R}(0)\end{bmatrix}, \begin{bmatrix} \Psi^{L}(1)\\ \Psi^{R}(1)\end{bmatrix}, \cdots, \begin{bmatrix} \Psi^{L}(N-1)\\ \Psi^{R}(N-1)\end{bmatrix} \right] \in (\mathbb{C}^{2})^{N}, \end{align*} then \begin{align*} \phi(\Psi) = {}^T\! \left[ |\Psi^{L}(0)|^2 + |\Psi^{R}(0)|^2, |\Psi^{L}(1)|^2 + |\Psi^{R}(1)|^2, \ldots, |\Psi^{L}(N-1)|^2 + |\Psi^{R}(N-1)|^2 \right] \in \mathbb{R}_{+}^{N}. \end{align*} That is, for any $x \in C_N$, \begin{align*} \phi(\Psi) (x) = |\Psi^{L}(x)|^2 + |\Psi^{R}(x)|^2. \end{align*} Sometimes we identify $\phi(\Psi(x))$ with $\phi(\Psi) (x)$. Moreover we define the measure of the QW at position $x$ by \begin{align*} \mu(x)=\phi(\Psi(x)) \quad (x \in C_N). \end{align*} The probability that the quantum walker $X_n= X_n ^{\varphi}$ at time $n$, starting from $0$, exists at position $x \in C_N$ is defined by \begin{align*} P \left( X_n = x \right) = P \left( X_n ^{\varphi} = x \right) = \phi \left( \left( U_N^{(s)} \right)^n \Psi_0 ^{\varphi} \right) (x). \end{align*} Here the initial state $\Psi_0 ^{\varphi}$ is given by \begin{align*} \Psi_0 ^{\varphi} = {}^T\!\left[ \begin{bmatrix} \Psi_0 ^{L}(0)\\ \Psi_0^{R}(0)\end{bmatrix}, \begin{bmatrix} \Psi_0^{L}(1)\\ \Psi_0^{R}(1)\end{bmatrix}, \cdots, \begin{bmatrix} \Psi_0^{L}(N-1)\\ \Psi_0^{R}(N-1)\end{bmatrix} \right] = {}^T\!\left[ \varphi, \begin{bmatrix} 0\\ 0 \end{bmatrix}, \cdots, \begin{bmatrix} 0\\ 0 \end{bmatrix} \right], \end{align*} where $\varphi = {}^T\![\alpha, \beta] \in \mathbb{C}^2$ with $|\alpha|^2+|\beta|^2=1$. We put \begin{align*} {\cal N} = \left\{ n \ge 1 : \left( U_N^{(s)} \right)^{n} = I_{2N} \right\}. \end{align*} If ${\cal N} \not= \emptyset$, the period $T_N (< \infty)$ is defined by $T_N = \min {\cal N}$. If ${\cal N} = \emptyset$, then we say that the QW does not have any period and write $T_N = \infty$.
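With $U_N^{(s)}$ in hand, the period $T_N$ can be probed by brute force for small $N$: iterate the matrix and look for the first power equal to $I_{2N}$ up to some cutoff (a sketch with our own helper names and a numerical tolerance; a failure up to the cutoff is of course only evidence, not a proof, that $T_N=\infty$):

```python
import numpy as np

def hadamard_walk_matrix(N):
    """2N x 2N evolution matrix U_N^(s) of the Hadamard walk on C_N."""
    H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
    P = np.vstack([H[0], [0.0, 0.0]])
    Q = np.vstack([[0.0, 0.0], H[1]])
    U = np.zeros((2 * N, 2 * N))
    for x in range(N):
        U[2*x:2*x+2, 2*((x + 1) % N):2*((x + 1) % N) + 2] += P
        U[2*x:2*x+2, 2*((x - 1) % N):2*((x - 1) % N) + 2] += Q
    return U

def period(N, cutoff=500, tol=1e-8):
    """Smallest n >= 1 with (U_N^(s))^n = I_{2N}, or None up to the cutoff."""
    U = hadamard_walk_matrix(N)
    M = np.eye(2 * N)
    for n in range(1, cutoff + 1):
        M = M @ U
        if np.max(np.abs(M - np.eye(2 * N))) < tol:
            return n
    return None

assert [period(N) for N in (2, 4, 8)] == [2, 8, 24]  # finite periods
assert all(period(N) is None for N in (3, 5, 6, 7))  # no small period
```

The finite values agree with $T_2=2$, $T_4=8$, $T_8=24$ due to Dukes \cite{Dukes2014}, recalled below, and the absence of any period up to the cutoff for the other small $N$ is consistent with the main result of this paper.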
Let the eigenvalues of $U_N^{(s)}$ be $\{\lambda_k : k=0,1, \ldots , 2N-1 \}$. Remark that $\displaystyle{\left(U_N^{(s)}\right)^{n} = I_{2N}}$ if and only if $\lambda_k^n =1 \> (k=0,1, \ldots , 2N-1).$ Dukes \cite{Dukes2014} studied the periodicity of a class of two-state QWs on $C_N$ by using the property of the eigenvalues $\lambda_k (k=0,1,\ldots, 2N-1)$ of $U_N ^{(s)}$: if the period $T_N$ is finite, then $\lambda_j^{T_N}=1$ for any $j.$ For the Hadamard walk, he showed $T_2 =2, \> T_3 > 30, \> T_4=8, \> T_8=24.$ In this paper, we prove that $T_N = \infty$ except for $N=2,4,8$ (Theorem \ref{kahun3}). The rest of this paper is organized as follows. In Sect. \ref{results}, we present results on our model. Sections \ref{kahun} and \ref{kahunda} are devoted to the proofs of Lemma \ref{kahun1} and Theorem \ref{kahun3}, respectively. In Sect. \ref{sum}, we summarize our result and give a future problem. \section{Results \label{results}} This section gives our results. We begin with the $N=3$ case. Then \begin{align*} U_3 ^{(s)}= \begin{bmatrix} O&P&Q\\ Q&O&P\\ P&Q&O \end{bmatrix}. \end{align*} So we have \begin{align*} \left(U_3 ^{(s)}\right)^2 &= \begin{bmatrix} PQ+QP&Q^2&P^2\\ P^2&PQ+QP&Q^2\\ Q^2&P^2&PQ+QP \end{bmatrix}, \\ \left(U_3 ^{(s)}\right)^3 &= \begin{bmatrix} P^3+Q^3&PQP+QP^2+P^2Q&PQ^2+QPQ+Q^2P\\ PQ^2+QPQ+Q^2P&P^3+Q^3&PQP+QP^2+P^2Q\\ PQP+QP^2+P^2Q&PQ^2+QPQ+Q^2P&P^3+Q^3 \end{bmatrix}. \end{align*} Let $A(k,l) \> (1 \le k, l \le m)$ denote the $(k,l)$ component of the matrix $A$. For example, $\displaystyle{\left( U_3 ^{(s)} \right)^3} (1,2)=PQP+QP^2+P^2Q$. In order to compute $\displaystyle{\left( U_N ^{(s)} \right)^n (k,l)}$, we use the relations $P^2 = aP, \> Q^2=dQ.$ Moreover we introduce the following $2 \times 2$ matrices, $R$ and $S$: \begin{align*} R= \begin{bmatrix} c & d \\ 0 & 0 \end{bmatrix}, \quad S= \begin{bmatrix} 0 & 0 \\ a & b \end{bmatrix}.
\end{align*} Then we obtain the next table of products of matrices, $P, \> Q, \> R,$ and $S$: \par \ \par \begin{center} \begin{tabular}{c|cccc} & $P$ & $Q$ & $R$ & $S$ \\ \hline $P$ & $aP$ & $bR$ & $aR$ & $bP$ \\ $Q$ & $cS$ & $dQ$& $cQ$ & $dS$ \\ $R$ & $cP$ & $dR$& $cR$ & $dP$ \\ $S$ & $aS$ & $bQ$ & $aQ$ & $bS$ \label{pqrs} \end{tabular} \end{center} \par \ \par\noindent where $PQ=bR$, for example. In particular, for the Hadamard walk case, we have \par \ \par \begin{center} \begin{tabular}{c|rrrr} & \multicolumn{1}{c}{$P$} & \multicolumn{1}{c}{$Q$} & \multicolumn{1}{c}{$R$} & \multicolumn{1}{c}{$S$} \\ \hline $P$ & $P/\sqrt{2}$ & $R/\sqrt{2}$ & $R/\sqrt{2}$ & $P/\sqrt{2}$ \\ $Q$ & $S/\sqrt{2}$ & $-Q/\sqrt{2}$& $Q/\sqrt{2}$ & $-S/\sqrt{2}$ \\ $R$ & $P/\sqrt{2}$ & $-R/\sqrt{2}$& $R/\sqrt{2}$ & $-P/\sqrt{2}$ \\ $S$ & $S/\sqrt{2}$ & $Q/\sqrt{2}$ & $Q/\sqrt{2}$ & $S/\sqrt{2}$ \\ \end{tabular} \end{center} \par \ \par\noindent This path counting method was introduced and intensively studied by \cite{Konno2002, Konno2005}. Using this relation, we compute \begin{align*} \left( U_3 ^{(s)} \right)^3 (1,2)=PQP+QP^2+P^2Q=bcP+abR+acS= \left( \frac{1}{\sqrt{2}} \right)^2 \left( P + R + S \right). \end{align*} Similarly we have \begin{align*} \left( U_3 ^{(s)} \right)^4 (1,2) &=Q^3P + P^4 + PQ^3 +QPQ^2+ Q^2PQ \\ &=a^3P+bd^2R+(cd^2 + bcd + bcd)S \\ &= \left( \frac{1}{\sqrt{2}} \right)^3 \left\{ P + R + (1-1-1) S \right\}. \end{align*} Moreover we write the number of paths corresponding to $\displaystyle{\left( U_N ^{(s)} \right)^n (k,l)}$ by $w (N, n ; (k,l))$, e.g., $w (3, 2 ; (1,2))=1, \> w (3, 3 ; (1,2))=3, \> w (3, 4 ; (1,2))=5$. In general, the property of paths easily implies \begin{lem} For any $N \ge 2, \> n \ge 1$, $w (N, n ; (k,k))$ is even for $k \in \{1,2, \ldots, N\}$ and $w (N, n ; (1,l))=w (N, n ; (1,N-(l-2)))$ for $l \in \{2,3, \ldots, [(N/2)+1] \}$, where $[x]$ is the integer part of real number $x$. 
\label{salad1} \end{lem} We should note that $P, \> Q, \> R,$ and $S$ form an orthogonal basis of the vector space of $2 \times 2$ matrices with respect to the trace inner product $\langle A | B \rangle = $ tr$(A^{\ast}B)$. Thus if there exist $c_p, c_q, c_r, c_s \in \mathbb{C}$ such that \begin{align*} c_p P + c_q Q + c_r R + c_s S = O_2, \end{align*} then $c_p=c_q=c_r=c_s=0$. Thus we see that for any $N \ge 2, \> n \ge 1$ and $k,l \in \{1,2, \ldots, N\}$, if $w (N, n ; (k,l))$ is odd, then $\displaystyle{\left(U_N ^{(s)}\right)^n} (k,l) \not= O_2.$ Therefore combining this property with Lemma \ref{salad1}, we obtain the following lemma, which is one of the key ingredients of our method. \begin{lem} If there exists $n \ge 1$ such that $\displaystyle{\left(U_N ^{(s)}\right)^n = I_{2N}}$, then $w (N, n ; (k,l))$ is even for any $k,l \in \{1,2, \ldots, N\}$. \label{salad2} \end{lem} To count the number of paths, we introduce the adjacency matrix $A_N$ of $C_N$: \begin{align*}\ A_N = \begin{bmatrix} 0&1&0&\cdots&0&1\\ 1&0&1&\cdots&0&0\\ 0&1&0&\cdots&0&0\\ \vdots&\vdots&\vdots&\ddots&\vdots&\vdots\\ 0&0&0&\cdots&0&1\\ 1&0&0&\cdots&1&0 \end{bmatrix}. \end{align*} For example, in the $N=3$ case, we have \begin{align*} A_3= \begin{bmatrix} 0&1&1\\ 1&0&1\\ 1&1&0 \end{bmatrix}, \quad (A_3)^2= \begin{bmatrix} 2&1&1\\ 1&2&1\\ 1&1&2 \end{bmatrix}, \quad (A_3)^3= \begin{bmatrix} 2&3&3\\ 3&2&3\\ 3&3&2 \end{bmatrix}. \end{align*} Moreover we introduce another $N \times N$ matrix $B_N ^{(n)}$ whose component $B_N ^{(n)} (k,l)$ is equal to $(A_N)^n (k,l) \> ({\rm mod} \> 2)$. For the $N=3$ case, we get \begin{align*} B_3 ^{(1)}=B_3 ^{(2)} = B_3 ^{(3)} = \cdots = \begin{bmatrix} 0&1&1\\ 1&0&1\\ 1&1&0 \end{bmatrix}. \end{align*} By using the notation $B_N^{(n)} (k,l)$, Lemma \ref{salad2} can be rewritten as \begin{lem} If there exists $n \ge 1$ such that $\displaystyle{\left(U_N ^{(s)}\right)^n = I_{2N}}$, then $B_N^{(n)} (k,l)=0$ for any $k,l \in \{1,2, \ldots, N\}$.
\label{salad3} \end{lem} On the other hand, we have the following result. \begin{lem} For any odd number $N (\ge 3)$, we have $B_N^{(n)} (k,l)=1$ for some distinct $k,l \in \{1,2, \ldots, N\}$. \label{kahun1} \end{lem} The proof will appear in Sect. \ref{kahun}. Combining Lemma \ref{salad3} with Lemma \ref{kahun1} immediately gives \begin{pro} For any odd number $N (\ge 3)$, we have $T_N = \infty$. \label{kahun2} \end{pro} By using Proposition \ref{kahun2} and the property of cyclotomic polynomials, we obtain the following main result. \begin{thm} For any $N$ except for $N=2,4,8$, we have $T_N = \infty$. \label{kahun3} \end{thm} The proof will appear in Sect. \ref{kahunda}. We should remark that Higuchi et al. \cite{HiguchiEtAl} investigated the periodicity of the Szegedy walk on graphs, e.g., the complete graphs, by using a method based on the property of cyclotomic polynomials. On the other hand, we consider the periodicity of the Hadamard walk on cycles by using not only cyclotomic polynomials but also the path counting for the walk. Combining Dukes' result, $T_2 =2, T_4=8, T_8=24$, with our Theorem \ref{kahun3} gives immediately \begin{thm} For any $N \ge 2$, \begin{eqnarray*} T_N = \left\{ \begin{array}{cl} 2, & (N=2), \\ 8, & (N=4), \\ 24, & (N=8), \\ \infty, & (N \not= 2,4,8). \end{array} \right. \end{eqnarray*} \label{kahun4} \end{thm} We should note that for the classical random walk in which the walker moves one step to the left with probability $p$ and to the right with probability $q$ with $p+q=1 \> (p,q \in [0,1])$, the eigenvalues $\{ \lambda_k : k=0,1, \ldots, N-1 \}$ of the corresponding transition matrix are given by \begin{align*} \lambda_k = \cos \left( \frac{2 k \pi}{N} \right) + i \> (q-p) \sin \left( \frac{2 k \pi}{N} \right) \quad (k=0,1, \ldots, N-1). \end{align*} Therefore we see that for any $N \ge 2$, \begin{eqnarray*} T_N = \left\{ \begin{array}{cl} N, & (p=0, 1), \\ \infty, & (p \in (0,1)), \end{array} \right. 
\end{eqnarray*} since $\lambda_k =e^{2 \pi i k/N} \>(p=0), \> e^{-2 \pi i k/N} \>(p=1),$ and $|\lambda_1|<1 \> (0<p<1).$ From now on we briefly review previous results on the Hadamard walk on $C_N$. To do so, we define the time-averaged measure $\overline{\mu}_{n}$ at time $n$ and the limit measure $\overline{\mu}_{\infty}$ for the Hadamard walk on $C_N$ by \begin{align*} \overline{\mu}_{n} (x) &= \frac{1}{n} \sum_{k=0}^{n-1} P(X_k =x), \\ \overline{\mu}_{\infty} (x) &= \lim_{n \to \infty} \frac{1}{n} \sum_{k=0}^{n-1} P(X_k =x) \end{align*} for any $x \in C_N$. Aharonov et al. \cite{AharonovEtAl2001} proved that the time-averaged limit measure $\overline{\mu}_{\infty}$ is uniform for odd $N$, that is, $\overline{\mu}_{\infty} (x) = 1/N \> (x \in C_N)$, independent of the initial state. Bednarska et al. \cite{BednarskaEtAl2003} considered the Hadamard walk on $C_N$ with even $N$. They obtained the eigenvalues and eigenvectors of $U_N^{(s)}$ and gave an explicit formula of $\overline{\mu}_{\infty}$ starting from a single vertex for any $N$. By using the formula, they showed that $\overline{\mu}_{\infty}$ is uniform for $N=2,4$. Moreover they found that $\overline{\mu}_{\infty}$ is very sensitive to the arithmetic properties of $N$. Bednarska et al. \cite{BednarskaEtAl2004} reported examples for three different kinds of behaviour of the total variation distance between a uniform measure and the time-averaged measure $\overline{\mu}_{n}$ for the Hadamard walk on $C_N$ with even $N$. From Theorem \ref{kahun4}, we have \begin{cor} For $N=2,4,8$, \begin{align*} \overline{\mu}_{\infty} = \frac{1}{T_N} \sum_{n=0}^{T_N-1} \mu_n. \end{align*} \end{cor} \section{Proof of Lemma \ref{kahun1} \label{kahun}} Before we move to the proof, we consider the $N=5$ case. In this case, $B_5^{(1)}(=A_5)$ is given by \begin{align*} B_5^{(1)} = \begin{bmatrix} 0&1&0&0&1\\ 1&0&1&0&0\\ 0&1&0&1&0\\ 0&0&1&0&1\\ 1&0&0&1&0 \end{bmatrix}. 
\end{align*} Then we would like to show, by induction, that for any $n \ge 1$ there exists $(k,l)$ with $k \not= l$ such that $B_5^{(n)} (k,l)=1$. When $n=1$, we immediately see $B_5^{(1)} (1,2)=1$. Next we consider the $n=m$ case. We put \begin{align} \left[ B_5^{(m)} (1,1), B_5^{(m)} (1,2), B_5^{(m)} (1,3), B_5^{(m)} (1,4), B_5^{(m)} (1,5) \right] =\left[0, c_1, c_2, c_2, c_1 \right]. \label{okazu1} \end{align} Here we should remark that Lemma \ref{salad1} implies $B_5^{(m)} (1,1)=0$ and $c_1 = B_5^{(m)} (1,2)=B_5^{(m)} (1,5), \> c_2 = B_5^{(m)} (1,3)=B_5^{(m)} (1,4)$ for any $m \ge 1$. We assume that the statement holds for $n=m$, that is, $(c_1, c_2) \not= (0,0).$ We consider $n=m+1$. We assume that \begin{align} \left[ B_5^{(m+1)} (1,1), B_5^{(m+1)} (1,2), B_5^{(m+1)} (1,3), B_5^{(m+1)} (1,4), B_5^{(m+1)} (1,5) \right] =\left[0, 0, 0, 0, 0 \right]. \label{okamoto} \end{align} On the other hand, Eq. \eqref{okazu1} implies \begin{align} \left[ B_5^{(m+1)} (1,1), B_5^{(m+1)} (1,2), B_5^{(m+1)} (1,3), B_5^{(m+1)} (1,4), B_5^{(m+1)} (1,5) \right] =\left[0, c_2, c_1+c_2, c_1 + c_2, c_2 \right]. \label{taro} \end{align} Combining Eq. \eqref{okamoto} with Eq. \eqref{taro} gives \begin{align*} c_2=c_1+c_2=0. \end{align*} Thus we have $c_1=c_2=0$. This contradicts the assumption for $n=m$, i.e., $(c_1, c_2) \not= (0,0).$ Therefore we see that there exists $(1,l)$ with $l \in \{2,3,4,5\}$ such that $B_5^{(m+1)} (1,l)=1$. So Lemma \ref{kahun1} is valid for $N=5$. We can extend this argument to a general odd number $N=2M+1$ as follows. When $n=1$, we easily see $B_N^{(1)} (1,2)=1$. Next we consider $n=m$. In a similar way, we put \begin{align} \left[ B_N^{(m)} (1,1), B_N^{(m)} (1,2), \ldots , B_N^{(m)} (1,N) \right] =\left[0, c_1, c_2, \ldots , c_{M-1}, c_M, c_M, c_{M-1}, \ldots, c_2, c_1 \right]. \label{okazu2} \end{align} We assume that the statement holds for $n=m$, that is, $(c_1, c_2, \ldots , c_M) \not= (0, 0, \ldots , 0)$. We consider $n=m+1$. 
We assume that \begin{align} \left[ B_N^{(m+1)} (1,1), B_N^{(m+1)} (1,2), \ldots, B_N^{(m+1)} (1,N) \right] =\left[0, 0, \ldots, 0 \right]. \label{gokamoto} \end{align} Then Eq. \eqref{okazu2} gives \begin{align} & \left[ B_N^{(m+1)} (1,1), B_N^{(m+1)} (1,2), \ldots, B_N^{(m+1)} (1,N) \right] \nonumber \\ & \qquad =\left[0, c_2, c_1+c_3, c_2 + c_4, \ldots, c_{M-2}+c_M, c_{M-1}+c_M, \right. \nonumber \\ & \qquad \qquad \qquad \left. c_{M-1}+c_M, c_{M-2}+c_M, \ldots, c_2 + c_4,c_1+c_3, c_2 \right]. \label{gtaro} \end{align} From Eq. \eqref{gokamoto} and Eq. \eqref{gtaro}, we obtain \begin{align*} c_2=c_1+c_3= c_2 + c_4= \cdots = c_{M-2}+c_M=c_{M-1}+c_M=0. \end{align*} Thus we have $c_1=c_2= \cdots = c_M=0$. This contradicts the assumption for $n=m$, i.e., $(c_1, c_2, \ldots , c_M) \not= (0, 0, \ldots , 0)$. Therefore we see that there exists $(1,l)$ with $l \in \{2, \ldots ,N \}$ such that $B_N^{(m+1)} (1,l)=1$, and the proof is completed. \section{Proof of Theorem \ref{kahun3} \label{kahunda}} First we introduce cyclotomic polynomials: $F_1 (\lambda) = \lambda -1$, and for $n \ge 2$, \begin{align*} F_n (\lambda) = \prod_{\scriptstyle 1 \le k \le n-1: \atop \scriptstyle {\rm gcd}(k,n)=1} \left( \lambda - \exp \left(\frac{2 \pi i k}{n} \right) \right), \end{align*} where ${\rm gcd} (n_1, n_2, \ldots , n_k)$ denotes the greatest common divisor of $(n_1, n_2, \ldots , n_k)$. Before we move to a proof of Theorem \ref{kahun3}, we give another proof of Dukes' result, $T_2 =2, \> T_4=8, \> T_8=24$, by using cyclotomic polynomials. By definition of $U_N^{(s)}$, we have \begin{align} \det \left( \lambda I_{2N} - U_N^{(s)} \right) = \prod_{k=0}^{N-1} \left\{ \lambda^2 + i \sqrt{2} \sin \left(\frac{2 \pi k}{N} \right) \lambda - 1 \right\}, \label{smidori} \end{align} see \cite{BednarskaEtAl2003,BednarskaEtAl2004}, for example. From Eq. 
\eqref{smidori}, we compute \begin{align*} \det \left( \lambda I_{4} - U_2^{(s)} \right) &= F_1(\lambda)^2 \> F_2(\lambda)^2, \\ \det \left( \lambda I_{8} - U_4^{(s)} \right) &= F_1(\lambda)^2 \> F_2(\lambda)^2 \> F_8(\lambda), \\ \det \left( \lambda I_{16} - U_8 ^{(s)} \right) &= F_1(\lambda)^2 \> F_2(\lambda)^2 \> F_8(\lambda) \> F_{12}(\lambda)^2. \end{align*} Then we have the desired conclusion: \begin{align*} T_2 = {\rm lcm} (1,2) =2, \quad T_4 = {\rm lcm} (1,2,8) =8, \quad T_8 = {\rm lcm} (1,2,8,12) =24, \end{align*} where ${\rm lcm} (n_1, n_2, \ldots , n_k)$ denotes the least common multiple of $(n_1, n_2, \ldots , n_k)$. Next we give another proof of the $N=3$ case of Proposition \ref{kahun2} by using cyclotomic polynomials. That is, we prove $T_3 = \infty$. From Eq. \eqref{smidori}, we calculate \begin{align*} \det \left( \lambda I_{6} - U_3^{(s)} \right) = F_1(\lambda) \> F_2(\lambda) \> G(\lambda), \end{align*} where \begin{align*} G(\lambda) = \lambda^4- \frac{\lambda^2}{2}+1. \end{align*} On the other hand, it is known that there are only four cyclotomic polynomials of degree 4: \begin{align*} F_5(\lambda) & =\lambda^4+\lambda^3+\lambda^2+\lambda+1, \quad F_8(\lambda)=\lambda^4+1, \\ F_{10}(\lambda) & =\lambda^4-\lambda^3+\lambda^2-\lambda+1, \quad F_{12}(\lambda)=\lambda^4-\lambda^2+1. \end{align*} Thus $G(\lambda)$ is not a cyclotomic polynomial; moreover, since any product of cyclotomic polynomials has integer coefficients while $G(\lambda)$ has the coefficient $-1/2$, some root of $G(\lambda)$ is not a root of unity. We conclude that $T_{3}= \infty.$ From now on, we move to a proof of Theorem \ref{kahun3}. First we consider the odd $N$ case. Then we have \begin{pro} For any odd $N$, there exist $m (N), r_1, r_2, \ldots, r_{m(N)} \ge 1$ such that \begin{align*} \det \left( \lambda I_{2N} - U_N^{(s)} \right) = \prod_{j=1}^{m(N)} F_{r_j} (\lambda) \times G(\lambda), \end{align*} where $G(\lambda)$ is not a cyclotomic polynomial. 
\label{midorida} \end{pro} The proof is by contradiction: if there were no such $G(\lambda)$, then $T_N = {\rm lcm} (r_1, r_2, \ldots , r_{m(N)})< \infty$, which contradicts Proposition \ref{kahun2}. Moreover, we consider the $N=2^n \times M$ case, where $n \ge 1$ and $M$ is an odd number. By Eq. \eqref{smidori}, we see that there exists a polynomial $H(\lambda)$ such that \begin{align*} &\det \left( \lambda I_{2N} - U_N^{(s)} \right) \\ &= \prod_{k=0}^{2^n \times M -1} \left\{ \lambda^2 + i \sqrt{2} \sin \left(\frac{2 \pi k}{2^n \times M} \right) \lambda - 1 \right\} \\ &= \left\{ \lambda^2 + i \sqrt{2} \sin \left(\frac{2 \pi \times 2^n \times 0}{2^n \times M} \right) \lambda - 1 \right\} \times \left\{ \lambda^2 + i \sqrt{2} \sin \left(\frac{2 \pi \times 2^n \times 1}{2^n \times M} \right) \lambda - 1 \right\} \\ & \times \cdots \times \left\{ \lambda^2 + i \sqrt{2} \sin \left(\frac{2 \pi \times 2^n \times (M-1)}{2^n \times M} \right) \lambda - 1 \right\} \times H(\lambda) \\ &= \det \left( \lambda I_{2M} - U_M^{(s)} \right) \times H(\lambda). \end{align*} From Proposition \ref{midorida}, we see that there exist $m (M), r_1, r_2, \ldots, r_{m(M)} \ge 1$ such that \begin{align*} \det \left( \lambda I_{2N} - U_N^{(s)} \right) = \prod_{j=1}^{m(M)} F_{r_j} (\lambda) \times G_M (\lambda) \times H(\lambda), \end{align*} where $G_M (\lambda)$ is not a cyclotomic polynomial. So we have $T_N = \infty$ for $N=2^n \times M$, where $n \ge 1$ and $M$ is an odd number. Therefore it is enough to deal with the $N=2^n \> (n \ge 4)$ cases, since $T_2=2, \> T_{2^2}=8, \> T_{2^3} = 24$. For the $N=2^4 =16$ case, we obtain \begin{align} \det \left( \lambda I_{2^5} - U_{2^4} ^{(s)} \right) &= F_1(\lambda)^2 \> F_2(\lambda)^2 \> F_8(\lambda) \> F_{12}(\lambda)^2 \nonumber \\ & \times \left( \lambda^4- \frac{\lambda^2}{2}+1 \right)^2 \> \left( \lambda^4- \frac{3 \lambda^2}{2}+1 \right)^2. \label{midori1} \end{align} Thus, as in the case of $N=3$, we obtain $T_{2^4}= \infty$. 
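Both ingredients of the proof can be checked by machine. The sketch below (Python/NumPy) verifies Lemma \ref{kahun1} for small odd $N$ and reproduces Dukes' periods; it assumes a block-circulant form for $U_N^{(s)}$ with the block $P$ at position $(k,k+1)$ and $Q$ at $(k,k-1)$ (mod $N$), $P$ and $Q$ being the upper and lower rows of the Hadamard matrix. This convention reproduces the product table (e.g. $PQ=bR$) and the path expansions above, but it is an assumption here; the precise definition of $U_N^{(s)}$ is fixed in the earlier sections of the paper.

```python
import numpy as np

# Lemma (kahun1): for odd N and every n (here n <= 50), the matrix
# B_N^{(n)} = A_N^n mod 2 has a 1 somewhere off the diagonal.
def adjacency(N):
    A = np.zeros((N, N), dtype=np.int64)
    for k in range(N):
        A[k, (k + 1) % N] = A[k, (k - 1) % N] = 1
    return A

def B(N, n):
    M = np.eye(N, dtype=np.int64)
    for _ in range(n):              # multiply step by step, reducing mod 2
        M = (M @ adjacency(N)) % 2
    return M

for N in (3, 5, 7, 9, 11):
    for n in range(1, 51):
        M = B(N, n)
        assert np.all(np.diag(M) == 0)               # w(N, n; (k,k)) is even
        assert (M - np.diag(np.diag(M))).max() == 1  # some off-diagonal 1

# Hadamard-walk operator built from the blocks P and Q (assumed convention)
s = 1 / np.sqrt(2)
a, b, c, d = s, s, s, -s
P = np.array([[a, b], [0, 0]])
Q = np.array([[0, 0], [c, d]])
R = np.array([[c, d], [0, 0]])
S = np.array([[0, 0], [a, b]])

def U(N):
    M = np.zeros((2 * N, 2 * N))
    for k in range(N):
        M[2*k:2*k+2, 2*((k+1) % N):2*((k+1) % N)+2] += P
        M[2*k:2*k+2, 2*((k-1) % N):2*((k-1) % N)+2] += Q
    return M

mp = np.linalg.matrix_power
assert np.allclose(mp(U(3), 3)[0:2, 2:4], 0.5 * (P + R + S))  # path expansion
assert np.allclose(mp(U(2), 2), np.eye(4))    # T_2 = 2
assert np.allclose(mp(U(4), 8), np.eye(8))    # T_4 = 8
assert np.allclose(mp(U(8), 24), np.eye(16))  # T_8 = 24

# T_3 = infinity: no power up to 1000 returns to the identity
U3, M, returns = U(3), np.eye(6), False
for _ in range(1000):
    M = M @ U3
    returns = returns or np.allclose(M, np.eye(6))
assert not returns
```

Of course, checking finitely many powers of $U_3^{(s)}$ does not by itself prove $T_3=\infty$; it merely illustrates Proposition \ref{kahun2}.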
For $N=2^n \> (n \ge 5)$, we see that there exists a polynomial $G_N (\lambda)$ such that \begin{align} \det \left( \lambda I_{2^{n+1}} - U_{2^n}^{(s)} \right) = \det \left( \lambda I_{2^5} - U_{2^4} ^{(s)} \right) \times G_N (\lambda). \label{midori2} \end{align} Combining Eq. \eqref{midori1} with Eq. \eqref{midori2} implies that there exist $m (N), r_1, r_2, \ldots, r_{m(N)} \ge 1$ and a polynomial $G^{\ast}_N (\lambda)$ such that \begin{align*} \det \left( \lambda I_{2^{n+1}} - U_{2^n}^{(s)} \right) = \prod_{j=1}^{m(N)} F_{r_j} (\lambda) \times G^{\ast}_N (\lambda). \end{align*} We should note that $G^{\ast}_N (\lambda)$ contains the factor $\lambda^4- \lambda^2/2+1$, which has a root that is not a root of unity. Therefore we conclude that $T_N = \infty$ for any $N=2^n \> (n \ge 5)$. \section{Summary \label{sum}} In this paper, we proved that the period $T_N = \infty$ for the Hadamard walk on $C_N$ except for $N=2, 4, 8$. On the other hand, $T_2=2, \> T_4=8, \> T_8 = 24$ was previously shown by Dukes \cite{Dukes2014} in 2014. Our method is based on path counting and cyclotomic polynomials, which differs from his approach based on the properties of the eigenvalues of $U_N^{(s)}$. An implementation of a Hadamard-like QW on $C_N$ using optomechanical systems was proposed by Moqadam et al. \cite{MoqadamEtAl2014}. We hope that our result is helpful in building new quantum algorithms. Chou and Ho \cite{ChouHo2014} numerically investigated the asymptotic behaviour of space-inhomogeneous QWs on $\mathbb{Z}$, where $\mathbb{Z}$ is the set of integers. Their model is defined by a periodic quantum coin $U_x \> (x \in \mathbb{Z})$ given by $H$ or $I_2$, where $I_2$ is the $2 \times 2$ identity matrix, e.g., $U_x =H$ for $x=0$ (mod $N$), $U_x =I_2$ for $x \not=0$ (mod $N$) with $N \ge 2$. They discussed localization of these QWs, so one of the interesting future problems is to consider the periodicity of space-inhomogeneous QWs on $C_N$. 
\par \ \par\noindent {\bf Acknowledgments.} We would like to thank Hyun Jae Yoo, Chul Ki Ko, Takeshi Kajiwara, Seiya Yoshida, Yuto Minowa, and Kei Saito for useful discussions. This work was partially supported by the Grant-in-Aid for Scientific Research (C) of the Japan Society for the Promotion of Science (Grant No. 24540116). \par \ \par \begin{small} \bibliographystyle{jplain}
\section{introduction} Since Einstein's theory of relativity tells us that matter can be converted into energy, the reverse possibility of converting energy into matter, i.e., into electron-positron pairs, as predicted by Dirac in quantum electrodynamics \citep{Dirac}, has attracted a great deal of interest \citep{RMP_Keitel_2012}. In the presence of a static and uniform electric field, the quantum electrodynamic (QED) vacuum may break down and decay into electron-positron pairs due to a quantum tunneling effect \citep{breakdown1_1931,breakdown2_1936,breakdown3_1951}. The critical Schwinger field is $E_{c}=m^{2}c^{3}/\left(\left|e\right|\hbar\right)$, which accelerates an electron to an energy of the order of its rest mass over one Compton wavelength $\lambda_{C}=\hbar/mc$, where $m$ is the electron mass. Starting with the works of Brezin, Popov, and others \citep{gener_Sch_time_1,gener_Sch_time_2,gener_Sch_time_3}, the Schwinger mechanism was generalized to time-dependent fields \citep{gener_Sch_time_4,gener_Sch_time_5,gener_Sch_time_Keitel_review,gener_Sch_time_6,gener_Sch_time_7,gener_Sch_time_8,gener_Sch_time_9,gener_Sch_time_10}, where another mechanism may be responsible for the pair creation. If the frequency of the alternating field exceeds the gap $2mc^{2}$, electrons in the Dirac sea can be excited to positive-energy states and pairs are created. Experimentally, pairs can be generated in relativistic heavy-ion collisions \citep{heavy_ion_collisions_1993} or in the collision of an intense laser pulse with a 46 GeV electron beam \citep{SLAC_1997}, but pair creation from pure laser light has not been observed until now. 
Recently, various numerical approaches have been developed to deal with the time-dependent Dirac equation \citep{Quant_dyna_re_ele_Keitel_2004,2d_code_Dirac_Keitel_2008,splitoperatormethod}, the Klein paradox \citep{Klein_paradox_su_2004,Klein_paradox_NQFT_Su_2005}, the Zitterbewegung \citep{ZB_su_2004}, and the pair production process \citep{timing_Su_2006,Dynamics_Bound_States_Su_2005,boundstate_channel_close_su_2014,suotang2013,fit_rate,rev_nQFT_Su_2010}. The one-dimensional well potential, in particular, has been studied extensively because of its simplicity \citep{Dynamics_Bound_States_Su_2005,boundstate_channel_close_su_2014,suotang2013,MJiang_2013,Degeneracies}. A supercritical well potential has bound states embedded in the negative continuum, which can cause spontaneous electron-positron pair creation. Theoretical investigations are expected to clarify the physics of the creation process and to predict a higher generation rate. However, the Pauli exclusion principle blocks further creation once the bound states are occupied, resulting in an asymptotic saturation behavior \citep{Dynamics_Bound_States_Su_2005,boundstate_channel_close_su_2014,Degeneracies}. Motivated by this problem, and aiming at a better understanding of the pair creation process in the one-dimensional well potential, we examine pair creation in a well whose width or depth oscillates. By oscillating the width or the depth, the transfer channels for the population are opened and closed alternately. The electrons confined in the well are released, and the Pauli blocking becomes ineffective. This leads to a non-vanishing production rate, which means that pairs can be pumped inexhaustibly from the well. This paper is organized as follows. In Sec.\mbox{II} we present the model and the numerical method we employ. The well potential oscillates in one of two modes, the width-oscillating mode or the depth-oscillating mode. The energy spectrum is plotted as a function of the width or the depth. In Sec.\mbox{III}. 
we discuss the pair production process in both modes. The time evolution of the pair number, the spatial density, and the pumping rate is studied. We also investigate the adiabatic limit of the oscillation. In the last section we give a brief summary. \section{model and method} \subsection{Model: one-dimensional well potential with oscillating depth or width} In one dimension, the time evolution of the Heisenberg field operator $\ensuremath{\hat{\Psi}\left(z,\, t\right)}$ is given by the Dirac equation (without spin, for simplicity) \citep{QED_strongfield_Greiner_1985} \begin{equation} i\frac{\partial}{\partial t}\hat{\Psi}\left(z,\, t\right)=\left[c\boldsymbol{\sigma}_{1}\cdot\boldsymbol{\hat{p}}_{z}+c^{2}\boldsymbol{\sigma}_{3}+V\left(z,\, t\right)\right]\hat{\Psi}\left(z,\, t\right).\label{eq:Dirac_Eq} \end{equation} Here $\boldsymbol{\sigma}_{1},\,\boldsymbol{\sigma}_{3}$ are Pauli matrices, $c$ is the speed of light in vacuum, and $V\left(z,\, t\right)$ is the external potential. Atomic units ([a.u.]) are used throughout this paper: $m=\hbar=e=1$, $c=1/\alpha\approx137.0359991$, where $\alpha$ is the fine-structure constant; the Compton wavelength of the electron is $\lambda_{C}=1/c$. The Hamiltonian of the system is $H=c\boldsymbol{\sigma}_{1}\cdot\boldsymbol{\hat{p}}_{z}+c^{2}\boldsymbol{\sigma}_{3}+V\left(z,\, t\right)$. We define the potential as \begin{equation} V\left(z,t\right)=\frac{V_{0}\left(t\right)}{2}\left[\tanh\left(\frac{z-\frac{W\left(t\right)}{2}}{D}\right)-\tanh\left(\frac{z+\frac{W\left(t\right)}{2}}{D}\right)\right].\label{eq:Vzt} \end{equation} Here $D$ is the width of the potential edge (a measure of the width of the electric field), and we set $D=0.3\lambda_{C}$. The numerical box size is set to $L=2.5$. 
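As a quick numerical sanity check of Eq. \eqref{eq:Vzt}, the following Python sketch (the parameters $D$, $L$, and the grid size are taken from the text; the supercritical values of $V_0$ and $W$ used later in the paper serve as an example) confirms that the well is $\approx -V_0$ at the center and $\approx 0$ at the box edges when $W \gg D$.

```python
import numpy as np

c = 137.0359991                   # speed of light in a.u.
lam_C = 1 / c                     # Compton wavelength of the electron
D = 0.3 * lam_C                   # width of the potential edge
L = 2.5                           # numerical box size

def V(z, V0, W):
    # Eq. (Vzt): smooth well, negative in the center, zero far outside
    return 0.5 * V0 * (np.tanh((z - W / 2) / D) - np.tanh((z + W / 2) / D))

V0, W = 2.53 * c**2, 10 * lam_C   # supercritical parameters used later
z = np.linspace(-L / 2, L / 2, 2048)
v = V(z, V0, W)

assert abs(v[len(z) // 2] + V0) < 1e-6 * V0   # depth ~ -V0 at z = 0
assert abs(v[0]) < 1e-6 * V0                  # ~ 0 at the box edge
assert v.min() >= -V0 * (1 + 1e-12)           # never deeper than -V0
```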
The potential width $W\left(t\right)$ and the depth $V_{0}\left(t\right)$ (positive; note that the potential $V\left(z,\, t\right)$ is negative in the center and zero elsewhere) are varied in two modes: (1) the \textbf{W-oscillating mode}: $V_{0}$ is constant, $W\left(t\right)=W_{1}+\frac{1}{2}\left(W_{2}-W_{1}\right)\left[1+\sin(\omega_{W}t-\pi/2)\right]$ ; (2) the \textbf{V-oscillating mode}\textbf{\textsl{:}} $W$ is constant, $V_{0}\left(t\right)=V_{1}+\frac{1}{2}\left(V_{2}-V_{1}\right)\left[1+\sin(\omega_{V}t-\pi/2)\right]$. In this paper we assume $W_{1}=0$ and $V_{1}=0$, so that $W\left(t\right)$ (or $V_{0}\left(t\right)$) varies as a sine function between zero and its upper bound $W_{2}$ (or $V_{2}$), and the turn-on starts from zero with vanishing first derivative. In the following numerical simulations, we choose the total evolution time to be an integer multiple of the period ($T_{W}$ or $T_{V}$) of the oscillating $W(t)$ or $V_{0}\left(t\right)$, so that the turn-off of the potential also finishes with vanishing first derivative. \begin{figure} \includegraphics[scale=0.65]{pump_spectrum_black} \protect\caption{The energy spectrum of the total Hamiltonian as a function of the width or the depth of the potential. (a) $\ensuremath{V_{0}=2.53c^{2}}$; as $W$ increases, the bound states dive into the Dirac sea at $W=2.79,5.51,8.21,\ldots$ (in units of $\lambda_{C}$, the electron Compton wavelength). (b) $W=10\lambda_{C}$; as $V_{0}$ increases, the bound states dive into the Dirac sea at $V_{0}=2.05,2.19,2.38,2.62,2.87,3.15,3.43,3.73,\ldots$ (in units of $c^{2}$). \label{fig:spectrum}} \end{figure} The numerically computed energy spectrum of the total Hamiltonian of a finite-size system (finite length, in one dimension) is presented in Fig. \ref{fig:spectrum} for varying $W$ and $V_{0}$, which serves as a schematic of the real one-dimensional system. In Fig. 
\ref{fig:spectrum}, we show the critical widths and depths and the `diving' of the bound states into the negative continuum. For example, if $\ensuremath{V_{0}=2.53c^{2}}$, there are embedded bound states, and pairs can be triggered spontaneously, only when $W>2.79\lambda_{C}$. \subsection{Method: the numerical quantum field theoretical approach} In recent years, a numerical quantum field theoretical approach \citep{rev_nQFT_Su_2010} has been established to overcome the single-particle picture of quantum mechanics and the mathematical difficulties of quantum electrodynamics. In this section we briefly review this method and describe how we treat the model of this paper. The field operator can be expressed in terms of the electron annihilation and positron creation operators as \citep{rev_nQFT_Su_2010} \begin{eqnarray} \hat{\Psi}\left(z,\, t\right) & = & \sum_{p}\hat{b}_{p}W_{p}\left(z,\, t\right)+\sum_{n}\hat{d}_{n}^{\dagger}W_{n}\left(z,\, t\right)\\ & = & \sum_{p}\hat{b}_{p}\left(t\right)W_{p}\left(z\right)+\sum_{n}\hat{d}_{n}^{\dagger}\left(t\right)W_{n}\left(z\right), \end{eqnarray} in which $W_{p(n)}\left(z\right)=\left\langle z|p(n)\right\rangle $ is a solution of the field-free Dirac Hamiltonian ($V\left(z,\, t\right)=0$), $W_{p(n)}\left(z,\, t\right)=\left\langle z|p(n)\left(t\right)\right\rangle $ is the time-dependent solution of the Dirac equation \eqref{eq:Dirac_Eq}, and $\sum_{p(n)}$ denotes the summation over all states with positive (negative) energy. The eigenstates of the field-free Hamiltonian are \begin{eqnarray} W_{p}\left(z\right) & = & \frac{e^{ipz}}{\sqrt{4\pi E}}\begin{bmatrix}\sqrt{E+c^{2}}\\ sign\left(p\right)\sqrt{E-c^{2}} \end{bmatrix}\\ W_{n}\left(z\right) & = & \frac{e^{inz}}{\sqrt{-4\pi E}}\begin{bmatrix}-sign\left(n\right)\sqrt{-E-c^{2}}\\ \sqrt{-E+c^{2}} \end{bmatrix}, \end{eqnarray} where $E_{p}=\sqrt{c^{4}+p^{2}c^{2}}$ and $E_{n}=-\sqrt{c^{4}+n^{2}c^{2}}$, respectively. 
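The spinor parts of these eigenstates can be verified directly: a short sketch (Python; the plane-wave factor $e^{ipz}$ and the $1/\sqrt{\pm 4\pi E}$ normalization are omitted) checks that $\left(c\sigma_{1}p+c^{2}\sigma_{3}\right)u=E\,u$ holds on both branches and that the two branches are orthogonal at equal momentum.

```python
import numpy as np

c = 137.0359991
s1 = np.array([[0., 1.], [1., 0.]])   # sigma_1
s3 = np.array([[1., 0.], [0., -1.]])  # sigma_3

def H_free(p):
    # field-free Dirac Hamiltonian at momentum p
    return c * p * s1 + c**2 * s3

def u_pos(p):                          # spinor part of W_p, E_p > 0
    E = np.sqrt(c**4 + p**2 * c**2)
    return E, np.array([np.sqrt(E + c**2), np.sign(p) * np.sqrt(E - c**2)])

def u_neg(n):                          # spinor part of W_n, E_n < 0
    E = -np.sqrt(c**4 + n**2 * c**2)
    return E, np.array([-np.sign(n) * np.sqrt(-E - c**2), np.sqrt(-E + c**2)])

for p in (0.5 * c, -2.0 * c, 7.3 * c):
    Ep, up = u_pos(p)
    En, un = u_neg(p)
    assert np.allclose(H_free(p) @ up, Ep * up)   # positive branch
    assert np.allclose(H_free(p) @ un, En * un)   # negative branch
    assert abs(up @ un) < 1e-8 * c**2             # branches are orthogonal
```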
The time-dependent single-particle wave function $W_{p(n)}\left(z,\, t\right)$ is obtained by introducing the time-evolution operator $\hat{U}\left(t_{2},t_{1}\right)=\hat{T}\exp\left(-\frac{i}{\hbar}\int_{t_{1}}^{t_{2}}dt'\hat{H}\left(t'\right)\right)$, \begin{equation} W_{p(n)}\left(z,\, t\right)=\hat{U}\left(t,\, t=0\right)W_{p(n)}\left(z\right), \end{equation} where $\hat{T}$ denotes the Dyson time-ordering operator. In this paper, we use the numerical split-operator technique \citep{Quant_dyna_re_ele_Keitel_2004,2d_code_Dirac_Keitel_2008}, so that \begin{eqnarray} W\left(t+dt\right) & \approx & e^{-iHdt}W\left(t\right)\nonumber \\ & = & e^{-i\frac{dt}{2}H_{\partial}}e^{-idtH_{z}}e^{-i\frac{dt}{2}H_{\partial}}+O\left(dt^{3}\right), \end{eqnarray} with \begin{eqnarray} H_{\partial} & = & c\boldsymbol{\sigma}_{1}\cdot\boldsymbol{\hat{p}}_{z}+c^{2}\boldsymbol{\sigma}_{3},\\ H_{z} & = & V\left(z,\, t\right). \end{eqnarray} Practically, since the derivative (the momentum operator) can be implemented by replacing the operator $\boldsymbol{\hat{p}}_{z}$ with its value $k_{z}$ in momentum space, the evolution operators take the following form \begin{eqnarray} e^{-i\frac{dt}{2}H_{\partial}}W\left(t\right) & = & \mathcal{F}^{-1}\left[\cos\left(\phi\right)\right.\nonumber \\ & & \left.-i\sin\left(\phi\right)\frac{\sigma_{1}\cdot k_{z}+c\sigma_{3}}{\sqrt{c^{2}+k_{z}^{2}}}\right]\mathcal{F}W\left(t\right),\\ e^{-idtH_{z}}W\left(t\right) & = & \left[\cos\left(V\left(t\right)dt\right)\right.\nonumber \\ & & \left.-i\sin\left(V\left(t\right)dt\right)\right]W\left(t\right), \end{eqnarray} where $k_{z}$ is the momentum, $\phi=\frac{c\,dt}{2}\sqrt{c^{2}+k_{z}^{2}},$ and $\mathcal{F}$ ($\mathcal{F}^{-1}$) is the Fourier transformation (inverse Fourier transformation). 
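A minimal sketch of one split-operator step is given below (Python/NumPy; the Gaussian packet, the toy potential, and the grid parameters are illustrative choices, not the paper's). Because every factor in the splitting is unitary, the norm is conserved to machine precision, which the sketch checks after 100 steps.

```python
import numpy as np

c = 137.0359991
Nz, L = 256, 2.5
z = np.linspace(-L / 2, L / 2, Nz, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(Nz, d=L / Nz)   # angular momenta k_z
dt = 1e-5

phi = 0.5 * c * dt * np.sqrt(c**2 + k**2)
den = np.sqrt(c**2 + k**2)

def kinetic_half(psi):
    # e^{-i(dt/2)H_partial} in k-space:
    # cos(phi) - i sin(phi) (sigma_1 k + c sigma_3)/sqrt(c^2 + k^2)
    f1, f2 = np.fft.fft(psi[0]), np.fft.fft(psi[1])
    g1 = np.cos(phi) * f1 - 1j * np.sin(phi) / den * (k * f2 + c * f1)
    g2 = np.cos(phi) * f2 - 1j * np.sin(phi) / den * (k * f1 - c * f2)
    return np.array([np.fft.ifft(g1), np.fft.ifft(g2)])

def potential_full(psi, V):
    # e^{-i dt H_z}: a pure phase, since V acts as a scalar on the spinor
    return np.exp(-1j * V * dt) * psi

V = -0.5 * c**2 * np.exp(-(z / 0.2)**2)     # illustrative toy potential
psi = np.array([np.exp(-(z / 0.1)**2), np.zeros(Nz)], dtype=complex)
psi /= np.sqrt(np.sum(np.abs(psi)**2) * (L / Nz))

for _ in range(100):
    psi = kinetic_half(psi)
    psi = potential_full(psi, V)
    psi = kinetic_half(psi)

norm = np.sum(np.abs(psi)**2) * (L / Nz)
assert abs(norm - 1) < 1e-10
```

The $O(dt^{3})$ local error of the symmetric splitting comes from the non-commutativity of $H_{\partial}$ and $H_{z}$; the unitarity, by contrast, is exact at any $dt$.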
Then, once the time-dependent field operator $\ensuremath{\hat{\Psi}\left(z,\, t\right)}$ has been calculated, the number and the spatial distribution of the electrons created from the vacuum (defined by $\hat{b}_{p}\left|vac\right\rangle =0$, $\hat{d}_{n}\left|vac\right\rangle =0$) are obtained from the positive part of the field operator, \begin{eqnarray} N^{el.}\left(t\right) & = & \left\langle vac\right|\hat{\Psi}^{(+)\dagger}\left(z,\, t\right)\hat{\Psi}^{(+)}\left(z,\, t\right)\left|vac\right\rangle \nonumber \\ & & =\sum_{pn}\left|U_{pn}\left(t\right)\right|^{2},\label{eq:N_pair}\\ N_{z}^{el.}\left(t\right) & = & \sum_{n}\left|\sum_{p}U_{pn}\left(t\right)W_{p}\left(z\right)\right|^{2},\label{eq:N_z_e} \end{eqnarray} where $U_{pn}\left(t\right)=\left\langle W_{p}\left(z\right)|W_{n}\left(z,\, t\right)\right\rangle =\int dz\,W_{p}^{*}\left(z\right)W_{n}\left(z,\, t\right)$. The pair number $N\left(t\right)$ is equal to the electron number $N^{el.}\left(t\right)$. The spatial distribution of the created positrons can be written as \begin{eqnarray} N_{z}^{po.}\left(t\right) & = & \sum_{p}\left|\sum_{n}U_{pn}\left(t\right)W_{n}\left(z\right)\right|^{2}.\label{eq:N_z_p} \end{eqnarray} The total positron number $N^{po.}\left(t\right)$ is equal to the electron number $N^{el.}\left(t\right)$. We can also obtain it from the negative part of the field operator by computing the number and the spatial distribution of the holes. In this paper we use the expression Eq. \eqref{eq:N_z_p} to reduce the computational cost, because $U_{pn}$ has already been calculated in Eq. \eqref{eq:N_pair}. Furthermore, we can neglect the large-momentum part (where $\sqrt{k^{2}c^{2}+c^{4}}$ is far greater than $V$ and $\omega$) in the numerical simulation, since its contribution to the matrix elements $U_{pn}\left(t\right)$ is very small. In the following, the number of spatial grid points is $N_{z}=2048$, and we take only $N_{p}=1024$ discrete momenta into account. 
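The equality $N^{po.}(t)=N^{el.}(t)$ follows from unitarity alone: writing the time evolution in the free basis as a unitary matrix, its two off-diagonal blocks (positive rows/negative columns and vice versa) always have equal Frobenius norms. A toy check in Python, where a random unitary stands in for the true evolution matrix (so the numbers themselves carry no physics):

```python
import numpy as np

rng = np.random.default_rng(0)

# toy dimensions of the positive- and negative-energy subspaces
dim_pos, dim_neg = 5, 7
dim = dim_pos + dim_neg

# random unitary via QR decomposition of a complex Gaussian matrix
A = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
U, _ = np.linalg.qr(A)

U_pn = U[:dim_pos, dim_pos:]   # overlaps <W_p | W_n(t)>
U_np = U[dim_pos:, :dim_pos]   # overlaps <W_n | W_p(t)>
N_el = np.sum(np.abs(U_pn)**2)
N_po = np.sum(np.abs(U_np)**2)

# unitarity forces the two block norms to coincide
assert abs(N_el - N_po) < 1e-10

# with no field the evolution is the identity and no pairs are present
assert np.sum(np.abs(np.eye(dim)[:dim_pos, dim_pos:])**2) == 0
```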
Based on the projection of the field operator onto the field-free electronic states in this method, and on the definition of electron and positron in the Dirac hole theory, in this paper we present physical quantities for all times but focus on the moments when the field is absent. \section{pump electron-positron pairs from the well potential } For a well potential of depth $V_{0}$, if $V_{0}<2c^{2}$, the positive and negative continua cannot overlap. But for a supercritical depth, $V_{0}>2c^{2}$, the domain $c^{2}-V_{0}<E<-c^{2}$ exists, and bound states in the well become possible (which we call `bound states embedded in the negative continuum'): their wave functions do not decrease exponentially outside the well, but join a continuum wave of the same energy $E<-c^{2}$, so the wave function has a non-zero amplitude outside. An empty bound state will spontaneously be occupied by an electron (two, if spin is considered) from the filled Dirac sea, and the hole (identified as a positron) will travel away from the well to infinity \citep{QED_strongfield_Greiner_1985}. This is the picture of spontaneous creation of electron-positron pairs. For a static well potential, electrons fill the embedded bound states, and the Pauli principle prevents further pair creation. The number of pairs created should equal the number of bound states which meet these conditions. For a time-dependent potential, the situation is more complicated. In Ref. \citep{boundstate_channel_close_su_2014}, the effect of opening and closing a pair-creation channel was studied. The well depth is fixed at $V_{0}=2.53c^{2}$, while the width $W$ varies between $W_{1}=4.55\lambda_{C}$ and $W_{2}=6.15\lambda_{C}$. For $W=4.55\lambda_{C}$, there is one bound state embedded in the Dirac sea, and there are two for $W=6.15\lambda_{C}$. 
After enough time for saturation, the pair number increases when one more channel is opened, but does not decrease when one of the two channels is closed. The reason is that the annihilation of a pair requires the electron and the positron to be at the same place, which is not satisfied here because the electron remains in the well while the positrons have left the creation zone and escaped in the opposite direction. Naturally, one may ask: if the channel is opened and closed periodically, can this mechanism lead to continuous pair creation? Moreover, for fixed $W$ and varying $V_{0}$, since the diving behavior is similar (Fig. \ref{fig:spectrum}), will something similar happen? Motivated by these questions, we construct the two oscillating modes described in Sec. \mbox{II}.A. Results and discussion are as follows. \begin{figure} \includegraphics[scale=0.62]{pump_N_t_V_W} \protect\caption{The time evolution of the total number of pairs for both the W-oscillating and the V-oscillating mode. (a) W-oscillating mode, $W_{2}=10\lambda_{C}$, $V_{0}=2.53c^{2}$; (b) V-oscillating mode, \textbf{$V_{2}=2.53c^{2}$}, $W=10\lambda_{C}$. The frequencies $\omega_{W}$ and $\omega_{V}$ are in units of $c^{2}$. The dashed line represents the time $t=0.009$ when the positrons arrive at the boundary, $z=\pm L/2=\pm1.25$. The triangles denote the pair number at the moments when the field is absent. The dotted line links these triangles.\label{fig:N_t}} \end{figure} \subsection{time evolution of pair number} Using the method presented in Sec. \mbox{II}.B, we plot the time evolution of the pair number defined in Eq. (\ref{eq:N_pair}) for both the W-oscillating and the V-oscillating mode in Fig. \ref{fig:N_t}. The width frequency $\omega_{W}$ and the depth frequency $\omega_{V}$ are in units of $c^{2}$, and their values are assumed to be relatively low compared to the gap $2c^{2}$, so that the photon absorption mechanism is not effective. 
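Before looking at the results, one can confirm the smooth turn-on and turn-off of the driving: with $W_{1}=0$, the profile $W(t)=\tfrac{1}{2}W_{2}\left[1+\sin(\omega_{W}t-\pi/2)\right]$ vanishes together with its first derivative at integer multiples of $T_{W}$ (and the V-oscillating profile behaves identically). A small Python sketch with the parameters of Fig. \ref{fig:N_t}(a):

```python
import numpy as np

c = 137.0359991
lam_C = 1 / c
W2, omega_W = 10 * lam_C, 0.3 * c**2   # parameters of panel (a)
T_W = 2 * np.pi / omega_W

def W(t):
    # W-oscillating mode with W1 = 0: smooth sine-shaped turn-on/off
    return 0.5 * W2 * (1 + np.sin(omega_W * t - np.pi / 2))

eps = 1e-12
assert abs(W(0)) < 1e-12                      # starts closed
assert abs((W(eps) - W(0)) / eps) < 1e-3      # vanishing derivative at t = 0
assert abs(W(T_W / 2) - W2) < 1e-10           # fully open half a period later
assert abs(W(T_W)) < 1e-10                    # closed again after one period
```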
The total time is $120\pi/c^{2}\approx0.02$ and the period is $T_{W}=2\pi/\omega_{W}$ or $T_{V}=2\pi/\omega_{V}$. The dashed line represents the time $t\approx L/\left(2c\right)\approx0.009$ when the particles arrive at the boundary, $z=\pm L/2=\pm1.25$. Since $W_{1}=0$ and $V_{1}=0$, when the time is an integer multiple of the period ($T_{W}$ or $T_{V}$) the system Hamiltonian reduces to the field-free one. The triangles in Fig. \ref{fig:N_t} denote the pair number at the moments when the field is absent. \textbf{W-oscillating mode: }In Fig. \ref{fig:N_t} (a), we plot the total number of pairs as a function of time for $\omega_{W}=0.1/6c^{2}$, $0.2/3c^{2}$, $0.3c^{2}$, $0.6c^{2}$. The depth $V_{0}$ is fixed at $V_{0}=2.53c^{2}$. The width $W$ varies between $W_{1}=0$ and $W_{2}=10\lambda_{C}$, corresponding to zero and three embedded bound states, respectively. When $W=W_{2}$, there are also eight bound states in the gap, which can be associated with the pair creation \citep{Dynamics_Bound_States_Su_2005}. When $\omega_{W}=0.1/6c^{2}$, the width $W$ can only finish one cycle in the total time $120\pi/c^{2}$. $N$ begins to rise before $t=3.57\times10^{-3}$, the time corresponding to $W\left(t\right)=2.79\lambda_{C}$, when the first bound state dives into the negative continuum. The reason is the non-adiabatically varying width; $N$ would begin to rise precisely at the time when $W\left(t\right)=2.79\lambda_{C}$ in the adiabatic case ($\omega_{W}\rightarrow0$, see the discussion below). $N$ increases as more bound states dive in and reaches its maximum $N=2.89$ at $t=1.37\times10^{-2}$, between $t=1.28\times10^{-2}$ and $1.47\times10^{-2}$, the times at which the third and the second bound states are pulled out of the Dirac sea. Undergoing particle-antiparticle annihilation, $N$ decreases but retains an appreciable value $N=2.85$ at the end. In the latter half of this cycle, the embedded bound states depart from the Dirac sea, return to the positive continuum, and become scattering states. 
The released positrons are reflected by the boundary of the numerical box, come back to the interaction region, and affect the subsequent pair generation. Though this effect is weak for $\omega_{W}=0.1/6c^{2}$, it is not negligible for, e.g., $\omega_{W}=0.3c^{2}$ (see Fig. \ref{fig:imagesc_W} for details). For $\omega_{W}=0.2/3c^{2}$ and $\omega_{W}=0.3c^{2}$, $W$ completes four and eighteen cycles in the total time, and the pair numbers at the end are $N=6.49$ and $N=21.4$, respectively. For $t<0.009$, $W$ completes one and eight cycles, respectively. In each cycle, the positrons are repelled by the electric field to infinity once they are generated, while the electrons are confined in the well when the field is strong enough and are expelled as the well is turned off, avoiding the otherwise inevitable Pauli blocking of the static well configuration. This non-synchronous ejection prevents annihilation and leads to a high production rate. Each new cycle starts from the field-free configuration and is independent of the previous cycle. In Fig. \ref{fig:N_t}, the dotted line connects the triangles, which denote the pair number at the field-free moments. The pair generation traced by the dotted line depends linearly on time for low frequencies $\omega_{W}$ and $t<0.009$. If the system length $L$ were infinite and there were no reflection at the boundary, pairs could be pumped inexhaustibly from the well at a constant production rate. Even for $\omega_{W}=0.6c^{2}$, where there is a nonlinear transient at the beginning, the generation rate soon becomes stable. Owing to the finite period $T_{W}$ and the bound states in the gap, the particle generation and ejection process is not monotonic in the frequency $\omega_{W}$; see Fig. \ref{fig:N_t}(a). However, ignoring the reflection, if the W-oscillating frequency $\omega_{W}$ is very small, we can expect a linear dependence of the final pair number on the frequency.
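From the reported final numbers one can also extract an average yield per cycle (an illustrative back-of-the-envelope computation using only values quoted above; boundary reflection is ignored):

```python
# (final pair number at t = 120*pi/c^2, completed cycles), as quoted in the text
runs = {
    "omega_W = 0.2/3 c^2": (6.49, 4),
    "omega_W = 0.3 c^2": (21.4, 18),
}

yields = {label: n / cycles for label, (n, cycles) in runs.items()}
for label, y in yields.items():
    print(f"{label}: {y:.2f} pairs per cycle")
```

The yield per cycle is itself frequency dependent ($1.62$ vs. $1.19$ pairs per cycle), consistent with the non-monotonic frequency dependence noted above.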
\textbf{V-oscillating mode:} The number of pairs $N$ as a function of time is presented in Fig. \ref{fig:N_t}(b) for $\omega_{V}=0.1/6c^{2}$, $0.2/3c^{2}$, $0.3c^{2}$, and $0.6c^{2}$. The width is fixed at $W=10\lambda_{C}$, while the depth varies between $V_{1}=0$ and $V_{2}=2.53c^{2}$, corresponding to zero and three embedded bound states, respectively. There are also eight bound states in the gap when $V_{0}=V_{2}$. For $\omega_{V}=0.1/6c^{2}$, the first bound state dives in at $t=7.20\times10^{-3}$, by which time $N=8.83\times10^{-2}$ pairs have already been generated. The first bound state departs from the negative continuum after the second and the third ones, at $t=1.29\times10^{-2}$, when $N$ reaches its maximum $N=1.81$. Finally, $N=1.74$ pairs survive at $t=120\pi/c^{2}$. For $\omega_{V}=0.2/3c^{2}$, $0.3c^{2}$, and $0.6c^{2}$, the pair numbers at the end are $N=2.21$, $2.56$, and $3.78$, respectively. Instead of the pulling and pushing of the walls of the well in the W-oscillating mode, here it is the rising and falling bottom of the well that controls the diving of the bound states into, and their departure from, the negative continuum. It is again the non-synchronous ejection of the positrons and electrons that dominates the pumping process. The dotted line here indicates a linear relation between the pair number and time. The final number does not depend monotonically on the frequency $\omega_{V}$, and we can again expect a linear dependence of the final pair number on $\omega_{V}$ when $\omega_{V}$ is very small. Note that although the two modes have the same beginning and ending parameters, the generation rate in the W-oscillating mode is much higher. \begin{figure} \includegraphics[scale=0.64]{image_waterfall_W} \protect\caption{For the W-oscillating mode, $\omega_{W}=0.3c^{2}$: three-dimensional diagrams for the entire time and waterfall plots at the field-free moments (the times indicated by triangles in Fig. \ref{fig:N_t}(a)), for the electron spatial density (a, c) and the positron spatial density (b, d).
The thicker curves in sub-figures (c, d) mark the last cycle before the positrons arrive at the boundary. The well potential $V(z)$ with $V_{0}=2.53c^{2}$, $W=10\lambda_{C}$ is included at the bottom for comparison. All other parameters are the same as in Fig. \ref{fig:N_t}(a).\label{fig:imagesc_W}} \end{figure} \subsection{time evolution of spatial density} In the last section we discussed the total pair number as a function of time for different oscillating frequencies in both modes. To show the pumping process explicitly, we compute the time evolution of the spatial densities of electrons and positrons (Eq.~(\ref{eq:N_z_e}) and Eq.~(\ref{eq:N_z_p})) for $\omega_{W}=0.3c^{2}$ and $\omega_{V}=0.3c^{2}$, respectively. In Fig. \ref{fig:imagesc_W}, for the W-oscillating mode with $\omega_{W}=0.3c^{2}$, we plot the time evolution of the spatial densities of electrons and positrons (sub-figures (a) and (b)). In particular, for the moments when the field is zero, denoted by the triangles in Fig. \ref{fig:N_t}(a), these quantities are plotted in the waterfall plots, Fig. \ref{fig:imagesc_W}(c, d). For the V-oscillating mode with $\omega_{V}=0.3c^{2}$, similar diagrams are presented in Fig. \ref{fig:imagesc_V}. For comparison, the well potential $V(z)$ with width and depth equal to the upper boundaries of the two modes, $V_{0}=V_{2}=2.53c^{2}$ and $W=W_{2}=10\lambda_{C}$, is included at the bottom. These figures clearly show how the particles are pumped from the well and spread through the numerical box. \begin{figure} \includegraphics[scale=0.64]{image_waterfall_V} \protect\caption{For the V-oscillating mode, $\omega_{V}=0.3c^{2}$: three-dimensional diagrams for the entire time and waterfall plots at the field-free moments (the times indicated by triangles in Fig. \ref{fig:N_t}(b)), for the electron spatial density (a, c) and the positron spatial density (b, d). The thicker curves in sub-figures (c, d) mark the last cycle before the positrons arrive at the boundary.
The well potential $V(z)$ with $V_{0}=2.53c^{2}$, $W=10\lambda_{C}$ is included at the bottom for comparison. All other parameters are the same as in Fig. \ref{fig:N_t}(b). \label{fig:imagesc_V}} \end{figure} Since $\omega_{W}=0.3c^{2}$, the period of the width oscillation is $T_{W}=1.12\times10^{-3}$. Before the positrons arrive at the boundaries, the width completes eight cycles. Detecting the particle population at the boundary, we find that the positrons arrive first, at $t=9.15\times10^{-3}$, in agreement with the estimate $L/\left(2c\right)=9.12\times10^{-3}$. The electrons arrive at the boundary at $t=1.02\times10^{-2}$, about one period ($T_{W}$ or $T_{V}$) later than the positrons. The particles reflected by the boundary come back to the interaction region and may cause non-negligible effects, e.g., the non-linearity of the last three triangles on the dotted line in Fig. \ref{fig:N_t}(a) for $\omega_{W}=0.3c^{2}$. Compared with the rising and falling bottom of the well, more work is done by the walls of the well when the well is opened and closed. In the W-oscillating mode, the wavefronts of the particles are sharper and more regular. In energy space, higher-energy modes are excited, and the spectrum shows a periodic structure with a spacing of $0.3c^{2}$ between peaks. In the V-oscillating mode, the electrons are lifted and released gently: less work is done, only low-momentum modes are excited, and the fraction of electrons in the well region ($-5\lambda_{C}<z<5\lambda_{C}$) is larger. We can also see the absence of interference both inside and outside the well, as discussed in \citep{Dynamics_Bound_States_Su_2005}. \subsection{time evolution of pumping rate} \begin{figure} \includegraphics[scale=0.58]{pump_rate} \protect\caption{For the W-oscillating mode ($\omega_{W}=0.3c^{2}$, sub-figures a, c) and the V-oscillating mode ($\omega_{V}=0.3c^{2}$, sub-figures b, d): the number of particles in the well ($N_{in}$) and the pumping rate $N_{out}/N$ as functions of time.
The triangles denote the times when the field is absent, and the dotted line connects them. The blue triangles denote electrons and the red ones positrons. All parameters are the same as in Fig. \ref{fig:imagesc_W} and Fig. \ref{fig:imagesc_V}, respectively.\label{fig:pump_rate}} \end{figure} In the V-oscillating mode, it turns out that the electrons are more inclined to gather in the well region (defined as $-5\lambda_{C}<z<5\lambda_{C}$) than in the W-oscillating mode. We can integrate the spatial density $N\left(z\right)$ over this region to obtain the particle number in the well, $N_{in}^{el.(po.)}(t)=\intop_{-5\lambda_{C}}^{5\lambda_{C}}N_{z}^{el.(po.)}(t)dz$. For the pumping processes of the last section, $N_{in}^{el.(po.)}(t)$ is plotted in Fig. \ref{fig:pump_rate}(a, b). In the W-oscillating mode, as time increases, $N_{in}^{el.}$ quickly rises to a constant $1.60$, while $N_{in}^{po.}$ rises to a constant $0.36$. In the V-oscillating mode, by contrast, $N_{in}^{el.}$ keeps increasing while $N_{in}^{po.}$ remains zero. The reason is that positrons can be generated inside the well region in the W-oscillating mode, while the walls (the electric field) shut the door on positrons in the V-oscillating mode. In a pumping process, the pumping rate is of central importance; it can be defined as $\alpha(t)=N_{out}/N$, where $N_{out}=N-N_{in}$, and is shown in Fig. \ref{fig:pump_rate}(c, d). In both modes, at the end of the first cycle, $t=T_{W}$ or $T_{V}$, nearly all the electrons are confined in the well region, while the positrons are ejected. In the V-oscillating mode, since all the generated positrons are ejected and kept out of the well, the positron pumping rate is immediately equal to $1$. For electrons in the V-oscillating mode, and for electrons and positrons in the W-oscillating mode, $\alpha(t)$ approaches $1$ in the long-time limit as $1-\beta/t$, where $\beta$ depends on the saturation number of particles in the well and on the number of particles generated in each cycle. \subsection{The adiabatic limit} In Fig.
\ref{fig:N_t}, for $\omega_{W}=0.1/6c^{2}$ and $\omega_{V}=0.1/6c^{2}$, $N=2.85$ and $N=1.74$ pairs survive at the end, $t=120\pi/c^{2}$. We have proposed that in the low-frequency limit the number of pairs surviving at the end should equal three, the maximum number of embedded bound states swept in one cycle of each mode. In Fig. \ref{fig:adiabatic}, ignoring the reflection, the total time is chosen equal to the oscillation period for each frequency and both modes, so that the oscillation completes exactly one cycle. The final number of surviving pairs is presented as a function of the upper boundary of the oscillating width ($W_{2}$) or depth ($V_{2}$). In the adiabatic limit, a sub-critical well potential cannot create pairs. As the width or depth increases, the bound states in the gap dive into the negative continuum successively, and the potential becomes super-critical. Pairs are generated, and their number saturates at the number of embedded bound states. As the width or depth then decreases, the bound states depart from the negative continuum successively, and the generated pairs cannot annihilate because of the non-synchronous ejection. Finally, the number of pairs surviving the cycle equals the maximum number of embedded bound states, which is a function of the upper boundary of the oscillation ($W_{2}$ or $V_{2}$). \begin{figure} \includegraphics[scale=0.65]{pump_adiabatic_NN_W2_V2} \protect\caption{The final number of pairs created after one cycle as a function of the upper boundary of the oscillating width or depth. (a) W-oscillating mode, $V_{0}=2.53c^{2}$; (b) V-oscillating mode, $W=10\lambda_{C}$. The total time $T$ is chosen equal to the oscillation period. $W_{2}$ is in units of $\lambda_{C}$, $T$ is in units of $1/c^{2}$, and $\omega$ and $V_{2}$ are in units of $c^{2}$. \label{fig:adiabatic}} \end{figure} For a low frequency, the curve showing the final number of pairs vs.
$W_{2}$ or $V_{2}$ resembles a flight of stairs. As the frequency becomes lower, the rising edges of the stairs become sharper. As shown in Fig. \ref{fig:adiabatic}, in the limit $\omega_{W},\,\omega_{V}\rightarrow0$, the rising edges of the stairs are located precisely at the points where the bound states dive into the negative continuum. These points are $W=2.79,5.51,8.21,...$ (in units of $\lambda_{C}$) and $V_{0}=2.05,2.19,2.38,2.62,2.87,3.15,3.43,3.73,...$ (in units of $c^{2}$), as illustrated in Fig. \ref{fig:spectrum}. The gaps between these bound-state diving points are smaller in the V-oscillating mode than in the W-oscillating mode. To achieve a quasi-adiabatic (finite $T_{W}$ or $T_{V}$) simulation, $T_{V}$ should therefore be larger than $T_{W}$ to build a similar staircase. Now, if the quasi-adiabatic oscillation cycles repeat periodically, we can expect a linearly increasing pair number; e.g., for Fig. \ref{fig:adiabatic}(a) with $W_{2}=7\lambda_{C}$, the final pair number will be $2$ times the number of cycles. \section{summary} In this work, we have constructed a toy model, a one-dimensional well potential with oscillating width or depth, and studied electron--positron pair creation in it. Since the diving behavior of the bound states in the energy spectrum is similar when sweeping the depth or the width, the physical processes in the two modes are similar. We find that the non-synchronous ejection of the particles prevents their annihilation, breaks the Pauli blocking present in a static super-critical well potential, and leads to a high, constant production rate. The width-oscillating mode delivers more energy to the particles and is more efficient in pumping pairs than the depth-oscillating mode. The time evolution of the spatial density illustrates the pumping of the particles from the well and their spreading through the numerical box.
In the quasi-adiabatic case, the pair number as a function of the upper boundary of the oscillation reveals the diving of the bound states. This could be exploited to probe the energy structure of a complicated potential. In order to reduce the computational cost, we neglect the larger part of the discrete momentum modes in the numerical simulation. Conversely, with the same number of discrete momenta, the number of spatial points can be enlarged to describe the details of the potential. The simulations in this paper were performed on a stand-alone personal computer. In this algorithm, the time evolution of each negative eigenstate can be carried out on a single CPU, so the computation can be parallelized easily. Furthermore, if the second-order spatial derivative in the Hamiltonian is evaluated by finite-difference approximations instead of the Fourier transform\citep{splitoperatormethod}, larger one-dimensional, or even two-dimensional, systems can be simulated by parallelizing the algorithm on shared-memory parallel computers. \begin{acknowledgments} This work is supported by the National Basic Research Program of China (973 Program) (Grants No. 2013CBA01502, No. 2011CB921503, and No. 2013CB834100) and the National Natural Science Foundation of China (Grants No. 11374040, No. 11274051, and No. 11475027).\end{acknowledgments}
\bmdefine{\bx}{x} \bmdefine{\by}{y} \bmdefine{\bz}{z} \begin{document} \title{Interacting multiple zero mode formulation and its application to a system consisting of a dark soliton in a condensate} \author{J.~Takahashi} \email{j.takahashi@aoni.waseda.jp} \affiliation{Department of Electronic and Physical Systems, Waseda University, Tokyo 169-8555, Japan} \author{Y.~Nakamura} \email{yusuke.n@asagi.waseda.jp} \affiliation{Department of Electronic and Physical Systems, Waseda University, Tokyo 169-8555, Japan} \author{Y.~Yamanaka} \email{yamanaka@waseda.jp} \affiliation{Department of Electronic and Physical Systems, Waseda University, Tokyo 169-8555, Japan} \date{\today} \begin{abstract} To formulate the zero modes in a finite-size system with spontaneous breakdown of symmetries in quantum field theory is not trivial, for in the naive Bogoliubov theory, one encounters difficulties such as phase diffusion, the absence of a definite criterion for determining the ground state, and infrared divergences. A new interacting zero mode formulation that has been proposed for systems with a single zero mode to avoid these difficulties is extended to general systems with multiple zero modes. It naturally and definitely gives the interactions among the quantized zero modes, the consequences of which can be observed experimentally. In this paper, as a typical example, we consider an atomic Bose--Einstein condensed system with a dark soliton that contains two zero modes corresponding to spontaneous breakdown of the U(1) gauge and translational symmetries. Then we evaluate the standard deviations of the zero mode operators and see how the mutual interaction between the two zero modes affects them.
\end{abstract} \pacs{03.75.Hh, 03.75.Nt, 67.85.-d} \maketitle \section{Introduction} Since the pioneering experiments~\cite{Ketterle,Cornell,Bradley}, the Bose--Einstein condensate (BEC) phenomenon in a trapped ultracold atomic system has been the central subject of many experimental and theoretical studies. We view the subject from the standpoint of quantum field theory, which is the most fundamental dynamical law and in which the BEC in an atomic system is interpreted as a spontaneous breakdown of the U(1) gauge symmetry. The concept of spontaneous symmetry breaking (SSB) yields many examples of successful descriptions of nature using quantum field theory. In an infinite system with a spontaneously broken symmetry, the celebrated Nambu--Goldstone theorem~\cite{NGtheorem1,NGtheorem2} implies the existence of a zero mode, reflecting the original symmetry. The zero mode plays a crucial role in creating and retaining the ordered state. Study of the BEC in a trapped system highlights the importance of treating the operators belonging to the zero mode sector carefully because the system has a finite size, and the zero energy state also stands alone as a discrete level. The importance is easily overlooked in a homogeneous infinite system, for the zero mode sector is buried in the continuum labeled by the momentum index. The practice of formulation, usually called the Bogoliubov theory, is to take an unperturbed Hamiltonian up to the second power of a field operator and to attempt to diagonalize it by expanding the field operator in the appropriate complete set. However, when the complete set includes eigenfunctions belonging to a zero eigenvalue, {\it i.e.}, when the system has zero modes, the unperturbed Hamiltonian of the zero mode sector cannot be diagonalized in terms of creation and annihilation operators but has the form of free particles expressed by the quantum mechanical operators ${\hat P}$ and ${\hat Q}$. 
We refer to this as the \textit{free zero mode formulation}. Although this part of the Hamiltonian is simply neglected for a homogeneous infinite system, it cannot be neglected for a finite-size system. Then, according to Ref.~\cite{Lewenstein}, the phase of the order parameter is definite only for a short time because ${\hat Q}$ is interpreted as a phase operator, and its quantum fluctuation is given by $\Delta Q \sim t$ for large $t$. We also point out that there is no criterion for specifying a vacuum uniquely, as any energy eigenstate of a free particle has infinite $\Delta Q$. Thus, it is concluded that the free zero mode formulation for a finite-size system is inconsistent. To address this inconsistency, we proposed a new formulation for spontaneous breakdown of the U(1) gauge symmetry in Ref.~\cite{ZeroState}. The key point there is the inclusion of higher-than-third powers of ${\hat P}$ and ${\hat Q}$ in the unperturbed Hamiltonian, which yields their nonlinear equations of motion. We therefore call it the \textit{interacting zero mode formulation}. The stationary Schr\"odinger-like equation with the nonlinear unperturbed Hamiltonian of ${\hat P}$ and ${\hat Q}$ gives bound states rather than the free one, and the energy spectrum becomes discrete, so the ground state is identified uniquely as the vacuum. Then $\Delta Q$ is independent of time, and we have no inconsistency as long as the calculated $\Delta Q$ is small. The interacting zero mode formulation not only enables us to describe quantum fluctuation of a zero mode properly, but also, when two or more symmetries are broken spontaneously and there are multiple zero modes, introduces interactions among the zero modes naturally. In this paper, we focus on interactions among zero modes, extending the interacting zero mode formulation for a single zero mode \cite{ZeroState} to that for multiple ones. 
After giving a general formulation, we consider, as an example of its application, a system consisting of a dark soliton in a homogeneous condensate, where two zero modes coexist corresponding to the spontaneously broken translational and U(1) gauge symmetries. This example was studied in Ref.~\cite{Dziarmaga}, in which a nonperturbative treatment of the zero modes using an effective Hamiltonian was proposed. However, the resultant Hamiltonian represents the free motion of each zero mode with no interaction, so our approach is essentially different from that in Ref.~\cite{Dziarmaga}. This paper is organized as follows: In Sect.~II, we extend the interacting zero mode formulation for a single zero mode, presented in \cite{ZeroState}, to a general case of multiple zero modes, comparing it with the corresponding free zero mode case. In Sect.~III, the general formulation is applied to the originally homogeneous system containing a condensate and a dark soliton. Two zero modes appear and interact with each other. We are particularly interested in the quantum fluctuations $\Delta Q_i$ (where $i$ labels each zero mode), which are affected by interactions among the zero modes. Section IV presents a summary and conclusion. \bigskip \section{FORMULATION OF MULTIPLE ZERO MODES IN QUANTUM FIELD THEORY} In Ref.~\cite{ZeroState}, we proposed the interacting zero mode formulation for a finite-size system with spontaneous breakdown of the U(1) gauge symmetry. The main motivation for the formulation is that the quantum fluctuation in the phase of the order parameter cannot remain small in the conventional free zero mode formulation, whereas its smallness is the starting assumption of the formulation. In this section, we extend the new formulation to cases of multiple zero modes. We suppose a Hamiltonian with global U(1) gauge symmetry, \be \label{eq:originalH} \hat{H} \!=\! \intx \! \left[ \hat{\psi}^\d \! \left(-\frac{\nabla^2}{2m} + V_{\mathrm{ex}} - \mu\right) \! 
\hat{\psi} + \frac{g}{2} \hat{\psi}^\d\hat{\psi}^\d\hat{\psi}\hat{\psi} \right] \,, \ee where $V_{\mathrm{ex}}$, $m$, $\mu$, and $g$ are the external potential, atomic mass, chemical potential, and repulsive coupling constant ($g>0$), respectively. Throughout this paper, we set $\hbar$ to unity. The bosonic field operator $\hat{\psi}$ obeys the canonical commutation relations $ \bigl[ \hat{\psi}(\bx,t) , \hat{\psi}^\d(\bx',t) \bigr] = \delta(\bx-\bx') ,\,\, \bigl[ \hat{\psi}(\bx,t) , \hat{\psi}(\bx',t) \bigr] = 0 \,. $ When the U(1) gauge symmetry is broken spontaneously, $\hat{\psi}$ is divided into coherent and incoherent parts as $\hat{\psi}=\xi + \hat{\phi}$, according to the criterion $\bra0 \hat{\phi} \ket0 = 0$. The coherent part, or the order parameter $\xi$, is related to the total number of condensates, $N_0 = \intx |\xi|^2$, and the vacuum $\ket0$ is determined self-consistently later. The total Hamiltonian is rewritten in terms of $\hat{\phi}$ as $ \hat{H} = \hat{H}_1 + \hat{H}_2 + \hat{H}_3 + \hat{H}_4 , $ where \begin{align} \hat{H}_1 \!&=\! \intx \! \left[ \hat{\phi}^\d \! \left(-\frac{\nabla^2}{2m}+V_{\mathrm{ex}} - \mu + g|\xi|^2\right) \! \xi \right] \!+\!{\rm h.c.} \,,\\ \hat{H}_2 \!&=\! \intx \! \left[ \hat{\phi}^\d \mathcal{L} \hat{\phi} + \frac12\hat{\phi}\mathcal{M}\hat{\phi} + \frac12\hat{\phi}^\d \mathcal{M}^* \hat{\phi}^\d \right] \,,\label{eq:defH2}\\ \hat{H}_3 \!&=\! g\intx \xi \hat{\phi}^\d\hat{\phi}^\d\hat{\phi} + {\rm h.c.} \,,\\ \hat{H}_4 \!&=\! \frac{g}{2}\intx \hat{\phi}^\d\hat{\phi}^\d\hat{\phi}\hat{\phi}\,, \end{align} where $\mathcal{L} = -\nabla^2/2m+V_{\mathrm{ex}} -\mu + 2g|\xi|^2\,$ and $\mathcal{M} = g\xi^2\,$. \subsection{FREE ZERO MODE FORMULATION} In the conventional approach, one chooses $\hat{H}_1 + \hat{H}_2$ as the unperturbed Hamiltonian, assuming small $\hat{\phi}$: \be \label{def:HLW} \hat{H}_0 = \hat{H}_1 + \hat{H}_2.
\ee Because the vacuum of this Hamiltonian is time-independent, the field division criterion $\bra0 \hat{\phi}(\bx,t) \ket0=0$ and the Heisenberg equation yield $\hat{H}_1=0$. This implies that $\xi$ should satisfy the Gross--Pitaevskii (GP) equation \cite{GP}, \be \label{eq:GP} \left(-\frac{\nabla^2}{2m}+V_{\mathrm{ex}} - \mu + g|\xi|^2\right)\xi=0\,. \ee To diagonalize $\hat{H}_2$~\cite{Lewenstein,Matsumoto2}, we introduce the Bogoliubov--de Gennes (BdG) equation~\cite{Bogoliubov,deGennes} $ T \by_n = \omega_n \by_n \, $ with the doublet notations \be T = \BM \mathcal{L} & \mathcal{M} \\ -\mathcal{M}^* & -\mathcal{L} \EM \,,\qquad \by_n = \BM u_n \\ v_n \EM\,. \ee Now, we restrict ourselves to cases where all the eigenvalues are real, which implies that the system is dynamically stable, in order to diagonalize the excited modes. The diagonalization of a system with complex modes (that is, a dynamically unstable system) is discussed in Ref. \cite{Mine}. We consider that some symmetries in addition to the U(1) gauge symmetry are spontaneously broken, so we need two or more eigenfunctions belonging to the zero eigenvalue, {\it i.e.}, $T \by_{0,i}=0\,$, where $\by_{0,i} = (f_i ,\; -f_i^*)^t\,$, with a label $i$. For the sake of completeness, one has to introduce an adjoint function $\by_{-1,i} = (h_i ,\; h_i^*)^t$ to each $\by_{0,i}$, which satisfies $T\by_{-1,i}=I_{i}\by_{0,i}$ and $2\intx \! h_i^* \,f_i=1$, with normalization constants $I_i$.
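As a concrete check of the doublet structure, note that for the U(1) zero mode one may take $f\propto\xi$: using the definitions of $\mathcal{L}$ and $\mathcal{M}$ above, the first row of $T\by_{0}=0$ is exactly the GP equation,
\be
\mathcal{L}\,\xi-\mathcal{M}\,\xi^{*}
=\left(-\frac{\nabla^2}{2m}+V_{\mathrm{ex}}-\mu+2g|\xi|^2\right)\xi-g\xi^{2}\xi^{*}
=\left(-\frac{\nabla^2}{2m}+V_{\mathrm{ex}}-\mu+g|\xi|^2\right)\xi=0\,,
\ee
and the second row is its complex conjugate, $\mathcal{L}$ being real.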
Let us expand $\hat{\phi}(\bx,t)$ by the BdG complete set as $\hat{\phi}(\bx,t)= \hat{\phi}_0(\bx,t) + \hat{\phi}_{\text{ex}}(\bx,t)$, where \begin{align} \label{eq:phi_expansion} \hat{\phi}_0(\bx,t) &= \sum_{i=\text{z.m.}} \left[ -i\hat{Q}_i(t) f_i(\bx) + \hat{P}_i(t) h_i(\bx) \right] \,,\\ \label{eq:phi_expansion2} \hat{\phi}_{\text{ex}}(\bx,t) &= \sum_{\ell=\text{ex.}} \left[ \hat{a}_\ell(t) u_\ell(\bx) + \hat{a}_\ell^\d(t) v_\ell^*(\bx) \right] \,, \end{align} where ``z.m.'' and ``ex.'' represent summations over the zero and excitation modes, respectively. The operators satisfy $ [\hat{Q}_i(t), \hat{P}_j(t)] = i\delta_{ij} ,\, [\hat{a}_\ell(t), \hat{a}_{\ell'}^\d(t)] = \delta_{\ell\ell'} \,, $ and the vanishing ones otherwise, where $\hat{Q}_i(t)$ and $\hat{P}_i(t)$, also called the zero mode operators or quantum coordinates, are hermitian. Substituting the expansions (\ref{eq:phi_expansion}) and (\ref{eq:phi_expansion2}) into Eq.~(\ref{eq:defH2}), we obtain \be \label{eq:H2} \hat{H}_0 = \hat{H}_2 =\sum_{i=\text{z.m.}}\frac{\hat{P}_i^2}{2I^{-1}_i} + \sum_{\ell=\text{ex.}} \omega_\ell \hat{a}_\ell^\d \hat{a}_\ell \,. \ee The unperturbed Hamiltonian~(\ref{eq:H2}) is represented by the sum of the harmonic part of the excitation for each $\ell$ and the free part of the quantum coordinates for each $i$. Because of the latter contribution, we call this conventional formulation the free zero mode one. As a matter of course, there is no interaction among the zero modes in the Hamiltonian~(\ref{eq:H2}). The vacuum of the total system would be the ground state of Eq.~(\ref{eq:H2}): $ \ket0 = \prod_{i}\ket0_i \otimes \ket0_{\text{ex}} \,, $ where $\ket0_i$ is the ground state in the $i$th zero mode sector satisfying the field division criterion $ { }_i\bra0 \hat{Q}_i \ket0_i =\, { }_i\bra0\hat{P}_i \ket0_i = 0 \,. $ However, the choice of $\ket0_i$ causes problems. 
First, $I_i$ may be negative, which corresponds to having a system with a negative ``mass,'' and there is no ground state. On the other hand, for $I_i>0$, the lowest eigenstate is the zero ``momentum'' state, $\hat{P}_i\ket{0}_i=0$. As a result of the uncertainty relation, the standard deviation of $\hat Q_i$, denoted by $\Delta Q_i=\sqrt{_i\bra0\hat{Q}^2_i \ket0_i - {}_i\bra0\hat{Q}_i\ket0_i^2}$, diverges. This divergence immediately conflicts with the starting assumption of small $\hat{\phi}$. It also yields an unphysical situation in which some physical quantities such as the total number density also diverge, which will be seen in Eq.~(\ref{meaningQx:depletion}). One way to resolve this contradiction might be to choose a wave packet state as the vacuum \cite{Lewenstein}. Although $\Delta Q_i$ is finite at $t=0$, it is again divergent after a long time because of the collapse of the wave packet; that is, $\hat{Q}_i(t)=\hat{Q}_i(0)+I_i\hat{P}_i(0) t$ and $\Delta Q_i \propto t$ for large $t$. It is argued in Ref.~\cite{Lewenstein} that $\hat{Q}$, acting as the phase operator of the order parameter, yields a divergent $ \Delta Q(t)$ as $t$ goes to infinity, and that the phase diffuses. All of the above pathological properties are rooted in the fact that the free Hamiltonian has a continuous spectrum. We note that the difficulties concerning quantum fluctuations of zero modes are veiled in the Bogoliubov approximation, in which the original creation and annihilation operators associated with the eigenfunction belonging to the zero eigenvalue are replaced with classical numbers in the field ${\hat \psi}$, or the zero mode operators in ${\hat \phi}$ are simply neglected.
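The linear spreading of the wave packet follows in one line from the zero mode part of the free Hamiltonian~(\ref{eq:H2}): the Heisenberg equations give
\be
\dot{\hat{Q}}_i = i[\hat{H}_0,\hat{Q}_i] = I_i\hat{P}_i\,,\qquad
\hat{Q}_i(t)=\hat{Q}_i(0)+I_i\hat{P}_i(0)\,t\,,
\ee
so that, for a minimal wave packet with $\langle\{\hat{Q}_i(0),\hat{P}_i(0)\}\rangle=0$,
\be
\left(\Delta Q_i(t)\right)^2=\left(\Delta Q_i(0)\right)^2+I_i^{2}\left(\Delta P_i(0)\right)^{2}t^{2}\,,
\ee
which grows without bound for any normalizable packet, since $\Delta P_i(0)>0$.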
\subsection{INTERACTING ZERO MODE FORMULATION} Although the choice of the unperturbed Hamiltonian~(\ref{def:HLW}) is based on the assumption of small $\hat{\phi}$, or, more precisely, small $\hat{\phi}_0$ and small $\hat{\phi}_{\text{ex}}$, $\avg{\hat{Q}_i^2(t)}$ is divergent, which indicates large $\hat{\phi}_0$. Thus, the zero mode fluctuations cannot be kept small, and the assumption of small $\hat{\phi}_0$ has to be abandoned. This recognition is the starting point of Ref.~\cite{ZeroState}. Gathering all the terms consisting only of $\hat{\phi}_0$ in the total Hamiltonian, we introduce the new unperturbed Hamiltonian \begin{align} \label{eq:Hu} \hat{H}_u = \hat{H}_2 + \Delta \hat{H} \,, \end{align} where $\hat{H}_{1}=0$, for the same reason as in the previous subsection. The additional component, $\Delta \hat{H}$, is the sum of the third and fourth powers of the zero mode operators and their counter terms, \begin{align} \Delta \hat{H} = \hat{H}^{QP}_3 + \hat{H}^{QP}_4 \!-\! \sum_{i=\text{z.m.}} \left[ \delta\mu_i \hat{P}_i + \delta\nu_i \hat{Q}_i \right] \,. \end{align} The superscript $QP$ indicates that all the terms consisting only of $\hat{Q}_i$ and $\hat{P}_i$ are picked up. We set up the stationary Schr\"odinger-like equation, \be \label{eq:HuEigen} \hat{H}_u^{QP} \ket{\Psi_\nu}= E_{\nu}\ket{\Psi_\nu} \, \quad (\nu=0,1,2,\cdots) . \ee As was seen in Ref.~\cite{ZeroState} for the model with the single zero mode and will be seen in the example with two zero modes in the next section, the eigenequation~(\ref{eq:HuEigen}) is a type of bound state problem and yields a discrete spectrum, in contrast to the free zero mode formulation in the previous subsection. It is quite natural to take the whole unperturbed vacuum $\ket0= \ket{\Psi_0} \otimes \ket{0}_{\mathrm{ex}}$, where $\ket{\Psi_0}$ is the ground state of the above equation. 
The unknown parameters $\delta\mu_i$ and $\delta\nu_i$ involved in $\hat{H}_u^{QP}$ should be determined so as to satisfy the field division criterion \be \label{eq:criterion_of_division} \bra{\Psi_0} \hat{Q}_i \ket{\Psi_0}= \bra{\Psi_0}\hat{P}_i \ket{\Psi_0} = 0 \, \ee in a manner consistent with Eq.~(\ref{eq:HuEigen}). Substituting the expansion (\ref{eq:phi_expansion}) into Eq.~(\ref{eq:Hu}), we gather it as $ \hat{H}_u = \hat{H}_{u,1}^{QP} \!+\! \hat{H}_{u,2}^{QP} \!+\! \hat{H}_{u,3}^{QP} \!+\! \hat{H}_{u,4}^{QP} \!+\! \sum_\ell \omega_\ell \hat{a}_\ell^\d \hat{a}_\ell \,, $ where \begin{align} \hat{H}_{u,1}^{QP} &\!=\! -\delta\mu_{i} \hat{P}_{i} -\delta\nu_{i} \hat{Q}_{i} ,\, \\ \hat{H}_{u,2}^{QP} &\!=\! \frac{\hat{P}^2_{i}}{2I^{-1}_{i}} ,\, \\ \hat{H}_{u,3}^{QP} &\!=\! 2 \mathrm{Re} \Big[\!-\! i A_{\theta jk\ell} \hat{Q}_j \hat{Q}_k \hat{Q}_\ell \!+\! B_{\theta jk\ell} \hat{Q}_j \{\hat{Q}_k,\, \! \hat{P}_\ell \} \nonumber\\ & \hspace{1cm} \!-\! B^*_{kj\theta\ell} \hat{P}_\ell \hat{Q}_k \hat{Q}_j \!+\! i C_{\theta jk\ell} \hat{Q}_j \hat{P}_k \hat{P}_\ell \nonumber\\ & \hspace{1cm} \!-\! i C'_{\theta jk\ell} \hat{P}_k \{\hat{Q}_j,\, \! \hat{P}_\ell \} \!+\! D_{\theta jk\ell} \hat{P}_j \hat{P}_k \hat{P}_\ell \Big] ,\,\label{eq:Hu3}\\ \hat{H}_{u,4}^{QP} &\!=\! \frac{A_{ijk\ell}}{2} \hat{Q}_i \hat{Q}_j \hat{Q}_k \hat{Q}_\ell \!-\! \mathrm{Im} \! \left[ B_{ijk\ell} \right] \hat{Q}_i \hat{Q}_j \{\hat{Q}_k,\, \! \hat{P}_\ell \} \nonumber\\ & \hspace{0.4cm} \!-\! \mathrm{Re} \! \left[ C_{ijk\ell} \right] \hat{Q}_i \hat{Q}_j \hat{P}_k \hat{P}_\ell \!+\! \frac{C'_{ijk\ell}}{2} \{ \hat{Q}_i,\, \! \hat{P}_k \} \{ \hat{Q}_j,\, \! \hat{P}_\ell \} \nonumber\\ & \hspace{0.4cm} \!-\! \mathrm{Im} \! \left[ D_{ijk\ell} \right] \{ \hat{Q}_i,\, \! \hat{P}_j \} \hat{P}_k \hat{P}_\ell \!+\! 
\frac{E_{ijk\ell}}{2} \hat{P}_i \hat{P}_j \hat{P}_k \hat{P}_\ell \,, \label{eq:Hu4} \end{align} with \begin{alignat}{3} \label{eq:defAE} A_{ijk\ell} &= g\int dx f^*_i f^*_j f_k f_\ell ,\,\, B_{ijk\ell} = g\int dx f^*_i f^*_j f_k h_\ell ,\,\, \nonumber\\ C_{ijk\ell} &= g\int dx f^*_i f^*_j h_k h_\ell ,\,\, C'_{ijk\ell} = g\int dx f^*_i f_j h^*_k h_\ell ,\,\, \nonumber\\ D_{ijk\ell} &= g\int dx f^*_i h^*_j h_k h_\ell ,\,\, E_{ijk\ell} = g\int dx h^*_i h^*_j h_k h_\ell . \end{alignat} In Eqs.~(\ref{eq:Hu3}) and (\ref{eq:Hu4}), we define $\{\hat{O}_1,\hat{O}_2\}\equiv\hat{O}_1\hat{O}_2 + \hat{O}_2\hat{O}_1$, and the dummy indices should be summed over. A remarkable consequence of the present formulation is that we have cross-terms among the different zero mode operators in the unperturbed Hamiltonian, namely, interactions and mixings among them. In other words, the Hamiltonian ${\hat H}_u$ is an effective Hamiltonian in the zero mode sector governing the dynamics of the condensate and is uniquely derived from the original Hamiltonian~(\ref{eq:originalH}). \section{APPLICATION TO HOMOGENEOUS SYSTEM WITH DARK SOLITON} In this section, we consider a system consisting of a dark soliton in a homogeneous system, which is described by the Hamiltonian in Eq.~(\ref{eq:originalH}) with $V_{\mathrm{ex}}=0$ and has two zero modes. The GP solution of a one-dimensional dark soliton is \begin{align} \xi(x) = \sqrt{n_0} \tanh \left\{\kappa (x-x_0) \right\},\quad \mu = gn_0 \, , \end{align} where $\kappa = \sqrt{mg n_0}$, and $n_0$ is the bulk density of the condensate. Hereafter, we set $x_0=0$ and $n_0=1$ for the sake of simplicity. The zero modes and their adjoint eigenfunctions are \begin{align} f_{\theta} &= \xi(x) \,, \label{def_of_f_t} \\ f_{x} &= i\frac{d}{dx} \xi(x) \,, \label{def_of_f_x}\\ h_{\theta} &= \frac{\sqrt{g}}{2L} \left[ \tanh (\kappa x) \!+\! \kappa x \left\{ 1 \!-\! 
\tanh^2 (\kappa x) \right\} \right] \,,\\ h_{x} &= -\frac{i}{4} \,,\label{def_of_h_x}\\ I_\theta &= \frac{g}{L} \,, \,\,\,\, I_x = -\frac{g}{4\kappa} \,, \end{align} where the subscripts $\theta$ and $x$ denote the $U(1)$ gauge and translational modes, respectively. We plot the four functions for later convenience in Fig.~\ref{figfh}. \begin{figure}[tbh!] \begin{center} \vspace{1cm} \includegraphics[width=8cm]{Fig1.eps} \end{center} \caption{\footnotesize (Color online) Plots of the zero modes and their adjoint eigenfunctions. Red bold and thin solid lines indicate $f_\theta$ and $h_\theta$, respectively. Green bold and thin broken lines indicate $f_x$ and $h_x$, respectively. } \label{figfh} \end{figure} Note that although in Ref.~\cite{Dziarmaga}, which considered a nonperturbative treatment of the two zero modes, the system is put in an artificial box of size $L$ with the boundary condition $f_\theta(-L/2)=f_\theta(L/2)=0$, our boundary condition is antiperiodic, $f_\theta(-L/2)=-f_\theta(L/2)\neq 0 $. The operators $\hat{Q}_\theta$ and $\hat{Q}_x$, which are associated with the eigenfunctions (\ref{def_of_f_t}) and (\ref{def_of_f_x}), respectively, may be interpreted as the phase and position operators of the soliton, \begin{align} \label{meaningQx:position} \hat{\psi}(x) &= \xi(x) - i\hat Q_\theta \xi(x) + \hat{Q}_x \frac{d\xi(x)}{dx} + \cdots \nonumber\\ &\simeq \xi(x+\hat{Q}_x)e^{-i\hat{Q}_\theta}. \end{align} However, this interpretation is valid only when $\hat{Q}_\theta$ and $\hat{Q}_x$ are small. Our approach does not rely on it. \subsection{FREE ZERO MODE APPROACH FOR DARK SOLITON} In the conventional treatment, the unperturbed Hamiltonian (\ref{eq:H2}) for this system is given by \be \label{def:HDZ} \hat{H}_0 = \frac{\hat{P}_\theta^2}{2I^{-1}_\theta} + \frac{\hat{P}_x^2}{2I^{-1}_x} +\sum_{\ell=\text{ex.}} \omega_\ell \hat{a}_\ell^\d \hat{a}_\ell \,, \ee with positive $I_\theta$ and negative $I_x$.
As mentioned above, the ground state for the U(1) gauge zero mode induces phase diffusion, and there is no ground state for the translational zero mode because of the negative ``mass,'' $I_x<0$. In Ref.~\cite{Dziarmaga}, Eq.~(\ref{def:HDZ}) is derived from the classical Lagrangian for the collective coordinates as the starting point of the nonperturbative treatment, and the U(1) gauge zero mode sector is restricted to a subspace with a definite total number of atoms, assuming a large phase fluctuation. After this procedure, the calculated $\Delta Q_x$ with a Gaussian wave packet state grows with $t$. As can be seen from the total number density \begin{align} \label{meaningQx:depletion} \bra0 &\hat{\psi}^\d(x,t) \hat{\psi}(x,t) \ket0 \nonumber\\ &= \left[ 1+ \Delta Q_\theta^2(t) \right] |\xi(x)|^2 \!+\! \Delta Q_x^2(t) \Big| \frac{d\xi(x)}{dx} \Big|^2 + \cdots \,, \end{align} $\Delta Q_x$ is related to the number of atoms that fill the center of the soliton. The growth of $\Delta Q_x(t)$ with $t$ therefore corresponds to quantum depletion. A similar result of quantum depletion for a system with an attractive interaction having a bright soliton was obtained in Ref.~\cite{Huang}, although there the ``mass'' $I_x$ is positive. The difficulty of the free zero mode approach lies in the negative ``mass'' problem, namely, the absence of a ground state for the unperturbed Hamiltonian~(\ref{def:HDZ}), and the fact that $\Delta Q_x(t)$ increases to infinity with time. \subsection{INTERACTING ZERO MODE APPROACH FOR DARK SOLITON} In the interacting zero mode approach, we can obtain the natural vacuum, which causes neither phase diffusion nor the negative ``mass'' problem, by solving the Schr\"{o}dinger-like equation~(\ref{eq:HuEigen}) with the unperturbed Hamiltonian~(\ref{eq:Hu}). First, one has the nonperturbative Hamiltonian consisting only of ${\hat Q}_i$ and ${\hat P}_i$ with $i$ equal to either $\theta$ or $x$, denoted by $\hat{H}^{QP}_{u,i}$ $(i=\theta\,,\,x)$.
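The unbounded growth of $\Delta Q_x(t)$ in the free zero mode treatment is just the textbook spreading of a free wave packet under $\hat{H}=\hat{P}^2/(2M)$. The following sketch (illustrative parameters, $\hbar=1$; $M$ merely stands in for $|I_x^{-1}|^{-1}$) propagates a Gaussian spectrally and reproduces the analytic width $\sigma(t)=\sigma_0\sqrt{1+(t/2M\sigma_0^2)^2}$.

```python
import numpy as np

# Spreading of a free Gaussian wave packet under H = P^2/(2M), hbar = 1.
# Illustrates why Delta Q_x(t) grows without bound in the free zero mode
# treatment; parameter values are illustrative only.
M, sigma0 = 1.0, 1.0
L, N = 400.0, 4096
dx = L / N
x = (np.arange(N) - N // 2) * dx
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)

psi0 = np.exp(-x**2 / (4 * sigma0**2))
psi0 /= np.sqrt(np.sum(np.abs(psi0)**2) * dx)

def width(psi):
    """Standard deviation of position for a normalized wave function."""
    rho = np.abs(psi)**2 * dx
    mean = np.sum(x * rho)
    return np.sqrt(np.sum((x - mean)**2 * rho))

times = (0.0, 2.0, 4.0)
widths = [width(np.fft.ifft(np.exp(-1j * k**2 * t / (2 * M))
                            * np.fft.fft(psi0))) for t in times]
# analytic benchmark: sigma(t) = sigma0 * sqrt(1 + (t/(2 M sigma0^2))^2)
```

The monotone growth of the width, independent of the sign of $M$, is the mechanism behind the divergence of $\Delta Q_x(t)$.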
In addition, there are interactions between $\{{\hat Q}_\theta\,,\,{\hat P}_\theta\}$ and $\{{\hat Q}_x\,,\,{\hat P}_x\}$, on which we focus our attention. We seek the dominant contribution of the interaction terms in the present model. Equations~(\ref{def_of_f_t})--(\ref{def_of_h_x}) and Fig.~\ref{figfh} show that $f_\theta$ and $h_\theta$ are odd functions with respect to the variable $x$, whereas $f_x$ and $h_x$ are even ones. The indices $(i,j,k,\ell)$ of the non-vanishing cross-term coefficients in Eq.~(\ref{eq:defAE}) must therefore be $(\theta,\theta,x,x)$ in some order. We consider the limit of large $L$, much larger than the coherence length. Then the magnitude of $h_x$ is small, of order $1/L$, so the $D$ and $E$ terms can be neglected. The function $f_x$ peaks sharply around $x=0$, so the contributions of the $A$ and $B$ terms are also small. Thus, the dominant contributions come only from the $C$ and $C'$ terms. One can neglect the third-power terms in the zero mode operators because their contributions are small compared with those of the fourth-power terms, owing to the field division criterion. Consequently, we have the approximate Hamiltonian \begin{align} \label{Eq:Heff} \hat{H}^{QP}_u &\simeq\hat{H}^{QP}_{u,\theta} + \hat{H}^{QP}_{u,x} \!+\! 3|C_{\theta\theta xx}| \hat{Q}^2_{\theta}\hat{P}^2_{x} \,. \end{align} Here the last term represents the dominant interaction between the U(1) gauge and translational zero modes. Solving the Schr\"{o}dinger-like equation (\ref{eq:HuEigen}) with the Hamiltonian (\ref{Eq:Heff}) numerically, we find the ground state or vacuum in the zero mode sector. To illustrate the significance of the mutual interaction, namely, the last term in Eq.~(\ref{Eq:Heff}), we plot the ground state distribution $|\Psi_0(Q_\theta, Q_x=0)|^2$ with and without it in Fig.~\ref{Fig:groundstate}; the difference between the plots is striking.
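The narrowing effect of a $\hat{Q}_\theta^2\hat{P}_x^2$ cross term can be demonstrated with a structurally analogous two-mode toy model (both modes are given positive kinetic terms and quartic confinement here, so this is only a sketch of the mechanism, not the actual soliton Hamiltonian):

```python
import numpy as np

# Two modes in truncated Fock bases; a positive Q1^2 P2^2 cross term of
# the 3|C| Q_theta^2 P_x^2 type acts as an extra harmonic potential for
# mode 1 and narrows its ground state distribution. All coefficients
# (quartic strength, |C| = 0.5) are made-up illustrative values.
n = 20
a = np.diag(np.sqrt(np.arange(1, n)), k=1)
Q = (a + a.T) / np.sqrt(2.0)
P = (a - a.T) / (1j * np.sqrt(2.0))
Id = np.eye(n)

def quartic_mode():
    """Single-mode toy Hamiltonian P^2/2 + Q^4/4."""
    return (P @ P).real / 2.0 + 0.25 * np.linalg.matrix_power(Q, 4)

H0 = np.kron(quartic_mode(), Id) + np.kron(Id, quartic_mode())
cross = 3.0 * 0.5 * np.kron(Q @ Q, (P @ P).real)   # 3|C| Q_theta^2 P_x^2

def qtheta_variance(H):
    """<Q_theta^2> in the ground state (<Q_theta> = 0 by symmetry)."""
    w, v = np.linalg.eigh(H)
    g = v[:, 0]
    return np.real(g.conj() @ (np.kron(Q @ Q, Id) @ g))

var_free = qtheta_variance(H0)
var_coupled = qtheta_variance(H0 + cross)
# var_coupled < var_free: the cross term sharpens the Q_theta distribution
```

This mirrors the sharpening of $|\Psi_0(Q_\theta,Q_x=0)|^2$ seen in Fig.~\ref{Fig:groundstate} when the mutual interaction is switched on.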
The sharp distribution in the presence of the mutual interaction can be understood from the fact that the last term, behaving as $Q_\theta^2$, serves as an additional harmonic potential for $Q_\theta$. \begin{figure}[tbh!] \begin{center} \vspace{1cm} \includegraphics[width=8cm]{Fig2.eps} \end{center} \caption{\footnotesize (Color online) Ground state distribution $|\Psi_0(Q_\theta, Q_x=0)|^2$ with and without the mutual interaction for $g=1$ and $L=1000$. (Red) bold solid and thin broken lines indicate the distributions for $\hat{H}^{QP}_u$ and $\hat{H}^{QP}_{u,\theta} + \hat{H}^{QP}_{u,x}$, respectively. Axes are scaled by $\ell_\theta = \sqrt[6]{I_\theta/(24 A_{\theta\theta\theta\theta})}$ (see Ref.~\cite{ZeroState}). } \label{Fig:groundstate} \end{figure} One can evaluate the standard deviations $\Delta Q_i$ using the ground state distribution obtained above. The results versus the parameter $g$ are presented in Fig.~\ref{figSDg}, which shows that $\Delta Q_\theta$ and $\Delta Q_x$ decrease according to the power law $g^{-\alpha_i}$, with $\alpha_\theta = 0.092\cdots$ and $\alpha_x=-0.192\cdots$. It is natural that $\Delta Q_x$ decreases as $g$ increases, as the coherence length is proportional to $g^{-1/2}$ and the soliton becomes sharper for larger $g$. We emphasize that the mutual interaction between the two zero modes qualitatively changes the $g$ dependence of $\Delta Q_\theta$: There would be no $g$ dependence without the mutual interaction, as in a normal condensate~\cite{ZeroState}. The suppression of $\Delta Q_\theta$ because of the mutual interaction may be rephrased as the condensate phase becoming rigid because of the presence of a soliton with a $\pi$ phase kink. \begin{figure}[tbh!] \begin{center} \vspace{1cm} \includegraphics[width=8cm]{Fig3.eps} \end{center} \caption{\footnotesize (Color online) $g$ dependences of standard deviations $\Delta Q_\theta$ and $\Delta Q_x$ with $m=1$ and $N_0=10^3$.
(Red) bold and (green) thin solid lines indicate $\Delta Q_\theta$ and $\Delta Q_x$ for the ground state of $\hat{H}^{QP}_u$ in Eq. (\ref{Eq:Heff}), respectively. Broken and dotted lines indicate the same quantities for the ground state of the Hamiltonian $\hat{H}^{QP}_{u,\theta} + \hat{H}^{QP}_{u,x}$, namely, for the ground state without the mutual interaction between the two zero modes. } \label{figSDg} \end{figure} So far we have considered the homogeneous limit, {\it i.e.}, the large $L$ limit. We now turn our attention to finite-$L$ systems and estimate the $L$ dependence, since $L$ is a controllable parameter in experiments, although in that case the translational symmetry is broken not spontaneously but explicitly. The calculations of the interacting zero mode formulation presented above are straightforwardly applied to a cylindrical system with a circumferential length $2L$ and two dark solitons, or to a system consisting of a dark soliton confined in a box of size $L$. For the latter, although our antiperiodic boundary condition is not realistic, we expect that the calculations reflect the effects of finite $L$ qualitatively. The $L$ dependence of $\Delta Q_i$ is shown in Fig.~\ref{figSDL}: $\Delta Q_\theta$ decreases as $L^{\beta_\theta}$ with $\beta_\theta= -0.440\cdots$, whereas $\Delta Q_x$ increases as $L^{\beta_x}$ with $\beta_x=0.115\cdots$. \begin{figure}[tbh!] \begin{center} \vspace{1cm} \includegraphics[width=8cm]{Fig4.eps} \end{center} \caption{\footnotesize (Color online) Effect of system size on standard deviations $\Delta Q_\theta$ and $\Delta Q_x$ with $m=1$ and $g=1$. Each line indicates the same quantity as in Fig.~\ref{figSDg}. } \label{figSDL} \end{figure} It is not surprising that $\Delta Q_\theta$ depends on $L$, as the eigenfunction $f_\theta(x)$ in Eq.~(\ref{def_of_f_t}) extends over the entire range from $-L/2$ to $L/2$.
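Exponents such as the $\beta_i$ quoted here (and the $\alpha_i$ of the $g$ dependence) are obtained by fitting a straight line to the data in log-log coordinates; a minimal sketch with synthetic data (the values of $c$ and $\beta$ below are made up for illustration):

```python
import numpy as np

# Power-law exponent extraction, Delta Q ~ c * L**beta, via a linear fit
# in log-log coordinates. Synthetic data; beta_true merely stands in for
# the quoted beta_theta or beta_x.
beta_true, c = -0.440, 2.0
L_vals = np.array([100.0, 200.0, 400.0, 800.0, 1600.0])
dQ = c * L_vals**beta_true

# slope of log(dQ) vs log(L) is the exponent, intercept gives log(c)
beta_fit, log_c = np.polyfit(np.log(L_vals), np.log(dQ), 1)
```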
On the other hand, the fact that $\Delta Q_x$ also clearly depends on $L$ is puzzling at first glance, because the eigenfunction $f_x(x)$ in Eq.~(\ref{def_of_f_x}), with which the position operator of the soliton $\hat Q_x$ is associated, has a sharp distribution around the center of the soliton and is not affected by $L$. The puzzle can be resolved as follows: Whereas $\hat{Q}_x$ describes the position of the soliton, its conjugate partner $\hat{P}_x$ corresponds to its momentum or velocity and is size-dependent, as the spread of $h_x(x)$ is of order $L$ [see Eqs.~(\ref{eq:phi_expansion}) and (\ref{def_of_h_x})]. It is well known that although a standing soliton has a $\pi$ phase kink, a moving one has a smaller kink~\cite{Soliton_Review}. In view of this, the global character of $h_x(x)$ in Eq.~(\ref{def_of_h_x}), which acts to untwist the phase kink, can be understood. Because $\hat{Q}_x$ and $\hat{P}_x$ are canonically conjugate to each other and satisfy the uncertainty relation $\Delta Q_x \cdot \Delta P_x \sim 1/2$, the standard deviation $\Delta Q_x$ depends on $L$ as a consequence of the $L$ dependence of $\Delta P_x$. The size effects may be observed in experiments on finite-size systems in which $L$ is controlled. \section{Summary} Considering that the zero mode is the essence of SSB and that its quantum fluctuation must be treated properly, we adopted the interacting zero mode formulation and extended it from a single zero mode system to the general case of multiple zero modes. It yields an effective Hamiltonian of a pair of canonical operators for each zero mode, whose spectrum is discrete, as in the case of a single zero mode system, and it introduces interactions among the zero modes naturally and unambiguously. The physical picture of zero modes interacting with each other is quite new. As an application of the new formulation, a system of size $L$ with a dark soliton is considered.
In the large $L$ or homogeneous limit, there are two zero modes, corresponding to the spontaneous breakdown of the U(1) gauge and translational symmetries. We investigated this system numerically. The vacuum is obtained uniquely, and the standard deviations $\Delta Q_i$ of the zero mode operators can be evaluated. The mutual interaction between the two zero modes influences the ground state distribution and therefore $\Delta Q_\theta$, and its effect is seen in the way that $\Delta Q_i$ depends on the coupling constant $g$. As the trapped ultracold atomic system has a finite size $L$ in real experimental situations, we also studied the $L$ dependence of $\Delta Q_i$, keeping $L$ finite in our calculations. The results may be checked in experiments on a cylindrical system with two dark solitons or on a dark soliton system confined in a finite region. As a characteristic of soliton physics, the behavior of the translational zero mode of the dark soliton is expected to correlate with its velocity. A study of this correlation is left for future work. \begin{acknowledgments} This work is supported in part by a Grant-in-Aid for Scientific Research (C) (No. 25400410) from the Japan Society for the Promotion of Science, Japan; ``Ambient SoC Global Program of Waseda University'' of the Ministry of Education, Culture, Sports, Science and Technology, Japan; and a Waseda University Grant for Special Research Projects (Project No. 2013B-102). \end{acknowledgments}
\section{Introduction} Electrical currents through an incompressible, viscous and resistive liquid conductor produce azimuthal magnetic fields which, beyond a critical field strength, become unstable to a non-axisymmetric, i.e.~kink-type, instability that we will call the Tayler instability (TI) as a tribute to the seminal contributions of R.J. Tayler \cite{Tayler1957,Tayler1973}. For a constant current density in an infinitely long cylinder, R\"udiger \etal showed \cite{Ruediger2011,Ruediger2013} that the governing parameter is the Hartmann number, $Ha=B_{\varphi}(R) R (\sigma/(\rho \nu))^{1/2}$, which has to exceed a value of the order of 20 for the TI to set in ($B_{\varphi}(R)$ is the azimuthal field at the outer radius $R$ of the cylinder; $\sigma$, $\rho$ and $\nu$ are the conductivity, density and viscosity of the fluid, respectively). This critical value of $\mathit{Ha}$ is actually consistent with previous results \cite{Spies1988,Shan1991,Montgomery1993,Cochran1993} concerning the effects of viscosity and resistivity on the stability of various plasma z-pinches, if one leaves aside the effects of complicated boundary conditions and non-homogeneous material parameters in the plasma case. Note that in those early days \cite{Montgomery1993} it was far from obvious that the governing parameter for the onset of the instability is $Ha$, rather than the Lundquist number $S=Ha\, Pm^{1/2}$ (with $Pm=\nu \mu_0 \sigma$ denoting the magnetic Prandtl number, where $\mu_0$ is the magnetic permeability constant). Whilst the focus of fusion-related pinch experiments was prominently on the plasma destabilization when the ratio of axial to azimuthal magnetic field (the so-called safety parameter) falls below a certain critical value \cite{Bergerson2006}, a recent liquid metal experiment, with uniform conductivity and viscosity as well as well-defined insulating boundary conditions, has indeed confirmed the TI threshold of $\mathit{Ha}\simeq 20$ \cite{Seilmayer2012}.
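For orientation: with $B_{\varphi}(R)=\mu_0 I/(2\pi R)$, the Hartmann number is independent of the radius, so the threshold $Ha\simeq 20$ translates directly into a critical total current. The sketch below uses material values representative of GaInSn; these numbers are assumptions of this estimate, not data from the cited experiment.

```python
import math

# Order-of-magnitude estimate of the critical current for TI onset.
# Ha = B_phi(R) * R * sqrt(sigma/(rho*nu)) with B_phi(R) = mu0*I/(2*pi*R),
# so R drops out and Ha_c ~ 20 fixes the current. Material values below
# are representative of GaInSn (assumed, illustrative).
mu0 = 4e-7 * math.pi
sigma = 3.3e6      # electrical conductivity [S/m]
rho = 6.4e3        # density [kg/m^3]
nu = 3.4e-7        # kinematic viscosity [m^2/s]
Ha_c = 20.0

I_c = 2 * math.pi * Ha_c / (mu0 * math.sqrt(sigma / (rho * nu)))  # amps
```

For these parameters the estimate lands in the low-kiloamp range, consistent in order of magnitude with the currents at which the liquid metal experiment observed the instability.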
From the application point of view, current-driven instabilities in liquid metals are presently considered a possible limitation for the integrity of large-scale liquid metal batteries. Such batteries are self-assembling stratified systems made of a heavy liquid metal or metalloid (e.g., Bi, Sb) at the bottom, a suitable molten salt mixture as electrolyte in the middle, and a light alkaline or earth-alkaline metal (e.g., Na, Mg) at the top. While small versions of liquid metal batteries have already been tested \cite{Weaver1962,Cairns1969,Bradwell2012,Kim2012}, the occurrence of the TI could possibly present a serious problem for the stratification in larger batteries with prospected charging/discharging currents of some thousand amps. In \cite{Stefani2011} we had devised a simple trick to suppress the TI in liquid metal batteries by just returning the battery current through a bore in the centre. By the resulting change of the radial profile $B_{\varphi}(r)$ it is possible to avoid the (ideal) condition $\partial (r B^2_{\varphi}(r))/\partial r>0$ \cite{Tayler1973} for the onset of the TI. In a follow-up paper \cite{Weber2013}, a numerical code was presented that is capable of treating TI problems at small values of $Pm$, as they are typical for liquid metals. This was achieved by replacing the solution of the induction equation for the magnetic field with the so-called quasistatic approximation \cite{Davidson2001}. This approximation allows one to avoid the explicit time stepping of the magnetic field: the electrostatic potential is computed from a Poisson equation, and from this the electric current density is derived. The induced magnetic field is then computed from the induced current density via Biot-Savart's law. This way one arrives at an integro-differential equation approach, as had already been used by Meir and Schmidt for different magnetohydrodynamic (MHD) problems \cite{Meir2004}.
Our numerical scheme utilizes the open source CFD library OpenFOAM\textsuperscript{\textregistered}\ \cite{openfoam}, supplemented by an MPI-parallelized implementation of Biot-Savart's law. This code was then applied to a number of TI-related problems, in particular for determining the scaling properties of the growth rate and the saturated velocity field, the dependence of the critical current on the geometric aspect ratio, as well as for validating various methods of preventing the TI in liquid metal batteries \cite{Weber2014}. Recently, our results were confirmed by another code working completely in the framework of the differential equation approach, by analyzing the scaling properties of the solutions with $Pm$ \cite{Herreman2015}. The authors also discussed carefully the limitations of the quasistatic approach for higher values of $Pm$. An interesting by-product of the battery-oriented simulations \cite{Weber2013} was the observation of the transient occurrence, but ultimate disappearance, of helical structures during the evolution of the TI. At first glance, the appearance of helical structures is surprising, since the underlying equations have no preference for left- or right-handed solutions. Yet, it is exactly this helical (or chiral) symmetry breaking that has gained considerable interest in various astrophysical problems. This applies in particular to the concept of the Tayler-Spruit dynamo \cite{Spruit2002} in which an azimuthal magnetic field is thought to become strong enough to drive the TI against the stable stratification in the radiation zone of a star. Combined with the usual differential rotation, this effect might lead to a working dynamo. Despite the attractiveness of the TI, in particular for explaining angular momentum transport in various types of stars \cite{Ruediger2013,Meynet2011,Maeder2014}, the concept of the actual Tayler-Spruit dynamo is not without caveats.
Zahn \etal \cite{Zahn2007} have argued that the TI-produced non-axisymmetric ($m=1$) poloidal magnetic field alone would not be suited to close the dynamo loop (since the toroidal field wound up from it would have the same $m=1$ dependence), but that some sufficiently large mean-field $\alpha$ effect would be needed to produce the necessary axisymmetric poloidal field. It is exactly here where the question whether TI saturates with a finite helicity, produced by a finite $\alpha$ effect, becomes highly relevant. Some recent papers have answered this question affirmatively: Gellert \etal \cite{Gellert2011} have found spontaneous chiral symmetry breaking of the TI in simulations with $Pm$ of 0.1, 1, and 10. Bonanno \etal \cite{Bonanno2012} got a similar result for very large $Pm=10^7$. In addition to the numerical simulation, the latter authors developed a simple model of energy and helicity evolution resulting in an instructive phase portrait. The equations describing this behaviour can also be linked to a similar chiral symmetry breaking in biochemistry where it refers to the selection of one of two possible forms of bio-molecules (mainly sugars and amino acids) that are mirror images of each other \cite{Saito2013}. With this background, the main motivation for the present paper is the discrepancy between the simulations of \cite{Gellert2011,Bonanno2012,Chatterjee2011} and the preliminary result of our low $Pm$ simulations \cite{Weber2013} showing that helicity starts to grow but ultimately decays to zero. Given the different $Pm$ at which the respective simulations were done, it is worthwhile to understand in detail the saturation mechanism of TI in dependence on $Pm$. Actually, helical states have a long history in plasma physics, tracing back to the early work of Lundquist on ''Magneto-hydrostatic fields'' \cite{Lundquist1950}. 
Specializing general pressure-balanced fields to force-free fields that satisfy $(\nabla \times {\bi{B}}) \times {\bi{B}}=0$, he found, first, that fields with $\nabla \times {\bi{B}}= a(r) {\bi{B}}$ fulfill this demand, and second, that $a(r)=const$ must be requested for the field to remain force-free during its time evolution. For cylindrical geometry, Lundquist found that the force-free condition, i.e.~the demand that the current is parallel to the field, is guaranteed by the Bessel function profiles $B_z =A J_0(a r), B_{\varphi}=A J_1(a r)$ (interestingly, the very same profiles for the {\it velocity} field later turned out to provide the most efficient dynamo of the Ponomarenko or Riga type \cite{Stefani1999}). Soon after Lundquist's work, Chandrasekhar and Woltjer \cite{Chandrasekhar1958} interpreted this Bessel function solution in terms of achieving ''maximum magnetic energy for a given mean-squared current density'' or, alternatively, as a ''state of minimum dissipation for a given magnetic energy''. Since Bessel functions also maximize the magnetic helicity for given magnetic energy (and magnetic helicity is a better conserved quantity than energy), a surge of work was devoted to understanding how solutions of this kind can be achieved dynamically. This goes mainly under the notion of Taylor relaxation \cite{Taylor1986}, and has found great interest in connection with the reversed field pinch. Quite a number of workers have tried to understand Taylor relaxation from different thermodynamic principles, such as minimum entropy production \cite{Hameiri1987} or minimum dissipation rates \cite{Montgomery_Phillips1988,Montgomery_Phillips_Theobald1989, Farengo1995,Phillips1996, Bhattacharyya2001,Dewar2008,Dasgupta2009}. One of the first applications of a general thermodynamic principle to plasma relaxation goes back to a note of Max Steenbeck \cite{Steenbeck1932} (relying, in turn, on an earlier idea of Compton and Morse \cite{Compton1927}).
Steenbeck's principle states that in real gas discharges at fixed current the heat power, and thus the voltage drop between the electrodes, is minimized (somewhat surprisingly, this minimum-dissipation principle corresponds perfectly with the maximum entropy production rate principle \cite{Martyushev2006} if one considers the {\it total system} including the current-stabilizing external resistor \cite{Christen2009}.) Interestingly, it was also Steenbeck who was later to create the theoretical framework that nowadays allows for a deeper {\it dynamical} understanding of those somewhat vague thermodynamic principles. Mean-field magnetohydrodynamics (MHD) was originally developed to explain self-excitation of cosmic magnetic fields \cite{Steenbeck1966}. Its main idea is that certain correlations of the small-scale parts of velocity and magnetic field contribute to the dynamics of the large scale magnetic field \cite{Krause1980}. For helical turbulence the authors introduced the celebrated $\alpha$ effect which drives an electromotive force parallel to a prevailing large scale magnetic field $\overline{\bi{B}}$. Similarly, turbulence leads to an increase of the resistivity by the $\beta$ effect, so that the mean electromotive force can be written in the form $\cal{E}=\alpha \overline{\bi{B}} -\beta \nabla \times \overline{\bi{B}}$. Nowadays, mean-field concepts play not only a role in dynamo theory but also in the description of magnetically driven instabilities. Flow-driven helical dynamos and magnetically dominated helical ``dynamos'' are presently considered as two different aspects of the very same mean-field MHD \cite{Blackman2006}. The detailed saturation mechanism of the TI, in particular its termination in a helical or non-helical state, is but one interesting application of mean-field MHD. In this paper, we are going to study the exponential growth and the final saturation of the TI in finite cylindrical geometry for varying values of $Ha$ and $Pm$. 
On the basis of an axisymmetric ($m=0$) base state with a homogeneous axial current $J_0$ that produces an azimuthal magnetic field $B_0$, we compute the $m=1$ TI eigenmode comprising the velocity $\bi u$ and the induced magnetic field $\bi b$, from which we infer the mean electromotive force $\overline{ {\bi{u}} \times {\bi{b}}}$ (the overbar denotes the average over the azimuthal angle), and from this the mean-field coefficients $\alpha$ and $\beta$. As a product of two $m=1$ modes, ${\bi{u}} \times {\bi{b}}$ comprises certain $m=0$ components that drive an azimuthal current (by virtue of the $\alpha$ effect) and reduce the impressed axial current (by virtue of the $\beta$ effect). Although there is no large scale separation between the $m=0$ base state and the $m=1$ perturbation, mean-field theory applies perfectly here. The electromotive force $\cal{E}$ in the direction of the mean field $\overline{\bi{B}}$ (i.e. the $\alpha$ effect) will be interpreted in terms of its relation to the small-scale current helicity, $\cal{E} \cdot \overline{\bi{B}}=-\overline{ \bi{j} \cdot \bi{b}}/\sigma+ \overline{\bi{e} \cdot \bi{b}}$, which had been derived and utilized by different authors \cite{Bhattacharjee1986,Seehafer1994,Ji1999,Blackman2007,Ebrahimi2014}. In the case $S \gg 1$, the modified currents and fields could be expected to resemble the Bessel function structure typical of Taylor relaxation. In this sense, the mean axial field produced by the $\alpha$ effect would follow from the principle of minimum dissipation \cite{Montgomery_Phillips_Theobald1989}. However, this type of saturation mechanism, which relies on changing - by mean-field induction effects - the electromagnetic base state in such a way that it becomes just marginally stable against the TI, does not apply for $S\ll 1$. In this case the magnetic Reynolds number $Rm$ of the TI-produced flow is much too small to induce any significant changes of the originally applied magnetic field.
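Operationally, $\alpha$ and $\beta$ can be extracted from the azimuthally averaged electromotive force by a least-squares fit of ${\cal E} = \alpha \overline{\bi{B}} - \beta \nabla\times\overline{\bi{B}}$. The sketch below does this for the $z$ components on a radial grid, with synthetic profiles and made-up "true" coefficients standing in for simulation data:

```python
import numpy as np

# Least-squares extraction of the mean-field coefficients from
# E_z = alpha * Bz - beta * (curl B)_z on a radial grid.
# Profiles and coefficients are synthetic, for illustration only.
alpha_true, beta_true = 0.3, 0.05
r = np.linspace(0.05, 1.0, 200)
Bz = np.cos(np.pi * r / 2.0)              # toy mean axial field
Bphi = r * (1.0 - r)                      # toy mean azimuthal field
curlB_z = np.gradient(r * Bphi, r) / r    # (curl B)_z = (1/r) d(r B_phi)/dr

emf_z = alpha_true * Bz - beta_true * curlB_z   # synthetic mean EMF

# design matrix: one column per unknown coefficient
A = np.column_stack([Bz, -curlB_z])
(alpha_fit, beta_fit), *_ = np.linalg.lstsq(A, emf_z, rcond=None)
```

The same fit applied to the simulated $\overline{{\bi u}\times{\bi b}}$ yields the $\alpha$ and $\beta$ profiles discussed below.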
The saturation must instead rely on a modification of the {\it hydrodynamic} base state, which we will discuss in detail. We will also evidence the occurrence of helicity oscillations, whose amplitudes and frequencies in dependence on $Ha$ and $Pm$ we will characterize. The paper closes with a discussion of the results, and with an outlook towards an application to stellar dynamo theory. \section{The numerical scheme} The usual numerical schemes for the simulation of the TI, which solve the Navier-Stokes equation for the velocity and the induction equation for the magnetic field, typically work only for values of $Pm$ down to 10$^{-3}$, although in a recent work by Herreman \etal this limit has been challenged \cite{Herreman2015}. Here, we circumvent the usual $Pm$ limitations of these codes by replacing the solution of the induction equation for the magnetic field with the so-called quasistatic approximation \cite{Davidson2001}. We replace the explicit time stepping of the magnetic field by computing the electrostatic potential from a Poisson equation and deriving from it the electric current density. However, in contrast to many other inductionless approximations in which this procedure is sufficient, in our case we cannot avoid computing the induced magnetic field, too. The reason for this is the presence of an externally applied electrical current in the fluid. When computing the Lorentz force term, it turns out that the product of the applied current with the induced field is of the same order as the product of the magnetic field (due to the applied current) with the induced current. Here, we compute the induced magnetic field from the induced current density by means of Biot-Savart's law. This way we arrive at an integro-differential equation approach, as had already been used by Meir and Schmidt \cite{Meir2004}.
In detail, the numerical model as developed by Weber \etal \cite{Weber2013} works as follows: it solves the Navier-Stokes equations (NSE) for incompressible fluids \begin{eqnarray}\label{eqn:navierstokes} \dot {\bi u} + \left({\bi u}\cdot\nabla\right){\bi u} = - \nabla p + \nu \Delta {\bi u} + \frac{\bi f_{\mathrm L} }{\rho}\hspace{5mm}\textrm{and}\hspace{5mm} \nabla\cdot \bi u = 0, \end{eqnarray} with $\bi u$ denoting the velocity, $p$ the (modified) pressure, $\bi f_{\mathrm L} = \bi J \times \bi B $ the electromagnetic Lorentz force density, $\bi J$ the total current density and $\bi B$ the total magnetic field. The NSE is solved using the PISO algorithm and applying no-slip boundary conditions at the walls. Ohm's law in moving conductors \begin{eqnarray} {\bi j} = \sigma\left(-\nabla\varphi + {\bi u}\times {\bi B}\right) \end{eqnarray} allows one to compute the induced current $\bi j$ after first solving a Poisson equation for the perturbed electric potential $\varphi = \phi -J_0z/\sigma$: \begin{eqnarray} \Delta\varphi = \nabla\cdot\left({\bi u} \times {\bi B}\right). \end{eqnarray} In the following, we will concentrate on cylindrical geometries with an axially applied current. Then, after subtracting the (constant) potential part $J_0z/\sigma$, with $z$ as coordinate along the cylinder axis, we use the simple boundary condition $\varphi = 0$ on top and bottom and $\bi n\cdot \nabla \varphi=0$ on the mantle of the cylinder, with $\bi n$ as the surface normal vector. The induced magnetic field can then be calculated by Biot-Savart's law \begin{eqnarray}\label{eqn:biotsavart} {\bi b}({\bi r}) = \frac{\mu_0}{4\pi}\int dV' \, \frac{{\bi j}({\bi r}') \times ({\bi r}-{\bi r}')}{\left|{\bi r}-{\bi r}'\right|^3}. \end{eqnarray} Since this is a costly procedure, we modify here the method of \cite{Weber2013} slightly.
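As a minimal illustration of the Biot-Savart step (a direct-sum sketch, not the MPI-parallelized implementation used with OpenFOAM), the volume integral can be approximated by a sum over discrete current elements; discretizing a circular current loop and evaluating the field at its centre recovers the analytic value $\mu_0 I/(2R)$:

```python
import numpy as np

# Direct evaluation of Biot-Savart's law for a discretized current
# filament; validated against the field at the centre of a circular
# loop, B_z = mu0*I/(2*R). Illustrative sketch only.
mu0 = 4e-7 * np.pi

def biot_savart(r, seg_mid, seg_dl, I):
    """Field at point r from segments given by midpoints and dl vectors."""
    d = r - seg_mid                                   # (N, 3) separations
    cross = np.cross(I * seg_dl, d)                   # I dl x (r - r')
    return mu0 / (4 * np.pi) * np.sum(
        cross / np.linalg.norm(d, axis=1)[:, None]**3, axis=0)

R, I, N = 0.1, 100.0, 2000
t = np.linspace(0.0, 2 * np.pi, N, endpoint=False)
mid = np.column_stack([R * np.cos(t), R * np.sin(t), np.zeros(N)])
dl = np.column_stack([-R * np.sin(t), R * np.cos(t),
                      np.zeros(N)]) * (2 * np.pi / N)

B = biot_savart(np.zeros(3), mid, dl, I)
# analytic benchmark: B_z = mu0 * I / (2 * R)
```

In the actual scheme the sum runs over all cell currents ${\bi j}\,dV'$ of the mesh, which is why the direct evaluation is costly and is restricted to the boundary, as described below.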
Actually, equation (\ref{eqn:biotsavart}) is applied only at the boundary of the cylinder, while the magnetic field in the bulk is computed by solving the vectorial Poisson equation $\Delta {\bi b}=\mu_0 \sigma \nabla \times ( {\bi{u}} \times {\bi{B}} )$ which results from the full time-dependent induction equation in the quasistatic approximation. Knowing $\bi b$ and $\bi j$, we compute the Lorentz force ${\bi f}_{\mathrm L}$ for the next iteration. A flow chart of this numerical procedure is shown in figure \ref{fig:fig1}. For more details about the numerical scheme, see sections 2 and 3 of \cite{Weber2013}. \begin{figure}[h] \centerline{ \includegraphics[width=0.7\columnwidth]{./fig1.eps}} \caption{Flow chart of the simulation model, slightly modified with respect to that of \cite{Weber2013}.} \label{fig:fig1} \end{figure} \section{Results} For the sake of simplicity, we consider a cylindrical electrically conducting fluid with a ratio of height $L$ to diameter $2R$ of 1.25. A current is applied from top to bottom, simply by setting the electric potential to constant (but different) values at the two faces. Note that we refrain from taking into account any currents in the electrodes at top and bottom, which have been shown to lead only to minor modifications of the results \cite{Weber2015}. The side walls of the cylinder are considered as electrically insulating. No-slip boundary conditions apply to the velocity at all boundaries. In the following, we will focus on three different cases. The magnetic Prandtl number for the first two runs is $Pm=10^{-6}$. Differing in the Hartmann number (60 and 100), these two runs will show a quite different behaviour of the helicity. Whereas for $Ha=60$ the initially growing helicity ultimately goes to zero, for $Ha=100$ we observe helicity oscillations in the saturation regime. For the third case, with the much higher $Pm=10^{-3}$ and $Ha=100$, we will find a finite, non-zero value of the final helicity.
At the end of this section, we will summarize the different ways of saturation. \subsection{Saturation with zero helicity} Here, we choose $Ha=60$ and $Pm=10^{-6}$, which results in a Lundquist number $S=0.06$ that is definitely low enough for applying the quasistatic approximation. \begin{figure}[h] \centerline{ \includegraphics[width=0.99\columnwidth]{./fig2.eps}} \caption{Time evolution of various quantities for $Pm=10^{-6}$ and $Ha=60$. (a) - Reynolds number, (b) - normalized rms value of the induced magnetic field, (c) - normalized kinetic helicity, (d) - relation of electromotive force and small-scale current helicity, (e) - normalized mean $\alpha$ effect, (f) - normalized mean $\beta$ effect, (g) - normalized mean axial field, (h) - normalized mean axial current.} \label{fig:fig2} \end{figure} Figure \ref{fig:fig2} exhibits the time dependence of various quantities that characterize the instability. The indicated dimensionless time $t_n$ is $t$ normalized by the viscous time scale $R^2/\nu$. Figure 2a, to start with, shows the evolution of the averaged Reynolds number of the flow arising from the initial state at rest: $\langle Re \rangle=(R/\nu) \langle u^2 \rangle^{1/2}$ (here, $\langle...\rangle$ denotes an average over the total volume rather than only over the azimuthal angle, which will always be indicated by an overbar). We chose a logarithmic scale in order to evidence the exponential growth of the TI. Approximately at $t_n=0.27$, the instability starts to saturate. We have added the respective energy contents of the various azimuthal wavenumbers $m=0,1,2$. Evidently, the $m=1$ mode is the dominating one throughout the evolution. However, approximately at $t_n=0.17$, both the $m=0$ and $m=2$ modes start to increase with twice the growth rate of the $m=1$ mode. These even modes result from the non-linear term of the NSE.
Saturation sets in when the $m=0$ and $m=2$ modes have acquired an amplitude comparable to that of the $m=1$ mode, which ultimately brings the growth rate of the TI to zero. The corresponding evolution of the averaged induced magnetic field is depicted in figure 2b. Note that the $m=0$ component is here significantly weaker than the $m=2$ component, quite in contrast to the rather parallel evolution for the kinetic energy. The kinetic helicity $H_u= \bi{u} \cdot (\nabla \times \bi{u}) $ is the next quantity to be discussed (see figure 2c). Actually, we show here the helicity as normalized by the mean square of the velocity, i.e. ${\langle H_{u} \rangle}_n= \langle {\bi{u}} \cdot (\nabla \times {\bi{u}}) \rangle R /\langle u^2 \rangle $. After an initial increase, this normalized helicity stays nearly constant for a while (i.e., the {\it non-normalized} helicity $\langle H_u \rangle$ grows with the same growth rate as the kinetic energy), until it decays to zero when saturation is reached. In figure 2d we give an interpretation of the mean electromotive force $\cal{E}$ in the direction of the large-scale magnetic field in terms of the small-scale current helicity $\overline{ \bi{j} \cdot \bi{b}}$, according to the relation $\cal{E} \cdot \overline{\bi{B}}=-\overline{ \bi{j} \cdot \bi{b}}/\sigma+ \overline{\bi{e} \cdot \bi{b}}$ \cite{Bhattacharjee1986,Seehafer1994,Ji1999,Blackman2007,Ebrahimi2014}. We see that this relation is perfectly fulfilled, showing that $\alpha$ is essentially proportional to the current helicity, with a minor correction coming from an electric field term (note that all quantities are normalized here by $B_0 J_0/\sigma$). Figure 2e depicts separately the $\alpha$ effect, defined by $\alpha=\overline{({\bi u} \times \bi{b})} \cdot {\bi{B}}_0/B^2_0$.
Since $\alpha$ has the dimension of a velocity, we give it here in the form of a magnetic Reynolds number which includes again a complete spatial average: ${\langle \alpha\rangle}_n=\mu_0 \sigma R \langle ({\bi u} \times {\bi{b}}) \cdot {\bi{B}}_0 \rangle /B^2_0$. We observe initially an (exponential) increase, though to a very small value on the order of $10^{-10}$, and then a decay to zero. More monotonic than that of $\alpha$ is the time evolution of the $\beta=\overline{({\bi u} \times \bi{b})} \cdot {\bi{J}}_0/(\mu_0 J^2_0)$ effect (figure 2f), which we normalize here by the magnetic diffusivity $(\mu_0 \sigma)^{-1}$. This normalized and spatially averaged ${\langle \beta \rangle}_n= \mu_0 \sigma \langle ({\bi u} \times {\bi{b}}) \cdot {\bi{J}}_0 \rangle/(\mu_0 J^2_0)$ acquires values of about $6 \times 10^{-10}$, so its influence on the total resistivity can be considered as negligible. The induction effects of the mean-field coefficients $\alpha$ and $\beta$ are illustrated in figures 2g and 2h. Figure 2g shows the mean axial magnetic field $\langle b_z \rangle$ which is produced by the azimuthal current that is driven, in turn, by $\alpha$. Normalized to $B_0$, we see again that this induction is negligibly small. Note also that $\langle b_z \rangle$, in contrast to $\alpha$ (figure 2e), does not completely vanish in the saturation regime. We attribute this to numerical inaccuracies which seem to ``stray'' some energy from the much stronger $m=1$ mode into the $m=0$ and $m=2$ modes, an effect that is already visible in the non-physical parallelism of all modes at the beginning of the exponential growth regime (figure 2a). In contrast to this slight discrepancy, the behaviour of $\langle j_z \rangle/J_0$ (figure 2h) is nearly identical to that of $\beta$ (figure 2f). \begin{figure}[h] \centerline{ \includegraphics[width=0.9\columnwidth]{./fig3.eps}} \caption{Snapshots of different quantities for three different instants.
The contours correspond to 7\%, 3\% and 50\%, 20\%, and 10\% of the extremal values in the first, second, third and fourth row, respectively.} \label{fig:fig3} \end{figure} For three selected instants in time, figure \ref{fig:fig3} illustrates the spatial structure of various features of the TI (the normalized values of $u_z$, $b_z$, $(\bi{u} \times \bi {b})\cdot {\bi B}_0$, $(\bi{u} \times \bi {b})\cdot {\bi J}_0$). The left column depicts these quantities amidst the exponential growth phase, in which we observe a clear helical structure. The middle and right columns then show the respective structures shortly before and during the saturated regime. Evidently, the helicity has completely disappeared here. In terms of the mode structure, we notice that the left-handed spiral ($m=1$, say) and the right-handed spiral ($m=-1$, say) have grown to the same strength. Having seen that, due to the low values of $Pm$, neither $\alpha$ nor $\beta$ is able to induce any relevant change of the electromagnetic base state that would lead to saturation of the TI, we have to look for an alternative saturation mechanism. Evidently, this can only be related to a change of the hydrodynamic state, i.e. the flow field. Let us return to figure \ref{fig:fig2}a: after a long exponential growth period of the (more or less) pure $m=1$ mode, at $t_n=0.17$ the $m=0$ and $m=2$ modes start to grow due to the action of the nonlinear terms in the NSE. As shown in figure \ref{fig:fig4}, the $m=0$ part of the velocity in the saturated state comprises two poloidal vortices pointing outward in the equatorial plane (in the dynamo community this flow topology would be denoted by s2$^+$ \cite{Dudley1989}). This axisymmetric state, together with the $m=2$ component, changes the hydrodynamic base state for the TI so that it becomes just marginally stable.
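The energy contents of the individual azimuthal wavenumbers discussed above can be extracted by a discrete Fourier transform in the azimuthal angle. The following is a minimal sketch, assuming a field sampled on a uniform grid in $\phi$ (the function name and the normalization convention are ours):

```python
import numpy as np

def azimuthal_mode_energies(u_samples, m_max=2):
    """Energy content of azimuthal wavenumbers m = 0..m_max.

    u_samples : (..., Nphi) field component sampled on a uniform grid
                in the azimuthal angle; the last axis is phi.
    Returns an array E[m], summed over all remaining axes.
    """
    nphi = u_samples.shape[-1]
    u_hat = np.fft.rfft(u_samples, axis=-1) / nphi
    energies = []
    for m in range(m_max + 1):
        weight = 1.0 if m == 0 else 2.0   # +m and -m contributions
        energies.append(weight * np.sum(np.abs(u_hat[..., m]) ** 2))
    return np.array(energies)
```

Applied to each velocity (or magnetic field) component on every meridional grid point and summed, this yields the mode-resolved energies plotted in figures 2a and 2b.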
\begin{figure}[h] \centerline{ \includegraphics[width=0.9\columnwidth]{./fig4.eps}} \caption{The velocity field in the saturated state, including the three lowest azimuthal modes. } \label{fig:fig4} \end{figure} With the simultaneous appearance of an $m=0$ and an $m=2$ component, it is no surprise that the saturation is also connected with a restoration of the chiral symmetry. According to the sum rule for the nonlinear interaction, the $m=2$ mode will produce, from any dominant $m=1$ mode, a corresponding $m=-1$ mode, so that chiral symmetry is ultimately restored. \subsection{Saturation with helicity oscillations} We now increase the Hartmann number from $Ha=60$ to $Ha=100$. As before, figure \ref{fig:fig5} illustrates the time evolution of various quantities. While the behaviour of the Reynolds number and the $\beta$ effect is not significantly different from the previous case with $Ha=60$, the helicity and $\alpha$ look quite different. Evidently, the TI no longer saturates with zero helicity, but instead produces a helicity oscillation in the final state. Figure \ref{fig:fig6} shows again the spatial structure of various quantities in the exponential growth phase and at two different instants in the saturated state. In the plots for $(\bi{u} \times \bi {b})\cdot {\bi B}_0$, the helicity oscillation appears as a slightly changing asymmetry between positive (orange) and negative (blue) ``blobs''. \begin{figure}[h] \centerline{ \includegraphics[width=0.99\columnwidth]{./fig5.eps}} \caption{Same as figure \ref{fig:fig2}, but for $Pm=10^{-6}$ and $Ha=100$.} \label{fig:fig5} \end{figure} \begin{figure}[h] \centerline{ \includegraphics[width=0.9\columnwidth]{./fig6.eps}} \caption{Same as figure \ref{fig:fig3}, but for $Pm=10^{-6}$ and $Ha=100$.} \label{fig:fig6} \end{figure} For the range between $Ha=40$ and $Ha=140$, we characterize this helicity oscillation by its amplitude and frequency (figure \ref{fig:fig7}).
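Amplitude and frequency of such an oscillation can be extracted from the helicity time series in the saturated regime, e.g. via the rms value and the dominant peak of the discrete Fourier spectrum. A minimal sketch (assuming uniform sampling and that only the saturated part of the series is passed in; the function name is ours):

```python
import numpy as np

def oscillation_characteristics(t, h):
    """Amplitude and dominant frequency of an oscillating time series.

    t, h : 1-D arrays with uniform time sampling.
    Returns (amplitude, frequency).
    """
    h = h - np.mean(h)                    # oscillation about the mean
    amplitude = np.sqrt(2.0) * np.std(h)  # exact for a pure sinusoid
    dt = t[1] - t[0]
    spec = np.abs(np.fft.rfft(h))
    freqs = np.fft.rfftfreq(len(h), dt)
    frequency = freqs[np.argmax(spec[1:]) + 1]  # skip the zero mode
    return amplitude, frequency
```

For a noisy signal one would rather fit the spectral peak or use several windows, but for the clean oscillations found here the simple peak location is sufficient.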
\begin{figure}[h] \centerline{ \includegraphics[width=0.90\columnwidth]{./fig7.eps}} \caption{Characteristics of the helicity oscillations in dependence on $Ha$, for $Pm=10^{-6}$. (a) Frequency, also compared with the growth rate in the exponential growth phase. (b) Amplitude.} \label{fig:fig7} \end{figure} \subsection{Saturation with finite helicity} We now leave the realm in which the magnetic induction is irrelevant for saturation and move towards the parameter region which had already been explored by other codes. Actually, we increase the magnetic Prandtl number to $Pm=10^{-3}$ and consider the case $Ha=100$. This leads to a Lundquist number of $S=3.16$, for which we come close to the edge of applicability of the inductionless approximation \cite{Herreman2015}. \begin{figure}[h] \centerline{ \includegraphics[width=0.99\columnwidth]{./fig8.eps}} \caption{Same as figure \ref{fig:fig2}, but for $Pm=10^{-3}$ and $Ha=100$.} \label{fig:fig8} \end{figure} \begin{figure}[h] \centerline{ \includegraphics[width=0.9\columnwidth]{./fig9.eps}} \caption{Same as figure \ref{fig:fig3}, but for $Pm=10^{-3}$ and $Ha=100$, and different instants.} \label{fig:fig9} \end{figure} Again, figures \ref{fig:fig8} and \ref{fig:fig9} show the time evolution, and some snapshots, of various quantities. The main difference from the former runs at low $Pm$ is that now the helicity and $\alpha$ acquire non-zero values in the saturated state. This is also visible in the snapshots of figure \ref{fig:fig9}. Evidently, we are now in a regime where the induced magnetic fields contribute significantly to the saturation mechanism. This also implies that the $m=0$ component of the magnetic field (figure \ref{fig:fig8}b) now becomes comparable to the $m=2$ part, quite in contrast to the former cases with $Pm=10^{-6}$.
This is illustrated in figure \ref{fig:fig10}, which shows the dependence of the induced mean current $\langle j_z \rangle/J_0$ and the induced mean axial field $\langle b_z \rangle/B_0$ on $Pm$ (at fixed $Ha=100$). According to the Kruskal-Shafranov criterion \cite{Bergerson2006}, we know that $b_z$ tends to inhibit the TI, so that a finite value of $\alpha$ is indeed likely to appear in the saturation regime. \begin{figure}[h] \centerline{ \includegraphics[width=0.9\columnwidth]{./fig10.eps}} \caption{Dependence of $\langle b_z \rangle/B_0$ and $\langle j_z \rangle/J_0$ on the magnetic Prandtl number.} \label{fig:fig10} \end{figure} \subsection{Between chiral symmetry breaking and helicity oscillations} In the following, we will summarize the three different saturation mechanisms. We will be guided by the simple and instructive model of chiral symmetry breaking that had been worked out by Bonanno \etal \cite{Bonanno2012}. The authors started from left and right handed helical modes for the velocity and the magnetic field, fulfilling the Beltrami relations \begin{eqnarray} \nabla \times {\bi L}=\lambda {\bi L} \;\;\; \mbox{and} \;\;\; \nabla \times {\bi R}=-\lambda {\bi R} \end{eqnarray} which can be realized by appropriate linear combinations of Chandrasekhar-Kendall functions $J_m(r \sqrt{\lambda^2 +n^2\pi^2/H^2}) \cos(m\phi) \cos(nz\pi/H)$. Invoking some symmetry arguments, the authors ``guessed'' the simplest form of a Lagrangian, which then leads to the following evolution equations for the energies of the left and right handed helical modes \begin{eqnarray} \frac{d E_L}{d t}&=&2 \gamma E_L -4 \mu E_L^2-4\mu_{*} E_L E_R\\ \frac{d E_R}{d t}&=&2 \gamma E_R -4 \mu E_R^2-4\mu_{*} E_L E_R \; . \end{eqnarray} The last terms on the r.h.s.~of equations (6,7) describe the so-called mutual antagonism between the two chiralities that has been used extensively in the theory of homochirality of bio-molecules \cite{Frank1953,Brandenburg2005,Saito2013}.
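The mutual antagonism can be made explicit by integrating the two-mode system numerically. The following is a minimal sketch (explicit Euler with an arbitrarily chosen step size; the parameter values are those quoted for the phase portrait, the function name is ours): starting from a slightly asymmetric initial condition, the trajectory first grows exponentially at rate $2\gamma$ and then, since $\mu_{*}>\mu$, breaks the symmetry towards the one-handed state $E_L=\gamma/(2\mu)$, $E_R=0$ (or vice versa).

```python
def integrate_two_mode(EL0, ER0, gamma=2.71, mu=3.0, mu_star=5.7,
                       dt=1e-3, n_steps=20000):
    """Explicit-Euler integration of the two-mode amplitude equations
    dE_L/dt = 2*g*E_L - 4*mu*E_L**2 - 4*mu_star*E_L*E_R   (and L <-> R).
    Step size and step count are illustrative choices."""
    EL, ER = EL0, ER0
    for _ in range(n_steps):
        dEL = 2*gamma*EL - 4*mu*EL**2 - 4*mu_star*EL*ER
        dER = 2*gamma*ER - 4*mu*ER**2 - 4*mu_star*EL*ER
        EL, ER = EL + dt*dEL, ER + dt*dER
    return EL, ER
```

With $\mu_{*}<\mu$ the same code would instead converge to the mixed state $E_L=E_R$, i.e. to a final state without chiral symmetry breaking.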
From here, one arrives at the following evolution equations for the total energy $E=E_R+E_L$ and the helicity $H=E_R-E_L$: \begin{eqnarray} \frac{d E}{d t}&=&2 \gamma E -2(\mu+\mu_{*})E^2-2(\mu-\mu_{*})H^2\\ \frac{d H}{d t}&=&2 \gamma H -4\mu E H \; . \end{eqnarray} In figure \ref{fig:fig11} we illustrate the phase portrait of this equation system, showing a clear chiral symmetry breaking both in the exponential growth phase and in the saturated phase. \begin{figure}[hbt] \centerline{ \includegraphics[width=0.7\columnwidth]{./fig11.eps}} \caption{Phase portrait of the coupled equation system (8,9), for the parameters $\gamma=2.71$, $\mu=3.0$, $\mu_{*}=5.7$. In this typical situation, $S_1$ is a repeller, $S_2$ and $S_3$ are attractive points, while $S_4$ is a saddle point.} \label{fig:fig11} \end{figure} We return now to our three cases, for which we plot the time evolution (figure \ref{fig:fig12}, left column) as well as the phase portraits of the kinetic helicity (middle column) and of the current helicity (right column). To make contact with figure \ref{fig:fig11}, we have now chosen a different normalization so that both helicities start at zero. \begin{figure}[hbt] \centerline{ \includegraphics[width=0.9\columnwidth]{./fig12.eps}} \caption{Time evolution (left) and phase portraits of the kinetic helicity (middle) and the current helicity (right), for four different cases. Note that the viscous time scale is indeed the relevant one both for the growth process and for the helicity oscillations.} \label{fig:fig12} \end{figure} We start, in the first row of figure \ref{fig:fig12}, with $Pm=10^{-6}$ and $Ha=60$. Evidently, the exponential growth phase looks very similar to that in figure \ref{fig:fig11}, but ultimately the system runs into a state with zero helicity. The second case with $Pm=10^{-6}$, $Ha=100$ is similar but terminates with a helicity oscillation around zero.
It is remarkable that this helicity oscillation proceeds without any significant oscillation of the energy. The third case, $Pm=10^{-3}$, $Ha=100$, has indeed some resemblance to the above model of Bonanno \etal and terminates with a finite, non-zero helicity. In the fourth row, we also add the plot for $Pm=10^{-2}$ and $Ha=100$, which amounts to $S=10$; this is beyond the simple applicability of the quasistatic approximation and should therefore be considered with caution. Evidently, the magnetic helicity now shows a phase portrait that is similar to figure \ref{fig:fig11}, despite the fact that we now observe an ``overshoot'' of the helicity to the other sign, and also a remaining oscillation in the saturated state. While we do not claim that this is a reliable result, the two last rows of figure \ref{fig:fig12} at least suggest that we are now approaching the usual saturation scheme as already discussed in \cite{Bonanno2012}. \section{Conclusions} In this paper, we have utilized an integro-differential equation solver for addressing the problem of chiral symmetry breaking in the exponential growth phase and in the saturation phase of the Tayler instability. The advantage of this code is its easy applicability for small magnetic Prandtl numbers, while its suitability for problems at $S>1$ is at least questionable (at least it has to be checked case by case whether a final steady state with finite and non-oscillatory, i.e. static, helicity can still be treated with our quasistatic scheme). Our simulations have allowed us to identify three different saturation regimes. To start with the last regime, for a comparatively large value of $S=3.16$ we have confirmed a similar type of chiral symmetry breaking as was previously evidenced by Gellert \etal \cite{Gellert2011} and Bonanno \etal \cite{Bonanno2012}. Depending on the random initial conditions, the TI grows with one of the two possible helicities, which does not disappear in the saturated regime.
The helicity is intrinsically connected with a non-zero $\alpha$ effect that generates a current parallel to the applied azimuthal magnetic field. At the same time, the mean-field e.m.f. also contains a significant $\beta$ effect that changes the axial current. Both effects together work against the TI. In the ultimate case of high $S$ (which is, probably, not accessible by our code) one could expect a sort of Taylor relaxation into a helicity-maximizing state \cite{Taylor1986}. Whether for those large values of $S$ one reaches a regime of helicity oscillation around a finite value (as suggested by the fourth row of figure \ref{fig:fig12}) is still to be validated by complementary codes. The saturation mechanism, which relies on changing (by mean-field induction effects) the electromagnetic base state in such a way that it becomes just marginally stable against the TI, does not apply for $S\ll 1$. This can already be seen from the general scaling $Re\propto Ha^2$, or equivalently $Rm\propto S^2$, which applies both to $S<1$ and $S>1$ \cite{Weber2013}. For $S\ll 1$ the final $Rm$ becomes much too small to induce any significant changes of the originally applied magnetic field. In this parameter region, the saturation relies instead on the non-linear appearance of an $m=0$ and an $m=2$ velocity component, which changes the {\it hydrodynamic} base state of the TI in such a way that the growth rate of the TI vanishes. Perhaps the most interesting result of our study is the observation of a robust and systematic helicity oscillation whose amplitude and frequency dependence on $Ha$ has been worked out. Interestingly, this helicity oscillation is not connected with any significant energy oscillations. Based on the latter observation, we would like to conclude with some, admittedly, very speculative ideas. There is a long tradition of trying to link the various frequencies of solar dynamo action to corresponding periodicities of planetary motion.
Tracing back to a paper by Jose \cite{Jose1965}, who had related the 11.86-year Jupiter orbit to the 22.08-year solar cycle, some refinements of this connection in terms of a combined torque and gravity action of Earth, Venus and Jupiter have been discussed recently \cite{Wilson2013}. Other papers have tried to link periodicities of the Jupiter-Saturn orbits to various longer-time cycles of the solar dynamo \cite{Scafetta2010,Abreu2012}, with possible connections to the climate on Earth. However, in all those cases, it was noticed that the planetary forces are much too weak to compete with the typical acceleration forces in the convection zone. The only viable way of influencing the solar dynamo was speculated to rely on the action of gravity on the shape, or local rotation rate, of the tachocline. Yet, this would imply that the solar dynamo works indeed as some sort of Tayler-Spruit dynamo \cite{Spruit2002}, in which the transformation from poloidal to toroidal field is traditionally realised by differential rotation, while the reverse mechanism relies on some $\alpha$ effect due to the TI. It is exactly here where helicity oscillations, and their possible synchronization with planetary forces and torques, might come into play. In particular, since the oscillations of $\alpha$ are not connected to any significant changes of the energy content, very minor changes of the state of the tachocline might just open the ``$\alpha$-bottleneck'' for the Tayler-Spruit dynamo (which is, in any case, still a sort of $\alpha-\Omega$-like dynamo). Even if the $\alpha$ oscillations are around some non-zero mean values prevailing in the two solar hemispheres, they still might give rise to dynamo oscillations. Note that a parametric resonance and synchronization of $m=1$ dynamo eigenmodes with $m=2$ velocity perturbations has been observed both for galactic dynamos (swing excitation, \cite{Schmitt1992}) and for a VKS-like dynamo \cite{Giesecke2012}.
Whether a similar effect may actually be at work for synchronizing the solar dynamo with periodic planetary forces via their action on the tachoclinic state will remain a topic for future investigations. \section*{Acknowledgment} This work was supported by the Helmholtz-Gemeinschaft Deutscher Forschungszentren (HGF) in the frame of the ``Initiative f\"ur mobile und station\"are Energiespeichersysteme'' and of the Helmholtz Alliance LIMTECH, as well as by the Deutsche Forschungsgemeinschaft in the frame of the SPP 1488 (PlanetMag). We thank Pascal Beckstein for his support in increasing the performance of our numerical code. We gratefully acknowledge fruitful discussions with Rainer Arlt, Alfio Bonanno, Axel Brandenburg, Marcus Gellert, Wietze Herreman, Rainer Hollerbach, Caroline Nore, J\=anis Priede, G\"unther R\"udiger and Martin Seilmayer on several aspects of the Tayler instability. \section*{References} \providecommand{\newblock}{}
\section{Introduction} In this paper, we will consider initial value problems \begin{equation} \label{eq:ODE} \begin{split} \dot{x}(t) &= f(t, x(t)), \\ x(t_0) &= x_0, \end{split} \end{equation} with $ t \in \mathbb{I} \subseteq \mathbb{R} $ and $ f : \mathbb{I} \times \mathbb{D} \rightarrow \mathbb{R}^n $, $ \mathbb{D} \subseteq \mathbb{R}^n $. A fundamental class of numerical solvers is given by one-step methods of the form \begin{equation} x^{m+1} = x^m + h \, \Phi(t^m, x^m, h), \end{equation} where $ \Phi $ is referred to as the \emph{increment function}. Important examples of one-step methods are Runge--Kutta methods. The increment function of a general $ s $-stage Runge--Kutta method is given by \begin{subequations} \label{eq:RKODE} \begin{equation} \Phi(t^m, x^m, h) = \sum_{q=1}^s b_q k_q, \end{equation} where \begin{equation} k_q = f\big(t^m + c_q h, x^m + h \sum_{r=1}^s a_{qr} k_r\big). \end{equation} \end{subequations} The coefficients $ a_{qr} $, $ b_q $, and $ c_q $ are often arranged in the form of the so-called \emph{Butcher tableau} \begin{equation} \begin{array}{c|c} c & A \\ \hline \\[-1.8ex] & b^T \end{array} \qquad \coloneqq \qquad \begin{array}{c|cccc} c_1 & a_{11} & a_{12} & \dots & a_{1s} \\ c_2 & a_{21} & a_{22} & \dots & a_{2s} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ c_s & a_{s1} & a_{s2} & \dots & a_{ss} \\ \hline & b_1 & b_2 & \dots & b_s \end{array}. \end{equation} If the matrix $ A $ is strictly lower triangular, then the Runge--Kutta method is called explicit. Otherwise, the method is said to be implicit. \section{Time-driven ordinary differential equations} Without loss of generality, the ordinary differential equation \eqref{eq:ODE} can be rewritten as \begin{equation} \twovec{x_E}{\dot{x}_I} = \twovec{f_E(t)}{f_I(x_E, x_I)}, \end{equation} with external variables $ x_E \in \mathbb{R}^{n_E} $ and internal variables $ x_I \in \mathbb{R}^{n_I} $.
That is, we split the system into two subsystems and introduce additional variables which can be explicitly written as a function of the time $ t $. The dimension of the input vector $ x_E $ depends on the number of different time-dependent terms, the dimension of the internal vector $ x_I $ is equal to the number of equations of the original system. We introduce this partitioning to measure the influence of the input signals on the internal variables and to generate a model of the signal flow. From now on, for the sake of simplicity, we will write the system---to which we will refer as a \emph{time-driven ordinary differential equation}---as \begin{equation} \label{eq:TDODE} \twovec{x_E}{\dot{x}_I} = f(t, x), \text{ with } x = \twovec{x_E}{x_I} \text{ and } f = \twovec{f_E}{f_I}. \end{equation} Thus, $ x_{E, i} = x_i $ and $ x_{I, i} = x_{n_E + i} $. Let $ n = n_E + n_I $ denote the size of the whole system again. For a time-driven ordinary differential equation, a one-step method is of the form \begin{equation} \twovec{x_E^{m+1}}{x_I^{m+1}} = \twovec{x_E^m}{x_I^m} + \twovec{\Delta x_E^m}{\Delta x_I^m}, \end{equation} with \begin{equation} \label{eq:TDODE_update} \begin{split} \Delta x_E^m &= f_E(t^{m+1}) - f_E(t^m), \\ \Delta x_I^m &= h \, \Phi(t^m, x^m, h). \end{split} \end{equation} The increment function of a Runge--Kutta method can now be rewritten as \begin{subequations} \label{eq:RKTDODE} \begin{equation} \Phi(t^m, x^m, h) = \sum_{q=1}^s b_q k_I^q, \end{equation} where \begin{equation} \begin{split} k_E^q &= f_E(t^m + c_q h), \\ k_I^q &= f_I\big(k_E^q, x_I^m + h \sum_{r=1}^s a_{qr} k_I^r\big). \end{split} \end{equation} \end{subequations} \section{Dependency graph} \label{sec:Dependency graph} Given a time-driven ordinary differential equation, we want to analyze how changes of the input variables $ x_E $ affect the internal variables $ x_I $ and how the signals propagate through the system. 
To this end, we derive a directed graph which represents the structure of the system. Define $ \indices{n} = \{ 1, \dots, n \} $ to be the set of indices. Since in general the functions $ f_i $, $ i \in \indices{n} $, do not depend on all variables $ x_j $, $ j \in \indices{n} $, we introduce input and output sets for each variable to describe the dependency on other variables. \begin{definition}[Input and output sets] Define the \emph{input set} of $ x_i $, $ i \in \indices{n} $, to be \begin{equation} \pre{x_i} = \left\{ x_j \; \bigg| \; \pd{f_i}{x_j} \not\equiv 0, \, j \in \indices{n} \right \}. \end{equation} Analogously, define the \emph{output set} to be \begin{equation} \post{x_i} = \left\{ x_j \; \bigg| \; \pd{f_j}{x_i} \not\equiv 0, \, j \in \indices{n} \right \}. \end{equation} \end{definition} That is, the variable $ x_i $ depends on $ x_j $ if the value of $ x_j $ is required for the evaluation of $ f_i $. The input and output sets induce a directed graph with the vertices being the variables and the edges being the dependency relations between the variables. \begin{definition}[Dependency graph] For a given time-driven ordinary differential equation, define the \emph{dependency graph} by $ \mf{G}_d(f) = (\mf{V}_d, \mf{E}_d) $, with $ \mf{V}_d = \{ \mf{v}_1, \dots, \mf{v}_n \} $ and $ \mf{E}_d = \{ (\mf{v}_i, \mf{v}_j) \mid x_i \in \pre{x_j}, \; i, j \in \indices{n} \} $. \end{definition} If it is clear which differential equation is meant, we will simply write $ \mf{G}_d $. The dependency graph of large-scale dynamical networks can be very sparse since the subsystems are often strongly coupled inside but only connected to a few other subsystems of the network. 
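The input and output sets, and hence the edge set of the dependency graph, can be generated directly from the sparsity pattern of the Jacobian $ \pd{f}{x} $. A minimal sketch (0-based indices; the function name is ours):

```python
def dependency_graph(jacobian_pattern):
    """Input/output sets and edge list from a Jacobian sparsity pattern.

    jacobian_pattern[i][j] is truthy iff  d f_i / d x_j  is not
    identically zero.  An edge (i, j) means x_i is an input of x_j.
    """
    n = len(jacobian_pattern)
    pre = {i: {j for j in range(n) if jacobian_pattern[i][j]}
           for i in range(n)}
    post = {i: {j for j in range(n) if jacobian_pattern[j][i]}
            for i in range(n)}
    edges = {(i, j) for j in range(n) for i in pre[j]}
    return pre, post, edges
```

For the sparsity pattern of a linear system $ \dot{x} = Ax $, this reproduces the observation that the adjacency matrix of the dependency graph is the transposed system matrix $ A^T $.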
\begin{example} \hspace*{\fill} \begin{enumerate} \item Consider the linear differential equation \begin{equation*} \ddddot{x}(t) = \dddot{x}(t) + \dot{x}(t), \end{equation*} which is equivalent to the first-order system \begin{equation*} \begin{bmatrix} \dot{x}_1(t) \\ \dot{x}_2(t) \\ \dot{x}_3(t) \\ \dot{x}_4(t) \end{bmatrix} = \underbrace{ \begin{bmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 1 & 0 & 1 \end{bmatrix} }_{\displaystyle A} \begin{bmatrix} x_1(t) \\ x_2(t) \\ x_3(t) \\ x_4(t) \end{bmatrix}. \end{equation*} The input and output sets are \begin{equation*} \begin{array}{ll} \pre{x_1} = \{ x_2 \}, & \post{x_1} = \varnothing, \\ \pre{x_2} = \{ x_3 \}, & \post{x_2} = \{ x_1, x_4 \}, \\ \pre{x_3} = \{ x_4 \}, & \post{x_3} = \{ x_2 \}, \\ \pre{x_4} = \{ x_2, x_4 \}, & \post{x_4} = \{ x_3, x_4 \}. \end{array} \end{equation*} The differential equation is an equation of order three in $ \dot{x}(t) $. This can also be seen in the dependency graph, which is shown in Figure~\ref{fig:LinearSystemDependency}, since $ x_1 $ depends only on $ x_2 $ and can be obtained by integration. Moreover, the transposed system matrix $ A^T $ is the adjacency matrix of $ \mf{G}_d $, i.e.\ $ \mf{G}_d = \mf{G}(A^T) $. \begin{figure}[htb] \centering \includegraphics[width=0.2\textwidth]{LinearSystemDependency} \caption{Dependency graph $ \mf{G}_d $ of the linear system.} \label{fig:LinearSystemDependency} \end{figure} \item Given the inverter chain of length $ N $ shown in Figure~\ref{fig:Inverterchain}, the corresponding circuit equations can be written as a time-driven ordinary differential equation with \begin{equation*} f(t, v) = \left[ \begin{array}{c} 0 \\ \mathrm{V_{dd}} \\ V_s(t) \\ \hline g(v_1, v_2, v_3, v_4) \\ g(v_1, v_2, v_4, v_5) \\ \vdots \\ g(v_1, v_2, v_{N+2}, v_{N+3}) \end{array} \right]. \end{equation*} Here, $ n_E = 3 $ and $ n_I = N $. 
The function $ g $ consists of the characteristic equations of the modules connected to the individual nodes and can be written as \begin{equation*} g(v_1, v_2, v_{i-1}, v_i) = -\frac{1}{C_i}\big(\imath_{ds, n}(v_i, v_{i-1}, v_1) + \imath_{ds, p}(v_i, v_{i-1}, v_2)\big). \end{equation*} We use the Shichman--Hodges model \cite{SH68} to describe the drain-source current $ \imath_{ds} $ of the pMOS and nMOS transistors. \begin{figure}[htb] \centering \includegraphics[width=0.7\textwidth]{Inverterchain} \caption{Inverter chain of length $ N $.} \label{fig:Inverterchain} \end{figure} Although the ground voltage and the positive supply voltage $ \mathrm{V_{dd}} $ are constant over time, we introduce additional variables since this assignment leads to a natural correlation between the nodes $ \mf{n}_i $ and the vertices $ \mf{v}_i $. In addition, it allows for a straightforward graph-based approach to generate the system of equations and the dependency graph. The Jacobian $ \pd{f}{v} $ exhibits the following structure: \begin{equation*} \pd{f}{v} = \left[ \begin{array}{ccc|ccccc} & & & & & & & \\ & & & & & & & \\ & & & & & & & \\ \hline * & * & * & * & & & & \\ * & * & & * & * & & & \\ * & * & & & * & * & & \\ \vdots & \vdots & & & & \ddots & \ddots & \\ * & * & & & & & * & * \end{array} \right], \end{equation*} where empty places denote partial derivatives identical to zero. Figure~\ref{fig:InverterchainDependency} shows the dependency graph of the inverter chain. Since the constant voltages $ v_1 $ and $ v_2 $ have no influence on the dynamic signal flow, the corresponding vertices and associated edges have been omitted for reasons of visualization. \begin{figure}[htb] \centering \includegraphics[width=0.5\textwidth]{InverterchainDependency} \caption{Dependency graph $ \mf{G}_d $ of the inverter chain.} \label{fig:InverterchainDependency} \end{figure} \end{enumerate} \end{example} In the following, we often identify $ x_i $ with $ \mf{v}_i $.
Each internal vertex of the dependency graph represents a one-dimensional ordinary differential equation that is coupled to other one-dimensional systems. Generally speaking, a time-driven ordinary differential equation together with its dependency graph can be regarded as a coupled cell system~\cite{GPS04, GS03} with additional time-dependent inputs. \section{Signal-flow based Runge--Kutta methods} During the simulation of large, loosely coupled networks, different subsystems often exhibit different rates of activity. That is, the values in some parts of the network change rapidly, while in other parts the values change very slowly or do not change at all. The active regions usually vary over time so that a previously inactive region undergoes quick changes and vice versa. Consider for example the inverter chain. If we apply an input signal, then, generally speaking, this input signal is reversed repeatedly with a small time delay so that it seems to flow continuously through the circuit. The step size control of standard integration schemes depends mainly on the fastest-changing variables. As a result, even the inactive signals have to be recomputed at every time step unless multirate integration schemes or other techniques to exploit the latency are used. We will propose an integration scheme which utilizes the underlying structure of the system. With the definitions in Section~\ref{sec:Dependency graph}, it is possible to determine which values of $ x^m $ are necessary to compute the new values of $ x^{m+1} $, namely, for the update of $ x_i^m $ to $ x_i^{m+1} $, all values of the variables of the input set $ \pre{x_i} $ are required. Since the external variables $ x_{E, i} $, $ i \in \indices{n_E} $, depend only on the time $ t $, the input sets are empty, i.e.\ $ \pre{x_{E, i}} = \varnothing $. The update of the internal values $ x_{I, i} $, $ i \in \indices{n_I} $, requires the evaluation of $ f_{I, i} $ and thus the values of $ \pre{x_{I, i}} $.
To identify latent regions, we have to distinguish between the different vertex types. \begin{definition}[Semi-latency] Let $ t^m $ be the current time point and $ t^{m-1} $ the previous time point. \begin{enumerate} \item An external variable $ x_{E, i} $, $ i \in \indices{n_E} $, is said to be \emph{semi-latent} at $ t^m $ if \begin{equation} f_{E, i}(t^m + c_q h) = f_{E, i}(t^{m-1} + c_q h) \end{equation} for all $ q = 1, \dots, s $. \item An internal variable $ x_{I, i} $, $ i \in \indices{n_I} $, is defined to be \emph{semi-latent} if \begin{equation} \Phi_i(t^{m-1}, x^{m-1}, h) = 0. \end{equation} \end{enumerate} \end{definition} The definition implies that $ x_{I, i}^m = x_{I, i}^{m-1} $ for all semi-latent internal variables. Whether a vertex is semi-latent at a specific time point is not known until all the values have been evaluated, but since our aim is to reduce the number of function evaluations, we want to mark vertices which need not be recomputed. Therefore, we introduce an additional concept. \begin{definition}[Latency] A variable $ x_i $, $ i \in \indices{n} $, is called \emph{latent of order} $ 1 $ if $ x_i $ and all variables of the set $ \pre{x_i} $ are semi-latent. Additionally, a latent variable $ x_i $ is defined to be \emph{latent of order} $ \nu $ if all variables in $ \pre{x_i} $ are at least latent of order $ \nu-1 $. \end{definition} Let $ \varepsilon $ be a user-defined error tolerance. For numerical computations, the semi-latency conditions are replaced by $ \abs{\Delta x_{E, i}^{m-1}} < \varepsilon $ and $ \abs{\Delta x_{I, i}^{m-1}} < \varepsilon $, respectively. In order to illustrate the different states of activity, we simulate the inverter chain. \begin{example} \label{ex:InverterchainSimulation} If the inverter chain is excited with a given input signal, then this signal flows---reversed at each inverter---through the circuit, as described above. 
Figure~\ref{fig:InverterchainSimulation} shows the voltages and activity states that result when the circuit is excited with the displayed piecewise linear function. For better visualization, the respective activity states of the vertices are slightly shifted upward. Clearly, only a few vertices are active at each time point and these active regions flow through the dependency graph. \begin{figure}[htbp] \begin{center} \subfiguretitle{a)} \includegraphics[width=0.9\textwidth]{InverterchainTrajectories} \\ \vspace*{4mm} \subfiguretitle{b)} \includegraphics[width=0.9\textwidth]{InverterchainJacobian} \\ \vspace*{4mm} \subfiguretitle{c)} \includegraphics[width=0.9\textwidth]{InverterchainActivity_1} \\ \vspace*{1mm} \includegraphics[width=0.9\textwidth]{InverterchainActivity_2} \\ \vspace*{1mm} \includegraphics[width=0.9\textwidth]{InverterchainActivity_3} \\ \vspace*{1mm} \includegraphics[width=0.9\textwidth]{InverterchainActivity_4} \\ \caption[Excitation of the inverter chain with a piecewise linear function.] {Excitation of the inverter chain with a piecewise linear function. a) The dotted trajectories show the input function and the voltages at intermediate vertices, the thin horizontal lines in the corresponding color the activity state. Here, $ 0 $ denotes active, $ 1 $ semi-latent, and $ 2 $ latent, respectively. b) Structure of $ \pd{f_I}{x_I} $ and $ \dot{x}_I $ at time 1, 2, 3, and 4 for a threshold of $ 10^{-4} $. c) Activity states at time 1, 2, 3, and 4, where red vertices represent active, yellow vertices semi-latent, and green vertices latent regions.} \label{fig:InverterchainSimulation} \end{center} \end{figure} \end{example} The example shows that the vertices are latent during the major part of the simulation, but each vertex at a different time. Below, we will propose modified Runge--Kutta methods for time-driven ordinary differential equations which take into account the dependency graph and the signal flow of the underlying system.
The aim is to reduce the number of function evaluations without a significant loss of accuracy by exploiting the inherent latency. Since for some applications the function evaluations are time-consuming, whereas the update of the dependency graph can be accomplished in linear time, this approach can considerably speed up the simulation. \subsection{Explicit Runge--Kutta methods} For the computation of the vectors $ k_E^q $ and $ k_I^q $, $ q = 1, \dots, s $, in \eqref{eq:RKTDODE}, it is necessary to evaluate the functions $ f_E $ and $ f_I $, respectively. The functions $ f_{I,i} $, $ i \in \indices{n_I} $, have to be recomputed if at least one of the variables of the input set $ \pre{x_{I, i}} $ is active or semi-latent. If $ x_{I, i} $ is latent of a certain order, then we can reuse the previous value. \begin{definition}[Signal-flow based Runge--Kutta method] Given a time-driven ordinary differential equation, a \emph{signal-flow based Runge--Kutta method} is defined by \begin{equation} \begin{split} x_E^{m+1} &= x_E^m + \Delta x_E^m, \\ x_{I, i}^{m+1} &= \begin{cases} x_{I, i}^m, & \text{if } x_{I, i} \text{ is latent of order } s, \\ x_{I, i}^m + \Delta x_{I, i}^m, & \text{otherwise}, \end{cases} \end{split} \end{equation} for all $ i \in \indices{n_I} $. Here, $ s $ is again the number of stages. The vectors $ \Delta x_E^m $ and $ \Delta x_I^m $ are as defined in \eqref{eq:TDODE_update}. \end{definition} Provided that we use exact computation, the following theorem holds. \begin{theorem}\label{th:ERK=sfERK} The explicit Runge--Kutta methods and the corresponding signal-flow based methods are equivalent. \end{theorem} \begin{proof} \allowdisplaybreaks In the proof, we add the superscript $ m $ or $ m-1 $ to the stages to differentiate between the different time points.
Let $ x_{I, i} $ be latent at $ t^m $, i.e.\ $ \Phi_i(t^{m-1}, x^{m-1}, h) = 0 $ and \begin{align*} f_{E, j}(t^m + c_q h) = f_{E, j}(t^{m-1} + c_q h) & \; \Rightarrow \; k_{E, j}^{m, q} = k_{E, j}^{m-1, q} \quad \forall x_{E, j} \in \pre{x_{I, i}}, \\ \Phi_j(t^{m-1}, x^{m-1}, h) = 0 & \; \Rightarrow \; x_{I, j}^m = x_{I, j}^{m-1} \quad \forall x_{I, j} \in \pre{x_{I, i}}. \end{align*} For $ q = 1 $, we have $ c_1 = 0 $ and thus \begin{equation*} k_{I, i}^{m, 1} = f_{I, i}(x_E^m, x_I^m) = f_{I, i}(x_E^{m-1}, x_I^{m-1}) = k_{I, i}^{m-1, 1} \end{equation*} since $ f_{I, i} $ depends only on the values of the input set $ \pre{x_{I, i}} $ and these values are the same as in the previous time step by definition. Now, assume that $ x_{I, i} $ is latent of order $ 2 $, i.e.\ all inputs of $ x_{I, i} $ are at least latent of order $ 1 $. It follows that \begin{equation*} \begin{split} k_{I, i}^{m, 2} &= f_{I, i}(k_E^{m, 2}, x_I^m + h \, a_{21} k_I^{m, 1}) \\ &= f_{I, i}(k_E^{m-1, 2}, x_I^{m-1} + h \, a_{21} k_I^{m-1, 1}) = k_{I, i}^{m-1, 2} \end{split} \end{equation*} using the same reasoning again. Furthermore, by induction it can be shown that \begin{equation*} \begin{split} k_{I, i}^{m, q} &= f_{I, i}\big(k_E^{m, q}, x_I^m + h \sum_{r=1}^{q-1} a_{qr} k_I^{m, r}\big) \\ &= f_{I, i}\big(k_E^{m-1, q}, x_I^{m-1} + h \sum_{r=1}^{q-1} a_{qr} k_I^{m-1, r}\big) = k_{I, i}^{m-1, q} \end{split} \end{equation*} if $ x_{I, i} $ is latent of order $ q $ and \begin{equation*} \begin{split} x_{I, i}^{m+1} &= x_{I, i}^m + h \, \Phi_i(t^m, x^m, h) \\ &= x_{I, i}^m + h \sum_{q=1}^s b_q k_{I, i}^{m, q} \\ &= x_{I, i}^{m-1} + h \sum_{q=1}^s b_q k_{I, i}^{m-1, q} \\ &= x_{I, i}^{m-1} + h \, \Phi_i(t^{m-1}, x^{m-1}, h) = x_{I, i}^m \end{split} \end{equation*} if $ x_{I, i} $ is latent of order $ s $. \end{proof} For numerical computations, we do not update a variable if it is latent of order at least one, assuming that the influence of longer paths is negligibly small.
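As an illustration, the following sketch combines the propagation of latency orders over the dependency graph with the signal-flow based RK4 update rule. All names are ours, and the sketch only masks the update; a full implementation would additionally skip the evaluation of the corresponding components of $ f $:

```python
import numpy as np

def latency_orders(pre, semi_latent, s):
    """Propagate latency orders over the dependency graph: a variable is
    latent of order 1 if it and all its inputs are semi-latent, and latent
    of order nu if, in addition, all inputs are at least of order nu-1."""
    order = {v: 1 if semi_latent[v] and all(semi_latent[u] for u in pre[v])
             else 0 for v in pre}
    for nu in range(2, s + 1):
        snapshot = dict(order)  # compare against the previous pass only
        for v in pre:
            if snapshot[v] >= 1 and all(snapshot[u] >= nu - 1 for u in pre[v]):
                order[v] = nu
    return order

def sfrk4_step(f, t, x, h, order, s=4):
    """One classical RK4 step in which components that are latent of order s
    simply keep their previous value (the signal-flow based update rule)."""
    k1 = f(t, x)
    k2 = f(t + h / 2, x + h / 2 * k1)
    k3 = f(t + h / 2, x + h / 2 * k2)
    k4 = f(t + h, x + h * k3)
    increment = h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    x_new = x.copy()
    active = order < s                   # mask of non-latent components
    x_new[active] += increment[active]   # latent components are not updated
    return x_new
```

For a chain-shaped dependency graph, a single non-semi-latent vertex keeps all downstream vertices at latency order zero, which matches the flow of active regions seen in the inverter chain.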
In the following, we will abbreviate the standard classical fourth-order Runge--Kutta method as RK and the corresponding signal-flow based method as sfRK. \begin{example} \label{ex:Inverterchain_sfRK} Consider once again the inverter chain, which is a popular benchmark problem for multirate integration schemes. To analyze the efficiency of the signal-flow based standard Runge--Kutta method, we simulate the inverter chain of length $ N = 100 $ with function evaluations of varying computational cost and different rates of inherent latency. To vary the amount of latency, we apply periodic input functions with different delays between two adjacent pulse signals, as shown in Figure~\ref{fig:InverterchainPwlInput}. The complexity of the transistor model is increased by artificially adding terms which do not affect the solution of the system. \begin{figure}[htb] \centering \includegraphics[width=0.5\textwidth]{InverterchainPwlInput} \caption{Piecewise linear input function with varying delay $ \Delta T $ to emulate latency.} \label{fig:InverterchainPwlInput} \end{figure} The runtimes of the simulation with both the standard Runge--Kutta method and the corresponding signal-flow based method for varying model complexities and input functions are shown in Figure~\ref{fig:Inverterchain_sfRK}. Here, the time interval is $ \mathbb{I} = [0, 40] $, the step size $ h = \frac{1}{100} $, and the latency parameter $ \varepsilon = 10^{-6} $. While the runtime of RK does not depend on the inherent latency, the runtime of sfRK decreases with increasing latency. Furthermore, the more complex the transistor model, the greater the speedup of the signal-flow based integration scheme due to the reduced number of function evaluations. Table~\ref{tab:Inverterchain_sfRK} contains the number of transistor model evaluations for different values of $ \Delta T $.
The influence of $ \varepsilon $ on the speedup of sfRK and the average difference per step between RK and sfRK for a fixed delay $ \Delta T = 10 $ are shown in Figure~\ref{fig:InverterchainEpsilon_RK}. \begin{figure}[htbp] \begin{center} \begin{minipage}[c]{0.45\textwidth} \centering \subfiguretitle{RK} \vspace*{0.35em} \includegraphics[width=\textwidth]{InverterchainTime_RK} \end{minipage} \begin{minipage}[c]{0.45\textwidth} \centering \subfiguretitle{sfRK} \vspace*{0.35em} \includegraphics[width=\textwidth]{InverterchainTime_sfRK} \end{minipage} \\ \vspace*{3mm} \begin{minipage}[c]{0.45\textwidth} \centering \subfiguretitle{RK vs. sfRK} \vspace*{0.35em} \includegraphics[width=\textwidth]{Inverterchain_RK_sfRK} \end{minipage} \end{center} \caption{Influence of the complexity and latency on the runtime of RK and sfRK.} \label{fig:Inverterchain_sfRK} \end{figure} \begin{table}[htb] \caption{Number of transistor model evaluations of RK and sfRK.} \newcommand{\mc}[1]{\multicolumn{1}{|c|}{#1}} \newcommand{\ts}{\mspace{2mu}} \footnotesize \centering \begin{tabular}{|r*{5}{|r}|} \hline \mc{$ \Delta T $} & \mc{0} & \mc{5} & \mc{10} & \mc{15} & \mc{20} \\ \hline \hline RK & $ 3\ts200\ts000 $ & $ 3\ts200\ts000 $ & $ 3\ts200\ts000 $ & $ 3\ts200\ts000 $ & $ 3\ts200\ts000 $ \\ sfRK & $ 2\ts317\ts152 $ & $ 1\ts046\ts664 $ & $ 649\ts976 $ & $ 479\ts360 $ & $ 413\ts024 $ \\ \hline \end{tabular} \label{tab:Inverterchain_sfRK} \end{table} We can reduce the number of function evaluations even for $ \Delta T = 0 $ since at the beginning of the simulation the circuit is in a steady state and it takes a short time until the input signal reaches the last inverter. During that time, parts of the circuit are inactive and need not be evaluated. 
\begin{figure}[htb] \begin{center} \begin{minipage}[c]{0.45\textwidth} \centering \subfiguretitle{Speedup} \vspace*{0.35em} \includegraphics[width=\textwidth]{InverterchainEpsilon_sfRK1} \end{minipage} \begin{minipage}[c]{0.45\textwidth} \centering \subfiguretitle{Deviation} \vspace*{0.35em} \includegraphics[width=\textwidth]{InverterchainEpsilon_sfRK2} \end{minipage} \end{center} \caption{Speedup and deviation of sfRK as a function of $ \varepsilon $.} \label{fig:InverterchainEpsilon_RK} \end{figure} Note that the deviation does not depend on the complexity since only artificial terms were introduced to model different complexities of the transistor model. \end{example} \subsection{Implicit Runge--Kutta methods} The stages of implicit Runge--Kutta methods cannot be evaluated successively. At each time point, a system of nonlinear equations has to be solved. To solve these systems with the Newton--Raphson method, the Jacobian $ \pd{f_I}{x_I} $ has to be computed. For the transient analysis of integrated circuits, this can be accomplished efficiently using so-called element stamps~\cite{GFtM05}. Every time the right-hand side $ f_I $ is evaluated, the Jacobian $ \pd{f_I}{x_I}$---if needed---is generated simultaneously. However, only the nonlinear equations that correspond to active regions will be solved, assuming that the influence of and on the latent regions is negligibly small. Furthermore, it is then only necessary to compute and factorize the fraction of the Jacobian which represents the active part. That is, we can exploit the latency also on the level of the nonlinear and linear systems of equations. In our implementation, a variable is not updated if it is at least latent of order one; the influence of longer paths is again neglected. In the following, we will consider in particular the trapezoidal rule, which is frequently used for the simulation of integrated circuits.
Since the second version of \textsc{Spice}, most circuit simulators have applied either the trapezoidal rule or BDF schemes to solve the circuit equations~\cite{GFtM05}. We will abbreviate the trapezoidal rule as TR and the signal-flow based trapezoidal rule as sfTR. The increment function of the trapezoidal rule tailored to time-driven ordinary differential equations can be written as \begin{equation} \Phi(t^m, x^m, h) = \frac{1}{2}\left(f_I(x_E^m, x_I^m) + f_I(x_E^{m+1}, x_I^{m+1})\right). \end{equation} That is, at each time step a system of nonlinear equations \begin{equation} F(z) \coloneqq z - x_I^m - \frac{h}{2}\left(f_I(x_E^m, x_I^m) + f_I(x_E^{m+1}, z)\right) = 0 \end{equation} has to be solved. Using the Newton--Raphson method, this leads to the iteration \begin{equation} z_{k+1} = z_k + \Delta z_k, \end{equation} where $ \Delta z_k $ is the solution of the linear system of equations \begin{equation} \left(I - \frac{h}{2} \pd{f_I}{x_I}(x_E^{m+1}, z_k)\right) \Delta z_{k} = -z_k + x_I^m + \frac{h}{2}\left(f_I(x_E^m, x_I^m) + f_I(x_E^{m+1}, z_k)\right). \end{equation} As a starting point for the iteration, we use $ z_0 = x_I^m $. \begin{example} \label{ex:Inverterchain_sfTR} To facilitate comparisons of the explicit Runge--Kutta method and the implicit trapezoidal rule, we repeat the simulation of the inverter chain of length $ N = 100 $ with the settings described in Example~\ref{ex:Inverterchain_sfRK}. Figure~\ref{fig:Inverterchain_sfTR} shows the runtimes of the simulation with both the standard trapezoidal rule and the signal-flow based trapezoidal rule for varying model complexities and input functions again. We use the Newton--Raphson method to solve the nonlinear systems and the LU factorization to solve the resulting linear systems of equations. For the signal-flow based simulation, only the active and semi-latent parts of the nonlinear and linear systems of equations are generated and solved.
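The Newton iteration for the trapezoidal step described above can be sketched as follows. The names are ours, and a finite-difference Jacobian stands in for the element stamps a circuit simulator would use:

```python
import numpy as np

def trapezoidal_step(f_I, x_E_m, x_E_m1, x_I_m, h, tol=1e-12, max_iter=20):
    """One trapezoidal-rule step for x_I' = f_I(x_E, x_I), solving
    F(z) = z - x_I^m - h/2 (f_I(x_E^m, x_I^m) + f_I(x_E^{m+1}, z)) = 0
    with plain Newton iteration started at z_0 = x_I^m."""
    n = x_I_m.size
    rhs_old = f_I(x_E_m, x_I_m)          # f_I at the old time point, fixed
    z = x_I_m.copy()
    for _ in range(max_iter):
        F = z - x_I_m - h / 2 * (rhs_old + f_I(x_E_m1, z))
        if np.linalg.norm(F) < tol:
            break
        # Finite-difference approximation of I - h/2 * df_I/dx_I.
        J = np.eye(n)
        eps = 1e-8
        fz = f_I(x_E_m1, z)
        for j in range(n):
            dz = np.zeros(n)
            dz[j] = eps
            J[:, j] -= h / 2 * (f_I(x_E_m1, z + dz) - fz) / eps
        z = z + np.linalg.solve(J, -F)   # Delta z_k solves J Delta z_k = -F
    return z
```

For the scalar test equation $ \dot{x} = -x $ this reproduces the well-known trapezoidal update $ x^{m+1} = x^m (1 - h/2)/(1 + h/2) $.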
Here, the influence of the model complexity is negligible since the runtime of the LU factorizations dominates. Table~\ref{tab:Inverterchain_sfTR} contains the number of required transistor model evaluations. The influence of $ \varepsilon $ on the speedup of sfTR and the average deviation per step for a fixed delay $ \Delta T = 10 $ are shown in Figure~\ref{fig:InverterchainEpsilon_TR}. If the delay $ \Delta T $ of the input function is greater than $ 12 $ or the period is greater than $ 14 $, respectively, then the runtime of the trapezoidal rule also depends on the latency. This is due to the fact that the signal needs approximately this period of time to pass all inverters. For larger values of $ \Delta T $, there is a small time interval where all vertices are latent and thus the Newton--Raphson method needs fewer iterations to converge. \begin{figure}[htbp] \begin{center} \begin{minipage}[c]{0.45\textwidth} \centering \subfiguretitle{TR} \vspace*{0.35em} \includegraphics[width=\textwidth]{InverterchainTime_TR} \end{minipage} \begin{minipage}[c]{0.45\textwidth} \centering \subfiguretitle{sfTR} \vspace*{0.35em} \includegraphics[width=\textwidth]{InverterchainTime_sfTR} \end{minipage} \\ \vspace*{3mm} \begin{minipage}[c]{0.45\textwidth} \centering \subfiguretitle{TR vs.
sfTR} \vspace*{0.35em} \includegraphics[width=\textwidth]{Inverterchain_TR_sfTR} \end{minipage} \end{center} \caption{Influence of the complexity and latency on the runtime of TR and sfTR.} \label{fig:Inverterchain_sfTR} \end{figure} \begin{table}[htb] \caption{Number of transistor model evaluations of TR and sfTR.} \newcommand{\mc}[1]{\multicolumn{1}{|c|}{#1}} \newcommand{\ts}{\mspace{2mu}} \footnotesize \centering \begin{tabular}{|r*{5}{|r}|} \hline \mc{$ \Delta T $} & \mc{0} & \mc{5} & \mc{10} & \mc{15} & \mc{20} \\ \hline \hline TR & $ 2\ts353\ts600 $ & $ 2\ts353\ts600 $ & $ 2\ts353\ts600 $ & $ 2\ts075\ts200 $ & $ 1\ts881\ts600 $ \\ sfTR & $ 1\ts736\ts618 $ & $ 784\ts214 $ & $ 486\ts788 $ & $ 357\ts118 $ & $ 307\ts582 $ \\ \hline \end{tabular} \label{tab:Inverterchain_sfTR} \end{table} \begin{figure}[htb] \begin{center} \begin{minipage}[c]{0.45\textwidth} \centering \subfiguretitle{Speedup} \vspace*{0.35em} \includegraphics[width=\textwidth]{InverterchainEpsilon_sfTR1} \end{minipage} \begin{minipage}[c]{0.45\textwidth} \centering \subfiguretitle{Deviation} \vspace*{0.35em} \includegraphics[width=\textwidth]{InverterchainEpsilon_sfTR2} \end{minipage} \end{center} \caption{Speedup and deviation of sfTR as a function of $ \varepsilon $.} \label{fig:InverterchainEpsilon_TR} \end{figure} \end{example} \section{Generalization to periodic systems} In power electronic circuits, diodes and semiconductor switches are constantly changing their status and a steady state condition is by definition reached when the waveforms are periodic with a time period $ T $ which depends on the specific nature of the circuit~\cite{MUR95}. The time scales of these circuits may differ by several orders of magnitude and the simulation requires very small step sizes to cover the dynamics of the fastest subsystems. The maximum simulation time, on the other hand, is usually determined by the slowest subsystems. 
Thus, a detailed simulation of power electronic circuits is in general very time-consuming. Now, we want to extend the signal-flow based approach to identify and exploit not the latency but the periodicity of subsystems in order to reduce the runtime of the simulation. \begin{definition}[Semi-periodicity] Let $ T $ be the fundamental period of the system and $ h = \frac{T}{p} $, $ p \in \mathbb{N} $, the step size. \begin{enumerate} \item An external variable $ x_{E, i} $, $ i \in \indices{n_E} $, is said to be \emph{semi-periodic} at $ t^m $ if \begin{equation} f_{E, i}(t^m + c_q h) = f_{E, i}(t^{m-p} + c_q h) \end{equation} for all $ q = 1, \dots, s $. \item An internal variable $ x_{I, i} $, $ i \in \indices{n_I} $, is defined to be \emph{semi-periodic} if \begin{equation} x_{I, i}^m = x_{I, i}^{m-p}. \end{equation} \end{enumerate} \end{definition} In contrast to the definition of semi-latency, the variables are not compared to the previous time step, but to the corresponding time step of the previous period. Roughly speaking, latency can be regarded as a special case of periodicity for which $ p = 1 $. \begin{definition}[Periodicity] A variable $ x_i $, $ i \in \indices{n} $, is called \emph{periodic of order} $ 1 $ if $ x_i $ and all variables of the set $ \pre{x_i} $ are semi-periodic. Additionally, a periodic variable $ x_i $ is defined to be \emph{periodic of order} $ \nu $ if all variables in $ \pre{x_i} $ are at least periodic of order $ \nu-1 $. \end{definition} Let $ \varepsilon $ again be a given error tolerance. For numerical computations, the semi-periodicity conditions are replaced by $ \abs{x_{E, i}^m - x_{E, i}^{m-p}} < \varepsilon $ and $ \abs{x_{I, i}^m - x_{I, i}^{m-p}} < \varepsilon $, respectively. Analogously to the latency-based methods, we do not update a variable if it is periodic of order one or higher. To illustrate the different activity states, we use the inverter chain.
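The numerical semi-periodicity test $ \abs{x_{I, i}^m - x_{I, i}^{m-p}} < \varepsilon $ can be implemented with a circular buffer that holds the last $ p $ values of a variable; a minimal sketch (class and method names are ours):

```python
from collections import deque

class PeriodicityMonitor:
    """Track semi-periodicity of one variable by comparing its current value
    with the value p steps earlier, kept in a circular buffer (p = T/h)."""

    def __init__(self, p, eps):
        self.buffer = deque(maxlen=p)    # holds the last p values
        self.eps = eps

    def update(self, value):
        """Push the new value and return True if the variable is currently
        semi-periodic, i.e. |x^m - x^{m-p}| < eps. While the buffer is still
        filling (fewer than p stored values), the variable counts as active."""
        semi_periodic = (len(self.buffer) == self.buffer.maxlen
                         and abs(value - self.buffer[0]) < self.eps)
        self.buffer.append(value)
        return semi_periodic
```

One monitor per variable suffices; the periodicity orders are then propagated over the dependency graph exactly as the latency orders are.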
\begin{example} The inverter chain is excited with a piecewise linear function which is periodic with $ T = 4 $ for $ t > 1 $. The input function and the resulting node voltages at intermediate vertices are shown in Figure~\ref{fig:InverterchainSimulationP}. \begin{figure}[htb] \begin{center} \begin{minipage}[c]{0.45\textwidth} \centering \subfiguretitle{Latency} \includegraphics[width=\textwidth]{InverterchainTrajectoriesP1} \end{minipage} \begin{minipage}[c]{0.45\textwidth} \centering \subfiguretitle{Periodicity} \includegraphics[width=\textwidth]{InverterchainTrajectoriesP2} \end{minipage} \end{center} \caption{Comparison of latency and periodicity. The curves show the node voltages $ v_3 $, $ v_7 $, and $ v_{11} $, the thin horizontal lines the corresponding states of the variables. Here, $ 0 $ denotes active, $ 1 $ semi-latent or semi-periodic, and $ 2 $ latent or periodic, respectively.} \label{fig:InverterchainSimulationP} \end{figure} \end{example} \begin{definition}[Signal-flow based periodic Runge--Kutta method] An explicit \emph{signal-flow based periodic Runge--Kutta method} for a time-driven ordinary differential equation is defined by \begin{equation} \begin{split} x_E^{m+1} &= x_E^m + \Delta x_E^m, \\ x_{I, i}^{m+1} &= \begin{cases} x_{I, i}^{m-p+1}, & \text{if } x_{I, i} \text{ is periodic of order } s, \\ x_{I, i}^m + \Delta x_{I, i}^m, & \text{otherwise}, \end{cases} \end{split} \end{equation} for $ i \in \indices{n_I} $. \end{definition} To exploit the periodicity of subsystems and to reduce the number of function evaluations, we store the vectors $ x^{m-p+1}, x^{m-p+2}, \dots, x^m $ in a circular buffer. \begin{theorem} \label{th:pERK=sfpERK} The explicit Runge--Kutta methods and the corresponding signal-flow based methods for periodic systems are equivalent. \end{theorem} \begin{proof} \allowdisplaybreaks The proof is almost identical to the proof of Theorem~\ref{th:ERK=sfERK}. 
We add again the superscript $ m $ or $ m-p $ to the stages to differentiate between the time points. Let $ x_{I, i} $ be periodic at $ t^m $, i.e.\ $ x_{I, i}^m = x_{I, i}^{m-p} $ and \begin{align*} f_{E, j}(t^m + c_q h) &= f_{E, j}(t^{m-p} + c_q h) \quad \forall x_{E, j} \in \pre{x_{I, i}}, \\ x_{I, j}^m &= x_{I, j}^{m-p} \quad \forall x_{I, j} \in \pre{x_{I, i}}. \end{align*} For $ q = 1 $, this yields \begin{equation*} k_{I, i}^{m, 1} = f_{I, i}(x_E^m, x_I^m) = f_{I, i}(x_E^{m-p}, x_I^{m-p}) = k_{I, i}^{m-p, 1} \end{equation*} and hence by induction \begin{align*} k_{I, i}^{m, q} &= f_{I, i}\big(k_E^{m, q}, x_I^m + h \sum_{r=1}^{q-1} a_{qr} k_I^{m, r}\big) \\ &= f_{I, i}\big(k_E^{m-p, q}, x_I^{m-p} + h \sum_{r=1}^{q-1} a_{qr} k_I^{m-p, r}\big) = k_{I, i}^{m-p, q} \end{align*} for each variable $ x_{I, i} $ which is periodic of order $ q $. Consequently, \begin{equation*} \begin{split} x_{I, i}^{m+1} &= x_{I, i}^m + h \, \Phi_i(t^m, x^m, h) \\ &= x_{I, i}^m + h \sum_{q=1}^s b_q k_{I, i}^{m, q} \\ &= x_{I, i}^{m-p} + h \sum_{q=1}^s b_q k_{I, i}^{m-p, q} \\ &= x_{I, i}^{m-p} + h \, \Phi_i(t^{m-p}, x^{m-p}, h) = x_{I, i}^{m-p+1}, \end{split} \end{equation*} for each $ x_{I, i} $ which is periodic of order $ s $. \end{proof} Now, let sfpRK denote the signal-flow based standard fourth-order Runge--Kutta method for periodic systems. \begin{example} \label{ex:Inverterchain_sfpRK} To compare the signal-flow based method for periodic systems with the standard Runge--Kutta method, we simulate the inverter chain as described in Example~\ref{ex:Inverterchain_sfRK}. The results are shown in Figure~\ref{fig:Inverterchain_sfpRK} and Table~\ref{tab:FunctionEval_sfpRK}. Here, the number of function evaluations rises with increasing $ \Delta T $ since the time interval in which the system is periodic according to our definition decreases. 
\begin{figure}[htbp] \begin{center} \begin{minipage}[c]{0.45\textwidth} \centering \subfiguretitle{RK} \vspace*{0.35em} \includegraphics[width=\textwidth]{InverterchainTime_RK} \end{minipage} \begin{minipage}[c]{0.45\textwidth} \centering \subfiguretitle{sfpRK} \vspace*{0.35em} \includegraphics[width=\textwidth]{InverterchainTime_sfpRK} \end{minipage} \\ \vspace*{3mm} \begin{minipage}[c]{0.45\textwidth} \centering \subfiguretitle{RK vs. sfpRK} \vspace*{0.35em} \includegraphics[width=\textwidth]{Inverterchain_RK_sfpRK} \end{minipage} \end{center} \caption{Influence of the complexity and latency on the runtime of RK and sfpRK.} \label{fig:Inverterchain_sfpRK} \end{figure} \begin{table}[htb] \caption{Number of transistor model evaluations of RK and sfpRK.} \newcommand{\mc}[1]{\multicolumn{1}{|c|}{#1}} \newcommand{\ts}{\mspace{2mu}} \footnotesize \centering \begin{tabular}{|r*{5}{|r}|} \hline \mc{$ \Delta T $} & \mc{0} & \mc{5} & \mc{10} & \mc{15} & \mc{20} \\ \hline \hline RK & $ 3\ts200\ts000 $ & $ 3\ts200\ts000 $ & $ 3\ts200\ts000 $ & $ 3\ts200\ts000 $ & $ 3\ts200\ts000 $ \\ sfpRK & $ 422\ts328 $ & $ 700\ts936 $ & $ 999\ts672 $ & $ 1\ts360\ts800 $ & $ 1\ts760\ts800 $ \\ \hline \end{tabular} \label{tab:FunctionEval_sfpRK} \end{table} \end{example} \section{Conclusion} The efficiency of the signal-flow based Runge--Kutta methods depends strongly on the characteristic properties of the system. The inverter chain example shows that if during the simulation large parts of the system are latent and function evaluations are comparatively time-consuming, then the signal-flow based methods result in a substantially reduced runtime while introducing only a small deviation compared to the corresponding standard Runge--Kutta methods. If, on the other hand, large parts are periodic with a fundamental period $ T $, then the signal-flow based methods for periodic systems can be used to speed up the simulation. The following example summarizes these results.
\begin{example} \label{ex:Inverterchain_sfRK_sfpRK} Figure~\ref{fig:Inverterchain_sfRK_sfpRK} shows a comparison of the signal-flow based standard Runge--Kutta method and the corresponding method for periodic systems. If $ T $ is small, then the periodicity-oriented Runge--Kutta method is more efficient since the circuit is active most of the time. With increasing $ T $, the latency exploitation becomes more efficient. \begin{figure}[htbp] \begin{center} \begin{minipage}[c]{0.45\textwidth} \centering \subfiguretitle{sfRK vs. sfpRK} \vspace*{0.35em} \includegraphics[width=\textwidth]{Inverterchain_sfRK_sfpRK} \end{minipage} \begin{minipage}[c]{0.45\textwidth} \centering \subfiguretitle{sfpRK vs. sfRK} \vspace*{0.35em} \includegraphics[width=\textwidth]{Inverterchain_sfpRK_sfRK} \end{minipage} \end{center} \caption{Comparison of sfRK and sfpRK.} \label{fig:Inverterchain_sfRK_sfpRK} \end{figure} \end{example} \section{Further extensions} To utilize not only the temporal latency, i.e.\ inactivity over a period of time, but also the spatial latency, i.e.\ inactivity during the Newton--Raphson iterations, the proposed techniques might be applicable as well. This could, for example, be used to speed up the DC analysis, exploiting the fact that some parts of the circuit possibly converge rapidly to a solution while other parts converge only very slowly.
\section{Introduction} In vacuum, the static cylindrically symmetric spacetimes, in the absence of a cosmological constant, were found by Levi-Civita \cite{LeviCivita} just a few years after the advent of General Relativity. However, the inclusion of a nonzero cosmological constant was only achieved almost 70 years later by Linet \cite{Linet} and Tian \cite{Tian}. More recently, some geometrical properties of these spacetimes, such as the presence of conical singularities, were reviewed in \cite{daSilvaC,Bicak}. The stationary cylindrically symmetric vacuum solution was discovered independently by Lanczos \cite{Lanczos} and Lewis \cite{Lewis}. The general solution contains a number of integration constants, whose physical interpretation has been studied in \cite{daSilvaA,daSilvaB}. In vacuum, the cylindrical stationary spacetime with a nonvanishing cosmological constant was derived in \cite{Santos} (see also \cite{Krasinski}). The interpretation of the integration constants was clarified in \cite{MacCallumSantos}, where it was proved that three of them are indeed essential parameters. Two integration constants have a topological origin \cite{MacCallum}, and a third one characterizes the local gravitational field. Although the static cylindrically symmetric spacetimes are widely known in vacuum, exact solutions containing a massless scalar field as matter source have not been obtained in their most general form until now. Only solutions with plane symmetry, which are a particular case of the cylindrical ones, have been reported \cite{Vuille, GM}. In this article, the general stationary cylindrically symmetric solution of the Einstein-massless scalar field system with a non-positive cosmological constant $\Lambda$ is found, and its geometrical properties are studied. The aim of this work is to determine the implications of a massless scalar field in a cylindrically symmetric system.
Due to the high interest in exact solutions whose asymptotic behavior approaches the anti-de Sitter spacetime, we include in the analysis a negative cosmological constant. In fact, the solutions presented here, for $\Lambda <0$, have that asymptotic behavior. Moreover, we study the effect of a massless scalar field in the case of a vanishing cosmological constant, i.e., we explore the backreaction generated by the scalar field in the well-known Lanczos-Lewis and Levi-Civita spacetimes. As expected, in the absence of suitable potentials and non-minimal couplings for the scalar field, the no-hair theorem rules out solutions having event horizons, and this is precisely our case. We are just considering a massless scalar field with a constant potential (zero or negative). Thus, in general, the solutions presented here contain naked singularities, which however could have some physical interest \cite{Gubser}. The article is organized as follows. In the next section, the field equations are solved by considering a general stationary cylindrically symmetric ansatz and a negative cosmological constant. We obtain the general solution, which can be expressed as a linear combination of three functions. Then, the local properties of the solutions are studied using the Newman-Penrose (NP) formalism, where the Weyl-NP scalars allow us to obtain the Petrov classification of these spacetimes. It is shown that a parameter included through the scalar field enlarges the family of spacetimes with respect to the vacuum ones. Afterwards, following \cite{MacCallum}, the stationary spacetime is obtained from the static one by means of a topological construction. These formalisms allow us to identify the four essential parameters of the general solution. One of them is the amplitude of the scalar field, which, in conjunction with a second one, describes the strength of the gravitational field.
The remaining parameters have a topological origin and are just globally defined, because they cannot be removed by a proper coordinate transformation. Moreover, the mass and angular momentum are computed by using the Regge-Teitelboim method \cite{ReggeTeitelboim}. These conserved charges illustrate the physical meaning of the essential parameters. The case of a vanishing cosmological constant is considered in section \ref{four}. We note that it is necessary to integrate the field equations from scratch, because a special class of solutions is not available by just taking the limit $\Lambda\rightarrow0$ in the solutions presented in Sec. \ref{two}. We find that these spacetimes have all their scalar invariants constant, and are supported by a phantom scalar field. The last section contains some concluding remarks. \section{Stationary cylindrically symmetric solutions with $\Lambda<0$}\label{two} We consider the Einstein-Hilbert action with a massless scalar field and a cosmological constant $\Lambda$, \begin{equation} \label{action} I=\int d^{4}x\sqrt{-g}\left[\frac{R-2\Lambda}{2 \kappa}-\frac{1}{2}g^{\mu\nu}\partial_{\mu}\Phi\partial_{\nu}\Phi\right], \end{equation} where $\kappa=8 \pi G$ is the gravitational constant. The stress-energy tensor turns out to be \begin{equation} T_{\mu\nu}=\partial_{\mu}\Phi\partial_{\nu}\Phi-\frac{1}{2}g_{\mu\nu}g^{\alpha\beta}\partial_{\alpha}\Phi\partial_{\beta}\Phi, \end{equation} and the field equations are given by \begin{equation} \label{FE} G_{\mu\nu}+\Lambda g_{\mu\nu}=\kappa T_{\mu\nu}, \quad \Box \Phi=0.
\end{equation} The general stationary, cylindrically symmetric\footnote{In order to include spacetimes lacking a regular axis, we are adopting the less restrictive definition of cylindrical symmetry given in \cite{MacCallumSantos}.} configuration can be described by the line element \begin{equation}\label{ds2} ds^2\!=\!g_{tt}(r)dt^2+g_{\phi\p}(r)d\phi^2+g_{zz}(r)dz^2+2g_{t\phi}(r)dt d\phi+dr^2, \end{equation} where the coordinates range as $t\in(-\infty,\infty)$, $r\in[0,\infty)$, $z\in(-\infty,\infty)$ and $\phi\in[0,2\pi)$, and a scalar field depending just on the radial coordinate, $\Phi=\Phi(r)$. The general solution \eqref{ds2} of the field equations \eqref{FE}, considering a negative cosmological constant $\Lambda=-3 l^{-2}$, can be written as a linear combination of the functions \begin{equation} \label{gs} g_i(r)=\left(\frac{ e^{3 r/l}-b}{ e^{3 r/l}+b}\right)^{K_i}\!\left(e^{3 r/l} -b^2 e^{-3 r/l}\right)^{2/3}\!, \quad i\in\{0,1,2\}, \end{equation} where the metric coefficients read \begin{equation}\label{ds2comp} \begin{split} g_{tt}(r)&=a_1 g_1(r)-a_0 g_0(r),\\ g_{\phi\p}(r)&=b_1 g_1(r)-b_0 g_0(r),\\ g_{t\phi}(r)&=\sqrt{a_0 b_0}g_0(r)-\sqrt{a_1 b_1}g_1(r),\\ g_{zz}(r)&=c_0 g_2(r). \end{split} \end{equation} The scalar field is given by \begin{equation}\label{sf} \Phi(r)=\Phi_0+\frac{1}{2}\sqrt{\frac{\alpha}{2\kappa }}\log\left(\frac{ e^{3 r/l}-b}{ e^{3 r/l}+b}\right)^2. \end{equation} Here $K_i$, $a_0$, $a_1$, $b$, $b_0$, $b_1$, $c_0$, $\alpha$ and $\Phi_0$ are integration constants. The constants $K_i$ are not independent, since they satisfy the algebraic relations \begin{eqnarray} K_0+ K_1+ K_2&=&0,\label{relnorm}\\ K_0 K_1+ K_1 K_2+ K_2 K_0&=&-\frac{4}{3}+\alpha.\label{relesp} \end{eqnarray} In order to ensure a real metric and scalar field, the previous algebraic relations fix bounds for the constants.
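As a quick sanity check of the relations \eqref{relnorm}--\eqref{relesp} and of the scalar equation, note that since $\sum_i K_i=0$ one has $\sqrt{g_0g_1g_2}=e^{3r/l}-b^2e^{-3r/l}$, so $\Box\Phi=0$ is equivalent to $\sqrt{g_0g_1g_2}\,\Phi'$ being constant. The following sketch (not part of the derivation; units $\kappa=l=1$, with the symmetric choice $K_1=K_2$) verifies both statements numerically:

```python
# Sanity check (illustrative, not from the paper): units kappa = l = 1.
# For K1 = K2 the relations (relnorm)-(relesp) give K1 = K2 = sqrt(4-3*alpha)/3,
# K0 = -2*K1.  Since sum(K_i) = 0, u = sqrt(g0*g1*g2) = e^{3r} - b^2 e^{-3r},
# and Box(Phi) = 0 is equivalent to u(r) * Phi'(r) being constant in r.
import math

alpha, b = 1.0, 0.5
K1 = K2 = math.sqrt(4 - 3 * alpha) / 3
K0 = -2 * K1
assert abs(K0 + K1 + K2) < 1e-12                                # (relnorm)
assert abs(K0*K1 + K1*K2 + K2*K0 - (alpha - 4.0/3)) < 1e-12     # (relesp)

def u(r):                          # sqrt(g0*g1*g2); the K_i drop out
    e = math.exp(3 * r)
    return e - b**2 / e

def phi(r):                        # scalar field, with Phi_0 = 0
    e = math.exp(3 * r)
    return 0.5 * math.sqrt(alpha / 2) * math.log(((e - b) / (e + b))**2)

def flux(r, h=1e-5):               # u * dPhi/dr via central difference
    return u(r) * (phi(r + h) - phi(r - h)) / (2 * h)

vals = [flux(r) for r in (0.3, 0.8, 1.5, 2.0)]
assert max(vals) - min(vals) < 1e-6    # constant  =>  Box(Phi) = 0
```

One finds the constant value $\sqrt{\alpha/2\kappa}\,(6b/l)$ for the conserved flux, consistent with the radial profile \eqref{sf}.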
The constant $\alpha$ runs in the interval $0\leq\alpha\leq 4/3$, and the constants $|K_i|$ are bounded from above by $\frac {2} {3}\sqrt {4 - 3\alpha}$, $\frac {1} {3}\sqrt {4 - 3\alpha}$, and $\frac {1} {3}\sqrt {4 - 3\alpha}$ in any order. Note that the presence of the scalar field is encoded in the additional integration constant $\alpha$ in \eqref{relesp}. In the absence of the scalar field, the stationary solutions presented in \cite{Santos}, and the static ones in \cite{Linet,Tian}, are recovered. The constant $c_0$ can be absorbed by rescaling the noncompact coordinate $z$, and only one of the constants $a_0$, $a_1$, $b_0$, $b_1$ is essential, as will become clear in the next subsection. In order to gain insight into the parameter $b$, it is convenient to start with the static metric \begin{equation}\label{Linet1} ds^2=-g_{0}(r)dt^2+g_{1}(r)l^2 d\phi^2+g_{2}(r)dz^2+dr^2. \end{equation} The constant $b$ determines the location of the axis of symmetry at $r_0=(l/3)\log |b|$, and it can be removed from the scalar field by a shift of the radial coordinate $r \rightarrow r+r_0$. With this shift, $b$ just appears as a multiplicative factor $b^{2/3}$ in $g_{i}$, and consequently, the invariants do not depend on $b$. In other words, $b$ could be removed from the solution by rescaling the coordinates $t, z, \phi$. However, $\phi$ is a compact coordinate and global properties will be modified with this rescaling. In fact, the metric with the shifted radial coordinate reduces in the absence of the scalar field to that shown in \cite{Bicak}, where a conicity parameter equivalent to $b^{-1/3}$ is explicitly exhibited. In summary, $b$ has no relevance for the local properties, but it is a topological parameter that contributes to the mass of the solution (see subsection \ref{three}). The general solution previously considered for the vacuum case does not contain a locally anti-de Sitter (AdS) spacetime \cite{daSilvaC}.
Indeed, the locally AdS solution appears as a special branch disconnected from the general one \cite{Bicak}. The advantage of our general static solution is that it is smoothly connected to a locally AdS spacetime, and in fact, this is achieved just by setting $b=0$ in (\ref{Linet1}). Explicitly, we obtain \begin{equation}\label{locAdS} ds^2=dr^2+e^{2 r/l}\left(-dt^2+l^2 d\phi^2+dz^2\right), \end{equation} which becomes the background required for computing the conserved charges in subsection \ref{three}. \subsection{Local properties} In order to obtain a deeper insight into the geometrical properties of the solution, we make use of an invariant characterization of the spacetimes. Spacetimes are usually classified according to the Petrov classification of their Weyl invariants. Note that for analyzing the local properties it is enough to consider the static solutions because, as will be shown in the next subsection, the stationary solutions can be obtained from a topological construction, and therefore they are locally equivalent. The general solution presented above, (\ref{Linet1}), is of type I (also known as algebraically general). However, as Linet pointed out in \cite{Linet}, a particular choice of the constants $K_0$, $K_1$ and $K_2$ makes the solution an algebraically special spacetime of type D. We find that, with the inclusion of the scalar field, i.e., by means of the constant $\alpha$, the Petrov type D spacetimes are no longer determined only by those particular values of $K_i$, but by a range of values driven by $\alpha$. Namely, Petrov type D spacetimes are found for values of $K_i$ taken as any ordering of $\pm\frac {2} {3}\sqrt {4 - 3\alpha}$, $\mp\frac {1} {3}\sqrt {4 - 3\alpha}$, and $\mp\frac {1} {3}\sqrt {4 - 3\alpha}$, provided $0\leq\alpha < 4/3$. These type D spacetimes have a planar section (two $K_i$ are equal), which allows an additional symmetry.
This fourth Killing vector corresponds to a rotation or a boost in this plane depending on its signature. A novel feature introduced by the scalar field is a nontrivial Petrov type O subfamily. In fact, for $\alpha=\frac{4}{3}$, $b\neq 0$ and vanishing $K_i$, a conformally flat spacetime arises, and it is given by \begin{equation} \label{cf} ds^2=dr^2+(e^{3 r/l}-b^2 e^{-3 r/l})^{2/3}(-dt^2+dz^2+l^2 d\phi^2). \end{equation} In other words, the scalar field gives rise to a wider family of spacetimes. This Petrov type O is a new subfamily parametrized by $b$, which strictly emerges due to the scalar field. In this case the number of isometries is enlarged to six since we are dealing with a conformally flat spacetime. It is remarkable to have such a number of symmetries in a space endowed with a matter source, in particular since for the vacuum (nontrivial) case there are at most four Killing vectors \cite{daSilvaC}. For $b=0$ the scalar field is trivial ---it is a constant--- and (\ref{cf}) reduces to the locally AdS spacetime (\ref{locAdS}). A study of the Weyl and Ricci scalars of the Newman-Penrose formalism shows that they are singular at the axis for the whole family of solutions, except in two cases. The first one corresponds to the CSI spacetimes, which will be discussed in Sec. \ref{csi}. The second case appears for a constant scalar field ($\alpha=0$) provided the constants $K_i$ take the values $\{\pm\frac{4}{3},\mp\frac{2}{3},\mp\frac{2}{3}\}$, or any permutation of them \cite{Linet}. Since this special solution is regular at the axis, a change of the radial coordinate $r$ can be performed to prove that this type D solution is a black string.
In fact, for $K_0=4/3$ and $K_1=K_2= -2/3$ the transformation reads \begin{equation} r= \frac{2 l}{3} \log\left[\frac{\rho^{3/2}+\sqrt{\rho^3-4 b l^3 }}{2 l^{3/2}} \right], \end{equation} yielding the black string \begin{equation} ds^2=-\left( \frac{\rho^2}{l^2}-\frac{4 l b}{\rho}\right)dt^2 + \frac{d\rho^2}{\displaystyle \frac{\rho^2}{l^2}-\frac{4 l b}{\rho}}+ \frac{\rho^2}{ l^{2}}dz^2+\rho^2 d\phi^2. \end{equation} Note that the original axis of symmetry at $r_0=(l/3)\log |b|$ is mapped to the horizon $\rho_+= 2^{2/3} l b^{1/3}$, and the new axis of symmetry is located at $\rho=0$. This black string was previously found by solving the Einstein field equations in \cite{Lemos}, and by using an adequate coordinate transformation in \cite{Bicak}. \subsection{Topological construction of the rotating solution from a static one} As explained in \cite{MacCallum}, a diagonal static metric with dependence on the spacelike coordinates $r$ and $z$, and with the ``angular'' coordinate stretched to infinity, can be locally equivalent to, but globally different from, a stationary axisymmetric metric obtained from a topological identification in the static spacetime. This identification is defined by two essential parameters. Such essential parameters cannot be removed by a permissible change of coordinates since they encode topological information. In this section we are going to build the stationary solution \eqref{ds2} with the metric coefficients \eqref{ds2comp}, using the procedure presented in \cite{MacCallum} in the particular case of cylindrical symmetry. Let us consider the static solution with scalar field \begin{equation}\label{Linetscalar} ds^2=-g_{0}(r)d\hat{t}^2+g_{1}(r) l^2 d\hat{\phi}^2+g_{2}(r)dz^2+dr^2, \end{equation} where $g_{i}$ is given by \eqref{gs} in a coordinate system $(\hat{t},r,z,\hat{\phi})$ with $\hat{t}\in(-\infty,\infty)$, $r\in[0,\infty)$, $z\in(-\infty,\infty)$ and $\hat{\phi}\in(-\infty,\infty)$. Note that $\hat{\phi}$ is not a compact coordinate.
We perform a coordinate transformation on the $(\hat{t},\hat{\phi})$ plane given by \begin{equation}\label{ct} \hat{t}=\beta_0 \phi+\beta_1 t,\qquad \hat{\phi}=\alpha_0 \phi+\alpha_1t, \end{equation} where $\alpha_0$, $\alpha_1$, $\beta_0$ and $\beta_1$ are parameters. This transforms \eqref{Linetscalar} into \eqref{ds2comp} by defining these parameters as follows \begin{eqnarray}\label{parameters} \alpha_0&= \sqrt{a_0}, \quad \alpha_1 = \displaystyle -\frac{\sqrt{a_1}}{l},\nonumber \\ \beta_0&= -\sqrt{b_0}, \quad \beta_1 =\displaystyle\frac{\sqrt{b_1}}{l}. \end{eqnarray} As shown in \cite{MacCallum}, $\alpha_1$ and $\beta_1$ are not essential parameters, and they can be set as $\alpha_1=0$ and $\beta_1=1$. On the contrary, $\alpha_0$ and $\beta_0$ are essential. However, after a topological identification, which transforms the $(\hat{t},\hat{\phi})$ plane into a cylinder, one can fix the period of the angular coordinate $\phi$ to $2\pi$ by choosing $\alpha_0= 1$. Since in (\ref{Linetscalar}) none of the coordinates is compact, $b$ can be absorbed by rescaling the coordinates. After identification, $\hat{\phi}$ becomes periodic and $b$ has a topological meaning. The parameter $\alpha_0$ plays the same topological role, and in fact it redefines $b$. Therefore, without loss of generality $\alpha_0$ can be fixed, but not simultaneously with $b$. In other words, since from the beginning the static solution contains an arbitrary conicity parameter $b$, the constant $\alpha_0$ can be fixed. Going back to relations \eqref{parameters}, we find that $a_0=1, a_1=0$ and $b_1=l^2$ reproduce the set of values chosen for $\alpha_0, \alpha_1$ and $\beta_1$. Then, after fixing the period as $2 \pi$ there is just one essential parameter $\beta_0$ in the transformation, which will be named $-a$ hereafter. Hence, the transformation (\ref{ct}) reduces to \begin{equation}\label{trans} \hat{t}=t-a \phi,\qquad \hat{\phi}= \phi.
\end{equation} In summary, a topological construction can bring the solution \eqref{Linetscalar} into a locally equivalent, but globally different, solution by doing the transformation (\ref{trans}) to get \begin{equation}\label{Linetsol} ds^2=-g_0(r)(dt-a d\phi)^2+g_1(r) l^2 d\phi^2+g_2(r)dz^2+dr^2. \end{equation} Transformation \eqref{trans} is not a proper coordinate transformation, since it converts an exact 1-form into a closed but not exact 1-form, as was discussed in detail in \cite{Stachel}. Hence, (\ref{trans}) only preserves the local geometry, but not the global one. Therefore, the resulting manifold is globally stationary but locally static. Hereafter, we will consider \eqref{Linetsol} instead of \eqref{ds2comp} as the general solution, because it already contains all the local and global essential information. \subsection{Asymptotic behavior} In order to display the asymptotic behavior of the fields, it is convenient to use the coordinate $\rho= l e^{r/l}$. In this way, the behavior at large $\rho$ is given by \begin{equation}\label{asymp} \begin{split} g_{tt}(\rho)&=-\frac{\rho^2}{l^2} + \frac{2 b l K_0}{\rho}+O(\rho^{-4}),\\ g_{\phi\p}(\rho)&=\rho^2 (1 - \frac{a^2}{l^2})+\frac{ 2 l b (-l^2 K_1+a^2 K_0)}{\rho}+O(\rho^{-4}),\\ g_{t\phi}(\rho)&=\frac{\rho^2 a}{l^2}-\frac{ 2 b l a K_0}{\rho}+O(\rho^{-4}),\\ g_{zz}(\rho)&=\frac{\rho^2}{l^2}-\frac{ 2 b l K_2}{\rho}+O(\rho^{-4}), \qquad g_{\rho \rho}(\rho)=\frac{l^2}{\rho^2},\\ \Phi(\rho)&= \Phi_0+ \sqrt{\frac{2 \alpha}{\kappa}}\frac{ b l^3}{\rho^3}+ O(\rho^{-9}). \end{split} \end{equation} One can note that the metric asymptotically approaches a locally AdS spacetime, as the scalar field becomes constant. The background is fixed by setting $a=b=\alpha= \Phi_0=0$, which corresponds to a locally AdS spacetime. \subsection{Mass and angular momentum}\label{three} The mass and angular momentum of the solutions are determined using the Regge-Teitelboim method \cite{ReggeTeitelboim}. 
In the canonical formalism, the generator of an asymptotic symmetry associated with the vector $\xi=(\xi^{\perp},\xi^{i})$ is built as a linear combination of the constraints $\mathcal{H}_{\perp}, \mathcal{H}_{i}$, with an additional surface term $Q[\xi]$ \begin{equation} H[\xi]=\int d^{3} x \left( \xi^{\perp} \mathcal{H}_{\perp}+\xi^{i} \mathcal{H}_{i}\right) +Q[\xi]. \end{equation} A suitable choice of this surface term ensures that the generator has well-defined functional derivatives with respect to the canonical variables \cite{ReggeTeitelboim}. The surface term $Q[\xi]$ is the conserved charge under deformations $\xi$ provided the constraints vanish. For the action (\ref{action}), the variation of $Q[\xi]$ is given by \begin{align}\label{de} &\delta Q[\xi]=\oint d^{2}S_{l}\left[ \frac{G^{ijkl}}{2 \kappa}(\xi^{\bot}\delta g_{ij;k}-{\xi^{\bot}}_{,k} \delta g_{ij})+2 \xi_{k}\delta \pi^{kl}\right. \nonumber\\& \! \! \!\left.+(2 \xi^{k}\pi^{jl}\!-\!\xi^{l}\pi^{jk}) \delta g_{jk}\!-\! (\sqrt{g} \xi^{\bot}g^{lj}\Phi_{,j}\!+\!\xi^{l}\pi_{\Phi})\delta\Phi \right], \end{align} where $G^{ijkl}\equiv\sqrt{g}(g^{ik}g^{jl}+g^{il}g^{jk}-2g^{ij}g^{kl})/2$. The canonical variables are the spatial metric $g_{ij}$ and the scalar field $\Phi$ together with their respective conjugate momenta $\pi^{ij}$ and $\pi_\Phi$. To evaluate $\delta Q[\xi]$ we consider as asymptotic conditions just the asymptotic behavior of the solutions with a negative cosmological constant (\ref{asymp}), where the integration constants $K_{i}, a, b, \alpha$ are allowed to be varied. The additive constant of the scalar field $\Phi_0$ is considered as a fixed constant without variation, in order to preserve the asymptotic scale invariance\footnote{For $\delta\Phi_0 \neq 0$, $\delta Q[\xi]$ contains a term proportional to $\oint d^{2}S \xi^t\sqrt{\alpha} b \delta \Phi_0$. The integration of this term requires a boundary condition relating $\Phi_0$ with $\alpha$ and $b$.}.
Since the solution is in the comoving frame along $z$, the corresponding momentum $Q[\partial_z]$ vanishes. Then, the only nonvanishing charges are those associated with the symmetry under time translations and with rotational invariance, i.e., the mass and angular momentum, respectively. Defining $ q[\xi]$ as the charge per unit length $ Q[\xi]=\int q[\xi] dz$, we can obtain from (\ref{asymp}) and (\ref{de}) the explicit form of $\delta q[\xi]$ \begin{align} \delta q[\xi]&= \frac{6\pi }{\kappa}\left[ -\xi^t \delta (b(K_1+K_2))+\xi^\phi \delta (a b(K_1-K_0)) \right]. \end{align} Thus, using $\kappa= 8 \pi G$, the mass $M=q[\partial_t]$ and angular momentum $J=q[\partial_\phi]$ per unit length are \begin{align} M&=\frac{3 b}{4 G}K_0, &J&=\frac{3 a b}{4 G }(K_1-K_0). \end{align} These global charges are defined up to an additive constant without variation. In order to set the locally AdS spacetime (\ref{locAdS}) as a background, these additive constants must be chosen to be null. As we can see from the expression for the angular momentum, there are two ways of turning off the angular momentum. The first one is setting $a=0$, which cancels the off-diagonal term $g_{t\phi}$ in the metric. The second way is less obvious, since it is achieved by considering $K_0=K_1$. Indeed, this particular choice of the parameters yields a static solution of type D. This can be shown from the coordinate transformation \begin{equation} d\phi \rightarrow d\phi+\frac{a}{(a^2-l^2)}d t, \qquad d t \rightarrow d t. \end{equation} As analyzed in \cite{MacCallum}, this transformation contains an inessential parameter $\alpha_1= a/(a^2-l^2)$, which does not change the topology. Therefore, the solution with $K_0=K_1$ is not just locally, but also globally, equivalent to the static solution.
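Before moving to the $\Lambda=0$ case, the black-string identification of the previous subsection can be cross-checked numerically. The sketch below (illustrative, not part of the paper) works in units $l=1$ with $b=1$ and the quoted values $K_0=4/3$, $K_1=K_2=-2/3$:

```python
# Numeric check (units l = 1, b = 1): under
#   exp(3r) = ((rho**1.5 + sqrt(rho**3 - 4*b)) / 2)**2,
# the metric function with K = 4/3 becomes rho**2 - 4*b/rho (minus g_tt of the
# black string) and the one with K = -2/3 becomes rho**2, outside the horizon.
import math

b = 1.0

def g(E, K):                     # g_i as a function of E = exp(3r), cf. Eq. (gs)
    return ((E - b) / (E + b))**K * (E - b**2 / E)**(2.0 / 3)

rho_plus = 2**(2.0 / 3) * b**(1.0 / 3)   # horizon radius
for rho in (1.7, 2.0, 3.0, 5.0):
    assert rho > rho_plus
    E = ((rho**1.5 + math.sqrt(rho**3 - 4 * b)) / 2)**2
    assert abs(g(E, 4.0 / 3) - (rho**2 - 4 * b / rho)) < 1e-9 * rho**2
    assert abs(g(E, -2.0 / 3) - rho**2) < 1e-9 * rho**2
```

In particular, $E\rightarrow b$ as $\rho\rightarrow\rho_+$, so the original axis $e^{3r/l}=b$ is indeed mapped to the horizon.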
\section{Stationary cylindrically symmetric solutions with $\Lambda=0$}\label{four} In this section the field equations are integrated using the same procedure as in \cite{Linet}, but assuming from scratch a vanishing cosmological constant. We show that the limit $\Lambda \rightarrow 0$, or equivalently $l\rightarrow\infty$, in the configurations of Sec. \ref{two}, does not provide all the solutions coming from a direct integration of the field equations. Two classes of solutions are obtained. The first type corresponds to solutions that match the limit $\Lambda \rightarrow 0$ in the configurations introduced in Sec. \ref{two}, and they are dubbed Levi-Civita type spacetimes. The second type is formed by spacetimes having all their invariants constant. These two types will be analyzed in detail below. The discussion in this section is focused on static solutions. The topological construction explained in Sec. \ref{two} does not depend on the value of the cosmological constant, and in consequence, the stationary solutions for $\Lambda=0$ can be obtained from the improper transformation (\ref{trans}). Since (\ref{trans}) is a local transformation, the static configuration and its stationary counterpart share the same local properties. \subsection{Levi-Civita type spacetimes} \label{LCs} In this subsection, we show a Levi-Civita type spacetime in the presence of a massless scalar field. We consider the most general static cylindrical metric \begin{equation}\label{Linetnull} ds^2=-g_{0}(r)dt^2+g_{1}(r) d\phi^2+g_{2}(r)dz^2+dr^2, \end{equation} with a scalar field depending just on the radial coordinate $r$.
Following \cite{Linet}, the Einstein field equations can be put in a very simple form \begin{eqnarray} \left(\left(\frac{u}{g_i}\right)g_i^{\prime}\right)'&=&0,\quad i=0,1,2,\label{eqh}\\ \frac{g_0^{\prime}g_1^{\prime}}{g_0g_1}+\frac{g_1^{\prime}g_2^{\prime}}{g_1g_2}+\frac{g_2^{\prime}g_0^{\prime}}{g_2g_0}&=&2\kappa\Phi'^2,\label{eqnh} \end{eqnarray} with $u\equiv \sqrt{g_0 g_1 g_2}$. Using the sum of all the equations in \eqref{eqh}, and the definition of $u$, one obtains $u''=0$, so that \begin{equation} \label{usol} u(r)=K r+u_0, \end{equation} where $K$ and $u_0$ are integration constants. Substituting (\ref{usol}) in \eqref{eqh}, and choosing the axis at $r=0$, we obtain \begin{equation}\label{gsss} g_i(r)=r^{2/3+K_i}g_i^0, \end{equation} where $K_i$ and $g_i^0$ are integration constants fulfilling $g_0^0g_1^0g_2^0=K^2$. Moreover, replacing (\ref{gsss}) in the definition of $u$ we obtain the same relation for the $K_i$ given by Eq. (\ref{relnorm}). The scalar field is obtained from (\ref{eqnh}), $\Phi=\sqrt{\alpha/2\kappa}\log(r/r_0)$, where $\alpha$ is related to $K_i$ in the same way as in \eqref{relesp}, and $r_0$ is an arbitrary constant. The algebraic relations (\ref{relnorm}) and \eqref{relesp} determine two essential constants related to the gravitational and scalar field strengths. Since $\phi$ is an angular coordinate with a given period, the constant $g_1^0$ cannot be absorbed by a rescaling of this coordinate keeping the same period. Then, $g_1^0$ is a third essential parameter, and plays a topological role in the same way as $b$ in the previous section. The transformation (\ref{trans}) provides the fourth essential parameter for the stationary solution. Since $K_i$ and $\alpha$ must satisfy (\ref{relnorm}) and \eqref{relesp}, they are bounded in the same way as was established in Section \ref{two}. As in Sec. \ref{two}, we study the local properties through the Petrov classification.
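Before turning to that classification, the solution just obtained can be verified directly. The following sketch (illustrative, not part of the paper; units $\kappa=1$, with $g_i^0=1$ and $r_0=1$) checks the field equations \eqref{eqh} and \eqref{eqnh} for an admissible triple $K_i$:

```python
# Check of the Lambda = 0 Levi-Civita type solution (units kappa = 1):
# with g_i = r**(2/3 + K_i) one has g_i'/g_i = a_i/r where a_i = 2/3 + K_i,
# so (u/g_i) g_i' = a_i is constant (hence eq. (eqh) holds), and eq. (eqnh)
# reduces to  sum_{i<j} a_i*a_j / r**2 = 2 * Phi'(r)**2, Phi' = sqrt(alpha/2)/r.
import math

alpha = 1.0
K = (2.0 / 3, -1.0 / 3, -1.0 / 3)       # an admissible triple for this alpha
assert abs(sum(K)) < 1e-12                                       # (relnorm)
assert abs(K[0]*K[1] + K[1]*K[2] + K[2]*K[0] - (alpha - 4.0/3)) < 1e-12

a = [2.0 / 3 + Ki for Ki in K]
for r in (0.5, 1.0, 3.0):
    # eq. (eqh): (u/g_i) g_i' = (r/g_i) g_i' = a_i, independent of r
    for ai in a:
        gi, gip = r**ai, ai * r**(ai - 1)
        assert abs((r / gi) * gip - ai) < 1e-12
    # eq. (eqnh): pairwise sum equals 2*kappa*Phi'**2
    lhs = sum(a[i] * a[j] for i in range(3) for j in range(i + 1, 3)) / r**2
    rhs = 2 * (math.sqrt(alpha / 2) / r)**2
    assert abs(lhs - rhs) < 1e-12
```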
Generically, the solution is algebraically general, as occurs in vacuum \cite{Griffiths}, but algebraically special spacetimes can also be found. The scalar field parametrizes three families of type D spacetimes, which are described in Table 1. Two of these families ($S_1$ and $S_2$) are allowed only for a nonvanishing scalar field, while the third one ($S_3$) reduces to the three known vacuum type D Levi-Civita spacetimes by switching off the scalar field and by circular permutations of $K_i$. A nontrivial type O spacetime emerges strictly from the scalar field. In this case $K_0=K_1=K_2=0$ and $\alpha=4/3$, yielding the conformally flat metric \begin{equation} ds^2=dr^2+r^{2/3}(-dt^2+dz^2+g_1^0 d\phi^2). \end{equation} This is the counterpart with $\Lambda=0$ of the conformally flat spacetime described in (\ref{cf}). \begin{table}[h] \begin{tabular}{ccccc} \hline \hline & $K_0$ & $K_1$ & $K_2$ & $\alpha$ \\ \hline $S_1$ & $\frac {2} {3}\sqrt {4 - 3\alpha}$ & $-\frac {1} {3}\sqrt {4 - 3\alpha}$ & $-\frac {1} {3}\sqrt {4 - 3\alpha}$ & $(0, \frac {4}{3})$ \\ $S_2$ & $-\frac {2} {3}$ & $\frac {1} {3}\pm\sqrt {1 - \alpha}$ & $\frac {1} {3}\mp\sqrt {1 - \alpha}$ & $(0,1]$ \\ $S_3$ & $-\frac {2} {3}\sqrt {4 - 3\alpha}$ & $\frac {1} {3}\sqrt {4 - 3\alpha}$ & $\frac {1} {3}\sqrt {4 - 3\alpha}$ & $[0, \frac {4}{3})$ \\ \hline \hline \end{tabular} \caption{Petrov D spacetimes for $\Lambda=0$. The constants $K_i$ are classified in three sets, and depend on the amplitude of the scalar field $\alpha$. Within each set $K_0$, $K_1$ and $K_2$ can be taken in any order. The last column shows the range of $\alpha$ allowed for each set. The first two sets are exclusive to a non-constant scalar field ($\alpha \neq 0$), and the third one also includes a trivial scalar field.} \end{table} It is found that the nonvanishing components of the Riemann tensor ${R^{\mu \nu}}_{\lambda \rho}$ and the Kretschmann scalar are proportional to $r^{-2}$ and $r^{-4}$, respectively.
Then, the spacetime is asymptotically locally flat. Until now, we have assumed a nonvanishing $K$. However, when we consider $K=0$, i.e., $u=u_0$, equations \eqref{eqh} drastically modify the functional form of $g_i(r)$. This new branch of solutions, which is not provided by the limit $\Lambda \rightarrow 0$ in Sec. \ref{two}, is analyzed in the next subsection. \subsection{CSI spacetimes}\label{csi} In general, the Levi-Civita type spacetimes discussed above possess curvature invariants which are singular at $r=0$. However, it is possible to find regular spacetimes, i.e., spacetimes free of any curvature singularity, where in addition, all polynomial scalar invariants constructed from the Riemann tensor and its covariant derivatives are constant. These spacetimes are known as constant scalar invariant (CSI) spacetimes. In this subsection, a non-trivial CSI spacetime due to the presence of the scalar field is presented. It turns out that the cosmological constant must be switched off in order to obtain this class of spacetimes. This case is of particular interest since it provides a non-vacuum solution with constant curvature scalars. For simplicity, only the static cases will be considered, since the stationary CSI spacetimes can be obtained by performing the coordinate transformation \eqref{trans}. This special spacetime comes from considering $K=0$ in \eqref{usol}. As mentioned before, this branch of solutions is not smoothly connected with that shown in subsection \ref{LCs}. In fact, the solutions of the equations \eqref{eqh} are given by exponentials, \begin{equation} \label{gcsi} g_i(r)=g_i^0e^{K_i r}, \end{equation} and the scalar field, obtained from (\ref{eqnh}), is a linear function, $\Phi(r)=\Phi_0+\sqrt{\alpha /(2 \kappa)}r$. The integration constants satisfy the algebraic relations, \begin{equation}\label{CSIrel} \begin{split} K_0+K_1+K_2&=0,\\ K_0 K_1+K_1 K_2+K_2 K_0&=\alpha.
\end{split} \end{equation} Thus, from \eqref{CSIrel} we obtain \begin{eqnarray} K_0&=&\frac{1}{2}(-K_2\pm\sqrt{-3 (K_2)^2-4\alpha}),\\ K_1&=&\frac{1}{2}(-K_2\mp\sqrt{-3 (K_2)^2-4\alpha}). \end{eqnarray}\\[-7pt] Note that the reality condition of the line element demands $\alpha<-\frac{3}{4}(K_2)^2$, and as a consequence the scalar field becomes imaginary. This means that the presence of a phantom scalar field makes it possible to remove curvature singularities present in the vacuum solutions. The Petrov classification indicates that these spacetimes are type D. In order to verify that these spacetimes are indeed CSI spacetimes, we make use of a theorem proved in \cite{Coley1,Coley2}. The theorem states that any four dimensional locally homogeneous spacetime is a CSI spacetime. The line element \eqref{Linetnull} with the metric coefficients (\ref{gcsi}) has three trivial Killing vectors $\partial_t$, $\partial_z,$ and $\partial_\phi$. However, it is possible to find a fourth Killing vector given by \begin{equation}\label{tkv1} \xi^{(4)}=(-\frac{1}{2}K_0t,-\frac{1}{2}K_1\phi,-\frac{1}{2}K_2 z,1), \end{equation} in the coordinate system $(t,\phi,z,r)$, which, together with the trivial ones, forms a transitive group of isometries. Therefore, this spacetime is locally homogeneous. \section{Concluding remarks} In this paper, the general stationary cylindrically symmetric solution of the Einstein-massless scalar field system with a non-positive cosmological constant has been found, and its local and global properties have been studied. Four integration constants are essential parameters for the general solution. This means that these parameters encode all the relevant physical information. One of them is the amplitude of the scalar field, which, together with a second one present in the metric, characterizes the gravitational field strength.
The other two parameters have a topological origin, since they appear in the improper gauge transformation that allows us to obtain the stationary solution from the static one. The meaning of these parameters can also be analyzed from the expressions for the mass and angular momentum of the solutions with a negative cosmological constant. The Petrov classification was performed to explore the effects of the scalar field on the vacuum solutions for a negative and a vanishing cosmological constant. The inclusion of the scalar field enlarges the family of solutions in comparison with the vacuum case. Thus, type D solutions are now parametrized by the amplitude of the scalar field, and nontrivial type O solutions have been found in the presence of a nonvanishing scalar field. These conformally flat solutions endowed with a matter field have six Killing vectors. Note that in the vacuum case there are no type O solutions apart from the trivial ones: the locally Minkowski spacetime (for $\Lambda=0$) and the locally AdS spacetime (for $\Lambda<0$). Another interesting case occurs for $\Lambda=0$. There are special type D solutions which are possible only if the scalar field is present. We have shown that these spacetimes have a fourth Killing vector, which completes a transitive group of isometries, and consequently they are locally homogeneous. Thus, these solutions become CSI spacetimes dressed by a phantom scalar field. \acknowledgments We thank Eloy Ay\'on-Beato, Patricia Ritter, Marco Astorino and Ricardo Troncoso for helpful discussions. C. E. thanks CONICYT for financial support. This work has been partially funded by the Fondecyt grants 1121031 and 1130658. The Centro de Estudios Cient\'{\i}ficos (CECs) is funded by the Chilean Government through the Centers of Excellence Base Financing Program of Conicyt.
\section{Introduction} \begin{table}[ht] \scriptsize \begin{center} \begin{tabular}{ c | c | c c | c | c |} \cline{2-6} & \multicolumn{1}{ c |}{Low-Level Task} & \multicolumn{4}{ c |}{ High-Level Tasks}\\ \cline{2-6} & \multicolumn{1}{ c |}{BD} & \multicolumn{2}{ c |}{ SBL }& \multicolumn{1}{ c |}{SS} & \multicolumn{1}{ c |}{OP}\\ \cline{2-6} & ODS & MF & AP & PI-IOU & MR\\ \hline \multicolumn{1}{| c |}{SotA} & 0.76~\cite{Shen_2015_CVPR} & 28.0~\cite{BharathICCV2011} &19.9~\cite{BharathICCV2011} & 45.8~\cite{DBLP:journals/corr/ChenPKMY14} & 0.88~\cite{ZitnickDollarECCV14edgeBoxes}\\ \hline \multicolumn{1}{| c |}{\bf \textsc{HfL}\xspace} & \bf 0.77 & \bf 62.5 & \bf 54.6 & \bf 48.8 & \bf 0.90\\ \hline \end{tabular} \end{center} \caption{Summary of results achieved by our proposed method (\textsc{HfL}\xspace) and state-of-the-art methods (SotA). We provide results on four vision tasks: Boundary Detection (BD), Semantic Boundary Labeling (SBL), Semantic Segmentation (SS), and Object Proposal (OP). The evaluation metrics include ODS F-score for the BD task, max F-score (MF) and average precision (AP) for the SBL task, per-image intersection over union (PI-IOU) for the SS task, and max recall (MR) for the OP task. Our method produces better results in each of these tasks according to these metrics.\vspace{-0.3cm}} \label{summary} \end{table} In the vision community, boundary detection has always been considered a low-level problem. However, psychological studies suggest that when a human observer perceives boundaries, object-level reasoning is used~\cite{psych,sanguinetti2013ground,KourtziKanwisher01}. Despite these findings, most of the boundary detection methods rely exclusively on low-level color and gradient features. In this work, we present a method that uses object-level features to detect boundaries. We argue that using object-level information to predict boundaries is more similar to how humans reason.
Our boundary detection scheme can be viewed as a \textit{High-for-Low} approach where we use high-level object features as cues for a low-level boundary detection process. Throughout the rest of the paper, we refer to our proposed boundaries as \textit{High-for-Low} boundaries (\textsc{HfL}\xspace). We present an efficient deep network that uses object-level information to predict the boundaries. Our proposed architecture reuses features from the sixteen convolutional layers of the network of Simonyan et al.~\cite{Simonyan14c}, which we refer to as VGG net. The VGG net has been trained for object classification, and therefore, reusing its features allows our method to utilize high-level object information to predict \textsc{HfL}\xspace boundaries. In the experimental section, we demonstrate that using object-level features produces semantically meaningful boundaries and also achieves above state-of-the-art boundary detection accuracy. Additionally, we demonstrate that we can successfully apply our \textsc{HfL}\xspace boundaries to a number of high-level vision tasks. We show that by using \textsc{HfL}\xspace boundaries we improve the results of three existing state-of-the-art methods on the tasks of semantic boundary labeling, semantic segmentation and object proposal generation. Therefore, using \textsc{HfL}\xspace boundaries to boost the results in high level vision tasks can be viewed as a \textit{Low-for-High} scheme, where boundaries serve as low-level cues to aid high-level vision tasks. We present the summarized results for the boundary detection and the three mentioned high-level vision tasks in Table~\ref{summary}. Specifically, we compare our proposed method and an appropriate state-of-the-art method for that task. As the results indicate, we achieve better results in each of the tasks for each presented evaluation metric. We present more detailed results for each of these tasks in the later sections. In summary, our contributions are as follows. 
First, we show that using object-level features for boundary detection produces perceptually informative boundaries that outperform prior state-of-the-art boundary detection methods. Second, we demonstrate that we can use \textsc{HfL}\xspace boundaries to enhance the performance on the high-level vision tasks of semantic boundary labeling, semantic segmentation and object proposal generation. Finally, our method can detect boundaries in near-real time. Thus, we present a boundary detection system that is accurate, efficient, and applicable to high-level vision tasks. \section{Related Work} \begin{figure} \begin{center} \includegraphics[width=1\linewidth]{./paper_figures/architecture11.pdf} \end{center} \caption{An illustration of our architecture (best viewed in color). First we extract a set of candidate contour points. Then we upsample the image and feed it through $16$ convolutional layers pretrained for object classification. For each candidate point, we find its correspondence in each of the feature maps and perform feature interpolation. This yields a $5504$-dimensional feature vector for each candidate point.
We feed each of these vectors to two fully connected layers and store the predictions to produce a final boundary map.\vspace{-0.4cm}} \label{fig:arch} \end{figure} \captionsetup{labelformat=empty} \begin{figure*} \centering \myfigurefivecol{./paper_figures/conv_maps/49024.pdf} \myfigurefivecol{./paper_figures/conv_maps/49024_1.pdf} \myfigurefivecol{./paper_figures/conv_maps/49024_2.pdf} \myfigurefivecol{./paper_figures/conv_maps/49024_3.pdf} \myfigurefivecol{./paper_figures/conv_maps/49024_4.pdf} \myfigurefivecol{./paper_figures/conv_maps/157032.pdf} \myfigurefivecol{./paper_figures/conv_maps/157032_1.pdf} \myfigurefivecol{./paper_figures/conv_maps/157032_2.pdf} \myfigurefivecol{./paper_figures/conv_maps/157032_3.pdf} \myfigurefivecol{./paper_figures/conv_maps/157032_4.pdf} \captionsetup{labelformat=default} \setcounter{figure}{1} \caption{A visualization of selected convolutional feature maps from the VGG network (resized to the input image dimension). Because VGG was optimized for an object classification task, it produces high activation values on objects and their parts.} \label{conv_maps} \end{figure*} \captionsetup{labelformat=default} Most contour detection methods predict boundaries based purely on color, texture, or other low-level features. We can divide these methods into three broad categories: spectral methods, supervised discriminative methods, and deep-learning-based methods. Spectral methods formulate the contour detection problem as an eigenvalue problem. The solution to this problem is then used to reason about the boundaries. The most successful approaches in this genre are the MCG detector~\cite{cArbelaez14}, the gPb detector~\cite{Arbelaez:2011:CDH:1963053.1963088}, the PMI detector~\cite{crisp_boundaries}, and Normalized Cuts~\cite{Shi97normalizedcuts}.
Some of the notable discriminative boundary detection methods include sketch tokens (ST)~\cite{LimCVPR13SketchTokens}, structured edges (SE)~\cite{Dollar2015PAMI} and sparse code gradients (SCG)~\cite{ren_nips12}. While SCG uses supervised SVM learning~\cite{Burges98atutorial}, the other two methods rely on a random forest classifier and model boundary detection as a classification task. Recently there have been attempts to apply deep learning to the task of boundary detection. SCT~\cite{MYP:ACCV:2014} is a sparse coding approach that reconstructs an image using a learned dictionary and then detects boundaries. Both the $N^4$ fields~\cite{DBLP:journals/corr/GaninL14} and DeepNet~\cite{kivinen2014visual} approaches use Convolutional Neural Networks (CNNs) to predict edges. $N^4$ fields relies on dictionary learning and the use of the Nearest Neighbor algorithm within a CNN framework, while DeepNet uses a traditional CNN architecture to predict contours. The approach most similar to ours is DeepEdge~\cite{gberta_2015_CVPR}, which uses a multi-scale bifurcated network to perform contour detection using object-level features. However, we show that our method achieves better results even without the complicated multi-scale and bifurcated architecture of DeepEdge. Additionally, unlike DeepEdge, our system can run in near-real time. In comparison to prior approaches, we offer several contributions. First, we propose to use object-level information to predict boundaries. We argue that such an approach leads to semantic boundaries, which are more consistent with human reasoning. Second, we avoid feature engineering by learning boundaries from human-annotated data. Finally, we demonstrate excellent results for both low-level and high-level vision tasks. For the boundary detection task, our proposed \textsc{HfL}\xspace boundaries outperform all of the prior methods according to both F-score metrics.
Additionally, we show that because \textsc{HfL}\xspace boundaries are based on object-level features, they can be used to improve performance in the high-level vision tasks of semantic boundary labeling, semantic segmentation, and object proposal generation. \section{Boundary Detection} \label{boundary_detection} In this section, we describe our proposed architecture and the specific details on how we predict \textsc{HfL}\xspace boundaries using our method. The detailed illustration of our architecture is presented in Figure~\ref{fig:arch}. \subsection{Selection of Candidate Contour Points} We first extract a set of candidate contour points with a high recall. Due to its efficiency and high recall performance, we use the SE edge detector~\cite{Dollar2015PAMI}. In practice, we could eliminate this step and simply try to predict boundaries at every pixel. However, selecting a set of initial candidate contour points greatly reduces the computational cost. Since our goal is to build a boundary detector that is both accurate and efficient, we use these candidate points to speed up the computation of our method. \subsection{Object-Level Features} After selecting candidate contour points, we up-sample the original input image to a larger dimension (for example $1100 \times 1100$). The up-sampling is done to minimize the loss of information due to the input shrinkage caused by pooling at the different layers. Afterwards, we feed the up-sampled image through $16$ convolutional layers of the VGG net. We use the VGG net as our model because it has been trained to recognize a large number of object classes (the $1000$ categories of the ImageNet dataset~\cite{ILSVRCarxiv14}) and thus encodes object-level features that apply to many classes. To preserve specific location information, we utilize only the $16$ convolutional layers of the VGG net. We do not use the fully connected layers because they do not preserve spatial information, which is crucial for accurate boundary detection.
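To see why up-sampling helps, note that each stride-$2$ pooling stage in a VGG-style network halves the spatial resolution of the feature maps. A minimal sketch (the number of pooling stages and the image sizes below are illustrative assumptions, not the exact network configuration):

```python
def conv_map_size(input_size, num_pools, pool_stride=2):
    """Spatial side length of a feature map after `num_pools`
    stride-2 pooling stages (VGG-style halving)."""
    size = input_size
    for _ in range(num_pools):
        size //= pool_stride
    return size

# A small input loses most of its spatial resolution after pooling:
print(conv_map_size(400, 4))   # -> 25
# Up-sampling first (e.g., to 1100 x 1100) preserves much more detail,
# which is why the image is enlarged before the convolutional layers:
print(conv_map_size(1100, 4))  # -> 68
```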
We visualize some of the selected convolutional maps in Figure~\ref{conv_maps}. Note the high activation values around the various objects in the images, which confirms our hypothesis that the VGG net encodes object specific information in its convolutional feature maps. \subsection{Feature Interpolation} Similarly to~\cite{DBLP:journals/corr/SermanetEZMFL13, DBLP:journals/corr/HariharanAGM14a,long_shelhamer_fcn}, we perform feature interpolation in deep layers. After the up-sampled image passes through all $16$ convolutional layers, for each selected candidate contour point we find its corresponding point in the feature maps. Due to the dimension differences between the convolutional maps and the input image, these correspondences are not exact. Thus, we perform feature interpolation by finding the four nearest points and averaging their activation values. This is done in each of the $5504$ feature maps, resulting in a $5504$-dimensional vector for each candidate point. We note that the interpolation of convolutional feature maps is the crucial component that enables our system to predict the boundaries efficiently. Without feature interpolation, our method would need to independently process the candidate edge points by analyzing a small image patch around each point, as for example done in DeepEdge~\cite{gberta_2015_CVPR}, which feeds one patch at a time through a deep network. However, when the number of candidate points is large (e.g., DeepEdge considers about 15K points at each of 4 different scales), their patches overlap significantly and thus a large amount of computation is wasted by recalculating filter response values over the same pixels.
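The interpolation step can be sketched in pure NumPy as follows. This is a simplified stand-in for the exact procedure: the linear coordinate mapping between image and feature-map frames and the equal-weight averaging of the four nearest activations are assumptions made for brevity.

```python
import numpy as np

def interpolate_features(feature_maps, point, image_size):
    """Interpolate deep feature maps at one candidate contour point.

    feature_maps: list of 2D arrays (one per channel), possibly of
                  smaller spatial size than the input image.
    point:        (row, col) of the candidate point in image coordinates.
    image_size:   (H, W) of the up-sampled input image.

    Returns one value per map, obtained by averaging the four nearest
    activations around the (generally non-integer) projected location.
    """
    H, W = image_size
    feats = []
    for fmap in feature_maps:
        h, w = fmap.shape
        # Map the image coordinate into this map's coordinate frame.
        r = point[0] * (h - 1) / (H - 1)
        c = point[1] * (w - 1) / (W - 1)
        r0, c0 = int(np.floor(r)), int(np.floor(c))
        r1, c1 = min(r0 + 1, h - 1), min(c0 + 1, w - 1)
        # Average the four nearest activations.
        feats.append(0.25 * (fmap[r0, c0] + fmap[r0, c1] +
                             fmap[r1, c0] + fmap[r1, c1]))
    return np.array(feats)
```

Applied to all $5504$ maps, this yields the $5504$-dimensional descriptor of a candidate point with a single network pass.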
Instead, we can compute the features for all candidate points with a single pass through the network by performing deep convolution over the {\em entire} image (i.e., feeding the entire image rather than one patch at a time) and then by interpolating the convolutional feature maps at the location of each candidate edge point so as to produce its feature descriptor. Thanks to this speedup, our method has a runtime of $1.2$ seconds (using a K40 GPU), which is better than the runtimes of prior deep-learning based edge detection methods~\cite{Shen_2015_CVPR, DBLP:journals/corr/GaninL14,kivinen2014visual,gberta_2015_CVPR}. \subsection{Learning to Predict Boundaries} After performing feature interpolation, we feed the $5504$-dimensional feature vectors corresponding to each of the candidate contour points to two fully connected layers that are optimized for the human-agreement criterion. To be more precise, we define our prediction objective as the fraction of human annotators agreeing on the presence of the boundary at a particular pixel. Therefore, the learning objective aims at mimicking the judgment of the human labelers. Finally, to detect \textsc{HfL}\xspace boundaries, we accumulate the predictions from the fully connected layers for each of the candidate points and produce a boundary probability map as illustrated in Figure~\ref{fig:arch}. \subsection{Implementation Details} In this section, we describe the details behind the training procedure of our model. We use the Caffe library~\cite{jia2014caffe} to implement our network architecture. In the training stage, we freeze the weights in all of the convolutional layers. To learn the weights in the two fully connected layers, we train our model to optimize the least squares error of the regression criterion that we described in the previous subsection. To enforce regularization, we set a dropout rate of $0.5$ in the fully connected layers.
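The prediction stage described above can be sketched as follows. The weights here are placeholders rather than the trained model, and the ReLU hidden activation is an assumption for illustration; what the sketch shows is the flow from per-point feature vectors, through two fully connected layers, into an accumulated boundary probability map.

```python
import numpy as np

def predict_boundary_map(point_features, points, image_shape, W1, b1, W2, b2):
    """Feed each candidate point's feature vector through two fully
    connected layers and accumulate the per-point regression outputs
    into a boundary probability map."""
    bmap = np.zeros(image_shape)
    for x, (r, c) in zip(point_features, points):
        h = np.maximum(W1 @ x + b1, 0.0)    # hidden layer (ReLU assumed)
        y = W2 @ h + b2                     # scalar regression output
        bmap[r, c] = np.clip(y, 0.0, 1.0)   # fraction of annotators agreeing
    return bmap
```

Pixels with no candidate point keep a boundary probability of zero, matching the role of the candidate-point selection stage.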
Our training dataset includes $80,000$ points from the BSDS500 dataset~\cite{MartinFTM01}. As described in the previous subsection, the labels represent the fraction of human annotators agreeing on the boundary presence. We divide the label space into four quartiles, and select an equal number of samples for each quartile to balance the training dataset. In addition to the training dataset, we also sample a hold-out dataset of size $40,000$. We use this hold-out dataset for hard-positive mining~\cite{malisiewicz-iccv11} in order to reduce the number of false-negative predictions. For the first $25$ epochs we train the network on the original $80,000$ training samples. After the first $25$ epochs, we test the network on the hold-out dataset and detect false-negative predictions made by our network. We then augment the original $80,000$ training samples with the false negatives and the same number of randomly selected true negatives. For the remaining $25$ epochs, we train the network on this augmented dataset. \subsection{Boundary Detection Results} In this section, we present our results on the BSDS500 dataset~\cite{MartinFTM01}, which is the most established benchmark for boundary detection. The quality of the predicted boundaries is evaluated using three standard measures: fixed contour threshold (ODS), per-image best threshold (OIS), and average precision (AP). We compare our approach to the state-of-the-art based on two different sets of BSDS500 ground truth boundaries. First, we evaluate the accuracy by matching each of the predicted boundary pixels with the ground truth boundaries that were annotated by {\em any} of the human annotators. This set of ground truth boundaries is referred to as ``any''. We present the results for ``any'' ground truth boundaries in the lower half of Table~\ref{any_bsds}. As indicated by the results, \textsc{HfL}\xspace boundaries outperform all the prior methods according to both F-score measures.
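The quartile-based balancing of the training labels described above can be sketched as follows. The bin edges and the sampling-with-replacement choice are illustrative assumptions; the point is that each quartile of the human-agreement label space contributes equally many training samples.

```python
import numpy as np

def balance_by_quartile(labels, per_bin, seed=0):
    """Draw an equal number of sample indices from each quartile of the
    label space. `labels` holds, for each candidate point, the fraction
    of human annotators that marked it as a boundary."""
    labels = np.asarray(labels, dtype=float)
    rng = np.random.default_rng(seed)
    edges = [0.0, 0.25, 0.5, 0.75, 1.0]   # illustrative quartile edges
    chosen = []
    for i, (lo, hi) in enumerate(zip(edges[:-1], edges[1:])):
        if i < 3:
            mask = (labels >= lo) & (labels < hi)
        else:
            mask = (labels >= lo) & (labels <= hi)  # include 1.0 in last bin
        idx = np.flatnonzero(mask)
        chosen.extend(rng.choice(idx, size=per_bin, replace=True))
    return np.array(chosen)
```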
\begin{table} \begin{center} \begin{tabular}{ | c | c c c | c |} \hline {\em Consensus GT} & {\em ODS} & {\em OIS} & {\em AP} & {\em FPS} \\ \hline \hline SCG ~\cite{ren_nips12} & 0.6 & 0.64 & 0.56 & 1/280\\ \hline DeepNet~\cite{kivinen2014visual} & 0.61 & 0.64 & 0.61 & $1/5^{\ddagger}$\\ \hline PMI~\cite{crisp_boundaries} & 0.61 & \bf 0.68 & 0.56 & 1/900\\ \hline DeepEdge~\cite{gberta_2015_CVPR} & 0.62 & 0.64 & 0.64& $1/1000^{\ddagger}$\\ \hline $N^4$-fields~\cite{DBLP:journals/corr/GaninL14} & 0.64 & 0.67 & 0.64 & $1/6^{\ddagger}$\\ \hline \bf \textsc{HfL}\xspace & \bf 0.65 & \bf 0.68 & \bf 0.67 & $5/6^{\ddagger}$ \\ \hline \addlinespace[1ex] \hline {\em Any GT} & {\em ODS} & {\em OIS} & {\em AP} & {\em FPS} \\ \hline \hline SE~\cite{Dollar2015PAMI} & 0.75 & 0.77 & 0.80 & \bf 2.5\\ \hline MCG~\cite{cArbelaez14} & 0.75 & 0.78 & 0.76 & 1/24\\ \hline $N^4$-fields~\cite{DBLP:journals/corr/GaninL14} & 0.75 & 0.77 & 0.78 & $1/6^{\ddagger}$\\ \hline DeepEdge~\cite{gberta_2015_CVPR} & 0.75 & 0.77 & \bf 0.81 & $1/1000^{\ddagger}$\\ \hline MSC~\cite{Sironi_2014_CVPR} & 0.76 & 0.78 & 0.79 & -\\ \hline DeepContour~\cite{Shen_2015_CVPR} & 0.76 & 0.77 & 0.8 & $1/30^{\ddagger}$\\ \hline \bf \textsc{HfL}\xspace & \bf 0.77 & \bf 0.79 & 0.8 & $5/6^{\ddagger}$\\ \hline \end{tabular} \end{center}\vspace{-.2cm} \caption{Boundary detection results on BSDS500 benchmark. Upper half of the table illustrates the results for ``consensus'' ground-truth criterion while the lower half of the table depicts the results for ``any'' ground-truth criterion. In both cases, our method outperforms all prior methods according to both ODS (optimal dataset scale) and OIS (optimal image scale) metrics. 
We also report the run-time of our method ($\ddagger$ GPU time) in the FPS column (frames per second), which shows that our algorithm is faster than prior approaches based on deep learning~\cite{Shen_2015_CVPR, DBLP:journals/corr/GaninL14,kivinen2014visual,gberta_2015_CVPR}.} \label{any_bsds} \end{table} \begin{figure} \centering \myfigurethreecol{./paper_figures/bsds/15011.pdf} \myfigurethreecol{./paper_figures/bsds/15011_se_comp.pdf} \myfigurethreecol{./paper_figures/bsds/15011_my_comp.pdf} \myfigurethreecol{./paper_figures/bsds/108004.pdf} \myfigurethreecol{./paper_figures/bsds/108004_se_comp.pdf} \myfigurethreecol{./paper_figures/bsds/108004_my_comp.pdf} \captionsetup{labelformat=default} \setcounter{figure}{4} \caption{Qualitative results on the BSDS benchmark. The first column shows input images. The second column illustrates SE boundaries~\cite{Dollar2015PAMI}, while the third column depicts \textsc{HfL}\xspace boundaries. Notice that SE boundaries are predicted with low confidence if there is no significant change in color between the object and the background. Instead, because our model is defined in terms of object-level features, it can predict object boundaries with high confidence even if there is no significant color variation in the scene.\vspace{-0.2cm}} \label{qual_bsds} \end{figure} \begin{figure} \begin{center} \includegraphics[width=1\linewidth]{./paper_figures/feats/F6.pdf} \end{center} \caption{We train a linear regression model and visualize its weight magnitudes in order to understand which features are used most heavily in the boundary prediction (this linear regression is used only for visualization purposes and not for the accuracy analysis).
Note how heavily weighted features lie in the deepest layers of the network, i.e., the layers that are most closely associated with object information.\vspace{-0.4cm}} \label{fig:feats} \end{figure} Recently, there has been some criticism raised about the procedure for boundary detection evaluation on the BSDS500 dataset. One issue with the BSDS500 dataset involves the so-called ``orphan'' boundaries: the boundaries that are marked by only one or two human annotators. These ``orphan'' boundaries comprise around $30\%$ of the BSDS500 dataset, but most of them are considered uninformative. However, the standard evaluation benchmark rewards the methods that predict these boundaries. To resolve this issue, we also evaluate our \textsc{HfL}\xspace boundaries on the so-called ``consensus'' set of ground truth boundaries. These ``consensus'' boundaries involve only boundaries that are marked by {\em all} of the human annotators and hence are considered perceptually meaningful. In the upper half of Table~\ref{any_bsds}, we present the results achieved by our method on the ``consensus'' set of the ground truth boundaries. Our \textsc{HfL}\xspace boundaries outperform or tie all the prior methods in each of the three evaluation metrics, thus suggesting that \textsc{HfL}\xspace boundaries are similar to the boundaries that humans annotated. We also report the runtimes in Table~\ref{any_bsds} and note that our method runs faster than previous deep-learning based edge detection systems~\cite{Shen_2015_CVPR, DBLP:journals/corr/GaninL14,kivinen2014visual,gberta_2015_CVPR}. Our proposed model computes a highly nonlinear function of the $5504$-dimensional feature vector of each candidate point. Thus, it is difficult to assess which features are used most heavily by our edge predictor. However, we can gain a better insight by replacing the nonlinear function with a simple linear model.
In Figure~\ref{fig:feats}, we show the weight magnitudes of a simple linear regression model (we stress that this linear model is used only for feature visualization purposes). From this figure, we observe that many important features are located in the deepest layers of the VGG network. As shown in~\cite{DBLP:journals/corr/DonahueJVHZTD13}, these layers encode high-level object information, which confirms our hypothesis that high-level information is useful for boundary detection. Finally, we present some qualitative results achieved by our method in Figure~\ref{qual_bsds}. These examples illustrate the advantage that \textsc{HfL}\xspace boundaries provide over another state-of-the-art edge detection system, the SE system~\cite{Dollar2015PAMI}. Specifically, observe the parts of the image where there is a boundary that separates an object from the background but where the color change is small. Notice that because the SE boundary detection is based on low-level color and texture features, it captures these boundaries with very low confidence. In comparison, because our method relies on object-level features, it detects these boundaries with high confidence. \section{High-Level Vision Applications} In this section, we describe our proposed \textit{Low-for-High} pipeline: using low-level boundaries to aid a number of high-level vision tasks. We focus on the tasks of semantic boundary labeling, semantic segmentation, and object proposal generation. We show that using \textsc{HfL}\xspace boundaries improves the performance of state-of-the-art methods in each of these high-level vision tasks. \subsection{Semantic Boundary Labeling} \label{sbl} \setlength{\tabcolsep}{2pt} \begin{table*}[t] \footnotesize \begin{center} \begin{tabular}{ | c | c | c | c | c | c | c | c | c | c | c | c | c | c | c | c | c | c | c | c | c ?
c |} \hline Method (Metric) & aero & bike & bird & boat & bottle & bus & car & cat & chair & cow & table & dog & horse & mbike & person & plant & sheep & sofa & train & tv & mean\\ \hline\hline InvDet (MF) & 42.6 & 49.5 & 15.7 & 16.8 & 36.7 & 43.0 & 40.8 & 22.6 & 18.1 & 26.6 & 10.2 & 18.0 & 35.2 & 29.4 & 48.2 & 14.3 & 26.8 & 11.2 & 22.2 & 32.0 & 28.0\\ \hline \bf \textsc{HfL}\xspace-FC8 (MF) & 71.6 & 59.6 & 68.0 & 54.1 & 57.2 & 68.0 & 58.8 & 69.3 & 43.3 & 65.8 & 33.3 & 67.9 & 67.5 & 62.2 & 69.0 & 43.8 & 68.5 & 33.9 & 57.7 & 54.8 & 58.7\\ \hline \bf \textsc{HfL}\xspace-CRF (MF) & \bf 73.9 & \bf 61.4 & \bf 74.6 & \bf 57.2 & \bf 58.8 & \bf 70.4 & \bf 61.6 & \bf 71.9 & \bf 46.5 & \bf 72.3 & \bf 36.2 & \bf 71.1 & \bf 73.0 & \bf 68.1 & \bf 70.3 & \bf 44.4 & \bf 73.2 & \bf 42.6 & \bf 62.4 & \bf 60.1 & \bf 62.5\\ \Xhline{4\arrayrulewidth} InvDet (AP) & 38.4 & 29.6 & 9.6 & 9.9 & 24.2 & 33.6 & 31.3 & 17.3 & 10.7 & 16.4 & 3.7 & 12.1 & 28.5 & 20.4 & 45.7 & 7.6 & 16.1 & 5.7 & 14.6 & 22.7 & 19.9\\ \hline \bf \textsc{HfL}\xspace-FC8 (AP) & 66.0 & 50.7 & 58.9 & 40.6 & 47.1 & 62.9 & 51.0 & 59.0 & 25.6 & 54.6 & 15.3 & 57.8 & 57.3 & 55.9 & 62.2 & 27.5 & 55.6 & 18.0 & 50.1 & 40.6 & 47.8 \\ \hline \bf \textsc{HfL}\xspace-CRF (AP) & \bf 71.2 & \bf 55.2 & \bf 69.3 & \bf 45.7 & \bf 48.9 & \bf 71.1 & \bf 56.8 & \bf 65.7 & \bf 29.1 & \bf 65.9 & \bf 17.7 & \bf 64.5 & \bf 68.3 & \bf 64.7 & \bf 65.9 & \bf 29.1 & \bf 66.5 & \bf 25.7 & \bf 60.0 & \bf 49.8 & \bf 54.6\\ \hline \end{tabular} \end{center} \caption{Results of semantic boundary labeling on the SBD benchmark using the Max F-score (MF) and Average Precision (AP) metrics. Our method (\textsc{HfL}\xspace) outperforms Inverse Detectors~\cite{BharathICCV2011} for all $20$ categories according to both metrics. 
Note that using the CRF output to label the boundaries produces better results than using the outputs from the FC8 layer of the FCN.} \label{sbl_maxf} \end{table*} The task of semantic boundary labeling requires not only predicting the boundaries but also associating a specific object class with each boundary. This implies that, given our predicted boundaries, we also need to label them with object-class information. We approach this problem by adopting the ideas from the recent work on Fully Convolutional Networks (FCN)~\cite{long_shelhamer_fcn}. Given an input image, we concurrently feed it to our boundary-predicting network (described in Section~\ref{boundary_detection}), and also through the FCN that was pretrained for $20$ Pascal VOC classes and the background class. While our proposed network produces \textsc{HfL}\xspace boundaries, the FCN model predicts class probabilities for each of the pixels. We can then merge the two output maps as follows. For a given boundary point, we consider a $9\times 9$ grid around that point from each of the $21$ FCN object-class probability maps. We calculate the maximum value inside each grid, and then label the boundary at a given pixel with the object class that corresponds to the maximum probability across these $21$ maps. We apply this procedure to each of the boundary points in order to associate object-class labels with the boundaries. Note that we consider the grids around the boundary pixel because the output of the FCN has poor localization, and considering the grids rather than individual pixels leads to higher accuracy. We can also merge \textsc{HfL}\xspace boundaries with the state-of-the-art DeepLab-CRF segmentation~\cite{DBLP:journals/corr/ChenPKMY14} to obtain higher accuracy. We do this in a similar fashion as just described. First, around a given boundary point we extract a $9 \times 9$ grid from the DeepLab-CRF segmentation.
We then compute the mode value in the grid (excluding the background class), and use the object class corresponding to the mode value as a label for the given boundary point. We do this for each of the boundary points. By merging \textsc{HfL}\xspace boundaries and the output of FCN or DeepLab-CRF, we get semantic boundaries that are highly localized and also contain object-specific information. \subsubsection{Semantic Boundary Labeling Results} In this section, we present semantic boundary labeling results on the SBD dataset~\cite{BharathICCV2011}, which includes ground truth boundaries that are also labeled with one of $20$ Pascal VOC classes. The boundary detection accuracy for each class is evaluated using the maximum F-score (MF) and average precision (AP) measures. Labeling boundaries with the semantic object information is a novel and still relatively unexplored problem. As a result, we found only one other approach (Inverse Detectors) that tried to tackle this problem~\cite{BharathICCV2011}. The basic idea behind Inverse Detectors consists of several steps. First, generic boundaries in the image are detected. Then, a number of object proposal boxes are generated. These two sources of information are then used to construct the features. Finally, a separate classifier is used to label the boundaries with the object-specific information. Table~\ref{sbl_maxf} shows that our approach significantly outperforms Inverse Detectors according to both the maximum F-score and the average precision metrics for all twenty categories. As described in Section~\ref{sbl}, we evaluate two variants of our method. We denote by \textsc{HfL}\xspace-FC8 the variant in which we label \textsc{HfL}\xspace boundaries with the outputs from the last layer (FC8) of the pretrained FCN. We denote by \textsc{HfL}\xspace-CRF the result of labeling our boundaries with the output from the DeepLab-CRF~\cite{DBLP:journals/corr/ChenPKMY14}.
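The grid-based labeling step of the FC8 variant can be sketched as follows. The array layout `(num_classes, H, W)` and the helper name are our assumptions for illustration; `radius=4` yields the $9\times 9$ grid described above.

```python
import numpy as np

def label_boundary_points(points, class_prob_maps, radius=4):
    """Assign an object class to each boundary point: inspect a
    (2*radius+1)^2 grid around the point in each per-class probability
    map and pick the class whose map attains the largest maximum
    inside the grid. `class_prob_maps` has shape (num_classes, H, W)."""
    C, H, W = class_prob_maps.shape
    labels = []
    for r, c in points:
        r0, r1 = max(r - radius, 0), min(r + radius + 1, H)
        c0, c1 = max(c - radius, 0), min(c + radius + 1, W)
        grid_max = class_prob_maps[:, r0:r1, c0:c1].max(axis=(1, 2))
        labels.append(int(np.argmax(grid_max)))
    return labels
```

The CRF variant works analogously but takes the mode of the discrete segmentation labels in the grid (excluding the background class) instead of a maximum over probability maps.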
Of these two variants, the latter produces better results. This is expected since the CRF framework enforces spatial coherence in the semantic segments. In Figure~\ref{qual_sbl}, we present some of the qualitative results produced by our method. We note that even with multiple objects in the image, our method successfully recognizes and localizes boundaries of each of the classes. \captionsetup{labelformat=empty} \begin{figure} \centering \myfigurethreecol{./paper_figures/semantic_boundaries/2008_000195.pdf} \myfigurethreecol{./paper_figures/semantic_boundaries/2008_000195_4_comp.pdf} \myfigurethreecol{./paper_figures/semantic_boundaries/2008_000195_15_comp.pdf} \myfigurethreecol{./paper_figures/semantic_boundaries/2008_001231.pdf} \myfigurethreecol{./paper_figures/semantic_boundaries/2008_001231_2_comp.pdf} \myfigurethreecol{./paper_figures/semantic_boundaries/2008_001231_15_comp.pdf} \captionsetup{labelformat=default} \setcounter{figure}{6} \caption{A visualization of the predicted semantic boundary labels. Images in the first column are input examples. Columns two and three show semantic \textsc{HfL}\xspace boundaries of different object classes. Note that even with multiple objects appearing simultaneously, our method outputs precise semantic boundaries.\vspace{-0.2cm}} \label{qual_sbl} \end{figure} \subsection{Semantic Segmentation} For the semantic segmentation task, we propose to enhance the DeepLab-CRF~\cite{DBLP:journals/corr/ChenPKMY14} with our predicted \textsc{HfL}\xspace boundaries. DeepLab-CRF is a system composed of a Fully Convolutional Network (described in Section~\ref{sbl}) and a dense CRF applied on top of the FCN predictions. Specifically, in the CRF, the authors propose to use a Gaussian kernel and a bilateral term including position and color terms as the CRF features (see~\cite{DBLP:journals/corr/ChenPKMY14}).
While in most cases the proposed scheme works well, DeepLab-CRF sometimes produces segmentations that are not spatially coherent, particularly for images containing small object regions. We propose to address this issue by adding features based on our predicted \textsc{HfL}\xspace boundaries in the CRF framework. Note that we use predicted boundaries from Section~\ref{boundary_detection} and not the boundaries labeled with the object information that we obtained in Section~\ref{sbl}. We use the Normalized Cut~\cite{Shi97normalizedcuts} framework to generate our features. \captionsetup{labelformat=default} \setlength{\tabcolsep}{2pt} \begin{table*} \scriptsize \begin{center} \begin{tabular}{ | c | c | c | c | c | c | c | c | c | c | c | c | c | c | c | c | c | c | c | c | c | c ? c |} \hline Metric & Method (Dataset) & aero & bike & bird & boat & bottle & bus & car & cat & chair & cow & table & dog & horse & mbike & person & plant & sheep & sofa & train & tv & mean\\ \hline \multirow{4}{*}{PP-IOU} & DL-CRF (VOC)& \bf 78.6 & 41.1 & \bf 83.5 & \bf 75.3 & 72.9 & 83.1 & \bf 76.6 & \bf 80.8 & \bf 37.8 & \bf 72.1 & 66.5 & \bf 64.7 & 65.8 & \bf 75.7 & \bf 80.5 & \bf 34.4 & \bf 75.9 & \bf 47.4 & 86.6 & \bf 77.9 & \bf 68.9\\ & \bf DL-CRF+\textsc{HfL}\xspace (VOC) & 77.9 & \bf 41.2 & 83.1 & 74.4 & \bf 73.2 & \bf 85.5 & 76.1 & 80.6 & 35.7 & 71.0 & \bf 66.6 & 64.3 & \bf 65.9 & 75.2 & 80.2 & 32.8 & 75.2 & 47.0 & \bf 87.1 & \bf 77.9 & 68.5\\ \cline{2-23} & DL-CRF (SBD)& 74.2 & 68.0 & \bf 81.9 & 64.6 & \bf 71.8 & 86.3 & \bf 78.3 & \bf 84.3 & \bf 41.6 & \bf 78.0 & 49.9 & \bf 82.0 & \bf 78.5 & 77.1 & 80.1 & \bf 54.3 & \bf 75.6 & \bf 49.8 & \bf 79.5 & 70.1 & \bf 71.4\\ & \bf DL-CRF+\textsc{HfL}\xspace (SBD)& \bf 75.1 & \bf 69.2 & 81.6 & \bf 64.8 & 71.3 & \bf 86.4 & 78.1 & 84.1 & 41.2 & 77.8 & \bf 50.4 & 81.6 & 78.2 & \bf 78.5 & \bf 80.7 & 53.8 & 74.9 & 49.1 & \bf 79.5 & \bf 70.4 & \bf 71.4\\ \Xhline{4\arrayrulewidth} \multirow{4}{*}{PI-IOU} & DL-CRF (VOC)& 46.1 & \bf 28.0 & 48.5 & 
54.5 & 45.5 & 57.6 & 34.1 & \bf 47.3 & 19.5 & \bf 61.4 & \bf 41.6 & 42.5 & 34.4 & 61.8 & \bf 62.1 & \bf 22.1 & 50.5 & 41.0 & 61.2 & 31.9 & 44.6\\ & \bf DL-CRF+\textsc{HfL}\xspace (VOC) & \bf 47.5 & 27.6 & \bf 50.4 & \bf 63.5 & \bf 47.7 & \bf 57.9 & \bf 38.7 & 47.2 & \bf 21.1 & 57.3 & 41.2 & \bf 43.7 & \bf 36.0 & \bf 66.4 & 61.1 & 21.3 & \bf 53.9 & \bf 42.1 & \bf 70.9 & \bf 34.6 & \bf 46.5\\ \cline{2-23} & DL-CRF (SBD) & 59.4 & 36.5 & 58.0 & 38.6 & 32.0 & 58.1 & 44.7 & 59.6 & 25.8 & 51.8 & 28.1 & 59.0 & 46.9 & 50.3 & 61.8 & 22.2 & 45.9 & 33.4 & 62.1 & 41.0 & 45.8\\ & \bf DL-CRF+\textsc{HfL}\xspace (SBD) & \bf 63.4 & \bf 42.5 & \bf 58.4 & \bf 41.3 & \bf 32.5 & \bf 61.2 & \bf 45.7 & \bf 61.4 & \bf 28.4 & \bf 55.5 & \bf 31.5 & \bf 61.4 & \bf 51.8 & \bf 54.6 & \bf 62.1 & \bf 24.9 & \bf 52.6 & \bf 34.2 & \bf 67.1 & \bf 45.1 & \bf 48.8\\ \hline \end{tabular} \end{center} \caption{Semantic segmentation results on the SBD and VOC 2007 datasets. We measure the results according to PP-IOU (per pixel) and PI-IOU (per image) evaluation metrics. We denote the original DeepLab-CRF system and our proposed modification as DL-CRF and DL-CRF+\textsc{HfL}\xspace, respectively. According to the PP-IOU metric, our proposed features (DL-CRF+\textsc{HfL}\xspace) yield almost equivalent results as the original DeepLab-CRF system. However, based on PI-IOU metric, our proposed features improve the mean accuracy by $3\%$ and $1.9\%$ on SBD and VOC 2007 datasets respectively.} \label{pp_iou} \end{table*} First, we construct a pixel-wise affinity matrix $\bf W$ using our \textsc{HfL}\xspace boundaries. 
We measure the similarity between two pixels as: \begin{equation} W_{ij}=\exp\left(-\max_{p \in \overline{ij}}\frac{M(p)^2}{\sigma^2}\right) \nonumber \end{equation} where $W_{ij}$ represents the similarity between pixels $i$ and $j$, $p$ denotes a point along the line segment $\overline{ij}$ connecting pixels $i$ and $j$, $M(p)$ denotes the boundary magnitude at point $p$, and $\sigma$ denotes the smoothness parameter, which is usually set to $14\%$ of the maximum boundary value in the image. The intuitive idea is that two pixels are similar (i.e. $W_{ij}=1$) if there is no boundary crossing the line connecting these two pixels (i.e. $M(p)=0\quad \forall p \in \overline{ij}$) or if the boundary strength is low. We note that it is not necessary to build a full affinity matrix $\mathbf{W}$. We build a sparse affinity matrix connecting every pair of pixels $i$ and $j$ that have distance $5$ or less from each other. After building a boundary-based affinity matrix $\mathbf{W}$ we set $D_{ii}=\sum_{j \neq i} W_{ij}$ and compute the eigenvectors $\mathbf{v}$ of the generalized eigenvalue system: $$(\mathbf{D}-\mathbf{W})\mathbf{v}=\lambda \mathbf{D}\mathbf{v}$$ We then resize the eigenvectors $\mathbf{v}$ to the original image dimensions, and use them as additional features in the CRF part of the DeepLab-CRF system. In our experiments, we use the $16$ eigenvectors corresponding to the smallest eigenvalues, which results in $16$ extra feature channels. Note that the eigenvectors contain soft segmentation information. Because \textsc{HfL}\xspace boundaries predict object-level contours with high confidence, the eigenvectors often capture regions corresponding to objects. We visualize a few selected eigenvectors in Figure~\ref{eigv}. In the experimental section, we demonstrate that our proposed features make the output produced by DeepLab-CRF more spatially coherent and improve the segmentation accuracy according to one of the metrics.
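The construction above can be sketched in pure NumPy. Substituting $\mathbf{u}=\mathbf{D}^{1/2}\mathbf{v}$ turns the generalized system into an ordinary symmetric eigenproblem for $\mathbf{D}^{-1/2}(\mathbf{D}-\mathbf{W})\mathbf{D}^{-1/2}$. Two simplifications are assumed for brevity: the maximum of $M$ over the segment $\overline{ij}$ is approximated by the maximum of the two endpoint magnitudes, the pixel neighborhood is a Chebyshev ball, and a small dense matrix stands in for the sparse one.

```python
import numpy as np

def ncut_eigenvectors(M, sigma=None, radius=5, k=2):
    """Build a boundary-based affinity matrix W over nearby pixel pairs,
    set D_ii = sum_{j != i} W_ij (plus the diagonal term), and solve
    (D - W) v = lambda * D v for the k eigenvectors with the smallest
    eigenvalues. `M` is the boundary magnitude map."""
    H, Wd = M.shape
    n = H * Wd
    if sigma is None:
        sigma = 0.14 * M.max() + 1e-12      # 14% of the max boundary value
    coords = [(r, c) for r in range(H) for c in range(Wd)]
    W = np.zeros((n, n))
    for i, (r1, c1) in enumerate(coords):
        for j, (r2, c2) in enumerate(coords):
            if abs(r1 - r2) <= radius and abs(c1 - c2) <= radius:
                # Endpoint approximation of max_{p in segment} M(p).
                m = max(M[r1, c1], M[r2, c2])
                W[i, j] = np.exp(-(m / sigma) ** 2)
    d = W.sum(axis=1)
    D = np.diag(d)
    Dm12 = np.diag(1.0 / np.sqrt(d))        # D^{-1/2}
    vals, U = np.linalg.eigh(Dm12 @ (D - W) @ Dm12)
    V = Dm12 @ U[:, :k]                     # back to v = D^{-1/2} u
    return V.reshape(H, Wd, k)              # k soft-segmentation channels
```

Each of the $k$ channels is then resized to the image dimensions and appended as an extra feature channel in the CRF.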
We also note that our proposed features are applicable to any generic method that incorporates a CRF. For instance, even if DeepLab-CRF used an improved DeepLab network architecture, our features would still be beneficial because they contribute directly to the CRF part and not the DeepLab network part of the system. \subsubsection{Semantic Segmentation Results} \captionsetup{labelformat=empty} \begin{figure} \centering \myfigurethreecol{./paper_figures/eigv/2_0.pdf} \myfigurethreecol{./paper_figures/eigv/2_1.pdf} \myfigurethreecol{./paper_figures/eigv/2_2.pdf} \myfigurethreecol{./paper_figures/eigv/3_0.pdf} \myfigurethreecol{./paper_figures/eigv/3_1.pdf} \myfigurethreecol{./paper_figures/eigv/3_2.pdf} \captionsetup{labelformat=default} \setcounter{figure}{7} \caption{In this figure, the first column depicts an input image while the second and third columns illustrate two selected eigenvectors for that image. The eigenvectors contain soft segmentation information. Because \textsc{HfL}\xspace boundaries capture object-level boundaries, the resulting eigenvectors primarily segment regions corresponding to the objects.\vspace{-0.3cm}} \label{eigv} \end{figure} \captionsetup{labelformat=default} In this section, we present semantic segmentation results on the SBD~\cite{BharathICCV2011} and Pascal VOC 2007~\cite{pascal-voc-2007} datasets, which both provide ground truth segmentations for $20$ Pascal VOC classes. We evaluate the results in terms of two metrics. The first metric measures the accuracy in terms of pixel intersection-over-union averaged per pixel (PP-IOU) across the 20 classes. According to this metric, the accuracy is computed on a per-pixel basis. As a result, the images that contain large object regions are given more importance. We observe that while DeepLab-CRF works well on the images containing large object regions, it produces spatially disjoint outputs for images with smaller object regions (see Figure~\ref{qual_ss}).
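The difference between the two averaging schemes (pooling all pixels of the dataset versus scoring each image separately) can be made concrete with a small sketch; the helper names and toy labels below are ours:

```python
import numpy as np

def class_iou(pred, gt, c):
    """Intersection and union pixel counts for class c on one image."""
    p, g = (pred == c), (gt == c)
    return np.logical_and(p, g).sum(), np.logical_or(p, g).sum()

def pp_iou(preds, gts, classes):
    """Per-pixel IOU: pool intersections and unions over the whole dataset,
    then average over classes.  Large object regions dominate the score."""
    scores = []
    for c in classes:
        inter = sum(class_iou(p, g, c)[0] for p, g in zip(preds, gts))
        union = sum(class_iou(p, g, c)[1] for p, g in zip(preds, gts))
        if union > 0:
            scores.append(inter / union)
    return np.mean(scores)

def pi_iou(preds, gts, classes):
    """Per-image IOU: score each image separately, then average.
    Every image counts equally, so small objects matter."""
    scores = []
    for c in classes:
        per_image = [i / u for p, g in zip(preds, gts)
                     for i, u in [class_iou(p, g, c)] if u > 0]
        if per_image:
            scores.append(np.mean(per_image))
    return np.mean(scores)
```

A perfectly segmented large object plus a missed small object yields a high per-pixel score but only a mediocre per-image score, which is exactly the discrepancy discussed here.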
This issue is often overlooked because, according to the PP-IOU metric, the images with large object regions are given more importance and thus contribute more to the accuracy. However, certain applications may require accurate segmentation of small objects. Therefore, in addition to PP-IOU, we also consider the PI-IOU metric (pixel intersection-over-union averaged per image across the 20 classes), which gives equal weight to each of the images. For both of the metrics we compare the semantic segmentation results of a pure DeepLab-CRF~\cite{DBLP:journals/corr/ChenPKMY14} and also a modification of DeepLab-CRF with our proposed features added to the CRF framework. We present the results for both of the metrics in Table~\ref{pp_iou}. Based on these results, we observe that according to the first metric (PP-IOU), our proposed features yield results nearly equivalent to those of the original DeepLab-CRF system. However, according to the second metric (PI-IOU) our features yield an average improvement of $3\%$ and $1.9\%$ on the SBD and VOC 2007 datasets, respectively. We also visualize the qualitative results produced by both approaches in Figure~\ref{qual_ss}. Notice how our proposed features make the segmentations look smoother relative to the segmentations produced by the original DeepLab-CRF system. Once again, we want to stress that our \textsc{HfL}\xspace features are applicable to any method that uses the CRF. Therefore, based on the results presented in this section, we believe that our proposed features could be beneficial in a wide array of problems that involve the use of the CRF framework. \captionsetup{labelformat=default} \subsection{Object Proposals} \label{obj_prop_gen} Finally, we show that our method produces object-level boundaries that can be successfully exploited in an object proposal scheme.
Specifically we adopt the EdgeBoxes approach~\cite{ZitnickDollarECCV14edgeBoxes}, which can be applied to any generic boundaries to produce a list of object proposal boxes. The original EdgeBoxes method uses SE boundaries to generate the boxes. However, SE boundaries are predicted using low-level color and texture features, rather than object-level features. Thus, here we validate the hypothesis that the EdgeBoxes proposals can be improved by replacing the SE boundaries with our \textsc{HfL}\xspace boundaries. \captionsetup{labelformat=empty} \begin{figure} \centering \myfigurethreecol{./paper_figures/semantic_seg/2008_000043_dc.pdf} \myfigurethreecol{./paper_figures/semantic_seg/2008_000043_my.pdf} \myfigurethreecol{./paper_figures/semantic_seg/2008_000043_gt.pdf} \myfigurethreecol{./paper_figures/semantic_seg/2008_001225_dc.pdf} \myfigurethreecol{./paper_figures/semantic_seg/2008_001225_my.pdf} \myfigurethreecol{./paper_figures/semantic_seg/2008_001225_gt.pdf} \captionsetup{labelformat=default} \setcounter{figure}{8} \caption{An illustration of the more challenging semantic segmentation examples. The first column depicts the predictions achieved by DeepLab-CRF, while the second column illustrates the results after adding our proposed features to the CRF framework. The last column represents ground truth segmentations. Notice how our proposed features render the predicted semantic segments more spatially coherent and overall more accurate.} \label{qual_ss} \end{figure} \subsubsection{Object Proposal Results} In this section, we present object proposal results on the Pascal VOC 2012 dataset~\cite{pascal-voc-2012}. We evaluate the quality of bounding-box proposals according to three metrics: area under the curve (AUC), the number of proposals needed to reach recall of $75\%$, and the maximum recall over $5000$ object bounding-boxes. 
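The three ranking metrics just listed can be sketched as follows; the box format `(x1, y1, x2, y2)` and the function names are our own conventions, and a real evaluation would follow the EdgeBoxes protocol exactly:

```python
def box_iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter) if inter else 0.0

def recall_at_k(proposals, gt_boxes, k, iou_thr):
    """Fraction of ground-truth boxes covered by one of the top-k ranked
    proposals at the given IOU threshold."""
    top = proposals[:k]
    hit = sum(any(box_iou(p, g) >= iou_thr for p in top) for g in gt_boxes)
    return hit / len(gt_boxes)

def proposals_needed(proposals, gt_boxes, target=0.75, iou_thr=0.7):
    """Smallest number of proposals reaching the target recall
    (None if it is never reached, cf. the 'inf' entries in the table)."""
    for k in range(1, len(proposals) + 1):
        if recall_at_k(proposals, gt_boxes, k, iou_thr) >= target:
            return k
    return None
```

The area-under-the-curve number summarizes `recall_at_k` over the whole proposal budget; the maximum recall is simply `recall_at_k` at the full budget.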
Additionally, we compute the accuracy for each of the metrics for three different intersection over union (IOU) values: $0.65, 0.7$, and $0.75$. We present these results in Table~\ref{bb_results}. As described in Section~\ref{obj_prop_gen}, we use EdgeBoxes~\cite{ZitnickDollarECCV14edgeBoxes}, a package that uses generic boundaries, to generate object proposals. We compare the quality of the generated object proposals when using SE boundaries and \textsc{HfL}\xspace boundaries. We demonstrate that for each IOU value and for each of the three evaluation metrics, \textsc{HfL}\xspace boundaries produce better or equivalent results. This confirms our hypothesis that \textsc{HfL}\xspace boundaries can be used effectively for high-level vision tasks such as generating object proposals. \captionsetup{labelformat=default} \begin{table} \scriptsize \begin{center} \begin{tabular}{ l | SSS | SSS | SSS } \toprule \multirow{2}{*}{Method} & \multicolumn{3}{c |}{IoU 0.65} & \multicolumn{3}{c |}{IoU 0.7} & \multicolumn{3}{c }{IoU 0.75} \\ & AUC & \text{N\text{@}75\%} & \text{Recall} & AUC & \text{N\text{@}75\%} & \text{Recall} & AUC & \text{N\text{@}75\%} & \text{Recall} \\ \midrule SE & 0.52 & \hspace{0.25cm}413 & \hspace{0.1cm}0.93 & 0.47 & \hspace{0.2cm}658 & \hspace{-0.04cm}0.88 & \hspace{0.1cm}\bf 0.41 & \hspace{0.3cm}\text{inf} & 0.75 \\ \bf \textsc{HfL}\xspace & \hspace{0.17cm}\bf 0.53 & \hspace{0.25cm}\bf 365 & \hspace{0.1cm}\bf 0.95 & \hspace{0.16cm}\bf 0.48 & \hspace{0.2cm}\bf 583 & \hspace{0.05cm}\bf 0.9 & \hspace{0.1cm}\bf 0.41 & \hspace{0.2cm}\bf 2685 & \hspace{0.2cm}\bf 0.77 \\ \bottomrule \end{tabular} \end{center} \caption{Comparison of object proposal results. We compare the quality of object proposals using Structured Edges~\cite{Dollar2015PAMI} and \textsc{HfL}\xspace boundaries. 
We evaluate the performance for three different IOU values and demonstrate that using \textsc{HfL}\xspace boundaries produces better results for each evaluation metric and for each IOU value.} \label{bb_results} \end{table} \section{Conclusions} In this work, we presented an efficient architecture that uses object-level information to predict semantically meaningful boundaries. Most prior edge detection methods rely exclusively on low-level features, such as color or texture, to detect the boundaries. However, perception studies suggest that humans employ object-level reasoning when deciding whether a given pixel is a boundary~\cite{psych,sanguinetti2013ground,KourtziKanwisher01}. Thus, we propose a system that focuses on the semantic object-level cues rather than low-level image information to detect the boundaries. For this reason we refer to our boundary detection scheme as a \textit{High-for-Low} approach, where high-level object features inform the low-level boundary detection process. In this paper we demonstrated that our proposed method produces boundaries that accurately separate objects and the background in the image and also achieve a higher F-score than any prior work. Additionally, we showed that, because \textsc{HfL}\xspace boundaries are based on object-level features, they can be employed to aid a number of high-level vision tasks in a \textit{Low-for-High} fashion. We use our boundaries to boost the accuracy of state-of-the-art methods on the high-level vision tasks of semantic boundary labeling, semantic segmentation, and object proposal generation. We show that using \textsc{HfL}\xspace boundaries leads to better results in each of these tasks. To conclude, our boundary detection method is accurate, efficient, applicable to a variety of datasets, and also useful for multiple high-level vision tasks. We plan to release the source code for \textsc{HfL}\xspace upon the publication of the paper.
\section{Acknowledgements} We thank Mohammad Haris Baig for the suggestions and help with the software. This research was funded in part by NSF award CNS-1205521. {\small \bibliographystyle{ieee}
\section{Introduction} Tilings form a popular basis for many mathematical games, including games for kids. In science, they are popular tools in rather different areas of research: in chemistry (to describe quasicrystalline structures, e.g., \cite{levitov}), in pure logic (e.g., deciding classes of first-order predicates defined by their syntax, see \cite{gurevitch}), and in computational complexity (as a basic model of complexity, \cite{boas}). The first famous result about tilings is the so-called domino problem: Berger proved that, given a tile set, we cannot decide algorithmically whether it can tile the plane, \cite{berger}. Within the proof, Berger constructed the first aperiodic tile set --- a tile set that can tile the plane, but only non-periodically. It was the first tile set that allows only tilings of the plane with a rather complex structure. Thus, rather simple local rules can imply a quite nontrivial global structure of a tiling. Since Berger's paper, quite a lot of different algorithmic and combinatorial properties of aperiodic tilings have been investigated. It was proven that a tile set that accepts only aperiodic tilings must accept uncountably many of them, \cite{bruno}. Many researchers tried to construct possibly simpler aperiodic tile sets (e.g., \cite{robinson}, \cite{gs}, \cite{kari}, \cite{ollinger}, \cite{drs}). The idea of ``simplicity'' was interpreted in several different ways: the number of tiles, algorithmic simplicity of the construction, etc. Another avenue of research was constructing tile sets that guarantee not only aperiodicity but also more sophisticated properties of tilings: non-recursivity, maximal algorithmic complexity (of each tiling), robustness and fault tolerance of tilings, and their combinations, \cite{nonrecursive1}, \cite{nonrecursive2}, \cite{dls}, \cite{drs}. The fundamental question ``\emph{How complex can a tiling be?}'' can also be understood in terms of Turing degrees of unsolvability.
Some partial answers to this question are known. First of all, we remark that for each tile set, the set of valid tilings is effectively closed (i.e., belongs to the class $\mathrm\Pi_1^0$). In \cite{dls} the property of cone-avoidance was proven: for each tile set $\tau$ and for every undecidable set $A$ there exists a $\tau$-tiling $T$ such that $A$ is not Turing-reducible to $T$. A fairly complete study of the Turing degrees of tilings was given in \cite{pascal1} and \cite{pascal2}. Not surprisingly, the constructions that guarantee some nontrivial combinatorial properties or involve simulation of a Turing machine require very different technical features. So it is rather difficult to combine properties of a different nature in one and the same tiling. In this paper we try to do some kind of aggregation; we combine the combinatorial property of quasiperiodicity with complexity issues. We prove that all upper cones of Turing degrees above any $\mathrm \Pi_1^0$ class can be achieved by a tile set that produces only quasiperiodic tilings. This rather complex theorem has a more concrete consequence: we build a tile set that produces only quasiperiodic tilings, and none of these tilings is recursive. Let us be more precise now. In this paper \emph{Wang tiles} are unit squares with colored sides. A \emph{tile set} is a finite family of tiles. For a given tile set the domino problem is to decide whether the entire plane can be tiled with these tiles. Here we assume, of course, that we are given infinitely many copies of each tile (tiles are prototypes); in other words, we are allowed to place translated copies of the same tile into different sites of the plane (rotations are not allowed). In a correct tiling the tiles in neighboring cells must match (sides in contact must have the same color). If a tile set $\tau$ tiles the plane, we call these tilings $\tau$-tilings.
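This matching condition can be checked mechanically on finite patches. A minimal sketch in Python, where the (left, bottom, right, top) tuple encoding of a Wang tile and the function name are our own conventions:

```python
def is_valid_patch(patch):
    """Wang matching rule on a finite patch (a dict from (x, y) cells to
    tiles): the right color of the tile at (x, y) must equal the left color
    of the tile at (x + 1, y), and its top color must equal the bottom
    color of the tile at (x, y + 1)."""
    LEFT, BOTTOM, RIGHT, TOP = range(4)
    for (x, y), t in patch.items():
        east = patch.get((x + 1, y))
        if east is not None and t[RIGHT] != east[LEFT]:
            return False
        north = patch.get((x, y + 1))
        if north is not None and t[TOP] != north[BOTTOM]:
            return False
    return True
```

Since rotations are not allowed, the tuples are never permuted; only translated copies of the same tile are placed in different cells.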
More formally, a $\tau$-tiling can be defined as a mapping $F\colon\mathbb{Z}^2\to \tau$, where for each pair of neighboring cells $x,y\in \mathbb{Z}^2$ the colors of the tiles $F(x)$ and $F(y)$ match each other on their neighboring sides. A tiling is called \emph{periodic} if some nontrivial shift transforms it into itself. A tiling $F$ is called \emph{quasiperiodic} (or \emph{uniformly recurrent}) if every pattern that appears in this tiling appears in every sufficiently large square of~$F$. The domino problem (existence of a tiling with a given tile set) is algorithmically undecidable, \cite{berger}. An interesting and nontrivial fact (which follows from Berger's theorem) is that there exist tile sets that allow only aperiodic tilings of the plane. The main result of this article is the following theorem that claims that some tile sets enforce at once two nontrivial properties of a tiling: quasiperiodicity and non-computability. \begin{theorem}\label{thm1} There exists a tile set \textup(a set of Wang tiles\textup) $\tau$ such that (i) there exist $\tau$-tilings of the plane, (ii) all $\tau$-tilings are quasiperiodic, (iii) all $\tau$-tilings are non-computable. \end{theorem} The tile set from Theorem~\ref{thm1} is \emph{not minimal} (we cannot claim that all $\tau$-tilings contain the same finite pattern). In fact, minimality cannot be combined with non-computability: each minimal tile set allows at least one computable tiling, see \cite{ballier-jeandel}. On the other hand, minimality can be combined with aperiodicity, see Theorem~\ref{thm2} below. \begin{theorem}\label{thm1-bis} For every effectively closed set $\cal A$ there exists a tile set $\tau$ such that (i) all $\tau$-tilings are quasiperiodic, (ii) the Turing degrees of all $\tau$-tilings make up exactly the upper cone of ${\cal A}$ (i.e., the class of all Turing degrees $d$ such that $d\ge_T \omega $ for at least one $\omega\in {\cal A}$).
\end{theorem} For every tile set $\tau$, the set of $\tau$-tilings is always effectively closed. Moreover, if all $\tau$-tilings are strongly quasiperiodic, then the class of Turing degrees of all $\tau$-tilings is known to be upward closed, see \cite{pascal2}. Thus, Theorem~\ref{thm1-bis} gives a precise characterisation of the Turing spectra of quasiperiodic tilings: they are exactly the upward closed sets in $\mathrm \Pi_1^0$. Notice that Theorem~\ref{thm1-bis} does not imply the result of \cite{pascal3} (a construction of a minimal subshift of finite type with a nontrivial Turing spectrum that consists of uncountably many cones with disjoint bases). The reason is again that the tile sets constructed in Theorem~\ref{thm1-bis} are not minimal. We prove Theorem~\ref{thm1} and Theorem~\ref{thm1-bis} using the technique of fixed-point tilings from~\cite{drs}, with some suitable extensions. Though conceptually this technique is not very difficult, a very formal explanation would involve many (sometimes excessive) technical details. In order to meet the space limitations of the conference proceedings and also to make the argument more accessible, we present it in a less formalised way, starting with a proof of the simpler Theorem~\ref{thm2} below. Being somewhat sketchy, we nevertheless do not skip any important part of the construction, and we emphasise the parallels and differences with the previously known construction of fixed-point tilings in~\cite{drs}. The rest of the paper is organised as follows. First we remind the reader of the core ideas of the fixed-point tiling from \cite{drs} and explain how this technique implies aperiodicity of tilings. Then we upgrade the construction and build a tile set that combines the properties of aperiodicity and quasiperiodicity. After that we prove the main results of the paper. \section{Self-simulating tilings (reminder)} Our proof is based on the \emph{fixed point} construction from \cite{drs}.
The main idea of this argument is that we can enforce in a tiling a kind of \emph{self-similar} structure. In what follows we recall the principal ingredients of this construction (here we follow the notation from \cite{drs}). The reader familiar with the technique used in \cite{drs} can skip this section and go directly to Section~3. Let $\tau$ be a tile set and $N>1$ be an integer. We call an $N \times N$ square correctly tiled by matching tiles from $\tau$ a \emph{macro-tile}. Every side of a $\tau$-macro-tile contains a sequence of $N$ colors (of tiles from $\tau$); we refer to this sequence as a \emph{macro-color}. Further, let $\rho$ be some set of $\tau$-macro-tiles (of size $N\times N$). We say that $\tau$ \emph{implements} $\rho$ if (i) some $\tau$-tilings exist, and (ii) for every $\tau$-tiling there exists a unique lattice of vertical and horizontal lines that cuts this tiling into $N\times N$ macro-tiles from $\rho$. (We do not require that all macro-tiles from $\rho$ appear in every $\tau$-tiling.) The value of $N$ is called the \emph{zoom factor} of this implementation. A tile set $\tau$ is called \emph{self-similar} if it implements some set $\rho$ of $\tau$-macro-tiles with some zoom factor $N>1$ and $\rho$ is isomorphic to $\tau$. This means that there exists a one-to-one correspondence between $\tau$ and $\rho$ such that the matching pairs of $\tau$-tiles correspond exactly to the matching pairs of $\rho$-macro-tiles. By definition, for a self-similar tile set $\tau$ each tiling can be uniquely split into $N\times N$ macro-tiles (the set of all macro-tiles is isomorphic to the initial tile set $\tau$); further, the grid of macro-tiles can be grouped into blocks of size $N^2\times N^2$, where each block is a macro-tile of rank $2$ (again, the set of all macro-tiles of rank $2$ is isomorphic to the initial tile set $\tau$), etc. It is not hard to deduce from this observation the following statement.
\begin{proposition}[folklore] \label{thm-folklore} A self-similar tile set $\tau$ has only aperiodic tilings. \end{proposition} The proof is based on a simple observation: every period (if it exists) should be a multiple of $N$, since the lattice of vertical and horizontal lines that cuts this tiling into $N\times N$ macro-tiles must be \emph{unique}. Similarly, a period must be a multiple of $N^2$ (to respect the uniquely defined grid of macro-macro-tiles), a multiple of $N^3$, etc. It follows that a period must be greater than any integer; see details in \cite{drs}. Thus, if we want to construct an aperiodic tile set, then it is enough to present an instance of a self-similar tile set. Below we discuss a very general construction of self-similar tile sets. \subsection{Implementing some given tile set with a large enough zoom factor} Assume that we have a tile set $\rho$ where each color is a $k$-bit string (i.e., the set of colors $C \subset \{0,1\}^k$) and the set of tiles $\rho \subset C^4$ is presented by a predicate $P(c_1,c_2,c_3,c_4)$ (the predicate is true if and only if the quadruple $(c_1,c_2,c_3,c_4)$ corresponds to a tile from $\rho$). Assume that we have some Turing machine $\cal M$ that computes $P$. Let us show how to implement $\rho$ using some other tile set $\tau$, with a large enough zoom factor $N$. We will build a tile set $\tau$ where each tile ``knows'' its coordinates modulo $N$. This information is included in the tiles' colors. More precisely, for a tile that is supposed to have coordinates $(i,j)$ modulo $N$, the colors on the left and on the bottom sides should involve $(i,j)$, the color on the right side should involve $(i+1\mod N, j)$, and the color on the top side, respectively, involves $(i, j+1 \mod N)$, see Fig.~1.
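This coordinate bookkeeping can be sketched as a small tile-set generator; tiles are again represented as (left, bottom, right, top) tuples whose side colors are coordinate pairs, an encoding of our own:

```python
def skeleton_tiles(N):
    """One tile per position (i, j) modulo N: the left and bottom sides
    carry (i, j), the right side carries (i + 1 mod N, j), and the top
    side carries (i, j + 1 mod N), so matching sides force a consistent
    coordinate grid."""
    tiles = []
    for i in range(N):
        for j in range(N):
            tiles.append(((i, j), (i, j), ((i + 1) % N, j), (i, (j + 1) % N)))
    return tiles
```

In any valid tiling by these tiles, horizontal (resp. vertical) matching forces the first (resp. second) coordinate to increase by $1$ modulo $N$ from cell to cell, so the tiling splits uniquely into $N\times N$ blocks whose bottom-left tiles carry $(0,0)$.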
\begin{figure} \center \begin{minipage}[b]{0.45\linewidth} \center \includegraphics[scale=0.5]{pic1.pdf} \label{pic1} \caption{} \end{minipage} \begin{minipage}[b]{0.40\linewidth} \center \includegraphics[scale=0.75]{pic2.pdf} \label{pic2} \caption{} \end{minipage} \end{figure} This means that every $\tau$-tiling can be uniquely split into blocks (macro-tiles) of size $N\times N$, where the coordinates of cells range from $(0,0)$ in the bottom-left corner to $(N-1,N-1)$ in the top-right corner, Fig.~2. So, intuitively, each tile ``knows'' its position in the corresponding macro-tile. In addition to the coordinates, each tile in $\tau$ should have some supplementary information encoded in the colors on its sides. We refer to this additional information as the \emph{shade} of the color. On the border of a macro-tile (where one of the coordinates is zero) only two additional shades (say, $0$ and $1$) are allowed. Thus, for each macro-tile of size $N\times N$ the corresponding macro-colors represent a string of $N$ zeros and ones. We will assume that $k\ll N$. We allocate $k$ bits in the middle of a macro-tile's sides and make them represent colors from $C$; all other bits on the sides of a macro-tile are zeros. Now we introduce additional restrictions on tiles in $\tau$ that will guarantee the required property: the macro-colors on the macro-tiles satisfy the relation $P$. To achieve this, we ensure that bits from the macro-tile side are transferred to the central part of the macro-tile, and the central part of a macro-tile is used to simulate a computation of the predicate $P$. We fix which cells in a macro-tile are ``wires'' (we may assume that wires do not cross each other) and then require that these tiles carry the same (transferred) bit on two sides. The central part of a macro-tile (of size, say $m\times m$) should represent a time-space diagram of $\cal M$'s computation (the tape is horizontal, time goes up). This is done in a standard way.
We require that the computation terminates in an accepting state (if not, no correct tiling can be formed), see Fig.~3. \begin{figure} \center \begin{minipage}[b]{0.45\linewidth} \center \includegraphics[scale=0.43]{pic5.pdf} \label{pic3} \caption{} \end{minipage} \begin{minipage}[b]{0.45\linewidth} \center \includegraphics[scale=0.43]{pic4.pdf} \label{pic4} \caption{} \end{minipage} \end{figure} To make this construction work, the size of a macro-tile (the number $N$) should be large enough: first, we need enough space for $k$ bits to propagate, and second, we need enough time (i.e., height) so that all accepting computations of $\cal M$ terminate in time $m$ and space $m$ (where the size of the computation zone $m$ cannot be greater than the size of a macro-tile). In this construction the number of additional shades depends on the machine $\cal M$ (the more states it has, the more additional shades we need to simulate the computation in the space-time diagram). To avoid this dependency, we replace $\cal M$ by a fixed universal Turing machine $\cal U$ that runs a program simulating $\cal M$. We may assume that the tape has an additional read-only layer. Each cell of this layer carries a bit that never changes during the computation; these bits are used as a program for the universal machine. So in the computation zone the columns carry unchanged bits; the construction of a tile set guarantees that these bits form the program for $\cal U$, and the computation zone of a macro-tile represents a view of an accepting computation for that program, see Fig.~4. In this way we get a tile set $\tau$ that has $O(N^2)$ tiles and implements $\rho$. (This construction works for all large enough $N$.) In the updated construction the tile set still depends on the program simulated in the computational zone.
However, this dependency is essentially reduced: the simulated program (and, implicitly, the predicate $P$) affects only the rules for the tiles used in the bottom line of the computational zone. The colors on the sides of all other tiles are universal and do not depend on the simulated tile set~$\rho$. \subsection{A self-similar tile set: implementing itself} In the previous section we explained how to implement a given tile set $\rho$ (represented as a program for the universal TM) by another tile set $\tau$ with a large enough zoom factor $N$. Now we want $\tau$ to be isomorphic to $\rho$. This can be done using a construction that follows Kleene's fixed-point theorem. Note that most steps of the construction of $\tau$ do not depend on the program for $\cal M$ (the coordinates of tiles that make the skeleton of a macro-tile, the information transfer along the wires, the propagation of unchanged program bits, and the space-time diagram for the universal machine in the computation zone). Let us fix these rules as part of $\rho$'s definition and set $k = 2 \log N + O(1)$, so that we can encode $O(N^2)$ colors by $k$ bits. From this definition we obtain a program $\pi$ for the TM that checks that macro-tiles behave like $\tau$-tiles in this respect. We are almost done with the program $\pi$. The only remaining part of the rules for $\tau$ is the hardwired program. We need to guarantee that the computation zone in each macro-tile carries the very same program $\pi$. But since the program is written on the tape of the universal machine, it can be instructed to access its own bits and check that if a macro-tile belongs to the computation zone, this macro-tile carries the correct bit of the program. It remains to explain the choice of $N$ and $m$ (note that the value of the zoom factor $N$ and the size of the computation zone $m$ are hardwired in the program).
We need them to be large enough so that the computation described above (which deals with inputs of size $O(\log N)$) fits in the computation zone. The computations are rather simple (polynomial in the input size, i.e., $O(\log N)$), so they easily fit in space and time bounded by $m=\mathrm{poly}(\log N)$. This completes the construction of a self-similar aperiodic tile set. Now it is not hard to verify that the constructed tile set (1) allows a tiling of the plane, and (2) each of its tilings is self-similar. Applying Proposition~\ref{thm-folklore} we obtain the following proposition. \begin{proposition}[R.~Berger] There exists a tile set $\tau$ such that there exist $\tau$-tilings of the plane, and each $\tau$-tiling is aperiodic. \end{proposition} In the next section we will upgrade the basic construction of the fixed-point tile set. So far we should keep in mind that in such a tile set all tiles can be classified into three types: \begin{itemize} \item the ``skeleton'' tiles that keep no information except for their coordinates in a macro-tile; these tiles work as building blocks for our hierarchical structure; \item the ``wires'' that transmit the bits of macro-colors from the frontier of the macro-tile to the computation zone; \item the tiles of the computation zone (intended to simulate the space-time diagram of the Universal Turing machine). \end{itemize} The same is true for macro-tiles, super-macro-tiles, etc.; i.e., each macro-tile is a ``skeleton'' block, or a part of a ``wire'', or a cell in the computation zone in the macro-tile of higher rank. \section{Quasiperiodicity and aperiodicity} Before we approach the main result, we prove a simpler statement; we show that there exists a tile set such that all tilings are both \emph{quasiperiodic} and \emph{aperiodic}.
\begin{theorem}\label{thm2} There exists a tile set \textup(a set of Wang tiles\textup) $\tau$ such that (i)~there exist $\tau$-tilings of the plane; (ii)~each $\tau$-tiling is quasiperiodic; moreover, the set of $\tau$-tilings is minimal \textup(i.e., all $\tau$-tilings contain the same finite patterns\textup); (iii)~each $\tau$-tiling is aperiodic. \end{theorem} This result was originally proven in \cite{alexis} (for a tile set $\tau$ constructed in \cite{ollinger}). \subsection{Supplementary features: what else we can assume about the fixed-point tiling} The general construction of a fixed-point tiling does not imply the property of quasiperiodicity. In fact, for the tilings described above, each pattern that includes only ``skeleton'' tiles (or ``skeleton'' macro-tiles of some rank $k$) must appear infinitely often, in all homologous positions inside all macro-tiles of higher rank. However, this is not the case for patterns that include tiles from the ``computation zone'' or the ``communication wires''. Informally, the problem is that even a very small pattern can involve the information relevant for a macro-tile of arbitrarily high rank. So we cannot guarantee that a similar pattern appears somewhere in the neighborhood. To overcome this difficulty we need a new idea and some new technical tricks. First of all, without essential modification of the construction we can enforce the following additional properties of a tiling: \begin{itemize} \item In each macro-tile, the size of the computation zone $m$ is much less than the size of the macro-tile $N$. Technically, in what follows we will need to reserve free space in a macro-tile to insert $O(1)$ (some constant number) of copies of each $2\times 2$ pattern from the computation zone (of this macro-tile). This requirement is easy to meet. We may assume that the size of a computation zone in a macro-tile of size $N\times N$ is only $m=\mathrm{poly}(\log N)$.
\item We require that the tiling inside the computation zone satisfies the property of $2\times2$-\emph{determinicity}: if we know all colors on the borderline of a $2\times2$-pattern inside the computation zone (i.e., a tuple of $8$ colors), then we can uniquely reconstruct the $4$ tiles of this pattern. Again, we do not need any new idea: this requirement is met if we simulate the space-time diagram of a Turing machine in a natural way. \item The communication channels in a macro-tile (the wires that transmit the information from the macro-color on the borderline of this macro-tile to the bottom line of its computation zone) must be isolated from each other. The distance between any two wires must be greater than $2$. That is, each $2\times 2$-pattern can touch at most one communication wire. \end{itemize} Also we will need a somewhat more essential modification of the construction. We discuss it in the next section. \section{Proof of Theorem~\ref{thm2}} To achieve the property of quasiperiodicity, we should guarantee that every finite pattern that appears once in a tiling must appear in each large enough square. If a tile set $\tau$ is self-similar, then in every $\tau$-tiling each finite pattern can be covered by at most $4$ macro-tiles (by a $2\times2$-pattern) of an appropriate rank. Thus, to prove Theorem~\ref{thm2} it is enough to guarantee that each $2\times2$ group of macro-tiles (of each rank) that ever appears in a tiling must appear in each large enough square in it. This property is not true for the tile set constructed above. As we noticed above, this is obviously true for a $2\times2$ pattern that involves only skeleton macro-tiles (we can find an identical pattern in the neighboring macro-tile of the appropriate rank); however, this property can be false for patterns that touch the communication wires or the computation zone. To achieve the desired property we need to modify the basic construction.
To this end we implement in our construction one new feature. \textbf{The new feature:} Notice that for each $2\times2$-window that touches the computation zone or the communication wires there exist only a bounded number $c$ of ways to tile it correctly (and obtain a correct tiling). This constant $c$ depends on the alphabet of the tape and the number of internal states of the universal Turing machine. For each possible position of a $2\times2$-window in the computation zone or in the communication wires, and for each possible filling of this window by tiles, we reserve a special $2\times2$-\emph{slot} in a macro-tile (somewhere far away from the computation zone and from all communication wires) and define the neighbors around this slot in such a way that only this specific $2\times 2$ pattern can fill it. Note that the tiles around this slot ``know'' their real coordinates in the bigger macro-tile, while the tiles inside the slot do not (they ``believe'' themselves to be tiles of the computation zone, though they are in a ``slot'' outside of it). An example of such a slot is shown in Fig.~5. In Fig.~6 we show how these ``slots'' are placed in a macro-tile. This simple trick is the most significant difference between this construction and the fixed-point tilings known before: now some tiles do not ``know'' their real position in the ambient macro-tile. \begin{figure} \centering \includegraphics[scale=0.25]{pic7.pdf} \label{pic5} \caption{A ring of 12 ``skeleton'' tiles (the white squares) makes a slot for a $2\times 2$-pattern of tiles from the computation zone (the grey squares). In the picture we show the ``coordinates'' encoded in the colors on the sides of each tile. The colors of the bold lines (the blue lines between white and grey tiles and the bold black lines between grey tiles) contain some information beyond coordinates --- these colors involve the bits used to simulate a space-time diagram of the universal Turing machine.
(We do not show all the corresponding bits explicitly.) The ``real'' coordinates of the bottom-left corner of this slot are $(i+1,j+1)$, while the ``natural'' coordinates of the corresponding pattern (when it appears in the computation zone) are $(s,t)$. } \end{figure} \begin{figure} \centering \includegraphics[scale=0.5]{pic6.pdf} \label{pic6} \caption{The array of ``slots'' (with patterns from the computation zone) embedded in a macro-tile.} \end{figure} Here we use (a) the property of $2\times2$-determinicity of the computation zone (there is a unique way to put tiles in the ``slot''), and (b) the fact that we have enough room to put in a macro-tile the slots for all $2\times 2$-patterns that can appear in the computation zone or along the communication wires. (Here we use the fact that the size of the computational zone, $m\times m$, and the lengths of all communication wires, $O(1)\times O(N)$, in a macro-tile are much less than the total area of a macro-tile, $N\times N$.) This feature guarantees that each $2\times2$ pattern from the computational zone appears at least once in each macro-tile (such a pattern appears once in each macro-tile in the introduced ``slots'' and possibly once again in the computation zone of this macro-tile). We choose the positions of the ``slots'' in the macro-tile so that their coordinates can be computed by a short program in time polynomial in $\log N$. We require that the positions of all slots are disjoint and do not touch each other. This precaution is needed to guarantee that the tiles used in the slots do not damage the general structure of the macro-tiles. For a tile set with the new feature, every tiling enjoys a new property: every $2\times 2$-pattern of tiles touching the computation zone or a communication wire must appear at least once in \emph{each} macro-tile (hence, this pattern must appear in each large enough square).
Of course, the property of self-similarity implies that similar statements hold for $2\times 2$-patterns of macro-tiles of each rank $k$. Thus, we proved that every $N^k\times N^k$ pattern that appears in a $\tau$-tiling must appear in each large enough square in this tiling; moreover, this pattern must appear in each large enough square in \emph{all} $\tau$-tilings. Hence, the constructed tile set satisfies the requirements of Theorem~\ref{thm2}. \section{From aperiodicity to non-computability} To prove Theorem~\ref{thm1}, we need a slightly more sophisticated construction: a self-similar tiling with a \emph{variable zoom factor}, see \cite{drs} for details. In this version of the construction the size of a macro-tile of rank $r$ is equal to $N_r\times N_r$, for some suitable sequence of zooms $N_r$, $r=1,2,\ldots$ We may assume that $N_r = Cr$ for some constant $C$. Now each macro-tile of rank $r$ must ``know'' its own rank (that is, the binary representation of $r$ is written on the tape of the Turing machine simulated in the computation zone). This information is used by a macro-tile to simulate the macro-tiles of the next rank properly. The size of the computational zone $m_r$ should also grow as a function of the rank $r$ (easily computable from $r$); again, we may assume that $m_r = \mathrm{poly}(\log N_r)$. We may also require that all macro-tiles of rank $r$ contain in their computational zone the prefix (e.g., of length $\lceil \log r \rceil$) of some infinite sequence $X=x_0x_1x_2\ldots$ The bits of this prefix are propagated by wires to the neighboring macro-tiles, so all macro-tiles of the same rank contain the same bits $x_0x_1\ldots$ The usual self-simulation guarantees that the bits of $X$ embedded into a macro-tile of rank $r+1$ extend the prefix embedded in a macro-tile of rank $r$. Since the size of the computational zone increases as a function of the rank $r$, the entire tiling of the plane involves an infinite sequence of bits $X$.
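The bookkeeping above can be sketched with toy parameters (the constant $C$ and the $\mathrm{poly}(\log)$ exponent below are our illustrative assumptions, not the actual choices of the construction): the prefix of $X$ grows with the rank, while the computation zone stays much smaller than the macro-tile.

```python
import math

# Toy parameter choices for the variable-zoom construction (illustrative only).
C = 1000

def zoom(r):
    """Side length N_r of a rank-r macro-tile, N_r = C*r."""
    return C * r

def comp_zone(r):
    """Size m_r of the computation zone, m_r = poly(log N_r)."""
    return int(math.log2(zoom(r)) ** 3)

def prefix_len(r):
    """Number of bits of X available at rank r (ceil(log r))."""
    return math.ceil(math.log2(r)) if r > 1 else 1

for r in range(2, 7):
    # the prefix available at rank r+1 extends the one available at rank r,
    # and the computation zone stays much smaller than the macro-tile
    assert prefix_len(r + 1) >= prefix_len(r)
    assert comp_zone(r) < zoom(r)
print([(zoom(r), comp_zone(r), prefix_len(r)) for r in (2, 4, 8)])
```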
The construction becomes interesting if we can enforce some special properties of the embedded sequence $X$. For example, we can guarantee that it is not computable. Indeed, let us make the machine in the computation zone do some new job: let it enumerate two non-separable enumerable sets (at each level $r$ we run the simulation for the number of steps that fits the computation zone available in a macro-tile of rank $r$). Then we can require that $X$ is a separator between these two sets, and at each level the machine verifies that the (partially) enumerated sets are indeed separated by the given prefix of $X$. Combining all ingredients together, we obtain a tile set $\tau$, which is self-similar in a generalised sense (with a variable zoom factor), with two nontrivial properties: all $\tau$-tilings are non-computable and quasiperiodic. Thus, we proved Theorem~\ref{thm1}. \noindent \emph{A technical remark}: Notice that in this construction we cannot control precisely the sequence $X$ embedded in the tiling (we can specify the two non-separable enumerable sets that are ``enumerated'' in a tiling, but we cannot uniquely define the separator $X$ between them). Thus, our tile set accepts tilings corresponding to infinitely many sequences $X$, and it is not minimal. This is not surprising: if an effectively closed subshift contains no computable points, then it cannot be minimal (see, e.g., \cite{hochman-2009}). In contrast to the proof of Theorem~\ref{thm1-bis}, in the construction used in this section we cannot claim that there exist only $O(1)$ ways to fill a slot of size $2N_k\times2N_k$ placed somewhere in a macro-tile of rank $(k+1)$ by a quadruple of macro-tiles of rank $k$. Indeed, these macro-tiles involve a prefix of $X$, and there exist potentially many different sequences $X$. However, once $X$ is fixed, there remain only a constant number of $2\times2$ blocks of rank-$k$ macro-tiles that fit the given position in the next-level macro-tile.
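The per-level verification can be illustrated by a toy sketch (the two enumerated streams below are illustrative finite stand-ins; the real construction enumerates two recursively inseparable sets): a prefix of $X$ that passes the check at a low level may be rejected at a higher level, where more elements have been enumerated.

```python
# Toy stand-in for the verification done at each level r: the machine
# enumerates two disjoint sets for a bounded number of steps and checks that
# the known prefix of X still separates them (bit 1 for the first set,
# bit 0 for the second). The streams are illustrative, not the real sets.
A_STREAM = [3, 5, 11, 17]   # indices that must receive bit 1
B_STREAM = [2, 8, 13, 20]   # indices that must receive bit 0

def consistent(prefix, steps):
    """Check a prefix of X against the elements enumerated within `steps` steps."""
    for n in A_STREAM[:steps]:
        if n < len(prefix) and prefix[n] != 1:
            return False
    for n in B_STREAM[:steps]:
        if n < len(prefix) and prefix[n] != 0:
            return False
    return True

# A prefix may look consistent at a low level but be rejected at a higher one.
X_PREFIX = [0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0]
print(consistent(X_PREFIX, 2), consistent(X_PREFIX, 3))  # prints: True False
```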
This observation allows us to reuse the ``new feature'' from the proof of Theorem~\ref{thm2}. With essentially the same technique we can prove Theorem~\ref{thm1-bis}. We again employ the idea of embedding an infinite sequence $X$ in a tiling. Technically, we require that all macro-tiles of rank $k$ involve in their computational zone the same finite sequence of $\log k$ bits, which is understood as a prefix of $X$; we guarantee that the prefix embedded in macro-tiles of rank $(k+1)$ is compatible with the prefix available to the macro-tiles of rank $k$. Further, since $\cal A$ is in $\Pi_1^0$, we can enumerate the (potentially infinite) list of patterns that should not appear in $X$. On each level, the macro-tiles allocate some part of the available space and time (limited by the size of the computational zone available on this level) to run the enumeration of $\cal A$; every time a new element of $\cal A$ is enumerated, the algorithm (simulated in the computational zone) verifies that the found forbidden pattern does not appear in the prefix of $X$ accessible to macro-tiles of this level. Since the computational zone in a macro-tile of rank $k$ becomes bigger and bigger as $k$ increases, the enumeration extends longer and longer. Thus, a sequence $X$ can be embedded in an infinite tiling if and only if this sequence does not contain any forbidden pattern (i.e., this $X$ belongs to $\cal A$). What are the Turing degrees of tilings for the described tile set? For our tile set, every tiling is defined by three infinite parameters: the sequence of bits $X$ embedded in this tiling, and two sequences of integers $\sigma_h, \sigma_v$ that specify the shifts (the horizontal and the vertical ones) of macro-tiles of each level relative to the origin of the plane. This information is enough to reconstruct the tiling.
Indeed, $\sigma_h$ and $\sigma_v$ define the hierarchical structure of the macro-tiles: on each level $k$ we should split the macro-tiles of the previous rank into blocks of size $N_k\times N_k$ ($k$-level macro-tiles), and there are $N_k^2$ ways to choose the grid of horizontal and vertical lines that defines this splitting. And the content of the computational zones of all macro-tiles is defined by the prefixes of $X$. Conversely, given a tiling as an oracle, we can (computably) extract from it the digits of the sequences $X$, $\sigma_h$, and $\sigma_v$. It remains to notice that $\sigma_h$ and $\sigma_v$ can be absolutely arbitrary. Thus, the Turing degree of a tiling is the Turing degree of $(X,\sigma_h, \sigma_v)$, which can be any degree not less than that of $X$. That is, the set of degrees of tilings is exactly the upper closure of ${\cal A}$. So we get the statement of Theorem~\ref{thm1-bis}. \smallskip \emph{Acknowledgements:} We thank Laurent Bienvenu and Emmanuel Jeandel for many fruitful discussions. We are also very grateful to the three anonymous referees of MFCS 2015 for exceptionally detailed and instructive comments.
\section{Introduction} \label{sec:Introduction} Numerical linear algebra solvers for large matrices are strongly needed in various applications on current and next-generation supercomputers. Nowadays ScaLAPACK\cite{ScaLAPACK, SCALAPACK-URL} \footnote{ ScaLAPACK = Scalable Linear Algebra PACKage } is the {\it de facto} standard solver library for parallel computations, but several of its routines form severe speed bottlenecks on current massively parallel architectures. Novel solver libraries have been proposed to overcome these bottlenecks. Since the performance of numerical routines varies significantly with problems and architectures, the best performance is achieved when one constructs an optimal \lq hybrid' among the libraries. \begin{figure}[htb] \centering \includegraphics[width=7cm]{HybridSolvers.eps} \caption{Concept of hybrid solver; structure of the program code (a) without and (b) with a hybrid solver or numerical middleware.} \label{fig:Concept} \end{figure} The concept of the hybrid solver is illustrated in Fig.~\ref{fig:Concept}. It is a numerical middleware with a common data interface to real applications. One can choose the optimal workflow for each problem without any programming effort. The present paper focuses on dense-matrix solvers for generalized eigenvalue problems (GEPs) in the form of \begin{eqnarray} A \bm{y}_k = \lambda_k B \bm{y}_k \label{EQ-GEV-EQ} \end{eqnarray} with given $M \times M$ real symmetric matrices $A$ and $B$. The matrix $B$ is positive definite. The eigenvalues $\{ \lambda_k \}$ and the eigenvectors $\{ \bm{y}_k \}$ are to be calculated. The computational cost is $\mathcal{O}(M^3)$, i.e., proportional to $M^3$. The present hybrid solvers are constructed from ScaLAPACK and the two newer libraries ELPA~\cite{ELPA-URL, ELPAReview, ELPAAlgorithm} \footnote{ ELPA = Eigenvalue soLvers for Petascale Applications } and EigenExa~\cite{EIGENEXA-URL, EigenExa-PNST, EigenExa-PMAA, EigenExa-ISC}.
The ELPA and EigenExa libraries are written in Fortran and appeared in the 2000s for efficient massively parallel computations. The present paper is organized as follows. Section~\ref{SEC-BACKGROUND} explains the background from electronic structure calculation. Section~\ref{SEC-WORKFLOW} describes the mathematical foundation. Sections~\ref{SEC-BENCHMARK} and \ref{SEC-DISCUSSIONS} are devoted to the benchmark results and discussions, respectively. The summary and future outlook appear in Sec.~\ref{SEC-SUMMARY}. \begin{figure}[htb] \centering \includegraphics[width=6cm]{fig-ELSES-WFN-Bench-combined.eps} \caption{(a) The upper panel shows a $\pi$-type electronic wavefunction in an amorphous-like conjugated polymer (poly(9,9-dioctylfluorene)). The lower panel shows the atomic structure (R$\equiv$C$_8$H$_{17}$) \cite{HOSHI2014-JPS-CP}. (b) Strong scaling plot by ELSES for one-hundred-million-atom calculations on the K computer \cite{HOSHI2014-JPS-CP,HOSHI-TAIWAN-PROC}. The calculated materials are a nano-composite carbon solid (the upper line) and the amorphous-like conjugated polymer (the lower line). The numbers of used processor nodes range from $P=$ 4,096 to 82,944 (full nodes of the K computer). \label{fig:graph-bench-100M-aPF-NCCS}} \end{figure} \section{Background \label{SEC-BACKGROUND}} \subsection{Large-scale electronic structure calculations} The GEP of Eq.~(\ref{EQ-GEV-EQ}) gives the mathematical foundation of electronic structure calculations, or quantum mechanical calculations of materials, in which an electron is treated as a quantum mechanical \lq wave'. The input matrices $A$ and $B$ of Eq.~(\ref{EQ-GEV-EQ}) are called the Hamiltonian and the overlap matrix, respectively. An eigenvalue $\lambda_k$ is the energy of one electron and an eigenvector $\bm{y}_k$ specifies the wavefunction, or the shape of an electronic \lq wave'. Fig.~\ref{fig:graph-bench-100M-aPF-NCCS}(a) shows an example of a wavefunction.
The number of required eigenvalues is, at least, on the order of the number of electrons or atoms in the calculated materials. See the ELPA paper~\cite{ELPAReview} for a review, because ELPA was developed in tight collaboration with the electronic structure calculation community. Here, our motivation is explained. The present authors developed a large-scale quantum material simulator called ELSES \footnote{ ELSES = Extra-Large-Scale Electronic Structure calculation } \cite{ELSES-URL,HOSHI-mArnoldi}. The theories are explained in Refs.~\cite{HOSHI-mArnoldi, SOGABE-2012-GSQMR} and the references therein. The matrices are based on the real-space atomic-orbital representation and the matrix size $M$ is nearly proportional to the number of atoms $N$ ($M \propto N$). The simulations mainly use novel \lq order-$N$' linear-algebraic methods in which the computational cost is \lq order-$N$' ($\mathcal{O}(N)$), i.e., proportional to the number of atoms $N$. Their mathematical foundation is sparse-matrix (Krylov-subspace) solvers. Efficient massively parallel computation is demonstrated in Fig.~\ref{fig:graph-bench-100M-aPF-NCCS}, a strong scaling benchmark on the K computer \cite{HOSHI2014-JPS-CP, HOSHI-TAIWAN-PROC} with one hundred million atoms, or one-hundred-nanometer-scale materials. The simulated materials are a nano-composite carbon solid with $N=103,219,200$ or $M=412,876,800$ \cite{HOSHI2014-JPS-CP} and an amorphous-like conjugated polymer with $N=102,238,848$ or $M=230,776,128$ \cite{HOSHI-TAIWAN-PROC}. The present dense-matrix solvers are complementary to the order-$N$ calculations, because the order-$N$ calculation gives approximate solutions, while the dense-matrix solvers give numerically exact ones with a heavier ($\mathcal{O}(M^3)$) computational cost. The use of the two methods will lead us to fruitful research.
The exact solutions are important, for example, when the system has many nearly degenerate eigenpairs and one would like to distinguish them. The exact solutions are also important as reference data for the development of refined approximate solvers. The matrices $A$ and $B$ in the present benchmark appear in the \lq ELSES Matrix Library'.~\cite{ELSESMatrixLibrary} The Library is the collection of the matrix data generated by ELSES for material simulations. The benchmark was carried out with the data files of \lq NCCS430080', \lq VCNT22500', \lq VCNT90000', and \lq VCNT1008000' for the matrix sizes of $M$=430,080, $M$=22,500, $M$=90,000, and $M$=1,008,000, respectively. A large matrix data file ($>0.5$ GB) is uploaded as a set of split files for the user's convenience. The physical origin of the matrices is explained briefly. The matrices in the present benchmark represent carbon materials modeled by tight-binding-form theories based on {\it ab initio} calculations. The matrix of \lq NCCS430080' appears in our material research on a nano-composite carbon solid (NCCS) \cite{HOSHI-NCCS}. An sp-orbital form \cite{CALZAFERRI} is used and the system contains $N=M/4=107,520$ atoms. The other files are generated for thermally vibrated single-wall carbon nanotubes (VCNTs) within a supercell. An spd-orbital form \cite{CERDA2000} is used and each system contains $N=M/9$ atoms. The VCNT systems were prepared so as to generate matrices systematically in different sizes with similar eigenvalue distributions. We used these matrices for the investigation of $\pi$-electron materials with the present dense-matrix solver and the order-$N$ solver. \footnote{ The present matrices are sparse, which does not spoil the generality of the benchmark, since the cost of a dense-matrix solver does not depend on the number of non-zero elements of the matrix.
} \begin{figure}[htb] \centering \includegraphics[width=6cm]{EigenTestDetailedFlow.eps} \caption{Workflow of the hybrid GEP solver.} \label{fig:EigenTestDetailedFlow} \end{figure} \section{The hybrid solvers \label{SEC-WORKFLOW}} A hybrid solver is constructed when a routine is chosen for each subprocedure from ScaLAPACK, EigenExa, and ELPA. The code was developed as a general middleware that can be connected not only to ELSES but also to any real application software, as in Fig.~\ref{fig:Concept}. A mini-application was also developed and used in the present benchmark. In the benchmark, ScaLAPACK was used as a built-in library on each machine. EigenExa version 2.2a \footnote{ The present EigenExa package does not include the GEP solver. The GEP solver routine for EigenExa in the present paper is that of version 2.2b of KMATH\_EIGEN\_GEV \cite{EigenExaGEV}, which shares the SEP solver routine with the EigenExa package. } and ELPA version 2014.06.001 were used. ELPA and EigenExa call some ScaLAPACK routines. \subsection{Mathematical formulation \label{SEC-MATH-FOUND}} The GEP of Eq.~(\ref{EQ-GEV-EQ}) can be written in the matrix form \begin{eqnarray} A Y = BY \Lambda, \label{EQ-GEV-EQ-MAT} \end{eqnarray} where the matrix $\Lambda \equiv {\rm diag}(\lambda_1, \lambda_2, \dots)$ is diagonal and the matrix $Y \equiv (\bm{y}_1 \, \bm{y}_2 \, \cdots)$ satisfies $Y^T B Y = I$. In the solvers, the GEP of Eq.~(\ref{EQ-GEV-EQ}) is reduced to a standard eigenvalue problem (SEP) of \begin{eqnarray} A' Z = Z \Lambda, \label{EQ-SEV-EQ} \end{eqnarray} where the reduced matrix $A'$ is real symmetric \cite{MatrixComputations} and the matrix $Z \equiv (\bm{z}_1 \, \bm{z}_2 \, \cdots)$ contains the eigenvectors of $A'$. The reduction procedure can be achieved via the Cholesky factorization of $B$, which gives the Cholesky factor $U$ as an upper triangular matrix: \begin{eqnarray} B = U^T U.
\label{EQ-CHOLE-DECMP} \end{eqnarray} The reduced matrix $A'$ is defined by \begin{eqnarray} A' = U^{-T} A U^{-1}. \label{EQ-A-ATAU} \end{eqnarray} The eigenvectors of the GEP, written as $Y \equiv (\bm{y}_1 \, \bm{y}_2 \, \cdots)$, are calculated from those of the SEP by \begin{eqnarray} Y = U^{-1} Z. \label{EQ-BACKWARD-TRANS} \end{eqnarray} This procedure is usually called the backward transformation. The GEP solver is decomposed into the two subprocedures of (a) the solver of the SEP in Eq.~(\ref{EQ-SEV-EQ}) and (b) the reduction from the GEP to the SEP ($( A, B ) \Rightarrow A'$) and the backward transformation ($Z \Rightarrow Y$). The subprocedures (a) and (b) are called the \lq SEP solver' and the \lq reducer', respectively, and both require $\mathcal{O}(M^3)$ operations. Figure~\ref{fig:EigenTestDetailedFlow} summarizes the workflows of the possible hybrid solvers. A hybrid solver is constructed when one chooses the routines for (a) the SEP solver and (b) the reducer, respectively. For (a) the SEP solver, five routines are found in the base libraries. One routine is a ScaLAPACK routine (routine name in the code: \lq pdsyevd') that uses the conventional tridiagonalization algorithm \cite{REF-pdsyevd}. The ELPA and EigenExa libraries each contain a SEP solver routine based on the tridiagonalization algorithm. The routine in ELPA is called \lq ELPA1' (routine name in the code: \lq solve\_evp\_real') in this paper, as in the original paper \cite{ELPAReview}, and the one in EigenExa is called \lq Eigen\_s' or \lq EIGS' (routine name in the code: \lq eigen\_s'). ELPA and EigenExa also contain novel SEP solvers based on narrow-band reduction algorithms without the conventional tridiagonalization procedure. These solvers are called \lq ELPA2' (routine name in the code: \lq solve\_evp\_real\_2stage') for the ELPA routine and \lq Eigen\_sx' or \lq EIGX' (routine name in the code: \lq eigen\_sx') for the EigenExa routine in this paper.
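The reduction and backward transformation of Eqs.~(\ref{EQ-CHOLE-DECMP})--(\ref{EQ-BACKWARD-TRANS}) can be illustrated by a minimal serial NumPy/SciPy sketch (our illustration only; the distributed libraries discussed in this paper implement the same algebra on block-cyclic matrices):

```python
import numpy as np
from scipy.linalg import cholesky, eigh, solve_triangular

rng = np.random.default_rng(0)
M = 50
A = rng.standard_normal((M, M)); A = (A + A.T) / 2            # symmetric A
B = rng.standard_normal((M, M)); B = B @ B.T + M * np.eye(M)  # SPD B

U = cholesky(B, lower=False)                  # B = U^T U (upper Cholesky factor)
# A' = U^{-T} A U^{-1}, computed with triangular solves instead of inverses
Ap = solve_triangular(U, solve_triangular(U, A.T, trans='T').T, trans='T')
lam, Z = eigh(Ap)                             # SEP: A' Z = Z Lambda
Y = solve_triangular(U, Z)                    # backward transformation Y = U^{-1} Z

# verify A y_k = lambda_k B y_k and the B-orthonormality Y^T B Y = I
assert np.allclose(A @ Y, B @ Y * lam, atol=1e-8)
assert np.allclose(Y.T @ B @ Y, np.eye(M), atol=1e-8)
print("max residual:", np.max(np.abs(A @ Y - B @ Y * lam)))
```

The triangular solves avoid forming $U^{-1}$ explicitly, in the spirit of the ScaLAPACK style reducer; the ELPA style reducer described below instead inverts $U$ explicitly and uses matrix multiplications.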
See the papers~\cite{EigenExa-PNST, ELPAAlgorithm} for details. For (b) the reducer, three routines are found in the base libraries; they are called the ScaLAPACK style, ELPA style, and EigenExa style reducers in this paper. In the ScaLAPACK style, the Cholesky factorization of Eq.~(\ref{EQ-CHOLE-DECMP}) is carried out and then the reduced matrix $A'$, defined in Eq.~(\ref{EQ-A-ATAU}), is generated by a recursive algorithm (routine name: \lq pdsygst') without explicit calculation of $U^{-1}$ or $U^{-T}$. Details of the recursive algorithm are explained, for example, in Ref.~\cite{ReducingGEP}. In the ELPA style, the Cholesky factorization (routine name: \lq cholesky\_real') is carried out, as in the ScaLAPACK style, and the reduced matrix $A'$ is generated by the explicit calculation of the inverse (triangular) matrix $R\equiv U^{-1}$ (routine name: \lq invert\_trm\_real') and the explicit successive matrix multiplication $A' = (R^T A) R$ (routine name: \lq mult\_at\_b\_real') \cite{ELPAReview} \footnote{The benchmark was carried out with an ELPA style reduction algorithm. The ScaLAPACK routine \lq pdtrmm' is used for the multiplication by the triangular matrix $R$ from the right, while a sample code in the ELPA package uses the ELPA routine (\lq mult\_at\_b\_real'). We ignore the difference, since the elapse time of the above procedure is not dominant.}. In the EigenExa style, the Cholesky factorization is not used. Instead, the SEP for the matrix $B$, \begin{eqnarray} B W = W D, \label{EQ-SEV-EQ-B-MAT} \end{eqnarray} is solved by the SEP solver (Eigen\_sx), with the diagonal matrix $D \equiv {\rm diag}(d_1, d_2,\dots)$ and the orthogonal matrix $W \equiv (\bm{w}_1 \, \bm{w}_2 \, \cdots)$. A reduced SEP in the form of Eq.~(\ref{EQ-SEV-EQ}) is obtained by \begin{eqnarray} A' &=& (D^{-1/2} W^{T}) A (W D^{-1/2}) \\ Y &=& W D^{-1/2} Z, \label{EQ-SEV-EQ-EIGENEXA} \end{eqnarray} because $Z=D^{1/2} W^{T} Y$ and $W^{-T}=W$.
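The EigenExa style reducer of Eqs.~(\ref{EQ-SEV-EQ-B-MAT})--(\ref{EQ-SEV-EQ-EIGENEXA}) admits a similar serial sketch (again our illustration only), in which the Cholesky factor is replaced by a full eigendecomposition of $B$:

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(1)
M = 40
A = rng.standard_normal((M, M)); A = (A + A.T) / 2
B = rng.standard_normal((M, M)); B = B @ B.T + M * np.eye(M)

d, W = eigh(B)                    # SEP for B: B W = W D
S = W / np.sqrt(d)                # S = W D^{-1/2} (scale columns of W)
Ap = S.T @ A @ S                  # A' = D^{-1/2} W^T A W D^{-1/2}
lam, Z = eigh(Ap)                 # reduced SEP: A' Z = Z Lambda
Y = S @ Z                         # Y = W D^{-1/2} Z

assert np.allclose(A @ Y, B @ Y * lam, atol=1e-8)
print("max residual:", np.max(np.abs(A @ Y - B @ Y * lam)))
```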
The reduced SEP of Eq.~(\ref{EQ-SEV-EQ-EIGENEXA}) is then solved by the SEP solver (Eigen\_sx). Though solving the SEP of Eq.~(\ref{EQ-SEV-EQ-B-MAT}) requires a larger operation cost than the Cholesky factorization (see Fig.~1 of Ref.~\cite{CHOLESKY-EXP}, for example), the elapse time cannot be estimated from the operation counts alone on modern supercomputers. \begin{table}[htb] \caption{List of the workflows in the benchmark. The routine names for the SEP solver and the reducer are shown for each workflow. Abbreviations are shown within parentheses. \label{tab:workflows} } \begin{tabular}{c|cc} Workflow & SEP solver & Reducer \\ \hline $A$ & ScaLAPACK (SCLA) & ScaLAPACK (SCLA) \\ $B$ & Eigen\_sx (EIGX) & ScaLAPACK (SCLA) \\ $C$ & ScaLAPACK (SCLA) & ELPA \\ $D$ & ELPA2 & ELPA \\ $E$ & ELPA1 & ELPA \\ $F$ & Eigen\_s (EIGS) & ELPA \\ $G$ & Eigen\_sx (EIGX) & ELPA \\ $H$ & Eigen\_sx (EIGX) & Eigen\_sx (EIGX) \end{tabular} \end{table} The benchmark of the hybrid GEP solvers was carried out for the eight workflows listed in Table~\ref{tab:workflows}. In general, a potential issue is the possible overhead of the data conversion process between libraries. This issue is discussed in Sec.~\ref{SEC-DATA-CONV}. \begin{table}[htb] \caption{Selected results of the benchmark: the elapse time for the full (eigenpair) calculation ($T_{\rm full}$) and that for the eigenvalue-only calculation ($T_{\rm evo}$) for each workflow. The recorded time is the best among runs with different numbers of used nodes. The number of used nodes ($P$) for the best data is shown within parentheses. The best data among the workflows are labelled by \lq [B]'. The saturated data are labelled by \lq [S]'. The workflow $D'$ on Altix is the one without the SSE-optimized routine of the \lq ELPA2' SEP solver. See the text for details.
} \begin{tabular}{cc|c|c}
Size $M$/Machine & WF & $T_{\rm full}$ (sec) & $T_{\rm evo}$ (sec) \\ \hline \hline
1,008,000/FX10 & $G$ & 39,919 ($P$ = 4,800) & 35,103 ($P$ = 4,800) \\ \hline \hline
430,080/K & $A$ & 11,634 ($P$ = 10,000) & 10,755 ($P$ = 10,000) \\
& $B$ & 8,953 ($P$ = 10,000) & 8,465 ($P$ = 10,000) \\
& $C$ & 5,415 ($P$ = 10,000) & 4,657 ($P$ = 10,000) \\
& $D$ & 4,242 ($P$ = 10,000) & 2,227 ($P$ = 10,000)[B] \\
& $E$ & 2,990 ($P$ = 10,000) & 2,457 ($P$ = 10,000) \\
& $F$ & 2,809 ($P$ = 10,000) & 2,416 ($P$ = 10,000) \\
& $G$ & 2,734 ($P$ = 10,000)[B] & 2,355 ($P$ = 10,000) \\
& $H$ & 3,595 ($P$ = 10,000) & 3,147 ($P$ = 10,000) \\ \hline \hline
90,000/K & $A$ & 590 ($P$ = 4,096) & 551 ($P$ = 4,096) \\
& $B$ & 493 ($P$ = 1,024)[S] & 449 ($P$ = 1,024)[S] \\
& $C$ & 318 ($P$ = 4,096) & 298 ($P$ = 4,096) \\
& $D$ & 259 ($P$ = 4,096) & 190 ($P$ = 4,096)[B] \\
& $E$ & 229 ($P$ = 4,096)[B] & 194 ($P$ = 4,096) \\
& $F$ & 233 ($P$ = 4,096) & 210 ($P$ = 4,096) \\
& $G$ & 258 ($P$ = 4,096) & 240 ($P$ = 4,096) \\
& $H$ & 253 ($P$=4,096) & 236 ($P$=4,096) \\ \hline
90,000/FX10 & $A$ & 1,248 ($P$ = 1,369) & 1,183 ($P$ = 1,369) \\
& $B$ & 691 ($P$ = 1,024)[S] & 648 ($P$ = 1,024)[S] \\
& $C$ & 835 ($P$ = 1,369) & 779 ($P$ = 1,369) \\
& $D$ & 339 ($P$ = 1,369) & 166 ($P$ = 1,024)[B][S] \\
& $E$ & 262 ($P$ = 1,369) & 233 ($P$ = 1,024)[S] \\
& $F$ & 250 ($P$ = 1,369)[B] & 222 ($P$ = 1,369) \\
& $G$ & 314 ($P$ = 1,024)[S] & 283 ($P$ = 1,024)[S] \\
& $H$ & 484 ($P$=1,369) & 456 ($P$=1,369) \\ \hline
90,000/Altix & $A$ & 1,985 ($P$ = 256) & 1,675 ($P$ = 256) \\
& $B$ & 1,883 ($P$ = 256) & 1,586 ($P$ = 256) \\
& $C$ & 1,538 ($P$ = 256) & 1,240 ($P$ = 256) \\
& $D$ & 1,621 ($P$ = 256) & 594 ($P$ = 256) \\
& $D'$ & 2,621 ($P$ = 256) & 585 ($P$ = 256)[B] \\
& $E$ & 1,558 ($P$ = 256) & 1,287 ($P$ = 256) \\
& $F$ & 1,670 ($P$ = 256) & 1,392 ($P$ = 256) \\
& $G$ & 1,453 ($P$ = 256)[B] & 1,170 ($P$ = 256) \\
& $H$ & 2,612 ($P$=256) & 2,261 ($P$=256) \\ \hline \hline
22,500/K & $A$
& 65.2 ($P$ = 1,024) & 59.6 ($P$ = 256) \\
& $B$ & 45.8 ($P$ = 1,024)[S] & 43.2 ($P$ = 1,024)[S] \\
& $C$ & 41.7 ($P$ = 2,025) & 37.8 ($P$ = 2,025) \\
& $D$ & 28.4 ($P$ = 2,025) & 22.6 ($P$ = 1,024) \\
& $E$ & 28.3 ($P$ = 2,025)[B] & 22.6 ($P$ = 1,024)[B] \\
& $F$ & 28.8 ($P$ = 1,024)[S] & 26.9 ($P$ = 1,024)[S] \\
& $G$ & 29.7 ($P$ = 1,024)[S] & 27.8 ($P$ = 1,024)[S] \\
& $H$ & 39.3 ($P$ = 1,024)[S] & 37.5 ($P$ = 1,024)[S] \\ \hline
22,500/FX10 & $A$ & 126.2 ($P$ = 256) & 118.1 ($P$ = 256) \\
& $B$ & 71.3 ($P$ = 256)[S] & 67.1 ($P$ = 256)[S] \\
& $C$ & 103.5 ($P$ = 256)[S] & 96.3 ($P$ = 256)[S] \\
& $D$ & 30.5 ($P$ = 529)[B] & 24.4 ($P$ = 529)[B] \\
& $E$ & 34.3 ($P$ = 256)[S] & 31.2 ($P$ = 256)[S] \\
& $F$ & 32.1 ($P$ = 529) & 29.4 ($P$ = 529) \\
& $G$ & 45.3 ($P$ = 529) & 42.5 ($P$ = 529) \\
& $H$ & 74.9 ($P$ = 529) & 72.2 ($P$ = 529) \\ \hline
22,500/Altix & $A$ & 51.4 ($P$ = 256) & 42.1 ($P$ = 256) \\
& $B$ & 70.0 ($P$ = 256) & 50.7 ($P$ = 256) \\
& $C$ & 45.6 ($P$ = 256) & 35.5 ($P$ = 256) \\
& $D$ & 41.8 ($P$ = 256) & 22.3 ($P$ = 256)[B] \\
& $D'$ & 59.6 ($P$ = 256) & 21.8 ($P$ = 256)[B] \\
& $E$ & 32.3 ($P$ = 256)[B] & 26.7 ($P$ = 256) \\
& $F$ & 48.5 ($P$ = 256) & 37.3 ($P$ = 256) \\
& $G$ & 57.2 ($P$ = 256) & 39.6 ($P$ = 256) \\
& $H$ & 71.2 ($P$ = 256) & 64.1 ($P$ = 256)
\end{tabular} \label{tab:besttime} \end{table} \section{Benchmark result} \label{SEC-BENCHMARK} Strong scaling benchmarks are investigated for the hybrid solvers. The elapse times were measured for (i) the full eigenpair calculation ($T_{\rm full}$) and (ii) the \lq eigenvalue-only' calculation ($T_{\rm evo}$). In the latter case, the time for the calculation of the eigenvectors is excluded. Both types of calculations are important in electronic structure calculations. \cite{ELPAReview} The present benchmark ignores the small elapse times of the initial procedure for data distribution; comments on them appear in Sec.~\ref{SEC-INI-DATA}.
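The distinction between the two measured quantities can be reproduced serially (a SciPy sketch of our own; it mirrors, but does not use, the parallel libraries of this benchmark): scipy.linalg.eigh solves the GEP directly and can skip the computation of eigenvectors.

```python
import time
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(2)
M = 400
A = rng.standard_normal((M, M)); A = (A + A.T) / 2            # symmetric A
B = rng.standard_normal((M, M)); B = B @ B.T + M * np.eye(M)  # SPD B

t0 = time.perf_counter()
lam_full, Y = eigh(A, B)                  # full calculation: values and vectors
t_full = time.perf_counter() - t0

t0 = time.perf_counter()
lam_only = eigh(A, B, eigvals_only=True)  # eigenvalue-only calculation
t_evo = time.perf_counter() - t0

assert np.allclose(lam_full, lam_only)
print(f"T_full = {t_full:.3f} s, T_evo = {t_evo:.3f} s")
```

In the parallel libraries the eigenvalue-only path skips the backtransformation of eigenvectors, which, as shown below, can be a substantial fraction of the total time.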
The benchmark was carried out on three supercomputers: the K computer at RIKEN, Fujitsu FX10, and SGI Altix ICE 8400EX. The K computer has a single SPARC64 VIIIfx processor (2.0 GHz, 8-core) on each node. The FX10 system is Oakleaf-FX of the University of Tokyo. Fujitsu FX10 is the successor of the K computer and has a single SPARC64 IXfx processor (1.848 GHz, 16-core) on each node. \footnote{ Additional options on the K computer and FX10 are explained here: we did not specify an MPI process shape on the Tofu interconnect, and we used the rank directory feature to alleviate I/O contention. } We also used the SGI Altix ICE 8400EX of the Institute for Solid State Physics of the University of Tokyo. It is a cluster of Intel Xeon X5570 processors (2.93 GHz, 8-core). The byte-per-flop value is B/F = 0.5, 0.36, and 0.68 for the K computer, FX10, and SGI Altix, respectively. The numbers of used processor nodes $P$ are set to be square numbers ($P=q^2$) except in Sec.~\ref{SEC-BENCH-MILLION}, since the ELPA paper~\cite{ELPAReview} reported that choosing a (near-)square number for $P$ can give better performance. When the non-traditional SEP solver algorithm of ELPA is used on Altix, one can choose between an optimized low-level routine using SSE instructions (\lq REAL\_ELPA\_KERNEL\_SSE') and a generic routine (\lq REAL\_ELPA\_KERNEL\_GENERIC'). \cite{ELPAReview} The optimized code can run only on Intel-based architectures compatible with SSE instructions and was prepared to accelerate the backtransformation subroutine. Among the results on Altix, the \lq ELPA2' solver and the workflow $D$ are those with the optimized routine, while the \lq ELPA2$'$' solver and the workflow $D'$ are those with the generic routine. \begin{figure}[htb] \centering \includegraphics[width=8.5cm]{combined_430080_K.eps} \caption{ Results with $M$=430,080 on the K computer. The elapse times are plotted with the workflows for the (a) full ($T_{\rm full}$) and (b) eigenvalue-only ($T_{\rm evo}$) calculations.
(c) The decomposed times for the SEP solver ($T_{\rm SEP}$) and for the reducer ($T_{\rm RED}$) are plotted. The routines for the reducers are labeled by \lq (RED)'. Detailed decomposed times for subprocedures of the ELPA style reducer and the Cholesky decomposition in the ScaLAPACK style reducer are also plotted in (c). The ideal speedup in parallelism is drawn as a dashed gray line. } \label{fig:combined_430080_K} \end{figure} \subsection{Result with the matrix size of $M=430,080$} \label{subsec:430080} The benchmark with the matrix size of $M=430,080$ was carried out for up to $P$ = 10,000 nodes on the K computer. The elapse times for $P$ = 10,000 nodes are shown in Table~\ref{tab:besttime}. The elapse times for all the cases are shown in Fig.~\ref{fig:combined_430080_K} for the (a) full ($T_{\rm full}$) and (b) eigenvalue-only ($T_{\rm evo}$) calculations. The decomposed times are also shown in Fig.~\ref{fig:combined_430080_K} (c) for the SEP solver ($T_{\rm SEP}$) and the reducer ($T_{\rm RED}$) ($T_{\rm full} = T_{\rm SEP} + T_{\rm RED}$). \begin{table}[htb] \caption{Decomposition of the elapse time (sec) of the SEP solvers with $M$=430,080 and $P=10,000$. See the text for the subroutine names \lq TRD/BAND', \lq D\&C' and \lq BACK'. } \begin{tabular}{c|ccc|c} SEP solver & TRD/BAND & D\&C & BACK & Total ($T_{\rm SEP}$) \\ \hline SCLA & 3,055 & 465 & 633 & 4,152 \\ ELPA2 & 966 & 141 & 1,892 & 2,999 \\ ELPA1 & 1,129 & 138 & 400 & 1,667 \\ EIGS & 1,058 & 196 & 265 & 1,521 \\ EIGX & 828 & 390 & 255 & 1,473 \end{tabular} \label{tab:SEP_decomposition} \end{table} Table~\ref{tab:SEP_decomposition} shows the decomposed times of the SEP solvers for $P$ = 10,000. 
A SEP solver routine is decomposed into three subroutines: (i) the tridiagonalization or narrow-band reduction (\lq TRD/BAND'), (ii) the divide-and-conquer algorithm for the tridiagonal or narrow-band matrices (\lq D\&C'), which computes the eigenvalues, and (iii) the backtransformation of eigenvectors (\lq BACK'), which computes the eigenvectors of the GEP. One can observe several features in the results: (I) In the full calculation benchmark (Fig.~\ref{fig:combined_430080_K}(a)), the best data, i.e. the smallest elapse time, appears in the workflow $G$ for $P$ = 10,000. The workflow $G$ is the hybrid solver that uses the \lq Eigen\_sx' SEP solver in EigenExa and the ELPA style reducer; these routines are the best among the SEP solvers and the reducers, respectively, as shown in Fig.~\ref{fig:combined_430080_K}(c) and Table~\ref{tab:SEP_decomposition}. In Table~\ref{tab:besttime}, the speed ($T_{\rm full}^{-1}$) of the workflow $G$ is approximately four times that of the conventional workflow $A$ ((11,634 sec) / (2,734 sec) $\approx$ 4.3). (II) Fig.~\ref{fig:combined_430080_K} (c) shows that the ELPA style reducer gives significantly smaller elapse times than those of ScaLAPACK and EigenExa. The elapse time for $P$ = 10,000 is $T_{\rm RED}$ = 1,261 sec with the ELPA style reducer and $T_{\rm RED}$ = 2,157 sec with the EigenExa reducer. The elapse time with the EigenExa reducer is governed by that of the SEP solver for Eq.~(\ref{EQ-SEV-EQ-B-MAT}) ($T_{\rm SEP}$ = 1,473 sec in Table~\ref{tab:SEP_decomposition}). (III) In the eigenvalue-only calculation (Fig.~\ref{fig:combined_430080_K}(b)), the best data, i.e. the smallest elapse time, appears in the workflow $D$ for $P$ = 10,000. 
The workflow $D$ is the solver that uses the \lq ELPA2' SEP solver and the ELPA style reducer, and its eigenvector calculation consumes a large elapse time $T_{\rm vec}$; $T_{\rm vec} \equiv T_{\rm full} - T_{\rm evo} =$ (4,242 sec) $-$ (2,227 sec) = (2,015 sec) in Table~\ref{tab:besttime}. The time $T_{\rm vec}$ comes mainly from the backward transformation subroutine ($T_{\rm BACK}$ = 1,892 sec in Table~\ref{tab:SEP_decomposition}), because the backward transformation subroutine in ELPA2 uses a characteristic two-step algorithm (see Sec. 4.3 of Ref.~\cite{ELPAReview}). \subsection{Benchmark with the matrix sizes of $M$=90,000, 22,500 \label{SEC-RESULT-MIDDLESIZE}} The benchmarks with the smaller matrix sizes of $M=90,000$ and 22,500 are also investigated. The maximum number of used processor nodes is $P_{\rm max}$ = 4,096, 1,369 and 256 on the K computer, FX10, and Altix, respectively. \footnote{ We observed on Altix that the \lq ELPA2' and \lq ELPA2$'$' SEP solvers required non-blocking communication requests beyond the default limit of $N_{\rm MPI\_MAX}=16,384$, and the job stopped with an MPI error message. We therefore increased the limit to $N_{\rm MPI\_MAX}=1,048,576$, the possible maximum of the machine, by the environment variable \lq MPI\_REQUEST\_MAX', and the calculations were completed. } Figures~\ref{fig:combined_90000_all_machines} and \ref{fig:combined_22500_all_machines} show the data with $M$=90,000 and $M$=22,500, respectively. The decomposed times are shown in Fig.~\ref{fig:combined_seprep_all_machines}. Table~\ref{tab:besttime} shows the best data for each workflow among the different numbers of used nodes. The results will help general simulation researchers to choose the solver and the number of used nodes, since the elapse times in Table~\ref{tab:besttime} are less than a half hour and such calculations are popular \lq regular class' jobs among systematic investigations. 
\footnote{ One should remember that supercomputers are usually shared by many researchers who run many calculations of similar problem sizes successively and/or simultaneously. } \begin{figure}[h] \centering \includegraphics[width=8.5cm]{combined_90000_all_machines.eps} \caption{Benchmark with the matrix size of $M$=90,000, (I) on the K computer for the (a) full (eigenpair) and (b) eigenvalue-only calculations, (II) on FX10 for the (c) full and (d) eigenvalue-only calculations, (III) on Altix for the (e) full and (f) eigenvalue-only calculations. The ideal speedup in parallelism is drawn as a dashed gray line. \label{fig:combined_90000_all_machines}} \end{figure} \begin{figure}[h] \centering \includegraphics[width=8.2cm]{combined_22500_all_machines.eps} \caption{Benchmark with the matrix size of $M$=22,500, (I) on the K computer for the (a) full (eigenpair) and (b) eigenvalue-only calculations, (II) on FX10 for the (c) full and (d) eigenvalue-only calculations, (III) on Altix for the (e) full and (f) eigenvalue-only calculations. The ideal speedup in parallelism is drawn as a dashed gray line. \label{fig:combined_22500_all_machines}} \end{figure} \begin{figure}[h] \centering \includegraphics[width=8.5cm]{combined_sepred_all_machines.eps} \caption{Decomposition analysis of the elapse time into those of the SEP solver and the reducer (I) on the K computer with (a) $M$=90,000 and (b) $M$=22,500, (II) on FX10 with (c) $M$=90,000 and (d) $M$=22,500, (III) on Altix with (e) $M$=90,000 and (f) $M$=22,500. The routines for the reducers are labeled by \lq (RED)'. The \lq ELPA2$'$' SEP solver is that without the SSE-optimized routine. 
The ideal speedup in parallelism is drawn as a dashed gray line.\label{fig:combined_seprep_all_machines}} \end{figure} Here, the results are discussed: (I) Table~\ref{tab:besttime} shows that the smallest elapse time in the full calculation appears among the workflows with the ELPA style reducer (the workflows $D, E, F$, and $G$) and that the smallest elapse time in the eigenvalue-only calculation appears with the workflow $D$. These features are consistent with the results in the previous subsection. (II) Unlike in the previous subsection, the speedup is sometimes saturated. An example is observed in Fig.~\ref{fig:combined_22500_all_machines} (a), the full calculation with $M$=22,500 on the K computer, where the elapse time of the workflow $F$ reaches a minimum as a function of $P$ at $P$=1,024. The decomposition analysis of Fig.~\ref{fig:combined_seprep_all_machines}(b) indicates that the saturation occurs both for the SEP solver and the reducer, which implies that improvements to both the SEP solver and the reducer are desirable. The saturated cases are marked in Table~\ref{tab:besttime} with the label \lq [S]'. \footnote{ No saturation is found on Altix, unlike on the K computer and FX10, partially because the maximum number of used nodes ($P_{\rm max}=256$) is smaller. } (III) Finally, the SSE-optimized routine in the workflow $D$ is compared with the generic routine in the workflow $D'$ in the case of $M=90,000$ on Altix with $P=256$. The SSE-optimized routine is prepared only for the backward transformation process. Since the process with the SSE-optimized routine or the generic one gives an elapse time of $T_{\rm BACK}$ = 929 sec or $T_{\rm BACK}$ = 1,872 sec, respectively, the process is accelerated by the SSE-optimized routine by a factor of (1,872 sec) / (929 sec) $\approx$ 2.02. As shown in Table~\ref{tab:besttime}, the full calculation is accelerated by the SSE-optimized routine by a factor of (2,621 sec) / (1,621 sec) $\approx$ 1.62. 
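The two speedup factors quoted above are mutually consistent in an Amdahl's-law sense: only the backward transformation is accelerated, so the expected full-calculation time is the unaccelerated remainder plus the accelerated BACK time. A quick check with the quoted numbers (a sketch, not part of the benchmark code; the small gap reflects variations in the other subprocedures):

```python
# Amdahl-style consistency check: only BACK is SSE-accelerated.
t_full_generic = 2621.0   # sec, workflow D' (generic routine), full calc.
t_back_generic = 1872.0   # sec, BACK with the generic routine
t_back_sse     = 929.0    # sec, BACK with the SSE-optimized routine
t_full_sse     = 1621.0   # sec, workflow D (SSE routine), measured

t_rest = t_full_generic - t_back_generic   # part unaffected by SSE: 749 sec
t_full_expected = t_rest + t_back_sse      # 749 + 929 = 1,678 sec

# The expected value agrees with the measured 1,621 sec within ~4%.
assert abs(t_full_expected - t_full_sse) / t_full_sse < 0.05
```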
\subsection{Benchmark for a million dimensional matrix \label{SEC-BENCH-MILLION}} Finally, the benchmark for a million dimensional matrix is discussed. A press release in 2013 \cite{EigenExaPressRelease} reported, as a world record, a benchmark of a million dimensional SEP carried out by EigenExa in approximately one hour on the full (82,944) nodes of the K computer. An eigenvalue problem with a million dimensional matrix ($M$=10$^6$) seems to be the practical limit of present supercomputers, owing to the $\mathcal{O}(M^3)$ operation cost. We calculated a million dimensional GEP in December 2014 on the full (4,800) nodes of Oakleaf-FX. \footnote{ We used FX10 rather than the K computer, because FX10 has a newer architecture with a lower B/F value, and the result on FX10 is speculated to be closer to that on a next-generation (exa-scale) machine. } Since our computational resource was limited, only one calculation was carried out, with the workflow $G$, because it gives the best data among those with $M=430,080$ in Table~\ref{tab:besttime}. The calculation finished in approximately half a day, as shown in Table~\ref{tab:besttime} ($T_{\rm full}$ = 39,919 sec and $T_{\rm evo}$ = 35,103 sec). The elapse time of the reducer ($T_{\rm RED}=T_{\rm full} - T_{\rm SEP}$ = 15,179 sec) is smaller than but comparable to that of the SEP solver ($T_{\rm SEP}=24,740$ sec). The benchmark proved that the present code qualifies as software applicable to massively parallel computation with up to a million dimensional matrix. \section{Discussions} \label{SEC-DISCUSSIONS} \subsection{Preparation of initial distributed data \label{SEC-INI-DATA}} In the benchmark, the initial procedures, including file reading, are carried out for the preparation of distributed data. Their elapse time is always small and is ignored in the previous section. 
\footnote{ In the case of the workflow $G$ on the K computer with $M$=430,080 and $P$=10,000, for example, the elapse time of the initial procedures is $T_{\rm ini}=123$ sec and is much smaller than that of the total computation ($T_{\rm tot}=2,734$ sec; see Table~\ref{tab:besttime}). It is noteworthy that the present matrices are sparse, as explained in Sec.~\ref{SEC-BACKGROUND}. } These procedures, however, may consume significant elapse times when the present solver is used as a middleware with real applications. The discussion of such cases is beyond the present scope, since they depend on the program structure of the real applications. Here, several comments are added for real application developers: In general, the matrix data cost is at most $\mathcal{O}(M^2)$ and the operation cost is $\mathcal{O}(M^3)$ in dense-matrix solvers, and one should consider the balance between them. In the case of $M=430,080$, for example, the required memory size for all the matrix elements is 8 B $\times \, M^2 \approx$ 1.5 TB, which cannot be stored on a single node of the K computer. Therefore, the data should always be distributed. In our real application (ELSES), the initial distributed data are prepared so that only the required elements are generated and stored on each node. \subsection{Data conversion overhead \label{SEC-DATA-CONV}} As explained in Sec.~\ref{SEC-MATH-FOUND}, several workflows require data conversion between distributed data formats, since ScaLAPACK and ELPA use the block cyclic distribution with a given block size $n_{\rm block}(>1)$ and EigenExa uses the cyclic distribution ($n_{\rm block} \equiv 1$). In the present benchmark, the block size in ScaLAPACK and ELPA was set to $n_{\rm block}=128$, a typical value. Consequently, the workflows $B$, $F$, $G$ require data conversion processes. In the present paper, the elapse time of the conversion procedures is included in the reducer part ($T_{\rm RED}$). 
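The difference between the two formats can be illustrated by how a global index is mapped to an owner process. The following sketch (a hypothetical helper for a 1-D distribution, not part of the benchmark code; the libraries distribute in 2-D) shows that $n_{\rm block}=1$ recovers the purely cyclic distribution, while a larger block size such as $n_{\rm block}=128$ assigns contiguous blocks of rows to each process:

```python
# Owner process and local index of a global matrix row under a 1-D
# block-cyclic distribution with n_proc processes (0-based indices).
# n_block = 1 is the cyclic distribution (EigenExa style); a larger
# n_block, e.g. 128, is the block cyclic distribution (ScaLAPACK/ELPA style).

def block_cyclic_owner(global_idx, n_block, n_proc):
    """Return (process rank, local index) for a 0-based global index."""
    block = global_idx // n_block          # index of the containing block
    rank = block % n_proc                  # blocks are dealt out cyclically
    local_block = block // n_proc          # blocks already held by this rank
    return rank, local_block * n_block + global_idx % n_block

# Cyclic (n_block = 1): global row 10 lives on rank 10 % 4 = 2, local row 2.
assert block_cyclic_owner(10, 1, 4) == (2, 2)
# Block cyclic (n_block = 128): rows 0..127 on rank 0, 128..255 on rank 1, ...
assert block_cyclic_owner(130, 128, 4) == (1, 2)
```

Converting between the two formats therefore amounts to a redistribution of every element according to these two maps, which is the $\mathcal{O}(M^2)$ process discussed below.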
Table~\ref{tab:conversion} shows the elapse times for the data conversion, in the cases with the maximum numbers of used nodes ($P=P_{\rm max}$) in the present benchmark. Two data conversion procedures are required: one is the conversion from the block cyclic distribution into the cyclic distribution, shown as \lq (b $\rightarrow$ 1)' in Table~\ref{tab:conversion}, and the other is the inverse process, shown as \lq (1 $\rightarrow$ b)'. The two procedures are both carried out by the \lq pdgemr2d' routine in ScaLAPACK. Table~\ref{tab:conversion} indicates that the overhead of the data conversion procedures is always small and is not the origin of the saturation. In general, the conversion requires an $\mathcal{O}(M^2)$ operation cost, while the calculation in a dense-matrix solver requires an $\mathcal{O}(M^3)$ operation cost. This fact implies the general efficiency of hybrid solvers, at least among dense-matrix solvers. \begin{table}[htb] \caption{The elapse times for data conversion; \lq (b $\rightarrow$ 1)', \lq (1 $\rightarrow$ b)' and \lq $T_{\rm RED}$' are the times in seconds for the conversion process from the block cyclic into the cyclic distribution, the inverse process, and the whole reducer procedure, respectively. The saturated data of $T_{\rm RED}$ are labelled by \lq [S]'. 
The \lq ratio' is ((b $\rightarrow$ 1) + (1 $\rightarrow$ b)) / $T_{\rm RED}$.} \label{tab:conversion} \begin{tabular}{cc|ccc|c} Size $M$ & Machine($P$) & (b $\rightarrow$ 1) & (1 $\rightarrow$ b) & $T_{\rm RED}$ & ratio[\%] \\ \hline 1,008,000 & FX10(4,800) & 51.4 & 51.7 & 8,208 & 1.26 \\ 430,080 & K(10,000) & 13.4 & 6.48 & 1,261 & 1.58 \\ 90,000 & K(4,096) & 6.89 & 0.797 & 124[S] & 6.21 \\ & FX10(1,369) & 1.89 & 0.973 & 84.0[S] & 3.41 \\ & Altix(256) & 2.01 & 2.02 & 394 & 1.02 \\ 22,500 & K(2,025) & 0.571 & 0.610 & 11.3[S] & 10.4 \\ & FX10(529) & 0.328 & 0.176 & 9.20 & 5.48 \\ & Altix(256) & 0.120 & 0.279 & 11.9 & 3.35 \\ \end{tabular} \end{table} \subsection{Decomposition analysis of the reducer} We focus on the decomposition analysis of the ELPA style reducer, since it is the fastest among the three libraries. Figure~\ref{fig:combined_430080_K} (c) shows the case on the K computer with $M$=430,080. The elapse times of the subprocedures of the ELPA style reducer are plotted: \lq ELPA($R_1$)' is the Cholesky factorization of Eq.~(\ref{EQ-CHOLE-DECMP}), \lq ELPA($R_2$)' is the explicit calculation of the inverse $R = U^{-1}$ of the Cholesky factor $U$, \lq ELPA($R_3$)' and \lq ELPA($R_4$)' are the successive matrix multiplications of Eq.~(\ref{EQ-A-ATAU}), and \lq ELPA($R_5$)' is the backward transformation of the eigenvectors by the matrix multiplication of Eq.~(\ref{EQ-BACKWARD-TRANS}). The elapse time of the Cholesky factorization in the ScaLAPACK style reducer is also plotted as \lq SCLA($R_1$)' as reference data. The same decomposition analysis is carried out for the other cases, as shown in Fig.~\ref{fig:ReducerDetail}. One can observe that the Cholesky factorization of the ELPA style reducer does not scale and is sometimes slower than that of the ScaLAPACK reducer. In particular, the saturation of the ELPA style reducer is caused by that of the Cholesky factorization in Fig.~\ref{fig:ReducerDetail} (a)(b)(c). 
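In serial, dense-matrix terms, the subprocedures $R_1$--$R_5$ amount to the textbook Cholesky-based reduction of the GEP $Av=\lambda Bv$ to a standard problem. The following NumPy sketch traces the same steps (a serial stand-in for the parallel library routines, with illustrative variable names; it is not the benchmark code):

```python
import numpy as np

# Serial sketch of the Cholesky-based reduction (R1-R5) of A v = lam B v.
rng = np.random.default_rng(0)
M = 50
A = rng.standard_normal((M, M)); A = (A + A.T) / 2            # symmetric A
B = rng.standard_normal((M, M)); B = B @ B.T + M * np.eye(M)  # SPD B

U = np.linalg.cholesky(B).T        # R1: Cholesky factorization B = U^T U
R = np.linalg.inv(U)               # R2: explicit inverse R = U^{-1}
A_std = R.T @ A @ R                # R3/R4: reduce to standard form A'
lam, y = np.linalg.eigh(A_std)     # SEP solver: A' y = lam y
v = R @ y                          # R5: backward transformation v = R y

# Each pair (lam[i], v[:, i]) solves the original GEP A v = lam B v.
residual = np.linalg.norm(A @ v - (B @ v) * lam)
assert residual < 1e-8 * np.linalg.norm(A)
```

In the parallel setting each of these five lines is a separate distributed routine, which is why their elapse times can be measured individually.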
The above observation implies that the reducer can be a serious bottleneck on next-generation (exa-scale) supercomputers, though it is not in the present benchmark. One possible strategy is the improvement of the Cholesky factorization for better scalability, and another is the development of a reducer without the Cholesky factorization, as in the EigenExa style reducer. \begin{figure}[htb] \centering \includegraphics[width=8.5cm]{combined_red_detail.eps} \caption{Decomposition analysis of the elapse times of the subprocedures of the ELPA style reducer and the Cholesky factorization in the ScaLAPACK style reducer (I) on the K computer with (a) $M$=90,000 and (b) $M$=22,500, (II) on FX10 with (c) $M$=90,000 and (d) $M$=22,500, (III) on Altix with (e) $M$=90,000 and (f) $M$=22,500. The ideal speedup in parallelism is drawn as a dashed gray line.} \label{fig:ReducerDetail} \end{figure} \section{Summary and future outlook \label{SEC-SUMMARY}} In summary, hybrid GEP solvers were constructed from the three parallel dense-matrix solver libraries of ScaLAPACK, ELPA and EigenExa. The benchmark was carried out with up to a million dimensional matrix on the K computer and other supercomputers. The hybrid solvers with ELPA and EigenExa give better benchmark results than the conventional ScaLAPACK library. The code was developed as a middleware and a mini-application and will appear online. Several issues are discussed. In particular, the decomposition analysis of the elapse time reveals a potential bottleneck on next-generation (exa-scale) supercomputers, which provides guidance for the future development of the algorithms and the codes. As a future outlook, the present code for the hybrid solvers is planned to be extended by introducing solvers with different mathematical foundations. A candidate is the parallel block Jacobi solver \cite{BLOCK-YACOBI-2012, BLOCK-YACOBI-2014}. 
Since that solver is applicable only to standard eigenvalue problems, the hybrid solver enables us to use it for generalized eigenvalue problems. \begin{acknowledgment} The authors thank Toshiyuki Imamura and Takeshi Fukaya of the RIKEN Advanced Institute for Computational Science (AICS) for fruitful discussions on EigenExa. The authors also thank Yusaku Yamamoto of The University of Electro-Communications for discussions on the parallel block Jacobi solver. This research is partially supported by Grants-in-Aid for Scientific Research (KAKENHI Nos. 25104718 and 26400318) from the Ministry of Education, Culture, Sports, Science and Technology (MEXT) of Japan. The K computer of RIKEN was used in the research projects hp140069, hp140218, and hp150144. The supercomputer Oakleaf-FX of the University of Tokyo was used in the research project 14-NA04 of the \lq Joint Usage/Research Center for Interdisciplinary Large-scale Information Infrastructures' in Japan, in the \lq Large-scale HPC Challenge' Project of the Information Technology Center, The University of Tokyo, and in the Initiative on Promotion of Supercomputing for Young or Women Researchers, Supercomputing Division, Information Technology Center, The University of Tokyo. We also used the supercomputer SGI Altix ICE 8400EX at the Institute for Solid State Physics of the University of Tokyo and the supercomputers at the Research Center for Computational Science, Okazaki. \end{acknowledgment}
\section{Introduction}\label{section1} \subsection{What Are Quantum Groups?}\label{motivation} An important problem in the theory of quantum groups is to give some definition of a class of these objects that captures known series of quantum groups, such as the quantum enveloping algebras $U_q(\fr{g})$ of \cite{Dri}, and their finite-dimensional analogues, as examples. This was, for example, formulated in \cite[Problem II.10.2]{BG}: \begin{quotation} \begin{it} ``Given a finite-dimensional Lie algebra $\mathfrak{g}$, find axioms for Hopf al\-ge\-bras to qualify as quantized enveloping algebras of this particular $\mathfrak{g}$." \end{it} \end{quotation} A possible hint to the structure of quantum groups is that the quantum envel\-oping algebras $U_q(\mathfrak{g})$ (as well as the small quantum groups $u_q(\fr{g})$ and multiparameter versions) are \emph{pointed Hopf algebras}. Such Hopf algebras were studied by several authors (see e.g. \cite{AS}). Classification results as in \cite{AS2} suggest a strong resemblance of all finite-dimensional pointed Hopf algebras over abelian groups with small quantum groups. Another paper \cite{AS3} gives a characterization of quantum groups at generic pa\-ram\-e\-ters using pointed Hopf algebras of finite Gelfand--Kirillov dimension with infinitesimal braiding of positive generic type. A further hint to the structure of quantum groups is that they can be decomposed in a triangular way (via the PBW theorem) as \[ U_q(\mathfrak{g})=U_q(\fr{n}_+)\otimes k\mathbb{Z}^n\otimes U_q(\fr{n}_-). \] Here, the positive and negative parts are perfectly paired braided Hopf algebras, and the relation with the group algebra $k\mathbb{Z}^n$ is governed by semidirect product relations. The positive (and negative) parts are so-called \emph{Nichols algebras}. A third aspect --- observed already in the original paper \cite{Dri} --- is that quantum groups are (quotients of) \emph{quantum} or \emph{Drinfeld doubles}. 
It was shown in \cite{Maj2} that $U_q(\fr{g})$ is in fact a \emph{braided} Drinfeld double (which is referred to as a \emph{double bosonization} there). It was proved in \cite{BW} that two-parameter quantum groups are also Drinfeld doubles. In this paper, we aim to provide an axiomatic approach to the definition of (multiparameter) quantum groups by combining the pointed Hopf algebra and the triangular decomposition approaches. Under the additional assumption of what we call a triangular decomposition of \emph{weakly separable type} over a group, the only indecomposable examples are close generalizations of multiparameter quantum groups. In particular, assuming further non-degeneracy, they are examples of a more general version of braided Drinfeld doubles, which we refer to as \emph{asymmetric} braided Drinfeld doubles. Further, under certain assumptions on the group and the parameters, we can recover Lie algebras from these Hopf algebras, after introducing a suitable integral form. \subsection{This Paper's Results} This paper starts by recalling the necessary technical background, including a brief overview on classification results of finite-dimensional pointed Hopf algebras, as well as structural results by \cite{BB} on algebras with triangular decomposition, in Section~\ref{background}. Next, we give the definition of a bialgebra with a triangular decomposition over a Hopf algebra $H$ in Section~\ref{section1.5}. This adapts the two-step approach used for algebras in \cite{BB} to the study of bialgebras. Namely, we first consider the \emph{free} case of a bialgebra $T(V)\otimes H\otimes T(V^*)$, where the positive and negative parts ($T(V)$, respectively $T(V^*)$) are tensor algebras, and then specify by what ideals (called \emph{triangular} Hopf ideals) we can take the quotient. The core of this paper is formed by a partial classification of bialgebras with triangular decomposition over a group algebra $kG$. 
We assume that $V$ has one-dimensional homogeneous components (weak separability). We again proceed in two steps. First, we determine all pointed bialgebras with free positive and negative part over $kG$ in Section \ref{freeclassification}, and then look at pairs of ideals $I$, $I^*$ such that the quotient $A/{( I, I^*)}$ is still a bialgebra in Section \ref{quotientsection}. We find that indecomposable examples are automatically pointed Hopf algebras, and that strong commutativity conditions are imposed on the group $G$. Multiparameter quantum groups fit into this framework. Indeed, the only possible commutator relations (\ref{commrel}) closely resemble those of multiparameter quantum groups: \begin{align} [f_i,v_j]&=\gamma_{ij}(k_j-l_i)\in kG, &\forall i,j=1,\ldots,n. \end{align} We further observe that there exists a natural generalization of the definition of a braided Drinfeld double to the setting of braided Hopf algebras in the category of Yetter--Drinfeld modules (YD-modules) over $H$. For this, the base Hopf algebra $H$ does not need to be quasitriangular. We need two braided Hopf algebras which are only required to be dually paired when considered as braided Hopf algebras in the category of modules (rather than YD-modules). That is, the requirement that is weakened compared to the definition of a braided Drinfeld double (as in \cite{Maj2} or \cite{Lau}) is that the comodule structures do not need to be dually paired. We refer to this generalization as the \emph{asymmetric braided Drinfeld double}. It gives a natural way of producing Hopf algebras with triangular decomposition --- which are not necessarily quasitriangular. We show in Theorem \ref{drinfeldtheorem} that the Hopf algebras arising in the classification in Theorem~\ref{mainclassificationthm} are of this form (provided that the parameters $\gamma_{ii}$ are non-zero) and that $G$ has to be abelian in this case. 
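As a concrete instance of such commutator relations (a standard computation in $U_q(\mathfrak{sl}_2)$, stated here only as an illustration and with one common choice of conventions), the relation $[E,F]=(K-K^{-1})/(q-q^{-1})$ is exactly of this shape:

```latex
\[
  [F,E]\;=\;\frac{K^{-1}-K}{q-q^{-1}}\;=\;\gamma\,(k-l),
  \qquad\text{with}\quad
  \gamma=\frac{1}{q-q^{-1}},\quad k=K^{-1},\quad l=K\;\in\;k\mathbb{Z},
\]
```

so the parameter $\gamma_{11}$ and the grouplike elements $k_1$, $l_1$ of the general relation specialize to $(q-q^{-1})^{-1}$, $K^{-1}$ and $K$, respectively.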
In Section \ref{liealgebrasection} we show that from these asymmetric braided Drinfeld doubles of separable type we can recover Lie algebras, provided that there exists a well-defined morphism of rings to $\mathbb{Z}$ when setting the parameters equal to 1. Hence, in the spirit of the question asked in Section \ref{motivation}, we can relate the outcome of our classification back to Lie algebras, which are always generated by Lie subalgebras isomorphic to $\mathfrak{sl}_2$. Here is an overview of the increasingly strong assumptions on the Hopf algebras $A$ and $H$ used in the classification: \begin{itemize} \item Section~\ref{section1.5}: $H$ any Hopf algebra over a field $k$, $A$ a bialgebra with triangular decomposition; \item Section~\ref{section2}: $H=kG$, $A$ a bialgebra with triangular decomposition; \begin{itemize} \item Section~\ref{preliminaryobs}--\ref{freeclassification}: $A$ is of weakly separable type and indecomposable after Definition~\ref{indecprop}; \item Section~\ref{quotientsection}: $A$ is indecomposable and non-degenerate of separable type; \item Section~\ref{liealgebrasection}: In addition to the assumptions of Section~\ref{quotientsection}, we require that $\operatorname{char} k=0$, and that setting the parameters equal to 1 gives a well-defined homomorphism of rings to $\mathbb{Z}$. \end{itemize} \end{itemize} The final section~\ref{multiparametersection} contains different classes of indecomposable pointed Hopf algebras with triangular decomposition over a group $kG$ that arise as examples in the main classification. The first class we discuss is that of the multiparameter quantum groups $U_{\lambda,\underline{p}}(\mathfrak{gl}_n)$ introduced by \cite{FRT} (adapting the presentation in \cite{CM}). They are asymmetric braided Drinfeld doubles, which generalizes the result of \cite{BW} showing that two-parameter quantum groups are Drinfeld doubles. 
In Section \ref{section3} we bring results of \cite{Ros} on growth conditions (finite Gelfand--Kirillov dimension) and the classification of Nichols algebras from \cite{AS3} into the picture. We use these results to characterize the Drinfeld--Jimbo type quantum groups at generic parameters $q$ within the classification of this paper, under the additional assumption that the triangular decomposition is what we call \emph{symmetric}. Further, a class of finite-dimensional pointed Hopf algebras by Radford can naturally be included as examples in this framework (Section \ref{radford}). To conclude this paper, we suggest in Section \ref{conclusion} that future research could focus on the search for Hopf algebras with triangular decomposition over other Hopf algebras $H$ (replacing the group algebra $kG$). This might give interesting monoidal categories, or even knot invariants, in other contexts. As the first --- most classical --- example, we take $H$ to be a polynomial ring $k[x_1,\ldots, x_n]$. In this case, the only examples are universal enveloping algebras of Lie algebras. \subsection{Notations and Conventions} In this paper, adapted Sweedler's notation (see e.g. \cite[1.2]{Swe}) is used to denote coproducts and coactions, omitting summation. Unless otherwise stated, we work with Hopf algebras over an arbitrary field $k$. A Hopf algebra is always assumed to have an invertible antipode $S$. The category of left YD-modules (or \emph{crossed modules}, cf. \cite[Proposition 7.1.6]{Maj1}) over a Hopf algebra $H$ is denoted by $\leftexpsub{H}{H}{\mathcal{YD}}$, while left modules are denoted by $\lmod{H}$, and right modules by $\rmod{H}$. We denote the module spanned by generators $S$ over a commutative ring $R$ by $R\langle S \rangle$, while $R[S]$ denotes the $R$-algebra generated by elements $S$ (subject to some specified relations). Groups generated by elements of a set $S$ are denoted by $\langle S\rangle$, while ideals are denoted using $(~)$. 
\section{Background}\label{background} \subsection{Pointed Hopf Algebras} Let the coproduct $\Delta\colon H\to H\otimes H$ make $H$ a coalgebra over a field $k$. We can consider \emph{simple} subcoalgebras $A\leq H$. That is, $\Delta(A)\leq A\otimes A$ and there are no proper subobjects of this type in $A$. A basic observation is that if $\dim A=1$, then $A$ can be written as $kg$ for a generator $g\in H$ such that $\Delta(g)=g\otimes g$. Such elements are called \emph{grouplike}. Indeed, if $H$ is a Hopf algebra, then the set of all grouplike elements $G(H)$ has a group structure. A Hopf algebra is \emph{pointed} if all simple subcoalgebras are one-dimensional. This notion can be traced back to \cite[8.0]{Swe}, and classifying all finite-dimensional pointed Hopf algebras can be taken as a first step in the classification of all finite-dimensional Hopf algebras (see e.g. \cite{And} for a recent survey). In the late 1980s and early 1990s, important classes of pointed Hopf algebras were discovered with the introduction of the quantum groups (and their small analogues). Due to the vast applications of and attention to these Hopf algebras in the literature, the study of pointed Hopf algebras has become an important algebraic question. \subsection{Link-Indecomposability}\label{indecomposability} In the early 1990s, Montgomery asked which groups may occur as $G(H)$, where $H$ is an \emph{indecomposable} pointed Hopf algebra. In \cite{Mon2}, an appropriate notion of indecomposability is discussed in different ways. We will briefly recall the description in terms of \emph{link-indecomposability}, which is equivalent to indecomposability as a coalgebra and to indecomposability of the Ext-quiver of simple comodules. Given a pointed Hopf algebra $H$, we define a graph $\Gamma_H$ whose vertices are the simple subcoalgebras of $H$ (that is, the grouplike elements). There is an edge $h\to g$ if there exists a $(g,h)$-skew-primitive element $v\in H$, i.e. 
$\Delta(v)=v\otimes g+h\otimes v$, which is not contained in $kG(H)$. We say that $H$ is \emph{indecomposable} if $\Gamma_H$ is connected. As an example, group algebras $kG$ are only indecomposable if $G=1$. The quantum group $U_q(\mathfrak{sl}_2)$ is indecomposable if the coproducts are e.g. defined as $\Delta(E)=E\otimes 1 + K\otimes E$ and $\Delta(F)=F\otimes 1 + K^{-1}\otimes F$. There are other versions of the coproduct which are not indecomposable (see \cite{Mon2}). \subsection{Classification Results for Pointed Hopf Algebras}\label{classificationsurvey} It was understood early that some pointed Hopf algebras can be obtained as bosonizations $A=\mathcal{B}(V)\rtimes kG$ of so-called \emph{Nichols} (or \emph{Nichols-Woronowicz}) algebras $\mathcal{B}(V)$ associated to YD-modules over a group $G$ (see e.g. \cite[Section~2]{AS} for definitions). In this case, the coproducts are given by $\Delta(v)=v^{(0)}\otimes v^{(-1)}+ 1 \otimes v$ using Sweedler's notation. That is, if $v$ is a homogeneous element, then $\Delta(v)=v\otimes g+1\otimes v$ for the degree $g\in G(A)$ of $v$ and $A$ is indecomposable over the group generated by $g\in G$ with $V_g\neq 0$. Thus, the question of finding finite-dimensional pointed Hopf algebras is linked to finding finite-dimensional Nichols algebras.\footnote{However, a pointed Hopf algebra is not necessarily a bosonization of this form. Important tools available are the coradical filtration (see e.g. \cite[5.2]{Mon}) and the \emph{lifting method} of Andruskiewitsch and Schneider \cite[Section~5]{AS}.} Although both questions remain open in general, vast progress on classifying pointed Hopf algebras has been made in a series of papers by Andruskiewitsch and Schneider (see \cite{AS,AS2}) for abelian groups $G$, and more recently for symmetric and alternating groups \cite{AFGV}, or groups of Lie type \cite{ACG1,ACG2}. See \cite{And} for more detailed references. 
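As the simplest instance of the bosonization construction above (a standard rank-one example, included only as an illustration): let $V=kv$ be one-dimensional with braiding $\Psi(v\otimes v)=q\,v\otimes v$ for $q$ a primitive $N$-th root of unity. Then

```latex
\[
  \mathcal{B}(V)\;=\;k[v]/(v^{N}),
  \qquad
  \mathcal{B}(V)\rtimes k(\mathbb{Z}/N\mathbb{Z})
\]
```

is the $N^{2}$-dimensional Taft algebra, with $gv=qvg$ and $\Delta(v)=v\otimes g+1\otimes v$ for a generator $g$ of $\mathbb{Z}/N\mathbb{Z}$, matching the coproduct convention above.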
Let us briefly recall the classification results of \cite{AS2} over an algebraically closed field $k$ of characteristic zero, in order to provide a basis for comparison with this paper's classification in Section~\ref{section2}. To fix notation, let $\mathcal{D}$ denote a \emph{finite Cartan datum}. That is, a finite abelian group $\Gamma$, a Cartan matrix $A=(a_{ij})$ of dimension $n\times n$ with a choice of generating group elements $g_i$, and characters $\chi_i$ for $i=1,\ldots,n$. Then define $q_{ij}:=\chi_j(g_i)$ and impose the conditions that \begin{equation}\label{cartandatum} q_{ij}q_{ji}=q_{ii}^{a_{ij}}, \quad q_{ii}\neq 1. \end{equation} We can associate to the Cartan matrix $A$ a root system $\Phi$ (with positive roots $\Phi^+$). The simple roots $\alpha_i$ of $\Phi$ are indexed by $i=1,\ldots ,n$. Denote by $\chi$ the set of connected components of the corresponding Dynkin diagram, and by $\Phi_J$ the root system restricted to the component $J\in \chi$, and write $i\sim j$ if $i$ and $j$ are in the same connected component. Denote further \[ g_\alpha:= \prod_{i=1}^n{g_i^{n_i}}, \qquad \chi_\alpha:= \prod_{i=1}^n\chi_{i}^{n_i}, \qquad \text{for a root } \alpha=\sum_{i=1}^n{n_i\alpha_i}. \] To state the classification of finite-dimensional pointed Hopf algebras over abelian groups, some technical assumptions need to be made: \begin{enumerate} \item[(a)] Assume that the parameters $q_{ii}$ are roots of unity of \emph{odd} order $N_i$. \item[(b)] The order $N_i=N_J$ is constant on each connected component $J$, for $i\in J$. \item[(c)] If $J\in \chi$ is of type $G_2$, then 3 does not divide $N_J$. \end{enumerate} To construct a pointed Hopf algebra from a Cartan datum $\mathcal{D}$, we need two families of parameters: \begin{enumerate} \item[(d)] Let $\lambda=(\lambda_{ij})$ be an $n\times n$-matrix of elements in $k$ such that $\lambda_{ij}=0$ whenever $i \nsim j$ and either $g_ig_j=1$ or $\chi_i\chi_j\neq \varepsilon$.
\item[(e)] Further, let $\mu=(\mu_\alpha)_{\alpha\in\Phi^+}$ be a family of elements in $k$ such that $\mu_{\alpha}=0$ whenever $\alpha\in \Phi^+_J$ for some $J\in \chi$ with $g_{\alpha}^{N_J}=1$ or $\chi_{\alpha}^{N_J}\neq \varepsilon$. \end{enumerate} \begin{definition}[{\cite[5.4]{AS2}}]\label{asform} Given a Cartan datum $\mathcal{D}$ with families of parameters $\lambda, \mu$ as above, there is a Hopf algebra $u=u(\mathcal{D},\lambda,\mu)$. The algebra $u$ is generated by elements $g\in \Gamma$ and $x_i$ for $i=1,\ldots, n$ (for the definition of the elements $u_\alpha(\mu)\in k\Gamma$, $\alpha\in \Phi^+$, see \cite[2.14]{AS2}), subject to the relations \begin{align} gx_i&=\chi_i(g)x_ig,\qquad &\text{for all $i$, $g\in \Gamma$},\label{asrel1}\\ \underline{\operatorname{ad}}(x_i)^{1-a_{ij}}(x_j)&=0, \qquad &\text{for $i\neq j$, $i\sim j$},\label{asrel2}\\ \underline{\operatorname{ad}}(x_i)(x_j)&=\lambda_{ij}(1-g_ig_j), \qquad &\text{for all $i<j$, $i\nsim j$},\label{asrel3}\\ x_\alpha^{N_J}&=u_\alpha(\mu), \qquad &\text{for all $\alpha\in \Phi_J^+$, $J\in \chi$}.\label{asrel4} \end{align} Here, $\underline{\operatorname{ad}}(x)(y)$ is the \emph{braided} commutator $xy-m\circ \Psi(x\otimes y)$, where $m$ denotes multiplication and $\Psi$ is the YD-braiding. The comultiplication is given by $\Delta(x_i)=x_i\otimes 1 + g_i\otimes x_i.$ \end{definition} \begin{theorem}[{\cite[0.1]{AS}}] Under the above assumptions (a)--(e) on a Cartan datum $\mathcal{D}$ with parameters $\lambda$, $\mu$, the Hopf algebra $u(\mathcal{D},\lambda, \mu)$ is indecomposable and pointed with $G(u)=\Gamma$, and has finite dimension. Moreover, if $\abs{G}$ is not divisible by $2$, $3$, $5$, or $7$, then any indecomposable finite-dimensional pointed Hopf algebra over $kG$, where $G$ is abelian, and $k=\overline{k}$, $\operatorname{char} k=0$, is of this form.
\end{theorem} \subsection{Algebras with Triangular Decomposition (Free Case)} A triangular decomposition of an algebra means that it admits an intrinsic PBW-type decomposition, similar to that of the universal enveloping algebra of a Lie algebra. This is a common feature of quantum groups and rational Cherednik algebras, but it is shared more generally by all braided Drinfeld and Heisenberg doubles (cf. \cite[Section~3]{Lau}). Here, we use the definitions introduced in \cite{BB} to study such algebras with triangular decomposition (so-called \emph{braided doubles}). From a deformation-theoretic point of view, a triangular decomposition can be viewed as follows. Let $V$, $V^*$ be dually paired finite-dimensional vector spaces and $H$ a Hopf algebra over a field $k$, such that $V$ is a left $H$-module and $V^*$ carries the dual right $H$-action. That is, for the evaluation map $\langle ~,~\rangle\colon V^*\otimes V\to k$, we have \begin{equation} \langle f\triangleleft h,v\rangle=\langle f, h\triangleright v\rangle, \qquad \forall f\in V^*, v\in V, h\in H. \end{equation} Now define $A_0(V,V^*)$ to be the algebra on $T(V)\otimes H \otimes T(V^*)$ with relations \begin{equation}\label{boson} fh=h_{(1)}(f\triangleleft h_{(2)}), \qquad hv=(h_{(1)}\triangleright v) h_{(2)}, \end{equation} (i.e. the bosonizations $T(V)\rtimes H$ and $H\ltimes T(V^*)$ are subalgebras), and $[f,v]=0$. In \cite[3.1]{BB}, a family of deformations of $A_0(V,V^*)$ over $\operatorname{Hom}_k(V^*\otimes V,H)$ is defined. The algebra $A_\beta(V,V^*)$, for a parameter $\beta\colon V^*\otimes V\to H$, is defined using the same generators in $V$, $V^*$ and $H$ with the same bosonization relations, but with the commutator relations \begin{equation} [f,v]=\beta(f\otimes v).
\end{equation} In order to obtain flat deformations, we restrict to maps $\beta$ such that multiplication \begin{align*} m\colon T(V)\otimes H\otimes T(V^*)&\stackrel{\sim}{\longrightarrow} A_{\beta}(V,V^*),&v\otimes h\otimes f&\mapsto vhf, \end{align*} gives an isomorphism of $k$-vector spaces. \begin{definition} In the case where $m$ gives such an isomorphism of $k$-vector spaces, we say that $A_\beta(V,V^*)$ is a \emph{free braided double}. \end{definition} \begin{theorem}[{\cite[Theorem 3.3]{BB}}]\label{bbthm} The algebra $A_\beta(V,V^*)$ is a free braided double if and only if there exists a $k$-linear map $\delta\colon V\to H\otimes V$, $\delta(v)=v^{[-1]}\otimes v^{[0]}$, which is YD-compatible with the $H$-action on $V$, i.e. for any $h\in H$, \begin{equation}\label{ydcond} h_{(1)}v^{[-1]}\otimes (h_{(2)}\triangleright v^{[0]})=(h_{(1)}\triangleright v)^{[-1]}h_{(2)}\otimes (h_{(1)}\triangleright v)^{[0]}. \end{equation} In this case, we call $(V,\delta)$ a \emph{quasi-YD-module} and we have \begin{equation}\label{commrel} [f,v]=\beta(f\otimes v)=v^{[-1]}\langle f,v^{[0]}\rangle. \end{equation} \end{theorem} Note that $A_{\beta}(V,V^*)$ is a graded algebra where $\deg v=1$, $\deg h=0$, and $\deg f=-1$, for all $v\in V$, $h\in H$, and $f\in V^*$. \subsection{Triangular Ideals}\label{triangularideals} So far, the braided Hopf algebras $T(V)$ and $T(V^*)$ were assumed to be free. We can bring additional relations into the picture, defining \emph{braided doubles} that are not necessarily free. Let $I\triangleleft T(V)$ and $I^*\triangleleft T(V^*)$ be ideals. We want to determine when the quotient map \[ m\colon T(V)/I\otimes H \otimes T(V^*)/{I^*}\stackrel{\sim}{\longrightarrow} A_\beta(V, V^*)/{( I, I^*)} \] is still a graded isomorphism of $k$-vector spaces. In \cite[Appendix~A]{BB} it is shown that this is the case if and only if $J:=( I, I^*)$ is a so-called \emph{triangular ideal}.
That is, $J=I\otimes H \otimes T(V^*)+T(V)\otimes H \otimes I^*$, where $I\triangleleft T^{>1}(V)$, $I^*\triangleleft T^{>1}(V^*)$ are homogeneously generated ideals such that $I$ and $I^*$ are $H$-invariant and \begin{equation}\label{idealcond} T(V^*)I\leq J, \qquad I^*T(V)\leq J. \end{equation} This condition is equivalent to the commutators $[f,I]$ and $[I^*,v]$ being contained in $J$ for all elements $v\in V$, $f\in V^*$. For each quasi-YD-module, there exists a unique largest triangular ideal $I_{\op{max}}$, and thus a unique smallest quotient, referred to as the \emph{minimal braided double} of $V$. If $(V,\delta)$ is a YD-module, then the quotient $T(V)/{I_{\op{max}}}$ is the Nichols algebra $\mathcal{B}(V)$ of $V$, and the braided double on $\mathcal{B}(V)\otimes H \otimes\mathcal{B}(V^*)$ is a generalization of the Heisenberg double, a so-called \emph{braided Heisenberg double}. For the purpose of this paper, we need ideals $I$ such that $T(V)/I$ is a braided bialgebra, where $V$ is a YD-module. That is, not a bialgebra object in the category of $k$-vector spaces but in the category of YD-modules over $kG$ (see e.g. \cite[1.2--1.3]{AS}). In fact, if $I$ is a homogeneously generated ideal in $T^{>1}(V)$ which is a coideal and a YD-submodule, then $T(V)/I$ is a braided Hopf algebra. We denote the collection of such ideals by $\mathcal{I}_V$. In particular, $I_{\op{max}}\in \mathcal{I}_V$, as the Nichols algebra $\mathcal{B}(V)$ is a braided Hopf algebra. \section{Hopf Algebras with Triangular Decomposition}\label{section1.5} In this section, we let $k$ be a field of arbitrary characteristic and $H$ a Hopf algebra over $k$. We introduce a notion of a Hopf algebra with triangular decomposition over $H$. \subsection{Definitions}\label{definitions} We refer to the grading of a braided double $T(V)/I\otimes H\otimes T(V^*)/{I^*}$ given by \[ \deg v=1,\quad \deg f=-1, \quad \deg h=0, \qquad \forall v\in V, ~f\in V^*, ~h\in H, \] as the \emph{natural grading}.
We want to study Hopf algebras with triangular decomposition preserving this grading. \begin{definition}\label{triangulardecompdefn} A bialgebra (or Hopf algebra) $A$ with \emph{triangular decomposition} over a Hopf algebra $H$ is a braided double $A=T(V)/I\otimes H\otimes T(V^*)/{I^*}$ which is a bialgebra (respectively Hopf algebra) such that \begin{align} \bullet~&\text{$H$ is a subcoalgebra of $A$ with respect to the original coproduct of $H$},\label{assum0}\\ \begin{split}\bullet~&\text{the subspaces $T(V)\otimes H$ and $H\otimes T(V^*)$ are closed under the}\\&\text{coproduct of $A$,} \end{split}\label{assum1}\\ \begin{split}\bullet~&\text{the coproduct and counit are morphisms of graded algebras}\\ &\text{for the natural grading.} \end{split}\label{assum2} \end{align} (In the Hopf case, the antipode $S$ is required to preserve the natural grading and the subspaces $T(V)\otimes H$ and $H\otimes T(V^*)$.) \end{definition} Note that (\ref{assum2}) implies that $\varepsilon(v)=\varepsilon(f)=0$ for all $v\in V$, $f\in V^*$. We further observe that assumptions (\ref{assum1}) and (\ref{assum2}), combined with the counit property, give that $\Delta(V)\leq H\otimes V+V\otimes H$ as well as $\Delta(V^*)\leq H\otimes V^*+V^*\otimes H$. Consider the compositions $\delta_l$ and $\delta_r$ of $\Delta$ with the projections in \[ \xymatrix{ &\ar[dl]_{\delta_l}\ar[d]^{\Delta} V \ar[rd]^{\delta_r}&\\ H\otimes V&\ar@{->>}[l]^-{p_1}H\otimes V\oplus V\otimes H\ar@{->>}[r]_-{p_2}&V\otimes H .} \] The coalgebra axioms imply that $\delta_l$ and $\delta_r$ are left (respectively right) $H$-coactions. In particular, as the semidirect product relations in $A$ are preserved by $\Delta$, $\delta_l$ and $\delta_r$ are left (respectively right) YD-compatible with the given actions of $H$ on $V$ (right action via antipode). Similarly, we can obtain a left and right YD-module structure over $H$ on the dual $V^*$ from the coproduct. The corresponding coactions are denoted by $\delta_l^*$ and $\delta_r^*$.
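For later orientation (this is the special case studied in Section~\ref{section2}), it may help to spell out these coactions when $H=kG$ is a group algebra: a right coaction $\delta_r\colon V\to V\otimes kG$ is then the same datum as a $G$-grading
\[
V=\bigoplus_{g\in G}V_g, \qquad \delta_r(v)=v\otimes g \quad \text{for } v\in V_g,
\]
and YD-compatibility with the $G$-action amounts to $h\triangleright V_g\subseteq V_{hgh^{-1}}$ for all $g,h\in G$; in particular, for abelian $G$ each homogeneous component is $G$-stable. The left coaction $\delta_l$ corresponds to a second, in general different, $G$-grading of $V$.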
\begin{definition} Given a bialgebra $A$ with triangular decomposition over $H$, we define the \emph{right (respectively, left) YD-structure} of $A$ to be $\delta_r$ (respectively, $\delta_l$) together with the given $H$-actions. We refer to $\delta_r^*$ and $\delta_l^*$ (with the dual $H$-actions) as the right and left \emph{dual} YD-structures. \end{definition} To fix Sweedler's notation for the different coactions, denote $\delta_r(v)=v^{(0)}\otimes v^{(-1)}$ and $\delta_l(v)=v^{\overline{(-1)}}\otimes v^{\overline{(0)}}$, and use similar notations for $f\in V^*$. We will first reformulate, in the free case, the definition of a bialgebra with triangular decomposition in terms of the conditions (\ref{eqn1})--(\ref{eqn5}) on the YD-structures of $A$. \begin{lemma}\label{hopflemma} A bialgebra $A$ with triangular decomposition over $H$ is a Hopf algebra with triangular decomposition if and only if there exists a $k$-linear map $S\colon V\oplus V^*\to V\otimes H\oplus V^*\otimes H$ such that \begin{align} \begin{split} S(v^{(0)})v^{(-1)}+(S(v^{\overline{(-1)}})_{(1)}\triangleright v^{\overline{(0)}})S(v^{\overline{(-1)}})_{(2)}&=0, \\ v^{(0)}S(v^{(-1)})+({v^{\overline{(-1)}}}_{(1)}\triangleright S(v^{\overline{(0)}})){v^{\overline{(-1)}}}_{(2)}&=0, \end{split}&\forall v\in V,\label{antipodecond1}\\ \begin{split} f^{\overline{(-1)}}S(f^{\overline{(0)}})+S(f^{(-1)})_{(1)}(f^{(0)}\triangleleft S(f^{(-1)})_{(2)})&=0,\\ S(f^{\overline{(-1)}})f^{\overline{(0)}}+{f^{(-1)}}_{(1)}(S(f^{(0)})\triangleleft {f^{(-1)}}_{(2)})&=0, \end{split}&\forall f\in V^*.\label{antipodecond2} \end{align} In this case, $S$ extends uniquely to an antipode on all of $A$. \end{lemma} \begin{proof} This follows (under use of the semidirect product relations) by restating the antipode axioms for the coproduct of a Hopf algebra with triangular decomposition, in which the coproducts have the form $\Delta(v)=v^{(0)}\otimes v^{(-1)}+v^{\overline{(-1)}}\otimes v^{\overline{(0)}}$.
Note that $\varepsilon(v)=0$ as we require the counit to be a morphism of graded algebras. \end{proof} \subsection{The Free Case}\label{freesection} Let $A$ be a \emph{free} braided double, i.e. $A=T(V)\otimes H\otimes T(V^*)$. We can now state necessary and sufficient conditions on the YD-structures of $A$ to make the algebra $A$ a bialgebra with triangular decomposition. In the following, we stick to the notation of \cite[Definition~2.1]{BB}, denoting the quasi-coaction determining the commutator relations between elements of $V$ and $V^*$ by $\delta(v)=v^{[-1]}\otimes v^{[0]}$, for $v\in V$. \begin{lemma} A free braided double $A$ on $T(V)\otimes H\otimes T(V^*)$ is a bialgebra with triangular decomposition if and only if there exist YD-structures $\delta_l$, $\delta_r$, $\delta_l^*$, and $\delta_r^*$ such that the following conditions hold for $v\in V$, $f\in V^*$: \begin{align} (f^{(0)}\triangleleft {v^{\overline{(-1)}}})\otimes ({f^{(-1)}}\triangleright v^{\overline{(0)}})&=f\otimes v,\label{eqn1}\\ ({f^{\overline{(-1)}}}\triangleright v^{(0)})\otimes (f^{\overline{(0)}}\triangleleft {v^{(-1)}})&=v\otimes f,\label{eqn2}\\ v^{(0)}f^{(0)}\otimes (f^{(-1)}v^{(-1)}-v^{(-1)}f^{(-1)})&=0,\label{eqn3}\\ (f^{\overline{(-1)}}v^{\overline{(-1)}}-v^{\overline{(-1)}}f^{\overline{(-1)}})\otimes v^{\overline{(0)}}f^{\overline{(0)}}&=0,\label{eqn4}\\ \begin{aligned}&v^{(0)[-1]}\langle f^{(0)}, v^{(0)[0]}\rangle\otimes f^{(-1)}v^{(-1)}\\&+ f^{\overline{(-1)}}v^{\overline{(-1)}}\otimes v^{\overline{(0)}[-1]}\langle f^{\overline{(0)}}, v^{\overline{(0)}[0]}\rangle\end{aligned}&= {v^{[-1]}}_{(1)}\otimes {v^{[-1]}}_{(2)}\langle f, v^{[0]} \rangle. \label{eqn5} \end{align} \end{lemma} \begin{proof} The conditions are easily checked --- under use of the relations in $A$ and the PBW theorem --- to be equivalent to the requirement that (\ref{commrel}) is preserved by $\Delta$.
This gives the relations (\ref{eqn3})--(\ref{eqn5}), as well as \begin{align*} {v^{\overline{(-1)}}}_{(1)}(f^{(0)}\triangleleft {v^{\overline{(-1)}}}_{(2)})\otimes ({f^{(-1)}}_{(1)}\triangleright v^{\overline{(0)}}){f^{(-1)}}_{(2)}&=v^{\overline{(-1)}}f^{(0)}\otimes v^{\overline{(0)}}f^{(-1)},\\ ({f^{\overline{(-1)}}}_{(1)}\triangleright v^{(0)}){f^{\overline{(-1)}}}_{(2)}\otimes {v^{(-1)}}_{(1)}(f^{\overline{(0)}}\triangleleft {v^{(-1)}}_{(2)})&=v^{(0)}f^{\overline{(-1)}}\otimes v^{(-1)}f^{\overline{(0)}}. \end{align*} These relations are equivalent to (\ref{eqn1}) and (\ref{eqn2}) under use of the counit of $H$, applying the coaction axioms. Further, given $\delta_r$, $\delta_l$ as well as their dual counterparts $\delta_r^*$, $\delta_l^*$, the bosonization relations are preserved by the coproduct defined as \begin{align} \Delta(v)&=v^{(0)}\otimes v^{(-1)}+v^{\overline{(-1)}}\otimes v^{\overline{(0)}}, &\Delta(f)&=f^{(0)}\otimes f^{(-1)}+f^{\overline{(-1)}}\otimes f^{\overline{(0)}}, \end{align} for $v\in V$, $f\in V^*$ by YD-compatibility. \end{proof} It will become apparent in Section~\ref{section2} which constraints conditions (\ref{eqn1})--(\ref{eqn5}) impose on the structure of $A$ when $H$ is a group algebra, and in Section~\ref{conclusion} when $H$ is a polynomial ring. \subsection{Triangular Hopf Ideals}\label{triangularhopfsect} We are looking for triangular ideals $J=I\otimes H\otimes T(V^*)+T(V)\otimes H\otimes I^*$ (cf. \cite[Appendix~A]{BB} or Section \ref{triangularideals}) which are also coideals, so that $A/J$ is a bialgebra or Hopf algebra with a triangular decomposition. Using the description of the coproduct $\Delta$ in terms of the left and right YD-structures on $A$, the triangular ideals $J$ that are also coideals are simply those triangular ideals for which $I$ (and $I^*$) are YD-submodules for both $\delta_l$ and $\delta_r$ (respectively, $\delta_l^*$ and $\delta_r^*$).
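As a rank-one sketch of such an ideal (the coactions and labels chosen here are for illustration only): take $H=k\mathbb{Z}/N\mathbb{Z}=k\langle g\rangle$ and $V=kv$ with $g\triangleright v=qv$ for $q$ a primitive $N$-th root of unity, and coactions $\delta_l(v)=g\otimes v$, $\delta_r(v)=v\otimes g$. The ideal $I=(v^N)\triangleleft T^{>1}(V)$ is homogeneous and a YD-submodule for both coactions, and it is a coideal since the quantum binomial coefficients $\binom{N}{i}_q$ vanish for $0<i<N$, so that $v^N$ is primitive in the braided sense. One checks that, together with the analogous ideal $I^*=(f^N)\triangleleft T^{>1}(V^*)$, this yields a triangular Hopf ideal $J\in\mathcal{I}_\Delta(A)$, recovering the Taft-type quotients $k[v]/(v^N)$ as positive and negative parts.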
If $A$ is a triangular Hopf algebra with antipode given as in Lemma~\ref{hopflemma}, then every triangular ideal which is also a coideal is automatically a Hopf ideal. \begin{definition} We denote by $\mathcal{I}_\Delta(A)$ the collection of triangular ideals of the form \[ J=I\otimes H\otimes T(V^*)+T(V)\otimes H\otimes I^* \] for homogeneously generated $I\triangleleft T^{>1}(V)$ and $I^*\triangleleft T^{>1}(V^*)$ which are also YD-submodules for $\delta_r$, $\delta_l$ (respectively for $\delta_r^*$, $\delta_l^*$). Such ideals $J$ are called \emph{triangular Hopf ideals}. \end{definition} \subsection{Asymmetric Braided Drinfeld Doubles}\label{asymdrin} A special class of Hopf algebras with triangular decomposition is provided by braided Drinfeld doubles of primitively generated Hopf algebras over a quasitriangular base Hopf algebra $H$. This form of the Drinfeld double was introduced as the \emph{double bosonization} in \cite{Maj1,Maj2}; see also \cite[Section 3.5]{Lau} for the presentation used here. We now give a more general definition of an \emph{asymmetric} braided Drinfeld double, which is suited to capturing the more general class of Hopf algebras that we find in Section~\ref{section2}, including multiparameter quantum groups, as examples. In this construction, the base Hopf algebra $H$ need not be quasitriangular, and the asymmetric braided Drinfeld double is also not quasitriangular in general. To define the braided Drinfeld double of dually paired braided Hopf algebras $C$ and $B$ in the category $\lmod{\operatorname{Drin}(H)}=\leftexpsub{H}{H}{\mathcal{YD}}$, we require that $\langle~,~\rangle \colon C\otimes B\to k$ is a morphism of YD-modules. This implies that the actions and coactions on $C$ and $B$ are dual to one another (by means of the antipode of $H$).
A weaker requirement is that we consider the images of $C$ and $B$ under the forgetful functor \[ F\colon \leftexpsub{H}{H}{\mathcal{YD}}\longrightarrow\lmod{H}, \] and require that $F(C)$ and $F(B)$ are dually paired Hopf algebras in $\lmod{H}$ (with the induced braiding under $F$), while $C$ and $B$ may not be dually paired in $\leftexpsub{H}{H}{\mathcal{YD}}$. Hence the coactions on $C$ and $B$ do not necessarily have to be related via the antipode, but the actions and resulting braidings need to be related by duality. This is captured by the following definition, where we denote the left coactions by $c\mapsto c^{(-1)}\otimes c^{(0)}$ and $b\mapsto b^{(-1)}\otimes b^{(0)}$ respectively. \begin{definition} We say that two braided Hopf algebras $C,B$ in $\leftexpsub{H}{H}{\cal{YD}}$ are \emph{weakly dually paired} if there exists a morphism of $H$-modules $\langle~,~\rangle\colon C\otimes B\to k$ such that \begin{align} \langle cc',b\rangle&=\langle c',b_{(1)}\rangle\langle c,b_{(2)}\rangle ,& \langle c,bb'\rangle&=\langle c_{(1)},b'\rangle\langle c_{(2)},b\rangle, \end{align} for all $c,c'\in C$, and $b,b'\in B$; as well as \begin{align}\label{weaklyduallypairedcond} (c^{(-1)}\triangleright b)c^{(0)}&=b^{(0)}(b^{(-1)}\triangleright c). \end{align} \end{definition} This weaker duality is equivalent to an analogue of condition (\ref{eqn2}). To see this, we can regard the left $H$-coaction on $B$ as a right $H^{\operatorname{cop}}$-coaction, over the co-opposite Hopf algebra $H^{\operatorname{cop}}$ with coproduct $\tau\Delta$. Given a left $H$-action $\triangleright$, we define a right $H^{\operatorname{cop}}$-action $\triangleleft:=\triangleright(S^{-1}\otimes \operatorname{Id})\tau$ (where $\tau$ denotes the $\otimes$-symmetry in $\mathbf{Vect}_k$). The resulting structures make $B$ a right YD-module over $H^{\operatorname{cop}}$. 
The analogue of condition (\ref{eqn2}) can be rephrased as requiring for all $b\in B, c\in C$ that \begin{align} &&b^{(0)}c^{(-1)}\otimes b^{(-1)}c^{(0)}&=c^{(-1)}b^{(0)}\otimes c^{(0)} b^{(-1)},\nonumber\\ &\Longleftrightarrow& b^{(0)}c^{(-1)}\otimes b^{(-1)}c^{(0)}&=({c^{(-1)}}_{(1)}\triangleright b^{(0)}){c^{(-1)}}_{(2)}\otimes {b^{(-1)}}_{(1)}(c^{(0)} \triangleleft {b^{(-1)}}_{(2)}), \label{compeqn2}\\ &\Longleftrightarrow& bc&=(c^{(-1)}\triangleright b^{(0)})(c^{(0)}\triangleleft b^{(-1)}),\label{compeqn}\\ &\Longleftrightarrow& (c^{(-1)}\triangleright b)c^{(0)}&=b^{(0)}(c\triangleleft S(b^{(-1)}))=b^{(0)}(b^{(-1)}\triangleright c),\nonumber \end{align} which gives condition (\ref{weaklyduallypairedcond}). We can visualize conditions (\ref{compeqn2}) and (\ref{compeqn}) using graphical calculus (with the conventions from \cite{Lau}), see Fig. \ref{braidingcomp}. \begin{figure}[h] \[ \begin{array}{ccc} \vcenter{\hbox{\import{Graphics/}{asymcond.pdf_tex}}}&\Longleftrightarrow& \vcenter{\hbox{\import{Graphics/}{asymcond2.pdf_tex}}}. \end{array} \] \caption{Left and right braiding compatibility condition} \label{braidingcomp} \end{figure} Given (\ref{weaklyduallypairedcond}), we can define an analogue of the braided Drinfeld double on the $k$-vector space $B\otimes H\otimes C$ (rather than using $B\otimes \operatorname{Drin}(H)\otimes C$) with this weaker requirement of duality on $C$ and $B$. The definition of the \emph{asymmetric} braided Drinfeld double can be given using Tannakian reconstruction theory by describing their category of modules. This is similar to the approach used for the braided Drinfeld double in \cite[Appendix~B]{Maj2} (cf. also \cite[Section 3.2]{Lau}). \begin{definition} Let $C,B$ be weakly dually paired braided Hopf algebras in $\leftexpsub{H}{H}{\cal{YD}}$. 
We define the category $\YDasy{C}{B}{H}$ of \emph{asymmetric YD-modules} over $C,B$ as having objects $V$ which are left $H$-modules (also viewed as right modules by means of the inverse antipode), equipped with a left $C$-action and a right $B$-action (by morphisms of $H$-modules) which satisfy the compatibility condition \begin{align}\label{assydcond} ((c_{(2)}\triangleright v)\triangleleft {b_{(1)}}^{(-1)})\triangleleft b_{(2)}\langle c_{(1)}, {b_{(1)}}^{(0)}\rangle &=c_{(1)}\triangleright ({c_{(2)}}^{(-1)}\triangleright(v\triangleleft b_{(1)}))\langle {c_{(2)}}^{(0)},b_{(2)}\rangle, \end{align} for all $v\in V, b\in B, c\in C$. Morphisms in $\YDasy{C}{B}{H}$ are required to commute with the actions of $H$, $B$ and $C$. \end{definition} It may help to visualize the condition (\ref{assydcond}) using graphical notation, see Fig. \ref{asymydpicture}. \begin{figure}[h] \begin{center} \import{Graphics/}{ydcondasy.pdf_tex}~. \end{center} \caption{Asymmetric Yetter--Drinfeld modules} \label{asymydpicture} \end{figure} \begin{proposition} The category $\YDasy{C}{B}{H}$ is monoidal, with a commutative diagram of monoidal fiber functors \[ \xymatrix{&\rmod{B}(\lmod{H})\ar[dr]&&\\ \YDasy{C}{B}{H}\ar[ur]\ar[dr] &&\lmod{H}\ar[r]&\mathbf{Vect}_k.\\ &\lmod{C}(\lmod{H})\ar[ur]&& } \] \end{proposition} \begin{proof} This monadicity statement can for example be checked directly using graphical calculus. Note that condition (\ref{compeqn}) is crucial. The fiber functors simply forget the additional structure at each step. \end{proof} \begin{definition}\label{asymmetricdrinfelddef} The \emph{asymmetric braided Drinfeld double} $\operatorname{Drin}^{\operatorname{asy}}_H(C,B)$ is defined as the algebra obtained by Tannakian reconstruction\footnote{See e.g. \cite[9.4.1]{Maj1} or \cite[Section 2.3]{Lau}.} on $B\otimes H\otimes C$ applied to the functor $\YDasy{C}{B}{H}\longrightarrow \mathbf{Vect}_k$. 
Hence $\lmod{\operatorname{Drin}^{\operatorname{asy}}_H(C,B)}$ and $\YDasy{C}{B}{H}$ are canonically equivalent as categories. \end{definition} \begin{proposition}\label{asymmetricdrinrel} An explicit presentation for the asymmetric braided Drinfeld double $\operatorname{Drin}^{\operatorname{asy}}_H(C,B)$ on the $k$-vector space $B\otimes H\otimes C$ can be given as follows: the multiplication on $B$ is opposite, and for $c\in C$, $b\in B$ and $h\in H$ we have \begin{align} hb&=(h_{(2)}\triangleright b)h_{(1)},\\hc&=(h_{(1)}\triangleright c)h_{(2)},\\ b_{(2)}S^{-1}({b_{(1)}}^{(-1)})c_{(2)}\langle c_{(1)}, {b_{(1)}}^{(0)}\rangle &=c_{(1)}{c_{(2)}}^{{(-1)}}b_{(1)}\langle {c_{(2)}}^{{(0)}}, b_{(2)}\rangle. \end{align} The coproducts are given by \begin{align} \Delta(h)&=h_{(1)}\otimes h_{(2)},\\ \Delta(b)&={b_{(1)}}^{(0)}\otimes b_{(2)}S^{-1}({b_{(1)}}^{(-1)}),\\ \Delta(c)&={c_{(1)}}{c_{(2)}}^{{(-1)}}\otimes {c_{(2)}}^{{(0)}}, \end{align} and the antipode is \begin{align} S(h)&=S(h), &S(b)&=S^{-1}(b^{(0)})b^{(-1)}, &S(c)&=S(c^{{(-1)}})S(c^{{(0)}}), \end{align} using the respective given structures on $H$, $B$, and $C$. \end{proposition} \begin{proof} This follows by applying reconstruction (in $\mathbf{Vect}_k$) to $\YDasy{C}{B}{H}$. See e.g.\ \cite[Section 2.3]{Lau} for formulas on how to obtain the structures, including the antipode (Figure 2.1). \end{proof} An important feature of the braided Drinfeld double is that it has a braided monoidal category of representations, hence is quasitriangular. For the \emph{asymmetric} braided Drinfeld double to be quasitriangular, we need $H$ to be quasitriangular. If $H$ is not quasitriangular, this can be achieved by working over $\operatorname{Drin}(H)$ instead of $H$ as the base Hopf algebra. From now on, we restrict to the important special case where $B$ and $C$ are primitively generated by finite-dimensional YD-modules.
This way, we obtain examples of Hopf algebras with a triangular decomposition over $H$. \begin{lemma}\label{asymmetricdouble} Let $V$, $V^*$ be left YD-modules over $H$, such that the action on $V^*$ is dual to the action on $V$. Then the braided tensor (co)algebras $T(V)^{\operatorname{op}}$ and $T(V^*)^{\operatorname{cop}}$ are dually paired\footnote{We choose the opposite $T(V)^{\operatorname{op}}$ and co-opposite $T(V^*)^{\operatorname{cop}}$ in order to avoid having to take the opposite multiplication in the resulting double (cf. Proposition \ref{asymmetricdrinrel}). As tensor algebras are braided cocommutative, this choice does not affect the formulas for the coproduct.} in the monoidal category of right modules over $H$. Further assume that the compatibility condition (\ref{weaklyduallypairedcond}) holds. Then the asymmetric braided Drinfeld double $\operatorname{Drin}^{\operatorname{asy}}_H(T(V^*)^{\operatorname{cop}},T(V)^{\operatorname{op}})$ is given on $A=T(V)\otimes H\otimes T(V^*)$ subject to the usual bosonization relations (\ref{boson}) and the cross relation \begin{align}\label{asymmetriccomm} [f,v]=S^{-1}(v^{(-1)})\langle f,v^{(0)}\rangle-f^{(-1)}\langle f^{{(0)}},v\rangle. \end{align} The coalgebra structure is given by \begin{align} \Delta(v)&=v^{(0)}\otimes S^{-1}(v^{(-1)})+1\otimes v,&\Delta(f)&=f\otimes 1+f^{{(-1)}}\otimes f^{{(0)}}. \end{align} The counit is given by $\varepsilon(v)=\varepsilon(f)=0$, and the antipode can be computed using the conditions from equations (\ref{antipodecond1}) and (\ref{antipodecond2}) as \begin{align} S(v)&=-v^{(0)}v^{(-1)},&S(f)&=-S(f^{{(-1)}})f^{{(0)}}. \end{align} We can also consider quotients of the form $A/J$ for any triangular Hopf ideal $J\in \mathcal{I}_\Delta(A)$. The quotient of $A$ by the maximal triangular Hopf ideal in $\mathcal{I}_\Delta(A)$ is denoted by $U_H(V,V^*)$.
\end{lemma} \begin{lemma}\label{ideallemma} Let $A=\operatorname{Drin}^{\operatorname{asy}}_H(T(V^*)^{\operatorname{cop}}, T(V)^{\operatorname{op}})$ for $V$, $V^*$ as in Lemma \ref{asymmetricdouble}. Then the maximal ideal $I_{\op{max}}(A)$ in $\mathcal{I}_{\Delta}(A)$ is given by \[ I_{\op{max}}(A)=I_{\op{max}}(V)\otimes H\otimes T(V^*)+T(V)\otimes H\otimes I_{\op{max}}(V^*), \] where $I_{\op{max}}(V)$ is the maximal Nichols ideal in $T(V)$ for the left coaction on $V$, and $I_{\op{max}}(V^*)$ is the maximal Nichols ideal in $T(V^*)$ for the left coaction on $V^*$. Hence \[ m\colon \mathcal{B}(V)\otimes H\otimes \mathcal{B}(V^*)\stackrel{\sim}{\longrightarrow}U_H(V,V^*) \] is an isomorphism of $k$-vector spaces (PBW theorem). \end{lemma} \begin{proof} This is clear, as we know that $T(V)^{\operatorname{op}}/{I_{\op{max}}(V)}$ and $T(V^*)^{\operatorname{cop}}/{I_{\op{max}}(V^*)}$ are weakly dually paired braided Hopf algebras and their asymmetric braided Drinfeld double is given by the quotient $\operatorname{Drin}^{\operatorname{asy}}_H(T(V^*)^{\operatorname{cop}},T(V)^{\operatorname{op}})/{I_{\op{max}}(A)}$, which must be the minimal double $U_H(V,V^*)$. \end{proof} A perfect pairing between the positive and negative parts of $U_H(V,V^*)$ implies the existence of a formal power series $\operatorname{coev}$ satisfying the axioms of coevaluation. We expect that this can be used to give a braiding on a suitable category of modules over $U_H(V, V^*)$ (where $\mathcal{B}(V)$ acts integrally), in which all modules carry the structure of YD-modules over $H$. \subsection{Symmetric Triangular Decompositions} The rest of this section will be devoted to the question of recovering the braided Drinfeld double over a quasitriangular base Hopf algebra $H$ as a special case of the asymmetric braided Drinfeld double.
For this, we introduce the idea of a Hopf algebra with a \emph{symmetric} triangular decomposition: \begin{definition} Let $A$ be a bialgebra with a triangular decomposition over $H$. If the right coaction $\delta_r^*$ on $V^*$ is dual to the coaction $\delta_l$, i.e. \begin{equation}\label{symmetry} \langle f^{(0)}, v\rangle f^{(-1)}=\langle f, v^{\ov{(0)}}\rangle v^{\ov{(-1)}}, \end{equation} and the coactions $\delta_r$ and $\delta_l^*$ are compatible in the same way, then we call the triangular decomposition \emph{symmetric}. \end{definition} In the case where $H$ is a quasitriangular Hopf algebra, we can recover a special case of the definition of the braided Drinfeld double given in \cite[Example 3.5.6]{Lau} from the more general form given in Definition \ref{asymmetricdrinfelddef}, and the resulting triangular decomposition will be symmetric. For this, note that the universal $R$-matrix and its inverse give functors (see \cite{Maj13}) \begin{align*} R^{-1}\colon \lmod{H}&\longrightarrow \leftexpsub{H}{H}{\mathcal{YD}}, &(V,\triangleright) &\longmapsto (V,\triangleright, (\operatorname{Id}_H\otimes \triangleright)(R^{-1}\otimes \operatorname{Id}_V)),\\ R\colon \rmod{H}&\longrightarrow \leftexpsub{H}{H}{\mathcal{YD}}, &(V,\triangleleft) &\longmapsto (V,\triangleleft, ( \triangleleft\otimes\operatorname{Id}_H)(\operatorname{Id}_V\otimes R)). \end{align*} Given a right $H$-module $V$, we can hence give $V$ a left YD-module structure using $R^{-1}$, and $V^*$ the dual YD-module structure using $R$. Note that (\ref{weaklyduallypairedcond}) is satisfied in this case. With these structures, the relation (\ref{asymmetriccomm}) becomes \begin{align*} [f,v]&=S^{-1}(R^{-(2)})\langle f,v \triangleleft R^{-(1)}\rangle -R^{-(1)}\langle R^{-(2)}\triangleright f, v\rangle \\ &=R^{(2)}\langle R^{(1)}\triangleright f,v\rangle -R^{-(1)}\langle R^{-(2)}\triangleright f,v\rangle.
\end{align*} This is precisely the cross relation of \cite[Example 3.5.6]{Lau}. Note that we use $R=(S^{-1}\otimes \operatorname{Id}_H)R^{-1}$. This proves the following Proposition: \begin{proposition}\label{recoverdouble} Braided Drinfeld doubles of braided Hopf algebras over a qua\-si\-triangular Hopf algebra are asymmetric braided Drinfeld doubles (as in Definition \ref{asymmetricdrinfelddef}) with a symmetric triangular decomposition. \end{proposition} Note that a partial converse also holds: if an asymmetric braided Drinfeld double has a symmetric triangular decomposition, then it can be displayed as a braided Drinfeld double in the sense of \cite{Maj2,Lau}, but unless $H$ is quasitriangular (with the coactions induced by the $R$-matrix), we need to view it over the base Hopf algebra $\operatorname{Drin}(H)$ instead. If the positive and negative part are perfectly paired, then we can give a formal power series describing the $R$-matrix, and an appropriate subcategory (corresponding to the Drinfeld center) is braided. Particularly interesting examples of such braided Drinfeld doubles include the quantum groups $U_q(\mathfrak{g})$ for generic $q$, and the small quantum groups $u_q(\mathfrak{g})$ (see \cite[Section~4]{Maj2}). Their construction uses the concept of a \emph{weak} quasitriangular structure, for which a similar statement to Proposition \ref{recoverdouble} can be made. We will see in Section~\ref{multiparametersection} that multiparameter quantum groups can be viewed as examples of asymmetric braided Drinfeld doubles that are not symmetric. Further, all the pointed Hopf algebras classified in the main result of this paper (Theorem \ref{mainclassificationthm}), under the additional assumption that the braiding is of separable type and some commutators do not vanish, are asymmetric braided Drinfeld doubles (Theorem \ref{drinfeldtheorem}). 
\section{Classification over a Group}\label{section2} In this section, we denote by $A= T(V)\otimes kG\otimes T(V^*)$ a bialgebra with triangular decomposition over a group algebra $kG$. Note that we do not assume $G$ to be finite. \subsection{Preliminary Observations}\label{preliminaryobs} Hopf algebras that are generated by grouplike and skew-primitive elements are always pointed. We show that if a Hopf algebra has a triangular decomposition over a group and is of what we call \emph{weakly separable type}, then it is generated by grouplike and skew-primitive elements and hence pointed. \begin{lemma} For a bialgebra $A$ with triangular decomposition over $kG$ as above, there exist bases $v_1,\ldots,v_n$ of $V$ and $f_1, \ldots, f_n$ of $V^*$, as well as invertible matrices $M$ and $N$ such that \begin{align} \Delta(v_i)&=v_i\otimes g_i+\sum_j{M_{ji}h_j\otimes v'_j},&\Delta(f_i)&=f_i\otimes a_i+\sum_j{N_{ji}b_j\otimes f'_j}\label{coprodonv}, \end{align} where $v'_1,\ldots,v'_n$ is another basis of $V$, and $f_1',\ldots,f_n'$ of $V^*$. \end{lemma} \begin{proof} Let $v_1, \ldots, v_n$ be a homogeneous basis for the YD-compatible grading $\delta_r$ and $v'_1,\ldots,v'_n$ a homogeneous basis for $\delta_l$. The form (\ref{coprodonv}) of the coproducts is obtained by letting $M$ be the base change matrix from $\lbrace v_i\rbrace$ to $\lbrace v_i'\rbrace$. The same argument works for the dual $V^*$, denoting the base change matrix from $\lbrace f_i\rbrace$ to $\lbrace f_i'\rbrace$ by $N$. \end{proof} \begin{lemma} A bialgebra $A$ with a triangular decomposition over $kG$ as above is a Hopf algebra, with antipode $S$ given on generators of the form $v_i$, $f_i$ as in (\ref{coprodonv}) by \begin{align} S(v_i)&=-\sum_{j}M_{ji}(h_j^{-1}\triangleright v'_j)h_j^{-1}g_i^{-1},&S(f_i)&=-\sum_{j}N_{ji}(f'_j\triangleleft b_j)b_j^{-1}a_i^{-1}. \end{align} \end{lemma} \begin{proof} The antipode axioms require that $S$ is of the form stated, using that $kG$ is a Hopf subalgebra, cf. 
(\ref{antipodecond1})--(\ref{antipodecond2}). As $T(V)$ and $T(V^*)$ are free, defining $S$ on the generators extends uniquely to an anti-algebra and anti-coalgebra map on all of $A$. \end{proof} \begin{definition}\label{indecprop} A Hopf algebra $A$ with triangular decomposition over a group is called of \emph{weakly separable type} if the right degrees $g_1,\ldots, g_n$ of $V$ are pairwise distinct group elements, and the same holds for the left degrees $h_1,\ldots,h_n$ of $V$ as well as the dual degrees. \end{definition} We observe that being of weakly separable type over a group implies that $V$ and $V^*$ have 1-dimensional homogeneous components. This gives that for a homogeneous basis element $v_i$ of degree $a_i$, $g\triangleright v_i\neq 0$ is homogeneous of degree $ga_ig^{-1}$, which hence has to be a scalar multiple of a basis element $v_{g(i)}$, where $g(i)$ is an index in $\lbrace 1, \ldots, n\rbrace$. Hence we obtain an action of $G$ on $\lbrace 1,\ldots, n\rbrace$. To fix notation, we write \begin{align} g\triangleright v_i&=\lambda_i(g)v_{g(i)},&f_i\triangleleft g&=\mu_i(g)f_{g(i)}. \end{align} We will see that for $A$ of weakly separable type, the base change matrices $M$, $N$ are diagonal matrices and can be chosen to be the identity matrix by rescaling the bases. This implies that $A$ is generated by skew-primitive and grouplike elements and hence pointed. It is a conjecture in \cite[Introduction]{AS} that all finite-dimensional pointed Hopf algebras over an algebraically closed field of characteristic zero are in fact generated by skew-primitive and group-like elements. 
\begin{proposition}\label{primitiveprop} If $A$ is of weakly separable type, then there exist bases $\lbrace v_i\rbrace$ of $V$ and $\lbrace f_i\rbrace$ of $V^*$ consisting of skew-primitive elements, such that \begin{align}\label{primitivecoprod} \Delta(v_i)&=v_i\otimes g_i+h_i\otimes v_i, &\Delta(f_i)&=f_i\otimes a_i+b_i\otimes f_i, \end{align} and the antipode on these skew-primitive elements is given by $S(v_i)=(h_i^{-1}\triangleright v_i)h_i^{-1}g_i^{-1}$, $S(f_i)=(f_i\triangleleft b_i)b_i^{-1}a_i^{-1}$. \end{proposition} \begin{proof} Consider the right and left coactions $\delta_r$ and $\delta_l$ from Section \ref{definitions}. Choosing a basis $v_1,\ldots,v_n$ homogeneous for $\delta_l$ and $v'_1,\ldots, v'_n$ homogeneous for $\delta_r$, (\ref{coprodonv}) gives \begin{equation} \Delta(v_i)=v_i\otimes g_i+\sum_j{M_{ji} h_j\otimes v'_j}, \end{equation} where $M=(M_{ji})$ is the base change matrix. By coassociativity, we find that \begin{equation}\label{eq2} \sum_{j,k}{M_{ji}(M^{-1})_{kj}h_j\otimes v_k\otimes g_k}=\sum_j{M_{ji} h_j\otimes v'_j\otimes g_i}. \end{equation} By weak separability of $\delta_r$ and $\delta_l$ we now have for each $j=1,\ldots, n$: \begin{align} \sum_{k}{M_{ji}(M^{-1})_{kj}v_k\otimes g_k}=M_{ji}v'_j\otimes g_i. \end{align} Note that $M_{ji}\neq 0$ for at least some $i$. This implies that $(M^{-1})_{kj}=0$ unless $k=i$ as the $g_i$ are all distinct. Further, if $M_{ji}\neq 0$, then $v_i$ and $v'_j$ are proportional. This can only be true for at most one $i$ for a given index $j$ by weak separability. Hence, by reordering the basis $v'_1, \ldots, v'_n$, we find that $M$ is a diagonal matrix, and we can rescale the basis $\lbrace v'_i\rbrace$ such that $M$ is the identity matrix. Hence we have $\Delta(v_i)=v_i\otimes g_i + h_i\otimes v_i$. The antipode conditions for $A$ give (using Lemma \ref{hopflemma}) that $S$ is of the form claimed. 
\end{proof} \begin{remark} The bases $\lbrace v_i\rbrace$ and $\lbrace f_i \rbrace$ need not be orthogonal with respect to the pairing $\langle ~,~\rangle$. We will see in Theorem \ref{mainclassificationthm} that if the characters $\lambda_i$ are all distinct, then the bases can be chosen to be dual bases. \end{remark} \begin{remark}\label{primitivenotation} In the following, we fix a basis $v_1, \ldots,v_n$ for $V$ and $f_1,\ldots, f_n$ for $V^*$ such that \begin{align} \Delta(v_i)&=v_i\otimes g_i+h_i\otimes v_i,&\Delta(f_i)&=f_i\otimes a_i+b_i\otimes f_i, &i=1,\ldots,n. \end{align} \end{remark} A direct observation from Proposition \ref{primitiveprop} is that the algebra $A$ is generated by skew-primitive and grouplike elements (the latter giving precisely the group $G$) and is hence pointed. We have the following restrictions on the group structure. \begin{proposition}\label{symmetricprop} In the group $G$, the relations $[g_i,a_j]=[h_i,a_j]=1$ and $[h_i, b_j]=[g_i,b_j]=1$ hold for all $i,j=1, \ldots,n$. In particular, if $A$ has a symmetric triangular decomposition, then the subgroup of $G$ generated by all degrees is abelian. Further, the following identities for the characters of the group action hold: \begin{align}\label{characteridentities} \mu_j(h_i)&=\lambda_i(a_j)^{-1},&\mu_j(g_i)&=\lambda_i(b_j)^{-1}. \end{align} \end{proposition} \begin{proof} The commutator relations follow by applying (\ref{eqn3}) and (\ref{eqn4}) to a pair of homogeneous basis elements of $V$ and $V^*$ with respect to $\delta_l, \delta_r^*$ (or $\delta_r, \delta_l^*$). Then it follows from (\ref{eqn1}) and (\ref{eqn2}) that $h_i(j)=j$, $a_j(i)=i$, $g_i(j)=j$ and $b_j(i)=i$ by the PBW theorem. This implies the relations (\ref{characteridentities}). In the symmetric case, $a_i=g_i^{-1}$ and $b_i=h_i^{-1}$, which forces the subgroup generated by all degrees to be abelian. 
\end{proof} \subsection{Classification in the Free Case of Weakly Separable Type}\label{freeclassification} We are now in a position to classify all Hopf algebras $A$ with triangular decomposition of weakly separable type (cf. Definition \ref{indecprop}). This will enable us to view the Hopf algebras arising from this classification as analogues of multiparameter quantum groups in Section~\ref{multiparametersection}. We start by considering the case $A=T(V)\otimes kG\otimes T(V^*)$, which is referred to as the \emph{free} case. \begin{proposition}\label{pointedthm} For the Hopf algebra $A$ with triangular decomposition of weakly separable type to be indecomposable as a coalgebra, it is necessary that $G$ is generated by elements $k_1,\ldots,k_n$, $l_1,\ldots, l_n$ such that there exist generators $v_i$ of $V$ and $f_i$ of $V^*$ which are skew-primitive of the form \begin{align}\label{coproductform} \Delta(v_i)&=v_i\otimes k_i+1\otimes v_i, &\Delta(f_i)&=f_i\otimes 1+l_i\otimes f_i, \end{align} with $[k_i,l_j]=1$ for all $i,j$. For the characters of the actions on the homogeneous components of $V$ and $V^*$ we require that \begin{equation}\label{characterrequirement} \mu_j(k_i)=\lambda_i(l_j)^{-1}. \end{equation} \end{proposition} \begin{proof} To determine when pointed Hopf algebras are indecomposable as coalgebras, consider the graph $\Gamma_A$ described in \ref{indecomposability}. Assume that $A$ has generators given as in Remark~\ref{primitivenotation}. We claim that the connected components of $\Gamma_A$ are in bijection with the double cosets of the subgroup \[ Z:=\langle g_1^{-1}h_1,\ldots, g_n^{-1}h_n, a_1^{-1}b_1,\ldots, a_n^{-1}b_n\rangle \] in $G$, which partition $G$. 
Indeed, using that the elements $gv_i$ and $gf_i$ are skew-primitive of type $(gg_i,gh_i)$ and $(ga_i,gb_i)$, we find that the connected component of $g$ contains the strands \[ \ldots \longrightarrow g(g_i^{-1}h_i)^{-2}\longrightarrow g(g_i^{-1}h_i)^{-1}\longrightarrow g \longrightarrow g(g_i^{-1}h_i)^1\longrightarrow g(g_i^{-1}h_i)^{2} \longrightarrow \ldots \] for $i=1,\ldots, n$, and the same strands with $a_i^{-1}b_{i}$ instead of $g_i^{-1}h_i$ (and with $g$ multiplied on the right). Moreover, as the elements $gv_i$, $gf_i$, $v_ig$, $f_ig$ (and possibly linear combinations of products of them, which would again be skew-primitive with degrees given by elements in $Z$) are the only skew-primitive elements in $A$, and thus give the only arrows in $\Gamma_A$, two elements $g$ and $h$ are in the same connected component if and only if $z_1gz_2=z_3hz_4$, for some $z_i\in Z$. Thus, $A$ is indecomposable if and only if $G$ equals the connected component of $1$ in the graph $\Gamma_A$, hence if $G=Z$, which is the group generated by the elements $k_i:=h_i^{-1}g_i$, $l_i:=a_i^{-1}b_i$ for $i=1,\ldots, n$. Thus, in order to obtain indecomposability, the coproducts are of the form stated in (\ref{coproductform}). This is achieved by replacing the generators $v_i$ by $v_ih_i^{-1}$ and $f_i$ by $a_i^{-1}f_i$. The remaining statements follow directly from Proposition \ref{symmetricprop}. \end{proof} \begin{theorem}\label{mainclassificationthm} For an indecomposable Hopf algebra $A$ of weakly separable type as in Proposition \ref{pointedthm}, the commutator relations (\ref{commrel}) are of the form \begin{align}\label{commrel2} [f_i,v_j]&=\gamma_{ij}(k_j-l_i),&\forall 1\leq i, j\leq n, \end{align} where $\gamma_{ij}$ are scalars in $k$ such that $\gamma_{ij}=0$ whenever $\lambda_i\neq \lambda_j$ (in which case also $\langle f_i,v_j\rangle=0$), or if either of $l_i$ or $k_j$ is not central. 
Conversely, any choice of such scalars gives a pointed Hopf algebra of this form. \end{theorem} \begin{proof} With the work done in Proposition \ref{pointedthm}, it remains to verify that the form of the commutator relation (\ref{commrel}) is as stated. Recall that in \cite[3.1]{BB}, the commutator relation is given by means of a quasi-coaction. That is, a morphism $\delta\colon V\to kG\otimes V$ satisfying (\ref{ydcond}) and (\ref{commrel}). Such a morphism has the general form \begin{align} \delta(v_j)=v_j^{[-1]}\otimes v_j^{[0]}&=\sum_{k,g}{\alpha_{k,g}^j g\otimes v_k},&\alpha_{k,g}^j\in k, \end{align} on the basis elements from (\ref{coproductform}). Then (\ref{eqn5}), which is required for $A$ to be a bialgebra, can be rewritten as \begin{align*} \sum_{k,g}{\alpha_{k,g}^j (g\otimes k_j+l_i\otimes g)\langle f_i,v_k \rangle} &=\sum_{k,g}{\alpha_{k,g}^j g\otimes g\langle f_i,v_k \rangle}, &\forall i,j. \end{align*} For each $i$, there exists $k$ such that $\langle f_i,v_k\rangle \neq 0$. For given $i$, we denote the set of indices such that $\langle f_i,v_k\rangle \neq 0$ by $I_i$. For such $k\in I_i$, we find that $\alpha_{k,g}^j=0$ for $g\neq k_j, l_i$, and $\alpha_{k,k_j}^j=-\alpha_{k,l_i}^j$. Thus, we obtain that $\delta$ is of the form \begin{equation}\label{qcoactionform} \delta(v_j)=v_j^{[-1]}\otimes v_j^{[0]}=\sum_{i=1}^n{\gamma_{ij} (k_j-l_i)\otimes v'_i}, \end{equation} where $\gamma_{ij}=\sum_{k\in I_i}{\alpha_{k,k_j}^j {\langle f_i,v_k\rangle\abs{I_i}}}$ and $\lbrace v'_i\rbrace$ is the dual basis of $V$ to $\lbrace f_i\rbrace$. Conversely, given arbitrary scalars $\gamma_{ij}$ for $i,j=1,\ldots,n$, we can define a quasi-coaction by the same formula (\ref{qcoactionform}). Then $\delta$ is YD-compatible with the given action of $G$ on $V$ if and only if (cf. condition (A) in \cite[Theorem~A]{BB}) \begin{align*} \gamma_{ij}\mu_i(g)(gk_j-gl_{i})&=g[f_i\triangleleft g,v_j]\stackrel{(\text{A})}{=}[f_i, g\triangleright v_j]g=\gamma_{ij}\lambda_j(g)(k_jg-l_ig). 
\end{align*} This implies $\lambda_j=\mu_i$ whenever $\gamma_{ij}\neq 0$. Further, if $\gamma_{ij}\neq 0$ we need $l_i, k_j\in Z(G)$. These two requirements ensure that $\delta$ is YD-compatible. Further, by duality of the action, if $\langle f_i,v_j \rangle\neq 0$ then $\lambda_i=\mu_j$. Since, for each given $i=1,\ldots, n$, $\langle f_i,v_j \rangle\neq 0$ for some $j$, we have that $\lambda_i=\mu_j$ for at least some $j$, and vice versa. Hence, the sets of characters and dual characters are in bijection. We can renumber and assume without loss of generality (recall that we are in the weakly separable case) that \begin{equation} \lambda_i=\mu_i. \end{equation} From now on, we will hence only use the notation $\lambda_i$. \end{proof} \begin{example} The most degenerate case, where all $\gamma_{ij}=0$, gives the Hopf algebra $(T(V)\otimes T(V^*))\rtimes kG$ where the tensor algebras are again computed in the category of YD-modules over $kG$. \end{example} \begin{remark} At this point, a comparison to \cite[2.4]{AS2} and \cite[4.3]{AS3} seems appropriate. The condition (\ref{commrel2}) is equivalent to the so-called \emph{linking relation} (\ref{asrel3}) after a change of generators $f_i\leftrightarrow l_i^{-1}f_i$, since in the form of Definition \ref{asform} all generators have coproducts $\Delta(v_i)=v_i\otimes 1+g_i\otimes v_i$. Such a change of generators causes the commutators $\operatorname{ad}=[~,~]$ to become braided commutators $\underline{\operatorname{ad}}= \operatorname{Id}_{V^{\otimes 2}}-\Psi$. The scalars $\lambda_{ij}$ satisfy condition (d) in \ref{classificationsurvey}, where $\chi_i\chi_j\neq \varepsilon$ for the characters implies $\lambda_{ij}=0$. This is the analogue of our condition that $\lambda_i\neq \lambda_j$ implies $\gamma_{ij}=0$. The linking relations also appear in the quantum group characterization of \cite[Theorem 4.3]{AS3}. 
Hence we can conclude that the classification in this section gives Hopf algebras with similar relations as appearing in the work of Andruskiewitsch and Schneider. The outcome here is more restrictive, as in our setting relations of the form (\ref{asrel4}) cannot involve non-trivial elements in $kG$, and we also have a symmetry in the set $\chi$ of connected components due to the triangular decomposition. \end{remark} The situation where $\lbrace v_i\rbrace$ and $\lbrace f_i \rbrace$ are orthogonal bases deserves particular attention. In this case, the scalars $\gamma_{ij}=0$ for $i\neq j$. The following concept of separability ensures this. \begin{definition} Let $A$ have a triangular decomposition of weakly separable type over a group $G$. If the characters $\lambda_1, \ldots, \lambda_n$ are pairwise distinct, we speak of a triangular decomposition of \emph{separable type}. If $A$ is of the form as in Theorem \ref{mainclassificationthm}, we say that $A$ is \emph{non-degenerate} if $\gamma_{ii}\neq 0$ for all $i=1,\ldots,n$ (this implies $k_i\neq l_i$). \end{definition} Note that both definitions --- separability and non-degeneracy --- cause the group $G$ to be abelian, and hence the braidings on $V$ and $V^*$ to be of diagonal type. Assuming non-degeneracy, we can adapt the terminology of \cite[5.5]{BB} that the braided doubles in this case come from \emph{mixed} YD-structures. A mixed YD-structure is a quasi-coaction $\delta$ that is a weighted sum $\sum{t_i\delta_i}$, where the $\delta_i$ are YD-module coactions compatible with the same action, and the $t_i$ are generic scalars. The quasi-YD-module in the theorem is the sum $\delta=\delta_r-(\delta_l^*)^*$, where $(\delta_l^*)^*$ is the YD-module given by $v_j\mapsto l_j\otimes v_j$, which is dual to $\delta_l^*$. We will see that in this case all the Hopf algebras arising are certain \emph{asymmetric} braided Drinfeld doubles (as defined in \ref{asymdrin}). 
In the symmetric case, these algebras are in fact braided Drinfeld doubles. In particular, their appropriately defined module categories (resembling the Drinfeld center) are braided. \subsection{Interpretation as Asymmetric Braided Drinfeld Doubles}\label{quotientsection} Assume in this section that $A$ is non-degenerate of indecomposable separable type over $G$. So far, we have only classified \emph{free} braided doubles over $kG$. That is, as a $k$-vector space $A\cong T(V)\otimes kG\otimes T(V^*)$ via the multiplication map. To capture examples such as quantum groups, it is necessary to consider quotients of $A$ by triangular ideals $J=( I,I^*)$ such that $A/J\cong T(V)/I\otimes kG\otimes T(V^*)/{I^*}$ is still a Hopf algebra (and thus pointed). Here $I\triangleleft T^{>1}(V)$ and $I^*\triangleleft T^{>1}(V^*)$ are ideals and also coideals, and $J\in \mathcal{I}_{\Delta}(A)$. We will now refine our considerations from Section \ref{triangularhopfsect} to find for what ideals $I$ and $I^*$ this is the case. We will use the notation \begin{align} q_{ij}:=\lambda_j(k_i). \end{align} Then, by (\ref{characterrequirement}), we have that $\lambda_j(l_i)=q_{ji}^{-1}$, and the matrix $q=(q_{ij})$ describes the braiding on $V$ fully, i.e. it is of \emph{diagonal type}. The collection of triangular Hopf ideals $\mathcal{I}_\Delta(A)$ introduced in Section \ref{triangularhopfsect} can be described more concretely in the separable non-degenerate case: The ideals in $\mathcal{I}_\Delta(A)$ are of the form $J=I\otimes kG\otimes T(V^*)+T(V)\otimes kG\otimes I^*$ where $I$ is an ideal in the collection $\mathcal{I}_{(V,\delta_r)}$ for $V$ with the right coaction given by $\delta_r$, and $I^*$ is in $\mathcal{I}_{(V^*,\delta_l^*)}$ for the left dual coaction $\delta_l^*$ on $V^*$. This follows using \cite[Proposition 5.10]{BB} and the description of triangular Hopf ideals in Lemma \ref{ideallemma}. 
We use that by (\ref{characterrequirement}) the braidings $\Psi_r$ coming from $\delta_r$ and $\Psi_l$ from $(\delta_l^*)^*$ on $V$ are given by \begin{align} \Psi_r(v_i\otimes v_j)&=q_{ij}v_j\otimes v_i,&&\Psi_l(v_i\otimes v_j)=q_{ji}^{-1}v_j\otimes v_i. \end{align} That is, $\Psi_l=\Psi_r^{-1}$ is the inverse braiding. Thus, $I^*$ is just the dual $k$-vector space to $I$. \begin{example} In the quantum groups $A=U_q(\mathfrak{g})$, the braiding satisfies the symmetry $q_{ij}=q^{i\cdot j}=q^{j\cdot i}=q_{ji}$ as the Cartan datum is symmetric. This implies that the relations in $I$ are symmetric under reversing the order of tensors $v_1\otimes \ldots\otimes v_n\leftrightarrow v_n\otimes \ldots\otimes v_1$. This can be verified explicitly by observing that in $U_q(\mathfrak{g})$ the ideal $I$ is generated by $q$-Serre relations, which carry such a symmetry. \end{example} \begin{theorem}\label{drinfeldtheorem} Let $A$ be an indecomposable bialgebra with triangular decomposition of separable non-degenerate type over $G$. Then $A$ is an asymmetric braided Drinfeld double. \end{theorem} That is, all quotients by triangular Hopf ideals $J\in \mathcal{I}_\Delta(A)$ of algebras $A$ of separable non-degenerate type occurring in the classification of Theorem \ref{mainclassificationthm} are asymmetric braided Drinfeld doubles. If $J$ is maximal in $\mathcal{I}_\Delta(A)$, then $A/J\cong U_{kG}(V, V^*)$. \begin{proof} Recall that every Hopf algebra with triangular decomposition is the quotient of a free braided double by a triangular Hopf ideal. We saw that in the free separable case the commutator relations are of the form $[f_i,v_j]=\delta_{ij}\gamma_{ii}(k_i-l_i)$. This is precisely the form of the asymmetric braided Drinfeld double of $V$ with right YD-module structure given by the right grading, and $V^*$ with left YD-module structure given by the left dual grading. The pairing is given by $\langle f_i,v_j\rangle=\delta_{ij}\gamma_{ii}$ here. 
We have to check that the braided Hopf algebras $T(V)$ and $T(V^*)$ in the categories of YD-modules over $G$ are dually paired when viewed in the category of left $kG$-modules. This however follows from condition (\ref{characterrequirement}). Taking the maximal quotient by a triangular ideal (or by the left and right radicals of the pairing) gives the asymmetric braided Drinfeld double $U_{kG}(V, V^*)$. \end{proof} If some of the parameters $\gamma_{ii}$ are zero, then the pointed Hopf algebras obtained are no longer asymmetric braided Drinfeld doubles (in the sense of Definition \ref{asymmetricdrinfelddef}). \subsection{Recovering a Lie Algebra}\label{liealgebrasection} We assume that $\operatorname{char} k=0$ in this section and study Hopf algebras with triangular decomposition of separable type which are of the form $U_{kG}(V, V^*)$ (see Theorem \ref{drinfeldtheorem}). The aim is to set the characters $\lambda_i$ and the group elements $k_i$, $l_i$ equal to 1. This way, we want to recover a Lie algebra $\fr{g}$ for any of the indecomposable pointed Hopf algebras of the form $U_{kG}(V, V^*)$, relating back to the question asked in the introduction of finding quantum groups for a given Lie algebra. The tool available for this is the Milnor--Moore theorem from \cite{MM} (see also \cite[Theorem 5.6.5]{Mon}), which shows that any cocommutative connected Hopf algebra is of the form $U(\fr{g})$ for a (possibly infinite-dimensional) Lie algebra $\fr{g}$. There are technical problems with this naive approach. To set the elements $q_{ij}$ --- which will be replaced by formal parameters --- equal to one, we need to give an appropriate integral form to avoid that the modules collapse to zero. This rules out examples such as 
$k[x]/(x^n)$ (and, more generally, the small quantum groups), which are braided Hopf algebras in the category of YD-modules over $k\mathbb{Z}$, as here a generator of the group acts by a primitive $n$th root of unity $q$ on $x$, and $\mathbb{Z}[q]\subset k$ is a cyclotomic ring. As a first step, we introduce appropriate integral forms of $U_{kG}(V, V^*)$, for which we need the square roots of $q_{ij}$. We consider the subring $Z:=\mathbb{Z}[q_{ij}^{\pm 1/2}]_{i,j}\subset k$ adjoining all square roots of the numbers $q_{ij}$ and their inverses. These will now be treated as formal parameters with certain relations between them, coming from the relations we have among them in $k$. \begin{remark} In this section, we assume that the ideal $\langle q_{ij}^{\pm 1/2}-1 \mid i,j=1,\ldots,n\rangle $ in $Z$ is a proper ideal, and hence $p\colon Z\to \mathbb{Z}$, $q_{ij}^{\pm 1/2}\mapsto 1$ is a well-defined morphism of rings. \end{remark} This assumption is crucial in the formal limiting process. For example, it rules out cases in which $q^n+q^{n-1}+\ldots+q+1=0$, as in cyclotomic rings. To produce an integral form, we replace a given YD-module $V$ over $kG$ of separable type as in the previous sections by a YD-module over $ZG$. For this, we can choose a $G$-homogeneous basis $v_1,\ldots,v_n$ and a homogeneous dual basis $f_1,\ldots,f_n$ such that (possibly after rescaling) \begin{align} \langle f_i, v_j\rangle&=\frac{1}{q_{ii}^{1/2}-q_{ii}^{-1/2}}\delta_{ij}, &\forall i,j. \end{align} An important observation is that the Woronowicz symmetrizers, which are used to compute the Nichols ideal $I_{\op{max}}(V)$, have coefficients in $Z$. Hence their kernels will be $Z$-modules. That is, for $V^{\op{int}}$ defined as $Z\langle v_1, \ldots, v_n\rangle$, which is a YD-module over the group ring $ZG$, the Woronowicz symmetrizer $\operatorname{Wor}^n_{\op{int}}\Psi$ is a $Z$-linear map $V^{\op{int}\otimes n}\to V^{\op{int}\otimes n}$. 
Hence $I_{\op{max}}(V^{\op{int}}):=\ker \operatorname{Wor}_{\op{int}}\Psi$ is an ideal in $T(V^{\op{int}})$, the tensor algebra over $Z$. In order to provide an integral form of $U_{kG}(V, V^*)$, we change the presentation by introducing new commuting generators, namely $[f_i,v_i]=:t_i$. One verifies that the following commutator relations hold over $k$, as we are given the relation $t_i=\tfrac{1}{q_{ii}^{1/2}-q_{ii}^{-1/2}}(k_i-l_i)$ when working over the field: \begin{align} [f_i,t_j]&=\delta_{i,j}(q_{ii}^{1/2}k_if_i+q_{ii}^{-1/2}l_if_i),\label{tirel1}\\ [v_i,t_j]&=-\delta_{i,j}(q_{ii}^{-1/2}k_iv_i+q_{ii}^{1/2}l_iv_i).\label{tirel2} \end{align} \begin{definition} The \emph{integral form} $U_{ZG}(V^{\op{int}}, V^{\op{int}*})$ of $U_{kG}(V, V^*)$ is defined as the graded Hopf algebra over the ring $Z$ generated by $v_1,\ldots, v_n$ of degree $1$, $f_1,\ldots, f_n$ of degree $-1$, the group elements $k_1, \ldots, k_n, l_1, \ldots, l_n\in G$, and additional elements $t_1, \ldots, t_n$ of degree $0$, subject to the relations of $I_{\op{max}}(V^{\op{int}})$ and $I_{\op{max}}^*(V^{\op{int}})$, the relations (\ref{tirel1}) and (\ref{tirel2}), as well as the relations \begin{align} gv_i&=(g\triangleright v_i)g,\qquad f_ig=g(f_i\triangleleft g),\label{intrel1}\\ q_{ii}^{1/2}(k_i-l_i)&=(q_{ii}-1)t_i,\label{intrel2}\\ [f_i, v_j]&=\delta_{i,j}t_i,\label{intrel3}\\ [t_i,t_j]&=0, \end{align} where (\ref{intrel1}) are the bosonization relations. The coproducts are given as before on the generators $f_i,v_i,k_i,l_i$, and $\Delta(t_i)=t_i\otimes k_i+l_i\otimes t_i$. \end{definition} Note that as $A=U_{ZG}(V^{\op{int}}, V^{\op{int}*})$ is a Hopf algebra over the commutative ring $Z$, the coproduct is a map $A\to A\otimes_Z A$. For the quantum groups $U_q(\fr{g})$ at generic parameter, the integral form in this case is the so-called \emph{non-restricted} integral form (see e.g. \cite[9.2]{CP}), which goes back to De Concini--Kac \cite{DK}. 
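As a sketch of why (\ref{intrel2}) is the correct integral replacement for the relation $t_i=\tfrac{1}{q_{ii}^{1/2}-q_{ii}^{-1/2}}(k_i-l_i)$: over the field $k$, assuming $q_{ii}\neq 1$ so that $q_{ii}-1$ is invertible, relation (\ref{intrel2}) can be solved for $t_i$,
\[
t_i=\frac{q_{ii}^{1/2}}{q_{ii}-1}(k_i-l_i)=\frac{1}{q_{ii}^{1/2}-q_{ii}^{-1/2}}(k_i-l_i),
\]
using the identity $q_{ii}-1=q_{ii}^{1/2}\big(q_{ii}^{1/2}-q_{ii}^{-1/2}\big)$. Over $Z$, by contrast, (\ref{intrel2}) makes sense without inverting $q_{ii}-1$.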
To set the parameters equal to one, and to consider extensions of Hopf algebras to fields, we use the following Lemma: \begin{lemma}\label{hopflemma2} Let $\phi \colon R\to S$ be a morphism of commutative algebras. We denote the category of Hopf algebras over $R$ by $\mathbf{Hopf}_{R}$. Then base change along $\phi$ induces a functor \begin{align*} \mathbf{Hopf}_{\phi}\colon \mathbf{Hopf}_{R}&\longrightarrow \mathbf{Hopf}_{S},&A&\longmapsto A\otimes_RS. \end{align*} \end{lemma} \begin{proof} Given a Hopf algebra $A$ which is an $R$-algebra, i.e. there is a morphism $R\to A$, we induce the multiplication and comultiplication on $A\otimes_RS$ using the isomorphism \[ (A\otimes_RS)\otimes_S(A\otimes_RS)\cong (A\otimes_R A)\otimes_RS. \] It is easy to check that the Hopf algebra axioms are preserved under base change. \end{proof} \begin{proposition} There is an isomorphism of graded Hopf algebras \[ U_{ZG}(V^{\op{int}}, V^{\op{int}*})\otimes_Zk\stackrel{\sim}{\longrightarrow} U_{kG}(V, V^*). \] \end{proposition} \begin{proof} Recall that $Z\subset k$ by construction. Extending to $k$, we are able to divide by $q_{ii}-1$ in (\ref{intrel2}), and recover the original commutator and bosonization relations in $U_{kG}(V, V^*)$. It remains to verify that \[ I_{\op{max}}(V^{\op{int}})\otimes_Zk=\ker \operatorname{Wor}_{\op{int}}\Psi\otimes_Z k=\ker \operatorname{Wor} \Psi=I_{\op{max}}(V). \] This follows by noting that $k$ is flat as a $Z$-module (since the fraction field $K(Z)$ is flat over $Z$ as a localization, and $k$ is free over $K(Z)$), and $V^{\op{int}}\otimes_Zk\cong V$ as $k$-vector spaces. 
\end{proof} \begin{definition} We define the \emph{classical limit} of $U_{kG}(V, V^*)$ as the algebra \[ U_k^{\op{cl}}(V, V^*):=\bigslant{(U_{ZG}(V^{\op{int}}, V^{\op{int}*})\otimes_Z\mathbb{Z})\otimes_\mathbb{Z} k}{( \ker \varepsilon_G)}, \] using the morphism $p\colon Z\to \mathbb{Z}$ mapping all $q_{ij}^{\pm 1/2}$ to $1$, and the two sided ideal $( \ker \varepsilon_G)$ generated by the kernel of the augmentation map $\varepsilon_G\colon kG\to k$ mapping all group elements to $1$. Note that this ideal is a Hopf ideal. \end{definition} That is, to obtain the classical limit, we first set the parameters $q_{ij}^{\pm 1/2}$ equal to 1 in the integral form, then extend the resulting $\mathbb{Z}$-module to a $k$-vector space, and finally set the group elements equal to 1 along the counit $\varepsilon_G\colon kG\to k$. We obtain a primitively generated Hopf algebra, and hence a Lie algebra, this way: \begin{proposition} The classical limit $U_k^{\op{cl}}(V, V^*)$ is a connected Hopf algebra, generated by primitive elements. Hence, for the Lie algebra $\fr{p}_V$ of primitive elements, $U(\fr{p}_V)=U_k^{\op{cl}}(V, V^*)$. This algebra is generated by triples $f_i,v_i,t_i$, each of which generates a subalgebra isomorphic to $U(\fr{sl}_2)$. \end{proposition} \begin{proof} Lemma \ref{hopflemma2} ensures that $U_k^{\op{cl}}(V, V^*)$ is a Hopf algebra over $k$, and freeness of $V^{\op{int}}$ over $Z$ ensures that the positive and negative part do not collapse to the zero space. In particular, the $k$-vector space $V^{\op{int}}\oplus V^{\op{int}*}$ embeds into the Lie algebra $\fr{p}_V$ of primitive elements. In the classical limit, we obtain the relations \begin{align} [f_i,v_j]&=\delta_{i,j}t_i, &[f_i,t_j]&=2\delta_{i,j}f_i, &[v_i,t_j]&=-2\delta_{i,j}v_i. \end{align} Hence every triple $f_i, v_i, t_i$ generates a Lie subalgebra of $\fr{p}_V$ isomorphic to $\fr{sl}_2$. 
Note that $U_{k}^{\op{cl}}(V, V^*)$ is generated by the $t_i$ and the primitive elements: \begin{align*} \Delta(f_i)&=f_i\otimes 1+1\otimes f_i,&\Delta(v_i)&=v_i\otimes 1+1\otimes v_i. \end{align*} We also compute \[ \Delta(t_i)=\Delta([f_i,v_i])=[f_i,v_i]\otimes k_i+l_i\otimes[f_i,v_i]=t_i\otimes k_i+l_i\otimes t_i. \] Hence, $t_i$ is skew-primitive in $U_{ZG}(V^{\op{int}}, V^{\op{int}*})$ and primitive in the classical limit. Thus, $U^{\op{cl}}_{k}(V,V^*)$ is a pointed Hopf algebra over the trivial group. That is, a \emph{connected} pointed Hopf algebra. It is further cocommutative, and Theorem 5.6.5 in \cite{Mon} implies that such a Hopf algebra is of the form $U(\fr{g})$, where $\fr{g}=\fr{p}_V$, since $\operatorname{char}k=0$. \end{proof} Note that $U_{k}^{\op{cl}}(V, V^*)$ is a braided double over the polynomial ring $S(T)$, where $T=k\langle t_1,\ldots,t_n \rangle$ (which is not necessarily $n$-dimensional). The action is given by $t_j\triangleright v_i=2\delta_{i,j}v_i$, and the quasi-coaction is given by $\delta(v_i)=t_i\otimes v_i$, which is \emph{not} a coaction; hence $U_{ZG}(V^{\op{int}}, V^{\op{int}*})$ is \emph{not} a braided Heisenberg double. It is also not an asymmetric braided Drinfeld double. \begin{example} For $U_q(\fr{g})$, $\fr{g}$ a semisimple Lie algebra, viewed as a braided Drinfeld double, the classical limit is $U(\fr{g})$. \end{example} We can also compute examples that do not give finite-dimensional semisimple Lie algebras. As a general rule, the relations between the parameters $q_{ij}$ determine the relations in the Lie algebra. It is easy to construct free examples, for which there are no relations between the $v_1,\ldots, v_n$, by choosing algebraically independent parameters $q_{ij}$. The work of \cite{Ros} and \cite{AS3} gives restrictions on examples satisfying the growth condition of finite Gelfand--Kirillov dimension. We will view their results in the setting of this paper in Section \ref{section3}. 
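As a consistency check (with an identification of generators chosen here purely for illustration), the classical-limit relations $[f_i,v_j]=\delta_{i,j}t_i$, $[f_i,t_j]=2\delta_{i,j}f_i$, $[v_i,t_j]=-2\delta_{i,j}v_i$ reproduce the standard presentation of $\fr{sl}_2$: setting $e:=v_i$, $f:=-f_i$ and $h:=t_i$, we obtain
\[
[h,e]=[t_i,v_i]=2v_i=2e,\qquad [h,f]=-[t_i,f_i]=2f_i=-2f,\qquad [e,f]=[f_i,v_i]=t_i=h.
\]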
\section{Classes of Quantum Groups}\label{multiparametersection} In this section, we relate the classification from Section~\ref{section2} to various classes of examples which are often regarded as quantum groups. This includes the multiparameter quantum groups studied by \cite{FRT,Res,Sud,AST} and others in Section \ref{quantumgroupsmulti}, a characterization of Drinfeld--Jimbo quantum groups in Section \ref{section3}, and classes of examples of pointed Hopf algebras from the work of Radford in Section \ref{radford}. The classification in Theorem \ref{mainclassificationthm} points out natural generalizations of these classes of examples\footnote{While this paper was under revision, it was pointed out by Dr Gast\'on Andr\'es Garc\'ia that a further series of examples of asymmetric braided Drinfeld doubles is given in \cite[Definition~7]{HPR} and described in \cite{Gar} using a family of pointed Hopf algebras defined in \cite{ARS}.}. We finally sketch how one can define analogues of quantum groups using triangular decompositions over Hopf algebras other than $kG$. \subsection{Multiparameter Quantum Groups}\label{quantumgroupsmulti} Let $k$ be a field of characteristic zero. For the purpose of this section, let $\lambda \in k$ be generic, and $p_{ij}\in k$ for $1\leq i<j\leq n$. Assume that $p_{ii}=1$ and $p_{ji}=p_{ij}^{-1}$. Following \cite{AST,CM} and to fix notation, we set \begin{align*} &\kappa_j^{(i)}=\begin{cases}p_{ij},& \text{if }i<j,\\ \lambda, & \text{if } i=j,\\ \tfrac{\lambda}{p_{ji}}, & \text{if }i>j.\end{cases}&&\lambda_j^{(i)}=\begin{cases}\tfrac{\lambda}{p_{ij}},& \text{if }i<j,\\ \lambda, & \text{if } i=j,\\ p_{ji}, & \text{if }i>j. \end{cases} \end{align*} We will provide a variation of the presentation of \cite{AST,CM} in order to display the multiparameter quantum group $U_{\lambda,\underline{p}}(\mathfrak{gl}_n)$ as a Hopf algebra with triangular decomposition.
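To illustrate the notation, consider the smallest case $n=2$, where the only parameters are $\lambda$ and $p_{12}$ (this worked instance is included here for orientation only). The case distinctions above give
\begin{align*}
\kappa_1^{(1)}&=\lambda, & \kappa_2^{(1)}&=p_{12}, & \kappa_1^{(2)}&=\tfrac{\lambda}{p_{12}}, & \kappa_2^{(2)}&=\lambda,\\
\lambda_1^{(1)}&=\lambda, & \lambda_2^{(1)}&=\tfrac{\lambda}{p_{12}}, & \lambda_1^{(2)}&=p_{12}, & \lambda_2^{(2)}&=\lambda.
\end{align*}
Combined with the character formulas in the example below, this yields, for instance,
\[
\lambda_1(k_1)=\frac{\lambda_2^{(1)}\lambda_1^{(2)}}{\lambda_1^{(1)}\lambda_2^{(2)}}=\lambda^{-1}
\qquad\text{and}\qquad
\lambda_1(l_1)=\frac{\kappa_1^{(1)}\kappa_2^{(2)}}{\kappa_2^{(1)}\kappa_1^{(2)}}=\lambda,
\]
independently of $p_{12}$, consistent with the requirement $q_{ij}=\lambda_i(l_j)^{-1}$ verified below.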
\begin{example}[Multiparameter quantum groups] We define on $F=k \langle f_1, \ldots, f_{n-1}\rangle$ a YD-module structure over an abelian group $G$ with generators $k_1, \ldots, k_{n-1}$, $l_1, \ldots, l_{n-1}$. Denote the dual by $E=k\langle e_1,\ldots, e_{n-1}\rangle$, where the pairing is given by $\langle e_i,f_j\rangle=(1-\lambda)\delta_{ij}$. The YD-structure is of separable type, given by assigning the right degree $k_i$ to $f_i$ and the left degree $l_i$ to $e_i$, with actions \begin{align} k_i\triangleright f_j&=\lambda_j(k_i)f_j=\frac{\lambda_{j+1}^{(i)}\lambda_{j}^{(i+1)}}{\lambda_{j}^{(i)}\lambda_{j+1}^{(i+1)}}f_j,\\ l_i\triangleright f_j&=\lambda_j(l_i)f_j=\frac{\kappa_j^{(i)}\kappa_{j+1}^{(i+1)}}{\kappa_{j+1}^{(i)}\kappa_j^{(i+1)}}f_j, \end{align} for $i,j=1,\ldots, n-1$. We will relate the \emph{multiparameter quantum group} $U_{\lambda,\underline{p}}(\mathfrak{gl}_n)$ to the asymmetric braided Drinfeld double $U_{kG}(F,E)$. \end{example} Note that the definition of $U_{kG}(F,E)$ is possible as (\ref{characterrequirement}) holds, i.e. \begin{align*} q_{ij}:=\lambda_j(k_i)=\frac{\lambda_{j+1}^{(i)}\lambda_{j}^{(i+1)}}{\lambda_{j}^{(i)}\lambda_{j+1}^{(i+1)}}=\frac{\kappa_{i+1}^{(j)}\kappa_i^{(j+1)}}{\kappa_i^{(j)}\kappa_{i+1}^{(j+1)}}=\lambda_i(l_j)^{-1}. \end{align*} The commutator relation in $U_{kG}(F,E)$ is given by \begin{equation} [e_i,f_j]=(1-\lambda)\delta_{ij}(k_i-l_i). \end{equation} The following isomorphism displays $U_{kG}(F,E)$ as an indecomposable subalgebra of a multiparameter quantum group considered in the literature: \begin{proposition} There is an isomorphism of Hopf algebras $U_{kG}(F,E)\cong U'$ where $U'$ is a Hopf subalgebra of the multiparameter quantum group $U=U_{\lambda,\underline{p}}(\fr{gl}_n)$. \end{proposition} \begin{proof} We prove the proposition by first considering the morphism \[ \phi\colon T(E)\otimes kG\otimes T(F)\longrightarrow U.
\] Such a morphism descends to an injective morphism $\overline{\phi}\colon U_{kG}(F,E)\to U$ by Lemma \ref{qserrelemma} below. We further note that the image $\op{Im}{\overline{\phi}}=:U'$ is a Hopf subalgebra isomorphic to $U_{kG}(F,E)$. Denote the generators of $U$ by $E_i,F_i$ for $i=1,\ldots,n-1$ and group elements $K_i,L_i$ for $i=1,\ldots, n$ (see \cite[4.8]{CM}). The map $\phi$ is defined by $\phi(e_i)=\lambda E_iK^{-1}_{i+1}K_i$, $\phi(f_i)=F_i$, $\phi(k_i)=L_{i+1}L_i^{-1}$, and $\phi(l_i)=K_{i+1}^{-1}K_i$. One checks directly that the relations in the free braided double $T(E)\otimes kG\otimes T(F)$ are preserved under this map, using the presentation in \cite[4.8]{CM} for $U$. \end{proof} \begin{lemma}\label{qserrelemma} The largest ideal in $\mathcal{I}_\Delta(A)$ for $A=U_{kG}(F,E)$ is generated by the quantum Serre relations \begin{align} \underline{\operatorname{ad}}(e_i)^{1-a_{ij}}(e_j)=\underline{\operatorname{ad}}(f_i)^{1-a_{ij}}(f_j)=0, \end{align} where $\underline{\operatorname{ad}}(e_i)(e_j)=e_ie_j-q_{ij}e_je_i$. \end{lemma} \begin{proof} It follows from Lemma \ref{ideallemma} that the maximal ideal $J$ in $\mathcal{I}_\Delta(A)$ is given by $J=( I,I^*)$ where $I$ is the Nichols ideal of the YD-module $F$. Generation of the maximal triangular ideal by quantum Serre relations for $U_{\lambda,\underline{p}}(\fr{gl}_n)$ follows from Lemma 4.5 in \cite{CM}. For this, it is crucial that $\lambda$ is not a root of unity. The proof uses the observation of \cite{Res} (or \cite{AST} for the deformed function algebra) that multiparameter quantum groups can be obtained, via quantum coordinate rings, from one-parameter quantum groups by twisting with 2-cocycles. The fact that the quantum Serre relations generate the Nichols ideal then follows from Theorem 4.4 in \cite{CM}, where it is shown that these relations generate the radical of a Hopf pairing.
Using the map $\phi$, this result describes the Nichols ideals of $T(F)$, $T(E)$ as generated by quantum Serre relations. \end{proof} The result that the multiparameter quantum group $U_{\lambda,\underline{p}}(\mathfrak{gl}_n)$ is the asymmetric braided Drinfeld double $U_{kG}(F,E)$ can be seen as a generalization of the result in \cite{BW} where the two-parameter quantum groups were shown to be Drinfeld doubles. \subsection{Characterization of Drinfeld--Jimbo Quantum Groups}\label{section3} Let $\op{char} k=0$ in this section. In Section~\ref{section2} we observed that for an algebra $A$ with symmetric triangular decomposition of separable type to be an indecomposable pointed Hopf algebra, $G(A)$ needs to be abelian, acting on $V$ by scalars. That means, in the terminology of \cite{AS}, that the YD-braiding $\Psi(v\otimes w)=v^{(-1)}\triangleright w\otimes v^{(0)}$ is of \emph{diagonal type}, i.e. there exist non-zero scalars $q_{ij}$ such that $\Psi(v_i\otimes v_j)=q_{ij}v_j\otimes v_i$ for a basis $\{v_1,\ldots,v_n\}$. We fix a choice of YD-module structure over an abelian group $G$ for this section to describe the diagonal braiding. That is, $q_{ij}=\lambda_j(k_i)$ for the characters $\lambda_i$ by which $G$ acts on $kv_i$ and group elements $k_i$ such that $\delta(v_i)=k_i\otimes v_i$. It is a basic observation that the braided Hopf algebras $T(V)/I$ for $I\in \mathcal{I}_V$, including the Nichols algebras for $V$, only depend on the braiding on $V$ (rather than the concrete choice of $\lambda_i$, $k_i$). However, different diagonal braidings $(V, \Psi)$ and $(V, \Psi')$ may give isomorphic braided Hopf algebras $T(V)/I$. Such isomorphisms can be obtained using the notion of \emph{twist equivalence} for diagonal braidings (which is a special case of the more general concept of twisting a Hopf algebra by a 2-cocycle).
\begin{definition} Two braided $k$-vector spaces of diagonal type $(V,\Psi)$, $(V',\Psi')$ (given by scalars $q_{ij}$, $q_{ij}'$) are \emph{twist equivalent} if $V\cong V'$, $q_{ii}=q_{ii}'$, and $q_{ij}q_{ji}=q_{ij}'q_{ji}'$. \end{definition} \begin{lemma}\label{twistlemma} If $(V,\Psi)$, $(V',\Psi')$ are twist equivalent of diagonal type, then $T(V)\cong T(V')$ as braided Hopf algebras in the category of braided $k$-vector spaces, preserving the natural grading. \end{lemma} \begin{proof} For a proof see e.g. \cite[3.9--3.10]{AS}. We can find generators $v_i$ of $V$ and $v_i'$ of $V'$ such that the isomorphism $\phi$ is determined by $v_i\mapsto v_i'$. Defining a 2-cocycle $\sigma$ by $\sigma(v_i\otimes v_j)=q_{ij}'q_{ij}^{-1}$ for $i<j$ and $1$ otherwise, we find that the product $v_iv_j$ maps to the product twisted by $\sigma$. Note that the isomorphism is \emph{not} an isomorphism in the category of YD-modules over $kG$ unless $(V',\Psi')=(V,\Psi)$. \end{proof} For an ideal $I\in \mathcal{I}_V$, denote the corresponding ideal under the isomorphism $T(V)\cong T(V')$ from Lemma~\ref{twistlemma} by $I'$. Then we conclude that the induced map $T(V)/I\cong T(V')/{I'}$ is also an isomorphism of braided Hopf algebras. In particular, $\mathcal{B}(V)\cong \mathcal{B}(V')$ for the corresponding Nichols algebras. \begin{lemma}\label{twistdrinfelddoubles} If $(V,\Psi)$ and $(V',\Psi')$ are twist equivalent, such that \[ G=\langle k_1,\ldots, k_n \rangle\cong\langle k_1',\ldots, k_n' \rangle=G' \] via $k_i\mapsto k_i'$, then $U_{kG}(V, V^*)\cong U_{kG'}(V', {V'}^{*})$ as Hopf algebras. \end{lemma} \begin{proof} By Lemma~\ref{twistlemma}, $T(V)/I\cong T(V')/{I'}$ and $T(V^*)/{I^*}\cong T({V'}^{*})/{{I'}^{*}}$. By the assumptions on the group generators, $k_i\mapsto k_i'$ extends to an isomorphism $kG\cong kG'$. Thus we can define a morphism $U_{kG}(V, V^*)\to U_{kG'}(V', {V'}^{*})$ which is an isomorphism of $k$-vector spaces.
Further, preservation of the bosonization condition can be checked on generators using the isomorphism $\phi$ from Lemma~\ref{twistlemma}. Finally, the commutator relation (\ref{commrel2}) is preserved using the isomorphism on $kG$. \end{proof} Diagonal braidings are a very general class of braidings. Quantized enveloping algebras at generic parameters, however, are based on braidings of a specific type, called \emph{Drinfeld--Jimbo type}. Following \cite{AS3}, there are different classes of braidings which we distinguish: \begin{definition}[{\cite[Definition~1.1]{AS3}}] Let $(q_{ij})$ be the $n\times n$-matrix of a braiding of diagonal type. \begin{enumerate} \item[(a)] The braiding given by $(q_{ij})$ is \emph{generic} if $q_{ii}$ is not a root of unity for any $i=1,\ldots,n$. \item[(b)] In the case $k=\mathbb{C}$ we say the braiding $(q_{ij})$ is \emph{positive} if it is generic and all diagonal elements $q_{ii}$ are positive real numbers. \item[(c)] The braiding $(q_{ij})$ is of \emph{Cartan type} if $q_{ii}\neq 1$ for all $i$ and there exists a $\mathbb{Z}$-valued $n\times n$-matrix $(a_{ij})$ with values $a_{ii}=2$ on the diagonal and $0\leq -a_{ij}<\operatorname{ord} q_{ii}$ for $i\neq j$, such that \begin{equation} q_{ij}q_{ji}=q_{ii}^{a_{ij}}\qquad \text{ for all }i,j. \end{equation} This implies that $(a_{ij})$ is a generalized Cartan matrix which may have several connected components. We denote the collection of these connected components by $\chi$. \item[(d)] The braiding $(q_{ij})$ is of \emph{Drinfeld--Jimbo type (DJ-type)} if it is of Cartan type and there exist positive integers $d_1,\ldots, d_n$ such that for all $i,j$, $d_i a_{ij}=d_j a_{ji}$ (hence the matrix $(a_{ij})$ is symmetrizable), and for any $J\in \chi$, there exists a scalar $q_J\neq 0$ in $k$ such that $q_{ij}=q_J^{d_ia_{ij}}$ for any $i\in J$, and $j=1,\ldots, n$. \end{enumerate} \end{definition} Some observations can be made about the Nichols algebras associated to braid\-ed vector spaces of DJ-type.
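For orientation, the standard braiding underlying $U_q(\fr{sl}_3)$ is of DJ-type (a routine illustration of part (d), not needed later): for the Cartan matrix of type $A_2$ with $d_1=d_2=1$ and $q_J=q$ one obtains
\[
(q_{ij})=\begin{pmatrix} q^2 & q^{-1} \\ q^{-1} & q^2 \end{pmatrix},
\]
and indeed $q_{ij}q_{ji}=q^{-2}=q_{ii}^{a_{ij}}$ for $i\neq j$, verifying the Cartan-type condition.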
First, observe that for a braiding of Cartan type with connected components $I_1,\ldots,I_t\in \chi$, we have that $\mathcal{B}(V)$ is the braided tensor product $\mathcal{B}(V_{I_1})\otimes \ldots\otimes \mathcal{B}(V_{I_t})$ (\cite[Lemma 4.2]{AS4}). Further, for $V$ with braiding $(q_{ij})$ of DJ-type where the $q_{ii}$ are generic, the Nichols algebra can be computed explicitly by the quantum Serre relations (\cite[Theorem 15]{Ros}): \[ \mathcal{B}(V)=k\langle x_1,\ldots,x_n \mid \un{\operatorname{ad}}(x_i)^{1-a_{ij}}(x_j)=0, \forall i\neq j\rangle. \] We now bring the growth condition of finite \emph{Gelfand--Kirillov dimension} (GK-dimension) into the picture, using characterization results of \cite{Ros} of Nichols algebras with this property. \begin{lemma}[\cite{Ros}]\label{rossolemma} Let $k=\mathbb{C}$ and let $(q_{ij})$ be the matrix of a \emph{generic} braiding of diagonal type. If the Nichols algebra $\mathcal{B}(V)$ has finite Gelfand--Kirillov dimension, then $(q_{ij})$ is of Cartan type. Moreover, if the braiding is positive, then $\mathcal{B}(V)$ has finite GK-dimension if and only if $(q_{ij})$ is twist equivalent to a braiding of DJ-type with finite Cartan matrix. \end{lemma} \begin{proof} See \cite{AS3}, Corollary 2.12 and Theorem 2.13. \end{proof} \begin{corollary} Let $A=U_{\mathbb{C} G}(V,V^*)$, for $V$ of separable type, with generic positive braiding $(q_{ij})$. Then the following are equivalent: \begin{itemize} \item[(i)] $A\cong U_q(\fr{g})$ for $\fr{g}$ a semisimple Lie algebra. \item[(ii)] The braided $\mathbb{C}$-vector space $V$ with braiding $(q_{ij})$ is twist equivalent to a braiding of DJ-type with Cartan matrix of finite type. \item[(iii)] $\mathcal{B}(V)$ has finite Gelfand--Kirillov dimension. \item[(iv)] $A$ has finite Gelfand--Kirillov dimension. \end{itemize} \end{corollary} \begin{proof} The equivalence of (ii) and (iii) is the statement of Lemma \ref{rossolemma} due to \cite{Ros}.
Using Lemma \ref{twistdrinfelddoubles} we find that (ii) implies (i), while it is clear that (i) implies (ii). In fact, the GK-dimension of $\mathcal{B}(V)$ for $V$ of DJ-type equals the number of positive roots \cite[2.10(ii)]{AS3}. Further, in Theorem \ref{drinfeldtheorem} we observed that $A$ is of the form $U(\mathcal{D})$ of \cite[Theorem 4.3]{AS3}, provided that $V$ is of finite Cartan type. This observation (together with Lemma~\ref{twistdrinfelddoubles}) gives that (ii) is equivalent to (iv) using Theorem 5.2 in \cite{AS3}. \end{proof} \begin{corollary} The only indecomposable bialgebras with a symmetric triangular decomposition on $\mathcal{B}(V)\otimes k\mathbb{Z}^n\otimes \mathcal{B}(V^*)$ of separable type, such that $V=\mathbb{C}\langle v_1,\ldots,v_n\rangle$ is of positive diagonal type, and such that no $v_i$ commutes with all of $V^*$, are isomorphic to $U_q(\mathfrak{g})$ for a semisimple Lie algebra $\mathfrak{g}$. \end{corollary} \begin{proof} This follows from the classification in Theorem \ref{mainclassificationthm}, combined with the results of Rosso. The Lie algebra $\fr{g}$ is determined by the Cartan matrix one obtains under twist equivalence in Lemma \ref{rossolemma}. The technical condition that no $v_i$ commutes with all of $V^*$ ensures that $[f_i,v_i]\neq 0$ for a dual basis $f_1,\ldots,f_n$ of $V^*$, resembling the non-degeneracy condition that the scalars $\gamma_{ii}$ are non-zero in Theorem \ref{drinfeldtheorem}. \end{proof} This is a characterization for quantum groups at generic parameters. The work surveyed in \cite{AS,AS2} on finite-dimensional pointed Hopf algebras can be viewed as a characterization of small quantum groups. The triangular decomposition can be interpreted as the case where the graph $\Gamma$ described in Section \ref{classificationsurvey} has two connected components, such that the corresponding generators for the two components give dually paired braided Hopf algebras.
The characterization suggests that if we are looking for examples outside of DJ-type, we have to consider braidings of generic Cartan type which are not positive. In fact, \cite[2.6]{AS3} gives an example that is generic of Cartan type, but not of DJ-type. We compute the associated double here: \begin{example} Let $G=\langle k_1,k_2\rangle\cong C_\infty\times C_\infty$ be a free abelian group with two generators. We define a two-dimensional YD-module $V$ over $G$ on generators $v_1$ of degree $k_1$, $v_2$ of degree $k_2$ via \begin{align*} k_1\triangleright v_1&=q v_1,&k_1\triangleright v_2&=q^{-1}v_2,&k_2\triangleright v_1&=q^{-1}v_1,&k_2\triangleright v_2&=-qv_2. \end{align*} Lemma 2.1 in \cite{AS3} shows that \[ \mathcal{B}(V)=k\langle v_1,v_2\mid \un{\operatorname{ad}}(v_1)^3(v_2)=\un{\operatorname{ad}}(v_2)^3(v_1)=0\rangle. \] The asymmetric braided Drinfeld double $U_{\mathbb{C} G}(V,V^*)$ is in fact a braided Drinfeld double if we define $V^*$ to be the dual YD-module. It is the Hopf algebra given on $\mathcal{B}(V)\otimes \mathbb{C} G\otimes \mathcal{B}(V^*)$, subject to the relations \begin{align*} [f_1,v_i]&=\delta_{1,i}\frac{k_1-k_1^{-1}}{q^{1/2}-q^{-1/2}}, &[f_2,v_i]&=\delta_{2,i}\frac{k_2-k_2^{-1}}{iq^{1/2}+iq^{-1/2}},\\ k_1v_2&=q^{-1}v_2k_1,&k_2v_1&=q^{-1}v_1k_2,\\ k_1v_1&=qv_1k_1,&k_2v_2&=-qv_2k_2,\\ k_1f_2&=qf_2k_1,&k_2f_1&=qf_1k_2,\\ k_1f_1&=q^{-1}f_1k_1,&k_2f_2&=-q^{-1}f_2k_2, \end{align*} and with coproducts \begin{align*} \Delta(v_i)&=v_i\otimes k_i+1\otimes v_i,& \Delta(f_i)&=f_i\otimes 1+k_i^{-1}\otimes f_i. \end{align*} \end{example} Apart from such examples, we can also include examples where free and nilpotent generators are combined, hence capturing features of both small and generic quantum groups. Here is such an example of small rank: \begin{example} Let $G=C_\infty\times C_p=\langle g_{\infty}\rangle \times\langle g_p\rangle$ be the product of an infinite cyclic group and one of order $p$.
We define a 2-dimensional YD-module over $G$ on $\mathbb{C} v_\infty\oplus \mathbb{C} v_p$, where $v_\infty$ has degree $g_\infty$, and $v_p$ has degree $g_p$. The group action is given by \begin{align*} g_p\triangleright v_p&=\xi_pv_p, &g_p\triangleright v_\infty&=\eta_p v_\infty,\\ g_\infty\triangleright v_p&=\eta_p^{-1}v_p, &g_\infty \triangleright v_\infty&=\eta_\infty v_\infty, \end{align*} where scalars with a subscript $p$ are primitive $p$th roots of unity, and $\eta_\infty$ is generic. We can now compute the Nichols algebra with generators $v_p$ and $v_\infty$. It is given by \[ \mathcal{B}(V)=\mathbb{C}\langle v_p,v_\infty \rangle /{(v_p^p, v_pv_\infty-\eta_pv_\infty v_p )}. \] We denote the dual YD-module by $V^*$ with generators $f_p$, $f_\infty$. The braided Drinfeld double on $\mathcal{B}(V)\otimes k(C_p\times C_\infty)\otimes \mathcal{B}(V^*)$ of the braided Hopf algebra $\mathcal{B}(V)$ is a quantum group that combines features of both $u_q(\fr{sl}_2)$ and $U_q(\fr{sl}_2)$: \begin{align*} [f_p,v_i]&=\delta_{i,p}\frac{g_p-g_p^{-1}}{\xi_p^{1/2}-\xi_p^{-1/2}}, &[f_\infty,v_i]&=\delta_{\infty,i}\frac{g_\infty-g_\infty^{-1}}{\eta_\infty^{1/2}-\eta_\infty^{-1/2}},\\ g_pv_p&=\xi_pv_pg_p,&g_pv_\infty&=\eta_p v_\infty g_p,\\ g_\infty v_p&=\eta_p^{-1} v_pg_\infty,&g_\infty v_\infty&=\eta_\infty v_\infty g_\infty,\\ g_pf_p&=\xi_p^{-1}f_pg_p,&g_pf_\infty&=\eta_p^{-1}f_\infty g_p,\\ g_\infty f_p&=\eta_pf_pg_\infty,&g_\infty f_\infty&=\eta_\infty^{-1}f_\infty g_\infty, \end{align*} and with coproducts \begin{align*} \Delta(v_i)&=v_i\otimes g_i+1\otimes v_i,& \Delta(f_i)&=f_i\otimes 1+g_i^{-1}\otimes f_i, &\text{for }i=p,\infty. \end{align*} Choosing instead $g_\infty\triangleright v_p=\xi_\infty v_p$, we obtain further examples where the Nichols algebra will involve other relations depending on the choice of $\xi_\infty$.
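The shape of this Nichols algebra can be read off from the braiding matrix (a routine check included for the reader's convenience): ordering the basis as $(v_p,v_\infty)$, the action above gives
\[
(q_{ij})=\begin{pmatrix} \xi_p & \eta_p \\ \eta_p^{-1} & \eta_\infty \end{pmatrix},
\]
so $q_{12}q_{21}=1$ yields the relation $\un{\operatorname{ad}}(v_p)(v_\infty)=v_pv_\infty-\eta_p v_\infty v_p=0$ (Cartan type with $a_{12}=a_{21}=0$), while $\operatorname{ord}\xi_p=p$ forces $v_p^p=0$, and $\eta_\infty$ being generic imposes no relation on $v_\infty$.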
\end{example} \subsection{Classes of Pointed Hopf Algebras by Radford}\label{radford} In \cite{Rad}, a class of pointed Hopf algebras $U_{(N,\nu, \omega)}$ was introduced (see also \cite{Gel} for generalizations). These Hopf algebras are associated to the datum of a positive integer $N$, an integer $1\leq \nu <N$ such that $N$ does not divide $\nu^2$, and a primitive $N$th root of unity $\omega$ in a field $k$. Denote $q:=\omega^\nu$ and $r=\abs{q^\nu}=\abs{\omega^{\nu^2}}$. We let $C_N$ denote a cyclic group of order $N$ generated by an element $a$. The algebra $U_{(N,\nu,\omega)}$ is the braided Drinfeld double of the YD-module Hopf algebra $U_+:=k[x]/(x^r)$ over $C_N$, with grading given by $x\mapsto a^{\nu}\otimes x$ and action $a\triangleright x=q^{-1} x$. Note that $U_+$ is the Nichols algebra of the one-dimensional YD-module $kx$. The coalgebra structure is given by $\Delta(x)=x\otimes a^{\nu}+1\otimes x$, and $\Delta(y)=y\otimes 1+a^{-\nu}\otimes y$ for the dual generator $y$. Note further that the other Hopf algebra $H_{(N,\nu,\omega)}$ introduced by Radford is simply the bosonization $U_+\rtimes kC_N$ in this set-up. The algebras $U_{(N,\nu, \omega)}$ and $H_{(N,\nu,\omega)}$ are not indecomposable unless $\nu=1$. To obtain indecomposable pointed Hopf algebras, we can consider the subalgebras generated by $x, y$ and $a^{\nu}$ (respectively, $x$ and $a^{\nu}$). Since these only depend on the choices of $r$ and $q$, we denote these Hopf algebras by $U_{(r,q)}$ (respectively, $H_{(r,q)}$). Note that $U_{(r,1,q)}=U_{(r,q)}$. \subsection{Quantum Group Analogues in Other Contexts}\label{conclusion} To conclude this paper, we would like to adopt the point of view that quantum groups can also be studied over Hopf algebras $H$ other than the group algebra. For this, one can, motivated by the results of this paper, look for Hopf algebras $A$ with triangular decomposition over $H$.
The property over a group that $A$ is of separable type can be generalized by requiring that the YD-modules $V$ with respect to the left and right coactions $\delta_l$ and $\delta_r$ are a direct sum of distinct (one-dimensional) simples. One-dimensionality of the simples is however a strong restriction. As a first example, we can consider the case where $H$ itself is primitively generated, i.e. $H=k[x_1, \ldots, x_n]$ over a field of characteristic zero. If $A$ is a bialgebra with triangular decomposition over $H$, then for $v\in V$, $\Delta(v)\in V\otimes H+H\otimes V$ implies that $\Delta(v)$ in fact equals $v\otimes 1+1\otimes v$ using the counit condition. This gives that $A$ is generated by primitive elements and hence is a pointed Hopf algebra that is connected (i.e. the group-like elements form the trivial group). Now $A$ is in particular cocommutative, so Theorem 5.6.5 in \cite{Mon} implies (for $\operatorname{char} k=0$) that $A=U(\fr{g})$ where $\fr{g}$ is the Lie algebra of primitive elements in $A$. From this point of view, all quantum groups over $H=k[x_1,\ldots, x_n]$ are simply the classical universal enveloping algebras. Investigating bialgebras with triangular decomposition over other Hopf algebras $H$ can be the subject of future research. \begin{acknowledgements} A preliminary version of this paper is part of the PhD thesis of the author completed at the University of Oxford. I am grateful to my PhD advisor Prof Kobi Kremnizer for his guidance. I would also like to thank Dr Yuri Bazlov, Prof Arkady Berenstein, Prof Dan Ciubotaru and Prof Shahn Majid for helpful discussions on the subject matter.
\end{acknowledgements} \section{Introduction}\label{section1} \subsection{What Are Quantum Groups?}\label{motivation} An important problem in the theory of quantum groups is to give some definition of a class of these objects that captures known series of quantum groups, such as the quantum enveloping algebras $U_q(\fr{g})$ of \cite{Dri}, and their finite-dimensional analogues, as examples. This was for example formulated in \cite[Problem II.10.2]{BG}: \begin{quotation} \begin{it} ``Given a finite-dimensional Lie algebra $\mathfrak{g}$, find axioms for Hopf al\-ge\-bras to qualify as quantized enveloping algebras of this particular $\mathfrak{g}$." \end{it} \end{quotation} A possible hint to the structure of quantum groups is that the quantum envel\-oping algebras $U_q(\mathfrak{g})$ (as well as the small quantum groups $u_q(\fr{g})$ and multiparameter versions) are \emph{pointed Hopf algebras}. Such Hopf algebras were studied by several authors (see e.g. \cite{AS}). Classification results as in \cite{AS2} suggest a strong resemblance of all finite-dimensional pointed Hopf algebras over abelian groups with small quantum groups. Another paper \cite{AS3} gives a characterization of quantum groups at generic pa\-ram\-e\-ters using pointed Hopf algebras of finite Gelfand--Kirillov dimension with infinitesimal braiding of positive generic type. A further hint to the structure of quantum groups is that they can be decomposed in a triangular way (via the PBW theorem) as \[ U_q(\mathfrak{g})=U_q(\fr{n}_+)\otimes k\mathbb{Z}^n\otimes U_q(\fr{n}_-). \] Here, the positive and negative parts are perfectly paired braided Hopf algebras, and the relation with the group algebra $k\mathbb{Z}^n$ is governed by semidirect product relations. The positive and negative parts are so-called \emph{Nichols algebras}. A third aspect --- observed already in the original paper \cite{Dri} --- is that quantum groups are (quotients of) \emph{quantum} or \emph{Drinfeld doubles}.
It was shown in \cite{Maj2} that $U_q(\fr{g})$ in fact is a \emph{braided} Drinfeld double (which is referred to as a \emph{double bosonization} there). It was proved in \cite{BW} that two-parameter quantum groups are also Drinfeld doubles. In this paper, we aim to provide an axiomatic approach to the definition of (multiparameter) quantum groups by combining the pointed Hopf algebra and the triangular decomposition approach. Under the additional assumption of what we call a triangular decomposition of \emph{weakly separable type} over a group, the only indecomposable examples are close generalizations of multiparameter quantum groups. In particular, assuming further non-degeneracy, they are examples of a more general version of braided Drinfeld doubles, which we refer to as \emph{asymmetric} braided Drinfeld doubles. Further, under certain assumptions on the group and the parameters, we can recover Lie algebras from these Hopf algebras, after introducing a suitable integral form. \subsection{This Paper's Results} This paper starts by recalling the necessary technical background, including a brief overview on classification results of finite-dimensional pointed Hopf algebras, as well as structural results by \cite{BB} on algebras with triangular decomposition, in Section~\ref{background}. Next, we give the definition of a bialgebra with a triangular decomposition over a Hopf algebra $H$ in Section~\ref{section1.5}. This adapts the two-step approach used for algebras in \cite{BB} to the study of bialgebras. Namely, we first consider the \emph{free} case of a bialgebra $T(V)\otimes H\otimes T(V^*)$ where the positive and negative parts ($T(V)$, respectively $T(V^*)$) are tensor algebras, and then specify by what ideals (called \emph{triangular} Hopf ideals) we can take the quotient. The core of this paper is formed by a partial classification of bialgebras with triangular decomposition over a group algebra $kG$.
We assume that $V$ has one-dimensional homogeneous components (weak separability). We again proceed in two steps. First, we determine all pointed bialgebras with free positive and negative part over $kG$ in Section \ref{freeclassification}, and then look at pairs of ideals $I$, $I^*$ such that the quotient $A/{( I, I^*)}$ is still a bialgebra in Section \ref{quotientsection}. We find that indecomposable examples are automatically pointed Hopf algebras, and that they impose strong commutativity conditions on the group $G$. Multiparameter quantum groups fit into this framework. Indeed, the only possible commutator relations (\ref{commrel}) closely resemble those of multiparameter quantum groups: \begin{align} [f_i,v_j]&=\gamma_{ij}(k_j-l_i)\in kG, &\forall i,j=1,\ldots,n. \end{align} We further observe that there exists a natural generalization of the definition of a braided Drinfeld double to the setting of braided Hopf algebras in the category of Yetter--Drinfeld modules (YD-modules) over $H$. For this, the base Hopf algebra $H$ does not need to be quasitriangular. We need two braided Hopf algebras which are only required to be dually paired when considered as braided Hopf algebras in the category of modules (rather than YD-modules). That is, the requirement that is weakened compared to the definition of a braided Drinfeld double (as in \cite{Maj2} or \cite{Lau}) is that the comodule structures do not need to be dually paired. We refer to this generalization as the \emph{asymmetric braided Drinfeld double}. It gives a natural way of producing Hopf algebras with triangular decomposition --- which are not necessarily quasitriangular. We show in Theorem \ref{drinfeldtheorem} that the Hopf algebras arising in the classification in Theorem~\ref{mainclassificationthm} are of this form (provided that the parameters $\gamma_{ii}$ are non-zero) and that $G$ has to be abelian in this case.
In Section \ref{liealgebrasection} we show that from these asymmetric braided Drinfeld doubles of separable type we can recover Lie algebras provided that there exists a well-defined morphism of rings to $\mathbb{Z}$ when setting the parameters equal to 1. Hence, in the spirit of the question asked in Section \ref{motivation}, we can relate the outcome of our classification back to Lie algebras, which are always generated by Lie subalgebras isomorphic to $\mathfrak{sl}_2$. Here is an overview of the increasingly strong assumptions on the Hopf algebras $A$ and $H$ used in the classification: \begin{itemize} \item Section~\ref{section1.5}: $H$ any Hopf algebra over a field $k$, $A$ a bialgebra with triangular decomposition; \item Section~\ref{section2}: $H=kG$, $A$ a bialgebra with triangular decomposition; \begin{itemize} \item Section~\ref{preliminaryobs}--\ref{freeclassification}: $A$ is of weakly separable type and indecomposable in the sense of Definition~\ref{indecprop}; \item Section~\ref{quotientsection}: $A$ is indecomposable and non-degenerate of separable type; \item Section~\ref{liealgebrasection}: In addition to the assumptions of Section~\ref{quotientsection}, we require that $\operatorname{char} k=0$, and that setting the parameters equal to 1 gives a well-defined homomorphism of rings to $\mathbb{Z}$. \end{itemize} \end{itemize} The final section~\ref{multiparametersection} contains different classes of indecomposable pointed Hopf algebras with triangular decomposition over a group $kG$ that arise as examples in the main classification. The first class we discuss are the multiparameter quantum groups $U_{\lambda,\underline{p}}(\mathfrak{gl}_n)$ introduced by \cite{FRT} (adapting the presentation in \cite{CM}). They are asymmetric braided Drinfeld doubles, which generalizes the result of \cite{BW} showing that two-parameter quantum groups are Drinfeld doubles.
In Section \ref{section3} we bring results of \cite{Ros} on the growth condition of finite Gelfand--Kirillov dimension, and the classification of Nichols algebras from \cite{AS3}, into the picture. We use these results to characterize the Drinfeld--Jimbo type quantum groups at generic parameters $q$ within the classification of this paper under the additional assumption that the triangular decomposition is what we call \emph{symmetric}. Further, a class of finite-dimensional pointed Hopf algebras due to Radford can naturally be included as examples in this framework (Section \ref{radford}). To conclude this paper, we suggest in Section \ref{conclusion} that future research could focus on the search for Hopf algebras with triangular decomposition over other Hopf algebras $H$ (replacing the group algebra $kG$). This might give interesting monoidal categories, or even knot invariants, in other contexts. As the first --- most classical --- example, we take $H$ to be a polynomial ring $k[x_1,\ldots, x_n]$. In this case, the only examples are universal enveloping algebras of Lie algebras. \subsection{Notations and Conventions} In this paper, an adapted form of Sweedler's notation (see e.g. \cite[1.2]{Swe}) is used to denote coproducts and coactions, omitting summation signs. Unless otherwise stated, we work with Hopf algebras over an arbitrary field $k$. A Hopf algebra always has an invertible antipode $S$. The category of left YD-modules (or \emph{crossed modules}, cf. \cite[Proposition 7.1.6]{Maj1}) over a Hopf algebra $H$ is denoted by $\leftexpsub{H}{H}{\mathcal{YD}}$, while left modules are denoted by $\lmod{H}$, and right modules by $\rmod{H}$. We denote the module spanned by generators $S$ over a commutative ring $R$ by $R\langle S \rangle$, while $R[S]$ denotes the $R$-algebra generated by elements $S$ (subject to some specified relations). Groups generated by elements of a set $S$ are denoted by $\langle S\rangle$, while ideals are denoted using $(~)$.
\section{Background}\label{background} \subsection{Pointed Hopf Algebras} Let the coproduct $\Delta\colon H\to H\otimes H$ make $H$ a coalgebra over a field $k$. We can consider \emph{simple} subcoalgebras $A\leq H$. That is, $\Delta(A)\leq A\otimes A$ and there are no proper subobjects of this type in $A$. A basic observation is that if $\dim A=1$, then $A$ can be written as $kg$, for a generator $g\in H$ such that $\Delta(g)=g\otimes g$. Such elements are called \emph{grouplike}. Indeed, if $H$ is a Hopf algebra, then the set of all grouplike elements $G(H)$ has a group structure. A Hopf algebra is \emph{pointed} if all simple subcoalgebras are one-dimensional. This notion can be traced back to \cite[8.0]{Swe}, and classifying all finite-dimensional pointed Hopf algebras can be taken as a first step in the classification of all finite-dimensional Hopf algebras (see e.g. \cite{And} for a recent survey). In the late 1980s and early 1990s, important classes of pointed Hopf algebras were discovered with the introduction of quantum groups (and their small analogues). Due to the vast applications of and attention to these Hopf algebras in the literature, the study of pointed Hopf algebras has become an important algebraic question. \subsection{Link-Indecomposability}\label{indecomposability} In the early 1990s, Montgomery asked which groups may occur as $G(H)$ for an \emph{indecomposable} pointed Hopf algebra $H$. In \cite{Mon2}, an appropriate notion of indecomposability is discussed in different ways. We will briefly recall the description in terms of \emph{link-indecomposability}, which is equivalent to indecomposability as a coalgebra and to connectedness of the Ext-quiver of simple comodules. Given a pointed Hopf algebra $H$, we define a graph $\Gamma_H$ with vertices being the simple subcoalgebras of $H$ (that is, the grouplike elements). There is an edge $h\to g$ if there exists a $(g,h)$-skew-primitive element $v\in H$, i.e. 
$\Delta(v)=v\otimes g+h\otimes v$, with $v$ not contained in $kG(H)$. We say that $H$ is \emph{indecomposable} if $\Gamma_H$ is connected. As an example, group algebras $kG$ are only indecomposable if $G=1$. The quantum group $U_q(\mathfrak{sl}_2)$ is indecomposable if the coproducts are e.g. defined as $\Delta(E)=E\otimes 1 + K\otimes E$ and $\Delta(F)=F\otimes 1 + K^{-1}\otimes F$. There are other versions of the coproduct for which the Hopf algebra is not indecomposable (see \cite{Mon2}). \subsection{Classification Results for Pointed Hopf Algebras}\label{classificationsurvey} It was understood early that some pointed Hopf algebras can be obtained as bosonizations $A=\mathcal{B}(V)\rtimes kG$ of so-called \emph{Nichols} (or \emph{Nichols--Woronowicz}) algebras $\mathcal{B}(V)$ associated to YD-modules over a group $G$ (see e.g. \cite[Section~2]{AS} for definitions). In this case, the coproducts are given by $\Delta(v)=v^{(0)}\otimes v^{(-1)}+ 1 \otimes v$ using Sweedler's notation. That is, if $v$ is a homogeneous element, then $\Delta(v)=v\otimes g+1\otimes v$ for the degree $g\in G(A)$ of $v$, and $A$ is indecomposable over the group generated by the elements $g\in G$ with $V_g\neq 0$. Thus, the question of finding finite-dimensional pointed Hopf algebras is linked to finding finite-dimensional Nichols algebras.\footnote{However, a pointed Hopf algebra is not necessarily a bosonization of this form. Important tools available are the coradical filtration (see e.g. \cite[5.2]{Mon}) and the \emph{lifting method} of Andruskiewitsch and Schneider \cite[Section~5]{AS}.} Although both questions remain open in general, vast progress on classifying pointed Hopf algebras has been made in a series of papers by Andruskiewitsch and Schneider (see \cite{AS,AS2}) for abelian groups $G$, and more recently for symmetric and alternating groups \cite{AFGV}, or groups of Lie type \cite{ACG1,ACG2}. See \cite{And} for more detailed references. 
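As a minimal standard illustration of the correspondence between pointed Hopf algebras and Nichols algebras (a rank-one example, not specific to the classification recalled below), let $\Gamma=\langle g\rangle$ and $V=kv$ a one-dimensional YD-module of degree $g$ with action $g\triangleright v=qv$, for $q$ a primitive $N$-th root of unity, $N>1$. The braiding is $\Psi(v\otimes v)=q\,v\otimes v$, and the Nichols algebra is the truncated polynomial algebra
\[
\mathcal{B}(V)=k[v]/(v^N),
\]
so the bosonization $\mathcal{B}(V)\rtimes k\Gamma$ is finite-dimensional; for $\Gamma=\mathbb{Z}/N$ it recovers the Taft algebra of dimension $N^2$.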
Let us briefly recall the classification results of \cite{AS2} over an algebraically closed field $k$ of characteristic zero here in order to provide the basis for comparison to this paper's classification in Section~\ref{section2} later. To fix notation, let $\mathcal{D}$ denote a \emph{finite Cartan datum}. That is, a finite abelian group $\Gamma$, an $n\times n$ Cartan matrix $A=(a_{ij})$ with a choice of generating group elements $g_i$, and characters $\chi_i$ for $i=1,\ldots,n$. Then define $q_{ij}:=\chi_j(g_i)$ and impose the conditions that \begin{equation}\label{cartandatum} q_{ij}q_{ji}=q_{ii}^{a_{ij}}, \quad q_{ii}\neq 1. \end{equation} We can associate to the Cartan matrix $A$ a root system $\Phi$ (with positive roots $\Phi^+$). The simple roots $\alpha_i$ of $\Phi$ are indexed by $i=1,\ldots ,n$. Denote by $\chi$ the set of connected components of the corresponding Dynkin diagram, by $\Phi_J$ the root system restricted to the component $J\in \chi$, and write $i\sim j$ if $i$ and $j$ are in the same connected component. Denote further \[ g_\alpha:= \prod_{i=1}^n{g_i^{n_i}}, \qquad \chi_\alpha:= \prod_{i=1}^n\chi_{i}^{n_i}, \qquad \text{for a root } \alpha=\sum_{i=1}^n{n_i\alpha_i}. \] To state the classification of finite-dimensional pointed Hopf algebras over abelian groups, some technical assumptions need to be made: \begin{enumerate} \item[(a)] Assume that the parameters $q_{ii}$ are roots of unity of \emph{odd} order $N_i$. \item[(b)] The order $N_i=N_J$ is constant on each connected component $J$, for $i\in J$. \item[(c)] If $J\in \chi$ is of type $G_2$, then 3 does not divide $N_J$. \end{enumerate} To construct a pointed Hopf algebra from a Cartan datum $\mathcal{D}$, we need two families of parameters: \begin{enumerate} \item[(d)] Let $\lambda=(\lambda_{ij})$ be an $n\times n$-matrix of elements in $k$ such that for all $i \nsim j$, $g_ig_j=1$ or $\chi_i\chi_j\neq \varepsilon$ implies $\lambda_{ij}=0$. 
\item[(e)] Further, let $\mu=(\mu_\alpha)_{\alpha\in\Phi^+}$ be a family of elements in $k$ such that, for any $\alpha\in \Phi^+_J$ with $J\in \chi$, if $g_{\alpha}^{N_J}=1$ or $\chi_{\alpha}^{N_J}\neq \varepsilon$, then $\mu_{\alpha}=0$. \end{enumerate} \begin{definition}[{\cite[5.4]{AS2}}]\label{asform} Given a Cartan datum $\mathcal{D}$ with families of parameters $\lambda, \mu$ as above, there is a Hopf algebra $u=u(\mathcal{D},\lambda,\mu)$. The algebra $u$ is generated by elements $g\in \Gamma$ and $x_i$ for $i=1,\ldots, n$ (for the definition of the elements $u_\alpha(\mu)\in k\Gamma$, $\alpha\in \Phi^+$, appearing below, see \cite[2.14]{AS2}), subject to the relations \begin{align} gx_i&=\chi_i(g)x_ig,\qquad &\text{for all $i$, $g\in \Gamma$},\label{asrel1}\\ \underline{\operatorname{ad}}(x_i)^{1-a_{ij}}(x_j)&=0, \qquad &\text{for $i\neq j$, $i\sim j$},\label{asrel2}\\ \underline{\operatorname{ad}}(x_i)(x_j)&=\lambda_{ij}(1-g_ig_j), \qquad &\text{for all $i<j$, $i\nsim j$},\label{asrel3}\\ x_\alpha^{N_J}&=u_\alpha(\mu), \qquad &\text{for all $\alpha\in \Phi_J^+$, $J\in \chi$}.\label{asrel4} \end{align} Here, $\underline{\operatorname{ad}}(x)(y)$ is the \emph{braided} commutator $xy-m\circ \Psi(x\otimes y)$, where $m$ denotes multiplication and $\Psi$ is the YD-braiding. The comultiplication is given by $\Delta(x_i)=x_i\otimes 1 + g_i\otimes x_i.$ \end{definition} \begin{theorem}[{\cite[0.1]{AS}}] Under the above assumptions (a)--(e) on a Cartan datum $\mathcal{D}$ with parameters $\lambda$, $\mu$, the Hopf algebra $u(\mathcal{D},\lambda, \mu)$ is a finite-dimensional indecomposable pointed Hopf algebra with $G(u)=\Gamma$. Moreover, if $\abs{G}$ is not divisible by $2$, $3$, $5$, or $7$, then any indecomposable finite-dimensional pointed Hopf algebra over $kG$, where $G$ is abelian, $k=\overline{k}$, and $\operatorname{char} k=0$, is of this form. 
\end{theorem} \subsection{Algebras with Triangular Decomposition (Free Case)} A triangular decomposition of an algebra means that an intrinsic PBW-type decomposition exists, similar to that of universal enveloping algebras of Lie algebras. This is a common feature of quantum groups and rational Cherednik algebras, but is shared more generally by all braided Drinfeld and Heisenberg doubles (cf. \cite[Section~3]{Lau}). Here, we use the definitions introduced in \cite{BB} to study such algebras with triangular decomposition (so-called \emph{braided doubles}). From a deformation-theoretic point of view, triangular decomposition can be viewed as follows. Let $V$, $V^*$ be dually paired finite-dimensional vector spaces and $H$ a Hopf algebra over a field $k$, such that $V$ is a left $H$-module, and $V^*$ carries the dual right $H$-action. That is, for the evaluation map $\langle ~,~\rangle\colon V^*\otimes V\to k$, we have \begin{equation} \langle f\triangleleft h,v\rangle=\langle f, h\triangleright v\rangle, \qquad \forall f\in V^*, v\in V, h\in H. \end{equation} Now define $A_0(V,V^*)$ to be the algebra on $T(V)\otimes H \otimes T(V^*)$ with relations \begin{equation}\label{boson} fh=h_{(1)}(f\triangleleft h_{(2)}), \qquad hv=(h_{(1)}\triangleright v) h_{(2)}, \end{equation} (i.e. the bosonizations $T(V)\rtimes H$ and $H\ltimes T(V^*)$ are subalgebras), and $[f,v]=0$. In \cite[3.1]{BB}, a family of deformations of $A_0(V,V^*)$ over $\operatorname{Hom}_k(V^*\otimes V,H)$ is defined. The algebra $A_\beta(V,V^*)$, for a parameter $\beta\colon V^*\otimes V\to H$, is defined using the same generators in $V$, $V^*$ and $H$ with the same bosonization relations, but the commutator relations \begin{equation} [f,v]=\beta(f\otimes v). 
\end{equation} In order to obtain flat deformations, we restrict to maps $\beta$ such that the multiplication map \begin{align*} m\colon T(V)\otimes H\otimes T(V^*)&\stackrel{\sim}{\longrightarrow} A_{\beta}(V,V^*),&v\otimes h\otimes f&\mapsto vhf, \end{align*} is an isomorphism of $k$-vector spaces. \begin{definition} In the case where $m$ gives such an isomorphism of $k$-vector spaces, we say that $A_\beta(V,V^*)$ is a \emph{free braided double}. \end{definition} \begin{theorem}[{\cite[Theorem 3.3]{BB}}]\label{bbthm} The algebra $A_\beta(V,V^*)$ is a free braided double if and only if there exists a $k$-linear map $\delta\colon V\to H\otimes V$, $\delta(v)=v^{[-1]}\otimes v^{[0]}$, which is YD-compatible with the $H$-action on $V$, i.e. for any $h\in H$ \begin{equation}\label{ydcond} h_{(1)}v^{[-1]}\otimes (h_{(2)}\triangleright v^{[0]})=(h_{(1)}\triangleright v)^{[-1]}h_{(2)}\otimes (h_{(1)}\triangleright v)^{[0]}. \end{equation} In this case, we call $(V,\delta)$ a \emph{quasi-YD-module} and we have \begin{equation}\label{commrel} [f,v]=\beta(f\otimes v)=v^{[-1]}\langle f,v^{[0]}\rangle. \end{equation} \end{theorem} Note that $A_{\beta}(V,V^*)$ is a graded algebra where $\deg v=1$, $\deg h=0$, and $\deg f=-1$, for all $v\in V$, $h\in H$, and $f\in V^*$. \subsection{Triangular Ideals}\label{triangularideals} So far, the braided Hopf algebras $T(V)$ and $T(V^*)$ were assumed to be free. We can bring additional relations into the picture, defining \emph{braided doubles} that are not necessarily free. Let $I\triangleleft T(V)$ and $I^*\triangleleft T(V^*)$ be ideals. We want to determine when the quotient map \[ m\colon T(V)/I\otimes H \otimes T(V^*)/{I^*}\stackrel{\sim}{\longrightarrow} A_\beta(V, V^*)/{( I, I^*)} \] is still a graded isomorphism of $k$-vector spaces. In \cite[Appendix~A]{BB} it is shown that this is the case if and only if $J:=( I, I^*)$ is a so-called \emph{triangular ideal}. 
That is, $J=I\otimes H \otimes T(V^*)+T(V)\otimes H \otimes I^*$, where $I\triangleleft T^{>1}(V)$, $I^*\triangleleft T^{>1}(V^*)$ are homogeneously generated ideals such that $I$ and $I^*$ are $H$-invariant and \begin{equation}\label{idealcond} T(V^*)I\leq J, \qquad I^*T(V)\leq J. \end{equation} This condition is equivalent to the commutators $[f,I]$ and $[I^*,v]$ being contained in $J$ for all elements $v\in V$, $f\in V^*$. For each quasi-YD-module, there exists a unique largest triangular ideal $I_{\op{max}}$, and thus a unique maximal quotient, referred to as the \emph{minimal braided double} of $V$. If $(V,\delta)$ is a YD-module, then the maximal quotient $T(V)/{I_{\op{max}}}$ is the Nichols algebra $\mathcal{B}(V)$ of $V$, and the braided double on $\mathcal{B}(V)\otimes H \otimes\mathcal{B}(V^*)$ is a generalization of the Heisenberg double, a so-called \emph{braided Heisenberg double}. For the purpose of this paper, we need ideals $I$ such that $T(V)/I$ is a braided bialgebra, where $V$ is a YD-module. That is, $T(V)/I$ is a bialgebra object not in the category of $k$-vector spaces but in the category of YD-modules over $kG$ (see e.g. \cite[1.2--1.3]{AS}). In fact, if $I$ is a homogeneously generated ideal in $T^{>1}(V)$ which is a coideal and a YD-submodule, then $T(V)/I$ is a braided Hopf algebra. We denote the collection of such ideals by $\mathcal{I}_V$. In particular, $I_{\op{max}}\in \mathcal{I}_V$, as the Nichols algebra $\mathcal{B}(V)$ is a braided Hopf algebra. \section{Hopf Algebras with Triangular Decomposition}\label{section1.5} In this section, we let $k$ be a field of arbitrary characteristic and $H$ a Hopf algebra over $k$. We introduce a notion of a Hopf algebra with triangular decomposition over $H$. \subsection{Definitions}\label{definitions} We refer to the grading of a braided double $T(V)/I\otimes H\otimes T(V^*)/{I^*}$ given by \[ \deg v=1,\quad \deg f=-1, \quad \deg h=0, \qquad \forall v\in V, ~f\in V^*, ~h\in H, \] as the \emph{natural grading}. 
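For orientation, in the standard example $U_q(\mathfrak{sl}_2)$ (with the coproduct conventions of Section~\ref{indecomposability}), the natural grading assigns
\[
\deg E=1,\qquad \deg F=-1,\qquad \deg K^{\pm 1}=0,
\]
and $\Delta(E)=E\otimes 1+K\otimes E$ is homogeneous of degree $1$ for the total grading on the tensor product, so the coproduct is a morphism of graded algebras.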
We want to study Hopf algebras with triangular decomposition preserving this grading. \begin{definition}\label{triangulardecompdefn} A bialgebra (or Hopf algebra) $A$ with \emph{triangular decomposition} over a Hopf algebra $H$ is a braided double $A=T(V)/I\otimes H\otimes T(V^*)/{I^*}$ which is a bialgebra (respectively Hopf algebra) such that \begin{align} \bullet~&\text{$H$ is a subcoalgebra of $A$ with respect to the original coproduct of $H$},\label{assum0}\\ \begin{split}\bullet~&\text{the subspaces $T(V)\otimes H$ and $H\otimes T(V^*)$ are closed under the}\\&\text{coproduct of $A$,} \end{split}\label{assum1}\\ \begin{split}\bullet~&\text{the coproduct and counit are morphisms of graded algebras}\\ &\text{for the natural grading.} \end{split}\label{assum2} \end{align} (In the Hopf case, the antipode $S$ is required to preserve the natural grading and the subspaces $T(V)\otimes H$ and $H\otimes T(V^*)$.) \end{definition} Note that (\ref{assum2}) implies that $\varepsilon(v)=\varepsilon(f)=0$ for all $v\in V$, $f\in V^*$. We further observe that assumptions (\ref{assum1}) and (\ref{assum2}), combined with the counit property, give that $\Delta(V)\leq H\otimes V+V\otimes H$ as well as $\Delta(V^*)\leq H\otimes V^*+V^*\otimes H$. Consider the compositions $\delta_r, \delta_l$ with projections in \[ \xymatrix{ &\ar[dl]_{\delta_l}\ar[d]^{\Delta} V \ar[rd]^{\delta_r}&\\ H\otimes V&\ar@{->>}[l]^-{p_1}H\otimes V\oplus V\otimes H\ar@{->>}[r]_-{p_2}&V\otimes H .} \] The coalgebra axioms imply that $\delta_l$ and $\delta_r$ are left (respectively, right) $H$-coactions. In particular, as the semidirect product relations in $A$ are preserved by $\Delta$, the coactions $\delta_l$ and $\delta_r$ are left (respectively, right) YD-compatible with the given actions of $H$ on $V$ (right action via the antipode). Similarly, we can obtain a left and right YD-module structure over $H$ on the dual $V^*$ from the coproduct. The corresponding coactions are denoted by $\delta_l^*$ and $\delta_r^*$. 
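To make the projections concrete, consider a skew-primitive coproduct as in Section~\ref{indecomposability}: if $\Delta(v)=v\otimes g+1\otimes v$ for a grouplike $g\in H$, then
\[
\delta_r(v)=v\otimes g\in V\otimes H,\qquad \delta_l(v)=1\otimes v\in H\otimes V,
\]
so $\delta_r$ records the $H$-degree of $v$, while $\delta_l$ is the trivial coaction.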
\begin{definition} Given a bialgebra $A$ with triangular decomposition over $H$, we define the \emph{right (respectively, left) YD-structure} of $A$ to be $\delta_r$ (respectively, $\delta_l$) together with the given $H$-actions. We refer to $\delta_r^*$ and $\delta_l^*$ (with the dual $H$-actions) as the right and left \emph{dual} YD-structures. \end{definition} To fix Sweedler's notation for the different coactions, denote $\delta_r(v)=v^{(0)}\otimes v^{(-1)}$ and $\delta_l(v)=v^{\overline{(-1)}}\otimes v^{\overline{(0)}}$ and use similar notations for $f\in V^*$. We will reformulate the definition of a bialgebra with triangular decomposition in terms of conditions on the YD-structures of $A$ in (\ref{eqn1})--(\ref{eqn5}) in the free case first. \begin{lemma}\label{hopflemma} A bialgebra $A$ with triangular decomposition over $H$ is a Hopf algebra with triangular decomposition if and only if there exists a $k$-linear map $S\colon V\oplus V^*\to V\otimes H\oplus V^*\otimes H$ such that \begin{align} \begin{split} S(v^{(0)})v^{(-1)}+(S(v^{\overline{(-1)}})_{(1)}\triangleright v^{\overline{(0)}})S(v^{\overline{(-1)}})_{(2)}&=0, \\ v^{(0)}S(v^{(-1)})+({v^{\overline{(-1)}}}_{(1)}\triangleright S(v^{\overline{(0)}})){v^{\overline{(-1)}}}_{(2)}&=0, \end{split}&\forall v\in V,\label{antipodecond1}\\ \begin{split} f^{\overline{(-1)}}S(f^{\overline{(0)}})+S(f^{(-1)})_{(1)}(f^{(0)}\triangleleft S(f^{(-1)})_{(2)})&=0,\\ S(f^{\overline{(-1)}})f^{\overline{(0)}}+{f^{(-1)}}_{(1)}(S(f^{(0)})\triangleleft {f^{(-1)}}_{(2)})&=0, \end{split}&\forall f\in V^*.\label{antipodecond2} \end{align} In this case, $S$ extends uniquely to an antipode on all of $A$. \end{lemma} \begin{proof} This follows (under use of the semidirect product relations) by restating the antipode axioms for the coproduct of a Hopf algebra with triangular decomposition, in which the coproducts have the form $\Delta(v)=v^{(0)}\otimes v^{(-1)}+v^{\overline{(-1)}}\otimes v^{\overline{(0)}}$. 
Note that $\varepsilon(v)=0$ as we require the counit to be a morphism of graded algebras. \end{proof} \subsection{The Free Case}\label{freesection} Let $A$ be a \emph{free} braided double, i.e. $A=T(V)\otimes H\otimes T(V^*)$. We can now state necessary and sufficient conditions on the YD-structures of $A$ to make the algebra $A$ a bialgebra with triangular decomposition. In the following, we stick to the notation of \cite[Definition~2.1]{BB}, denoting the quasi-coaction determining the commutator relations between elements of $V$ and $V^*$ by $\delta(v)=v^{[-1]}\otimes v^{[0]}$, for $v\in V$. \begin{lemma} A free braided double $A$ on $T(V)\otimes H\otimes T(V^*)$ is a bialgebra with triangular decomposition if and only if there exist YD-structures $\delta_l$, $\delta_r$, $\delta_l^*$, and $\delta_r^*$ such that the following conditions hold for $v\in V$, $f\in V^*$: \begin{align} (f^{(0)}\triangleleft {v^{\overline{(-1)}}})\otimes ({f^{(-1)}}\triangleright v^{\overline{(0)}})&=f\otimes v,\label{eqn1}\\ ({f^{\overline{(-1)}}}\triangleright v^{(0)})\otimes (f^{\overline{(0)}}\triangleleft {v^{(-1)}})&=v\otimes f,\label{eqn2}\\ v^{(0)}f^{(0)}\otimes (f^{(-1)}v^{(-1)}-v^{(-1)}f^{(-1)})&=0,\label{eqn3}\\ (f^{\overline{(-1)}}v^{\overline{(-1)}}-v^{\overline{(-1)}}f^{\overline{(-1)}})\otimes v^{\overline{(0)}}f^{\overline{(0)}}&=0,\label{eqn4}\\ \begin{aligned}&v^{(0)[-1]}\langle f^{(0)}, v^{(0)[0]}\rangle\otimes f^{(-1)}v^{(-1)}\\&+ f^{\overline{(-1)}}v^{\overline{(-1)}}\otimes v^{\overline{(0)}[-1]}\langle f^{\overline{(0)}}, v^{\overline{(0)}[0]}\rangle\end{aligned}&= {v^{[-1]}}_{(1)}\otimes {v^{[-1]}}_{(2)}\langle f, v^{[0]} \rangle. \label{eqn5} \end{align} \end{lemma} \begin{proof} The conditions are easily checked --- under use of the relations in $A$ and the PBW theorem --- to be equivalent to the requirement that (\ref{commrel}) is preserved by $\Delta$. 
This gives the relations (\ref{eqn3})--(\ref{eqn5}), as well as \begin{align*} {v^{\overline{(-1)}}}_{(1)}(f^{(0)}\triangleleft {v^{\overline{(-1)}}}_{(2)})\otimes ({f^{(-1)}}_{(1)}\triangleright v^{\overline{(0)}}){f^{(-1)}}_{(2)}&=v^{\overline{(-1)}}f^{(0)}\otimes v^{\overline{(0)}}f^{(-1)},\\ ({f^{\overline{(-1)}}}_{(1)}\triangleright v^{(0)}){f^{\overline{(-1)}}}_{(2)}\otimes {v^{(-1)}}_{(1)}(f^{\overline{(0)}}\triangleleft {v^{(-1)}}_{(2)})&=v^{(0)}f^{\overline{(-1)}}\otimes v^{(-1)}f^{\overline{(0)}}. \end{align*} These relations are equivalent to (\ref{eqn1}) and (\ref{eqn2}) by applying the counit of $H$ together with the coaction axioms. Further, given $\delta_r$, $\delta_l$ as well as their dual counterparts $\delta_r^*$, $\delta_l^*$, the bosonization relations are preserved by the coproduct defined as \begin{align} \Delta(v)&=v^{(0)}\otimes v^{(-1)}+v^{\overline{(-1)}}\otimes v^{\overline{(0)}}, &\Delta(f)&=f^{(0)}\otimes f^{(-1)}+f^{\overline{(-1)}}\otimes f^{\overline{(0)}}, \end{align} for $v\in V$, $f\in V^*$, by YD-compatibility. \end{proof} It will become apparent in Section~\ref{section2} which constraints conditions (\ref{eqn1})--(\ref{eqn5}) impose on the structure of $A$ when working over a group algebra $H=kG$, and in Section~\ref{conclusion} when working over a polynomial ring. \subsection{Triangular Hopf Ideals}\label{triangularhopfsect} We are looking for triangular ideals $J=I\otimes H\otimes T(V^*)+T(V)\otimes H\otimes I^*$ (cf. \cite[Appendix~A]{BB} or Section \ref{triangularideals}) which are also coideals, so that $A/J$ is a bialgebra or Hopf algebra with a triangular decomposition. Using the description of the coproduct $\Delta$ in terms of the left and right YD-structures on $A$, the triangular ideals $J$ that are also coideals are simply those triangular ideals for which $I$ (and $I^*$) are YD-submodules for both $\delta_l$ and $\delta_r$ (respectively, $\delta_l^*$ and $\delta_r^*$). 
If $A$ is a triangular Hopf algebra with antipode given as in Lemma~\ref{hopflemma}, then every triangular ideal which is also a coideal is automatically a Hopf ideal. \begin{definition} We denote the collection of triangular ideals of the form \[ J=I\otimes H\otimes T(V^*)+T(V)\otimes H\otimes I^* \] for homogeneously generated $I\triangleleft T^{>1}(V)$ and $I^*\triangleleft T^{>1}(V^*)$ which are also YD-submodules for $\delta_r$, $\delta_l$ (respectively for $\delta_r^*$, $\delta_l^*$) by $\mathcal{I}_\Delta(A)$. Such ideals $J$ are called \emph{triangular Hopf ideals}. \end{definition} \subsection{Asymmetric Braided Drinfeld Doubles}\label{asymdrin} A special class of Hopf algebras with triangular decomposition is provided by braided Drinfeld doubles of primitively generated Hopf algebras over a quasitriangular base Hopf algebra $H$. This form of the Drinfeld double was introduced as the \emph{double bosonization} in \cite{Maj1,Maj2}; see also \cite[Section 3.5]{Lau} for the presentation used here. We now give a more general definition of an \emph{asymmetric} braided Drinfeld double which is suitable to capture the more general class of Hopf algebras that we find in Section~\ref{section2}, including multiparameter quantum groups, as examples. In this construction, the base Hopf algebra $H$ need not be quasitriangular, and the asymmetric braided Drinfeld double is also not quasitriangular in general. To define the braided Drinfeld double of dually paired braided Hopf algebras $C$ and $B$ in the category $\lmod{\operatorname{Drin}(H)}=\leftexpsub{H}{H}{\mathcal{YD}}$, we require that $\langle~,~\rangle \colon C\otimes B\to k$ is a morphism of YD-modules. This implies that the actions and coactions on $C$ and $B$ are dual to one another (by means of the antipode of $H$). 
A weaker requirement is that we consider the images of $C$ and $B$ under the forgetful functor \[ F\colon \leftexpsub{H}{H}{\mathcal{YD}}\longrightarrow\lmod{H}, \] and require that $F(C)$ and $F(B)$ are dually paired Hopf algebras in $\lmod{H}$ (with the induced braiding under $F$), while $C$ and $B$ may not be dually paired in $\leftexpsub{H}{H}{\mathcal{YD}}$. Hence the coactions on $C$ and $B$ do not necessarily have to be related via the antipode, but the actions and resulting braidings need to be related by duality. This is captured by the following definition, where we denote the left coactions by $c\mapsto c^{(-1)}\otimes c^{(0)}$ and $b\mapsto b^{(-1)}\otimes b^{(0)}$ respectively. \begin{definition} We say that two braided Hopf algebras $C,B$ in $\leftexpsub{H}{H}{\cal{YD}}$ are \emph{weakly dually paired} if there exists a morphism of $H$-modules $\langle~,~\rangle\colon C\otimes B\to k$ such that \begin{align} \langle cc',b\rangle&=\langle c',b_{(1)}\rangle\langle c,b_{(2)}\rangle ,& \langle c,bb'\rangle&=\langle c_{(1)},b'\rangle\langle c_{(2)},b\rangle, \end{align} for all $c,c'\in C$, and $b,b'\in B$; as well as \begin{align}\label{weaklyduallypairedcond} (c^{(-1)}\triangleright b)c^{(0)}&=b^{(0)}(b^{(-1)}\triangleright c). \end{align} \end{definition} This weaker duality is equivalent to an analogue of condition (\ref{eqn2}). To see this, we can regard the left $H$-coaction on $B$ as a right $H^{\operatorname{cop}}$-coaction, over the co-opposite Hopf algebra $H^{\operatorname{cop}}$ with coproduct $\tau\Delta$. Given a left $H$-action $\triangleright$, we define a right $H^{\operatorname{cop}}$-action $\triangleleft:=\triangleright(S^{-1}\otimes \operatorname{Id})\tau$ (where $\tau$ denotes the $\otimes$-symmetry in $\mathbf{Vect}_k$). The resulting structures make $B$ a right YD-module over $H^{\operatorname{cop}}$. 
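As a simple instance of this dualization, take $H=kG$ a group algebra, which is cocommutative, so that $H^{\operatorname{cop}}=H$ and $S^{-1}(g)=g^{-1}$. The induced right action is then
\[
b\triangleleft g=S^{-1}(g)\triangleright b=g^{-1}\triangleright b,\qquad g\in G,
\]
i.e. a left $kG$-module becomes a right $kG$-module by acting with inverse group elements.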
The analogue of condition (\ref{eqn2}) can be rephrased as requiring for all $b\in B, c\in C$ that \begin{align} &&b^{(0)}c^{(-1)}\otimes b^{(-1)}c^{(0)}&=c^{(-1)}b^{(0)}\otimes c^{(0)} b^{(-1)},\nonumber\\ &\Longleftrightarrow& b^{(0)}c^{(-1)}\otimes b^{(-1)}c^{(0)}&=({c^{(-1)}}_{(1)}\triangleright b^{(0)}){c^{(-1)}}_{(2)}\otimes {b^{(-1)}}_{(1)}(c^{(0)} \triangleleft {b^{(-1)}}_{(2)}), \label{compeqn2}\\ &\Longleftrightarrow& bc&=(c^{(-1)}\triangleright b^{(0)})(c^{(0)}\triangleleft b^{(-1)}),\label{compeqn}\\ &\Longleftrightarrow& (c^{(-1)}\triangleright b)c^{(0)}&=b^{(0)}(c\triangleleft S(b^{(-1)}))=b^{(0)}(b^{(-1)}\triangleright c),\nonumber \end{align} which gives condition (\ref{weaklyduallypairedcond}). We can visualize conditions (\ref{compeqn2}) and (\ref{compeqn}) using graphical calculus (with the conventions from \cite{Lau}), see Fig. \ref{braidingcomp}. \begin{figure}[h] \[ \begin{array}{ccc} \vcenter{\hbox{\import{Graphics/}{asymcond.pdf_tex}}}&\Longleftrightarrow& \vcenter{\hbox{\import{Graphics/}{asymcond2.pdf_tex}}}. \end{array} \] \caption{Left and right braiding compatibility condition} \label{braidingcomp} \end{figure} Given (\ref{weaklyduallypairedcond}), we can define an analogue of the braided Drinfeld double on the $k$-vector space $B\otimes H\otimes C$ (rather than using $B\otimes \operatorname{Drin}(H)\otimes C$) with this weaker requirement of duality on $C$ and $B$. The definition of the \emph{asymmetric} braided Drinfeld double can be given using Tannakian reconstruction theory by describing its category of modules. This is similar to the approach used for the braided Drinfeld double in \cite[Appendix~B]{Maj2} (cf. also \cite[Section 3.2]{Lau}). \begin{definition} Let $C,B$ be weakly dually paired braided Hopf algebras in $\leftexpsub{H}{H}{\cal{YD}}$. 
We define the category $\YDasy{C}{B}{H}$ of \emph{asymmetric YD-modules} over $C,B$ as having objects $V$ which are left $H$-modules (also viewed as right modules by means of the inverse antipode), equipped with a left $C$-action and a right $B$-action (by morphisms of $H$-modules) which satisfy the compatibility condition \begin{align}\label{assydcond} ((c_{(2)}\triangleright v)\triangleleft {b_{(1)}}^{(-1)})\triangleleft b_{(2)}\langle c_{(1)}, {b_{(1)}}^{(0)}\rangle &=c_{(1)}\triangleright ({c_{(2)}}^{(-1)}\triangleright(v\triangleleft b_{(1)}))\langle {c_{(2)}}^{(0)},b_{(2)}\rangle, \end{align} for all $v\in V, b\in B, c\in C$. Morphisms in $\YDasy{C}{B}{H}$ are required to commute with the actions of $H$, $B$ and $C$. \end{definition} It may help to visualize the condition (\ref{assydcond}) using graphical notation, see Fig. \ref{asymydpicture}. \begin{figure}[h] \begin{center} \import{Graphics/}{ydcondasy.pdf_tex}~. \end{center} \caption{Asymmetric Yetter--Drinfeld modules} \label{asymydpicture} \end{figure} \begin{proposition} The category $\YDasy{C}{B}{H}$ is monoidal, with a commutative diagram of monoidal fiber functors \[ \xymatrix{&\rmod{B}(\lmod{H})\ar[dr]&&\\ \YDasy{C}{B}{H}\ar[ur]\ar[dr] &&\lmod{H}\ar[r]&\mathbf{Vect}_k.\\ &\lmod{C}(\lmod{H})\ar[ur]&& } \] \end{proposition} \begin{proof} This monadicity statement can for example be checked directly using graphical calculus. Note that condition (\ref{compeqn}) is crucial. The fiber functors simply forget the additional structure at each step. \end{proof} \begin{definition}\label{asymmetricdrinfelddef} The \emph{asymmetric braided Drinfeld double} $\operatorname{Drin}^{\operatorname{asy}}_H(C,B)$ is defined as the algebra obtained by Tannakian reconstruction\footnote{See e.g. \cite[9.4.1]{Maj1} or \cite[Section 2.3]{Lau}.} on $B\otimes H\otimes C$ applied to the functor $\YDasy{C}{B}{H}\longrightarrow \mathbf{Vect}_k$. 
Hence $\lmod{\operatorname{Drin}^{\operatorname{asy}}_H(C,B)}$ and $\YDasy{C}{B}{H}$ are canonically equivalent as categories. \end{definition} \begin{proposition}\label{asymmetricdrinrel} An explicit presentation for the asymmetric braided Drinfeld double $\operatorname{Drin}^{\operatorname{asy}}_H(C,B)$ on the $k$-vector space $B\otimes H\otimes C$ can be given as follows: the multiplication on $B$ is opposite, and for $c\in C$, $b\in B$ and $h\in H$ we have \begin{align} hb&=(h_{(2)}\triangleright b)h_{(1)},\\hc&=(h_{(1)}\triangleright c)h_{(2)},\\ b_{(2)}S^{-1}({b_{(1)}}^{(-1)})c_{(2)}\langle c_{(1)}, {b_{(1)}}^{(0)}\rangle &=c_{(1)}{c_{(2)}}^{{(-1)}}b_{(1)}\langle {c_{(2)}}^{{(0)}}, b_{(2)}\rangle. \end{align} The coproducts are given by \begin{align} \Delta(h)&=h_{(1)}\otimes h_{(2)},\\ \Delta(b)&={b_{(1)}}^{(0)}\otimes b_{(2)}S^{-1}({b_{(1)}}^{(-1)}),\\ \Delta(c)&={c_{(1)}}{c_{(2)}}^{{(-1)}}\otimes {c_{(2)}}^{{(0)}}, \end{align} and the antipode is \begin{align} S(h)&=S(h), &S(b)&=S^{-1}(b^{(0)})b^{(-1)}, &S(c)&=S(c^{{(-1)}})S(c^{{(0)}}), \end{align} using the respective given structures on $H$, $B$, and $C$. \end{proposition} \begin{proof} This follows by applying reconstruction (in $\mathbf{Vect}_k$) to $\YDasy{C}{B}{H}$. See e.g.\ \cite[Section 2.3]{Lau} for formulas on how to obtain the structures, including the antipode (Figure 2.1). \end{proof} An important feature of the braided Drinfeld double is that it has a braided monoidal category of representations, hence is quasitriangular. For the \emph{asymmetric} braided Drinfeld double to be quasitriangular, we need $H$ to be quasitriangular. If $H$ is not quasitriangular, this can be achieved by working over $\operatorname{Drin}(H)$ instead of $H$ as a base Hopf algebra. From now on, we restrict to the important special case where $B$ and $C$ are primitively generated by finite-dimensional YD-modules. 
This way, we obtain examples of Hopf algebras with a triangular decomposition over $H$. \begin{lemma}\label{asymmetricdouble} Let $V$, $V^*$ be left YD-modules over $H$, such that the action on $V^*$ is dual to the action on $V$. Then the braided tensor (co)algebras $T(V)^{\operatorname{op}}$ and $T(V^*)^{\operatorname{cop}}$ are dually paired\footnote{We choose the opposite $T(V)^{\operatorname{op}}$ and co-opposite $T(V^*)^{\operatorname{cop}}$ in order to avoid having to take the opposite multiplication in the resulting double (cf. Proposition \ref{asymmetricdrinrel}). As tensor algebras are braided cocommutative, this choice does not affect the formulas for the coproduct.} in the monoidal category of right modules over $H$. Further assume that the compatibility condition (\ref{weaklyduallypairedcond}) holds. Then the asymmetric braided Drinfeld double $\operatorname{Drin}^{\operatorname{asy}}_H(T(V^*)^{\operatorname{cop}},T(V)^{\op{op}})$ is given on $A=T(V)\otimes H\otimes T(V^*)$ subject to the usual bosonization relations (\ref{boson}) and the cross relation \begin{align}\label{asymmetriccomm} [f,v]=S^{-1}(v^{(-1)})\langle f,v^{(0)}\rangle-f^{(-1)}\langle f^{{(0)}},v\rangle. \end{align} The coalgebra structure is given by \begin{align} \Delta(v)&=v^{(0)}\otimes S^{-1}(v^{(-1)})+1\otimes v,&\Delta(f)&=f\otimes 1+f^{{(-1)}}\otimes f^{{(0)}}. \end{align} The counit is given by $\varepsilon(v)=\varepsilon(f)=0$ and the antipode can be computed using the conditions from equations (\ref{antipodecond1}) and (\ref{antipodecond2}) as \begin{align} S(v)&=-v^{(0)}v^{(-1)},&S(f)&=-S(f^{{(-1)}})f^{{(0)}}. \end{align} We can also consider quotients of the form $A/J$ for any triangular Hopf ideal $J\in \mathcal{I}_\Delta(A)$. The quotient of $A$ by the maximal triangular Hopf ideal in $\mathcal{I}_\Delta(A)$ is denoted by $U_H(V,V^*)$. 
\end{lemma} \begin{lemma}\label{ideallemma} Let $A=\operatorname{Drin}^{\operatorname{asy}}_H(T(V^*)^{\operatorname{cop}}, T(V)^{\operatorname{op}})$ for $V$, $V^*$ as in Lemma \ref{asymmetricdouble}. Then the maximal ideal $I_{\op{max}}(A)$ in $\mathcal{I}_{\Delta}(A)$ is given by \[ I_{\op{max}}(A)=I_{\op{max}}(V)\otimes H\otimes T(V^*)+T(V)\otimes H\otimes I_{\op{max}}(V^*), \] where $I_{\op{max}}(V)$ is the maximal Nichols ideal in $T(V)$ for the left coaction on $V$, and $I_{\op{max}}(V^*)$ is the maximal Nichols ideal in $T(V^*)$ for the left coaction on $V^*$. Hence \[ m\colon \mathcal{B}(V)\otimes H\otimes \mathcal{B}(V^*)\stackrel{\sim}{\longrightarrow}U_H(V,V^*) \] is an isomorphism of $k$-vector spaces (PBW theorem). \end{lemma} \begin{proof} This is clear as we know that $T(V)^{\operatorname{op}}/{I_{\op{max}}(V)}$ and $T(V^*)^{\operatorname{cop}}/{I_{\op{max}}(V^*)}$ are weakly dually paired braided Hopf algebras and their asymmetric braided Drinfeld double is given by the quotient $\operatorname{Drin}^{\operatorname{asy}}_H(T(V^*)^{\operatorname{cop}},T(V)^{\operatorname{op}})/{I_{\op{max}}(A)}$, which must be the minimal double $U_H(V,V^*)$. \end{proof} A perfect pairing between the positive and negative part of $U_H(V,V^*)$ implies the existence of a formal power series $\operatorname{coev}$ satisfying the axioms of coevaluation. We expect that this can be used to give a braiding on a suitable category of modules over $U_H(V, V^*)$ (where $\mathcal{B}(V)$ acts integrally), in which all modules carry the structure of YD-modules over $H$. \subsection{Symmetric Triangular Decompositions} The rest of this section will be devoted to the question of recovering the braided Drinfeld double over a quasitriangular base Hopf algebra $H$ as a special case of the asymmetric braided Drinfeld double.
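Such a coevaluation can be sketched degreewise. Assuming homogeneous bases $\lbrace e_\alpha\rbrace$ of $\mathcal{B}(V)$ and dual bases $\lbrace e^\alpha\rbrace$ of $\mathcal{B}(V^*)$ with respect to the perfect pairing (this notation is ours, and we assume each graded piece is finite-dimensional), a candidate formal power series is the following:

```latex
% Sketch (our notation): coevaluation as a formal power series over
% homogeneous dual bases, one finite sum in each degree n.
\operatorname{coev}
  = \sum_{n\geq 0}\ \sum_{\deg e_\alpha=n} e_\alpha\otimes e^\alpha
  \ \in\ \prod_{n\geq 0}\mathcal{B}(V)_n\otimes\mathcal{B}(V^*)_n.
```

The snake identities for $\operatorname{ev}$ and $\operatorname{coev}$ then hold degree by degree, which is why a perfect pairing, rather than finite-dimensionality of $\mathcal{B}(V)$, suffices.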
For this, we introduce the idea of a Hopf algebra with a \emph{symmetric} triangular decomposition: \begin{definition} Let $A$ be a bialgebra with a triangular decomposition over $H$. If the associated coactions satisfy that the right coaction $\delta_r^*$ of $V^*$ is the dual coaction to $\delta_l$, i.e. \begin{equation}\label{symmetry} \langle f^{(0)}\otimes v\rangle f^{(-1)}=\langle f\otimes v^{\ov{(0)}}\rangle v^{\ov{(-1)}}, \end{equation} and the coactions $\delta_r$ and $\delta_l^*$ are compatible in the same way, then we call the triangular decomposition \emph{symmetric}. \end{definition} In the case where $H$ is a quasitriangular Hopf algebra, we can recover a special case of the definition of the braided Drinfeld double given in \cite[Example 3.5.6]{Lau} from the more general form given in Definition \ref{asymmetricdrinfelddef}, and the resulting triangular decomposition will be symmetric. For this, note that the universal $R$-matrix and its inverse give functors (see \cite{Maj13}) \begin{align*} R^{-1}\colon \lmod{H}&\longrightarrow \leftexpsub{H}{H}{\mathcal{YD}}, &(V,\triangleright) &\longmapsto (V,\triangleright, (\operatorname{Id}_H\otimes \triangleright)(R^{-1}\otimes \operatorname{Id}_V)),\\ R\colon \rmod{H}&\longrightarrow \leftexpsub{H}{H}{\mathcal{YD}}, &(V,\triangleleft) &\longmapsto (V,\triangleleft, ( \triangleleft\otimes\operatorname{Id}_H)(\operatorname{Id}_V\otimes R)). \end{align*} Given a right $H$-module $V$, we can hence give $V$ a left YD-module structure using $R^{-1}$, and $V^*$ the dual YD-module structure using $R$. Note that (\ref{weaklyduallypairedcond}) is satisfied in this case. With these structures, the relation (\ref{asymmetriccomm}) becomes \begin{align*} [f,v]&=S^{-1}(R^{-(2)})\langle f,v \triangleleft R^{-(1)}\rangle -R^{-(1)}\langle R^{-(2)}\triangleright f, v\rangle \\ &=R^{(2)}\langle R^{(1)}\triangleright f,v\rangle -R^{-(1)}\langle R^{-(2)}\triangleright f,v\rangle.
\end{align*} This is precisely the cross relation of \cite[Example 3.5.6]{Lau}. Note that we use $R=(S^{-1}\otimes \operatorname{Id}_H)R^{-1}$. This proves the following Proposition: \begin{proposition}\label{recoverdouble} Braided Drinfeld doubles of braided Hopf algebras over a quasitriangular Hopf algebra are asymmetric braided Drinfeld doubles (as in Definition \ref{asymmetricdrinfelddef}) with a symmetric triangular decomposition. \end{proposition} Note that a partial converse statement also holds: if an asymmetric braided Drinfeld double has a symmetric triangular decomposition, then it can be displayed as a braided Drinfeld double in the sense of \cite{Maj2,Lau}; however, unless $H$ is quasitriangular (with the coactions induced by the $R$-matrix), we need to view it over the base Hopf algebra $\operatorname{Drin}(H)$ instead. If the positive and negative part are perfectly paired, then we can give a formal power series describing the $R$-matrix and an appropriate subcategory (corresponding to the Drinfeld center) is braided. Particularly interesting examples of such braided Drinfeld doubles include the quantum groups $U_q(\mathfrak{g})$ for generic $q$, and the small quantum groups $u_q(\mathfrak{g})$ (see \cite[Section~4]{Maj2}). Their construction uses the concept of a \emph{weak} quasitriangular structure for which a similar statement to Proposition \ref{recoverdouble} can be made. We will see in Section~\ref{multiparametersection} that multiparameter quantum groups can be viewed as examples of asymmetric braided Drinfeld doubles that are not symmetric. Further, all the pointed Hopf algebras classified in the main result of this paper (Theorem \ref{mainclassificationthm}), under the additional assumption that the braiding is of separable type and some commutators do not vanish, are asymmetric braided Drinfeld doubles (Theorem \ref{drinfeldtheorem}).
\section{Classification over a Group}\label{section2} In this section, we denote by $A= T(V)\otimes kG\otimes T(V^*)$ a bialgebra with triangular decomposition over a group algebra $kG$. Note that we do not assume $G$ to be finite. \subsection{Preliminary Observations}\label{preliminaryobs} Hopf algebras that are generated by grouplike and skew-primitive elements are always pointed. We show that if a Hopf algebra has a triangular decomposition over a group and is of what we call \emph{weakly separable type}, then it is generated by grouplike and skew-primitive elements and hence pointed. \begin{lemma} For a bialgebra $A$ with triangular decomposition over $kG$ as above, there exist a basis $v_1,\ldots,v_n$ of $V$ and $f_1, \ldots, f_n$ of $V^*$, as well as invertible matrices $M$ and $N$ such that \begin{align} \Delta(v_i)&=v_i\otimes g_i+\sum_j{M_{ji}h_j\otimes v'_j},&\Delta(f_i)&=f_i\otimes a_i+\sum_j{N_{ji}b_j\otimes f'_j}\label{coprodonv}, \end{align} where $v'_1,\ldots,v'_n$ is another basis of $V$, and $f_1',\ldots,f_n'$ of $V^*$. \end{lemma} \begin{proof} Let $v_1, \ldots, v_n$ be a homogeneous basis for the YD-compatible grading $\delta_r$ and $v'_1,\ldots,v'_n$ a homogeneous basis for $\delta_l$. The form (\ref{coprodonv}) of the coproducts is obtained by letting $M$ be the base change matrix from $\lbrace v_i\rbrace$ to $\lbrace v_i'\rbrace$. The same argument works for the dual $V^*$, denoting the base change matrix from $\lbrace f_i\rbrace$ to $\lbrace f_i'\rbrace$ by $N$. \end{proof} \begin{lemma} A bialgebra $A$ with a triangular decomposition over $kG$ as above is a Hopf algebra, with antipode $S$ given on generators of the form $v_i$, $f_i$ as in (\ref{coprodonv}) by \begin{align} S(v_i)&=-\sum_{j}M_{ji}(h_j^{-1}\triangleright v'_j)h_j^{-1}g_i^{-1},&S(f_i)&=-\sum_{j}N_{ji}(f'_j\triangleleft b_j)b_j^{-1}a_i^{-1}. \end{align} \end{lemma} \begin{proof} The antipode axioms require that $S$ is of the form stated, using that $kG$ is a Hopf subalgebra, cf.
(\ref{antipodecond1})--(\ref{antipodecond2}). As $T(V)$ and $T(V^*)$ are free, defining $S$ on the generators extends uniquely to an anti-algebra and anti-coalgebra map on all of $A$. \end{proof} \begin{definition}\label{indecprop} A Hopf algebra $A$ with triangular decomposition over a group is called of \emph{weakly separable type} if the right degrees $g_1,\ldots, g_n$ of $V$ are pairwise distinct group elements, and the same holds for the left degrees $h_1,\ldots,h_n$ of $V$ as well as the dual degrees. \end{definition} We observe that being of weakly separable type over a group implies that $V$ and $V^*$ have 1-dimensional homogeneous components. Hence, for a homogeneous basis element $v_i$ of degree $g_i$, the element $g\triangleright v_i\neq 0$ is homogeneous of degree $gg_ig^{-1}$ and therefore has to be a scalar multiple of a basis element $v_{g(i)}$, where $g(i)$ is an index in $\lbrace 1, \ldots, n\rbrace$. Hence we obtain an action of $G$ on $\lbrace 1,\ldots, n\rbrace$. To fix notation, we write \begin{align} g\triangleright v_i&=\lambda_i(g)v_{g(i)},&f_i\triangleleft g&=\mu_i(g)f_{g(i)}. \end{align} We will see that for $A$ of weakly separable type, the base change matrices $M$, $N$ are diagonal matrices and can be chosen to be the identity matrix by rescaling the bases. This implies that $A$ is generated by skew-primitive and group-like elements and hence pointed. It is a conjecture in \cite[Introduction]{AS} that all finite-dimensional pointed Hopf algebras over an algebraically closed field of characteristic zero are in fact generated by skew-primitive and group-like elements.
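A minimal illustration of this notation (our example, not part of the classification): if $G$ is abelian, then $gg_ig^{-1}=g_i$, so the action fixes every homogeneous component, the index action is trivial, $g(i)=i$, and each $\lambda_i$ and $\mu_i$ is an ordinary character of $G$:

```latex
% Minimal illustration (our example): for abelian G the index action
% on {1,...,n} is trivial, and G acts by characters.
g\triangleright v_i=\lambda_i(g)\,v_i,\qquad
f_i\triangleleft g=\mu_i(g)\,f_i,\qquad
\lambda_i,\mu_i\in\widehat{G}:=\operatorname{Hom}(G,k^{\times}).
```

This is precisely the situation for the diagonal braidings appearing in Section \ref{quotientsection}.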
\begin{proposition}\label{primitiveprop} If $A$ is of weakly separable type, then there exist bases $\lbrace v_i\rbrace$ of $V$ and $\lbrace f_i\rbrace$ of $V^*$ consisting of skew-primitive elements, such that \begin{align}\label{primitivecoprod} \Delta(v_i)&=v_i\otimes g_i+h_i\otimes v_i, &\Delta(f_i)&=f_i\otimes a_i+b_i\otimes f_i, \end{align} and the antipode on these skew-primitive elements is given by $S(v_i)=-(h_i^{-1}\triangleright v_i)h_i^{-1}g_i^{-1}$, $S(f_i)=-(f_i\triangleleft b_i)b_i^{-1}a_i^{-1}$. \end{proposition} \begin{proof} Consider the right and left coactions $\delta_r$ and $\delta_l$ from Section \ref{definitions}. Choosing a basis $v_1,\ldots,v_n$ homogeneous for $\delta_l$ and $v'_1,\ldots, v'_n$ homogeneous for $\delta_r$, (\ref{coprodonv}) gives \begin{equation} \Delta(v_i)=v_i\otimes g_i+\sum_j{M_{ji} h_j\otimes v'_j}, \end{equation} where $M=(M_{ji})$ is the base change matrix. By coassociativity, we find that \begin{equation}\label{eq2} \sum_{j,k}{M_{ji}(M^{-1})_{kj}h_j\otimes v_k\otimes g_k}=\sum_j{M_{ji} h_j\otimes v'_j\otimes g_i}. \end{equation} By weak separability of $\delta_r$ and $\delta_l$ we now have for each $j=1,\ldots, n$: \begin{align} \sum_{k}{M_{ji}(M^{-1})_{kj}v_k\otimes g_k}=M_{ji}v'_j\otimes g_i. \end{align} Note that, for each $j$, we have $M_{ji}\neq 0$ for at least one $i$. This implies that $(M^{-1})_{kj}=0$ unless $k=i$ as the $g_i$ are all distinct. Further, if $M_{ji}\neq 0$, then $v_i$ and $v'_j$ are proportional. This can only be true for at most one $i$ for a given index $j$ by weak separability. Hence by reordering the basis $v'_1, \ldots, v'_n$ we find that $M$ is a diagonal matrix and can rescale the basis $\lbrace v'_i\rbrace$ such that $M$ is the identity matrix. Hence we have $\Delta(v_i)=v_i\otimes g_i + h_i\otimes v_i$. The antipode conditions for $A$ give (using Lemma \ref{hopflemma}) that $S$ is of the form claimed.
\end{proof} \begin{remark} The bases $\lbrace v_i\rbrace$ and $\lbrace f_i \rbrace$ need not be orthogonal with respect to the pairing $\langle ~,~\rangle$. We will see in Theorem \ref{mainclassificationthm} that if the characters $\lambda_i$ are all distinct, then the bases can be chosen to be dual bases. \end{remark} \begin{remark}\label{primitivenotation} In the following, we fix a basis $v_1, \ldots,v_n$ for $V$ and $f_1,\ldots, f_n$ for $V^*$ such that \begin{align} \Delta(v_i)&=v_i\otimes g_i+h_i\otimes v_i,&\Delta(f_i)&=f_i\otimes a_i+b_i\otimes f_i, &i=1,\ldots,n. \end{align} \end{remark} A direct observation from Proposition \ref{primitiveprop} is that the algebra $A$ is generated by skew-primitive and grouplike elements (which precisely give the group $G$) and hence pointed. We have the following restrictions on the group structure. \begin{proposition}\label{symmetricprop} In the group $G$, the relations $[g_i,a_j]=[h_i,a_j]=1$ and $[h_i, b_j]=[g_i,b_j]=1$ hold for all $i,j=1, \ldots,n$. In particular, if $A$ has a symmetric triangular decomposition, then the subgroup of $G$ generated by all degrees is abelian. Further, the following identities for the characters of the group action hold: \begin{align}\label{characteridentities} \mu_j(h_i)&=\lambda_i(a_j)^{-1},&\mu_j(g_i)&=\lambda_i(b_j)^{-1}. \end{align} \end{proposition} \begin{proof} The commutator relations follow by applying (\ref{eqn3}) and (\ref{eqn4}) to a pair of homogeneous basis elements of $V$ and $V^*$ with respect to $\delta_l, \delta_r^*$ (or $\delta_r, \delta_l^*$). Then it follows from (\ref{eqn1}) and (\ref{eqn2}) that $h_i(j)=j$, $a_j(i)=i$, $g_i(j)=j$ and $b_j(i)=i$ by the PBW theorem. This implies the relations (\ref{characteridentities}). In the symmetric case, $a_i=g_i^{-1}$ and $b_i=h_i^{-1}$, which forces the subgroup generated by all degrees to be abelian.
\end{proof} \subsection{Classification in the Free Case of Weakly Separable Type}\label{freeclassification} We are now in a position to classify all Hopf algebras $A$ with triangular decomposition of weakly separable type (cf. Definition \ref{indecprop}). This will enable us to view the Hopf algebras arising from this classification as analogues of multiparameter quantum groups in Section~\ref{multiparametersection}. We start by considering the case $A=T(V)\otimes kG\otimes T(V^*)$ which is referred to as the \emph{free} case. \begin{proposition}\label{pointedthm} For the Hopf algebra $A$ with triangular decomposition of weakly separable type to be indecomposable as a coalgebra, it is necessary that $G$ is generated by elements $k_1,\ldots,k_n$, $l_1,\ldots, l_n$ such that there exist generators $v_i$ of $V$ and $f_i$ of $V^*$ which are skew-primitive of the form \begin{align}\label{coproductform} \Delta(v_i)&=v_i\otimes k_i+1\otimes v_i, &\Delta(f_i)&=f_i\otimes 1+l_i\otimes f_i, \end{align} with $[k_i,l_j]=1$ for all $i,j$. For the characters of the actions on the homogeneous components of $V$ and $V^*$ we require that \begin{equation}\label{characterrequirement} \mu_j(k_i)=\lambda_i(l_j)^{-1}. \end{equation} \end{proposition} \begin{proof} To determine when pointed Hopf algebras are indecomposable as coalgebras, consider the graph $\Gamma_A$ described in \ref{indecomposability}. Assume that $A$ has generators given as in Remark~\ref{primitivenotation}. We claim that the connected components of $\Gamma_A$ are in bijection with the double cosets of the subgroup \[ Z:=\langle g_1^{-1}h_1,\ldots, g_n^{-1}h_n, a_1^{-1}b_1,\ldots, a_n^{-1}b_n\rangle \] in $G$, which partition $G$.
Indeed, using that the elements $gv_i$ and $gf_i$ are skew-primitive of type $(gg_i,gh_i)$ and $(ga_i,gb_i)$, we find that the connected component of $g$ contains the strands \[ \ldots \longrightarrow g(g_i^{-1}h_i)^{-2}\longrightarrow g(g_i^{-1}h_i)^{-1}\longrightarrow g \longrightarrow g(g_i^{-1}h_i)^1\longrightarrow g(g_i^{-1}h_i)^{2} \longrightarrow \ldots \] for $i=1,\ldots, n$, and the same strands with $a_i^{-1}b_{i}$ instead of $g_i^{-1}h_i$ (and with $g$ multiplied on the right). Moreover, as the elements $gv_i$, $gf_i$, $v_ig$, $f_ig$ (and possibly linear combinations of products of them, which would again be skew-primitive with degrees given by elements in $Z$) are the only skew-primitive elements in $A$, and thus give the only arrows in $\Gamma_A$, two elements $g$ and $h$ are in the same connected component if and only if $z_1gz_2=z_3hz_4$ for some $z_i\in Z$. Thus, $A$ is indecomposable if and only if $G$ equals the connected component of $1$ in the graph $\Gamma_A$, hence if $G=Z$, which is the group generated by the elements $k_i:=h_i^{-1}g_i$, $l_i:=a_i^{-1}b_i$ for $i=1,\ldots, n$. Thus, in order to obtain indecomposability, the coproducts can be brought into the form stated in (\ref{coproductform}); this is achieved by replacing the generators $v_i$ by $v_ih_i^{-1}$ and $f_i$ by $a_i^{-1}f_i$. The remaining statements follow directly from Proposition \ref{symmetricprop}. \end{proof} \begin{theorem}\label{mainclassificationthm} For an indecomposable Hopf algebra $A$ of weakly separable type as in Proposition \ref{pointedthm}, the commutator relations (\ref{commrel}) are of the form \begin{align}\label{commrel2} [f_i,v_j]&=\gamma_{ij}(k_j-l_i),&\forall 1\leq i, j\leq n, \end{align} where the $\gamma_{ij}$ are scalars in $k$ such that $\gamma_{ij}=0$ whenever $\lambda_i\neq \lambda_j$, in which case also $\langle f_i,v_j\rangle=0$, or whenever either of $l_i$ or $k_j$ is not central.
Conversely, any choice of such scalars gives a pointed Hopf algebra of this form. \end{theorem} \begin{proof} With the work done in Proposition \ref{pointedthm}, it remains to verify that the form of the commutator relation (\ref{commrel}) is as stated. Recall that in \cite[3.1]{BB}, the commutator relation is given by means of a quasi-coaction, that is, a morphism $\delta\colon V\to kG\otimes V$ satisfying (\ref{ydcond}) and (\ref{commrel}). Such a morphism has the general form \begin{align} \delta(v_j)=v_j^{[-1]}\otimes v_j^{[0]}&=\sum_{k,g}{\alpha_{k,g}^j g\otimes v_k},&\alpha_{k,g}^j\in k, \end{align} on the basis elements from (\ref{coproductform}). Then (\ref{eqn5}), which is required for $A$ to be a bialgebra, becomes \begin{align*} \sum_{k,g}{\alpha_{k,g}^j (g\otimes k_j+l_i\otimes g)\langle f_i,v_k \rangle} &=\sum_{k,g}{\alpha_{k,g}^j g\otimes g\langle f_i,v_k \rangle}, &\forall i,j. \end{align*} For each $i$, there exists $k$ such that $\langle f_i,v_k\rangle \neq 0$. For given $i$, we denote the set of indices such that $\langle f_i,v_k\rangle \neq 0$ by $I_i$. For such $k\in I_i$, we find that $\alpha_{k,g}^j=0$ for $g\neq k_j, l_i$, and $\alpha_{k,k_j}^j=-\alpha_{k,l_i}^j$. Thus, we obtain that $\delta$ is of the form \begin{equation}\label{qcoactionform} \delta(v_j)=v_j^{[-1]}\otimes v_j^{[0]}=\sum_{i=1}^n{\gamma_{ij} (k_j-l_i)\otimes v'_i}, \end{equation} where $\gamma_{ij}=\sum_{k\in I_i}{\alpha_{k,k_j}^j {\langle f_i,v_k\rangle\abs{I_i}}}$ and $\lbrace v'_i\rbrace$ is the dual basis of $V$ to $\lbrace f_i\rbrace$. Conversely, given arbitrary scalars $\gamma_{ij}$ for $i,j=1,\ldots,n$, we can define a quasi-coaction by the same formula (\ref{qcoactionform}). Then $\delta$ is YD-compatible with the given action of $G$ on $V$ if and only if (cf. condition (A) in \cite[Theorem~A]{BB}) \begin{align*} \gamma_{ij}\mu_i(g)(gk_j-gl_{i})=&g[f_i\triangleleft g,v_j]\stackrel{(\text{A})}{=}[f_i, g\triangleright v_j]g=\gamma_{ij}\lambda_j(g)(k_jg-l_ig).
\end{align*} This implies $\lambda_j=\mu_i$ whenever $\gamma_{ij}\neq 0$. Further, if $\gamma_{ij}\neq 0$ we need $l_i, k_j\in Z(G)$. These two requirements ensure that $\delta$ is YD-compatible. Further, by duality of the action, if $\langle f_i,v_j \rangle\neq 0$ then $\lambda_i=\mu_j$. Since, for each $i=1,\ldots, n$, $\langle f_i,v_j \rangle\neq 0$ for some $j$, we have that $\lambda_i=\mu_j$ for at least one $j$, and vice versa. Hence, the sets of characters and dual characters are in bijection. We can change the numbering and assume without loss of generality (recall that we are in the weakly separable case) that \begin{equation} \lambda_i=\mu_i. \end{equation} From now on, we will hence only use the notation $\lambda_i$. \end{proof} \begin{example} The most degenerate case, where all $\gamma_{ij}=0$, gives the Hopf algebra $(T(V)\otimes T(V^*))\rtimes kG$ where the tensor algebras are again computed in the category of YD-modules over $kG$. \end{example} \begin{remark} At this point, a comparison to \cite[2.4]{AS2} and \cite[4.3]{AS3} seems appropriate. The condition (\ref{commrel2}) is equivalent to the so-called \emph{linking relation} (\ref{asrel3}) after a change of generators $f_i\leftrightarrow l_i^{-1}f_i$, since in the form of Definition \ref{asform} all generators have coproducts $\Delta(v_i)=v_i\otimes 1+g_i\otimes v_i$. Such a change of generators causes the commutators $\operatorname{ad}=[~,~]$ to become braided commutators $\underline{\operatorname{ad}}= \operatorname{Id}_{V^{\otimes 2}}-\Psi$. The scalars $\lambda_{ij}$ satisfy the condition (d) in \ref{classificationsurvey}, where $\chi_i\chi_j\neq \varepsilon$ for the characters implies $\lambda_{ij}=0$. This is the analogue of our condition that $\lambda_i\neq \lambda_j$ implies $\gamma_{ij}=0$. The linking relations also appear in the quantum group characterization of \cite[Theorem 4.3]{AS3}.
Hence we can conclude that the classification in this section gives Hopf algebras with relations similar to those appearing in the work of Andruskiewitsch and Schneider. The outcome here is more restrictive, as in our setting relations of the form (\ref{asrel4}) cannot involve non-trivial elements in $kG$, and we also have a symmetry in the set $\chi$ of connected components due to the triangular decomposition. \end{remark} The situation where $\lbrace v_i\rbrace$ and $\lbrace f_i \rbrace$ are orthogonal bases deserves particular attention. In this case, $\gamma_{ij}=0$ for $i\neq j$. The following concept of separability ensures this. \begin{definition} Let $A$ have a triangular decomposition of weakly separable type over a group $G$. If the characters $\lambda_1, \ldots, \lambda_n$ are pairwise distinct, we will speak of a triangular decomposition of \emph{separable type}. If $A$ is of the form as in Theorem \ref{mainclassificationthm}, we say that $A$ is \emph{non-degenerate} if $\gamma_{ii}\neq 0$ for all $i=1,\ldots,n$ (this implies $k_i\neq l_i$). \end{definition} Note that both definitions --- separability and non-degeneracy --- cause the group $G$ to be abelian, and hence the braidings on $V$ and $V^*$ to be of diagonal type. Assuming non-degeneracy, we can adapt the terminology of \cite[5.5]{BB} that the braided doubles in this case come from \emph{mixed} YD-structures. A mixed YD-structure is a quasi-coaction $\delta$ that is a weighted sum $\sum{t_i\delta_i}$, where the $\delta_i$ are YD-module coactions compatible with the same action, and the $t_i$ are generic scalars. The quasi-YD-module in the theorem is the sum $\delta=\delta_r-(\delta_l^*)^*$, where $(\delta_l^*)^*$ is the YD-module given by $v_j\mapsto l_j\otimes v_j$, which is dual to $\delta_l^*$. We will see that in this case all the Hopf algebras arising are certain \emph{asymmetric} braided Drinfeld doubles (as defined in \ref{asymdrin}).
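In the separable non-degenerate case this mixed structure can be written out explicitly: specializing (\ref{qcoactionform}) to $\gamma_{ij}=\delta_{ij}\gamma_{ii}$ and identifying $v'_j$ with $v_j$ (possible in the separable case by Theorem \ref{mainclassificationthm}), and writing $\delta_r$ in the same convention $v_j\mapsto k_j\otimes v_j$, we obtain the following (a direct unwinding, recorded here for convenience):

```latex
% Unwinding the mixed quasi-coaction delta = delta_r - (delta_l^*)^*
% in the separable non-degenerate case (gamma_{ij} = delta_{ij} gamma_{ii}):
\delta(v_j)=\gamma_{jj}\,(k_j-l_j)\otimes v_j
           =\gamma_{jj}\bigl(k_j\otimes v_j\;-\;l_j\otimes v_j\bigr).
```

Both YD-summands are weighted by the same scalar $\gamma_{jj}$, matching the description of mixed YD-structures above.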
In the symmetric case, these algebras are in fact braided Drinfeld doubles. In particular, their appropriately defined module categories (resembling the Drinfeld center) are braided. \subsection{Interpretation as Asymmetric Braided Drinfeld Doubles}\label{quotientsection} Assume in this section that $A$ is non-degenerate of indecomposable separable type over $G$. So far, we have only classified \emph{free} braided doubles over $kG$. That is, as a $k$-vector space $A\cong T(V)\otimes kG\otimes T(V^*)$ via the multiplication map. To capture examples such as quantum groups, it is necessary to consider quotients of $A$ by triangular ideals $J=( I,I^*)$ such that $A/J\cong T(V)/I\otimes kG\otimes T(V^*)/{I^*}$ is still a Hopf algebra (and thus pointed). Here $I\triangleleft T^{>1}(V)$ and $I^*\triangleleft T^{>1}(V^*)$ are ideals and also coideals, and $J\in \mathcal{I}_{\Delta}(A)$. We will now refine our considerations from Section \ref{triangularhopfsect} to find for what ideals $I$ and $I^*$ this is the case. We will use the notation \begin{align} q_{ij}:=\lambda_j(k_i). \end{align} Then, by (\ref{characterrequirement}), we have that $\lambda_j(l_i)=q_{ji}^{-1}$, and the matrix $q=(q_{ij})$ describes the braiding on $V$ fully, i.e. it is of \emph{diagonal type}. The collection of triangular Hopf ideals $\mathcal{I}_\Delta(A)$ introduced in Section \ref{triangularhopfsect} can be described more concretely in the separable non-degenerate case: The ideals in $\mathcal{I}_\Delta(A)$ are of the form $J=I\otimes kG\otimes T(V^*)+T(V)\otimes kG\otimes I^*$ where $I$ is an ideal in the collection $\mathcal{I}_{(V,\delta_r)}$ for $V$ with the right coaction given by $\delta_r$, and $I^*$ is in $\mathcal{I}_{(V^*,\delta_l^*)}$ for the left dual coaction $\delta_l^*$ on $V^*$. This follows using \cite[Proposition 5.10]{BB} and the description of triangular Hopf ideals in Lemma \ref{ideallemma}. 
We use that by (\ref{characterrequirement}) the braiding $\Psi_r$ coming from $\delta_r$ and $\Psi_l$ from $(\delta_l^*)^*$ on $V$ are given by \begin{align} \Psi_r(v_i\otimes v_j)&=q_{ij}v_j\otimes v_i,&&\Psi_l(v_i\otimes v_j)&=q_{ji}^{-1}v_j\otimes v_i. \end{align} That is, $\Psi_l=\Psi_r^{-1}$, the inverse braiding. Thus, $I^*$ is just the dual $k$-vector space to $I$. \begin{example} In the quantum groups $A=U_q(\mathfrak{g})$, the braiding satisfies the symmetry $q_{ij}=q^{i\cdot j}=q^{j\cdot i}=q_{ji}$ as the Cartan datum is symmetric. This implies that the relations in $I$ are symmetric under reversing the order of tensors $v_1\otimes \ldots\otimes v_n\leftrightarrow v_n\otimes \ldots\otimes v_1$. This can be verified explicitly by observing that in $U_q(\mathfrak{g})$ the ideal $I$ is generated by $q$-Serre relations, which carry such a symmetry. \end{example} \begin{theorem}\label{drinfeldtheorem} Let $A$ be an indecomposable bialgebra with triangular decomposition of separable non-degenerate type over $G$. Then $A$ is an asymmetric braided Drinfeld double. \end{theorem} That is, all quotients by triangular Hopf ideals $J\in \mathcal{I}_\Delta(A)$ of algebras $A$ of separable non-degenerate type occurring in the classification of Theorem \ref{mainclassificationthm} are asymmetric braided Drinfeld doubles. If $J$ is maximal in $\mathcal{I}_\Delta(A)$, then $A/J\cong U_{kG}(V, V^*)$. \begin{proof} Recall that every Hopf algebra with triangular decomposition is the quotient of a free braided double by a triangular Hopf ideal. We saw that in the free separable case the commutator relations are of the form $[f_i,v_j]=\delta_{ij}\gamma_{ii}(k_i-l_i)$. This is precisely the form of the asymmetric braided Drinfeld double of $V$ with right YD-module structure given by the right grading, and $V^*$ with left YD-module structure given by the left dual grading. The pairing is given by $\langle f_i,v_j\rangle=\delta_{ij}\gamma_{ii}$ here.
We have to check that the braided Hopf algebras $T(V)$ and $T(V^*)$ of YD-modules over $G$ are dually paired when viewed in the category of left $kG$-modules. This however follows from condition (\ref{characterrequirement}). Taking the maximal quotient by a triangular ideal (or the left and right radical of the pairing) gives the asymmetric braided Drinfeld double $U_{kG}(V, V^*)$. \end{proof} If some of the parameters $\gamma_{ii}$ are zero, then the pointed Hopf algebras obtained are no longer asymmetric braided Drinfeld doubles (in the sense of Definition \ref{asymmetricdrinfelddef}). \subsection{Recovering a Lie Algebra}\label{liealgebrasection} We assume that $\operatorname{char} k=0$ in this section and study Hopf algebras with triangular decomposition of separable type which are of the form $U_{kG}(V, V^*)$ (see Theorem \ref{drinfeldtheorem}). The aim is to set the characters $\lambda_i$ and the group elements $k_i$, $l_i$ equal to 1. This way, we want to recover a Lie algebra $\fr{g}$ for any of the indecomposable pointed Hopf algebras of the form $U_{kG}(V, V^*)$, relating back to the question asked in the introduction of finding quantum groups for a given Lie algebra. The tool available for this is the Milnor--Moore theorem from \cite{MM} (see also \cite[Theorem 5.6.5]{Mon}) which shows that any cocommutative connected Hopf algebra is of the form $U(\fr{g})$ for a (possibly infinite-dimensional) Lie algebra $\fr{g}$. There are technical problems with this naive approach. To set the elements $q_{ij}$ --- which will be replaced by formal parameters --- equal to one, we need to give an appropriate integral form to prevent the modules from collapsing to zero. This rules out examples such as
$k[x]/(x^n)$ (and, more generally, the small quantum groups), which are braided Hopf algebras in the category of YD-modules over $k\mathbb{Z}$, as here a generator of the group acts by a primitive $n$th root of unity $q$ on $x$, and $\mathbb{Z}[q]\subset k$ is a cyclotomic ring. As a first step, we introduce appropriate integral forms of $U_{kG}(V, V^*)$, for which we need the square roots of $q_{ij}$. We consider the subring $Z:=\mathbb{Z}[q_{ij}^{\pm 1/2}]_{i,j}\subset k$ adjoining all square roots of the numbers $q_{ij}$ and their inverses. These will now be treated as formal parameters with certain relations between them, coming from the relations we have among them in $k$. \begin{remark} In this section, we assume that the ideal $\langle q_{ij}^{\pm 1/2}-1 \mid i,j=1,\ldots,n\rangle $ in $Z$ is a proper ideal, and hence $p\colon Z\to \mathbb{Z}$, $q_{ij}^{\pm 1/2}\mapsto 1$ is a well-defined morphism of rings. \end{remark} This assumption is crucial in the formal limiting process. For example, it rules out cases in which $q^n+q^{n-1}+\ldots+q+1=0$ as in cyclotomic rings. To produce an integral form, we replace a given YD-module $V$ over $kG$ of separable type as in the previous sections by a YD-module over $ZG$. For this, we can choose a $G$-homogeneous basis $v_1,\ldots,v_n$ and a homogeneous dual basis $f_1,\ldots,f_n$ such that (possibly after rescaling) \begin{align} \langle f_i, v_j\rangle&=\frac{1}{q_{ii}^{1/2}-q_{ii}^{-1/2}}\delta_{ij}, &\forall i,j. \end{align} An important observation is that the Woronowicz symmetrizers, which are used to compute the Nichols ideal $I_{\op{max}}(V)$, have coefficients in $Z$. Hence their kernels will be $Z$-modules. That is, for $V^{\op{int}}$ defined as $Z\langle v_1, \ldots, v_n\rangle$, which is a YD-module over the group ring $ZG$, the Woronowicz symmetrizer $\operatorname{Wor}^n_{\op{int}}\Psi$ is a $Z$-linear map $V^{\op{int}\otimes n}\to V^{\op{int}\otimes n}$.
Hence $I_{\op{max}}(V^{\op{int}}):=\ker \operatorname{Wor}_{\op{int}}\Psi$ is an ideal in $T(V^{\op{int}})$, the tensor algebra over $Z$. In order to provide an integral form of $U_{kG}(V, V^*)$, we change the presentation by introducing new commuting generators, namely $[f_i,v_i]=:t_i$. One verifies that the following commutator relations hold over $k$, as we are given the relation $t_i=\tfrac{1}{q_{ii}^{1/2}-q_{ii}^{-1/2}}(k_i-l_i)$ when working over the field: \begin{align} [f_i,t_j]&=\delta_{i,j}(q_{ii}^{1/2}k_if_i+q_{ii}^{-1/2}l_if_i),\label{tirel1}\\ [v_i,t_j]&=-\delta_{i,j}(q_{ii}^{-1/2}k_iv_i+q_{ii}^{1/2}l_iv_i).\label{tirel2} \end{align} \begin{definition} The \emph{integral form} $U_{ZG}(V^{\op{int}}, V^{\op{int}*})$ of $U_{kG}(V, V^*)$ is defined as the graded Hopf algebra over the ring $Z$ generated by $v_1,\ldots, v_n$ of degree $1$, $f_1,\ldots, f_n$ of degree $-1$, the group elements $k_1, \ldots, k_n, l_1, \ldots, l_n\in G$, and additional elements $t_1, \ldots, t_n$ of degree $0$, subject to the relations of $I_{\op{max}}(V^{\op{int}})$ and $I_{\op{max}}^*(V^{\op{int}})$, the relations (\ref{tirel1}), (\ref{tirel2}), as well as the bosonization and commutation relations \begin{align} gv_i&=(g\triangleright v_i)g,\qquad f_ig=g(f_i\triangleleft g),\label{intrel1}\\ q_{ii}^{1/2}(k_i-l_i)&=(q_{ii}-1)t_i,\label{intrel2}\\ [f_i, v_j]&=\delta_{i,j}t_i,\label{intrel3}\\ [t_i,t_j]&=0. \end{align} The coproducts are given as before on the generators $f_i,v_i,k_i,l_i$ and $\Delta(t_i)=t_i\otimes k_i+l_i\otimes t_i$. \end{definition} Note that as $A=U_{ZG}(V^{\op{int}}, V^{\op{int}*})$ is a Hopf algebra over the commutative ring $Z$, the coproduct is a map $A\to A\otimes_Z A$. For the quantum groups $U_q(\fr{g})$ at generic parameter, the integral form in this case is the so-called \emph{non-restricted} integral form (see e.g. \cite[9.2]{CP}) which goes back to De Concini--Kac \cite{DK}.
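The scalar bookkeeping behind the integral form can be checked mechanically. The following sketch (using sympy; all variable names are ours) verifies that (\ref{intrel2}) is the relation $t_i=\tfrac{1}{q_{ii}^{1/2}-q_{ii}^{-1/2}}(k_i-l_i)$ multiplied through by $q_{ii}^{1/2}$, and that the coefficients in (\ref{tirel1}) tend to $2$ as $q_{ii}\to 1$, in line with the relation $[f_i,t_i]=2f_i$ obtained in the classical limit below.

```python
import sympy as sp

# q stands for q_{ii}, treated as a positive formal parameter (our notation).
q = sp.symbols('q', positive=True)

# Over the field, t = (k - l) / (q^(1/2) - q^(-1/2)).  Multiplying through
# by q^(1/2) should give relation (intrel2): q^(1/2) (k - l) = (q - 1) t.
# Scalar check: q^(1/2) * (q^(1/2) - q^(-1/2)) == q - 1.
assert sp.simplify(sp.sqrt(q) * (sp.sqrt(q) - 1/sp.sqrt(q)) - (q - 1)) == 0

# In the classical limit q -> 1 (with k, l specialized to 1), the
# coefficients q^(1/2) and q^(-1/2) in (tirel1) add up to 2.
assert sp.limit(sp.sqrt(q) + 1/sp.sqrt(q), q, 1) == 2

print("integral-form scalar identities verified")
```

This is only a check on the scalar coefficients; the noncommutative relations themselves are imposed in the definition above.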
To set the parameters equal to one, and to consider extensions of Hopf algebras to fields, we use the following lemma: \begin{lemma}\label{hopflemma2} Let $\phi \colon R\to S$ be a morphism of commutative algebras. We denote the category of Hopf algebras over $R$ by $\mathbf{Hopf}_{R}$. Then base change along $\phi$ induces a functor \begin{align*} \mathbf{Hopf}_{\phi}\colon \mathbf{Hopf}_{R}&\longrightarrow \mathbf{Hopf}_{S},&A&\longmapsto A\otimes_RS. \end{align*} \end{lemma} \begin{proof} Given a Hopf algebra $A$ which is an $R$-algebra, i.e. there is a morphism $R\to A$, we induce the multiplication and comultiplication on $A\otimes_RS$ using the isomorphism \[ (A\otimes_RS)\otimes_S(A\otimes_RS)\cong (A\otimes_R A)\otimes_RS. \] It is easy to check that the Hopf algebra axioms are preserved under base change. \end{proof} \begin{proposition} There is an isomorphism of graded Hopf algebras \[ U_{ZG}(V^{\op{int}}, V^{\op{int}*})\otimes_Zk\stackrel{\sim}{\longrightarrow} U_{kG}(V, V^*). \] \end{proposition} \begin{proof} Recall that $Z\subset k$ by construction. Extending to $k$, we are able to divide by $q_{ii}-1$ in (\ref{intrel2}), and recover the original commutator and bosonization relations in $U_{kG}(V, V^*)$. It remains to verify that \[ I_{\op{max}}(V^{\op{int}})\otimes_Zk=\ker \operatorname{Wor}_{\op{int}}\Psi\otimes_Z k=\ker \operatorname{Wor} \Psi=I_{\op{max}}(V). \] This follows by noting that $k$ is flat as a $Z$-module (since the field of fractions $K(Z)$ is flat over $Z$ as a localization, and $k$ is free over $K(Z)$), and $V^{\op{int}}\otimes_Zk\cong V$ as $k$-vector spaces.
\end{proof} \begin{definition} We define the \emph{classical limit} of $U_{kG}(V, V^*)$ as the algebra \[ U_k^{\op{cl}}(V, V^*):=\bigslant{(U_{ZG}(V^{\op{int}}, V^{\op{int}*})\otimes_Z\mathbb{Z})\otimes_\mathbb{Z} k}{( \ker \varepsilon_G)}, \] using the morphism $p\colon Z\to \mathbb{Z}$ mapping all $q_{ij}^{\pm 1/2}$ to $1$, and the two-sided ideal $( \ker \varepsilon_G)$ generated by the kernel of the augmentation map $\varepsilon_G\colon kG\to k$ mapping all group elements to $1$. Note that this ideal is a Hopf ideal. \end{definition} That is, to obtain the classical limit, we first set the parameters $q_{ij}^{\pm 1/2}$ equal to 1 in the integral form, then extend the resulting $\mathbb{Z}$-module to a $k$-vector space, and finally set the group elements equal to 1 along the counit $\varepsilon_G\colon kG\to k$. We obtain a primitively generated Hopf algebra, and hence a Lie algebra, this way: \begin{proposition} The classical limit $U_k^{\op{cl}}(V, V^*)$ is a connected Hopf algebra, generated by primitive elements. Hence, for the Lie algebra $\fr{p}_V$ of primitive elements, $U(\fr{p}_V)=U_k^{\op{cl}}(V, V^*)$. This algebra is generated by triples $f_i,v_i,t_i$, each of which generates a subalgebra isomorphic to $U(\fr{sl}_2)$. \end{proposition} \begin{proof} Lemma \ref{hopflemma2} ensures that $U_k^{\op{cl}}(V, V^*)$ is a Hopf algebra over $k$, and freeness of $V^{\op{int}}$ over $Z$ ensures that the positive and negative parts do not collapse to the zero space. In particular, the $k$-vector space $V^{\op{int}}\oplus V^{\op{int}*}$ embeds into the Lie algebra $\fr{p}_V$ of primitive elements. In the classical limit, we obtain the relations \begin{align} [f_i,v_j]&=\delta_{i,j}t_i, &[f_i,t_j]&=2\delta_{i,j}f_i, &[v_i,t_j]&=-2\delta_{i,j}v_i. \end{align} Hence every triple $f_i, v_i, t_i$ generates a Lie subalgebra of $\fr{p}_V$ isomorphic to $\fr{sl}_2$.
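Explicitly, one convenient matching with a standard basis $e,f,h$ of $\fr{sl}_2$ (the particular choice of signs below is one possibility, spelled out here as a routine check) is $e:=f_i$, $f:=-v_i$, $h:=-t_i$, for which the relations above give

```latex
[h,e]=[f_i,t_i]=2f_i=2e,\qquad
[h,f]=[t_i,v_i]=2v_i=-2f,\qquad
[e,f]=-[f_i,v_i]=-t_i=h,
```

which are the defining relations of $\fr{sl}_2$.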
Note that $U_{k}^{\op{cl}}(V, V^*)$ is generated by the $t_i$ and the primitive elements: \begin{align*} \Delta(f_i)=&f_i\otimes 1+1\otimes f_i,&\Delta(v_i)&=v_i\otimes 1+1\otimes v_i. \end{align*} We also compute \[ \Delta(t_i)=\Delta([f_i,v_i])=[f_i,v_i]\otimes k_i+l_i\otimes[f_i,v_i]=t_i\otimes k_i+l_i\otimes t_i. \] Hence, $t_i$ is skew-primitive in $U_{ZG}(V^{\op{int}}, V^{\op{int}*})$ and primitive in the classical limit. Thus, $U^{\op{cl}}_{k}(V,V^*)$ is a pointed Hopf algebra over the trivial group, that is, a \emph{connected} pointed Hopf algebra. It is further cocommutative, and Theorem 5.6.5 in \cite{Mon} implies that such a Hopf algebra is of the form $U(\fr{g})$, where $\fr{g}=\fr{p}_V$, when $\operatorname{char}k=0$. \end{proof} Note that $U_{k}^{\op{cl}}(V, V^*)$ is a braided double over the polynomial ring $S(T)$, where $T=k\langle t_1,\ldots,t_n \rangle$ (which is not necessarily $n$-dimensional). The action is given by $t_j\triangleright v_i=2\delta_{i,j}v_i$, and the quasi-coaction is given by $\delta(v_i)=t_i\otimes v_i$, which is \emph{not} a coaction; hence $U_{ZG}(V^{\op{int}}, V^{\op{int}*})$ is \emph{not} a braided Heisenberg double. It is also not an asymmetric braided Drinfeld double. \begin{example} For $U_q(\fr{g})$, $\fr{g}$ a semisimple Lie algebra, viewed as a braided Drinfeld double, the classical limit is $U(\fr{g})$. \end{example} We can also compute examples that do not give finite-dimensional semisimple Lie algebras. As a general rule, the relations between the parameters $q_{ij}$ determine the relations in the Lie algebra. It is easy to construct free examples, in which there are no relations between $v_1,\ldots, v_n$, by choosing algebraically independent parameters $q_{ij}$. The results of \cite{Ros} and \cite{AS3} give restrictions on examples satisfying the growth condition of finite Gelfand--Kirillov dimension. We will view their results in the setting of this paper in Section \ref{section3}.
\section{Classes of Quantum Groups}\label{multiparametersection} In this section, we relate the classification from Section~\ref{section2} to various classes of examples which are often regarded as quantum groups. This includes the multiparameter quantum groups studied in \cite{FRT, Res, Sud,AST} and by others (Section \ref{quantumgroupsmulti}), a characterization of Drinfeld--Jimbo quantum groups (Section \ref{section3}), and classes of examples of pointed Hopf algebras from the work of Radford (Section \ref{radford}). The classification in Theorem \ref{mainclassificationthm} points out natural generalizations of these classes of examples\footnote{While this paper was under revision, it was pointed out by Dr Gast\'on Andr\'es Garc\'ia that a further series of examples of asymmetric braided Drinfeld doubles is given in \cite[Definition~7]{HPR} and described in \cite{Gar} using a family of pointed Hopf algebras defined in \cite{ARS}.}. We finally sketch how one can define analogues of quantum groups using triangular decompositions over Hopf algebras other than $kG$. \subsection{Multiparameter Quantum Groups}\label{quantumgroupsmulti} Let $k$ be a field of characteristic zero. For the purpose of this section, let $\lambda \in k$ be generic, and $p_{ij}\in k$ for $1\leq i<j\leq n$. Assume that $p_{ii}=1$ and $p_{ji}=p_{ij}^{-1}$. Following \cite{AST,CM} and to fix notation, we set \begin{align*} &\kappa_j^{(i)}=\begin{cases}p_{ij},& \text{if }i<j,\\ \lambda, & \text{if } i=j,\\ \tfrac{\lambda}{p_{ji}}, & \text{if }i>j.\end{cases}&&\lambda_j^{(i)}=\begin{cases}\tfrac{\lambda}{p_{ij}},& \text{if }i<j,\\ \lambda, & \text{if } i=j,\\ p_{ji}, & \text{if }i>j. \end{cases} \end{align*} We will provide a variation of the presentation of \cite{AST,CM} in order to display the multiparameter quantum group $U_{\lambda,\underline{p}}(\mathfrak{gl}_n)$ as a Hopf algebra with triangular decomposition.
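For orientation, in the smallest case $n=2$ (a single generator $f_1$) these data can be written out completely; the following computation is an illustration only:

```latex
\kappa_1^{(1)}=\lambda,\quad \kappa_2^{(1)}=p_{12},\quad
\kappa_1^{(2)}=\tfrac{\lambda}{p_{12}},\quad \kappa_2^{(2)}=\lambda,
\qquad
\lambda_1^{(1)}=\lambda,\quad \lambda_2^{(1)}=\tfrac{\lambda}{p_{12}},\quad
\lambda_1^{(2)}=p_{12},\quad \lambda_2^{(2)}=\lambda.
```

With the action formulas of the example below, this gives $\lambda_1(k_1)=\frac{\lambda_{2}^{(1)}\lambda_{1}^{(2)}}{\lambda_{1}^{(1)}\lambda_{2}^{(2)}}=\lambda^{-1}$ and $\lambda_1(l_1)=\frac{\kappa_1^{(1)}\kappa_2^{(2)}}{\kappa_2^{(1)}\kappa_1^{(2)}}=\lambda$, so the compatibility $q_{11}=\lambda_1(k_1)=\lambda_1(l_1)^{-1}$ holds, and the braiding parameter is independent of $p_{12}$.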
\begin{example}[Multiparameter quantum groups] We define on $F=k \langle f_1, \ldots, f_{n-1}\rangle$ a YD-module structure over an abelian group $G$ with generators $k_1, \ldots, k_{n-1}$, $l_1, \ldots, l_{n-1}$. Denote the dual by $E=k\langle e_1,\ldots, e_{n-1}\rangle$, where the pairing is given by $\langle e_i,f_j\rangle=(1-\lambda)\delta_{ij}$. The YD-structure is of separable type, and given by assigning the right degree $k_i$ to $f_i$, and the left degree $l_i$ to $e_i$, and actions \begin{align} k_i\triangleright f_j&=\lambda_j(k_i)f_j=\frac{\lambda_{j+1}^{(i)}\lambda_{j}^{(i+1)}}{\lambda_{j}^{(i)}\lambda_{j+1}^{(i+1)}}f_j,\\ l_i\triangleright f_j&=\lambda_j(l_i)f_j=\frac{\kappa_j^{(i)}\kappa_{j+1}^{(i+1)}}{\kappa_{j+1}^{(i)}\kappa_j^{(i+1)}}f_j, \end{align} for $i,j=1,\ldots, n-1$. We will relate the \emph{multiparameter quantum group} $U_{\lambda,\underline{p}}(\mathfrak{gl}_n)$ to the asymmetric braided Drinfeld double $U_{kG}(F,E)$. \end{example} Note that the definition of $U_{kG}(F,E)$ is possible as (\ref{characterrequirement}) holds, i.e. \begin{align*} q_{ij}:=\lambda_j(k_i)=\frac{\lambda_{j+1}^{(i)}\lambda_{j}^{(i+1)}}{\lambda_{j}^{(i)}\lambda_{j+1}^{(i+1)}}=\frac{\kappa_{i+1}^{(j)}\kappa_i^{(j+1)}}{\kappa_i^{(j)}\kappa_{i+1}^{(j+1)}}=\lambda_i(l_j)^{-1}. \end{align*} The commutator relation in $U_{kG}(F,E)$ is given by \begin{equation} [e_i,f_j]=(1-\lambda)\delta_{ij}(k_i-l_i). \end{equation} The following isomorphism displays $U_{kG}(F,E)$ as an indecomposable subalgebra of a multiparameter quantum group considered in the literature: \begin{proposition} There is an isomorphism of Hopf algebras $U_{kG}(F,E)\cong U'$ where $U'$ is a Hopf subalgebra of the multiparameter quantum group $U=U_{\lambda,\underline{p}}(\fr{gl}_n)$. \end{proposition} \begin{proof} We prove the proposition by first considering the morphism \[ \phi\colon T(E)\otimes kG\otimes T(F)\longrightarrow U.
\] Such a morphism will descend to an injective morphism $\overline{\phi}\colon U_{kG}(F,E)\to U$ by Lemma \ref{qserrelemma} below. We further note that the image $\op{Im}{\overline{\phi}}=:U'$ is a Hopf subalgebra isomorphic to $U_{kG}(F,E)$. Denote the generators of $U$ by $E_i,F_i$ for $i=1,\ldots,n-1$ and group elements $K_i,L_i$ for $i=1,\ldots, n$ (see \cite[4.8]{CM}). The map $\phi$ is defined by $\phi(e_i)=\lambda E_iK^{-1}_{i+1}K_i$, $\phi(f_i)=F_i$, $\phi(k_i)=L_{i+1}L_i^{-1}$, and $\phi(l_i)=K_{i+1}^{-1}K_i$. One checks directly that the relations in the free braided double $T(E)\otimes kG\otimes T(F)$ are preserved under this map, using the presentation in \cite[4.8]{CM} for $U$. \end{proof} \begin{lemma}\label{qserrelemma} The largest ideal in $\mathcal{I}_\Delta(A)$ for $A=U_{kG}(F,E)$ is generated by the quantum Serre relations \begin{align} \underline{\operatorname{ad}}(e_i)^{1-a_{ij}}(e_j)=\underline{\operatorname{ad}}(f_i)^{1-a_{ij}}(f_j)=0, \end{align} where $\underline{\operatorname{ad}}(e_i)(e_j)=e_ie_j-q_{ij}e_je_i$. \end{lemma} \begin{proof} It follows from Lemma \ref{ideallemma} that the maximal ideal $J$ in $\mathcal{I}_\Delta(A)$ is given by $J=( I,I^*)$ where $I$ is the Nichols ideal of the YD-module $F$. Generation of the maximal triangular ideal by quantum Serre relations for $U_{\lambda,\underline{p}}(\fr{gl}_n)$ follows from Lemma 4.5 in \cite{CM}. For this, it is crucial that $\lambda$ is not a root of unity. The proof uses the observation of \cite{Res} (or of \cite{AST} for the deformed function algebra) that multiparameter quantum groups can be obtained from one-parameter quantum groups via 2-cocycles on the quantum coordinate rings. The fact that the quantum Serre relations generate the Nichols ideal then follows from Theorem 4.4 in \cite{CM}, where it is shown that these relations generate the radical of a Hopf pairing.
Using the map $\phi$, this result describes the Nichols ideals of $T(F)$, $T(E)$ as generated by quantum Serre relations. \end{proof} The result that the multiparameter quantum group $U_{\lambda,\underline{p}}(\mathfrak{gl}_n)$ is the asymmetric braided Drinfeld double $U_{kG}(F,E)$ can be seen as a generalization of the result in \cite{BW}, where the two-parameter quantum groups were shown to be Drinfeld doubles. \subsection{Characterization of Drinfeld--Jimbo Quantum Groups}\label{section3} Let $\op{char} k=0$ in this section. In Section~\ref{section2} we observed that for an algebra $A$ with symmetric triangular decomposition of separable type to be an indecomposable pointed Hopf algebra, $G(A)$ needs to be abelian, acting on $V$ by scalars. That means, in the terminology of \cite{AS}, that the YD-braiding $\Psi(v\otimes w)=v^{(-1)}\triangleright w\otimes v^{(0)}$ is of \emph{diagonal type}, i.e. there exist non-zero scalars $q_{ij}$ such that $\Psi(v_i\otimes v_j)=q_{ij}v_j\otimes v_i$ for a basis $\{v_1,\ldots,v_n\}$. We fix a choice of YD-module structure over an abelian group $G$ for this section to describe the diagonal braiding. That is, $q_{ij}=\lambda_j(k_i)$ for the characters $\lambda_i$ by which $G$ acts on $kv_i$ and group elements $k_i$ such that $\delta(v_i)=k_i\otimes v_i$. It is a basic observation that the braided Hopf algebras $T(V)/I$ for $I\in \mathcal{I}_V$, including the Nichols algebras for $V$, only depend on the braiding on $V$ (rather than the concrete choice of $\lambda_i$, $k_i$). However, different diagonal braidings $(V, \Psi)$ and $(V, \Psi')$ may give isomorphic braided Hopf algebras $T(V)/I$. Such isomorphisms can be obtained using the notion of \emph{twist equivalence} for diagonal braidings (which is a special case of the more general concept of twisting a Hopf algebra by a 2-cocycle).
\begin{definition} Two braided $k$-vector spaces of diagonal type $(V,\Psi)$, $(V',\Psi')$ (given by scalars $q_{ij}$, $q_{ij}'$) are \emph{twist equivalent} if $V\cong V'$, $q_{ii}=q_{ii}'$, and $q_{ij}q_{ji}=q_{ij}'q_{ji}'$. \end{definition} \begin{lemma}\label{twistlemma} If $(V,\Psi)$, $(V',\Psi')$ are twist equivalent of diagonal type, then $T(V)\cong T(V')$ as braided Hopf algebras in the category of braided $k$-vector spaces, preserving the natural grading. \end{lemma} \begin{proof} For a proof, see e.g. \cite[3.9--3.10]{AS}. We can find generators $v_i$ of $V$ and $v_i'$ of $V'$ such that the isomorphism $\phi$ is determined by $v_i\mapsto v_i'$. Defining a 2-cocycle $\sigma$ by $\sigma(v_i\otimes v_j)=q_{ij}'q_{ij}^{-1}$ for $i<j$ and $1$ otherwise, we find that the product $v_iv_j$ maps to the product twisted by $\sigma$. Note that the isomorphism is \emph{not} an isomorphism in the category of YD-modules over $kG$ unless $(V',\Psi')=(V,\Psi)$. \end{proof} For an ideal $I\in \mathcal{I}_V$, denote the corresponding ideal under the isomorphism $T(V)\cong T(V')$ from Lemma~\ref{twistlemma} by $I'$. Then we conclude that $T(V)/I\cong T(V')/{I'}$ is also an isomorphism of braided Hopf algebras. In particular, $\mathcal{B}(V)\cong \mathcal{B}(V')$ for the corresponding Nichols algebras. \begin{lemma}\label{twistdrinfelddoubles} If $(V,\Psi)$ and $(V',\Psi')$ are twist equivalent, such that \[ G=\langle k_1,\ldots, k_n \rangle\cong\langle k_1',\ldots, k_n' \rangle=G' \] via $k_i\mapsto k_i'$, then $U_{kG}(V, V^*)\cong U_{kG'}(V', {V'}^{*})$ as Hopf algebras. \end{lemma} \begin{proof} By Lemma~\ref{twistlemma}, $T(V)/I\cong T(V')/{I'}$ and $T(V^*)/{I^*}\cong T({V'}^{*})/{{I'}^{*}}$. By the assumptions on the group generators, $k_i\mapsto k_i'$ extends to an isomorphism $kG\cong kG'$. Thus we can define a morphism $U_{kG}(V, V^*)\to U_{kG'}(V', V'^{*})$ which is an isomorphism of $k$-vector spaces.
Further, preservation of the bosonization condition can be checked on generators using the isomorphism $\phi$ from Lemma~\ref{twistlemma}. Finally, the commutator relation (\ref{commrel2}) is preserved using the isomorphism on $kG$. \end{proof} Diagonal braidings are a very general class of braidings. Quantized enveloping algebras at generic parameters, however, are based on braidings of a specific type, called \emph{Drinfeld--Jimbo type}. Following \cite{AS3}, there are different classes of braidings which we distinguish: \begin{definition}[{\cite[Definition~1.1]{AS3}}] Let $(q_{ij})$ be the $n\times n$-matrix of a braiding of diagonal type. \begin{enumerate} \item[(a)] The braiding given by $(q_{ij})$ is \emph{generic} if $q_{ii}$ is not a root of unity for any $i=1,\ldots,n$. \item[(b)] In the case $k=\mathbb{C}$ we say the braiding $(q_{ij})$ is \emph{positive} if it is generic and all diagonal elements $q_{ii}$ are positive real numbers. \item[(c)] The braiding $(q_{ij})$ is of \emph{Cartan type} if $q_{ii}\neq 1$ for all $i$ and there exists a $\mathbb{Z}$-valued $n\times n$-matrix $(a_{ij})$ with values $a_{ii}=2$ on the diagonal and $0\leq -a_{ij}<\operatorname{ord} q_{ii}$ for $i\neq j$, such that \begin{equation} q_{ij}q_{ji}=q_{ii}^{a_{ij}}\qquad \text{ for all }i,j. \end{equation} This implies that $(a_{ij})$ is a generalized Cartan matrix which may have several connected components. We denote the collection of these by $\chi$. \item[(d)] The braiding $(q_{ij})$ is of \emph{Drinfeld--Jimbo type (DJ-type)} if it is of Cartan type and there exist positive integers $d_1,\ldots, d_n$ such that for all $i,j$, $d_i a_{ij}=d_j a_{ji}$ (hence the matrix $(a_{ij})$ is symmetrizable), and for any $J\in \chi$, there exists a scalar $q_J\neq 0$ in $k$ such that $q_{ij}=q_J^{d_ia_{ij}}$ for any $i\in J$, and $j=1,\ldots, n$. \end{enumerate} \end{definition} Some observations can be made about the Nichols algebras associated to braid\-ed vector spaces of DJ-type.
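Before turning to these observations, here is a concrete check of (c) and (d) with the standard one-parameter data of type $A_2$ (this worked example is an illustration): take $a_{11}=a_{22}=2$, $a_{12}=a_{21}=-1$, $d_1=d_2=1$, one connected component $J$, and $q_J=q$. Then

```latex
q_{ij}=q^{d_ia_{ij}}:\qquad
q_{11}=q_{22}=q^{2},\qquad q_{12}=q_{21}=q^{-1},
\qquad
q_{12}q_{21}=q^{-2}=\bigl(q^{2}\bigr)^{-1}=q_{11}^{\,a_{12}}.
```

Replacing $q_{12}$ by any $r\neq 0$ and $q_{21}$ by $q^{-2}r^{-1}$ leaves the diagonal entries and the products $q_{ij}q_{ji}$ unchanged, so the resulting braiding is twist equivalent and still of Cartan type, but it is itself of DJ-type only for $r=q^{-1}$.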
First, observe that for a braiding of Cartan type with connected components $I_1,\ldots,I_t\in \chi$, we have that $\mathcal{B}(V)$ is the braided tensor product $\mathcal{B}(V_{I_1})\otimes \ldots\otimes \mathcal{B}(V_{I_t})$ (\cite[Lemma 4.2]{AS4}). Further, for $V$ with braiding $(q_{ij})$ of DJ-type where the $q_{ii}$ are generic, the Nichols algebra can be computed explicitly via the quantum Serre relations (\cite[Theorem 15]{Ros}): \[ \mathcal{B}(V)=k\langle x_1,\ldots,x_n \mid \un{\operatorname{ad}}(x_i)^{1-a_{ij}}(x_j)=0, \forall i\neq j\rangle. \] We now bring the growth condition of finite \emph{Gelfand--Kirillov dimension} (GK-dimension) into the picture, using the characterization results of \cite{Ros} for Nichols algebras with this property. \begin{lemma}[\cite{Ros}]\label{rossolemma} Let $k=\mathbb{C}$ and let $(q_{ij})$ be the matrix of a braiding of diagonal type which is \emph{generic} such that the Nichols algebra $\mathcal{B}(V)$ has finite Gelfand--Kirillov dimension. Then $(q_{ij})$ is of Cartan type. Moreover, a positive braiding is twist equivalent to a braiding of DJ-type with finite Cartan matrix if and only if the GK-dimension of $\mathcal{B}(V)$ is finite. \end{lemma} \begin{proof} See \cite{AS3}, Corollary 2.12 and Theorem 2.13. \end{proof} \begin{corollary} Let $A=U_{\mathbb{C} G}(V,V^*)$, for $V$ of separable type, with generic positive braiding $(q_{ij})$. Then the following are equivalent: \begin{itemize} \item[(i)] $A\cong U_q(\fr{g})$ for $\fr{g}$ a semisimple Lie algebra. \item[(ii)] The braided $\mathbb{C}$-vector space $V$ with braiding $(q_{ij})$ is twist equivalent to a braiding of DJ-type with Cartan matrix of finite type. \item[(iii)] $\mathcal{B}(V)$ has finite Gelfand--Kirillov dimension. \item[(iv)] $A$ has finite Gelfand--Kirillov dimension. \end{itemize} \end{corollary} \begin{proof} The equivalence of (ii) and (iii) is the statement of Lemma \ref{rossolemma} due to \cite{Ros}.
Using Lemma \ref{twistdrinfelddoubles} we find that (ii) implies (i), while it is clear that (i) implies (ii). In fact, the GK-dimension of $\mathcal{B}(V)$ for $V$ of DJ-type equals the number of positive roots \cite[2.10(ii)]{AS3}. Further, we observed that $A$ is of the form $U(\mathcal{D})$ in \cite[Theorem 4.3]{AS3} in Theorem \ref{drinfeldtheorem} provided that $V$ has finite Cartan type. This observation (together with Lemma~\ref{twistdrinfelddoubles}) gives that (ii) is equivalent to (iv) using Theorem 5.2 in \cite{AS3}. \end{proof} \begin{corollary} The only indecomposable bialgebras with a symmetric triangular decomposition on $\mathcal{B}(V)\otimes k\mathbb{Z}^n\otimes \mathcal{B}(V^*)$ of separable type, such that $V=\mathbb{C}\langle v_1,\ldots,v_n\rangle$ is of positive diagonal type, and that no $v_i$ commutes with all of $V^*$ are isomorphic to $U_q(\mathfrak{g})$ for a semisimple Lie algebra $\mathfrak{g}$. \end{corollary} \begin{proof} This follows from the classification in Theorem \ref{mainclassificationthm}, combined with the results of Rosso. The Lie algebra $\fr{g}$ is determined by the Cartan matrix one obtains under twist equivalence in Lemma \ref{rossolemma}. The technical condition that no $v_i$ commutes with all of $V^*$ ensures that $[f_i,v_i]\neq 0$ for a dual basis $f_1,\ldots,f_n$ of $V^*$, resembling the non-degeneracy condition that the scalars $\gamma_{ii}\neq 0$ in Theorem \ref{drinfeldtheorem}. \end{proof} This is a characterization for quantum groups at generic parameters. The work surveyed in \cite{AS,AS2} on finite-dimensional pointed Hopf algebras can be viewed as a characterization of small quantum groups. The triangular decomposition can be interpreted as the case where the graph $\Gamma$ described in \ref{classificationsurvey} has two connected components, such that the corresponding generators for the two components give dually paired braided Hopf algebras. 
The characterization suggests that if we are looking for examples outside of DJ-type, we have to consider braidings of generic Cartan type which are not positive. In fact, \cite[2.6]{AS3} gives an example that is generic of Cartan type, but not of DJ-type. We compute the associated double here: \begin{example} Let $G=\langle k_1,k_2\rangle\cong C_\infty\times C_\infty$ be a free abelian group with two generators. We define a two-dimensional YD-module $V$ over $G$ on generators $v_1$ of degree $k_1$, $v_2$ of degree $k_2$ via \begin{align*} k_1\triangleright v_1&=q v_1,&k_1\triangleright v_2&=q^{-1}v_2,&k_2\triangleright v_1&=q^{-1}v_1,&k_2\triangleright v_2&=-qv_2. \end{align*} Lemma 2.1 in \cite{AS3} shows that \[ \mathcal{B}(V)=\langle v_1,v_2\mid \un{\operatorname{ad}}(v_1)^3(v_2)=\un{\operatorname{ad}}(v_2)^3(v_1)=0\rangle. \] The asymmetric braided Drinfeld double $U_{\mathbb{C} G}(V,V^*)$ is in fact a braided Drinfeld double if we define $V^*$ to be the dual YD-module. It is the Hopf algebra given on $\mathcal{B}(V)\otimes \mathbb{C} G\otimes \mathcal{B}(V^*)$, subject to the relations \begin{align*} [f_1,v_i]&=\delta_{1,i}\frac{k_1-k_1^{-1}}{q^{1/2}-q^{-1/2}}, &[f_2,v_i]&=\delta_{2,i}\frac{k_2-k_2^{-1}}{iq^{1/2}+iq^{-1/2}},&\\ k_1v_2&=q^{-1}v_2k_1,&k_2v_1&=q^{-1}v_1k_2,\\ k_1v_1&=qv_1k_1,&k_2v_2&=-qv_2k_2,\\ k_1f_2&=qf_2k_1,&k_2f_1&=qf_1k_2,\\ k_1f_1&=q^{-1}f_1k_1,&k_2f_2&=-q^{-1}f_2k_2, \end{align*} and with coproducts \begin{align*} \Delta(v_i)&=v_i\otimes k_i+1\otimes v_i,& \Delta(f_i)&=f_i\otimes 1+k_i^{-1}\otimes f_i. \end{align*} \end{example} Apart from such examples, we can also include examples where free and nilpotent generators are combined, hence capturing features of both small and generic quantum groups. Here is such an example of small rank: \begin{example} Let $G=C_\infty\times C_p=\langle g_{\infty}\rangle \times\langle g_p\rangle$ be the product of an infinite cyclic group and one of order $p$.
We define a 2-dimensional YD-module over $G$ on $\mathbb{C} v_\infty\oplus \mathbb{C} v_p$, where $v_\infty$ has degree $g_\infty$, and $v_p$ has degree $g_p$. The group action is given by \begin{align*} g_p\triangleright v_p&=\xi_pv_p, &g_p\triangleright v_\infty&=\eta_p v_\infty,\\ g_\infty\triangleright v_p&=\eta_p^{-1}v_p, &g_\infty \triangleright v_\infty&=\eta_\infty v_\infty, \end{align*} where scalars with a subscript $p$ are primitive $p$th roots of unity, and $\eta_\infty$ is generic. We can now compute the Nichols algebra with generators $v_p$ and $v_\infty$. It is given by \[ \mathcal{B}(V)=\mathbb{C}\langle v_p,v_\infty \rangle /{(v_p^p, v_pv_\infty-\eta_pv_\infty v_p )}. \] We denote the dual YD-module by $V^*$ with generators $f_p$, $f_\infty$. The braided Drinfeld double on $\mathcal{B}(V)\otimes k(C_p\times C_\infty)\otimes \mathcal{B}(V^*)$ of the braided Hopf algebra $\mathcal{B}(V)$ is a quantum group that combines both $u_q(\fr{sl}_2)$ and $U_q(\fr{sl}_2)$: \begin{align*} [f_p,v_i]&=\delta_{i,p}\frac{g_p-g_p^{-1}}{\xi_p^{1/2}-\xi_p^{-1/2}}, &[f_\infty,v_i]&=\delta_{\infty,i}\frac{g_\infty-g_\infty^{-1}}{\eta_\infty^{1/2}-\eta_\infty^{-1/2}},\\ g_pv_p&=\xi_pv_pg_p,&g_pv_\infty&=\eta_p v_\infty g_p,\\ g_\infty v_p&=\eta_p^{-1} v_pg_\infty,&g_\infty v_\infty&=\eta_\infty v_\infty g_\infty,\\ g_pf_p&=\xi_p^{-1}f_pg_p,&g_pf_\infty&=\eta_p^{-1}f_\infty g_p,\\ g_\infty f_p&=\eta_pf_pg_\infty,&g_\infty f_\infty&=\eta_\infty^{-1}f_\infty g_\infty, \end{align*} and with coproducts \begin{align*} \Delta(v_i)&=v_i\otimes g_i+1\otimes v_i,& \Delta(f_i)&=f_i\otimes 1+g_i^{-1}\otimes f_i, &\text{for }i=p,\infty. \end{align*} Choosing instead $g_\infty\triangleright v_p=\xi_\infty v_p$, we obtain more examples, where the Nichols algebra will involve other relations depending on the choice of $\xi_\infty$.
\end{example} \subsection{Classes of Pointed Hopf Algebras by Radford}\label{radford} In \cite{Rad}, a class of pointed Hopf algebras $U_{(N,\nu, \omega)}$ was introduced (see also \cite{Gel} for generalizations). These Hopf algebras are associated to the datum of a positive integer $N$, an integer $1\leq \nu <N$ such that $N$ does not divide $\nu^2$, and a primitive $N$th root of unity $\omega$ in a field $k$. Denote $q:=\omega^\nu$ and $r=\abs{q^\nu}=\abs{\omega^{\nu^2}}$. We let $C_N$ denote a cyclic group of order $N$ generated by an element $a$. The algebra $U_{(N,\nu,\omega)}$ is the braided Drinfeld double of the YD-module Hopf algebra $U_+:=k[x]/(x^r)$ over $C_N$, with grading given by $x\mapsto a^{\nu}\otimes x$ and action $a\triangleright x=q^{-1} x$. Note that $U_+$ is the Nichols algebra of the one-dimensional YD-module $kx$. The coalgebra structure is given by $\Delta(x)=x\otimes a^{\nu}+1\otimes x$, and $\Delta(y)=y\otimes 1+a^{-\nu}\otimes y$ for the dual generator $y$. Note further that the other Hopf algebra $H_{(N,\nu,\omega)}$ introduced by Radford is simply the bosonization $U_+\rtimes kC_N$ in this set-up. The algebras $U_{(N,\nu, \omega)}$ and $H_{(N,\nu,\omega)}$ are not indecomposable unless $\nu=1$. To obtain indecomposable pointed Hopf algebras, we can consider the subalgebras generated by $x, y$ and $a^{\nu}$ (respectively, $x$ and $a^{\nu}$). Since these only depend on the choices of $r$ and $q$, we denote these Hopf algebras by $U_{(r,q)}$ (respectively, $H_{(r,q)}$). Note that $U_{(r,1,q)}=U_{(r,q)}$. \subsection{Quantum Group Analogues in Other Contexts}\label{conclusion} To conclude this paper, we would like to adopt the point of view that quantum groups can also be studied over Hopf algebras $H$ other than the group algebra. For this, one can, motivated by the results of this paper, look for Hopf algebras $A$ with triangular decomposition over $H$.
The property, over a group, that $A$ is of separable type can be generalized by requiring that the YD-module $V$ decomposes, with respect to the left and right coactions $\delta_r$ and $\delta_l$, as a direct sum of distinct (one-dimensional) simples. One-dimensionality of the simples is, however, a strong restriction. As a first example, we can consider the case where $H$ itself is primitively generated, i.e. $H=k[x_1, \ldots, x_n]$ over a field of characteristic zero. If $A$ is a bialgebra with triangular decomposition over $H$, then for $v\in V$, $\Delta(v)\in V\otimes H+H\otimes V$ implies that $\Delta(v)$ in fact equals $v\otimes 1+1\otimes v$ by the counit axiom. This gives that $A$ is generated by primitive elements and hence is a pointed Hopf algebra that is connected (i.e. the group of group-like elements is trivial). Now $A$ is in particular cocommutative, so Theorem 5.6.5 in \cite{Mon} implies (for $\operatorname{char} k=0$) that $A=U(\fr{g})$ where $\fr{g}$ is the Lie algebra of primitive elements in $A$. From this point of view, all quantum groups over $H=k[x_1,\ldots, x_n]$ are simply the classical universal enveloping algebras. Investigating bialgebras with triangular decomposition over other Hopf algebras $H$ can be the subject of future research. \begin{acknowledgements} A preliminary version of this paper is part of the PhD thesis of the author, completed at the University of Oxford. I am grateful to my PhD advisor Prof Kobi Kremnizer for his guidance. I would also like to thank Dr Yuri Bazlov, Prof Arkady Berenstein, Prof Dan Ciubotaru and Prof Shahn Majid for helpful discussions on the subject matter. \end{acknowledgements}
\section{Introduction} One of the problems of relativistic cosmology is the formulation of the initial conditions for the universe evolution. A lot has been done in this direction concerning quantum fields on a classical uniform background \cite{Birrell1982}, in particular in describing the origin of primordial inhomogeneities \cite{Mukhanov2005,Linde1990}, which give the initial conditions on the last scattering surface for the cosmic microwave background radiation (see \cite{Dodelson2003} and references given therein). The modern description of the uniform background itself includes the inflation paradigm \cite{Starobinsky1980,Guth1981,Liddle}, which, besides describing the values of the density perturbations, successfully solves the problems of horizon and flatness. In describing the earlier stage of evolution, one encounters the problem of the initial conditions again. The well-known Penrose theorem \cite{Penrouse1965,Geroch1968,Hawking1970} states that under quite general conditions, the initial point of the evolution should be singular. One of the conditions of the Penrose theorem is the energy condition, which is violated during inflation \cite{Borde1997}, but the geodesics remain past-incomplete in this case as well \cite{Borde2003}. The incompleteness of geodesics tells us that there is a moment in the past (i.e., a singularity) beyond which one cannot move in the past direction. It seems natural to set the initial conditions at this last point of the backward evolution (the initial point of the future evolution). At first sight, this seems quite impossible, because the dynamical quantities, such as the amplitudes of the matter fields and the logarithm of the scale factor, turn to infinity at the singularity. It is considered that near the singularity, at the Planck epoch, quantum effects are crucial.
Thus, the problem of the initial conditions and the singularity should be considered at the quantum level \cite{Hartle1983,Vilenkin1988,Bojowald2001,Bojowald2003,Kiefer2009}, although one could attempt to avoid the singularity at a classical level \cite{Minkevich2006,Santos2015}. In relation to the singularity problem, we need to discuss some approaches to gravity quantization. According to \cite{Ashtekar2008,Bojowald2011,Bojowald2012,Husain2004,Tarrio2013}, loop quantum gravity removes the singularity completely, including different types of future singularities, such as the Big Rip. The absence of singularities in loop quantum gravity originates from the fact that the volume operator (and consequently the universe scale factor) has a discrete spectrum bounded below. However, problems remain: how to connect this discrete spectrum with the time evolution of the universe and with canonical gravity quantization, and the self-consistency of loop quantum gravity itself. Work in this direction is in progress \cite{Ashtekar2015}. The canonical quantization of general relativity (GR) leads to the Wheeler--DeWitt (WDW) equation \cite{DeWitt1967,Wheeler1968}, which is the analog of the Schr\"{o}dinger equation of ordinary quantum mechanics. However, the equation does not contain a time variable explicitly, so one has to interpret the wave function of the universe in some way. For instance, one could interpret the scale factor as the time variable; this, however, cannot be considered a complete solution of the problem, because one needs to describe the evolution of the dynamical variables, including the scale factor, in time explicitly. For instance, in Ref. \cite{Tarrio2013} the effective Hamiltonian has been deduced by including loop quantum gravity corrections, and then it was investigated classically (i.e., to describe the time evolution, the authors of Ref. \cite{Tarrio2013} return to a classical description).
It seems more fundamental to consider the problem of the singularity and the initial conditions in a quantization scheme involving the evolution in time explicitly. Such a scheme was suggested for mini- and midi-superspace models \cite{Cherkas2006,Cherkas2012,Cherkas2015}. In ordinary quantum mechanics, the Schr\"{o}dinger and Heisenberg pictures are equivalent. In quantum gravity, a canonically quantized Hamiltonian of GR cannot serve for building the Heisenberg picture, that is, the conventional Heisenberg picture does not exist. Nevertheless, one can quantize the equations of motion straightforwardly, that is, a quasi-Heisenberg picture exists (Fig. \ref{H}). In the quantization scheme of Refs. \cite{Cherkas2006,Cherkas2012,Cherkas2015}, the quasi-Heisenberg operators satisfy the commutation relations obtained from the system of constraints and gauge conditions with the help of the Dirac brackets at the initial moment of time. Then the quasi-Heisenberg operators are allowed to evolve according to the equations of motion. This evolution implicitly determines a time-dependent gauge fixing, which is defined explicitly prior to quantization only at the initial moment of time. \begin{figure}[h] \includegraphics[width=9cm]{scheme.eps}\\ \caption{Quasi-Heisenberg quantization scheme}\label{H} \end{figure} It should be noted that the Heisenberg picture for gravity quantization using anticommutative ghost variables was discussed in Ref. \cite{Vereshkov2013}. The Schr\"{o}dinger picture using anticommutative ghost variables has also been developed \cite{Shestakova1999,Savch,Savch1}. It would be instructive to compare these approaches with one another and with the quasi-Heisenberg picture using some simple minisuperspace model as an example, but this has not been done yet. The aim of the present work is to consider more closely the setting of the initial conditions for the quasi-Heisenberg operators in connection with the singularity problem.
Though the initial singularity remains, the situation differs substantially from the classical one. It appears that one may set the initial conditions at the singularity directly. This will be demonstrated with the example of the Gowdy model described in section \ref{sect2} of the paper. This model admits an analytical solution within the whole time domain and has been used for singularity investigations \cite{Berger1982,Hussein1987}. Also, this model allows choosing the out-vacuum state, as the gravitational waves evolve against a classical background\footnote{ In the general case, the quasi-Heisenberg picture admits a quantum background. }. Because the existence or nonexistence of the singularity turns out to be related to the problem of the regularization of the vacuum energy \cite{Berger1982,Hussein1987}, the issue of vacuum energy is briefly discussed in section \ref{sect3}, where the evolution of the system in a vacuum state is considered and then compared with the evolution in the state given by the wave packet used in section \ref{sect2}. \section{Quasi-Heisenberg quantization of the Gowdy Model}\label{sect2} The polarized ${\bf T}^3$ Gowdy model corresponds to an anisotropic universe, in which the gravitational waves travel unidirectionally. Let us take a metric in the form of \be ds^2=e^{\tau-\lambda}(d\eta^2-d x^2)-e^{2 \tau+2\sqrt 3 V}dy^2-e^{2 \tau-2\sqrt 3 V}dz^2, \label{our} \ee where the coordinates $\eta,x,y,z$ define points of the pseudo-Riemannian manifold. The quantities $\tau, \lambda$ and $V$ determine the manifold metric and depend only on the variables $\eta$ and $x$, which take values in $[0,\infty)$ and $[0,2 \pi)$, respectively. We treat the coordinate $\eta$ as a ``time'' parameter describing the evolution of the system. In Eq.
(\ref{our}) we use a slightly different gauge than Gowdy's original one: \[ ds^2=e^{-\lambda+3\tau}dt^2-e^{-\lambda-\tau}d X^2-e^{2 \tau+2\sqrt 3 V}dy^2-e^{2 \tau-2\sqrt 3 V}dz^2, \] where $d t=e^{-\tau} d\eta$ and $dX=e^{\tau} d x$. The motivation is that in the gauge given by (\ref{our}), the equations of motion contain the difference of the potential and kinetic energies of the field oscillators. In the absence of evolution, this quantity is zero by virtue of the virial theorem. When the system evolves, the virial theorem is violated \cite{Anishchenko2008}. As was shown earlier, the difference of the potential and kinetic energies provides a value of the universe acceleration parameter for the Friedmann universe which is comparable with the observed one \cite{Cherkas2007}. The Einstein equations lead to three equations of motion \bea V^{\prime\prime}-\ptl_{xx}V+2\tau^\prime V^\prime-\ptl_{x}V\ptl_{x}\tau=0,~~~~~~~~~~\label{eqmov1}\\ \tau^{\prime\prime}-\ptl_{xx}\tau=2(\ptl_x \tau)^2-2 (\tau^\prime)^2,~~~~~~~~~~~~~\label{eqmov2}\\ \lambda^{\prime\prime}-\ptl_{xx}\lambda=4(\ptl_x \tau)^2-4 (\tau^\prime)^2-6(\ptl_x V)^2+6(V^\prime)^2,~~~~~ \label{eqmov3} \eea and two constraints \bea \mathcal H(\eta, x)=e^{2\tau}\biggl(\frac{1}{3}(\ptl_x\tau)^2+\frac{1}{2}(\ptl_x V)^2+\frac{1}{6}\ptl_x\tau\ptl_x\lambda\nonumber\\+\frac{1}{3} \ptl_{xx}\tau-\frac{1}{3}\tau^{\prime 2}+\frac{1}{2}V^{\prime 2}+\frac{1}{6}\tau^\prime\lambda^\prime \biggr)=0,~~\\ \mathcal P(\eta, x)=e^{2\tau}\biggl(\frac{1}{6}\ptl_x\lambda\, \tau^\prime+\ptl_x V V^\prime+\frac{1}{6}\ptl_x \tau \lambda^\prime~~~~~~~\nonumber\\+\frac{1}{3}\ptl_x\tau^\prime \biggr)=0,~~ \eea where the prime denotes differentiation with respect to the time $\eta$. Let us discuss the structure of the equations of motion (\ref{eqmov1})-(\ref{eqmov3}). Eqs. (\ref{eqmov1})-(\ref{eqmov3}) contain a part corresponding to the wave equation. The remaining parts belong to two different types.
The first one is of the $(\tau^\prime V^\prime-\ptl_{x}V\ptl_{x}\tau)$-type. In this case, we refer to $V$ as a ``field'' variable, whereas $\tau$ plays the role of the ``background'' against which the field $V$ oscillates. The equations for the ``background'' variable contain the difference of the kinetic and potential energies, e.g., $(\tau^\prime)^2-(\ptl_x \tau)^2$ or $(V^\prime)^2-(\ptl_x V)^2$. The situation is analogous to the model representing a string against a curved background \cite{Cherkas2012}. However, the equations for the background variable $\tau$ differ from those considered in Ref. \cite{Cherkas2012}, because Eq. (\ref{eqmov2}) for $\tau$ is isolated, whereas the field variables contribute to the corresponding equation for the ``background'' in the toy model \cite{Cherkas2012}. On the other hand, there is another ``background'' variable $\lambda$ here, because the Gowdy model is anisotropic, and one needs two variables $\tau$ and $\lambda$ to describe the background. It should be noted that the ``background'' variable $\lambda$ does not influence the oscillations of the ``field'' $V$. In the general case, an inhomogeneous variable $\tau$ has to be treated as a quantum operator with the related algebra. However, the goal of the present paper is to consider the initial conditions near the singularity. Thus, for simplicity, a particular gauge is taken where $\tau$ is non-quantum (i.e., ``c''-number valued) and spatially homogeneous. This results in a solution akin to Gowdy's \cite{Gowdy1974,Mizner,Berger}. It is convenient to expand the dynamical variables into the Fourier series \bea V(\eta,x)=\sum_{k} {\mathcal V}_k(\eta) e^{i kx},\nonumber\\ \lambda(\eta,x)=\sum_{k} {\Lambda}_k(\eta) e^{i kx},\nonumber\\ \tau(\eta,x)=\sum_{k} {T}_k(\eta) e^{i kx} . \label{cond1} \eea \noindent The equation of motion (\ref{eqmov2}) for $\tau$ is isolated from the others.
Thus, the spatially uniform initial conditions for $\tau$ make it spatially independent in the course of evolution. So one can take the initial conditions \bea T_k(0)=\delta_{0,k}{\mathcal T}_0,~~~~ T^\prime_k(0)=\delta_{0,k}\,e^{-2{\mathcal T}_0}\Pi, \label{int} \eea \noindent where $\Pi$ and ${\mathcal T}_0$ are some constants. We shall further refer to ${T}_0(\eta)$ as $\tau(\eta)$. Advancing in this way and using the aforementioned gauge, one comes to the following equations of motion and constraints: \bea \tau^{\prime\prime}+2 (\tau^\prime)^2=0,\label{diff1}\\ {\mathcal V}_k^{\prime\prime}+k^2{\mathcal V}_k+2\tau^\prime {\mathcal V}_k^\prime=0,\label{diff2}\\ \Lambda_0^{\prime\prime}=-4 (\tau^\prime)^2+6\sum_q\left({\mathcal V}_q^\prime{\mathcal V}_{-q}^\prime-q^2{\mathcal V}_q{\mathcal V}_{-q}\right), \label{diff3} \eea \bea \mathcal H_k=e^{2\tau}\biggl(-\delta_{k,0}\frac{1}{3 }\tau^{\prime 2}+\frac{1}{6}\tau^\prime\Lambda_k^\prime+\frac{1}{2}\sum_q \Bigl({\mathcal V}_q^{\prime } {\mathcal V}_{k-q}^{\prime }\nonumber\\-q(k-q){\mathcal V}_q {\mathcal V}_{k-q}\Bigr) \biggr)=0,~~\label{hk}\\ \mathcal P_k=e^{2\tau}\biggl(\frac{1}{6}i k\Lambda_k\, \tau^\prime+\sum_q (i q) {\mathcal V}_q {\mathcal V}_{k-q}^\prime\biggr)=0.~~ \label{pk} \eea The equations of motion (\ref{diff1})-(\ref{diff3}) can be obtained from the Hamiltonian $H=\mathcal H_0$. It should be noted that $\Lambda_k$ at $k\ne 0$ is completely defined by the momentum constraint equation (\ref{pk}), namely \be \Lambda_k=-\frac{6}{k \tau^\prime}\sum_q q{\mathcal V}_q{\mathcal V}_{k-q}^\prime, \ee which reduces the system to $\tau,\Lambda_0,{\mathcal V}_k$.
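The expression for $\Lambda_k$ above is just the momentum constraint (\ref{pk}) solved algebraically for $\Lambda_k$. As a sanity check, a short Python sketch with random (illustrative) mode amplitudes, truncated to a few modes, confirms that substituting it back makes $\mathcal P_k$ vanish:

```python
import random

def momentum_constraint(k, Lam, V, dV, dtau, kmax):
    # P_k / e^{2 tau} = (1/6) i k Lambda_k tau' + sum_q (i q) V_q V'_{k-q}
    s = 0j
    for q in range(-kmax, kmax + 1):
        if -kmax <= k - q <= kmax:
            s += 1j * q * V[q] * dV[k - q]
    return 1j * k * Lam * dtau / 6.0 + s

random.seed(1)
kmax, k, dtau = 4, 2, 0.7
V = {q: complex(random.random(), random.random()) for q in range(1, kmax + 1)}
dV = {q: complex(random.random(), random.random()) for q in range(1, kmax + 1)}
V[0], dV[0] = complex(random.random()), complex(random.random())
for q in range(1, kmax + 1):          # reality conditions V_{-q} = V_q^*
    V[-q], dV[-q] = V[q].conjugate(), dV[q].conjugate()

# Lambda_k = -(6/(k tau')) sum_q q V_q V'_{k-q}, as in the text
Lam = -(6.0 / (k * dtau)) * sum(q * V[q] * dV[k - q]
                                for q in range(-kmax, kmax + 1)
                                if -kmax <= k - q <= kmax)
assert abs(momentum_constraint(k, Lam, V, dV, dtau, kmax)) < 1e-12
```

The cancellation is exact (up to rounding) because the same truncated sum enters both the constraint and the solved expression.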
One can introduce the momenta \bea \pi_k=\frac{\ptl H}{\ptl {\mathcal V}_k^\prime}=e^{2 \tau}{\mathcal V}_{-k}^\prime,~~~ P_\Lambda=\frac{\ptl H}{\ptl \Lambda_0^\prime}=e^{2\tau}\tau^\prime/6,\nonumber\\ P_\tau=-\frac{\ptl H}{\ptl \tau^\prime}=e^{2\tau}\left(\frac{2}{3}\tau^\prime-\Lambda^\prime_0/6\right),~~ \eea and rewrite the Hamiltonian in terms of these momenta \bea H=e^{-2\tau}\left(-6P_\Lambda P_\tau+12 P_\Lambda^2+\frac{1}{2}\pi_0^2+\sum_{k\ge 1}\pi_k \pi_{k}^* \right)\nonumber\\+e^{2\tau}\sum_{k\ge 1}k^2{\mathcal V}_k {\mathcal V}^*_{k},~~~ \label{hsm} \eea where it is taken into account that $\pi_{-k}=\pi_k^*$ and ${\mathcal V}_{-k}={\mathcal V}_k^*$. The quasi-Heisenberg quantization consists in the quantization of the equations of motion \cite{Cherkas2006,Cherkas2012,Cherkas2015}. Briefly, this procedure can be described in the following way. The operator initial conditions for the equations of motion include the conditions (\ref{int}) rewritten in terms of $\tau$ and the remaining conditions \bea {\hat {\mathcal V}_k}(0)=\hat v_k,~~~ \hat {\mathcal V}_k^\prime(0)=e^{-2{\mathcal T}_0}\hat p_{-k},~~ \hat \Lambda_0(0)=L_0,\nonumber\\ \hat \Lambda_0^\prime(0)=e^{-2{\mathcal T}_0}(24 P_\Lambda(0)-6 \hat P_\tau(0)),~~~ \tau(0)={\mathcal T}_0,\nonumber\\ \tau^\prime(0)=6 e^{-2{\mathcal T}_0}P_\Lambda(0),~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ \eea where ${\mathcal T}_0$ and $L_0$ are some $c$-numbers, \bea P_\Lambda(0)=\Pi,~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nonumber\\ \hat P_\tau(0)=\frac{1}{6\Pi}\left(12 \Pi^2+\frac{1}{2}\sum_k \left(\hat p_k \hat p_{-k} +e^{4{\mathcal T}_0}k^2 \hat v_k \hat v_{-k}\right)\right),\nonumber \eea and $\Pi$ is a $c$-number as well. The operators $\hat p_k$ and $\hat v_k$ do not depend on time and satisfy the standard commutation relations $[\hat p_k,\hat v_{k^\prime}]=-i\delta_{k,{k^\prime}}$, where $\delta_{k,k^\prime}$ is the Kronecker symbol.
They are the initial values of the time-dependent operators $\hat \pi_k(\eta)$ and $\hat {\mathcal V}_k(\eta)$. One may implement the above operator commutation relations by the representation $\hat v_k=v_k$, $\hat p_k=-i\frac{\ptl}{\ptl v_{k}}$, or by the representation $\hat p_k=p_k$, $\hat v_k=i\frac{\ptl}{\ptl p_{k}}$. Thus, one has the following commutator algebra at the initial moment of time: $[\hat \pi_k,\hat {\mathcal V}_{k^\prime}]=-i\delta_{k,{k^\prime}}$, $[\hat P_\tau,\hat {\mathcal V}_{k}]=-\frac{i\hat \pi_{-k}}{12 \Pi}=-\frac{i\hat \pi_{k}^+}{12 \Pi}$, $[\hat P_\tau,\hat \pi_{k}]=\frac{i k^2 e^{4{\mathcal T}_0}\hat {\mathcal V}_{-k}}{12 \Pi}=\frac{i k^2 e^{4{\mathcal T}_0}\hat {\mathcal V}_{k}^+}{12 \Pi}$. The quantities $\hat P_{\Lambda}$, $\hat \Lambda_0$ and $\tau$ commute with all the others initially. The commutator algebra can also be obtained with the help of the Dirac brackets \cite{Cherkas2012,Cherkas2015}. After the definition of the initial conditions for the operator evolution (Eq. (18), see Fig. \ref{H}), the following step is to define the Hilbert space where the quasi-Heisenberg operators act. As we stated previously, the quasi-Heisenberg picture is an alternative to the WDW equation; however, it turns out that for building the Hilbert space, one should return to the Hamiltonian (\ref{hsm}) and consider it as the WDW equation in the vicinity of ${\mathcal T}_0\rightarrow -\infty$ \cite{Cherkas2006,Cherkas2012,Cherkas2015}. Before that, the momentum $P_\Lambda$ should be excluded with the help of the gauge condition $P_\Lambda=\Pi$.
The corresponding WDW equation in the vicinity of $\tau={\mathcal T}_0\rightarrow -\infty$ is given as \bea \biggl(-i 6 \Pi\frac{\ptl }{\ptl \tau}+12 \Pi^2-\frac{1}{2}\frac{\ptl^2}{\ptl v_0^2}~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nonumber\\-\sum_{k\ge 1}\frac{\ptl}{\ptl v_k}\frac{\ptl}{\ptl v_{k}^*}\biggr)\Psi(\tau,..v_{1}^*,v_0,v_1...)=0,~~~~~ \label{wheel} \eea where the term $e^{4\tau }k^2 \hat v_k \hat v_{k}^+$ is omitted because states in the form of wave packets will be considered below. If in such a state the typical value of the squared momentum of the mode $k$ is $<\hat p_k \hat p_k^+>\sim 1/a_k$, then, due to the uncertainty principle, the typical value of $<e^{4\tau }k^2 \hat v_k \hat v_{k}^+>$ is $\sim e^{4\tau }k^2 a_k$, so this term becomes negligible in the vicinity of $\tau={\mathcal T}_0\rightarrow -\infty$, which is exactly what is needed. Here $\hat v_k^+=v_k^*$, $\hat {p^+_k}=-i\frac{\ptl}{\ptl v^*_{k}}$. The mean value of a quasi-Heisenberg operator $A(\eta,\tau,v_i,\hat p_i)$ is given by the formula \bea <\psi|\hat A(\eta,\tau,v_j,-i\frac{\ptl}{\ptl v_j})|\psi>= \int\Psi^*(\tau,v_j)\hat A(\eta,\tau,v_j,-i\frac{\ptl}{\ptl v_j})\nonumber\\ \Psi(\tau,v_j)dv_0dv_1dv_1^*\dots \biggr|_{\,\tau={\mathcal T}_0\rightarrow -\infty }~~,~~~~~~~~~~ \label{meangen0} \eea where the integral over $dzdz^*\equiv\frac{\rho d\rho d\phi}{2\pi i}$, $z=\rho e^{i\phi}$, is understood in the holomorphic representation \cite{Faddeev1987}. It should be noted that, as in the Klein-Gordon current scalar product \cite{Cherkas2006,Cherkas2012,Cherkas2015}, there is no integration over the variable $\tau$ in equation (\ref{meangen0}). Instead, it is set to some quantity ${\mathcal T}_0$.
For instance, in the more general case of an equation containing the derivative $\frac{\ptl^2}{\ptl \tau^2}$ as well as $\frac{\ptl}{\ptl \tau}$, the scalar product should contain both a term $i\left(\frac{\ptl\Psi}{\ptl \tau} \Psi^*-\frac{\ptl\Psi^*}{\ptl \tau} \Psi\right)$ of the ``current'' type and a term $\Psi^*\Psi$ of the ``density'' type. In any case, the quantity $\tau$ should be set to some value $\mathcal T_0$ \cite{Mostafazadeh2004}. Here, the quantity ${\mathcal T}_0$ is chosen to be initially finite, thus avoiding the singularity, but finally the limit ${\mathcal T}_0 \rightarrow -\infty$ is taken. The general solution of Eq. (\ref{wheel}) may be written in the form of the wave packet \begin{widetext} \be \Psi(\tau,..v_{1}^*,v_0,v_1...)=\int C(..p_{1}^*,p_0,p_1...) \exp\left(-\frac{i}{6 \Pi}\left(12 \Pi^2+\frac{1}{2}p_0^2+\sum_{k\ge 1}p_kp_k^*\right)\tau+i \sum_{k\ge 0} v_kp_{k}^*\right)dp_0dp_1dp_1^*... \label{w} \ee \end{widetext} In the momentum representation, the wave function (\ref{w}) takes the form \bea \psi(\tau,..p_{1}^*,p_0,p_1...)=C(..p_{1}^*,p_0,p_1...) \nonumber~~~~~~~~~~~~~\\\exp\biggl(-\frac{i}{6 \Pi}\biggl(12 \Pi^2+\frac{1}{2}p_0^2+\sum_{k\ge 1}p_kp_k^*\biggr)\tau\biggr),~~~ \label{pack} \eea and formula (\ref{meangen0}) for the mean value looks like \bea <\psi|\hat A(\eta,\tau,\mbox{i}\frac{\ptl}{\ptl p_j},p_j)|\psi>= \int\psi^*(\tau,p_j) ~~~~~~~~~~~~~~~\nonumber\\\hat A(\eta,\tau,\mbox{i}\frac{\ptl}{\ptl p_j},p_j)\psi(\tau,p_j)dp_0dp_1dp_1^*\dots \biggr|_{\,\tau={\mathcal T}_0\rightarrow -\infty}.~~~~~ \label{meangen} \eea For this simple model, an analytical solution exists, which allows demonstrating the calculation of the mean values in detail. The solution of Eq. (\ref{diff1}) is \be \tau(\eta)={\mathcal T}_0+\frac{1}{2}\ln\left(1+12 \Pi e^{-2{\mathcal T}_0}\eta\right). \label{tt} \ee First, let us consider the solution of Eq. (\ref{diff2}) in the vicinity of $\tau\sim {\mathcal T}_0\rightarrow -\infty$.
It takes the form \be \hat {\mathcal V_k}(\eta)\approx \hat v_k+\frac{1}{12 \Pi} p_k^* \ln \left(1+12 \Pi e^{-2 {\mathcal T_0} }\eta\right). \label{field0} \ee If ${\mathcal T}_0$ tends to minus infinity, then the expression (\ref{tt}) for $\tau(\eta)$ becomes $\tau(\eta)=\frac{1}{2}\ln\left(12\Pi \eta\right)$. However, the expression for the operator $\hat {\mathcal V}_k(\eta)$ diverges formally as ${\mathcal T}_0\rightarrow -\infty$. This reflects the fact that it is impossible to set the field values at the singularity in the classical picture. Below we demonstrate that the quantum picture validates the limit of ${\mathcal T}_0\rightarrow -\infty$ for the mean observable values. Let us consider the mean value of (\ref{field0}) over the wave packet (\ref{pack}) \begin{widetext} \bea <\psi|\hat {\mathcal V}_k|\psi>=\int (C(..p_{1}^*,p_0,p_1...))^* \exp\left(\frac{i}{6 \Pi}(12 \Pi^2+\sum_{q\ge 0}p_qp_q^*){\mathcal T}_0\right)\nonumber\biggl(i\frac{\ptl}{\ptl p_k}+\frac{1}{12 \Pi}p_k^* \ln \left(1+12 \Pi e^{-2 {\mathcal T_0} }\eta\right)\biggr)\nonumber\\\exp\left(-\frac{i}{6 \Pi}(12 \Pi^2+\sum_{q\ge 0}p_qp_q^*){\mathcal T}_0\right)C(..p_{1}^*,p_0,p_1...)dp_0dp_1dp_1^*\dots\biggr|_{\,{\mathcal T}_0\rightarrow -\infty }\nonumber\\=\int (C(..p_{1}^*,p_0,p_1...))^*\biggl(\frac{1}{12\Pi}p_k^*\ln(1+12\Pi e^{-2{\mathcal T}_0}\eta)+\frac{1}{6 \Pi}p_k^*{\mathcal T}_0+i\frac{\ptl}{\ptl p_k}\biggr)C(..p_{1}^*,p_0,p_1...)dp_0dp_1dp_1^*\dots\biggr|_{\,{\mathcal T}_0\rightarrow -\infty }\nonumber\\ = \int (C(..p_{1}^*,p_0,p_1...))^*\biggl(\frac{1}{12\Pi}p_k^*\ln(12\Pi \eta)+i\frac{\ptl}{\ptl p_k}\biggr)C(..p_{1}^*,p_0,p_1...)dp_0dp_1dp_1^*\dots~~~~~~~ \label{calc} \eea \end{widetext} One can see from Eq. (\ref{calc}) that the divergent terms with ${\mathcal T}_0\rightarrow -\infty$ cancel each other, and the mean value of $\hat {\mathcal V}_k$ is finite. Hence, the wave packet defined at the singularity determines the entire evolution of the system. 
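Both the closed-form background (\ref{tt}) and the cancellation of the divergent ${\mathcal T}_0$-terms in Eq. (\ref{calc}) can be verified numerically; below is a minimal Python sketch (the values of $\Pi$, $\eta$ and ${\mathcal T}_0$ are illustrative):

```python
import math

def tau_exact(eta, T0, Pi):
    # Eq. (tt): tau(eta) = T0 + (1/2) ln(1 + 12 Pi e^{-2 T0} eta)
    return T0 + 0.5 * math.log(1.0 + 12.0 * Pi * math.exp(-2.0 * T0) * eta)

def tau_rk4(eta_end, T0, Pi, n=100000):
    # RK4 integration of Eq. (diff1): tau'' + 2 (tau')^2 = 0,
    # with tau(0) = T0 and tau'(0) = 6 e^{-2 T0} P_Lambda(0), P_Lambda(0) = Pi
    h = eta_end / n
    t, v = T0, 6.0 * Pi * math.exp(-2.0 * T0)
    for _ in range(n):
        k1t, k1v = v, -2.0 * v * v
        k2t, k2v = v + h/2*k1v, -2.0 * (v + h/2*k1v)**2
        k3t, k3v = v + h/2*k2v, -2.0 * (v + h/2*k2v)**2
        k4t, k4v = v + h*k3v, -2.0 * (v + h*k3v)**2
        t += h * (k1t + 2*k2t + 2*k3t + k4t) / 6.0
        v += h * (k1v + 2*k2v + 2*k3v + k4v) / 6.0
    return t

def vk_coefficient(T0, Pi, eta):
    # coefficient of p_k^* in Eq. (calc) at finite T0: the log term of the
    # operator solution plus the T0/(6 Pi) term coming from the packet phase
    return (math.log(1.0 + 12.0 * Pi * math.exp(-2.0 * T0) * eta) / (12.0 * Pi)
            + T0 / (6.0 * Pi))

Pi, eta = 1.0, 0.5
# RK4 reproduces the closed-form tau(eta)
assert abs(tau_rk4(2.0, -1.0, Pi) - tau_exact(2.0, -1.0, Pi)) < 1e-6
# the divergent terms cancel: the coefficient tends to ln(12 Pi eta)/(12 Pi)
limit = math.log(12.0 * Pi * eta) / (12.0 * Pi)
assert abs(vk_coefficient(-8.0, Pi, eta) - limit) < 1e-6
```

The second assertion mirrors the analytic cancellation: for ${\mathcal T}_0\to-\infty$ the $\ln$ term grows like $-2{\mathcal T}_0/(12\Pi)$ and is compensated by ${\mathcal T}_0/(6\Pi)$.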
The approximate expression for $\hat {\mathcal V}_k$ has been used above. It is valid for $\eta\sim 0$. However, we intend to consider the exact expression and the contribution of the quantum fluctuations of $V$ to the evolution of $\lambda$. The exact solution of the equation of motion (\ref{diff2}) with $\tau(\eta)$ given by (\ref{tt}) takes the form \begin{widetext} \bea \hat {\mathcal V}_k(\eta)=\frac{\pi }{24 \Pi} \biggl( p_{k}^* J_0\biggl(\frac{e^{2 {\mathcal T}_0} k}{12 \Pi }\biggr) Y_0\biggl(k \biggl(\eta+\frac{e^{2 {\mathcal T}_0}}{12 \Pi }\biggr)\biggr)-J_0\biggl(k \biggl(\eta+\frac{e^{2 {\mathcal T}_0}}{12 \Pi }\biggr)\biggr) \biggl( p_{k}^* Y_0\biggl(\frac{e^{2 {\mathcal T}_0} k}{12 \Pi }\biggr) +k e^{2 {\mathcal T}_0} \hat v_k Y_1\biggl(\frac{e^{2 {\mathcal T}_0} k}{12 \Pi }\biggr) \biggr)\nonumber\\+k e^{2 {\mathcal T}_0} \hat v_k J_1\biggl(\frac{e^{2 {\mathcal T}_0} k}{12 \Pi }\biggr) Y_0\biggl(k \biggl(\eta +\frac{e^{2{\mathcal T}_0}}{12 \Pi }\biggr)\biggr)\biggr).~~~~ \label{solv} \eea \end{widetext} \noindent Here $J_0(z), Y_0(z), Y_1(z)$ and $ J_1(z)$ are the Bessel functions. The second derivative of $\Lambda_0$ can be determined from the equation of motion (\ref{diff3}), whereas its first derivative can be determined from the Hamiltonian constraint (\ref{hk}): \bea \hat \Lambda_0^\prime=\frac{1}{\tau^\prime}\left(2\tau^{\prime 2}-3\hat {\mathcal V}^{\prime 2}_0-6\sum_{k \ge 1}\left(\hat {\mathcal V}^\prime_k \hat {\mathcal V}^{\prime + }_{k}+k^2 \hat {\mathcal V}_k\hat {\mathcal V}_{k}^{+}\right)\right).~~~~~~~ \label{firstd} \eea Here $\hat {\mathcal V}^{+}_{k}$ should be obtained from Eq. (\ref{solv}) by changing $\hat v_k\rightarrow \hat {v^+_k}=i\frac{\ptl}{\ptl p_k^*}$, $p^*_k\rightarrow p_k$. Thus, the most intriguing problem is the calculation of the mean values of $\hat {\mathcal V}^\prime_k \hat {\mathcal V}^{\prime+ }_{k}$ and $k^2 \hat{\mathcal V}_k\hat{\mathcal V}_{k}^{+}$, which are constituents of Eqs.
(\ref{diff3}) and (\ref{firstd}) for $\hat \Lambda_0^{\prime}$, $\hat \Lambda_0^{\prime\prime}$. Tracking these quantities allows calculating the evolution of $\hat \Lambda_0$. Let us take the Gaussian form of the wave packet to determine the evolution of the system, \be C(..p_{1}^*,p_0,p_1...)=\prod_{k=0}^\infty N_k\exp\left(-a_k p_kp_k^*\right), \ee where the constant $a_k$ determines the width of the packet for each mode and $N_k$ is the normalization factor. The calculation according to (\ref{meangen}) leads to the expressions for the mean values of the potential energy $\Xi_k$ and the kinetic energy $K_k$ of each mode $k\ne0$: \begin{widetext} \bea \Xi_k\equiv<\psi|k^2 \hat{\mathcal V}_k\hat {\mathcal V}_{k}^{+ }|\psi>=\frac{k^2}{1152 a_k \Pi ^2}\biggl( \biggl(4 J_0^2(k \eta) \biggl(144 a_k^2 \Pi ^2+\log ^2\biggl(\frac{k}{24 \Pi }\biggr) +2 \gamma \log \biggl(\frac{k}{24 \Pi }\biggr)+\gamma ^2\biggr)\nonumber\\-4 \pi \biggl(\log \biggl(\frac{k}{24 \Pi }\biggr)+\gamma \biggr) J_0(k \eta) Y_0(k \eta)+\pi ^2 Y_0^2(k \eta)\biggr)\biggr), \nonumber\\K_k\equiv<\psi|\hat{\mathcal V}^\prime_k \hat{\mathcal V}^{\prime +}_{k}|\psi>=\frac{k^2}{1152 a_k \Pi ^2}\biggl( \biggl(4 J_1^2(k \eta) \biggl(144 a_k^2 \Pi ^2+\log ^2\biggl(\frac{k}{24 \Pi }\biggr) +2 \gamma \log \biggl(\frac{k}{24 \Pi }\biggr)+\gamma ^2\biggr)\nonumber\\-4 \pi \biggl(\log \biggl(\frac{k}{24 \Pi }\biggr)+\gamma \biggr) J_1(k \eta) Y_1(k \eta)+\pi ^2 Y_1^2(k \eta)\biggr)\biggr), \eea \end{widetext} where $J_0(z), Y_0(z), J_1(z)$ and $ Y_1(z)$ are the Bessel functions and $\gamma$ is the Euler constant. A spatially uniform mode contains only the kinetic energy term \[ K_0\equiv\frac{1}{2}<\psi|{\mathcal V}^{\prime 2}_0 |\psi>=\frac{1}{1152 \,a_0 \Pi^2 \,\eta^2}. \] For further analysis, it is convenient to consider the quasi-classical sector corresponding to late times.
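The late-time analysis relies on the large-argument expansions of the Bessel functions; as a sanity check, the quoted expansion of $J_0$ can be compared with its convergent power series $J_0(z)=\sum_{m\ge 0}(-1)^m(z^2/4)^m/(m!)^2$ (a sketch; the value $z=20$ is illustrative):

```python
import math

def j0_series(z, terms=60):
    # convergent power series J_0(z) = sum_m (-1)^m (z^2/4)^m / (m!)^2
    s, t = 0.0, 1.0
    for m in range(terms):
        s += t
        t *= -(z * z / 4.0) / ((m + 1.0) ** 2)
    return s

def j0_asymptotic(z):
    # large-argument expansion of J_0 quoted in the text
    return (1.0 / math.sqrt(math.pi * z)) * (
        (-9.0 / (128.0 * z * z) + 1.0 / (8.0 * z) + 1.0) * math.sin(z)
        + (-9.0 / (128.0 * z * z) - 1.0 / (8.0 * z) + 1.0) * math.cos(z))

assert abs(j0_series(2.404825557695773)) < 1e-8      # first zero of J_0
assert abs(j0_series(20.0) - j0_asymptotic(20.0)) < 1e-4
```

At $z=20$ the two evaluations agree to a few parts in $10^6$, consistent with the next neglected order $O(z^{-3})$ of the expansion.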
This insight can be provided by expanding the Bessel functions into series over a large argument and keeping the leading terms: \bea Y_0(z)\approx\frac{1}{\sqrt{\pi z}}\Biggl(\left(-\frac{9}{128 z^2}-\frac{1}{8 z}+1\right) \sin (z)~~~~~\nonumber\\+\left(\frac{9}{128 z^2}-\frac{1}{8 z}-{1}\right) \cos (z)\Biggr),\nonumber\\ J_0(z)\approx\frac{1}{\sqrt{\pi z}}\Biggl(\left(-\frac{9}{128 z^2}+\frac{1}{8 z}+{1}\right) \sin (z)~~~~~\nonumber\\+\left(-\frac{9}{128 z^2}-\frac{1}{8 z}+{1}\right) \cos (z)\Biggr),\nonumber\\ J_1(z)\approx\frac{1}{\sqrt{\pi z}}\Biggl(\left(\frac{15}{128 z^2}+\frac{3}{8 z}+{1}\right) \sin (z)~~~~~\nonumber\\+\left(-\frac{15}{128 z^2}+\frac{3}{8 z}-{1}\right) \cos (z)\Biggr),\nonumber\\ Y_1(z)\approx\frac{1}{\sqrt{\pi z}}\Biggl(\left(-\frac{15}{128 z^2}+\frac{3}{8 z}-{1}\right) \sin (z)~~~~~\nonumber\\+\left(-\frac{15}{128 z^2}-\frac{3}{8 z}-{1}\right) \cos (z)\Biggr)\nonumber. \eea Then, a simple estimate results from replacing the oscillating multipliers by their time-averaged values: $ \cos^2(k \eta)\rightarrow \frac{k}{2\pi}\int_0^{2\pi/k}\cos^2(k \eta)d \eta=\frac{1}{2}$, $\sin^2(k \eta)\rightarrow \frac{1}{2}$, and $\sin(k \eta)\cos(k \eta)\rightarrow 0$. Using Eqs. (\ref{diff3}) and (\ref{firstd}) we get \begin{widetext} \bea <\psi|\hat \Lambda_0^{\prime}|\psi>\approx \frac{1}{\eta}\left(1-\frac{1}{96 {a_0} \Pi ^2}\right)-\sum_{k\ge 1}\left[\frac{k F_k}{6 \Pi^2\pi a_k}+\frac{12 a_k k}{\pi } +\frac{1}{\eta^2}\left(\frac{F_k}{48\pi\Pi^2 k a_k}+\frac{3 a_k}{2 \pi k}\right)\right], \label{first} \eea \bea <\psi|\hat \Lambda_0^{\prime\prime}|\psi>\approx-\frac{1}{\eta^2}\left(1-\frac{1}{96 {a_0} \Pi ^2}\right)+\frac{1}{\eta^3}\sum_{k\ge 1}\left(\frac{F_k}{24\pi\Pi^2 k a_k} +\frac{3 a_k}{\pi k}\right), \label{second} \eea \end{widetext} where $F_k=\frac{\pi^2 }{8}+\frac{\gamma ^2}{2}+\frac{1}{2}\ln ^2\left(\frac{k}{24 \Pi }\right)+\gamma \ln \left(\frac{k}{24 \Pi }\right)$. It should be noted that Eq.
(\ref{second}), describing the averaged second derivative of $\Lambda_0$ in the sense of the time-averaged evolution, can be obtained from Eq. (\ref{first}) by differentiation with respect to $\eta$. Turning to the continuum limit $\sum_k \rightarrow \frac{1}{2\pi}\int dk $, we can see that the second term in Eq. (\ref{first}), corresponding to the vacuum energy, diverges for any asymptotic behavior of $a_k$ at large $k$. The most divergent term, $\frac{k F_k}{6 \Pi^2\pi a_k}+\frac{12 a_k k}{\pi }$, vanishes under differentiation of Eq. (\ref{first}). The remaining term is the mean value of the difference of the potential and kinetic energies of the field oscillators; it has been considered in Ref. \cite{Cherkas2007} for the Friedmann universe. It was found that this term defines the value of the acceleration parameter of the universe, which is compatible with the observed one. One should note that a UV cutoff in momenta was used for the estimates of Ref. \cite{Cherkas2007}. The present-day universe expands isotropically, so one cannot compare the results of the above calculations with observational values directly. However, the early stages of the universe could be highly anisotropic \cite{Belinsky1970}. Particle creation during the anisotropic cosmological expansion and its back reaction on the metric have been considered in Ref. \cite{Lukash1974}. It is interesting that the authors of Ref. \cite{Lukash1974} faced the necessity of setting initial conditions for the evolution: they were forced to begin the evolution from a certain artificial moment of time. As we have seen above, in the quasi-Heisenberg picture there exists a fundamental possibility to set the initial conditions at the singularity itself and thereby to improve the analysis of Ref. \cite{Lukash1974}. \section{Evolution determined by the vacuum state}\label{sect3} In the considered gauge, the background variable $\tau$ is not quantum.
For this particular case, one can use ordinary quantization in terms of creation and annihilation operators. Thus, we consider the quantization of the field $V$ against the time-dependent background $\tau(\eta)=\frac{1}{2}\ln(12 \Pi \eta)$. In this case, the field modes are represented as \cite{Birrell1982} \be \hat{\mathcal V}_k(\eta)=\hat{\mbox a}_k u_k(\eta)+\hat{\mbox a}_{-k}^+ u_k^*(\eta), \ee where $ [{\hat {\mbox{a}}_k},{\hat {\mbox{a}}}_k^+]=1. $ \noindent The function $u_k(\eta)$ should satisfy the condition \be e^{2 \tau (\eta)} \left({u_k^*}(\eta) u_k^\prime(\eta)-u_k(\eta) u_k^{*\prime}(\eta)\right)=i. \label{rel0} \ee The mean values of the potential and kinetic energies of the mode $k$ in the vacuum state are equal to \bea \Xi_k= <0|k^2 {\mathcal V}_k{\mathcal V}_{k}^{+ }|0>=k^2 u^{*}_k u_k,\nonumber \\ K_k=<0|{\mathcal V}^\prime_k {\mathcal V}^{+\prime }_{k}|0>=u^{*\prime}_k u^{\prime}_k. \eea Thus, one has to determine the functions $u_k$. The vacuum state is defined as the state annihilated by the annihilation operators: $\hat {\mbox{a}}_k|0>=0$. However, the definition of $u_k$ is ambiguous: there exists a family of functions $u_k$ which satisfy Eq. (\ref{rel0}) and are interrelated by Bogoliubov transformations. It was shown in Ref. \cite{Anishchenko2009} that the vacuum state can be defined through the minimization of some functional containing the difference of the potential and kinetic energies of the field oscillators. In such a way one comes to the function \be u_k(\eta)=\frac{1}{4} \sqrt{\frac{\pi }{3\Pi}}\, H_0^{(2)}(|k|\eta), \ee where $H_0^{(2)}(z)$ is the Hankel function of the second kind. There is no particle (i.e., graviton) creation here, because the difference of the kinetic and potential energies is not an oscillating quantity \cite{Anishchenko2009}.
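The normalization (\ref{rel0}) is consistent with the time evolution because the bracket is a Wronskian of Eq. (\ref{diff2}): with $\tau=\frac{1}{2}\ln(12\Pi\eta)$ one has $2\tau^\prime=1/\eta$, and $e^{2\tau}\left(u^*u^\prime-u u^{*\prime}\right)$ is conserved along the flow. A numerical sketch of this conservation (with illustrative initial data rather than the Hankel-function solution itself):

```python
def wronskian_charge(eta0, eta1, k=3.0, Pi=1.0, n=50000):
    # integrate u'' + k^2 u + (1/eta) u' = 0 (Eq. (diff2) with 2 tau' = 1/eta)
    # by RK4 and return Q = e^{2 tau} (u* u' - u u*') with e^{2 tau} = 12 Pi eta
    h = (eta1 - eta0) / n
    u, du, eta = 1.0 + 0.0j, 1j * k, eta0
    def f(eta, u, du):
        return du, -k * k * u - du / eta
    for _ in range(n):
        k1u, k1d = f(eta, u, du)
        k2u, k2d = f(eta + h/2, u + h/2*k1u, du + h/2*k1d)
        k3u, k3d = f(eta + h/2, u + h/2*k2u, du + h/2*k2d)
        k4u, k4d = f(eta + h, u + h*k3u, du + h*k3d)
        u += h * (k1u + 2*k2u + 2*k3u + k4u) / 6
        du += h * (k1d + 2*k2d + 2*k3d + k4d) / 6
        eta += h
    return 12.0 * Pi * eta * (u.conjugate() * du - u * du.conjugate())

q0 = 12.0 * 1.0 * 0.1 * (2j * 3.0)       # Q at eta0 = 0.1 for u = 1, u' = 3i
q1 = wronskian_charge(0.1, 5.0)
assert abs(q1 - q0) < 1e-6 * abs(q0)
```

The conservation is the reason why imposing (\ref{rel0}) at one moment of time fixes the normalization for all $\eta$.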
Using the asymptotics of the Hankel function for large arguments, \[ H_0^{(2)}(z)\approx\sqrt{\frac{2}{\pi z}}\,e^{-i(z-\pi/4)}\left(1+\frac{i}{8 z}-\frac{9}{128 z^2}\right) \] one can obtain the mean values of $\hat\Lambda_0^\prime$ and $\hat\Lambda_0^{\prime\prime}$ over the vacuum state: \bea <0|\hat\Lambda_0^\prime|0>\approx\frac{1}{\eta}-\sum_{k\ge 1}\left(\frac{k}{\Pi}+\frac{1}{8 k \eta^2 \Pi}\right),\nonumber\\ <0|\hat\Lambda_0^{\prime\prime}|0>\approx -\frac{1}{\eta^2}+\sum_{k\ge 1}\frac{1}{4 k\eta^3 \Pi}. \label{vv} \eea It is interesting to compare the above results with those from the quasi-Heisenberg quantization. For this purpose one has to find the value of $a_k$ in Eqs. (\ref{first}), (\ref{second}) which minimizes the constant contribution $\frac{k F_k}{6 \Pi^2\pi a_k}+\frac{12 a_k k}{\pi }$ of each mode to $\Lambda_0^\prime$ given by Eq. (\ref{first}). That gives $a_k=\frac{1}{6\Pi}\sqrt{\frac{F_k}{2}}$. Substitution of this value into Eqs. (\ref{first}) and (\ref{second}) leads to \bea <\psi|\hat \Lambda_0^{\prime}|\psi>\approx \frac{1}{\eta}\left(1-\frac{1}{96 {a_0} \Pi ^2}\right)~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nonumber\\-\sum_{k\ge 1}\sqrt{1+\frac{4}{\pi^2}\left(\gamma+\ln\left(\frac{k}{24 \Pi}\right)\right)^2}\left(\frac{k}{\Pi} +\frac{1}{8 k \eta^2 \Pi}\right),~~~~~ \label{first1} \eea \bea <\psi|\hat \Lambda_0^{\prime\prime}|\psi>\approx-\frac{1}{\eta^2}\left(1-\frac{1}{96 {a_0} \Pi ^2}\right)~~~~~~~~~~~~~~~~~~~~~\nonumber\\+\sum_{k\ge 1}\sqrt{1+\frac{4}{\pi^2}\left(\gamma+\ln\left(\frac{k}{24 \Pi}\right)\right)^2}{\frac{1}{4 k\eta^3 \Pi}}.~~~~~ \label{second1} \eea The comparison with Eq. (\ref{vv}) demonstrates that a non-vanishing term supplements the vacuum-state term in the quasi-Heisenberg quantization scheme. Thus, any momentum wave packet defined at the singularity gives an inevitable counterpart corresponding to matter (in this model, the ``matter'' consists of gravitational-wave quanta).
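The value $a_k=\frac{1}{6\Pi}\sqrt{F_k/2}$ follows from setting the derivative of $\frac{kF_k}{6\Pi^2\pi a}+\frac{12ak}{\pi}$ with respect to $a$ to zero; a quick numerical cross-check (illustrative parameter values):

```python
import math

def constant_part(a, k, F, Pi):
    # constant-in-eta contribution of mode k to <Lambda_0'> in Eq. (first)
    return k * F / (6.0 * Pi**2 * math.pi * a) + 12.0 * a * k / math.pi

Pi, k = 1.0, 5.0
gamma = 0.5772156649015329                 # Euler constant
F = (math.pi**2 / 8 + gamma**2 / 2
     + 0.5 * math.log(k / (24 * Pi))**2 + gamma * math.log(k / (24 * Pi)))
astar = math.sqrt(F / 2.0) / (6.0 * Pi)    # claimed minimizer

# scan a grid around the claimed minimizer and locate the numerical minimum
grid = [astar * (0.5 + 0.001 * i) for i in range(1001)]
best = min(grid, key=lambda a: constant_part(a, k, F, Pi))
assert abs(best - astar) < 0.01 * astar
```

The minimized constant contribution equals $\frac{2\sqrt{2}\,k\sqrt{F_k}}{\pi\Pi}=\frac{k}{\Pi}\sqrt{1+\frac{4}{\pi^2}\left(\gamma+\ln\frac{k}{24\Pi}\right)^2}$, which is exactly the coefficient appearing in Eq. (\ref{first1}).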
There is no need for ``matter creation from nothing'' in the quasi-Heisenberg picture, because matter exists primordially. Let us briefly discuss the vacuum energy and its relation to the singularity. Before regularization, the expressions for the mean values of $\Lambda_0^\prime$ and $\Lambda_0^{\prime\prime}$ are singular. Regularization of the influence of quantized gravitational waves on a background has been considered in Refs. \cite{Berger1982,Hussein1987}. The author of Ref. \cite{Berger1982} found that the singularity disappears; this occurs because the subtraction used in her regularization affects the classical terms. However, the author of Ref. \cite{Hussein1987} stated that the singularity still remains. His argument is that for coherent states the mean values in the classical and quantum pictures must coincide. For this purpose he took an appropriate ordering of the creation and annihilation operators in calculating the mean values. However, it should be noted that the vacuum state is a particular case of a coherent state. Thus, it is not surprising that the vacuum fluctuations do not contribute to the evolution (i.e., do not affect the singularity) according to \cite{Hussein1987}. In the previous section it was conjectured that the difference of the potential and kinetic energies has a physical meaning if one uses a UV cutoff. This comes from the fact that the difference of the potential and kinetic energies of the field oscillators gives a value of the universe acceleration compatible with observations \cite{Cherkas2007}. Thus, it seems that only the main divergence (which also exists in the Minkowski space-time) should be subtracted. \section{Outlook} As was discussed in the previous section, we cannot say with certainty whether the singularity exists or not without a fundamental theory of regularization of the vacuum energy.
However, it was found earlier that there is no vacuum energy problem in the toy two-dimensional model of a string on a curved background \cite{Cherkas2012}, because the cosmological expansion is simply the motion of the string center of mass. Fluctuations, including vacuum ones, do not affect the motion of the string center of mass, i.e., the cosmological expansion. Mathematically, this looks like a compensation of the scale factor fluctuations by the fluctuations of the matter fields \cite{Cherkas2012}. On the other hand, in GR there exists the Isaacson theorem \cite{Isaacson1968}, which states that the evolution in the mean is determined by the energy-momentum tensor of the excitations (perturbations). Thus, in theories for which the Isaacson theorem is valid, the vacuum energy problem emerges. Roughly speaking, since the Isaacson theorem does not distinguish the vacuum fluctuations from the excitations above the vacuum, the vacuum fluctuations contribute to the mean evolution. Theories in which the Isaacson theorem does not hold, and which are thus capable of solving the vacuum energy problem, lie beyond the GR framework. One may assume that a quantum version of the Isaacson theorem should be developed for GR to distinguish vacuum from non-vacuum fluctuations. Also, it seems important to investigate the connection of the Isaacson theorem with the conformal invariance of gravity theories\footnote{A recent interesting example of a conformally invariant theory of gravity has been developed in Refs. \cite{Gomes2011,Smolin2014}.}. To summarize, as was shown in section \ref{sect2}, it is possible to describe the universe evolution by defining a wave packet at the singularity, regardless of the regularization procedure.
It should be emphasized that the wave packet determined at the singularity is not only an ``informational seed'' but is also responsible for a part of the matter in the universe, because gravitons (and, in the general case, quanta of the matter fields) appear inevitably during the late-time evolution. It would be interesting to consider a quantum picture of the general 3+1 BKL solution \cite{Belinsky1970} in the framework of the quasi-Heisenberg picture, including the construction of the corresponding wave packet at the singularity. This work is in progress \cite{Cherkas2014}.
\section{Introduction} According to the Cologne Database for Molecular Spectroscopy \citep{mull01,mull05}, around $180$ molecules have been detected in the interstellar medium (ISM) or in circumstellar shells. Among these, several species are organic in nature. The existence of these complex molecules could not be explained without a proper consideration of interstellar dust \citep{hase92,das10,boog04,gibb04}. Several attempts were made over the past few years to study physical and chemical processes on interstellar grains \citep{chak06a,chak06b,cupp07,das08b,das10,das13b,sahu15}. Several experimental results have been reported explaining the importance of various surface processes on interstellar dust \citep{iopp08,ober09}, making it imperative to incorporate grain surface chemistry extensively while predicting molecular abundances. The problem of the origin of life is a long-standing puzzle, and the formation of amino acids in the laboratory in the well-known work of \citet{mille53} ushered in a new direction of research in this area. More recently, \citet{chak00a,chak00b} suggested for the first time that the process of formation of complex molecules such as adenine and other constituents of DNA is perhaps very generic, and that such complex pre-biotic molecules should be formed during any star forming process. In the absence of proper reaction cross sections, \citet{chak00a} used neutral-neutral reaction rates to compute the adenine abundance via successive addition of $\mathrm{HCN}$. This was later improved upon with more realistic cross sections \citep{chak00b}, and a more realistic abundance was obtained. The presence of interstellar grains was not explicitly included in these works; it was incorporated indirectly through a higher rate coefficient for H$_2$ formation. A follow-up study by \citet{maju12} explicitly considered the presence of grains and showed that this significantly alters the abundance of adenine.
They considered several prescriptions for the rate coefficients of the successive $\mathrm{HCN}$ addition reactions. They also considered the radical-radical/radical-molecular reactions proposed by \citet{gupt11} for the formation of adenine. Their results suggest that radical-radical/radical-molecular reactions dominate over the neutral-neutral reactions. Recently, \citet{merz14} used the concept of retrosynthetic analysis to produce interstellar adenine from observed interstellar molecules such as $\mathrm{C_3NH}$, $\mathrm{HNCNH}$ and its isomer $\mathrm{H_2NCN}$. They used the MP2/6-311++G(2d,2p) method to calculate various chemical parameters involved in a six-step mechanism. This motivated us to perform a comparative study among the various available pathways for the formation of adenine in the interstellar region, which we present below. The plan of this paper is the following. In Section 2, computational methods are discussed. Results are presented in Section 3, and finally in Section 4, we draw our conclusions.
\begin{table*} \centering \vbox{ \scriptsize{ \caption{Available and estimated rate coefficients for the formation of interstellar adenine in the gas phase via various reaction pathways} \begin{tabular}{|c|c|c|} \hline {\bf Pathways used in}& {\bf Reactions}&{\bf Rate coefficients} \\ \hline \hline &(i)$\mathrm{HCN+HCN \rightarrow CH(NH)CN}$& $8.38 \times 10^{-20} $cm$^3$ sec$^{-1}$\\ {\bf \citet{chak00a,chak00b},}&(ii)$\mathrm{CH(NH)CN+HCN\rightarrow NH_2CH(CN)_2}$& $3.43 \times 10^{-12}$ cm$^3$ sec$^{-1}$\\ {\bf \citet{maju12}}&(iii)$\mathrm{NH_2CH(CN)_2+HCN \rightarrow NH_2(CN)C=C(CN)NH_2} $& $3.30 \times 10^{-15} $cm$^3$ sec$^{-1}$\\ &(iv)$\mathrm{NH_2(CN)C=C(CN)NH_2+HCN \rightarrow C_5H_5N_5}$& $3.99 \times 10^{-10} $cm$^3$ sec$^{-1}$\\ \hline &(v)$\mathrm{HCCN+HCN \rightarrow Molecule \ 1}$& $ 2.13 \times 10^{-17} $cm$^3$ sec$^{-1}$\\ &(vi)$\mathrm{Molecule \ 1 +H \rightarrow Molecule \ 2}$ & $7.96 \times 10^{-9}$ cm$^3$ sec$^{-1}$\\ &(vii)$\mathrm{Molecule \ 2 +NH_2CN \rightarrow Molecule \ 3}$ & $6.24 \times 10^{-12}$ cm$^3$ sec$^{-1}$\\ {\bf \citet{gupt11}}&(viii)$\mathrm{Molecule \ 3 + CN \rightarrow Molecule \ 4}$ & $1.80 \times 10^{-9} $cm$^3$ sec$^{-1}$\\ &(ix)$\mathrm{Molecule \ 4 +H \rightarrow Molecule \ 5}$& $8.71 \times 10^{-9} $cm$^3$ sec$^{-1}$\\ &(x)$\mathrm{Molecule \ 5 + CN \rightarrow C_5H_5N_5+HNC}$& $1.89 \times 10^{-9} $cm$^3$ sec$^{-1}$\\ &(xi)$\mathrm{Molecule \ 5 + CN \rightarrow C_5H_5N_5+HCN}$& $1.91 \times 10^{-9} $cm$^3$ sec$^{-1}$\\ \hline &(xii)$\mathrm{C_3NH+HNCNH \rightarrow C_4N_3H_3}$& $8.30 \times 10^{-20} $cm$^3$ sec$^{-1}$\\ &(xiii)$\mathrm{C_4N_3H_3+HNCNH \rightarrow C_4H_3N_3 + NH_2CN}$& $6.43 \times 10^{-22} $cm$^3$ sec$^{-1}$\\ {\bf \citet{merz14}}&(xiv)$\mathrm{C_4H_3N_3+HNCNH \rightarrow C_5N_5H_5}$& $1.36 \times 10^{-9} $sec$^{-1}$ \\ &(xv)$\mathrm{C_5N_5H_5+HNCNH \rightarrow N_5C_5H_5 + NH_2CN}$& $1.00 \times 10^{-9} $cm$^3$ sec$^{-1}$\\ &(xvi)$\mathrm{N_5C_5H_5+HNCNH \rightarrow C_5H_5N_5 + NH_2CN}$& $1.89
\times 10^{-13} $sec$^{-1}$\\ \hline \end{tabular}}} \end{table*} \section{Methods and Computational Details} \subsection{Chemical modeling} \begin {figure} \centering \includegraphics[height=8cm,width=8cm]{compare_adenine_path.eps} \caption{\scriptsize Comparison of the results of various adenine formation pathways. The pathway proposed by \citet{merz14} dominates over all other pathways. } \label{fig-1} \end {figure} We develop a large gas-grain chemical network to explore the chemical evolution of interstellar adenine and its related species. For the gas-phase chemical network, we follow the UMIST 2006 database. Formation of adenine via the various reaction pathways used in \citet{chak00a,chak00b}, \citet{gupt11} and \citet{merz14} is included. Our gas phase chemical network consists of more than $6000$ reactions between $650$ species. For the grain surface reaction network, we follow \citet{hase92,cupp07,das10,das11,das13b,das15}. Our surface reaction network contains $292$ reactions. We consider gas-grain interaction to frame the actual interstellar scenario. Gas phase species are allowed to accrete on interstellar grains. Grains actually act as a catalyst for the formation of several complex interstellar species. Formation of the simplest and most abundant interstellar molecule, $\mathrm{H_2}$, cannot be explained without the consideration of interstellar grain chemistry. Binding energies of the surface species mainly dictate the chemical composition of the interstellar grain mantle. We consider the most updated interaction barrier energies as mentioned in \citet{das13b} and references therein. Depending on the energy barriers, surface species move throughout the grain surface by thermal hopping or tunneling, whichever is faster. At low temperatures, for lighter species such as the H atom, tunneling is much faster. During these movements, surface species react with any suitable reactant partners.
Based on their energy barriers, surface species also thermally evaporate \citep{hase92} and populate the gas phase. Cosmic ray induced evaporation \citep{hase93} and non-thermal desorption \citep{garr07,das15} mechanisms are also considered in our model. So, in brief, gas and grains interact with each other to exchange their chemical components by various means. Since all these processes are random, a Monte Carlo method would be appropriate for dealing with this randomness. However, it requires huge computational time to treat our vast gas phase and grain phase chemical networks simultaneously. Thus, we use the traditional rate equation method to handle our chemical network on grains. Detailed discussions of our gas-grain chemical model are already presented in \citet{das13b,das15,maju14a,maju14b}. All the reactions considered here for the formation of adenine are shown in Table 1. \citet{maju12,maju13} considered the neutral-neutral pathways (reaction numbers (i)-(iv) of Table 1) of \citet{chak00a,chak00b} and the radical-radical/radical-molecular pathways (reaction numbers (v)-(xi)) of \citet{gupt11} and concluded that the production of adenine is dominated by the radical-radical/radical-molecular reaction pathways. They identified $\mathrm{HCCN}$ and $\mathrm{NH_2CN}$ as the two main precursor molecules responsible for the production of adenine. $\mathrm{HCCN}$ is highly abundant in interstellar space \citep{jiur06,guel91}. Formation of $\mathrm{HCCN}$ on interstellar grains was also considered \citep{hase92}. \citet{mcgo96} conducted a deep search for $\mathrm{HCCN}$ towards TMC-1 and several GMCs via its $N(J)=1(2) \rightarrow 0(1)$ transition. They came up with an upper limit of the fractional abundance with respect to molecular hydrogen of $\sim 2 \times 10^{-10}$. Existence of $\mathrm{NH_2CN}$ in the interstellar cloud was first reported by \citet{turn75}.
Subsequently, it was observed in both diffuse and dense clouds by \citet{lisz01}. \citet{wood07} predicted a steady state fractional abundance of $2.02 \times 10^{-10}$ for $\mathrm{NH_2CN}$ with respect to $\mathrm{H_2}$. \citet{merz14} proposed a new pathway (reaction nos. (xii)-(xvi) of Table 1) for the production of adenine. They used retrosynthetic analysis based on two new species ($\mathrm{C_3NH}$ and $\mathrm{HNCNH}$). Carbodiimide ($\mathrm{HNCNH}$) is an isomer of cyanamide ($\mathrm{NH_2CN}$). \citet{mcgu13} obtained the abundance of this molecule from ice mantle experiments. They proposed that tautomerization of $\mathrm{NH_2CN}$ on dust grain ice mantles is the dominant formation pathway for $\mathrm{HNCNH}$. They found that its abundance would be $\sim 10\%$ of that of $\mathrm{NH_2CN}$ (in Sgr B2(N) the column density of $\mathrm{NH_2CN}$ is $\sim 2 \times 10^{13}$ cm$^{-2}$). Since this was below the detection limit of any current astronomical facility, they proposed to observe $\mathrm{HNCNH}$ through those transitions which are amplified by masing. In our chemical model, we consider that $10\%$ of $\mathrm{NH_2CN}$ could be converted into $\mathrm{HNCNH}$. We use the semi-empirical relationship developed by \citet{bate83} for the computation of the rate coefficients of the exothermic and barrierless reaction pathways (reaction nos. xii-xiv) of \citet{merz14}: \begin{equation} K = 1 \times 10^{-21} A_r (6E_0 + N -2)^{(3N-7)}/ (3N-7)! \ \mathrm{cm^3 \ s^{-1}}, \end{equation} where $E_0$ is the association energy in eV, $A_r$ is the transition probability (in s$^{-1}$) of the stabilizing transition (the numerical value of which may be taken to be $100$ unless better information is available) and $N$ is the number of nuclei in the reactants. For reaction numbers xii, xiii \& xiv, we use association energies of $-5.2, \ -0.6, \, -72.8$ kcal/mol respectively. If the calculated rate coefficient from Eqn. 1 exceeds the limit set by the following equation (Eqn.
2), then this limiting value is adopted: \begin{equation} K = 7.41 \times 10^{-10} \alpha^{1/2}(10/\mu)^{1/2} \ \mathrm{cm^3 \ s^{-1}}, \end{equation} where $\alpha$ is the polarizability in \AA$^3$ and $\mu$ is the reduced mass of the reactants on the $^{12}$C amu scale, as suggested by \citet{bate83}. Rate coefficients of the reactions having activation barriers (reaction nos. xv-xvi) are calculated by using conventional transition state theory. According to this theory, the rate coefficient has the following form: \begin{equation} k(T) = (k_B T /h C_0 )\exp(-\Delta G/RT ) \ \mathrm{s^{-1}}, \end{equation} where $k_B$ is the Boltzmann constant, $h$ is Planck's constant, $T$ is the temperature, $C_0$ is the concentration (set to 1, following \citet{jalb08}), $R$ is the ideal gas constant, and $\Delta G$ is the free energy of activation. From the quantum chemical calculations of \citet{merz14}, we have $\Delta G =+0.4$ kcal/mol and $+1.1$ kcal/mol for reaction numbers xv and xvi, respectively. \begin{table*} {\centering \scriptsize \caption{Initial elemental abundances} \begin{tabular}{|c|c|} \hline {\bf Species}&{\bf Abundance}\\ \hline \hline H$_2$ & $5.00 \times 10^{-01}$\\ He & $1.00 \times 10^{-01}$\\ N & $2.14 \times 10^{-05}$\\ O & $1.76 \times 10^{-04}$\\ H$_3$$^+$& $1.00 \times 10^{-11}$\\ C$^+$ & $7.30 \times 10^{-05}$\\ S$^+$ & $8.00 \times 10^{-08}$\\ Si$^+$& $8.00 \times 10^{-09}$\\ Fe$^+$& $3.00 \times 10^{-09}$\\ Na$^+$& $2.00 \times 10^{-09}$\\ Mg$^+$& $7.00 \times 10^{-09}$\\ P$^+$ & $3.00 \times 10^{-09}$\\ Cl$^+$& $4.00 \times 10^{-09}$\\ e$^-$ & $7.31 \times 10^{-05}$\\ HD & $ 1.6 \times 10^{-05}$\\ \hline \end{tabular}} \end{table*} \subsection{Spectroscopic Modeling} An educated estimate of the spectral properties is essential before observing any unidentified species. It has been reported in the earlier astrophysical literature that DFT and TDDFT \citep{rung84} computational methods can be applied to various astrophysical problems \citep{pule10,piev14}.
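As a numerical cross-check of the rate expressions of Eqns. 1 and 3, the short sketch below evaluates them for the \citet{merz14} pathway; it is an illustration only, not the production code of our chemical model, and the function names are ours. The Bates rates for reactions xii and xiii reproduce the Table 1 values with $|E_0| = 5.2$ and $0.6$ kcal/mol, $N = 10$ and $15$ nuclei and $A_r = 100$ s$^{-1}$; for Eqn. 3 the evaluation temperature ($298.15$ K) is an assumption, since the temperature used for the tabulated TST rates is not stated here.

```python
import math

EV_PER_KCAL_MOL = 1.0 / 23.0605  # 1 kcal/mol expressed in eV

def bates_rate(E0_kcal, N, Ar=100.0):
    """Eqn. 1 (Bates 1983): radiative association rate in cm^3 s^-1.
    E0_kcal: |association energy| in kcal/mol; N: number of nuclei in
    the reactants; Ar: transition probability in s^-1 (default 100)."""
    E0_eV = E0_kcal * EV_PER_KCAL_MOL
    p = 3 * N - 7
    return 1.0e-21 * Ar * (6.0 * E0_eV + N - 2) ** p / math.factorial(p)

def tst_rate(dG_kcal, T):
    """Eqn. 3: conventional transition-state-theory rate in s^-1 (C0 = 1)."""
    k_B = 1.380649e-23        # J/K
    h = 6.62607015e-34        # J s
    R = 1.98720425864083e-3   # kcal mol^-1 K^-1
    return (k_B * T / h) * math.exp(-dG_kcal / (R * T))

# Reaction xii (C3NH + HNCNH): |E0| = 5.2 kcal/mol, N = 5 + 5 = 10 nuclei
k_xii = bates_rate(5.2, 10)    # ~8.3e-20 cm^3 s^-1, as in Table 1

# Reaction xiii: |E0| = 0.6 kcal/mol, N = 10 + 5 = 15 nuclei
k_xiii = bates_rate(0.6, 15)   # ~6.4e-22 cm^3 s^-1, as in Table 1

# Eqn. 3 for reaction xv (dG = +0.4 kcal/mol) at an assumed T = 298.15 K
k_xv_room = tst_rate(0.4, 298.15)
```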
\citet{das15} discussed the necessity of quantum chemical calculations prior to any spectroscopic survey. Since our chemical model predicts the abundances of adenine and its related species, we believe it would be useful to present various spectral aspects of these species. To find the various spectral aspects, we perform quantum chemical calculations by using the Gaussian 09W program. Following \citet{maju14b}, for the vibrational frequencies we use the B3LYP/6-311++G(d,p) level of theory. In the case of grain phase species, the Polarizable Continuum Model (PCM) is used with the integral equation formalism variant (IEFPCM) as the default SCRF method. For the electronic absorption spectrum, we use time dependent density functional theory (TDDFT). Since most of the gas phase complex molecules were observed through their rotational transitions in the mm or sub-mm regime, we carry out quantum chemical calculations to find the rotational transitions of our desired species. Computation of the anharmonic frequencies requires the use of analytic second derivatives of the energies at the displaced geometries. But the CCSD method in the Gaussian 09W program only implements energies, so analytic second derivatives of the energies are not available at this level of theory. Hence there is no option in the Gaussian 09W program to compute the rotational and distortional constants at the CCSD level of theory. Here, we use the B3LYP/aug-cc-pVTZ level of theory, which has also proven to be very accurate \citep{carl13,das15}. This level of theory should be appropriate because there are some earlier studies (such as \citet{brun06}) for finding the rotational and distortional constants of uracil (one nucleobase of RNA, besides adenine, cytosine and guanine). Corrections for the interaction between the rotational and vibrational motions, along with corrections for vibrational averaging and anharmonic corrections to the vibrational motion, are also considered in our calculations.
The obtained rotational and distortional constants are then used in the SPCAT \citep{pick91} program to predict various rotational transitions. The output of the SPCAT program is then directly used in the ASCP program \citep{kisi98,kisi00} to find the rotational stick diagram of the desired species. \section{Results and Discussion} \subsection{Results of Chemical modeling} \begin {figure} \centering \includegraphics[height=8cm,width=8cm]{shu.eps} \caption{\scriptsize Radial distribution of the density profile.} \label{fig-2} \end {figure} \citet{chak00a,chak00b,maju12} considered successive $\mathrm{HCN}$ addition for the formation of adenine. \citet{gupt11} proposed adenine formation pathways starting with $\mathrm{HCCN}$ and $\mathrm{HCN}$. \citet{gupt11} also pointed out that their reaction pathways would also be feasible in the grain phase. A completely new pathway has been proposed recently by \citet{merz14}, where adenine could be produced without any $\mathrm{HCN}$ addition. So, we wish to compare four different pathways: the first one is the gas phase pathway of \citet{gupt11}, the second one is the pathway considered by \citet{chak00a,chak00b} and updated by \citet{maju12}, and the third one is the gas phase pathway proposed by \citet{merz14}. As a fourth method, we consider both the gas and grain phase pathways of \citet{gupt11}. Since no mention was made of grain phase reactions in \citet{merz14} and \citet{chak00a,chak00b}, we do not consider these pathways in the grain phase. In general, we consider that the gas and the grains interact with each other to exchange their chemical components. So, the grain phase could be populated by accretion from the gas phase, and the gas phase could be populated by the grain phase species through evaporation mechanisms such as thermal desorption, non-thermal desorption and cosmic-ray induced desorption.
Initial elemental abundances with respect to total hydrogen nuclei (Table 2) are taken to be similar to those in \citet{das13b} and \citet{maju14b}. These are the typical low-metallicity abundances which are often adopted for the TMC-1 cloud. In Fig. 1, a comparative study between these four pathways is shown for the chemical evolution of adenine. To draw this, we take the number density of hydrogen nuclei existing in all possible forms ($n_H$) to be $10^4$ cm$^{-3}$, $A_V=10$ and $T=10$ K. Various evaporation mechanisms are considered here; among them, at low temperatures, the non-thermal desorption mechanism \citep{garr07,das15} is the most efficient means of transferring these complex molecules to the gas phase. A moderate value of the desorption parameter ($\sim 0.05$) is considered. A significant difference in the abundance of adenine could be observed between the two considerations of \citet{gupt11}. In one case, only the gas phase pathways of \citet{gupt11} are considered, and in the other case, the same pathways are considered for both phases (gas and grain). From Fig. 1, we find that the peak abundances of adenine for these two cases appear to be $5.3 \times 10^{-22}$ and $1.5 \times 10^{-17}$ respectively. Since grain phase production is efficient, production is significantly enhanced in the case when the grain phase pathways are also considered. While using the pathways of \citet{chak00a,chak00b}, the peak abundance of adenine turns out to be $4.8 \times 10^{-22}$, and using the pathways proposed by \citet{merz14}, the peak abundance becomes $2.5 \times 10^{-14}$. Using neutral-neutral rate coefficients in an isothermal cloud, \citet{chak00a} found an abundance of $\sim 5 \times 10^{-13}$, one order of magnitude higher than this. It is also clear from Fig. 1 that the adenine formation pathway of \citet{merz14} dominates over all other pathways. In Fig.
1, we also show the chemical evolution of $\mathrm{HNCNH}$ and $\mathrm{C_3NH}$ (which are required for the formation of adenine in the pathways proposed by \citet{merz14}). These molecules could be used as precursors for estimating the abundance of adenine in the interstellar region. \subsubsection{Mixing model} The chemical complexity of any interstellar region is highly dependent on the surrounding physical conditions. The formation of bio-molecules would also be influenced by the surrounding physical processes. To describe a realistic situation, we prepare a special model named the ``Mixing model''. We consider that the density profile follows a $\rho \sim r^{-2}$ distribution as described by \citet{shu77}. At the outer boundary of the cloud the density is $10^4$ cm$^{-3}$, and the location of the inner boundary is chosen in such a way that the density there becomes $10^8$ cm$^{-3}$ (Fig. 2). For concreteness, we choose the outer boundary to be at $15962$ AU, so that the inner boundary is at $159.62$ AU. To reduce the computational time we further subdivide this cloud into four shells; the innermost shell extends from $159.62$ AU to $504.76$ AU and has an average density of $2.12 \times 10^7$ cm$^{-3}$. Shell no. $2$ has an average density of $2.12 \times 10^6$ cm$^{-3}$ and extends from $504.76$ AU to $1596.2$ AU. Shell no. $3$ has an average density of $2.12 \times 10^5$ cm$^{-3}$ and extends from $1596.2$ AU to $5047.63$ AU. The fourth shell extends from $5047.63$ AU to $15962$ AU and has an average density of $2.12 \times 10^4$ cm$^{-3}$. Since the cloud is dense, we assume a visual extinction ($A_V$) of $10$ and a temperature of $10$ K throughout the cloud. \begin {figure} \centering \includegraphics[height=7cm,width=7cm]{adenine_depth.eps} \caption{\scriptsize Radial distribution of adenine when the mixing and no-mixing models are considered.
Due to heavy mixing, the final and peak abundances (dashed lines) become similar.} \label{fig-3} \end {figure} We assume that a systematic mixing is going on throughout the cloud. Matter of shell 4 would contribute ($p_4 \%$ of the shell 4 matter) to shell 3, matter of shell 3 would contribute ($p_3 \%$ of the shell 3 matter) to shell 2, matter of shell 2 would contribute ($p_2 \%$ of the shell 2 matter) to shell 1 and, due to re-entry of the outflow at the outer edge, some matter would re-enter ($p_1 \%$ of the shell 1 matter) into shell 4. Each transport should have a different time scale. Inward material transfer (from shell 4 $\rightarrow$ shell 3 $\rightarrow$ shell 2 $\rightarrow$ shell 1) is assumed to take place with the sound speed ($\sqrt{\gamma k_B T/m_p}\sim 2.873 \times 10^4$ cm/s for $T=10$ K). Since shell 4 is $10914$ AU thick, a sound wave will take $5.68 \times 10^{12}$ s ($t_4$) to cross it. We assume that after every $t_4$ seconds, some percentage of the matter moves to shell 3. Similarly, we calculate the time scales for transferring some fraction of matter from shell 3 to shell 2 ($t_3=1.80 \times 10^{12}$ s) and from shell 2 to shell 1 ($t_2=5.68 \times 10^{11}$ s). Due to the outflow, some fraction of the matter of shell 1 would contribute to shell 4. The time scale for the outflow ($t_1$) is chosen by considering the free fall time scale ($\sqrt{3\pi/32G\rho}$), which is inversely proportional to the square root of the density. Here, we choose a typical dense cloud number density ($10^4$ cm$^{-3}$) for the calculation of the free fall time, so in our case we take $t_1=1.62 \times 10^{13}$ s. For simplicity, we assume that $p_4=p_3=p_2=p_1=20\%$. In Fig. 3, the depth dependence of the peak and final adenine abundances is shown for the non-mixing and mixing cases with the density profile of Fig. 2. Abundances are expressed with respect to the total hydrogen nuclei in all forms. For each shell, we carry out our simulation up to $2 \times 10^6$ yr.
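The shell geometry and transport time scales above can be reproduced with a short numerical sketch (an illustrative check, not part of the model code itself). Two assumptions are made explicit here: the quoted sound speed $2.873\times10^4$ cm/s corresponds to $\sqrt{k_B T/m_p}$ at $T=10$ K (effectively $\gamma=1$), and the free-fall time uses $\rho = n_H m_p$ with $n_H=10^4$ cm$^{-3}$.

```python
import math

# CGS constants
k_B = 1.380649e-16   # erg/K
m_p = 1.67262e-24    # g
G = 6.674e-8         # cm^3 g^-1 s^-2
AU = 1.496e13        # cm

T = 10.0  # K
# The quoted 2.873e4 cm/s matches sqrt(k_B*T/m_p), i.e. gamma = 1
c_s = math.sqrt(k_B * T / m_p)

# Shell boundaries in AU: inner edge, 1|2, 2|3, 3|4, outer edge
r = [159.62, 504.76, 1596.2, 5047.63, 15962.0]

# Shell-averaged densities of the n ~ r^-2 profile normalized to
# 1e4 cm^-3 at the outer boundary: <n> = 3*C*(r2-r1)/(r2^3-r1^3)
C = 1.0e4 * r[-1] ** 2
n_avg = [3.0 * C * (r[i + 1] - r[i]) / (r[i + 1] ** 3 - r[i] ** 3)
         for i in range(4)]   # ~2.12e7, 2.12e6, 2.12e5, 2.12e4 cm^-3

# Sound-crossing times of shells 4, 3 and 2
t4 = (r[4] - r[3]) * AU / c_s   # ~5.68e12 s
t3 = (r[3] - r[2]) * AU / c_s   # ~1.80e12 s
t2 = (r[2] - r[1]) * AU / c_s   # ~5.68e11 s

# Free-fall (outflow) time for n_H = 1e4 cm^-3, rho = n_H * m_p
rho = 1.0e4 * m_p
t1 = math.sqrt(3.0 * math.pi / (32.0 * G * rho))   # ~1.62e13 s
```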
By the peak abundance, we mean the peak value obtained during the simulation regime, and by the final abundance, we mean the value obtained at the end of the simulation time scale. Since it is clear from Fig. 1 that the pathway of \citet{merz14} dominates over all other existing pathways for the production of adenine in the interstellar region, for Fig. 3 we considered only the pathways of \citet{merz14}. Clearly, due to the mixing of matter among the various shells, the final abundance of adenine is severely affected in the mixing model. The peak abundance of adenine decreases inside the cloud. The reason is that, as we penetrate into the cloud, the density increases and this results in a shorter depletion (to the next shell) time scale. Due to this heavy depletion, all related species are depleted from the gas phase. As a result, the peak value of adenine is reduced. However, the final value goes up, and indeed the final and peak values become similar due to the significant mixing. \subsection{Results of Spectroscopic modeling} \subsubsection{Vibrational transitions} $\mathrm{HNCNH}$ is one of the isomers of $\mathrm{NH_2CN}$, which is a planar pentatomic molecule. It has nine fundamental vibrations \citep{rubl56}. Similar to $\mathrm{NH_2CN}$, $\mathrm{HNCNH}$ also has nine fundamental vibrational lines, as shown in Table 3. In Figs. 4a-c, the infrared absorption spectra of $\mathrm{HNCNH}$ in the gas phase and the grain phase are shown. We note that gas phase $\mathrm{HNCNH}$ has its strongest feature at $2225.48$ cm$^{-1}$ and the second most intense peak arises at $899.71$ cm$^{-1}$. In the grain phase, the $2225.48$ cm$^{-1}$ peak is shifted to $2179.37$ cm$^{-1}$, and the peak at $899.71$ cm$^{-1}$ is shifted to $925.14$ cm$^{-1}$ with much lower intensity. Besides these intense peaks, there are several peaks which are pronounced in our grain phase model (such as the peaks at $902.61$ cm$^{-1}$ and $3564.13$ cm$^{-1}$).
For the sake of better understanding, we place all the infrared peak positions with their absorbances in Table 3. Moreover, a comparison with other theoretical/experimental vibrational transitions for $\mathrm{HNCNH}$ (Birk \& Winnewisser, 1986; King \& Strope, 1971) is also shown in Table 3. Some of our peak positions are very close to the results reported in the literature. \begin {figure} \centering \includegraphics[height=6cm,width=6cm]{HNCNH_ir.eps} \vskip 1cm \includegraphics[height=6cm,width=6cm]{c3nh_ir.eps} \vskip 1cm \includegraphics[height=6cm,width=6cm]{adenine.eps} \caption{\scriptsize Infrared spectra of $\mathrm{HNCNH}$, $\mathrm{C_3NH}$ and adenine in the gas and grain (water ice) phases.} \label{fig-2} \end {figure} In Table 3, we also note the peak positions of the infrared spectrum of $\mathrm{C_3NH}$ in the gas as well as in the grain phase. We find that the most intense mode in the gas phase appears at $2293.10$ cm$^{-1}$. This peak is shifted in the grain phase and appears at $2309.68$ cm$^{-1}$. Among the other peaks in the gas phase, those at $3718.05$ cm$^{-1}$ and $463.33$ cm$^{-1}$ have the major contributions. One strong peak at $3760.19$ cm$^{-1}$ appears in the grain phase. Fig. 4c shows the infrared spectra of adenine in the gas and grain phases. The gas phase infrared spectrum of adenine contains strong peaks at $1634.40$ cm$^{-1}$, $1656.77$ cm$^{-1}$ and $31.23$ cm$^{-1}$, and all these peaks are shifted in the grain phase, appearing at $1638.84$ cm$^{-1}$, $1621.56$ cm$^{-1}$ and $141.19$ cm$^{-1}$. Moreover, a few new strong peaks appear at $3594.23$ cm$^{-1}$, $3634$ cm$^{-1}$ and $3718.16$ cm$^{-1}$ for grain phase adenine. These features are clear in Fig. 4c. \subsubsection{Electronic transitions} We continue our computations to obtain the spectral properties of $\mathrm{C_3NH}$ and $\mathrm{HNCNH}$ in the electronic absorption mode.
The electronic absorption spectra of $\mathrm{HNCNH}$, $\mathrm{C_3NH}$ and adenine in the gas as well as in the grain phase are shown in Figs. 5a-c. The corresponding electronic transitions, absorbances and oscillator strengths are also summarized in Table 4. The electronic absorption spectrum of the $\mathrm{HNCNH}$ molecule in the gas phase is characterized by five intense peaks. These five transitions occur at $219.55$, $184.83$, $156.97$, $141.48$ and $102$ nm, with major contributions from the 1-A1$\rightarrow$1-B2, 1-A1$\rightarrow$1-B1, 1-A1$\rightarrow$1-B2, 1-A1$\rightarrow$1-B2 and 1-A1$\rightarrow$1-A2 electronic transitions. All the peaks are shifted in the grain phase, with corresponding changes in the electronic transitions, absorbances and oscillator strengths. For the $\mathrm{C_3NH}$ molecule, the electronic absorption spectra in the gas as well as in the grain phase are shown in Fig. 5b. The gas phase electronic spectrum contains peaks at $182.6$, $126.7$ and $104.04$ nm due to the $1-A' \rightarrow 1-A'$, $1-A' \rightarrow 1-A"$ and $1-A' \rightarrow 1-A'$ transitions. As in the case of $\mathrm{HNCNH}$, here also the gas phase peaks shift in the grain phase. Most interestingly, however, both gas phase and grain phase adenine contain only one strong peak in the electronic mode, as shown in Fig. 5c. The strong gas phase peak appears at $256.12$ nm with oscillator strength $0.0655$ for the $1-A\rightarrow 1-A$ electronic transition. This peak is shifted in the grain phase and appears at $252.11$ nm with oscillator strength $0.1267$ for the same electronic transition. \subsubsection{Rotational transitions} In Table 5, we summarize our calculated rotational and distortional constants, in the gas phase, of the adenine precursors $\mathrm{HNCNH}$ and $\mathrm{C_3NH}$ and of adenine itself. The calculated constants are corrected for each vibrational state as well as for the vibrationally averaged structures. Here we use the B3LYP/cc-pVTZ level to perform these calculations in the gas phase.
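As a quick consistency check of such constants, note that for an asymmetric-top rigid rotor the lowest a-type transition $1_{01}\rightarrow 0_{00}$ falls at $\nu \approx B + C$; the sketch below (an illustration with a function name of our own choosing, neglecting the centrifugal distortion terms, which are negligible at this scale) applies this to the gas-phase constants listed in Table 5.

```python
# For an asymmetric-top rigid rotor, E(0_00) = 0 and E(1_01) = B + C,
# so the lowest a-type transition 1_01 -> 0_00 lies at nu = B + C (MHz).
def nu_101_000(B_MHz, C_MHz):
    return B_MHz + C_MHz

# Gas-phase rotational constants of Table 5 (B3LYP/cc-pVTZ)
nu_hncnh = nu_101_000(10550.84447, 10334.08855)  # ~20.885 GHz
nu_c3nh = nu_101_000(4694.90368, 4680.33880)     # ~9.375 GHz
```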
In Table 5, the calculated distortional constants correspond to the I$^t$ representation with `A' reduction. Figure 6 shows the rotational stick diagrams of $\mathrm{HNCNH}$, $\mathrm{C_3NH}$ and adenine. \begin {figure} \centering \includegraphics[height=6cm,width=6cm]{HNCNH_uv.eps} \vskip 1cm \includegraphics[height=6cm,width=6cm]{c3nh_uv.eps} \vskip 1cm \includegraphics[height=6cm,width=6cm]{adenine_uv.eps} \caption{\scriptsize Electronic absorption spectra of $\mathrm{HNCNH}$, $\mathrm{C_3NH}$ and adenine in gas and grain (water ice) phases.} \label{fig-4} \end {figure} \begin{table*} \scriptsize{ \centering \vbox{ \caption{Vibrational frequencies of precursors HNCNH and C$_3$NH of adenine in gas phase and H$_2$O ice containing grains at B3LYP/6-311++G(d,p) level} \begin{tabular}{|c|c|c|c|c|c|} \hline {\bf Species}&{\bf Peak positions }&{\bf Absorbance}&{\bf Peak positions }&{\bf Absorbance}&{\bf Experiment or other}\\ {}&{\bf (Gas phase)}&{}&{\bf (water ice)}&{}&{\bf theoretical value}\\ &{\bf (Wavenumber in $cm^{-1}$)}&& {\bf (Wavenumber in $cm^{-1}$)}&& {\bf (Wavenumber in $cm^{-1}$)}\\ \hline &540.31& 79.926 & 539.48 & 141.348&$537^k$\\ & 540.65 & 0.195 & 539.60 & 0.142 &\\ & 718.17 & 107.962 & 719.18 & 140.079 &\\ &897.11 & 9.975 & 902.61 & 712.664 &890 $\pm$ 10$^b$, 886$^k$ \\ {\bf HNCNH}& 899.71 & 438.296 &925.14 & 18.842 &\\ & 1287.51 & 0.0593 & 1284.96 & 0.3371 &1285 $\pm$ 20 $^k$, 1275$^b$\\ & 2225.48 & 713.448 & 2179.37 & 1290.062& 2104.7$^b$, 2097$^k$\\ & 3596.90 & 143.912& 3564.13& 269.387 & \\ & 3599.41 & 26.106 & 3569.93 & 52.647 &\\ \hline & 180.31 & 3.324 &154.87 & 207.6288 &\\ & 190.52 & 1.491 & 187.05 & 38.291 & \\ & 463.33 & 409.108 & 188.75 & 2.815 & \\ & 592.72 & 2.186 & 563.39 & 8.437 &\\ {\bf C$_3$NH} & 596.46 & 62.223 & 572.20 & 2.130 & -\\ & 965.33 & 0.683 & 953.06 & 23.805 &\\ & 1957.85 & 27.241 & 2028.00 & 259.828 &\\ & 2293.10 & 1589.606 & 2309.68 & 2393.2106 & \\ & 3718.05 & 448.645 & 3760.19 & 1561.688 & \\ \hline \multicolumn{6}{|c|}{$^b$ Birk and Winnewisser,
1986}\\ \multicolumn{6}{|c|}{$^k$ King and Strope, 1971}\\ \hline \end{tabular}}} \end{table*} \begin{table*} \centering \addtolength{\tabcolsep}{-4pt} \scriptsize{ \vbox{ \caption{Electronic transitions of precursors HNCNH and C$_3$NH of adenine at B3LYP/6-311++g(d,p) level theory in gas phase and H$_2$O ice containing grain phase} \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline {\bf Species}&{\bf Wavelength}&{\bf Absorbance}&{\bf Oscillator strength }&{\bf Transitions} &{\bf Wave length}&{\bf Absorbance}&{\bf Oscillator strength}&{\bf Transitions}\\ {}&{(gas phase)}&{}&{}&{}&{(H$_2$O ice)}&{}&{}&{}\\ &(in nm)&&&&(in nm)&&&\\ \hline & 182.6 & 47462.573 & 0.9608& 1-A'$\rightarrow$1-A'& 191.57 & 61654.567 & 1.5193 & 1-A$\rightarrow$1-A\\ & 126.7 & 12259.477 & 0.0073 & 1-A'$\rightarrow$1-A" & 143.78 & 10621.323 & 0.0276 & 1-A$\rightarrow$1-A\\ {\bf C$_3$NH}& 104.04 & 9869.950 & 0.0216 & 1-A'$\rightarrow$1-A'& 119.07 & 11042.926 & 0.0593 & 1-A$\rightarrow$1-A\\ & - & - & -& -& 104.3 & 7174.6216 & 0.0034 & 1-A$\rightarrow$1-A\\ \hline & 219.55 & 672.854 & 0.0166 & 1-A1$\rightarrow$1-B2& 215.94 & 716.869 & 0.0177 & 1-A1$\rightarrow$1-B2\\ & 184.83 & 1043.151 & 0.0217 & 1-A1$\rightarrow$1-B1 & 159.3 & 17358.816 & 0.4126 & 1-A1$\rightarrow$1-B2\\ {\bf HNCNH}& 156.97 & 10575.579 & 0.2225 & 1-A1$\rightarrow$1-B2& 144.73 & 22602.596 & 0 & 1-A1$\rightarrow$1-A2\\ & 141.48 & 22671.816 & 0.4362 & 1-A1$\rightarrow$1-B2 & 123.29 & 8869.585 & 0.1003 & 1-A1$\rightarrow$1-A1\\ & 102 & 18576.3854 & 0 & 1-A1$\rightarrow$1-A2 & 100.94 & 14075.884 & 0.0338 & 1-A1$\rightarrow$1-B2\\ & - & - & -& -& 87.24 & 5237.201 & 0.1028 & 1-A1$\rightarrow$1-B1\\ \hline \end{tabular}} } \end{table*} \begin{table*} \scriptsize{ \centering \vbox{ \caption{Theoretical rotational parameters of adenine and its two precursors HNCNH and C$_3$NH at B3LYP/cc-pVTZ level of theory} \begin{tabular}{|c|c|c|c|c|} \hline {\bf Species}&{\bf Rotational }&{\bf Values}& {\bf Distortional }&{\bf Values} \\ &{\bf constants 
}&{\bf in MHz}& {\bf constants }&{\bf in MHz} \\ &&&& \\ \hline \hline &A& 378941.67576 & $D_J$& 0.30806$\times$10$^{-2}$ \\ {\bf HNCNH in gas phase}& B & 10550.84447 & $D_{JK}$& 0.36739$\times$10$^{0}$ \\ &C& 10334.08855 & $D_{K}$& 0.11423$\times$10$^{3}$ \\ & & & $d_1$ & $-0.89721\times$10$^{-5}$ \\ &&& $d_2$& $-0.12535\times$10$^{-4}$\\ \hline &A& 1431947.28270 & $D_J$& 0.48833$\times$10$^{-3}$ \\ {\bf C$_3$NH in gas phase}& B & 4694.90368 & $D_{JK}$& -0.19558$\times$10$^{0}$ \\ &C& 4680.33880 & $D_{K}$& 0.41486$\times$10$^{5}$ \\ & & & $d_1$ & $-0.20704\times$10$^{-5}$ \\ &&& $d_2$& $0.29323\times$10$^{-6}$\\ \hline &A& 2387.05922 & $D_J$& 0.19731$\times$10$^{-4}$ \\ {\bf C$_5$H$_5$N$_5$ in gas phase}& B &1575.31931 & $D_{JK}$& 0.14017$\times$10$^{-3}$ \\ &C& 949.02096 & $D_{K}$& 0.10519$\times$10$^{-4}$ \\ & & & $d_1$ & $-0.10350 \times$10$^{-4}$ \\ &&& $d_2$& $ -0.49530 \times$10$^{-5}$\\ \hline \end{tabular}}} \end{table*} \begin{figure} \vskip 2cm \centering \includegraphics[height=7cm,width=10cm]{ascp_adenine.eps} \caption{\scriptsize Rotational stick diagrams of $\mathrm{HNCNH}$, $\mathrm{C_3NH}$ and adenine.} \label{fig-7} \end{figure} \section{Concluding Remarks} The study of interstellar chemistry leading to complex bio-molecules began in this century, and the first quantitative computation of adenine and other pre-biotic molecules was presented by \citet{chak00a,chak00b}. A follow-up study by \citet{maju12} showed that the formation of adenine mainly follows radical-radical/radical-molecular reaction pathways, as proposed by \citet{gupt11}. More recently, \citet{merz14} performed a retrosynthetic analysis to find a new mechanism for adenine formation in the gas phase. They proposed that adenine could be formed from three already known interstellar molecules, namely, $\mathrm{C_3NH}$ and the isomers $\mathrm{HNCNH}$ and $\mathrm{H_2NCN}$.
The uniqueness of this pathway is that it does not involve $\mathrm{HCN}$, water or ammonia in any of the six intermediate steps of adenine production. In this paper, we carried out a comparative study of the various formation pathways of adenine, considering all the pathways used in \citet{chak00a,chak00b,gupt11,merz14}. Our chemical model suggests that the adenine formation pathway proposed by \citet{merz14} dominates over all other existing pathways. We also studied the effects of mixing caused by both advection and outflows. We find that for adenine, though the peak abundance is severely reduced, the final abundance becomes higher than in the non-mixing (static evolution) case. We explored the infrared, electronic, and sub-millimeter spectroscopy of adenine along with its two new precursor molecules ($\mathrm{HNCNH}$ and $\mathrm{C_3NH}$). We show that adenine will have one single strong line in either the gas-phase or the grain-phase vibrational spectra. The detailed chemical and spectroscopic information about adenine and its precursors revealed by our analysis would be extremely helpful for a future survey of this species in the interstellar medium. In case adenine is not directly detected, the detection of $\mathrm{HNCNH}$, $\mathrm{H_2NCN}$ and $\mathrm{C_3NH}$ would give adequate reason to believe that adenine is also present. \section{Acknowledgments} AD and SKC are grateful to ISRO RESPOND (Grant No. ISRO/RES/2/372/11-12) and DST (Grant No. SB/S2/HEP-021/2013) for financial support. LM thanks MOES for funding during this work. \clearpage
\section{Introduction} \label{sec:intro} In the absence of a convincing theory to explain the origin of the lepton flavor structure, different approaches have been pursued to address this question. Among them, the imposition of texture zeros in the lepton mass matrices has been quite popular. The reason is two-fold. The vanishing of some matrix elements obviously reduces the number of free parameters, thus increasing, in some cases, the predictive power of the flavor patterns. Furthermore, texture zeros can naturally appear in theories with an extended scalar sector in the presence of Abelian symmetries~\cite{Grimus:2004hf,Felipe:2014vka}. Thus, the study of the phenomenological implications of lepton mass matrices with vanishing elements is well motivated on theoretical grounds. Over the last few years, our knowledge of neutrino masses and leptonic mixing has been enriched thanks to the data accumulated from several solar, atmospheric, reactor and accelerator neutrino experiments~\cite{Capozzi:2013csa,Forero:2014bxa,Gonzalez-Garcia:2014bfa}, as well as to cosmological observations~\cite{Planck:2015xua}. Furthermore, an improved sensitivity to the Dirac CP phase has emerged from the complementarity of accelerator and reactor neutrino data. It is conceivable that leptonic CP violation will be observed in current and next-generation neutrino oscillation experiments, which makes the search for such effects one of the main goals of future research in neutrino physics~\cite{Branco:2011zb}. It has been known for some time that, in the flavor basis where the charged lepton mass matrix is diagonal, neutrino mass matrices with more than two independent zero entries are not compatible with neutrino oscillation data, while seven patterns with two zeros are viable, as shown by Frampton, Glashow and Marfatia (FGM) in Ref.~\cite{Frampton:2002yf}.
The latter contain four complex parameters, from which nine physical quantities should be determined (three neutrino masses, three mixing angles, one Dirac CP phase and two Majorana phases), assuming that light neutrinos are Majorana particles. More recently, the aforementioned two-zero textures have been scrutinized (see e.g. Refs.~\cite{Fritzsch:2011qv,Meloni:2012sx,Meloni:2014yea}). Other predictive textures can be envisaged as well in the flavor basis. The so-called hybrid textures~\cite{Kaneko:2005yz}, having one texture zero and two equal nonzero elements, contain the same number of physical parameters as the FGM textures. A systematic analysis of such hybrid textures has been presented in Ref.~\cite{Liu:2013oxa}, in which the authors concluded that 39 patterns for Majorana neutrinos are compatible with current neutrino oscillation data at the $3\sigma$ confidence level (C.L.). Restrictive patterns for the lepton mass matrices can also be constructed when the charged lepton mass matrix is not diagonal. For instance, one can consider scenarios in which both matrices exhibit a ``parallel'' structure~\cite{Branco:1999nb,Branco:2007nn} with the vanishing matrix elements located at the same positions~\cite{Nishiura:1999yt,Xing:2002sb} (see also Ref.~\cite{Gupta:2012dma} and references therein). Recently, a detailed survey of texture zeros in lepton mass matrices has been performed, for both Dirac and Majorana neutrinos, considering parallel and non-parallel matrix structures~\cite{Ludl:2014axa}. In the latter study, however, the Dirac phase was not included in the numerical $\chi^2$-analysis, which was carried out at the $5\sigma$ C.L. In this work, we perform a detailed $\chi^2$-analysis of several popular \emph{Ans\"{a}tze} for lepton mass matrices that contain texture zeros. We aim to determine whether such patterns are consistent with current neutrino oscillation data at the $1\sigma\, (68.27\%)$ C.L.
In particular, the well-known FGM two-zero textures, the hybrid textures, as well as parallel structures, will be analyzed. In our fitting procedure, we take into account six neutrino observables, namely, the two mass-squared differences, the three mixing angles, and the Dirac CP-violating phase. We also impose the recent cosmological bound on the sum of the neutrino masses~\cite{Planck:2015xua}. The paper is organized as follows. In Sec.~\ref{sec:chianalysis}, we briefly explain our strategy for the numerical analysis and minimization procedure. Then, we proceed in Sec.~\ref{sec:fgmtextures} to revisit the FGM two-zero textures for Majorana neutrinos. Two-zero textures for the lepton mass matrices in the case of Dirac neutrinos are also considered. Section~\ref{sec:hybridtextures} is devoted to the systematic $\chi^2$-analysis of hybrid textures containing one texture zero and two equal nonzero elements, for both Majorana and Dirac neutrinos. Parallel structures with two and three zeros are studied in Sec.~\ref{sec:partextures}. In Sec.~\ref{sec:nnitextures}, predictive neutrino textures in combination with a charged lepton mass matrix exhibiting the so-called nearest-neighbor-interaction (NNI) form are considered. Finally, our concluding remarks are given in Sec.~\ref{sec:summary}.
\section{Strategy for the numerical analysis} \label{sec:chianalysis} Leptonic mixing is described by the Pontecorvo, Maki, Nakagawa and Sakata (PMNS) matrix~\cite{pmns}, which, in the standard parametrization, can be written as~\cite{Agashe:2014kda} \begin{widetext} \begin{equation}\label{pmns-pdg} \mathbf{U} = \left( \begin{array}{ccc} c_{12}c_{13} & s_{12}c_{13} & s_{13}e^{-i\delta } \\ -s_{12}c_{23}-c_{12}s_{23}s_{13}e^{i\delta } & \quad c_{12}c_{23}-s_{12}s_{23}s_{13}e^{i\delta }\quad & s_{23}c_{13} \\ s_{12}s_{23}-c_{12}c_{23}s_{13}e^{i\delta } & -c_{12}s_{23}-s_{12}c_{23}s_{13}e^{i\delta } & c_{23}c_{13} \end{array} \right) \cdot \mathrm{diag\, }(1,e^{i\alpha_{21}/2}, e^{i\alpha_{31}/2}), \end{equation} \end{widetext} where $c_{ij}\equiv \cos \theta _{ij}\ ,\ s_{ij}\equiv \sin \theta _{ij}\ $, with all the angles $\theta _{ij}$ in the first quadrant, $\delta$ is the Dirac CP phase, and $\alpha_{21}, \alpha_{31}$ are two Majorana phases. The unitary matrix $\mathbf{U}$ in Eq.~\eqref{pmns-pdg} relates the mass eigenstate neutrinos $\nu_i \ (i = 1,2,3)$ to the flavor eigenstate neutrinos $\nu_f \ (f=e,\mu,\tau)$. For Majorana neutrinos, the neutrino mass matrix $\mathbf{m}_\nu$ is a $3\times3$ complex symmetric matrix, which can be diagonalized by the unitary transformation $\mathbf{U}_{\nu L}^\dagger \mathbf{m}_\nu \mathbf{U}^{\ast}_{\nu L}=\mathrm{diag\,}(m_1,m_2,m_3)$, with $\mathbf{U}_{\nu L}$ a unitary matrix and the neutrino masses $m_i$ real and positive. If neutrinos are Dirac particles, then the corresponding unitary transformation is $\mathbf{U}_{\nu L}^\dagger \mathbf{m}_\nu \mathbf{U}_{\nu R}=\mathrm{diag\,}(m_1,m_2,m_3)$, in analogy to the charged leptons, for which the mass matrix $\mathbf{m}_\ell$ is diagonalized by $\mathbf{U}_{\ell L}^\dagger \mathbf{m}_\ell \mathbf{U}_{\ell R}=\mathrm{diag\,}(m_e,m_\mu,m_\tau)$. 
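As a quick numerical cross-check of the parametrization in Eq.~\eqref{pmns-pdg}, the matrix $\mathbf{U}$ can be assembled from the mixing angles and phases and tested for unitarity. The following sketch is a direct restatement of the formula in pure Python; the angle and phase values used in the check are arbitrary illustrative inputs, not fitted values.

```python
import cmath
import math

def pmns_matrix(t12, t23, t13, delta, alpha21=0.0, alpha31=0.0):
    # Standard-parametrization PMNS matrix of Eq. (pmns-pdg), including
    # the Majorana phase factor diag(1, e^{i a21/2}, e^{i a31/2}).
    c12, s12 = math.cos(t12), math.sin(t12)
    c23, s23 = math.cos(t23), math.sin(t23)
    c13, s13 = math.cos(t13), math.sin(t13)
    ed = cmath.exp(1j * delta)
    u = [
        [c12 * c13, s12 * c13, s13 * cmath.exp(-1j * delta)],
        [-s12 * c23 - c12 * s23 * s13 * ed,
         c12 * c23 - s12 * s23 * s13 * ed, s23 * c13],
        [s12 * s23 - c12 * c23 * s13 * ed,
         -c12 * s23 - s12 * c23 * s13 * ed, c23 * c13],
    ]
    phases = [1.0, cmath.exp(1j * alpha21 / 2), cmath.exp(1j * alpha31 / 2)]
    return [[u[i][j] * phases[j] for j in range(3)] for i in range(3)]

def is_unitary(u, tol=1e-12):
    # Verify (U^dagger U)_{ij} = delta_{ij} entry by entry.
    return all(
        abs(sum(u[k][i].conjugate() * u[k][j] for k in range(3))
            - (1.0 if i == j else 0.0)) < tol
        for i in range(3) for j in range(3)
    )
```

Since the Majorana phases only multiply the columns of $\mathbf{U}$ by unit-modulus factors, they leave $\mathbf{U}^\dagger\mathbf{U}=\mathbb{1}$ intact, which the check confirms for any input angles.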
The leptonic mixing matrix $\mathbf{U}$ is then given by $\mathbf{U}=\mathbf{U}_{\ell L}^\dagger \mathbf{U}_{\nu L}$, which can always be parametrized in the form of Eq.~\eqref{pmns-pdg}. The absolute scale of neutrino masses is not yet known and there are two possible orderings of the light neutrino masses: normal ordering (NO) with $m_1<m_2<m_3$ or inverted ordering (IO) with $m_3<m_1<m_2$. The spectrum may vary from hierarchical to quasi-degenerate masses. Nevertheless, cosmological observations place a stringent upper bound on the sum of the masses. Assuming three species of degenerate massive neutrinos and a $\Lambda$CDM model, the Planck collaboration has released the bound~\cite{Planck:2015xua} \begin{equation} \label{planck} \sum_i m_i < 0.23~\text{eV}\quad \text{(95\% C.L.)}, \end{equation} obtained from a combined analysis of data.\footnote{Similar bounds are inferred from other cosmological observations. For instance, the median value $\sum_i m_i=0.32 \pm 0.11$~eV has been obtained by the South Pole Telescope collaboration, with a $3\sigma$ detection of a positive sum and $\sum_i m_i \in [0.01,0.63]$~eV at 99.7\% C.L.~\cite{Hou:2012xq}.} Although this bound is not definite and requires confirmation by forthcoming experiments, its inclusion in the analysis of neutrino mass models may lead to important conclusions about the viability of a given model. In this work, we shall perform a $\chi^2$-analysis using the standard $\chi^2$-function \begin{equation}\label{chisquared} \chi^2(x) = \sum_{i} \frac{(\mathcal{P}_i(x)-\overline{\mathcal{O}}_i)^2}{\sigma_i^2}, \end{equation} where $x$ denotes the physical input parameters (in our case, the matrix elements of the lepton mass matrices), $\mathcal{P}_i(x)$ are the predictions of the \emph{Ans\"{a}tze} for the observables $\mathcal{O}_i$, $\overline{\mathcal{O}}_i$ are the best-fit values of $\mathcal{O}_i$, and $\sigma_i$ are their corresponding $1\sigma$ errors. 
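The $\chi^2$-function of Eq.~\eqref{chisquared} can be sketched in a few lines. The best-fit values below are the NO entries of Table~\ref{tab:nudata}; as an assumption for this illustration only, the asymmetric $1\sigma$ errors of the table are symmetrized to a single $\sigma_i$ per observable.

```python
def chi_squared(predictions, best_fit, sigmas):
    # Standard chi^2 of Eq. (chisquared): sum_i ((P_i - O_i) / sigma_i)^2.
    return sum(((p - o) / s) ** 2
               for p, o, s in zip(predictions, best_fit, sigmas))

# NO best-fit values from Table 1, in the order:
# dm21^2 [eV^2], dm31^2 [eV^2], sin^2 th12, sin^2 th23, sin^2 th13, delta/pi.
best_fit = [7.60e-5, 2.48e-3, 0.323, 0.567, 0.0226, 1.41]
# Symmetrized 1-sigma errors (an assumption; the table quotes asymmetric ones).
sigmas = [0.185e-5, 0.06e-3, 0.016, 0.078, 0.0012, 0.475]
```

Shifting every observable by exactly $1\sigma$ gives $\chi^2=6$, which is the origin of the acceptance criterion $\chi^2_{min}\lesssim 6$ used below.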
In our study, we make use of the current neutrino parameters at $1\sigma$, obtained in Ref.~\cite{Forero:2014bxa} from the global fit of neutrino oscillation data. Furthermore, we impose the cosmological constraint on the sum of the neutrino masses given in Eq.~\eqref{planck}. We shall fit the zero-textures of lepton mass matrices taking into account six observables: the mass-squared differences $\Delta m_{21}^2,\, \Delta m_{31}^2$, the mixing angles $\mathrm{sin}^2\theta_{12},\, \mathrm{sin}^2\theta_{23},\, \mathrm{sin}^2\theta_{13}$, and the Dirac CP phase $\delta$. Since the Majorana phases are presently not constrained, we do not include them in the analysis. A given texture is considered to agree well with the experimental data if the model predictions for the physical observables in Eq.~\eqref{chisquared} are within the $1\sigma$ interval given in Table~\ref{tab:nudata}. Thus, $\chi^2_{min}\lesssim 6$ is a necessary condition for a pattern to be consistent with all observations. \begin{table}[ht]\centering \begin{tabular}{lc} \hline Parameter & Best fit $\pm$ $1\sigma$\\ \hline $\Delta m^2_{21}\: [10^{-5}\,\text{eV}^2]$ & 7.60$^{+0.19}_{-0.18}$ \\[2mm] $|\Delta m^2_{31}|\: [10^{-3}\,\text{eV}^2]$ (NO) & 2.48$^{+0.05}_{-0.07}$ \\ \phantom{$|\Delta m^2_{31}|\: [10^{-3}\text{eV}^2]$ } (IO) & 2.38$^{+0.05}_{-0.06}$ \\[2mm] $\sin^2\theta_{12} / 10^{-1}$ & 3.23$\pm$0.16 \\[2mm] $\sin^2\theta_{23} / 10^{-1}$ (NO) & 5.67$^{+0.32}_{-1.24}$ \\ \phantom{$\sin^2\theta_{23} / 10^{-1}$ } (IO) & 5.73$^{+0.25}_{-0.39}$ \\[2mm] $\sin^2\theta_{13} / 10^{-2}$ (NO) & 2.26$\pm$0.12 \\ \phantom{$\sin^2\theta_{13} / 10^{-2}$ } (IO) & 2.29$\pm$0.12 \\[2mm] $\delta/\pi$\quad (NO) & 1.41$^{+0.55}_{-0.40}$ \\ \phantom{$\delta/\pi$ }\quad (IO) & 1.48$\pm$0.31 \\ \hline \end{tabular} \caption{\label{tab:nudata} Neutrino oscillation parameters at $68.27\%$ C.L. taken from Ref.~\cite{Forero:2014bxa}. 
The upper and lower rows in $\Delta m^2_{31}$, $\sin^2\theta_{23}$, $\sin^2\theta_{13}$, and $\delta$ correspond to normal (NO) and inverted (IO) neutrino mass ordering, respectively.} \end{table} We remark that our approach to the determination of the charged lepton masses slightly differs from that of Ref.~\cite{Ludl:2014axa}. In our search for viable charged-lepton mass matrices, we always require that the eigenvalues of the input mass matrix correctly reproduce the central values of the charged lepton masses~\cite{Agashe:2014kda}, i.e. \begin{align} m_e&=0.510998928~\text{MeV},\;\nonumber\\ m_\mu&=105.6583715~\text{MeV},\;\\ m_\tau&=1776.82~\text{MeV}.\nonumber \end{align} The minimization of the $\chi^2$-function is carried out with respect to the six neutrino observables using the MINUIT package~\cite{James:1975dr,root}. To improve the quality of the minima, this procedure is repeated $10^4$ times, with randomly chosen initial charged lepton and neutrino mass matrices. Clearly, in the weak basis where the charged lepton mass matrix is diagonal and real (flavor basis), one has $\mathbf{m}_\ell=\mathrm{diag\,}(m_e,m_\mu,m_\tau)$ and thus this matrix is fixed. Moreover, one can easily show that the absolute value of any matrix element of $\mathbf{m}_\nu$ is always smaller than the largest neutrino mass, i.e. $|(\mathbf{m}_\nu)_{ij}|<\max_k\, (m_k)$. Therefore, the cosmological bound in Eq.~\eqref{planck} implies $|(\mathbf{m}_\nu)_{ij}|\lesssim 0.08$~eV. \section{FGM textures} \label{sec:fgmtextures} In this section we revisit the well-known FGM patterns for lepton mass matrices~\cite{Frampton:2002yf}, consisting of $3\times3$ Majorana neutrino mass matrices $\mathbf{m}_\nu$ with two zero elements in the charged lepton flavor basis with $\mathbf{m}_\ell=\text{diag}\,(m_e,m_\mu,m_\tau)$. We shall also consider the case of Dirac neutrinos, for which the matrix $\mathbf{m}_\nu$ is Hermitian.
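The bound $|(\mathbf{m}_\nu)_{ij}|<\max_k m_k$ quoted above follows from writing $(\mathbf{m}_\nu)_{ij}=\sum_k \mathbf{U}_{ik}\, m_k\, \mathbf{U}_{jk}$ for a Majorana mass matrix $\mathbf{m}_\nu=\mathbf{U}\,\mathrm{diag}(m_1,m_2,m_3)\,\mathbf{U}^T$ and applying the Cauchy--Schwarz inequality to the unit-norm rows of the unitary $\mathbf{U}$. A minimal numerical check (the mixing angles, phase, and masses below are arbitrary illustrative values, not fitted ones):

```python
import cmath
import math

def plane_rotation(i, j, theta, delta=0.0):
    # Complex Givens rotation in the (i, j) plane; composing three of
    # these gives a generic 3x3 unitary mixing matrix.
    r = [[complex(a == b) for b in range(3)] for a in range(3)]
    r[i][i] = r[j][j] = math.cos(theta)
    r[i][j] = math.sin(theta) * cmath.exp(-1j * delta)
    r[j][i] = -math.sin(theta) * cmath.exp(1j * delta)
    return r

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def majorana_mass_matrix(u, masses):
    # Flavor-basis Majorana mass matrix: m_nu = U diag(m) U^T.
    return [[sum(u[i][k] * masses[k] * u[j][k] for k in range(3))
             for j in range(3)] for i in range(3)]
```

For any unitary $\mathbf{U}$ and positive masses, the largest entry of the resulting symmetric matrix never exceeds the largest mass, which is the check performed below.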
For Majorana neutrinos, the mass matrix $\mathbf{m}_\nu$ is a symmetric matrix with six independent complex entries. There are $6!/[n! (6 - n)!]$ different textures, each containing $n$ independent texture zeros. One can show that any pattern of $\mathbf{m}_\nu$ with more than two independent zeros ($n> 2$) is not compatible with current neutrino oscillation data. For $n=2$, there are fifteen two-zero textures of $\mathbf{m}_\nu$, which can be classified into six categories ($\mathbf{A}, \mathbf{B}, \mathbf{C}, \mathbf{D}, \mathbf{E}, \mathbf{F}$), according to their physical predictions: \begin{align} \label{FGMtextures} \mathbf{A}_1: \begin{pmatrix} 0 & 0 & \ast \\ 0 & \ast & \ast \\ \ast & \ast & \ast \end{pmatrix} \,,\quad \mathbf{A}_2: \begin{pmatrix} 0 & \ast & 0 \\ \ast & \ast & \ast \\ 0 & \ast & \ast \end{pmatrix} \,,\quad \mathbf{B}_1: \begin{pmatrix} \ast & \ast & 0 \\ \ast & 0 & \ast \\ 0 & \ast & \ast \end{pmatrix}\,,\nonumber\\[3mm] \mathbf{B}_2: \begin{pmatrix} \ast & 0 & \ast \\ 0 & \ast & \ast \\ \ast & \ast & 0 \end{pmatrix}\,,\quad \mathbf{B}_3: \begin{pmatrix} \ast & 0 & \ast \\ 0 & 0 & \ast \\ \ast & \ast & \ast \end{pmatrix}\,,\quad \mathbf{B}_4: \begin{pmatrix} \ast & \ast & 0 \\ \ast & \ast & \ast \\ 0 & \ast & 0 \end{pmatrix}\,,\nonumber\\[3mm] \mathbf{C}: \begin{pmatrix} \ast & \ast & \ast \\ \ast & 0 & \ast \\ \ast & \ast & 0 \end{pmatrix}\,,\quad \mathbf{D}_1: \begin{pmatrix} \ast & \ast & \ast \\ \ast & 0 & 0 \\ \ast & 0 & \ast \end{pmatrix}\,,\quad \mathbf{D}_2: \begin{pmatrix} \ast & \ast & \ast \\ \ast & \ast & 0 \\ \ast & 0 & 0 \end{pmatrix}\,,\nonumber\\[3mm] \mathbf{E}_1: \begin{pmatrix} 0 & \ast & \ast \\ \ast & 0 & \ast \\ \ast & \ast & \ast \end{pmatrix}\,,\quad \mathbf{E}_2: \begin{pmatrix} 0 & \ast & \ast \\ \ast & \ast & \ast \\ \ast & \ast & 0 \end{pmatrix}\,,\quad \mathbf{E}_3: \begin{pmatrix} 0 & \ast & \ast \\ \ast & \ast & 0 \\ \ast & 0 & \ast \end{pmatrix}\,,\nonumber\\[3mm] \mathbf{F}_1: \begin{pmatrix} \ast & 0 & 0 
\\ 0 & \ast & \ast \\ 0 & \ast & \ast \end{pmatrix} \,,\quad \mathbf{F}_2: \begin{pmatrix} \ast & 0 & \ast \\ 0 & \ast & 0 \\ \ast & 0 & \ast \end{pmatrix}\,,\quad \mathbf{F}_3: \begin{pmatrix} \ast & \ast & 0 \\ \ast & \ast & 0 \\ 0 & 0 & \ast \end{pmatrix}\,.\nonumber\\ \end{align} Here, the symbol ``$\ast$" stands for arbitrary nonzero matrix elements. Clearly, the matrices $\mathbf{F}_i$ can be straightforwardly excluded since they lead to the decoupling of one generation and thus are not experimentally viable. \begin{table*}[ht] \begin{tabular}{clccccccc} \hline Majorana $\mathbf{m}_\nu$ &\;\;$\chi^2_{min}$ NO &\;\;\;\; (IO)\;\;\;\; &\; $\Delta m^2_{21}$\; &\; $\Delta m^2_{31}$\; & \;\;\;$\theta_{12}$\;\;\; &\;\;\;$\theta_{23}$\;\;\; &\;\;\;$\theta_{13}$\;\;\; &\; \;\;$\delta$\;\;\; \\ \hline $\mathbf{A}_1$ & $2.92\times10^{-1}$ & ($3.81\times10^{2}$) & $\checkmark$ &$\checkmark$ & $\checkmark (\times)$ & $\checkmark (\times)$ & $\checkmark$ & $\checkmark (\times)$ \\ $\mathbf{A}_2$ & $1.23\times10^{-2}$ & ($3.14\times10^{2}$) & $\checkmark$ &$\checkmark$ & $\checkmark (\times)$ & $\checkmark (\times)$ & $\checkmark$ & $\checkmark (\times)$\\ $\mathbf{B}_1$ & $8.39\times10^{-1}$ & ($4.04\times10^{-3}$) & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ \\ $\mathbf{B}_2$ & $3.39\times10^{-2}$ &($1.02\times10^{1}$) & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark (\times)$ & $\checkmark$ & $\checkmark$ \\ $\mathbf{B}_3$ & $9.12\times10^{-1}$ & ($3.45\times10^{-3}$) & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ \\ $\mathbf{B}_4$ & $2.10\times10^{-2}$ & ($1.11\times10^{1}$) & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark (\times)$ & $\checkmark$ & $\checkmark$ \\ $\mathbf{C}$ & $6.20\times10^{2}$ & ($1.04\times10^{-1}$) & $\checkmark$ & $\times (\checkmark)$ & $\checkmark$ & $\checkmark$ & $\times (\checkmark)$ & $ \checkmark$ \\ $\mathbf{D}_1$ & 
$1.33\times10^{2}$ & ($3.43\times10^{1}$) & $\checkmark$ & $\checkmark$ & $\times$ & $\times$ & $\checkmark$ & $\checkmark (\times)$ \\ $\mathbf{D}_2$ & $2.82\times10^{2}$ & ($4.88\times10^{1}$) & $\checkmark$ & $\checkmark$ & $\times$ & $\times$ & $\checkmark$ & $\checkmark (\times)$ \\ $\mathbf{E}_1$ & $1.40\times10^{1}$ & ($1.15\times10^{2}$) & $\checkmark$ & $\checkmark$ & $\checkmark (\times)$ & $\times (\checkmark)$ & $\checkmark$ & $\checkmark$ \\ $\mathbf{E}_2$ & $1.03\times10^{2}$ & ($1.14\times10^{2}$) &$\checkmark$ & $\checkmark$ & $\checkmark (\times)$ & $\times (\checkmark)$ & $\checkmark$ & $\checkmark$ \\ $\mathbf{E}_3$ & $2.09\times10^{1}$ & ($1.17\times10^{2}$) & $\checkmark$ & $\checkmark$ & $\checkmark (\times)$ & $\times (\checkmark)$ & $\checkmark$ & $\checkmark$ \\ \hline \end{tabular} \caption{\label{tab:FGM}The minimum of $\chi^2$ for the FGM zero-textures of the neutrino mass matrix with a normal (inverted) mass ordering. We use the data given in Table~\ref{tab:nudata} and impose the upper bound on the sum of neutrino masses of Eq.~\eqref{planck}. In all cases, the charged lepton mass matrix is $\mathbf{m}_\ell=\text{diag}\,(m_e,m_\mu,m_\tau)$. 
We also indicate with a check mark or a cross whether the predictions are or not within the $1\sigma$ interval given in Table~\ref{tab:nudata}.} \end{table*} \begin{table*}[ht] \begin{tabular}{clccccccc} \hline Dirac $\mathbf{m}_\nu$ &\;\;$\chi^2_{min}$ NO &\;\;\;\; (IO)\;\;\;\; &\; $\Delta m^2_{21}$\; &\; $\Delta m^2_{31}$\; & \;\;\;$\theta_{12}$\;\;\; &\;\;\;$\theta_{23}$\;\;\; &\;\;\;$\theta_{13}$\;\;\; &\; \;\;$\delta$\;\;\; \\ \hline $\mathbf{C}$ & $6.19\times10^{2}$ & ($1.04\times10^{-1}$) & $\checkmark$ & $\times (\checkmark)$ & $\checkmark$ & $\checkmark$ & $\times (\checkmark)$ & $ \checkmark$ \\ $\mathbf{E}_1$ & $1.40\times10^{1}$ & ($1.15\times10^{2}$) & $\checkmark$ & $\checkmark$ & $\checkmark (\times)$ & $\times (\checkmark)$ & $\checkmark$ & $\checkmark(\times)$ \\ $\mathbf{E}_2$ & $1.03\times10^{2}$ & ($1.14\times10^{2}$) &$\checkmark$ & $\checkmark$ & $\times$ & $\times (\checkmark)$ & $\checkmark$ & $\checkmark(\times)$ \\ \hline \end{tabular} \caption{\label{tab:FGM-Dirac}As in Table~\ref{tab:FGM}, for the case of Dirac neutrinos. We present only the patterns for which the Dirac phase $\delta$ is different from 0 or $\pi$, leading to leptonic CP violation.} \end{table*} Our results are presented in Table~\ref{tab:FGM}, in which the minimum of $\chi^2$ for each FGM texture with a normal or inverted neutrino mass ordering is given. The results are obtained using the current neutrino oscillation data of Table~\ref{tab:nudata} and imposing the upper bound on the sum of neutrino masses given in Eq.~\eqref{planck}. We indicate with a check mark or a cross whether the texture predictions are or not within the $1\sigma$ interval given in Table~\ref{tab:nudata}. Note that, in order to ease the reading of the table, whenever a given observable is simultaneously compatible (or incompatible) with data for NO and IO, we just indicate it with a single symbol, i.e. with a check mark (or a cross). Henceforth, this notation will be used in all tables. 
From Table~\ref{tab:FGM} we conclude that patterns $\mathbf{A}_{1,2}$ and $\mathbf{B}_{1,2,3,4}$ are allowed for NO, while only patterns $\mathbf{B}_{1,3}$ and $\mathbf{C}$ are compatible with neutrino oscillation data for an IO mass spectrum at the $1\sigma$ level.\footnote{The seven matrices were previously found to be compatible with neutrino oscillation data at the $1\sigma$ level for NO and IO mass spectrum~\cite{Fritzsch:2011qv}.} We remark that, if the stringent upper bound on the sum of neutrino masses given in Eq.~\eqref{planck} is relaxed, pattern $\mathbf{C}$ is also allowed for a NO neutrino mass spectrum~\cite{Meloni:2014yea}. In the latter case, we obtain $\chi^2_{min} \simeq 0.32$ with $\sum_i m_i < 1$~eV. For completeness, in Figs.~\ref{fig:fig-A1}-\ref{fig:fig-C} of Appendix~\ref{appendix1}, we present the probability distribution of the six neutrino observables, obtained for the seven viable FGM textures $\mathbf{A}_{1,2}$, $\mathbf{B}_{1,2,3,4}$ and $\mathbf{C}$, for both NO and IO mass spectra. We notice that textures in the same category lead in general to similar physical predictions for the observables. We now consider the case of Dirac neutrinos. We analyze again the two-zero textures given in Eq.~\eqref{FGMtextures}. These patterns have been recently studied for Dirac neutrinos in Ref.~\cite{Liu:2012axa}, where the authors concluded that only the patterns $\mathbf{A}_{1,2}$ and $\mathbf{C}$ are compatible with the oscillation data at the $2\sigma$ level. First we note that by redefining the right-handed neutrino fields we can assume, without loss of generality, that the mass matrix $\mathbf{m}_\nu$ is Hermitian. Furthermore, it is straightforward to show that if one off-diagonal matrix element is zero, then the invariant quantity $\mathcal{J}_{CP}=\text{Im}\, [\mathbf{U}_{12}\mathbf{U}_{23}\mathbf{U}_{13}^\ast \mathbf{U}_{22}^\ast]$ vanishes, and thus CP is conserved in the lepton sector. 
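In the standard parametrization, the invariant quoted above evaluates to $\mathcal{J}_{CP}=c_{12}s_{12}c_{23}s_{23}s_{13}c_{13}^2\sin\delta$, so it vanishes exactly when $\delta=0$ or $\pi$ (for nonzero angles). A quick numerical verification of this identity (with arbitrary illustrative angles, not fitted values):

```python
import cmath
import math

def pmns_dirac(t12, t23, t13, delta):
    # Dirac part of the standard-parametrization PMNS matrix, Eq. (pmns-pdg).
    c12, s12 = math.cos(t12), math.sin(t12)
    c23, s23 = math.cos(t23), math.sin(t23)
    c13, s13 = math.cos(t13), math.sin(t13)
    ed = cmath.exp(1j * delta)
    return [
        [c12 * c13, s12 * c13, s13 * cmath.exp(-1j * delta)],
        [-s12 * c23 - c12 * s23 * s13 * ed,
         c12 * c23 - s12 * s23 * s13 * ed, s23 * c13],
        [s12 * s23 - c12 * c23 * s13 * ed,
         -c12 * s23 - s12 * c23 * s13 * ed, c23 * c13],
    ]

def jarlskog(u):
    # J_CP = Im[U_12 U_23 U_13^* U_22^*], with 1-based indices as in the text.
    return (u[0][1] * u[1][2] * u[0][2].conjugate()
            * u[1][1].conjugate()).imag
```

The closed form makes explicit why $\mathcal{J}_{CP}=0$ implies $\delta=0$ or $\pi$ whenever all mixing angles are nonzero, which is the statement used to discard the CP-conserving two-zero patterns.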
Therefore, only patterns $\mathbf{C}$, $\mathbf{E}_1$, and $\mathbf{E}_2$ can lead to leptonic CP violation, while $\delta=0$ or $\pi$ for the remaining two-zero patterns. In view of the above, we shall only present the results for patterns $\mathbf{C}$, $\mathbf{E}_1$, and $\mathbf{E}_2$. The minimum of $\chi^2$ is given in Table~\ref{tab:FGM-Dirac}. As can be seen from the table, there is essentially no difference with respect to the results obtained for Majorana neutrinos. Only pattern $\mathbf{C}$ with an inverted hierarchy is allowed by current data. Relaxing the cosmological bound on the sum of the neutrino masses, we conclude that a normal hierarchical neutrino spectrum is also allowed for pattern $\mathbf{C}$, with $\chi^2_{min} \simeq 0.29$ for $\sum_i m_i < 1$~eV. Notice also that the parameter counting for Hermitian Dirac matrices differs from that of symmetric Majorana matrices, since in the former case the counting depends on the position of the zeros. For two vanishing diagonal matrix entries, the matrix $\mathbf{m}_\nu$ contains at most seven real parameters. 
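The parameter counting in the last paragraph can be made explicit: a $3\times3$ Hermitian matrix has nine real parameters (three real diagonal entries plus three complex off-diagonal ones), a diagonal texture zero removes one real parameter, and an off-diagonal zero removes two. A small sketch of this counting:

```python
def hermitian_real_parameters(zeros):
    # Real-parameter count of a 3x3 Hermitian matrix with texture zeros.
    # `zeros` is a set of (row, col) positions with row <= col (0-based);
    # a diagonal zero removes 1 real parameter, an off-diagonal zero 2.
    independent = [(i, j) for i in range(3) for j in range(3) if i <= j]
    count = 0
    for i, j in independent:
        if (i, j) in zeros:
            continue
        count += 1 if i == j else 2
    return count
```

With two vanishing diagonal entries this gives seven real parameters, matching the count in the text; by contrast, for a complex symmetric Majorana matrix every independent entry is complex, so each texture zero removes two real parameters regardless of its position.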
\section{Hybrid textures} \label{sec:hybridtextures} \begin{table*}[ht] \begin{tabular}{clccccccc} \hline Majorana $\mathbf{m}_\nu$ &\;\;$\chi^2_{min}$ NO &\;\;\;\; (IO)\;\;\;\; &\; $\Delta m^2_{21}$\; &\; $\Delta m^2_{31}$\; & \;\;\;$\theta_{12}$\;\;\; &\;\;\;$\theta_{23}$\;\;\; &\;\;\;$\theta_{13}$\;\;\; &\; \;\;$\delta$\;\;\; \\ \hline $\widehat{\mathbf{A}}_{1(13)}$ & $1.78\times10^{-8}$ & ($1.40\times10^{2}$) & $\checkmark$ &$\checkmark$ & $\checkmark$ & $\checkmark (\times)$ & $\checkmark (\times)$ & $\checkmark$ \\ $\widehat{\mathbf{A}}_{1(22)}$ & $1.65$ & ($7.16\times10^{-7}$) & $\checkmark$ &$\checkmark$ & $\checkmark$ & $\times(\checkmark)$ & $\checkmark$ & $\checkmark$\\ $\widehat{\mathbf{A}}_{1(23)}$ & $1.76\times10^{1}$ & ($6.44\times10^{-9}$) & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\times (\checkmark)$ & $\checkmark$ & $\checkmark$ \\ $\widehat{\mathbf{A}}_{1(33)}$ & $9.11$ &($7.22\times10^{-6}$) & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\times (\checkmark)$ & $\checkmark$ & $\checkmark$ \\ $\widehat{\mathbf{A}}_{2(12)}$ & $1.29\times10^{-8}$ & ($1.55\times10^{2}$) & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark(\times)$ & $\checkmark(\times)$ & $\checkmark$ \\ $\widehat{\mathbf{A}}_{2(22)}$ & $3.45$ & ($1.99\times10^{-6}$) & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\times (\checkmark)$ & $\checkmark$ & $\checkmark$ \\ $\widehat{\mathbf{A}}_{2(23)}$ & $2.06\times10^{1}$ & ($4.75\times10^{-11}$) & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\times (\checkmark)$ & $\checkmark$ & $\checkmark$ \\ $\widehat{\mathbf{A}}_{2(33)}$ & $7.34\times10^{-1}$ & ($9.89\times10^{-6}$) & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ \\ \hline \end{tabular} \caption{\label{tab:hybridA}The minimum of $\chi^2$ for the $\widehat{\mathbf{A}}$-type hybrid textures.} \end{table*} \begin{table*}[ht] \begin{tabular}{clccccccc} \hline Majorana $\mathbf{m}_\nu$ &\;\;$\chi^2_{min}$ NO &\;\;\;\; (IO)\;\;\;\;
&\; $\Delta m^2_{21}$\; &\; $\Delta m^2_{31}$\; & \;\;\;$\theta_{12}$\;\;\; &\;\;\;$\theta_{23}$\;\;\; &\;\;\;$\theta_{13}$\;\;\; &\; \;\;$\delta$\;\;\; \\ \hline $\widehat{\mathbf{B}}_{1(11)}$ & $6.63$ & ($3.14\times10^{2}$) & $\checkmark$ &$\checkmark$ & $\checkmark(\times)$ & $ \times$ & $\checkmark$ & $\checkmark(\times)$ \\ $\widehat{\mathbf{B}}_{1(12)}$ & $6.34\times10^{-1}$ & ($3.67\times10^{-7}$) & $\checkmark$ &$\checkmark$ & $\checkmark$ & $ \checkmark$ & $\checkmark$ & $\checkmark$ \\ $\widehat{\mathbf{B}}_{1(23)}$ & $2.08\times10^{1}$ & ($1.31\times10^{-8}$) & $\checkmark$ &$\checkmark$ & $\checkmark$ & $\times (\checkmark)$ & $\checkmark$ & $\checkmark$\\ $\widehat{\mathbf{B}}_{1(33)}$ & $3.42\times10^{-5}$ & ($6.83$) & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark(\times)$ & $\checkmark$ & $\checkmark$ \\ $\widehat{\mathbf{B}}_{2(11)}$ & $3.04\times10^{1}$ &($3.80\times10^{2}$) & $\checkmark$ & $\checkmark$ & $\checkmark(\times)$ & $\times$ & $\checkmark$ & $\checkmark(\times)$ \\ $\widehat{\mathbf{B}}_{2(13)}$ & $9.64\times10^{-3}$ &($7.34$) & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark (\times)$ & $\checkmark$ & $\checkmark$ \\ $\widehat{\mathbf{B}}_{2(22)}$ & $7.18\times10^{-1}$ & ($1.72\times10^{-6}$) & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ \\ $\widehat{\mathbf{B}}_{2(23)}$ & $1.80\times10^2$ & ($9.61\times10^{-10}$) & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\times (\checkmark)$ & $\checkmark$ & $\checkmark$ \\ $\widehat{\mathbf{B}}_{3(11)}$ & $5.52$ & ($1.08\times10^{2}$) & $\checkmark$ & $\checkmark$ & $\checkmark(\times)$ & $\times(\checkmark)$ & $\checkmark$ & $\checkmark$ \\ $\widehat{\mathbf{B}}_{3(13)}$ & $5.59\times10^{-1}$ & ($2.69\times10^{-5}$) & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ \\ $\widehat{\mathbf{B}}_{3(23)}$ & $1.77\times10^{1}$ & ($1.83\times10^{-8}$) & $\checkmark$ & $\checkmark$ & $\checkmark$ & 
$\times (\checkmark)$ & $\checkmark$ & $\checkmark$ \\ $\widehat{\mathbf{B}}_{3(33)}$ & $7.61\times10^{-2}$ & ($6.27$) & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark (\times)$ & $\checkmark$ & $\checkmark$ \\ $\widehat{\mathbf{B}}_{4(11)}$ & $2.32\times10^{1}$ & ($1.18\times10^{2}$) & $\checkmark$ & $\checkmark$ & $\checkmark(\times)$ & $\times$ & $\checkmark$ & $\checkmark(\times)$ \\ $\widehat{\mathbf{B}}_{4(12)}$ & $1.88\times10^{-2}$ & ($6.85$) & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark (\times)$ & $\checkmark$ & $\checkmark$ \\ $\widehat{\mathbf{B}}_{4(22)}$ & $9.72\times10^{-1}$ & ($8.36\times10^{-7}$) & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ \\ $\widehat{\mathbf{B}}_{4(23)}$ & $1.44\times10^{2}$ & ($1.25\times10^{-8}$) & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\times (\checkmark)$ & $\checkmark$ & $\checkmark$ \\ \hline \end{tabular} \caption{\label{tab:hybridB}The minimum of $\chi^2$ for the $\widehat{\mathbf{B}}$-type hybrid textures.} \end{table*} \begin{table*}[ht] \begin{tabular}{clccccccc} \hline Majorana $\mathbf{m}_\nu$ &\;\;$\chi^2_{min}$ NO &\;\;\;\; (IO)\;\;\;\; &\; $\Delta m^2_{21}$\; &\; $\Delta m^2_{31}$\; & \;\;\;$\theta_{12}$\;\;\; &\;\;\;$\theta_{23}$\;\;\; &\;\;\;$\theta_{13}$\;\;\; &\; \;\;$\delta$\;\;\; \\ \hline $\widehat{\mathbf{C}}_{(11)}$ & $1.40\times10^{-1}$ & ($1.08\times10^{2}$) & $\checkmark$ & $\checkmark$ & $\checkmark (\times)$ & $\checkmark$ & $\checkmark$ & $\checkmark$ \\ $\widehat{\mathbf{C}}_{(12)}$ & $3.19\times10^{-1}$ & ($3.68\times10^{-6}$) & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ \\ $\widehat{\mathbf{C}}_{(13)}$ & $1.88\times10^{-6}$ & ($3.72$) & $\checkmark$ & $\checkmark$ & $\checkmark $ & $\checkmark(\times)$ & $\checkmark$ & $\checkmark$ \\ $\widehat{\mathbf{C}}_{(23)}$ & $6.20\times10^{2}$ & ($3.52\times10^{-11}$) & $\checkmark$ & $\times(\checkmark)$ & $\checkmark$ & $\checkmark$ & 
$\times(\checkmark)$ & $\checkmark$ \\ \hline \end{tabular} \caption{\label{tab:hybridC}The minimum of $\chi^2$ for the $\widehat{\mathbf{C}}$-type hybrid textures.} \end{table*} \begin{table*}[ht] \begin{tabular}{clccccccc} \hline Majorana $\mathbf{m}_\nu$ &\;\;$\chi^2_{min}$ NO &\;\;\;\; (IO)\;\;\;\; &\; $\Delta m^2_{21}$\; &\; $\Delta m^2_{31}$\; & \;\;\;$\theta_{12}$\;\;\; &\;\;\;$\theta_{23}$\;\;\; &\;\;\;$\theta_{13}$\;\;\; &\; \;\;$\delta$\;\;\; \\ \hline $\widehat{\mathbf{D}}_{1(11)}$ & $1.07\times10^{-8}$ & ($1.08\times10^{2}$) & $\checkmark$ &$\checkmark$ & $\checkmark (\times)$ & $\checkmark$ & $\checkmark$ & $\checkmark$ \\ $\widehat{\mathbf{D}}_{1(12)}$ & $4.77\times10^{-8}$ & ($2.11\times10^{-6}$) & $\checkmark$ &$\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$\\ $\widehat{\mathbf{D}}_{1(13)}$ & $1.53\times10^{-7}$ & ($7.88\times10^{-6}$) & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ \\ $\widehat{\mathbf{D}}_{1(33)}$ & $6.88$ &($1.87$) & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\times (\checkmark)$ & $\checkmark$ & $\checkmark$ \\ $\widehat{\mathbf{D}}_{2(11)}$ & $1.14\times10^{-8}$ & ($1.08\times10^{2}$) & $\checkmark$ & $\checkmark$ & $\checkmark (\times)$ & $\checkmark$ & $\checkmark$ & $\checkmark$ \\ $\widehat{\mathbf{D}}_{2(12)}$ & $6.60\times10^{-10}$ & ($1.62\times10^{-6}$) & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ \\ $\widehat{\mathbf{D}}_{2(13)}$ & $1.63\times10^{-7}$ & ($2.59\times10^{-7}$) & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ \\ $\widehat{\mathbf{D}}_{2(22)}$ & $3.12$ & ($3.38\times10^{-3}$) & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\times (\checkmark)$ & $\checkmark$ & $\checkmark$ \\ \hline \end{tabular} \caption{\label{tab:hybridD}The minimum of $\chi^2$ for the $\widehat{\mathbf{D}}$-type hybrid textures.} \end{table*} \begin{table*}[ht]
\begin{tabular}{clccccccc} \hline Majorana $\mathbf{m}_\nu$ &\;\;$\chi^2_{min}$ NO &\;\;\;\; (IO)\;\;\;\; &\; $\Delta m^2_{21}$\; &\; $\Delta m^2_{31}$\; & \;\;\;$\theta_{12}$\;\;\; &\;\;\;$\theta_{23}$\;\;\; &\;\;\;$\theta_{13}$\;\;\; &\; \;\;$\delta$\;\;\; \\ \hline $\widehat{\mathbf{E}}_{1(12)}$ & $1.65\times10^{-6}$ & ($1.51\times10^{1}$) & $\checkmark$ & $\checkmark (\times)$ & $\checkmark$ & $ \checkmark(\times)$ & $\checkmark(\times)$ & $\checkmark$\\ $\widehat{\mathbf{E}}_{1(13)}$ & $7.10\times10^{-7}$ & ($1.97\times10^{2}$) & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark(\times)$ & $\checkmark$ & $\checkmark$\\ $\widehat{\mathbf{E}}_{1(23)}$ & $2.03\times10^{1}$ & ($1.33\times10^{-9}$) & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\times (\checkmark)$ & $\checkmark$ & $\checkmark$\\ $\widehat{\mathbf{E}}_{1(33)}$ & $1.40$ & ($1.39\times10^{-7}$) & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\times (\checkmark)$ & $\checkmark$ & $\checkmark$\\ $\widehat{\mathbf{E}}_{2(12)}$ & $2.55\times10^{-6}$ & ($2.58\times10^{2}$) & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark(\times)$ & $\checkmark$ & $\checkmark$\\ $\widehat{\mathbf{E}}_{2(13)}$ & $7.41\times10^{-9}$ & ($9.09$) & $\checkmark$ & $\checkmark(\times)$ & $\checkmark$ & $\checkmark(\times)$ & $\checkmark(\times)$ & $\checkmark$\\ $\widehat{\mathbf{E}}_{2(22)}$ & $1.88$ & ($9.94\times10^{-1}$) & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\times (\checkmark)$ & $\checkmark$ & $\checkmark$\\ $\widehat{\mathbf{E}}_{2(23)}$ & $1.77\times10^{2}$ & ($1.33\times10^{-9}$) & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\times (\checkmark)$ & $\checkmark$ & $\checkmark$\\ $\widehat{\mathbf{E}}_{3(12)}$ & $2.90\times10^{-8}$ & ($3.24\times10^{2}$) & $\checkmark$ & $\checkmark$ & $\checkmark(\times)$ & $\checkmark(\times)$ & $\checkmark$ & $\checkmark(\times)$\\ $\widehat{\mathbf{E}}_{3(13)}$ & $1.52\times10^{-6}$ & ($2.51\times10^{2}$) & $\checkmark$ & $\checkmark$ & $\checkmark(\times)$ 
& $\checkmark(\times)$ & $\checkmark$ & $\checkmark(\times)$\\ $\widehat{\mathbf{E}}_{3(22)}$ & $1.32\times10^{2}$ & ($3.17\times10^{-9}$) & $\checkmark$ & $\checkmark$ & $\times (\checkmark)$ & $\times (\checkmark)$ & $\checkmark$ & $\checkmark$\\ $\widehat{\mathbf{E}}_{3(33)}$ & $2.80\times10^{2}$ & ($5.74\times10^{-4}$) & $\checkmark$ & $\checkmark$ & $\times (\checkmark)$ & $\times (\checkmark)$ & $\checkmark$ & $\checkmark$\\ \hline \end{tabular} \caption{\label{tab:hybridE}The minimum of $\chi^2$ for the $\widehat{\mathbf{E}}$-type hybrid textures.} \end{table*} \begin{table*}[ht] \begin{tabular}{clccccccc} \hline Majorana $\mathbf{m}_\nu$ &\;\;$\chi^2_{min}$ NO &\;\;\;\; (IO)\;\;\;\; &\; $\Delta m^2_{21}$\; &\; $\Delta m^2_{31}$\; & \;\;\;$\theta_{12}$\;\;\; &\;\;\;$\theta_{23}$\;\;\; &\;\;\;$\theta_{13}$\;\;\; &\; \;\;$\delta$\;\;\; \\ \hline $\widehat{\mathbf{F}}_{1(11)}$ & $2.15\times10^{-12}$ & ($1.08\times10^{2}$) & $\checkmark$ & $\checkmark$ & $\checkmark(\times)$ & $\checkmark$ & $\checkmark$ & $\checkmark$\\ $\widehat{\mathbf{F}}_{1(22)}$ & $8.26\times10^{-1}$ & ($3.25\times10^{-6}$) & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$\\ $\widehat{\mathbf{F}}_{1(23)}$ & $1.88\times10^{1}$ & ($2.15\times10^{-8}$) & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\times(\checkmark)$ & $\checkmark$ & $\checkmark$\\ $\widehat{\mathbf{F}}_{1(33)}$ & $2.87\times10^{-6}$ & ($6.97$) & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark (\times)$ & $\checkmark$ & $\checkmark$\\ $\widehat{\mathbf{F}}_{2(11)}$ & $1.68\times10^{1}$ & ($3.68\times10^{2}$) & $\checkmark$ & $\checkmark$ & $\checkmark(\times)$ & $\times$ & $\checkmark(\times)$ & $\checkmark$\\ $\widehat{\mathbf{F}}_{2(13)}$ & $7.95\times10^{-7}$ & ($1.43\times10^{-6}$) & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$\\ $\widehat{\mathbf{F}}_{2(22)}$ & $2.87$ & ($3.62\times10^{-7}$) & $\checkmark$ & $\checkmark$ & 
$\checkmark$ & $\times (\checkmark)$ & $\checkmark$ & $\checkmark$\\ $\widehat{\mathbf{F}}_{2(33)}$ & $2.50\times10^{2}$ & ($1.42\times10^{-6}$) & $\checkmark$ & $\checkmark$ & $\times (\checkmark)$ & $\times (\checkmark)$ & $\times (\checkmark)$ & $\checkmark$\\ $\widehat{\mathbf{F}}_{3(11)}$ & $1.86\times10^{1}$ & ($3.07\times10^{2}$) & $\checkmark$ & $\checkmark$ & $\checkmark(\times)$ & $\times$ & $\checkmark$ & $\checkmark$\\ $\widehat{\mathbf{F}}_{3(12)}$ & $6.88\times10^{-8}$ & ($9.89\times10^{-7}$) & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$\\ $\widehat{\mathbf{F}}_{3(22)}$ & $1.04\times10^{2}$ & ($2.94\times10^{-6}$) & $\checkmark$ & $\checkmark$ & $\times (\checkmark)$ & $\times (\checkmark)$ & $\times (\checkmark)$ & $\checkmark$\\ $\widehat{\mathbf{F}}_{3(33)}$ & $5.61$ & ($4.32\times10^{-6}$) & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\times (\checkmark)$ & $\checkmark$ & $\checkmark$\\ \hline \end{tabular} \caption{\label{tab:hybridF}The minimum of $\chi^2$ for the $\widehat{\mathbf{F}}$-type hybrid textures.} \end{table*} \begin{table*}[ht] \begin{tabular}{clccccccc} \hline Dirac $\mathbf{m}_\nu$ &\;\;$\chi^2_{min}$ NO &\;\;\;\; (IO)\;\;\;\; &\; $\Delta m^2_{21}$\; &\; $\Delta m^2_{31}$\; & \;\;\;$\theta_{12}$\;\;\; &\;\;\;$\theta_{23}$\;\;\; &\;\;\;$\theta_{13}$\;\;\; &\; \;\;$\delta$\;\;\; \\ \hline $\widehat{\mathbf{A}}_{1(22)}$ & $1.16\times10^{1}$ & ($3.15\times10^{1}$) & $\checkmark$ &$\checkmark$ & $\checkmark(\times)$ & $\times$ & $\checkmark$ & $\checkmark$\\ $\widehat{\mathbf{A}}_{1(33)}$ & $1.31\times10^{1}$ &($3.09\times10^{1}$) & $\checkmark$ & $\checkmark$ & $\checkmark(\times)$ & $\times$ & $\checkmark$ & $\checkmark$ \\ $\widehat{\mathbf{A}}_{2(22)}$ & $5.65$ & ($4.35\times10^{1}$) & $\checkmark$ & $\checkmark$ & $\checkmark(\times)$ & $\times$ & $\checkmark$ & $\checkmark$ \\ $\widehat{\mathbf{A}}_{2(33)}$ & $7.65\times10^{1}$ & ($5.01\times10^{1}$) & $\checkmark$ & $\checkmark$ & 
$\times$ & $\times$ & $\checkmark$ & $\checkmark$ \\ $\widehat{\mathbf{B}}_{1(11)}$ & $6.91$ & ($3.15\times10^{2}$) & $\checkmark$ & $\checkmark$ & $\checkmark(\times)$ & $\times$ & $\checkmark$ & $\checkmark(\times)$ \\ $\widehat{\mathbf{B}}_{1(33)}$ & $1.72\times10^{-2}$ & ($5.40\times10^{1}$) & $\checkmark$ & $\checkmark(\times)$ & $\checkmark$ & $\checkmark(\times)$ & $\checkmark(\times)$ & $\checkmark(\times)$ \\ $\widehat{\mathbf{B}}_{2(11)}$ & $3.07\times10^{1}$ & ($3.80\times10^{2}$) & \checkmark & \checkmark & $\checkmark(\times)$ & $\times$ & $\checkmark$ & $\checkmark(\times)$ \\ $\widehat{\mathbf{B}}_{2(22)}$ & $9.05\times10^{-1}$ & ($4.16\times10^{1}$) & $\checkmark$ & $\checkmark(\times)$ & $\checkmark$ & $\checkmark$ & $\checkmark(\times)$ & $\checkmark$ \\ $\widehat{\mathbf{B}}_{3(11)}$ & $6.37$ & ($1.08\times10^{2}$) & $\checkmark$ & $\checkmark$ & $\checkmark(\times)$ & $\times(\checkmark)$ & $\checkmark$ & $\checkmark$ \\ $\widehat{\mathbf{B}}_{3(33)}$ & $6.56\times10^{-3}$ & ($1.41\times10^{1}$) & $\checkmark$ & $\checkmark$ & $\checkmark(\times)$ & $\checkmark(\times)$ & $\checkmark$ & $\checkmark(\times)$ \\ $\widehat{\mathbf{B}}_{4(11)}$ & $2.62\times10^{1}$ & ( $1.18\times10^{2}$) & $\checkmark$ & $\checkmark$ & $\checkmark(\times)$ & $\times$ & $\checkmark$ & $\checkmark(\times)$ \\ $\widehat{\mathbf{B}}_{4(22)}$ & $9.89\times10^{-1}$ & ($3.43\times10^{1}$) & $\checkmark$ & $\checkmark$ & $\checkmark(\times)$ & $\checkmark(\times)$ & $\checkmark$ & $\checkmark(\times)$ \\ $\widehat{\mathbf{C}}_{(11)}$ & $2.51\times10^{-1}$ & ($1.08\times10^{2}$) & $\checkmark$ & $\checkmark$ & $\checkmark (\times)$ & $\checkmark$ & $\checkmark$ & $\checkmark$ \\ $\widehat{\mathbf{D}}_{1(11)}$ & $4.51\times10^{-1}$ & ($1.08\times10^{2}$) & $\checkmark$ &$\checkmark$ & $\checkmark (\times)$ & $\checkmark$ & $\checkmark$ & $\checkmark$ \\ $\widehat{\mathbf{D}}_{1(33)}$ & $1.60\times10^{1}$ &($2.62$) & $\checkmark$ & $\checkmark$ & $\times(\checkmark)$ & 
$\times (\checkmark)$ & $\checkmark$ & $\times$ \\ $\widehat{\mathbf{D}}_{2(11)}$ & $1.49$ & ($1.08\times10^{2}$) & $\checkmark$ & $\checkmark$ & $\checkmark (\times)$ & $\times(\checkmark)$ & $\checkmark$ & $\checkmark$ \\ $\widehat{\mathbf{D}}_{2(22)}$ & $5.47$ & ($5.88\times10^{-1}$) & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\times (\checkmark)$ & $\checkmark$ & $\checkmark$ \\ $\widehat{\mathbf{E}}_{1(33)}$ & $5.76\times10^{1}$ & ($7.34\times10^{-1}$) & $\checkmark$ & $\checkmark$ & $\times(\checkmark)$ & $\times(\checkmark)$ & $\checkmark$ & $\checkmark$\\ $\widehat{\mathbf{E}}_{2(22)}$ & $5.79\times10^{1}$ & ($8.78$) & $\checkmark$ & $\checkmark$ & $\times$ & $\times$ & $\checkmark$ & $\checkmark(\times)$\\ $\widehat{\mathbf{E}}_{3(22)}$ & $1.33\times10^{2}$ & ($1.92\times10^{-3}$) & $\checkmark$ & $\checkmark$ & $\times (\checkmark)$ & $\times (\checkmark)$ & $\checkmark$ & $\times(\checkmark)$\\ $\widehat{\mathbf{E}}_{3(33)}$ & $2.80\times10^{2}$ & ($4.02\times10^{-1}$) & $\checkmark$ & $\checkmark$ & $\times (\checkmark)$ & $\times (\checkmark)$ & $\checkmark$ & $\times(\checkmark)$\\ $\widehat{\mathbf{F}}_{1(11)}$ & $2.94\times10^{-2}$ & ($1.08\times10^{2}$) & $\checkmark$ & $\checkmark$ & $\checkmark(\times)$ & $\checkmark$ & $\checkmark$ & $\checkmark$ \\ $\widehat{\mathbf{F}}_{1(22)}$ & $1.37\times10^{1}$ & ($2.12$) & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\times(\checkmark)$ & $\checkmark$ & $\checkmark (\times)$\\ $\widehat{\mathbf{F}}_{1(33)}$ & $9.53\times10^{1}$ & ($3.50$) & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\times$ & $\checkmark$ & $\times(\checkmark)$\\ $\widehat{\mathbf{F}}_{2(11)}$ & $2.00\times10^{1}$ & ($3.68\times10^{2}$) & $\checkmark$ & $\checkmark$ & $\checkmark(\times)$ & $\times$ & $\checkmark(\times)$ & $\checkmark$ \\ $\widehat{\mathbf{F}}_{2(22)}$ & $9.31$ & ($2.40\times10^{1}$) & $\checkmark$ & $\checkmark$ & $\checkmark(\times)$ & $\times (\checkmark)$ & $\checkmark$ & $\checkmark(\times)$\\
$\widehat{\mathbf{F}}_{2(33)}$ & $2.50\times10^{2}$ & ($1.34$) & $\checkmark$ & $\checkmark$ & $\times (\checkmark)$ & $\times (\checkmark)$ & $\times (\checkmark)$ & $\checkmark$\\ $\widehat{\mathbf{F}}_{3(11)}$ & $1.86\times10^{1}$ & ($3.07\times10^{2}$) & $\checkmark$ & $\checkmark$ & $\checkmark(\times)$ & $\times$ & $\checkmark$ & $\checkmark$ \\ $\widehat{\mathbf{F}}_{3(22)}$ & $1.04\times10^{2}$ & ($1.22\times10^{1}$) & $\checkmark$ & $\checkmark$ & $\times$ & $\times$ & $\times (\checkmark)$ & $\times$\\ $\widehat{\mathbf{F}}_{3(33)}$ & $1.95\times10^{1}$ & ($2.37\times10^{1}$) & $\checkmark$ & $\checkmark$ & $\times$ & $\times (\checkmark)$ & $\checkmark$ & $\checkmark$\\ \hline \end{tabular} \caption{\label{tab:hybrid-Dirac}The minimum of $\chi^2$ for Dirac-type hybrid textures. We present only the patterns for which the Dirac phase $\delta$ is different from 0 or $\pi$ and CP is violated.} \end{table*} Hybrid textures~\cite{Kaneko:2005yz} are particular cases of one-zero textures of the Majorana neutrino mass matrix, which additionally have two equal nonzero elements, and are defined in the flavor basis. There are $(6!/5!) \times 5!/(2!\, 3!)=60$ possible hybrid textures. Among them, it has been shown that only 39 textures are compatible with current neutrino oscillation data at the $3\sigma$ level~\cite{Liu:2013oxa}. To keep a coherent notation, without the need of introducing any new classification scheme, we shall label these matrices as follows. We associate to each FGM matrix $\mathbf{M}$ given in Eq.~\eqref{FGMtextures} a hybrid-type matrix $\widehat{\mathbf{M}}$, in which the two zeros in $\mathbf{M}$ are replaced by equal nonvanishing elements in $\widehat{\mathbf{M}}$. Then, the position of the zero element in the hybrid matrix $\widehat{\mathbf{M}}$ is indicated with a subscript in parenthesis. 
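The counting of sixty hybrid textures quoted above can be verified by direct enumeration over the six independent entries of a symmetric $3\times3$ mass matrix. A minimal Python sketch, purely illustrative:

```python
from itertools import combinations

# The six independent entries of a symmetric 3x3 Majorana mass matrix.
ENTRIES = [(1, 1), (1, 2), (1, 3), (2, 2), (2, 3), (3, 3)]

hybrids = []
for zero in ENTRIES:                    # one texture zero: 6 choices
    rest = [e for e in ENTRIES if e != zero]
    for pair in combinations(rest, 2):  # two equal nonzero entries: C(5,2) = 10
        hybrids.append((zero, pair))

print(len(hybrids))  # -> 60
```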
Consider, for instance, the hybrid texture \begin{align} \label{hybridexample} \begin{pmatrix} \text{X} & \text{X} & \ast \\ \text{X} & \ast & \ast \\ \ast & \ast & 0 \end{pmatrix}\,, \end{align} where ``X" stands for equal nonzero elements. Following the definition of the matrix $\mathbf{A}_1$ given in Eq.~\eqref{FGMtextures}, the hybrid matrix~\eqref{hybridexample} would be represented as $\widehat{\mathbf{A}}_{1(33)}$ in our notation. Obviously, for each FGM texture in Eq.~\eqref{FGMtextures}, one can construct four different hybrid textures, depending on the position of the zero matrix element. For comparison, below we list the complete set of 39 hybrid textures studied in Ref.~\cite{Liu:2013oxa}: \begin{align} \label{hybridtextures} &\widehat{\mathbf{A}}_{1\,\{(13),(22),(23),(33)\}}\,, ~\widehat{\mathbf{A}}_{2\,\{(12),(22),(23),(33)\}}\,, \nonumber\\ &\widehat{\mathbf{B}}_{1\,\{(12),(23),(33)\}}\,, \widehat{\mathbf{B}}_{2\,\{(13),(22),(23)\}}\,,\nonumber\\ &\widehat{\mathbf{B}}_{3\,\{(13),(23),(33)\}}\,, ~\widehat{\mathbf{B}}_{4\,\{(12),(22),(23)\}}\,, ~\widehat{\mathbf{C}}_{(11)}\,,\nonumber\\ &\widehat{\mathbf{D}}_{1\,\{(11),(12),(13),(33)\}}\,, ~\widehat{\mathbf{D}}_{2\,\{(11),(12),(13),(22)\}}\,,\\ &\widehat{\mathbf{E}}_{1(33)}\,,~\widehat{\mathbf{E}}_{2(22)}\,, ~\widehat{\mathbf{E}}_{3\,\{(22),(33)\}}\,,\nonumber\\ &\widehat{\mathbf{F}}_{1\,\{(22),(33)\}}\,, ~\widehat{\mathbf{F}}_{2\,\{(22),(33)\}}\,, ~\widehat{\mathbf{F}}_{3\,\{(22),(33)\}}\,,\nonumber \end{align} where we have indicated, inside curly brackets, the possible choices for the texture-zero position. The results of the $\chi^2$-minimization are summarized in Tables~\ref{tab:hybridA}-\ref{tab:hybridF}. First we note that all textures given in Eq.~\eqref{hybridtextures} are compatible with data at the $1\sigma$ level either for NO, IO or both types of neutrino mass spectrum. 
In particular, the patterns $\widehat{\mathbf{A}}_{2(33)}$, $\widehat{\mathbf{B}}_{1(12)}$, $\widehat{\mathbf{B}}_{2(22)}$, $\widehat{\mathbf{B}}_{3(13)}$, $\widehat{\mathbf{B}}_{4(22)}$, $\widehat{\mathbf{C}}_{(12)}$, $\widehat{\mathbf{D}}_{1(12)}$, $\widehat{\mathbf{D}}_{1(13)}$, $\widehat{\mathbf{D}}_{2(12)}$, $\widehat{\mathbf{D}}_{2(13)}$, $\widehat{\mathbf{F}}_{1(22)}$, $\widehat{\mathbf{F}}_{2(13)}$, and $\widehat{\mathbf{F}}_{3(12)}$ turn out to be compatible with the experimental data for both the NO and IO mass spectra. Even though our analysis is performed at the stringent $1\sigma$ C.L., also constraining the Dirac phase $\delta$, only six of the sixty possible hybrid patterns for Majorana neutrinos fail to reproduce the data for either hierarchy and can be completely excluded. These are the matrices $\widehat{\mathbf{B}}_{1(11)}$, $\widehat{\mathbf{B}}_{2(11)}$, $\widehat{\mathbf{B}}_{3(11)}$, $\widehat{\mathbf{B}}_{4(11)}$, $\widehat{\mathbf{F}}_{2(11)}$, and $\widehat{\mathbf{F}}_{3(11)}$, all having a zero element in the (1,1) position. We remark that in Ref.~\cite{Liu:2013oxa} only 13 patterns were found compatible with the data at the $1\sigma$ level: $\widehat{\mathbf{A}}_{1(22)}$, $\widehat{\mathbf{A}}_{1(23)}$, $\widehat{\mathbf{A}}_{1(33)}$, $\widehat{\mathbf{B}}_{1(23)}$, $\widehat{\mathbf{B}}_{1(33)}$, $\widehat{\mathbf{B}}_{2(22)}$, $\widehat{\mathbf{B}}_{3(13)}$, $\widehat{\mathbf{B}}_{4(23)}$, $\widehat{\mathbf{D}}_{2(11)}$, $\widehat{\mathbf{D}}_{2(13)}$, $\widehat{\mathbf{E}}_{2(22)}$, $\widehat{\mathbf{F}}_{2(22)}$, and $\widehat{\mathbf{F}}_{2(33)}$. The fact that several viable hybrid textures were missed in Ref.~\cite{Liu:2013oxa} could be attributed to the numerical procedure followed by its authors, who performed a simple random scan of the parameter space instead of the more reliable $\chi^2$-analysis. In the case of Dirac neutrinos, thirty patterns were considered; they are listed in Table~\ref{tab:hybrid-Dirac}.
We include all the Hermitian patterns that do not have any off-diagonal zero element, and thus may lead to Dirac-type CP violation. For the remaining patterns, the Dirac phase $\delta$ is always 0 or $\pi$ and CP is conserved in the lepton sector. Looking at the table we note that only twelve textures are consistent with data either for NO or IO neutrino mass spectrum. These are the matrices $\widehat{\mathbf{B}}_{1(33)}$, $\widehat{\mathbf{B}}_{2(22)}$, $\widehat{\mathbf{B}}_{3(33)}$, $\widehat{\mathbf{B}}_{4(22)}$, $\widehat{\mathbf{C}}_{(11)}$, $\widehat{\mathbf{D}}_{1(11)}$, $\widehat{\mathbf{D}}_{2(22)}$, $\widehat{\mathbf{E}}_{1(33)}$, $\widehat{\mathbf{E}}_{3(22)}$, $\widehat{\mathbf{E}}_{3(33)}$, $\widehat{\mathbf{F}}_{1(11)}$, and $\widehat{\mathbf{F}}_{2(33)}$. None of these matrices is simultaneously allowed for both mass spectra. \section{Parallel textures} \label{sec:partextures} In this section, we perform a systematic $\chi^2$-analysis of lepton mass matrices that exhibit the same texture, i.e. with $\mathbf{m}_\ell$ and $\mathbf{m}_\nu$ having their zeros located at the same positions. Besides the possibility of implementing a universal flavor structure in the context of grand unified models, there is an additional theoretical motivation for considering parallel structures. It is well known that an attractive and economical framework to generate small neutrino masses is the seesaw mechanism. In its simplest type-I realization, three right-handed neutrinos are added to the standard model particle content. 
It is then conceivable that the presence of family symmetries enforces texture-zero structures in the Dirac neutrino mass matrix $\mathbf{m}_D$ and the heavy Majorana mass matrix $\mathbf{M}_R$, which, in some cases, could be preserved by the effective neutrino mass matrix $\mathbf{m}_\nu=-\mathbf{m}_D \mathbf{M}_R^{-1} \mathbf{m}_D^\mathsf{T}$.\footnote{The patterns belonging to the classes I and IV in Eq.~\eqref{psets} have this property~\cite{Branco:2007nn}.} It is worth noticing that any permutation transformation acting on parallel patterns is allowed, since it leads to textures with the same physical content. Indeed, they can be related by a weak basis transformation, performed by a permutation matrix $\mathbf{P}$, \begin{equation} \label{wbtransf} \mathbf{m}^{\prime}_{\ell}=\mathbf{P}^{\mathsf{T}}\,\mathbf{m}_{\ell}\,\mathbf{P}\,,\quad \mathbf{m}^{\prime}_{\nu}=\mathbf{P}^{\mathsf{T}}\,\mathbf{m}_{\nu}\,\mathbf{P}\,, \end{equation} which automatically preserves the parallel structure, but changes the position of the zeros. The matrix $\mathbf{P}$ belongs to the group of six permutation matrices, which is isomorphic to the symmetric group $S_3\,$. \subsection{Two-zero textures} The FGM-type Ans\"atze can be classified into four weak basis equivalence classes (or permutation sets)~\cite{Branco:2007nn}: \begin{align}\label{psets} \begin{split} \text{Class I:} &\quad \mathbf{A}_1, \mathbf{A}_2, \mathbf{B}_3,\mathbf{B}_4, \mathbf{D}_1, \mathbf{D}_2;\\ \text{Class II:} &\quad \mathbf{B}_1, \mathbf{B}_2, \mathbf{E}_3; \\ \text{Class III:} &\quad \mathbf{C}, \mathbf{E}_1, \mathbf{E}_2;\\ \text{Class IV:} &\quad \mathbf{F}_1, \mathbf{F}_2, \mathbf{F}_3. \end{split} \end{align} It is clear that class IV is not experimentally viable, since it always leads to the decoupling of one generation.
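The class structure can be recovered by brute force: relabeling the generation indices of every two-zero pattern of a symmetric $3\times3$ matrix with the six permutations splits the 15 FGM textures into orbits of sizes 6, 3, 3, and 3, in agreement with the four classes above. A minimal Python sketch (the zero-based index convention is an illustration, not the paper's notation):

```python
from itertools import combinations, permutations

# Independent entries of a symmetric 3x3 mass matrix (0-based indices).
ENTRIES = [(0, 0), (0, 1), (0, 2), (1, 1), (1, 2), (2, 2)]

def relabel(pattern, p):
    """Weak basis change m -> P^T m P, i.e. relabel generation indices by p."""
    return frozenset(tuple(sorted((p[i], p[j]))) for (i, j) in pattern)

# All 15 two-zero textures of a symmetric 3x3 matrix.
patterns = [frozenset(c) for c in combinations(ENTRIES, 2)]

# Group them into weak basis equivalence classes (orbits under S_3).
orbits, seen = [], set()
for pat in patterns:
    if pat in seen:
        continue
    orbit = {relabel(pat, p) for p in permutations(range(3))}
    orbits.append(orbit)
    seen |= orbit

print(sorted(len(o) for o in orbits))  # -> [3, 3, 3, 6]
```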
Note also that the weak basis transformations given in Eq.~\eqref{wbtransf} are not allowed in a scheme with a diagonal and ordered charged lepton mass matrix, as in the texture schemes discussed in the previous section. In our $\chi^2$-analysis, all parallel FGM textures with arbitrary complex Hermitian (or real symmetric) $\mathbf{m}_{\ell}$ and complex symmetric $\mathbf{m}_{\nu}$ were found to be viable for both normal and inverted neutrino mass orderings. Similar results were obtained for Dirac neutrinos with Hermitian neutrino mass matrices. We have also considered the feasibility of arbitrary complex Hermitian $\mathbf{m}_{\ell}$ and real symmetric $\mathbf{m}_{\nu}$. In this case, the number of physical parameters is equal to 10 for classes~I and~II, while for class~III there are 11 parameters since, in general, the invariant quantity $\arg\bigl[(\mathbf{m}_{\ell})_{12}(\mathbf{m}_{\ell}^{\ast})_{13}(\mathbf{m}_{\ell})_{23}\bigr]$ does not vanish. As far as the analysis of the neutrino oscillation data is concerned, there is no distinction between Majorana and Dirac neutrinos. The minimum of $\chi^2$ was always found to be much smaller than one, so that all patterns in classes I, II, and III are consistent with the neutrino data for any mass hierarchy.
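For reference, the $\chi^2_{min}$ values reported throughout are obtained by minimizing a sum of squared pulls over the oscillation observables; the exact fit ingredients (asymmetric errors, correlations) are not reproduced here, but the generic form, evaluated on hypothetical numbers, looks as follows:

```python
def chi2(pred, data):
    """Sum of squared pulls; `data` maps observable -> (best-fit value, 1-sigma error)."""
    return sum(((pred[k] - mu) / sigma) ** 2 for k, (mu, sigma) in data.items())

# Hypothetical illustration (angles in degrees); NOT the fitted values of the text.
data = {"theta12": (33.4, 0.8), "theta13": (8.6, 0.1), "theta23": (49.0, 1.0)}
pred = {"theta12": 33.0, "theta13": 8.7, "theta23": 47.5}
print(round(chi2(pred, data), 3))  # -> 3.5
```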
\begin{table*}[ht] \begin{tabular}{clccccccc} \hline Majorana $\mathbf{m}_\nu$ &\;\;$\chi^2_{min}$ NO &\;\;\;\; (IO)\;\;\;\; &\; $\Delta m^2_{21}$\; &\; $\Delta m^2_{31}$\; & \;\;\;$\theta_{12}$\;\;\; &\;\;\;$\theta_{23}$\;\;\; &\;\;\;$\theta_{13}$\;\;\; &\; \;\;$\delta$\;\;\; \\ \hline $\mathbf{A}_{1(13)}$ & $1.25\times10^{-7}$ & ($1.71\times10^{-8}$) & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$\\ $\mathbf{A}_{1(22)}$ & $9.66\times10^{-8}$ &($4.95\times10^{-9}$) & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$\\ $\mathbf{A}_{1(23)}$ & $5.53\times10^{-8}$ & ($3.06$) & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark(\times)$ & $\checkmark$ & $\checkmark(\times)$ \\ $\mathbf{A}_{1(33)}$ & $1.34\times10^{-7}$ & ($5.13\times10^{-8}$) & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$\\ $\mathbf{A}_{2(22)}$ & $7.02\times10^{-8}$ & ($9.65\times10^{-8}$) & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$\\ $\mathbf{A}_{2(23)}$ & $1.31\times10^{-7}$ & ($4.51\times10^{-6}$) & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ \\ $\mathbf{A}_{2(33)}$ & $7.88\times10^{-8}$ & ($3.38\times10^{-8}$) & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$\\ $\mathbf{B}_{1(23)}$ & $5.35\times10^{-7}$ & ($3.78\times10^{-6}$) & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ \\ $\mathbf{B}_{1(33)}$ & $1.38\times10^{-6}$ & ($4.57\times10^{-8}$) & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ \\ $\mathbf{B}_{2(13)}$ & $1.52$ & ($2.88\times10^{-7}$) & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ \\ $\mathbf{B}_{2(22)}$ & $7.93\times10^{-7}$ & ($1.54\times10^{-8}$) & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ 
& $\checkmark$ & $\checkmark$ \\ $\mathbf{B}_{2(23)}$ & $1.52$ & ($6.29\times10^{-5}$) & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ \\ $\mathbf{B}_{3(13)}$ & $1.84\times10^{-7}$ & ($3.05$) & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark(\times)$ & $\checkmark$ & $\checkmark(\times)$ \\ $\mathbf{B}_{3(23)}$ & $1.20\times10^{-6}$ & ($4.05\times10^{-8}$) & $\checkmark$ &$\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ \\ $\mathbf{B}_{4(23)}$ & $1.95\times10^{-7}$ &($6.08\times10^{-9}$) & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ \\ $\mathbf{C}_{(11)}$ & $1.49\times10^{-6}$ & ($1.75\times10^{-8}$) & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ \\ $\mathbf{C}_{(23)}$ & $2.69\times10^{-8}$ & ($2.43\times10^{-8}$) & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ \\ $\mathbf{D}_{1(11)}$ & $4.99\times10^{-7}$ & ($1.61\times10^{-7}$) & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$\\ $\mathbf{D}_{2(11)}$ & $4.21\times10^{-6}$ & ($5.29\times10^{-8}$) & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark $ & $\checkmark$ & $\checkmark$\\ $\mathbf{F}_{1(23)}$ & $1.98\times10^{1}$ & ($1.83\times10^{2}$) & $\checkmark$ & $\checkmark$ & $\checkmark (\times)$ & $\times$ & $\checkmark$ & $\times$\\ \hline \end{tabular} \caption{\label{tab:nniMajorana} The minimum of $\chi^2$ for three-zero Majorana neutrino textures, with an NNI charged lepton mass matrix.} \end{table*} \subsection{Three-zero textures} There are only 6 possible three-zero parallel textures that can be constructed for both the charged-lepton and Majorana neutrino mass matrices. Since these matrices are related by weak basis transformations (permutations), they all have the same physical content and thus lead to the same predictions. 
We denote them by \begin{align} \label{3zerotextures} \mathbf{A}_{1(33)}\,, ~\mathbf{A}_{2(22)}\,,~\mathbf{B}_{1(33)}\,, ~\mathbf{B}_{2(22)}\,, ~\mathbf{D}_{1(11)}\,, ~\mathbf{D}_{2(11)}\,, \end{align} where the subscript in parentheses refers to the position of the additional zero in the corresponding two-zero texture given in Eq.~\eqref{FGMtextures}. Note that the matrix $\mathbf{C}_{(11)}$ is not included in the above list since it is traceless and, therefore, incompatible with the lepton masses. Furthermore, textures with null determinant or those leading to the decoupling of one generation have also been excluded. The texture $\mathbf{A}_{2(22)}$ is known as the nearest-neighbor-interaction (NNI) pattern~\cite{Fritzsch:1979zq,Branco:1988iq}. In the context of the standard model, it has been shown that imposing an NNI texture simultaneously in the up- and down-quark sectors simply corresponds to a weak basis choice~\cite{Branco:1988iq}. For non-Hermitian quark mass matrices, this is an example of parallel four-zero textures without any physical content. This is not necessarily true in the lepton sector, unless neutrinos are Dirac particles. For Majorana neutrinos, the assumption of a parallel NNI structure would have physical implications. For an arbitrary complex Hermitian $\mathbf{m}_{\ell}$ and a complex symmetric $\mathbf{m}_{\nu}$ (Majorana neutrinos), the above parallel three-zero textures contain nine physical parameters. No viable solution was found either for the NO ($\chi^2_{min}\simeq 74$) or IO ($\chi^2_{min}\simeq 182$) neutrino mass spectrum. For a normal ordering of neutrino masses, all the textures fail to reproduce the three mixing angles, while for an inverted spectrum the mixing angles $\theta_{12}$ and $\theta_{23}$ and the phase $\delta$ did not satisfy the required $\chi^2$ criteria.
In Fig.~\ref{fig:fig-3zeros-Majorana} of Appendix~\ref{appendix2}, we present the probability distribution of the neutrino observables, obtained for the textures given in Eq.~\eqref{3zerotextures}, for the NO and IO mass spectra, respectively. For Dirac neutrinos, with the matrix $\mathbf{m}_{\nu}$ being Hermitian, similar results were found, thus excluding these patterns for both NO and IO mass spectra.\footnote{Our conclusions do not agree with the result of Ref.~\cite{Fritzsch:2015haa}, in which the parallel NNI texture $\mathbf{A}_{2(22)}$ is found to be compatible with the experimental data.} \section{Predictive neutrino textures with NNI charged lepton mass matrix} \label{sec:nnitextures} In the previous section, we considered parallel structures for both lepton sectors, assuming an Hermitian charged lepton mass matrix. In particular, we showed that the parallel NNI texture $\mathbf{A}_{2(22)}$ is not compatible with the current neutrino data. In this section, we shall lift the assumption of Hermiticity on the NNI pattern of the charged lepton mass matrix and look for viable predictive neutrino zero textures. Such patterns are of interest since they contain the same number of physical parameters as the FGM and hybrid textures (assuming that $\mathbf{m}_{\nu}$ has three zeros). From the theoretical viewpoint, NNI lepton structures are also well motivated. For instance, it is possible to conceive flavor symmetries in the two-Higgs doublet model~\cite{Branco:2010tx}, in supersymmetric theories~\cite{Babu:2009nn}, and in grand unified models based on $\mathsf{SU}(5)$~\cite{EmmanuelCosta:2011jq,Emmanuel-Costa:2013gia} that lead to NNI textures. In the latter models, the charged lepton mass matrix $\mathbf{m}_{\ell}$ exhibits an NNI pattern, while the effective neutrino mass matrix $\mathbf{m}_{\nu}$ contains some vanishing elements.
We shall assume that the non-Hermitian charged lepton mass matrix $\mathbf{m}_{\ell}$ is described by the NNI form $\mathbf{A}_{2(22)}$ and search for the maximal number of zeros in $\mathbf{m}_{\nu}$ compatible with the data. As before, we take $\mathbf{m}_{\nu}$ as a general complex symmetric matrix for Majorana neutrinos and an Hermitian matrix for Dirac neutrinos. We remark that, without loss of generality, all the non-vanishing matrix elements in $\mathbf{m}_{\ell}$ can be assumed real and positive. Thus, there remain two free parameters in $\mathbf{m}_{\ell}$ after fitting the charged lepton masses. In our $\chi^2$-search, none of the neutrino textures with more than three zeros was found compatible with the observed neutrino data. In Tables~\ref{tab:nniMajorana} and~\ref{tab:nniDirac} we present the results for three-zero $\mathbf{m}_{\nu}$ textures for Majorana and Dirac neutrinos, respectively. As can be seen from the tables, only the pattern $\mathbf{F}_{1(23)}$ fails to reproduce the data for either neutrino spectrum. Moreover, the patterns $\mathbf{A}_{1(23)}$ and $\mathbf{B}_{3(13)}$ are compatible with the data only for NO neutrino masses. The remaining 17 textures are viable at the $1\sigma$ level irrespective of the mass ordering. In particular, once the Hermiticity of $\mathbf{m}_{\ell}$ is lifted, the parallel structure $\mathbf{A}_{2(22)}$ turns out to be consistent with the data. We remark that taking an NNI Hermitian $\mathbf{m}_{\ell}$ together with any non-parallel three-zero neutrino pattern does not lead to viable pairs of textures. Therefore, the non-Hermiticity of the charged lepton mass matrix is a crucial ingredient in this particular case.
\begin{table*}[ht]
\begin{tabular}{clccccccc}
\hline
Dirac $\mathbf{m}_\nu$ & $\chi^2_{min}$ NO & (IO) & $\Delta m^2_{21}$ & $\Delta m^2_{31}$ & $\theta_{12}$ & $\theta_{23}$ & $\theta_{13}$ & $\delta$ \\
\hline
$\mathbf{A}_{1(13)}$ & $6.44\times10^{-7}$ & ($3.51\times10^{-8}$) & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ \\
$\mathbf{A}_{1(22)}$ & $3.20\times10^{-8}$ & ($1.93\times10^{-8}$) & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ \\
$\mathbf{A}_{1(23)}$ & $1.18\times10^{-8}$ & ($3.06$) & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark(\times)$ & $\checkmark$ & $\checkmark(\times)$ \\
$\mathbf{A}_{1(33)}$ & $5.08\times10^{-7}$ & ($6.03\times10^{-8}$) & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ \\
$\mathbf{A}_{2(22)}$ & $5.44\times10^{-8}$ & ($2.25\times10^{-7}$) & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ \\
$\mathbf{A}_{2(23)}$ & $1.46\times10^{-7}$ & ($6.74\times10^{-7}$) & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ \\
$\mathbf{A}_{2(33)}$ & $2.36\times10^{-7}$ & ($6.21\times10^{-8}$) & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ \\
$\mathbf{B}_{1(23)}$ & $1.31\times10^{-7}$ & ($5.63\times10^{-3}$) & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ \\
$\mathbf{B}_{1(33)}$ & $2.01\times10^{-5}$ & ($1.27\times10^{-7}$) & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ \\
$\mathbf{B}_{2(13)}$ & $1.52$ & ($1.51\times10^{-1}$) & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ \\
$\mathbf{B}_{2(22)}$ & $9.22\times10^{-7}$ & ($1.64\times10^{-8}$) & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ \\
$\mathbf{B}_{2(23)}$ & $1.52$ & ($1.28\times10^{-6}$) & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ \\
$\mathbf{B}_{3(13)}$ & $1.43\times10^{-6}$ & ($3.05$) & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark(\times)$ & $\checkmark$ & $\checkmark(\times)$ \\
$\mathbf{B}_{3(23)}$ & $7.99\times10^{-7}$ & ($9.56\times10^{-9}$) & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ \\
$\mathbf{B}_{4(23)}$ & $2.83\times10^{-7}$ & ($6.08\times10^{-8}$) & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ \\
$\mathbf{C}_{(11)}$ & $5.62\times10^{-7}$ & ($6.20\times10^{-8}$) & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ \\
$\mathbf{C}_{(23)}$ & $9.84\times10^{-8}$ & ($1.90\times10^{-8}$) & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ \\
$\mathbf{D}_{1(11)}$ & $1.23\times10^{-6}$ & ($1.28\times10^{-7}$) & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ \\
$\mathbf{D}_{2(11)}$ & $4.23\times10^{-6}$ & ($2.14\times10^{-8}$) & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ \\
$\mathbf{F}_{1(23)}$ & $1.98\times10^{1}$ & ($1.83\times10^{2}$) & $\checkmark$ & $\checkmark$ & $\checkmark(\times)$ & $\times$ & $\checkmark$ & $\times$ \\
\hline
\end{tabular}
\caption{\label{tab:nniDirac} The minimum of $\chi^2$ for three-zero Dirac neutrino textures, with an NNI charged lepton mass matrix.}
\end{table*}
\section{Conclusions}
\label{sec:summary}
There has recently been a revival of interest in texture-zero models that aim at explaining the flavor structure observed in lepton mass matrices. In this work, we have confronted various popular texture-zero \emph{Ans\"{a}tze} of lepton mass matrices with current neutrino data.
We have performed a thorough $\chi^2$-analysis in a wide class of schemes, considering Hermitian charged lepton mass matrices in combination with symmetric Majorana or Hermitian Dirac neutrino mass matrices. In our study we included the well-known FGM textures, the so-called hybrid textures, as well as parallel patterns. We concluded that while a significant number of these patterns are still consistent with all the observations at 68.27\% C.L., there are several textures that can be excluded or are marginally allowed. We have also considered predictive neutrino zero textures with the assumption that the charged lepton mass matrix has the well-known NNI form. In the latter case, requiring non-Hermiticity of the charged lepton mass matrix is a necessary condition to obtain viable neutrino patterns. Predictive textures were found with a maximum number of three zeros, for both Majorana and Dirac neutrinos. It is well known that texture-zero models have in general weak predictive power. We have not addressed here the question of the predictability of a given texture. This issue is beyond the scope of the present work. The reader is referred, e.g., to Ref.~\cite{Ludl:2014axa}, in which the authors attempted to identify predictive classes of texture zeros by defining numerical measures of predictability. For instance, maximally restrictive Majorana textures can predict, in most cases, the effective neutrino mass parameter $m_{\beta\beta}=|\sum_i \mathbf{U}_{ei}^2\,m_i|$, relevant for neutrinoless double beta decay. From our study of different lepton mass matrix textures, it becomes clear that present neutrino oscillation data do not give a clear hint as to which category of textures is more appropriate to describe the observations.
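The parameter $m_{\beta\beta}$ is straightforward to evaluate once a texture fixes the masses and phases. A minimal sketch, assuming the standard parametrization of the first PMNS row with all CP phases lumped into two effective phases (the names \texttt{alpha2}, \texttt{alpha3} are illustrative, not from the paper):

```python
import cmath
import math

def m_betabeta(masses, theta12, theta13, alpha2=0.0, alpha3=0.0):
    """Effective Majorana mass |sum_i U_ei^2 m_i|, using the first-row
    moduli |U_e1| = c12*c13, |U_e2| = s12*c13, |U_e3| = s13, with the
    Dirac and Majorana phases absorbed into effective phases alpha2, alpha3."""
    c12, s12 = math.cos(theta12), math.sin(theta12)
    c13, s13 = math.cos(theta13), math.sin(theta13)
    total = (c12**2 * c13**2 * masses[0]
             + s12**2 * c13**2 * masses[1] * cmath.exp(1j * alpha2)
             + s13**2 * masses[2] * cmath.exp(1j * alpha3))
    return abs(total)
```

For degenerate masses and vanishing phases, first-row unitarity $|U_{e1}|^2+|U_{e2}|^2+|U_{e3}|^2=1$ makes $m_{\beta\beta}$ equal to the common mass, which provides a quick sanity check.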
The precise measurements of neutrino oscillation parameters in upcoming experiments (including the determination of the absolute neutrino mass scale and the Dirac CP phase, and the improvement of the bounds on the sum of neutrino masses and the effective mass in $0\nu\beta\beta$ decays) are expected to shed some light on the flavor structure of the neutrino sector. This in turn would allow us to determine, among the plethora of texture-zero patterns, what are the most predictive textures capable of explaining the experimental data, as well as those that are disfavored or excluded at a high confidence level. \section*{Acknowledgements} We are grateful to M. Nebot and S. Palomares-Ruiz for useful discussions and comments. The work of D.E.C.~was supported by \emph{Associa\c c\~ao do Instituto Superior T\'ecnico para a Investiga\c c\~ao e Desenvolvimento} (IST-ID) and \textit{Funda\c{c}\~{a}o para a Ci\^{e}ncia e a Tecnologia} (FCT). D.E.C. and R.G.F. acknowledge support from FCT through the projects CERN/FP/123580/2011, PTDC/FIS-NUC/0548/2012 and UID/FIS/00777/2013, and thank CERN Theoretical Physics Unit for hospitality and financial support.
\section{Introduction} Copying information is a fundamental process in the natural world: all living systems, as well as the vast majority of man-made digital devices, need to replicate information to function properly. The quality of a copy relies on it being an accurate reproduction of the original and can be quantified by the fraction $\eta$ of wrongly copied bits that it contains. Errors can be caused by several hardware-specific factors, such as imperfections in the copying machinery. At the molecular scale, perfect copying does not exist, as thermal fluctuations constitute a fundamental source of error regardless of the system. Since the reliability of the copying process is ultimately limited by thermal noise, it must be understood in terms of thermodynamics, as recognized by von Neumann \cite{von1956probabilistic}. Therefore, a critical question is whether one can invoke the second law of thermodynamics to establish a universal connection between the error and physical quantities characterizing the copy process. This issue should be addressed in a general framework incorporating two basic features of copying machinery. First, copying protocols often involve several intermediate discriminatory steps used to regulate the accuracy and speed of the process. This is a characteristic property of both natural and artificial error-correcting protocols. For example, accurate copying of DNA occurs via multistep reactions \cite{johnson1993conformational}. Second, due to the statistical nature of the second law, one should consider cyclically repeated copy operations rather than a single one \cite{Bennett:1982wx}. This cyclical operation is also consistent with the behavior of polymerases when duplicating long biopolymers.
To understand the thermodynamics of copying, we introduce a general framework where both the copying protocol can be arbitrarily complex (as in models describing biochemical reactions \cite{hopfield1974kinetic,ninio1975kinetic, murugan2012speed,murugan2014discriminatory}) and copy operations are cyclically repeated (as in models inspired by the physics of polymer growth \cite{bennett1979dissipation,andrieux2008nonequilibrium,cady2009open,andrieux2009molecular,esposito2010extracting,sartori2013kinetic,andrieux2013information,gaspard2014kinetics}). Our framework describes template-assisted growth of a copy polymer (or ``tape'', see \cite{sharma2012template}) aided by a molecular machine, see Fig. \ref{fig:copol}. Gray and white circles represent two different monomer types. The molecular machine, represented as a red circle in the figure, is situated at the tip of the copy strand and tries to match freely diffusing monomers with corresponding ones on the template. When a free monomer arrives at the tip, the machine transitions through a network of intermediate states to determine whether to incorporate or to reject it. Incorporation is more likely if the matching is right, i.e. the color of the monomer matches that of the template, than if it is wrong. On average, the copy strand elongates at a speed $v\ge0$ and accumulates errors with probability $\eta$. \begin{figure}[!ht] \centerline{\includegraphics[width=.48\textwidth]{scheme_copol}} \caption{{\bf Template-assisted polymerization.} The template strand is a pre-existing polymer made up of two different kinds of monomers (gray and white circles). A molecular copying machine (red circle) assists the growth of a copy strand by incorporating freely diffusing monomers of two different types, trying to match them with those of the template strand. 
Right and wrong matches are denoted $r$ and $w$.\label{fig:copol}} \end{figure} \begin{figure*}[hbt] \centerline{\includegraphics[width=.96\textwidth]{general_copol}} \caption{{\bf Transition network of template-assisted polymerization and examples.} {\bf A.} State space of the template-assisted polymerization model. Monomer incorporation occurs via a network of intermediate states represented inside the dashed circles. The two colors distinguish networks leading to incorporation of right and wrong monomers. This motif is repeated in a tree-shaped structure as the polymer grows by addition of more and more monomers. {\bf B.} Examples of networks of intermediate states. First example: template-assisted polymerization without intermediate states (see e.g. \cite{bennett1979dissipation,sartori2013kinetic,andrieux2009molecular,esposito2010extracting,gaspard2014kinetics}). Second example: kinetic proofreading, where after an intermediate state a backwards driven pathway removes errors to improve the overall accuracy of the copy \cite{hopfield1974kinetic,sartori2013kinetic}. Third example: mRNA translation, where the three copying steps represent initial binding, GTP hydrolysis and final accommodation; a proofreading reaction is also present \cite{johansson2008rate}. \label{fig:general}} \end{figure*} Close to thermodynamic equilibrium the process becomes very slow, $v \rightarrow 0$. The error is then $\eta_{\rm eq}\approx\exp[-(\Delta E^{w}-\Delta E^{r})/T]$, determined by the energy changes $\Delta E^{r}$ and $\Delta E^{w}$ of right and wrong monomer incorporation, independent of the copying protocol. In this case, the error can be reduced by increasing the gap $(\Delta E^{w}-\Delta E^{r})$, in agreement with Bennett's idea that cyclic copying can be performed near equilibrium with arbitrary precision \cite{Bennett:1982wx,sartori2013kinetic}. This mechanism is, however, impractical, for example due to the low-speed limitation.
Instead, typical molecular machines spend chemical energy to copy at a finite speed, out of thermodynamic equilibrium. Non-equilibrium copying protocols can also reduce the error far below its equilibrium value. For example, the equilibrium estimate for the error in DNA duplication is $\eta_{\rm eq}\sim10^{-2}$, whereas the actual observed error is $\eta\sim10^{-9}$ \cite{johnson1993conformational}. An important non-equilibrium mechanism underlying error correction is kinetic proofreading, which feeds on chemical energy to preferentially undo wrong copies \cite{hopfield1974kinetic,ninio1975kinetic,bennett1979dissipation}. Other non-equilibrium mechanisms, such as induced fit \cite{pape1999induced} and kinetic discrimination \cite{cady2009open,sartori2013kinetic}, complement kinetic proofreading to underpin the high accuracy of replication in biological systems. In this work we demonstrate that, for the broad class of processes depicted in Fig.~\ref{fig:copol}, a direct relation links copy errors with non-equilibrium thermodynamic observables {\color{black} characterizing incorporation of errors}. In particular, at fixed work budget, the error decreases exponentially with the total entropy produced per wrongly copied bit. This relation is completely general, in contrast with conditions setting hardware-specific minimum errors $\eta_{\rm min}$ that characterize each particular copying protocol. When studying wrong matches alone, three copying regimes can be identified: {\it error amplification}, where energy is invested in increasing the error rate; {\it error correction}, where energy is invested in decreasing the error rate; and {\it Maxwell demon}, where the information contained in the errors is converted into work. We conclude by studying the specific copying protocol of kinetic proofreading. We show that proofreading can operate in all three regimes.
Furthermore, for a broad class of proofreading protocols, we show that error reduction is limited by the chemical energy spent in the proofreading reaction. \section{Results} \subsection{Template-assisted polymerization} We start our discussion by detailing the stochastic dynamics of the template-assisted polymerization process sketched in Fig. \ref{fig:copol}. Its transition network is represented in Fig. \ref{fig:general}A. The rectangles correspond to the states of the system after the copying machine has finalized incorporation of a monomer. We denote them by a string such as $\dots rrwr$, which refers to a particular sequence of right and wrong matches (see also Fig. \ref{fig:copol}). Dashed circles enclose sub-networks of $n$ intermediate states, characteristic of the copying protocol. The intermediate states, represented as blue/green circles for right/wrong matches in Fig. \ref{fig:general}A, are used by the machine to process a tentatively matched monomer and decide whether to incorporate it or not. We denote intermediate states by $\dots rrwr r_i$, with $1\le i\le n$, and analogously for wrong monomers. A copying protocol is fully specified by the topology of the sub-networks, assumed to be the same for right and wrong matches, and the kinetic rates $k_{ij}^r$ for right matches and $k_{ij}^w$ for wrong ones. Differences in the rates are responsible for discrimination. Possible examples of sub-networks of increasing complexity are represented in Fig. \ref{fig:general}B. Because of thermal fluctuations induced by the environment at temperature $T$, all kinetic transitions are stochastic. The states are thus characterized by time-dependent probabilities $P(\dots r)$, $P(\dots w)$, $P(\dots r_i)$ and $P(\dots w_i)$. Their evolution is governed by a set of master equations which can be solved at steady state, see {\em Methods}.
Key to the solution is to postulate that errors are uncorrelated along the chain, so that $P(\dots)\propto\eta^{N^w}(1-\eta)^{N-N^w}$, where $N$ is the length of the chain and $N^w$ is the total number of incorporated wrong matches. The error $\eta$ can then be determined via the condition \begin{equation} \label{eq:error} \frac{\eta}{1-\eta}=\frac{v^w(\eta)}{v^r(\eta)}, \end{equation} where $v^r$ and $v^w$ are the average incorporation speeds of right and wrong monomers, respectively. They represent the average net rates at which right and wrong monomers are incorporated in the copy. The net elongation speed $v$ is the sum of these two contributions, $v=v^r+v^w$. Substituting the solution for $P(\ldots)$ into the master equations leads to explicit expressions for $v^w$ and $v^r$ as a function of the error and all the kinetic rates. In this way, Eq. (\ref{eq:error}) becomes a closed equation for the only unknown $\eta$. Note that Eq. (\ref{eq:error}) and the definition of $v$ imply $v^r=(1-\eta)v$ and $v^w=\eta v$. \subsection{Thermodynamics of copying with errors} The kinetic rates $k_{ij}^r$ and $k_{ij}^w$ are determined by the energy landscape of the system, the chemical drivings $\mu_{ij}$ of the reactions, and the temperature $T$ of the thermal bath, as represented in Fig. \ref{fig:rates}A. The chemical drivings represent the difference in chemical potential of reactions, such as ATP hydrolysis, fueling the transitions $j\to i$. The energy difference of an intermediate state with respect to the state before candidate monomer incorporation is $\Delta E_i^r=E(\dots r_i)-E(\dots)$, and similarly for wrong incorporation; the energy change after {\em finalizing} incorporation of a monomer is $\Delta E^r=E(\dots r)-E(\dots)$ and analogously for wrong matches. Note that these {\it energies} are in a strict sense free energies as they might depend, for example, on the monomer concentrations in the cell.
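Eq.~(\ref{eq:error}) above is a closed scalar equation for $\eta$ once a concrete protocol supplies the speed functions $v^r(\eta)$ and $v^w(\eta)$. A generic numerical sketch (the speed functions passed in are placeholders; a single root bracketed in $(0,1)$ is assumed):

```python
def solve_error(v_r, v_w, tol=1e-12):
    """Solve eta/(1-eta) = v_w(eta)/v_r(eta) for eta by bisection on (0, 1).
    v_r and v_w are model-specific incorporation-speed functions supplied
    by the caller; a single bracketed root is assumed."""
    def f(eta):
        return eta / (1.0 - eta) - v_w(eta) / v_r(eta)
    lo, hi = 1e-15, 1.0 - 1e-15
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) > 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

For constant speeds $v^r=1$, $v^w=0.01$ the solution reduces to $\eta=0.01/1.01$, a useful consistency check.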
Energetic discrimination can be exploited when the wrong match is energetically more unstable than the right one, $\Delta E^w\ge \Delta E^r$. In addition, wrong matches can also be discriminated kinetically, i.e. by exploiting activation barriers $\delta_{ij}$ that differ between transitions involving a right and a wrong monomer. In general, complex copying protocols can combine both these mechanisms \cite{sartori2013kinetic,zaher2009fidelity}. Full expressions of the rates are summarized in Fig. \ref{fig:rates}B. \begin{figure}[!ht] \centerline{\includegraphics[width=.48\textwidth]{diagram_rates.pdf}} \caption{{\bf Energy landscape and kinetic rates.} {\bf A} Energetic diagram of a single transition in the reaction network. {\bf B} Corresponding kinetic rates. The transition $j\to i$ can be driven by energy differences and the chemical driving $\mu_{ij}$. Transitions involving a right and a wrong monomer can be characterized by different kinetic barriers $\delta_{ij}$, as well as different energetic landscapes $\Delta E_j^w \neq \Delta E_j^r$. The bare rate $\omega_{ij}$ is the inverse characteristic time scale of each reaction. \label{fig:rates}} \end{figure} Given a steady-state elongation speed $v$, the chemical drivings perform an average work per added monomer $\Delta W = \sum_{\langle ij\rangle}\mu_{ij}(J^r_{ij}+J^w_{ij})/v$, where $J_{ij}^r$ and $J_{ij}^w$ are probability fluxes (see also {\em Methods}). Further, {\color{black} the free-energy change per added monomer at equilibrium would be} $\Delta F_{\rm eq} = - T\log (e^{-\Delta E^r/T}+e^{-\Delta E^w/T} )$. In the limit $v\rightarrow 0$, the system approaches equilibrium and the population of all states is determined by detailed balance. This implies that the equilibrium error is $\eta_{\rm eq} =\exp{[(-\Delta E^w+\Delta F_{\rm eq})/T]}$.
When driving the dynamics out of equilibrium, the error will in general depart from its equilibrium value, leading to a positive total entropy production. In {\em Methods}, we derive that the total entropy production per copied monomer and the error are linked by the relation \begin{equation} T\Delta S_{\rm tot}=\Delta W - \Delta F_{\rm eq}- TD(\eta||\eta_{\rm eq})\ge0\quad , \label{eq:entropyprod} \end{equation} where $D(\eta||\eta_{\rm eq})=\eta \log(\eta/\eta_{\rm eq})+(1-\eta)\log[(1-\eta)/(1-\eta_{\rm eq})]$ is the Kullback-Leibler distance between the non-equilibrium and equilibrium error distributions, which is always non-negative and vanishes only for $\eta=\eta_{\rm eq}$. Eq. \ref{eq:entropyprod} states that the average work performed is greater than the equilibrium free-energy increase by a configurational term, $\Delta W-\Delta F_{\rm eq} \ge T~D(\eta||\eta_{\rm eq})\ge 0$. In this view, the Kullback-Leibler term in Eq. \ref{eq:entropyprod} can be interpreted as the additional free energy stored in a copy characterized by an error different from its equilibrium value. This additional free energy can be recovered by a spontaneous depolymerization process that will stop once the system reaches its equilibrium error \cite{bennett1979dissipation}. Eq. (\ref{eq:entropyprod}) relates the information content of the copy with thermodynamics. However, in many relevant cases, the entropy production is dominated by the {\color{black} ``excess work'' } $\Delta W-\Delta F_{\rm eq}$, so that in practice Eq. (\ref{eq:entropyprod}) reduces to the traditional form of the second law. Consider for example a case in which error correction is very effective, $\eta\ll\eta_{\rm eq}$. In this limit, the Kullback-Leibler term tends to a constant, $D(\eta||\eta_{\rm eq})\to-\log(1-\eta_{\rm eq})>0$. Since usually the equilibrium error is already small, this constant is also small, $D(\eta||\eta_{\rm eq})\approx\eta_{\rm eq}\ll1$.
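The configurational term is straightforward to evaluate numerically; a small sketch illustrating the effective-error-correction limit just discussed (the numerical values are illustrative):

```python
import math

def kl_error(eta, eta_eq):
    """Kullback-Leibler term D(eta || eta_eq) of the entropy-production
    balance, per copied monomer, in units of k_B."""
    return (eta * math.log(eta / eta_eq)
            + (1.0 - eta) * math.log((1.0 - eta) / (1.0 - eta_eq)))

# Effective error correction, eta << eta_eq: D tends to -log(1 - eta_eq),
# which is approximately eta_eq itself when the equilibrium error is small.
print(kl_error(1e-6, 1e-2))  # close to 1e-2
```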
The reason is that, since errors are typically rare, their overall contribution will be small. To better understand the link between errors and thermodynamics, {\color{black} we consider the average entropy production associated with an error incorporation, $\Delta S_{\rm tot}^w=\dot{S}_{\rm tot}^w/v^w$, where $\dot{S}_{\rm tot}^w$ is the entropy production rate coming from incorporation of wrong monomers only. The quantity $\Delta S_{\rm tot}^w$ also obeys a second-law-like inequality} \begin{equation} \label{eq:dsw} T\Delta S_{\rm tot}^w =\Delta W^w-\Delta F_{\rm eq} - T \log(\eta/\eta_{\rm eq}) \ge 0, \end{equation} where $\Delta W^w=\sum_{\langle ij\rangle}J^w_{ij}\mu_{ij}/v^w$ is the average work performed per wrong match (see {\em Methods}). Rearranging terms in Eq. (\ref{eq:dsw}) yields a general expression for the error in terms of thermodynamic observables \begin{equation} \label{eq:ESA} \eta = \eta_{\rm eq}\exp\left[-\Delta S_{\rm tot}^w +(\Delta {W}^w-\Delta F_{\rm eq})/T\right]. \end{equation} This result does not depend on microscopic details of the copying protocol, such as the discrimination barriers $\delta_{ij}$. Eq. (\ref{eq:ESA}) provides a direct link between thermodynamic irreversibility and accuracy of copying. It states that, given a fixed work budget, reduction of the error beyond its equilibrium value always comes at a cost in terms of entropy production. However, the dependence of the error on the thermodynamic quantities is non-trivial to derive from Eq. (\ref{eq:ESA}), as varying the work also affects the entropy production. \begin{figure}[!htb] \centerline{\includegraphics[width=.48\textwidth]{copol_thermo.pdf}} \caption{{\bf Template-assisted polymerization without intermediate states.} {\bf A} {\color{black} Excess work} $\Delta W-\Delta F_{\rm eq}$, entropy production and Kullback-Leibler term of Eq. (\ref{eq:entropyprod}) as a function of the error. Notice that the excess work dominates over the information term. 
{\bf B} Same terms as in {\bf A}, but for wrong monomers only. In this case, the information term dominates the entropy production. {\bf C} Relation between error and entropy production of wrong monomers, together with thermodynamic (red, dashed) and hardware-specific (black, dashed) bounds. In all panels, the error is varied by changing the driving $\mu_{10}$. Parameters are $\delta_{10}=10T$, $\Delta E^r_1=0$, $\Delta E^w_1=3T$.\label{fig:thermo}} \end{figure} The inequality of Eq. (\ref{eq:dsw}) reveals the existence of three possible copying regimes: \begin{enumerate} \item {\it Error amplification}, $\Delta W^w-\Delta F_{\rm eq}>0$ and $\eta>\eta_{\rm eq}$. In this regime, {\color{black} a positive excess work for wrong matches} leads to an error higher than its equilibrium value. While, in this case, dissipating energy is counterproductive in terms of the achieved error, it can be justified by the need to achieve a high copying speed. \item {\it Maxwell demon}, $\Delta W^w-\Delta F_{\rm eq}<0$ and $\eta<\eta_{\rm eq}$. In this regime, the machine {\em extracts} work while lowering the information entropy of the chain with respect to its equilibrium value, $-\eta\log(\eta)<-\eta_{\rm eq}\log(\eta_{\rm eq})$. This regime is reminiscent of a Maxwell demon, since an apparent violation of the second-law-like inequality, Eq. \ref{eq:dsw}, occurs from neglecting entropy production associated with information manipulation (see e.g. \cite{jarz}). Note, however, that the {\color{black} excess work associated with right matches compensates this term}, so that growth of a copolymer cannot result in $\Delta W-\Delta F_{\rm eq}<0$, see Eq.~\ref{eq:entropyprod}. \item {\it Error correction}, $\Delta W^w-\Delta F_{\rm eq}>0$ and $\eta<\eta_{\rm eq}$. This is an error-correction scenario in which work is dissipated to achieve an error lower than the equilibrium error. In this case, which is the most common for biological machines, Eq.~(\ref{eq:ESA}) implies a simple bound on the error, $\eta \ge \eta_{\rm eq}\exp(-\Delta S_{\rm tot}^w )$. \end{enumerate} Given the copying protocol and the kinetic rates, the copying machinery will achieve a certain error $\eta$ and operate in one of these three regimes. Varying the kinetic rates affects both the error and the thermodynamic observables, possibly shifting the operating regime of the machine. To better scrutinize these aspects, we now turn to specific protocols. In the simplest possible example, incorporation occurs in a single step, as sketched on the top panel of Fig.~\ref{fig:general}B (see also \cite{bennett1979dissipation,sartori2013kinetic,andrieux2009molecular,esposito2010extracting,gaspard2014kinetics}). It can be shown that this protocol is always dissipative, $\Delta W^{w}-\Delta F_{\rm eq}\ge0$. In general, wrong monomers can be discriminated by a kinetic barrier $\delta_{10}$ and an energy difference $\Delta E^w-\Delta E^r$ \cite{sartori2013kinetic}. If the kinetic barrier is larger than the energy difference, $\delta_{10}>\Delta E^w-\Delta E^r$, it can be shown that $\eta<\eta_{\rm eq}$, corresponding to {\it error correction}. If it is lower, then $\eta>\eta_{\rm eq}$, which corresponds to {\it error amplification} \cite{sartori2013kinetic}. In Fig.~\ref{fig:thermo}A we plot the different terms of the total entropy production, Eq. (\ref{eq:entropyprod}), for the error correction case. As discussed before, the information contribution to the total entropy production is negligible for small errors. Instead, note in Fig.~\ref{fig:thermo}B that the information term of Eq. (\ref{eq:dsw}) dominates over the work performed per wrong match. This implies that the universal expression for the error, Eq.~(\ref{eq:ESA}), is very well approximated by the lower bound of {\it error correction}, as shown in Fig.~\ref{fig:thermo}C.
The error departs from this bound only when it approaches its hardware-specific minimum $\eta_{\rm min}\approx \mathrm{e}^{-\delta_{10}/T}$. Note that increasing $\delta_{10}$ decreases both $\eta_{\rm min}$ and the dissipative cost $\Delta S_{\rm tot}^{w}$ of copying at an error rate $\eta>\eta_{\rm min}$. \subsection{Energetic bound to proofreading accuracy} In kinetic proofreading, a copying pathway that incorporates monomers at a speed $v_{\rm c}\ge0$ is assisted by a parallel pathway which preferentially removes wrong matches at a speed $v_{\rm p}\le0$, see Fig.~\ref{fig:kp}A. {\color{black} Hereafter the subscript ``p'' indicates that quantities are computed only for the proofreading reaction}. To maintain a negative speed, the proofreading reaction must be driven backward either by performing a work {\color{black} per added monomer} $\Delta W_{\rm p}$, or by exploiting a high free energy difference $\Delta F_{\rm eq}$ between the final and the initial state. By means of proofreading, one can achieve lower errors than those of the copying pathway alone, at the cost of spending additional chemical driving and reducing the net copying speed $v=v_{\rm c}+v_{\rm p}$. \begin{figure}[!htb] \begin{center} \centerline{\includegraphics[width=.48\textwidth]{kp_phase3.pdf}} \caption{{\bf Regimes and bounds of kinetic proofreading.} {\bf A} Scheme of a generic proofreading protocol. Copying occurs at a net speed $v_{\rm c}>0$ through an arbitrary reaction network of intermediate states. After the copy is finalized, a proofreading reaction removes errors at a speed $v_{\rm p}<0$. The net average speed is $v=v_{\rm c}+v_{\rm p}\ge0$. {\bf B} Thermodynamic regimes of kinetic proofreading. The model combines a copying scheme with one intermediate state with kinetic proofreading, as represented in Fig. \ref{fig:general}B. The shaded regions denote the three thermodynamic regimes discussed in the previous section.
Parameters are $\delta_{10}=5T$, $\delta_{21}=0$, $\delta_{02}=5T$, $\Delta E_2^w=\Delta E_1^w=2T$, $\Delta E_2^r=\Delta E_1^r=0$. {\color{black} Recall that states $0$, $1$, and $2$ represent the state before monomer incorporation, the intermediate state, and the final state where the monomer has been incorporated, respectively (see also Fig. \ref{fig:rates}B)}. For each value of the error $\eta$, the other free parameters ($\mu_{10}$, $\mu_{21}$, $\omega_{21}$, $\omega_{02}$) are determined by minimizing the entropy production per copied wrong monomer $\Delta S_{\rm tot}^w$. {\bf C} Minimum error as a function of the proofreading work $\Delta W_{\rm p}=\mu_{02}$. For each curve, energies and activation barriers are fixed parameters as in the previous panel (except for $\delta_{02}$, which varies as indicated). For each value of $\mu_{02}$, the other free parameters ($\mu_{10}$, $\mu_{21}$, $\omega_{21}$, $\omega_{02}$) are determined by numerically minimizing the error $\eta$. Red-dashed and black-dashed lines represent thermodynamic and hardware-specific bounds, respectively. \label{fig:kp} } \end{center} \end{figure} We consider a proofreading protocol consisting of a copying pathway with one intermediate step in addition to the proofreading reaction, see middle panel in Fig.~\ref{fig:general}B. By tuning the rates, this model can operate in all three regimes described in the previous section, as shown in Fig. \ref{fig:kp}B. In particular, in the {\it Maxwell demon} regime, the error can be reduced up to one order of magnitude below its equilibrium value while at the same time extracting work from the wrong copying reaction. Very small errors are achieved in a strongly driven {\it error correction} regime, where the error rate satisfies $\eta \ge \eta_{\rm eq}\exp(-\Delta S_{\rm tot}^w )$. However, at variance with the example of the previous section, here the entropy production quickly becomes much larger than this bound.
The reason is that effective proofreading requires a cycle in the reaction pathway which fundamentally involves dissipation of work. This dissipation, rather than the information term, dominates the entropy production of wrong matches at low errors. This is at variance with the single-step model of the previous section, where no cycles are present and the configurational entropy dominates over dissipation. To derive a better estimate of the error in proofreading, we now focus on the rate of entropy production {\color{black} during the proofreading of wrong matches} $T\dot{S}^w_{\rm p, tot}=-v_{\rm p}^w\Delta W_{\rm p}^w-v_{\rm p}^w[\Delta F_{\rm eq}+ T\log\left({\eta}/{\eta_{\rm eq}}\right)]$. Using that in proofreading $v_{\rm p}^w<0$ while $\dot{S}^w_{\rm p, tot}\ge 0$, we can derive the following bound for the error \begin{equation} \label{eq:kpbound} \eta\ge \eta_{\rm eq}\exp\left( -\frac{\Delta W_{\rm p}+\Delta F_{\rm eq}}{T}\right)\quad , \end{equation} {\color{black} where we have further used that $\Delta W_{\rm p}=\Delta W_{\rm p}^w$ (see {\em Methods} for details) }. This equation is one of the main results of this paper. It shows that error reduction in proofreading is limited by its energetic cost, either in the form of chemical work in the proofreading pathway \cite{zaher2009fidelity} or free energy of the final state, which involves performing work in the copying pathway \cite{hopfield1974kinetic}. Similarly to Eq. (\ref{eq:ESA}), this bound does not depend on details of the copying protocol. In Fig. \ref{fig:kp}C, we show the error of the specific proofreading model of Fig. \ref{fig:kp}B as a function of the proofreading work. One can appreciate that the bound from Eq.~(\ref{eq:kpbound}) is tightly met for a wide range of errors. For very small values of $\Delta W_{\rm p}$, when $v_{\rm p}>0$ and no proofreading occurs, the bound no longer applies. Finally, for very large work values, the error approaches the hardware-specific minimum $\eta_{\rm min}$.
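The bound of Eq.~(\ref{eq:kpbound}) is a one-line formula; a direct transcription, with energies measured in units of $k_{\rm B}T$ and illustrative numbers:

```python
import math

def proofreading_error_bound(eta_eq, work_p, dF_eq, T=1.0):
    """Thermodynamic lower bound on the proofreading error:
    eta >= eta_eq * exp(-(work_p + dF_eq) / T)."""
    return eta_eq * math.exp(-(work_p + dF_eq) / T)

# Example: an equilibrium error of 1e-2 and 10 k_B*T spent in the
# proofreading pathway limit the achievable error to about 4.5e-7.
print(proofreading_error_bound(1e-2, 10.0, 0.0))
```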
In this case, the value of $\eta_{\rm min}$ can be obtained from the explicit solution of the model (see derivation in {\em Methods}). In the strongly driven regime, the error $\eta$ decreases with increasing proofreading work $\Delta W_{\rm p}$. At the same time, $v_{\rm p}$ becomes more negative as more copies are proofread. The minimum error is thus obtained in the limit of vanishing elongation speed, when the proofreading speed is negative enough to arrest copying, $v_{\rm p}=-v_{\rm c}$. Imposing this condition gives the hardware-specific minimum error \begin{equation} \eta_{\rm min}\approx e^{(-\delta_{10}+\delta_{02}-\Delta E^w +\Delta E^r)/T}\quad . \label{eq:hardware} \end{equation} This expression shows that the error of the first copying step, approximately equal to $e^{-\delta_{10}/T}$ because of the large kinetic barrier, is reduced by a factor $e^{(\delta_{02}-\Delta E^w+\Delta E^r)/T}$ due to the additional discrimination of the proofreading reaction. \section{Discussion} In this paper, we analyzed template-assisted polymerization, where copies are cyclically produced by an arbitrarily complex reaction network. This broadly extends {\color{black} Bennett's original copolymerization model \cite{bennett1979dissipation} and further studies \cite{andrieux2008nonequilibrium,cady2009open,andrieux2009molecular,esposito2010extracting,sartori2013kinetic,andrieux2013information,gaspard2014kinetics}} where monomer incorporation occurs in a single step. In particular, the results presented here allow for analyzing the thermodynamics of realistic biological copying protocols, where a complex reaction network is responsible for error correction. At variance with models for the copy of a single monomer \cite{hopfield1974kinetic,ninio1975kinetic, murugan2012speed,murugan2014discriminatory}, in template-assisted polymerization {\color{black} the number of possible states of the chain grows exponentially at steady-state.
This exponential increase} causes the appearance of an information term in the formula for the total entropy production, Eq. \ref{eq:entropyprod}. A similar term appears in the context of the Landauer principle out of equilibrium \cite{esposito_landauer}, and was interpreted as the amount of information necessary to shift from the equilibrium distribution to the non-equilibrium one. Eq. \ref{eq:entropyprod} should not be confused with a formally similar one {\color{black} derived by Gaspard and Andrieux} \cite{andrieux2008nonequilibrium}, which represents a physically different quantity, i.e., the entropy of the copy given the template. {\color{black} This difference is physically important: the information term in Eq. \ref{eq:entropyprod} can be thought of as a measure of distance from equilibrium, as it is equal to zero at equilibrium and positive otherwise. In contrast, the information term in Gaspard and Andrieux's formula goes to zero only in the limit of vanishing error rate.} The main result of this paper is that, thanks to the explicit dependence on the error, the second law of thermodynamics can be used to obtain general expressions and bounds on the copy error. This allows us to identify three different copying regimes: error amplification, error correction, and Maxwell demon, all of which can be achieved by kinetic proofreading. Considering cyclic copying is analogous to considering cyclic transformations when studying the efficiency of thermodynamic engines. Besides being the natural choice to properly describe the thermodynamics of the process, template-assisted polymerization allows for out-of-equilibrium copying regimes which are absent in single-monomer models. For example, a lower bound to the error analogous to Eq. \ref{eq:kpbound} is generally valid in closed networks \cite{ehrenberg_proof,qian_noise}. In template-assisted polymerization, this limit can be broken when the proofreading reaction reverts its flux, as seen in Fig.
\ref{fig:kp}D for small values of the work. We briefly discuss the relevance of our results for interpreting experimental data. Many biological copying pathways are driven by the hydrolysis of a single GTP molecule. The chemical work spent in this process is $\Delta\mu=\Delta\mu^{0}+k_{\rm B}T\log\left(\frac{[GTP]}{[GDP][Pi]}\right)$. Taking as reference the bare potential of ATP, $\Delta \mu^{0}=14.5\,k_{\rm B}T$, and typical concentrations $[GTP]=1$mM, $[GDP]=0.01$mM and $[Pi]=1$mM, we obtain $\Delta \mu_{GTP}\approx 20k_{\rm B}T$. In a protocol involving proofreading, this information and Eq.~\ref{eq:kpbound} can be used to set a lower bound for the error. Assuming that the energy of GTP is all spent to increase the free energy of the chain, $\Delta F\approx\Delta\mu_{GTP}$, we obtain that the total error reduction is $\eta/\eta_{\rm eq}\ge10^{-9}$. The value of this bound is smaller than typically observed errors, which reasonably suggests that not $100\%$ of the energy of hydrolysis is utilized to increase the free energy of the system. Given the flexibility of our framework, many complex copying mechanisms studied in the literature as non-cyclic processes \cite{johansson2008rate,pape1999induced,zaher2009fidelity} can be directly considered as template-assisted polymerization problems and studied from the point of view of thermodynamic efficiency. One limitation of our treatment is the lack of long-term memory: while processing a monomer, the machine does not keep track of the past errors encountered along the chain. A more general scheme could exploit correlations in the template sequence to reduce the error. An example of this is backtracking \cite{galburt2007backtracking,depken,mellenius2015dna}, where regions of the template containing many errors are entirely reprocessed. Generalization of template-assisted polymerization to these cases will be the subject of a future study.
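The GTP estimate above can be reproduced in a few lines. The sketch below assumes that the quoted concentrations enter the mass-action ratio in mM units (the implicit reference units consistent with the quoted $\approx 20\,k_{\rm B}T$); this unit convention is an assumption of the sketch, not stated explicitly in the text.

```python
import math

# Chemical work of GTP hydrolysis, all energies in units of k_B*T.
# Assumption: concentrations enter the log in mM units.
dmu0 = 14.5                      # bare potential (ATP reference), k_B T
GTP, GDP, Pi = 1.0, 0.01, 1.0    # concentrations in mM
dmu_gtp = dmu0 + math.log(GTP / (GDP * Pi))
print(f"Delta mu_GTP ~ {dmu_gtp:.1f} k_B T")

# If all of this work is stored as chain free energy, DF ~ dmu_gtp,
# Eq. (kpbound) limits the total error reduction:
reduction = math.exp(-dmu_gtp)
print(f"eta/eta_eq >= {reduction:.0e}")   # of order 1e-9
```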
The thermodynamic relations derived in this paper fundamentally limit the capabilities of stochastic machines to reduce and proofread errors, and are reminiscent of similar bounds derived for adaptation error in sensory systems \cite{lan2012energy}. {\color{black} It will be of interest to understand whether our results can be applied to error correction in sensing. For example, it is known that sensory pathways exploit proofreading both in chemosensing by isolated receptors \cite{morathierry} and by cooperative ones \cite{lalanne2015chemodetection}}. Clarifying the links between these problems will constitute an important step towards formulating general thermodynamic principles \cite{Parrondo2015} limiting the accuracy of non-equilibrium information-processing. \section{Methods} \subsection{Steady-state solution of template-assisted polymerization} In this section, we briefly outline how to solve the template-assisted polymerization model. We start by writing the master equations governing the evolution of probabilities of all main states $P(\dots)$, and those of the intermediate states $P(\dots r_i)$ and $P(\dots w_i)$. The probability flux between two arbitrary intermediate states $\dots r_j$ and $\dots r_i$ is $\mathcal{J}^r_{ij}(\dots)=k^r_{ij}P(\dots r_j)-k^r_{ji}P(\dots r_i)$, and analogously for wrong matches (see Fig.~\ref{fig:general}A). The master equations for the intermediate states can be expressed in a compact form in terms of these fluxes \begin{eqnarray} \label{eq:intermediate} \dot{P}(\dots r_i)=\sum\limits_{j=0}^{n+1} \mathcal{J}^r_{ij}(\dots)\;\; ,\;\; \dot{P}(\dots w_i)=\sum\limits_{j=0}^{n+1}\mathcal{J}^w_{ij}(\dots) \end{eqnarray} where the upper dot denotes the time derivative. Note that the sum extends to $j=0$ and $j=n+1$, which with an abuse of notation correspond to the main states neighboring the network of intermediate states, $\dots r_0\equiv\dots w_0\equiv\dots\ $, $\dots r_{n+1}\equiv\dots r$ and $\dots w_{n+1}\equiv\dots w$.
Master equations for main states are easily written by distinguishing states ending with a wrong match from those ending with a right match \begin{eqnarray}\label{eq:main} \dot{P}(\dots w)&=&\sum\limits_{j=0}^{n+1}\left[ \mathcal{J}^w_{n+1j}(\dots) -\mathcal{J}^r_{j0}(\dots w)-\mathcal{J}^w_{j0}(\dots w)\right],\nonumber\\ \dot{P}(\dots r)&=&\sum\limits_{j=0}^{n+1}\left[\mathcal{J}^r_{n+1j}(\dots) -\mathcal{J}^r_{j0}(\dots r)-\mathcal{J}^w_{j0}(\dots r)\right] \end{eqnarray} where the three sets of fluxes in each equation correspond to the finalized incorporation of the last monomer in the main state, and to the attempted incorporation of a right or wrong monomer. Eqs. (\ref{eq:intermediate}) are similar to those written for biochemical models, while Eqs. (\ref{eq:main}) are similar to those used for polymer growth. The system of equations (\ref{eq:intermediate}) and (\ref{eq:main}) can be solved at steady state, $\dot{P}=0$, by means of the {\it ansatz} that errors are uncorrelated. Given an error $\eta$, to be determined {\it a posteriori}, we impose that the steady-state probability of a string of length $N$ with $N^w$ errors is $P(\dots)\propto\eta^{N^w}(1-\eta)^{N-N^w}$. This implies \begin{equation} P(\dots r)=P(\dots)(1-\eta)\;\;{\rm and}\;\; P(\dots w)=P(\dots)\eta\;. \label{eq:ansatz1} \end{equation} For the intermediate states we make the additional {\it ansatz} \begin{align} P(\dots r_i)=P(\dots)p^r_i\;\;{\rm and}\;\; P(\dots w_i)=P(\dots)p^w_i\;, \label{eq:ansatz2} \end{align} where $p_i^r$ and $p_i^w$ are the occupancies of the intermediate states $1\le i\le n$, assumed to be independent of $P(\dots)$. Substituting Eqs. \ref{eq:ansatz1} and \ref{eq:ansatz2} in Eqs.~\ref{eq:intermediate} yields a system of $2n$ linear equations, from which the occupancies can be expressed as functions of the kinetic rates and the error $\eta$, still to be determined.
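The uncorrelated-error {\it ansatz} can be checked directly: under the weight $\eta^{N^w}(1-\eta)^{N-N^w}$, appending a right (wrong) match to any string multiplies its weight by $1-\eta$ (by $\eta$), which is exactly the content of Eq.~\ref{eq:ansatz1}. A minimal Python check:

```python
# Unnormalized steady-state weight of a copied string under the
# uncorrelated-error ansatz: P(...) ~ eta^Nw * (1 - eta)^(N - Nw).
eta = 0.1

def weight(seq):
    """seq is a list of booleans: True marks a wrong match."""
    n_wrong = sum(seq)
    return eta**n_wrong * (1 - eta)**(len(seq) - n_wrong)

seq = [False, True, False, False]   # an arbitrary string
# Appending a right/wrong match reproduces Eq. (ansatz1).
assert abs(weight(seq + [False]) - weight(seq) * (1 - eta)) < 1e-15
assert abs(weight(seq + [True]) - weight(seq) * eta) < 1e-15
print("ansatz consistency checks passed")
```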
It is now convenient to define the occupation fluxes $J_{ij}^r$ as \begin{equation} \label{eq:occflux} J^r_{ij}=\mathcal{N}(k_{ij}^r p_j -k_{ji}^r p_i)\quad, \end{equation} where $\mathcal{N}=\left[1+\sum_{i=1}^{n}(p^r_i+p_i^w)\right]^{-1}$ is a normalization constant such that the probabilities of all main and intermediate states sum to one, $\sum_{\dots}\left[P(\dots)+\sum_{i=1}^{n}\left(P(\dots r_i)+P(\dots w_i)\right)\right]=1$, and thus $\sum_{\dots}P(\dots)=\mathcal{N}$. Occupation fluxes are related to the probability fluxes via $\mathcal{J}^r_{ij}(\dots)=P(\dots) J^r_{ij}/\mathcal{N}$ and analogously for wrong matches. The speed of right and wrong monomer incorporations can now be expressed as $v^r = \sum_iJ^r_{n+1i}=\sum_iJ^r_{i0}$ and $v^w=\sum_iJ^w_{n+1i}=\sum_iJ^w_{i0}$. Replacing the {\it ansatz} in Eqs. \ref{eq:main} and using these definitions results in Eq.~\ref{eq:error}, which can finally be used to determine the error. \subsection{Entropy production rate} To calculate the steady-state entropy production rate, we start with the general expression \cite{Schnakenberg1974} \begin{eqnarray}\label{algo1} \dot{S}_{\rm tot}=\frac12\sum_{\dots,i,j}\left[\mathcal{J}^r_{ij}(\dots)\log\left(\frac{k^r_{ij}P(\dots r_j)}{k^r_{ji}P(\dots r_i)}\right)\right. +\nonumber\\ \left. + \mathcal{J}^w_{ij}(\dots)\log\left(\frac{k^w_{ij}P(\dots w_j)}{k^w_{ji}P(\dots w_i)}\right) \right]. \end{eqnarray} We now factorize the sum into one over strings (denoted $\sum_{\dots}$) and one over intermediate states (where $\langle ij\rangle$ denotes links). Using the definition of the occupation fluxes, Eq. \ref{eq:occflux}, we obtain: \begin{align}\label{algo2} \dot{S}_{\rm tot}=\frac{\sum_{\dots}P(\dots)}{\mathcal{N}}\sum_{\langle ij\rangle}&\Bigg[J^r_{ij}\log\left(\frac{k^r_{ij}p^r_j}{k^r_{ji}p^r_i}\right)\nonumber\\&+J^w_{ij}\log\left(\frac{k^w_{ij}{p}^w_j}{k^w_{ji}{p}^w_i}\right)\Bigg]. \end{align} Since the sum over all states is normalized to one, we have that $\sum_{\dots}P(\dots)=\left[1+\sum_{i=1}^{n}(p^r_i+p_i^w)\right]^{-1}$.
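To make the Schnakenberg expression concrete, here is a minimal Python sketch for a toy driven three-state cycle (an illustrative stand-in, not the full copying network): it relaxes the master equation to steady state and evaluates the entropy production rate, which comes out strictly positive because the cycle breaks detailed balance.

```python
import math

# Toy driven three-state cycle; all rate values are illustrative
# assumptions. k[(i, j)] is the rate from state i to state j.
k = {(0, 1): 2.0, (1, 0): 1.0,
     (1, 2): 2.0, (2, 1): 1.0,
     (2, 0): 2.0, (0, 2): 1.0}

# Relax the master equation to its steady state by Euler iteration.
p = [0.5, 0.3, 0.2]
dt = 0.01
for _ in range(50000):
    dp = [0.0, 0.0, 0.0]
    for (i, j), kij in k.items():
        flux = kij * p[i] - k[(j, i)] * p[j]   # net flux i -> j
        dp[i] -= flux * dt
        dp[j] += flux * dt
    p = [pi + di for pi, di in zip(p, dp)]

# Entropy production rate: (1/2) sum_ij J_ij log(k_ij p_i / (k_ji p_j)).
# Same content as the Schnakenberg formula, with fluxes oriented i -> j.
S_tot = 0.0
for (i, j), kij in k.items():
    J = kij * p[i] - k[(j, i)] * p[j]
    S_tot += 0.5 * J * math.log((kij * p[i]) / (k[(j, i)] * p[j]))

print(f"steady state p = {[round(x, 4) for x in p]}, S_tot = {S_tot:.4f}")
```

For these rates the steady state is uniform and $\dot{S}_{\rm tot}=\ln 2>0$; making all forward and backward rates equal would restore detailed balance and give $\dot{S}_{\rm tot}=0$.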
Using the definition of $\mathcal{N}$ in the previous section, the term outside the brackets is equal to $1$. Substituting the definition of the rates of Fig.~\ref{fig:rates} into (\ref{algo2}) yields \begin{eqnarray} \label{eq:epr} \dot{S}_{\rm tot}&=&\sum_{\langle ij\rangle} (J^r_{ij}+J^w_{ij})\mu_{ij}/T+\sum_{\langle ij\rangle}J^r_{ij}\log\left(\frac{p^r_j}{ p^r_i}\right)\nonumber\\ &+&\sum_{\langle ij\rangle}J^w_{ij}\log\left(\frac{p^w_j}{ p^w_i}\right) +\sum_{\langle ij\rangle}J^r_{ij}\left(\Delta E^r_j-\Delta E^r_i\right)/T\nonumber\\ &+&\sum_{\langle ij\rangle}J^w_{ij}\left(\Delta E^w_j -\Delta E^w_i \right)/T\quad. \end{eqnarray} For an isolated network at steady state, all terms but the first one vanish by flux conservation \cite{Schnakenberg1974}. However, in cyclic copying the states $i=0$ and $i=n+1$ receive a finite flux from the rest of the transition network, see Fig. \ref{fig:general}A. Using $\sum_j J_{ij}^r=0$ for $1\le i\le n$, the definitions of $v^r$ and $v^w$, and Eq.~\ref{eq:error}, we obtain \begin{eqnarray}\label{algo} \dot{S}_{\rm tot}&=&\sum_{\langle ij\rangle}(J^r_{ij}+J^w_{ij})\mu_{ij}/T-\eta v[\log\left(\eta\right) + \Delta E^w/T]\nonumber\\ &-&(1-\eta)v[\log\left(1-\eta\right) + \Delta E^r/T]\quad. \end{eqnarray} Using the definitions of the equilibrium error and free energy difference per step given in {\em Results}, we arrive at \begin{equation} T\dot{S}_{\rm tot}=v[\Delta W - \Delta F_{\rm eq}-T~D(\eta||\eta_{\rm eq})]\quad. \end{equation} Defining the entropy production per step as $\Delta S_{\rm tot}=\dot{S}_{\rm tot}/v$ leads to Eq. \ref{eq:entropyprod}. Eq. (\ref{eq:ESA}) can be derived following the same procedure, but considering the contribution to the entropy production coming from the incorporation of wrong matches, $\dot{S}^w_{\rm tot}=\frac12\sum_{\dots,i,j} \mathcal{J}^w_{ij}(\dots)\log[k^w_{ij}P(\dots w_j)/(k^w_{ji}P(\dots w_i))]$, from which we also define $\Delta S_{\rm tot}^w=\dot{S}^w_{\rm tot}/v^w$.
Note that $\dot{S}^w_{\rm tot}\ge 0$, since all terms of the sum in its definition are non-negative. \subsection{Thermodynamic bound for proofreading} In copying schemes assisted by kinetic proofreading, the proofreading reaction removes incorporated monomers at an average speed {\color{black} $v_{\rm p} = J^w_{n+1\ 0}+ J^r_{n+1\ 0}$, where the subindex ``p'' denotes quantities that correspond to the proofreading reaction. The average proofreading speed can be written as a sum of contributions coming from right and wrong monomers, $v_{\rm p}=v_{\rm p}^r+v_{\rm p}^w<0$. Proofreading is fueled by a chemical driving $\mu_{0\ n+1}$, which is the same for right and wrong matches (recall that the proofreading reaction is driven backward). By direct substitution, one can show that the average work per proofread monomer is $\Delta{W}_{\rm p}=\Delta{W}_{\rm p}^w=\Delta{W}_{\rm p}^r=\mu_{0\ n+1}$.} According to our convention, monomer removal corresponds to $v_{\rm p}<0$. In an effective proofreading scheme, errors are removed on average, $v_{\rm p}^w=J^w_{n+1\ 0}<0$. Consider now the entropy production rate of proofreading wrong monomers, $\dot{S}^w_{\rm p, tot}=J^w_{n+1\ 0}\log[(p_0^wk_{ n+1\ 0}^w)/(p^w_{n+1}k^w_{0\ n+1})]$. Like every term of $\dot{S}_{\rm tot}$, this quantity satisfies a second-law-like inequality, $\dot{S}^w_{\rm p, tot}\ge0$. By means of this inequality, and using $v_{\rm p}^w<0$, $p^w_0=1$ and $p^w_{n+1}=\eta$, we obtain the general proofreading bound of Eq.~\ref{eq:kpbound}. \subsection{Solution of the proofreading model} To solve the proofreading protocol in Fig.~\ref{fig:kp}A, we start from Eqs. \ref{eq:intermediate}, which at steady state imply $J^r_{10}-J^r_{21}=0$ and $J^w_{10}-J^w_{21}=0$. Solving for the probabilities of the intermediate states yields $p_1^r=(k^r_{10}+(1-\eta)k^r_{12})/(k^r_{01}+k^r_{21})$ and $p_1^w=(k^w_{10}+k^w_{12}\eta)/(k^w_{01}+k^w_{21})$.
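The occupancy $p_1^r$ just quoted follows from solving the linear steady-state balance directly; a quick numerical sanity check with hypothetical rate values:

```python
# Check of p_1^r for the n = 1 proofreading network: at steady state
# J^r_10 - J^r_21 = 0, with boundary occupancies p_0 = 1 and
# p_2 = 1 - eta. The rate values below are illustrative assumptions.
eta = 0.05
k01, k10, k12, k21 = 1.0, 0.5, 0.3, 2.0   # k_ij stands for k^r_{ij}

# Flux convention of Eq. (occflux): J_ij ~ k_ij p_j - k_ji p_i, so the
# balance k10*p0 - k01*p1 - k21*p1 + k12*p2 = 0 is linear in p_1.
p0, p2 = 1.0, 1.0 - eta
p1_balance = (k10 * p0 + k12 * p2) / (k01 + k21)

# Closed-form expression quoted in the text.
p1_formula = (k10 + (1.0 - eta) * k12) / (k01 + k21)

assert abs(p1_balance - p1_formula) < 1e-12
print(f"p_1^r = {p1_formula:.4f}")
```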
The speeds of incorporation of wrong and right monomers are $v^w=\mathcal{N}[k^w_{20}+k^w_{21}p_1^w-\eta(k^w_{12}+k^w_{02})]$ and $v^r=\mathcal{N}[k^r_{20}+k^r_{21}p_1^r-(1-\eta)(k^r_{12}+k^r_{02})]$, where $\mathcal{N}$ is the previously defined normalization constant. Substituting these expressions in Eq. \ref{eq:error} yields \begin{equation} \label{eq:kp_hop_sol} \frac{\eta}{1-\eta}=\frac{k^w_{20}+k^w_{21}p_1^w-\eta(k^w_{12}+k^w_{02})}{k^r_{20}+k^r_{21}p_1^r-(1-\eta)(k^r_{12}+k^r_{02})}\;, \end{equation} which can be easily solved for the error $\eta$. To scrutinize the effectiveness of proofreading, we parametrize the rates as in Fig.~\ref{fig:rates}B. Considering the strongly-driven regime $\mu_{21},\mu_{02}\gg T$, Eq. (\ref{eq:kp_hop_sol}) becomes \begin{equation} \label{eq:kp_hop_sol2} \frac{\eta}{1-\eta}=\frac{\omega_{21} p_1^w -\eta \omega_{02} e^{(\mu_{02}-\mu_{21}+\Delta E^w)/T} } {\omega_{21} p_1^r -(1-\eta) \omega_{02} e^{(\mu_{02}-\mu_{21}+\Delta E^r+\delta_{02})/T}}\;. \end{equation} From Eq. \ref{eq:kp_hop_sol2}, one can deduce that the error $\eta$ is a decreasing function of the combination of parameters $K=(\omega_{02}/\omega_{21}) e^{(\mu_{02}-\mu_{21})/T}$, which tunes the intensity of proofreading. However, increasing $K$ also increases the absolute value of the proofreading speed $v_{\rm p}=\mathcal{N}[k^r_{20}+k^w_{20}-k^r_{02}(1-\eta)-k^w_{02}\eta]$, so that $K$ can be increased only up to the point where the net elongation speed vanishes. Finding the maximum value of $K$ from the condition $v=0$ and substituting it in Eq. (\ref{eq:kp_hop_sol2}) leads to Eq. \ref{eq:hardware}. In this case, the error of the first copying step is determined by the large kinetic barrier, $\eta_1\approx e^{-\delta_{10}/T}$, see e.g. \cite{cady2009open,sartori2013kinetic}. \begin{acknowledgments} We thank L. Granger, R. Ma, L. Peliti, A. Puglisi and three anonymous referees for useful comments on a preliminary version of the manuscript.
SP acknowledges partial support from Spanish research ministry through grant FIS2012-37655-C02-01. \end{acknowledgments}
\section{Introduction} \label{sect_intro} Cosmological gamma-ray bursts (GRBs) radiate most of their energy in the soft gamma-ray band between 100~keV and 10 MeV \citep{goldstein_2012, ackermann_2013b}. The MeV burst typically lasts seconds or minutes and is then followed by broad-band afterglow emission, which is associated with the deceleration of the explosion ejecta by the ambient medium (\citealt{meszaros_1997}). Afterglow observations can be used to estimate the main parameters of the GRB explosion --- its Lorentz factor, kinetic energy, and the density of the ambient medium. Interpretation of observations became, however, challenging after the {\it Swift} satellite discovered bizarre X-ray and optical light curves of the early afterglow \citep{gehrels_2009}. {\it Swift} observations challenge the standard assumption that afterglow is emitted by the decelerating shock wave in the external medium; instead, it was proposed that afterglow is produced by a long-lived reverse shock \citep{uhm_2007, genet_2007}. Disentangling the two possible mechanisms is difficult, and a reliable method for deducing the explosion parameters from observations has been lacking. The discovery of GeV flashes by the {\it Fermi} satellite opens a new way for solving this problem. The observed flashes have similar light curves with a special shape: they begin with a delay and sharply peak well before the end of the MeV burst; then the light curve exhibits a long gradual decay \citep{ackermann_2013b}. It is natural to associate the extended GeV emission with the external blast wave \citep{zou_2009, kumar_2009, ghisellini_2010} although this scenario faced difficulties in explaining the early peak of the GeV flash \citep{gao_2009, he_2011, maxham_2011}. The puzzling peak was recently explained by the exponential $e^\pm$ loading of the external medium by the prompt MeV radiation ahead of the blast wave \citep{beloborodov_2014}. 
Since the radius of pair loading is well defined and can be determined from the prompt GRB observations, the GeV flash provides a standard ``ruler'' and a unique opportunity for disentangling the explosion parameters. Detailed modeling of the pair-loaded blast wave was performed for GRB~080916C \citep{beloborodov_2014} and GRB~130427A \citep{vurm_2014}. It was shown that the GeV flash is emitted by the {\it thermal} plasma heated in the forward shock, which is cooled by inverse Compton (IC) scattering of photons of lower energies. Along with the IC flash, the blast wave should emit synchrotron radiation, in particular in the optical band. Pair loading shapes the optical synchrotron light curve similarly to the IC GeV flash. As a result, the model predicts an optical flash with a sharp peak at a time close to the GeV peak. Following the peak, the optical emission should quickly decay, as the pair-loading effect is reduced, and the steep decay should be followed by a flatter light curve of the normal pair-free afterglow. The predicted GeV+optical flash has been detected in GRB~130427A \citep{vestrand_2014}, and the radiative transfer simulation of \citet{vurm_2014} reproduced both the GeV and optical light curves of the flash. The consistency of the model with the data is remarkable given the fact that it has only four adjustable parameters: (1) the ambient wind density parameter $A=\rho R^2$, (2) the explosion Lorentz factor $\Gamma_{\rm ej}$, (3) the prompt emission efficiency $\epsilon_{\rm rad}=E_{\rm GRB}/(E_{\rm GRB}+E_{\rm ej})$, which determines the isotropic kinetic energy of the explosion $E_{\rm ej}$ for a GRB with observed isotropic energy $E_{\rm GRB}$, and (4) the magnetization of the blast wave $\epsilon_B$. This fourth parameter only enters the calculation of the optical light curve and does not affect the GeV flash; therefore, the model of GeV emission has only three adjustable parameters. Other parameters of the explosion, e.g. 
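The definition of the prompt efficiency above, $\epsilon_{\rm rad}=E_{\rm GRB}/(E_{\rm GRB}+E_{\rm ej})$, directly fixes the ejecta kinetic energy once $E_{\rm GRB}$ is observed. A small illustration (the helper name `ejecta_energy` is ours; the numerical values are those listed for GRB~130427A in Table~\ref{tab_params}):

```python
# Invert the prompt-efficiency definition eps_rad = E_GRB/(E_GRB + E_ej)
# to obtain the isotropic-equivalent kinetic energy of the ejecta.
def ejecta_energy(E_GRB, eps_rad):
    return E_GRB * (1.0 - eps_rad) / eps_rad

# Example with the GRB 130427A values quoted in Table 1.
E_GRB = 0.85e54          # erg (isotropic equivalent)
eps_rad = 0.8
print(f"E_ej = {ejecta_energy(E_GRB, eps_rad):.2e} erg")  # ~2.1e53 erg
```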
the deceleration radius $R_{\rm dec}$ and the pair loading factor $Z_\pm(R)$, are not adjustable --- they are calculated from the blast wave dynamics and the observed prompt radiation. In this paper, we investigate all observed GRBs with well measured light curves of the flash. Our goal is to further test the proposed model and, if the model fits the data, to determine the parameters of the GRB explosions. We perform radiative transfer calculations individually for each burst and search for a blast wave model that reproduces the observed light curves of the GeV emission and, where data are available, of the optical flash. Section~2 describes the sample of GRBs, and Section~3 describes the model and the method of our analysis. The results are presented in Section~4 and further discussed in Section~5. \section{GRB sample} \label{sect_data} The choice of bursts for our sample is based on two criteria: (1) good {\it Fermi} LAT data, which provide the shape of the GeV flash, and (2) measured cosmological redshift $z$. In addition, we looked for bursts with available early optical data. We identified seven GRBs useful for our analysis: 080916C, 090510, 090902B, 090926A, 110731A, 120711A, and 130427A. The {\it Fermi} LAT data are taken from the published LAT catalogue \citep{ackermann_2013b} for GRBs 080916C, 090510, 090902B, 090926A, 110731A. Two recent GRBs 120711A and 130427A are not in the catalogue; their published LAT and optical flash data are taken from \citet{ackermann_2014}, \citet{martincarrillo_2014}, and \citet{vestrand_2014}. The prompt radiation data (which are used as an input of our transfer simulations) are from \citet{abdo_2009} for GRB~080916C, \citet{ackermann_2010} for GRB~090510, \citet{abdo_2009b} for GRB~090902B, \citet{ackermann_2011} for GRB~090926A, \citet{ackermann_2013} for GRB~110731A, and \citet{gruber_2012} for GRB~120711A. The prompt data for GRB~130427A are taken from \citet{golenetskii_2013} and \citet{ackermann_2014}.
These papers provide an approximate description of the spectral evolution in each GRB, using spectral fits in temporal bins. It is useful to take this evolution into account in our transfer simulations; however, its details are not essential and weakly affect the results. For instance, in GRB~090510 it is safe to neglect the small precursor, as it carries little energy. In GRB~110731A we merged the first two time bins ``A'' and ``B'' into a single bin to smooth out the jumps in luminosity and spectral parameters reported by \citet{ackermann_2013}. These jumps mostly result from two different ways of fitting the observed spectrum: the bin A spectrum was fitted by a power law with an exponential cutoff while the bin B spectrum was fitted by a Band function whose high-energy slope is affected by the inclusion of the LAT data (see \citealt{ackermann_2013}).\footnote{The inclusion of GeV data in the Band component can be dangerous, as the GeV signal contains a separate component from the external blast wave, which can corrupt the inferred parameters of the Band MeV spectrum.} The optical and X-ray afterglow data are from \citet{greiner_2009} for GRB~080916C, \citet{depasquale_2010} for GRB~090510, \citet{pandey_2010} for GRB~090902B, \citet{swenson_2010} for GRB~090926A, \citet{ackermann_2013} for GRB~110731A, \citet{martincarrillo_2014} for GRB~120711A, and \citet{vestrand_2014} for GRB~130427A. We use the afterglow data to estimate the soft radiation field in the source, which dominates the IC cooling of the blast wave after the prompt MeV radiation. The afterglow reconstruction uses the simple power-law interpolation of the observed spectra and light curves. Extrapolation of the available data to earlier times was needed for several bursts. The disadvantage of this method is that the accuracy of the simple power-law extrapolation is uncertain, in particular for GRBs 090510, 080916C, 090902B, and 090926A.
The advantage is that the method is well defined, model-independent, and minimizes special treatment of bursts with incomplete afterglow data. Fortunately, the rough reconstruction of the target soft radiation for IC scattering does not create large uncertainties in the light curve of the predicted GeV flash. In particular, in the fast cooling regime, the details of the target spectrum play almost no role \citep{beloborodov_2014}. \section{The model} Pair loading of the external medium by the MeV radiation can be accurately calculated for any observed GRB with a known redshift, using the observed luminosity (isotropic equivalent) and spectrum of the prompt radiation. For any optically thin medium, the pair loading factor $Z_\pm=n_\pm/n$ does not depend on the medium density $n$, and is only a function of radius $R$ \citep{thompson_2000,beloborodov_2002}. The function $Z_\pm(R)$ is obtained by solving radiative transfer for the prompt MeV radiation. The transfer weakly affects the observed prompt radiation, however it strongly impacts the medium by depositing momentum and creating new particles. The pair loading factor is huge at small radii, $Z_\pm\sim 10^4-10^5$, and is steeply reduced outside a characteristic ``pair-loading'' radius. The fast evolution of the $e^\pm$ density shapes the peak of the GeV flash and its initial decay, as described in detail in \citet{beloborodov_2014}. For a typical bright burst observed by {\it Fermi} LAT, the peak radius $R_p$ is comparable to $10^{16}$~cm. It is smaller than the deceleration radius of the blast wave, $R_{\rm dec}$, and therefore the GeV flash peaks before the onset of normal afterglow, which is shaped by the blast wave deceleration. The mechanism of the GeV flash may be summarized as follows. The blast wave has a high Lorentz factor $\Gamma$ and heats the pair-loaded external medium to a relativistic temperature. The medium is cooled behind the shock via inverse Compton (IC) and synchrotron emission. 
IC cooling is extremely fast as long as the blast wave is exposed to the prompt MeV radiation (which is produced at much smaller radii and gradually overtakes the blast wave, with relative speed $\Delta v=c/2\Gamma^2$). When the prompt GRB radiation fully overtakes and decouples from the blast wave, the shock-heated plasma is cooled via the slower synchrotron-self-Compton (SSC) emission. The SSC regime occurs in the far tail of the GeV flash; the tail is well observed for some GRBs, in particular in GRB~130427A. In contrast to the older concept that GeV emission comes from nonthermal particles accelerated in the shock, \citet{beloborodov_2014} showed that the main, {\it thermal} population produces the GeV-TeV emission. Therefore a large fraction of the blast wave energy is radiated in the high-energy bands. The thermal plasma also produces the synchrotron optical flash. Electrons/positrons injected with the thermal Lorentz factor $\gamma_{\rm th}$ strongly dominate the synchrotron spectrum at frequencies $\nu<\nu_m\sim \Gamma\gamma_{\rm th}^2 eB/m_ec$, which covers the optical band during the flash. The dominance of radiation from the thermal plasma makes the model simple to test, without invoking phenomenological parameters describing the nonthermal tail. Nonthermal particles dominate synchrotron emission at $\nu>\nu_m$; this regime applies to the late optical afterglow when $\nu_m$ decreases below $10^{15}$~Hz. During the flash, $\nu>\nu_m$ holds only at high (X-ray) frequencies. An extended nonthermal tail of the electron distribution can produce synchrotron radiation from the X-ray band up to $\sim 0.1$~GeV, contributing to the flux detected by LAT. This radiation is, however, found to be negligible in our best-fit models of GeV flashes (except perhaps in the special case of GRB~090510).\footnote{During the peak of the flash, synchrotron cooling of the blast wave is negligible compared with IC cooling (as long as $\epsilon_B\ll 0.1$).
It remains negligible in the tail if $\epsilon_B\ll 10^{-3}$. } Therefore, effectively three parameters ($\Gamma_{\rm ej}$, $\epsilon_{\rm rad}$, $A$) enter the fitting of the GeV flash, and $\epsilon_B$ is only relevant for the optical flash. Nonthermal synchrotron radiation implicitly enters our flash model in a different way: it provides targets for IC scattering in the SSC tail of the GeV flash. Since the nonthermal synchrotron modeling is expensive (and uncertain, especially when the reverse shock contribution is included), we estimate the targets using the actual {\it observed} afterglow (Section~2). The model must assume a value for the fraction $\epsilon_e$ of the shock energy that is given to the thermal electron/positron plasma. \citet{beloborodov_2014} showed that $\epsilon_e\approx 1$ during the peak of the flash. At later times (when $Z_\pm$ drops to 500), $\epsilon_e$ is reduced to 0.3, a typical value reported by the simulations of collisionless shocks \citep{sironi_2011}. Note that the thermal $\epsilon_e$ is much less uncertain than the corresponding parameter of nonthermal particles; it is held fixed and taken to be the same for all bursts. The progenitor wind has the mean molecular weight per electron $\mu_e=\rho/n_em_p=2$ (elements heavier than hydrogen). The correct choice of $\mu_e$ is essential in simulations of GRB afterglow (see the discussion in \citealt{vurm_2014} and the comparison with \citealt{panaitescu_2013}). GRB~090510 is a short GRB and a special case; therefore for this burst we also search for a solution with a uniform external medium and $\mu_e=1$. Appendix~A summarizes simplified analytical estimates for the theoretical GeV+optical flash, which demonstrate basic trends in the model. Deviations of the accurate simulations from the estimates highlight the importance of detailed calculations for each GRB individually, using as an input its observed prompt radiation.
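As an order-of-magnitude illustration of the characteristic thermal synchrotron frequency $\nu_m\sim\Gamma\gamma_{\rm th}^2 eB/m_ec$ discussed above, the sketch below evaluates it in CGS units; the chosen values of $\Gamma$, $\gamma_{\rm th}$ and $B$ are purely illustrative assumptions, not fitted numbers from any burst.

```python
# Order-of-magnitude estimate of the thermal synchrotron frequency,
# nu_m ~ Gamma * gamma_th^2 * e * B / (m_e * c), in CGS units.
e_esu = 4.803e-10    # electron charge [esu]
m_e = 9.109e-28      # electron mass [g]
c = 2.998e10         # speed of light [cm/s]

Gamma = 300.0        # blast-wave Lorentz factor (illustrative)
gamma_th = 1.0e3     # thermal lepton Lorentz factor (illustrative)
B = 1.0              # comoving magnetic field [G] (illustrative)

nu_m = Gamma * gamma_th**2 * e_esu * B / (m_e * c)
print(f"nu_m ~ {nu_m:.1e} Hz")   # of order 1e15-1e16 Hz (optical/UV)
```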
Below the model with adjustable parameters $A$, $\Gamma_{\rm ej}$, $\epsilon_{\rm rad}$ (and $\epsilon_B$, if optical flash data is available) is calculated for each GRB and fitted to the data. The calculation involves a careful radiative transfer simulation, as described in detail in \citet{beloborodov_2014}. The simulation is expensive and we do not attempt a formal fitting of the data that would give $\chi^2$ for the best fit. Instead, we manually search the parameter space for an acceptable solution. \begin{table*}[t] \begin{center} \caption{ Three main observational parameters of the GRBs in our sample and four adjustable parameters of the model. $E_{\rm GRB}$ -- prompt radiation energy (isotropic equivalent), $T_{\rm GRB} = T_{90}/(1+z)$ -- redshift-corrected duration of the prompt emission, $z$ -- cosmological redshift. $A=\rho R^2$ -- wind density parameter, $\Gamma_{\rm ej}$ -- ejecta Lorentz factor, $\epsilon_{\rm rad}$ -- prompt efficiency, $\epsilon_B$ -- magnetization. \label{tab_params}} \vspace{1.0mm} \scriptsize{ \begin{tabular}{c|ccc|cccc} \hline\hline GRB & $E_{\rm GRB}$ & $T_{\rm GRB}$ & $z$ & A & $\Gamma_{\rm ej}$ & $\epsilon_{\rm rad}$ & $\epsilon_B$ \\ & $[10^{54} \ \mathrm{erg}]$ & $[\mathrm{s}]$ & & $[10^{11}\ \rm g/cm]$ & & & \\ & & & & & & & \vspace{-0.3cm} \\ \hline 080916C & $8.8$ & $12$ & $4.35$ & $1.5 \rightarrow 3.5$ & $900 \rightarrow 1400$ & 0.17 & -- \\ 090510 & $0.11$ & $1.1$ & $0.903$ & $1.2 \rightarrow 2$ & $700 \rightarrow 800$ & 0.1 & -- \\ 090510 (uniform) & - & - & - & $n =2\times10^{4} \ \mathrm{cm}^{-3}$ & 900 & 0.1 & -- \\ 090902B & $3.6$ & $7.8$ & $1.822$ & $1\rightarrow2$ & $600 \rightarrow 900$ & 0.4 & -- \\ 090926A & $2.2$ & $4.2$ & $2.106$ & $1\rightarrow2$ & $600 \rightarrow 1000$ & 0.25 & -- \\ 110731A & $0.76$ & $1.9$ & $2.83$ & $0.4 \rightarrow 0.8$ & $800 \rightarrow 1100$ & 0.2 & -- \\ 120711A & $1.65$ & $48$ & $1.405$ & $1 \rightarrow 3$ & $320 \rightarrow 400$ & 0.3 & $10^{-5} \leftarrow 10^{-6}$ \\ 130427A & 
$0.85$ & $15$ & $0.34$ & $0.15 \rightarrow 0.5$ & $300 \rightarrow 350$ & 0.8 & $10^{-3} \leftarrow 2\times 10^{-4}$ \\ \hline \end{tabular}} \end{center} \end{table*} \begin{table*}[t] \begin{center} \caption{ Other physical quantities calculated from the model: $\Gamma_p$ -- blast wave Lorentz factor at radius $R_p$ of the GeV peak, $R_{\rm dec}$ -- deceleration radius, $R_\pm$ -- pair loading radius, $t_{\rm sc}$ -- time (measured in central engine frame) when the IC cooling of thermal electrons becomes inefficient. \label{tab_output}} \vspace{1.0mm} \scriptsize{ \begin{tabular}{c|ccccc} \hline\hline GRB & $\Gamma_p$ & $R_p$ & $R_{\rm dec}$ & $R_\pm$ & $t_{\rm sc}$ \\ & & $[10^{16} \ \mathrm{cm}]$ & $[10^{16} \ \mathrm{cm}]$ & $[10^{16} \ \mathrm{cm}]$ & [s] \\ & & & & & \vspace{-0.3cm} \\ \hline 080916C & $540$ & $1$ & $10 \leftarrow 6$ & 10 & $200 \rightarrow 300$ \\ 090510 & $500$ & $0.2$ & $0.35 \leftarrow 0.3$ & 0.9 & $100 \rightarrow 200$ \\ 090510 (uniform) & $400$ & $0.2$ & 0.3 & 0.9 & $>10^4$ \\ 090902B & $300$ & $1$ & $4 \leftarrow 3$ & 9 & $6\times 10^3 \rightarrow 3 \times 10^4$ \\ 090926A & $390$ & $1.1$ & $5 \leftarrow 4$ & 9 & $1.5\times 10^3 \rightarrow 6 \times 10^3$ \\ 110731A & $540$ & $0.5$ & $2.5 \leftarrow 2$ & 4 & $600 \rightarrow 4 \times 10^3$ \\ 120711A & $200$ & $3$ & $10 \leftarrow 6$ & 12 & $350 \rightarrow 10^3$ \\ 130427A & $250$ & $1$ & $3 \leftarrow 2.5$ & 6 & $2\times 10^3 \rightarrow 7\times 10^3$ \\ \hline \end{tabular}} \end{center} \end{table*} \begin{figure}[h] \begin{center} \hspace*{-1.cm} \begin{tabular}{cc} \includegraphics[width=0.49\textwidth]{lc_lab_080916C.eps} & \includegraphics[width=0.49\textwidth]{lc_lab_090902B.eps} \\ \includegraphics[width=0.49\textwidth]{lc_lab_090926A.eps} & \includegraphics[width=0.49\textwidth]{lc_lab_110731A.eps} \end{tabular} \end{center} \vspace*{-0.8cm} \caption{ Model light curves and data for GRBs 080916C, 090902B, 090926A and 110731A. 
Theoretical photon flux (isotropic equivalent) and data above 100~MeV are shown in blue, and the photon flux predicted above 100~GeV is shown in magenta. Time $t$ is measured in the rest frame of the central engine, $t=t_{\rm obs}/(1+z)$. Solid curves show the high-$A$ (high-$\Gamma_{\rm ej}$) model and dashed curves show the low-$A$ (low-$\Gamma_{\rm ej}$) model (see Table~1). } \label{fig_lcs_lph} \end{figure} \begin{figure}[h] \begin{center} \hspace*{-1.cm} \begin{tabular}{c} \includegraphics[width=0.45\textwidth]{lc_lab_090510.eps} \end{tabular} \end{center} \vspace*{-0.8cm} \caption{ Model light curves and data for GRB~090510. Theoretical photon flux (isotropic equivalent) and data above 100~MeV are shown in blue, and the photon flux predicted above 100~GeV is shown in magenta. Time $t$ is measured in the rest frame of the central engine, $t=t_{\rm obs}/(1+z)$. Solid curves show the high-$A$ (high-$\Gamma_{\rm ej}$) model and dashed curves show the low-$A$ (low-$\Gamma_{\rm ej}$) model (see Table~1). The dashed-dotted curve shows the model with uniform external medium. } \label{fig_lcs_lph_090510} \end{figure} \begin{figure}[h] \begin{center} \hspace*{-1.cm} \begin{tabular}{cc} \includegraphics[width=0.45\textwidth, trim= 2cm 0cm 8cm 0cm]{lc_lab_120711A.eps} & \includegraphics[width=0.45\textwidth, trim= 2cm 0cm 8cm 0cm]{lc_lab_130427A.eps} \end{tabular} \end{center} \vspace*{-0.8cm} \caption{ Model light curves and data for GRBs 120711A and 130427A. Solid curves show the high-$A$ (high-$\Gamma_{\rm ej}$) model and dashed curves show the low-$A$ (low-$\Gamma_{\rm ej}$) model (see Table~1). Upper panels: theoretical luminosity and data above 100~MeV (blue) and the luminosity predicted above 100~GeV (magenta). Lower panels: theoretical luminosity and data at 2~eV. 
} \label{fig_lcs_nrj} \end{figure} \section{Results} For each GRB in the sample, the model successfully reproduced the observations in a narrow region of the parameter space, which allowed us to measure the parameters (Table~\ref{tab_params}). The theoretical light curves of the GeV and optical flashes are compared with observations in Figures \ref{fig_lcs_lph}-\ref{fig_lcs_nrj}. For each GRB, we found two solutions whose predictions for the LAT/optical flux differ by about $1 \sigma$. This gives a rough estimate of the ``error bar'' on the measured $A$ and $\epsilon_B$. A moderate uncertainty also results from the uncertain contribution of the prompt emission to the GeV light curve at early times, which is suggested by the variability in the observed light curve. It implies a somewhat lower peak of the smooth flash associated with the external blast wave. In all seven GRBs the GeV peak occurs while the prompt emission is still going on. This fact alone shows that the GeV flash is not associated with the deceleration radius of the blast wave. Table~\ref{tab_output} gives more details of the reconstructed explosion --- Lorentz factor $\Gamma_p$ and radius $R_p$ of the blast wave at the GeV peak, the deceleration radius $R_{\rm dec}$, the pair loading radius $R_{\pm}$ where $Z_{\pm}$ drops below 2, and the (redshift-corrected) time $t_{\rm sc}$ when the cooling (synchrotron+IC) of thermal electrons becomes inefficient. We find $Z_\pm(R_p)\sim 10^3-10^4$, a robust feature of our model: pair loading of the external medium is responsible for the extremely efficient processing of shock energy into GeV gamma-rays, as explained in detail in \citet{beloborodov_2014}. The pair loading $Z_{\pm}$ rapidly decreases during the peak and early decay of the GeV light curve; as a result, the shock energy {\it per lepton} increases and the characteristic IC photon energy sweeps across the GeV band. 
All the presented models show similar evolution of the flash spectrum: the spectrum is soft during the rise of the GeV flash, then quickly hardens and remains approximately flat ($\nu L_\nu \sim \mbox{constant}$) during the most luminous phase. This behavior is consistent with the spectral slopes observed by {\it Fermi} LAT \citep{ackermann_2013b} (with one notable exception --- GRB~090510, whose spectrum is softer, see Section~\ref{par_090510}). At later times the predicted spectrum slightly hardens, approaching $\nu L_\nu\propto \nu^{1/2}$ that characterizes emission from fast-cooling thermal electrons. The predicted behavior of the optical spectrum is similar: flat near the peak, followed by moderate hardening as the synchrotron frequency $\nu_m$ moves beyond the optical band. At present we possess no data on the spectral slope of the optical flash; its predicted behavior could be tested by future optical observations. Figures~1-3 also show the predicted TeV emission to provide some guidance for future observations with Cherenkov telescopes. The characteristic IC photon energy reaches the TeV band in seconds or minutes after the GeV peak. The high-energy spectrum is most extended near the pair loading radius $R_{\pm}$; the maximum photon energy is typically between a few hundred GeV and ten TeV, and scales as $E_{\rm kin}/(A E_{\rm GRB}^{1/2})$ where $E_{\rm kin}$ is the kinetic energy of the blast wave. The electrons behind the shock are still fast-cooling at this stage, and their bolometric IC luminosity is $\sim\epsilon_e E_{\rm kin}/(4t)$. The flat (or even slightly rising) high-energy spectrum places a sizable fraction of this luminosity into the TeV band, resulting in comparable GeV and TeV fluxes. Our transfer simulations also show that the high-energy flash is partially absorbed by local photon-photon collisions. 
This intrinsic absorption is included in all GeV-TeV flash models presented in this paper; however, the effect is never dramatic, because the flash always peaks where the radiation column density is reduced below a critical value (see \citealt{beloborodov_2014} for a detailed discussion). In particular, intrinsic absorption does not create a break in the high-energy spectrum, although it does moderately affect the spectrum shape. The TeV emission peaks immediately after its onset. Its decay is relatively slow, so that most of the energy above 0.1~TeV is emitted approximately over a decade in time after the peak, typically within a few minutes of the GRB trigger (Figures~1-3). The TeV flux from the thermal electrons behind the shock cuts off when the characteristic energy of the IC photons falls below 0.1~TeV, which typically occurs after a few hours or a day. An additional nonthermal electron population could affect the rise of the TeV light curve and extend the emission to later times; however, it would not influence the most luminous phase. The seed photon field for IC scattering significantly changes at the time when the prompt MeV radiation completely overtakes the blast wave, as the remaining X-ray and optical afterglow radiation is softer and less luminous. The theoretical GeV light curves show no special features at this transition (Figures \ref{fig_lcs_lph}-\ref{fig_lcs_nrj}). This is because the shock-heated plasma is still in the fast-cooling regime; therefore it produces the same IC emission regardless of the details of the target spectrum. At the time of the GeV peak the ejecta kinetic energy is still being transferred to the blast wave via the reverse shock. Comparison of the ejecta and blast wave Lorentz factors in Tables~\ref{tab_params} and \ref{tab_output} reveals that the reverse shock is at least mildly relativistic in most GRBs in the sample.
When $\Gamma_{\rm ej}\gg \Gamma$ (ultra-relativistic reverse shock), the blast wave dynamics and radiation are insensitive to $\Gamma_{\rm ej}$. Then the model of the GeV flash effectively has only two parameters: $A$ and $\epsilon_{\rm rad}$. The flash observations provide accurate estimates for the blast wave Lorentz factor $\Gamma$, however $\Gamma_{\rm ej}$ is more difficult to measure. In some cases, only lower limits on $\Gamma_{\rm ej}$ are obtained. The relativistic reverse shock crosses the main part of the ejecta (carrying most of the explosion energy) in about the same time as it takes the prompt MeV radiation to fully overtake the blast wave. This time also corresponds to the observed time $T_{90}$ and the deceleration radius $R_{\rm dec}$. After this moment, the blast wave is approximately described by the Blandford-McKee self-similar solution. Note that $R_{\rm dec}<R_\pm$ for all bursts in the sample, i.e. the effects of pair-loading continue to impact the afterglow emission for some time after the end of the prompt emission. This ``memory'' of pair creation is even more significant at low frequencies where the emission occurs in the slow-cooling regime \citep{beloborodov_2005}. \begin{figure}[h] \begin{center} \hspace*{-1.cm} \begin{tabular}{cc} \includegraphics[width=0.4\textwidth]{Liso_rhodec.eps} & \includegraphics[width=0.4\textwidth]{Liso_Amax.eps} \\ \end{tabular} \end{center} \vspace*{-0.8cm} \caption{ Left panel: blast wave Lorentz factor (before deceleration) versus the prompt GRB luminosity $L_{\rm GRB}=E_{\rm GRB}/T_{\rm GRB}$. The 7 LAT bursts studied in this paper are plotted as magenta filled squares. We included the short GRB~090510 and modeled its GeV flash with both wind (magenta square) and uniform (blue triangle) external medium. For comparison, we also show the estimates (open circles) and lower limits (arrows) for $\Gamma$ obtained for a sample of weaker bursts with a different method \citep{hascoet_2014}. 
Right panel: distribution of wind parameters in units of $10^{11} \ \mathrm{g \ cm^{-1}}$. The LAT bursts studied in this paper are plotted in magenta. Arrows show the upper limits from \citet{hascoet_2014}. } \label{fig_param} \end{figure} \section{Discussion} \label{discussion} In this paper, we tested the GeV flash model proposed by \citet{beloborodov_2014} using a sample of GRBs that includes all bursts with good LAT data and known redshifts. All 7 bursts in the sample are intrinsically bright, with peak luminosities ranging from $\sim 10^{53}$~erg~s$^{-1}$ (GRB~130427A) to $\sim 10^{54}$~erg~s$^{-1}$ (GRB~080916C). We performed radiative transfer simulations for each of the 7 bursts. The input of the simulation is the observed prompt MeV radiation and the observed optical/X-ray afterglow, and the output is the GeV light curve. The model has three adjustable parameters: the Lorentz factor of the GRB ejecta $\Gamma_{\rm ej}$, the ambient density parameter $A$, and the radiative efficiency of the prompt emission $\epsilon_{\rm rad}=E_{\rm GRB}/(E_{\rm GRB}+E_{\rm ej})$. We found that the model explains the observed light curves in the sample well. This allowed us to obtain estimates for $\Gamma_{\rm ej}$, $A$, and $\epsilon_{\rm rad}$. \subsection{Ambient medium} Explosion into a wind-type medium is consistent with the observed flash for all GRBs in the sample except possibly GRB~090510 (see below). We found that the wind parameter $A$ shows moderate variations in the sample, between $0.15\times10^{11}$~g~cm$^{-1}$ and $3.5\times10^{11}$~g~cm$^{-1}$ (Table~1). These values are comparable with the typical $A\sim 3\times 10^{11}$~g~cm$^{-1}$ estimated for the winds of Wolf-Rayet stars in our Galaxy \citep{crowther_2007}. This result provides further support for the association of GRBs with the collapse of Wolf-Rayet stars. Evidence for this association was previously provided by a few GRBs with a detected supernova counterpart of type Ib or Ic \citep{woosley_2006}.
We also note that the obtained values of $A$ do not contradict the upper limits estimated by \citet{hascoet_2014} with a different method and for a different sample of bursts. In that work, the most constraining upper limits $A_{\max}\sim 10^{11}$~g~cm$^{-1}$ were obtained for GRBs with luminosities below $10^{52}$~erg~s$^{-1}$ (Figure~\ref{fig_param}). Comparison of these upper limits with our estimates presented here suggests that the luminous bursts detected by LAT have systematically higher $A$. In most bursts in the sample the flash peaks well before the blast wave reaches the deceleration radius $R_{\rm dec}$. In this situation, the models of uniform and wind external medium predict very different light curves of the flash, and we find that only the wind medium is consistent with the data. The distinction is less clear when $R_p$ is comparable to $R_{\rm dec}$. Then both the shape of the GeV peak and the decay of the light curve are relatively insensitive to the profile of the ambient medium (provided that the density at $R_p$ remains similar and the emitting electrons are fast-cooling). In our sample, GRB~090510 falls into this category (see below). The separation between $R_{\rm dec}$ and $R_p$ is also moderate in GRB~120711A and GRB~130427A; however, the wind medium is still preferred in these two cases. Thus the wind medium is preferred by the analysis of all long bursts in the sample. \subsection{Lorentz factor} Measurement of GRB Lorentz factors is a long-standing problem. Until recent work, the main method was based on estimates of photon-photon opacity for the gamma-rays detected by LAT, which gave lower limits on $\Gamma_{\rm ej}$ (e.g. \citealt{lithwick_2001, granot_2008, hascoet_2012}).
Reconstruction of the GeV flash mechanism gives a valuable measurement of the Lorentz factor of the blast wave $\Gamma$ {\it before its deceleration,} and also provides an estimate for the ejecta Lorentz factor $\Gamma_{\rm ej}\gtrsim\Gamma$. This method was recently applied to GRB~080916C \citep{beloborodov_2014} and GRB~130427A \citep{vurm_2014}. Our results extend this analysis to the sample of 7 GRBs. Figure~\ref{fig_param} shows the obtained Lorentz factors versus the burst luminosity. We observe a positive correlation between $\Gamma$ and the average luminosity of the GRB $L_{\rm GRB}=E_{\rm GRB}/T_{\rm GRB}$, where the prompt burst energy $E_{\rm GRB}$ and its approximate duration $T_{\rm GRB}$ are given in Table~\ref{tab_params}. Provided that $\Gamma_{\rm ej}$ is not much higher than $\Gamma$ (which is the case for GRB~130427A, and likely in the other bursts), we find an $L_{\rm GRB}$-$\Gamma_{\rm ej}$ correlation, which may be roughly approximated by the power-law relation $\Gamma_{\rm ej}\approx 10^3 L_{54}^{1/2}$. Future observations of GeV flashes may allow a better measurement of this correlation. There is another method of estimating $\Gamma$ using the peak time of the optical afterglow \citep{liang_2010,ghirlanda_2012}. The method is based on the assumption that the peak is emitted at the deceleration radius of the blast wave. This assumption can be invalid for bright bursts which have large pair-loading radii (as demonstrated for the sample of GRBs studied in this paper); however, it may be reasonable for less luminous bursts. A recent refinement of this method by \citet{hascoet_2014} gave measurements and upper limits for $\Gamma$ in a large sample of bursts, which are included in Figure~\ref{fig_param}.\footnote{ This method gives estimates for $\Gamma\rho_{\rm dec}^{1/8}$ where $\rho_{\rm dec}$ is the density of the external medium at the deceleration radius.
Therefore, there is a weak dependence of the inferred $\Gamma$ on the assumed $\rho_{\rm dec}$. In the left panel of Figure~\ref{fig_param} we used $\rho_{\rm dec}\sim 10^{-21}$~g~cm$^{-3}$, which corresponds to $A\sim10^{11}$~g~cm$^{-1}$ and $R_{\rm dec}\sim 10^{16}$~cm. } They extend the $L_{\rm GRB}$-$\Gamma$ diagram to lower luminosities $L_{\rm GRB}<10^{53}$~erg~s$^{-1}$; however, they do not allow a reliable measurement of the correlation in this region because of the large number of lower limits. Overall, the data is consistent with the existence of a lower bound $\Gamma>\Gamma_{\min}$ which increases with $L_{\rm GRB}$. This bound may result from strong subphotospheric adiabatic cooling in explosions with $\Gamma_{\rm ej}<\Gamma_{\min}$ (see \citealt{hascoet_2014}). We also note that within uncertainties our estimate for the jet Lorentz factor in GRB 130427A is in agreement with the value $\Gamma_{\rm ej} = 450$ found from radiative transfer modeling of its prompt emission (Vurm \& Beloborodov, in preparation). \subsection{Radiative efficiency} The inferred radiative efficiency $\epsilon_{\rm rad}$ of the prompt GRB emission varies between 0.1 and 0.8 in the sample, i.e. $E_{\rm ej}/E_{\rm GRB} = 0.25-9$. Such high values of radiative efficiency may be expected. High $\epsilon_{\rm rad}$ was previously suggested by the late afterglow analysis (e.g. \citealt{racusin_2011}) and also expected in theoretical models of the prompt emission (cf. \citealt{beloborodov_2010}; Vurm \& Beloborodov, in preparation). \subsection{The special case of GRB~090510} \label{par_090510} The IC cooling of the shock-heated wind medium explains the observed GeV flash well for all GRBs in our sample except one: GRB~090510. In this case, the model reproduces the peak of the flash at $t_{\rm obs}\lesssim 1$~s and its initial decay at $t_{\rm obs}<10$~s rather well; however, at $t_{\rm obs}>20$~s the theoretical emission falls short of the observed flux.
GRB~090510 is also special as the only short burst in our sample. Short GRBs are normally not associated with massive progenitors, although some of them may be ``impostors'' in the short class (the impostors are bursts that have a massive progenitor but happen to have short duration). The deficiency of the theoretical GeV emission in GRB~090510 may be explained in two ways: (1) The ambient medium is not a wind from a massive progenitor. Indeed, we find that the entire light curve of the GeV flash is reasonably well reproduced if the ambient density is uniform, $\rho\approx const$, rather than wind-like, $\rho\propto r^{-2}$. The uniform density must, however, be quite high, $n=\rho/m_p\sim 2\times 10^4$~cm$^{-3}$, well above the typical density of the interstellar medium. (2) An additional emission component is present in the flash observed by LAT. It can be synchrotron emission (extending to $E>100$~MeV) from nonthermal electrons accelerated in the blast wave. This interpretation is consistent with the relatively soft photon index ($\beta \sim 2.5-3$) measured for GRB~090510 at the time when the additional component dominates. Our simulation did not include possible nonthermal electrons because they are harder to model from first principles and require additional phenomenological parameters. We also note that the nonthermal component is not needed for the other 6 bursts --- their GeV flashes are well explained by pure IC emission from the thermal plasma. \subsection{Optical flash} Two GRBs have a well-measured optical counterpart of the GeV flash. The expected light curve of this (synchrotron) counterpart is obtained directly from the model of the GeV flash by introducing one additional parameter $\epsilon_B$. GRB~130427A has the best coverage in the LAT and optical bands. The entire GeV flash ($t<1$~d), and the main peak and steep decay of the optical flash ($t<100$~s), are well reproduced by the model of the pair-loaded forward shock.
The optical light curve requires an additional contribution after 100~s, which we interpreted as emission from the reverse shock \citep{vurm_2014}. The forward shock produces a light curve similar to that observed in GRB~120711A (discussed below), with a plateau ending around $10^4$~s. This plateau may be responsible for the hump in the optical light curve of GRB~130427A at $t\sim 10^4$~s. The result obtained for GRB~120711A is rather striking. The model reproduces the entire complicated optical light curve: the sharp rise at 30~s, the peak at 50~s, the steep decay between 50 and 300~s, the plateau, and the break at $10^4$~s. Our model has only four adjustable parameters: $A$, $\epsilon_{\rm rad}$, $\Gamma_{\rm ej}$, and $\epsilon_B$. Such a remarkable fit of the optical light curve can hardly be obtained by chance. The same model also fits the decay of the GeV flash well, matching both the slope and the normalization. The GeV data are less constraining in this burst as the peak of the GeV flash was missed by LAT observations. The inferred $\epsilon_B\sim 4 \times 10^{-4}$ for GRB~130427A and $\epsilon_B\sim 3\times 10^{-6}$ for GRB~120711A give rough but reliable estimates for the characteristic magnetization in the external blast wave. In a more detailed model $\epsilon_B$ may change with time, as discussed in \citet{vurm_2014}. The excellent fit for GRB~120711A does not require such variation. \subsection{TeV emission} Our model predicts luminous TeV emission accompanying the GeV flash, which can be detected by ground-based Cherenkov telescopes. The bulk of the TeV fluence is accumulated within 1-10 minutes after the GRB trigger; the energy radiated above 0.1~TeV in the sample ranges from $10^{51}$ to $4 \times 10^{53}$ erg, and constitutes up to 30$\%$ of the prompt MeV fluence. The predicted efficiency of TeV emission varies substantially from burst to burst.
For example, in GRB 120711A most of the energy above 100~MeV is radiated in the TeV band (Figure \ref{fig_lcs_nrj}). In contrast, TeV emission is weak in the high-$A$ (dense wind) models for GRB~090902B and GRB~090926A (Figure \ref{fig_lcs_lph}). The difference in the TeV flux arises mainly from the different maximal IC photon energy that the thermal electrons can produce, $E_{\max}\sim \Gamma\gamma_{\rm th} m_ec^2$, where $\gamma_{\rm th}$ is the Lorentz factor of the thermal electrons behind the shock. In GRBs exploding into denser winds the blast wave is slower and loses a larger fraction of its kinetic energy at an early radiative stage. The reduced $\Gamma$ gives a lower $E_{\max}$, so that it may not exceed a few hundred GeV (in the GRB rest frame). Nonthermal electrons can extend the IC emission above $E_{\max}$; this emission was not included in our model, and thus the 0.1~TeV light curves shown in the figures should be viewed as lower limits. The shock energy given to nonthermal electrons is small compared with that given to the thermal population. Therefore, their contribution to the 0.1~TeV light curve becomes important only when the emission from the thermal electrons is suppressed, i.e. when $E_{\max}<0.1$~TeV. A major factor limiting the GRB detectability in the TeV band is the extinction by extragalactic background light (EBL). For example, at redshift $z=1$ the attenuation factor is $\sim 0.5$ at 0.1~TeV and $\sim 5\times 10^{-3}$ at 0.3~TeV \citep{dominguez_2011, gilmore_2009}. This leaves a narrow window between the Cherenkov detector threshold (presently $\sim 50-100$~GeV) and $\sim 0.2 - 0.3$ TeV for most bursts, except those that happen at unusually small $z$ (such as GRB~130427A). As long as the intrinsic high-energy spectral cutoff is well above 100~GeV, the {\it observed} turnover would arise from the extragalactic absorption and could be used to place independent constraints on the EBL density.
The (approximate) low-energy threshold $\sim 50$~GeV, repositioning time of a few tens of seconds and sensitivity of the currently operating Imaging Atmospheric Cherenkov telescopes (IACT) such as MAGIC \citep{aleksic_2012, sitarek_2013}, VERITAS \citep{holder_2011, kieda_2013} and H.E.S.S. \citep{hinton_2004} would have been sufficient to detect at least two bursts in our sample: GRB~120711A and GRB~130427A. Their predicted fluxes above $0.1$~TeV a few minutes after the GRB trigger are well above the sensitivity limit despite the substantial EBL absorption for GRB~120711A ($z=1.405$). In the case of GRB~130427A ($z=0.34$) the TeV emission remains at a detectable level for its entire duration, i.e. until the cutoff at a few$\times 10^4$~s. The cutoff is consistent with the upper limit provided by the VERITAS observation at 1~day \citep{aliu_2014}. The projected sensitivity of the next generation Cherenkov Telescope Array (CTA) \citep{funk_2013, inoue_2013} is sufficient to detect most of the bursts in our sample (including~GRB 090510 if it exploded into a wind medium), with the possible exception of GRB~080916C and GRB~110731A owing to their high redshifts. Current efforts to reduce the energy threshold of the Cherenkov telescopes are key for future routine detection of high-energy GRB emission from the ground. \medskip In this paper, we focused on the flash observations, and only in the bands where we think it is dominated by the thermal plasma behind the forward shock. The analysis of all afterglow observations, at all times and energies, from radio (Laskar et al. 2013) to hard X-rays (Kouveliotou et al. 2013) is deferred to future work. It will be significantly more involved, as it has to include the emission from nonthermal particles, from both reverse and forward shocks. 
The prospects for such a model for GRB~130427A are outlined in \citet{vurm_2014}; they argue that the proposed blast wave model can be consistent with radio and hard X-ray data with reasonable assumptions regarding the reverse shock and nonthermal particle acceleration. \citet{vurm_2014} also pointed out the importance of using the correct mean molecular weight per electron, $\mu_e=2$, expected for a Wolf-Rayet wind. The external density estimated from the late nonthermal optical flux scales as $\mu_e^3$. Using $\mu_e=2$ instead of $\mu_e=1$ increases the inferred parameter $A$ by the factor of 8 and, at least in the case of GRB~130427A, makes it consistent with the value measured from the GeV flash. \acknowledgements We are grateful to Nicola Omodei for providing the LAT catalogue data, Antonio Martin-Carrillo for providing GRB~120711A data, and Tom Vestrand for providing GRB~130427A data. This work was supported by NSF grant AST-1412485, NASA ATP grant NNX15AE26G, and Swift Cycle 10 grant NNX14AI94G. \begin{appendix} \section{Analytical estimates} Here we summarize the analytical estimates derived in Beloborodov et al. (2014) to show basic trends in the flash model. Consider an idealized model of the prompt MeV radiation with a fixed spectrum and duration. One main parameter is left free --- the prompt luminosity $L_{\rm GRB}$. The Lorentz factor and radius of the blast wave at the GeV peak are related to the observed peak time $T_p$ by Equations~(48) and (49) in \citet{beloborodov_2014}, \begin{equation} \label{eqn_gp} \Gamma_p \approx 500 \ L_{54}^{3/13} \left( \frac{T_p}{1+z} \right)^{-3/13}, \end{equation} \begin{equation} \label{eqn_rp} R_p \approx 10^{16} \ L_{54}^{6/13} \left( \frac{T_p}{1+z} \right)^{7/13} \ \mathrm{cm}, \end{equation} where $L_{54}=L_{\rm GRB}/10^{54}$~erg~s$^{-1}$ and $T_p$ is measured in seconds. 
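As a quick numerical illustration of the scaling relations (\ref{eqn_gp}) and (\ref{eqn_rp}), the short Python sketch below evaluates them directly; the function names and the fiducial input values are ours, not part of the original analysis:

```python
def gamma_p(L54, Tp, z):
    # Blast wave Lorentz factor at the GeV peak (appendix estimate):
    # Gamma_p ~ 500 * L54^(3/13) * (Tp/(1+z))^(-3/13)
    # L54 -- prompt luminosity in units of 10^54 erg/s
    # Tp  -- observed peak time of the GeV flash, in seconds
    # z   -- cosmological redshift
    return 500.0 * (L54 * (1.0 + z) / Tp) ** (3.0 / 13.0)

def r_p(L54, Tp, z):
    # Blast wave radius at the GeV peak, in cm (appendix estimate):
    # R_p ~ 10^16 * L54^(6/13) * (Tp/(1+z))^(7/13) cm
    return 1e16 * L54 ** (6.0 / 13.0) * (Tp / (1.0 + z)) ** (7.0 / 13.0)

# A fiducial burst with L_GRB = 10^54 erg/s, T_p = 2 s, z = 1:
print(gamma_p(1.0, 2.0, 1.0))  # 500.0
print(r_p(1.0, 2.0, 1.0))      # 1e+16
```

The resulting values fall in the range $\Gamma_p\sim 200-540$ and $R_p\sim (0.2-3)\times 10^{16}$~cm listed in Table~\ref{tab_output}.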
Note that the external density parameter $A$ does not enter these relations; the values of $R_p$ and $\Gamma_p$ are controlled by pair loading $Z_\pm(R)$ which does not depend on density. The results of detailed simulations reported in Section~4 for our GRB sample show moderate deviations from the simplified relations~(\ref{eqn_gp}) and (\ref{eqn_rp}). The photon number emitted in the GeV flash (its isotropic equivalent) is estimated as \begin{equation} \label{eqn_nhe} N_{\rm GeV} \sim \frac{4\pi A R_p}{\mu_e m_p} \, Z_\pm(R_p) \, \mathcal{M} \, , \end{equation} where \begin{equation} \label{eqn_m} \mathcal{M}(E) \sim \frac{\Gamma_p m_e c^2}{\left( E_t E \right)^{1/2}} \end{equation} is the average number of IC photons of energy $E\sim 1$~GeV emitted by a single post-shock electron, and $E_t\sim 1$~MeV is the typical energy of target prompt photons. Combining Equations (\ref{eqn_gp})--(\ref{eqn_m}), one can express the wind density parameter as \begin{equation} A_{11} \approx 0.3 \, N_{{\rm GeV},56} \, L_{54}^{-9/13} \left[ \frac{T_p}{ 1+z } \right]^{-4/13} \left[ \frac{Z_\pm(R_p)}{10^4} \right]^{-1} \left( \frac{E_t}{1 \ \mathrm{MeV}} \right)^{1/2} \left( \frac{E}{1 \ \mathrm{GeV}} \right)^{1/2}, \label{eq:A} \end{equation} where we have normalized the number of high-energy photons to the typical value $N_{\rm GeV}\sim 10^{56}$ observed in our sample. The kinetic luminosity of the ejecta $L_{\rm ej}$ can be roughly estimated assuming a relativistic reverse shock and a pressure balance between the reverse and forward shocks (see \citealt{beloborodov_2014}), \begin{equation} \label{eq_efficiency} L_{\rm ej} \sim 4\times 10^{53} \, N_{{\rm GeV},56} \, \left[ \frac{T_p}{ 1+z } \right]^{-1} \left( \frac{E_t}{1 \ \mathrm{MeV}} \right)^{1/2} \left( \frac{E}{1 \ \mathrm{GeV}} \right)^{1/2} {\rm ~erg~s}^{-1}. \end{equation} If the reverse shock is not ultra-relativistic, equation (\ref{eq_efficiency}) should be considered as a lower limit. 
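The estimates (\ref{eq:A}) and (\ref{eq_efficiency}) can likewise be packaged as a small calculator. This is our own illustrative sketch, using the same fiducial normalizations as in the text:

```python
def wind_parameter_A11(N56, L54, Tp_1pz, Zpm, Et_MeV=1.0, E_GeV=1.0):
    # Wind density parameter A in units of 10^11 g/cm (appendix estimate):
    # A_11 ~ 0.3 N56 L54^(-9/13) [Tp/(1+z)]^(-4/13) [Zpm/1e4]^(-1) (Et E)^(1/2)
    # N56    -- GeV photon number in units of 10^56
    # Tp_1pz -- redshift-corrected peak time T_p/(1+z), in seconds
    # Zpm    -- pair loading factor Z_+- at the peak radius R_p
    return (0.3 * N56 * L54 ** (-9.0 / 13.0) * Tp_1pz ** (-4.0 / 13.0)
            * (Zpm / 1e4) ** (-1.0) * (Et_MeV * E_GeV) ** 0.5)

def ejecta_luminosity(N56, Tp_1pz, Et_MeV=1.0, E_GeV=1.0):
    # Lower limit on the ejecta kinetic luminosity L_ej, in erg/s,
    # assuming a relativistic reverse shock (appendix estimate).
    return 4e53 * N56 / Tp_1pz * (Et_MeV * E_GeV) ** 0.5

# Fiducial values N_GeV = 10^56, L_54 = 1, T_p/(1+z) = 1 s, Z_+- = 10^4
# recover the normalizations quoted in the text:
print(wind_parameter_A11(1.0, 1.0, 1.0, 1e4))  # 0.3
print(ejecta_luminosity(1.0, 1.0))             # 4e+53
```

The calculator makes the trends explicit: a larger observed photon number $N_{\rm GeV}$ requires a denser wind, while a higher prompt luminosity $L_{54}$ reduces the inferred $A$.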
Finally, if the simultaneous optical flash is detected, an estimate of the blast wave magnetization is given by (see Equation~81 in \citealt{beloborodov_2014}) \begin{equation} \epsilon_{B} \sim 3\times10^{-4} \left[ \frac{Z_\pm(R_{\rm opt})}{100} \right]^{-2} L_{\rm opt,49}^2 R_{\rm opt, 16}^{-2} A_{11}^{-1} \left(\frac{L_{\rm GRB}}{L_{\rm ej}}\right)^2 \left( \frac{E_t}{1 \ \mathrm{MeV}} \right)^{-2} (1+z)^{-2}, \end{equation} where $L_{\rm opt,49}$ is the peak luminosity of the optical flash normalized to $10^{49}$~erg~s$^{-1}$, and $R_{\rm opt}$ is the radius where the optical flash peaks, which is slightly outside the peak radius of the GeV flash, $R_p$. \end{appendix} \newpage \bibliographystyle{apj}
\section{Introduction}\label{section:intro} How can two parties run a protocol over a noisy channel? Interactive communication seeks to solve this problem while minimizing the total number of bits sent. Recently, Haeupler~\cite{haeupler2014interactive} gave an algorithm for this problem that is conjectured to be optimal. However, as in previous work~\cite{schulman:communication,brakerski:fast, brakerski:efficient, braverman:towards, braverman:towards-deterministic, gelles:efficient, ghaffari:optimal, ghaffari:optimal2}, his algorithm critically relies on the assumption that the algorithm knows the noise rate in advance, \emph{i.e.},\ it knows in advance the number of bits that will be flipped by the adversary. In this paper, we remove this assumption. To do so, we add a new assumption of privacy. In particular, in our model, an adversary can flip an unknown number of bits, at arbitrary times, but he never learns the value of any bits sent over the channel. This assumption is necessary: with a public channel and unknown noise rate, the adversary can run a man-in-the-middle attack to mislead either party (see Theorem~\ref{t:privateIsNecessary}, Section~\ref{sec:remarks}). \paragraph{Problem Overview} We assume that Alice and Bob are connected by a noisy binary channel. Our goal is to build an algorithm that takes as input some distributed protocol $\pi$ that works over a noise-free channel and outputs a distributed protocol $\pi'$ that works over the noisy channel. We assume an adversary chooses $\pi$, and which bits to flip in the noisy channel. The adversary knows our algorithm for transforming $\pi$ to $\pi'$. However, he knows neither the private random bits of Alice and Bob nor the bits sent over the channel, except when it is possible to infer these from knowledge of $\pi$ and our algorithm. We let $T$ be the number of bits flipped by the adversary, and $L$ be the length of $\pi$. As in previous work, we assume that Alice and Bob know $L$.
\paragraph{Our Results} Our main result is summarized in the following theorem. \begin{theorem}\label{thm:main} Algorithm~3 tolerates an unknown number of adversarial errors, $T$, succeeds with high probability in the transcript length\footnote{Specifically with probability at least $1 - \frac{1}{L \log L}$}, $L$, and if successful, sends in expectation $L + O\left(\sqrt{L(T+1)\log L} + T \right)$ bits. \end{theorem} The number of bits sent by our algorithm is within logarithmic factors of optimal, assuming a conjecture from~\cite{haeupler2014interactive} (see Theorem~\ref{thm:Lprime}). Results in this paper first appeared in conference proceedings~\cite{ICALP15}. \subsection{Related Work} For $L$ bits to be transmitted from Alice to Bob, Shannon~\cite{shannon:mathematical} proposes an error correcting code of size $O(L)$ that yields correct communication over a {\it noisy} channel with probability $1-e^{-\Omega(L)}$. At first glance, this may appear to solve our problem. But consider an {\it interactive} protocol with communication complexity $L$, where Alice sends one bit, then Bob sends back one bit, and so forth where the value of each bit sent {\it depends on the previous bits received}. Two problems arise. First, using block codewords is not efficient; to achieve a small error probability, ``dummy'' bits may be added to each bit prior to encoding, but this results in a superlinear blowup in overhead. Second, due to the interactivity, an error that occurs in the past can ruin all computation that comes after it. Thus, error correcting codes fall short when dealing with interactive protocols.\smallskip The seminal work of Schulman~\cite{schulman:deterministic,schulman:communication} overcame these obstacles by describing a deterministic method for simulating interactive protocols on noisy channels with only a constant-factor increase in the total communication complexity. 
This work spurred vigorous interest in the area (see~\cite{braverman:coding} for an excellent survey). Schulman's scheme tolerates an adversarial noise rate of $1/240$. It critically depends on the notion of a {\it tree code} for which an exponential-time construction was originally provided. This exponential construction time motivated work on more efficient constructions~\cite{braverman:towards-deterministic,peczarski:improvement,moore:tree}. There were also efforts to create alternative codes~\cite{gelles:efficient,ostrovsky:error}. Recently, elegant computationally-efficient schemes that tolerate a constant adversarial noise rate have been demonstrated~\cite{brakerski:efficient,ghaffari:optimal2}. Additionally, a large number of powerful results have improved the tolerable adversarial noise rate~\cite{brakerski:fast,braverman:towards,ghaffari:optimal,franklin:optimal,braverman:list}. The closest prior work to ours is that of Haeupler~\cite{haeupler2014interactive}. His work assumes a fixed and known adversarial noise rate $\epsilon$, the fraction of bits flipped by the adversary. Communication efficiency is measured by the \textit{communication rate}, which is $L$ divided by the total number of bits sent. Haeupler~\cite{haeupler2014interactive} describes an algorithm that achieves a communication rate of $1 - O(\sqrt{\epsilon \log\log(1/\epsilon)})$, which he conjectures to be optimal. We compare our work to his in Section~\ref{sec:remarks}. Feinerman, Haeupler and Korman~\cite{feinerman2014breathe} recently studied the interesting related problem of spreading a single-bit rumor in a noisy network. In their framework, in each synchronous round, each agent can deliver a single bit to a random anonymous agent. This bit is flipped independently at random with probability $1/2-\epsilon$ for some fixed $\epsilon >0$.
Their algorithm ensures with high probability that in $O(\log n/\epsilon^2)$ rounds and with $O(n\log n/\epsilon^2)$ messages, all nodes learn the correct rumor. They also present a majority-consensus algorithm with the same resource costs, and prove these resource costs are optimal for both problems. \subsection{Formal Model} Our algorithm takes as input a protocol $\pi$ which is a sequence of $L$ bits, each of which is transmitted either from Alice to Bob or from Bob to Alice. As in previous work, we also assume that Alice and Bob both know $L$. We let Alice be the party who sends the first bit in $\pi$. \paragraph{Channel Steps} We assume communication over the channel is synchronous and individual computation is instantaneous. We define a \textit{channel step} as the amount of time that it takes to send one bit over the channel. \paragraph{Silence on the Channel} When neither Alice nor Bob sends in a channel step, we say that the channel is silent. In any contiguous sequence of silent channel steps, the bit received on the channel in the first step is set by the adversary for free. By default, the bit received in subsequent steps of the sequence remains the same, unless the adversary pays for one bit flip in order to change it. In short, the adversary pays a cost of one bit flip each time it wants to change the value of the bit received in any contiguous sequence of silent steps. \subsection{Overview of Our Result} \paragraph{Challenges} Can we adapt prior results by guessing the noise rate? Underestimation threatens correctness if the actual number of bit flips exceeds the algorithm's tolerance. Conversely, overestimation leads to sending more bits than necessary. Thus, we need a protocol that adapts to the adversary's actions. One idea is to adapt the amount of communication redundancy based on the number of errors detected thus far. However, this presents a new challenge because the parties may have different views of the number of errors.
They will need to synchronize their adaptations over the noisy channel. This is a key technical challenge to achieving our result. Another technical challenge is termination. The total length of the simulation is necessarily unknown, so the parties will likely not terminate at the same time. After one party has terminated, it is a challenge for the other party to detect this fact based on bits received over the noisy channel. A high-level overview of how we address these challenges is given in Section~\ref{s:alg-overview}. \subsection{Paper Organization} The rest of this paper is organized as follows. In Section~\ref{sec:alg-bounded}, we describe a simple algorithm for interactive communication that works when $T = O(L/\log L)$. We analyze this algorithm in Section~\ref{sec:bounded-analysis}. In Section~\ref{sec:alg-unbounded}, we describe an algorithm for interactive communication that works for any finite $T$; we prove this algorithm correct in Section~\ref{sec:analysis-unbounded}. Section~\ref{sec:remarks} gives some relevant remarks, including justifying private channels and comparing our algorithm with past work. Finally, we conclude and give directions for future work in Section~\ref{sec:conc}. \section{Bounded $T$ - Algorithm} \label{sec:alg-bounded} In this section, we describe an algorithm that solves the interactive communication problem when $T= O(L/\log L)$. \subsection{Overview, Notation and Definitions} \begin{figure} \begin{center} \begin{tabular}{l p{15cm}} $L$ & The length of the protocol to be simulated.\\ $\pi$ & The $L$-bit protocol to be simulated, augmented by random bits to length $ \left(1 + \left\lceil \frac{L}{R_0} \right \rceil \right) R_0$. \\ $\pi[\mathcal{T}, \ell]$ & The result of the computation of the next $\ell$ bits of $\pi$ after history $\mathcal{T}$. \\ $R_0$ & Initial round size in the algorithm. This is the smallest power of 2 that is greater than $\sqrt{LF}$.
So $\sqrt{LF} \leq R_0 \leq 2\sqrt{LF}$. \\ $F$ & The length of the fingerprint.\\ $\mathcal{T}_a$ & Alice's tentative transcript.\\ $\mathcal{T}_b$ & Bob's tentative transcript.\\ $\mathcal{T}^*_a$ & Alice's verified transcript.\\ $\mathcal{T}^*_b$ & Bob's verified transcript.\\ $\mathcal{T}[0:\ell]$ & The first $\ell$ bits of $\mathcal{T}$. If $|\mathcal{T}| < \ell$, this is $null$. \\ \end{tabular} \caption{Glossary of Notation} \label{f:notation} \end{center} \end{figure} Our algorithm is presented as Algorithm~\ref{alg:bdIC}. The overall idea of the algorithm is simple: the parties run the original protocol $\pi$ for a certain number of steps as if there was no noise. Then, Alice determines whether an error has occurred by checking a fingerprint from Bob. Based on the result of this verification, the computation of $\pi$ either moves forward or is rewound to be performed again. \subsection{Helper Functions} \label{s:helper} Before giving details of the algorithm, we first describe some helper functions and notation (see Figure~\ref{f:notation}). \paragraph{Fingerprinting} To verify communication, we make use of the following well-known theorem. \begin{theorem} \label{thm:hash}~[Naor and Naor~\cite{naorandnaorJ}] For any positive integer $\mathcal{L}$ and any probability $p$, there exists a hash function $\mathcal{F}$ that given a uniformly random bit string $S$ as the seed, maps any string of length at most $\mathcal{L}$ bits to a bit string hash value $H$, such that the collision probability of any two strings is at most $p$, and the lengths of $S$ and $H$ are $|S|=\Theta(\log(\mathcal{L}/p))$ and $|H| = \Theta(\log(1/p))$ bits. \end{theorem} We define two functions based on this theorem, $\hash$ and $\matchesFP$. In this section, we will write $\hash_L$ to denote that the probability of error $p$ is polynomially small in $L$. In particular, we can set $p = 1/L^{2}$, with fingerprints of size $O(\log L)$.
The function $\hash_L(T)$ takes a transcript $T$ and returns a tuple $(s, f)$, where $s$ is a uniformly random bit string and $f$ is the output of the hash function $\mathcal{F}$ in the theorem above when given inputs $s$ and $T$. We refer to this tuple as the \emph{fingerprint} of $T$. The function $\matchesFP((s,f),T)$ takes a fingerprint $(s,f)$ and a transcript $T$. It returns true if and only if the output of $\mathcal{F}$ when given bit string $s$ and transcript $T$ is equal to the value $f$. In both of these functions, the total length of the fingerprint is given by the value $F$, which will be defined later. \medskip \paragraph{Algebraic Manipulation Detection Codes} Our result makes critical use of Algebraic Manipulation Detection (AMD) codes from~\cite{cramer2008detection}. These codes provide three functions: $\eAMD$, $\isCodeword$ and $\dAMD$. The function $\eAMD(m)$ creates an encoding of a message $m$. The function $\isCodeword(m')$ returns true if and only if a received message $m'$ is equal to $\eAMD(m)$ for some sent message $m$. The function $\dAMD(m')$ takes a received value $m'$, where $\isCodeword(m')$, and returns the value $m$ such that $\eAMD(m) = m'$. Intuitively, AMD codes enable detection of bit corruptions on encoded words, with high probability. We make use of the following theorem about AMD codes. This is a slight rewording of a theorem from~\cite{cramer2008detection}. \begin{theorem}~\cite{cramer2008detection} \label{t:amd} For any $\delta > 0$, there exist functions $\eAMD$, $\isCodeword$ and $\dAMD$, such that, for any bit string $m$ of length $x$: \begin{itemize} \item $\eAMD(m)$ is a string of length $x+ C \log (1/\delta)$, for some constant $C$; \item $\isCodeword(\eAMD(m))$ and $\dAMD(\eAMD(m)) = m$; \item For any bit string $s \neq 0$ of the same length as $\eAMD(m)$, $\Pr(\isCodeword(\eAMD(m) \oplus s)) \leq \delta$. \end{itemize} \end{theorem} In this section, we set $\delta = 1/L^{2}$ and add $O(\log L)$ additional bits to the message word.
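As a concrete toy instance of this interface, consider the following sketch over a prime field (ours, purely illustrative, and \emph{not} the construction of~\cite{cramer2008detection}); the tag $f(x,m)=x^3+mx$ is a textbook example of weak algebraic manipulation detection:

```python
import random

# Toy AMD-style interface (an illustrative stand-in for eAMD / IsCodeword /
# dAMD, NOT the construction used in the paper).  A codeword is (m, x, tag)
# with tag = x^3 + m*x over a prime field; an additive shift of the message
# is accepted only for rare "bad" choices of the random x.
P = (1 << 61) - 1  # an arbitrary large prime modulus

def amd_encode(m):
    x = random.randrange(P)
    return (m % P, x, (pow(x, 3, P) + m * x) % P)

def amd_is_codeword(word):
    m, x, tag = word
    return tag == (pow(x, 3, P) + m * x) % P

def amd_decode(word):
    assert amd_is_codeword(word)
    return word[0]

w = amd_encode(12345)
assert amd_is_codeword(w) and amd_decode(w) == 12345

# Shifting the message part is detected unless x = 0 (probability 1/P):
tampered = ((w[0] + 7) % P, w[1], w[2])
print(amd_is_codeword(tampered))  # almost always False
```

In the algorithm, control messages are sent AMD-encoded, so that adversarial bit flips are detected, rather than silently accepted, except with probability $\delta$.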
Also in this section, we will always encode strings of size $O(\log L)$, so the AMD encoded messages will be of size $O(\log L)$. In the algorithm, we will denote the fixed length of the AMD-encoded fingerprint by $F$. \subsection{Remaining Notation} \paragraph{Transcripts} We define Alice's \emph{tentative transcript}, $\mathcal{T}_a$, as the sequence of possible bits of $\pi$ that Alice has either sent or received up to the current time. Similarly, we let $\mathcal{T}_b$ denote Bob's tentative transcript. For both Alice and Bob, we define a \emph{verified transcript} to be the longest prefix of a transcript for which a verified fingerprint has been received. We denote the verified transcript for Alice as $\mathcal{T}^*_a$, and for Bob as $\mathcal{T}^*_b$. The notation $T\preccurlyeq T'$ signifies that a transcript $T$ is a prefix of a transcript $T'$. \paragraph{Rounds} We define one of \emph{Alice's rounds} as one iteration of the repeat loop in Alice's protocol. Alice's round consists of $r_a$ channel steps, where $r_a$ is the \emph{round size} value maintained by Alice. Similarly, we define one of \emph{Bob's rounds} as one iteration of the repeat loop in Bob's protocol. Such a round consists of $r_b$ channel steps, where $r_b$ is the \emph{round size} for Bob. \paragraph{Other Notation} For a transcript $\mathcal{T}$ and integer $i$, we define $\mathcal{T}[0 : i]$ to be the first $i$ bits of $\mathcal{T}$. For two strings $x$ and $y$, we define $x \odot y$ to be the concatenation of $x$ and $y$.
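Before turning to the algorithm, the fingerprint helpers $\hash_L$ and $\matchesFP$ of Section~\ref{s:helper} can be sketched as follows (a polynomial hash over a prime field, our illustrative stand-in for the Naor--Naor family; names and parameters are ours):

```python
import random

P = (1 << 31) - 1  # prime modulus; two distinct bit strings of length n
                   # collide with probability about n/P over the random seed

def hash_fp(bits):
    """Toy hash_L: return a fingerprint (seed s, value f) of a bit string."""
    s = random.randrange(1, P)
    f = 0
    for b in bits:          # Horner evaluation of sum_i b_i * s^(n-1-i) mod P
        f = (f * s + b) % P
    return (s, f)

def matches_fp(fp, bits):
    """Toy matchesFP: recompute the hash under seed s and compare with f."""
    s, f = fp
    h = 0
    for b in bits:
        h = (h * s + b) % P
    return h == f

T = [1, 0, 1, 1, 0, 1]
fp = hash_fp(T)
assert matches_fp(fp, T)
# Flipping the last bit changes the hash by exactly 1 mod P, so this is False:
print(matches_fp(fp, [1, 0, 1, 1, 0, 0]))
```

In the protocol, Bob sends $\eAMD(\hash_L(\mathcal{T}_b))$, and Alice treats a round as successful only if the fingerprint matches her own tentative transcript; an undetected disagreement thus requires a hash collision.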
\subsection{Algorithm Overview} \label{s:alg-overview} \begin{algorithm*} \caption{Bounded Error Interactive Communication} \label{alg:bdIC} \begin{minipage}{.43\textwidth} \CommentSty{\bf ALICE'S PROTOCOL} \BlankLine \nl $\mathcal{T}_a \gets null$; $\mathcal{T}^*_a \gets null$\; $m_a \gets 0$; $r_a \gets R_0$\; \nl \label{ap:repeat3}\Repeat{$m_a = \frac{R_0^2}{4F^2} -1$}{ \nl $\mathcal{F}_a \gets \eAMD(m_a, r_a, |\mathcal{T}^*_a|)$\; \nl Send $\mathcal{F}_a$\; \nl Append $\pi[\mathcal{T}_a, r_a - 2F]$ to $\mathcal{T}_a$\; \nl Receive Bob's $F$-bit message, $\mathcal{F}'_b$\; \nl \uIf{\emph{$\isCodeword(\mathcal{F}'_b)$}}{ \nl \uIf{$|\mathcal{T}^*_a| \ge L$}{ \nl Output $\mathcal{T}^*_a[0:L]$ and \\ \textbf{Terminate}\; } \nl $\mathcal{F} \gets \dAMD(\mathcal{F}'_b)$\; \nl \uIf {\emph{$\matchesFP(\mathcal{F},\mathcal{T}_a)$}}{ \CommentSty{// successful round}\; \nl $\mathcal{T}^*_a \gets \mathcal{T}_a$\; }} \nl \uElse{ \CommentSty{// round failed }\; \nl $\mathcal{T}_a \gets \mathcal{T}^*_a$\; \nl \label{ap:aecincr}$m_a \gets m_a +1$\; \nl \uIf{$1+m_a$ is a power of 4}{\nl $r_a \gets r_a/2$\;} } } \BlankLine \BlankLine \BlankLine \BlankLine \BlankLine \BlankLine \BlankLine \BlankLine \BlankLine \BlankLine \BlankLine \BlankLine \BlankLine \BlankLine \end{minipage} \hfill \begin{minipage}{.43\textwidth} \setcounter{AlgoLine}{0} \BlankLine \CommentSty{\bf BOB'S PROTOCOL} \BlankLine \nl $\mathcal{T}_b \gets null$; $\mathcal{T}^*_b \gets null$\; $m_b \gets 0$; $r_b \gets R_0$\; \nl \label{bp:repeat}\Repeat{$m_b = \frac{R_0^2}{4F^2}-1$} { \nl Receive Alice's $F$-bit message, $\mathcal{F}'_a$\; \nl \uIf{\emph{all bits of $\mathcal{F}'_a$ are equal}}{ \CommentSty{// Alice has likely left}\; \nl Output $\mathcal{T}^*_b[0:L]$ and \\ \textbf{Terminate}\; } \nl \uIf{\emph{$\isCodeword(\mathcal{F}'_a)$}}{ \nl $(m, r, \ell) \gets \dAMD(\mathcal{F}'_a)$\; \CommentSty{// synchronize values}\; \nl $r_b \gets r $\; \nl $m_b \gets m$\; \nl \uIf{$\ell > |\mathcal{T}^*_b|$}{ \nl 
$\mathcal{T}^*_b \gets \mathcal{T}_b$\; } \nl \uElse{ \nl $\mathcal{T}_b \gets \mathcal{T}^*_b$\; } \nl Append $\pi[\mathcal{T}_b,r_b - 2F]$ to $\mathcal{T}_b$\; \nl $\mathcal{F}_b \gets \eAMD (\hash_L(\mathcal{T}_b))$\; \nl Send $\mathcal{F}_b$\; } \nl \uElse{ \CommentSty{// corruption occurred}\; \nl Send random bits for $r_b -F$ steps\; \nl $m_b \gets m_b +1$ \; \nl \uIf{$1+m_b$ is a power of 4}{\nl $r_b \gets r_b/ 2 $\;} }} \end{minipage} \end{algorithm*} To facilitate discussion of the algorithm, we first state some important properties of rounds (proven in Section~\ref{sec:bounded-analysis}). First, the size of any round is always a power of two. Second, the start of each of Bob's rounds always coincides with the start of one of Alice's rounds. This ensures that whenever Bob is listening for the message $\mathcal{F}'_a$, Alice will be sending such a message. We first describe one of Alice's rounds in which 1) neither Alice nor Bob terminates; and 2) there are no adversarial bit flips. In such a round, Alice sends an encoded message containing three pieces of information: $m_a$, the number of failed rounds Alice has counted so far; $r_a$, her current round size; and $|\mathcal{T}^*_a|$, the size of Alice's verified transcript. When Bob decodes this message, he synchronizes several values with Alice. In particular, he sets his round size value, $r_b$, and mistake estimate value, $m_b$, so they equal the values Alice sent. Then, based on $|\mathcal{T}^*_a|$, Bob either increases the length of his verified transcript, or else decreases the length of his tentative transcript. After this synchronization, Alice and Bob both compute a certain number of bits of $\pi$ and add these to their tentative transcripts. Finally, Bob sends an encoded fingerprint to Alice. She verifies this fingerprint, and then adds the bits of $\pi$ computed during this round to her verified transcript. There are two key ways in which adversarial bit flips can alter the above scenario.
First, when the encoded message Alice sends containing $m_a$, $r_a$, and $|\mathcal{T}^*_a|$ is corrupted. In this case, Bob will send random bits for the remainder of the round. This ensures two things. First, whenever Alice is listening for a fingerprint from Bob, Bob will either be sending a fingerprint or random bits. Thus, \whp, the adversary will be unable to forge an encoding of a fake fingerprint by flipping bits. Second, Bob's error count updates at the same time as Alice's. The other key way in which adversarial bit flips can alter the ideal scenario is as follows. The adversary flips bits in such a way that the encoded fingerprint that Bob sends arrives as an $\mathcal{F}'_b$ that fails to be a valid fingerprint for Alice's tentative transcript. In this case, Alice rewinds her tentative transcript, increments her error count, and updates her block size. \paragraph{Handling Termination} In previous work, since $\epsilon$ and the simulation length $L'$ are known, both parties know when to terminate (or {\it leave} the protocol), and can do so at the same time. However, since we know neither parameter, termination is now more challenging. In our algorithm, $\pi$ is augmented with a certain number of additional bits that Alice sends to Bob. Each of these bits is set independently and uniformly at random by Alice. Alice terminates when her verified transcript is of length at least $L$. Bob terminates when he receives a value $\mathcal{F}'_a$ where all bits are the same. This condition ensures that 1) Bob is very unlikely to terminate before Alice; and 2) Bob terminates soon after Alice, unless the adversary pays a significant cost to delay this. \section{Bounded $T$ - Analysis} \label{sec:bounded-analysis} We now prove that with high probability, Algorithm~\ref{alg:bdIC} correctly simulates $\pi$ when $T$ is promised to be $O(L / \log L)$. Before proceeding to our proof, we define two bad events.
\begin{itemize} \item[] \textit{Hash Collision.} Either Alice or Bob incorrectly validates a fingerprint and updates their verified transcript to include bits not in $\pi$. \item[] \textit{Failure of AMD Codes.} The adversary corrupts an encoded message into the encoding of a different message. Or the encoding of some message, after possible adversary corruption, equals a bit string of all zeroes or all ones. \end{itemize} Throughout this section, we will assume neither event occurs. At the end of this section, we will show that the probability that either event occurs is polynomially small in $L$. \begin{lemma} \label{l:po2} Each player's round size is always a power of two. \end{lemma} \begin{proof} This is immediate from the fact that the round size starts out as a power of $2$ and the fact that each time it decreases, it decreases by a factor of $2$. \end{proof} \begin{lemma}\label{lem:Amonotonicity} $m_a$ is monotonically increasing, and hence Alice's round size never increases. \end{lemma} \begin{proof} This follows immediately from the fact that the only time $m_a$ changes is on Line~\ref{ap:aecincr} of Alice's protocol, when it is incremented by 1. \end{proof} \begin{lemma} \label{lem:bec-upper-bd} Algorithm~\ref{alg:bdIC} has the following properties: \begin{enumerate} \item When Bob starts a round, Alice starts a round; \item $m_b \le m_a$ at all times that Alice remains in the protocol. \end{enumerate} \end{lemma} \begin{proof} This follows by induction on $m_a$. \paragraph{Base Case} We first show that the lemma holds while $m_a = 0$. Note that $m_b$ can only increase after Bob has spent a round sending random bits. During such a round, Alice will increment $m_a$ before Bob increments $m_b$. Next, note that while $m_b = m_a = 0$, Alice and Bob both have the same round sizes, and so when Bob starts a round, Alice starts a round. \paragraph{Inductive Step} Consider the channel step, $t$, at which Alice increases $m_a$ to some value $j>0$.
We must show that the lemma statement holds throughout the time while $m_a = j$. By the inductive hypothesis, up to time $t$, $m_b \le m_a$, and when Bob started a round, Alice started a round. There are two cases for the value of $m_b$ at the end of channel step $t$. \paragraph{Case 1} $m_b < j$. In this case, Bob must not have received $\mathcal{F}_a$ at the beginning of the round he is in at channel step $t$. Hence, Bob transmits random bits during this entire round. Bob's round size is an integer multiple of Alice's round size (by Lemma~\ref{l:po2}). Thus, Bob will transmit random bits throughout Alice's round begun at channel step $t+1$. So Alice will not receive a matching fingerprint at the end of the round she began at step $t+1$, and so she will increment $m_a$ before Bob increments $m_b$. This will happen before Bob completes the round he is in at time $t$, so both conditions of the lemma hold while $m_a = j$. \paragraph{Case 2} $m_b = j$. Note that $m_b$ can only increase after Bob has spent a round sending random bits. During such a round, Alice will increment $m_a$ before Bob increments $m_b$. Thus, while $m_a = j$, $m_b = j$. Next, note that, if $m_b = m_a = j$ at step $t$, then Alice and Bob both ended their rounds at step $t$. Hence, during the time that $m_a =j$, when Bob starts a round, Alice starts a round. \end{proof} The following corollaries are immediate from the above lemma. \begin{corollary} When Bob ends a round, Alice ends a round. \end{corollary} \begin{corollary}\label{c:bob-round} Bob's rounds are at least as large as Alice's rounds. \end{corollary} The following corollary holds from the above lemma and the fact that Bob's round sizes are at least as large as Alice's. \begin{corollary} \label{c:listening} While both parties remain in the protocol, whenever Bob is listening for a $\mathcal{F}_a$, Alice is sending it. Also, whenever Alice is listening for $\mathcal{F}_b$, either Bob is sending it, or Bob is sending random bits. 
\end{corollary} The following lemma also follows from Lemma~\ref{lem:bec-upper-bd}. \begin{lemma} \label{lem:aec-bec} Let $\mathcal{R}$ be one of Alice's rounds which starts and ends at the same time as one of Bob's rounds. Then, at the end of $\mathcal{R}$, either $m_a - m_b$ is the same as it was at the beginning of $\mathcal{R}$ or it equals $0$ or $1$. \end{lemma} \begin{proof} If $\mathcal{F}_a$ is corrupted at the beginning of $\mathcal{R}$, Bob transmits random bits for the rest of $\mathcal{R}$, and both Alice and Bob increment their error counts at the end, so $m_a - m_b$ stays the same. If $\mathcal{F}_a$ is not corrupted at the beginning of $\mathcal{R}$, then Bob sets $m_b$ to $m_a$ at the beginning of $\mathcal{R}$, so at the end, $m_a-m_b \leq 1$. By Lemma~\ref{lem:bec-upper-bd} (2), $m_a-m_b \geq 0$. \end{proof} \subsection{Phases} We now give some definitions. \begin{definition} \label{d:phaseRS} We define \emph{phase} $j$ to be all of Alice's rounds of size $R_0 / 2^{j}$. \end{definition} \begin{definition} \label{d:Delta} We define $\Delta_j$, for all $j\geq 0$, to be the value $m_a-m_b$ at the end of phase $j$. \end{definition} Note that at the beginning of phase $j$, Alice's error count is $4^j - 1$. We now give a few lemmas about phases. \begin{lemma} \label{l:phaseNumRounds} For any $j>0$, phase $j$ contains at least $3\Delta_{j-1}$ of Alice's rounds. \end{lemma} \begin{proof} Consider any $j>0$. At the beginning of phase $j$, $m_a = 4^{j} - 1$. Also, at the beginning of phase $j$, by Lemma~\ref{lem:bec-upper-bd} (2), $m_b \leq m_a$. Hence, $0 \leq \Delta_{j-1} \leq 4^{j} - 1$. Note that $m_a$ increases by at most $1$ in each of Alice's rounds. Thus, $3\Delta_{j-1}$ rounds after the beginning of phase $j$, the value of $m_a$ is at most: \begin{align*} 4^j -1 + 3\Delta_{j-1} &\le 4^j -1 + 3 (4^{j}-1)\\ & < 4^{j+1} -1 \end{align*} Thus after $3\Delta_{j-1}$ rounds, $m_a$ is not large enough for Alice to advance to phase $j+1$.
\end{proof} \paragraph{Progressive, Corrupted and Wasted Rounds} Let $\mathcal{R}$ be one of Alice's rounds. We call $\mathcal{R}$ \emph{progressive} if Alice does not update her error count during the round, or equivalently if her verified transcript length increases. We call $\mathcal{R}$ \emph{corrupted} if the adversary flipped at least one bit in the round. We call $\mathcal{R}$ \emph{wasted} if it is neither progressive nor corrupted. We want to bound the number of wasted rounds since this number represents the amount by which $m_a$ is potentially an overestimate of $T$. We note that wasted rounds occur only when $r_b > r_a$. In this case, Bob is not listening when Alice sends him $\mathcal{F}_a$. As a result, Bob does not send Alice a valid fingerprint at the end of her round, and so her verified transcript does not increase, even though the adversary has not flipped any bits. \medskip The following lemma bounds the number of wasted rounds in a phase, and gives other critical properties. \begin{lemma} \label{lem:aec-becW} Suppose at the beginning of phase $j$, $j>0$, Bob is at the start of a round and his round size is at most $R_0/2^{j-1}$. Then \begin{enumerate} \item There are at most $\Delta_{j-1}$ wasted rounds in phase $j$; \item $\Delta_j \in \{0, 1, 2\Delta_{j-1} \}$; and \item Bob ends a round at the end of phase $j$. \end{enumerate} \end{lemma} \begin{proof} If Bob's round size is initially less than $R_0/2^{j-1}$, then it must equal $R_0/2^{j}$ in order to be a power of two. Hence Alice and Bob will have rounds that are the same size for the entire phase, and the lemma holds trivially. We now consider the harder case where Bob's round size equals $R_0/2^{j-1}$. By Definition~\ref{d:phaseRS}, Alice has round size $R_0/2^j$ throughout phase $j$. By Corollary~\ref{c:bob-round}, Bob's round size is always greater than or equal to Alice's round size.
Thus, as soon as either 1) Bob receives $\mathcal{F}_a$ in one of his rounds in phase $j$, or 2) Bob sets $m_b$ equal to Alice's error count at the beginning of phase $j$, Bob's round size will be $R_0/2^j$ for the remainder of the phase. Finally, by Lemma~\ref{lem:bec-upper-bd} (1), from that point on, Alice and Bob will begin, and thus end, all rounds at the same time. Now consider Bob's rounds in phase $j$. Assume the adversary corrupts $\mathcal{F}_a$ in Bob's rounds $1$ through $i$ for some value $i \geq 0$, and then the adversary does not corrupt $\mathcal{F}_a$ in Bob's round $i+1$. We consider two cases. \paragraph{Case 1: $i < \Delta_{j-1}$} Each of the first $i$ rounds of Bob spans two rounds of Alice. By Lemma~\ref{l:phaseNumRounds}, these rounds are all contained in phase $j$. Consider each pair of Alice's rounds spanned by one of Bob's rounds. The first round in the pair is corrupted, but during the second, Bob is transmitting random bits and Alice will not receive a fingerprint from him. Thus, this round is wasted. Hence, there are $i$ wasted rounds. In round $i+1$, Bob synchronizes his round size with Alice since he receives $\mathcal{F}_a$. Thus, there are no more wasted rounds. Applying Lemma~\ref{lem:aec-bec} for the remaining rounds of the phase, we see that at the end of the phase, $m_a-m_b = \Delta_{j}$ is either $0$ or $1$. \paragraph{Case 2: $i \geq \Delta_{j-1}$} Bob increases $m_b$ by $1$ in each of his first $i$ rounds. Note that at the beginning of phase $j$, Alice's error count is $4^j -1$. Thus, after Bob's first $i$ rounds, $m_b = (4^j - 1) - \Delta_{j-1} + i$. Hence when $i = \Delta_{j-1}$, $m_b = (4^j - 1)$. At that time, Bob sets his round size to $R_0/2^j$, and so Alice and Bob will have the same round sizes, and will hence begin and end all rounds at the same step, for the rest of phase $j$. Thus, there are no more wasted rounds. Note that in this case, at the end of Bob's $\Delta_{j-1}$-th round, $m_a-m_b$ will be $2\Delta_{j-1}$.
Applying Lemma~\ref{lem:aec-bec} for the remaining rounds of the phase, we see that $\Delta_j = 2\Delta_{j-1}$, or $\Delta_j$ is $0$ or $1$. \end{proof} \begin{lemma} \label{lem:phaseJ} For every $j \geq 0$: \begin{enumerate} \item There are at most $2^{j-1}$ wasted rounds in phase $j$; \item $\Delta_j \leq 2^j$; and \item Bob ends a round at the end of phase $j$. \end{enumerate} \end{lemma} \begin{proof} We prove this by induction on $j$. \paragraph{Base Case} At the beginning of phase $0$, Bob is at the start of a round and his round size is $R_0$. Thus, by Lemma~\ref{lem:aec-becW}: there are $0$ wasted rounds in phase $0$; $\Delta_{0} \leq 1$; and Bob ends a round at the end of phase $0$. \paragraph{Inductive Step} Consider some $j>0$. By the inductive hypothesis, $\Delta_{j-1} \leq 2^{j-1}$. At the beginning of phase $j$, $m_b = m_a - \Delta_{j-1} \leq (4^{j}-1) -\Delta_{j-1}$, so that $r_b = R_0 /2^{\lfloor\log_4{(1 + m_b)}\rfloor} \leq R_0 /2^{\lfloor\log_4{(4^j- \Delta_{j-1})}\rfloor} \leq R_0 /2^{j-1}$. The last line holds since $0 \leq \Delta_{j-1} \leq 2^{j-1}$. Also, by the inductive hypothesis, Bob ended a round at the end of phase $j-1$, and so is starting a round at the beginning of phase $j$. Hence, we can apply Lemma~\ref{lem:aec-becW} to phase $j$. From this lemma, it follows that 1) the number of wasted rounds in phase $j$ is at most $2^{j-1}$; 2) $\Delta_j \leq 2 \Delta_{j-1} \leq 2^j$; and 3) Bob ends a round at the end of phase $j$. \end{proof} Note from the above lemma that Bob's rounds are never more than double the size of Alice's rounds. The following lemma sums up what we now know about Alice and Bob's rounds. \begin{lemma} \label{l:bround} The following are always true. \begin{enumerate} \item Bob's round size is either equal to Alice's round size or double Alice's round size. \item If Bob's round size equals Alice's round size, then when Alice starts a round, Bob starts a round. 
\item If Bob's round size is twice Alice's round size, then when Alice starts a round, either Bob starts a round, or Bob is in the middle of a round. \end{enumerate} \end{lemma} \begin{proof} The lemma follows from Corollary~\ref{c:bob-round}, Lemma~\ref{lem:bec-upper-bd}, and Lemma~\ref{lem:phaseJ}. \end{proof} \subsection{Correctness and Termination}\label{sec:termination} \begin{lemma} \label{lem:termAlice} It is always the case that $\mathcal{T}^*_a \preccurlyeq \pi$, where $\pi$ is the padded transcript. \end{lemma} \begin{proof} This holds by Lemma~\ref{lem:collision} and Lemma~\ref{lem:amdfail} and the fact that Alice never adds any string to $\mathcal{T}^*_a$ that is not verified by an encoded fingerprint from Bob. \end{proof} \begin{lemma} \label{l:prefixes} At the beginning and end of each of Alice's rounds, \[ \mathcal{T}^*_b \preccurlyeq \mathcal{T}^*_a = \mathcal{T}_a \preccurlyeq \mathcal{T}_b; \] where at most one of the inequalities is strict. Moreover, at the end of a channel step in which Bob receives $\mathcal{F}_a$ correctly, \[ \mathcal{T}^*_b = \mathcal{T}_b = \mathcal{T}^*_a. \] \end{lemma} \begin{proof} We prove this by induction on Alice's round number. \paragraph{Base Case} At the beginning of the algorithm, all transcripts are $null$, so $\mathcal{T}^*_b = \mathcal{T}^*_a = \mathcal{T}_a = \mathcal{T}_b$. Moreover, if Bob receives $\mathcal{F}_a$ correctly in this round, then $\mathcal{T}^*_b = \mathcal{T}_b = \mathcal{T}^*_a$. \paragraph{Inductive Step} We must show that the lemma holds for the $j$-th round. By the inductive hypothesis, at the end of the $(j-1)$-th round, \[ \mathcal{T}^*_b \preccurlyeq \mathcal{T}^*_a = \mathcal{T}_a \preccurlyeq \mathcal{T}_b, \] with at most one of the inequalities being strict. Clearly the statement about the inequalities will thus hold at the beginning of the $j$-th round. Alice's $j$-th round starts with Alice sending Bob $\mathcal{F}_a$.
\paragraph{Case 1: Bob does not receive $\mathcal{F}_a$} If Bob does not receive $\mathcal{F}_a$, then either 1) he was listening and it was corrupted; or 2) he was not listening for it. If he was listening and $\mathcal{F}_a$ was corrupted, then Bob transmits random bits for the remainder of his round, which will be the remainder of Alice's round by Lemma~\ref{l:bround}. By the same lemma, if Bob was not listening, then he must be in the middle of a round that is twice as large as Alice's. In either case, Bob transmits random bits for the remainder of Alice's $j$-th round. Thus, Alice does not receive a matching fingerprint from Bob at the end of her $j$-th round. Thus, at the end of her round, $\mathcal{T}_a \gets \mathcal{T}^*_a$ and $\mathcal{T}_b$ and $\mathcal{T}^*_b$ are unchanged. Hence, it continues to hold that: \[ \mathcal{T}^*_b \preccurlyeq \mathcal{T}^*_a = \mathcal{T}_a \preccurlyeq \mathcal{T}_b; \] and at most one of the inequalities is strict. \paragraph{Case 2: Bob receives $\mathcal{F}_a$} If Bob receives $\mathcal{F}_a$, then he learns the length of $\mathcal{T}^*_a$ and also Alice's round size. By the inductive hypothesis, either $\mathcal{T}^*_a =\mathcal{T}^*_b$ or $\mathcal{T}^*_a =\mathcal{T}_b$. Based on the length of $\mathcal{T}^*_a$, Bob either updates $\mathcal{T}^*_b$ or rewinds $\mathcal{T}_b$, so that $\mathcal{T}^*_b = \mathcal{T}_b = \mathcal{T}^*_a$. This establishes the second part of the lemma for the $j$-th round. Next Alice and Bob continue their rounds which are the same size. If Alice receives a correct fingerprint from Bob at the end of her round, then the following holds: \[ \mathcal{T}^*_b \preccurlyeq \mathcal{T}^*_a = \mathcal{T}_a = \mathcal{T}_b. \] If Alice does not receive a correct fingerprint from Bob at the end of her round, then the following holds: \[ \mathcal{T}^*_b = \mathcal{T}^*_a = \mathcal{T}_a \preccurlyeq \mathcal{T}_b. 
\] In either case, the first part of the lemma statement holds at the end of Alice's $j$-th round. \end{proof} \begin{lemma} \label{lem:termBob} Bob leaves after Alice. When Alice leaves, $|\mathcal{T}^*_b| \geq L$. \end{lemma} \begin{proof} Bob leaves only when he receives an $\mathcal{F}'_a$ that is all zeroes or all ones. By Lemma~\ref{lem:amdfail}, $\mathcal{F}'_a$ is never such a string, and the adversary cannot convert $\mathcal{F}_a$ to such a string by bit flipping. It follows that Bob receives such a string only after Alice has left. Alice leaves only when 1) she has received an encoded fingerprint from Bob; and 2) $|\mathcal{T}^*_a| \geq L$. If Alice receives a correctly encoded fingerprint from Bob, then by Lemma~\ref{lem:amdfail}, Bob must have sent one, and hence Bob must be in a round where he received $\mathcal{F}_a$ correctly. By Lemma~\ref{l:prefixes}, at that channel step, $\mathcal{T}^*_b = \mathcal{T}_b = \mathcal{T}^*_a$. Hence at the step when Alice receives the encoded fingerprint from Bob, $\mathcal{T}^*_b = \mathcal{T}^*_a$. Thus, when Alice leaves, $|\mathcal{T}^*_b| \geq L$. \end{proof} \begin{lemma} \label{l:correctness} When either party terminates, their output is correct. \end{lemma} \begin{proof} The proof follows from Lemmas~\ref{lem:termAlice},~\ref{l:prefixes}, and~\ref{lem:termBob}, and the fact that when either party terminates, they output the first $L$ bits of their verified transcript. \end{proof} \subsection{Cost} \begin{lemma} \label{lem:boberrors} After Alice leaves, the adversary must flip at least one bit for each of Bob's rounds that does not result in Bob leaving. \end{lemma} \begin{proof} After Alice has left, there is silence on the channel in the steps when Bob is listening for Alice's encoded message. This means that if there is no bit flipping by the adversary, the channel transmits the same bit in every channel step, causing Bob to read a string of all zeroes or all ones, and terminate. 
Thus, the adversary must flip at least one bit each time Bob is listening for a codeword. \end{proof} \begin{lemma} \label{lem:2^j-wasted} There are at most $2^{j}-1$ wasted rounds prior to the end of phase $j$, for all $j \geq 0$. \end{lemma} \begin{proof} This follows trivially by repeated applications of Lemma~\ref{lem:phaseJ} (1). \end{proof} Throughout this section, we assume the worst case, that the adversary corrupts at most one bit per corrupted round. \begin{lemma} \label{lem:aec} At all times, $m_a \le T + \sqrt{T}$. In particular, there are no more than $\sqrt{T}$ wasted rounds. \end{lemma} \begin{proof} By way of contradiction, assume $m_a > T + \sqrt{T}$ at some step, in some phase $j$, $j \geq 0$. Then the number of wasted rounds at this step must be greater than $\sqrt{T}$. But by Lemma~\ref{lem:2^j-wasted}, the number of wasted rounds at the end of phase $j$ is no more than $2^j-1$. Thus, we have $\sqrt{T} < 2^j - 1$, or $T < (2^j - 1)^2$. But $m_a$ is no larger than the number of corrupted rounds plus the number of wasted rounds. By the above paragraph, $T < (2^j - 1)^2$ and the number of wasted rounds is no more than $2^j-1$. Thus $m_a < (2^j - 1)^2 + (2^j - 1)$. Moreover, we know that in phase $j$, $m_a \geq 4^j - 1$. Thus, we know \[ 4^j - 1 < (2^j - 1)^2 + (2^j - 1). \] Simplifying, we get $2^j < 1$, which is a contradiction for any $j \geq 0$. \end{proof} Let $m_a^*$ denote Alice's error count when she leaves the algorithm, and $m_b^*$ denote Bob's error count when he himself leaves the algorithm. \begin{lemma}\label{lem:acost} Alice terminates in at most $L+ O(\sqrt{LF(1+m_a^*)})$ steps. \end{lemma} \begin{proof} We first calculate the cost of the rounds that are not progressive for Alice. The number of non-progressive rounds that she has executed is $m_a^*$. Her cost for these rounds is at most the following. 
\begin{align*} \sum_{i=1}^{m_a^*} \frac{R_0}{2^{\lfloor \log_4 i \rfloor}} & \leq 2R_0 \sum_{i=1}^{m_a^*} \frac{1}{2^{\log_4 i}} \\ & = 2R_0 \sum_{i=1}^{m_a^*} \frac{1}{\sqrt{i}} \\ & \le 2R_0 \int_{0}^{m_a^*} \frac{1}{\sqrt{i}}\, di \\ & = 4 R_0 \sqrt{m_a^*} \end{align*} In every progressive round, except possibly the last, Alice's block size is at least $R_0 2^{-\log_4 (1+m_a^*)}$. Thus in all but possibly the last progressive round, Alice always adds bits to her verified transcript at a rate of at least \[ \frac{R_0 2^{-\log_4 (1+m_a^*)} - 2F}{R_0 2^{-\log_4 (1+m_a^*)}}. \] Thus, the total number of bits Alice sends in all but the last progressive round is no more than \[ L \cdot \frac{R_0 2^{-\log_4 (1+m_a^*)}}{R_0 2^{-\log_4 (1+m_a^*)} - 2F}. \] We will make use of the inequality \[ \frac{1}{1-\delta} \le 1+2\delta \ \ \ \ \mbox{ for } 0 < \delta \le 1/2 \] and let $\delta = 2F/\left(R_0 2^{-\log_4 (1+m_a^*)}\right)$. Note that $\delta \leq 1/2$, since Alice's round size is always at least $4F$. Then we have that the total number of bits sent by Alice in all but the last progressive round is no more than \[ L + \frac{4LF}{R_0 2^{-\log_4 (1+m_a^*)}}. \] Adding in the last progressive round, we get that the total number of bits sent by Alice in progressive rounds is no more than \[ L + \frac{4LF}{R_0 2^{-\log_4 (1+m_a^*)}} + R_0 2^{-\log_4 (1+m_a^*)}. \] Putting this together with the number of bits sent in non-progressive rounds, we have that the total number of bits sent by Alice is no more than \begin{align*} L + 4 R_0 \sqrt{m_a^*} + \frac{4LF}{R_0 2^{-\log_4 (1+m_a^*)}} + R_0 2^{-\log_4 (1+m_a^*)} & \leq L + 5 R_0 \sqrt{m_a^*} + 4 \sqrt{LF} (2^{\log_4 (1+m_a^*)}) \\ & \leq L + 10 \sqrt{LFm_a^*} + 4 \sqrt{LF(1+ m_a^*)}\\ & \leq L + 14 \sqrt{LF(1+ m_a^*)} \qedhere \end{align*} \smallskip \end{proof} \begin{lemma}\label{lem:bcost} Bob terminates in at most $L+ 14 \sqrt{LF (1+m_a^*)} + 8\sqrt{LFm_b^* }$ steps.
\end{lemma} \begin{proof} Since Bob never leaves before Alice, Bob's cost must be at least as much as Alice's. We now compute Bob's additional cost. At the time of Alice's departure, $r_a =R_0/2^{\lfloor \log_4 (1+ m_a^*)\rfloor} $. By Lemma~\ref{l:bround}, $r_b \le 2R_0/2^{\lfloor \log_4 (1+ m_a^*)\rfloor} $. Let $m_b'$ denote Bob's error count when Alice leaves the algorithm. Then $1+m_b' \ge 4^{\lfloor\log_4 (1+m_a^*)\rfloor -1} $. Bob's final error count is $m_b^*$. Thus, Bob's additional cost is at most \begin{align*} \sum_{i= m_b'}^{m_b^*-1} \frac{R_0}{2^{\lfloor \log_4(1+ i) \rfloor}} & \leq 2 R_0 \sum_{i= 1}^{m_b^*} \frac{1}{2^{\log_4 i}}\\ & = 2 R_0 \sum_{i= 1}^{m_b^*} \frac{1}{\sqrt{i}} \\ & \leq 4 R_0 \sqrt{m_b^*} \\ & \leq 8 \sqrt{LFm_b^*} \end{align*} Combining this with Alice's cost gives the result. \end{proof} \begin{lemma} \label{l:numSteps} The algorithm ends in at most $12L$ time steps. \end{lemma} \begin{proof} By Lemma~\ref{lem:bcost}, Bob terminates in at most $L+ 14 \sqrt{LF (1+m_a^*)} + 8\sqrt{LFm_b^* }$ steps. Moreover, $m_a^*$ and $m_b^*$ are no more than $R_0^2/4F^2 -1$. Thus, the algorithm terminates in at most the following number of steps. \begin{align*} L + 14 \sqrt{LF(1+m_a^*)} + 8 \sqrt{LFm_b^* } &\le L + 22 \sqrt{\frac{LF R_0^2}{4F^2}}\\ &= L+ 22 \sqrt{\frac{L^2}{4}}\\ & = 12L \,. \qedhere \end{align*} \end{proof} \begin{lemma}\label{lem:A1cost} If $T \le \frac{L}{8F} -1$ then both players terminate with the correct output in at most $L + O(\sqrt{LF(T+1)})$ steps. \end{lemma} \begin{proof} Let $T_a$ denote the number of bits flipped by the adversary while Alice is still in the protocol, and $T_b$ the bits flipped after Alice has left. Then $T_a + T_b = T$. By Lemma~\ref{lem:aec}, $m_a^* \le T_a +\sqrt{T_a}$. By Lemmas~\ref{lem:bec-upper-bd} and~\ref{lem:boberrors}, $m_b^* \le m_a^* + T_b$.
Since $T_a + T_b =T$ it follows that \[ m_a^* \le T+\sqrt{T} \le 2T \le \frac{L}{4F} -2 < \frac{R_0^2}{4F^2}-1 \] and similarly \[ m_b^* < \frac{R_0^2}{4F^2}-1. \] Thus, Alice and Bob will both terminate by outputting the bits of $\pi$ by Lemma~\ref{l:correctness}. Plugging $m_a^* \leq 2T$ and $m_b^* \leq 3T$ into Lemma~\ref{lem:bcost} gives the total number of steps required. \end{proof} \begin{lemma} \label{lem:collision} With high probability in $L$, there are no hash collisions. \end{lemma} \begin{proof} By Lemma~\ref{l:numSteps}, the algorithm ends in at most $12L$ steps. Also, there are at least $4F = \Theta(\log L)$ steps in a round. Thus, the algorithm has at most $O(L/\log L)$ rounds. Each round has one fingerprint. By Theorem~\ref{thm:hash} and the setting of our fingerprint sizes, each fingerprint fails with probability at most $1/L^{2}$. Thus, a simple union bound gives the result. \end{proof} \begin{lemma}\label{lem:amdfail} With high probability in $L$, any bit flipping of an AMD-encoded message is detected. \end{lemma} \begin{proof} We noted in the previous lemma that the algorithm terminates in $O(L/\log L)$ rounds. Each round has two AMD-encoded messages. By Theorem~\ref{t:amd} and the setting of our encoding sizes, each AMD encoding fails with probability at most $1/L^{2}$. Again, a union bound gives the result.\end{proof} \section{Unbounded $T$ - Algorithm} \label{sec:alg-unbounded} Algorithm 1 uses fingerprints of a fixed size $F$ to check its transcripts. Each of these has a $1/L^2$ chance to fail due to a hash collision. Since the algorithm computes only $O(L/\log L)$ fingerprints, a union bound tells us that with high probability the algorithm succeeds when the number of bit flips is below its threshold value of $T$. When $T$ is large, many more checks may need to be made, and eventually there will be a good chance that there is a hash collision. Since the algorithm cannot really recover from a hash collision, we cannot afford this.
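The union-bound arithmetic behind this discussion can be made concrete with a short sketch; the function name and the sample value of $L$ are purely illustrative. With per-fingerprint failure probability $1/L^2$, checking $O(L/\log L)$ fingerprints keeps the overall failure probability small, but the bound becomes vacuous once the number of checks approaches $L^2$:

```python
def union_bound(num_checks, per_check_failure):
    # Probability that any check fails is at most the sum of the
    # individual failure probabilities (capped at 1).
    return min(1.0, num_checks * per_check_failure)

L = 1 << 10
# Few checks (as when T is small): failure probability is tiny.
assert union_bound(L, 1.0 / L**2) <= 1.0 / L
# Once a large T forces ~L^2 checks, the fixed-size bound says nothing.
assert union_bound(2 * L**2, 1.0 / L**2) == 1.0
```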
On the other hand, we cannot simply start out with larger fingerprints, both because this would be too expensive if $T$ turned out to be small, and also because even bigger fingerprints are still of a fixed size and eventually become unreliable. A natural solution is to allow the fingerprints to grow, adapting the size to the value of $T$ seen so far, and this is indeed what we will do. \subsection{Helper Functions} As in Algorithm 1, we make black-box use of the Naor and Naor hash family, as well as AMD codes to protect information. However, in Iteration $j$ we need the failure probabilities for both these primitives to be $1/(2^{2j}L^2)$. Thus, we want the fingerprint size to grow with $j$. We will denote the hash function which has a collision probability of at most $1/(2^{2j}L^2)$ by $\hash_j$.\footnote{By abuse of notation, we will \emph{not} subscript all the other helper functions with $j$; it should be clear from context that the version of the function used is the one that operates on strings of the correct size and has the correct failure probability.} It is easy to see that $O(j)$ extra bits are required for this, so that the fingerprint size is $O(j+\log L)$. Algorithm 1 works well when the adversary can only afford to flip a fraction of a bit per block of the algorithm. In this case, it doesn't matter that he can corrupt an entire round of the protocol by flipping a single bit. However, when the adversary has a larger budget, it becomes crucial to force him to pay a larger price to corrupt a round. To this end, we wrap each fingerprint and protocol bit in a linear error-correcting code. To be concrete, we will use a repetition code for each protocol bit, and a Reed-Solomon code~\cite{doi:10.1137/0108018} to provide the already AMD-encoded messages with a degree of error correction. This enables us to encode a message so that it can be recovered even if the adversary corrupts a third of the bits.
We will denote the encoding and decoding functions by $\eECC$ and $\dECC$ respectively. The following theorem, a slight restatement from~\cite{doi:10.1137/0108018}, gives the properties of these functions. \begin{theorem}~\cite{doi:10.1137/0108018} \label{l:ecc} There is a constant $c>0$ such that for any message $m$, $|\eECC(m)| \le c|m|$. Moreover, if $m'$ differs from $\eECC(m)$ in at most one-third of its bits, then $\dECC(m') = m$. \end{theorem} Finally, we observe that the linearity of $\eECC$ and $\dECC$ ensures that when the error correction is composed with the AMD code, the resulting code has the following properties: \begin{enumerate} \item If at most a third of the bits of the message are flipped, then the original message can be uniquely reconstructed by rounding to the nearest codeword in the range of $\eECC$. \item Even if an arbitrary set of bits is flipped, the probability of the change not being recognized is at most $\delta$, \emph{i.e.}, the same guarantee as the AMD codes. \end{enumerate} This is because $\dECC$ is linear, so when noise $\eta$ is added by the adversary to the codeword $x$, the decoding function effectively computes $\dECC(x+\eta) = \dECC(x) + \dECC(\eta) = m + \dECC(\eta)$, where $m$ is the AMD-encoded message. But now $\dECC(\eta)$ is an obliviously selected string added to the AMD-encoded codeword.
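The linearity argument can be checked on a toy example. The sketch below uses a small XOR-based map as a stand-in for the linear decoder $\dECC$; it is not a real Reed-Solomon decoder, only an illustration that any GF(2)-linear map turns adversarial noise into a message-independent additive offset:

```python
import random

def toy_dECC(x):
    # A toy GF(2)-linear map standing in for the linear decoder dECC:
    # each output bit is an XOR of fixed input positions.
    return [x[0] ^ x[1], x[1] ^ x[2], x[2] ^ x[3]]

def xor(a, b):
    # Bitwise addition over GF(2).
    return [u ^ v for u, v in zip(a, b)]

x = [random.randint(0, 1) for _ in range(4)]   # codeword (contents irrelevant)
eta = [1, 0, 1, 1]                             # adversary's noise pattern
# Linearity: decoding the tampered word equals the honest decoding plus the
# message-independent offset toy_dECC(eta) -- the "oblivious string" above.
assert toy_dECC(xor(x, eta)) == xor(toy_dECC(x), toy_dECC(eta))
```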
\subsection{Algorithm} \begin{algorithm*} \caption{Interactive Communication: Iteration $j$} \label{alg:IC} \begin{minipage}{.43\textwidth} \CommentSty{\bf ALICE'S PROTOCOL} \BlankLine \CommentSty{Parameters: $N_j, F_j, \rho_j$}\; \nl \label{ap:repeat1}\For{ $i = 1$ to $N_j$}{ \nl $\mathcal{F}_a \gets \eECC(\eAMD( |\mathcal{T}^*_a|))$\; \nl Send $\mathcal{F}_a$\; \nl \uIf{$|\mathcal{T}^*_a| < L$}{ \nl \For{the next $\lfloor F_j/\rho_j \rfloor$ bits of $\pi$}{ \nl \uIf{sender}{ \nl Send next bit $\rho_j$ times\; \nl Append to $\mathcal{T}_a$\; } \nl \uElse{ \nl Receive $\rho_j$ bits\; \nl Append majority bit to $\mathcal{T}_a$\; }}} \nl \uElse{ \nl Transmit $F_j$ random bits. } \nl Receive Bob's $c F_j$-bit message, $\mathcal{F}'_b$\; \nl \uIf{\emph{$\isCodeword(\mathcal{F}'_b)$}}{ \nl \uIf{$|\mathcal{T}^*_a| \ge L$}{ \nl Output $\mathcal{T}^*_a[0:L]$ and \\ \textbf{Terminate}\; } \nl $\mathcal{F} \gets \dAMD(\mathcal{F}'_b)$\; \nl \uIf {\emph{$\matchesFP(\mathcal{F},\mathcal{T}_a)$}}{ \CommentSty{// successful round}\; \nl $\mathcal{T}^*_a \gets \mathcal{T}_a$\; }} \nl \uElse{ \CommentSty{// round failed }\; \nl $\mathcal{T}_a \gets \mathcal{T}^*_a$\; } } \BlankLine \BlankLine \BlankLine \BlankLine \BlankLine \BlankLine \BlankLine \BlankLine \BlankLine \BlankLine \BlankLine \BlankLine \BlankLine \BlankLine \end{minipage} \hfill \begin{minipage}{.43\textwidth} \setcounter{AlgoLine}{0} \BlankLine \CommentSty{\bf BOB'S PROTOCOL} \BlankLine \CommentSty{Parameters: $N_j, F_j, \rho_j$}\; \nl \label{ap:repeat2}\For{ $i = 1$ to $N_j$}{ \nl \uIf{$|\mathcal{T}^*_b| \ge L$}{ \nl Wait $cF_j$ channel steps\; \nl Receive $F_j$ bits\; \nl \uIf {fewer than $F_j/3$ alternations in the received string}{ \nl Output $\mathcal{T}^*_b[0:L]$ and \\ \textbf{Terminate}\; } \nl \uElse{ \nl $\mathcal{F}_b \gets \eECC(\eAMD (\hash_j(\mathcal{T}^*_b)))$\; \nl Send $\mathcal{F}_b$\;} } \nl \uElse{ \nl Receive Alice's $cF_j$-bit message $\mathcal{F}'_a$\; \nl
\uIf{\emph{$\isCodeword(\dECC(\mathcal{F}'_a))$}}{ \nl $\ell \gets \dAMD (\dECC(\mathcal{F}'_a))$\; \nl \uIf{$\ell > |\mathcal{T}^*_b|$}{ \nl $\mathcal{T}^*_b \gets \mathcal{T}_b$\; } \nl \uElse{ \nl $\mathcal{T}_b \gets \mathcal{T}^*_b$\; } \nl \For{the next $\lfloor F_j/\rho_j \rfloor$ bits of $\pi$}{ \nl \uIf{sender}{ \nl Send next bit $\rho_j$ times\; \nl Append to $\mathcal{T}_b$\; } \nl \uElse{ \nl Receive $\rho_j$ bits\; \nl Append majority bit to $\mathcal{T}_b$\; }} \nl $\mathcal{F}_b \gets \eECC(\eAMD (\hash_j(\mathcal{T}_b)))$\; \nl Send $\mathcal{F}_b$\; } \nl \uElse{ \nl Transmit $(c+1)F_j$ random bits. }}} \end{minipage} \end{algorithm*} \begin{algorithm*} \caption{Interactive Communication} \label{alg:IC2} \begin{minipage}{.43\textwidth} \CommentSty{\bf ALICE'S PROTOCOL} \BlankLine \CommentSty{\bf // Iteration 0}\; \nl Run Alice's protocol from Alg 1 \; \nl \uIf{not terminated}{ \nl transmit random bits until channel step $12L$\; } \CommentSty{\bf // End of Iteration 0}\; \nl $j \gets 1$\; \nl \While{still present}{ \CommentSty{\bf // Iteration $j$}\; \nl $F_j \gets \beta(j + \log L)$\; \nl $\rho_j \gets 2^{j-1} \lceil\frac{F_j}{F}\rceil \wedge F_j$\; \nl $N_j \gets 2^{j-1} \lceil 8L/F \rceil$\; \nl Run Alice's protocol from Algorithm 2, with parameters $N_j, F_j, \rho_j$\; \CommentSty{\bf // End of Iteration $j$}\; \nl $j \gets j+1$\; } \end{minipage} \hfill \begin{minipage}{.43\textwidth} \setcounter{AlgoLine}{0} \CommentSty{\bf BOB'S PROTOCOL} \BlankLine \CommentSty{\bf // Iteration 0}\; \nl Run Bob's protocol from Alg 1 \; \nl \uIf{not terminated}{ \nl transmit random bits until channel step $12L$\; } \CommentSty{\bf // End of Iteration 0}\; \nl $j \gets 1$\; \nl \While{ still present}{ \CommentSty{\bf // Iteration $j$}\; \nl $F_j \gets \beta(j + \log L)$\; \nl $\rho_j \gets 2^{j-1} \lceil\frac{F_j}{F}\rceil \wedge F_j$\; \nl $N_j \gets 2^{j-1} \lceil 8L/F \rceil$\; \nl Run Bob's protocol from Algorithm 2, with parameters $N_j, F_j, \rho_j$\;
\CommentSty{\bf // End of Iteration $j$}\; \nl $j \gets j+1$\; } \end{minipage} \end{algorithm*} Let $N_1 := \lceil 8 L/F\rceil $ be the number of rounds in Iteration 1. Let $N_j := 2^{j-1} N_1$ be the number of rounds in Iteration $j>1$. Let $F_j = 2\beta j + F$ be the size of the fingerprints in Iteration $j$, where $\beta$ is the constant from the Naor and Naor hash function. Thus the hash collision probability of a single fingerprint is $2^{-2j} L^{-2}$. Each round of the iteration begins with Alice sending Bob a (1/3)-error-corrected, AMD-encoded synchronization message of length $cF_j$, followed by simulation of the protocol for $F_j$ channel steps, followed by Bob sending Alice a (1/3)-error-corrected, AMD-encoded fingerprint of length $cF_j$. Here $c$ is the constant factor blowup we get from the ECC and AMD encodings, but for technical reasons we will further ensure that it is at least 5. Thus, the total round length is $(2c+1)F_j \ge 11 F_j$. We will let $\alpha$ equal $(2c+1)$. As in Algorithm 1, Alice will decide whether to update her verified transcript and advance to the next block of $\pi$ or to rewind to redo the current block, based on whether she receives a fingerprint from Bob that matches the fingerprint of her own transcript. Similarly, Bob will decide whether to join in the simulation of $\pi$ or to transmit random bits until the end of the round based on receiving or failing to receive Alice's synchronization message at the round's start. Where the round differs from a round in Algorithm 1 is in the actual simulation of $\pi$. For the whole iteration, a fixed number of bits of $\pi$ will be simulated per round. Each bit will be repeated $\rho_j = 2^{j-1}\lceil F_j/F \rceil \wedge F_j$ times.\footnote{We remind the reader that $x\wedge y$ denotes the minimum of $x$ and $y$, while $x\vee y$ denotes their maximum.} The receiving party will use majority filtering to infer the transmitted bit.
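For concreteness, the parameter schedule and the repetition-with-majority layer just described can be sketched as follows; the helper names and the sample values of $L$, $F$, and $\beta$ are illustrative only:

```python
import math

def iter_params(j, L, F, beta):
    # Iteration-j parameters as defined above:
    # N_1 = ceil(8L/F), N_j = 2^(j-1) * N_1, F_j = 2*beta*j + F,
    # rho_j = 2^(j-1) * ceil(F_j / F), capped at F_j.
    N1 = math.ceil(8 * L / F)
    Nj = (2 ** (j - 1)) * N1
    Fj = 2 * beta * j + F
    rho_j = min((2 ** (j - 1)) * math.ceil(Fj / F), Fj)
    return Nj, Fj, rho_j

def rep_encode(bit, rho):
    # The repetition layer: each protocol bit is sent rho times.
    return [bit] * rho

def rep_decode(copies):
    # Majority filtering at the receiving end.
    return 1 if 2 * sum(copies) > len(copies) else 0

L, F, beta = 1024, 40, 4
# Rounds double from one iteration to the next, and the repetition
# count never exceeds the fingerprint size F_j.
assert iter_params(2, L, F, beta)[0] == 2 * iter_params(1, L, F, beta)[0]
assert all(iter_params(j, L, F, beta)[2] <= iter_params(j, L, F, beta)[1]
           for j in range(1, 12))

# The adversary must flip at least rho/2 copies to corrupt a repeated bit.
word = rep_encode(1, 9)
for i in range(4):          # 4 flips < 9/2: the majority survives
    word[i] ^= 1
assert rep_decode(word) == 1
word[4] ^= 1                # a fifth flip changes the majority
assert rep_decode(word) == 0
```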
Since $F_j$ time steps in the round are allocated to protocol simulation, this allows $\lfloor F_j/\rho_j\rfloor$ bits of $\pi$ to be simulated. Notice that the number of rounds doubles from one iteration to the next. Also, the number of repetitions of each simulated bit roughly doubles between iterations, at least until it hits its cap, which is a constant fraction of the length of the round. This is the so-called doubling trick (though in our case it is perhaps a quadrupling trick), which results in the overall cost being dominated by the cost in the last (or second to last) iteration. \section{Unbounded $T$ - Analysis} \label{sec:analysis-unbounded} We now analyze the main algorithm presented in Section~\ref{sec:alg-unbounded}. As in Section~\ref{sec:bounded-analysis}, we begin by noting that a hash collision or an AMD code failure will cause the algorithm to fail. Additionally, the algorithm could fail during the padding rounds, if the adversary happens to flip bits in such a way as to cause Alice's random bits to look like silence, resulting in Bob's premature departure. In Section~\ref{sec:badpr} we will show that with high probability each of these events does not occur. Meanwhile, throughout this section we will assume without further mention that we are in the good event where none of the undesirable events occur. \subsection{Alice and Bob are both present} \begin{lemma} For every $j \ge 1$, Alice and Bob are always synchronized. That is, they begin the iteration as well as every round therein at the same time. \end{lemma} \begin{proof} Alice and Bob synchronize themselves after Iteration 0 by both starting Iteration 1 at channel step $12L +1$. Thereafter, for each $j\ge 1$, they have the same round sizes $\alpha F_j$ and number of rounds $N_j$ in Iteration $j$, so that they remain synchronized.
\end{proof} We will call a round \emph{corrupted} if enough bits are flipped in the round that the bits of $\pi$ being simulated cannot be recovered or verified by Alice. We will call it \emph{uncorrupted} or \emph{progressive} if it is not corrupted in the above sense. \begin{lemma} Each round is either corrupted at a cost of at least $\rho_j/2$ to the adversary or results in $\lfloor F_j/\rho_j \rfloor$ bits of progress in $\pi$. \end{lemma} \begin{proof} Since each simulated protocol bit is sent $\rho_j$ times, with majority filtering at the receiving end, it costs the adversary $\rho_j/2$ to corrupt the repetition-encoded bit. It costs the adversary at least $c F_j /3 \ge \rho_j/2$ to corrupt Alice's synchronization message or Bob's fingerprint since these are protected by error-correction. Thus it costs the adversary at least $\rho_j/2$ to corrupt the round. Otherwise, since there are $F_j$ steps allocated to sending protocol bits, and each one is repeated $\rho_j$ times, the protocol is successfully simulated for $\lfloor \frac{F_j}{\rho_j}\rfloor$ bits. \end{proof} The following lemma is the equivalent of Lemmas~\ref{lem:termAlice} to~\ref{l:correctness} for Iteration $j$. Its proof is nearly identical to the proofs in Section~\ref{sec:termination} (indeed, it is simpler, since Iteration $j$ does not have the synchronization problems faced by Algorithm 1) and we omit it. \begin{lemma} Iteration $j$ has the following properties: \begin{enumerate} \item It is always the case that $\mathcal{T}^*_a \preccurlyeq \pi$, where $\pi$ is the padded transcript. \item At the beginning and end of each round, \[ \mathcal{T}^*_b \preccurlyeq \mathcal{T}^*_a = \mathcal{T}_a \preccurlyeq \mathcal{T}_b; \] where at most one of the inequalities is strict. Moreover, at the end of a channel step in which Bob receives $\mathcal{F}_a$ correctly, \[ \mathcal{T}^*_b = \mathcal{T}_b = \mathcal{T}^*_a. \] \item Bob leaves after Alice. When Alice leaves, $|\mathcal{T}^*_b| \geq L$.
\item When either party terminates, their output is correct. \end{enumerate} \end{lemma} \begin{lemma}\label{lem:uncorrupted} There are at most $N_j/4$ uncorrupted rounds in Iteration $j$. \end{lemma} \begin{proof} Since each uncorrupted round results in $\lfloor F_j/ \rho_j \rfloor$ bits of progress in $\pi$, $\lceil L \rho_j / F_j \rceil $ rounds are sufficient for Alice's transcript length to exceed $L$. One additional uncorrupted round is sufficient for Bob to catch up to Alice if necessary, using her synchronization message, and for Alice to infer from Bob's fingerprint that Bob's transcript length has exceeded $L$, resulting in Alice's departure. After that, if a round is uncorrupted, then Bob will perceive silence on the channel, resulting in Bob's departure. Thus $\lceil L \rho_j / F_j \rceil +2$ uncorrupted rounds are enough for both parties to terminate. Finally, note that for all $j\ge 1$, \[ \frac{\rho_j}{F_j} \le \frac{2^{j-1}}{F} \wedge 1 \le \frac{2^{j-1}}{F}. \] It follows that (for sufficiently large $L$) there are at most $2^j L/F = N_j/4$ uncorrupted rounds in Iteration $j$. \end{proof} The following corollary is immediate. \begin{corollary}\label{cor:corr} If $j$ is not the last iteration, then at least $3/4$ of the rounds are corrupted. \end{corollary} Although the adversary can flip any number of bits in a round, we will only charge him the minimum number of bit-flips required for the outcome we see in the round, \emph{i.e.}, we will charge him 0 for uncorrupted rounds and $\rho_j/2$ for corrupted rounds. Let $T_j$ denote the number of corruptions charged to the adversary in Iteration $j$. Clearly, for $j>0$, \begin{equation}\label{eqn:maxTj} T_j \le \frac12 N_j \rho_j\,. \end{equation} Also, we know from Section~\ref{sec:alg-bounded} that if the algorithm does not end in Iteration 0, then $T_0 \ge L/8F$. In this case, we will generously only charge the adversary that amount.
In other words, if Iteration 1 is reached, either by both Alice and Bob, or by Bob alone, $T_0= \lceil L/8F \rceil$. \begin{lemma}\label{lem:minTj} If $j$ is not the last iteration, then $T_j \ge \frac38 N_j \rho_j$. \end{lemma} \begin{proof} This follows from Corollary~\ref{cor:corr}, since it costs the adversary at least $\rho_j /2 $ to corrupt a round. \end{proof} \begin{lemma} If $j$ is not the last iteration, then \[ 3T_{j-1}/2 \le T_j \le 64 T_{j-1}. \] \end{lemma} \begin{proof} If $j=1$, then \[ T_1 \ge \frac38 N_1 \rho_1 \ge \frac{3L}{F} \ge 24 T_0 > 3T_0 \] and \[ T_1 \le N_1 \rho_1 /2 \le \frac{8L}{F} =64 T_0\, . \] If $j>1$, then by \eqref{eqn:maxTj} and Lemma~\ref{lem:minTj}, \[ \frac32 \le \frac{3N_j \rho_j/8}{N_{j-1}\rho_{j-1} /2} \le \frac{T_j}{T_{j-1}} \le \frac{N_j \rho_j /2}{3N_{j-1} \rho_{j-1}/8 } \le 64 \] since $N_{j-1} = N_j/2$ and $\rho_{j-1} \le \rho_j \le 4\rho_{j-1}$. \end{proof} \begin{lemma} \label{lem:uncorrsmall} The cost to either player due to uncorrupted rounds in Iteration $j\le \log F$ is at most \[ 7 \alpha \sqrt{ LT_{j-1} F } \] \end{lemma} \begin{proof} Each uncorrupted round costs the players $\alpha F_j$. Since there are at most $N_j/4$ uncorrupted rounds, the resulting cost is no more than $\frac{\alpha}{4} N_j F_j$. Since $j\le \log F$, $\rho_j = 2^{j-1}\lceil F_j /F\rceil$ and $F_j \le 2F$. Combining these we have \[ F_j \le F \sqrt{2^{2-j}\rho_j} \] so that \begin{align*} \frac{\alpha}{4} N_j F_j &\le \alpha N_{j-1}F_{j-1} \\ &\le \alpha N_{j-1} F \sqrt{2^{3-j}\rho_{j-1}}\\ &\le \alpha F \sqrt{N_{j-1}2^{3-j}}\sqrt{N_{j-1}\rho_{j-1}} \\ & \le \alpha F \sqrt{2 N_1 }\sqrt{8 T_{j-1}/3}\\ &\le \alpha\sqrt{128L T_{j-1} F/3}\\ &\le 7\alpha\sqrt{LT_{j-1}F} \,.
\qedhere \end{align*} \end{proof} \begin{lemma}\label{lem:uncorrbig} If $j > \log F$, the cost to either player due to uncorrupted rounds in Iteration $j$ is at most \[ 3\alpha T_{j-1} \] \end{lemma} \begin{proof} When $j > \log F $, $F_j = \rho_j$ and by Lemma~\ref{lem:minTj}, \[ \frac{\alpha}{4} N_j F_j = \frac{\alpha}{4} N_j \rho_j \le \alpha N_{j-1} \rho_{j-1} \le \frac{8\alpha}{3} T_{j-1} \le 3\alpha T_{j-1}\, . \qedhere \] \end{proof} \begin{lemma}\label{lem:corrcost} The cost to the players from corrupted rounds in Iteration $j$ is at most $4 \alpha \sqrt{2 LT_jF} $ if $j\le \log F$ and $ 2\alpha T_j$ otherwise. \end{lemma} \begin{proof} Suppose there are $k$ corrupted rounds. Then the cost to the players is $k\alpha F_j$, while the adversary's cost is $k\rho_j/2$. If $j \ge \log F +1$, $F_j = \rho_j$ and we easily see that the players' cost is at most $2\alpha T_j$. When $j\le \log F$, since $k \le N_j$, \begin{align*} k\alpha F_j &= \alpha\sqrt{k \rho_j F 2^{1-j}}\sqrt{N_j F_j}\\ &\le \alpha\sqrt{T_j F 2^{2-j}}\sqrt{2^{j}N_1 F}\\ &\le 2\alpha\sqrt{8 LT_jF} \, . \qedhere \end{align*} \end{proof} Collecting the various costs and noting that $T_j \le 64 T_{j-1}$, we see that for a suitably large constant $\gamma$, we have \begin{lemma}\label{lem:totalcost} The total cost to the players from Iteration $j$ is at most $\gamma \sqrt{ LT_{j-1}\log L} $ if $j\le \log F$ and $ \gamma T_{j-1}$ otherwise. \end{lemma} \subsection{Bob plays alone} After Alice's verified transcript has length at least $L$, in each subsequent round, she transmits her synchronization message, and then random bits to indicate her continued presence. Once Alice has left, there is silence on the channel. To corrupt this silence, the adversary must make it look like a corrupted synchronization message followed by random bits.
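Bob's silence test from the algorithm listing (fewer than $F_j/3$ bit alternations in the received string) can be sketched as follows; the helper names are hypothetical:

```python
def alternations(s):
    # Count adjacent positions whose bits differ.
    return sum(1 for a, b in zip(s, s[1:]) if a != b)

def looks_like_silence(s):
    # Bob's test: fewer than |s|/3 alternations is read as silence.
    return alternations(s) < len(s) / 3

Fj = 60
assert looks_like_silence([0] * Fj)                        # constant channel: silence
assert not looks_like_silence([i % 2 for i in range(Fj)])  # ~Fj/2 alternations: not silence
```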
Since a random string of length $F_j$ has, on average, $F_j/2$ alternations of bits, Bob considers the string to represent silence if it has fewer than $F_j/3$ alternations. Thus, to corrupt such a round the adversary must pay at least $F_j/3$. Alice leaves when she has received word that Bob has a verified transcript of length at least $L$, and a single extra uncorrupted round thereafter will cause Bob to leave as well. Thus, if iteration $j$ was not Bob's last one, the adversary must have corrupted every round. If $1\le k<N_j$ rounds are corrupted, Bob pays at most $(k +1)\alpha F_j \le 2k\alpha F_j $ and the adversary pays $kF_j/3$. If $k=0$, we will generously account for the lone uncorrupted round from Iteration $j$ in Iteration $j-1$ by noting that $\alpha( N_{j-1} F_{j-1} + F_j) \le 2\alpha( N_{j-1} F_{j-1})$. Finally, a calculation identical to that in Lemma~\ref{lem:corrcost} shows that Bob's cost for an iteration $j$ that he played alone is no more than \[ \gamma \sqrt{ LT_{j-1}\log L} \] if $j\le \log F$ and \[ \gamma T_{j-1} \] otherwise. \subsection{Failure Probabilities} \label{sec:badpr} In this section we bound the probabilities of the events that cause the algorithm to fail. \begin{lemma} With high probability in $L$, there is no hash collision during Iteration $j$. \end{lemma} \begin{proof} The fingerprint size has been selected large enough that the probability of a hash collision for a single hash is $\frac{1}{2^{2j} L^2}$. Since there are $N_j = 2^{j+2} L/F$ rounds in Iteration $j$, by a union bound, the probability of a hash collision during the iteration is $O\left(\frac{1}{2^j L\log L}\right)$. \end{proof} \begin{lemma} With high probability in $L$, any bit flipping of an AMD-encoded message during Iteration $j$ is detected. \end{lemma} \begin{proof} The size of the AMD encoding has been selected so that the probability of a failure to detect a single instance of tampering is $\frac{1}{2^{2j} L^2}$.
Since there are two AMD encodings per round and $2^{j+2} L/F$ rounds, again the probability that such a failure occurs during the iteration is $O\left(\frac{1}{2^jL\log L}\right)$. \end{proof} \begin{lemma} With high probability in $L$, Alice leaves before Bob. \end{lemma} \begin{proof}Bob does not terminate until he thinks Alice has left, and he does not even start checking for whether she seems to have left until after his transcript has length at least $L$. Since Bob's transcript lags behind that of Alice, this means that by the time Bob is checking for whether Alice has left, Alice either really has left, in which case it is fine for Bob to leave, or she is transmitting i.i.d. random bits in batches of length $F_j$, between fingerprints. Since the adversary cannot see the bits, any bit flips on his part do not alter the fact that the string received by Bob is a uniformly random bit string of length $F_j$. Such a string has $F_j/2$ alternations (consecutive bits that differ) in expectation. Bob leaves if he sees fewer than $F_j/3$ alternations. If the string is random, the likelihood of Bob seeing fewer than $F_j/3$ alternations is, by Chernoff's bound, at most $\mathrm{e}^{-F_j/18} \le \frac{1}{2^{2j} L^2}$ provided $\beta = \frac{F_j}{2j +\log L}$ was chosen suitably large. Since there are at most $N_j$ chances in Iteration $j$ for the adversary to try this attack, a union bound again shows that Bob leaves after Alice, except with probability $O\left(\frac{1}{2^j L\log L}\right)$. \end{proof} \subsection{Putting everything together} We will now prove our main theorem by putting all these costs together and calculating the total cost to either player and the failure probability of the algorithm. As before, $T$ denotes the number of bits flipped by the adversary. \begin{theorem} The algorithm succeeds with probability at least $1- 1/L\log L$. 
If it succeeds, then each player's cost is at most \[ L + O(\sqrt{LT \log L} + T) \] \end{theorem} \begin{proof} First, we note that for each $j\ge 0$ (Iteration 0 being Algorithm 1), the probability that Algorithm 3 fails during iteration $j$ is at most $O\left(\frac{1}{2^{j} L \log L}\right) $. Thus the overall probability that it fails at all is \[ O\left(\sum_{j=0}^{\infty} \frac{1}{2^{j} L \log L }\right) = O\left(\frac{1}{L \log L}\right) \] Thus, with high probability the algorithm succeeds. Let $J$ denote the last iteration in which the player participates. If $J=0$, then Lemma~\ref{lem:A1cost} already proves that the players' total cost is at most $L + O(\sqrt{L(T+1) \log L})$. Suppose $J\ge 1$. For each $j$, let $Cost(j)$ denote the player's cost from Iteration $j$. We know that \begin{itemize} \item[$\bullet$] $Cost(0) = 12L \le L + \gamma \sqrt{LT_0 \log L}$ where $T_0 = L/(8F)$ \item[$\bullet$] $Cost(j) \le \gamma \sqrt{ L T_{j-1} \log L}$ if $1\le j \le \log F$ \item[$\bullet$] $Cost(j) \le \gamma T_{j-1} $ if $j > \log F$ \end{itemize} When $J \le \log F$, the player's total cost is \begin{align*} \sum_{j=0}^J Cost(j) &\le Cost(0) + \sum_{j=1}^J Cost(j)\\ &\le L+ \gamma \sqrt{L T_0 \log L} + \sum_{j=1}^J \gamma \sqrt{L T_{j-1} \log L}\\ &\le L+ \gamma \sqrt{L \log L} \left( \sqrt{(2/3)^{J-1} T_{J-1}} + \sum_{j=1}^J \sqrt{(2/3)^{J-1-j } T_{J-1}}\right)\\ &\le L+ \gamma \sqrt{L T_{J-1}\log L} \left( \sqrt{(2/3)^{J-1}} + \sum_{j=0}^{J-2} \sqrt{(2/3)^{j} }\right)\\ &\le L + \frac{\sqrt3 \gamma}{\sqrt3 -\sqrt2} \sqrt{L T_{J-1}\log L} \\ &= L+ \gamma' \sqrt{L T_{J-1}\log L}\\ &\le L+ \gamma' \sqrt{L T\log L} \end{align*} On the other hand, $T_{\lfloor \log F\rfloor} = \Theta(N_{\lfloor \log F\rfloor} \rho_{\lfloor \log F\rfloor}) = \Theta(L\log L)$, so that $\sqrt{LT_{\lfloor \log F\rfloor} \log L} = \Theta(T_{\lfloor \log F\rfloor})$ and for $J > \log F$ we have \begin{align*} \sum_{j=0}^J Cost(j) &\le Cost(0) + \sum_{j=1}^{\lfloor \log F\rfloor} Cost(j) +
\sum_{j={\lfloor \log F\rfloor}+1}^J Cost(j)\\ &\le L+ \gamma' \sqrt{L T_{\lfloor \log F\rfloor} \log L} + \sum_{j={\lfloor \log F\rfloor}+1}^J \gamma T_{j-1} \\ &\le L+\gamma'' T_{\lfloor \log F\rfloor} + \sum_{j={\lfloor \log F\rfloor}+1}^J \gamma T_{j-1}\\ &\le L+ O(T) \end{align*} Thus the player's cost is always $L+O\left(\sqrt{L(T+1)\log L} + T\right)$. \end{proof} \section{Some Additional Remarks}\label{sec:remarks} \subsection*{Need for Private Channels} The following theorem justifies our assumption of private channels. \begin{theorem} \label{t:privateIsNecessary} Consider any algorithm for interactive communication over a public channel that works with unknown $T$ and always terminates in the noise-free case. Any such algorithm succeeds with probability at most $1/2$. \end{theorem} \begin{proof} The adversary chooses some protocol $\pi$ with transcript length $L$ and some separate ``corrupted'' protocol $\pi_c$ such that 1) $\pi_c$ has transcript length $L$ and 2) Bob's individual input for $\pi_c$ is equivalent to his individual input for $\pi$. The goal of the adversary will be to convince Bob that $\pi_c$ is the protocol, rather than $\pi$. Note that we can always choose some appropriate pair $\pi$ and $\pi_c$ meeting the above criteria. Assume that if $\pi_c$ is the protocol and there is no noise on the channel, then Bob will output $\pi_c$ with probability at least $1/2$; if not, then the theorem is trivially true. Then, the adversary sets $\pi$ to be the input protocol. Next, the adversary simulates Alice in the case where her input protocol is $\pi_c$, and sets the bits received by Bob to be the bits that would be sent by Alice in such a case. Since the algorithm eventually terminates, Bob will halt after some finite number of rounds, $X$. Using the above strategy, Bob will incorrectly output $\pi_c$ with probability at least $1/2$, and the value of $T$ will be no more than $X$.
Note that in the above, we critically rely on the fact that $T$ is unknown to Bob. \end{proof} \subsection*{Communication Rate Comparison.} In Haeupler's algorithm~\cite{haeupler2014interactive}, the noise rate $\epsilon$ is known in advance and is used to design an algorithm with a communication rate of $1 - O(\sqrt{\epsilon \log \log 1/\epsilon})$. Let $L'$ be the length of $\pi'$. Then in his algorithm, $L' = O(L)$, and so the adversary is restricted to flipping $\epsilon L' = O(L)$ bits. Thus, in his model, $T$ and $L'$ are always $O(L)$. In our model, the values of $T$ and $L'$ are not known in advance, and so both $T$ and $L'$ may be asymptotically larger than $L$. How do our results compare with~\cite{haeupler2014interactive}? As noted above, a direct comparison is only possible when $T = O(L)$. Restating our algorithm in terms of $\epsilon$, we have the following theorem. \begin{theorem}\label{thm:comm-rate} If the adversary flips $O(L)$ bits and the noise rate is $\epsilon$ then our algorithm guarantees a communication rate of $1-O\left(\sqrt{\frac{\log L}{L}} + \sqrt{\epsilon \log L}\right)$. \end{theorem} \begin{proof} When $T < L$ we also have $T < \sqrt{L(T+1)\log L}$ and our algorithm guarantees that for some $\gamma>0$, \[ L'= L +\gamma\sqrt{L(T+1)\log L} \] Let $\epsilon = T/L'$ and $R=L/L'$ be the effective noise and communication rates respectively. Then, \begin{align*} R = \frac{L}{L'} &= 1 - \frac{L' -L}{L'} \\ &\ge 1- \frac{\gamma\sqrt{L(T+1)\log{L}}}{L'} \\ &\ge 1- \gamma\frac{\sqrt{L\log L} + \sqrt{LT\log L}}{L'}\\ &\ge 1- \gamma\left(\frac{\sqrt{R\log L}}{\sqrt{L'}}+ \sqrt{R\epsilon\log L}\right)\\ &\ge 1 - \gamma\sqrt{\log L}\left( \frac{1}{\sqrt{L} }+ \sqrt{\epsilon} \right), \end{align*} where the last line follows because $1/\sqrt{L'} \le 1/\sqrt{L}$ and $R\le 1$. 
\end{proof} We note that the additive term $\sqrt{\frac{\log L}{L}}$ arises because we do not know the error rate ahead of time, and so we cannot achieve a communication rate of 1 even when the effective error rate turns out to be zero. \subsection*{A Note on Fingerprint Size.}\label{sec:fpnote} A natural question is whether more powerful probabilistic techniques than the union bound could enable us to use smaller fingerprints as done in~\cite{haeupler2014interactive}. The variability of block sizes poses a challenge to this approach since Alice and Bob must either agree on the current block size, or be able to recover from a disagreement by having Bob stay in the listening loop so he can receive Alice's message. If their transcripts diverge by more than a constant number of blocks, it may be difficult to make such a recovery, and therefore it seems challenging to modify our algorithm to use smaller fingerprints. However, it is a direction for further investigation. \subsection*{A Lower Bound} In this section, we prove a lower bound that demonstrates the near optimality of our upper bound, assuming that the following conjecture by Haeupler holds~\cite{haeupler2014interactive}.\smallskip \noindent{\bf Conjecture 1.}~{\it (Haeupler~\cite{haeupler2014interactive}, 2014) The maximal rate achievable by an interactive coding scheme for any binary error channel with random or oblivious errors is $1-\Theta(\sqrt{\epsilon})$ for a noise rate $\epsilon \rightarrow 0$. This also holds for fully adversarial binary error channels if the adversary is computationally bounded or if parties have access to shared randomness that is unknown to the channel.}\smallskip \noindent For the remainder of this section, we \textit{\textbf{assume that Haeupler's conjecture holds}} for any algorithm that succeeds with high probability in $L$ with an expected cost of at most $L'$ under adversarial noise.
For ease of exposition, we omit such statements in all of our claims below. By {\it robust} interactive communication, we mean interactive communication that tolerates $T$ errors. We begin by showing the near optimality with respect to the communication rate achieved: \begin{theorem}\label{thm:Lprime} Any algorithm for robust interactive communication must have $ L' = L + \Omega\left(T + \sqrt {L T}\right)$ for some $T\geq 1$. \vspace{-3pt} \end{theorem} \begin{proof} Let $T\geq 1$ be any value such that $T/L' = o(1)$. Then, Haeupler's Conjecture applies and the expected total number of bits sent is $L' \geq L/(1-d\sqrt{\epsilon})$ for some constant $d>0$. Noting that $1/(1-d\sqrt{\epsilon}) \geq 1 + d\sqrt{\epsilon}$ by the well-known sum of a geometric series, this implies that $L' \geq L/(1-d\sqrt{\epsilon}) \geq (1 + d \sqrt{\epsilon})L = (1 + d \sqrt{T/L'})L$ since $\epsilon = T/L'$. This implies that $L/L' \leq 1/(1 + d \sqrt{T/L'})$. Now observe that $1/(1+x) = 1/(1-(-x)) \leq 1 -x +x^2$ for $|x| < 1$, again by the sum of a geometric series. Plugging in $d\sqrt{T/L'}$ for $x$, we have $1/(1 + d \sqrt{T/L'}) \leq 1 - d\sqrt{T/L'} + d^2(T/L')$. Therefore, $L/L' \leq 1 - d\sqrt{T/L'} + d^2(T/L') = 1 - d\sqrt{T/L'}(1 - d\sqrt{T/L'}) \leq 1 - d'\sqrt{T/L'}$ for some $d'>0$ depending only on $d$. We then derive: $L \leq L'(1 - d'\sqrt{T/L'}) = L' - d'\sqrt{L' T}$. It follows that $L' \geq L + d'\sqrt{L' T} = L + \Omega(\sqrt{L T})$ since $L' \geq L$. Finally, we show that $\sqrt{LT} = \Theta( T + \sqrt{LT})$. Given any algorithm A for interactive computation, we create a new algorithm A' whose expected value of $L'$ is $O(L)$. To do this, A' checks, based on $\epsilon$ and $L$, whether or not Haeupler's algorithm~\cite{haeupler2014interactive} will send fewer bits in expectation than A. If so, it runs Haeupler's algorithm. Note that the expected number of bits sent by A' is no more than the expected number of bits sent by A.
Note that $T = \epsilon L'$ and for algorithm A', the expected value of $L'$ is $O(L)$. This implies that $T = \epsilon O(L)$, or $T = O(L)$. Since $T< L$, it holds that $\sqrt{LT} = \Theta( T + \sqrt{LT})$, which completes the proof. \end{proof} \section{Conclusion} \label{sec:conc} We have described the first algorithm for interactive communication that tolerates an unknown but finite amount of noise. Against an adversary that flips $T$ bits, our algorithm sends $L + O\left(\sqrt{L(T+1)\log L} +T\right)$ bits in expectation, where $L$ is the transcript length of the computation. We prove this is optimal up to logarithmic factors, assuming a conjectured lower bound by Haeupler. Our algorithm critically relies on the assumption of a private channel, an assumption that we show is necessary in order to tolerate an unknown noise rate. Several open problems remain, including the following. First, can we adapt our results to interactive communication that involves more than two parties? Second, can we more efficiently handle an unknown amount of stochastic noise? Finally, for any algorithm, what are the optimal tradeoffs between the overhead incurred when $T=0$ and the overhead incurred for $T>0$? \subsection*{Acknowledgments} We are grateful to Nico D{\"o}ttling, Bernhard Haeupler, Mahdi Zamani, and the anonymous reviewers for their useful discussions and comments.
\section{Introduction} Steady states of an open quantum system are considered equilibrium or non equilibrium states according to whether or not they satisfy a quantum detailed balance condition (see \cite{Agarwal,Alicki-Lendi,FFVU07,FFVU08,FFVU,FFVU12,KFGV,MaSt,Seif} and the references therein). Concepts of entropy production have been proposed in several papers (\cite{BolQue,Breuer,FFRR-QP29,FFRR-ep,JP-2001,MRM,Seif} is a short list far from being complete) as an index of deviation from detailed balance (see \cite{Qian2003} also for classical Markov processes). In \cite{FFRR-QP29,FFRR-ep} we introduced a definition of entropy production rate for faithful normal invariant states of quantum Markov semigroups, inspired by the one brought into play for classical Markov processes, by considering the derivative of the relative entropy of the one-step forward and backward two-point states at time $t=0$. Moreover, we proved an explicit formula for the entropy production of a quantum Markov semigroup in terms of the completely positive part of the generator (Theorem \ref{th:ep-formula} here). This formula shows that non zero entropy production is closely related to the violation of quantum detailed balance conditions and singles out states with finite entropy production as a rich class of simple non equilibrium invariant states. In this paper we compute the entropy production, with respect to a faithful invariant state $\rho$, for a class of quantum Markov semigroups arising in the weak coupling limit of a system coupled with reservoirs, whose generators ${\mathcal{L}}$ are sums of generators ${\mathcal{L}}_\omega$ associated with the positive Bohr frequencies $\omega$ of the system (see \cite{AcLuVo,Dav,DeFr}). Our main result is the explicit formula (\ref{eq:ep-formula}) for the entropy production rate in terms of second order moments of the Kraus operators in the GKSL representation of the generator.
This formula shows that the entropy production of a semigroup in this class is the \emph{sum} of the non-negative entropy productions of the semigroups generated by each ${\mathcal{L}}_\omega$. As a consequence (Theorem \ref{th:GDB-trivial}) the semigroup generated by ${\mathcal{L}}$ satisfies the quantum detailed balance condition if and only if so does each semigroup generated by an ${\mathcal{L}}_\omega$. The plan of the paper is as follows. In Section \ref{sect:QMSclass} we introduce the class of quantum Markov semigroups we are dealing with. In Section \ref{sect:QDB-ep} we recall various notions of quantum detailed balance. Our new formula for the entropy production is proved in Section \ref{sect:ep} and, finally, in Section \ref{sect:global-local} we essentially show that equilibrium states for the semigroup generated by ${\mathcal{L}}$ are equilibrium states for all the semigroups generated by each ${\mathcal{L}}_\omega$. \section{QMS of stochastic limit type} \label{sect:QMSclass} We will be concerned with the class of quantum Markov semigroups (QMS) we describe below under some restrictive assumptions in order to avoid domain problems and similar technicalities. This class arises in the weak coupling limit as well as in the stochastic limit of a Hamiltonian system $S$ interacting with a reservoir (see \cite{AcLuVo,Dav,DeFr} and the references therein). Let ${\mathsf{h}}$ be a fixed $d$-dimensional ($d<\infty$) Hilbert space and let $H_S$ be a self-adjoint operator on ${\mathsf{h}}$ with spectral decomposition \[ H_S=\sum_n\varepsilon_n P_{\varepsilon_n} \] where $\varepsilon_n\not=\varepsilon_m$ for $m\not=n$ and $P_{\varepsilon_n}$ is the orthogonal projection onto the nullspace of $H_S - \varepsilon_n \unit_{\mathsf{h}}$ (here $\unit_{\mathsf{h}}$ denotes the identity operator on ${\mathsf{h}}$). We denote by $\mathcal{B}({\mathsf{h}})$ the algebra of all bounded operators on ${\mathsf{h}}$.
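The spectral decomposition above is easy to verify numerically. The following sketch (with a hypothetical three-level $H_S$ having a doubly degenerate eigenvalue, not taken from any example in this paper) builds the orthogonal projections $P_{\varepsilon_n}$ and checks that $H_S=\sum_n\varepsilon_n P_{\varepsilon_n}$:

```python
import numpy as np

# Hypothetical 3-level Hamiltonian: eigenvalue 1 is doubly degenerate.
H_S = np.diag([0.0, 1.0, 1.0])

# Group eigenvectors by (numerically equal) eigenvalues and sum the
# rank-one projectors in each group to obtain P_{eps_n}.
eigvals, eigvecs = np.linalg.eigh(H_S)
projections = {}
for lam, v in zip(eigvals, eigvecs.T):  # eigvecs has eigenvectors as columns
    key = round(float(lam), 12)
    projections.setdefault(key, np.zeros_like(H_S))
    projections[key] = projections[key] + np.outer(v, v.conj())

# Reconstruct H_S = sum_n eps_n P_{eps_n}
reconstructed = sum(lam * P for lam, P in projections.items())
assert np.allclose(reconstructed, H_S)

# Each P_{eps_n} is an orthogonal projection: P^2 = P = P^*
for P in projections.values():
    assert np.allclose(P @ P, P) and np.allclose(P, P.conj().T)
```

The same grouping of eigenvalues also yields the Bohr frequencies as the positive differences of the distinct $\varepsilon_n$.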
We call \emph{Bohr frequencies} the differences \[ \omega=\varepsilon_n-\varepsilon_m \qquad \text{with \ $\varepsilon_n>\varepsilon_m$}. \] Choose an operator $V$ on ${\mathsf{h}}$ and define \begin{equation}\label{eq:V-omega} V_\omega=\sum_{\varepsilon_n-\varepsilon_m=\omega} P_{\varepsilon_m}VP_{\varepsilon_n}. \end{equation} Moreover, let $H_\omega$ be a self-adjoint operator on ${\mathsf{h}}$ commuting with $H_S$. For each Bohr frequency $\omega$ let ${\mathcal{L}}_\omega$ be the GKSL (Gorini-Kossakowski-Sudarshan-Lindblad) generator of a QMS on $\mathcal{B}({\mathsf{h}})$ \begin{eqnarray}\label{eq:slt-local} {\mathcal{L}}_\omega(x)&=&{\mathrm{i}} [H_\omega,x]-\frac{\gamma^-_\omega}{2}\left(V_\omega^*V_\omega x-2V^*_\omega xV_\omega+xV_\omega^*V_\omega \right)\nonumber\\ &-&\frac{\gamma^+_\omega}{2}\left(V_\omega V_\omega^* x-2V_\omega xV_\omega^*+xV_\omega V_\omega^* \right), \end{eqnarray} where $\gamma_\omega^-,\gamma_\omega^+>0$. QMSs in our class are generated by the linear map ${\mathcal{L}}$ \begin{equation}\label{eq:slt-global} {\mathcal{L}}=\sum_\omega {\mathcal{L}}_\omega. \end{equation} Note that, defining \begin{equation}\label{eq:G-global} G_\omega=-\frac{1}{2}\left (\gamma^-_\omega V^*_\omega V_\omega+\gamma^+_\omega V_\omega V^*_\omega\right)-{\mathrm{i}} H_\omega \end{equation} we can write the generator ${\mathcal{L}}_\omega$ simply as \begin{equation*} {\mathcal{L}}_\omega(x) = G_\omega^* x + \gamma^-_\omega V^*_\omega x V_\omega +\gamma^+_\omega V_\omega x V^*_\omega + x G_\omega. \end{equation*} Since the Hilbert space ${\mathsf{h}}$ is finite dimensional, the QMS generated by ${\mathcal{L}}$ admits an invariant state $\rho$. Moreover, it is well-known (see e.g.
\cite{AcLuVo}) that there exists an invariant state whose density matrix $\rho$ commutes with the system Hamiltonian $H_S$, so that it can be written as \[ \rho = \sum_{1\le j\le d} \rho_j |e_j\rangle\langle e_j| \] where $\rho_j\ge 0$, $\sum_{1\le j\le d}\rho_j=1$, $(e_j)_{1\le j\le d}$ is an orthonormal basis of ${\mathsf{h}}$ and each $e_j$ belongs to the range of some spectral projection $P_{\varepsilon_n}$ of $H_S$. We shall also assume that $\rho$ is faithful (if not, we can reduce the semigroup by its recurrent projection \cite{FFRR-LNM1882}). The generators of these QMSs turn out to admit a special GKSL representation (\cite{Partha} Theorem 30.16) \begin{equation}\label{eq:GKSL} {\mathcal{L}}(x)={\mathrm{i}}[H,x] - \frac{1}{2} \sum_{\ell =1}^{2b} \left(L^*_\ell L_\ell x-2L^*_\ell xL_\ell + xL^*_\ell L_\ell\right) \end{equation} where $b$ is the number of Bohr frequencies, such that $\hbox{\rm tr}(\rho L_\ell)=0$ for all $1\le \ell \le 2b$ and the operators $(L_\ell)_{1\le \ell\le 2b}$ are linearly independent in $\mathcal{B}({\mathsf{h}})$. Indeed, it suffices to associate with each Bohr frequency $\omega$ a pair of operators \begin{equation}\label{eq:GKSL-L} L_{2\ell}=(\gamma^-_\omega)^{1/2}V_\omega\qquad L_{2\ell-1}=(\gamma^+_\omega)^{1/2}V_\omega^*, \end{equation} where the indexes run over a finite set, and define $H=\sum_\omega H_\omega$.
The first one, to the best of our knowledge, appeared in the work of Agarwal \cite{Agarwal} in 1973 (see also Majewski \cite{Maje}) and involves a reversing operation $\Theta:\mathcal{B}({\mathsf{h}}) \to\mathcal{B}({\mathsf{h}})$, namely a linear $*$-map (\,$\Theta(x^*)= \Theta(x)^*$ for all $x\in\mathcal{B}({\mathsf{h}})$), that is also an antihomomorphism (\,$\Theta(xy)=\Theta(y)\Theta(x)$\,) and satisfies $\Theta^2=I$, where $I$ denotes the identity map on $\mathcal{B}({\mathsf{h}})$. A QMS $\mathcal{T}$ satisfies the Agarwal-Majewski QDB condition with respect to a faithful normal invariant state $\rho$ if $\tr{\rho x{\mathcal{T}}_t(y) } = \tr{\rho \,\Theta(y){\mathcal{T}}_t(\Theta(x))}$, for all $x,y\in\mathcal{B}({\mathsf{h}})$. If the state $\rho$ is invariant under the reversing operation, i.e. $\tr{\rho\Theta(x)}=\tr{\rho x}$ for all $x\in\mathcal{B}({\mathsf{h}})$, as we shall assume throughout the paper, this condition can be written in the equivalent form $\tr{\rho x{\mathcal{T}}_t(y)}= \tr{\rho \left((\Theta\circ{\mathcal{T}}_t\circ\Theta)(x)\right) y}$ for all $x,y\in\mathcal{B}({\mathsf{h}})$. Therefore the Agarwal-Majewski QDB condition means that the maps ${\mathcal{T}}_t$ admit dual maps coinciding with $\Theta\circ{\mathcal{T}}_t\circ\Theta$ for all $t\ge 0$; in particular, dual maps must be positive since $\Theta$ is obviously positivity preserving. The map $\Theta$ often appears in the physical literature as a parity map; a self-adjoint $x$ is an even (resp. odd) observable if $\Theta(x)=x$ (resp. $\Theta(x)=-x$). In our framework, since $H_S$ is the energy of the system, which is a typical even observable, a reasonable map $\Theta$ is the transpose $\Theta(a)=a^\intercal$ with respect to an orthonormal basis $(e_j)_{1\le j\le d}$ of ${\mathsf{h}}$ diagonalizing $H_S$ as in \cite{DuSn}. Interested readers can consult \cite{FFVU08,FFVU,Maje} for more general situations.
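The defining properties of a reversing operation are straightforward to check for the transpose map. The following sketch (random matrices of a hypothetical size, nothing specific to the model of this paper) verifies that $\Theta(a)=a^\intercal$ is a $*$-map, an antihomomorphism and an involution:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4
a = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
b = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))

# Theta is the plain transpose (NOT the conjugate transpose).
Theta = lambda x: x.T

# *-map: Theta(x^*) = Theta(x)^*  (x^* is the adjoint x.conj().T)
assert np.allclose(Theta(a.conj().T), Theta(a).conj().T)
# antihomomorphism: Theta(xy) = Theta(y) Theta(x)
assert np.allclose(Theta(a @ b), Theta(b) @ Theta(a))
# involution: Theta^2 = identity
assert np.allclose(Theta(Theta(a)), a)
```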
The best known QDB notion, however, is due to Alicki \cite{Alicki-Lendi} and Kossakowski, Frigerio, Gorini, Verri \cite{KFGV}. According to these authors, a QMS with generator ${\mathcal{L}}$ as in \eqref{eq:GKSL}, with invariant state $\rho$ whose density commutes with $H$, satisfies the quantum detailed balance condition if $\tr{\rho\, x{\mathcal{L}}(y)}=\tr{\rho\,\widetilde{{\mathcal{L}}}(x)y}$ where $\widetilde{{\mathcal{L}}}={\mathcal{L}} - 2{\mathrm{i}}[H,\cdot]$. As a consequence, the QMS $\widetilde{{\mathcal{T}}}$ on $\mathcal{B}({\mathsf{h}})$ generated by $\widetilde{{\mathcal{L}}}$ satisfies $\tr{\rho\, x{\mathcal{T}}_t(y)} =\tr{\rho\,\widetilde{{\mathcal{T}}}_t(x)y}$ for all $t\ge 0$. Both the above QDB conditions depend in a crucial way on the bilinear form $(x,y)\to \tr{\rho xy}$. In particular, if they hold true, all positive maps ${\mathcal{T}}_t$ admit \emph{positive} dual maps; as a consequence, all the maps ${\mathcal{T}}_t$ must commute with the modular group $(\sigma^\rho_t)_{t\in\mathbb{R}}$, given by $\sigma^\rho_t(x) =\rho^{{\mathrm{i}} t} x \rho^{-{\mathrm{i}} t}$, associated with the state $\rho$ (see \cite{KFGV} Prop. 2.1, \cite{MaSt} Prop. 5 and also \cite{Cipriani}), and so must the generator ${\mathcal{L}}$. This algebraic restriction is unnecessary if we consider the bilinear form $(x,y)\to \omega\left(\sigma_{{\mathrm{i}}/2}(x)y \right)$ for defining dual QMSs. QDB conditions arising when we consider this bilinear form are called \emph{standard} (see e.g. \cite{DeFr}, \cite{FFVU}); we could not find them in the literature, but it seems that they belong to the folklore of the subject. In particular, they were considered by R. Alicki and A. Majewski (private communication).
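For a concrete check of the standard bilinear form: since $\sigma_t(x)=\rho^{{\mathrm{i}} t}x\rho^{-{\mathrm{i}} t}$, the analytic continuation at $t={\mathrm{i}}/2$ is $\sigma_{{\mathrm{i}}/2}(x)=\rho^{-1/2}x\rho^{1/2}$, so that $\omega\left(\sigma_{{\mathrm{i}}/2}(x)y\right)=\hbox{\rm tr}\left(\rho^{1/2}x\rho^{1/2}y\right)$. The sketch below (a hypothetical faithful qutrit state, not an example from the paper) verifies this identity numerically:

```python
import numpy as np

rng = np.random.default_rng(1)
p = np.array([0.5, 0.3, 0.2])          # faithful: all weights strictly positive
rho = np.diag(p)
rho_h = np.diag(np.sqrt(p))            # rho^{1/2}
rho_hinv = np.diag(1 / np.sqrt(p))     # rho^{-1/2}

x = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
y = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))

# sigma_{i/2}(x) = rho^{-1/2} x rho^{1/2}
sigma_i2_x = rho_hinv @ x @ rho_h

# omega(sigma_{i/2}(x) y) = tr(rho^{1/2} x rho^{1/2} y)
lhs = np.trace(rho @ sigma_i2_x @ y)
rhs = np.trace(rho_h @ x @ rho_h @ y)
assert np.allclose(lhs, rhs)
```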
\begin{definition}\label{def:SQDB} Let ${\mathcal{T}}$ be a QMS with a dual ${\mathcal{T}}^\prime$ defined by $\omega\left(\sigma_{{\mathrm{i}}/2}(x){\mathcal{T}}_t(y) \right) =\omega \left(\sigma_{{\mathrm{i}}/2}\left({\mathcal{T}}_t^\prime(x)\right)y \right)$ for all $x,y\in\mathcal{B}({\mathsf{h}})$, $t\ge 0$. The semigroup ${\mathcal{T}}$ satisfies: \begin{enumerate} \item the standard quantum detailed balance condition with respect to the reversing operation $\Theta$ (SQDB-$\Theta$) if ${\mathcal{T}}_t^\prime= \Theta\circ{\mathcal{T}}_t\circ\Theta$ for all $t\ge 0$, \item the standard quantum detailed balance condition (SQDB) if the difference of the generators ${\mathcal{L}} -{\mathcal{L}}^\prime$ of ${\mathcal{T}}$ and ${\mathcal{T}}^\prime$ is a derivation. \end{enumerate} \end{definition} It is worth noticing here that the above \emph{standard} QDB conditions coincide with the Agarwal-Majewski and Alicki-Gorini-Kossakowski-Frigerio-Verri conditions, respectively, when the QMS ${\mathcal{T}}$ commutes with the modular group $(\sigma_t)_{t\in\mathbb{R}}$ associated with $\omega$ (see \cite{FFVU07,FFVU}). In the framework of the present paper all states are normal and will be identified with their densities. In particular, $\omega(x)=\tr{\rho\, x}$, $\sigma_t(x)= \rho^{{\mathrm{i}} t} x \rho^{-{\mathrm{i}} t}$ and $\omega\left(\sigma_{{\mathrm{i}}/2}(x)y \right) =\tr{\rho^{1/2} x \rho^{1/2}y}$. In \cite{FFVU} (Theorems 5, 8 and Remark 4) we proved the following characterisations of QMSs satisfying a standard QDB condition, which we recall here in the present framework. \begin{theorem}\label{th:SQDB} A QMS ${\mathcal{T}}$ satisfies the SQDB if and only if for any special GKSL representation of the generator ${\mathcal{L}}$ by means of operators $G,L_\ell$ there exists a unitary $(u_{m\ell})_{1\le m, \ell\le 2b}$ on $\mathsf{k}$ which is also symmetric (i.e.
$u_{\ell m}= u_{m\ell}$ for all $m,\ell$) such that, for all $\ell\ge 1$, \begin{equation}\label{sqdb-cond} \rho^{1/2}L^*_\ell=\sum_{1\le m\le 2b} u_{\ell m}L_m\rho^{1/2}. \end{equation} \end{theorem} \begin{theorem}\label{th:SQDB-TR} A QMS ${\mathcal{T}}$ satisfies the SQDB-$\Theta$ condition if and only if for any special GKSL representation of ${\mathcal{L}}$ by means of operators $G, L_\ell$, there exists a self-adjoint unitary $(u_{\ell m})_{1\le m, \ell\le 2b}$ such that: \begin{enumerate} \item \label{sdb-theta-1} $\rho^{1/2}G^\intercal =G\rho^{1/2} $, \item \label{sdb-theta-2} $\rho^{1/2} L_\ell^\intercal = \sum_{1\le m\le 2b} u_{\ell m} {L_m}\rho^{1/2}$ for all $1\le \ell \le 2b$. \end{enumerate} \end{theorem} The SQDB-$\Theta$ condition is more restrictive than the SQDB condition because it also involves the identity $\rho^{1/2}G^\intercal=G\rho^{1/2} $ (see Example 7.3 in \cite{FFRR-ep}). However, this does not happen if $G^\intercal = G$ and $\rho$ commutes with $G$. This is a reasonable physical assumption satisfied by many QMSs, for instance those of stochastic limit type we are considering in this paper. Conditions obtained including the reversing map $\Theta$ seem more suitable for studying quantum detailed balance (\cite{FFVU,JP-2013}).
Since it provides an index describing deviation from detailed balance, it was introduced in \cite{FFRR-QP29, FFRR-ep} through the forward and backward two-point states on $\mathcal{B}({\mathsf{h}})\otimes\mathcal{B}({\mathsf{h}})$ \begin{eqnarray*} \overrightarrow{\Omega}_t\left( x \otimes y\right) & = & \tr{ \rho^{1/2} x^\intercal \rho^{1/2} {\mathcal{T}}_t(y)}\\ \overleftarrow{\Omega}_t\left( x \otimes y\right) & = & \tr{ \rho^{1/2} {\mathcal{T}}_t(x^\intercal)^\intercal \rho^{1/2} y}, \end{eqnarray*} which clearly coincide if and only if ${\mathcal{T}}$ satisfies the SQDB-$\Theta$ condition, and their relative entropy $S(\overrightarrow{\Omega}_t,\overleftarrow{\Omega}_t)$ as \begin{equation}\label{eq:ep-def} \eprod{{\mathcal{T}},\rho} = \limsup_{t\to 0^+}\frac{S(\overrightarrow{\Omega}_t,\overleftarrow{\Omega}_t)}{t}. \end{equation} Moreover, in \cite{FFRR-ep} Theorem 5, we proved an explicit formula based on the Kraus operators $L_\ell$ in a GKSL decomposition of the generator ${\mathcal{L}}$.
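The relative entropy entering \eqref{eq:ep-def} is the usual quantum relative entropy $S(\omega_1,\omega_2)=\hbox{\rm tr}\left(D_1(\log D_1-\log D_2)\right)$ of the corresponding densities. A minimal numerical sketch (hypothetical $2\times 2$ faithful densities, not related to the semigroups of this paper):

```python
import numpy as np

def logm_herm(A):
    """Matrix logarithm of a positive-definite Hermitian matrix via eigendecomposition."""
    w, U = np.linalg.eigh(A)
    return U @ np.diag(np.log(w)) @ U.conj().T

def rel_entropy(D1, D2):
    """Quantum relative entropy S(D1, D2) = tr(D1 (log D1 - log D2))."""
    return np.trace(D1 @ (logm_herm(D1) - logm_herm(D2))).real

D1 = np.array([[0.7, 0.1], [0.1, 0.3]])   # faithful density matrix
D2 = np.array([[0.5, 0.0], [0.0, 0.5]])   # maximally mixed state

assert abs(rel_entropy(D1, D1)) < 1e-12   # S(D, D) = 0
assert rel_entropy(D1, D2) >= 0           # Klein's inequality
```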
Let ${\overrightarrow{\Phi}_*}$ and ${\overleftarrow{\Phi}_*}$ be the linear maps on trace class operators on ${\mathsf{h}}\otimes{\mathsf{h}}$ \begin{eqnarray} {\overrightarrow{\Phi}_*}(X)&=& \sum_{\omega}\left(\gamma^-_\omega\left(\unit\otimes V_\omega\right) X \left(\unit\otimes V_\omega^*\right)+\gamma^+_\omega\left(\unit\otimes V_\omega^*\right) X \left(\unit\otimes V_\omega\right)\right)\label{eq:ava}\\ {\overleftarrow{\Phi}_*}(X) &= &\sum_{\omega}\left(\gamma^-_\omega\left(V_\omega\otimes\unit \right) X \left(V_\omega^*\otimes\unit\right) +\gamma^+_\omega\left(V_\omega^*\otimes\unit\right) X \left( V_\omega\otimes\unit\right)\right)\label{eq:ind} \end{eqnarray} Let $\theta$ be the antilinear conjugation in a basis $(e_j)_{1\le j\le d}$ diagonalizing $\rho$ and let $D$ be the entangled state on $\mathcal{B}({\mathsf{h}})\otimes\mathcal{B}({\mathsf{h}})$ introduced in \cite{FFRR-ep} as \begin{equation}\label{eq:density_Omega0} D=\ketbra{r}, \qquad r= \sum_{j}\rho_j^{1/2} \theta e_j\otimes e_j. \end{equation} It is not hard to check as in \cite{FFRR-ep} that $D$ is the density of the state $\overrightarrow{\Omega}_0=\overleftarrow{\Omega}_0$. \begin{theorem}\label{th:ep-formula} Let ${\mathcal{T}}$ be a QMS on $\bo{{\mathsf{h}}}$ and $\rho$ a faithful invariant state. Assume: \begin{enumerate} \item $\rho^{1/2}G^\intercal=G\rho^{1/2}$, \item the linear spans of $\set{L_\ell \rho^{1/2}\,\mid\, \ell\ge 1}$ and $\set{ \rho^{1/2} L_\ell^\intercal \,\mid\, \ell\ge 1}$ coincide.
\end{enumerate} Then the ranges of ${\overrightarrow{\Phi}_*}(D)$ and ${\overleftarrow{\Phi}_*}(D)$ coincide and the entropy production is \begin{eqnarray}\label{eq:ep-formula-general} \eprod{{\mathcal{T}},\rho}& =& \frac{1}{2} \Tr{\left({\overrightarrow{\Phi}_*}(D)-{\overleftarrow{\Phi}_*}(D)\right) \left(\log\left({\overrightarrow{\Phi}_*}(D)\right) - \log\left({\overleftarrow{\Phi}_*}(D)\right) \right)}\nonumber\\ & =& \Tr{{\overrightarrow{\Phi}_*}(D) \left(\log\left({\overrightarrow{\Phi}_*}(D)\right) - \log\left({\overleftarrow{\Phi}_*}(D)\right) \right)} \end{eqnarray} In order to compute explicitly the entropy production for QMSs in the class described in Section \ref{sect:QMSclass} we begin by establishing a preliminary lemma. We denote by $\langle \cdot, \cdot \rangle$ the scalar product in ${\mathsf{h}}\otimes {\mathsf{h}}$. \begin{lemma}\label{lem:XY} Let $X$ and $Y$ be bounded operators on ${\mathsf{h}}$. Then \begin{eqnarray} \langle (Y\otimes\unit)r,(\unit\otimes X)r\rangle &=&\tr{(\rho^{1/2}\theta Y^*\theta )^*X\rho^{1/2}}\\ \langle (\unit\otimes Y)r,(\unit\otimes X)r\rangle &=&\tr{\rho\, Y^*X}.
\end{eqnarray} \end{lemma} \noindent{\it Proof.} Both formulas follow from straightforward computations \begin{eqnarray*} \langle (Y\otimes\unit)r,(\unit\otimes X)r\rangle &=&\sum_{j,k}(\rho_j\rho_k)^{1/2}\langle Y\theta e_j,\theta e_k\rangle \langle e_j,Xe_k\rangle\\ &=&\sum_{j,k}\langle e_k,\theta Y\theta\rho^{1/2}e_j\rangle \langle e_j,X\rho^{1/2}e_k\rangle\\ &=&\sum_{j,k}\langle \rho^{1/2}\theta Y^*\theta e_k,e_j\rangle \langle e_j,X\rho^{1/2}e_k\rangle\\ &=&\sum_{k}\langle \rho^{1/2}\theta Y^*\theta e_k,X\rho^{1/2}e_k\rangle\\ &=&\tr{(\rho^{1/2}\theta Y^*\theta )^*X\rho^{1/2}} \end{eqnarray*} \begin{eqnarray*} \langle (\unit\otimes Y)r,(\unit\otimes X)r\rangle &=&\sum_{j,k}(\rho_j\rho_k)^{1/2}\langle\theta e_j,\theta e_k\rangle \langle Ye_j,Xe_k\rangle\\ &=&\sum_j\langle Y \rho^{1/2}e_j, X\rho^{1/2}e_j\rangle\\ &=&\tr{\rho\, Y^*X} \end{eqnarray*} \hfill $\square$ Replacing the operators $X,Y$ in Lemma \ref{lem:XY} by operators $V_\omega$ and keeping into account that $\theta V_\omega^* \theta=V_\omega^*$ if $V_\omega$ is a real matrix, we have the following \begin{corollary}\label{cor:scalar-prod} If the operators $V_\omega$, defined by \eqref{eq:V-omega}, are represented by real matrices we have \begin{eqnarray*} \langle (V_{\omega^\prime}\otimes\unit)r, (\unit\otimes V_{\omega})r\rangle&=&\delta_{\omega, \omega^\prime}\tr{\rho^{1/2}V_{\omega}\rho^{1/2}V_{\omega}}\\ \langle (\unit\otimes V_{\omega^\prime})r,(\unit\otimes V_{\omega})r\rangle &=&\delta_{\omega, \omega^\prime} \tr{\rho V_{\omega}^*V_\omega} \end{eqnarray*} where $\delta_{\omega,\omega^\prime}$ is the Dirac delta. 
\end{corollary} We are now in a position to prove our entropy production formula. \begin{theorem}\label{th:ep-QMS-SLT} Assume that $V_\omega$ and $H_\omega$ are real matrices for every Bohr frequency $\omega$. Then the entropy production is \begin{eqnarray}\label{eq:ep-formula} \eprod{{\mathcal{T}},\rho}& =& \sum_\omega\left( \gamma^-_\omega\, \tr{\rho V^*_\omega V_\omega}\log\left( \frac{\gamma^-_\omega \, \tr{\rho V^*_\omega V_\omega}^2}{\gamma^+_\omega\,\tr{\rho^{1/2}V^*_\omega\rho^{1/2}V_\omega}^2}\right) \right.\nonumber\\ &+&\left. \gamma^+_\omega \tr{\rho V_\omega V^*_\omega}\log\left( \frac{\gamma^+_\omega \,\tr{\rho V_\omega V^*_\omega}^2}{\gamma^-_\omega\,\tr{\rho^{1/2}V^*_\omega\rho^{1/2}V_\omega}^2}\right)\right) \end{eqnarray} \end{theorem} \noindent{\it Proof.} Replacing $X$ by $D$ in \eqref{eq:ava} and \eqref{eq:ind} and denoting $\Vava{\omega}=\unit\otimes V_\omega$, $\Vind{\omega}=V_\omega\otimes\unit$ one obtains \begin{eqnarray}\label{eq:phiava-phiind} {\overrightarrow{\Phi}_*}(D)&=& \sum_{\omega}\left(\gamma^-_\omega\ketbra{\Vava{\omega} r}+\gamma^+_\omega \ketbra{\Vava{\omega}^* r}\right)\\ {\overleftarrow{\Phi}_*}(D) &= &\sum_{\omega}\left(\gamma^-_\omega \ketbra{\Vind{\omega} r}+\gamma^+_\omega \ketbra{\Vind{\omega}^* r}\right) \end{eqnarray} By Corollary \ref{cor:scalar-prod}, each vector $\Vava{\omega}r$ is orthogonal to any vector $\Vava{\omega}^*r$ and each $\Vava{\omega}r$ (respectively $\Vava{\omega}^* r$) is orthogonal to $\Vava{\omega^\prime}r$ (resp. $\Vava{\omega^\prime}^*r$) with $\omega^\prime\not=\omega$. Therefore, normalising the vectors $\Vava{\omega}r,\Vava{\omega}^*r$ yields an orthonormal basis of ${\mathsf{h}}\otimes{\mathsf{h}}$.
In this basis ${\overrightarrow{\Phi}_*}(D)$ turns out to be a diagonal matrix with $2\times 2$ blocks associated with each Bohr frequency $\omega$ given by \[ \left(\begin{array}{cc} \gamma^-_\omega \tr{\rho V_\omega^*V_\omega}&0\\ 0&\gamma^+_\omega \tr{\rho V_\omega V_\omega^*} \end{array}\right) \] In order to write ${\overleftarrow{\Phi}_*}(D)$, compute first $\Vind{\omega}r$: \begin{eqnarray*} \Vind{\omega}r & = &\frac{\tr{\rho^{1/2}V^*_\omega\rho^{1/2}V^*_\omega}} {\tr{\rho V^*_\omega V_\omega}}\Vava{\omega}r +\frac{\tr{\rho^{1/2}V_\omega\rho^{1/2}V^*_\omega}} {\tr{\rho V_\omega V_\omega^*}}\Vava{\omega}^*r \\ & = & \frac{\tr{\rho^{1/2}V_\omega\rho^{1/2}V^*_\omega}} {\tr{\rho V_\omega V_\omega^*}}\Vava{\omega}^*r, \end{eqnarray*} since the first term is 0. In the same way we have \begin{eqnarray*} \Vind{\omega}^*r & = &\frac{\tr{\rho^{1/2}V^*_\omega\rho^{1/2}V_\omega}} {\tr{\rho V^*_\omega V_\omega}}\Vava{\omega}r +\frac{\tr{\rho^{1/2}V_\omega\rho^{1/2}V_\omega}} {\tr{\rho V_\omega V_\omega^*}}\Vava{\omega}^*r \\ & = & \frac{\tr{\rho^{1/2}V^*_\omega\rho^{1/2}V_\omega}} {\tr{\rho V^*_\omega V_\omega}}\Vava{\omega}r. \end{eqnarray*} Thus, by the cyclic property of the trace, we have \[ \Vind{\omega}r=\frac{\tr{\rho^{1/2}V^*_\omega\rho^{1/2}V_\omega}}{\tr{\rho V_\omega V^*_\omega}^{1/2}}\frac{\Vava{\omega}^*r}{\left\Vert \Vava{\omega}^*r\right\Vert},\qquad \Vind{\omega}^*r=\frac{\tr{\rho^{1/2}V^*_\omega\rho^{1/2}V_\omega}}{\tr{\rho V^*_\omega V_\omega}^{1/2}}\frac{\Vava{\omega}r}{\left\Vert \Vava{\omega}r\right\Vert} \] It follows that, in the above orthonormal basis of ${\mathsf{h}}\otimes{\mathsf{h}}$, obtained by normalising the vectors $\Vava{\omega}r,\Vava{\omega}^*r$, ${\overleftarrow{\Phi}_*}(D)$ becomes \begin{eqnarray*} {\overleftarrow{\Phi}_*}(D)&=&\sum_\omega\left(\gamma^+_\omega \frac{\tr{\rho^{1/2}V^*_\omega\rho^{1/2}V_\omega}^2}{\tr{\rho V^*_\omega V_\omega}}\frac{\ketbra{\Vava{\omega}r}}{\left\Vert \Vava{\omega}r\right\Vert^2}\right.\\ &+&\left .
\gamma^-_\omega \frac{\tr{\rho^{1/2}V^*_\omega\rho^{1/2}V_\omega}^2}{\tr{\rho V_\omega V^*_\omega}}\frac{\ketbra{\Vava{\omega}^*r}}{\left\Vert \Vava{\omega}^*r\right\Vert^2}\right) \end{eqnarray*} and it turns out to be a matrix with $2\times 2$ diagonal blocks associated with each Bohr frequency $\omega$ given by \[ \left(\begin{array}{cc} \gamma^+_\omega \frac{\tr{\rho^{1/2}V^*_\omega\rho^{1/2}V_\omega}^2}{\tr{\rho V^*_\omega V_\omega}} &0\\ 0&\gamma^-_\omega \frac{\tr{\rho^{1/2}V^*_\omega\rho^{1/2}V_\omega}^2}{\tr{\rho V_\omega V^*_\omega}}\end{array} \right) \] and our entropy production formula follows immediately. \hfill $\square$ \section{Global and local equilibrium}\label{sect:global-local} In this section we show that the entropy production \eqref{eq:ep-formula} vanishes if and only if all the semigroups generated by each ${\mathcal{L}}_\omega$ satisfy the SQDB-$\Theta$ condition. It is useful to introduce some notation that allows us to focus more clearly on the contributions of each QMS generated by ${\mathcal{L}}_\omega$ to the entropy production: \[ \nu_\omega^-=\tr{\rho V_\omega^*V_\omega},\qquad \nu^+_\omega=\tr{\rho V_\omega V^*_\omega},\qquad \mu_\omega=\tr{\rho^{1/2}V^*_\omega\rho^{1/2}V_\omega} . \] In this notation the entropy production is written as \begin{equation} \eprod{{\mathcal{T}},\rho} =\sum_\omega\left(\gamma^-_\omega\nu^-_\omega \log\left(\frac{\gamma^-_\omega\nu^{-\,2}_\omega}{\gamma^+_\omega\mu^2_\omega}\right) +\gamma^+_\omega\nu^+_\omega \log\left(\frac{\gamma^+_\omega\nu^{+\,2}_\omega} {\gamma^-_\omega\mu^2_\omega}\right)\right) \end{equation} Note that, by the Schwarz inequality, \begin{equation}\label{eq:Schwarz} \mu^2_\omega\leq \nu^+_\omega\nu^-_\omega. 
\end{equation} Moreover \[ \log \left( \frac{\gamma^\mp_\omega\nu^{\mp\,2}_\omega}{\gamma^\pm_\omega\mu^2_\omega} \right)=\log\left(\frac{\gamma^\mp_\omega \nu^\mp_\omega}{\gamma_\omega^\pm\nu^\pm_\omega}\right)+\log\left(\frac{\nu^+_\omega\nu^-_\omega}{\mu^2_\omega}\right) \] so that we can rewrite the entropy production as \begin{eqnarray*} \eprod{{\mathcal{T}},\rho}&=&\sum_\omega\Big[\left(\gamma^-_\omega\nu^-_\omega-\gamma^+_\omega\nu^+_\omega\right)\log\left(\frac{\gamma^-_\omega\nu^{-}_\omega}{\gamma^+_\omega\nu^+_\omega}\right)\\ &+&\left(\gamma^-_\omega\nu^-_\omega+\gamma^+_\omega\nu^+_\omega\right) \log\left(\frac{\nu_\omega^{+}\nu^{-}_\omega}{\mu^2_\omega}\right)\Big] \end{eqnarray*} \begin{corollary}\label{cor:ep=0iff} The entropy production is zero if and only if $\gamma^-_\omega\nu^-_\omega= \gamma^+_\omega\nu^+_\omega$ and $\nu^-_\omega\nu^+_\omega=\mu^2_\omega$. \end{corollary} \noindent{\it Proof.} It suffices to note that $\log({\nu_\omega^{+}\nu^{-}_\omega}/{\mu^2_\omega})$ is non-negative by \eqref{eq:Schwarz} and that $(t-s)\log(t/s)$ is non-negative for all positive reals $t,s$, and strictly positive when $t\not= s$. \hfill $\square$ \medskip The following result shows that QMSs of stochastic limit type have zero entropy production if and only if the standard quantum detailed balance condition with reversing map $\Theta$ (SQDB-$\Theta$ condition) holds. This is not true for an arbitrary QMS, as Example 7.3 in \cite{FFRR-ep} shows. \begin{theorem}\label{th:SQDB-ep-zero-iff} Assume that $V_\omega$ and $H_\omega$ are real matrices for all $\omega$, so that the semigroup commutes with the reversing map $\Theta$. Then the following are equivalent: \begin{enumerate} \item the entropy production is zero, \item $\rho^{1/2}L_{2\ell-1}^*= L_{2\ell} \,\rho^{1/2}$ for all $\ell=1,\dots, b$, \item $\left(\gamma_\omega^{+}\right)^{1/2} \rho^{1/2}V_\omega = \left(\gamma_\omega^{-}\right)^{1/2} V_\omega \rho^{1/2}$ for all $\omega$, \item the SQDB-$\Theta$ condition holds. 
\end{enumerate} \end{theorem} {\it Proof.} $2.\Leftrightarrow 3.$ Clear from the definition \eqref{eq:GKSL-L} of $L_{2\ell-1}$ and $L_{2\ell}$. Indeed, $ \left(\gamma_\omega^{+}\right)^{1/2} \rho^{1/2}V_\omega =\rho^{1/2}L_{2\ell-1}^* $ and $\left(\gamma_\omega^{-}\right)^{1/2} V_\omega \rho^{1/2} = L_{2\ell}\rho^{1/2}$ for all $\ell=1,\dots, b$. \\ $1.\Rightarrow 3.$ By Corollary \ref{cor:ep=0iff} we have $\nu^-_\omega\nu^+_\omega=\mu^2_\omega$ and so the Schwarz inequality \eqref{eq:Schwarz} turns out to be an equality. It follows that the operators $\rho^{1/2}V_\omega$ and $V_\omega\rho^{1/2}$, thought of as vectors in the Hilbert space of Hilbert-Schmidt operators on ${\mathsf{h}}$, are parallel, i.e. $\rho^{1/2}V_\omega = c_\omega V_\omega \rho^{1/2}$ for some constant $c_\omega$. Computing the scalar product with $V_\omega \rho^{1/2}$ we immediately find $ \mu_\omega = c_\omega \nu^-_{\omega}$, i.e., since $\mu^2_\omega=\nu^-_\omega\nu^+_\omega$, $ c_\omega =\left( \nu^+_{\omega}/\nu^-_{\omega}\right)^{1/2}$ so that \[ \rho^{1/2}V_\omega = \left(\frac{ \nu^+_{\omega}} { \nu^-_{\omega}}\right)^{1/2} V_\omega \rho^{1/2} \] and $3.$ follows from $\gamma_\omega^-\nu_\omega^- =\gamma_\omega^+\nu_\omega^+$. \\ $3. \Rightarrow 4.$ The SQDB-$\Theta$ condition is characterised by Theorem 8 in \cite{FFVU}, namely Theorem \ref{th:SQDB-TR} in this paper. Now, the identity $\rho^{1/2} \theta G^*\theta = G\rho^{1/2}$ holds because we have assumed that $V_\omega$ and $H_\omega$ are real matrices. Moreover, since $L_\ell=\theta L_\ell \theta$ and $L_\ell^*=\theta L_\ell^* \theta$ for all $\ell\ge 1$, condition 2 of Theorem \ref{th:SQDB-TR} holds choosing as unitary self-adjoint the operator $u$ flipping even and odd indexes $\ell$, i.e. $u_{kj}= 1$ if either $k=2\ell$ and $j=2\ell -1$ or $k=2\ell-1$ and $j=2\ell$, and $u_{kj}= 0$ otherwise. 
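The Schwarz inequality \eqref{eq:Schwarz}, and its saturation when $\rho^{1/2}V_\omega$ and $V_\omega\rho^{1/2}$ are parallel (the key point of the step $1.\Rightarrow 3.$ above), can be checked numerically. The following is a minimal sketch; the diagonal state $\rho$ and the matrices $V$, $W$ below are illustrative data, not objects of the model:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
p = rng.random(n)
rho = np.diag(p / p.sum())       # a faithful diagonal state (illustrative)
sq = np.sqrt(rho)                # rho^{1/2}

# generic real V: the Schwarz inequality mu^2 <= nu^+ nu^- holds (strictly, generically)
V = rng.standard_normal((n, n))
nu_m = np.trace(rho @ V.T @ V)   # nu^- = tr(rho V* V)
nu_p = np.trace(rho @ V @ V.T)   # nu^+ = tr(rho V V*)
mu = np.trace(sq @ V.T @ sq @ V) # mu  = tr(rho^{1/2} V* rho^{1/2} V)
assert mu**2 <= nu_p * nu_m

# a W commuting with rho saturates it: rho^{1/2} W = W rho^{1/2}
W = np.diag(rng.standard_normal(n))
nu = np.trace(rho @ W @ W)       # here nu^+ = nu^- = nu
mu_W = np.trace(sq @ W @ sq @ W)
assert abs(mu_W**2 - nu * nu) < 1e-10

# with gamma^- nu^- = gamma^+ nu^+ both logarithms in the entropy
# production vanish, as in the corollary above
g_minus = g_plus = 0.7
ep = (g_minus * nu * np.log(g_minus * nu**2 / (g_plus * mu_W**2))
      + g_plus * nu * np.log(g_plus * nu**2 / (g_minus * mu_W**2)))
assert abs(ep) < 1e-10
```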
\\ $4.\Rightarrow 1.$ For every vector $v=\sum_{\alpha,\beta} v_{\alpha\beta}\, \theta e_\alpha\otimes e_\beta$ we have \begin{equation}\label{eq:vphiavav} \left\langle v, {\overrightarrow{\Phi}_*}(D) v\right\rangle = \sum_{\ell,j,k,\beta,\beta'} \overline{v}_{j\beta'}v_{k\beta} \left\langle e_{\beta'}, L_\ell \rho^{1/2}e_j\right\rangle \left\langle L_\ell\rho^{1/2}e_k , e_\beta \right\rangle \end{equation} and also, by the properties of the antiunitary $\theta$, \begin{eqnarray*} \left\langle v, {\overleftarrow{\Phi}_*}(D) v\right\rangle & = & \sum_{\ell,j,k,\alpha,\alpha'} \overline{v}_{\alpha'j}v_{\alpha k} \left\langle \theta e_{\alpha'}, L_\ell \rho^{1/2}\theta e_j\right\rangle \left\langle L_\ell\rho^{1/2}\theta e_k , \theta e_\alpha \right\rangle \\ & = & \sum_{\ell,j,k,\alpha,\alpha'} \overline{v}_{\alpha'j}v_{\alpha k} \left\langle \theta L_\ell \theta \rho^{1/2}e_j, e_{\alpha'}\right\rangle \left\langle e_\alpha, \theta L_\ell\theta \rho^{1/2}e_k \right\rangle \\ & = & \sum_{\ell,j,k,\alpha,\alpha'} \overline{v}_{\alpha'j}v_{\alpha k} \left\langle e_j, \rho^{1/2}\theta L_\ell^* \theta e_{\alpha'}\right\rangle \left\langle \theta L_\ell^*\theta \rho^{1/2}e_\alpha, e_k \right\rangle \end{eqnarray*} Since the SQDB-$\Theta$ condition holds, $\rho^{1/2}\theta L_\ell^* \theta = \sum_m u_{\ell m} L_m \rho^{1/2}$ for a unitary self-adjoint $(u_{\ell m})_{1\le \ell,m \le 2b}$, so that $\sum_\ell\overline{u}_{\ell m'} u_{\ell m} =\delta_{m'm}$ and \begin{eqnarray*} \left\langle v, {\overleftarrow{\Phi}_*}(D) v\right\rangle & = & \sum_{\ell,j,k,\alpha,\alpha',m,m'} \overline{v}_{\alpha'j}v_{\alpha k} \overline{u}_{\ell m'} u_{\ell m} \left\langle e_j, L_{m'} \rho^{1/2}e_{\alpha'}\right\rangle \left\langle L_m \rho^{1/2}e_\alpha, e_k \right\rangle \\ & = & \sum_{j,k,\alpha,\alpha',m} \overline{v}_{\alpha'j}v_{\alpha k} \left\langle e_j, L_{m} \rho^{1/2}e_{\alpha'}\right\rangle \left\langle L_m \rho^{1/2}e_\alpha, e_k \right\rangle. 
\end{eqnarray*} Changing indexes and comparing with \eqref{eq:vphiavav}, by the arbitrariness of $v$, we find ${\overrightarrow{\Phi}_*}(D)={\overleftarrow{\Phi}_*}(D)$ and the entropy production, given by \eqref{eq:ep-formula}, is zero. \hfill $\square$ \medskip {\bf Remark.} It is worth noticing here that the conditions of Theorem \ref{th:SQDB-ep-zero-iff} are also equivalent to the QDB-$\Theta$ condition, so that in our class of QMSs of stochastic limit type the SQDB-$\Theta$ and QDB-$\Theta$ conditions are equivalent. Indeed, since the modular group is given by $\sigma_{t}(x) =\rho^{{\mathrm{i}} t} x \rho^{-{\mathrm{i}} t}$, the identity $\left(\gamma_\omega^{+}\right)^{1/2} \rho^{1/2}V_\omega = \left(\gamma_\omega^{-}\right)^{1/2} V_\omega \rho^{1/2}$ reads $\sigma_{-{\mathrm{i}}/2}(V_\omega) = \left(\gamma_\omega^-/\gamma_\omega^+\right)^{1/2} V_\omega$. Taking the adjoint of $\left(\gamma_\omega^{+}\right)^{1/2} \rho^{1/2}V_\omega = \left(\gamma_\omega^{-}\right)^{1/2} V_\omega \rho^{1/2}$ we find also, in the same way, $\sigma_{-{\mathrm{i}}/2}(V_\omega^*) = \left(\gamma_\omega^+/\gamma_\omega^-\right)^{1/2} V_\omega^*$. 
It follows that \begin{eqnarray*} \sigma_{-{\mathrm{i}}}(V_\omega) & \kern-2truept =\kern-2truept & \sigma_{-{\mathrm{i}}/2}\left(\sigma_{-{\mathrm{i}}/2}(V_\omega)\right) =\left( \gamma_\omega^-/\gamma_\omega^+\right)^{1/2} \sigma_{-{\mathrm{i}}/2}(V_\omega) = \left( \gamma_\omega^-/\gamma_\omega^+\right) V_\omega \\ \sigma_{-{\mathrm{i}}}(V_\omega^*) & \kern-2truept =\kern-2truept & \sigma_{-{\mathrm{i}}/2}\left(\sigma_{-{\mathrm{i}}/2}(V_\omega^*)\right) =\left( \gamma_\omega^+/\gamma_\omega^-\right)^{1/2} \sigma_{-{\mathrm{i}}/2}(V_\omega^*) = \left( \gamma_\omega^+/\gamma_\omega^-\right) V_\omega^* \end{eqnarray*} and \begin{eqnarray*} \sigma_{-{\mathrm{i}}}(L_{2\ell}) = \left( \gamma_\omega^-/\gamma_\omega^+\right) L_{2\ell}, & \qquad & \sigma_{-{\mathrm{i}}}(L_{2\ell-1}) = \left( \gamma_\omega^+/\gamma_\omega^-\right) L_{2\ell-1}\\ \sigma_{-{\mathrm{i}}}(L_{2\ell}^*) = \left( \gamma_\omega^+/\gamma_\omega^-\right) L_{2\ell}^*, & \qquad & \sigma_{-{\mathrm{i}}}(L_{2\ell-1}^*) = \left( \gamma_\omega^-/\gamma_\omega^+\right) L_{2\ell-1}^*. \end{eqnarray*} Straightforward computations show that $\sigma_{-{\mathrm{i}}}(H_\omega) = H_\omega$. It follows then from Theorem 9 in \cite{FFVU07} that the QDB-$\Theta$ condition holds. \medskip Theorem \ref{th:SQDB-ep-zero-iff} and the above remark lead us to the following result essentially showing that $\rho$ is an equilibrium state for the QMS generated by ${\mathcal{L}}$ if and only if it is an equilibrium state for the QMSs generated by \emph{each} ${\mathcal{L}}_\omega$. \begin{theorem}\label{th:GDB-trivial} Let ${\mathcal{L}}$ be the generator of a QMS as in Section \ref{sect:QMSclass} and let $\rho$ be a faithful invariant state. Assume that $V_\omega$ is a real matrix for all $\omega$ and $H_\omega$ is a linear combination of $V_\omega^* V_\omega$ and $V_\omega V_\omega^*$. 
Then the following are equivalent: \begin{enumerate} \item the QMS generated by ${\mathcal{L}}$ satisfies the SQDB-$\Theta$ condition, \item for all $\omega$, the QMS generated by each ${\mathcal{L}}_\omega$ admits $\rho$ as an invariant state and satisfies the SQDB-$\Theta$ condition. \end{enumerate} \end{theorem} \noindent{\it Proof.} Clearly $2.\Rightarrow 1.$ Conversely, if the QMS generated by ${\mathcal{L}}$ satisfies the SQDB-$\Theta$ condition, then by Theorem \ref{th:SQDB-ep-zero-iff} and the above Remark we have \begin{eqnarray*} \rho V_\omega^* V_\omega \rho^{-1} & = & \rho V_\omega^*\rho^{-1}\, \rho V_\omega \rho^{-1} = \frac{\gamma_\omega^+}{\gamma_\omega^{-}}\, V_\omega^* \,\frac{\gamma_\omega^{-}}{\gamma_\omega^{+}}\, V_\omega = V_\omega^* V_\omega \end{eqnarray*} and so $\rho$ commutes with $V_\omega^*V_\omega$. In the same way, we can check that it commutes with $V_\omega V_\omega^*$. As a consequence, by the commutation rules found in the above Remark, $\rho V_\omega^* = ({\gamma_\omega^+}/ {\gamma_\omega^{-}}) V_\omega^*\rho$ and $\rho V_\omega = ({\gamma_\omega^{-}}/ {\gamma_\omega^{+}}) V_\omega\rho$, and we have \begin{eqnarray*} & & G_\omega \rho + \gamma_\omega^{-}V_\omega\rho V_\omega^* + \gamma_\omega^+V_\omega^*\rho V_\omega + \rho G_\omega^* \\ & & = (G_\omega + G_\omega^*) \rho + \gamma_\omega^{+}V_\omega V_\omega^* \rho + \gamma_\omega^{-}V_\omega^* V_\omega \rho \\ & & = \left( G_\omega + G_\omega^* +\gamma_\omega^{+}V_\omega V_\omega^* + \gamma_\omega^{-}V_\omega^* V_\omega\right) \rho = 0. \end{eqnarray*} Thus $\rho$ is an invariant state for the QMS generated by ${\mathcal{L}}_\omega$. This semigroup also satisfies the SQDB-$\Theta$ condition because, from $\left(\gamma_\omega^{+}\right)^{1/2} \rho^{1/2}V_\omega = \left(\gamma_\omega^{-}\right)^{1/2} V_\omega \rho^{1/2}$, condition 2 of Theorem \ref{th:SQDB-TR} follows immediately. 
\hfill $\square$ \medskip \noindent{\bf Remark.} If we drop the assumptions on the matrices $V_\omega$ and $H_\omega$, a similar result holds considering the quantum detailed balance condition without reversing operation $\Theta$. In this case, however, the forward $\overrightarrow{\Omega}_t$ and backward $\overleftarrow{\Omega}_t$ states used to define the entropy production, defined in the same way without transpositions, must be thought of as states on the tensor product of the \emph{opposite} algebra $\mathcal{B}({\mathsf{h}})^{\rm o}$ with $\mathcal{B}({\mathsf{h}})$ (see Remark 2 in \cite{FFRR-ep}). \section*{Acknowledgements} Financial support from FONDECYT 1120063, the ``Stochastic Analysis Network'' CONICYT-PIA grant ACT 1112 and the MIUR-PRIN project 2010MXMAJR ``Evolution differential problems: deterministic and stochastic approaches and their interactions'' is gratefully acknowledged.
\section{Introduction} Heavy-quarkonium production is typically a multi-scale process, which involves both short- and long-distance facets of the strong interaction. This particularity makes heavy-quarkonium production an ideal probe to study Quantum Chromodynamics (QCD) in its perturbative and non-perturbative regimes simultaneously. Studies have extensively been performed at collider and fixed-target energies in proton-proton, proton-nucleus and nucleus-nucleus collisions (see e.g. the reviews in Refs.~\cite{Brambilla:2010cs,Lansberg:2006dh,Andronic:2015wma}). The associated production of heavy quarkonium is a very interesting process not only because it provides a way to pin down the heavy-quarkonium production mechanism but also because it can help to understand a new dynamics of hadron collisions appearing at high energies, where multiple scatterings of partons (MPS) happen simultaneously; the most likely of these is of course that of two short-distance interactions in a single hadron-hadron collision -- double-parton scattering (DPS). A number of experimental studies relevant for DPS analyses with heavy quarkonia have recently been carried out, such as $J/\psi+W$~\cite{Aad:2014rua}, $J/\psi+Z$~\cite{Aad:2014kba}, $J/\psi+$charm~\cite{Aaij:2012dz} and $J/\psi+J/\psi$~\cite{Abazov:2014qba} production. In particular, the latter process, {\it i.e.}\ double-quarkonium production, is of specific interest. It provides an original tool to study quarkonium production from conventional single-parton scatterings (SPSs), whose contribution has theoretically been studied in many works~\cite{Kartvelishvili:1984ur,Humpert:1983yj,Vogt:1995tf,Li:2009ug,Qiao:2009kg,Ko:2010xy,Berezhnoy:2011xy,Li:2013csa,Lansberg:2013qka,Sun:2014gca,Lansberg:2014swa,Likhoded:2015zna}. 
Moreover, it has been claimed in Refs.~\cite{Kom:2011bd,Baranov:2011ch,Berezhnoy:2012xq,Baranov:2012re,d'Enterria:2013ck,d'Enterria:2014dva,Lansberg:2014swa,Likhoded:2015zna} that DPS contributions should be a significant source of $J/\psi+J/\psi$ events, especially at high energies where there is a high gluon flux. On the experimental side, the spin-triplet $S$-waves ({\it e.g.}\ $J/\psi$, $\psi'$, $\Upsilon(nS)$) provide clean signatures with their small background when they are studied in their decay into muon pairs. They are easy to trigger on, in contrast to hadronic jets and open-charm meson production, which require either good calorimetry or good particle identification. A first comprehensive comparison between experiments~\cite{Aaij:2011yc,Abazov:2014qba,Khachatryan:2014iia} and theory for $J/\psi$-pair production at the Tevatron and the LHC has been performed in Ref.~\cite{Lansberg:2014swa}, where we have pointed out that this observable could be used to probe different mechanisms in different kinematical regions. We noted that the direct DPS measurement by the D0 collaboration~\cite{Abazov:2014qba} --looking at the rapidity-difference spectrum-- is consistent with the $J/\psi$-pair measurement by the CMS collaboration~\cite{Khachatryan:2014iia} and, as we will discuss later on, compatible with rather large DPS rates. On the other hand, as we advocated in \cite{Lansberg:2013qka}, one cannot draw a definite conclusion on the presence of DPS in the early LHCb data~\cite{Aaij:2011yc} with their relatively low statistics. In this context, we find it important to study the potentialities offered by the use of the 7~TeV proton LHC beams in the fixed-target mode to study quarkonium-pair production. 
Its multi-TeV beams indeed allow one to study $p+p$, $p+d$ and $p+A$ collisions at a centre-of-mass energy $\sqrt{s_{NN}} \simeq 115$~GeV as well as ${\rm Pb}+p$ and ${\rm Pb}+A$ collisions at $\sqrt{s_{NN}} \simeq 72$~GeV, with the high precision typical of the fixed-target mode. It has indeed been advocated in~\cite{Brodsky:2012vg,Lansberg:2012kf} that such a facility, referred to as AFTER@LHC, would become a quarkonium, prompt photon and heavy-flavour observatory thanks to its large expected luminosity (for recent phenomenological studies, see~\cite{Liu:2012vn,Boer:2012bt,Chen:2014hqa,Kanazawa:2015fia,Mikkelsen:2015dva,Goncalves:2015hra,Lansberg:2015kha,Ceccopieri:2015rha,Anselmino:2015eoa,Lyonnet:2015dca}). A first feasibility study for quarkonium production was presented in~\cite{Massacrier:2015qba} and demonstrated that an LHCb-like detector would perform extremely well in the fixed-target mode. Similar performances are expected for quarkonium-pair production. Integrated luminosities as large as 20 fb$^{-1}$~\cite{Brodsky:2012vg} can be delivered during a one-year run of $p+{\rm H}$ collisions with a bent crystal to extract the beam~\cite{Uggerhoj:2005xz}. The LHC beam can also go through an internal-gas-target system\footnote{This is in fact already tested at low gas pressures by the LHCb collaboration in order to monitor the luminosity of the beam~\cite{Barschel:2014iua,FerroLuzzi:2005em,Aaij:2014ida}.}. Conservatively sticking to gas pressures already reachable now, yearly integrated luminosities reach 100 pb$^{-1}$. With a designed target cell similar to that of HERMES~\cite{Airapetian:2004yf}, a few fb$^{-1}$ yr$^{-1}$ are probably also easily reachable~\cite{Barschel:463141}. We have reported in \ct{tablumi} the instantaneous and yearly integrated luminosities expected with the proton beams on various target species of various thicknesses, for both options. \begin{table}[hbt!] 
\begin{center} {\begin{tabular}{|c c c c c c|} \hline Beam & Target & Thickness & $\rho$ & $\cal{L}$ & $\int{\cal{L}}$ \\ & & (cm) & (g.cm$^{-3}$) & ($\mu$b$^{-1}$.s$^{-1}$) & (pb$^{-1}$.y$^{-1})$ \\ \hline p & Liquid H & 100 & 0.068 & 2000 & 20000 \\ & & & & & \\ \hline \hline Beam & Target & Usable gas zone & Pressure & $\cal{L}$ & $\int{\cal{L}}$ \\ & & (cm) & (Bar) & ($\mu$b$^{-1}$.s$^{-1}$) & (pb$^{-1}$.y$^{-1})$ \\ \hline p & perfect gas & 100 & $10^{-9}$ & 10 & 100 \\ \hline \hline \end{tabular}} \caption{\label{tablumi}Expected luminosities obtained for a 7~TeV proton beam extracted by means of a bent crystal or obtained with an internal gas target with a pressure similar to that of SMOG@LHCb~\cite{FerroLuzzi:2005em}.} \end{center} \end{table} The structure of this paper is as follows. In section 2, we detail and justify our methodology to compute both DPS and SPS contributions to quarkonium-pair production. Section 3 contains a general discussion of the interest to look at DPS vs SPS contributions at different energies. Section 4 presents a comparison between results up to $\alpha_s^4$ and $\alpha_s^5$. This prepares the discussion of our results at $\sqrt{s}=115$ GeV relevant for AFTER@LHC in Section 5. Section 6 gathers our conclusions. \section{Methodology\label{sec:meth}} In this section, we explain the main ingredients used to compute the rates for double-quarkonium production at AFTER@LHC, which closely follows our previous work in Ref.~\cite{Lansberg:2014swa}. \subsection{Double-parton scatterings} The description of such a mechanism is usually done by assuming that DPSs can be factorised into two single-parton scatterings (SPSs), each resulting in the production of a quarkonium. This can be seen as a first rough approximation which can however be justified by the fact that possible unfactorisable corrections due to parton correlations could be small at small $x$. 
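The yearly integrated luminosities quoted in \ct{tablumi} can be reproduced with a one-line computation. The sketch below assumes an effective LHC year of $10^7$~s of beam on target, a convention consistent with the numbers in the table:

```python
# Cross-check of the yearly integrated luminosities of Table (tablumi),
# assuming an effective LHC year of 10^7 s of beam on target.
SECONDS_PER_LHC_YEAR = 1.0e7
INV_MUB_PER_INV_PB = 1.0e6   # 1 pb^-1 = 10^6 mub^-1

def yearly_integrated_lumi_pb(inst_lumi_mub_per_s):
    """Integrated luminosity in pb^-1/yr from an instantaneous one in mub^-1 s^-1."""
    return inst_lumi_mub_per_s * SECONDS_PER_LHC_YEAR / INV_MUB_PER_INV_PB

# liquid-H target, extracted beam: 2000 mub^-1 s^-1 -> 20000 pb^-1/yr
assert yearly_integrated_lumi_pb(2000.0) == 20000.0
# internal gas target: 10 mub^-1 s^-1 -> 100 pb^-1/yr
assert yearly_integrated_lumi_pb(10.0) == 100.0
```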
In the case of the double-quarkonium production, the master formula from which one starts under the factorisation assumption is~(see e.g. Ref.~\cite{d'Enterria:2013ck}) \begin{eqnarray} \sigma_{{\cal Q}_1{\cal Q}_2}&=&\frac{1}{1+\delta_{{\cal Q}_1{\cal Q}_2}}\sum_{i,j,k,l}{\int{dx_1dx_2dx_1^{\prime}dx_2^{\prime}}d^2{\bold b_1}d^2{\bold b_2}d^2{\bold b}} \nonumber\\ &&\times \, \Gamma_{ij}(x_1,x_2,{\bold b_1},{\bold b_2}) \, \hat{\sigma}^{{\cal Q}_1}_{ik}(x_1,x_1^{\prime})\, \hat{\sigma}^{{\cal Q}_2}_{jl}(x_2,x_2^{\prime})\, \Gamma_{kl}(x_1^{\prime},x_2^{\prime},{\bold b_1}-{\bold b},{\bold b_2}-{\bold b}) , \end{eqnarray} where $\Gamma_{ij}(x_1,x_2,{\bold b_1},{\bold b_2})$ is the generalised double-parton distribution with the longitudinal fractions $x_1$, $x_2$ and the transverse impact parameters ${\bold b_1}$ and ${\bold b_2}$, $\hat{\sigma}^{{\cal Q}_i}_{jk}(x_l,x_l^{\prime})$ are the usual partonic cross sections for single quarkonium production and $\delta_{{\cal Q}_1{\cal Q}_2}$ is the Kronecker delta. A further factorisation assumption is to decompose $\Gamma_{ij}(x_1,x_2,{\bold b_1},{\bold b_2})$ into a longitudinal part and a transverse part \begin{equation} \Gamma_{ij}(x_1,x_2,{\bold b_1},{\bold b_2})=D_{ij}(x_1,x_2)T_{ij}({\bold b_1},{\bold b_2}), \end{equation} where $D_{ij}(x_1,x_2)$ are the double-parton distribution functions (dPDFs)~\cite{Gaunt:2009re}. Moreover, by ignoring the correlations between partons produced from each hadron, one can further assume \begin{eqnarray} D_{ij}(x_1,x_2)=f_i(x_1)f_j(x_2),\nonumber\\ T_{ij}({\bold b_1},{\bold b_2})=T_i({\bold b_1})T_j({\bold b_2}), \end{eqnarray} where $f_i(x_1)$ and $f_j(x_2)$ are the normal single PDFs. This yields \begin{eqnarray} \sigma_{{\cal Q}_1{\cal Q}_2}=\frac{1}{1+\delta_{{\cal Q}_1{\cal Q}_2}}\sum_{i,j,k,l}{\sigma_{ik\to{\cal Q}_1}\sigma_{jl\to{\cal Q}_2}}\int{d^2{\bold b}}\!\! \int{\!T_i({\bold b_1})T_k({\bold b_1}-{\bold b})d^2{\bold b_1}}\! 
\int{\!T_j({\bold b_2})T_l({\bold b_2}-{\bold b})d^2{\bold b_2}}. \end{eqnarray} If one also ignores the parton flavour dependence in $T_{i,j,k,l}({\bold b})$ and defines the overlapping function \begin{equation} F({\bold b})=\int{T({\bold b_i})T({\bold b_i}-{\bold b})d^2{\bold b_i}}, \end{equation} one reaches the so-called ``pocket formula" \begin{equation} \sigma_{{\cal Q}_1{\cal Q}_2}=\frac{1}{1+\delta_{{\cal Q}_1{\cal Q}_2}}\frac{\sigma_{{\cal Q}_1}\sigma_{{\cal Q}_2}}{\sigma_{\rm eff}},\label{eq:dpseq} \end{equation} where $\sigma_{{\cal Q}_1}$ and $\sigma_{{\cal Q}_2}$ are the cross sections for respectively single ${\cal Q}_1$ and ${\cal Q}_2$ production and $\sigma_{\rm eff}$ is a parameter to characterise an effective spatial area of the parton-parton interactions via \begin{equation} \sigma_{\rm eff}=\left[\int{d^2{\bold b}F({\bold b})^2}\right]^{-1}. \end{equation} Under these assumptions, it is only related to the initial state and should be independent of the final state. However, the validation of its universality (process independence as well as energy independence) and the factorisation in Eq.~(\ref{eq:dpseq}) should be cross-checked case by case. In fact, some factorisation-breaking effects have recently been identified (see {\it e.g.}\ \cite{Blok:2013bpa,Kasemets:2012pr,Diehl:2014vaa}). Thanks to its large luminosity and its presumably wide rapidity coverage, AFTER@LHC provides a unique opportunity to probe DPS and to extract $\sigma_{\rm eff}$ from double-quarkonium final states. To perform our predictions, we will use $\sigma_{\rm eff}=5.0\pm2.75$ mb, which was determined from $J/\psi$-pair production data at the Tevatron by the D0 collaboration~\cite{Abazov:2014qba}.\footnote{Note that Ref.~\cite{Abazov:2014qba} has updated the value of $\sigma_{\rm eff}$ to be $4.8\pm2.55$ mb. 
However, since the difference is very small, we still use the original value.} The reason for such a choice is that all of the double-quarkonium-production processes share the same gluon-gluon initial states and the typical $x$ values are not that much different. This also means that we only need to assume the energy independence of $\sigma_{\rm eff}$. However, we do not claim that this value is the only one possible; we only take it as our reference number. If one wants to use another value of $\sigma_{\rm eff}$, one can simply perform a rescaling (proportional to $1/\sigma_{\rm eff}$) of the numbers given in the following. \begin{table}[!hbtp] \begin{center} \subfloat[Charmonia]{ \begin{tabular}{c|cccc}\footnotesize & $\kappa$ &$\lambda$ & \# of data & $\chi^2$ \\ \hline\hline $J/\psi$ & $0.67\pm0.08$ & $0.38$ & $51$ & $422$\\ $\psi(2S)$ & $0.15\pm0.03$ & $0.35$ & $4$ & $1.12$ \\ \\ \end{tabular}\label{cfit} } \\ \subfloat[Bottomonia]{ \begin{tabular}{c|cccc}\footnotesize & $\kappa$ &$\lambda$ & \# of data & $\chi^2$ \\ \hline\hline $\Upsilon(1S)$ & $0.89$ & $0.084\pm0.0061$ & $14$ & $29$\\ $\Upsilon(2S)$ & $0.79$ & $0.056$ & $9$ & $2.2$ \\ $\Upsilon(3S)$ & $0.68\pm 0.029$ & $0.046$ & $9$ & $3.9$ \\ \end{tabular}\label{bfit} } \caption{Results of a fit of $d^2\sigma/dP_Tdy$ to (a) the PHENIX $\psi(nS)$ data~\cite{Adare:2011vq} by fixing $n=2$ and $\langle P_T\rangle = 4.5$~GeV and (b) the CDF $\Upsilon(nS)$ data~\cite{Acosta:2001gv} by fixing $n=2$ and $\langle P_T\rangle = 13.5$ GeV. Only the errors larger than $1\%$ are given.} \end{center} \end{table} Since the description of single heavy-quarkonium production at hadron colliders in the whole kinematical region is still a challenge to theorists, using an {\it ab initio} theoretical computation of $\sigma_{{\cal Q}}$ would significantly inflate the theoretical uncertainties. Instead, we will work in a data-driven way to determine $\sigma_{{\cal Q}}$. \begin{figure}[hbt!] 
\begin{center} \subfloat[]{\includegraphics[width=0.49\textwidth,draft=false]{jpsifit1-crop.pdf}\label{fig:CompareDataJPsi1}}\ \subfloat[]{\includegraphics[width=0.49\textwidth,draft=false]{jpsifit2-crop.pdf}\label{fig:CompareDataJPsi2}}\\ \subfloat[]{\includegraphics[width=0.49\textwidth,draft=false]{psi2sfit-crop.pdf}\label{fig:CompareDataPsi2S}}\ \subfloat[]{\includegraphics[width=0.49\textwidth,draft=false]{Y1sfit-crop.pdf}\label{fig:CompareDataUpsi1}}\\ \subfloat[]{\includegraphics[width=0.49\textwidth,draft=false]{Y2sfit-crop.pdf}\label{fig:CompareDataUpsi2}}\ \subfloat[]{\includegraphics[width=0.49\textwidth,draft=false]{Y3sfit-crop.pdf}\label{fig:CompareDataUpsi3}} \caption{Comparisons with the PHENIX measurements~\cite{Adare:2011vq} for $J/\psi$ (a,b) and $\psi(2S)$ (c) production and with the CDF measurements~\cite{Acosta:2001gv} for $\Upsilon(1S)$ (d), $\Upsilon(2S)$ (e) and $\Upsilon(3S)$ (f) production. } \label{fig:CompareData} \end{center}\vspace*{-1cm} \end{figure} Our procedure is as follows. We start from the cross section $\sigma_{{\cal Q}_i}$ which can be written as \begin{eqnarray} \sigma(pp\to{\cal Q}+X)&=&\sum_{a,b}\int{dx_1dx_2f_a(x_1)f_b(x_2)} \frac{1}{2\hat{s}}\overline{|\mathcal{A}_{ab\to{\cal Q}+X}|^2}d{\rm LIPS}_{{\cal Q}+X}, \end{eqnarray} where $f_a,f_b$ are the parton distribution functions (PDF) of the initial partons $a$ and $b$, $d{\rm LIPS}_{{\cal Q}+X}$ is the Lorentz-invariant phase-space measure for $pp\to {\cal Q}+X$ and $\sqrt{\hat{s}}$ is the partonic centre-of-mass energy (i.e. $\hat{s}=x_1x_2s$). For single quarkonium production in $p+p$ collisions at $\sqrt{s}=115$ GeV, the gluon-gluon initial state is dominant. 
The initial-state colour- and helicity-averaged squared amplitude for $gg\to {\cal Q}+X$ can be expressed in the form of a Crystal Ball function~\cite{Kom:2011bd} \begin{eqnarray} &&\overline{|\mathcal{A}_{gg\to{\cal Q}+X}|^2}= \left\{ \begin{array}{ll} K\exp(-\kappa\frac{P_T^2}{M_{{\cal Q}}^2}) & \mbox{when $P_T\leq \langle P_T\rangle$} \\ K\exp(-\kappa\frac{\langle P_T \rangle^2}{M_{{\cal Q}}^2})\left(1+\frac{\kappa}{n}\frac{P_T^2-\langle P_T \rangle^2}{M_{{\cal Q}}^2}\right)^{-n} & \mbox{when $P_T> \langle P_T\rangle$} \\ \end{array} \right.\label{eq:crystalball} \end{eqnarray} where $K=\lambda^2\kappa\hat{s}/M_{{\cal Q}}^2$. The parameters $\kappa$, $\lambda$, $n$ and $\langle P_T \rangle$ can be determined by fitting the (differential) cross sections to the experimental data. The dedicated codes to perform the fit and to compute the DPS contributions to double-quarkonium production have been implemented in {\sc\small HELAC-Onia}~\cite{Shao:2012iz,Shao:2015vga}. Once a fit is done, $|\mathcal{A}_{gg\to{\cal Q}+X}|^2$ is fixed and it allows us to evaluate $\sigma(pp\to{\cal Q}+X)$ (or its differential counterparts in any variable), which can then be injected into the ``pocket formula'' \ce{eq:dpseq} in order to predict the DPS yield. Since we do not apply any muon cuts, we do not need to make any assumptions regarding the polarisation of the produced quarkonia. The code was tested and, with the same parameters as in Ref.~\cite{Kom:2011bd}, we have reproduced their results. However, their combined fit of the charmonium data taken at the Tevatron and the LHC cannot reproduce well the low-energy data measured by the PHENIX collaboration~\cite{Adare:2011vq} at RHIC. Since the collision energy of RHIC, $\sqrt{s}=200$ GeV, is very close to the centre-of-mass energy of the fixed-target experiment at the LHC (AFTER@LHC), {\it i.e.}\ $\sqrt{s}=115$ GeV, we prefer to use the PHENIX data alone to determine the parameters in~\ce{eq:crystalball}. 
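For concreteness, the parametrisation of \ce{eq:crystalball} can be sketched as follows; the $\kappa$ and $\lambda$ values are the $J/\psi$ ones of \ct{cfit}, the mass is the PDG $J/\psi$ mass, while the partonic energy squared $\hat s$ is an arbitrary illustrative number:

```python
import math

def amp_sq(pT, m_Q, s_hat, kappa, lam, n=2.0, pT_avg=4.5):
    """Crystal Ball parametrisation of <|A(gg -> Q + X)|^2>, cf. eq. (crystalball)."""
    K = lam**2 * kappa * s_hat / m_Q**2
    if pT <= pT_avg:
        # Gaussian core below <pT>
        return K * math.exp(-kappa * pT**2 / m_Q**2)
    # power-law tail, matched to the Gaussian core at pT = <pT>
    core = math.exp(-kappa * pT_avg**2 / m_Q**2)
    tail = (1.0 + (kappa / n) * (pT**2 - pT_avg**2) / m_Q**2) ** (-n)
    return K * core * tail

# J/psi-like parameters: kappa, lambda from the fit table; s_hat illustrative
m_jpsi, kappa, lam = 3.097, 0.67, 0.38
s_hat = 100.0**2  # (GeV)^2, illustrative

# the two branches agree at pT = <pT>: the parametrisation is continuous there
left = amp_sq(4.5, m_jpsi, s_hat, kappa, lam)
right = amp_sq(4.5 + 1e-9, m_jpsi, s_hat, kappa, lam)
assert abs(left - right) / left < 1e-6
```

The matching of the two branches at $P_T=\langle P_T\rangle$ (and of their derivatives in $P_T^2$) is the defining feature of the Crystal Ball form.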
A fit of $d^2\sigma/dP_Tdy$ to the PHENIX data~\cite{Adare:2011vq} for $J/\psi$ and $\psi(2S)$ production gives the $\chi^2$ results presented in \ct{cfit}, having fixed $n=2$ and $\langle P_T\rangle = 4.5$ GeV. We also show the comparisons of the $P_T$ spectra in \cf{fig:CompareData}a-c. The large $\chi^2$ for single $J/\psi$ production can be reduced to $55.8$ when one only considers the $23$ PHENIX data points in the central region (i.e. $|y_{J/\psi}|<0.35$) and excludes the lowest-$P_T$ bin. A fit to the sole PHENIX data in the forward region $1.2<|y_{J/\psi}|<2.4$ changes $\kappa$ by $\sim 15\%$ and $\lambda$ by $\sim 5\%$. However, the main uncertainty in predicting DPS contributions to double-$\psi$ production remains that of $\sigma_{\rm eff}$; those from these fits are in practice nearly irrelevant for our predictions. This is obvious for $\lambda$, which only affects the normalisation. In contrast, there is no differential measurement of $\Upsilon$ yields at RHIC. There exist data from the fixed-target Fermilab experiment E866~\cite{Zhu:2007aa} but only at low $P_T$. We therefore performed a fit of $d^2\sigma/dP_Tdy$ to the CDF Run I data~\cite{Acosta:2001gv} at $\sqrt{s}=1.8$ TeV. The results for $\Upsilon$ are presented in \ct{bfit}, having fixed $n=2$ and $\langle P_T\rangle = 13.5$ GeV. For illustration, the comparisons between the fit and the CDF data~\cite{Acosta:2001gv} are shown in \cf{fig:CompareData}d-f. Some comments about the fit are however in order. If we instead performed a combined fit to the CDF~\cite{Acosta:2001gv}, ATLAS~\cite{Aad:2012dlq}, CMS~\cite{Chatrchyan:2013yna} and LHCb~\cite{LHCb:2012aa,Aaij:2013yaa} data, the value of $\kappa$ ($\lambda$) would be shifted by at most $30\%$ ($10\%$), but with a significantly worse $\chi^2$. 
All this may however not be so relevant since, as for the charmonia, the fit to TeV data tends to underestimate the RHIC $P_T$-integrated $\Upsilon$ production cross section as measured by STAR~\cite{Adamczyk:2013poh} by a factor a bit smaller than 2 -- the STAR result has however a 30\% uncertainty. The uncertainties on $\kappa$ and $\lambda$ given by the $\chi^2$ fit are therefore far too optimistic since the Crystal Ball parametrisation seems not to correctly capture the energy dependence of the cross section. The corresponding DPS yields of $\Upsilon$ at AFTER@LHC which we give here should therefore be considered as conservative {\it lower} estimates. All of the above fits are performed with the MSTW2008NLO PDF set~\cite{Martin:2009iq} available in LHAPDF5~\cite{Whalley:2005nh} and the factorisation scale $\mu_F=\sqrt{M_{{\cal Q}}^2+P_T^2}$. The physical quarkonium masses $M_{{\cal Q}}$, as well as the branching ratios, are taken from the PDG~\cite{Agashe:2014kda}. \subsection{Single-parton scatterings} \subsubsection{Double-charmonium and double-bottomonium production} The SPS contribution to $J/\psi$-pair production has systematically been investigated in our previous works~\cite{Lansberg:2013qka,Lansberg:2014swa}. We have shown that a leading-order (LO) calculation in the strong coupling constant, $\alpha_s$, is enough to account for the low-$P_T$ data as well as the $P_T$-integrated cross section, the bulk of the events lying at low $P_T$. However, if one goes to mid $P_T$ ({\it e.g.}\ $P_T>5$ GeV), $\mathcal{O}(\alpha_s^5)$ contributions start to be large. As a consequence, the yield and the polarisation change significantly compared to a LO calculation. Since we are only interested in the data which are measurable with up to 20 fb$^{-1}$ in order to assess the feasibility of measuring quarkonium-pair production with AFTER@LHC, we will focus on the low-$P_T$ region. As we will explicitly show, LO evaluations happen to be sufficient. 
Besides, the colour-octet contributions are also negligible at low $P_T$, for they are suppressed by powers of $v$ without any kinematical enhancement, at variance with the single-quarkonium-production case. \begin{table}[!hbtp] \begin{center} \subfloat[Decay within a family]{ \begin{tabular}{{c}*{1}{c}}\hline\hline decay channel & branching ratio ($\%$)\\\hline $\psi(2S)\to J/\psi+X$ & $57.4$\\ $\Upsilon(2S)\to \Upsilon(1S)+X$ & $30.2$\\ $\Upsilon(3S)\to \Upsilon(1S)+X$ & $8.92$\\ $\Upsilon(3S)\to \Upsilon(2S)+X$ & $10.6$\\ \hline\hline \end{tabular}} \quad \subfloat[Leptonic decays]{ \begin{tabular}{{c}*{1}{c}}\hline\hline decay channel & branching ratio ($\%$)\\\hline $J/\psi\to \mu^+\mu^-$ & $5.93$\\ $\psi(2S)\to\mu^+\mu^-$ & $0.75$\\ $\Upsilon(1S)\to \mu^+\mu^-$ & $2.48$ \\ $\Upsilon(2S)\to \mu^+\mu^-$ & $1.93$ \\ $\Upsilon(3S)\to \mu^+\mu^-$ & $2.18$ \\ \hline\hline \end{tabular}} \end{center} \caption{Various decays (and branching ratios) considered in this article~\cite{Agashe:2014kda}.} \label{tab:br} \end{table} On the contrary, the feed-down contributions from higher excited spin-triplet $S$-wave quarkonia have to be considered. They are substantial, as already shown for $J/\psi$-pair production in Ref.~\cite{Lansberg:2014swa}. These will systematically be taken into account in our predictions, as done in Ref.~\cite{Lansberg:2014swa}. The branching ratios used in this context are taken from the PDG~\cite{Agashe:2014kda} and are listed in \ct{tab:br} for completeness. 
The general formula for the amplitude for the production of a pair of colour-singlet (CS) $S$-wave quarkonia ${\cal Q}_1$ and ${\cal Q}_2$ from initial partons $a$ and $b$ is \begin{eqnarray} &&\mathcal{A}_{ab\to {\cal Q}_1^{\lambda_1}(P_1)+{\cal Q}_2^{\lambda_2}(P_2)+X}=\\ &&\sum_{s_1,s_2,c_1,c_2}{\sum_{s_3,s_4,c_3,c_4}{\frac{N(\lambda_1|s_1,s_2)N(\lambda_2|s_3,s_4)}{\sqrt{M_{{\cal Q}_1}M_{{\cal Q}_2}}}\frac{\delta_{c_1c_2}\delta_{c_3c_4}}{N_c}\frac{R_1(0)R_2(0)}{4\pi}}}\mathcal{A}_{ab\to Q_{c_1}^{s_1}\bar{Q}_{c_2}^{s_2}({\bold p_1}={\bold 0})+Q_{c_3}^{s_3}\bar{Q}_{c_4}^{s_4}({\bold p_2}={\bold 0})+X},\nonumber\label{eq:form} \end{eqnarray} where we denote the momenta of the quarkonia ${\cal Q}_1$ and ${\cal Q}_2$ as $P_1$ and $P_2$, respectively, and their polarisations as $\lambda_{1,2}$; $N(\lambda_{1,2}|s_{1,3},s_{2,4})$ are the two spin projectors and $R_{1,2}(0)$ are the radial wave functions at the origin in configuration space for both quarkonia. In the above equation, we have defined the heavy-quark momenta $q_{1,2,3,4}$ such that $P_{1,2}=q_{1,3}+q_{2,4}$ and $p_{1,2}=(q_{1,3}-q_{2,4})/2$; $s_{1,2,3,4}$ are the heavy-quark spin components and $\delta_{c_ic_j}/\sqrt{N_c}$ is the colour projector. The spin-triplet projector $N(\lambda|s_i,s_j)$ has, in the non-relativistic limit $v \to 0$, the following expression \begin{equation} N(\lambda|s_i,s_j)=\frac{\varepsilon^{\lambda}_{\mu}}{2\sqrt{2}M_{{\cal Q}}}\bar{v}(\frac{{\bold P}}{2},s_j)\gamma^{\mu}u(\frac{{\bold P}}{2},s_i). \end{equation} All these computations can be performed automatically in the {\sc\small HELAC-Onia}~\cite{Shao:2012iz} framework based on recursion relations. The radial wave functions at the origin $R(0)$ are taken from Ref.~\cite{Eichten:1995ch}, where they were derived in the QCD-motivated Buchm\"uller-Tye potential~\cite{Buchmuller:1980su}. Their values are also listed in \ct{tab:R02}. 
\begin{table*}[!hbtp] \begin{center} \begin{tabular}{{c}*{1}{c}}\hline\hline Quarkonium & $|R(0)|^2$ (GeV$^3$)\\\hline $J/\psi$ & $0.81$ \\ $\psi(2S)$ & $0.529$ \\ $\Upsilon(1S)$ & $6.477$\\ $\Upsilon(2S)$ & $3.234$\\ $\Upsilon(3S)$ & $2.474$\\ \hline\hline \end{tabular} \end{center} \caption{The radial wave functions at the origin squared $|R(0)|^2$~\cite{Eichten:1995ch} of the $S$-wave quarkonia used in this article.} \label{tab:R02} \end{table*} \subsubsection{Charmonium-bottomonium pair production} The simultaneous production of a charmonium and a bottomonium has been studied in Refs.~\cite{Ko:2010xy,Likhoded:2015zna}. Its colour-singlet contributions are expected to be suppressed because the direct LO contributions in the CS mechanism (CSM) are $\mathcal{O}(\alpha_s^6)$, i.e. $\alpha_s^2$-suppressed compared to double-charmonium and double-bottomonium production. Hence, it is expected to be a golden channel to probe the colour-octet mechanism (COM) at the LHC~\cite{Ko:2010xy}. However, such a statement is valid only if one can clearly separate DPS and SPS events experimentally, since the DPS contributions would be substantial. For a thorough discussion, the reader is referred to~\cite{Likhoded:2015zna}. In contrast, colour-octet (CO) contributions can appear at $\mathcal{O}(\alpha_s^4)$, which are however suppressed by the small size of the CO long-distance matrix elements (LDMEs). If one follows the arguments of Ref.~\cite{Ko:2010xy}, one is entitled to consider only the $c\bar{c}({\bigl.^3\hspace{-1mm}S^{[8]}_1})+b\bar{b}({\bigl.^3\hspace{-1mm}S^{[8]}_1})$, $c\bar{c}({\bigl.^3\hspace{-1mm}S^{[1]}_1})+b\bar{b}({\bigl.^3\hspace{-1mm}S^{[8]}_1})$ and $c\bar{c}({\bigl.^3\hspace{-1mm}S^{[8]}_1})+b\bar{b}({\bigl.^3\hspace{-1mm}S^{[1]}_1})$ channels. This approximation is however based on the validity of the velocity-scaling rules of the LDMEs, which may not be reliable. 
A complete computation --even at LHC energies-- accounting for all the possible channels up to $v^7$ in NRQCD is still lacking in the literature: there are indeed more than 50 channels at LO in $\alpha_s$ contributing to $\psi+\Upsilon$ production. Thanks to the automation of {\sc\small HELAC-Onia}~\cite{Shao:2012iz,Shao:2015vga}, such a complete computation is within reach. The formula for the $S$-wave CO amplitude is similar to that for CS state production with the following formal replacements for CO in Eq.~(\ref{eq:form}) \begin{equation} \frac{\delta_{c_i,c_j}}{\sqrt{N_c}}\to \sqrt{2}T^a_{c_ic_j}, \frac{R_i(0)}{\sqrt{4\pi}}\to \frac{\sqrt{\langle\mathcal{O}^i({\bigl.^{2s+1}\hspace{-1mm}S^{[8]}_J}) \rangle}}{\sqrt{(2J+1)(N_c^2-1)}}, \end{equation} where $T^a_{c_ic_j}$ is a Gell-Mann matrix and $\langle\mathcal{O}^i({\bigl.^3\hspace{-1mm}S^{[8]}_1}) \rangle$ is the CO LDME. We refer the reader to Ref.~\cite{Shao:2012iz} for the $P$-wave amplitudes. The non-perturbative CO LDMEs should be determined from experimental data. Their values unfortunately depend strongly on the fit procedure. We took four sets of LDMEs from the literature (see the details in \ref{appA2}). Finally, we describe the parameters of our SPS calculations. In the non-relativistic limit, the mass of a heavy quarkonium can be expressed as the sum of the masses of its constituent heavy quarks. In our case, we have \begin{equation} M_{{\cal Q}}=2m_Q, \end{equation} where $m_Q=m_c$ for charmonium and $m_Q=m_b$ for bottomonium. The charm- and bottom-quark masses are taken as $m_c=1.5\pm0.1$ GeV and $m_b=4.75\pm 0.25$ GeV. The factorisation scale $\mu_F$ and the renormalisation scale $\mu_R$ are taken as $\mu_F=\mu_R \in [\frac{1}{2}\mu_0,2\mu_0]$ with $\mu_0=\sqrt{(M_{{\cal Q}_1}+M_{{\cal Q}_2})^2+P_T^2}$. The advantage of this choice is that we recover the correct mass threshold $M_{{\cal Q}_1}+M_{{\cal Q}_2}$ in the low-$P_T$ regime. 
Finally, the PDF set for the SPS calculation is CTEQ6L1~\cite{Pumplin:2002vw} with the one-loop renormalisation-group running of $\alpha_s$. \section{Energy dependence of the ratio DPS over SPS} Due to the very large integrated luminosity of AFTER@LHC (up to 20 fb$^{-1}$ per year) compared to the experiments performed at RHIC, the measurement of double-quarkonium production at AFTER@LHC will provide a unique test of the interplay between the DPS and SPS production mechanisms in a new energy range. The energy dependence of $\sigma_{\rm eff}$ will be explored over a wide energy range when combined with the LHC collider and Tevatron data\footnote{Since we noted that the energy dependence obtained with the partonic amplitude ($gg \to {\cal Q} X$) given by a Crystal Ball fit with fixed parameters is not optimal when extrapolating from TeV energies down to RHIC energies, we have used the fit parameters of~\cite{Kom:2011bd} (based on a fit of Tevatron and LHC data) to predict the DPS yield in the TeV range, and our fit to the PHENIX data for the RHIC and fixed-target-experiment energy range.}. Due to the double enhancement of the initial gluon-gluon luminosity with the energy, $\sqrt{s}$, DPS contributions are expected to become increasingly important with respect to the SPS ones at larger $\sqrt{s}$. This can be observed in \cf{fig:energydep}. One however sees in \cf{fig:energydep} that a change of $\sigma_{\rm eff}$ from 15 mb --which seems to be the favoured value for jet-related observables-- to 5 mb --which is the value extracted by D0 from the $J/\psi+J/\psi$ data~\cite{Abazov:2014qba}-- results in a significant change in the point where both contributions are equal. In the former case, it occurs very close to the energy of AFTER@LHC; in the latter case, it occurs between the Tevatron and the LHC energies. All this clearly motivates measurements and $\sigma_{\rm eff}$ extractions at low energies. \begin{figure*}[t!] 
\begin{center} \includegraphics[width=0.7\textwidth,draft=false]{pp2psipsi_energy-crop.pdf} \caption{(Upper panel) The cross sections of (prompt-)$J/\psi$ pair production via SPS and DPS mechanisms for two values of $\sigma_{\rm eff}$ as a function of $\sqrt{s}$. (Lower panel) DPS over SPS yield ratio for $5 < \sigma_{\rm eff} < 15$~mb. The black circles correspond to 10 mb. [Aside from the choice of $\sigma_{\rm eff}$, no theoretical uncertainties are included]. } \label{fig:energydep} \end{center}\vspace*{0cm} \end{figure*} \section{Impact of the QCD corrections at low transverse momenta} Before showing our results, and in order to motivate the use of LO predictions for this exploratory study, we have found it useful to give an explicit comparison between the differential cross sections at LO and NLO$^\star$ for double-$J/\psi$ production in the kinematical domain accessible with 20 fb$^{-1}$, that is up to transverse momenta of the order of 10 GeV at the very most. Indeed, in a previous study~\cite{Lansberg:2013qka}, we have shown that the impact of the real-emission corrections, such as $gg\to J/\psi + J/\psi + g$, becomes increasingly important at large transverse momenta. \begin{figure}[t!] \begin{center} \subfloat[Absolute rapidity difference between both $J/\psi$]{\includegraphics[width=0.49\textwidth,draft=false]{dy_jpsijpsi_LOvsNLO.pdf}\label{fig:dsiglovsnlob}} \subfloat[Pair invariant mass]{\includegraphics[width=0.49\textwidth,draft=false]{dM_jpsijpsi_LOvsNLO.pdf}\label{fig:dsiglovsnloc}}\\ \subfloat[Leading $P_T$ among the $J/\psi$ pair]{\includegraphics[width=0.49\textwidth,draft=false]{dPtmax_jpsijpsi_LHCb_LOvsNLO.pdf}\label{fig:dsiglovsnlod}} \subfloat[Pair transverse momentum]{\includegraphics[width=0.49\textwidth,draft=false]{dPt_jpsijpsi_LOvsNLO.pdf}\label{fig:dsiglovsnloa}} \caption{LO vs. NLO$^{\star}$ differential distributions. 
} \label{fig:dsiglovsnlo} \end{center}\vspace*{-1cm} \end{figure} Fig.~\ref{fig:dsiglovsnlo} shows the comparison between LO results and NLO$^\star$ results (which are known to reproduce well the full NLO~\cite{Sun:2014gca}). The invariant-mass and rapidity-difference spectra are not affected by the real emission at $\alpha_s^5$. Indeed, in the low-$P_T$ region, the Born topologies are dominant, and there is no kinematical enhancement in the real-emission topologies which could compensate the $\alpha_s$ suppression. Only at large transverse momenta are these enhanced, to the point of becoming dominant. This explains the difference in the slope as a function of the leading $P_T$ in \cf{fig:dsiglovsnlod}. The results are however similar for $P_T<10$ GeV, where the cross sections are larger than 0.1 fb. In addition, as we already discussed in Ref.~\cite{Lansberg:2014swa}, at LO, a $2\to 2$ kinematics for SPS would result in a transverse momentum of the $J/\psi$ pair, $P_T^{\psi\psi}$, being zero and in a trivial LO distribution in \cf{fig:dsiglovsnloa}. This is however not the case if one takes into account a possible intrinsic $k_T$ of the initial partons, which can also be considered as part of the QCD radiative corrections -- initial-state radiation to be precise. Such a smearing can be phenomenologically accounted for and compared to a pQCD result. To do so, we have smeared the kinematics of the LO events using a Gaussian distribution with $\langle k_T \rangle=1$ and $2$ GeV, as done in Refs.~\cite{Lansberg:2013qka,Lansberg:2014swa}. We stress that the value of $\langle k_T \rangle$ is essentially empirical, hence the choice of two values for illustration (curves labelled sm1 and sm2, respectively). This can thus be compared with our NLO$^\star$ curves in the domain accessible with ${\cal O}(20)$~fb$^{-1}$ at AFTER@LHC, that is $P_T^{\psi\psi}<10$ GeV. 
One sees that the smearing with $\langle k_T \rangle=2$ GeV mimics relatively well the effect of the QCD corrections; we will use this value in the following for the comparison with the DPS yield. Overall, the $P_T^{\psi\psi}$ distribution is obviously very different from a single peak at 0. \section{Predictions at AFTER@LHC} We are now in a position to present our numerical results at $\sqrt{s}=115$ GeV in $p+p$ collisions. The total cross sections we obtained are given in \ct{xsectionscc}, \ref{xsectionscb} and \ref{xsectionsbb}. The results have been multiplied by the branching ratios into a muon pair and are all in units of fb. In general, we have \begin{equation} \sigma^{\Upsilon\Upsilon\to 4\mu}\ll \sigma^{\psi\Upsilon \to 4\mu}\ll \sigma^{\psi\psi\to 4\mu}. \end{equation} The DPS contributions decrease quickly when the mass threshold $M_{{\cal Q}_1}+M_{{\cal Q}_2}$ increases because of their quadratic dependence on the initial-state parton luminosity. With the nominal integrated luminosity of $20$ fb$^{-1}$ proposed to be collected at AFTER@LHC, we find that the measurement of double-bottomonium production is out of reach\footnote{We note that such a measurement has never been done anywhere else.}, while one may be able to record a few $J/\psi+\Upsilon(1S)$ events, a process which receives substantial DPS contributions. \begin{table}[!hbtp] \begin{center} \begingroup \renewcommand{\arraystretch}{1.3} \begin{tabular}{c|ccc}\footnotesize &$J/\psi+J/\psi$ &$J/\psi+\psi(2S)$ & $\psi(2S)+\psi(2S)$ \\ \hline\hline $\sigma_{\rm DPS}$ & $590^{+730}_{-210}$ & $19^{+23}_{-6.7}$ & $0.15^{+0.18}_{-0.052}$\\ $\sigma^{\rm CSM}_{\rm SPS}$ & $700^{+3600}_{-560}$ & $85^{+440}_{-68}$ & $2.5^{+13}_{-2.0}$ \\ \end{tabular} \caption{$\sigma(pp\to {\cal Q}_1+{\cal Q}_2+X) \times {\cal B}({\cal Q}_1\to\mu^+\mu^-)\, {\cal B}({\cal Q}_2\to\mu^+\mu^-)$ in units of fb at $\sqrt{s}=115$ GeV, where ${\cal Q}_1,{\cal Q}_2=J/\psi,\psi(2S)$. 
The DPS uncertainties are from $\sigma_{\rm eff}$ and the SPS ones from $m_Q$ and the scales.} \label{xsectionscc} \endgroup \end{center} \end{table} \begin{table}[!hbtp] \begin{center} \begingroup \renewcommand{\arraystretch}{1.3} \begin{tabular}{c|ccc}\footnotesize &$J/\psi+\Upsilon(1S)$ &$J/\psi+\Upsilon(2S)$ & $J/\psi+\Upsilon(3S)$ \\ \hline $\sigma_{\rm DPS}$ & $0.17^{+0.21}_{-0.058}$ & $0.037^{+0.045}_{-0.013}$ & $0.018^{+0.023}_{-0.0063}$ \\ $\sigma^{\rm NRQCD}_{\rm SPS}$ & $<0.69$ & $<0.14$ & $<0.11$ \\ \hline\hline & $\psi(2S)+\Upsilon(1S)$ & $\psi(2S)+\Upsilon(2S)$ & $\psi(2S)+\Upsilon(3S)$\\ \hline $\sigma_{\rm DPS}$ &$2.6\cdot 10^{-3}~^{+3.2\cdot 10^{-3}}_{-9.1\cdot 10^{-4}}$ & $5.7\cdot 10^{-4}~^{+6.9\cdot 10^{-4}}_{-2.0\cdot 10^{-4}}$ & $2.8\cdot 10^{-4}~^{+3.4\cdot 10^{-4}}_{-9.8\cdot 10^{-5}}$ \\ $\sigma^{\rm NRQCD}_{\rm SPS}$ & $<0.031$ & $<5.4\cdot 10^{-3}$ & $<3.0\cdot 10^{-3}$\\ \end{tabular} \caption{$\sigma(pp\to {\cal Q}_1+{\cal Q}_2+X) \times {\cal B}({\cal Q}_1\to\mu^+\mu^-){\cal B}({\cal Q}_2\to\mu^+\mu^-)$ in units of fb with $\sqrt{s}=115$ GeV, where ${\cal Q}_1=J/\psi,\psi(2S)$ and ${\cal Q}_2=\Upsilon(1S),\Upsilon(2S),\Upsilon(3S)$. For SPS production, only the upper limits of the yields are given (see text). The DPS uncertainties are from $\sigma_{\rm eff}$.} \label{xsectionscb} \endgroup \end{center} \end{table} One should however always keep in mind that $\sigma_{\rm SPS}$ for $\psi+\Upsilon$ production strongly depends on the CO LDMEs. We have investigated this dependence in \ref{appA2} with four different sets of LDMEs and the results vary by up to one order of magnitude, which precludes any strong conclusion\footnote{For convenience and possible future studies, we have tabulated in \ref{appA1} the values of all the relevant short-distance coefficients, which can then be combined with any LDME set.}. 
In addition, these LDMEs are usually fit to experimental data in the high-transverse-momentum region and are known to overestimate the single-quarkonium yields at low $P_T$ (see~\cite{Feng:2015cba} and references therein). This is probably also the case for quarkonium-pair production, especially when the pairs come from single-gluon splittings. We have therefore found it meaningful to show only upper limits on $\sigma_{\rm SPS}$ for $\psi+\Upsilon$ production in Table~\ref{xsectionscb}. These numbers are in any case at the limit of observability. The quoted theoretical uncertainties in the tables result from the variation of $\sigma_{\rm eff}$ within $5\pm 2.75$ mb for the DPS yields and from the scale as well as heavy-quark-mass uncertainties for the SPS yields, as discussed in Sec.~\ref{sec:meth}. As regards double-charmonium production, about 10 thousand events could be collected per year --which is more than what has so far been collected by LHCb and CMS. In the analysis of the differential distributions, we therefore only focus on these and, in particular, on $J/\psi$-pair production. We show three interesting distributions without kinematical cuts. Along the lines of~\cite{Massacrier:2015qba}, we also used the LHCb kinematical acceptance, {\it i.e.}\ the rapidity of the $J/\psi$ restricted to the interval $[2,5]$. 
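For orientation, the DPS yields quoted in the tables follow from the standard DPS ``pocket formula'', $\sigma_{\rm DPS}=\sigma_1\sigma_2/[(1+\delta_{12})\,\sigma_{\rm eff}]$, not restated in this section. The sketch below only illustrates the unit bookkeeping and the $\sigma_{\rm eff}$ variation quoted above, with a placeholder single-$J/\psi$ cross section (not the fitted value used in the paper):

```python
def sigma_dps(sigma1_fb, sigma2_fb, sigma_eff_mb, identical=False):
    """DPS 'pocket formula': sigma_DPS = sigma1 * sigma2 / sigma_eff,
    with a symmetry factor 1/2 for identical final states.
    1 mb = 1e12 fb, so the result is in fb when the single cross
    sections are in fb and sigma_eff is in mb."""
    sym = 0.5 if identical else 1.0
    return sym * sigma1_fb * sigma2_fb / (sigma_eff_mb * 1e12)

# Placeholder single-J/psi cross section (assumption, for illustration):
sig_psi = 1.2e9  # fb
central = sigma_dps(sig_psi, sig_psi, 5.0, identical=True)
lower = sigma_dps(sig_psi, sig_psi, 5.0 + 2.75, identical=True)
upper = sigma_dps(sig_psi, sig_psi, 5.0 - 2.75, identical=True)
```

Since $\sigma_{\rm DPS}\propto 1/\sigma_{\rm eff}$, the smaller $\sigma_{\rm eff}$ value of the quoted range gives the upper DPS estimate, which is why the uncertainties in the tables are asymmetric.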
\begin{table}[!hbtp] \begin{center} \begingroup \renewcommand{\arraystretch}{1.3} \begin{tabular}{c|ccc}\footnotesize &$\Upsilon(1S)+\Upsilon(1S)$ &$\Upsilon(2S)+\Upsilon(2S)$ & $\Upsilon(3S)+\Upsilon(3S)$ \\ \hline $\sigma_{\rm DPS}$ & $1.2\cdot 10^{-5}~^{+1.4\cdot 10^{-5}}_{-4.0\cdot 10^{-6}}$ & $5.6\cdot 10^{-7}~^{+6.8\cdot 10^{-7}}_{-1.9\cdot 10^{-7}}$ & $1.4\cdot 10^{-7}~^{+1.7\cdot 10^{-7}}_{-4.7\cdot 10^{-8}}$ \\ $\sigma^{\rm CSM}_{\rm SPS}$ & $2.8\cdot10^{-3}~^{+1.3\cdot 10^{-2}}_{-2.2\cdot 10^{-3}}$ & $3.5\cdot10^{-4}~^{+1.7\cdot 10^{-3}}_{-2.8\cdot 10^{-4}}$ & $2.2\cdot 10^{-4}~^{+1.1\cdot 10^{-3}}_{-1.8\cdot 10^{-4}}$ \\ \hline\hline &$\Upsilon(1S)+\Upsilon(2S)$ & $\Upsilon(1S)+\Upsilon(3S)$ & $\Upsilon(2S)+\Upsilon(3S)$ \\ \hline $\sigma_{\rm DPS}$ & $5.1\cdot 10^{-6}~^{+6.2\cdot 10^{-6}}_{-1.7\cdot 10^{-6}}$ & $2.5\cdot 10^{-6}~^{+3.0\cdot 10^{-6}}_{-8.7\cdot 10^{-7}}$ & $5.5\cdot 10^{-7}~^{+6.7\cdot 10^{-7}}_{-1.9\cdot 10^{-7}}$ \\ $\sigma^{\rm CSM}_{\rm SPS}$ & $2.0\cdot 10^{-3}~^{+9.3\cdot 10^{-3}}_{-1.6\cdot 10^{-3}}$ & $1.6\cdot 10^{-3}~^{+7.4\cdot 10^{-3}}_{-1.3\cdot 10^{-3}}$ & $5.6\cdot 10^{-4}~^{+2.6\cdot 10^{-3}}_{-4.4\cdot 10^{-4}}$\\ \end{tabular} \caption{$\sigma(pp\to {\cal Q}_1+{\cal Q}_2+X) \times {\cal B}({\cal Q}_1\to\mu^+\mu^-){\cal B}({\cal Q}_2\to\mu^+\mu^-)$ in units of fb with $\sqrt{s}=115$ GeV, where ${\cal Q}_1,{\cal Q}_2=\Upsilon(1S),\Upsilon(2S),\Upsilon(3S)$. The DPS uncertainties are from $\sigma_{\rm eff}$ and the SPS ones from the $m_Q$ and the scales.} \label{xsectionsbb} \endgroup \end{center} \end{table} \begin{figure}[hbt!] 
\begin{center} \subfloat{\includegraphics[width=0.49\textwidth,draft=false]{dy_jpsijpsi.pdf}\label{fig:dsigb}} \subfloat{\includegraphics[width=0.49\textwidth,draft=false]{dy_jpsijpsi_LHCb.pdf}\label{fig:dsigLHCbb}} \caption{Differential cross section as a function of the absolute rapidity difference of the $J/\psi$ pair, without (left) or with (right) a rapidity cut.} \label{fig:dsigdDeltay} \end{center}\vspace*{-1cm} \end{figure} \begin{figure}[hbt!] \begin{center} \subfloat{\includegraphics[width=0.49\textwidth,draft=false]{dM_jpsijpsi.pdf}\label{fig:dsigc}} \subfloat{\includegraphics[width=0.49\textwidth,draft=false]{dM_jpsijpsi_LHCb.pdf}\label{fig:dsigLHCbc}} \caption{Differential cross section as a function of the invariant mass of the $J/\psi$ pair, without (left) or with (right) a rapidity cut.} \label{fig:dsigdM} \end{center}\vspace*{-1cm} \end{figure} \begin{figure}[hbt!] \begin{center} \subfloat{\includegraphics[width=0.49\textwidth,draft=false]{dPt_jpsijpsi.pdf}\label{fig:dsiga}} \subfloat{\includegraphics[width=0.49\textwidth,draft=false]{dPt_jpsijpsi_LHCb.pdf}\label{fig:dsigLHCba}} \caption{Differential cross section as a function of the transverse momentum of the $J/\psi$ pair, without (left) or with (right) a rapidity cut.} \label{fig:dsigdPtpsipsi} \end{center}\vspace*{-1cm} \end{figure} \begin{figure}[hbt!] \begin{center} \subfloat{\includegraphics[width=0.49\textwidth,draft=false]{dPtmin_jpsijpsi_LHCb.pdf}\label{fig:dsigLHCbd}} \subfloat{\includegraphics[width=0.49\textwidth,draft=false]{dYsum_jpsijpsi.pdf}\label{fig:dsigd}} \caption{Differential cross section as a function of (left) the sub-leading $P_T$ with a rapidity cut and (right) the rapidity of the $J/\psi$ pair.} \label{fig:dsigdYanddpt} \end{center}\vspace*{-1cm} \end{figure} The absolute rapidity difference between both $J/\psi$ is expected to be a good observable to discriminate between the DPS and SPS contributions. 
This was first pointed out in Ref.~\cite{Kom:2011bd} and was later used by the D0 collaboration~\cite{Abazov:2014qba} to extract $\sigma_{\rm eff}$ from double-$J/\psi$ production at the Tevatron. The DPS events should have a broader distribution in $\Delta y$ than the SPS ones, because two (relatively) independent hard interactions happen simultaneously in DPS, while the two $J/\psi$ from SPS are more correlated. The situation is no different at AFTER@LHC, without or with the cut, as \cf{fig:dsigdDeltay} (left) and (right) show. In the latter case, the restriction to negative rapidities in the centre-of-mass frame obviously reduces the $\Delta y$ range. Starting from $\Delta y=2$, the DPS events dominate the SPS events; a DPS/SPS ratio of 10 is obtained for $\Delta y>2$. The distribution of the invariant mass of the $J/\psi$ pair, $M_{\psi\psi}$, carries information similar to that of the $\Delta y$ distribution. It follows that the $M_{\psi\psi}$ spectra of DPS are also broader than those of SPS, as can be seen in \cf{fig:dsigdM} (left) and (right). As we discussed earlier, predictions for the $P_T^{\psi\psi}$ dependence of the SPS yield depend strongly on the $k_T$ smearing of the initial partons, which can mimic a part of the QCD corrections. Due to the smaller yields at AFTER@LHC energies compared to LHC energies, one can only access $P_T^{\psi\psi}<10$ GeV, as illustrated in \cf{fig:dsigdPtpsipsi}. In such a kinematical region, the $k_T$ smearing effect with $\langle k_T \rangle = 2$ GeV makes the SPS spectrum as broad as the DPS one. Finally, we present in \cf{fig:dsigdYanddpt} the cross section as a function of the total rapidity of the $J/\psi$ pair (right), $Y_{\psi\psi}$, and of the sub-leading $P_T$ of the $J/\psi$ pair (left). One sees that the sub-leading $P_T$ spectrum may be measured up to 6 GeV with AFTER@LHC. As regards the rapidity distribution, its maximum is obviously located at $Y_{\rm cms}=0$, that is $Y=4.8$ in the laboratory frame. 
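The laboratory-frame kinematics quoted here and just below can be cross-checked with a short numerical script; taking the pair invariant mass at its threshold value $2m_{J/\psi}$ is an assumption made purely for illustration:

```python
import math

SQRT_S = 115.0   # GeV, AFTER@LHC centre-of-mass energy
Y_CMS0 = 4.8     # laboratory rapidity of the centre of mass

def x_f(m_pair, y_lab):
    """Approximate Feynman-x of the pair,
    x_F ~ (2 M / sqrt(s)) * sinh(y_lab - y_cms0),
    with m_pair an assumed pair invariant mass in GeV."""
    return 2.0 * m_pair / SQRT_S * math.sinh(y_lab - Y_CMS0)

# Threshold mass for a J/psi pair (illustrative assumption):
m_pair = 2 * 3.097  # GeV
x_edge = x_f(m_pair, 2.5)  # ~ -0.53, i.e. close to the -0.5 quoted in the text
```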
One sees that one can expect some counts down to $Y_{\psi\psi}\simeq 2.5$, where $x_F\simeq \frac{2M_{\psi\psi}}{\sqrt{s}} \sinh(Y_{\psi\psi}-4.8) \simeq -0.5$. This is precisely the kinematical region where double intrinsic $c \bar c$ coalescence contributes on average~\cite{Vogt:1995tf}. Any modulation in the pair-rapidity distribution would signal the presence of such a contribution. Finally, we have investigated the impact of using different (double) PDFs (MSTW2008NLO~\cite{Martin:2009iq}, CTEQ6L1~\cite{Pumplin:2002vw}, GS09 dPDF~\cite{Gaunt:2009re}) on the differential distributions, shown in \cf{fig:dsigDPS}; it is found to be moderate in all cases. \begin{figure}[t!] \begin{center} \subfloat[Pair transverse momentum]{\includegraphics[width=0.49\textwidth,draft=false]{dPt_jpsijpsi_PDFs.pdf}\label{fig:dsigDPSa}} \subfloat[Absolute rapidity difference between both $J/\psi$]{\includegraphics[width=0.49\textwidth,draft=false]{dy_jpsijpsi_PDFs.pdf}\label{fig:dsigDPSb}}\\ \subfloat[Pair invariant mass]{\includegraphics[width=0.49\textwidth,draft=false]{dM_jpsijpsi_PDFs.pdf}\label{fig:dsigDPSc}} \subfloat[Pair rapidity]{\includegraphics[width=0.49\textwidth,draft=false]{dYsum_jpsijpsi_PDFs.pdf}\label{fig:dsigDPSd}} \caption{Differential distributions for DPS with various PDFs: (a) transverse momentum spectrum; (b) absolute rapidity difference; (c) invariant mass distribution; (d) rapidity of the $J/\psi$ pair. } \label{fig:dsigDPS} \end{center}\vspace*{-1cm} \end{figure} \section{Conclusion} We have discussed double-quarkonium production in proton-proton collisions at a fixed-target experiment using the LHC proton beams, AFTER@LHC. These processes have lately attracted much attention, in both the theory and experimental communities. They are expected to be good observables to further constrain the various models describing heavy-quarkonium production. 
Double-quarkonium production also provides a good opportunity to study DPS since the yields of single-quarkonium production are large and the decay into four muons is a clean signal at hadron colliders. AFTER@LHC provides very appealing opportunities to study these observables with an LHCb-like detector and in a new energy range. In this paper, we have studied both the DPS and SPS contributions to double-quarkonium production. These processes include $\psi(n_1S)+\psi(n_2S)$, $\psi(n_1S)+\Upsilon(m_1S)$ and $\Upsilon(m_1S)+\Upsilon(m_2S)$ with $n_1,n_2=1,2$ and $m_1,m_2=1,2,3$. DPS contributions are estimated in a data-driven way, while SPS ones are calculated at LO in non-relativistic QCD (NRQCD)~\cite{Bodwin:1994jh}, more precisely in the CSM for $\psi(n_1S)+\psi(n_2S)$ and $\Upsilon(m_1S)+\Upsilon(m_2S)$ and accounting for CO contributions for $\psi(n_1S)+\Upsilon(m_1S)$. From our calculations, we find that about ten thousand double-charmonium events can indeed be measured at AFTER@LHC with the yearly integrated luminosity of $20$ fb$^{-1}$. In the most backward region, a careful analysis of the rapidity distribution could also uncover double intrinsic $c \bar c$ coalescence contributions. In general, future measurements of double-charmonium production can provide extremely valuable information on QCD, in particular important tests of the factorisation formula for DPS and of the energy (in)dependence of $\sigma_{\rm eff}$.
\section{Introduction} Active galactic nuclei (AGN) occasionally produce powerful and collimated jets of magnetized relativistic particles which can extend beyond galactic scales and have an impact on galaxy evolution. Although a magnetically driven scenario for relativistic jets is widely discussed (e.g., \cite{BZ77}; \cite{M06}), there is no conclusive observational evidence to prove it. In order to explore the jet formation processes by a central engine composed of a supermassive black hole and the matter accreting onto it, Very Long Baseline Interferometry (VLBI) is one of the most powerful tools, because it can probe the innermost regions of relativistic jets with its high spatial resolution. Recently, a new VLBI facility named KaVA, consisting of the Korean VLBI Network (KVN) and the VLBI Exploration of Radio Astrometry (VERA), has been constructed in the East Asia region (http://veraserver.mtk.nao.ac.jp/). KVN is the first VLBI array dedicated to mm-wavelength radio observations in East Asia, operated by the Korean Astronomy and Space Science Institute (KASI) (\cite{LPB14}) (http://kava.kasi.re.kr/kava main.php). KVN consists of three 21-m-diameter radio telescopes: one in Seoul, one in Ulsan, and one on Jeju Island, Korea. In these proceedings, the AGN Key Science Project (KSP) with KaVA, i.e., monitoring M87 and Sgr A* at 23~GHz and 43~GHz, is summarized. \section{Imaging Capability of KaVA} Here we briefly review the imaging capability of KaVA. In radio interferometers, the detection limit (equivalent to the thermal noise level) of images is given by $\sigma_{\rm th} = \frac{2k_{\rm B}T_{\rm sys}} {A_{\rm eff}\eta_{\rm q}\sqrt{N_{\rm ant}(N_{\rm ant}-1)BW t_{\rm int}}}$ where $A_{\rm eff}$, $\eta_{\rm q}$, $T_{\rm sys}$, $N_{\rm ant}$, $BW$, and $t_{\rm int}$ are the effective aperture area of the antennas, the quantization loss factor, the system temperature, the number of antennas, the bandwidth, and the total integration time, respectively. 
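The radiometer formula above can be evaluated with a short script; the input values below (system temperature, dish size, efficiencies) are illustrative assumptions rather than the actual KaVA parameters, but the scalings with $N_{\rm ant}$ and $t_{\rm int}$ are generic:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def thermal_noise_jy(t_sys, a_eff, eta_q, n_ant, bw_hz, t_int_s):
    """Point-source thermal noise of an N-antenna array,
    sigma = 2 k_B T_sys / (A_eff eta_q sqrt(N(N-1) BW t_int)),
    converted from W m^-2 Hz^-1 to Jy (1 Jy = 1e-26 W m^-2 Hz^-1).
    Identical antennas are assumed for simplicity."""
    sigma = 2.0 * K_B * t_sys / (
        a_eff * eta_q * math.sqrt(n_ant * (n_ant - 1) * bw_hz * t_int_s)
    )
    return sigma / 1e-26

# Assumed numbers: 21-m dish, 50% aperture efficiency, T_sys = 100 K,
# BW = 32 MHz, 4 h on source; KaVA has 7 antennas (3 KVN + 4 VERA).
area = 0.5 * math.pi * 10.5**2  # effective aperture, m^2
kava = thermal_noise_jy(100.0, area, 0.88, 7, 32e6, 4 * 3600)
vera = thermal_noise_jy(100.0, area, 0.88, 4, 32e6, 4 * 3600)
```

Even with these rough inputs, the sub-mJy noise level and the gain of the 7-station array over VERA alone come out at the expected order of magnitude.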
When using natural weighting, a typical thermal noise for a KaVA observation of M87 at 23~GHz with $BW=32$~MHz is $\sigma_{\rm th}\approx 0.7~{\rm mJy~beam^{-1}}$, and the corresponding signal-to-noise ratio (SNR) is $\sim 1400$ (see \cite{NLK15} for details). In Figure \ref{fig:m87-32M}, we show the comparison of the VERA and KaVA images of M87 at 23~GHz. The higher dynamic range of KaVA enables us to recover the extended structure of the M87 jet up to the 10~mas scale. The quality of VLBI images tends to be limited by the dynamic range rather than by the SNR. The dynamic range of VLBI images can be given by $\frac{I_{p}}{\sigma_{\rm im} }= \frac{\sqrt{M_{\rm scan}}\sqrt{N_{\rm ant}(N_{\rm ant}-1)}}{max[\epsilon, \Delta \phi]}$ where $M_{\rm scan}$, $\epsilon$, and $\Delta \phi$ are the number of scans in each observation and the amplitude and phase errors, respectively. The dynamic range of the KaVA image in Figure \ref{fig:m87-32M} reaches $\sim 1000$, which is more than three times better than that achieved by VERA alone (\cite{NLK15}). \section{Key Science Project\label{sec:sections}} \subsection{M87} M87, a nearby giant radio galaxy located at a distance of 16.7~Mpc, hosts one of the most massive supermassive black holes, with a mass of $6\times 10^{9}~M_{\rm sun}$. Thanks to its proximity and the large mass of its central black hole, M87 is well known for being the best source for imaging the innermost part of a jet base (e.g., \cite{HDK11}). According to the leading scenario of jet formation, a jet is thought to be powered by a central engine in a highly magnetized state and accelerated via the conversion of magnetic energy into kinetic energy. Relativistic magnetohydrodynamic models have suggested that jets are gradually accelerated on a scale which, in the case of M87, can be well observed by VLBI (e.g., \cite{M06}). Hence, it is possible to constrain such models by comparing the model-predicted velocity fields with the observed one. 
Indeed, mapping the apparent velocities of the M87 jet has been explored in previous work \cite{Asada14}. However, the reported apparent velocities at 1-10~mas from the central engine differ significantly and the issue remains controversial. The aim of this KSP is to measure the actual velocity field in the M87 jet, with intervals sufficiently short to avoid possible component mis-identifications, in order to test the magnetically driven jet paradigm. \subsection{Sagittarius~A*} The center of the Milky Way hosts Sgr~A*, the massive black hole with the largest angular size, and hence one of the best laboratories to explore the immediate vicinity of black holes (e.g., \cite{DWR08}). During past VLBI monitoring of Sgr~A* at 43~GHz, a flare was found in 2007 \cite{ATH13}. Interestingly, the size of the major axis remained the same while the flux increased by about 2~Jy. This is the first report of a flare on VLBI scales, and we do not know how frequently such events happen. In Figure~\ref{fig:sgra-kava}, we present the $u,v$ coverage and the preliminary image of Sgr~A* at 43~GHz with a bandwidth of 256~MHz. We emphasize that KaVA can achieve a good performance for Sgr~A* observations, since it contains more short baselines than other VLBI arrays. The short baselines in KaVA provide more effective sampling of the visibilities of Sgr~A* than the VLBA, enabling better measurements of the source size. Accurate measurements of the size and flux of Sgr~A* by KaVA at 43~GHz will enable us to place tight constraints on physical quantities in Sgr~A*, such as the magnetic field strength, since these quantities depend strongly on the source size \cite{K14}. \begin{figure*}[t] \centering \includegraphics[width=60mm]{m87.vera.kband_rev.eps} \includegraphics[width=60mm]{m87.kava.kband_rev.eps} \caption{The comparison of VERA and KaVA images of M87 at 23~GHz. 
\label{fig:m87-32M}} \vspace{5mm} \end{figure*} \begin{figure*}[t] \centering \includegraphics[angle=-90,width=60mm]{uvplot_final.eps} \hspace{2cm} \includegraphics[angle=-90,width=50mm]{map_mod.eps} \caption{ Left: The $u, v$ coverage of KaVA observation of Sgr A* at 7~mm performed on Oct 7th 2013. Right: Corresponding KaVA image of Sgr A* with the best-fit elliptical Gaussian model. } \label{fig:sgra-kava} \vspace{5mm} \end{figure*}
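The dynamic-range scaling above is straightforward to evaluate. The short script below is illustrative only: the scan number, antenna counts, and calibration-error levels are assumed values, not the parameters of the actual observations.

```python
import math

def dynamic_range(m_scan, n_ant, eps, dphi):
    """Estimate the VLBI image dynamic range I_p / sigma_im.

    m_scan : number of scans per observation
    n_ant  : number of antennas in the array
    eps    : degree of amplitude error
    dphi   : degree of phase error
    The limiting error is the larger of eps and dphi.
    """
    return math.sqrt(m_scan) * math.sqrt(n_ant * (n_ant - 1)) / max(eps, dphi)

# Illustrative comparison of a 7-station array (KaVA) with a 4-station
# array (VERA); m_scan = 30 and 10% residual errors are assumed values.
dr_kava = dynamic_range(30, 7, 0.1, 0.1)
dr_vera = dynamic_range(30, 4, 0.1, 0.1)
print(f"KaVA-like: {dr_kava:.0f}, VERA-like: {dr_vera:.0f}")
```

With identical residual errors, the larger array gains dynamic range through the $\sqrt{N_{\rm ant}(N_{\rm ant}-1)}$ factor alone; the further improvement reported in the text also reflects the better $u,v$ coverage.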
\section{Introduction } \subsection{Motivating context and problem formulation} Mathematical dynamical models are usually necessary to understand, analyse and control the behaviour of physical phenomena. High-fidelity models, however, often require numerous equations and variables. The associated state-space realization is consequently of large dimension, and the resulting system is said to be large-scale. In addition, in some cases a linear finite-dimensional realization is not accessible or does not even exist (\emph{e.g.}, models described by partial differential equations, irrational transfer functions, etc.). Although such models can faithfully and accurately reproduce reality, they may lead to \emph{(i)} high complexity and/or \emph{(ii)} realization-less models to which classical methods cannot reasonably be applied, due to the high numerical burden, low computational speed and lack of appropriate tools. In these cases, and for control purposes, an approximation by a less complex realization is therefore desirable. This justifies the use of model approximation and interpolation techniques, which aim at finding a simpler model that faithfully approximates the original one and can be used in its place for simulation, analysis and control (for surveys and historical references, see \cite{Luenberger1967, wilson1970optimum, antoulas2005approximation} and references therein).
Moreover, since Time-Delay Systems (TDS) form a large class of dynamical systems that generalizes the finite-dimensional realization one, \emph{approximating any transfer function}\footnote{Throughout this paper, we denote by $\mathcal{H}_2^{(n_y\times n_u)}$, or simply $\mathcal{H}_2$, the open subspace of $\mathcal{L}_2$ of matrix-valued functions $H(s)$ ($n_y$ outputs, $n_u$ inputs), $s \in \mathbb{C}$, which are analytic in $\textbf{Re}(s)> 0$ (\emph{i.e.}, locally given by a convergent power series and differentiable at each point of their domain) \cite{PontesECC:2015}.} $H(s) \in\mathcal{H}_2^{(n_y\times n_u)}$ or any complex data set\footnote{We denote by $\big(s_i,H(s_i)\big)$ the evaluation of the transfer function $H$ at $s_i$.} $\big(s_i,H(s_i)\big)$ (for $i=1,\dots,r$, $r\in \mathbb{N}^*$) \emph{by a time-delay dynamical model} may be relevant for specific applications where a delay naturally appears. Indeed, for such systems many dedicated and powerful results are available for stability, performance analysis and control (see \emph{e.g.}, \cite{richard2003time, Niculescu:01,Briat:14e}). In this paper, the approximation of any realization or realization-less linear dynamical model by a \emph{single time-delay model of finite dimension} is developed. More specifically, we are interested in approximating any MIMO transfer function $H(s) \in \mathcal{H}_2$ by a single-delay finite-dimensional linear time-invariant descriptor system, denoted $\mathbf{H_d} = (E,A,B,C,\tau)$ and defined by: \begin{equation}\label{eq:SSDescriptordelay} E\dot{x}(t) = Ax(t-\tau) +Bu(t),~y(t) = Cx(t), \end{equation} whose transfer function is $H_d(s) = C(sE-Ae^{-\tau s})^{-1}B$.
It is straightforward to note that the approximation form \eqref{eq:SSDescriptordelay} generalizes the delay-free one used in \cite{mayo2007framework,beattie2012realization}, given as $\mathbf{H} = (E,A,B,C,0)$ (or simply $(E,A,B,C)$), \begin{equation}\label{eq:SSDescriptor} E\dot{x}(t) = Ax(t) +Bu(t),~y(t) = Cx(t). \end{equation} Following \cite{mayo2007framework, beattie2012realization}, and inspired by the widely used $\mathcal{H}_2$-approximation problem \cite{benner2005dimension,dooren2007,gugercin2008h_2}, our objective can be mathematically formulated as follows: \begin{problem}\label{pb:General} Given an LTI system $H(s) \in \mathcal{H}_2$ (or $\big(s_i,H(s_i)\big)$, the evaluation of $H(s)$ at $s_i\in\mathbb{C}$, for $i=1,\dots,r$), a positive integer $r \in \mathbb{N}^*$ and a positive scalar $\tau \in \mathbb{R}$, find a model $\mathbf{\hat{H}_d} = (E,A,B,C,\tau)\in \mathcal{H}_{2}$ such that \begin{equation} \mathbf{\hat{H}_d} := \argmin_{\mathbf{G_d}\in \mathcal{H}_2 ,\dim(\mathbf{G_d} ) \leq r} \|\mathbf{H}-\mathbf{G_d}\|_{\mathcal{H}_2}. \end{equation} \end{problem} In other words, if an evaluation of the transfer function $H(s)$ is available for any $s\in\mathbb{C}$ (either from data or by simply evaluating $H(s)$), our goal is to find a delay model of the form \eqref{eq:SSDescriptordelay} that approximates $H$ well in the sense of the $\mathcal{H}_2$-norm. \subsection{Contributions} The purpose of this paper is thus to extend the application domain of the Loewner framework established in \cite{mayo2007framework,IonitaPhd2013} to dynamical systems with one single internal delay. To this end, a new \emph{delay Loewner framework} is first developed to interpolate a given transfer function by a time-delay model of the form \eqref{eq:SSDescriptordelay}, enabling the delay Loewner framework to be applied to any model for which only the transfer function is accessible.
This then allows model approximation of both infinite- and finite-dimensional systems as well as data-driven ones. Then, following Problem \ref{pb:General}, the \emph{$\mathcal{H}_2$-oriented optimality conditions} are formulated and used to construct an iterative algorithm, similar to the recently proposed \textbf{TF-IRKA} \cite{beattie2012realization}, which yields an approximate model $\mathbf{\hat{H}_d}$ satisfying \emph{a finite number of the $\mathcal{H}_2$ optimality conditions}. \subsection{Notations and outline} We denote by $\mathbb{N}^*$ the set of natural numbers without 0, and by $\mathcal{H}_2$ the Hilbert space of matrix-valued functions $F: \mathbb{C} \rightarrow \mathbb{C}^{n_y\times n_u}$ satisfying $\int_{\mathbb{R}}\textnormal{Trace}[\overline{F(i\omega)}F(i\omega)^T]d\omega <\infty$, whose components $f_{i,j}$ are analytic in the open right half-plane. For $\mathbf{H},\mathbf{G}\in \mathcal{H}_2$, we define the inner product \[\langle\textbf{H} ,\textbf{G} \rangle_{\mathcal{H}_2} = \int_{-\infty}^{\infty}\textnormal{trace}\Big(\overline{H(i\omega)}G(i\omega)^T\Big)d\omega, \] with corresponding induced norm $\|\mathbf{H}\|_{\mathcal{H}_2} = \langle \mathbf{H},\mathbf{H}\rangle_{\mathcal{H}_2}^{\frac{1}{2}}$. Finally, we denote $F'(\lambda) = \left.\frac{dF}{ds}\right|_{s=\lambda}$. The paper is organized as follows: Section \ref{sec:reviewinterpolation} recalls some preliminary results on the rational interpolation Loewner framework proposed in \cite{mayo2007framework}. Section \ref{sec:delayinterpolation} presents the extension of these results to the single-delay case. Section \ref{sec:Optimality} derives the first-order optimality conditions for the $\mathcal{H}_2$-optimisation Problem \ref{pb:General}.
Then, Section \ref{sec:delayTFIRKA} details an iterative algorithm, referred to as \textbf{dTF-IRKA}\footnote{\textbf{dTF-IRKA} stands for delay Transfer Function Iterative Rational Krylov Algorithm.} (inspired by \textbf{TF-IRKA} from \cite{beattie2012realization}), which obtains an approximation satisfying some optimality conditions in a numerically efficient and memory-affordable way. Finally, Section \ref{sec:applications} illustrates the proposed approach and framework on numerical examples. \section{Realization-less interpolation}\label{sec:reviewinterpolation} \subsection{Preliminary results in the Loewner framework for rational interpolation} The interpolation problem, in its basic general form, is stated as follows: \begin{problem}[General interpolation problem \cite{mayo2007framework}] \label{pb:generalinterp} Given \emph{right interpolation data}: \begin{equation}\label{eq:rightinterdata} \{(\lambda_i,\mathbf{r}_i,\mathbf{w}_i) | \lambda_i \in \mathbb{C}, \mathbf{r}_i \in \mathbb{C}^{n_u \times 1} , \mathbf{w}_i \in \mathbb{C}^{n_y \times 1}, i =1,\dots, \rho \} \end{equation} and \emph{left interpolation data}: \begin{equation}\label{eq:leftinterdata} \{(\mu_j,\mathbf{l}_j,\mathbf{v}_j) | \mu_j \in \mathbb{C}, \mathbf{l}_j \in \mathbb{C}^{1 \times n_y} , \mathbf{v}_j \in \mathbb{C}^{1 \times n_u}, j =1,\dots, \nu \} \end{equation} construct a realization $\mathbf{H} = (E,A,B,C)$ of appropriate dimensions whose transfer function $H(s) = C(sE-A)^{-1}B$ satisfies both the \emph{right constraints}: \begin{equation} H(\lambda_i)\mathbf{r}_i = \mathbf{w}_i,~ i = 1,\dots, \rho \label{eq:constRight} \end{equation} and the \emph{left constraints}: \begin{equation} \mathbf{l}_jH(\mu_j) = \mathbf{v}_j,~ j = 1,\dots, \nu. \label{eq:constLeft} \end{equation} \end{problem} The above problem can be solved thanks to the following theorem, proposed in \cite{mayo2007framework}.
\begin{theorem}[Loewner framework \cite{mayo2007framework}]\label{thm:Loewner} Given \emph{right} and \emph{left interpolation data} as in \eqref{eq:rightinterdata}-\eqref{eq:leftinterdata}, and assuming that $\rho = \nu = r$, the realization $\mathbf{H} = (E,A,B,C)$ of order $r$ constructed as \begin{equation} E= -\mathbb{L}, A= -\mathbb{L}_{\sigma}, B=V, C=W, \end{equation} interpolates the right and left constraints \eqref{eq:constRight}-\eqref{eq:constLeft}, where \begin{equation} \begin{array}{rcl} [\mathbb{L}]_{ij} &=& \dfrac{\mathbf{v}_i\mathbf{r}_j - \mathbf{l}_i\mathbf{w}_j}{\mu_i - \lambda_j} = \dfrac{\mathbf{l}_i\big( H(\mu_i) - H(\lambda_j) \big) \mathbf{r}_j}{\mu_i - \lambda_j} \\ \,[\mathbb{L}_{\sigma}]_{ij} &=& \dfrac{\mu_i\mathbf{v}_i\mathbf{r}_j - \mathbf{l}_i\mathbf{w}_j\lambda_j}{\mu_i - \lambda_j} = \dfrac{\mathbf{l}_i\big( \mu_i H(\mu_i) - \lambda_j H(\lambda_j) \big) \mathbf{r}_j}{\mu_i - \lambda_j} \end{array} \end{equation} are the Loewner and the shifted Loewner matrices, respectively, and \[W =[\textbf{w}_1, \dots, \textbf{w}_r] ~,~ V^T = [\textbf{v}_1^T, \dots,\textbf{v}_r^T]. \] \end{theorem} Theorem \ref{thm:Loewner} allows one to obtain a model $\mathbf{H} = (E,A,B,C)$ whose transfer function interpolates the right and left constraints stated in Problem \ref{pb:generalinterp}. This has been used extensively for system identification from complex data obtained by a signal generator, and for large-scale model approximation purposes \cite{IonitaPhd2013,ionita2014data}. An extension of Problem \ref{pb:generalinterp}, including derivative constraints, has also been considered to solve the $\mathcal H_2$ model approximation problem \cite{beattie2012realization}.
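As a concrete sanity check of Theorem \ref{thm:Loewner}, the short Python sketch below (our illustration, not part of \cite{mayo2007framework}) builds the Loewner pencil from SISO data, for which the tangential directions reduce to the scalar $1$, and verifies that the resulting realization satisfies all right and left constraints; the rational function used is an arbitrary choice.

```python
# SISO illustration of the Loewner framework: two right and two left
# interpolation points, scalar tangential directions r_j = l_i = 1.
H = lambda s: (s + 2) / (s**2 + 3*s + 5)   # assumed example transfer function

lam = [1.0, 2.0]          # right interpolation points lambda_j
mu = [0.5, 1.5]           # left interpolation points mu_i
w = [H(x) for x in lam]   # right data w_j = H(lambda_j) r_j
v = [H(x) for x in mu]    # left data  v_i = l_i H(mu_i)

# Loewner and shifted-Loewner matrices:
#   [L]_ij  = (v_i r_j - l_i w_j) / (mu_i - lambda_j)
#   [Ls]_ij = (mu_i v_i r_j - l_i w_j lambda_j) / (mu_i - lambda_j)
L  = [[(v[i] - w[j]) / (mu[i] - lam[j]) for j in range(2)] for i in range(2)]
Ls = [[(mu[i]*v[i] - w[j]*lam[j]) / (mu[i] - lam[j]) for j in range(2)]
      for i in range(2)]

def Hr(s):
    """Evaluate W (Ls - s L)^{-1} V, i.e. C (sE - A)^{-1} B with
    (E, A, B, C) = (-L, -Ls, V, W), for the 2x2 pencil."""
    m = [[Ls[i][j] - s*L[i][j] for j in range(2)] for i in range(2)]
    det = m[0][0]*m[1][1] - m[0][1]*m[1][0]
    x0 = ( m[1][1]*v[0] - m[0][1]*v[1]) / det   # m^{-1} V via the adjugate
    x1 = (-m[1][0]*v[0] + m[0][0]*v[1]) / det
    return w[0]*x0 + w[1]*x1

for s in lam + mu:
    assert abs(Hr(s) - H(s)) < 1e-9   # all four constraints are met
print("Loewner realization interpolates all data points.")
```

Since the data here are sampled from a McMillan-degree-2 function and $r=2$, the Loewner realization actually recovers $H$ itself; this exact-recovery mechanism is the one exploited in Example 1 of Section \ref{sec:applications}.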
To this aim, the following theorem, initially stated in \cite{mayo2007framework}, provides a solution for the problem with derivative constraints in the case where the right and left interpolation points are equal, \emph{i.e.}, $s_i = \mu_i = \lambda_i~,~ \forall i = 1,\dots, r$. \begin{theorem}[Derivative Loewner framework \cite{mayo2007framework}]\label{thm:DerivativeLoewner} Given a system represented by its transfer function $H(s)$, $r$ shift points $\{s_1,\dots,s_r\} \in \mathbb{C}$ and $r$ tangential directions $\{\mathbf{l}_1,\dots, \mathbf{l}_r\} \in \mathbb{C}^{1\times n_y}$, $\{\mathbf{r}_1,\dots,\mathbf{r}_r\}\in \mathbb{C}^{n_u \times 1}$, the $r$-dimensional descriptor model $\mathbf{\hat{H}} = (\hat{E},\hat{A},\hat{B},\hat{C})$, as in \eqref{eq:SSDescriptor}, interpolates $H(s)$ as follows, for $k=1,\dots, r$: \begin{equation} H({s}_k)\mathbf{r}_k = \hat{H}({s}_k)\mathbf{r}_k,\,\,\mathbf{l}_kH(s_k) = \mathbf{l}_k\hat{H}({s}_k)\, \label{pbh2} \end{equation} \begin{equation} \mathbf{l}_kH'(s_k)\mathbf{r}_k = \mathbf{l}_k\hat{H}'(s_k)\mathbf{r}_k, \label{pbh21} \end{equation} if for $i,j=1,\dots,r$: \begin{eq} (\hat{E})_{ij} = \left\{\begin{array}{lr} -\frac{\mathbf{l}_i \big(H(s_i)-H(s_j)\big)\mathbf{r}_j}{s_i-s_j} & i\neq j\\ -\mathbf{l}_i H'(s_i)\mathbf{r}_i & i=j \end{array}\right.\nonumber \end{eq} \begin{eq} (\hat{A})_{ij} = \left\{\begin{array}{lr} -\frac{\mathbf{l}_i \big(s_iH(s_i)-s_jH(s_j)\big)\mathbf{r}_j}{s_i-s_j} & i\neq j\\ -\mathbf{l}_i \big(sH(s)\big)'|_{s=s_i}\mathbf{r}_i& i=j \end{array}\right. \nonumber \end{eq} \begin{equation} \hat{C} = [H(s_1)\mathbf{r}_1,\dots ,H(s_r)\mathbf{r}_r] \hspace{0.2cm} \textnormal{and} \,\,\hat{B} = \left[\begin{array}{ccc} \mathbf{l}_1H(s_1) \\ \vdots\\ \mathbf{l}_rH(s_r) \nonumber \end{array}\right].
\end{equation} \end{theorem} In the following section, the extensions of both Theorems \ref{thm:Loewner} and \ref{thm:DerivativeLoewner} are presented for the case where a time-delay realization $(E,A,B,C,\tau)$ as in \eqref{eq:SSDescriptordelay} is sought. \section{Delay Loewner framework}\label{sec:delayinterpolation} Before introducing the main result of this section, let us consider the following representation of system \eqref{eq:SSDescriptordelay}, which will be useful throughout the rest of the paper. \begin{lemma}\label{lemma:DSDrep} Given $\mathbf{H_d} =(E,A,B,C,\tau)$, its transfer function $H_d(s)$ can be decomposed as: \begin{equation} H_d(s) = G\big(f(s)\big)e^{s\tau} \end{equation} where $G(s)$ is the transfer function of the delay-free model $\textbf G=(E,A,B,C)$ as in \eqref{eq:SSDescriptor} and $f(s) = se^{s\tau}$. \end{lemma} \begin{proof} The result follows directly by factoring $e^{-s\tau}$ out of the resolvent: \begin{eq} \begin{array}{rcl} H_d(s) &=& C(sE-Ae^{-s\tau})^{-1}B \\ & =& C(se^{s\tau}E-A)^{-1}Be^{{s\tau}} \\ &= &G(se^{s\tau})e^{s\tau}. \end{array} \end{eq} \end{proof} An extension of Theorem~\ref{thm:Loewner} that makes interpolation by a single-delay descriptor system as defined in \eqref{eq:SSDescriptordelay} feasible is then obtained by using $f(s)$ as a variable substitution and applying the standard Loewner framework to the transformed data. This first main result is stated as follows: \begin{theorem}[Delay Loewner framework]\label{thm:delayLowner} Let $\rho = \nu = r$ and $\tau \in \mathbb{R}$, and let $(\lambda_i,\mathbf{r}_i,\mathbf{w}_i) $ and $(\mu_j,\mathbf{l}_j,\mathbf{v}_j)$ be the \emph{right and left interpolation data}, respectively, as stated in \eqref{eq:rightinterdata}-\eqref{eq:leftinterdata}.
Assume that $f(s)= se^{s\tau}$ is one-to-one on the set of interpolation points\footnote{This means that for any $h_1, h_2 \in \{\lambda_1,\dots,\lambda_r\}\cup\{\mu_1,\dots,\mu_r \} $, $f(h_1) \neq f(h_2)$ whenever $h_1\neq h_2$, where $f(s) = se^{s\tau}$.}, and let $\mathbf{G} = (\hat{E},\hat{A},\hat{B},\hat{C})$ be a realization satisfying the right and left constraints for the data $(f(\lambda_i),\mathbf{r}_i,\mathbf{w}_ie^{-\lambda_i\tau}) $ and $(f(\mu_j),\mathbf{l}_j,\mathbf{v}_je^{-\mu_j\tau})$, constructed with Theorem \ref{thm:Loewner}. Then $\mathbf{H_d} = (\hat{E},\hat{A},\hat{B},\hat{C},\tau)$ satisfies the \emph{right}: \begin{eq} H_d(\lambda_i)\mathbf{r}_i = \mathbf{w}_i,~ i = 1,\dots, r \label{eq:delayRightConst} \end{eq} and \emph{left constraints}: \begin{eq} \mathbf{l}_jH_d(\mu_j) = \mathbf{v}_j,~ j = 1,\dots, r \label{eq:delayLeftConst} \end{eq} for the given right and left interpolation data. \end{theorem} \begin{proof} The result for the right constraints \eqref{eq:delayRightConst} is obtained as follows: if the delay-free model $G(s)$ satisfies the right constraints for $(f(\lambda_i),\mathbf{r}_i,\mathbf{w}_ie^{-\lambda_i\tau})$, then: \begin{eq} G(f(\lambda_i))\mathbf{r}_i = \mathbf{w}_ie^{-\lambda_i\tau}, \end{eq} which is equivalent to: \begin{eq} G(f(\lambda_i))e^{\lambda_i\tau}\mathbf{r}_i = \mathbf{w}_i, \end{eq} and, invoking Lemma \ref{lemma:DSDrep}, we obtain: \begin{eq} H_d(\lambda_i)\mathbf{r}_i = \mathbf{w}_i. \end{eq} The left constraints \eqref{eq:delayLeftConst} are obtained similarly. \end{proof} Theorem \ref{thm:delayLowner} provides a method to construct a model $\mathbf{H_d} = (E,A,B,C,\tau)$ whose transfer function $H_d(s) = C(sE-Ae^{-s\tau})^{-1}B$ interpolates given right and left constraints.
This is possible by noticing that the problem can be rewritten as right and left interpolation constraints for the delay-free model, for which a realization is obtained by the standard Loewner framework of Theorem \ref{thm:Loewner}. A similar reasoning enables the generalization of Theorem~\ref{thm:DerivativeLoewner}, stated as follows. \begin{theorem}[Derivative delay Loewner framework]\label{thm:DerivdelayLoewner} Let us consider a given system represented by its transfer function $H(s)$, $r$ shift points $\{s_1,\dots,s_r\} \in \mathbb{C}$ and $r$ tangential directions $\{\mathbf{l}_1,\dots, \mathbf{l}_r\} \in \mathbb{C}^{1\times n_y}$, $\{\mathbf{r}_1,\dots,\mathbf{r}_r\}\in \mathbb{C}^{n_u \times 1}$. We assume that for all $k \neq m $, $f(s_k) \neq f(s_m)$, where $f(s) = se^{s\tau}$ ($f$ is one-to-one on the set of interpolation points). The $r$-dimensional single-delay model $\mathbf{\hat{H}} = (\hat{E},\hat{A},\hat{B},\hat{C},\tau)$, as in \eqref{eq:SSDescriptordelay}, interpolates $H(s)$ as follows, for $k=1,\dots , r$: \begin{equation} H({s}_k)\mathbf{r}_k = \hat{H}({s}_k)\mathbf{r}_k,\,\, \mathbf{l}_kH(s_k) = \mathbf{l}_k\hat{H}({s}_k)\, \end{equation} \begin{equation} \mathbf{l}_kH'(s_k)\mathbf{r}_k = \mathbf{l}_k\hat{H}'(s_k)\mathbf{r}_k, \label{eq:pbh21delayinterp} \end{equation} if and only if the $r$-dimensional delay-free model $\mathbf{G} = (\hat{E},\hat{A},\hat{B},\hat{C})$ is constructed with the derivative Loewner framework of Theorem \ref{thm:DerivativeLoewner} for the shift points: \begin{equation} \big(\sigma_1,\dots,\sigma_r\big) = \big(f(s_1),\dots,f(s_r)\big),\end{equation} the transfer function evaluations: \begin{equation}\big( G(\sigma_1), \dots, G(\sigma_r)\big) =\big( H(s_1) e^{-s_1\tau}, \dots, H(s_r) e^{-s_r\tau}\big) \end{equation} and the derivative transfer function evaluations: \begin{equation} \big(G'(\sigma_1),\dots, G'(\sigma_r)\big) = \big(F_1, \dots , F_r\big) \end{equation} where, for $i = 1,\dots, r$: \begin{eq} \begin{array}{rcl} G'(\sigma_i) &=&
F(H(s_i),H'(s_i),s_i) = F_i \\ &=& \big(H'(s_i) - \tau H(s_i)\big)\bigg(\frac{e^{-2 s_i\tau}}{1+\tau s_i}\bigg). \end{array} \end{eq} \end{theorem} \begin{proof} First, note that a single-delay descriptor system can be expressed as \begin{equation}\label{descriptortheorem} \hat{ H }_d(s) = \hat{C}(se^{s\tau}\hat{E}-\hat{A})^{-1}\hat{B}e^{{s\tau}} = G(f(s))e^{s\tau} \end{equation} where $G(s)$ is a descriptor system with representation $(\hat{E},\hat{A},\hat{B},\hat{C})$ and $f(s) = se^{s\tau}$. One can thus use the Loewner matrices to construct a realization of $G(s)$ for the shift points $(\sigma_1,\dots, \sigma_r) = (f(s_1),\dots,f(s_r))$ with transfer function data $(G(\sigma_1),\dots,G(\sigma_r)) = (H(s_1)e^{-s_1\tau},\dots,H(s_r)e^{-s_r\tau}) $. For the derivative data, one can differentiate \eqref{descriptortheorem}, written as $G(f(s)) = \hat{H}_d(s)e^{-s\tau}$, with respect to $s$: \[ G'(f(s))f'(s) = \hat{H}_d'(s)e^{-s\tau}-\tau \hat{H}_d(s)e^{-s\tau} \] and, since $f'(s) = (1+\tau s)e^{s\tau}$, solving this equation for $G'(f(s_i))$ yields the result. \end{proof} This theorem allows one to obtain a single-delay descriptor system which interpolates any given transfer function $H(s)$. It can also be used in the case of data obtained through a signal generator, provided the derivative is accessible as well. Applications of this result can be found in Section \ref{sec:applications}. Now that the delay interpolation framework has been established, one may seek a good interpolant in the sense of the $\mathcal{H}_2$-norm, as formulated in Problem \ref{pb:General}. We now formulate mathematical conditions to select the optimal complex shift points $s_i$ and tangential directions $\mathbf{r}_i$ and $\mathbf{l}_i$.
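The decomposition of Lemma \ref{lemma:DSDrep}, on which Theorems \ref{thm:delayLowner} and \ref{thm:DerivdelayLoewner} rely, can also be checked numerically. The sketch below uses an arbitrary second-order realization of our own choosing and verifies that $C(sE - Ae^{-s\tau})^{-1}B = G(se^{s\tau})e^{s\tau}$ at a few test points.

```python
import cmath

# Assumed example data: a 2x2 realization (E, A, B, C) and a delay tau.
E = [[1.0, 0.0], [0.0, 1.0]]
A = [[0.0, 1.0], [-2.0, -3.0]]
B = [0.0, 1.0]
C = [1.0, 0.5]
tau = 0.4

def solve2(m, b):
    """Solve the 2x2 complex system m x = b by the adjugate formula."""
    det = m[0][0]*m[1][1] - m[0][1]*m[1][0]
    return [( m[1][1]*b[0] - m[0][1]*b[1]) / det,
            (-m[1][0]*b[0] + m[0][0]*b[1]) / det]

def Hd(s):
    """Delay transfer function H_d(s) = C (sE - A e^{-s tau})^{-1} B."""
    e = cmath.exp(-s*tau)
    m = [[s*E[i][j] - A[i][j]*e for j in range(2)] for i in range(2)]
    x = solve2(m, B)
    return C[0]*x[0] + C[1]*x[1]

def G(z):
    """Delay-free transfer function G(z) = C (zE - A)^{-1} B."""
    m = [[z*E[i][j] - A[i][j] for j in range(2)] for i in range(2)]
    x = solve2(m, B)
    return C[0]*x[0] + C[1]*x[1]

f = lambda s: s*cmath.exp(s*tau)          # the substitution f(s) = s e^{s tau}

for s in [0.3 + 1.0j, 1.0 - 0.5j, 2.0 + 0.0j]:
    assert abs(Hd(s) - G(f(s))*cmath.exp(s*tau)) < 1e-12
print("Decomposition H_d(s) = G(f(s)) e^{s tau} verified.")
```

This identity is precisely what justifies rescaling the right and left data by $e^{-\lambda_i\tau}$ and $e^{-\mu_j\tau}$ before applying the standard Loewner construction.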
\section{$\mathcal{H}_2$ model reduction optimality conditions}\label{sec:Optimality} \subsection{Preliminary results on $\mathcal{H}_2$ model reduction optimality conditions} The first-order optimality conditions of Problem \ref{pb:General} for the delay-free case $\mathbf{\hat{H}} = (\hat{E},\hat{A},\hat{B},\hat{C},0)$, in terms of poles and residues, are given in Theorem \ref{thm:opt}. \begin{theorem}[\cite{gugercin2008h_2}]\label{thm:opt} Assume that $\mathbf{H}$ and $\mathbf{\hat{H}}$ have semi-simple poles and suppose that $\mathbf{\hat{H}}$ is a $r^{th}$-order finite-dimensional model with transfer function: \begin{equation}\label{sysr} \hat{H}(s) = \sum_{k =1}^r \frac{\hat{c}_k\hat{b}_k^T}{s-\hat{\lambda}_k}. \end{equation} If $\mathbf{H}, \mathbf{\hat{H}}\in \mathcal{H}_2$ and $\mathbf{\hat{H}}$, of the form \eqref{eq:SSDescriptor}, is a local minimum of the $\mathcal{H}_2$ delay-free approximation problem, then the following interpolation equations hold: \begin{equation} H(-\hat{\lambda}_k)\hat{b}_k = \hat{H}(-\hat{\lambda}_k)\hat{b}_k,\,\,\hat{c}^T_kH(-\hat{\lambda}_k) = \hat{c}^T_k\hat{H}(-\hat{\lambda}_k)\, \label{eq:pbh2} \end{equation} \begin{equation} \hat{c}^T_kH'(-\hat{\lambda}_k)\hat{b}_k = \hat{c}^T_k\hat{H}'(-\hat{\lambda}_k)\hat{b}_k, \label{eq:pbh21} \end{equation} for all $k= 1,\dots,r$, where $\hat{\lambda}_k$ are the poles of $\mathbf{\hat{H}}$ and $\hat{b}_k$, $\hat{c}_k$ are the associated tangential directions. \end{theorem} As for the interpolation conditions above, the $\mathcal{H}_2$-optimality conditions are now extended to the single-delay case. \subsection{Results on single-delay model reduction $\mathcal{H}_2$ optimality conditions} \begin{proposition} Using the notation of Lemma \ref{lemma:DSDrep}, $\lambda \in \mathbb{C}$ is a pole of $H_d(s) = C(sE-Ae^{-s\tau})^{-1}B$ if and only if $f(\lambda) \in \mathbb{C}$ is a generalized eigenvalue of the pair $(A,E)$.
\end{proposition} \begin{proof} $\lambda$ is a pole of $H_d(s)$ $\iff $ $f(\lambda)$ is a pole of $G(s)$ $\iff $ $f(\lambda) \in$ $\sigma(A,E)$, the spectrum of the pencil $(A,E)$. \end{proof} Let us now recall that the Lambert $W$ function is a multivalued (except at $0$) function associating with each $k^{th}$ complex branch a complex number $W_k(z)$ such that \[ z = W_k(z)e^{W_k(z)}, \hspace{0.5cm} k\in \mathbb{Z}.\] Consequently, any TDS can be viewed as a system with an infinite number of poles. Now, as in the delay-free case, analogous optimality conditions are derived in the case where the approximation model has the form \eqref{eq:SSDescriptordelay}. \begin{theorem}[Delay $\mathcal{H}_2$-optimality conditions]\label{thm:DelayOptim} Assume that $\mathbf{H}$ and $\mathbf{\hat{H}_d}$ have semi-simple poles and suppose that $\mathbf{\hat{H}_d}$ is a $r^{th}$-order single-delay model whose transfer function reads: \begin{equation}\label{sysrdelay} \hat{H}_d(s) = \hat{C}(s\hat{E} - \hat{A}e^{-s\tau})^{-1}\hat{B} = \sum\limits_{p =1}^r \frac{\hat{c}_p\hat{b}_p^T}{s-\hat{\alpha}_pe^{-s\tau}}. \end{equation} If $\mathbf{H}, \mathbf{\hat{H}_d}\in \mathcal{H}_2$ and $\mathbf{\hat{H}_d}$ is a local minimum of the $\mathcal{H}_2$ approximation problem, then the following interpolation equations hold: \begin{equation}\label{eq:pbh2delay} \hspace{-0.03cm} H(-\hat{\lambda}_{k,p})\hat{b}_p = \hat{H}_d(-\hat{\lambda}_{k,p})\hat{b}_p,~\hat{c}^T_pH(-\hat{\lambda}_{k,p}) = \hat{c}^T_p\hat{H}_d(-\hat{\lambda}_{k,p})\, \end{equation} \begin{equation}\label{eq:pbh21delay} \hat{c}^T_pH'(-\hat{\lambda}_{k,p})\hat{b}_p = \hat{c}^T_p\hat{H}_d'(-\hat{\lambda}_{k,p})\hat{b}_p, \end{equation} for all $p= 1,\dots,r$ and $k \in \mathbb{Z}$, where, for given $p$, the $\hat{\lambda}_{k,p}$ are defined by: \begin{equation}\label{eq:polesinfinity} \hat{\lambda}_{k,p}= \frac{1}{\tau}W_k(\tau \hat{\alpha}_p) \end{equation} with $W_k$ the $k^{th}$ branch of the multivalued Lambert function.
\end{theorem} \begin{proof} The proof is similar to that of Theorem \ref{thm:opt} in \cite{gugercin2008h_2}, using the infinite pole and residue decomposition of the model $\hat{H}_d(s)$. \end{proof} Theorem \ref{thm:DelayOptim} states that the optimal model $\mathbf{\hat{H}_d}$ of Problem \ref{pb:General}, if it exists, satisfies an infinite number of optimality conditions related to the Lambert $W$ function and the generalized eigenvalues of $(\hat{E},\hat{A})$. Nevertheless, given $\tau \in \mathbb{R}$, since $\hat{H}_d(s) = \hat{C}(s\hat{E}-\hat{A}e^{-s\tau})^{-1}\hat{B}$ is parametrized by a finite number of variables, it can be shown that it lives on a sub-manifold of dimension $n(n_u+n_y)$. This follows by noticing that there is a simple isomorphism between $(E,A,B,C,\tau)$ and $(E,A,B,C,0)$, and that the latter is parametrized by $n(n_u+n_y)$ variables, as shown in \cite{byrnes1979applications,dooren2007}. Hence, all the optimality conditions cannot be achieved in the general case. However, as stated in the following proposition, in a particular case the infinite set of optimality conditions reduces to an equivalent finite number of relations. \begin{proposition} Assume the same hypotheses as in Theorem \ref{thm:DelayOptim} on $\mathbf{H}$ and $\mathbf{\hat{H}}$.
Moreover, if the original model $\mathbf{H} = (E,A,B,C,\tau)$ is itself a single-delay model with the same delay $\tau$, then the infinite set of optimality conditions of Theorem \ref{thm:DelayOptim} reduces to the following finite set, for $p=1,\dots, r$: \begin{equation}\label{eq:pbh2delayfinite} H(-\hat{\lambda}_{1,p})\hat{b}_p = \hat{H}(-\hat{\lambda}_{1,p})\hat{b}_p,\,\,\hat{c}^T_pH(-\hat{\lambda}_{1,p}) = \hat{c}^T_p\hat{H}(-\hat{\lambda}_{1,p})\, \end{equation} \begin{equation}\label{eq:pbh21delayfinite} \hat{c}^T_pH'(-\hat{\lambda}_{1,p})\hat{b}_p = \hat{c}^T_p\hat{H}'(-\hat{\lambda}_{1,p})\hat{b}_p, \end{equation} where $\hat{\lambda}_{1,p}= \frac{1}{\tau}W_1(\tau \hat{\alpha}_p) $ and $W_1$ is the evaluation of the Lambert function along its $1^{st}$ branch. \end{proposition} \begin{proof} One has to prove that the finite conditions \eqref{eq:pbh2delayfinite}-\eqref{eq:pbh21delayfinite} imply \eqref{eq:pbh2delay}-\eqref{eq:pbh21delay}. This follows from the fact that: \[f(\hat{\lambda}_{1,p}) = \hat{\lambda}_{1,p} e^{\hat{\lambda}_{1,p}\tau} = \hat{\lambda}_{k,p} e^{\hat{\lambda}_{k,p}\tau} = f(\hat{\lambda}_{k,p}),~\forall k\in \mathbb{Z}.\] Thus, using the decomposition given in Lemma \ref{lemma:DSDrep}, it can be shown that: \begin{equation*} \begin{array}{rcl} H(\hat{\lambda}_{k,p}) &=& G(f(\hat{\lambda}_{k,p}))e^{\tau\hat{\lambda}_{k,p}} \\ &=& G(f(\hat{\lambda}_{1,p}))e^{\tau\hat{\lambda}_{1,p}}e^{\tau(\hat{\lambda}_{k,p} - \hat{\lambda}_{1,p})} \\ &=& H(\hat{\lambda}_{1,p}) e^{\tau(\hat{\lambda}_{k,p} - \hat{\lambda}_{1,p})} \end{array} \end{equation*} and finally \[ H(\hat{\lambda}_{k,p})\hat{b}_p = \kappa H(\hat{\lambda}_{1,p})\hat{b}_p = \kappa\hat{H}(\hat{\lambda}_{1,p})\hat{b}_p = \hat{H}(\hat{\lambda}_{k,p})\hat{b}_p ,\] $\forall k \in \mathbb{Z}$, where $\kappa = e^{\tau(\hat{\lambda}_{k,p}-\hat{\lambda}_{1,p})}$. The reasoning is analogous for the left and derivative constraints, which concludes the proof.
\end{proof} Now that the optimality conditions have been derived, the next section is dedicated to the derivation of an algorithm, based on \textbf{TF-IRKA} \cite{beattie2012realization} and denoted \textbf{dTF-IRKA} for \emph{delay Transfer Function Iterative Rational Krylov Algorithm}, which allows one to obtain a sub-optimal model of the form \eqref{eq:SSDescriptordelay} satisfying $n(n_u+n_y)$ optimality conditions. \section{Delay TF-IRKA algorithm}\label{sec:delayTFIRKA} The algorithm proposed in this section derives a system which satisfies the optimality conditions at $r$ complex points. The underlying idea is based on \textbf{TF-IRKA} \cite{beattie2012realization}, which finds a model satisfying the optimality conditions \eqref{eq:pbh2}-\eqref{eq:pbh21} using a fixed-point iteration. At each iteration, the new shift points are the poles located on the $1^{st}$ branch of the Lambert function only. This algorithm is referred to as \textbf{delay TF-IRKA} (or \textbf{dTF-IRKA}) and is summarized as follows: \begin{algorithm}[H] \caption{ dTF-IRKA } \label{TFIRKAalgo} \begin{algorithmic}[1] \State \textbf{Initialization:} transfer function $H(s)$, approximation order $r\in \mathbb{N}^*$, initial interpolation points $\sigma^0 = \{ {\sigma_1^0,\dots,\sigma_r^0}\}\subset \mathbb{C}$, tangential directions $\{b_1,\dots,b_r\} \in \mathbb{C}^{n_u \times 1}$ and $\{c_1,\dots,c_r\}\in \mathbb{C}^{n_y\times 1}$, and $W_1$ the first branch of the Lambert function. \While{not convergence} \State\textbf{Build} $\big(\hat{E}$, $\hat{A}$, $\hat{B}$, $\hat{C},\tau\big)$ using Theorem~\ref{thm:DerivdelayLoewner}.
\State \textbf{Solve} the generalized eigenvalue problem in $x_i^{(k)}$, $y_i^{(k)}$ and $\lambda_i^{(k)}$, for $i=1,\dots,r$ \begin{eq} \begin{array}{rcl} \hat{A}^{(k)}x_i^{(k)}&=&\lambda_i^{(k)}\hat{E}^{(k)}x_i^{(k)} \\ y_i^{(k)*}\hat{E}^{(k)}x_j^{(k)} &=& \delta_{i,j} \end{array} \end{eq} \State \emph{Set}, for $i=1,\dots,r$ \begin{eq} \begin{array}{rcl} \sigma_i^{(k+1)} &\gets& -\frac{1}{\tau}W_1\big(\tau\lambda_i^{(k)}\big) \\ b_i^{(k+1)T} &\gets& y_i^{(k)}\hat{B}^{(k)}\\ c^{(k+1)}_i &\gets& \hat{C}^{(k)}x_i^{(k)} \end{array} \end{eq} \EndWhile \State \textbf{Ensure} conditions \eqref{eq:pbh2delayfinite}-\eqref{eq:pbh21delayfinite} are satisfied. \State\textbf{Build} $\mathbf{\hat{H}} = \big(\hat{E}$, $\hat{A}$, $\hat{B}$, $\hat{C},\tau\big)$. \end{algorithmic} \end{algorithm} If the algorithm converges, the approximate model satisfies the optimality conditions \eqref{eq:pbh2delayfinite}-\eqref{eq:pbh21delayfinite} and is therefore sub-optimal. \textbf{dTF-IRKA} thus provides good (in the sense of the metric of Problem \ref{pb:General}) shift points and tangential directions for which the interpolation problem leads to a good approximate model. As a remark, one should note that the Lambert function evaluated on its $1^{st}$ branch does not map complex conjugate pairs to complex conjugate pairs. The set of shift points might therefore not be closed under conjugation, in which case the obtained single-delay interpolation model does not admit a real representation. To avoid this, one should enforce, at each iteration, that the set of shift points is closed under conjugation. \section{Applications}\label{sec:applications} This section is dedicated to the application of the two methods proposed in Sections \ref{sec:delayinterpolation} and \ref{sec:delayTFIRKA}, namely delay model interpolation and optimal $\mathcal H_2$ model approximation, and emphasizes the potential benefit and effectiveness of the proposed approach.
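The pole structure \eqref{eq:polesinfinity} used by the algorithm can be illustrated numerically. In the sketch below, a simple Newton iteration stands in for a library Lambert-$W$ routine, and the eigenvalue $\alpha$ and delay $\tau$ are arbitrary illustrative values; each branch $k$ yields a pole satisfying the characteristic equation $\lambda = \alpha e^{-\lambda\tau}$.

```python
import cmath

def lambert_w(z, k=0, tol=1e-13, maxit=200):
    """Branch-k Lambert W of z != 0 via Newton's iteration on w e^w = z.

    The asymptotic initial guess L - log(L), with L = log(z) + 2*pi*i*k,
    targets branch k; this is a simple sketch, not a robust
    general-purpose implementation.
    """
    L = cmath.log(z) + 2j*cmath.pi*k
    w = L - cmath.log(L) if abs(L) > 1e-3 else complex(0.0)
    for _ in range(maxit):
        ew = cmath.exp(w)
        step = (w*ew - z) / (ew*(w + 1.0))
        w -= step
        if abs(step) < tol:
            break
    return w

tau = 0.5
alpha = -1.0 + 2.0j   # assumed generalized eigenvalue of the pair (A, E)

# Each branch k yields one pole of the delay model: lambda solves
# lambda = alpha * exp(-lambda * tau), i.e. lambda = W_k(tau*alpha) / tau.
for k in (-1, 0, 1):
    lam = lambert_w(tau*alpha, k) / tau
    assert abs(lam - alpha*cmath.exp(-lam*tau)) < 1e-8
print("Branch poles satisfy the characteristic equation.")
```

This is the mechanism exploited in the shift update $\sigma_i^{(k+1)} \gets -\frac{1}{\tau}W_1(\tau\lambda_i^{(k)})$ of the algorithm above, restricted to a single branch.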
\subsection{Example 1: rational interpolation} Let us consider a dynamical model governed by the following delay model $\mathbf{H} \in \mathcal{H}_2$, whose transfer function is given by \begin{equation}\label{eq:example1TF} H(s) =\dfrac{2s+1.3 e^{-s}}{s^2+1.3s e^{-s}+0.3e^{-2s}}. \end{equation} First, model \eqref{eq:example1TF} (which is of order 2) is approximated by a delay-free model of order $r=2$ using \textbf{TF-IRKA} (Figure \ref{fig:Exemple1a}, green dash-dotted curve). It is also interpolated using the delay Loewner framework with derivatives, as stated in Theorem \ref{thm:DerivdelayLoewner}, with delay $\tau =1$ and shift points $s_1 = 0.1$ and $s_2 = 1$ (Figure \ref{fig:Exemple1a}, red dashed curve). All results are reported in Figure \ref{fig:Exemple1a} and compared to the original model $H(s)$ (blue solid line). \begin{figure}[ht] \centering \includegraphics[width=0.75\textwidth]{Example1_order2} \caption{Bode diagram of the original model (blue solid line), the model of order $r=2$ approximated with \textbf{TF-IRKA} (green dash-dotted curve) and the delay interpolation model of order $r=2$ obtained using Theorem \ref{thm:DerivdelayLoewner} (red dashed line).} \label{fig:Exemple1a} \end{figure} Figure \ref{fig:Exemple1a} shows that the model defined in \eqref{eq:example1TF} is exactly interpolated by the delay model obtained by Theorem \ref{thm:DerivdelayLoewner}, for any choice of interpolation points. Indeed, since the transfer function \eqref{eq:example1TF} admits a realization of the form \eqref{eq:SSDescriptordelay} of order 2, it can be reconstructed using Theorem \ref{thm:DerivdelayLoewner}. Figure \ref{fig:Exemple1b} shows quite similar results, but where \textbf{TF-IRKA} has targeted an order $r=4$.
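The order-2 structure claimed for \eqref{eq:example1TF} can be made explicit. The companion-form matrices below are our own reconstruction (not given in the text), chosen so that $\det(sI - Ae^{-s})$ matches the denominator of \eqref{eq:example1TF}; the script checks that the resulting realization of the form \eqref{eq:SSDescriptordelay} reproduces $H(s)$ exactly.

```python
import cmath

# H(s) = (2s + 1.3 e^{-s}) / (s^2 + 1.3 s e^{-s} + 0.3 e^{-2s})
H = lambda s: ((2*s + 1.3*cmath.exp(-s))
               / (s**2 + 1.3*s*cmath.exp(-s) + 0.3*cmath.exp(-2*s)))

# Candidate companion-form realization (E, A, B, C, tau), tau = 1
# (our reconstruction, assumed for illustration).
E = [[1.0, 0.0], [0.0, 1.0]]
A = [[0.0, 1.0], [-0.3, -1.3]]
B = [0.0, 1.0]
C = [1.3, 2.0]
tau = 1.0

def Hd(s):
    """Evaluate C (sE - A e^{-s tau})^{-1} B for the 2x2 realization."""
    e = cmath.exp(-s*tau)
    m = [[s*E[i][j] - A[i][j]*e for j in range(2)] for i in range(2)]
    det = m[0][0]*m[1][1] - m[0][1]*m[1][0]
    x0 = ( m[1][1]*B[0] - m[0][1]*B[1]) / det   # m^{-1} B via the adjugate
    x1 = (-m[1][0]*B[0] + m[0][0]*B[1]) / det
    return C[0]*x0 + C[1]*x1

for s in [0.1, 1.0, 0.5 + 2.0j]:
    assert abs(Hd(s) - H(s)) < 1e-12
print("Order-2 delay realization matches H(s) from Example 1.")
```

Because such an exact order-2 realization of the form \eqref{eq:SSDescriptordelay} exists, Theorem \ref{thm:DerivdelayLoewner} with $r=2$ recovers \eqref{eq:example1TF} exactly, whereas a delay-free model of the same order cannot.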
\begin{figure}[H] \centering \includegraphics[width=0.75\textwidth]{Example1_order4} \caption{Bode diagram of the original model (blue solid line), the model of order $r=4$ approximated with \textbf{TF-IRKA} (green dashed dotted curve) and the delay interpolation model of order $r=2$ obtained using Theorem \ref{thm:DerivdelayLoewner} (red dashed line).} \label{fig:Exemple1b} \end{figure} \begin{figure}[H] \centering \includegraphics[width=0.75\textwidth]{Example1_order4_error} \caption{Singular value frequency response diagram of the original model (blue solid line), the approximation error of the model of order $r=4$ obtained with \textbf{TF-IRKA} (green dashed dotted curve) and the approximation error of the delay interpolation model of order $r=2$ obtained using Theorem \ref{thm:DerivdelayLoewner} (red dashed line).} \label{fig:Exemple1b_error} \end{figure} This specific example clearly emphasizes the fact that, if the original model is a delay model, the counterpart of obtaining a good delay-free approximation (\emph{e.g.}, using \textbf{TF-IRKA}) is an increase of the approximation order (here the original model of order 2 must be approximated with an order 4 to recover the frequency behaviour well). As illustrated in Figure \ref{fig:Exemple1b_error}, even with an order $r=4$, the delay-free model cannot perfectly recover the original infinite dimensional model, while the delay model (obtained by Theorem \ref{thm:DerivdelayLoewner}) provides a perfect match (up to numerical machine precision). The proposed delay Loewner framework, on the other hand, allows one to find an exact realization. \subsection{Example 2: optimal approximation and method scalability} Let us now consider the SISO Los Angeles Hospital model extracted from the COMP$l_eib$ library \cite{COMPleib2003}, whose order is $n=48$, denoted $\mathbf{H}_{build} = C(sI_{48}-A)^{-1}B \in \mathcal{H}_2$.
In order to fit the framework proposed in this paper, a delay model is constructed by injecting an internal delay $\tau = 0.01$ into all states, \emph{i.e.}, $\mathbf{H}_{delay} = C(sI_{48}-Ae^{-s\tau})^{-1}B$. This transfer function is first interpolated, producing a realisation of order $r=10$, by applying the delay Loewner framework from Theorem \ref{thm:DerivdelayLoewner} using $10$ real shift points logarithmically spaced from $0.1$ to $1$. Then, an approximation is obtained using the \textbf{dTF-IRKA} algorithm proposed in Section \ref{sec:delayTFIRKA}. Figure \ref{fig:Exemple2} compares the Bode plots of these models. \begin{figure}[H] \centering \includegraphics[width=0.75\textwidth]{Example2} \caption{Bode diagram of the original model (blue curve), the delay Loewner interpolation model of order $r=10$ (green dashed curve) and the \textbf{dTF-IRKA} model of order $r=10$ (red dashed curve).} \label{fig:Exemple2} \end{figure} As clearly shown in Figure \ref{fig:Exemple2}, the proposed \textbf{dTF-IRKA} allows one to obtain shift points and tangential directions for which the interpolated delay model is much more accurate than the interpolation based on the a priori chosen shift points. This also illustrates the scalability of the proposed approach to larger models. \section{Conclusions and perspectives} In this paper, the problem of interpolating and approximating any dynamical model (provided its transfer function or its evaluation at given points) by a single time-delay finite dimensional one is analysed. Firstly, we present an extended framework which generalizes the Loewner one \cite{mayo2007framework} to the case where the interpolant is a single time-delay model. Then, as a second contribution, the $\mathcal{H}_2$-optimality conditions are derived to solve Problem \ref{pb:General}, leading to an infinite set of conditions.
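A pointwise evaluator for the delayed transfer function $C(sI-Ae^{-s\tau})^{-1}B$ used above can be sketched as follows; the small random system is a hypothetical stand-in for the order-48 hospital model, which is not reproduced here.

```python
import numpy as np

def delay_tf(A, B, C, tau):
    """Return a function evaluating H_delay(s) = C (sI - A e^{-s tau})^{-1} B."""
    n = A.shape[0]
    def H(s):
        return (C @ np.linalg.solve(s * np.eye(n) - A * np.exp(-s * tau), B)).item()
    return H

# 10 real shift points logarithmically spaced from 0.1 to 1, as in the text
shifts = np.logspace(np.log10(0.1), np.log10(1.0), 10)

# hypothetical small stable SISO system standing in for the n=48 model
rng = np.random.default_rng(0)
A = -2.0 * np.eye(4) + 0.1 * rng.standard_normal((4, 4))
B = rng.standard_normal((4, 1))
C = rng.standard_normal((1, 4))
H = delay_tf(A, B, C, tau=0.01)
samples = np.array([H(s) for s in shifts])
```

Setting $\tau=0$ recovers the ordinary transfer function, which provides a simple sanity check on the evaluator.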
Finally, an algorithm, denoted \textbf{dTF-IRKA}, allowing one to obtain a model which satisfies a finite number of optimality conditions is developed and successfully applied to some numerical examples\footnote{The reader should note that the \textbf{dTF-IRKA} will be made available in the MORE toolbox \cite{vuillemin2012}, developed by the Onera research group.}. One weakness of the proposed method is that the delay value $\tau$ must be known in advance. Future work will address this issue by treating the delay as an optimization variable in the $\mathcal{H}_2$ optimization problem. The extension to the multiple-delay case will also be addressed in future work.
\section{INTRODUCTION} More than 7000 transit-like light curves have been obtained by Kepler observations\footnote{http://exoplanetarchive.ipac.caltech.edu/cgi-bin/ExoTables/nph-exotbls?dataset=cumulative}. Among them, more than 3000 objects are identified as planetary candidates and more than 2000 objects are false positives. Over 800 of them show transit depths comparable to those caused by gas giant planets. In this paper, we point out that some of the known inflated gas giant planets and objects classified as false positive detections could be gravitationally bound pairs of gas-giant planets, which we call ``binary planets." The orbital stability of exomoons around gas-giant planets has been studied, and these moons are stable except for those around close-in gas giants \citep[e.g.,][]{Namouni,gong}. ``Habitability" of the moons around gas giants in habitable zones has also been discussed \citep[e.g.,][]{Williams, Heller12, Heller13}. In addition, the observational detectability of exomoons has been discussed and many detection methods have been proposed \citep[e.g.,][]{SS99,SSS,kippingttv,kippingb,SA09,Kaltenegger,kipping12,zgy,Lewis13}. Gas giants that are distant from their host star may commonly have moons, because regular moons are formed in circumplanetary disks \citep[e.g.,][]{Mosqueira03a,Mosqueira03b,Canup06,Sasaki10} and the formation of these disks is a part of gas-giant planet formation. However, since \citet{Canup06} proposed that the maximum mass of a moon may be $\la 10^{-4} M_{\rm p}$, where $M_{\rm p}$ is the planet mass, the detection of these moons is not easy. On the other hand, it is relatively easy to distinguish binary gas giants from single gas giants, because the companions are as large as the primary.
\citet[][hereafter referred to as Paper I]{Ochiai} found, through N-body simulations incorporating both planet-planet and planet-star tidal interactions, that the formation rate of binary planets is as much as $\sim 10\%$ of the systems in which orbital crossing among multiple gas giants occurs. Furthermore, Paper I also predicted the binary orbital separation distribution and the limit of stellarcentric semimajor axis of the binaries beyond which the binary orbits are stable during the main-sequence lifetime of solar-type stars. Therefore, binary planets may have a better chance of being detected than exomoons, if these theoretical predictions are correct.\footnote{If exomoons can be formed in a different way than \citet{Canup06} considered, exomoons can be larger and their detection is not so difficult.} The important point is that the mechanism to form binary planets proposed by \citet{Ochiai} is one of the natural outcomes of orbital crossing among gas giants. The gas giants so far discovered in extrasolar planetary systems often have eccentric (say, $e \ga 0.2$) orbits, except for close-in planets whose eccentricities are tidally damped. It is considered that orbital crossing, in systems containing three or more gas giants, is the most likely origin of gas giants in eccentric orbits \citep[e.g.,][]{LI97,MW02,c08,Juric08}. Typical fates of such three-planet systems are ejection of a planet, planet-planet collisions, and planet-star collisions. However, if planet-star tidal interaction is taken into account, most of the planet-star collisions are replaced by the formation of ``hot jupiters." \citet{nagasawa} and \citet{BN12} found through N-body simulations that hot jupiters are formed in as much as 10-30\% of the systems. The discovery of retrograde hot jupiters \citep[e.g.,][]{Narita09,Winn09} strongly suggests that some fraction of hot jupiters were formed through this tidal capture.
\citet{posi} pointed out that incorporation of planet-planet tidal interactions replaces some planet-planet collisions with the formation of binary planets. While \citet{posi} assumed arbitrary initial conditions of two giants in closely packed, nearly circular orbits, Paper I considered more appropriate initial conditions consisting of three giants in separated orbits, and found that the formation rate is still as much as $\sim 10\%$. \citet{Ida13} showed through a planet population synthesis simulation that such orbital crossing among gas giants commonly occurs in systems formed from relatively massive protoplanetary disks. Possible methods to detect binary gas-giant planets include radial velocity, transit light curves, transit timing variations (TTV), the Rossiter-McLaughlin (RM) effect, and gravitational microlensing. Because the predicted binaries are tight, the detection of such binary planets will be through deviations from a single-planet fit. \citet{posi} showed that the radial velocity amplitude of the deviation is too small. Detection by gravitational microlensing is also possible if a background star passes between the binary planets, but they suggested that the probability of such events is too low. We found that TTV signals are also too small. While detection by the RM effect is not ruled out, a bright host star is required, and the number of sufficiently bright stars with transiting gas giants beyond 0.3 AU is limited. Therefore, transit (light curve) observation is the most promising method to detect these binary planets. In addition, binary planets may be sufficiently numerous to be detected in large transiting planet surveys. Assuming that the $\sim 10\%$ of planetary systems hosting eccentric gas giants are all a result of orbital instability, and combining this with the predicted $\sim 10\%$ binary planet formation probability, we have that $\sim 1\%$ of systems could host a binary planet.
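As a back-of-envelope check, this occurrence estimate can be combined with the geometric transit probability at the typical stellarcentric semi-major axis of 0.5 AU adopted in the text (the solar radius and AU values are standard constants):

```python
R_SUN = 6.957e8   # m, nominal solar radius
AU = 1.496e11     # m

f_unstable = 0.10  # fraction of systems with eccentric giants from instability
f_binary = 0.10    # binary formation probability per unstable system (Paper I)
a_G = 0.5 * AU     # typical stellarcentric semi-major axis

p_transit = R_SUN / a_G                        # geometric transit probability, ~1%
p_total = f_unstable * f_binary * p_transit    # ~1 in 10^4 stars
```

The product comes out near $10^{-4}$, i.e. roughly 1 transiting binary pair per 10000 monitored stars, as quoted in the text.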
Taking a typical stellarcentric semi-major axis of 0.5 AU, this gives a transit probability of $\sim 1\%$. Consequently, we expect that approximately 1 in 10000 stars would host a transiting gas giant binary planet pair. This compares favourably with the $\sim 145000$ stars monitored by Kepler and $\sim 100000$ light curves produced by CoRoT. Although \citet{posi} also concluded that transit observations are the most promising way to detect binary planets, they did not discuss how planet binarity modifies transit light curves, or calculate detection probabilities. Nor did they calculate the long-term tidal evolution of the binaries. \citet{SA09} calculated the modulation of transit light curves, but as they considered Earth-Moon-like systems, focusing on mutual eclipses, their arguments cannot be applied to the binary gas giants that we consider here. In this paper, based on the detailed theoretical predictions of Paper I, we calculate the effect of planet binarity on transit light curves and discuss the possibility of detecting binary planets. In section \ref{riron}, we summarize the results of Paper I. In \S \ref{rei}, we predict light curves of possible binary planet systems and discuss the detectability of extrasolar binary planets. In \S \ref{KepSec}, we apply this work to the problem of detecting binary planets in the CoRoT and Kepler candidates, while \S \ref{kousatsu} is a summary. \section{THEORETICAL PREDICTIONS} \label{riron} Here we briefly summarize the results of theoretical calculations in Paper I.
Paper I carried out two sets of calculations: 1) N-body simulations of three gas giants incorporating planet-planet dynamical tides (as well as planet-star tides) on timescales of $\sim 10^7$ years to investigate tidal capture forming binary planets and 2) numerical integration of the long-term evolution of the binary orbits due to planet-planet and planet-star quasi-static tides during the main-sequence lifetime of solar-type stars ($\sim 10^{10}$ years). Dynamical behaviors in two-planet systems are qualitatively different from those in systems with three or more planets. For the case of two planets, they immediately start orbital crossing when their initial orbital separation is smaller than a particular critical value, while close encounters never happen otherwise. In systems with three or more planets, there is no solid stability boundary. With modest initial orbital separations, three-planet systems can start orbital crossing after their eccentricities are built up over relatively long timescales \citep[e.g.,][]{Chambers}. Because it is not easy to establish unstable orbital configurations of two-planet systems, Paper I considered three-gas-giant systems having equal masses of $M_{\rm J}$ and radii of $2R_{\rm J}$, where $M_{\rm J}$ and $R_{\rm J}$ are the present values of Jupiter. The results of the N-body simulations are summarized as follows: \begin{enumerate} \item During close encounters, energy dissipation due to planet-planet tides often results in the formation of a gravitationally bound pair of planets (binary planets). Tidal capture usually occurs in the early phase of the orbital crossing, before the planets' stellarcentric eccentricities have been maximally excited. The formation rate is $\sim 10\%$ of the runs, almost independent of the initial stellarcentric semimajor axes of the planets. \item The stellarcentric semimajor axes of the binary barycenters are comparable to the initial locations.
Their stellarcentric eccentricities are distributed in a broad range with median values of $\sim 0.15$. \item The binary planets are tight binaries. After the orbital circularization and long-term tidal evolution, the binary separations are $\sim 3-5$ times the sum of the physical radii of the planets ($R_{\rm tot}$). \end{enumerate} Hereafter we use the subscripts "0", "1", and "2" to represent quantities at the epochs of tidal capture, tidal circularization, and the spin-orbit synchronous state just after quasi-static planet-planet tidal evolution, respectively. Because tidal interaction is a sensitive function of distance, the close encounters that lead to tidal capture are usually grazing ones. So, the binary orbits just after the tidal capture have pericenter distances of $q_{\rm bi,0}\sim R_{\rm tot}$--$2R_{\rm tot}$ and binary orbital eccentricities $e_{\rm bi,0} \sim 1$. The binary separations after tidal circularization are given by $a_{\rm bi,1} \sim 2q_{\rm bi,0} \sim 2R_{\rm tot}$--$4R_{\rm tot}$ due to conservation of angular momentum ($\sqrt{a_{\rm bi,1}} \sim \sqrt{a_{\rm bi,0}(1-e_{\rm bi,0}^2)}\sim \sqrt{2q_{\rm bi,0}}$). After the tidal capture to form binary planets and the binary orbital circularization due to planet-planet dynamical tides, the binary separation expands, and the binary planet pair enters a spin-orbit synchronous state through quasi-static tidal evolution. The initial total angular momentum of the circularized binary is $L_1 = 2 \times (2/5)M_{\rm p} R_{\rm p}^2 \omega_{\rm p}+ \mu a_{\rm bi,1}^2 \Omega_{\rm bi,1}$, where $\mu = M_{\rm p}/2$, $M_{\rm p}$ is the planetary mass, $R_{\rm p}$ is the planetary radius, $\omega_{\rm p}$ is the spin angular velocity of the individual planets, $\Omega_{\rm bi,1} = \sqrt{2GM_{\rm p}/a_{\rm bi,1}^3}$, and $G$ is the gravitational constant.
In the spin-orbit synchronous state after quasi-static planet-planet tidal evolution, the total angular momentum is $L_2 = 2 \times (2/5)M_{\rm p} R_{\rm p}^2 \Omega_{\rm bi,2}+ \mu a_{\rm bi,2}^2 \Omega_{\rm bi,2}$. For an initial spin period of 10 hours, $M_{\rm p}=M_{\rm J}$ and $R_{\rm p}=2R_{\rm J}$, the orbital separation in the spin-orbit synchronous state is given by \begin{eqnarray} a_{\rm bi,2} &\sim & a_{\rm bi,1}\left(1+\frac45\sqrt{\frac2{GM_{\rm p}a_{\rm bi,1}}}R_{\rm p}^2\omega_{\rm p}\right)^2 \nonumber \\ &\sim& a_{\rm bi,1} \left(1+0.4\bun{a_{\rm bi,1}}{10R_{\rm J}}^{-0.5}\right)^2, \label{eq:a_bi_2} \end{eqnarray} where we assumed $a_{\rm bi,2} \gg R_{\rm p}$. The distributions of $a_{\rm bi,1}$ and $a_{\rm bi,2}$ obtained by Paper I are shown in Fig.~\ref{fig:a_bi}. At stellarcentric distances $a_{\rm G} \la a_{\rm G, Hill} \simeq 0.2$ AU, $a_{\rm bi,2}$ exceeds $r_{\rm H}/3$, where $r_{\rm H}$ is the Hill radius ($r_{\rm H} = (M_{\rm p}/3M_\ast)^{1/3}a_{\rm G}$), and the perturbations of the central star destabilize the binary \citep{Sasaki}. At $a_{\rm G} \la a_{\rm G, tide} \simeq 0.4$ AU, the planet-star quasi-static tide removes the binary orbital angular momentum and the binary planets collide with each other within the main-sequence phase of solar-type stars ($\sim 10^{10}$ years). Since the planets' gas envelopes fully contract in $10^{8}$ years, in the quasi-static tidal evolution $R_{\rm p}=R_{\rm J}$ is more appropriate than $R_{\rm p}=2R_{\rm J}$. With $R_{\rm p}=R_{\rm J}$, the critical stellarcentric semimajor axis is $a_{\rm G, tide} \sim 0.3$ AU, rather than 0.4 AU. Therefore, binary planets should be orbitally stable at $a_{\rm G} \ga 0.3$ AU.
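Equation \eqref{eq:a_bi_2} and the corresponding synchronous-state orbital period are easy to evaluate numerically. The sketch below rederives the dimensionless $\simeq 0.4$ coefficient from the stated spin period of 10 hours with $M_{\rm p}=M_{\rm J}$ and $R_{\rm p}=2R_{\rm J}$; the physical constants are standard values, not taken from the paper.

```python
import numpy as np

G = 6.674e-11    # m^3 kg^-1 s^-2
M_J = 1.898e27   # kg, Jupiter mass
R_J = 7.149e7    # m, Jupiter radius

def a_bi_2(a_bi_1, M_p=M_J, R_p=2*R_J, spin_period=10*3600.0):
    """Synchronous-state separation from angular-momentum conservation:
    a_2 ~ a_1 * (1 + (4/5) sqrt(2/(G M_p a_1)) R_p^2 omega_p)^2."""
    omega_p = 2.0 * np.pi / spin_period
    coef = 0.8 * np.sqrt(2.0 / (G * M_p * a_bi_1)) * R_p**2 * omega_p
    return a_bi_1 * (1.0 + coef)**2

def binary_period(a_bi, M_tot=2*M_J):
    """Binary orbital period from Kepler's third law for total mass M_tot."""
    return 2.0 * np.pi * np.sqrt(a_bi**3 / (G * M_tot))
```

For $a_{\rm bi,1}=10R_{\rm J}$ the coefficient evaluates to about 0.42, consistent with the quoted 0.4; a separation of $15.6R_{\rm J}$ (i.e. $3.9R_{\rm tot}$ with $R_{\rm tot}=4R_{\rm J}$) gives an orbital period of roughly 5.4 days.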
Since the timescale to establish the spin-orbit synchronous state is $\la 10^6$ years and the binary separation does not evolve significantly once this state is achieved, except in the case of orbital destabilization, observed binary planets should be in this spin-orbit synchronous state, that is, have binary separations of $3R_{\rm tot}$--$5R_{\rm tot}$, which is comparable to the stellar diameter. In addition, these binary systems should be stable when $a_{\rm G} \ga 0.3$ AU. Based on these theoretically predicted orbital parameters, we discuss the detectability of binary planets by transit observations in the following sections. \section{TRANSIT LIGHT CURVES OF BINARY PLANETS} \label{rei} We discuss how the transit light curves of binary planets are modified compared with those of a single planet, using the parameters given in the last section, with the aim of providing insight into the types of light curves that can be produced by a binary planet pair. For simplicity, we take the stellarcentric eccentricity of the binary barycenter as $e_{\rm G}=0$ and the stellarcentric semi-major axis to be $a_{\rm G}=0.4$ AU, which is near the stability limit, $a_{\rm G, tide}$. Although transit detectability increases with decreasing $a_{\rm G}$, the lifetime of binary planets is shorter. So, binary planets just outside $a_{\rm G, tide}$ may be the most promising targets. We show the transit light curves of binary planets with equal masses ($M_{\rm p} = M_{\rm J}$) and equal radii ($R_{\rm p}$). For the radius, we consider two cases: $R_{\rm p} = 2 R_{\rm J}$ and $R_{\rm p} = 1R_{\rm J}$. We set the orbital separation after the tidal circularization of the binaries as $a_{\rm bi,1}=2.5 R_{\rm tot}$, which is a typical value obtained in Paper I.
From Eq.~(\ref{eq:a_bi_2}), the orbital separation after the orbit enters the spin-orbit synchronous state is estimated as $a_{\rm bi,2}\simeq 3.9 R_{\rm tot}$ for $R_{\rm p} = 2 R_{\rm J}$ and $a_{\rm bi,2}\simeq 2.8 R_{\rm tot}$ for $R_{\rm p} = 1 R_{\rm J}$. The corresponding binary orbital periods are $\simeq 5.4$ and 3.3 days for $R_{\rm p} = 2 R_{\rm J}$ and $R_{\rm p} = 1R_{\rm J}$, respectively. Since $R_{\odot} \sim 10 R_{\rm J}$, the binary separations are $\sim 0.8$ and $\sim 0.3$ times the stellar diameter for $R_{\rm p} = 2 R_{\rm J}$ and $R_{\rm p} = R_{\rm J}$, respectively. As we pointed out in the last section, these binaries are most likely to be observed in the spin-orbit synchronous state. So, we assume a spin-orbit synchronous state for the binary planets. The stellarcentric orbital period is $\sim 92$ days, assuming a solar-mass central star and $a_{\rm G}=0.4$ AU. For simplicity, we assume that the stellarcentric and binary orbits are coplanar. For the light curves, limb darkening is also taken into account, while stellar spots and pulsation are neglected. We calculate sample light curves using the method of \citet{Pal2012}, with the quadratic limb darkening model of \citet{Pierce2000}. For simplicity, we assume the radius, mass and luminosity of the host star are equal to the solar values $R_\odot$, $L_\odot$, and $M_\odot$. Figure~\ref{kurowakusei} shows an example of a transit light curve along with the positions of the binary planets, corresponding to the case where the binary's barycenter passes across the stellar center. In this case, the silhouettes of the planets overlap (mutual transit) near the transit center. The top panel shows the transit light curve. The solid purple line represents the predicted light curve of a binary consisting of planets with $M_{\rm p}=1M_{\rm J}$ and $R_{\rm p}=2R_{\rm J}$.
For comparison, the light curve for a single planet with $M_{\rm p}=2M_{\rm J}$, $R_{\rm p}=2\sqrt2 R_{\rm J}$ at semimajor axis $a=0.4$ AU is also shown using a dotted red line. For this case we set $R_{\rm p}=2\sqrt2 R_{\rm J}$, such that the transit depth is equal to the total transit depth of the two planets. Similarly, the light curve for a binary with $M_{\rm p}=1M_{\rm J}$ and radius $R_{\rm p}=R_{\rm J}$, and a single planet with $2M_{\rm J}$ and $\sqrt2 R_{\rm J}$, is also shown using a dashed blue line and a dash-dotted light-blue line, respectively. The middle panel gives the projected positions of the planets. The $x$-axis is the distance along the transit path with the origin at the center of the star. The bottom panel shows the projected positions of the transiting binary planets (the filled black and gray circles) in the case of $R_{\rm p} = 2R_{\rm J}$ for a number of snapshots during the transit. As shown in this illustration, the motion of the two planets around their common barycenter during the transit is taken into account when calculating the light curve. The leading planet of the binary (the filled black circle) reaches the edge of the photosphere at $t\sim -6$ h from the transit center. Because of limb darkening and the increasing stellar area blocked by the planet, the light curve gradually declines with time. Because there is an offset due to the binary separation, the ingress (egress) occurs earlier (later) than for the single-planet fit with $a=0.4$ AU. For the case where only one transit is observed, and a single-planet fit is made to the transit duration, the stellarcentric semimajor axis may be overestimated. The overestimation depends on the binary orbital phase during ingress and egress. Although the overestimation is only slight in the case of Fig.~\ref{kurowakusei}, the maximum overestimation of the orbital period is a factor of 2, and thus the semimajor axis would be overestimated by a factor of $2^{2/3} \sim 1.6$.
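The factor quoted at the end of the paragraph follows directly from Kepler's third law, $a \propto P^{2/3}$:

```python
# A period overestimated by up to a factor of 2 maps, via Kepler's third
# law a ∝ P^(2/3), to a semimajor-axis overestimate of 2^(2/3) ≈ 1.59.
period_factor = 2.0
axis_factor = period_factor ** (2.0 / 3.0)
```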
For the case where multiple transits are observed, the orbital period is known, and thus the semi-major axis can be accurately determined. At the transit center ($t\sim0$), the light curve of the binary planet pair shows a bump due to a mutual eclipse between the planets. Depending on the positions of the two planets in their mutual orbit and on the orbital separation, a dip often appears at the transit center instead of the bump (Figure~\ref{sougoshoku04}). Such a bump or dip never occurs for a single-planet transit, except for a bump resulting from a planet passing in front of a stellar spot. The most pronounced property of the light curves of binary planets is variation in the light curves from transit to transit. Figure~\ref{sougoshoku04} shows the light curves for a sequence of six consecutive transits of the same binary system as that shown in Fig.~\ref{kurowakusei}. As can be seen, each light curve is different. The upper-left panel is the same as Figure~\ref{kurowakusei}, which we call ``case A". Three panels (the upper-right and upper/lower-middle panels) show a dip. Because the orbital separation is comparable to the stellar diameter for $R_{\rm p} = 2R_{\rm J}$, the duration in which only one planet of the binary is transiting and that in which both planets are transiting are often comparable. In that case, the transit light curves show a deep dip (transit by both planets) near the transit center, sandwiched by relatively shallow transits (transit by one planet). We call this ``case B". In this case, the two-planet transit occurs without a mutual transit. However, since one or the other of the planets is transiting a highly limb-darkened region, the transit depth is slightly shallower than for the single-planet case. In the lower-left and lower-right panels, the single-planet and two-planet transit phases also coexist. However, since the projected binary separations are smaller than those in case B, the curves show a ``step" rather than a ``dip."
Since the two-planet transit is not affected by the limb darkening, the maximum transit depth is similar to that of the single-planet fit. We call this case ``case C". Note that if the orbital separation is slightly larger, such that one planet enters the transit just at the time when the other leaves it, the transit curve has a bump at the transit center with a shallower transit depth than case A (Figure~\ref{smallbump}). We call this case ``case D". We summarize cases A to D in Table \ref{ABCDmatome}. Note that if we consider binary systems with non-equal $R_{\rm p}$, the modulation of light curves becomes less pronounced. Figure~\ref{sougoshoku04r21} is the same as Figure~\ref{sougoshoku04}, except that the ratio of the physical radii of the binary planets is 2:1, keeping the total cross-section the same. In this non-equal $R_{\rm p}$ case, statistical techniques for detecting exomoons may become necessary. Case A is characterized by a mutual transit. From \citet{SA09}, the detection probability of binary planets with a mutual transit is \begin{eqnarray} p=p_1p_2, \end{eqnarray} where $p_1=t_{\rm{obs}}/T_{\rm K}$ ($T_{\rm K}$ is the stellarcentric Keplerian period of the binary barycenter) is the probability for the binary center to pass across the surface of the central star during the observational duration $t_{\rm obs}$, and $p_2=t_{\rm{E}}/T_{\rm{bi}}$ is the fraction of transits for which a mutual transit occurs during the transit duration $t_{\rm{E}}=2R_\odot/v_{\rm{K}}$ ($p_2=1$ for $T_{\rm{bi}}<t_{\rm{E}}$), where $T_{\rm{bi}}$ is the Keplerian period of the binary system and $v_{\rm{K}}$ is the stellarcentric Keplerian velocity of the binary barycenter. For $a_{\rm{bi}}=2.5(R_i+R_j)=10R_{\rm J}$, $p \sim 0.1$. So, the probability of case A is low. Note that \citet{Hirano} detected a rare mutual transit, although it was for a pair of unbound planets.
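The mutual-transit fraction $p_2=t_{\rm E}/T_{\rm bi}$ can be evaluated for the quoted configuration ($a_{\rm bi}=10R_{\rm J}$, total mass $2M_{\rm J}$, barycenter at $a_{\rm G}=0.4$ AU around a solar-mass star); the physical constants below are standard values:

```python
import numpy as np

G = 6.674e-11      # m^3 kg^-1 s^-2
M_SUN = 1.989e30   # kg
M_J = 1.898e27     # kg
R_SUN = 6.957e8    # m
R_J = 7.149e7      # m
AU = 1.496e11      # m

a_G = 0.4 * AU     # stellarcentric semi-major axis of the barycenter
a_bi = 10 * R_J    # binary separation

v_K = np.sqrt(G * M_SUN / a_G)                        # stellarcentric velocity
t_E = 2 * R_SUN / v_K                                 # transit duration
T_bi = 2 * np.pi * np.sqrt(a_bi**3 / (G * 2 * M_J))   # binary orbital period
p2 = min(t_E / T_bi, 1.0)                             # mutual-transit fraction
```

This yields $p_2 \approx 0.12$, consistent with the quoted $p \sim 0.1$ when $p_1 \approx 1$ for a long observing baseline.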
If the total mass of the binary system is determined through RV measurement, the mean bulk density can be calculated. However, as stated above, a single-planet fit based on the transit duration leads to fitting parameters of $M_{\rm p}=2M_{\rm J}$ and $R_{\rm p} = \sqrt{2}R_{\rm J}$. As a result, the calculated bulk density can be up to $(\sqrt{2})^3/2=\sqrt{2}$ times smaller than the real one, with the exact factor depending on the derived impact parameter for both fits (see section \ref{detect_corot} for an example). So, binary planets may be misclassified as inflated exoplanets if a single-planet fit is applied. The tidal stability limit $a_{\rm G, tide}$ evaluated using the overestimated $R_{\rm p}$ is artificially large, so if inflated exoplanets are located inside of $a_{\rm G, tide}$ (but outside of $a_{\rm G, Hill}$), it is worth considering the possibility that the object is really a binary planet pair. The changes in the light curves in a sequence of consecutive transits may be the most pronounced signal of planet binarity. However, if the stellarcentric orbital period is close to an integer multiple of the binary period and the total number of observed transits is relatively small, the transit-to-transit changes may not be significant. So, the detection of perturbations to the light curve shape for each individual transit is also important. To see if such changes could be detectable, we have conducted a preliminary search of the open access CoRoT and Kepler transiting planet data. This investigation is discussed in the next section. \section{DETECTABILITY OF BINARY PLANETS IN ARCHIVED COROT AND KEPLER DATA} \label{KepSec} To investigate whether binary planets analogous to those investigated in this paper could be detected using current technology, we focussed on data from the CoRoT and Kepler satellites. To provide context, both of these missions, and the types of planets they were designed to detect, will be discussed in turn.
CoRoT is a CNES-led mission with the aim of using a 27cm diameter space telescope to detect transiting planets larger than Earth in short-period orbits, as well as monitoring and characterising stars \citep{Auvergne2009}. As the CoRoT satellite was in orbit of the Earth, it suffered from thermal effects related to its orbital phase, which led to predictable data errors. In addition, this satellite monitored a range of fields, located in the galactic centre and anti-centre, with one observing run per field. Before the mission ended, nine long runs (90-150 days) and three short runs (20 days) towards fields in the galactic centre were completed, along with six long runs, five short runs, and one intermediate length run (50 days) towards the galactic anti-centre. As a result of this mission, $\sim$ 100,000 light curves have been released and 27 planets have been detected. Given that less than a year's worth of contiguous data was available for each candidate, it is unsurprising that most planets detected by CoRoT had small semi-major axes and short periods. On the other hand, Kepler is a NASA-led mission with the aim of using a 0.95m diameter telescope to discover an Earth twin around a Sun-like star \citep{Borucki2008}. To ensure lower noise and a longer time baseline, Kepler was placed in an Earth-trailing orbit and monitored $\sim$ 145,000 stars in one particular field. Apart from small gaps in the data due to satellite rotations, data downlinks and problems such as safe modes and coronal mass ejections, the data is continuous. As a result of the high data quality and the long time baseline, multiple transits for each candidate were routinely collected, allowing planetary orbital periods to be derived and providing sensitivity to sub-Earth-sized planets.
These factors resulted in a much larger number of Kepler planet candidates (4234) as well as confirmed planets (978).\footnote{Retrieved from http://kepler.nasa.gov/, 4$^{th}$ August, 2014.} In this context, the detection of binary planets using CoRoT and Kepler data will be discussed in turn. In particular, we show that simple light curve fitting programs can successfully identify binary planet candidates, using a possible binary planet candidate from the CoRoT data set and a confirmed binary star pair, transiting a brighter star, from the Kepler data set. Then, as the Kepler data set is a much richer place to search for binary planets, we conduct a simulation to demonstrate that Kepler-quality data is sufficient to detect these planets. \subsection{DETECTABILITY OF BINARY PLANETS IN COROT DATA} \label{detect_corot} To investigate binary planet detection, the CoRoT candidates were checked by eye. One interesting object that we discovered is CoRoT SRc01 E2 1066, which is described in detail in \citet{erikson}. The transit has a relative depth of 4\% and a duration of 66 hours (see figure~\ref{CoRoTFitFigure}). From the long transit duration, \citet{erikson} suggested that this event might be a transit of an evolved or dwarf star by a distant gas giant planet, where, by chance, the planet occulted a stellar active region (spot) at the centre of the transit. But this event could also be due to a transit of a binary gas giant planet pair with a smaller stellarcentric semi-major axis. To check this claim we performed a single planet (dotted red) and a binary planet (solid purple) fit to this data (see figure~\ref{CoRoTFitFigure}). The single planet and binary planet models were calculated using the light curve simulation code of \citet{Pal2012}, where, to account for long term trends in the light curve, the out-of-transit light curve was modelled using a cubic.
For simplicity, we assume that the binary orbit is circular and that its orbit normal is perpendicular to both the line-of-sight and the chord made by the planet-planet barycenter across the star. The best fit models were then determined by locating the minimum residual using a simplex minimisation routine, with the fit improvement calculated using the change in the Bayesian Information Criterion (BIC), defined as \begin{equation} \Delta BIC = n \ln\left(\frac{RSS_{sing}}{n}\right) - n \ln\left(\frac{RSS_{bin}}{n}\right) - k \ln (n) \end{equation} where $RSS_{sing}$ and $RSS_{bin}$ are the residual sum of squares for the single planet and binary planet cases respectively, $k$ is the number of extra degrees of freedom, five for this work, and $n$ is the number of data points used for the fit, in this case 1333. The derived parameters for the single and binary planet cases are shown in table~\ref{CoRoTFitTable}, noting that the BIC value improves by $\sim80$ when the binary, as opposed to the single planet, model is used, where a change of 10 is generally considered robust. As can be seen from table~\ref{CoRoTFitTable}, the binary planet fit yields orbital and physical parameters for the putative binary tantalisingly similar to those derived in paper I. However, as only one transit is available (as this was a short run), it is not possible to rule out the starspot case. If more data were available for this target, its true nature could be determined by, e.g., testing for long term trends due to light curve modulation caused by starspots, resulting from stellar rotation. In addition, if another transit were detected, it would help differentiate between these cases by, first, placing a strong constraint on the orbital period (which is different in the binary and non-binary planet cases) and, second, indicating whether there were transit-to-transit variations in transit width, depth and shape (see figures~\ref{sougoshoku04} to \ref{sougoshoku04r21}), as is likely for the binary planet case. 
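As a minimal sketch of the model comparison above (the function name and the illustrative residual values are ours, not from the fitting pipeline used in the paper), the BIC change can be computed as:

```python
import math

def delta_bic(rss_single, rss_binary, n, k):
    """Change in the Bayesian Information Criterion when moving from the
    single planet to the binary planet fit. Positive values favour the
    binary model; a change of ~10 is generally considered robust."""
    return (n * math.log(rss_single / n)
            - n * math.log(rss_binary / n)
            - k * math.log(n))

# Illustrative values only: n = 1333 data points, k = 5 extra parameters.
# A ~9% reduction in the residual sum of squares gives a BIC improvement
# comparable to the ~80 reported for the fit to CoRoT SRc01 E2 1066.
improvement = delta_bic(rss_single=1.09, rss_binary=1.00, n=1333, k=5)
```

Note that the $k \ln(n)$ term penalises the five extra parameters of the binary model, so the reduction in residuals must be large enough to overcome this penalty before the binary fit is preferred.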
These issues are discussed in greater detail in section~\ref{det_starspots}. \subsection{DETECTABILITY OF BINARY PLANETS IN KEPLER DATA} While no visually obvious binary planets are present in the Kepler candidate and binary star data sets, a transiting binary star pair has been discovered \citep{Carter2011}. In this system, the central star has radius $2.0254 \pm 0.0098$ $R_\odot$, while the members of the transiting pair have radii $0.2543 \pm 0.0014$ $R_\odot$ and $0.2318 \pm 0.0013$ $R_\odot$ respectively, and the binary has a semi-major axis of 4.729 $R_\odot$. As the host star is between 3000 and 5000 times brighter than the members of the transiting binary pair, and the radius ratios are similar to those of the planetary binaries described in this paper and in paper I, these light curves should be approximately analogous to those of the planetary case, and we can test our fitting code on a real transiting binary system. Example transits from this system are shown in figure~\ref{CarterTransitSequenceFigure}. Note how they look markedly different from those of a single object transit and similar to some of the sample transits shown in figures~\ref{sougoshoku04} and \ref{smallbump}. We fit this data with our simplified fitting model, given by the solid (purple) line. As this transiting binary system has an inclined orbit, and the impact parameter for the transit is large, we do not expect the light curve and the model to match exactly. In particular, the binary orbit is inclined such that the leading star transits closer to the stellar limb than the trailing star; this is why the transit observed for the leading star is shorter and shallower than predicted by the fit, while that observed for the trailing star is longer and deeper than predicted. 
While the fit is not perfect, the binary system is unequivocally detected, with a reduction in the BIC of over 2000, which indicates that our simple model can still successfully detect binary systems for inclined binaries and high transit impact parameter systems. To investigate whether binary planets analogous to those investigated in this paper could be detected in real Kepler observations, a preliminary investigation was conducted. Focussing on the long cadence data, we selected a Kepler candidate, KOI 3681.01 (KIC 2581316), showing an anomalously large radius (22 Earth radii) and transit duration (21.3 hr), with a host magnitude (11.69) close to the 12$^{\textrm{th}}$ magnitude Kepler target specifications, with the aim of demonstrating that it is possible to robustly detect the difference between a binary planet transit light curve and a transit light curve due to an inflated single planet. To do this, we simulated realistic noisy binary planet transit light curves using two different noise sources, KIC 2581316/KOI 3681, an 11.69 magnitude star, and KIC 9517393/KOI 2076, a 15.3 magnitude host star, and then tried to detect the planet binary by fitting single planet and binary planet light curve models. The method used to produce these light curves is described in the next section. \subsubsection{Simulating Noisy Binary Planet Lightcurves} Simulated realistic binary planet light curves were constructed from real Kepler light curves in five stages. First, long cadence data was downloaded from the NASA Exoplanet Archive\footnote{http://exoplanetarchive.ipac.caltech.edu/index.html} corresponding to quarters 0 to 16. Second, all planetary transits were completely removed. Third, a sequence of binary planet transits was simulated. Fourth, out-of-transit data was used to give the simulated transits realistic noise. 
Finally, the simulations and the original data were stitched together, where a cubic was added to the simulation to ensure that the gradient and value of the endpoints matched. The final four stages will now be discussed in detail. For both KIC 2581316 and KIC 9517393, all planetary transits were first identified. To ensure that no signal corresponding to real moons of planetary candidates in these systems remained, estimates of the planetary masses were used to calculate reasonable values for the Hill sphere, and all data corresponding to the transit of the planet or the Hill sphere was removed. Then, using the code of \citet{Pal2012}, we simulated sequences of transits of binary planet pairs. Following Paper I, we simulated two classes of binary planet pairs, one where both components had equal radii and one where the radius of one component was twice that of the other. To ensure that the transit depth for the binary case was similar to that observed for the real candidate (see figure~\ref{KeplerTransitSequenceFigure}), for the equal radius case we set $R_1 = R_2 = 0.0629R_*$, while for the other case we set $R_1=0.0795R_*$ and $R_2=0.0398R_*$. Also, the semi-major axis of the planet binary was taken to be $R_*$, corresponding to a typical binary planet separation (see section~\ref{rei}), and the transit velocity was altered to approximately match the transit duration. In addition, following the analysis in section~\ref{rei}, the planets are assumed to be of equal mass and, for simplicity, these systems are assumed to have zero eccentricity, to be coplanar with the planetary orbit and to have an impact parameter equal to that of KOI 3681.01. To ensure that these light curves displayed realistic photometric noise, we randomly selected sections of out-of-transit light curve from Q1-15 data from KIC 2581316 for the case of low noise and KIC 9517393 for the case of high noise. 
Our simulated light curves were then multiplied by these sections of data to produce noisy binary planet light curves. Finally, the simulated light curves were stitched into the original data. To ensure that the light curves were continuous, a cubic was added to each transit light curve such that the gradient and value at both edges of the simulated transit light curve matched the gradient and value at the edges of the original data. Using this method we simulated 50 six-transit\footnote{KOI 3681.01 shows seven transits while KOI 2076.02 shows six.} sequences of light curves for these systems (see figure~\ref{KeplerTransitSequenceFigure} for examples), where the initial true anomaly was randomised for each sequence. \subsubsection{Determining if the Binary Planets were Detectable} To determine if planet binarity was detectable in these simulated light curves, we fitted each sequence of transits with a single planet and a binary planet model using our light curve fitting code and recorded the difference in the BIC. To determine the effect of data length, we repeated the process including only one and only three transits. Some example fits are shown in figure~\ref{KeplerTransitSequenceFigure} and the results are shown in figures~\ref{KeplerEqualResultsFigure} and \ref{KeplerUnequalResultsFigure}. All binary planets were detected, nearly all robustly. To investigate the behaviour of the detection threshold as a function of host star magnitude, three additional stars were chosen from the Kepler catalog, KIC 12121701 (magnitude 15.611), KIC 8827930 (magnitude 15.999) and KIC 2438406 (magnitude 16.546). While these results apply to these specific systems, the trend should be indicative of the true trend. The analysis described previously was repeated for the equal radius ratio and one or three transit cases, with the results plotted in figure~\ref{KeplerMagnitudesFigure} along with the results for KIC 9517393. 
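The noise-injection and stitching stages described above can be sketched as follows; this is an illustrative reconstruction under simplifying assumptions (the function names are ours), not the code actually used for the analysis. The stitching cubic is fixed by requiring that its value and gradient match the original data at both edges of the simulated transit:

```python
import numpy as np

def stitching_cubic(x0, x1, y0, y1, dy0, dy1):
    """Coefficients (a, b, c, d) of a*x**3 + b*x**2 + c*x + d whose value
    and gradient at both edges of the simulated transit match those of the
    original data, so the stitched light curve is continuous and smooth."""
    # Four linear conditions: value at each edge, then gradient at each edge.
    A = np.array([
        [x0**3,   x0**2, x0,  1.0],
        [x1**3,   x1**2, x1,  1.0],
        [3*x0**2, 2*x0,  1.0, 0.0],
        [3*x1**2, 2*x1,  1.0, 0.0],
    ])
    return np.linalg.solve(A, np.array([y0, y1, dy0, dy1]))

def add_real_noise(model_flux, oot_flux, rng):
    """Multiply a noiseless simulated transit by a randomly chosen section
    of normalised out-of-transit flux, giving it realistic photometric noise."""
    start = rng.integers(0, len(oot_flux) - len(model_flux) + 1)
    section = oot_flux[start:start + len(model_flux)]
    return model_flux * section / np.median(section)
```

For example, with the transit window rescaled to $[0,1]$, \texttt{stitching\_cubic(0, 1, 0, 1, 0, 0)} returns $(-2, 3, 0, 0)$, the unique cubic rising smoothly from 0 to 1 with zero gradient at both ends.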
As can be seen, in our sample, binary planets are robustly detected for stars with magnitude less than 16 and 15.5 for the three and one transit cases respectively. In addition, binary planets may exist with radius ratios outside of those investigated in Paper I. To investigate the detectability of such systems, simulations with a range of radius ratios were constructed using photometric noise from KIC 9517393, the 15.3 magnitude target, and analysed using our code. As can be seen from figure~\ref{KeplerRadiiFigure}, for the case where one transit is observed, binary planets are robustly detected for radius ratios smaller than 2:1, while for the case of three observed transits, even planet pairs with a radius ratio of 5:1 are robustly detected. From these simulation results we show that a range of binary planets, analogous to those simulated in this paper, could be practically detected in Kepler data even for the case where the star is dim (15$^{th}$ magnitude) and the number of transits is small. In particular, we show that binary planets are more detectable around host stars with low relative photometric noise than around high photometric noise targets. In addition, we show that detectability improves as the number of observed transits increases. Finally, while this analysis does include many important physical factors, e.g. realistic photometric noise and transit-to-transit variation, it does not include the effect of spot crossing events in the light curve. As discussed in section~\ref{rei}, possible spot crossing events may closely mimic the effect of a gas giant binary planet transit, so this topic warrants discussion. \subsection{THE EFFECT OF STARSPOTS ON BINARY PLANET DETECTION} \label{det_starspots} One factor which may hamper the detection of binary planets is the presence of spot crossing events in a transit light curve. 
We propose that it should be possible to differentiate between a single planet which orbits a spotty star and a binary planet, given sufficient numbers of transits, by either confirming the presence of spots or by providing supporting evidence for a binary planet. In particular, this can be done in a number of ways, including: \begin{enumerate} \item \emph{Investigate whether the timing of spot crossing events corresponds to a physically realistic star:} For the case where the planetary orbit is inclined with respect to the stellar spin axis, the transit chord may cross one or more active latitudes. As a result, spot crossing events are more likely to appear on the same part of different transit light curves, with the position corresponding to the location of the active latitudes \citep[e.g.][]{SanchisOjeda2011}. In addition, it has been suggested that interactions between the planet and the star can also lead to correlation between the position of the planet and the position of active regions. This phenomenon may be present and detectable in some systems \citep[e.g.][]{Pagano2009,Herrero2013} but not others \citep[e.g.][]{Miller2012,Scandariato2013}; however, as the planets of interest for this work are distant (outside 0.3 AU), such an interaction is likely to be very weak or absent. Consequently, the presence of light curve perturbations that cannot be predicted, but that occur with higher probability in certain sections of the light curve, would constitute strong evidence for starspots being the cause. However, for the case of binary planet pairs, the size and location of bumps in the light curve should relate to the sizes of the planets and will always correspond to a physically realistic planet model. \item \emph{Use trends in the out-of-transit light curve to confirm the presence of spots:} Starspots also cause changes to the out-of-transit light curve. 
\citet{SanchisOjeda2012} suggested correlating long term light curve modulations and spot crossing events to determine relative inclination in multi-planet systems, but the same processes could be used to provide evidence for or against a spot being the cause of a particular light curve feature. \item \emph{Image the stellar surface to confirm the presence of spots:} Evolved host stars are known to host large starspots, and such spots have been imaged \citep[e.g.,][]{Vogt83,Vogt87,Strassmeier02}. For some host stars this may be an option for determining the presence of spots. \item \emph{Compare the measured planetary mass to the predicted mass:} As mentioned previously, for the single planet case, the mass derived from radial velocity is likely to be consistent with the measured radius, while for binary planets that have been incorrectly classified as single planets, it will be systematically low. \end{enumerate} As with the simulations presented above, these arguments indicate that the detectability of binary planet pairs increases as the number of transits increases. In addition, observations that are likely to be taken to confirm planetary nature, e.g. RV observations, can also be used to assess the likelihood of a given candidate being a binary planet. \section{CONCLUSIONS} \label{kousatsu} In this paper, we have discussed the possibility of the observational detection of extrasolar binary planets (gravitationally bound pairs of gas giant planets) by transit observations, based on the results of N-body simulations of tidal capture between gas giants and calculations of long-term tidal evolution after capture performed in Paper I. 
Paper I showed that the formation probability of a planetary binary is as much as $\sim 10\%$, almost independently of the stellarcentric semimajor axis of the binary ($a_{\rm G}$), and predicted that the typical binary separation is 3--5 times the sum of the physical radii of the planets and that the binary planets are tidally stable for $\sim 10^{10}$ years if $a_{\rm G} \ga 0.3$ AU. Using these constraints, we have modelled transit light curves of physically plausible binary planets. These light curves have a deep dip, a big bump, a step, or two separate dips, and are noticeably and statistically different from those of a single planet. Furthermore, the transit shape changes from transit to transit, in contrast to the single planet case. Because of these features, the transits of binary planets might be classified as false positives.\footnote{ Light curves that change irregularly have already been discovered \citep[e.g.,][]{Barnes}, although this object is inside the tidal stability limit and would not be a binary planet.} If RV measurements are also available, the bulk density can be estimated. The single planet fit for two equally sized binary planets can give a lower bulk density than the real value. Thus, true binary planets could also have been classified as ``inflated" planets if $a_{\rm G} \ga 0.3$ AU. We show that the CoRoT target SRc01 E2 1066 is well fit by a binary planet model and put forward the alternative scenario that it could be due to the transit of a binary planet, in addition to the starspot scenario proposed by \citet{erikson}. In addition, we show that binary planets may be present in, and would be detectable in, the Kepler data set, and are most detectable where the host star shows little noise and a number of transits are available. Furthermore, for host stars with magnitude less than 15, we show that a broad range of binary planets are robustly detectable, even for the case of one or a few observed transits. 
Prompted by this preliminary analysis, we propose an accurate reanalysis of the irregularly changing light curves of Kepler and CoRoT planets and planet candidates orbiting beyond 0.4 AU from their central star, where over 100 candidates and false positives exist. \acknowledgements We are grateful to Takahiro Sumi for providing his gravitational lensing calculation program. We also thank Tristan Guillot and Rosemary Mardling for discussions on observations of binary planets. In addition, we thank Jessie Christiansen for advice on the Kepler pipeline. We would also like to thank an anonymous referee for helpful comments which improved the quality of this paper. This research was supported by a JSPS Grant-in-Aid for Scientific Research on Innovative Areas (23103005). KML was supported by JSPS KAKENHI Grant Number 24-02764.
\section{Introduction} Quotient stacks form a distinguished class of algebraic stacks which provide intuition for the geometry of general algebraic stacks. Indeed, equivariant algebraic geometry has a long history with a wealth of tools at its disposal. Thus, it has long been desired---and more recently believed \cite{alper-quotient,alper-kresch}---that certain algebraic stacks are locally quotient stacks. This is fulfilled by the main result of this paper: at a point with linearly reductive stabilizer, an algebraic stack is \'etale-locally a quotient stack of an affine scheme by the stabilizer. In the case of smooth algebraic stacks, we can provide a more refined description which resolves the algebro-geometric counterpart to the Weinstein conjectures~\cite{weinstein_linearization}---now known as Zung's Theorem \cite{zung_proper_grpds,MR2776372,MR3185351,MR3259039}---on the linearization of proper Lie groupoids in differential geometry. What the main theorems (Theorems \ref{T:smooth} and \ref{T:field}) of this paper really justify is a philosophy that quotient stacks of the form $[\Spec A / G]$, where $G$ is a linearly reductive group, are the building blocks of algebraic stacks near points with linearly reductive stabilizers. 
These theorems yield a number of applications to old and new problems, including: \begin{itemize} \item a generalization of Luna's \'etale slice theorem to non-affine schemes (\S\ref{A:luna}); \item a generalization of Sumihiro's theorem on torus actions to Deligne--Mumford stacks (\S\ref{A:sumihiro}), confirming an expectation of Oprea \cite[\S2]{oprea}; \item Bia\l ynicki-Birula decompositions for smooth Deligne--Mumford stacks (\S\ref{A:BB}), generalizing Skowera \cite{skowera}; \item a criterion for the existence of a good moduli space (\S\ref{A:gms}), generalizing \cite{keel-mori,afsw}; \item a criterion for \'etale-local equivalence of algebraic stacks (\S\ref{A:etale}), extending Artin's corresponding results for schemes \cite[Cor.~2.6]{artin-approx}; \item the existence of equivariant miniversal deformation spaces for curves (\S\ref{A:mv_curve}), generalizing~\cite{alper-kresch}; \item a characterization of toric Artin stacks in terms of stacky fans \cite[Thm.~6.1]{GS-toric-stacks-2} (\S\ref{A:global-type}); \item the \'etale-local quotient structure of a good moduli space (\S\ref{A:gms_app}); \item formal GAGA for good moduli space morphisms (\S\ref{A:gms_app}), resolving a conjecture of Geraschenko--Zureick-Brown \cite[Conj.\ 32]{geraschenko-brown}; \item a short proof of Drinfeld's results \cite{drinfeld} on algebraic spaces with a $\mathbb{G}_m$-action (\S\ref{A:drinfeld}); and \item compact generation of derived categories of algebraic stacks (\S\ref{A:compact-generation}). \end{itemize} Our first theorem gives a precise description of the \'etale-local structure of an algebraic stack at a non-singular point with linearly reductive stabilizer. This is the algebro-geometric analogue of Zung's resolution of the Weinstein conjectures. 
Before we state the theorem, we introduce the following notation: if $\cX$ is an algebraic stack over a field $k$ and $x\in \cX(k)$ is a closed point with stabilizer group scheme $G_x$, then we let $N_x$ denote the normal space to $x$ viewed as a $G_x$-representation. If $\cI \subseteq \oh_{\cX}$ denotes the sheaf of ideals defining $x$, then $N_{x} = (\cI/\cI^2)^\vee$. If $G_x$ is smooth, then $N_{x}$ is identified with the tangent space of $\cX$ at $x$; see Section \ref{S:tangent}. \begin{theorem}\label{T:smooth} Let $\cX$ be a quasi-separated algebraic stack, locally of finite type over an algebraically closed field $k$, with affine stabilizers. Let $x \in |\cX|$ be a smooth and closed point with linearly reductive stabilizer group $G_x$. Then there exists an affine and \'etale morphism $(U,u) \to (N_x /\!\!/ G_x,0)$, where $N_{x} /\!\!/ G_x$ denotes the GIT quotient, and a cartesian diagram $$\xymatrix{ \bigl([N_{x}/G_x],0\bigr) \ar[d] & \bigl([W / G_x],w\bigr)\ar[r]^-f \ar[d] \ar[l] & (\cX,x) \\ (N_{x} /\!\!/ G_x,0) & (U,u) \ar[l] \ar@{}[ul]|\square & }$$ such that $W$ is affine and $f$ is \'etale and induces an isomorphism of stabilizer groups at $w$. In addition, if $\cX$ has affine diagonal, the morphism $f$ can be arranged to be affine. \end{theorem} In particular, this theorem implies that $\cX$ and $[N_{x} /G_x]$ have a common \'etale neighborhood of the form $[\Spec A / G_x]$. Our second theorem gives a local description of an algebraic stack at a (potentially singular) point with respect to a linearly reductive subgroup of the stabilizer. \begin{theorem}\label{T:field} Let $\cX$ be a quasi-separated algebraic stack, locally of finite type over an algebraically closed field $k$, with affine stabilizers. Let $x \in \cX(k)$ be a point and $H \subseteq G_x$ be a subgroup scheme of the stabilizer such that $H$ is linearly reductive and $G_x / H$ is smooth (resp.\ \'etale). 
Then there exists an affine scheme $\Spec A$ with an action of $H$, a $k$-point $w \in \Spec A$ fixed by $H$, and a smooth (resp.\ \'etale) morphism $$f\colon \bigl([\Spec A/H],w\bigr) \to (\cX,x)$$ such that $BH \cong f^{-1}(BG_x)$; in particular, $f$ induces the given inclusion $H \to G_x$ on stabilizer group schemes at $w$. In addition, if $\cX$ has affine diagonal, then the morphism $f$ can be arranged to be affine. \end{theorem} The main techniques employed in the proof of Theorem \ref{T:smooth} are \begin{enumerate} \item deformation theory, \item coherent completeness, \item Tannaka duality, and \item Artin approximation. \end{enumerate} Deformation theory produces an isomorphism between the $n$th infinitesimal neighborhood $\cN^{[n]}_x$ of $0$ in $\cN_x = [N_x/G_x]$ and $\cX^{[n]}$, the $n$th infinitesimal neighborhood of $x$ in $\cX$. It is not at all obvious, however, that the system of morphisms $\{f^{[n]} \colon \cN_x^{[n]} \to \cX\}$ algebraizes. We establish algebraization in two steps. The first step is effectivization. To accomplish this, we introduce \emph{coherent completeness}, a key concept of the article. Recall that if $(A,\mathfrak{m})$ is a complete local ring, then $\Coh(A) = \varprojlim_n \Coh(A/\mathfrak{m}^{n+1})$. Coherent completeness is a generalization of this, which is more refined than the formal GAGA results of \cite[III.5.1.4]{EGA} and \cite{geraschenko-brown} (see \S\ref{A:gms_app}). What we prove in \S\ref{S:cc} is the following. \begin{theorem} \label{key-theorem} Let $G$ be a linearly reductive affine group scheme over an algebraically closed field $k$. Let $\Spec A$ be a noetherian affine scheme with an action of~$G$, and let $x \in \Spec A$ be a $k$-point fixed by $G$. Suppose that $A^{G}$ is a complete local ring. Let $\cX = [\Spec A / G]$ and let $\cX^{[n]}$ be the $n$th infinitesimal neighborhood of $x$. 
Then the natural functor \begin{equation} \label{eqn-coh} \Coh(\cX) \to \varprojlim_n \Coh\bigl(\cX^{[n]}\bigr) \end{equation} is an equivalence of categories. \end{theorem} Tannaka duality for algebraic stacks with affine stabilizers was recently established by the second two authors \cite[Thm.~1.1]{hallj_dary_coherent_tannakian_duality} (also see Theorem \ref{T:tannakian}). This proves that morphisms between algebraic stacks $\cY \to \cX$ are equivalent to symmetric monoidal functors $\Coh(\cX) \to \Coh(\cY)$. Therefore, to prove Theorem \ref{T:smooth}, we can combine Theorem \ref{key-theorem} with Tannaka duality (Corollary \ref{C:tannakian}) and the above deformation-theoretic observations to show that the morphisms $\{f^{[n]} \colon \cN_x^{[n]} \to \cX\}$ effectivize to $\hat{f} \colon \hat{\cN}_x \to \cX$, where $\hat{\cN}_x = \cN_x \times_{N_x/\!\!/ G_x} \Spec \hat{\oh}_{N_x/\!\!/ G_x,0}$. The morphism $\hat{f}$ is then algebraized using Artin approximation \cite{artin-approx}. The techniques employed in the proof of Theorem \ref{T:field} are similar, but the methods are more involved. Since we no longer assume that $x \in \cX(k)$ is a non-singular point, we cannot expect an \'etale or smooth morphism $\cN_x \to \cX$. Using Theorem \ref{key-theorem} and Tannaka duality, however, we can produce a closed substack $\hat{\cH}$ of $\hat{\cN}_x$ and a formally versal morphism $\hat{f} \colon \hat{\cH} \to \cX$. To algebraize $\hat{f}$, we apply an equivariant version of Artin algebraization (Corollary \ref{C:equivariant-algebraization}), which we believe is of independent interest. For tame stacks with finite inertia, Theorems \ref{T:smooth} and \ref{T:field} are one of the main results of \cite{tame}. The structure of algebraic stacks with infinite stabilizers has been poorly understood until the present article. 
For algebraic stacks with infinite stabilizers that are not---or are not known to be---quotient stacks, Theorems \ref{T:smooth} and \ref{T:field} were only known when $\cX = \fM_{g,n}^{\ss}$ is the moduli stack of semistable curves. This is the central result of \cite{alper-kresch}, where it is also shown that $f$ can be arranged to be representable. For certain quotient stacks, Theorems \ref{T:smooth} and \ref{T:field} can be obtained using traditional methods in equivariant algebraic geometry, see \S\ref{A:luna} for details. \subsection{Some remarks on the hypotheses} We mention here two counterexamples to Theorems \ref{T:smooth} and \ref{T:field} if some of the hypotheses are weakened. \begin{example} \label{ex1} Some reductivity assumption of the stabilizer $G_x$ is necessary in Theorem \ref{T:field}. For instance, consider the group scheme $G= \Spec k[x,y]_{xy+1} \to \AA^1 = \Spec k[x]$ (with multiplication defined by $y \mapsto xyy' + y + y'$), where the generic fiber is $\mathbb{G}_m$ but the fiber over the origin is $\mathbb{G}_a$. Let $\cX = BG$ and $x \in |\cX|$ be the point corresponding to the origin. There does not exist an \'etale morphism $([W/ \mathbb{G}_a], w) \to (\cX, x)$, where $W$ is an algebraic space over $k$ with an action of $\mathbb{G}_a$. \end{example} \begin{example} \label{ex2} It is essential to require that the stabilizer groups are affine in a neighborhood of $x \in |\cX|$. For instance, let $X$ be a smooth curve and $\cE \to X$ be a group scheme whose generic fiber is a smooth elliptic curve but the fiber over a point $x \in X$ is isomorphic to $\mathbb{G}_m$. Let $\cX = B\cE$. There is no \'etale morphism $([W/ \mathbb{G}_m], w) \to (\cX, x)$, where $W$ is an affine $k$-scheme with an action of $\mathbb{G}_m$. \end{example} \subsection{Generalizations} Using a similar argument, one can in fact establish a generalization of Theorem \ref{T:field} to the relative and mixed characteristic setting. 
This requires developing some background material on deformations of linearly reductive group schemes, a more general version of Theorem \ref{key-theorem} and a generalization of the formal functions theorem for good moduli spaces. To make this paper more accessible, we have decided to postpone the relative statement until a future paper. If $G_x$ is not reductive, it is possible that one could find an \'etale neighborhood $([\Spec A/GL_n],w)\to (\cX,x)$. However, this is not known even if $\cX=B_{k[\epsilon]}G_\epsilon$ where $G_\epsilon$ is a deformation of a non-reductive algebraic group~\cite{mathoverflow_groups-over-dual-numbers}. In characteristic $p$, the linearly reductive hypothesis in Theorem \ref{T:field} is quite restrictive. Indeed, an algebraic group $G$ over an algebraically closed field $k$ of characteristic $p$ is linearly reductive if and only if $G^0$ is a torus and $|G/G^0|$ is coprime to $p$ \cite{nagata}. We ask however: \begin{question} \label{Q:geom-red} Does a variant of Theorems \ref{T:smooth} and \ref{T:field} remain true if ``linearly reductive" is replaced with ``reductive"? \end{question} We remark that if $\cX$ is a Deligne--Mumford stack, then the conclusion of Theorem \ref{T:field} holds. We also ask: \begin{question} If $\cX$ has separated (resp.\ quasi-affine) diagonal, then can the morphism $f$ in Theorems \ref{T:smooth} and \ref{T:field} be chosen to be representable (resp.\ quasi-affine)? \end{question} If $\cX$ does not have separated diagonal, then the morphism $f$ cannot necessarily be chosen to be representable. For instance, consider the non-separated affine line as a group scheme $G \to \AA^1$ whose generic fiber is trivial but the fiber over the origin is $\ZZ_2$. Then $BG$ admits an \'etale neighborhood $f \co [\AA^1/\ZZ_2] \to BG$ which induces an isomorphism of stabilizer groups at $0$, but $f$ is not representable in a neighborhood. 
\subsection{Notation} \label{S:notation} An algebraic stack $\cX$ is quasi-separated if the diagonal and the diagonal of the diagonal are quasi-compact. An algebraic stack $\cX$ has \emph{affine stabilizers} if for every field $k$ and point $x\colon \Spec k\to \cX$, the stabilizer group $G_x$ is affine. If $\cX$ is an algebraic stack and $\cZ \subseteq \cX$ is a closed substack, we will denote by $\cX_{\cZ}^{[n]}$ the $n$th nilpotent thickening of $\cZ \subseteq \cX$ (i.e., if $\cI \subseteq \oh_{\cX}$ is the ideal sheaf defining $\cZ$, then $\cX_{\cZ}^{[n]} \to \cX$ is defined by $\cI^{n+1}$). If $x\in |\stX|$ is a closed point, the \emph{$n$th infinitesimal neighborhood of $x$} is the $n$th nilpotent thickening of the inclusion of the residual gerbe $\cG_x \to \cX$. Recall from \cite{alper-good} that a quasi-separated and quasi-compact morphism $\phi \co \cX \to \cY$ of algebraic stacks is {\it cohomologically affine} if the push-forward functor $\phi_*$ on the category of quasi-coherent $\oh_{\cX}$-modules is exact. If $\cY$ has quasi-affine diagonal and $\phi$ has affine diagonal, then $\phi$ is cohomologically affine if and only if $\DERF{R} \phi_* \colon \DCAT_{\QCoh}^+(\cX) \to \DCAT_{\QCoh}^+(\cY)$ is $t$-exact, cf.\ \cite[Prop.~3.10~(vii)]{alper-good} and \cite[Prop.~2.1]{hallj_neeman_dary_no_compacts}; this equivalence is false if $\cY$ does not have affine stabilizers \cite[Rem.~1.6]{hallj_dary_alg_groups_classifying}. If $G \to \Spec k$ is an affine group scheme of finite type, we say that $G$ is {\it linearly reductive} if $BG \to \Spec k$ is cohomologically affine. We say that $G$ is an {\it algebraic group over $k$} if $G$ is a smooth, affine group scheme over $k$. A quasi-separated and quasi-compact morphism $\phi \co \cX \to Y$ of algebraic stacks is {\it a good moduli space} if $Y$ is an algebraic space, $\phi$ is cohomologically affine and $\oh_Y \to \phi_* \oh_{\cX}$ is an isomorphism. 
If $G$ is an affine group scheme of finite type over $k$ acting on an algebraic space $X$, we say that a $G$-invariant morphism $\pi \co X \to Y$ of algebraic spaces is {\it a good GIT quotient} if the induced map $[X/G] \to Y$ is a good moduli space; we often write $Y = X /\!\!/ G$. In the case that $G$ is linearly reductive, a $G$-equivariant morphism $\pi \co X \to Y$ is a good GIT quotient if and only if $\pi$ is affine and $\oh_Y \to (\pi_* \oh_X)^G$ is an isomorphism. If $\cX$ is a noetherian algebraic stack, we denote by $\Coh(\cX)$ the category of coherent $\oh_{\cX}$-modules. \subsection*{Acknowledgements} We thank Andrew Kresch for many useful conversations. We also thank Bjorn Poonen for suggesting the argument of Lemma \ref{L:curves}. We would also like to thank Dragos Oprea and Michel Brion for some helpful comments. \section{Applications} \label{S:applications} Theorems \ref{T:smooth} and \ref{T:field} have many striking applications. We first record some immediate consequences of Theorems \ref{T:smooth} and \ref{T:field}. Unless stated otherwise, for this section $k$ will denote an algebraically closed field. \subsection{Immediate consequences}\label{A:immediate} If $\cX$ is a quasi-separated algebraic stack, locally of finite type over $k$ with affine stabilizers, and $x \in \cX(k)$ has linearly reductive stabilizer $G_x$, then \begin{enumerate} \item there is an \'etale neighborhood of $x$ with a closed embedding into a smooth algebraic stack; \item there is an \'etale-local description of the cotangent complex $L_{\cX/k}$ of $\cX$ in terms of the cotangent complex $L_{\cW/k}$ of $\cW=[\Spec A/G_x]$. If $x \in |\cX|$ is a smooth point (so that $\cW$ can be taken to be smooth), then $L_{\cW/k}$ admits an explicit description. 
In general, the $[0,1]$-truncation of $L_{\cW/k}$ can be described explicitly by appealing to (1); \item for any representation $V$ of $G_x$, there exists a vector bundle over an \'etale neighborhood of $x$ extending $V$; and \item the ring of $G_x$-invariants of a formal miniversal deformation space of $x$ is isomorphic to the completion of a finite type $k$-algebra. \end{enumerate} We now state some further applications of Theorems \ref{T:smooth} and \ref{T:field}. We will defer their proofs until \S\ref{S:pfs_applications}. \subsection{Generalization of Luna's \'etale slice theorem} \label{A:luna} We now provide a refinement of Theorem \ref{T:field} in the case that $\cX = [X/G]$ is a quotient stack. The following theorem provides a generalization of Luna's \'etale slice theorem. \begin{theorem}\label{T:luna} Let $X$ be a quasi-separated algebraic space, locally of finite type over $k$, with an action of an algebraic group $G$. Let $x \in X(k)$ be a point with a linearly reductive stabilizer $G_x$. Then there exists an affine scheme $W$ with an action of $G_x$ which fixes a point $w$, and an unramified $G_x$-equivariant morphism $(W,w) \to (X,x)$ such that the induced morphism $\tilde{f} \co W \times^{G_x} G \to X$ is \'etale.\footnote{Here, $W \times^{G_x} G$ denotes the quotient $(W \times G) / G_x$. Note that there is an identification of GIT quotients $(W \times^{G_x} G) /\!\!/ G \cong W /\!\!/ G_x$.} If $X$ admits a good GIT quotient $X \to X /\!\!/ G$, then it is possible to arrange that the induced morphism $W /\!\!/ G_x \to X /\!\!/ G$ is \'etale and $W \times^{G_x} G \cong W /\!\!/ G_x \times_{X /\!\!/ G} X$. Let $N_x = T_{X,x} / T_{G \cdot x, x}$ be the normal space to the orbit at $x$; this inherits a natural linear action of $G_x$.
If $x \in X$ is smooth, then it can be arranged that there is an \'etale $G_x$-equivariant $W \to N_x$ such that $W /\!\!/ G_x \to N_x /\!\!/ G_x$ is \'etale and $$\xymatrix{ N_x \times^{G_x} G \ar[d] & W \times^{G_x} G\ar[r]^-{\tilde f} \ar[d] \ar[l] & X \\ N_x /\!\!/ G_x & W /\!\!/ G_x \ar[l] \ar@{}[ul]|\square & }$$ is cartesian. \end{theorem} \begin{remark}\label{R:luna} The theorem above follows from Luna's \'etale slice theorem \cite{luna} if $X$ is affine. In this case, Luna's \'etale slice theorem is stronger than Theorem \ref{T:luna} as it asserts additionally that $W \to X$ can be arranged to be a locally closed immersion (which is obtained by choosing a $G_x$-equivariant section of $T_{X,x} \to N_x$ and then restricting to an open subscheme of the inverse image of $N_x$ under a $G_x$-equivariant \'etale morphism $X \to T_{X,x}$). Note that while \cite{luna} assumes that $\mathrm{char}(k) = 0$ and $G$ is reductive, the argument goes through unchanged in arbitrary characteristic if $G$ is smooth, and $G_x$ is smooth and linearly reductive. Moreover, with minor modifications, the argument in \cite{luna} is also valid if $G_x$ is not necessarily smooth. \end{remark} \begin{remark}\label{R:luna-alper-kresch} More generally, if $X$ is a normal scheme, it is shown in \cite[\S 2.1]{alper-kresch} that $W \to X$ can be arranged to be a locally closed immersion. However, when $X$ is not normal or is not a scheme, one cannot always arrange $W \to X$ to be a locally closed immersion and therefore we must allow unramified ``slices" in the theorem above. \end{remark} \subsection{Generalization of Sumihiro's theorem on torus actions}\label{A:sumihiro} In \cite[\S 2]{oprea}, Oprea speculates that every quasi-compact Deligne--Mumford stack $\cX$ with a torus action has an equivariant \'etale atlas $\Spec A \to \cX$. 
He proves this when $\cX=\overline{\cM}_{0,n}(\mathbb{P}^r,d)$ is the moduli space of stable maps and the action is induced by any action of $\mathbb{G}_m$ on $\mathbb{P}^r$ and obtains some nice applications. We show that Oprea's speculation holds in general. Let $T$ be a torus acting on an algebraic stack $\cX$, locally of finite type over $k$, via $\sigma \co T \times \cX \to \cX$. Let $\cY = [\cX /T]$. Let $x \in \cX(k)$ be a point with image $y \in \cY(k)$. There is an exact sequence \begin{equation} \label{E:stab} 1 \to G_x \to G_y \to T_x \to 1 \end{equation} where the stabilizer $T_x \subseteq T$ is defined by the fiber product \begin{equation} \label{D:stab} \begin{split} \xymatrix{ T_x\times B G_x \ar[r]^-{\sigma_x} \ar[d] & B G_x \ar[d] \\ T\times B G_x \ar[r]^-{\sigma|_x} \ar@{}[ur]|\square & \cX } \end{split} \end{equation} and $\sigma|_x \co T\times B G_x \xrightarrow{(\id, \iota_x)} T \times \cX \xrightarrow{\sigma} \cX$. Observe that $G_y= \Spec k \times_{BG_x} T_x$. The exact sequence \eqref{E:stab} is trivially split if and only if the induced action $\sigma_x$ of $T_x$ on $BG_x$ is trivial. The sequence is split if and only if the action $\sigma_x$ comes from a group homomorphism $T \to \Aut(G_x)$. \begin{theorem} \label{T:sumi1} Let $\cX$ be a quasi-separated algebraic (resp.\ Deligne--Mumford) stack, locally of finite type over $k$, with affine stabilizers. Let $T$ be a torus with an action on $\cX$. Let $x \in \cX(k)$ be a point such that $G_x$ is smooth and the exact sequence \eqref{E:stab} is split (e.g., $\cX$ is an algebraic space). There exists a $T$-equivariant smooth (resp.\ \'etale) neighborhood $(\Spec A,u) \to (\cX,x)$ that induces an isomorphism of stabilizers at $u$. \end{theorem} The theorem above fails when \eqref{E:stab} does not split. 
For a simple example, consider the action of $T=\mathbb{G}_m$ on $\cX=B \pmb{\mu}_n$ defined by: for $t \in \mathbb{G}_m(S) = \Gamma(S, \oh_S)^*$ and $(\cL, \alpha) \in B\pmb{\mu}_n(S)$ (where $\cL$ is a line bundle on $S$ and $\alpha \co \cL^{\otimes n} \to \oh_S$ is an isomorphism), then $t \cdot (\cL, \alpha) = (\cL, t \circ \alpha)$. The exact sequence of \eqref{E:stab} is $1 \to \pmb{\mu}_n \to \mathbb{G}_m \xrightarrow{n} \mathbb{G}_m \to 1$ which does not split. In this case though, there is an \'etale presentation $\Spec k \to B \pmb{\mu}_n$ which is equivariant under $\mathbb{G}_m \xrightarrow{n} \mathbb{G}_m$. More generally, we have: \begin{theorem} \label{T:sumi2} Let $\cX$ be a quasi-separated Deligne--Mumford stack, locally of finite type over $k$. Let $T$ be a torus with an action on $\cX$. If $x\in \cX(k)$, then there exist a reparameterization $\alpha \co T \to T$ and an \'etale neighborhood $(\Spec A, u) \to (\cX,x)$ that is equivariant with respect to $\alpha$. \end{theorem} In the case that $\cX$ is a normal scheme, Theorem \ref{T:sumi1} was proved by Sumihiro~\cite[Cor.~2]{sumihiro}, \cite[Cor.~3.11]{sumihiro2}; then $\Spec A \to \cX$ can be taken to be an open neighborhood. The nodal cubic with a $\mathbb{G}_m$-action provides an example where there does not exist a $\mathbb{G}_m$-invariant affine open cover. Theorem~\ref{T:sumi1} was also known if $\cX$ is a quasi-projective scheme \cite[Thm.~1.1(iii)]{brion-linearization} or if $\cX$ is a smooth, proper, tame and irreducible Deligne--Mumford stack, whose generic stabilizer is trivial and whose coarse moduli space is a scheme \cite[Prop.~3.2]{skowera}. We can also prove: \begin{theorem} \label{T:sumi3} Let $X$ be a quasi-separated algebraic space, locally of finite type over $k$, with an action of an affine group scheme $G$ of finite type over $k$. Let $x \in X(k)$ be a point with linearly reductive stabilizer $G_x$. 
Then there exists an affine scheme $W$ with an action of $G$ and a $G$-equivariant \'etale neighborhood $W \to X$ of~$x$. \end{theorem} This is a partial generalization of another result of Sumihiro~\cite[Lem.~8]{sumihiro}, \cite[Thm.~3.8]{sumihiro2}. He proves the existence of a $G$-equivariant open covering by quasi-projective subschemes when $X$ is a normal scheme and $G$ is connected. \subsection{Bia\l ynicki-Birula decompositions}\label{A:BB} In \cite[Prop.~5]{oprea}, Oprea proved the existence of a Bia\l ynicki-Birula decomposition \cite{bb} for a smooth Deligne--Mumford stack $\cX$ with a $\mathbb{G}_m$-action provided that there exists a $\mathbb{G}_m$-equivariant, separated, \'etale atlas $\Spec A \to \cX$. Therefore, Theorem \ref{T:sumi2} implies:
If $\Aut(C)$ is smooth, then there exist an affine scheme $W$ of finite type over $k$ with an action of $H$ fixing a point $w \in W$ and a miniversal deformation $$\xymatrix{ \cC \ar[d] & C \ar[l] \ar[d]\\ W \ar@{}[ur]|\square & \Spec k \ar[l]_{w} }$$ of $C \cong \cC_w$ such that there exists an action of $H$ on the total family $\cC$ compatible with the action of $H$ on $W$ and $\cC_w$. \end{theorem} The theorem above was proven for Deligne--Mumford semistable curves in \cite{alper-kresch}. \subsection{Good moduli spaces}\label{A:gms_app} In the following result, we determine the \'etale-local structure of good moduli space morphisms. \begin{theorem} \label{T:consequences-gms} Let $\cX$ be a locally noetherian algebraic stack over $k$. Suppose there exists a good moduli space $X$ such that the moduli map $\pi \colon \cX \to X$ is of finite type with affine diagonal. If $x\in \cX(k)$ is a closed point, then there exists an affine scheme $\Spec A$ with an action of $G_x$ and a cartesian diagram $$\xymatrix{ [\Spec A / G_x] \ar[r] \ar[d] & \cX \ar[d]^{\pi} \\ \Spec A /\!\!/ G_x \ar[r]\ar@{}[ur]|\square & X }$$ such that $\Spec A /\!\!/ G_x \to X$ is an \'etale neighborhood of $\pi(x)$. \end{theorem} The following corollary answers negatively a question of Geraschenko--Zureick-Brown~\cite[Qstn.\ 32]{geraschenko-brown}: does there exist an algebraic stack, with affine diagonal and good moduli space a field, that is not a quotient stack? In the equicharacteristic setting, this result also settles a conjecture of theirs: formal GAGA holds for good moduli spaces with affine diagonal~\cite[Conj.\ 28]{geraschenko-brown}. The general case will be treated in forthcoming work. \begin{corollary}\label{C:gb-c28} Let $\cX$ be a noetherian algebraic stack over a field $k$ (not assumed to be algebraically closed) with affine diagonal. Suppose there exists a good moduli space $\pi \colon \cX \to \Spec R$ of finite type, where $(R,\mathfrak{m})$ is a complete local ring. 
\begin{enumerate} \item \label{C:gb-c28:res} Then $\cX\cong[\Spec B/\mathrm{GL}_n]$; in particular, $\cX$ has the resolution property; and \item \label{C:gb-c28:fGAGA} the natural functor \[ \Coh(\cX) \to \varprojlim \Coh\bigl( \cX \times_{\Spec R} \Spec R/\mathfrak{m}^{n+1}\bigr) \] is an equivalence of categories. \end{enumerate} \end{corollary} \begin{remark} If $k$ is algebraically closed, then in (1) above, $\cX$ is in fact isomorphic to a quotient stack $[\Spec A / G_x]$ where $G_x$ is the stabilizer of the unique closed point. \end{remark} \subsection{Existence of coherent completions} \label{A:coherent-completion} Let $\cX$ be a noetherian algebraic stack with affine stabilizers and $\cZ \subseteq \cX$ be a closed substack. Denote by $\cX_{\cZ}^{[n]}$ the $n$th nilpotent thickening of $\cZ \subseteq \cX$. We say that $\cX$ is {\it coherently complete along $\cZ$} if the natural functor $$\Coh(\cX) \to \varprojlim_n \Coh\bigl(\cX_{\cZ}^{[n]}\bigr)$$ is an equivalence of categories. When $|\cZ|$ consists of a single point $x$, we say that $(\cX, x)$ is a {\it complete local stack} if $\cX$ is coherently complete along the residual gerbe $\cG_x$. See Section \ref{S:cc} for more details on coherent completion. The next result asserts that the coherent completion always exists under very mild hypotheses. \begin{theorem} \label{T:complete} Let $\cX$ be a quasi-separated algebraic stack, locally of finite type over $k$, with affine stabilizers. For any point $x \in \cX(k)$ with linearly reductive stabilizer $G_x$, there exists a complete local stack $(\hat{\cX}_x,\hat{x})$ and a morphism $\eta\colon (\hat{\cX}_x,\hat{x}) \to (\cX,x)$ inducing isomorphisms of $n$th infinitesimal neighborhoods of $\hat{x}$ and $x$. The pair $(\hat{\cX}_x,\eta)$ is unique up to unique $2$-isomorphism. \end{theorem} We call $\hat{\cX}_x$ the {\it coherent completion of $\cX$ at $x$}. 
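For example, if the stabilizer $G_x$ is trivial and $\cX = X$ is a quasi-separated algebraic space, locally of finite type over $k$, then the coherent completion is simply
$$\hat{X}_x = \Spec \widehat{\oh}_{X,x},$$
the spectrum of the completed local ring at $x$; in this case coherent completeness amounts to the classical fact that a finitely generated module over a complete noetherian local ring is the inverse limit of its quotients by powers of the maximal ideal.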
If $\cW=[\Spec A/G_x]\to \cX$ is an \'etale morphism as in Theorem~\ref{T:field} and $\pi\colon \cW\to W=\Spec A^{G_x}$ is the good moduli space of $\cW$, then $$\hat{\cX}_x=\cW\times_W \hat{W}_{\pi(x)} = \cW\times_W \Spec \widehat{A^{G_x}}.$$ That is, $\hat{\cX}_x=[\Spec B/G_x]$ where $B=A\otimes_{A^{G_x}} \widehat{A^{G_x}}$ and $\widehat{A^{G_x}}$ denotes the completion at $\pi(x)$ (Theorem \ref{key-theorem}). In particular, $B^{G_x}\to B$ is of finite type and $B^{G_x}$ is the completion of an algebra of finite type over $k$. The \emph{henselization of $\cX$ at $x$} is the stack $\cX^\mathrm{h}_x=\cW\times_W \Spec (A^{G_x})^\mathrm{h}$. This stack also satisfies a universal property (initial among pro-\'etale neighborhoods of the residual gerbe at $x$) and will be treated in forthcoming work. \subsection{\'Etale-local equivalences}\label{A:etale} Before we state the next result, let us recall that if $\cX$ is an algebraic stack, locally of finite type over $k$, and $x \in \cX(k)$ is a point, then a formal miniversal deformation space of $x$ is a formal affine scheme $\hat{\Def}(x)$ together with a formally smooth morphism $\hat{\Def}(x) \to \cX$ which is an isomorphism on tangent spaces. If the stabilizer group scheme $G_x$ is smooth and linearly reductive, $\hat{\Def}(x)$ inherits an action of $G_x$. \begin{theorem} \label{T:etale} Let $\cX$ and $\cY$ be quasi-separated algebraic stacks, locally of finite type over $k$, with affine stabilizers. Suppose $x \in \cX(k)$ and $y \in \cY(k)$ are points with smooth linearly reductive stabilizer group schemes $G_x$ and $G_y$, respectively. Then the following are equivalent: \begin{enumerate} \item\label{TI:etale:miniversal} There exist an isomorphism $G_x \to G_y$ of group schemes and an isomorphism $\hat{\Def}(x) \to \hat{\Def}(y)$ of formal miniversal deformation spaces which is equivariant with respect to $G_x \to G_y$. \item\label{TI:etale:completion} There exists an isomorphism $\hat{\cX}_x \to \hat{\cY}_y$.
\item\label{TI:etale:etale} There exist an affine scheme $\Spec A$ with an action of $G_x$, a point $w \in \Spec A$ fixed by $G_x$, and a diagram of \'etale morphisms $$\xymatrix{ & [\Spec A /G_x] \ar[ld]_f \ar[rd]^g \\ \cX & & \cY }$$ such that $f(w) = x$ and $g(w) = y$, and both $f$ and $g$ induce isomorphisms of stabilizer groups at $w$. \end{enumerate} If additionally $x \in |\cX|$ and $y \in |\cY|$ are smooth, then the conditions above are equivalent to the existence of an isomorphism $G_x \to G_y$ of group schemes and an isomorphism $T_{\cX,x} \to T_{\cY,y}$ of tangent spaces which is equivariant under $G_x \to G_y$. \end{theorem} \begin{remark} If the stabilizers $G_x$ and $G_y$ are not smooth, then the theorem above remains true (with the same argument) if the formal miniversal deformation spaces are replaced with flat adic presentations (Definition \ref{D:adic}) and the tangent spaces are replaced with normal spaces. \end{remark} \subsection{Characterization of when $\cX$ admits a good moduli space} \label{A:gms} Using the existence of completions, we can give an intrinsic characterization of those algebraic stacks that admit a good moduli space. We will need one preliminary definition. We say that a geometric point $y \co \Spec l \to \cX$ is {\it geometrically closed} if the image of $(y, \mathrm{id}) \co \Spec l \to \cX \times_k l$ is a closed point of $|\cX \times_k l|$. \begin{theorem} \label{T:gms} Let $\cX$ be an algebraic stack, locally of finite type over $k$, with affine diagonal. Then $\cX$ admits a good moduli space if and only if \begin{enumerate} \item\label{TI:gms:unique-closed} For every point $y \in \cX(k)$, there exists a unique closed point in the closure $\overline{ \{ y \}}$. 
\item For every closed point $x \in \cX(k)$, the stabilizer group scheme $G_x$ is linearly reductive and the morphism $\hat{\cX}_x \to \cX$ from the coherent completion of $\cX$ at $x$ satisfies: \begin{enumerate} \item\label{TI:gms:stab-pres} The morphism $\hat{\cX}_x \to \cX$ is stabilizer preserving at every point; that is, $\hat{\cX}_x \to \cX$ induces an isomorphism of stabilizer groups for every point $\xi \in |\hat{\cX}_x|$. \item\label{TI:gms:geom-closed} The morphism $\hat{\cX}_x \to \cX$ maps geometrically closed points to geometrically closed points. \item\label{TI:gms:injective-on-k} The map $\hat{\cX}_x(k) \to \cX(k)$ is injective. \end{enumerate} \end{enumerate} \end{theorem} \begin{remark} The quotient $[\mathbb{P}^1 / \mathbb{G}_m]$ (where $\mathbb{G}_m$ acts on $\mathbb{P}^1$ via multiplication) does not satisfy \itemref{TI:gms:unique-closed}. If $\cX=[X/\ZZ_2]$ is the quotient of the non-separated affine line $X$ by the $\ZZ_2$-action which swaps the origins (and acts trivially elsewhere), then the map $\Spec k[[x]] = \hat{\cX}_0 \to \cX$ from the completion at the origin does not satisfy \itemref{TI:gms:stab-pres}. If $\cX=[(\AA^2 \setminus 0) / \mathbb{G}_m]$ where $\mathbb{G}_m$ acts via $t \cdot (x,y) = (x,ty)$ and $p=(0,1) \in |\cX|$, then the map $\Spec k[[x]] = \hat{\cX}_{p} \to \cX$ does not satisfy \itemref{TI:gms:geom-closed}. If $\cX=[C/\mathbb{G}_m]$ where $C$ is the nodal cubic curve with a $\mathbb{G}_m$-action and $p \in |\cX|$ denotes the image of the node, then $[\Spec(k[x,y]/xy) / \mathbb{G}_m] = \hat{\cX}_{p} \to \cX$ does not satisfy \itemref{TI:gms:injective-on-k}. (Here $\mathbb{G}_m$ acts on the coordinate axes via $t \cdot (x,y) = (tx, t^{-1}y)$.) These pathological examples in fact appear in many natural moduli stacks; see \cite[Appendix A]{afsw}.
In this case \itemref{TI:gms:stab-pres} is not satisfied. Nevertheless, the stack quotient $\cX=[\AA^1/G]$ does have a good moduli space $X=\AA^1$ but $\cX\to X$ has non-separated diagonal. \end{remark} \begin{remark} When $\cX$ has finite stabilizers, then conditions~\itemref{TI:gms:unique-closed}, \itemref{TI:gms:geom-closed} and \itemref{TI:gms:injective-on-k} are always satisfied. Condition~\itemref{TI:gms:stab-pres} is satisfied if and only if the inertia stack is finite over $\cX$. In this case, the good moduli space of $\cX$ coincides with the coarse space of $\cX$, which exists by~\cite{keel-mori}. \end{remark} \subsection{Algebraicity results} \label{S:algebraicity} In this subsection, we fix a field $k$ (not necessarily algebraically closed), an algebraic space $S$ locally of finite type over $k$, and an algebraic stack $\cW$ of finite type over $S$ with affine diagonal over $S$ such that $\cW \to S$ is a good moduli space. We prove the following algebraicity results. \begin{theorem}[Stacks of coherent sheaves]\label{T:coh} The $S$-stack $\underline{\Coh}_{\cW/S}$, whose objects over $S' \to S$ are finitely presented quasi-coherent sheaves on $\cW \times_S S'$ flat over $S'$, is an algebraic stack, locally of finite type over $S$, with affine diagonal over $S$. \end{theorem} \begin{corollary}[Quot schemes]\label{C:quot} If $\cF$ is a quasi-coherent $\oh_{\cW}$-module, then the $S$-sheaf $\underline{\Quot}_{\cW/S}(\cF)$, whose objects over $S' \to S$ are quotients $p_1^* \cF \to \cG$ (where $p_1 \co \cW \times_S S' \to \cW$) such that $\cG$ is a finitely presented quasi-coherent $\oh_{\cW \times_S S'}$-module flat over $S'$, is a separated algebraic space over $S$. If $\cF$ is of finite presentation, then $\underline{\Quot}_{\cW/S}(\cF)$ is locally of finite presentation. 
\end{corollary} \begin{corollary}[Hilbert schemes]\label{C:hilb} The $S$-sheaf $\underline{\mathrm{Hilb}}_{\cW/S}$, whose objects over $S' \to S$ are closed substacks $\cZ \subseteq \cW \times_S S'$ such that $\cZ$ is flat and locally of finite presentation over $S'$, is a separated algebraic space locally of finite type over $S$. \end{corollary} \begin{theorem}[Hom stacks] \label{T:hom} Let $\cX$ be a quasi-separated algebraic stack, locally of finite type over $S$ with affine stabilizers. If $\cW \to S$ is flat, then the $S$-stack $\underline{\Hom}_S(\cW, \cX)$, whose objects are pairs consisting of a morphism $S' \to S$ of algebraic spaces and a morphism $\cW \times_S S' \to \cX$ of algebraic stacks over $S$, is an algebraic stack, locally of finite type over $S$, with quasi-separated diagonal. If $\cX \to S$ has affine (resp.\ quasi-affine, resp.\ separated) diagonal, then the same is true for $\underline{\Hom}_S(\cW, \cX) \to S$. \end{theorem} Variants of the above results were considered in \cite[Thm.~1.6]{hlp}. We also prove the following, which we have not seen in the literature before. \begin{corollary}[$G$-equivariant Hom sheaves] \label{C:homG} Let $W$, $X$ and $S$ be quasi-separated algebraic spaces, locally of finite type over $k$. Let $G$ be a linearly reductive affine group scheme acting on $W$ and $X$. Let $W \to S$ and $X \to S$ be $G$-invariant morphisms. Suppose that $W \to S$ is flat and a good GIT quotient. Then the $S$-sheaf $\underline{\Hom}_S^{G}(W, X)$, whose objects over $S' \to S$ are $G$-equivariant $S$-morphisms $W \times_S S' \to X$, is a quasi-separated algebraic space, locally of finite type over $S$. \end{corollary} \subsection{Drinfeld's results on algebraic spaces with $\mathbb{G}_m$-actions} \label{A:drinfeld} Let $Z$ be a quasi-separated algebraic space, locally of finite type over a field $k$ (not assumed to be algebraically closed), with an action of $\mathbb{G}_m$.
Define the following sheaves on $\mathsf{Sch}/k$: \[ \begin{aligned} Z^0 & := \underline{\Hom}^{\mathbb{G}_m}(\Spec k, Z) & \quad & \text{(the fixed locus)}\\ Z^+ & := \underline{\Hom}^{\mathbb{G}_m}(\AA^1, Z) & \quad & \text{(the attractor)}\\ \end{aligned} \] where $\mathbb{G}_m$ acts on $\AA^1$ by multiplication, and define the sheaf $\tilde{Z}$ on $\mathsf{Sch}/\AA^1$ by \[ \tilde{Z} := \underline{\Hom}_{\AA^1}^{\mathbb{G}_m}(\AA^2 , Z \times \AA^1) \] where $\mathbb{G}_m$ acts on $\AA^2$ via $t \cdot (x,y) = (tx, t^{-1} y)$ and acts on $\AA^1$ trivially, and the morphism $\AA^2 \to \AA^1$ is defined by $(x,y) \mapsto xy$. \begin{theorem} \cite[Prop.~1.2.2, Thm.~1.4.2 and Thm.~2.2.2]{drinfeld} \label{T:drinfeld} With the hypotheses above, $Z^0$, $Z^+$ and $\tilde{Z}$ are quasi-separated algebraic spaces locally of finite type over $k$. Moreover, the natural morphism $Z^0 \to Z$ is a closed immersion, and the natural morphism $Z^+ \to Z^0$ obtained by restricting to the origin is affine. \end{theorem} The algebraicity follows directly from Corollary \ref{C:homG}. The final statements follow from Theorem \ref{T:sumi1} above and Lemma \ref{L:drinfeld} proved in Section \ref{S:drinfeld}. \subsection{The resolution property holds \'etale-locally}\label{A:global-type} \begin{theorem}\label{T:global-type} Let $\cX$ be a quasi-separated algebraic stack, of finite type over a perfect (resp.\ arbitrary) field $k$, with affine stabilizers. Assume that for every closed point $x\in |\cX|$, the unit component $G_x^0$ of the stabilizer group scheme $G_x$ is linearly reductive. Then there exists \begin{enumerate} \item a finite field extension $k'/k$; \item a linearly reductive group scheme $G$ over $k'$; \item a $k'$-algebra $A$ with an action of $G$; and \item an \'etale (resp.\ quasi-finite flat) surjection $p\colon [\Spec A/G] \to \cX$. \end{enumerate} Moreover, \begin{enumerate}[label=(\alph*)] \item If $\cX$ has affine diagonal, then $p$ can be arranged to be affine. 
\item We can replace $G$ with $GL_n$ (which is linearly reductive in characteristic zero). \end{enumerate} \end{theorem} A stack of the form $[\Spec A/GL_n]$ has the \emph{resolution property}, that is, every coherent sheaf is a quotient of a vector bundle~\cite{totaro}. Although we do not know if $\cX$ has the resolution property, we conclude that $\cX$ has the resolution property \'etale-locally. In~\cite[Def.~2.1]{rydh-noetherian}, an algebraic stack $\cX$ having the resolution property locally for a representable (resp.\ representable and separated) \'etale covering $p\colon \cW\to \cX$ is said to be of \emph{global type} (resp.\ \emph{s-global type}). Thus, if $\cX$ has linearly reductive stabilizers at closed points and affine diagonal, then $\cX$ is of s-global type. Geraschenko and Satriano define toric Artin stacks in terms of stacky fans. They show that a stack $\cX$ is toric if and only if it is normal, has affine diagonal, has an open dense torus $T$ acting on the stack, has linearly reductive stabilizers, and $[\cX/T]$ is of global type~\cite[Thm.~6.1]{GS-toric-stacks-2}. If $\cX$ has linearly reductive stabilizers at closed points, then so does $[\cX/T]$. Theorem~\ref{T:global-type} thus shows that the last condition is superfluous. \subsection{Compact generation of derived categories}\label{A:compact-generation} For results involving derived categories of quasi-coherent sheaves, perfect (or compact) generation of the unbounded derived category $\DCAT_{\QCoh}(\cX)$ remains an indispensable tool \cite{neeman_duality,BZFN}. We prove: \begin{theorem}\label{T:compact-generation} Let $\cX$ be an algebraic stack of finite type over a field $k$ (not assumed to be algebraically closed) with affine diagonal.
If the stabilizer group $G_x$ has linearly reductive identity component $G_x^0$ for every closed point of $\cX$, then \begin{enumerate} \item $\DCAT_{\QCoh}(\cX)$ is compactly generated by a countable set of perfect complexes; and \item for every open immersion $\cU\subseteq \cX$, there exists a compact and perfect complex $P \in \DCAT_{\QCoh}(\cX)$ with support precisely $\cX\setminus \cU$. \end{enumerate} \end{theorem} Theorem \ref{T:compact-generation} was previously known only for stacks with finite stabilizers~\cite[Thm.~A]{perfect_complexes_stacks} or quotients of quasi-projective schemes by a linear action of an algebraic group in characteristic $0$ \cite[Cor.~3.22]{BZFN}. In positive characteristic, the theorem is almost sharp: if the reduced identity component $(G_x)^0_\mathrm{red}$ is not linearly reductive, i.e., not a torus, at some point $x$, then $\DCAT_{\QCoh}(\cX)$ is not compactly generated~\cite[Thm.~1.1]{hallj_neeman_dary_no_compacts}. If $\cX$ is an algebraic stack of finite type over $k$ with affine stabilizers such that either \begin{enumerate} \item the characteristic of $k$ is $0$; or \item \emph{every} stabilizer is linearly reductive; \end{enumerate} then $\cX$ is concentrated, that is, a complex of $\oh_{\cX}$-modules with quasi-coherent cohomology is perfect if and only if it is a compact object of $\DCAT_{\QCoh}(\cX)$ \cite[Thm.~C]{hallj_dary_alg_groups_classifying}. If $\cX$ admits a good moduli space $\pi\colon \cX\to X$ with affine diagonal, then one of the two conditions holds by Theorem~\ref{T:consequences-gms}. If $\cX$ does not admit a good moduli space and is of positive characteristic, then it is not sufficient that closed points have linearly reductive stabilizers, as the following example shows.
Then $\cX$ has two points, one closed with stabilizer group $\mathbb{G}_m$ and one open point with stabilizer group $\ZZ_2$. Thus if $k$ has characteristic two, then not every stabilizer group is linearly reductive and there are non-compact perfect complexes~\cite[Thm.~C]{hallj_dary_alg_groups_classifying}. \end{example} \section{Coherently complete stacks and the Tannakian formalism} \subsection{Coherently complete algebraic stacks} \label{S:cc} We now prove Theorem \ref{key-theorem}. \begin{proof}[Proof of Theorem \ref{key-theorem}] Let $\mathfrak{m} \subset A$ be the maximal ideal corresponding to $x$. A coherent $\oh_{\cX}$-module $\cF$ corresponds to a finitely generated $A$-module $M$ with an action of $G$. Note that since $G$ is linearly reductive, $M^G$ is a finitely generated $A^G$-module. We claim that the following two sequences of $A^G$-submodules $\{(\mathfrak{m}^{n} M)^{G} \}$ and $\{ (\mathfrak{m}^G)^n M^{G} \}$ of $M^{G}$ define the same topology, or in other words that \begin{equation} \label{E:formal} M^{G} \to \varprojlim M^{G} / \bigl(\mathfrak{m}^{n} M\bigr)^{G} \end{equation} is an isomorphism of $A^G$-modules. To this end, we first establish that \begin{equation} \label{E:intersection} \bigcap_{n \ge 0} \bigl(\mathfrak{m}^n M\bigr)^{G} = 0, \end{equation} which immediately informs us that \eqref{E:formal} is injective. Let $N = \bigcap_{n \ge 0} \mathfrak{m}^n M$. Krull's intersection theorem implies that $N \otimes_A A/\mathfrak{m} = 0$. Since $A^G$ is a local ring, $\Spec A$ has a unique closed orbit $\{x\}$. Since the support of $N$ is a closed $G$-invariant subscheme of $\Spec A$ which does not contain $x$, it follows that $N=0$. We next establish that \eqref{E:formal} is an isomorphism if $A^G$ is artinian. In this case, $\{(\mathfrak{m}^n M)^G\}$ automatically satisfies the Mittag-Leffler condition (it is a sequence of artinian $A^G$-modules). 
Therefore, taking the inverse limit of the exact sequences $0 \to (\mathfrak{m}^n M)^G \to M^G \to M^G / (\mathfrak{m}^n M)^G \to 0$ and applying \eqref{E:intersection} yields an exact sequence $$0 \to 0 \to M^G \to \varprojlim M^G / (\mathfrak{m}^n M)^G \to 0.$$ Thus, we have established \eqref{E:formal} when $A^G$ is artinian. To establish \eqref{E:formal} in the general case, let $J = (\mathfrak{m}^G) A \subseteq A$ and observe that \begin{equation} \label{E:limit1} M^G = \varprojlim M^G / \bigl(\mathfrak{m}^G\bigr)^n M^G = \varprojlim \bigl(M/J^n M\bigr)^G. \end{equation} For each $n$, we know that \begin{equation} \label{E:limit2} \bigl(M/J^nM\bigr)^G = \varprojlim_l M^G / \bigl((J^n + \mathfrak{m}^l)M \bigr)^G \end{equation} using the artinian case proved above. Finally, combining \eqref{E:limit1} and \eqref{E:limit2} together with the observation that $J^n \subseteq \mathfrak{m}^l$ for $n \ge l$, we conclude that $$\begin{aligned} M^G & = \varprojlim_n \bigl(M / J^n M\bigr)^G \\ & = \varprojlim_n \varprojlim_l M^G / \bigl((J^n + \mathfrak{m}^l)M \bigr)^G \\ & = \varprojlim_l M^G / \bigl(\mathfrak{m}^l M\bigr)^G. \end{aligned}$$ We now show that \eqref{eqn-coh} is fully faithful. Suppose that $\shv{G}$ and $\shv{F}$ are coherent $\oh_{\cX}$-modules, and let $\shv{G}_n$ and $\shv{F}_n$ denote the restrictions to $\cX^{[n]}$, respectively. We need to show that \begin{equation*} \Hom(\shv{G}, \shv{F}) \to \varprojlim \Hom(\shv{G}_n, \shv{F}_n) \end{equation*} is bijective. Since $\cX$ satisfies the resolution property, we can find locally free $\oh_{\cX}$-modules $\cE'$ and $\cE$ and an exact sequence \[ \cE' \to \cE \to \shv{G} \to 0. \] This induces a diagram \[ \xymatrix{ 0 \ar[r] & \Hom(\shv{G}, \shv{F}) \ar[r] \ar[d] & \Hom(\cE, \shv{F}) \ar[r] \ar[d] & \Hom(\cE', \shv{F}) \ar[d]\\ 0 \ar[r] & \varprojlim \Hom(\shv{G}_n, \shv{F}_n) \ar[r] & \varprojlim \Hom(\cE_n, \shv{F}_n) \ar[r] & \varprojlim \Hom(\cE'_n, \shv{F}_n) } \] with exact rows.
Therefore, we may assume that $\shv{G}$ is locally free. In this case, \[ \Hom(\shv{G}, \shv{F}) = \Hom(\oh_{\cX}, \shv{G}^{\vee} \otimes \shv{F}) \quad \text{and} \quad \Hom(\shv{G}_n, \shv{F}_n) = \Hom\bigl(\oh_{\cX^{[n]}}, (\shv{G}_n^{\vee} \otimes \shv{F}_n)\bigr). \] Therefore, we can also assume that $\shv{G} = \oh_{\cX}$ and we need to verify that the map \begin{equation*} \Gamma(\cX,\shv{F}) \to \varprojlim \Gamma\bigl(\cX^{[n]},\shv{F}_n\bigr) \end{equation*} is an isomorphism, but this is precisely the isomorphism from \eqref{E:formal}, and the full faithfulness of \eqref{eqn-coh} follows. We now prove that the functor \eqref{eqn-coh} is essentially surjective. Since $\cX$ has the resolution property, there is a vector bundle $\shv{E}$ on $\cX$ together with a surjection $\phi_0\colon \shv{E} \to \shv{F}_0$. We claim that $\phi_0$ lifts to a compatible system of morphisms $\phi_n \colon \shv{E} \to \shv{F}_n$ for every $n>0$. It suffices to show that for $n>0$, the natural map $\Hom(\cE, \cF_{n+1}) \to \Hom(\cE, \cF_n)$ is surjective; this is clear because $\Ext_{\cX}^{1}(\cE, \mathfrak{m}^{n+1} \cF_{n+1}) = 0$, as $\cE$ is locally free and $G$ is linearly reductive. By Nakayama's Lemma, each $\phi_n$ is surjective. It follows that we obtain an induced morphism of systems $\{\phi_n\} \colon \{\cE_n\} \to \{\cF_n\}$. Applying this procedure to $\{\ker(\phi_n)\}$ (which is not necessarily an adic system), there is another vector bundle $\cH$ and a morphism of systems $\{\psi_n\} \colon \{\cH_n\} \to \{\cE_n\}$ such that $\coker(\psi_n) \cong \shv{F}_n$. By the full faithfulness of \eqref{eqn-coh}, the morphism $\{\psi_n\}$ arises from a unique morphism $\psi \colon \cH \to \cE$. Letting $\tilde{\shv{F}} = \coker \psi$, the universal property of cokernels shows that there is an isomorphism $\tilde{\shv{F}}_n \cong \shv{F}_n$; the result follows.
\end{proof} \begin{remark} \label{R:explicit} In this remark, we show that, with the hypotheses of Theorem \ref{key-theorem}, the coherent $\oh_{\cX}$-module $\cF$ extending a given system $\{\cF_n\} \in \varprojlim \Coh(\cX^{[n]})$ can in fact be constructed explicitly. Let $\Gamma$ denote the set of irreducible representations of $G$ with $0 \in \Gamma$ denoting the trivial representation. For $\rho \in \Gamma$, we let $V_{\rho}$ be the corresponding irreducible representation. For any $G$-representation~$V$, we set $$V^{(\rho)} = \bigl(V \otimes V_{\rho}^{\vee}\bigr)^G \otimes V_\rho.$$ Note that $V = \bigoplus_{\rho \in \Gamma} V^{(\rho)}$ and that $V^{(0)} = V^G$ is the subspace of invariants. In particular, there is a decomposition $A = \bigoplus_{\rho \in \Gamma} A^{(\rho)}$. The data of a coherent $\oh_{\cX}$-module $\cF$ is equivalent to a finitely generated $A$-module $M$ together with a $G$-action, i.e., an $A$-module $M$ with a decomposition $M = \bigoplus_{\rho \in \Gamma} M^{(\rho)}$, where each $M^{(\rho)}$ is a direct sum of copies of the irreducible representation $V_\rho$, such that the $A$-module structure on $M$ is compatible with the decompositions of $A$ and $M$. If $\cF = \widetilde{M}$ is a coherent $\oh_{\cX}$-module and $\rho \in \Gamma$, then $M^{(\rho)}$ is a finitely generated $A^G$-module and $$M^{(\rho)} \to \varprojlim \bigl(M/ \mathfrak{m}^{k} M\bigr)^{(\rho)}$$ is an isomorphism (which follows from \eqref{E:formal}).
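To make the decomposition concrete, consider the following toy example, in which $A^G$ is not complete, so the hypotheses of Theorem \ref{key-theorem} are not literally satisfied; it only illustrates the isotypic pieces. Let $G = \mathbb{G}_m$ act on $A = k[x,y]$ with $x$ of weight $1$ and $y$ of weight $-1$. Then $\Gamma = \ZZ$, the representation $V_n$ is one-dimensional of weight $n$, and $V^{(n)}$ is the weight-$n$ subspace of $V$. Here $$A^{(n)} = \bigoplus_{i-j=n} k\, x^i y^j, \qquad A^{(0)} = A^G = k[xy],$$ and each $A^{(n)}$ is a free $A^G$-module of rank one, generated by $x^n$ if $n \ge 0$ and by $y^{-n}$ if $n < 0$. Moreover, for $M = A$ and $\mathfrak{m} = (x,y)$, writing $u = xy$ we have $\bigl(\mathfrak{m}^n M\bigr)^G = \bigl(u^{\lceil n/2 \rceil}\bigr)$ while $\bigl(\mathfrak{m}^G\bigr)^n M^G = \bigl(u^n\bigr)$, so the two filtrations of $M^G$ appearing in the proof of \eqref{E:formal} are distinct but cofinal.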
Conversely, given a system $\{\cF_n = \widetilde{M}_n\} \in \varprojlim \Coh(\cX^{[n]})$, where each $M_n$ is a finitely generated $A / \mathfrak{m}^{n+1}$-module with a $G$-action, the extension $\cF = \widetilde{M}$ can be constructed explicitly by defining: $$M^{(\rho)} := \varprojlim M_n^{(\rho)} \qquad \text{ and } \qquad M := \bigoplus_{\rho \in \Gamma} M^{(\rho)}.$$ One can show directly that each $M^{(\rho)}$ is a finitely generated $A^G$-module, $M$ is a finitely generated $A$-module with a $G$-action, and $M/ \mathfrak{m}^{n+1} M = M_n$. \end{remark} \begin{remark} Theorem \ref{key-theorem} also implies that every vector bundle on $\cX$ is the pullback of a $G$-representation under the projection $\cX \to BG$. In particular, suppose that $G$ is a diagonalizable group scheme. Then using the notation of Remark \ref{R:explicit}, every irreducible $G$-representation $\rho \in \Gamma$ is one-dimensional so that a $G$-action on $A$ corresponds to a $\Gamma$-grading $A = \bigoplus_{\rho \in \Gamma} A^{(\rho)}$, and an $A$-module with a $G$-action corresponds to a $\Gamma$-graded $A$-module. Therefore, if $A = \bigoplus_{\rho \in \Gamma} A^{(\rho)}$ is a $\Gamma$-graded noetherian $k$-algebra with $A^{(0)}$ a complete local $k$-algebra, then every finitely generated projective $\Gamma$-graded $A$-module is free. When $G = \mathbb{G}_m$ and $A^G=k$, this is the well-known statement (e.g., \cite[Thm. 19.2]{eisenbud}) that every finitely generated projective graded module over a noetherian graded $k$-algebra $A = \bigoplus_{d \ge 0} A_d$ with $A_0 = k$ is free. \end{remark} The theorem above motivates the following definition: \begin{definition} \label{D:cc} Let $\cX$ be a noetherian algebraic stack with affine stabilizers and let $\cZ \subseteq \cX$ be a closed substack. We say that $\cX$ is {\it coherently complete along $\cZ$} if the natural functor \[ \Coh(\cX) \to \varprojlim_n \Coh\bigl(\cX_{\cZ}^{[n]}\bigr) \] is an equivalence of categories.
\end{definition} \begin{remark} \label{R:cc} If $(A, \mathfrak{m})$ is a complete local noetherian ring, then $\Spec A$ is coherently complete along $\Spec A/\mathfrak{m}$, and more generally, if an algebraic stack $\cX$ is proper over $\Spec A$, then $\cX$ is coherently complete along $\cX \times_{\Spec A} \Spec A /\mathfrak{m}$. See \cite[III.5.1.4]{EGA} for the case of schemes and \cite[Thm.~1.4]{olsson-proper}, \cite[Thm.~4.1]{conrad-gaga} for algebraic stacks. Theorem \ref{key-theorem} concludes that $\cX$ is coherently complete along $BG$. \end{remark} \subsection{Tannakian formalism} The following Tannaka duality theorem, proved by the second and third authors, is crucial in our argument. \begin{theorem}\cite[Thm.~1.1]{hallj_dary_coherent_tannakian_duality} \label{T:tannakian} Let $\cX$ be an excellent stack and $\cY$ be a noetherian algebraic stack with affine stabilizers. Then the natural functor $$\Hom(\cX, \cY) \to \Hom_{r\otimes, \simeq}\bigl(\Coh(\cY), \Coh(\cX)\bigr)$$ is an equivalence of categories, where $\Hom_{r\otimes, \simeq}(\Coh(\cY), \Coh(\cX))$ denotes the category whose objects are right exact monoidal functors $\Coh(\cY) \to \Coh(\cX)$ and morphisms are natural isomorphisms of functors. \end{theorem} We will apply the following consequence of Tannakian duality: \begin{corollary} \label{C:tannakian} Let $\cX$ be an excellent algebraic stack with affine stabilizers and $\cZ \subseteq \cX$ be a closed substack. Suppose that $\cX$ is coherently complete along $\cZ$. If $\cY$ is a noetherian algebraic stack with affine stabilizers, then the natural functor $$\Hom(\cX, \cY) \to \varprojlim_n \Hom\bigl(\cX^{[n]}_{\cZ}, \cY\bigr)$$ is an equivalence of categories.
\end{corollary} \begin{proof} There are natural equivalences \begin{align*} \Hom(\cX, \cY) & \simeq \Hom_{r\otimes, \simeq}\bigl( \Coh(\cY), \Coh(\cX)\bigr) & & \text{(Tannakian formalism)}\\ & \simeq \Hom_{r\otimes, \simeq}\bigl( \Coh(\cY), \varprojlim \Coh\bigl(\cX_{\cZ}^{[n]}\bigr) \bigr) & & \text{(coherent completeness)}\\ & \simeq \varprojlim \Hom_{r\otimes, \simeq}\bigl( \Coh(\cY), \Coh\bigl(\cX_{\cZ}^{[n]}\bigr) \bigr) & & \\ & \simeq \varprojlim \Hom\bigl(\cX_{\cZ}^{[n]}, \cY\bigr) & & \text{(Tannakian formalism)}.\qedhere \end{align*} \end{proof} \section{Proofs of Theorems \ref{T:smooth} and \ref{T:field}} \subsection{The normal and tangent space of an algebraic stack} \label{S:tangent} Let $\cX$ be a quasi-separated algebraic stack, locally of finite type over a field $k$, with affine stabilizers. Let $x \in \cX(k)$ be a closed point. Denote by $i \co BG_x \to \cX$ the closed immersion of the residual gerbe of $x$, and by $\cI$ the corresponding ideal sheaf. The {\it normal space to $x$} is $N_x := (\cI/\cI^2)^{\vee} = (i^* \cI)^{\vee}$ viewed as a $G_x$-representation. The {\it tangent space $T_{\cX,x}$ to $\cX$ at $x$} is the $k$-vector space of equivalence classes of pairs $(\tau, \alpha)$ consisting of morphisms $\tau \co \Spec k[\epsilon]/\epsilon^2 \to \cX$ and 2-isomorphisms $\alpha \co x \to \tau|_{\Spec k}$. The stabilizer $G_x$ acts linearly on the tangent space $T_{\cX,x}$ by precomposition on the 2-isomorphism. If $G_x$ is smooth, there is an identification $T_{\cX,x} \cong N_x$ of $G_x$-representations. Moreover, if $\cX = [X/G]$ is a quotient stack where $G$ is an algebraic group and $x \in X(k)$ (with $G_x$ not necessarily smooth), then $N_x$ is identified with the normal space $T_{X,x} / T_{G \cdot x, x}$ to the orbit $G \cdot x$ at $x$. \subsection{The smooth case} \label{S:smooth} We now prove Theorem \ref{T:smooth} even though it follows directly from Theorem \ref{T:field} coupled with Luna's fundamental lemma \cite[p.~94]{luna}. 
We feel that since the proof of Theorem \ref{T:smooth} is more transparent and less technical than that of Theorem \ref{T:field}, digesting the proof first in this case will make the proof of Theorem \ref{T:field} more accessible. \begin{proof}[Proof of Theorem \ref{T:smooth}] Define the quotient stack $\cN= [N_x/G_x]$, where $N_x$ is viewed as an affine scheme via $\Spec(\Sym N_x^{\vee})$. Since $G_x$ is linearly reductive, we claim that there are compatible isomorphisms $\cX^{[n]} \cong \cN^{[n]}$. To see this, first note that we can lift $\cX^{[0]}=BG_x$ to a unique morphism $t_n\colon \cX^{[n]}\to BG_x$ for all $n$: the obstruction to a lift from $t_n\colon \cX^{[n]}\to BG_x$ to $t_{n+1}\colon \cX^{[n+1]}\to BG_x$ is an element of the group $\Ext_{\oh_{BG_x}}^{1}(L_{BG_x/k}, \cI^{n}/\cI^{n+1})$ \cite{olsson-defn}, which is zero since $BG_x$ is cohomologically affine and $L_{BG_x/k}$ is a perfect complex supported in degrees $[0,1]$, as $BG_x\to \Spec k$ is smooth. In particular, $BG_x=\cX^{[0]}\inj \cX^{[1]}$ has a retraction. This implies that $\cX^{[1]} \cong \cN^{[1]}$ since both are trivial deformations by the same module. Since $\cN\to BG_x$ is smooth, the obstruction to lifting the morphism $\cX^{[1]}\cong \cN^{[1]}\inj \cN$ to $\cX^{[n]}\to \cN$ vanishes as $H^1(BG_x,\Omega_{\cN/BG_x}^\vee\otimes \cI^{n}/\cI^{n+1})=0$. We have induced isomorphisms $\cX^{[n]} \cong \cN^{[n]}$ by Proposition~\ref{P:closed/iso-cond:artinian}~\itemref{PI:iso:artinian}. Let $\cN \to N = N_x /\!\!/ G_x$ be the good moduli space and denote by $0 \in N$ the image of the origin. Set $\hat{\cN} := \Spec \hat{\oh}_{N,0} \times_N \cN$.
Since $\hat{\cN}$ is coherently complete (Theorem \ref{key-theorem}), we may apply the Tannakian formalism (Corollary \ref{C:tannakian}) to find a morphism $\hat{\cN} \to \cX$ filling in the diagram \vspace{.2cm} $$ \xymatrix{ \cX^{[n]} \cong \cN^{[n]} \ar[r] \ar@/^1.6pc/[rrr] & \hat{\cN} \ar[r] \ar[d] \ar@/^1pc/@{-->}[rr] & \cN \ar[d] & \cX\\ & \Spec \hat{\oh}_{N,0} \ar[r] \ar@{}[ur]|\square & N. } $$ Let us now consider the functor $F \co \mathsf{Sch}/N \to \Sets$ which assigns to a morphism $S \to N$ the set of morphisms $S \times_N \cN \to \cX$ modulo 2-isomorphisms. This functor is locally of finite presentation and we have an element of $F$ over $\Spec \hat{\oh}_{N,0}$. By Artin approximation \cite[Cor.~2.2]{artin-approx}, there exist an \'etale morphism $(U,u) \to (N,0)$, where $U$ is an affine scheme, and a morphism $(U \times_N \cN, (u,0)) \to (\cX,x)$ agreeing with $(\hat{\cN},0) \to (\cX,x)$ to first order. Since $\cX$ is smooth at $x$, it follows that $U \times_N \cN \to \cX$ is \'etale at $(u,0)$ by Proposition~\ref{P:closed/iso-cond:artinian}~\itemref{PI:iso:artinian}. This establishes the theorem after shrinking $U$ suitably; the final statement follows from Proposition \ref{P:refinement}.
We first prove by induction that there are compatible $2$-cartesian diagrams \[ \xymatrix{\cH_n \ar[d]_{\eta_n} \ar@{(->}[r] & \cH_{n+1} \ar[d]^{\eta_{n+1}} \\ \cX^{[n]} \ar@{(->}[r] \ar@{}[ur]|\square& \cX^{[n+1]},} \] where $\cH_0 = BH$ and the vertical maps are smooth (resp.\ \'etale). Indeed, given $\eta_n \co \cH_n \to \cX^{[n]}$, by \cite{olsson-defn}, the obstruction to the existence of $\eta_{n+1}$ is an element of $ \Ext^2_{\oh_{BH}}(\Omega_{BH/BG_x},\eta_0^*(\cI^n/\cI^{n+1}))$, but this group vanishes as $H$ is linearly reductive and $\Omega_{BH/BG_x}$ is a vector bundle. Let $\tau_0 \co \cH_0 = BH \to \cN$ denote the zero section. Since $H$ is linearly reductive, the deformation $\cH_0\inj \cH_1$ is a trivial extension by the module $N_x^{\vee}$ and hence we have an isomorphism $\tau_1\colon \cH_1\cong \cN^{[1]}$ (see proof of smooth case). Using linear reductivity of $H$ once again and deformation theory, we obtain compatible morphisms $\tau_n \colon \cH_n \to \cN$ extending $\tau_0$ and $\tau_1$. These are closed immersions by Proposition~\ref{P:closed/iso-cond:artinian}~\itemref{PI:closed:artinian}. The closed embeddings $\cH_n \inj \cN$ factor through $\hat{\cN}$ so that we have a compatible family of diagrams $$\xymatrix{ \cH_n \ar[d]_{\eta_n} \ar@{(->}[r] & \hat{\cN} \\ \cX^{[n]} \ar[r] & \cX. }$$ Since $\hat{\cN}$ is coherently complete, there exists a closed immersion $\hat{\cH} \inj \hat{\cN}$ extending $\cH_n \inj \hat{\cN}$. Since $\hat{\cH}$ is also coherently complete, the Tannakian formalism yields a morphism $\eta\co \hat{\cH} \to \cX$ extending $\eta_n \co \cH_n \to \cX^{[n]}$. By Proposition \ref{P:formal-versality-criterion}, $\hat{\cH} \to \cX$ is formally versal (resp.\ universal). Also note that $\hat{\cH}\to \Spec \hat{\oh}_{N,0}$ is of finite type.
We may therefore apply Corollary \ref{C:equivariant-algebraization} to obtain a stack $\cH=[\Spec A/H]$ together with a morphism $f\co (\cH,w)\to (\cX,x)$ of finite type and a flat morphism $\varphi\co \hat{\cH}\to \cH$, identifying $\hat{\cH}$ with the completion of $\cH$ at $w$, such that $f\circ\varphi=\eta$. In particular, $f$ is smooth (resp.\ \'etale) at $w$. Moreover, $(f\circ \varphi)^{-1}(\cX^{[0]})=\cH^{[0]}$, so we have a flat morphism $BH=\cH^{[0]}\to f^{-1}(BG_x)$ which equals the inclusion of the residual gerbe at $w$. It follows that $w$ is an isolated point in the fiber $f^{-1}(BG_x)$. We can thus find an open neighborhood $\cW \subseteq \cH$ of $w$ such that $\cW\to \cX$ is smooth (resp.\ \'etale) and $\cW \cap f^{-1}(BG_x) = BH$. Since $w$ is a closed point of $\cH$, we may further shrink $\cW$ so that it becomes cohomologically affine (Lemma~\ref{L:shrink}). The final statement follows from Proposition \ref{P:refinement}. \end{proof} \subsection{The refinement} The following trivial lemma will be frequently applied to a good moduli space morphism $\pi \colon \cX \to X$. Note that any closed subset $\cZ\subseteq \cX$ satisfies the assumption in the lemma in this case. \begin{lemma}\label{L:shrink} Let $\pi\colon \cX \to X$ be a closed morphism of topological spaces and let $\cZ\subseteq \cX$ be a closed subset. Assume that every open neighborhood of $\cZ$ contains $\pi^{-1}(\pi(\cZ))$. If $\cZ \subseteq \cU$ is an open neighborhood of $\cZ$, then there exists an open neighborhood $U' \subseteq X$ of $\pi(\cZ)$ such that $\pi^{-1}(U') \subseteq \cU$. \end{lemma} \begin{proof} Take $U'=X\setminus \pi(\cX\setminus \cU)$. Then $U'$ is open since $\pi$ is closed, and $\pi^{-1}(U')\subseteq \cX\setminus(\cX\setminus\cU)=\cU$. Moreover, $\pi(\cZ)\subseteq U'$: if a point of $\cX\setminus \cU$ mapped into $\pi(\cZ)$, it would lie in $\pi^{-1}(\pi(\cZ))\subseteq \cU$ by assumption, a contradiction. \end{proof} \begin{proposition} \label{P:refinement} Let $f \co \cW \to \cX$ be a morphism of noetherian algebraic stacks such that $\cW$ is cohomologically affine with affine diagonal. Suppose $w \in |\cW|$ is a closed point such that $f$ induces an injection of stabilizer groups at $w$.
\begin{enumerate} \item \label{P:refinement:affine_pres} If there exists an affine and faithfully flat morphism of finite type $\cX' \to \cX$ such that $\cX'$ has quasi-finite and separated diagonal, then there exists a cohomologically affine open neighborhood $\cU \subseteq \cW$ of $w$ such that $f|_{\cU}$ is quasi-compact, representable and separated. \item \label{P:refinement:affine_diag} If $\cX$ has affine diagonal, then there exists a cohomologically affine open neighborhood $\cU \subseteq \cW$ of $w$ such that $f|_{\cU}$ is affine. \end{enumerate} \end{proposition} \begin{proof} We first establish \itemref{P:refinement:affine_pres}. By shrinking $\cW$, we may assume that $\Delta_{\cW/\cX}$ is quasi-finite and after further shrinking, we may arrange that $\cW$ remains cohomologically affine (Lemma~\ref{L:shrink}). Let $\cW' = \cX'\times_{\cX} \cW$ and let $f' \colon \cW' \to \cX'$ be the induced morphism. Then $\cW'$ is cohomologically affine with quasi-finite and affine diagonal. By applying \cite[Prop.~6.4]{alper-good} to $\Delta_{\cW'}$, we obtain that $\Delta_{\cW'}$ is finite. Since $\cX'$ has separated diagonal, it follows that $f'\colon \cW' \to \cX'$ is separated. By descent, $f$ is separated. In particular, the relative inertia of $f$, $i\colon I_{\cW/\cX} \to \cW$, is finite. By Nakayama's lemma, there is an open substack $\cU$ of $\cW$, containing $w$, with trivial inertia relative to $\cX$. Thus $\cU\to \cX$ is quasi-finite, representable and separated. Shrinking $\cU$ appropriately, $\cU$ also becomes cohomologically affine and the claim follows. For \itemref{P:refinement:affine_diag}, by \itemref{P:refinement:affine_pres} we may assume that $f$ is representable and separated. Since $\cW$ is cohomologically affine and $\Delta_{\cX}$ is affine, it follows that $f$ is cohomologically affine. But a cohomologically affine, quasi-compact, representable and separated morphism is affine (Serre's Criterion \cite[Prop.~3.3]{alper-good}).
\end{proof} Note that the condition in~\itemref{P:refinement:affine_pres} implies that $\Delta_{\cX}$ is quasi-affine. It is possible that the condition in~\itemref{P:refinement:affine_pres} can be replaced by this weaker condition. \section{Proofs of Applications} \label{S:pfs_applications} We now prove the results stated in \S\ref{S:applications}. \subsection{Generalization of Sumihiro's theorem} We first establish Theorems \ref{T:sumi1}--\ref{T:sumi3} since they will be used to establish Theorem \ref{T:luna}. \begin{proof}[Proof of Theorem \ref{T:sumi1}] Let $\cY = [\cX/T]$ and $y \in \cY(k)$ be the image of $x$. As the sequence \eqref{E:stab} splits, we can consider $T_x$ as a subgroup of $G_y$. By applying Theorem \ref{T:field} to $\cY$ at $y$ with respect to the subgroup $T_x \subseteq G_y$, we obtain a smooth (resp.\ \'etale) morphism $f \co [W/T_x] \to \cY$, where $W$ is an affine scheme with an action of $T_x$, which induces the given inclusion $T_x \subseteq G_y$ on stabilizer groups at a preimage $w \in [W/T_x]$ of $y$. Consider the cartesian diagram $$\xymatrix{ [W/T_x] \times_{\cY} \cX \ar[r] \ar[d] & \cX \ar[d]\ar[r] & \Spec k\ar[d] \\ [W/T_x] \ar[r] & \cY\ar[r] & BT }$$ The map $[W/T_x]\to \cY\to BT$ induces the injection $T_x\inj T$ on stabilizer groups at $w$. Thus, by Proposition~\ref{P:refinement}~\itemref{P:refinement:affine_diag}, there is an open neighborhood $\cU\subseteq [W/T_x]$ of $w$ such that $\cU$ is cohomologically affine and $\cU\to BT$ is affine. The fiber product $\cX\times_\cY \cU$ is thus an affine scheme $\Spec A$ and the induced map $\Spec A\to \cX$ is $T$-equivariant. If $u\in \Spec A$ is a closed point above $w$ and $x$, then the map $\Spec A\to \cX$ induces an isomorphism $T_x\to T_x$ of stabilizer groups at $u$. \end{proof} \begin{proof}[Proof of Theorem \ref{T:sumi2}] In the exact sequence \eqref{E:stab}, $G_x$ is \'etale and $T_x$ is diagonalizable. This implies that $(G_y)^0$ is diagonalizable.
Indeed, first note that we have exact sequences: \[ 1 \to G_x\cap (G_y)^0 \to (G_y)^0 \to (T_x)^0 \to 1 \] \[ 1 \to G_x\cap (G_y)^0 \to (G_y)^0_\mathrm{red} \to (T_x)^0_\mathrm{red} \to 1 \] The second sequence shows that $(G_y)^0_\mathrm{red}$ is a torus (as it is connected, reduced and surjects onto a torus with finite kernel) and, consequently, that $G_x\cap (G_y)^0$ is diagonalizable. It then follows that $(G_y)^0$ is diagonalizable from the first sequence~\cite[Exp.~17, Prop.~7.1.1~b)]{sga3ii}. Theorem \ref{T:field} produces an \'etale neighborhood $f\colon ([\Spec A/(G_y)^0],w) \to (\cY,y)$ such that the induced morphism on stabilizer groups is $(G_y)^0 \to G_y$. Replacing $\cX\to \cY$ with the pull-back along $f$, we may thus assume that $G_y$ is connected and diagonalizable. \newcommand{\tor}{\mathrm{tor}}% If we let $G_y=D(N)$, $T_x=D(M)$ and $T=D(\ZZ^r)$, then we have a surjective map $q\colon \ZZ^r\to M$ and an injective map $\varphi\colon M\to N$. The quotient $N/M$ is torsion but without $p$-torsion, where $p$ is the characteristic of $k$. Since all torsion of $M$ and $N$ is $p$-torsion, we have that $\varphi$ induces an isomorphism of torsion subgroups. We can thus find splittings of $\varphi$ and $q$ as in the diagram \[ \xymatrix@C+10mm{ \mathllap{\ZZ^r=}\ZZ^s\oplus M/M_\tor\ar@{(->}[r]^{\alpha=\id\oplus\varphi_2}\ar@{->>}[d]^{q=q_1\oplus\id} & \ZZ^s\oplus N/N_\tor\mathrlap{=\ZZ^r}\ar@{->>}[d]^{q'=\varphi_1 q_1\oplus\id} \\ \mathllap{M=}M_\tor\oplus M/M_\tor\ar@{(->}[r]^{\varphi=\varphi_1\oplus \varphi_2} & N_\tor\oplus N/N_\tor\mathrlap{=N.} } \] The map $q'$ corresponds to an embedding $G_y\inj T$ and the map $\alpha$ to a reparameterization $T\to T$. After reparameterizing the action of $T$ on $\cX$ via $\alpha$, the surjection $G_y\surj T_x$ becomes split. The result now follows from Theorem \ref{T:sumi1}.
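For instance (a purely numerical illustration of the splitting), if $p=3$, $N=\ZZ\oplus\ZZ/3$ and $\varphi$ is the inclusion of $M=2\ZZ\oplus\ZZ/3$, so that $G_y\cong\mathbb{G}_m\times\mu_3$, then $N/M\cong\ZZ/2$ has no $3$-torsion, $\varphi$ restricts to the identity on $M_\tor=N_\tor=\ZZ/3$, and the evident splittings $M=M_\tor\oplus 2\ZZ$ and $N=N_\tor\oplus\ZZ$ are compatible with $\varphi$ as in the diagram above.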
\end{proof} \begin{proof}[Proof of Theorem \ref{T:sumi3}] By Theorem \ref{T:field}, there exists an \'etale neighborhood $f\colon (\cW,w) \to ([X/G],x)$ such that $\cW$ is cohomologically affine, $f$ induces an isomorphism of stabilizers at $w$, and $w$ is a closed point. By Proposition~\ref{P:refinement}~\itemref{P:refinement:affine_diag}, we can assume after shrinking $\cW$ that the composition $\cW \to [X/G] \to BG$ is affine. It follows that $W = \cW \times_{[X/G]} X$ is affine and that $W \to X$ is a $G$-equivariant \'etale neighborhood of $x$. \end{proof} \subsection{Generalization of Luna's \'etale slice theorem} \begin{proof}[Proof of Theorem \ref{T:luna}] By applying Theorem \ref{T:sumi3}, we can find an affine scheme $X'$ with an action of $G$ and a $G$-equivariant, \'etale morphism $X' \to X$. This reduces the theorem to the case when $X$ is affine, which was established in \cite[p.~97]{luna}, cf.\ Remark~\ref{R:luna}. \end{proof} \subsection{Bia\l ynicki-Birula decompositions} Theorem \ref{T:bb} follows immediately from Theorem \ref{T:sumi2} and \cite[Prop.~5]{oprea}. \subsection{Existence of equivariant versal deformations for curves} Theorem \ref{T:curves} follows directly from Theorem \ref{T:field} and the following lemma (because the image of $H$ in $\Aut(C)$ is linearly reductive and $BH \to B\Aut(C)$ is smooth): \begin{lemma} \label{L:curves} If $(C, \{p_j\}_{j=1}^n)$ is an $n$-pointed proper scheme of pure dimension 1 over an algebraically closed field $k$ and no connected component of $C_{\mathrm{red}}$ is a smooth unpointed irreducible curve of genus 1, then $\Aut(C, \{p_j\})$ is an affine group scheme over $k$. \end{lemma} \begin{proof} We first handle the case when $C$ is reduced. Let $(\tilde{C}, \{p_j\}_{j=1}^n, \{q_j\}_{j=1}^k)$ be the pointed normalization of $(C, \{p_j\})$.
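(For example, if $C$ is an irreducible nodal cubic with no marked points, then $\tilde{C} \cong \mathbb{P}^1$ and $\{q_1, q_2\}$ are the two preimages of the node; here $\Aut(\mathbb{P}^1, \{q_1, q_2\}) \cong \mathbb{G}_m \rtimes \ZZ_2$, acting by scaling and by swapping $q_1$ and $q_2$, which is affine.)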
The subgroup $K \subseteq \Aut(C, \{p_j\})$ of automorphisms fixing the singular locus of $C$ has finite index, and there is an injective homomorphism $h \co K \to \Aut(\tilde{C}, \{p_j\}, \{q_j\}).$ As the automorphism group of any $n$-pointed smooth genus $g$ curve with $(g,n) \neq (1,0)$ is affine, the hypotheses imply $\Aut(\tilde{C}, \{p_j\}, \{q_j\})$ is affine. It follows that $K$ is affine, and thus so is $\Aut(C, \{p_j\})$. In our argument for the general case, the marked points will not play a role and will be dropped from the notation. As $C_{\mathrm{red}} \to C$ can be factored by square-zero closed immersions, by induction, it suffices to verify the following claim: if $C \to D$ is a closed immersion of proper curves defined by an ideal sheaf $\cI \subseteq \oh_{D}$ such that $\cI^2 = 0$ and such that $\Aut(C)$ is affine, then $\Aut(D)$ is affine. Let $K_1 = \ker(\Aut(D) \to \Aut(C))$. Since $\Aut(C)$ is affine, it suffices to prove that $K_1$ is affine. Any element of $K_1$ naturally induces an automorphism of the coherent $\oh_{C}$-module $\cI = \cI/\cI^2$. Since $\Aut_{\oh_C}(\cI)$ is affine, it suffices to show that $K_2 = \ker(K_1 \to \Aut_{\oh_C}(\cI))$ is affine. Each element $\alpha \in K_2$ naturally defines an $\oh_D$-derivation $$\oh_C \to \cI, \qquad s \mapsto \alpha(s) - s$$ since $\alpha$ acts trivially on $\cI$ and induces the identity on $\oh_C$. The vector space $\Der_{\oh_D}(\oh_C, \cI)$ is finite dimensional and there is an injective group homomorphism $K_2 \to \Der_{\oh_D}(\oh_C, \cI)$, and we conclude that $K_2$ is affine. \end{proof} \begin{remark} From the lemma above, we see that the conclusion of Theorem \ref{T:curves} holds for pointed curves $C$ such that $C$ and every deformation of $C$ have no connected component whose reduction is a smooth unpointed irreducible curve of genus $1$.
\end{remark} \begin{remark} If $\cC \to S$ is a family of curves such that every fiber satisfies the hypothesis of Lemma \ref{L:curves}, then the automorphism group scheme $\Aut(\cC/S) \to S$ of $\cC$ over $S$ need not be affine (or even quasi-affine). This even fails for families of Deligne--Mumford semistable curves; see \cite[\S4.1]{alper-kresch}. \end{remark} \subsection{Good moduli spaces} \begin{proof}[Proof of Theorem \ref{T:consequences-gms}] We may assume that $X=\Spec R$, where $R$ is a noetherian $k$-algebra. By noetherian approximation along $k \to R$, there is a finite type $k$-algebra $R_0$ and an algebraic stack $\cX_0$ of finite type over $\Spec R_0$ with affine diagonal such that $\cX \cong \cX_0 \times_{\Spec R_0} \Spec R$. We may also arrange that the image $x_0$ of $x$ in $\cX_0$ is closed with linearly reductive stabilizer $G_x$. We now apply Theorem \ref{T:field} to find a pointed affine \'etale $k$-morphism $f_0 \colon ([\Spec A_0/G_x],w_0) \to (\cX_0,x_0)$ that induces an isomorphism of stabilizers at~$w_0$. Pulling this back along $\Spec R \to \Spec R_0$, we obtain an affine \'etale morphism $f \colon [\Spec A/G_x] \to \cX$ inducing an isomorphism of stabilizers at all points lying over the preimage of $w_0$. The result now follows from a generalization of Luna's fundamental lemma \cite[Thm.~6.10]{alper-quotient}. \end{proof} \begin{proof}[Proof of Corollary \ref{C:gb-c28}] By \cite[Thm.~1]{geraschenko-brown}, we have \itemref{C:gb-c28:res} $\Rightarrow$ \itemref{C:gb-c28:fGAGA}; thus, it suffices to prove \itemref{C:gb-c28:res}. If $R/\mathfrak{m}=k$ and $k$ is algebraically closed, then the result follows from Theorem \ref{T:consequences-gms} since $GL_n/G_x$ is affine for any embedding $G_x\inj GL_n$. In this case, \itemref{C:gb-c28:res} holds even if $R$ is not complete but merely henselian. If $R/\mathfrak{m}=k$ and $k$ is not algebraically closed, then we proceed as follows. Let $\bar{k}$ be an algebraic closure of $k$.
By \cite[$0_{\mathrm{III}}$.10.3.1.3]{EGA}, $\bar{R}=R\otimes_k \bar{k}=\varinjlim_{k \subseteq k' \subseteq \bar{k}} R\otimes_k k'$ is local, noetherian, $\bar{\mathfrak{m}} = \mathfrak{m}\bar{R}$ is the maximal ideal, $\bar{R}/\bar{\mathfrak{m}}\cong \bar{k}$, and the induced map $R/\mathfrak{m} \to \bar{R}/\bar{\mathfrak{m}}$ coincides with $k \to \bar{k}$. Since each $R\otimes_k k'$ is henselian, $\bar{R}$ is henselian. Let $\bar{\cX} = \cX\otimes_R \bar{R}$. By the case considered above, $\bar{\cX}$ has the resolution property. The resolution property then descends to $\cX_{k'} = \cX\otimes_k k'$ for some finite extension $k \subseteq k' \subseteq \bar{k}$. Since $\cX_{k'} \to \cX$ is finite and faithfully flat, $\cX$ has the resolution property \cite[Prop.~4.3(vii)]{gross-resolution}. In general, let $K=R/\mathfrak{m}$. Since $R$ is a complete $k$-algebra, it admits a coefficient field; thus, it is also a $K$-algebra. We are now free to replace $k$ with $K$ and the result follows. \end{proof} \subsection{Existence of coherent completions} Theorem \ref{T:field} gives an \'etale morphism $(\cW=[\Spec A/G_x], w) \to (\cX,x)$. If we let $\pi \co \cW \to W = \Spec A^{G_x}$, then Theorem \ref{T:complete} follows by taking $\hat{\cX}_x = \cW \times_W \Spec \hat{\oh}_{W,\pi(w)}$. Indeed, this stack is coherently complete by Theorem~\ref{key-theorem} and the uniqueness follows by the Tannakian formalism (Corollary~\ref{C:tannakian}). \subsection{\'Etale-local equivalences} \begin{proof} [Proof of Theorem \ref{T:etale}] The implications \itemref{TI:etale:etale}$\implies$\itemref{TI:etale:completion}$\implies$\itemref{TI:etale:miniversal} are immediate. We also have \itemref{TI:etale:miniversal}$\implies$\itemref{TI:etale:completion} as $\cX^{[n]} = [\hat{\Def}(x)^{[n]} / G_x]$ and $\cY^{[n]} = [\hat{\Def}(y)^{[n]} / G_y]$. We now show that \itemref{TI:etale:completion}$\implies$\itemref{TI:etale:etale}.
We are given an isomorphism $\alpha \co \hat{\cX}_x \stackrel{\sim}{\to} \hat{\cY}_y$. Let $(\cW=[\Spec A/G_x],w)\to (\cX,x)$ be an \'etale neighborhood as in Theorem~\ref{T:field}. Let $W=\Spec A^{G_x}$ denote the good moduli space of $\cW$ and let $w_0$ be the image of $w$. Then $\hat{\cX}_x=\cW\times_W \Spec \hat{\oh}_{W,w_0}$. The functor $F\co (T\to W)\mapsto \Hom(\cW\times_W T,\cY)$ is locally of finite presentation. Artin approximation applied to $F$ and $\alpha\in F(\Spec \hat{\oh}_{W,w_0})$ thus gives an \'etale morphism $(W',w')\to (W,w_0)$ and a morphism $\varphi\co \cW':=\cW\times_W W'\to \cY$ such that $\varphi|_{\cW'^{[1]}}\co \cW'^{[1]}\to \cY^{[1]}$ is an isomorphism. Since $\hat{\cW'}_{w'}\cong \hat{\cX}_x\cong \hat{\cY}_y$, it follows that $\varphi$ induces an isomorphism $\hat{\cW'}_{w'}\to \hat{\cY}_y$ by Proposition~\ref{P:closed/iso-cond:complete}. After replacing $W'$ with an open neighborhood, we thus obtain an \'etale morphism $(\cW',w')\to (\cY,y)$. The final statement is clear from Theorem \ref{T:smooth}. \end{proof} \subsection{Characterization of when $\cX$ admits a good moduli space} \begin{proof}[Proof of Theorem \ref{T:gms}] The necessity of the conditions follows from Theorem~\ref{T:consequences-gms}. For the sufficiency, by \cite[Theorem 4.1]{afsw}\footnote{The underlying hypothesis in \cite{afsw} is that the base field $k$ has characteristic $0$, but this hypothesis is not necessary.}, it is enough to verify: \begin{enumerate} \item[(I)] For every closed point $x \in |\cX|$, there exists an affine \'etale morphism $$f \co \bigl([\Spec A / G_x], w\bigr) \to (\cX, x)$$ such that for each closed point $w' \in [\Spec A/G_x]$, \begin{enumerate} \item $f$ is stabilizer preserving at $w'$ (i.e., $f$ induces an isomorphism of stabilizer groups at $w'$); and \item $f(w')$ is closed. \end{enumerate} \item[(II)] For any point $y \in \cX(k)$, the closed substack $\overline{ \{y\}}$ admits a good moduli space. \end{enumerate} We first verify condition (I).
Let $x \in \cX(k)$ be a closed point. By Theorem \ref{T:field}, there exist a quotient stack $\cW = [\Spec A / G_x]$ with a closed point $w \in |\cW|$ and an affine \'etale morphism $f \co (\cW, w) \to (\cX, x)$ such that $f$ is stabilizer preserving at $w$. As the coherent completion of $\cW$ at $w$ is identified with $\hat{\cX}_x$, we have a 2-commutative diagram \begin{equation} \label{E:completion} \begin{split} \xymatrix{ \hat{\cX}_x \ar[d] \ar[rd] & \\ \cW \ar[r]^f & \cX. } \end{split} \end{equation} The subset $Q_a\subseteq |\cW|$ consisting of points $\xi \in |\cW|$ such that $f$ is stabilizer preserving at $\xi$ is constructible. Since $Q_a$ contains every point in the image of $\hat{\cX}_x \to \cW$ by hypothesis \itemref{TI:gms:stab-pres}, it follows that $Q_a$ contains a neighborhood of $w$. Thus after replacing $\cW$ with an open saturated neighborhood containing $w$ (Lemma~\ref{L:shrink}), we may assume that $f \co \cW \to \cX$ satisfies condition (Ia). Let $\pi\co \cW\to W$ be the good moduli space of $\cW$ and consider the morphism $g=(f,\pi)=\cW\to \cX\times W$. For a point $\xi\in |W|$, let $\xi^0\in |\cW|$ denote the unique point that is closed in the fiber $\cW_\xi$. Let $Q_b\subseteq |W|$ be the locus of points $\xi\in |W|$ such that $g(\xi^0)$ is closed in $|(\cX\times W)_\xi|=|\cX_{\kappa(\xi)}|$. This locus is constructible. Indeed, the subset $\cW^0=\{\xi^0\;:\;\xi\in |W|\}\subseteq |\cW|$ is easily seen to be constructible; hence so is $g(\cW^0)$ by Chevalley's theorem. The locus $Q_b$ equals the set of points $\xi\in |W|$ such that $g(\cW^0)_\xi$ is closed which is constructible by~\cite[IV.9.5.4]{EGA}. The locus $Q_b$ contains $\Spec \oh_{W,\pi(w)}$ by hypothesis \itemref{TI:gms:geom-closed} (recall that $\hat{\cX}_x=\cW\times_W \Spec \hat{\oh}_{W,\pi(w)}$). Therefore, after replacing $\cW$ with an open saturated neighborhood of $w$, we may assume that $f \co \cW \to \cX$ satisfies condition (Ib). 
For condition (II), we may replace $\cX$ by $\overline{ \{y\} }$. By \itemref{TI:gms:unique-closed}, there is a unique closed point $x \in \overline{ \{y\} }$ and we can find a commutative diagram as in \eqref{E:completion} for $x$. By \itemref{TI:gms:geom-closed} we can, since $f$ is \'etale, also assume that $\cW$ has a unique closed point. This implies that $\Gamma(\cW, \oh_{\cW}) = k$ and $\hat{\cX}_x = \cW$. By hypothesis \itemref{TI:gms:injective-on-k}, $f \co \cW \to \cX$ is an \'etale monomorphism which is also surjective by hypothesis \itemref{TI:gms:unique-closed}. We conclude that $f \co \cW \to \cX$ is an isomorphism, establishing condition (II). \end{proof} \subsection{The resolution property holds \'etale-locally} \begin{proof}[Proof of Theorem \ref{T:global-type}] First assume that $k$ is algebraically closed. Since $\cX$ is quasi-compact, Theorem~\ref{T:field} gives an \'etale surjective morphism $q\co [U_1/G_1]\amalg\dots\amalg [U_n/G_n]\to \cX$ where $G_i$ is a linearly reductive group scheme over $k$ acting on an affine scheme $U_i$. If we let $G=G_1\times G_2\times\dots\times G_n$ and let $U$ be the disjoint union of the $U_i\times G/G_i$, we obtain an \'etale surjective morphism $p\co [U/G]\to \cX$. If $\cX$ has affine diagonal, then we can assume that $q$, and hence $p$, are affine. For general $k$, write the algebraic closure $\overline{k}$ as a union of its finite subextensions $k'/k$. A standard limit argument gives a solution over some $k'$. Since $G$ is reductive, the quotient $GL_n/G$ is affine for any embedding $G\inj GL_n$. Note that $[U/G]=[U\times^G GL_n/GL_n]$ and $U\times^G GL_n$ is affine since $U\times^G GL_n\to U$ is a $GL_n/G$-fibration. 
\end{proof} \subsection{Compact generation of derived categories} Theorem \ref{T:compact-generation} follows immediately from Theorem~\ref{T:global-type} together with \cite[Thm.~B]{perfect_complexes_stacks} (characteristic $0$) or \cite[Thm.~D]{hallj_dary_alg_groups_classifying} (positive characteristic). \subsection{Algebraicity results} These will be established using Artin's criterion, as formulated in \cite[Thm.~A]{hallj_openness_coh}. Consequently, we will need a preparatory result on coherence (in the sense of Auslander \cite{auslander}) of the relevant deformation and obstruction functors, which will also help with the separation conditions. Throughout this subsection, we assume the following: \begin{itemize} \item $k$ is a field (not necessarily algebraically closed); \item $S$ is an algebraic space, locally of finite type over $k$; and \item $\cW$ is an algebraic stack of finite type over $k$ with affine diagonal that admits a good moduli space $\cW \to S$. \end{itemize} The following proposition is a variant of \cite[Thm.~C]{hallj_coho_bc} and \cite[Thm.~D]{perfect_complexes_stacks}. \begin{proposition}\label{P:coh_gms} Assume $S$ is an affine scheme. If $\cplx{F} \in \DCAT_{\QCoh}(\stW)$ and $\cplx{G} \in \mathsf{D}_{\Coh}^b(\stW)$, then the functor \[ \Hom_{\oh_{\stW}}\bigl(\cplx{F},\cplx{G} \otimes_{\oh_\stW}^{\DERF{L}} \QCPBK{\pi}(-)\bigr) \colon \mathsf{QCoh}(S) \to \mathsf{QCoh}(S) \] is coherent. \end{proposition} \begin{proof} By Theorem \ref{T:compact-generation}, $\DCAT_{\QCoh}(\stW)$ is compactly generated. Also, the restriction of $\QCPSH{f} \colon \DCAT_{\QCoh}(\stW) \to \DCAT_{\QCoh}(S)$ to $\mathsf{D}_{\Coh}^+(\stW)$ factors through $\mathsf{D}_{\Coh}^+(S)$ \cite[Thm.~4.16(x)]{alper-good}. By \cite[Cor.~4.19]{perfect_complexes_stacks}, the result follows. \end{proof} The following corollary is a variant of \cite[Thm.~D]{hallj_coho_bc}, whose proof is identical. 
\begin{corollary}\label{C:aff_hom_fund} Let $\cF$ be a quasi-coherent $\oh_{\cW}$-module and let $\cG$ be a coherent $\oh_{\cW}$-module. If $\cG$ is flat over $S$, then the $S$-presheaf $\underline{\Hom}_{\oh_{\stW}/S}(\cF,\cG)$ whose objects over $S' \xrightarrow{\tau} S$ are homomorphisms $\tau_{\cW}^*\cF \to \tau_{\cW}^*\cG$ of $\oh_{\cW\times_S S'}$-modules (where $\tau_{\cW} \co \cW \times_S S' \to \cW$ is the projection) is representable by an affine $S$-scheme. \end{corollary} \begin{proof} Argue exactly as in the proof of \cite[Thm.~D]{hallj_coho_bc}, but using Proposition \ref{P:coh_gms} in place of \cite[Thm.~C]{hallj_coho_bc} to deduce that automorphisms, deformations and obstructions are coherent. \end{proof} \begin{proof} [Proof of Theorem \ref{T:coh}] This only requires small modifications to the proof of \cite[Thm.~8.1]{hallj_openness_coh}: the formal GAGA statement of Corollary \ref{C:gb-c28} implies that formally versal deformations are effective and Proposition \ref{P:coh_gms} implies that the automorphism, deformation and obstruction functors are coherent. Therefore, Artin's criterion (as formulated in \cite[Thm.~A]{hallj_openness_coh}) is satisfied and the result follows. Corollary \ref{C:aff_hom_fund} implies the diagonal is affine. \end{proof} Corollaries \ref{C:quot} and \ref{C:hilb} follow immediately from Theorem \ref{T:coh}. Indeed, the natural functor $\underline{\mathrm{Quot}}_{\cW/S}(\cF)\to \underline{\Coh}_{\cW/S}$ is quasi-affine by Corollary \ref{C:aff_hom_fund} and Nakayama's Lemma (see \cite[Lem.~2.6]{lieblich-coherent} for details). \begin{proof}[Proof of Theorem \ref{T:hom}] This only requires small modifications to the proof of \cite[Thm.~1.2]{hallj_dary_coherent_tannakian_duality}, which uses Artin's criterion as formulated in \cite[Thm.~A]{hallj_openness_coh}. 
Indeed, using Corollary \ref{C:gb-c28} in place of \cite[Thm.~1.4]{olsson-proper} (effectivity), Proposition \ref{P:coh_gms} in place of \cite[Thm.~C]{hallj_coho_bc} (coherence) and Corollary \ref{C:aff_hom_fund} in place of \cite[Thm.~D]{hallj_coho_bc} (conditions on the diagonal), the result follows. \end{proof} \begin{proof} [Proof of Corollary \ref{C:homG}] By Theorem \ref{T:hom}, it suffices to prove that the natural map \begin{equation}\label{EQ:Hom} \underline{\Hom}_S^G(W,X) \to \underline{\Hom}_S\bigl([W/G],[X/G]\bigr) \end{equation} is representable and quasi-separated. If $T \to \Hom([W/G], [X/G])$ is a morphism, the corresponding morphism $T \times [W/G] \to [X/G]$ is induced from a $G$-equivariant morphism $T \times W \to X$ if and only if the two $G$-bundles over $T \times [W/G]$ corresponding to $r \co T \times [W/G] \to [W/G] \to BG$ and $s \co T \times [W/G] \to [X/G] \to BG$ are isomorphic. Therefore, the map in \eqref{EQ:Hom} is a pull-back of the diagonal of $\Hom([W/G],BG)$ which is affine. \end{proof} \subsection{Drinfeld's results on algebraic spaces with $\mathbb{G}_m$-actions} \label{S:drinfeld} We begin this subsection with the following coherent completeness lemma. \begin{lemma} \label{L:A1-complete} If $S$ is a noetherian affine scheme, then $[\AA^1_S / \mathbb{G}_m]$ is coherently complete along $[S / \mathbb{G}_m]$. \end{lemma} \begin{proof} Let $A=\Gamma(S,\oh_S)$; then $\AA^1_S = \Spec A[t]$ and $V(t) = [S/\mathbb{G}_m]$. If $\cF \in\Coh([\AA^1_S/\mathbb{G}_m])$, then we claim that there exists an integer $n\gg 0$ such that the natural surjection $\Gamma(\cF) \to \Gamma(\cF/t^n\cF)$ is bijective. Now every coherent sheaf on $[\AA^1_S/\mathbb{G}_m]$ is a quotient of a finite direct sum of coherent sheaves of the form $p^*\cE_l$, where $\cE_l$ is the weight $l$ representation of $\mathbb{G}_m$ and $p\co [\AA^1_S/\mathbb{G}_m] \to [S/\mathbb{G}_m]$ is the natural map. 
It is enough to prove that $\Gamma(p^*\cE_l) \to \Gamma(p^*\cE_l/t^n p^*\cE_l)$ is bijective, or equivalently, that $\Gamma((t^n) \otimes p^*\cE_l) = 0$. But $(t^n) = p^*\cE_n$ and $\Gamma(p^*\cE_{n+l})=0$ if $n+l>0$, hence for all $n\gg 0$. We conclude that $\Gamma(\cF) \to \varprojlim_n \Gamma(\cF/t^n\cF)$ is bijective. What remains can be proven analogously to Theorem \ref{key-theorem}. \end{proof} \begin{proposition} \label{P:tannakian2} Let $W$ be an excellent algebraic space over a field $k$ and $G$ be an algebraic group acting on $W$. Let $Z \subseteq W$ be a $G$-invariant closed subspace. Suppose that $[W/G]$ is coherently complete along $[Z/G]$. Let $X$ be a noetherian algebraic space over $k$ with an action of $G$. Then the natural map $$ \Hom^{G}(W, X) \to \varprojlim_n \Hom^{G}\bigl(W_Z^{[n]}, X\bigr) $$is bijective. \end{proposition} \begin{proof} We have a cartesian diagram $$\xymatrix{ \Hom^G(W,X)\ar[r]\ar[d] & \Hom\bigl([W/G],BG\bigr)\ar[d] \\ \Hom\bigl([W/G],[X/G]\bigr)\ar[r] & \Hom\bigl([W/G],BG\bigr)\times \Hom\bigl([W/G],BG\bigr) }$$ and a similar cartesian diagram for $W$ replaced with $W_Z^{[n]}$ for any $n$ which gives the cartesian diagram $$\xymatrix{ \varprojlim_n \Hom^G\bigl(W_Z^{[n]},X\bigr)\ar[r]\ar[d] & \varprojlim_n \Hom\bigl([W_Z^{[n]}/G],BG\bigr)\ar[d] \\ \varprojlim_n \Hom\bigl([W_Z^{[n]}/G],[X/G]\bigr)\ar[r] & \varprojlim_n \Hom\bigl([W_Z^{[n]}/G],BG\bigr)\times \Hom\bigl([W_Z^{[n]}/G],BG\bigr) }$$ Since $[W/G]$ is coherently complete along $[Z/G]$, it follows by Tannaka duality that the natural maps from the first square to the second square are isomorphisms. \end{proof} \begin{lemma} \label{L:drinfeld} Let $f \co U \to Z$ be a $\mathbb{G}_m$-equivariant \'etale morphism of quasi-separated algebraic spaces of finite type over a field $k$. Then $U^0 = Z^0 \times_Z U$ and $U^+ = Z^+ \times_{Z^0} U^0$. 
\end{lemma} \begin{proof} The inclusion of stabilizer group schemes $\Stab(U)\to \Stab(Z)\times_Z U$ is an open immersion since $f$ is \'etale. The first statement follows since any open subgroup of $\mathbb{G}_m$ is $\mathbb{G}_m$. For the second statement, we need to show that there exists a unique $\mathbb{G}_m$-equivariant morphism filling in the $\mathbb{G}_m$-equivariant diagram \begin{equation} \label{D:drinfeld} \begin{split} \xymatrix{ \Spec k \times S \ar[r] \ar[d] & U \ar[d]^f \\ \AA^1 \times S \ar[r] \ar@{-->}[ur] & Z } \end{split} \end{equation} where $S$ is an affine scheme of finite type over $k$, and the vertical left arrow is the inclusion of the origin. For each $n \ge 1$, the formal lifting property of \'etaleness yields a unique $\mathbb{G}_m$-equivariant map $\Spec k[x]/x^n \times S \to U$ such that $$\xymatrix{ \Spec k \times S \ar[r] \ar[d] & U \ar[d]^f \\ \Spec (k[x]/x^n) \times S \ar[r] \ar@{-->}[ur] & Z }$$ commutes. By Lemma \ref{L:A1-complete} and Proposition \ref{P:tannakian2}, there exists a unique $\mathbb{G}_m$-equivariant morphism $\AA^1 \times S \to U$ such that \eqref{D:drinfeld} commutes. \end{proof} \begin{proof}[Proof of Theorem \ref{T:drinfeld}] The algebraicity of $Z^0$, $Z^+$ and $\tilde{Z}$ follows directly from Corollary \ref{C:homG}. The final statements may be verified after passing to an algebraic closure of $k$. Our generalization of Sumihiro's theorem (Theorem \ref{T:sumi1}) and Lemma \ref{L:drinfeld} now further reduce these to the case when $Z$ is an affine scheme, which can be established directly; see \cite[\S 1.3.4]{drinfeld}. \end{proof}
\section{Introduction} \let\ootimes\otimes \renewcommand{\otimes}{\circ} Hybrid structures containing superconducting (S) and ferromagnetic (F) materials have become a focus of nanoelectronic research because of their relevance for spintronics applications as well as their potential impact on fundamental research \cite{Eschrig11,Eschrig15,Linder15}. Examples of successful developments include the discoveries of the $\pi$-junction \cite{Bulaevskii77,Buzdin82} in S/F/S Josephson devices \cite{Ryazanov01,Kontos02}, of odd-frequency superconductivity \cite{Berezinskii74} in S/F heterostructures \cite{Bergeret01,Kadigrobov01}, and of the indirect Josephson effect in S/half-metal/S junctions \cite{Eschrig03,eschrig_nphys_08}. Other recent topics of interest include the study of Majorana fermions at interfaces between superconductors and topological insulators \cite{Tanaka12} and at edges in superfluid $\,^3$He \cite{Volovik02,Roy08}, as well as the appearance of pure spin supercurrents in topological superconductors \cite{Vorontsov08} and in S/FI-F-FI devices as a result of geometric phases \cite{grein_prl_09}. The central subject in many of these studies is to understand how, when a superconductor is coupled to a ferromagnetic material, superconducting correlations penetrate into the ferromagnet and how magnetic correlations penetrate into the superconductor \cite{Izyumov02,Golubov04,Eschrig04,Buzdin05,Bergeret05,Pokrovsky07}. A powerful method to treat such problems is the quasiclassical theory of superconductivity developed by Larkin and Ovchinnikov and by Eilenberger \cite{Eilenberger,Larkin}. Within this theory \cite{Serene83,Rammer86,Belzig99,Eschrig01,Kopnin09} the quasiparticle motion is treated on a classical level, whereas the particle-hole and the spin degrees of freedom are treated quantum mechanically. 
The transport equation, which is a first-order matrix differential equation for the quasiclassical propagator, must be supplemented by physical boundary conditions in order to obtain a unique solution. Whereas for the full microscopic Green functions, the Gor'kov Green functions \cite{Gorkov58}, such boundary conditions can be readily formulated (\eg in terms of interface scattering matrices or in terms of transfer matrices), this is a considerably more difficult task for quasiclassical Green functions. In quasiclassical theory only the information about the envelope functions of Bloch waves is retained; information about the phases of the waves, however, is missing. Such envelope amplitudes can show jumps at interfaces, and a nontrivial task is to calculate these jumps without knowing the full microscopic Green functions near the interface. Correspondingly, there is a long history of deriving boundary conditions for quasiclassical propagators, both for the Eilenberger equations and for their diffusive limit, the Usadel equations \cite{Usadel}. For ballistic transport, described by the Eilenberger equations, such boundary conditions were first formulated for spin-inactive interfaces in pioneering work by Shelankov and by Zaitsev \cite{Shelankov84,Zaitsev84}, who showed the non-trivial fact that these jumps can be calculated using only the envelope functions. More general formulations were proposed subsequently \cite{Ashauer86,Zhang87,Nagai88,Millis88}, including a formulation in terms of interface scattering matrices by Millis, Rainer, and Sauls \cite{Millis88}. All these formulations were implicit in terms of non-linear matrix equations, and problems arose in numerical implementations due to spurious (unphysical) additional solutions which must be eliminated. 
Progress was made with the help of Shelankov's projector formalism \cite{Shelankov80}, allowing for explicit formulations of boundary conditions in both equilibrium \cite{Yip97,Eschrig00,Shelankov00} and non-equilibrium \cite{Eschrig00} situations. Further generalizations included spin-active interfaces, formulated for equilibrium \cite{Fogelstrom00} and for non-equilibrium \cite{Zhao04}, and interfaces with diffusive scattering characteristics \cite{Lueck03}. An alternative formulation in terms of quantum mechanical $t$-matrices \cite{Cuevas96} also proved fruitful \cite{Cuevas01,Huertas02,Eschrig03,Eschrig04,Kopu04,Graser07}. The latest formulation, in terms of interface scattering matrices, is able to include non-equilibrium phenomena, interfaces and materials with weak or strong spin polarization, multi-band systems, as well as disordered systems \cite{Eschrig09}. For the diffusive limit a set of second-order matrix differential equations has been derived by Usadel \cite{Usadel}. In contrast to the ballistic case, where boundary conditions have been formulated for a wide range of applications, boundary conditions for the diffusive limit have so far been formulated only in certain limiting cases. The first formulation is by Kupriyanov and Lukichev, appropriate for the tunneling limit \cite{Kupriyanov88}. This was generalized to arbitrary transmission by Nazarov \cite{naz}. A major advance was made by Cottet \emph{et al.} in formulating boundary conditions for Usadel equations appropriate for spin-polarized interfaces \cite{cottet}. These boundary conditions are valid in the limit of small transmission, spin polarization, and spin-dependent scattering phase shifts (this term is often used interchangeably with ``spin-mixing angles'' \cite{Tokuyasu88}). Subsequent formulations allowed for arbitrary spin polarization, although being restricted to small transmission and spin-dependent scattering \cite{Machon1,Machon2,Bergeret12}. In Ref. 
\cite{Bergeret12} the authors present ``heuristically'' deduced boundary conditions, which coincide with the ones used in Refs. \cite{Machon1,Machon2}. Here we not only present the full derivation of the specific boundary conditions used in Refs. \cite{Machon1,Machon2,Bergeret12}, but go further and give a full solution of the problem. With this, the long-standing problem of how to generalize Nazarov's formula for arbitrary transmission probability \cite{naz} to the case of spin-polarized systems with arbitrary spin polarization and arbitrary spin dependent scattering phases is solved. Our boundary conditions are general enough to allow for non-equilibrium situations within Keldysh formalism, as well as for complex interface spin textures. We reproduce as limiting cases all previously known formulations. \section{Transport Equations} The central quantity in quasiclassical theory of superconductivity \cite{Eilenberger,Larkin} is the quasiclassical Green function (``propagator'') $\check{g}({\bf p}_F,{\bf R},E,t)$. It describes quasiparticles with energy $E$ (measured from the Fermi level) and momentum ${\bf p}_F$ moving along classical trajectories with direction given by the Fermi velocity ${\bf v}_F({\bf p}_F)$ in external potentials and self-consistent fields that are modulated by the slow spatial (${\bf R}$) and time ($t$) coordinates \cite{Serene83,Rammer86,Belzig99}. The quasiclassical Green function is a functional of self-energies $\check\Sigma({\bf p}_F,{\bf R},E,t)$, which in general include molecular fields, the superconducting order parameter $\Delta ({\bf p}_F,{\bf R},t)$, impurity scattering, and the external potentials. The quantum mechanical degrees of freedom of the quasiparticles show up in the matrix structure of the quasiclassical propagator and the self-energies. 
It is convenient to formulate the theory using 2$\times$2 matrices in Keldysh space \cite{Keldysh} (denoted by a ``check'' accent), the elements of which in turn are 2$\times$2 Nambu-Gor'kov matrices \cite{Gorkov58,Nambu} in particle-hole (denoted by a ``hat'' accent) space. The structure of the propagators and self-energies in Keldysh-space is \numparts \begin{eqnarray} \check g= \left( \begin{array}{cc} \hat g^R & \hat g^K \\ 0 & \hat g^A \end{array} \right)_{\!\rm kel}, \quad \label{sigma} \check{\Sigma}=\left( \begin{array}{cc} \hat{\Sigma}^R & \hat{\Sigma}^K \\ 0 & \hat{\Sigma}^A \end{array} \right)_{\! \rm kel}, \end{eqnarray} where the superscripts $R$, $A$, and $K$ refer to retarded, advanced, and Keldysh components, respectively, and with the particle-hole space structure \footnote{ For the definitions of all Green functions in this paper we use a basis of fermion field operators in Nambu $\ootimes$ spin-space as $\Psi(\vecr,t) = [\psi_\uparrow(\vecr,t), \psi_\downarrow(\vecr,t), \psi_\uparrow(\vecr,t)^\dag, \psi_\downarrow(\vecr,t)^\dag]^T$ . } \begin{eqnarray} \label{gl_green3} \hat{g}^{R,A}=\! \left( \begin{array}{cc} g^{R,A} & f^{R,A} \\ \tilde{f}^{R,A} & \tilde {g}^{R,A} \end{array} \right)_{\! \rm ph},\quad \hat{g}^{K}=\! \left( \begin{array}{cc} \; \, g^K & \; \, f^K \\ -\tilde{f}^K & -\tilde {g}^K \end{array} \right)_{\! \rm ph} \end{eqnarray} for Green functions, and \begin{eqnarray} \label{gl_self3} \hat{\Sigma}^{R,A}=\! \left( \begin{array}{cc} \Sigma^{R,A} & \Delta^{R,A} \\ \tilde{\Delta}^{R,A} & \tilde {\Sigma}^{R,A} \end{array} \right)_{\! \rm ph},\quad \hat{\Sigma}^{K}=\! \left( \begin{array}{cc} \;\, \Sigma^K & \; \, \Delta^K \\ -\tilde{\Delta}^K & -\tilde {\Sigma}^K \end{array} \right)_{\! \rm ph} \end{eqnarray} \endnumparts for self-energies. 
For spin-degenerate trajectories (\ie in systems with weak or no spin-polarization) the elements of the 2$\times$2 Nambu-Gor'kov matrices are 2$\times$2 matrices in spin space, \eg $g^R=g^R_{ab}$ with $a,b\in \{\uparrow, \downarrow\}$, and similarly for others. In strongly spin-polarized ferromagnets the elements of the 2$\times$2 Nambu-Gor'kov matrices are spin-scalar (due to very fast spin-dephasing in a strong exchange field), and the system must be described within the preferred quantization direction given by the internal exchange field. The terms ``weak'' and ``strong'' refer to the spin-splitting of the energy bands being comparable to the superconducting gap or to the band width, respectively. In writing Eqs. \eqref{sigma}-\eqref{gl_self3} we used general symmetries, which are accounted for by the ``tilde'' operation, \begin{equation} \label{tilde} \tilde{X}({\bf p}_F,{\bf R},E,t)=X(-{\bf p}_F,{\bf R},-E,t)^\ast. \end{equation} Retarded (advanced) functions can be analytically continued into the upper (lower) complex energy half plane, in which case the relation is modified to $\tilde{X}({\bf p}_F,{\bf R},E,t)=X(-{\bf p}_F,{\bf R},-E^\ast,t)^\ast$ with complex $E$. The quasiclassical Green functions satisfy the Eilenberger-Larkin-Ovchin\-nikov transport equation and normalization condition \begin{equation} \left[E \check \tau_3 - \check \Sigma , \check g \right]_{\otimes} + \mathrm{i} \hbar {\bf v}_F \cdot \nabla \check g=\check 0, \quad \check g \otimes \check g = -\pi^2 \check 1. \label{eilen} \end{equation} The non-commutative product $\otimes$ combines matrix multiplication with a convolution over the internal energy-time variables in Wigner coordinate representation, \begin{equation} (\check A \otimes \check B)(E,t) \equiv e^{\frac{\mathrm{i}}{2} (\partial_E^A\partial_t^B-\partial_t^A\partial_E^B)} \check A(E,t) \check B(E,t), \end{equation} and $\check \tau_3=\hat \tau_3 \check 1$, where $\hat \tau_3$ is a Pauli matrix in particle-hole space. 
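To make the gradient structure of the $\otimes$ product concrete, the following symbolic sketch (a toy illustration, not part of the formalism: scalar functions $A(E,t)$ and $B(E,t)$ stand in for matrix entries, and the exponentiated derivative is truncated at first order) checks that the zeroth-order terms cancel in the commutator $[\check A,\check B]_\otimes=\check A\otimes\check B-\check B\otimes\check A$ appearing in \eqref{eilen}, leaving a Poisson-bracket-type expression in $(E,t)$:

```python
import sympy as sp

E, t = sp.symbols('E t', real=True)

def moyal(A, B, order=1):
    """(A o B)(E, t) truncated at the given order of the
    exponentiated-derivative (gradient) expansion."""
    result = A * B
    if order >= 1:
        result += sp.I / 2 * (sp.diff(A, E) * sp.diff(B, t)
                              - sp.diff(A, t) * sp.diff(B, E))
    return sp.expand(result)

# Toy scalar symbols standing in for slowly varying matrix entries.
A = E**2 * t
B = E * t**3

# Zeroth-order terms cancel in the commutator; what survives at first
# order is i (dA/dE dB/dt - dA/dt dB/dE).
comm = moyal(A, B) - moyal(B, A)
bracket = sp.I * (sp.diff(A, E) * sp.diff(B, t)
                  - sp.diff(A, t) * sp.diff(B, E))
assert sp.simplify(comm - bracket) == 0
```

For matrix-valued $\check A$, $\check B$ the zeroth-order matrix commutator survives as well; the scalar toy case merely isolates the first gradient correction.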
Here and below, $\left[A,B \right]_\otimes \equiv A\otimes B-B\otimes A$. The operation $\nabla$ acts on the variable ${\bf R}$. The functional dependence of the quasiclassical propagator on the self-energies is given in the form of self-consistency conditions. For instance, for a weak-coupling, $s$-wave order parameter the condition reads \begin{equation} \hat \Delta ({\bf R},t) = V_{s}\int^{E_c}_{-E_c} \frac{dE }{4\pi \mathrm{i}} \langle N_{F}({\bf p}_F)\hat f^{K}_s({\bf p}_F,{\bf R},E,t) \rangle_{{\bf p}_F}, \end{equation} where $V_{s}$ is the $s$-wave part of the singlet pairing interaction, $N_F$ is the density of states per spin at the Fermi level, $\hat f^{K}_s$ is the spin-singlet part of the Keldysh component $\hat f^{K}$, and $\langle \hspace{2mm} \rangle_{{\bf p}_F}$ denotes averaging over the Fermi surface. The cut-off energy $E_c$ is to be eliminated in favor of the superconducting transition temperature in the usual manner. When the quasiclassical Green function has been determined, physical quantities of interest can be calculated. For example, the current density at position ${\bf R}$ and time $t$ reads (with $e<0$ the electron charge) \begin{equation} {\bf j} ({\bf R},t) = e \int^{\infty}_{-\infty} \frac{dE }{8\pi \mathrm{i}} {\rm Tr} \langle N_{F}({\bf p}_F) {\bf v}_F({\bf p}_F) \hat \tau_3 \hat g^{K}({\bf p}_F,{\bf R},E,t)\rangle_{{\bf p}_F}. \label{densityofstates} \end{equation} The symbol Tr denotes a trace over the 2$\times$2 particle-hole space as well as over 2$\times$2 spin space in the case of spin-degenerate trajectories. In the dirty (diffusive) limit, strong scattering by non-magnetic impurities effectively averages the quasiclassical propagator over momentum directions. 
The Green function may then be expanded in the small parameter $k_{\rm B}T_{c}\tau/\hbar$ ($\tau$ is the momentum relaxation time) following the standard procedure \cite{Usadel,Alexander85} \begin{eqnarray} \check{g}({\bf p}_F,{\bf R},E,t) \approx \check{G}({\bf R},E,t) + \check{g}^{(1)} ({\bf p}_F,{\bf R},E,t) \label{exp} \end{eqnarray} where the magnitude of $\check{g}^{(1)}$ is small compared to that of $\check{G}$. The impurity self-energy is related to an (in general anisotropic) lifetime function $\tau({\bf p}_F',{\bf p}_F)$ \cite{Alexander85}. Substituting \eqref{exp} into \eqref{eilen}, multiplying with $N_F({\bf p}_F')\mbox{v}_{F,j}({\bf p}_F')\tau({\bf p}_F',{\bf p}_F)$, averaging over momentum directions, considering that $\check{\Sigma}' \tau /\hbar $ is small, where $\check{\Sigma}' $ is the self-energy reduced by the contribution due to non-magnetic impurity scattering, and using $\check G\otimes \check G=-\pi^2 \check 1$ and $\check G\otimes \check{g}^{(1)} + \check{g}^{(1)} \otimes \check G = \check 0$, one obtains (we suppress here the arguments ${\bf R},E,t$) \begin{equation} \label{relation} \left\langle N_F({\bf p}_F) \mbox{v}_{F,j}({\bf p}_F) \check{g}^{(1)}({\bf p}_F)\right\rangle_{{\bf p}_F} = N_F\sum_k \frac{D_{jk}}{\mathrm{i}\pi} \check{G} \otimes \nabla_k \check{G}, \end{equation} where $N_F=\langle N_F({\bf p}_F )\rangle_{{\bf p}_F}$ is the local density of states per spin at the Fermi level, $\nabla_k=\partial/\partial R_k$, the summation is over $k\in \left\{x,y,z\right\}$, and \begin{equation} D_{jk}= \frac{1}{N_F^2}\Big\langle\Big\langle N_F({\bf p}_F' ) \mbox{v}_{F,j}({\bf p}_F') \, \tau ({\bf p}_F',{\bf p}_F)\, \mbox{v}_{F,k}({\bf p}_F) N_F({\bf p}_F) \Big\rangle_{{\bf p}_F}\Big\rangle_{{\bf p}_F'} \end{equation} is the diffusion constant tensor. For isotropic systems, $D_{jk}=D\delta_{jk}$. 
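As a quick numerical aside (a minimal sketch with assumed illustrative values $v_F=1$ and a spherical Fermi surface; not part of the derivation), one can check the angular average that forces $D_{jk}\propto\delta_{jk}$ in the isotropic case, namely $\langle \mbox{v}_{F,j}\,\mbox{v}_{F,k}\rangle_{{\bf p}_F}=(v_F^2/3)\,\delta_{jk}$:

```python
import numpy as np

rng = np.random.default_rng(0)
v_F = 1.0  # illustrative value

# Sample Fermi-momentum directions uniformly on the unit sphere
# (normalized 3D Gaussians are isotropic).
n = 200_000
hat_p = rng.normal(size=(n, 3))
hat_p /= np.linalg.norm(hat_p, axis=1, keepdims=True)
v = v_F * hat_p  # Fermi velocity along the momentum direction

# Monte-Carlo estimate of the Fermi-surface average <v_j v_k>.
avg_vv = v.T @ v / n

# Isotropy forces the average onto (v_F^2 / 3) * delta_{jk}.
assert np.allclose(avg_vv, (v_F**2 / 3) * np.eye(3), atol=5e-3)
```

The factor $1/3$ is the three-dimensional angular average; combined with a momentum-independent relaxation time it yields the familiar estimate $D\sim v_F^2\tau/3$ (a standard result quoted here as an aside, not derived in the text).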
The Usadel Green function $\check{G}$ obeys the following transport equation and normalization condition \cite{Usadel}, \begin{eqnarray} \label{gl_usdl} \left[ E \hat{\tau}_3 \check{1} -\check{\Sigma}_0\, , \, \check{G} \right]_\otimes &+&\sum_{jk}\frac{\hbar D_{jk}}{\pi} \nabla_j \left( \check{G} \otimes \nabla_k \check{G} \right) = \check 0, \quad \check G \otimes \check G = -\pi^2 \check 1, \end{eqnarray} where $\check \Sigma_0=\langle N_F({\bf p}_F) \check \Sigma'({\bf p}_F)\rangle_{{\bf p}_F}/N_F$. The Usadel propagator $\check G$ is a functional of $\check \Sigma_0$. The structures of $\check G$ and $\check \Sigma_0$ are the same as in Eqs. \eqref{sigma}-\eqref{gl_self3} (with $\check G$ replacing $\check g$ and $\Sigma_0$ replacing $\Sigma $). Eq. \eqref{tilde} is replaced by \begin{equation} \label{ustilde} \tilde{X}({\bf R},E,t)=X({\bf R},-E,t)^\ast. \end{equation} The current density for diffusive systems is obtained from Eqs. \eqref{relation} and \eqref{densityofstates}, and is given by \begin{equation} j_i({\bf R},t) = -e \sum_k \int^{\infty}_{-\infty} \frac{dE }{8\pi^2} {\rm Tr} N_F D_{ik} \hat \tau_3 [\check G ({\bf R},E,t)\otimes \nabla_k \check G({\bf R},E,t)]^{K} . \label{densityofstatesdiff} \end{equation} A vector potential ${\bf A}({\bf R},t)$ enters in a gauge invariant manner by replacing the spatial derivative operators in all expressions by (see \eg \cite{Alexander85,Tanaka09}) \begin{equation} \nabla_{i} \hat X \to \hat \partial_i \otimes \hat X \equiv \nabla_{i} \hat X -\mathrm{i} \left[ \frac{e}{\hbar }\hat \tau_3 A_i,\hat X\right]_{\otimes}. \end{equation} Finally, the case of a strongly spin-polarized itinerant ferromagnet with superconducting correlations (\eg due to the proximity effect when in contact with a superconductor) can be treated by quasiclassical theory as well \cite{Eschrig03,Eschrig04,Kopu04}. 
In this case, when the spin-splitting of the energy bands is comparable to the band width of the two spin bands, there exist two well separated fully spin-polarized Fermi surfaces in the system, and the length scale associated with $\hbar/|{\bf p}_{F\uparrow}-{\bf p}_{F\downarrow} |$ is much shorter than the coherence length scale in the ferromagnet. Equal-spin correlations nevertheless remain coherent over long distances in such a system; $\uparrow\downarrow $ and $\downarrow\uparrow$ correlations are, however, incoherent and thus negligible within the quasiclassical approximation. Fermi velocity, density of states, diffusion constant tensor, and coherence length all become spin-dependent. The quasiclassical propagator is then spin-scalar for each trajectory, with either all elements $\uparrow\uparrow $ or all elements $\downarrow\downarrow $ depending on the spin Fermi surface the trajectory corresponds to. The Eilenberger and Usadel equations have the same form as before for each separate spin band. The spin-resolved current densities are given in the ballistic case by \begin{equation} {\bf j}_{\uparrow}=e \int^{\infty}_{-\infty} \frac{dE }{8\pi \mathrm{i}} {\rm Tr} \big\langle N_{F\uparrow} {\bf v}_{F\uparrow} \hat \tau_3 \hat g^K_{\uparrow\uparrow} \big\rangle_{{\bf p}_{F\uparrow}} , \label{currentstrong} \end{equation} and in the diffusive case by \begin{equation} j_{k\uparrow}= -e \sum_j \int^{\infty}_{-\infty} \frac{dE }{8\pi^2} {\rm Tr} N_{F\uparrow} D_{\uparrow kj} \hat \tau_3 \left[ \check G_{\uparrow\uparrow}\otimes \nabla_j \check G_{\uparrow\uparrow} \right]^K, \label{diffcurrentstrong} \end{equation} and analogously for spin down. For heterostructures, the above equations must be supplemented with boundary conditions at the interfaces. A practical formulation of boundary conditions for diffusive systems valid for arbitrary transmission and spin polarization is the goal of this paper. 
\section{Boundary Conditions } \subsection{Interface Scattering Matrix} We formulate boundary conditions at an interface in terms of the normal-state interface scattering matrix $\hat {\bf S}$ \cite{Lambert91,Takane92,Beenakker92}, connecting incoming with outgoing Bloch waves on either side of the interface with each other. We use the notation \begin{equation} \label{SM0} \hat {\bf S}= \left( \begin{array}{cc} \hat {\bf S }_{11} & \quad \hat {\bf S}_{12} \\ \hat {\bf S}_{21} & -\hat {\bf S}_{22} \end{array} \right)_{\!\scriptscriptstyle \! \nearrow \!\!\!\!\!\! \nwarrow \;}, \end{equation} where $1$ and $2$ refer to the two sides of the interface, and the subscript label $\; \scriptscriptstyle \! \nearrow \!\!\!\!\!\! \nwarrow \; \!\! $ indicates that the 2$\times$2 matrix structure refers to reflection and transmission amplitudes at an interface. The components $\hat {\bf S}_{ij}$ are matrices in particle-hole space as well as in scattering channel space (\eg scattering channels for ballistic transport would be parameterized by the Fermi momenta of incoming and outgoing Bloch waves). Each element in 2$\times$2 particle-hole space is in turn a matrix in combined spin and channel space, i.e. the number of incoming directions (assumed to be equal to the number of outgoing directions due to particle conservation) gives the dimension in channel space. The dimension in spin space is 2 for spin-degenerate channels and 1 for spin-scalar channels. If time-reversal symmetry is preserved, Kramers degeneracy requires that each element of the scattering matrix has a 2$\times$2 spin (or, more generally, pseudo-spin) structure (as it connects doubly degenerate scattering channels on either side of the interface). For spin-polarized interfaces (\eg ferromagnetic or with Rashba spin-orbit coupling) the scattering matrix is not spin-degenerate. 
However, if the splitting of the spin-degeneracy is on the energy scale of the superconducting gap, it can be neglected within the precision of quasiclassical theory of superconductivity. On the other hand, if the lifting of the spin-degeneracy of energy bands is comparable to the Fermi energy, the degeneracy of the scattering channels must be lifted as well in order to achieve consistency within quasiclassical theory. For definiteness, we denote the dependence on the scattering channels by indices $n,n'$: \begin{equation} [\hat {\bf S}_{\alpha \beta}]_{nn'}, \end{equation} even for the ballistic case, for which $[\hat {\bf S}_{\alpha \beta}]_{nn'} \equiv \hat {\bf S}_{\alpha \beta} ({\bf p}_{F,n},{\bf k}_{F,n'})$. As shown in \ref{app1} and \ref{app2}, the scattering matrix for an interface can be written in polar decomposition in full generality as \begin{equation} \hat {\bf S}= \left( \begin{array}{cc} \sqrt{1-C C^\dagger }& C\\ C^\dagger & -\sqrt{1-C^\dagger C} \end{array} \right)_{\! \scriptscriptstyle \! \nearrow \!\!\!\!\!\! \nwarrow \;} \left( \begin{array}{cc} {\cal S} & 0\\ 0& \breve{\cal S} \end{array} \right)_{\! \scriptscriptstyle \! \nearrow \!\!\!\!\!\! \nwarrow \;} \end{equation} with unitary matrices ${\cal S}$ and $\breve{\cal S}$, and a transmission matrix $C$. All are matrices in particle-hole space, scattering channel space, and possibly (pseudo-)spin space. The above decomposition divides the scattering matrix into a Hermitian part and a unitary part. From this decomposition, we can define the auxiliary scattering matrix \begin{eqnarray} \label{SV0} \hat {\bf S}_0&=& \left( \begin{array}{cc} {\cal S} & 0\\ 0& \breve{\cal S} \end{array} \right)_{\! \scriptscriptstyle \! \nearrow \!\!\!\!\!\! \nwarrow \;} , \end{eqnarray} which retains all the phase information during reflection on both sides of the interface, and has zero transmission components. 
The decomposition is uniquely defined when there are no zero-reflection singular values (we assume here that a small non-zero reflection always takes place in each transmission channel; perfectly transmitting channels can always be treated separately, as the corresponding boundary conditions are trivial). For the matrix $C$ we introduce the parameterization \begin{equation} C=\left(1+tt^\dagger \right)^{-1} 2t , \label{C} \end{equation} (see \ref{app3}) which is uniquely defined when all singular values of $t$ are in the interval $[0,1]$ (as required in order to ensure non-negative reflection singular values). For notational simplicity we define ``hopping amplitude'' matrices \begin{eqnarray} \pi \tau_{12} = t\breve{\cal S},\quad \pi \tau_{21}=t^\dagger {\cal S} , \label{tau} \end{eqnarray} as well as unitary matrices \begin{eqnarray} S_1={\cal S},\qquad S_2=\breve{\cal S}. \end{eqnarray} In terms of these, the relation \begin{eqnarray} \label{tausymm} \tau_{\alpha \bar \alpha } = S_\alpha (\tau_{\bar \alpha \alpha })^\dagger S_{\bar \alpha } \end{eqnarray} obviously holds, where $(\alpha, \bar \alpha )\in \{(1,2),(2,1)\}$, and the labels 1 and 2 refer to the respective sides of the interface. Here, and below, the Hermitian conjugate operation involves a transposition in channel indices. The particle-hole structures of the surface scattering matrix and the hopping amplitude are given by \begin{eqnarray} \hat S_{\alpha}&=&\left( \begin{array}{cc} S_{\alpha}& 0 \\ 0 & (\tilde S_{\alpha})^\dagger \end{array} \right)_{\!
\rm ph}, \qquad \hat \tau_{\alpha\bar \alpha }=\left( \begin{array}{cc} \tau_{\alpha \bar \alpha } & 0 \\ 0 & (\tilde \tau_{\bar \alpha \alpha })^\dagger \end{array} \right)_{\!\rm ph}, \end{eqnarray} with \begin{eqnarray} \, [\tilde S_{\alpha} ]_{nn'}&=& [S_{\alpha}]_{\bar n\bar n'}^{\ast}, \quad \, [\tilde \tau_{\alpha\bar\alpha }]_{nn'}= [\tau_{\alpha \bar \alpha }]_{\bar n\bar n'}^{\ast}, \end{eqnarray} where $\bar n$ and $\bar n'$ denote mutually conjugated channels, \eg defined by ${\bf p}_{F,\bar n'}\equiv -{\bf k}_{F,n'}$ and ${\bf k}_{F,\bar n} \equiv -{\bf p}_{F,n}$. Finally, the Keldysh structure of these quantities is \begin{eqnarray} \check S_{\alpha}&=& \left( \begin{array}{cc} \hat S_{\alpha}^{R}& 0 \\ 0 & (\hat S_{\alpha}^{A})^\dagger \end{array} \right)_{\! \rm kel} \equiv \left( \begin{array}{cc} \hat S_{\alpha}& 0 \\ 0 & \hat S_{\alpha} \end{array} \right)_{\! \rm kel} , \\ \check \tau_{\alpha\bar \alpha }&=&\left( \begin{array}{cc} \hat \tau_{\alpha \bar \alpha }^{R} & 0 \\ 0 & (\hat \tau_{\bar \alpha \alpha }^{A})^\dagger \end{array} \right)_{\!\rm kel } \equiv \left( \begin{array}{cc} \hat \tau_{\alpha \bar \alpha } & 0 \\ 0 & \hat \tau_{\alpha \bar \alpha } \end{array} \right)_{\!\rm kel } \end{eqnarray} (the additional Hermitian conjugate in these equations is due to the fact that advanced Green functions have the roles of ``incoming'' and ``outgoing'' momentum directions interchanged compared to retarded Green functions; this is similar to the additional Hermitian conjugate appearing for hole components in particle-hole space). Thus, the Keldysh matrix structure for $\check S_{\alpha}$ and $\check \tau_{\alpha\bar \alpha }$ is trivial (proportional to unit matrix). 
The full normal-state scattering matrix is diagonal in particle-hole and in Keldysh space, with reflection components \begin{eqnarray} \check {\bf S}_{\alpha\alpha}&=& (1+\pi^2\check \tau_{\alpha\bar \alpha } \check \tau_{\alpha \bar\alpha }^\dagger )^{-1} \; (1-\pi^2\check \tau_{\alpha\bar\alpha } \check \tau_{\alpha \bar\alpha }^\dagger)\; \check S_\alpha , \end{eqnarray} and with transmission components \begin{eqnarray} \check {\bf S}_{\alpha\bar\alpha }&=& (1+\pi^2\check\tau_{\alpha\bar\alpha } \check \tau_{\alpha \bar\alpha }^\dagger)^{-1} \; 2\pi \check \tau_{\alpha\bar\alpha } . \end{eqnarray} Note that $\tau_{\alpha \bar \alpha } $ connects incoming with outgoing Bloch waves by definition (as the scattering matrix does). \begin{figure}[t!] \centering{ (a)\includegraphics[width=0.7\linewidth]{Iso.pdf}\\ \includegraphics[width=0.5\linewidth]{ScattTransf.pdf} } \caption{ (a): Illustration of notation used in this paper. (b) and (c): Structure of boundary condition with transfer matrices ${\bf M}$ in (b), and with scattering matrices ${\bf S}$ in (c) (yellow). ``Drone'' amplitudes in the propagators (orange fields) connect in (b) incoming ($i$) and outgoing ($o$) momentum directions, and in (c) the two sides, $\alpha $ and $\overline\alpha $, of the interface. To obtain quasiclassical boundary conditions, Drone amplitudes in (b) and (c) must be eliminated. In this paper we use formulation (c). To connect to the notation in the main text, $g^{ii}_{\alpha\alpha}\equiv g^i$, $g^{ii}_{\bar\alpha\bar\alpha}\equiv \underline{g}^i$, $g^{oo}_{\alpha\alpha}\equiv g^o$, and $g^{oo}_{\bar\alpha\bar \alpha}\equiv \underline{g}^o$. } \label{fig:notation} \end{figure} We will formulate the theory such that all equations are valid on either side of the interface. This allows us to drop the indices $\alpha, \bar \alpha $ for simplicity of notation by arbitrarily choosing one side of the interface, and denoting quantities on the other side of the interface by underline.
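As a consistency check, the reflection and transmission components above can be assembled from the hopping amplitudes of Eq.~\eqref{tau} and verified to form a unitary scattering matrix, along with the symmetry relation \eqref{tausymm}. The sketch below (Python/NumPy; the toy channel dimension and random matrices are our own illustrative assumptions, with spin and Keldysh structures suppressed) performs both checks:

```python
import numpy as np

rng = np.random.default_rng(1)
n, pi = 3, np.pi
I2 = np.eye(n)

def random_unitary(m):
    q, r = np.linalg.qr(rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m)))
    return q * (np.diag(r) / np.abs(np.diag(r)))

# hopping matrix t with singular values in [0, 1]
u, w = random_unitary(n), random_unitary(n)
t = u @ np.diag(rng.uniform(0.0, 1.0, n)) @ w
S1, S2 = random_unitary(n), random_unitary(n)  # S_1 = cal-S, S_2 = breve-cal-S

tau12 = t @ S2 / pi           # pi tau_12 = t breve-S
tau21 = t.conj().T @ S1 / pi  # pi tau_21 = t^dagger cal-S

# symmetry relation, Eq. (tausymm)
assert np.allclose(tau12, S1 @ tau21.conj().T @ S2)

def refl(tau, S):
    """Reflection block (1 + pi^2 tau tau^+)^(-1) (1 - pi^2 tau tau^+) S."""
    m = pi**2 * tau @ tau.conj().T
    return np.linalg.inv(I2 + m) @ (I2 - m) @ S

def trans(tau):
    """Transmission block (1 + pi^2 tau tau^+)^(-1) 2 pi tau."""
    m = pi**2 * tau @ tau.conj().T
    return np.linalg.inv(I2 + m) @ (2 * pi * tau)

S_full = np.block([[refl(tau12, S1), trans(tau12)],
                   [trans(tau21), -refl(tau21, S2)]])
assert np.allclose(S_full.conj().T @ S_full, np.eye(2 * n))  # unitary S-matrix
```

The assembled matrix coincides with the polar-decomposed form of Eq.~\eqref{SM0}, since $\pi^2\tau_{12}\tau_{12}^\dagger = tt^\dagger$ and $\pi^2\tau_{21}\tau_{21}^\dagger = t^\dagger t$.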
In particular, we will use \begin{eqnarray} &&\check S_\alpha \equiv \check S, \quad \check S_{\bar \alpha} \equiv \underline {\check S}, \quad \check \tau_{\alpha \bar \alpha}\equiv \check \tau ,\quad \check \tau_{\bar \alpha \alpha}\equiv \underline{\check \tau } \nonumber \\ && \check g_\alpha \equiv \check g , \quad \check g_{\bar \alpha }\equiv \underline{ \check g },\quad \check G_\alpha \equiv \check G , \quad \check G_{\bar \alpha }\equiv \underline{ \check G }, \end{eqnarray} and so forth [see figure \ref{fig:notation}(a)]. Also, from Eq. \eqref{tausymm} we have $\check \tau = \check S \underline{\check\tau}^\dagger \underline{\check S}$. \subsection{General Boundary Conditions for diffusive systems} One main problem with boundary conditions for quasiclassical propagators is illustrated in figure \ref{fig:notation} (b) and (c). In previous treatments \cite{Millis88,naz,cottet} the starting point was a transfer matrix description, see figure \ref{fig:notation} (b), which, however, required the elimination of so-called ``Drone amplitudes'', which are propagators that mix incoming with outgoing directions. Here, we will employ a scattering matrix description, see figure \ref{fig:notation} (c), which, on the other hand, requires a similar elimination of Drone amplitudes, this time of propagators mixing the two sides of the interface. However, for an impenetrable interface this latter problem does not arise, a fact we will exploit. The strategy to derive the needed boundary conditions is to apply a three-step procedure. In the first step, the problem of an impenetrable interface with the auxiliary scattering matrix defined in Eq.~\eqref{SV0} is solved on each side of the interface \cite{Eschrig03}. For this step, the ballistic solutions for the envelope functions for the Gor'kov propagators close to the interfaces should be expressed in terms of the solutions $\check{G}$ of the Usadel equation.
In a second step, these ballistic solutions (auxiliary propagators) are used in order to find the full ballistic solutions for finite transmission by utilizing a $t$-matrix technique \cite{Cuevas01,Eschrig03,Eschrig04,Kopu04}. In the third and final step, the matrix current will be derived from the ballistic solutions, which then enters the boundary conditions for the Usadel equations. We will present explicit solutions for all three steps, such that the procedure effectively describes boundary conditions for the solutions of Usadel equations on either side of the interface. We use for the auxiliary propagators the notation $\check g_{0}^{o}$, $\check g_{0}^{i}$, $\underline {\check g}_{0}^{o}$ and $\underline {\check g}_{0}^{i}$, where the upper index denotes the direction of the Fermi velocity. {\it Incoming} momenta (index $i$) are those with a Fermi velocity pointing towards the interface, and {\it outgoing} momenta (index $o$) are those with a Fermi velocity pointing away from the interface. \subsubsection{Solution for impenetrable interface:} We first solve for the auxiliary ballistic propagators fulfilling the impenetrable boundary conditions \begin{eqnarray} \label{aux} \check { g}_{0}^{o}= \check S\; \check { g}_{0}^{i} \; \check S^\dagger , \quad \underline{\check{ g}}_{0}^{o}= \underline{\check S}\; \underline{\check{ g}}_{0}^{i} \; \underline{\check S}^\dagger , \end{eqnarray} implying matrix multiplication in the combined [Keldysh] $\times$ [particle-hole] $\times$ [combined scattering-channel and spin] space. For diffusive banks, it is necessary to connect the ballistic propagators ${\check g}_{0}^{i,o}$ with the isotropic solutions of the Usadel equation, ${\check G}$. The ballistic propagators $\check{g}_{0}^{i,o}$ and $\underline{\check{g}}_{0}^{i,o}$, which characterize electronic correlations next to the scattering barrier, depend on the electronic momentum.
However, in the diffusive case, impurity scattering leads to momentum isotropization away from the scattering barrier. This process occurs in isotropization zones with a thickness corresponding to a few times the elastic mean free path of the materials, see figure \ref{fig:notation} (a). This scale is itself much smaller than the scale on which the isotropic diffusive Green functions evolve in the bulk of the materials, in the framework of the Usadel equations. Indeed, the Usadel equations involve a superconducting coherence length, which is typically much larger than the elastic mean free path. Therefore, in order to describe disordered hybrid structures with Usadel equations, suitable boundary conditions should be expressed in terms of the values of the isotropic Green functions $\check{G}$ and $\underline{\check{G}}$ right at the beginning of the isotropization zones. To obtain such boundary conditions from Eq. \eqref{aux}, it is necessary to express the propagators $\check{g}_{0}^{i,o}$ and $\underline{\check{g}}_{0}^{i,o}$ in terms of $\check{G}$ and $\underline{\check{G}}$. This can be done by studying the spatial dependence of the Gor'kov Green functions (or full Green functions without the quasiclassical approximation) in the isotropization zones (see Refs. \cite{naz,cottet} for details). Using the fact that the dynamics of electrons is dominated by impurity scattering in these zones, one can express the Gor'kov Green functions in terms of $\check{g}_{0}^{i,o}$, $\underline{\check{g}}_{0}^{i,o}$, $\check{G}$ and $\underline{\check{G}}$.
Then, an elimination of unphysical solutions imposes the conditions \cite{naz} \numparts \begin{eqnarray} \label{naz1} ({\check G}-\mathrm{i}\pi {\check 1})\otimes (\check { g}_{0}^{i} + \mathrm{i}\pi {\check 1}) &=& {\check 0} ,\quad \label{naz2} (\check { g}_{0}^{i} - \mathrm{i}\pi {\check 1}) \otimes ({\check G}+\mathrm{i}\pi {\check 1})= {\check 0} \\ \label{naz3} ({\check G}+\mathrm{i}\pi {\check 1})\otimes (\check { g}_{0}^{o} - \mathrm{i}\pi {\check 1}) &=& {\check 0} ,\quad \label{naz4} (\check { g}_{0}^{o} + \mathrm{i}\pi {\check 1}) \otimes ({\check G}-\mathrm{i}\pi {\check 1})= {\check 0} \end{eqnarray} \endnumparts and similarly for $\underline{\check{G}}$ and $\underline{\check{g}}_{0}^{i,o}$. From this one obtains the identity $ \frac{1}{2}\left\{ \check { g}_{0}^{i,o} , \check G\right\}_\otimes=-\pi^2 {\check 1}$ for the anticommutator $\left\{ \cdot , \cdot \right\}_\otimes$. This allows us to solve for $\check { g}_{0}^{i,o}$ after some straightforward algebra, using Eq. \eqref{aux} and the abbreviations \begin{eqnarray} \label{def1} \check G'&=& \frac{1}{2\pi^2} \; (\check S^\dagger \check G \check S-\check G), \label{def2} \quad \check G''= \frac{1}{2\pi^2} \; (\check S\check G\check S^\dagger -\check G) , \end{eqnarray} (both are matrices depending via $\check S$ on the scattering channel index) leading to \cite{cottet} \numparts \begin{eqnarray} \label{gid} \check { g}_{0}^{i} -\mathrm{i}\pi {\check 1} = (1-\check G\otimes \check G')^{-1} \otimes (\check G-\mathrm{i}\pi {\check 1} ) , \\ \label{god} \check { g}_{0}^{o} +\mathrm{i}\pi {\check 1} = (1-\check G\otimes \check G'')^{-1} \otimes (\check G+\mathrm{i}\pi {\check 1} ) \end{eqnarray} (here and below the inverse is defined with respect to the $\otimes $-product), which, using identities like $\check G' \otimes \check G'=-\frac{1}{2\pi^2}\left\{ \check G',\check G\right\}_\otimes $ (with $\left\{ A,B\right\}_\otimes \equiv A\otimes B+B\otimes A$), can alternatively be written as \begin{eqnarray}
\label{gid1} \check { g}_{0}^{i} +\mathrm{i}\pi {\check 1} = (\check G+\mathrm{i}\pi {\check 1} ) \otimes (1-\check G'\otimes \check G)^{-1} , \\ \label{god1} \check { g}_{0}^{o} -\mathrm{i}\pi {\check 1} = (\check G-\mathrm{i}\pi {\check 1} ) \otimes (1-\check G''\otimes \check G)^{-1} . \end{eqnarray} \endnumparts Similar equations hold for $\underline{\check{G}}$ and $\underline{\check{g}}_{0}^{i,o}$ in terms of the scattering matrix $\underline{\check{S}}$. Introducing these solutions into Eqs.~\eqref{naz1}-\eqref{naz4} readily shows that the latter are fulfilled. We note that the relation $\check g_{0}^{i,o}\otimes \check g_{0}^{i,o}=-\pi^2 \check 1$ follows from $\check G\otimes \check G=-\pi^2 \check 1$ and $\check S\check S^\dagger = \check S^\dagger \check S= \check 1$. It is also important to notice that whereas $\check G$ is proportional to the unit matrix in channel space due to its isotropic nature \cite{cottet}, $\check S $, and consequently $\check G'$, $\check G''$, and $\check g_{0}^{i,o}$, are in general non-trivial matrices in channel space. Eqs. \eqref{gid}-\eqref{god}, or alternatively \eqref{gid1}-\eqref{god1}, together with Eq. \eqref{def1} uniquely determine $\check g_{0}^{i,o}$ in terms of the diffusive Green function $\check G$. We can rewrite the difference $\check { g}_{0}^{o}-\check { g}_{0}^{i}$ in a more explicit manner, using the abbreviations $\check \delta' \equiv \check G\otimes \check G'$ and $\check \delta''\equiv \check G''\otimes \check G$, leading to \begin{eqnarray} \label{gomgi} \check { g}_{0}^{o}-\check { g}_{0}^{i} = && (\check 1-\check \delta')^{-1} \otimes \left[ (\check G-\mathrm{i}\pi {\check 1} ) \otimes \check \delta'' - \check \delta' \otimes (\check G-\mathrm{i}\pi {\check 1} ) \right] \otimes (\check 1-\check \delta'')^{-1} . \end{eqnarray} \subsubsection{Solution for finite transmission:} The second step follows Refs.~ \cite{Eschrig03,Eschrig04}.
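Before carrying out the second step, the first-step solution can be verified numerically. In a static toy model the $\otimes$-product reduces to ordinary matrix multiplication, so plain linear algebra suffices; the sketch below (Python/NumPy; the dimensions, the particular matrix $g_2$ with $g_2^2=1$, and the random unitary standing for $\check S$ are illustrative assumptions) checks that Eqs.~\eqref{gid}-\eqref{god} fulfill the impenetrable boundary condition \eqref{aux}, the normalization, and the condition \eqref{naz1}:

```python
import numpy as np

nc, ns, pi = 3, 2, np.pi  # channels, internal (toy particle-hole) dimension
d = nc * ns
rng = np.random.default_rng(2)
I = np.eye(d)

def random_unitary(m):
    q, r = np.linalg.qr(rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m)))
    return q * (np.diag(r) / np.abs(np.diag(r)))

# G proportional to the unit matrix in channel space, with G @ G = -pi^2
g2 = np.array([[1.0, 2.0], [0.0, -1.0]])  # g2 @ g2 = identity
G = np.kron(np.eye(nc), -1j * pi * g2)
S = random_unitary(d)                     # non-trivial in channel space

Gp  = (S.conj().T @ G @ S - G) / (2 * pi**2)  # G'  of Eq. (def1)
Gpp = (S @ G @ S.conj().T - G) / (2 * pi**2)  # G'' of Eq. (def1)

g0i =  1j * pi * I + np.linalg.inv(I - G @ Gp)  @ (G - 1j * pi * I)  # Eq. (gid)
g0o = -1j * pi * I + np.linalg.inv(I - G @ Gpp) @ (G + 1j * pi * I)  # Eq. (god)

assert np.allclose(g0o, S @ g0i @ S.conj().T)  # boundary condition, Eq. (aux)
assert np.allclose(g0i @ g0i, -pi**2 * I)      # normalization
assert np.allclose((G - 1j * pi * I) @ (g0i + 1j * pi * I), 0.0)  # Eq. (naz1)
```

The remaining conditions \eqref{naz2}-\eqref{naz4} follow in the same way from the alternative forms \eqref{gid1}-\eqref{god1}.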
Once the auxiliary propagators are known, the full propagators can be obtained directly, without further solving the transport equation, in the following way. We solve {\it $t$-matrix equations} resulting from the transmission parameters $\check \tau$, for incoming and outgoing directions, which according to a procedure analogous to the one discussed in Refs. \cite{Cuevas96,Cuevas01} take the form \begin{eqnarray} \label{ti} \check { t}^{i}&=& \underline{\check \tau}^\dagger \; \underline{\check g}_{0}^{o} \; \underline{\check \tau} \otimes \left( \check 1 + \check { g}_{0}^{i} \otimes \check { t}^{i}\right), \quad \label{to} \check { t}^{o}= \check \tau\; \underline{\check g}_{0}^{i} \; \check \tau^\dagger \otimes \left( \check 1 + \check { g}_{0}^{o} \otimes \check { t}^{o}\right). \end{eqnarray} Using the symmetry Eq.~\eqref{tausymm}, the $t$-matrices for incoming and outgoing directions can be related through \begin{eqnarray} \label{tsym} \check { t}^{o}&=& \check S\; \check { t}^{i} \; \check S^\dagger . \end{eqnarray} Using the short notation \begin{eqnarray} \label{def01} \check { g}_{1}^{o} &\equiv & \check \tau\; \underline{\check g}_{0}^{i} \; \check \tau^\dagger, \qquad \check { g}_{1}^{i} \equiv \underline{\check \tau}^\dagger \; \underline{\check g}_{0}^{o} \; \underline{\check \tau}, \end{eqnarray} we formally solve Eqs.~\eqref{ti} and \eqref{to} for $\check t^{i,o}$: \begin{eqnarray} \label{tmatrix} \check { t}^{i,o}&=& \left(1- \check { g}_{1}^{i,o} \otimes \check { g}_{0}^{i,o} \right)^{-1} \otimes \check { g}_{1}^{i,o}. \end{eqnarray} The {\it full propagators}, fulfilling the desired boundary conditions at the interface, can now be easily calculated.
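In the same static-limit toy model (ordinary matrix products in place of $\otimes$; all names, seeds, and dimensions below are illustrative assumptions), one can verify that the closed form \eqref{tmatrix} solves the self-consistent $t$-matrix equation, and that the full propagator constructed from it preserves the normalization for an arbitrary $t$-matrix:

```python
import numpy as np

d, pi = 4, np.pi
rng = np.random.default_rng(3)
I = np.eye(d)

# toy auxiliary propagator with g0 @ g0 = -pi^2
m = np.kron(np.eye(2), np.array([[1.0, 2.0], [0.0, -1.0]]))  # m @ m = identity
g0 = -1j * pi * m

# toy counterpart of g_1 = tau^dagger gbar_0 tau (small hopping amplitudes)
g1 = 0.1 * (rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d)))

# closed-form t-matrix, Eq. (tmatrix) ...
t = np.linalg.inv(I - g1 @ g0) @ g1
# ... solves the self-consistent equation t = g_1 (1 + g_0 t)
assert np.allclose(t, g1 @ (I + g0 @ t))

# full propagator of the form of Eq. (gi); normalization survives for any t
g = g0 + (g0 + 1j * pi * I) @ t @ (g0 - 1j * pi * I)
assert np.allclose(g @ g, -pi**2 * I)
```

The last assertion only uses $(\check g_0\pm\mathrm{i}\pi\check 1)(\check g_0\mp\mathrm{i}\pi\check 1)=\check 0$, which is why it holds independently of the specific $t$-matrix.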
For incoming and outgoing directions they are obtained from \cite{Eschrig03,Kopu04} \numparts \begin{eqnarray} \label{gi} \check { g}^{i}&=& \check { g}_{0}^{i} + \left( \check { g}_{0}^{i} + \mathrm{i}\pi\check 1 \right) \otimes \check { t}^{i} \otimes \left(\check { g}_{0}^{i}- \mathrm{i}\pi\check 1\right), \quad \\ \check { g}^{o}&=& \check { g}_{0}^{o} + \left(\check { g}_{0}^{o} - \mathrm{i}\pi\check 1\right) \otimes \check { t}^{o} \otimes \left(\check { g}_{0}^{o} + \mathrm{i}\pi\check 1\right). \quad \label{go} \end{eqnarray} Noticing that $\left( \check { g}_{0}^{i,o} + \mathrm{i}\pi\check 1 \right) \otimes \left(\check { g}_{0}^{i,o}- \mathrm{i}\pi\check 1\right)=\check 0$, and $\left( \check { g}_{0}^{i,o} - \mathrm{i}\pi\check 1 \right) \otimes \left(\check { g}_{0}^{i,o}+ \mathrm{i}\pi\check 1\right)=\check 0$, as well as identities like ${\check { g}}_{0}^{i,o} \otimes ({\check { g}}_{0}^{i,o} +\mathrm{i}\pi {\check 1})= \mathrm{i}\pi {\check 1} \otimes ({\check { g}}_{0}^{i,o}+\mathrm{i}\pi {\check 1})$ etc., it is obvious that the normalization $\check { g}^{i,o}\otimes \check { g}^{i,o}=-\pi^2 \check 1$ holds. Using the same identities, we obtain the following expressions, alternative to Eqs. \eqref{gi}-\eqref{go}: \begin{eqnarray} \label{gi1} \check { g}^{i} &=& \check { g}_{0}^{i}+(\check { g}_{0}^{i}+\mathrm{i}\pi \check 1) \otimes [ \check { t}^{i} , \check { g}_{0}^{i} ]_\otimes =\check { g}_{0}^{i}- [ \check { t}^{i} , \check { g}_{0}^{i} ]_\otimes \otimes (\check { g}_{0}^{i}-\mathrm{i}\pi \check 1) ,\\ \label{go1} \check { g}^{o}&= & \check { g}_{0}^{o}+ (\check { g}_{0}^{o}-\mathrm{i}\pi \check 1) \otimes [ \check { t}^{o} , \check { g}_{0}^{o} ]_\otimes =\check { g}_{0}^{o}- [ \check { t}^{o} , \check { g}_{0}^{o} ]_\otimes \otimes (\check { g}_{0}^{o}+\mathrm{i}\pi \check 1) . \end{eqnarray} \endnumparts Equations \eqref{gi}-\eqref{go}, or alternatively, \eqref{gi1}-\eqref{go1}, in conjunction with Eqs.
\eqref{def01}-\eqref{tmatrix}, solve the problem of finding the ballistic solutions for finite transmission. We are now ready for the last step: relating these solutions to the matrix current, which enters the expression for the boundary conditions for $\check G$ and $\underline{\check G}$. \subsubsection{Matrix current and boundary conditions for diffusive propagators:} We now turn to the third and final step. As shown in Refs. \cite{naz,cottet}, the boundary conditions for quasiclassical isotropic Green functions can be obtained from the conservation of the matrix current $\mathcal{I}$ in the isotropization zones surrounding the scattering barrier. This quantity contains physical information on the flows of charge, spin and electron-hole coherence in a structure. We refer the reader to Refs. \cite{naz,cottet} for the general definition of $\mathcal{I}$ in terms of the Gor'kov Green functions. Using this definition, one can verify that $\mathcal{I}$ is spatially conserved throughout the isotropization zones. Then, one can express $\mathcal{I}$ next to the scattering barrier in terms of the propagators $\check{g}^{i,o}$ and $\underline{\check{g}}^{i,o}$, and at the beginning of the isotropization zones in terms of $\check{G}$ and $\underline{\check{G}}$, see Fig. \ref{fig:notation} (a). The conservation of the matrix current provides an equality between the two expressions. Since $\check{g}^{i,o}$ can be expressed in terms of $\check{g}_0^{i,o}$ and $\underline{\check{g}}_0^{i,o}$, and these in terms of $\check{G}$ and $\underline{\check{G}}$, this gives the desired boundary conditions.
Following Ref.~ \cite{Kopu04}, after some straightforward algebra we obtain \begin{eqnarray} \label{comm1} &&[ \check { t}^{o} , \check { g}_{0}^{o} ]_\otimes = \left(1- \check { g}_{1}^{o} \otimes \check { g}_{0}^{o} \right)^{-1} \left[ \check { g}_{1}^{o}, \check { g}_{0}^{o} \right]_\otimes \left(1- \check { g}_{0}^{o} \otimes \check { g}_{1}^{o} \right)^{-1} .\qquad \end{eqnarray} Using relations \eqref{aux} and \eqref{tsym} above, we find \begin{eqnarray} \check { g}^{i}&=& \check S^\dagger \; \left[ \check { g}_{0}^{o} + \left(\check { g}_{0}^{o} + \mathrm{i}\pi\check 1\right) \otimes \check { t}^{o} \otimes \left(\check { g}_{0}^{o} - \mathrm{i}\pi\check 1\right) \right] \check S, \label{eq8} \end{eqnarray} which allows us to derive the following relation \begin{eqnarray} \label{comm} \check{\cal I}'\equiv \check { g}^{o}- \check S\check { g}^{i} \check S^\dagger &=& -2\pi \mathrm{i}[ \check { t}^{o} , \check { g}_{0}^{o} ]_\otimes . \end{eqnarray} For calculating the charge current density in a given structure, it is sufficient to know $\check{\cal I}'$, because the matrices $\check S$ and $\check S^\dagger $ drop out of the trace as they commute with the $\hat \tau_3$ matrix in particle-hole space. Finally, we relate the obtained propagators $\check g^{i,o}$ to the matrix current ${\cal I}$, \begin{eqnarray} \label{matrixcurrent} \check{\cal I}\equiv \check { g}^{o}- \check { g}^{i} \equiv \check{\cal I}'+ \check{\cal I}'' \end{eqnarray} with \begin{eqnarray} \label{matrixcurrent2} \check{\cal I}''\equiv \check S\check { g}^{i} \check S^\dagger - \check { g}^{i} . \end{eqnarray} We remind the reader here that $\check{\cal I}$ has a matrix structure in Keldysh space, in particle-hole space, and in combined scattering-channel and spin space.
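The whole chain of steps can be exercised end-to-end in the static-limit toy model (Python/NumPy; ordinary matrix products replace the $\otimes$-product, Keldysh and particle-hole structures are suppressed, and all dimensions, seeds, and the hopping strength are illustrative assumptions): starting from surface scattering matrices and a hopping matrix, one builds the auxiliary propagators, the $t$-matrices, the full propagators, and finally verifies the symmetry \eqref{tsym} and the relation \eqref{comm}:

```python
import numpy as np

nc, ns, pi = 2, 2, np.pi
d = nc * ns
rng = np.random.default_rng(4)
I = np.eye(d)

def random_unitary(m):
    q, r = np.linalg.qr(rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m)))
    return q * (np.diag(r) / np.abs(np.diag(r)))

def aux_propagators(G, S):
    """Impenetrable-interface solution, Eqs. (gid)/(god)."""
    Gp  = (S.conj().T @ G @ S - G) / (2 * pi**2)
    Gpp = (S @ G @ S.conj().T - G) / (2 * pi**2)
    g0i =  1j * pi * I + np.linalg.inv(I - G @ Gp)  @ (G - 1j * pi * I)
    g0o = -1j * pi * I + np.linalg.inv(I - G @ Gpp) @ (G + 1j * pi * I)
    return g0i, g0o

g2 = np.array([[1.0, 2.0], [0.0, -1.0]])      # g2 @ g2 = identity
G  = np.kron(np.eye(nc), -1j * pi * g2)       # side alpha
Gu = np.kron(np.eye(nc), -1j * pi * g2.T)     # side alpha-bar ("underline")
S, Su = random_unitary(d), random_unitary(d)  # surface scattering matrices
tm = 0.2 * random_unitary(d)                  # hopping matrix t

tau  = tm @ Su / pi          # pi tau = t S-underline
tauu = tm.conj().T @ S / pi  # pi tau-underline = t^dagger S

g0i, g0o   = aux_propagators(G, S)
gu0i, gu0o = aux_propagators(Gu, Su)

g1i = tauu.conj().T @ gu0o @ tauu        # Eq. (def01)
g1o = tau @ gu0i @ tau.conj().T
ti = np.linalg.inv(I - g1i @ g0i) @ g1i  # Eq. (tmatrix)
to = np.linalg.inv(I - g1o @ g0o) @ g1o
assert np.allclose(to, S @ ti @ S.conj().T)  # Eq. (tsym)

gi = g0i + (g0i + 1j * pi * I) @ ti @ (g0i - 1j * pi * I)  # Eq. (gi)
go = g0o + (g0o - 1j * pi * I) @ to @ (g0o + 1j * pi * I)  # Eq. (go)

# matrix current contribution, Eq. (comm)
assert np.allclose(go - S @ gi @ S.conj().T,
                   -2j * pi * (to @ g0o - g0o @ to))
```

Note that the symmetry \eqref{tsym} comes out automatically here, because the underline-side auxiliary propagators satisfy $\underline{\check g}_0^o=\underline{\check S}\,\underline{\check g}_0^i\,\underline{\check S}^\dagger$.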
In terms of $\check{\cal I}$ the boundary condition results then from Eq.~\eqref{relation} and from the matrix current conservation in the isotropization regions \cite{naz} \begin{eqnarray} \label{gl_naz} {\cal G}_q\sum_{n=1}^{\cal N} \frac{\check{\cal I}_{nn} }{\mathrm{i}\pi } = -\frac{\sigma {\cal A}}{\pi^2}\check{G}\otimes \frac{d}{dz}\check{G}, \end{eqnarray} where $z$ is the coordinate along the interface normal ({\it away from} the interface), $n$ is a scattering channel index (${\cal N}$ channels, spin-degenerate channels count as one), $\sigma=e^2N_{{\rm F}} D$ refers to the conductivity per spin, ${\cal A}$ is the surface area of the contact, and ${\cal G}_q$ is the quantum of conductance, ${\cal G}_q= e^2/h$. The number of scattering channels is expressed in terms of the projection of the Fermi surfaces on the contact plane, $A_{F,z}$, by ${\cal N}= A_{F,z}{\cal A}/(2\pi)^2$. For isotropic Fermi surfaces $A_{F,z}=\pi k_F^2$. In general, \begin{eqnarray} \frac{1}{\cal A} \sum_{n=1}^{\cal N} \ldots = \int_{A_{F,z}} \frac{d^2 k_{||}}{(2\pi)^2} \ldots , \end{eqnarray} where $\hbar {\bf k}_{||}$ is the momentum component parallel to the interface. \section{Special Cases} \subsection{Spin-scalar and channel-diagonal case} The transition to the diffusive Green functions is trivial for the case of $\hat S=\hat 1$, as then $\check g_{0}^{i}=\check g_{0}^{o}=\check G$. 
Starting from Eq.~\eqref{comm1} in conjunction with \eqref{def01}, for a spin-scalar and channel-diagonal matrix $\hat \tau_{nn}$, and with the notation $\check G = -\mathrm{i}\pi \check {\bf G}$, we obtain \numparts \begin{eqnarray} \frac{2\sum_n\check{\cal I}_{nn}}{\mathrm{i}\pi }= \sum_n\frac{ 4{\cal T}_n[\underline{\check{\bf G}},\check{\bf G}] }{ 4 +{\cal T}_n\left(\{\underline{\check{\bf G}},\check{\bf G}\}-2\right) } =\frac{2\sigma {\cal A}}{{\cal G}_q} \check{\bf G} \otimes \frac{d}{dz}\check{\bf G} \end{eqnarray} with $\sigma=e^2N_FD$ and \begin{eqnarray} {\cal T}_n=\frac{4\pi^2 |\tau_{nn}|^2}{\left(1+\pi^2|\tau_{nn}|^2\right)^2} . \end{eqnarray} \endnumparts Here ${\cal T}_n$ is the transmission probability of scattering channel $n$. This reproduces Nazarov's boundary condition \cite{naz,Kopu04}. \subsection{Case for interface between superconductor and ferromagnetic insulator} For the case of zero transmission, $\check \tau\equiv \check 0$, a closed solution can be found if we assume that a common spin-diagonal basis exists for all reflection channels. For a channel-diagonal scattering matrix we write $\check S_{nn}=e^{\mathrm{i}\varphi_n} e^{\mathrm{i}\frac{\vartheta_n}{2}\check \kappa }$ with $\check \kappa = \mbox{diag}\left\{ \vec{m}\vec{\sigma},\vec{m}\vec{\sigma}^\ast \right\}$, where $\vec{m}^2=1$ (leading to $\check \kappa^2=1$). In this case we have $\check g^{i,o}=\check g^{i,o}_0$. We use Eq.
\eqref{gomgi}, which straightforwardly leads to \begin{eqnarray} \frac{2\sum_n\check {\cal I}_{nn}}{\mathrm{i}\pi}&= & \sum_n \left[ \check 1-\frac{\mathrm{i}\sin \vartheta_n}{4} (\check {\bf G} \check \kappa \check {\bf G}-\check \kappa ) +\frac{\sin^2 \frac{\vartheta_n}{2}}{2} (\check {\bf G} \check \kappa \check {\bf G} \check \kappa - \check 1) \right]^{-1} \nonumber \\ &&\qquad \times \left\{ -\mathrm{i}\sin \vartheta_n [\check \kappa,\check {\bf G}] + \sin^2 \frac{\vartheta_n}{2}[\check \kappa \check {\bf G} \check \kappa, \check {\bf G}] \right\} \nonumber \\ &&\times \left[ \check 1-\frac{\mathrm{i}\sin \vartheta_n}{4} (\check {\bf G} \check \kappa \check {\bf G}-\check \kappa ) +\frac{\sin^2 \frac{\vartheta_n}{2}}{2} (\check \kappa \check {\bf G} \check \kappa \check {\bf G} - \check 1) \right]^{-1} \label{FI} \end{eqnarray} (recall that $\check {\bf G}^2 =\check 1$). Note that $\varphi_n$ drops out; only the spin-mixing angle $\vartheta_n$ matters. Eq. \eqref{FI} generalizes the results of Ref. \cite{cottet} to arbitrary spin-dependent reflection phases. Further below we will give a physical interpretation of the leading-order terms arising in an expansion for small $\vartheta_n$. \subsection{Exact series expansions} \label{series} We now provide explicit series expansions for all quantities which will be useful for deriving formulas for various limiting cases. We start by writing the scattering matrix as $\check S=e^{\mathrm{i}\check K}$ with Hermitian $\check K$ due to unitarity of $\check S$, \ie $\check K=\check K^\dagger $.
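The series expansions of this subsection rest on the Hadamard-lemma expansion of $e^{-\mathrm{i}\check K}\check G e^{\mathrm{i}\check K}$ in nested commutators. A quick numerical check of this expansion (Python/NumPy; the dimension, seed, and small scale of $\check K$ are illustrative assumptions of this sketch):

```python
import numpy as np

d = 4
rng = np.random.default_rng(5)
a = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
K = 0.1 * (a + a.conj().T)  # small Hermitian K
G = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))

w, v = np.linalg.eigh(K)
U = (v * np.exp(1j * w)) @ v.conj().T  # e^{iK}
exact = U.conj().T @ G @ U             # e^{-iK} G e^{iK}

# sum over m of (-i)^m / m! times the m-fold nested commutator [K, ... [K, G]]
series = np.zeros_like(G)
nested = G.copy()
coeff = 1.0 + 0.0j
for m in range(20):
    series += coeff * nested
    nested = K @ nested - nested @ K  # apply [K, .] once more
    coeff *= -1j / (m + 1)

assert np.allclose(series, exact)
```

For a small prefactor of $\check K$ the series converges rapidly, which is what makes the truncations used below controlled.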
Then we use an expansion formula for Lie brackets in order to obtain the series expansion \begin{eqnarray} \label{Lie} \check S^\dagger \check G \check S = e^{-\mathrm{i}\check K} \check G e^{\mathrm{i}\check K} = \sum_{m=0}^\infty \frac{(-\mathrm{i})^m}{m!} \left[ \check K \stackrel{m}{,} \check G\right] \end{eqnarray} with the definitions $\left[ \check K \stackrel{m}{,} \check G\right] = \left[ \check K , \left[ \check K\stackrel{m-1}{,} \check G\right]\right]$ and $\left[ \check K \stackrel{0}{,} \check G\right]=\check G$. With this we obtain from Eq. \eqref{def1} \begin{eqnarray} \check G' &=&\frac{1}{2\pi^2} \sum_{m=1}^\infty \frac{(-\mathrm{i})^m}{m!} \left[ \check K \stackrel{m}{,} \check G\right] , \qquad \check G'' =\frac{1}{2\pi^2} \sum_{m=1}^\infty \frac{\mathrm{i}^m}{m!} \left[ \check K \stackrel{m}{,} \check G\right] , \end{eqnarray} which are very useful if $\check K$ has a small pre-factor. Note also the identity $\check G \otimes \left[ \check K , \check G\right] \otimes \check G= \pi^2 \left[ \check K , \check G\right]$. Furthermore, from Eqs. \eqref{gid1}-\eqref{god1} we find \numparts \begin{eqnarray} \check g_0^i &=& \check G+ (\check G+\mathrm{i}\pi \check 1) \otimes \sum_{l=1}^\infty (\check G' \otimes \check G)^l \\ \check g_0^o &=& \check G+ (\check G-\mathrm{i}\pi \check 1) \otimes \sum_{l=1}^\infty (\check G''\otimes \check G)^l. \end{eqnarray} \endnumparts From Eq. \eqref{comm1}, and using Eqs. 
\eqref{aux}, \eqref{tsym}, we derive \numparts \begin{eqnarray} \left[ \check t^o,\check g^o_0\right]_\otimes &=& \sum_{k,n=0}^\infty (\check g_1^o \otimes \check g_0^o)^k \otimes\left[ \check g_1^o, \check g_0^o\right]_\otimes \otimes (\check g_0^o \otimes \check g_1^o)^n,\quad \\ \left[ \check t^i,\check g^i_0\right]_\otimes &=& \sum_{k,n=0}^\infty (\check g_1^i \otimes \check g_0^i)^k \otimes\left[ \check g_1^i, \check g_0^i\right]_\otimes \otimes (\check g_0^i \otimes \check g_1^i)^n, \end{eqnarray} \endnumparts which is useful if the transmission amplitudes $\check \tau $ entering into $\check g_1^{i,o} $ are small. Finally, we obtain from Eqs. \eqref{comm} and \eqref{matrixcurrent2} \begin{eqnarray} \check{\cal I}'=-2\pi \mathrm{i} \left[ \check t^o,\check g^o_0\right]_\otimes,\quad \check {\cal I}'' &=& \sum_{m=1}^\infty \frac{\mathrm{i}^m}{m!} \left[ \check K \stackrel{m}{,} \check g^i\right] . \end{eqnarray} Here, $\check g^i$ is obtained from \begin{eqnarray} \label{seriesgi} \check g^i+\mathrm{i}\pi \check 1= (\check G+\mathrm{i}\pi \check 1) \otimes \sum_{l=0}^\infty (\check G'\otimes \check G)^l \otimes \left(\check 1+\left[ \check t^i,\check g_0^i\right]_\otimes\right) . \quad \end{eqnarray} \subsection{Boundary condition for spin-polarized surface to third order in spin-mixing angles} We first treat the case when $\check t^{i,o}\equiv \check 0$, for example the case where one side of the junction is a ferromagnetic insulator (FI). Then \begin{eqnarray} \label{expansion} \check{\cal I} &=& \sum_{m=1}^\infty \frac{\mathrm{i}^{m}}{m!} \left[ \check K \stackrel{m}{,} \check G \right] +\sum_{m,l=1}^\infty \frac{\mathrm{i}^{m}}{m!} \left[ \check K \stackrel{m}{,} (\check G+\mathrm{i}\pi \check 1) \otimes (\check G' \otimes \check G)^l \right] . 
\end{eqnarray} To third order we have $\check{\cal I}=\check{\cal I}^{(1)} +\check{\cal I}^{(2)}+\check{\cal I}^{(3)}$, and the derivation in \ref{thirdorder} leads to \numparts \begin{eqnarray} \label{I3} &&\check{\cal I}^{(1)} = \mathrm{i}\left[ \check K , \check G\right],\qquad \check{\cal I}^{(2)} = -\frac{\mathrm{i}}{2\pi} \left[ \check K \check G \check K ,\check G \right]_\otimes \quad \\ \label{three} &&\check{\cal I}^{(3)} = -\frac{\mathrm{i}}{24}\left[ \check K \stackrel{3}{,} \check G\right] -\frac{\mathrm{i}}{8\pi^2 }\left[ \check K , \check G \otimes \left[ \check K \stackrel{2}{,} \check G\right] \otimes \check G \right] . \qquad \end{eqnarray} \endnumparts For the special case of channel diagonal $\check K_{nn}=\frac{\vartheta_n}{2} \check \kappa $ with $\check \kappa^2=\check 1$, which follows also from directly expanding Eq. \eqref{FI}, we reproduce the results from Ref. \cite{cottet} ($\check G=-i\pi \check{\bf G}$), \numparts \begin{eqnarray} &&\frac{2\sum_n\check{\cal I}_{nn}^{(1)}}{\mathrm{i}\pi} = -\mathrm{i}\left(\mbox{$\sum_n$}\vartheta_n\right) \left[\check \kappa,\check{\bf G}\right] ,\quad \frac{2\sum_n\check{\cal I}_{nn}^{(2)}}{\mathrm{i}\pi} = \frac{\sum_n \vartheta_n^2}{4} \left[\check \kappa\check {\bf G}\check \kappa ,\check {\bf G} \right]_\otimes \\ &&\frac{2\sum_n\check{\cal I}_{nn}^{(3)}}{\mathrm{i}\pi} = -\mathrm{i} \frac{\sum_n\vartheta_n^3}{16} \left(\frac{1}{3} \left[\check \kappa,\check {\bf G}\right] - \left[\check \kappa\check {\bf G}\check \kappa \otimes \check {\bf G}\check \kappa, \check {\bf G} \right]_\otimes \right) . \quad \end{eqnarray} \endnumparts Note that the first order term $\sim[\check \kappa,\check {\bf G}]$ accounts for the effective exchange field induced inside the superconductor by the spin-mixing, whereas the term $\sim[\check \kappa \check {\bf G} \check \kappa,\check {\bf G}]$ produces a pair breaking effect similar to that of paramagnetic impurities \cite{Abrikosov60}. 
This second term occurs only at second order in $\vartheta_n$ because it requires multiple scattering at the S/FI interface, which together with random scattering in the diffusive superconductor leads to a magnetic disorder effect. \subsection{Boundary condition for spin-polarized interface to second order in spin-mixing angles and transmission probability} We now allow for finite transmission, and concentrate on the matrix current to second order in the quantities $\check K$, $\underline{\check K}$, and $\check g^{i,o}_1$. We need to take care of the scattering phases during transmission events. For this, we define \begin{equation} \check \tau=\check S^{\frac{1}{2}} \check \tau_0 \underline{\check S}^{\frac{1}{2}} ,\quad \underline{\check \tau}= \underline{\check S}^{\frac{1}{2}} \underline{\check \tau}_0 \check S^{\frac{1}{2}} . \end{equation} We note that Eq. \eqref{tausymm}, or $\check \tau=\check S \underline{\check \tau}^\dagger \underline{\check S}$, results in \begin{equation} \check \tau_0=\underline{\check \tau}^\dagger_0. \end{equation} Thus $\check \tau_0$ and $\underline{\check \tau}_0$ are the appropriate transmission amplitudes, with the transmission spin-mixing phases removed. We further define \begin{equation} \check G_1\equiv \check \tau_0 \underline{\check G} \check \tau_0^\dagger .
\end{equation} We expand $\check \tau $ up to first order in $\check K$ and $\underline{\check K}$, \begin{equation} \check \tau = \check \tau_0+ \frac{\mathrm{i}}{2} \left( \check K \check \tau_0+ \check \tau_0 \underline{\check K} \right) + \ldots , \end{equation} and obtain $\check{\cal I}=\check{\cal I}^{(1)} +\check{\cal I}^{(2)}$ from a systematic expansion to second order in $\check K$, $\underline{\check K}$, and $\check G_1$, as shown in \ref{secondorder}, leading to one of the main results of this paper \numparts \begin{eqnarray} \label{mainBC1} \check{\cal I}^{(1)}&=&-2\pi \mathrm{i} \left[ \check G_1 ,\check G \right]_\otimes +\mathrm{i} \left[ \check K ,\check G \right] , \\ \label{mainBC2} \check{\cal I}^{(2)}&=&-2\pi \mathrm{i} \left[ \check G_1 \otimes \check G \otimes \check G_1 ,\check G \right]_\otimes -\frac{\mathrm{i}}{2\pi} \left[ \check K \check G\check K,\check G\right]_\otimes \nonumber \\ &&+\mathrm{i}\left[ \check G_1 \otimes \check G \check K+\check K \check G \otimes \check G_1 +\check \tau_0 \underline{\check G} \otimes \left[ \underline{\check K} ,\underline{\check G} \right] \check \tau_0^\dagger , \check G\right]_\otimes . \end{eqnarray} \endnumparts These relations generalize the results of Ref. \cite{cottet} for the case of arbitrary spin polarization, and are valid even when $\check K$, $\underline{\check K}$ and $\tau $ have different spin quantization axes, i.e. cannot be diagonalized simultaneously. 
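The phase factorization behind these expressions, $\check \tau=\check S^{1/2} \check \tau_0 \underline{\check S}^{1/2}$ together with the symmetry $\check \tau=\check S\, \underline{\check \tau}^\dagger \underline{\check S}$, implies $\check \tau_0=\underline{\check \tau}_0^\dagger$. As a sanity check, the following sketch (with random unitaries standing in for $\check S$ and $\underline{\check S}$, an arbitrary complex matrix for $\check \tau_0$, and arbitrary dimension and seed) verifies numerically that setting $\underline{\check \tau}_0=\check \tau_0^\dagger$ reproduces the symmetry relation:

```python
import numpy as np
from scipy.linalg import sqrtm

rng = np.random.default_rng(0)

def random_unitary(n):
    # QR decomposition of a random complex matrix gives a unitary
    q, r = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
    d = np.diag(r)
    return q * (d / np.abs(d))  # fix column phases

n = 4
S, Su = random_unitary(n), random_unitary(n)   # stand-ins for S and underline-S
tau0 = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))  # bare amplitudes

tau = sqrtm(S) @ tau0 @ sqrtm(Su)              # tau = S^{1/2} tau0 Su^{1/2}
tau_u = sqrtm(Su) @ tau0.conj().T @ sqrtm(S)   # with tau_u0 = tau0^dagger

# the symmetry tau = S tau_u^dagger Su is recovered
assert np.allclose(tau, S @ tau_u.conj().T @ Su, atol=1e-8)
```

The check works because the principal square root of a unitary matrix is itself unitary, so $\check S\,(\check S^{1/2})^\dagger=\check S^{1/2}$.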
Using the notation $\check G = -\mathrm{i}\pi \check {\bf G} $ and $2\pi \check \tau_0 = \check T$, we can rewrite the result in leading order in the quantities $\check K$, $\underline{\check K}$, and the transmission probability ($\sim\check T\check T^\dagger$) as \numparts \begin{eqnarray} \label{mainBCa1} &&\frac{2\check{\cal I}^{(1)}}{\mathrm{i}\pi}= \left[ \check T\; \underline{\check {\bf G}} \; \check T^\dagger -2\mathrm{i}\check K ,\check {\bf G} \right]_\otimes , \end{eqnarray} and for the next to leading order \begin{eqnarray} \frac{2\check{\cal I}^{(2)}}{\mathrm{i}\pi}&=& -\frac{1}{4} \left[ \check T\; \underline{\check {\bf G}} \; \check T^\dagger \otimes \check {\bf G} \otimes \check T\; \underline{\check {\bf G}} \; \check T^\dagger ,\check {\bf G} \right]_\otimes +\left[ \check K \check {\bf G} \check K , \check {\bf G} \right]_\otimes \nonumber \\ \label{mainBCa2} & +& \frac{\mathrm{i}}{2} \left[ \check T\; \underline{\check {\bf G}} \; \check T^\dagger \otimes \check {\bf G} \check K+ \check K \check {\bf G} \otimes \check T\; \underline{\check {\bf G}} \; \check T^\dagger +\check T \underline{\check {\bf G} } \otimes \left[ \underline{\check K}, \underline{\check {\bf G} } \right] \check T^\dagger , \check {\bf G} \right]_\otimes . \end{eqnarray} \endnumparts These equations are still fully general with respect to the magnetic (spin) structure, and allow for channel off-diagonal scattering as well as different numbers of channels on the two sides of the interface. Note that $\check T$, $\check K$, and $\underline{\check K}$ are matrices in channel space, whereas $\check {\bf G}$ and $\underline{\check{\bf G}}$ are proportional to the unit matrix in channel space. Whereas $\check K$, and $\underline{\check K}$ are square matrices, $\check T$ in general can be a rectangular matrix (when the number of channels on the two sides of the interface differ). 
\subsection{Boundary conditions for channel-independent spin quantization direction} As an application, we assume next that each of the quantities $\check K$, $\underline{\check K}$, and $\check \tau $ can be spin-diagonalized simultaneously for all channels, with spin quantization directions $\vec{m}'$, $\underline{\vec{m}}'$, and $\vec{m}$ for $\check K$, $\underline{\check K}$, and $\check \tau $, respectively. We also use that $\check {\bf G}$ and $\underline{\check {\bf G}}$ are proportional to the unit matrix in channel space, as they are isotropic \cite{cottet}, and we assume that the number of channels on both sides of the interface is equal. We define \numparts \begin{eqnarray} &&\mathbb{T}_{0,nl}\; \check 1+\mathbb{T}_{1,nl} \; \vec{m}\cdot \vec{\check\sigma} =\check T_{nl} , \\ &&\varphi_{nn'} \; \check 1+\frac{1}{2}\vartheta_{nn'} \; \vec{m}'\cdot \vec{\check\sigma} = \check K_{nn'}, \quad \underline{\varphi}_{ll'}\; \check 1+\frac{1}{2}\underline{\vartheta}_{ll'} \; \underline{\vec{m}}' \cdot\vec{\check\sigma} = \underline{\check K}_{ll'}, \\ &&\vec{\check\sigma} = \vec{\hat\sigma}\check 1,\quad \vec{\hat\sigma}=\left( \begin{array}{cc} \vec{\sigma}&0\\0& \vec{\sigma}^\ast \end{array} \right)_{\! \rm ph}, \check \kappa \equiv \vec{m}\cdot \vec{\check\sigma}, \quad \check\kappa' \equiv \vec{m}' \cdot \vec{\check\sigma}, \quad \underline{\check\kappa}' \equiv \underline{\vec{m}}' \cdot\vec{\check\sigma} \end{eqnarray} \endnumparts with $\vec{m}^2=(\vec{m}')^2=(\underline{\vec{m}}')^2=1$, i.e. $\check \kappa^2=(\check \kappa')^2=(\underline{\check\kappa}')^2=\check 1$, and introduce the transmission probability ${\cal T}_{nl}$ and the spin polarization ${\cal P}_{nl}$ as \begin{eqnarray} &&{\cal T}_{nl}\left(\check 1+{\cal P}_{nl} \vec{m}\cdot\vec{\check\sigma} \right)= \check T_{nl} [\check T_{nl}]^\dagger.
\end{eqnarray} We write for $\mathbb{T}_{0,nl}$ and $\mathbb{T}_{1,nl}$, allowing for some spin-scalar phases $\psi_{nl}$, \begin{eqnarray} \label{Tfactors} \mathbb{T}_{0,nl}^2= \frac{{\cal T}_{nl}}{2} \left[ 1+\sqrt{1-{\cal P}_{nl}^2} \right]e^{2\mathrm{i}\psi_{nl}},\; \mathbb{T}_{1,nl}^2= \frac{{\cal T}_{nl}}{2} \left[ 1-\sqrt{1-{\cal P}_{nl}^2} \right]e^{2\mathrm{i}\psi_{nl}}. \end{eqnarray} We will average over all spin-scalar phases $\psi_{nl}$ of the transmission amplitudes, as there are usually many scattering channels in an area comparable with the superconducting coherence length squared. This averaging eliminates all terms in Eqs. \eqref{mainBCa1}-\eqref{mainBCa2} in which these scalar scattering phases do not cancel. For a magnetic system, in linear order in ${\cal T}_{nl}$ and $\vartheta_{nn'}$ we obtain \begin{eqnarray} I^{(1)}\equiv\frac{2{\cal G}_q\sum_n\check{\cal I}^{(1)}_{nn}}{\mathrm{i}\pi} &=&{\cal G}_q\mbox{$\sum_{nl}$} \left[(\mathbb{T}_{0,nl}\check 1+\mathbb{T}_{1,nl}\check{\kappa})\underline{\check{\bf G}} (\mathbb{T}_{0,nl}^\ast\check 1+\mathbb{T}_{1,nl}^\ast\check{\kappa}), \check{\bf G}\right] \nonumber \\ &&-{\cal G}_q\mbox{$\sum_{n}$} \mathrm{i} \vartheta_{nn} \left[\check \kappa', \check{\bf G}\right], \end{eqnarray} where ${\cal G}_q=e^2/h$ is the conductance quantum.
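To see how the parametrization \eqref{Tfactors} ties in with the definition of ${\cal T}_{nl}$ and ${\cal P}_{nl}$, one can check numerically, for a single channel pair $(n,l)$ and working in the $2\times 2$ spin block only (the particle-hole structure simply doubles this), that $\check T_{nl}[\check T_{nl}]^\dagger={\cal T}_{nl}(\check 1+{\cal P}_{nl}\,\vec m\cdot\vec{\check\sigma})$ holds independently of the spin-scalar phase $\psi_{nl}$. The numerical values below are arbitrary assumptions:

```python
import numpy as np

# Pauli matrices and 2x2 identity (spin block of one (n,l) channel pair)
sigma = np.array([[[0, 1], [1, 0]],
                  [[0, -1j], [1j, 0]],
                  [[1, 0], [0, -1]]])
I2 = np.eye(2)

Tcal, P, psi = 0.3, 0.8, 0.7                 # assumed T_nl, P_nl, psi_nl
m = np.array([1.0, 1.0, 1.0]) / np.sqrt(3)   # unit transmission axis m
kappa = np.einsum('i,ijk->jk', m, sigma)     # kappa = m . sigma, kappa^2 = 1

# Eq. (Tfactors): amplitudes carrying the spin-scalar phase psi
T0 = np.sqrt(Tcal / 2 * (1 + np.sqrt(1 - P**2))) * np.exp(1j * psi)
T1 = np.sqrt(Tcal / 2 * (1 - np.sqrt(1 - P**2))) * np.exp(1j * psi)
T = T0 * I2 + T1 * kappa                     # spin block of T_nl

# T T^dagger = Tcal (1 + P m.sigma); the phase psi drops out
assert np.allclose(T @ T.conj().T, Tcal * (I2 + P * kappa))
```

The identity follows from $|\mathbb{T}_0|^2+|\mathbb{T}_1|^2={\cal T}$ and $2\,{\rm Re}(\mathbb{T}_0\mathbb{T}_1^\ast)={\cal T}{\cal P}$, which is why only the combinations ${\cal T}_{nl}$ and ${\cal P}_{nl}$ survive the phase averaging.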
After multiplying out we obtain the set of boundary conditions \numparts \begin{eqnarray} 2I^{(1)}&=& \left[{\cal G}^0\underline{\check{\bf G}}+ {\cal G}^{\rm MR}\left\{\check{\kappa},\underline{\check{\bf G}}\right\} +{\cal G}^{1}\check{\kappa}\underline{\check{\bf G}}\check{\kappa}-\mathrm{i}{\cal G}^{\phi}_{}\check{\kappa}',\check{\bf G}\right]_\otimes \label{newBC} \end{eqnarray} with \begin{eqnarray} {\cal G}^{0} &=& {\cal G}_q \mbox{$\sum\nolimits_{nl}$} {\cal T}_{nl}\label{GT}\left( 1 + \sqrt{1-{\cal P}_{nl}^2}\right)\\ {\cal G}^{1} &=&{\cal G}_q \mbox{$\sum\nolimits_{nl}$} {\cal T}_{nl}\left( 1- \sqrt{1-{\cal P}_{nl}^2}\right)\label{GMR2}\\ {\cal G}^{\rm MR} &=&{\cal G}_q \mbox{$\sum\nolimits_{nl}$} {\cal T}_{nl}{\cal P}_{nl}\label{GMR}, \qquad {\cal G}^{\phi}_{} =2{\cal G}_q \mbox{$\sum\nolimits_{n}$} \vartheta_{nn}\label{Gfi} \end{eqnarray} \endnumparts For $\check\kappa=\check\kappa'$ and under the assumption of a channel-diagonal scattering matrix ($n=l$), this also provides the derivation of the boundary conditions used in Ref.~\cite{Machon1}.
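As an illustration of Eqs. \eqref{GT}-\eqref{Gfi}, the following sketch evaluates the four interface conductances from an assumed (hypothetical) set of channel-resolved transmission probabilities ${\cal T}_{nl}$, polarizations ${\cal P}_{nl}$, and spin-mixing angles $\vartheta_{nn}$, in units of ${\cal G}_q$. It also checks two simple consequences of the definitions: ${\cal G}^0+{\cal G}^1=2{\cal G}_q\sum_{nl}{\cal T}_{nl}$, the Cauchy-Schwarz bound $({\cal G}^{\rm MR})^2\leq {\cal G}^0{\cal G}^1$, and the coincidence of all three conductances in the fully polarized limit ${\cal P}_{nl}=1$:

```python
import numpy as np

Gq = 1.0  # conductance quantum e^2/h set to 1 (conductances in units of Gq)

# hypothetical channel-resolved interface data (2 channels per side)
Tnl = np.array([[0.20, 0.05],
                [0.05, 0.10]])        # transmission probabilities T_nl
Pnl = np.array([[0.9, 0.5],
                [0.5, 0.7]])          # spin polarizations P_nl
theta_nn = np.array([0.3, 0.1])       # diagonal spin-mixing angles vartheta_nn

root = np.sqrt(1.0 - Pnl**2)
G0 = Gq * np.sum(Tnl * (1 + root))    # G^0,   Eq. (GT)
G1 = Gq * np.sum(Tnl * (1 - root))    # G^1,   Eq. (GMR2)
GMR = Gq * np.sum(Tnl * Pnl)          # G^MR,  Eq. (GMR)
Gphi = 2 * Gq * np.sum(theta_nn)      # G^phi, Eq. (Gfi)

assert np.isclose(G0 + G1, 2 * Gq * Tnl.sum())   # sum rule
assert GMR**2 <= G0 * G1 + 1e-12                 # Cauchy-Schwarz bound

# fully polarized (half-metallic) limit P_nl = 1: G^0 = G^1 = G^MR
P1 = np.ones_like(Pnl)
assert np.isclose(Gq * np.sum(Tnl * (1 + np.sqrt(1 - P1**2))),
                  Gq * np.sum(Tnl * P1))
```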
We now proceed to the second order terms: \numparts \begin{eqnarray} \label{newBC2} 2I^{(2)}&=& -2I_4 +{\cal G}^{\phi}_2 \left[\check \kappa' \check{\bf G} \check \kappa' , \check{\bf G} \right]_\otimes +\mathrm{i}\left[ \check {\bf M}_{\chi,\underline{\chi}}^0 +\check {\bf M}_{\chi,\underline{\chi}}^1 +\check {\bf M}_{\chi,\underline{\chi}}^{\rm MR} , \check{\bf G} \right]_\otimes \\ \check {\bf M}_{\chi,\underline{\chi}}^0&=& {\cal G}_{\chi}^0\; \left(\underline{\check{\bf G}} \otimes \check{\bf G} \check \kappa' + \check \kappa' \check{\bf G}\otimes \underline{\check{\bf G}} \right) + {\cal G}_{\underline\chi}^0\; \underline{\check{\bf G}} \otimes \left[\underline{\check \kappa}',\underline{\check{\bf G}}\right] \nonumber \\ \check {\bf M}_{\chi,\underline{\chi}}^1&=& {\cal G}_{\chi}^1 \; \left(\check \kappa\underline{\check{\bf G}}\check \kappa \otimes \check{\bf G} \check \kappa' + \check \kappa' \check{\bf G}\otimes \check \kappa\underline{\check{\bf G}}\check \kappa \right) +{\cal G}_{\underline \chi}^1 \; \check \kappa \underline{\check{\bf G}} \otimes \left[\underline{\check \kappa}',\underline{\check{\bf G}}\right]\check \kappa \nonumber \\ \check {\bf M}_{\chi,\underline{\chi}}^{\rm MR}&=& {\cal G}_{\chi}^{\rm MR} \left(\left\{\check \kappa,\underline{\check{\bf G}} \right\} \otimes \check{\bf G} \check \kappa' + \check \kappa' \check{\bf G}\otimes \left\{ \check \kappa,\underline{\check{\bf G}}\right\} \right) +{\cal G}_{\underline \chi}^{\rm MR} \left\{ \check \kappa,\underline{\check{\bf G}} \otimes \left[\underline{\check \kappa}', \underline{\check{\bf G}}\right]\right\} \nonumber \end{eqnarray} where $I_4$ denotes a cumbersome expression in fourth order of the transmission amplitudes, which we do not write down here explicitly (see \ref{I4}). 
We have used the abbreviations \begin{eqnarray} \label{Gchi1} {\cal G}_\chi^{0} &=& \frac{1}{4}{\cal G}_q \mbox{$\sum\nolimits_{nl}$} \vartheta_{nn}{\cal T}_{nl}\left( 1 + \sqrt{1-{\cal P}_{nl}^2}\right)\\ {\cal G}_\chi^{1} & =&\frac{1}{4} {\cal G}_q \mbox{$\sum\nolimits_{nl}$} \vartheta_{nn} {\cal T}_{nl}\left( 1- \sqrt{1-{\cal P}_{nl}^2}\right)\\ {\cal G}_\chi^{\rm MR} &=&\frac{1}{4} {\cal G}_q \mbox{$\sum\nolimits_{nl}$} \vartheta_{nn}{\cal T}_{nl}{\cal P}_{nl} , \qquad {\cal G}^{\phi}_2 =\frac{1}{2}{\cal G}_q \mbox{$\sum\nolimits_{nn'}$} \vartheta_{nn'}^2 \label{Gchi2} \end{eqnarray} \endnumparts and ${\cal G}_{\underline\chi}^0$, ${\cal G}_{\underline\chi}^1$, ${\cal G}_{\underline\chi}^{\rm MR}$ are defined as ${\cal G}_\chi^{0}$, ${\cal G}_\chi^{1}$, and $ {\cal G}_{\chi}^{\rm MR} $ with $\vartheta_{nn}$ replaced by $\underline{\vartheta}_{ll}$. Note that $\varphi_{nn'}$ and $\underline{\varphi}_{ll'}$ do not appear in these expressions, in accordance with the intuitive notion that scalar scattering phases should drop out in the quasiclassical limit, which operates with envelope functions only. The case of purely channel-conserving scattering (the channel-diagonal problem) follows by keeping only the terms with $n=l$ in Eqs. \eqref{GT}-\eqref{Gfi} and \eqref{Gchi1}-\eqref{Gchi2}. All other formulas, Eqs. \eqref{newBC} and \eqref{newBC2}, remain unchanged. This case is treated in Ref. \cite{cottet} to linear order in ${\cal P}_{nn}$, and our formulas reduce to these results in this limit. Note that in this case all spin-scalar phases cancel automatically and no averaging procedure over these phases is necessary.
\section{Application for diffusive superconductor/half metal heterostructure} The problem of a superconductor in proximity contact with a half-metallic ferromagnet has been studied within the frameworks of Eilenberger equations \cite{Eschrig04,Eschrig03,eschrig_nphys_08,Kopu04,Eschrig09,Galaktionov08,Lofwander10,Grein10}, Bogoliubov-de Gennes equations \cite{Halterman09, Linder10, Kupferschmidt11,Wilken12}, recursive Green function methods \cite{Asano07}, circuit theory \cite{Braude07}, within a magnon assisted tunneling model \cite{Takahashi07}, and in the quantum limit \cite{Beri09}. Various experiments on superconductor/half-metal devices have been reported, both for layered systems involving high-temperature superconductors \cite{Sefrioui03,Pena04,Kalcheim12,Visani12} and in diffusive structures involving conventional superconductors \cite{Keizer06,Anwar10,Sprungmann10,Anwar11,Anwar12,Yates13,Singh15}. An important consequence of the new boundary conditions in Eq.~\eqref{newBC} is that half-metals can now be incorporated in the Usadel equation, appropriate to describe the second class of experiments mentioned above, whereas there previously existed no suitable boundary conditions to do so. Consider first a superconductor/half-metal bilayer with the interface located at $x=0$ (see Fig. \ref{fig:model}). \begin{figure}[t!] \centering{ \includegraphics[width=0.6\linewidth]{model00.pdf} } \caption{ A superconductor/half-metal bilayer with a magnetically inhomogeneous barrier region. The magnetization direction associated with the spin-dependent phase-shifts occurring on the superconducting side (described by the matrix $\underline{\check \kappa}'$) does not in general align with the magnetization direction associated with the transmission of quasiparticles across the barrier (described by the matrix $\check \kappa$). } \label{fig:model} \end{figure} The superconductor is assumed to have a thickness well exceeding the superconducting coherence length. 
Our expansion parameters are the spin-dependent reflection phase shifts at the superconducting side of the interface, $\underline{\vartheta}_{ll'}$, and the tunneling probabilities ${\cal T}_{nl}$. For calculating triplet components in the half-metal it is sufficient to expand the solution for the Green function in the superconductor up to linear order, and the solution for the Green function in the half-metal up to quadratic order. The zeroth order term in the superconductor is pure spin-singlet, and the first order term pure spin-triplet. Thus, up to and including first order we can assume a bulk singlet order parameter, unaffected by the interface scattering (corrections to the singlet order parameter arise only at second order in $\underline{\vartheta}_{ll'}$ and ${\cal T}_{nl}$). For future reference, we define the quantities $c\equiv\cosh(\nu)=-\mathrm{i}\frac{E}{\Omega }$, $s\equiv\sinh(\nu)=\mathrm{i}\frac{|\Delta|}{\Omega }$ with $\nu= \mbox{atanh}(|\Delta|/E)$, $\Omega=\sqrt{|\Delta|^2-E^2}$, and denote the superconducting phase by $\theta$. We find for the triplet component $\underline{F}_{t0}$ in the superconductor \begin{equation} \underline{F}_{t0}(x)= \mathrm{i} \frac{\underline{\cal G}^\phi cs}{\sigma_{\rm SC}{\cal A}q} e^{\mathrm{i}\theta} e^{-q|x|}(\underline{\vec{m}}'\cdot \vec{\sigma}) \mathrm{i}\sigma_y \end{equation} with the normal-state conductivity $\sigma_{\rm SC}=2e^2N_{\rm SC}D_{\rm SC}$ in the superconductor ($N_{\rm SC}$ and $D_{\rm SC}$ are the normal-state density of states per spin projection at the Fermi level and the diffusion constant, respectively), contact area ${\cal A}$, and $q=\sqrt{2\Omega/\hbar D_{\rm SC}}$. In the half-metal (width $d$), only spin-$\uparrow$ particles have a non-zero density of states at the Fermi level.
In the spirit of the quasiclassical theory of superconductivity, a strong exchange field is incorporated not in the transport equation, but directly in the band structure, which is integrated out at the quasiclassical level \cite{grein_prl_09,Grein10}, leaving only parameters such as the diffusion constant and the normal-state density of states at the Fermi level for each itinerant spin band. For transport in a half-metallic ferromagnet, this means that one need only include a single spin band with diffusion constant $D_{\rm HM}$ in the Usadel equation. Thus, only the elements $G_{\uparrow\uparrow}$ and $F_{\uparrow\uparrow}$ exist in the Green function $\check{{\bf G}}$ of the half-metal. As we expand in the tunneling probability, we can (for energies well exceeding the Thouless energy $\hbar D_{\rm HM}/d^2$ of the half-metal) use the linearized Usadel equation, \begin{equation} \hbar D_{\rm HM} \partial_x^2 F_{\uparrow\uparrow}+ 2\mathrm{i} E F_{\uparrow\uparrow} = 0. \end{equation} Since there is only one anomalous Green function in the half-metal, we omit the spin indices for brevity of notation and define $F\equiv F_{\uparrow\uparrow}$. The general solution is $F(x)=Ae^{\mathrm{i} k x}+Be^{-\mathrm{i} kx}$ with $A,B$ being complex coefficients to be determined from the boundary conditions, and $k=\sqrt{2\mathrm{i} E/\hbar D_{\rm HM}}$. At the vacuum edge of the half-metal $(x=d)$, we have $\partial_x F=0$. At the interface between the superconductor and the half-metal, the boundary conditions for $F$ from the half-metallic side are obtained from Eqs. \eqref{newBC}-\eqref{Gchi2} with ${\cal P}_{nl} = 1$. Note that for ${\cal P}_{nl} = 1$, we have ${\cal G}_{\underline\chi}^0={\cal G}_{\underline\chi}^1={\cal G}_{\underline\chi}^{\rm MR} \equiv {\cal G}_{\underline\chi}$ as well as ${\cal G}^0={\cal G}^1={\cal G}^{\rm MR} $.
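The linearized Usadel equation above admits the plane-wave solution quoted in the text. As a quick numerical check (in assumed units $\hbar=D_{\rm HM}=1$, with arbitrary coefficients $A$, $B$ and energy $E$), the residual of the equation vanishes to discretization accuracy:

```python
import numpy as np

hbar = D_HM = 1.0                      # assumed units
E = 0.5                                # quasiparticle energy (arbitrary value)
k = np.sqrt(2j * E / (hbar * D_HM))    # k = sqrt(2iE / hbar D_HM), principal branch
A, B = 0.7 + 0.2j, -0.1 + 0.4j         # arbitrary complex coefficients

x = np.linspace(0.0, 1.0, 2001)
F = A * np.exp(1j * k * x) + B * np.exp(-1j * k * x)

# residual of the linearized Usadel equation: hbar D F'' + 2iE F = 0
d2F = np.gradient(np.gradient(F, x), x)
residual = hbar * D_HM * d2F + 2j * E * F
assert np.max(np.abs(residual[2:-2])) < 1e-3 * np.max(np.abs(F))
```

Analytically, $F''=-k^2F$ and $\hbar D_{\rm HM}(-k^2)F=-2\mathrm{i}EF$, so the residual is exactly zero; the finite-difference estimate confirms this to $O(h^2)$.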
We find that in order to obtain a non-vanishing proximity effect, it is necessary that the magnetization directions associated with transmission across the barrier ($\check{\kappa}$) and with the spin-dependent phase-shifts picked up on the superconducting side of the interface ($\underline{\check \kappa}'$) differ. We set $\check{\kappa} = \check{\sigma}_z$ since the barrier magnetization determining the transmission properties is expected to be dominated by the half-metal magnetization, which points in the $z$-direction. The boundary condition for $F$ at $x=0$ reads: \begin{equation} \sigma_{\rm HM}{\cal A}\partial_x F = 2\mathrm{i} \; {\cal G}_{\underline{\vartheta}}\; cs e^{\mathrm{i}\theta}(\underline{m}'_x-\mathrm{i} \underline{m}'_y), \quad {\cal G}_{\underline{\vartheta}}= 2\mathcal{G}_{\underline\chi} + \frac{\underline{{\cal G}}^\phi {\cal G}^0 }{\sigma_{\rm SC}{\cal A}q} \end{equation} with the normal-state conductivity $\sigma_{\rm HM}=e^2N_{\rm HM}D_{\rm HM}$ in the half-metal ($N_{\rm HM}$ is the normal-state density of states at the Fermi level), and the conductance ${\cal G}_{\underline{\vartheta}}$ contains two terms: $2\mathcal{G}_{\underline\chi}$, which is proportional to $\sum_{nl} \underline{\vartheta}_{ll} {\cal T}_{nl}$, and a second term containing $\underline{{\cal G}}^\phi {\cal G}^0$, which is proportional to $(\sum_l \underline\vartheta_{ll}) (\sum_{nl'} {\cal T}_{nl'})$. Moreover, $\underline{m}'_x$ and $\underline{m}'_y$ are the normalized components of a possibly misaligned barrier moment compared to the magnetization of the half-metal. We have taken this into account by writing: \begin{equation} \underline{\hat \kappa}' = \underline{m}'_x \left(\begin{array}{cc} \sigma_x & 0 \\ 0 & \sigma_x\\ \end{array}\right)_{\! \rm ph} + \underline{m}'_y \left(\begin{array}{cc} \sigma_y & 0 \\ 0 & \sigma_y^\ast\\ \end{array}\right)_{\! \rm ph} + \underline{m}'_z \left(\begin{array}{cc} \sigma_z & 0 \\ 0 & \sigma_z\\ \end{array}\right)_{\!
\rm ph} \end{equation} Inserting the general solution for $F$ into the boundary conditions, one arrives at the final result for the proximity-induced superconducting correlations $F$ in the half-metal: \begin{equation} F(x) = -\frac{2\cosh[\mathrm{i} k(x-d)]}{\sinh(\mathrm{i} kd)} \frac{{\cal G}_{\underline{\vartheta}} cs}{\sigma_{\rm HM}{\cal A}k} e^{\mathrm{i}\theta}(\underline{m}'_x-\mathrm{i} \underline{m}'_y). \end{equation} This is the first time the Usadel equation has been used to describe the proximity effect in a superconductor/half-metal structure. Several observations can be made from the above expression. For small $E$ the energy factors $c \propto E$ in the numerator and $k^2\propto E$ in the denominator cancel, such that the proximity effect, if present, persists even at $E=0$. The proximity effect is non-zero only if spin-dependent scattering phases are present at the superconducting side of the interface and, at the same time, their quantization axis $\underline{\check\kappa}'$ is misaligned with that of the transmission amplitudes, $\check\kappa$. The reason for this is that phase-shifts on the half-metallic side are irrelevant on the quasiclassical level, because they are spin-scalar (only spin-$\uparrow$ particles have a finite density of states there). On the other hand, the phase-shifts $\underline\vartheta_{nn}$ on the superconducting side have two consequences: they are responsible for a $\vec{S}\cdot \underline{\vec{m}}'=0$ spin-triplet component on that side of the interface (where $\vec{S}$ is the spin vector of the Cooper pair), and they also affect the transmission amplitudes. As a consequence, during transmission this $\vec{S}\cdot \underline{\vec{m}}'=0$ component can be rotated into the $S_z=\pm 1$ spin-triplet components, which are allowed to exist in the half-metal, provided spin-flip processes exist at the interface (e.g. due to misaligned interface moments).
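One can verify numerically that the quoted $F(x)$ solves the boundary-value problem: it satisfies the vacuum condition $\partial_x F(d)=0$ and the interface condition $\sigma_{\rm HM}{\cal A}\,\partial_x F(0)=2\mathrm{i}\,{\cal G}_{\underline\vartheta}\,cs\,e^{\mathrm{i}\theta}(\underline m'_x-\mathrm{i}\underline m'_y)$. In the sketch below all material prefactors are lumped into assumed numerical values (units with $\hbar=D_{\rm HM}=1$):

```python
import numpy as np

# assumed numerical values; all prefactors hypothetical
hbar = D_HM = sigmaA = 1.0   # sigmaA stands for sigma_HM * A
E, d = 0.5, 2.0              # energy and half-metal width
G_theta = 0.2                # conductance G_underline-vartheta
M = 0.3 + 0.1j               # lumped factor c s e^{i theta} (m'_x - i m'_y)

k = np.sqrt(2j * E / (hbar * D_HM))

def F(x):
    """Proximity amplitude F(x) quoted in the text."""
    return (-2 * np.cosh(1j * k * (x - d)) / np.sinh(1j * k * d)
            * G_theta * M / (sigmaA * k))

def dF(x, h=1e-6):
    # central finite difference for dF/dx
    return (F(x + h) - F(x - h)) / (2 * h)

assert abs(dF(d)) < 1e-6 * abs(F(0))                  # vacuum edge: F'(d) = 0
assert np.isclose(sigmaA * dF(0), 2j * G_theta * M)   # interface condition
```

The checks mirror the analytic derivative $\partial_xF=-2\mathrm{i}k\sinh[\mathrm{i}k(x-d)]\,{\cal G}_{\underline\vartheta}M/(\sigma_{\rm HM}{\cal A}k\sinh(\mathrm{i}kd))$, which vanishes at $x=d$ and reduces to $2\mathrm{i}{\cal G}_{\underline\vartheta}M/\sigma_{\rm HM}{\cal A}$ at $x=0$.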
This is exactly why $F$ depends on $\underline{m}'_x$ and $\underline{m}'_y$ while being independent of the barrier moment component $\underline{m}'_z$: only a barrier moment with a component perpendicular to the magnetization of the half-metal can create spin-flip processes which rotate the $\vec{S}\cdot\underline{\vec{m}}'=0$ component into the $S_z=\pm 1$ components, and thus $F$ vanishes if $\underline{m}'_x=\underline{m}'_y=0$. Another important observation that can be made from the above expression is that a misaligned barrier moment effectively renormalizes the superconducting phase. Using spherical coordinates, we may write $\underline{m}'_x-\mathrm{i} \underline{m}'_y = \sin\underline{\Theta}' e^{-\mathrm{i}\underline{\varphi }'}$, where $\underline{\varphi}'$ is the azimuthal angle describing the orientation of the barrier moment in the $xy$-plane. Thus, the effective phase becomes $\theta \to \theta-\underline{\varphi }'$. To see what consequence this has in terms of measurable quantities, we proceed to consider a Josephson junction with a half-metal by replacing the vacuum boundary at $x=d$ with another superconductor. Solving for the anomalous Green function $F$ in the same way as above, we may compute the supercurrent flowing through the system via the formula [see Eq. \eqref{diffcurrentstrong}]: \begin{equation} I = \frac{eN_{\rm HM}D_{\rm HM}\mathcal{A}}{8} \int^{\infty}_{-\infty} \mbox{d}E \mbox{Tr}\{ \hat \tau_3 (\check{\bf G}_{\rm HM}\partial_x\check{\bf G}_{\rm HM})^K\}. \end{equation} Here, Tr denotes a trace over 2$\times $2 Nambu-Gor'kov space.
After some calculations, one arrives at the result: \begin{equation} \label{I0} I = I_0 \sin\underline\Theta'_L\sin\underline\Theta'_R \sin(\theta_R-\theta_L + \underline{\varphi}'_L - \underline{\varphi}'_R), \end{equation} where $I_0$ is a lengthy expression depending on parameters such as the width $d$ of the half-metal and the temperature $T$ (and which vanishes unless ${\cal G}_{\underline{\vartheta}}^L$ and ${\cal G}_{\underline{\vartheta}}^R$ are non-zero). To be general, we have allowed the spin-dependent phase-shifts for each superconductor and the barrier moment at each interface to be different, indicated by the labels '$L$' and '$R$' for left and right. We find that $I_0$ is negative, giving rise to $\pi$-Josephson junction behavior when $\underline{\varphi}'_L = \underline{\varphi}'_R$. Expression \eqref{I0} is consistent with the ballistic-case result of Refs. \cite{eschrig_nphys_08,Eschrig09,Eschrig07} and shows how a finite supercurrent appears in a ring geometry even in the absence of any superconducting phase difference, $\theta_R-\theta_L=0$, if the barrier moments are misaligned in the plane perpendicular to the junction, $\underline{\varphi}'_L-\underline{\varphi}'_R\neq 0$. A similar effect was also reported via circuit theory for a diffusive system \cite{Braude07}, although there it arose not from spin-dependent scattering phase shifts but from certain ``leakage terms''. Within our formalism, we thus obtain a so-called $\phi_0$ Josephson junction behavior \cite{Josephson62,Geshkenbein86,Krive04,Buzdin08,Reynoso08} with $\phi_0=(\pi+\underline{\varphi}'_L - \underline{\varphi}'_R)\bmod 2\pi$. The above framework can be readily generalized to cover strongly spin-polarized ferromagnets, building on the same idea as Ref. \cite{grein_prl_09}.
For a sufficiently large spin-splitting, the $\uparrow$- and $\downarrow$-conduction bands can be treated separately in the bulk, with a separate Usadel equation for $F_{\uparrow\uparrow}$ and $F_{\downarrow\downarrow}$. These would then couple only via interface scattering, and the strong exchange field would enter only through the different normal-state densities of states $N_\uparrow $, $N_\downarrow $ and diffusion coefficients $D_\uparrow $, $D_\downarrow $ of the spin bands in each separate Usadel equation. \section{Conclusions} We have derived new sets of boundary conditions for the Usadel theory of superconductivity, appropriate for spin-polarized interfaces. We present a general solution of the problem appropriate for arbitrary transmission, spin polarization, and spin-dependent scattering phases. The explicit equations for the most general set of boundary conditions are given in Eqs. \eqref{def1}-\eqref{god}, \eqref{def01}-\eqref{go}, and \eqref{comm}-\eqref{gl_naz}. With the solution of this long-standing problem we anticipate a multitude of practical implementations in the future to tackle superconducting systems that involve strongly spin-polarized materials. We have applied the general set of equations to various special cases important for practical use. We derived boundary conditions for an interface between a superconductor and a ferromagnetic insulator valid for arbitrary spin-dependent scattering phases, Eq. \eqref{FI}. This extends previous work of Ref. \cite{cottet}, which was restricted to small scattering phases. Using an exact series expansion of the general set of boundary conditions, Eqs. \eqref{Lie}-\eqref{seriesgi}, we have obtained a perturbation series for the boundary conditions appropriate for such an interface, which allows for channel off-diagonal scattering and channel-dependent spin quantization axes, Eqs. \eqref{I3}-\eqref{three}.
For the tunneling limit, we have presented a new set of boundary conditions appropriate for arbitrary spin polarization, non-trivial spin texture across the interface, and allowing for channel off-diagonal scattering, Eqs. \eqref{mainBCa1}-\eqref{mainBCa2}. None of these three regimes of validity has been covered previously. As an application we then proceed to give a theoretical foundation of the boundary conditions used in Refs. \cite{Machon1,Machon2,Bergeret12}, Eqs. \eqref{newBC}-\eqref{Gfi}, which we have generalized for channel off-diagonal scattering and non-trivial spin texture across the interface. One central result of the application of our formalism is the extension of these relations to second order, including the important mixing terms between transmission and spin-dependent scattering phases. These terms, Eqs. \eqref{newBC2}-\eqref{Gchi2}, generalize the corresponding terms from Ref. \cite{cottet} to arbitrary spin polarization, possible nontrivial spin-texture across the interface, and channel off-diagonal scattering. We have demonstrated the application of the new set of boundary conditions by treating a diffusive superconductor/half-metal proximity junction and a diffusive superconductor/half-metal/superconductor Josephson junction. In the latter case we found a realization of a $\phi_0$-junction. We are confident that our boundary conditions will advance the field of superconducting spintronics considerably. \section*{Acknowledgments} ME acknowledges financial support from the Lars Onsager committee during his stay as Lars Onsager Professor at NTNU, as well as support from the UK EPSRC under grant reference EP/J010618/1. ME also benefited from fruitful discussions at the Aspen Center of Physics and within the Hubbard Theory Consortium. He thanks in particular Mikael Fogelstr\"om for valuable discussions. AC acknowledges financial support from the ANR-NanoQuartet [ANR12BS1000701] (France).
WB acknowledges useful discussions with Peter Machon and financial support from the DFG through BE 3803/03 and SPP 1538, and from the Baden-W\"urttemberg-Foundation through the Network of Competence ``Functional Nanostructures''. JL was supported by the ``Outstanding Academic Fellows'' programme at NTNU and Norwegian Research Council grants no. 205591 and no. 216700, and acknowledges support from the Onsager committee at NTNU and by the COST Action MP-1201 ``Novel Functionalities through Optimized Confinement of Condensate and Fields''.
\section{Introduction} Observationally, the power spectrum of {\rm H~{\sc i}~} intensity fluctuations in our Galaxy suggests the existence of scale-invariant structures in the {\rm H~{\sc i}~} density over length scales ranging from sub-parsec to a few hundred parsec \citep{1983A&A...122..282C, 1993MNRAS.262..327G}. These structures are understood \citep{2004ARA&A..42..211E} in terms of compressible fluid turbulence in the interstellar medium (ISM). In the present theoretical understanding of ISM dynamics, compressible fluid turbulence plays an important role in ISM evolution, energy transfer, star formation, etc. The source of energy input into the turbulence cascade is, however, debated, though it is mostly ascribed to supernova shocks acting as a large-scale energy input. Different techniques have been developed to measure the velocity spectrum of the turbulence and hence infer the energy involved in the process. These techniques, originally developed to probe the velocity structure of the Galaxy, include statistics of velocity centroids \citep{2009RMxAC..36...45E}, the velocity coordinate spectrum or VCS \citep{2009RMxAC..36...54P}, velocity channel analysis or VCA (\cite{2000ApJ...537..720L}, henceforth LP00), the spectral correlation function \citep{2001ApJ...555L..33P}, etc. We draw the reader's attention in particular to VCA, which has been applied \citep{2010ApJ...714.1398C, 2009ApJ...693.1074C, 2008ApJ...688.1021C, 1988A&A...191...10S} to observations of our Galaxy as well as of nearby dwarf galaxies such as the Large and Small Magellanic Clouds. It was found that the velocity fluctuations also follow a power law. The interested reader may consult LP00 for a complete description of VCA; here we outline the basic principle behind the analysis.
VCA aims to extract the velocity power spectrum by comparing the power spectrum of the intensity averaged over the entire velocity range of the observation with that averaged over a relatively small velocity range, smaller than the expected turbulent velocity dispersion. The differential rotation of our Galaxy provides a direct mapping between the velocity values in the position-position-velocity data cube and the line-of-sight distance to the observed cloud. This in turn lets us estimate the three-dimensional power spectrum of the {\rm H~{\sc i}~} density fluctuations in the Galaxy. However, turbulent velocity fluctuations change the velocity-to-distance mapping and hence also modify the intensity power spectrum. This is precisely what VCA exploits. \citet{2006MNRAS.372L..33B} have used a visibility based power spectrum estimator to measure the intensity fluctuation power spectrum of the nearby dwarf galaxy DDO~210. They infer that the density power spectrum has a slope of $-2.75$ over length scales of $80$ to $500$ pc. They applied VCA to their position-position-velocity data cube and inferred an upper limit on the slope of the velocity power spectrum. \citet{2008MNRAS.384L..34D, 2009MNRAS.398..887D, 2009MNRAS.397L..60D} have extended this study to several external dwarf and spiral galaxies and estimated the density power spectrum. Recently, \citet{2013NewA...19...89D, 2013MNRAS.436L..49D} estimated the power spectra of $18$ spiral galaxies from the THINGS \footnote{THINGS: The {\rm H~{\sc i}~} Nearby Galaxy Survey \citep{2008AJ....136.2563W}.} sample and found that the column density power spectra follow a power law over length scales ranging from $400$ pc to $16$ kpc across the entire sample. The slope of the power spectrum for most of these galaxies was found to lie between $-1.5$ and $-1.8$. The mechanism generating these large-scale structures is yet to be understood.
Measuring the statistics of the velocity fluctuations would help us understand the dynamical phenomena responsible for these structures. We note here two main differences between the position-position-velocity data cubes of {\rm H~{\sc i}~} emission observations in the Galaxy and in external spiral galaxies. The line of sight for {\rm H~{\sc i}~} emission observations in our Galaxy lies mostly along the plane of the disk, whereas the external galaxies observed are mostly face on, with the line of sight perpendicular to the disk. For observations in our Galaxy, different velocity slices of the data cube can be considered to be at different distances but in the same angular direction in the sky, whereas for external galaxies, different velocity slices of the data cube originate from different parts of the galaxy. This suggests that it would not be wise to directly use the results obtained in LP00 when interpreting observations of external galaxies. As there exists no direct position to velocity mapping for the external spiral galaxies, an analytical investigation of the effect of the turbulent velocity is not straightforward; we therefore resort to numerical methods here. In this letter we perform numerical simulations to assess how the intensity power spectrum is modified by the velocity fluctuations for spiral galaxies. Section 2 gives a brief outline of our approach and Section 3 describes the numerical investigation we have performed. Results and discussion are presented in Section 4. We conclude in Section 5. \section{Modelling {\rm H~{\sc i}~} emission from spiral galaxy} We adopt a coordinate system centred at the {\rm H~{\sc i}~} cloud in concern (or the external galaxy) with the line of sight direction aligned to the $z$ axis, such that \begin{equation} \vec{r}\ =\ (x, y, z)\ =\ (\vec{R}, z), \end{equation} where $\vec{R} = (x,y)$ is a two dimensional vector in the sky plane. 
In the small optical depth limit, the specific intensity of radiation \citep{2011piim.book.....D} with rest frequency $\nu_{0}$ originating from gas at $\vec{r}$ with temperature $T$ is given by \begin{equation} I (\vec{R}, v)\ =\ I_{0}\, \int d\, z\, n_{HI} (\vec{r})\, \phi (v), \end{equation} where $v = c ( \nu_{0} - \nu)/\nu_{0}$, $\nu$ is the frequency of observation, $I_{0} = \frac{3 h \nu_{0} A_{21}}{16 \pi}$ and $\phi(v)$ is the line shape function: \begin{equation} \phi(v) = \phi_{0}\, \exp \left [ - \frac{ \left ( v - v_{z} (\vec{r}) \right ) ^{2} } {2 \sigma^{2}(\vec{r}) } \right ]. \end{equation} Here $v_{z}(\vec{r})$ is the line of sight component of the velocity of the gas and $\sigma(\vec{r}) = \sqrt{\frac{ k_{b} T}{m_{HI}}}$ \footnote{$k_{b}$ : Boltzmann constant, $m_{HI}$ : mass of the hydrogen atom.} is the thermal velocity dispersion. In practice, the observed specific intensity is always averaged over a velocity width $\delta v$ around $v$, hence \begin{equation} I ^{obs}(\vec{R}, v, \delta v)\ =\ \frac{1}{\delta v} \int _{v - \delta v/2}^{v + \delta v/2} dv' I (\vec{R}, v'). \end{equation} Clearly, \begin{equation} \lim_{\delta v \to \infty}\ I ^{obs}(\vec{R}, v, \delta v)\ =\ I_{0}\, N_{HI}(\vec{R}), \end{equation} where $N_{HI}(\vec{R}) = \int dz\, n_{HI}(\vec{r})$ is the column density. In practice, as the emission from the galaxy falls off to zero beyond a certain velocity, say $\pm \Delta v$, it is sufficient to carry out the integration in the above equation over the range $-\Delta v$ to $\Delta v$. Here we assume that the galaxy has no overall motion. Compressible fluid turbulence in the ISM of galaxies induces scale invariant fluctuations in the density as well as in the velocity. The power spectrum of the column density fluctuations is given as \begin{equation} P_{N_{HI}}(K) \ =\ \int \ d\vec{X}\, e^{-i \vec{K} . 
\vec{X}} \langle N_{HI}(\vec{R}+ \vec{X}) N_{HI}(\vec{R})\rangle, \end{equation} where the averaging is performed over all possible values of $\vec{R}$ and over all directions, assuming homogeneity and isotropy of the random fluctuations. We define the power spectrum of the observed intensity fluctuations as \begin{equation} P(K , \delta v) =\int d\vec{X}\, e^{-i \vec{K} . \vec{X}} \langle I ^{obs}(\vec{R}+ \vec{X}, v, \delta v) I ^{obs}(\vec{R}, v, \delta v)\rangle. \end{equation} Here we have assumed homogeneity and isotropy of the intensity field and that the intensity power spectrum is independent of the velocity centroid $v$. Clearly, \begin{equation} \lim_{\delta v \to \Delta v} P(K , \delta v) \ \propto P_{N_{HI}}(K). \end{equation} This has been used extensively in the literature to estimate the {\rm H~{\sc i}~} column density power spectrum of our Galaxy \citep{1983A&A...122..282C,1993MNRAS.262..327G}, of external dwarf galaxies \citep{2006MNRAS.372L..33B, 2009MNRAS.398..887D} and of spiral galaxies \citep{2013NewA...19...89D}. The power spectra are found to follow power laws, indicating that turbulence is operational, hence \begin{equation} P_{N_{HI}}(K) \ =\ A_{N_{HI}} K^{\alpha}. \end{equation} The line of sight component of the velocity, $v_{z}(\vec{r})$, has a contribution from the systematic rotation of the galaxy, $v^{\Omega}(\vec{r})$, as well as from the random motion of the gas due to turbulence, $v^{T}(\vec{r})$, i.e., $v_{z}(\vec{r}) = v^{\Omega}(\vec{r}) + v^{T}(\vec{r})$. The power spectrum of the turbulent velocity component is also expected to follow a power law, \begin{equation} P_{v^{T}}(K) \ =\ A_{v_{T}} K^{\beta}. \end{equation} LP00 investigated the nature of $P(K, \delta v)$ in detail in order to estimate the modification of the {\rm H~{\sc i}~} power spectrum by turbulence. 
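Under the homogeneity and isotropy assumptions above, the power spectrum of eqn.~(7) can be estimated from a gridded intensity map by azimuthally averaging the squared Fourier amplitudes. A minimal sketch (the function name and binning choices are ours, not from the analysis itself):

```python
import numpy as np

def ps2d(field, nbins=20):
    """Azimuthally averaged 2D power spectrum of a real field on a square
    grid with unit spacing, assuming statistical homogeneity and isotropy."""
    n = field.shape[0]
    power = np.abs(np.fft.fft2(field)) ** 2 / n**2
    kx = np.fft.fftfreq(n) * 2.0 * np.pi                # angular wavenumbers
    kxx, kyy = np.meshgrid(kx, kx, indexing="ij")
    k = np.hypot(kxx, kyy)
    edges = np.linspace(0.0, k.max(), nbins + 1)
    idx = np.digitize(k.ravel(), edges)                 # bin index of each mode
    pk = np.array([power.ravel()[idx == i].mean() if np.any(idx == i) else 0.0
                   for i in range(1, nbins + 1)])
    return 0.5 * (edges[:-1] + edges[1:]), pk           # bin centres, P(K)
```

Applying such an estimator to a thick-slice map ($\delta v = \Delta v$) and to a thin-slice map ($\delta v \ll \sigma_T$) yields the two spectra whose comparison is the basis of VCA.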
They show that for the observed {\rm H~{\sc i}~} gas in our Galaxy, for $\alpha > -3$, \begin{equation} \lim_{\delta v < \sigma_{T}} P(K, \delta v) \propto K^{\alpha + \beta/2}, \end{equation} while, as we expect, \begin{equation} \lim_{\delta v \to \sigma_{T}} P(K, \delta v) \propto K^{\alpha} \propto P_{N_{HI}} (K). \end{equation} Hence, by estimating the power spectra in the two different limits above, one can infer the slope of the velocity power spectrum. This method, usually known as velocity channel analysis, has been used to estimate the velocity fluctuation power spectrum of {\rm H~{\sc i}~} in our Galaxy and in nearby dwarf galaxies. It is important to realise that the direct linear mapping between $v_{z}$ and $z$ exploited in VCA is rather different when we consider {\rm H~{\sc i}~} emission from an external galaxy. Considering a tilted ring model, in the latter case $v^{\Omega}(\vec{r})$ depends on the galacto-centric radius and on the position and inclination angles. Moreover, at a given $v$ with $\delta v < \Delta v$, only a part of the galaxy's disk is visible. In this letter we attempt to see how the turbulent velocity modifies the {\rm H~{\sc i}~} power spectrum for external spiral galaxies and investigate whether a procedure similar to VCA can be adopted to estimate the velocity fluctuation spectrum. \subsection{Simplifications} In order to simulate the {\rm H~{\sc i}~} emission from external galaxies we adopt the following simplifications. Note that in this work we are not interested in simulating all aspects of the {\rm H~{\sc i}~} emission from external galaxies; rather, we are interested in investigating the modification of the power spectrum due to the turbulent velocity, which justifies these simplifications. \begin{itemize} \item In the case of a spiral galaxy the average {\rm H~{\sc i}~} profile $W(\vec{r})$ varies with the galacto-centric radius as well as in the vertical direction. 
This leads to a modification of the {\rm H~{\sc i}~} power spectrum, as discussed in \citet{2009MNRAS.398..887D}. Here, we consider $W(\vec{r})$ to be independent of $\vec{r}$. It is to be noted that this simplification also means that we are assuming the galaxy's disk to be thick. We shall discuss the effect of this in the conclusions section. \item The systematic rotation of the galaxy, $v^{\Omega}(\vec{r})$, depends on the inclination and position angles as well as on the galacto-centric radius. To simplify matters we assume here that the position angle and inclination angle $(i)$ of the galaxy do not change with galacto-centric radius and adopt a flat rotation curve with tangential velocity $v_{0}$. In such a case, we can write \begin{equation} v^{\Omega}(\vec{r}) \ =\ \frac{v_{0} \sin(i)\, x}{\sqrt{x^2 + y^2 \cos^{2}(i)}}. \end{equation} \item The ISM is known to be in pressure equilibrium (see \cite{2003ApJ...587..278W} and references therein), with gas at more than one temperature coexisting in it. This means that, in principle, we need to consider different temperatures in different parts of the galaxy and hence a varying $\sigma(\vec{r})$. This would give rise to an additional fluctuation in the observed specific intensity. Here we assume that the gas across the galaxy is at a constant temperature and we adopt the temperature of the cold gas. \end{itemize} \section{Simulation} \begin{figure} \begin{center} \epsfig{file=Fig0.eps,width=2.8in} \end{center} \caption{Maps of (a) the column density, (b) the line of sight velocity, and (c) the observed specific intensity for $\delta v = \sigma_{T}/2$. Panel (d) shows the integrated line profile of the simulated galaxy with $\alpha = -2.5$, $\beta = -2.5$, $i = 10^{\circ}$.} \label{fig:maps} \end{figure} We divide the simulation volume into $NGrid^{3}$ individual grid cells (cubes), each representing an individual {\rm H~{\sc i}~} cloud with associated $n_{HI}$ and $v_{z}$. 
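The per-cell emission model of eqns.~(2)--(4), with the flat-rotation-curve projection of eqn.~(13), can be sketched as below. The array layout, the unit normalisation of $\phi_{0}$ and the sub-channel quadrature are our assumptions for illustration, not the actual simulation code:

```python
import numpy as np

def v_omega(x, y, v0, inc):
    """Line-of-sight component of a flat rotation curve (eqn 13); inc in radians."""
    r = np.sqrt(x**2 + y**2 * np.cos(inc)**2)
    return v0 * np.sin(inc) * x / np.maximum(r, 1e-12)

def i_obs(n_hi, v_z, sigma, v, dv, nv=512, I0=1.0):
    """Channel intensity I_obs(R, v, dv) (eqn 4): average over nv sub-channels
    of the z-integrated Gaussian line profiles (eqns 2-3). Axis 0 of the
    cubes n_hi and v_z is the line of sight z; phi is unit-normalised in v."""
    phi0 = 1.0 / (np.sqrt(2.0 * np.pi) * sigma)
    out = 0.0
    for vp in np.linspace(v - dv / 2.0, v + dv / 2.0, nv):
        phi = phi0 * np.exp(-0.5 * ((vp - v_z) / sigma) ** 2)
        out = out + (I0 * n_hi * phi).sum(axis=0)   # integrate along z (eqn 2)
    return out / nv                                  # (1/dv) * channel integral
```

With `dv` wide enough to span all the emission, `dv * i_obs(...)` recovers $I_{0} N_{HI}(\vec{R})$ up to the quadrature error, mirroring eqn.~(5).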
As discussed in the previous section, we keep $\sigma$ constant everywhere across the simulation volume. It can be shown that for a thick cube the power spectrum of the three dimensional density distribution has the same slope as that of the projected density, i.e., the column density here (\citet{2009MNRAS.398..887D}, LP00). Hence, we generate $\delta n_{HI}$ and $v^{T}$ such that they follow Gaussian distributions with power spectra of slopes $\alpha$ and $\beta$, respectively. \citet{2013MNRAS.436L..49D} have estimated the amplitude of the {\rm H~{\sc i}~} fluctuations for six galaxies of the THINGS sample \citep{2008AJ....136.2563W}. They found that the amplitude of the column density fluctuations is approximately one tenth of the mean column density for these galaxies. Hence, here we consider \begin{equation} n_{HI} \ =\ n_{0} \left [ 1 + f_{n_{HI}} \delta n_{HI} (\vec{r}) \right ], \end{equation} with $f_{n_{HI}} = 0.1$ and $\delta n_{HI}(\vec{r})$ the random component due to turbulence. \citet{2009AJ....137.4424T} estimated the {\rm H~{\sc i}~} velocity dispersion for the THINGS galaxies from Moment-II maps and found that the turbulent velocity dispersion $\sigma_{T}$ varies in the range $\sim 5 $ to $20$ km sec$^{-1}$. On the other hand, the flattening velocity of the rotation curve for the same galaxies lies in the range $\sim 100$ to $200$ km sec$^{-1}$ \citep{2008AJ....136.2648D}. Here we adopt $\sigma_{T} = f_{v} v_{0}$, with $f_{v} = 0.1$. Considering the gas to be at a temperature of $\sim 500$ K, we adopt a thermal velocity dispersion of $\sigma = f_{T} v_{0}$, where $f_{T} = 0.01$. Note that the actual value of $v_{0}$ is unimportant here. In order to see the effect of the density and velocity fluctuations on $P(K, \delta v)$, we consider all combinations of $\alpha = (-1.5, -2.5)$ and $\beta = (-1.5, -2.5)$. In the literature, the velocity and density fluctuations due to turbulence are assumed to be uncorrelated. 
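A standard way to generate Gaussian random fields such as $\delta n_{HI}$ and $v^{T}$ with power-law spectra of slopes $\alpha$ and $\beta$ is to colour white noise in Fourier space. The sketch below illustrates that construction (ours, not necessarily the generator used here):

```python
import numpy as np

def powerlaw_field(n, slope, seed=0, ndim=2):
    """Gaussian random field on an n^ndim grid whose isotropic power
    spectrum follows P(K) ~ K^slope; normalised to zero mean, unit rms."""
    rng = np.random.default_rng(seed)
    white = np.fft.fftn(rng.standard_normal((n,) * ndim))
    k1 = np.fft.fftfreq(n) * 2.0 * np.pi
    k = np.sqrt(sum(g**2 for g in np.meshgrid(*([k1] * ndim), indexing="ij")))
    amp = np.zeros_like(k)
    amp[k > 0] = k[k > 0] ** (slope / 2.0)        # amplitude = sqrt(power)
    field = np.fft.ifftn(white * amp).real
    field -= field.mean()
    return field / field.std()
```

One would then set, e.g., $\delta n_{HI}$ = `powerlaw_field(NGrid, alpha, ndim=3)` and scale it by $f_{n_{HI}}$ as in eqn.~(14), and build $v^{T}$ the same way with slope $\beta$ and rms rescaled to $\sigma_{T}$.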
Here we consider two cases: $n_{HI}$ and $v^{T}$ either completely uncorrelated or completely correlated. Galaxies with higher inclination angles can have scale mixing in the projected direction. Rotational velocity effects also manifest themselves more at higher inclination angles because of the $\sin(i)$ factor in eqn.~(13). We choose the inclination angle to be $i= 10 ^{\circ}$. Figure~(1) shows different aspects of the simulated galaxy for $\alpha = -2.5$, $\beta = -2.5$, $i=10^{\circ}$, for the case when $n_{HI}$ and $v^{T}$ are uncorrelated. The column density map is shown in Figure~(1a), while the line of sight velocity $v_{z}$ is shown in Figure~(1b). Figure~(1c) shows $I^{obs}$ for a certain value of $v$ with $\delta v = \sigma_{T}/2$. In this case only a part of the galaxy is visible and the area over which $P(K, \delta v)$ can be estimated is restricted. The integrated line profile of the galaxy is shown in Figure~(1d). Assuming the centre of the galaxy to be at the centre of the simulation volume and the inclination angle to be $i$, we generated the specific intensity given in eqn.~(2) with a velocity resolution equal to the thermal velocity dispersion. The range of $v$ is chosen such that all the emission from the model galaxy is included. We estimate $P(K, \delta v)$ defined in eqn.~(7) for (a) $\delta v$ covering the entire {\rm H~{\sc i}~} emission, i.e., $\delta v = \Delta v$, and (b) $\delta v = \sigma_{T}/2$. Results are discussed in the next section. \section{Results and Discussions} \begin{figure} \begin{center} \epsfig{file=fig1.eps,width=3.2in} \end{center} \caption{2D power spectra with $\delta v = \Delta v$ plotted for different combinations of $\alpha = (-1.5, -2.5)$ and $\beta = (-1.5, -2.5)$ for inclination angle $i = 10^{\circ}$. Blue dash and dot-dash lines show the power spectra for $\beta = -1.5$ and $-2.5$ respectively with $\alpha = -2.5$. The solid blue line is a plot of $P(K) \propto K^{\alpha}$. 
The black curves are the corresponding power spectra with $\alpha=-1.5$. All curves are shifted arbitrarily in the vertical direction for clarity.} \label{fig:ps_thick} \end{figure} We first discuss the results for the case when $n_{HI}$ and $v^{T}$ are assumed to be uncorrelated. Figure~(2) shows the power spectra $P(K, \delta v)$ with $\delta v = \Delta v$. Here the ``dot-dash'' lines correspond to the power spectra for $\beta = -2.5$, while the ``dash'' lines are for $\beta = -1.5$. Power spectra corresponding to $\alpha = -2.5$ are shown in blue and those for $\alpha = -1.5$ in black. The solid lines correspond to power laws with slopes $-2.5$ and $-1.5$ respectively. All curves are shifted in the vertical direction arbitrarily for clarity. It is clear that the power spectrum of the intensity with $\delta v = \Delta v$ has the same slope as that of $n_{HI}$, irrespective of the slope of the velocity power spectrum. As in our simulation we have considered a thick disk for the galaxy, we expect the slope of the power spectrum of the column density to be the same as that of $n_{HI}$. Hence, the nature of the power spectra in Figure~(2) is quite expected and is just a verification of eqn~(8). \begin{figure} \begin{center} \epsfig{file=fig2.eps,width=3.2in} \end{center} \caption{2D power spectra with $\delta v = \sigma_{T}/2$ plotted for the same combinations as in Figure~(2). Note that the slope of the spectra changes for $K > 0.8$. } \label{fig:ps_thin} \end{figure} We estimate the power spectra $P(K, \delta v)$ for all four combinations of $\alpha$ and $\beta$ and $i = 10^{\circ}$ with $\delta v = \sigma_{T}/2$, where we expect to see the effect of the turbulent velocity $v^{T}$ in the intensity power spectra. Figure~(3) shows the power spectra corresponding to those in Figure~(2), now with $\delta v = \sigma_{T}/2$. Before we interpret these curves, we need to realise that here the emission comes from only a part of the galaxy's disk, as shown in Figure~(1c). 
In such a case, the observed intensity power spectra bear the imprint of the shape of the window from which the emission originates \footnote{The effect of the window is discussed in detail in \citet{2009MNRAS.398..887D}.}. This is precisely why all four curves in Figure~(3) have a similar nature for $K<0.8$, and we can only expect to see the effect of $\alpha$ or $\beta$ beyond that. Interestingly, for $K>0.8$ the power spectra are independent of the value of $\alpha$ and differ for different $\beta$. This can be the effect of velocity modification, i.e., the effect of the line of sight component of the turbulent velocity $v^{T}$ on the intensity power spectrum. As this is independent of $\alpha$, the nature of the velocity modification is different from what is expected from the result of LP00 (see eqns.~(11, 12)). To investigate how $P(K, \delta v)$ for $K>0.8$ changes with different values of $\beta$, we performed the same simulation with $\alpha = -2.5$ and $i=10^{\circ}$, for values of $\beta$ ranging from $-3.0$ to $-1.0$. For each case, we fit the power spectrum at $K>0.8$ with a power law of the form $P(K) \propto K^{\gamma}$ and note the best fit values. Since in the simulation we have not added any contribution from observational uncertainties, we only use the sample variance generated noise in this fit. Results are shown in Figure~(4), where we plot the best fit $\gamma$, with errors from the fit, against $\beta$. We use a second order polynomial to empirically fit the values of $\gamma$ against $\beta$, i.e., $f(x) = a_{0} + a_{1} x + a_{2} x^{2}$, with $a_{0}, a_{1}$ and $a_{2}$ taking the values $-0.27, -0.17$ and $-0.15$ respectively. \begin{figure} \begin{center} \epsfig{file=fig4.eps,width=3.2in} \end{center} \caption{ Points with error bars represent the best fit values of $\gamma$ as a function of $\beta$. We use a polynomial fit to the points and the best fit curve is shown as a blue solid line. 
The values of the polynomial coefficients are also given in the plot window.} \end{figure} Next we consider the case when the fluctuations in $n_{HI}$ and those in $v^{T}$ are correlated; in fact, we use the same set of Gaussian random variables to represent them. Hence, in this case, we only consider the variation of $\alpha$, since the $n_{HI}$ and $v^{T}$ fluctuations are scaled versions of the same original Gaussian variables. As expected, the power spectrum with $\delta v = \Delta v$ has the slope of the column density power spectrum and the corresponding plot is essentially identical to Figure~(2), apart from minute differences arising from statistical fluctuations. We also estimated the power spectrum of the {\rm H~{\sc i}~} intensity with $\delta v = \sigma_{T}/2$. These power spectra show the same trends as in Figure~(3): for $K<0.8$ they are dominated by the windowing effect, and for larger $K$ they follow a power law, with the same variation of the slope as in Figure~(4). We do not show these plots here to avoid repetition. To summarise, for both cases, $n_{HI}$ and $v^{T}$ uncorrelated or perfectly correlated, the power spectrum $P(K, \delta v)$ with $\delta v = \Delta v$ always reproduces a power law with the same slope as the $n_{HI}$ power spectrum, while with $\delta v = \sigma_{T}/2$, at larger $K$ the power spectrum has a slope that correlates only with the slope of the velocity spectrum. \section{Conclusions} In this letter we investigate how the {\rm H~{\sc i}~} intensity fluctuation power spectrum is related to the number density and the line of sight component of the turbulent velocity for external galaxies. We found that, for scale invariant fluctuations in both density and velocity, when the emission is integrated over the entire velocity range, the intensity fluctuation power spectrum follows a power law with the same slope as the power spectrum of the {\rm H~{\sc i}~} number density fluctuations. 
We considered a thick disk for the galaxy in this case; in the case of a thin disk, the intensity spectrum would have a slope shallower by order unity (\citet{2009MNRAS.398..887D}, LP00). When the emission is integrated over a velocity range smaller than the turbulent velocity dispersion, due to the galactic rotation only a part of the galaxy is visible. The effect of the density or the velocity fluctuations on the intensity power spectrum can then be inferred only for higher values of $K$ and over a relatively narrow range of $K$ values. We found that the spectrum in this range approximately follows a power law whose slope $\gamma$ is related nonlinearly only to the slope of the velocity power spectrum $\beta$, and is independent of the power spectrum of the density. This differs from the result of LP00, where one expects $\gamma = \alpha +\beta/2$. Note that this result is based on a power law fit to the power spectrum over a narrow range of $K$. Nevertheless, it clearly demonstrates that the velocity modification of the {\rm H~{\sc i}~} power spectrum for external galaxies is quite different from that in our Galaxy. In our simulations, we made several simplifications. First, we ignored the overall {\rm H~{\sc i}~} profile of the galaxy. However, as shown in \citet{2009MNRAS.398..887D}, the effect of this profile is to modify the power spectrum at lower $K$, which, for a galaxy spanning the simulation volume, would happen at $K<0.1$; this is not the regime of interest for the velocity modification. Similarly, including a realistic rotation curve would only change the window over which the power spectrum can be investigated for velocity modification. Given these considerations, our results should also hold for real galaxies. The effect of a thermal velocity dispersion varying over the galaxy, on the other hand, is more complicated and needs to be investigated in detail separately. 
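The empirical $\gamma(\beta)$ calibration of Figure~(4) is a quadratic least-squares fit. The sketch below is illustrative only: it uses the coefficients quoted in Section 4 to build noiseless synthetic points, whereas the actual fit is of course performed on the measured slopes with their errors:

```python
import numpy as np

# Quadratic fit f(x) = a0 + a1*x + a2*x^2; coefficients quoted in Section 4.
a0, a1, a2 = -0.27, -0.17, -0.15

beta = np.linspace(-3.0, -1.0, 9)            # simulated range of beta
gamma = a0 + a1 * beta + a2 * beta**2        # synthetic "measured" slopes

coeff = np.polyfit(beta, gamma, 2)           # highest power first: [a2, a1, a0]
```

Inverting the fitted quadratic then maps an observed thin-channel slope $\gamma$ back to the velocity slope $\beta$ within the calibrated range.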
Finally, we discuss the feasibility of using the relation we obtain between $\gamma$ and $\beta$ in Figure~(4) for a real observation. Considering the galaxy to be spread over our entire simulation volume, the dynamic range in $K$ from the simulation is approximately the same as that in the THINGS observations. \citet{2013NewA...19...89D} have estimated the power spectra of 18 nearby spiral galaxies from the THINGS sample. Given the baseline coverage of the observations, they could estimate the power spectra only up to $1/4^{th}$ of the largest baseline. This is because at higher baselines the baseline coverage is restricted and the signal to noise ratio is insufficient to estimate the power spectrum with statistical significance. We use $NGrid = 512$ for our simulation. As the largest $K$ in the simulation is $\pi$, $1/4^{th}$ of the available baseline range of THINGS corresponds to $K = \pi /4 \sim 0.8$ in our simulation, making these observations insufficient to probe the velocity modification in this way. We chose an inclination angle of $10^{\circ}$ for our simulation; any higher inclination angle would result in an even smaller range of $K$ over which the values of $\gamma$ can be estimated. We conclude that, with present telescopes, using the VCA technique for external galaxies requires long integration times. An alternative method to estimate the turbulent velocity spectra of external galaxies would be more useful. We aim to investigate along this direction in the future. \section*{Acknowledgement} PD acknowledges useful discussions with Somnath Bharadwaj, Jayaram N. Chengalur, Nirupam Roy and Nissim Kanekar. This work is supported by the DST INSPIRE Faculty Fellowship award [IFA-13 PH 54] and was performed at the Indian Institute of Science Education and Research, Bhopal. PD is thankful to Narendra Nath Patra, Sushma Kurapati and Preetish Kumar Mishra for reading an earlier version of the draft and providing valuable comments.
\section{Astrophysical Context} Hot subluminous O stars form the hottest part of the extreme horizontal branch (EHB) region, which is itself a hot extension of the horizontal branch. The EHB region of the Hertzsprung-Russell (HR) diagram encompasses stars that span a wide range of effective temperatures, from 22,000 K up to 100,000 K, and that are more compact (4.8 $\lesssim$ log $g$ $\lesssim$ 6.4) than main sequence stars. This includes stars from two distinct spectral types: the cooler sdBs with their strong Balmer lines (22,000 K $\leq$ $T_{\rm eff}$~$\leq$ 38,000 K), and the hotter sdOs showing strong He~\textsc{ii} lines as well ($T_{\rm eff}$~$\geq$ 38,000 K). Most hot subdwarfs are believed to be helium core burning objects -- or objects in a phase immediately following core helium exhaustion -- with a layer of H-rich material too thin to sustain significant shell burning\footnote{Note that a few hot subdwarfs have been found to have a mass too low to sustain helium core burning (e.g., HD 188112, \citealt{heb03}). However, these low mass stars represent only a tiny fraction of the hot subdwarf population.}. From a spectroscopic point of view, sdB stars form a rather homogeneous group: they mostly cluster within the theoretical core helium burning region and its immediate surroundings in the HR diagram. Numerous analyses have been made of various samples of sdB stars, and their global properties (helium content, metal content, rotational velocity, binary population, etc.) are now well documented (e.g., \citealt{ede03,geier10,geier12,geier13,fon14}). However, the situation is different for the hotter sdOs: they are distributed over a much larger region in the HR diagram and this distribution is not as homogeneous as in the case of the sdBs. 
While most of the latter are helium poor, the majority of sdO stars have an atmosphere enriched in helium, and they are thought to be the results of various peculiar evolutionary paths\footnote{A comprehensive review of the global properties and characteristics of hot subdwarf stars can be found in \citet{heb09}.}. One striking fact about sdO stars is that they have been much less studied than their cooler counterparts. The most significant study in our view has been the one carried out by \citet{stro07} using a homogeneous sample of sdOs observed within the SPY survey. Additional abundances of carbon and nitrogen were then measured in those sdO stars by \citet{hirschthesis} and Hirsch \& Heber (in preparation, 2015). The sample of $\omega$ Centauri EHB stars in \citet{moe11} and \citet{lat14} also included a fair number of sdO stars, but mostly cooler ones found at the transition between the spectral types B and O (i.e., below 40,000 K). While known sdO stars in the field are outnumbered by sdBs (with a number ratio $\approx$ 1 : 3; \citealt{heb09}), the true reason for this relative lack of investigation must be found in the inherent challenge associated with the atmospheric modeling and spectroscopic analysis of very hot stars. For a star having $T_{\rm eff}$~$\gtrsim$ 50,000 K, the fundamental parameters determined by comparing the observed Balmer and helium lines in the optical with model ones carry significant uncertainties. This is mostly due to the so-called Balmer line problem, first noticed by \citet{nap92,nap93} in hot central stars of old planetary nebulae. Basically, this problem comes down to the inability to simultaneously reproduce the observed Balmer lines with a unique set of fundamental parameters (log $g$ - $T_{\rm eff}$). More specifically, the individual lines need different temperatures in order to be matched properly, with the higher lines in the series requiring models at higher temperatures. 
For example, for BD$+$28$\degr$4211, H$\alpha$ was best reproduced at $T_{\rm eff}$\ $\simeq$~50,000 K and H$\epsilon$ at around 85,000 K \citep{nap93}. In such a situation, it is rather tricky to determine the temperature of the star without additional information. In this particular case, the author could rely on UV data (from the International Ultraviolet Explorer, IUE) whose first analysis led to a value of $T_{\rm eff}$~$\simeq$ 82,000 K \citep{dre93}. This relatively high value of $T_{\rm eff}$\ was also supported by the weakness of the He~\textsc{i} 5876 \AA\ line in the optical domain, which requires a high effective temperature. On the basis of these results, it was then concluded that the H$\epsilon$ line was the one that could provide the most realistic temperature estimate. This ``calibration'' may have been useful at times, but it was not at all satisfactory on general grounds, and different hypotheses were soon investigated to solve this embarrassing Balmer line problem \citep{nap94}. Most of them were rapidly rejected, save for the idea that the inclusion of metallic elements in the models might influence in a significant way the atmospheric structure, which in turn could change the shapes of the Balmer lines\footnote{The suggestion that metal opacity is at the heart of the Balmer line problem was first made by \citet{ber1993}.}. Note that at the time this issue was first identified, model atmospheres used for these hot stars employed a non-LTE (NLTE) treatment but included only H and He; the treatment of line blanketing by metals was still in its early stages. Accounting for the effects of metallic elements via their colossal numbers of transition lines was, at the time, a real computational challenge. 
Thanks to the work of \citet{dre93} and \citet{hub95} on the development of numerical techniques allowing the inclusion of metals (such as C, N, O, and iron-group elements) and the treatment of their transition lines in NLTE calculations, it was subsequently shown that these elements can indeed strongly influence the thermodynamical structure of the atmosphere. However, the resulting effects on the Balmer lines were initially found to be surprisingly weak \citep{haas96}. It was \citet{wer96} who introduced an important refinement in the treatment of light metal opacity: the inclusion of Stark broadening profiles for the CNO elements instead of the Doppler profiles previously used. This addition led to an improved reproduction of the Balmer lines in his two test stars, BD$+$28$\degr$4211~itself and LS V$+$46$\degr$21, the DAO-type central star of a planetary nebula. Despite this breakthrough, hot stars such as those presented in \citet{wer96} were never subsequently analyzed by attempting a simultaneous fit of all of the available Balmer and helium lines in optical spectra. This now widely used technique has proven itself to be a robust tool for the determination of fundamental parameters ($T_{\rm eff}$, log $g$, and sometimes also $N$(He)/$N$(H)) in cooler white dwarfs and sdB stars \citep{ber94,saf94}. \citet{rauch07} later carried out a comprehensive spectral analysis of LS V$+$46$\degr$21, but they determined the effective temperature of the star using mainly the ionization equilibria of different metallic species whose lines were visible in the UV spectra of the star. The strongest He~\textsc{ii} lines ($\lambda\lambda$1640, 4686) and H$\beta$ were used to constrain the surface gravity. Feige 110 and G191-B2B were also analyzed in a similar way, combining both UV and optical data to assess fundamental parameters and metal abundances \citep{rauch13,rauch14}. 
As for BD$+$28$\degr$4211, no further detailed studies of that star have been made since \citet{haas96}; later on, \citet{ram03} estimated the abundances of a few metallic elements using IUE and HST Space Telescope Imaging Spectrograph (STIS) data. Given the particular status of BD$+$28$\degr$4211~as a spectroscopic standard star, both in the optical domain and in the UV range, modern data of extremely good quality are publicly available (through the Mikulski Archive for Space Telescopes, MAST\footnote{http://archive.stsci.edu/}). Surprisingly, and until recently, these data have barely been exploited. Moreover, some X-ray emission has been measured in BD$+$28$\degr$4211, as well as in two other sdO stars, Feige 34 and BD$+$37$\degr$1977 \citep{lapa14}. In view of this state of affairs, and given the availability of optical spectra of exceptionally high sensitivity (see below), we undertook an in-depth spectral analysis of this star with the main aim of testing the simultaneous optical fitting method on a very hot star. This is of importance for hot stars, the majority of them in fact, for which only optical spectroscopy is readily accessible and no UV data are available. The first part of this analysis (\citealt{lat13}, hereafter Paper I) focussed on the UV spectral distribution of BD$+$28$\degr$4211~using STIS and Far-Ultraviolet Spectroscopic Explorer (FUSE) spectra. We obtained in a self-consistent way the abundances of 11 elements with well defined lines in the UV, namely C, N, O, F, Mg, Si, P, S, Ar, Fe, and Ni. None of these elements was found to be enriched; the abundances rather lie between the solar value and 1/10 solar. With the help of the ionization equilibria of several metallic species, we were able to confirm the previously determined effective temperature and constrain it to a value of 82,000 $\pm$ 5,000 K. 
We also estimated conservatively the surface gravity of the star to be log $g$ = 6.2$_{-0.1}^{+0.3}$, which is also consistent with past results. By comparing the Hipparcos parallax measurement of BD$+$28$\degr$4211~\citep{hip07} with spectroscopic distances estimated from several model spectra, we found that, in order to reconcile both values, the star needs either a log $g$ higher than 6.2 or a mass significantly lower than the canonical value of 0.5 $M_{\rm \odot}$. With this information at hand, we can now tackle the analysis of its optical spectrum. Past spectroscopic studies of hot stars like BD$+$28$\degr$4211~always relied on UV data, sometimes supported by optical ones, to get reliable fundamental parameters (e.g., \citealt{rauch07,fontm08,zie12}). However, the need to rely on UV data can be very restricting since they must be gathered with space missions, which are far less accessible than ground-based observations supplying optical spectra. Our goal here is to find a way, using our test case star, to obtain reliable fundamental parameters ($T_{\rm eff}$, log $g$, and $N$(He)/$N$(H)) using solely optical data. Given that changes in optical line profiles may be subtle at times in the very hot star regime, this necessitates spectra of high S/N and/or high resolution. In this spirit, we exploit three very high sensitivity spectra having various resolutions and wavelength coverages. We exploit as well a high resolution UVES spectrum culled from the ESO archive. This material is described in more detail in the following section. The main part of this paper, Section 3, includes a description of the model grids we used as well as the subsequent spectroscopic analyses made. In Section 4, we also carry out some additional verifications to test our deduced fundamental parameters by comparing our best-fit models with additional high resolution archive spectra. Finally, a discussion follows in Section 5.
\section{Observational Material} BD$+$28$\degr$4211\ is a well known, bright (V = 10.58) standard star and, as such, has been regularly observed for calibration purposes. In particular, as part of her spectroscopic programs at the University of Arizona, one of us (E.M.G.) has observed that standard star for many years using mainly three different instrumental setups, each corresponding to a different spectral resolution and spectral coverage. Hence, by carefully combining the individual calibration data for each setup, we have obtained three exceptionally high sensitivity spectra for BD$+$28$\degr$4211\ on which a large part of the present analysis is based. This issue of the signal-to-noise ratio (S/N) is quite important in the present context since we seek to detect differences between the observed and modelled line profiles that may be relatively small. It should also be pointed out that particular care has always been taken while observing BD$+$28$\degr$4211\ in order to avoid contamination from the light of a nearby star. Indeed, rotating the slit to the parallactic angle at the midpoint of the exposure ensures that no light from the faint red companion of the star \citep{mas90} contaminates the spectrum of the sdO. Usually the companion was off the slit; however, in the few cases when it fell within the slit, there was a clear spatial separation between the spectra of the companion and BD$+$28$\degr$4211, so it was always possible to extract only the sdO spectrum. Our first instrumental setup combines the blue spectrograph with the 6.5~m Multiple Mirror Telescope (MMT). The 832 mm$^{-1}$ grating is used in second order and, with the choice of a 1$^{\prime\prime}$ slit width, this combination provides a resolution $R$ of $\sim$4250 (1.0~\AA) and covers the wavelength range 4000--4950~\AA.
The careful combination of 20 individual spectra of BD$+$28$\degr$4211~observed at the MMT resulted in a spectrum having a formal S/N around 1,100\footnote{The S/N calculation includes the summed star and sky photons plus the CCD readnoise. One thousand or more bias and flat images were taken during each run so that the processing of each spectral image introduces negligible additional noise.}. This will be referred to as the MMT spectrum in what follows. The other two instrumental setups make use of the Boller \& Chivens (B\&C) Cassegrain spectrograph mounted on Steward Observatory's 2.3~m Bok Telescope at Kitt Peak. Here, the 832~mm$^{-1}$ grating in second order with a 1.5$^{\prime\prime}$ slit is used to achieve a resolution of 1.3~\AA\ over a bluer wavelength range of 3675--4520~\AA. Twenty spectra were obtained with this particular setup, each flux calibrated and then combined with median filtering. The resulting S/N ratio is $\sim$918. This spectrum will later be referred to as the BOK1.3 one. The third set of observations covers a much wider wavelength interval, from 3620 to 6900 \AA, but at the cost of a lower resolution of 8.7~\AA. These observations are still very useful because they include two additional and important spectral lines in BD$+$28$\degr$4211: He~\textsc{ii} at 5412~\AA\ and H$\alpha$. These low resolution spectra were obtained with a 400~mm$^{-1}$ grating in first order in conjunction with a 2.5$^{\prime\prime}$ slit. Our resulting 8.7 \AA\ spectrum is the combination of 90 individual observations and has a remarkable overall S/N of $\sim$2500. The resulting spectrum is referred to as the BOK8.7 one in what follows. Additionally, we retrieved one UVES spectrum of BD$+$28$\degr$4211\ through the ESO archive\footnote{www.eso.org/sci/observing/phase3/data\_releases.html} (program ID 69.C-0171(A)), which provides access to reduced scientific data obtained with the UVES spectrograph mounted on the VLT.
That spectrum has a much lower S/N ratio than our Steward Observatory (SO) data, but it has significantly better resolution. It comes in two parts: 1) a ``blue'' one characterized by a spectral coverage of 3281--4562~\AA, a resolving power R = 68,642, leading to $\Delta \lambda \sim$0.06 \AA\ in mid-range, and a value of S/N $\sim$ 95, and 2) a ``red'' one characterized by a spectral coverage of 4624--6686~\AA, a resolving power R = 107,200, leading to $\Delta \lambda \sim$0.05 \AA\ in mid-range, and a value of S/N $\sim$ 63. One other important property of this UVES spectrum is that its continuum behaves relatively well for an echelle spectrum, so the observed line profiles are already amenable to direct comparisons with model line profiles. Thus, we made no particular attempt to remove some of the small remaining wavy structure shown in the data, except for fixing a discontinuity that is present in the red wing of the He~\textsc{ii} 4686 \AA\ line. We refer to that spectrum as UVES in the rest of this paper. There are also many high resolution HIRES observations of BD$+$28$\degr$4211~available in the Keck Observatory Archive\footnote{https://koa.ipac.caltech.edu} and extracted spectra, produced by an automated pipeline, are available for more than a hundred of them. Due to continuum placement difficulties, however, these echelle data are not particularly suited for a formal analysis aimed at simultaneously fitting the optical lines of hydrogen and helium. Nevertheless, some of the HIRES spectra are of very good quality (S/N up to $\sim$300) and are highly interesting since they reveal details that cannot be seen in our own spectra. We thus retrieved 23 of the available extracted spectra ($\lambda$ between 4600--6600 \AA, highest S/N) in order to make a posteriori comparisons between some observed lines and our optimal model spectra as well as to look for any radial velocity variations. Among those spectra, two were featured in \citet{her99}.
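The mid-range $\Delta\lambda$ values quoted above for the two UVES arms follow directly from $\Delta\lambda \approx \lambda_{\rm mid}/R$. A minimal numerical sketch (purely illustrative, not part of any reduction pipeline):

```python
# Check that the quoted mid-range resolution elements of the two UVES arms
# follow from dlam = lam_mid / R.
def midrange_resolution(lam_min, lam_max, resolving_power):
    """Wavelength resolution element (AA) at the middle of the coverage."""
    lam_mid = 0.5 * (lam_min + lam_max)
    return lam_mid / resolving_power

blue = midrange_resolution(3281.0, 4562.0, 68642.0)   # ~0.057 AA, i.e. ~0.06 AA
red = midrange_resolution(4624.0, 6686.0, 107200.0)   # ~0.053 AA, i.e. ~0.05 AA
print(f"blue arm: {blue:.3f} AA, red arm: {red:.3f} AA")
```
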
However, there is a major inconvenience with the HIRES spectra: the continuum of the various orders is uneven and noticeably wavy. While this flaw can be overcome rather easily when studying narrow spectroscopic features, which is an important purpose of such high resolution observations ($\sim$ 0.1 \AA), it is difficult to deal with when the lines of interest are tens of angstroms in width. The best way we found to flatten the continuum of the retained spectra was to use the continuum of adjacent orders, which often had a similar shape. Continua from adjacent orders were shifted and superimposed on the spectra of interest and, when the two matched well enough, dividing the spectrum by that continuum flattened it in a satisfactory way. \begin{figure*}[t] \includegraphics[scale=0.39,angle=270]{fig1a.eps} \includegraphics[scale=0.39,angle=270]{fig1b.eps} \includegraphics[scale=0.39,angle=270]{fig1c.eps} \includegraphics[scale=0.39,angle=270]{fig1d.eps} \caption{Best fits obtained using our grid of NLTE metal-free models. In order of increasing resolution: BOK8.7 spectrum (top left), BOK1.3 spectrum (top right), MMT spectrum (bottom left), UVES spectrum (bottom right). The observed spectral lines are shown in red, while the modeled lines are shown in black.} \label{fitg2} \end{figure*} \section{Spectroscopic Analysis} \subsection{Model Atmosphere Grids} Our grids of models were computed with the public codes TLUSTY and SYNSPEC\footnote{http://nova.astro.umd.edu/index.html} \citep{lanz95}, which were run on CALYS, our cluster of computers currently containing 320 processors, where a large number of models can be simultaneously computed. Further technical details on the models can be found in Paper I, especially about the ionic species that were included. We started our analysis with two different model grids.
The first one is a metal-free grid of NLTE models, one of the grids that were built at the time of the analysis of the pulsating sdO star SDSS J160043.6+074802.9 \citep{lat11}. The purpose of using this grid is mainly for comparison. The second grid is one especially suited for BD$+$28$\degr$4211, which includes eight of the main metallic constituents of the star's atmosphere, namely C, N, O, Mg, Si, S, Fe, and Ni. Let us remind the reader here that in the course of the analysis made in Paper I, we inspected our model atoms and added a classic Stark profile to a few important lines of \ion{C}{iv}, \ion{N}{iv}, O~\textsc{iv}-\textsc{v}, and Si~\textsc{iv}. The abundances of these elements were taken from the results of the UV analysis made in Paper I. Moreover, the hydrogen broadening profiles of \citet{trem09} were added to the SYNSPEC code, as a replacement for the \citet{lemke97} broadening tables previously used. This grid covers a parameter space centered on the parameters of BD$+$28$\degr$4211, with $T_{\rm eff}$\ varying from 76~kK up to 90~kK by steps of 2,000 K, log $g$ from 5.4 to 6.8 dex by steps of 0.2 dex, and finally log \nhe\ from $-$2.0 to 0.0 dex by steps of 0.5 dex. We did not include F, P, and Ar in the metallicity considered for the construction of this grid, mainly for technical reasons. Indeed, our current implementation of TLUSTY on our cluster CALYS leads to some convergence problems as well as memory restrictions when multi-metal NLTE model atoms are simultaneously considered. This is particularly true when complex atoms such as those of Fe and Ni are included (as is the case here). This considerably slows down the computations to the point of being impractical. Extensive tests have shown that, for the abundances deduced in Paper I, the three above elements contribute, in fact, negligibly to the overall metal opacity in the atmosphere of BD$+$28$\degr$4211\ compared to the other elements that we retained.
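For orientation, the dimensions of the second grid quoted above can be enumerated from the stated step sizes. A hedged sketch (illustrative only; variable names are not from TLUSTY or SYNSPEC):

```python
# Enumerate the parameter space of the second model grid: Teff from 76 to 90 kK
# in steps of 2 kK, log g from 5.4 to 6.8 in steps of 0.2 dex, and
# log N(He)/N(H) from -2.0 to 0.0 in steps of 0.5 dex.
import itertools

def frange(start, stop, step):
    """Inclusive range of floats with a fixed step."""
    n = round((stop - start) / step)
    return [start + i * step for i in range(n + 1)]

teff_kk = frange(76.0, 90.0, 2.0)   # 8 values of Teff (kK)
logg = frange(5.4, 6.8, 0.2)        # 8 values of log g (dex)
loghe = frange(-2.0, 0.0, 0.5)      # 5 values of log N(He)/N(H)

grid = list(itertools.product(teff_kk, logg, loghe))
print(len(grid))  # 320 model atmospheres in total
```
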
In this way, we were able to build our second grid over a reasonable length of time. Still, this second grid of models must be viewed as one that includes a {\sl minimal} metallicity since it does not include all the metallic species present in the atmosphere of the star. \subsection{Derived Atmospheric Parameters} The four optical spectra of BD$+$28$\degr$4211~that we gathered were analyzed with the two model grids mentioned in the previous section. We stress that the spectra were analyzed in the same way as the much cooler sdB stars are usually handled: all of the available H and He lines are simultaneously fitted in a three-dimensional space ($T_{\rm eff}$, log $g$, and log \nhe). The $\chi^2$ minimization procedure relies on the Levenberg-Marquardt method, which is based on a steepest descent approach \citep{ber92}. Normalized lines of both the observed and model spectra (convolved at the instrumental resolution) are thus compared. \begin{figure*}[t] \includegraphics[scale=0.39,angle=270]{fig2a.eps} \includegraphics[scale=0.39,angle=270]{fig2b.eps} \includegraphics[scale=0.39,angle=270]{mc1.mbdfinalplus.eps} \includegraphics[scale=0.39,angle=270]{fig2d.eps} \caption{Similar to Fig. 1, but using the NLTE model grid that includes the following elements: C, N, O, Mg, Si, S, Fe, and Ni, and the abundances determined in Paper I.} \label{fitg7f} \end{figure*} Resulting fits obtained with the metal-free grid are displayed in Fig. \ref{fitg2}. Not surprisingly, the resulting temperature is much lower than what is expected from the UV analysis ($T_{\rm eff}$ = 82,000 K $\pm$ 5000 K), but the other parameters are acceptably close to the expected ones (log $g$ = 6.2$_{-0.1}^{+0.3}$, log $N$(He)/$N$(H) $\sim$ $-$1.0). The resulting fits are rather bad, and assessing parameters on such results is not a good option. Note, however, that the very high S/N of our three SO spectra helps a lot in terms of defining ``badness'' here.
The fits of the BOK8.7 and BOK1.3 spectra show good examples of the so-called Balmer line problem, with the lowest lines in the series being too shallow in the model, while the trend shifts in H8 with a model line that is too deep. There is also a hint in the BOK8.7 spectral fit that the resulting temperature is too low when one looks at the helium lines: weak neutral helium lines are predicted by the model while the observed spectrum is essentially flat at these wavelengths and, in addition, the two main ionized helium lines are not strong enough in the model spectrum. These shortcomings are also evident in the fit of the high resolution UVES spectrum, as well as large differences in the cores of several lines. It should be pointed out that the uncertainties on the derived atmospheric parameters quoted in Fig. \ref{fitg2} (and following) are only the formal errors of the fit in 3D space. They do not take into account external errors and systematic effects. Ignoring differences in S/N, spectral coverage, and resolution from one spectrum to another, a better way of verifying the internal consistency of these results is to compute the mean value and the standard deviation for each of the parameters. For the metal-free grid, we thus obtain the following mean values (based on the 4 different spectral fits): $T_{\rm eff}$ = 66,250 K $\pm$ 1053 K, log $g$ = 6.367 $\pm$ 0.081, and log $N$(He)/$N$(H) = $-$1.210 $\pm$ 0.080. This is reported, as well as the inferred parameters of the individual fits, in the top third of Table \ref{res_fitbd}. Note that one obvious systematic effect that somewhat decreases the mean value of log $N$(He)/$N$(H) is related to the fact that the BOK1.3 spectrum covers only one He~\textsc{ii} line, and that is the weak 4200 \AA\ feature. The fits performed with the second grid, having abundances fixed to the ones determined for BD$+$28$\degr$4211~in Paper I, were expected to give more satisfying results.
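The dispersions quoted above are population standard deviations over the four individual fits. A minimal sketch reproducing the metal-free-grid numbers of Table \ref{res_fitbd} (illustrative only):

```python
import math

def mean_and_sigma(values):
    """Mean and population standard deviation (N in the denominator),
    which reproduces the dispersions quoted in the table."""
    n = len(values)
    mean = sum(values) / n
    sigma = math.sqrt(sum((v - mean) ** 2 for v in values) / n)
    return mean, sigma

# Teff from the four fits with the metal-free grid (BOK8.7, BOK1.3, MMT, UVES)
teff_fits = [64584.0, 67504.0, 66479.0, 66432.0]
m, s = mean_and_sigma(teff_fits)
print(f"Teff = {m:.0f} +/- {s:.0f} K")  # Teff = 66250 +/- 1053 K
```
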
As compared to the metal-free case, the temperatures obtained are indeed higher, while the inferred surface gravities and helium abundances change only slightly and remain within the expected ranges (see Fig. \ref{fitg7f}). Nevertheless, in spite of having included abundances that were self-consistently determined, and for which the main UV spectral lines were very well reproduced, the fitting procedure does not give appropriate effective temperatures. The results are too cool by roughly 10,000 K. A close look at the resulting fits shows that they are significantly improved as compared to the metal-free case, but they are not perfect either. A remnant of the Balmer line problem is still visible in our best fits, and the He~\textsc{ii} line at 5412 \AA\ cannot be reproduced properly in the BOK8.7 spectrum. In addition, the details of the line core emission in both H$\beta$ and He~\textsc{ii} at 4686 \AA\ are not well modeled in the UVES spectrum; the discrepancies suggest again too low an inferred temperature. The results of the fits with our second model grid are reported in the middle third of Table \ref{res_fitbd}. The mean values are $T_{\rm eff}$ = 72,554 K $\pm$ 2813 K, log $g$ = 6.536 $\pm$ 0.105, and log $N$(He)/$N$(H) = $-$1.199 $\pm$ 0.100. At this point two things must be kept in mind. The first one is that in this range of $T_{\rm eff}$~and log $g$, the Balmer and helium lines are only weakly sensitive to a change of parameters. A variation of effective temperature slightly changes the depth in the very core of the lines while the wings remain essentially the same. The surface gravity has a higher impact on the wings but also influences the depth, though again not by a very large amount (this will be discussed in more detail in the next section). This causes an intrinsic uncertainty associated with any parameter determination based on optical data.
Secondly, a consequence of the first point is that using very high quality spectra (in terms of S/N) allows us to see the small discrepancies between our best-fit models and the observations. With spectra having a more representative sensitivity, the fits from Fig. \ref{fitg7f} might have looked acceptable and the parameters thus deduced would have been wrong. It should also be mentioned that forcing the temperature to a value of 82,000 K, while fitting the two other parameters, does not result in much better fits with this grid of models. Such a fit can be seen in Figure 4 of \citet{lat14proc}. \begin{table}[b] \caption{Results of our fitting procedures for BD$+$28$\degr$4211}\label{res_fitbd} \centering \scriptsize \begin{tabular}{l l c c} \hline \hline Spectrum & $T_{\rm eff}$ & log $g$ & log \nhe \\ & (K) & (cm s$^{-2}$) & (dex) \\ \\ \multicolumn{4}{c}{NLTE H,He model grid} \\ \hline BOK8.7 & 64,584 $\pm$ 1,113 & 6.463 $\pm$ 0.053 & $-$1.177 $\pm$ 0.048 \\ BOK1.3 & 67,504 $\pm$ 690 & 6.387 $\pm$ 0.020 & $-$1.336 $\pm$ 0.046 \\ MMT & 66,479 $\pm$ 659 & 6.238 $\pm$ 0.023 & $-$1.208 $\pm$ 0.024 \\ UVES & 66,432 $\pm$ 602 & 6.380 $\pm$ 0.022 & $-$1.118 $\pm$ 0.027 \\ rms & 66,250 $\pm$ 1053 & 6.367 $\pm$ 0.081 & $-$1.210 $\pm$ 0.080 \\ \hline \\ \multicolumn{4}{c}{NLTE line-blanketed grid with the metallic abundances of BD$+$28$\degr$4211} \\ \hline BOK8.7 & 68,416 $\pm$ 556 & 6.698 $\pm$ 0.021 & $-$1.173 $\pm$ 0.020 \\ BOK1.3 & 74,685 $\pm$ 927 & 6.536 $\pm$ 0.015 & $-$1.365 $\pm$ 0.044 \\ MMT & 71,561 $\pm$ 561 & 6.501 $\pm$ 0.019 & $-$1.101 $\pm$ 0.018 \\ UVES & 75,555 $\pm$ 615 & 6.408 $\pm$ 0.021 & $-$1.156 $\pm$ 0.020 \\ rms & 72,554 $\pm$ 2813 & 6.536 $\pm$ 0.105 & $-$1.199 $\pm$ 0.100 \\ \hline \\ \multicolumn{4}{c}{NLTE line-blanketed grid with ten times solar abundances} \\ \hline BOK8.7 & 79,694 $\pm$ 1,332 & 6.508 $\pm$ 0.045 & $-$1.157 $\pm$ 0.033 \\ BOK1.3 & 80,678 $\pm$ 1,174 & 6.536 $\pm$ 0.015 & $-$1.380 $\pm$ 0.045 \\ MMT & 82,738 $\pm$ 639 & 6.582
$\pm$ 0.016 & $-$1.050 $\pm$ 0.013 \\ UVES & 82,257 $\pm$ 660 & 6.450 $\pm$ 0.019 & $-$1.152 $\pm$ 0.017 \\ rms & 81,342 $\pm$ 1219 & 6.519 $\pm$ 0.048 & $-$1.185 $\pm$ 0.121 \\ \hline \end{tabular} \\ \end{table} \subsection{Exploring Metallicity Effects on Spectral Lines} In his important contribution on the Balmer line problem in an NLTE context, \citet{wer96} expressed the hope that including more elements than C, N, and O (as he did in his experiments) would definitely solve the problem in stars such as the hot sdOs that he investigated, BD$+$28$\degr$4211\ itself and the similar object LS V$+$46$\degr$21 (the central star of S216). The inclusion of C, N, and O in solar amounts helped a lot, but there were still remaining discrepancies between the observed and computed line profiles that most likely were due to additional missing opacity. As shown just above, the inclusion of C, N, O, Mg, Si, S, Fe, and Ni with the specific abundances derived from our UV analysis of BD$+$28$\degr$4211\ in Paper I did considerably improve the spectral fits, but our effort fell short in the sense that there is still ample room for improvement and we significantly underestimate the effective temperature of our target. This is somewhat disappointing, and we must conclude that significant opacity is still missing in these models\footnote{In this context, we reemphasize that our neglect of the contributions of F, P, and Ar at their derived UV abundances in our calculations cannot be at the origin of the problem.}. Prior to the work of \citet{wer96}, \citet{ber1993} had shown that the Balmer line problem can be solved in the hot DAO white dwarf Feige 55 ($T_{\rm eff}$ $\sim$ 60,300 K, log $g$ $\sim$ 7.25) if an abundance of Fe equal to 25 times its solar value with respect to H by number is used in the computations of the atmospheric structure (in the LTE approximation).
Of course, this supersolar value of the abundance of Fe was only used as a proxy for the overall metallic opacity in the atmosphere of Feige 55 and has nothing to do with the real abundance of that particular element. But the calculations of \citet{ber1993} certainly indicated that there is generally quite a bit of ``missing'' opacity in model atmospheres of hot stars and that this can have a significant influence on the modeling of the optical lines of H and He. A similar problem was also found and discussed by \citet{otoole06} and \citet{geier07} concerning the difficulty of fitting simultaneously the H~\textsc{i}, He~\textsc{i}, and He~\textsc{ii} lines in the optical spectra of sdOB subdwarfs ($T_{\rm eff}$\ between $\sim$ 30,000 K and $\sim$ 40,000 K). Their proposed solution was to arbitrarily increase the metal abundances in LTE models to 10 times their solar values. In this way, consistent and acceptable spectral fits could be obtained. Likewise, and more recently, \citet{gia10} concluded that using a boosted metallicity consisting of 10 times the solar values of C, N, and O in NLTE models of hot DAO white dwarfs could be a practical approach to the analysis of optical spectra of such stars. In this spirit, we decided to explore the effects of increasing the abundances of metallic elements in our model atmospheres. For this purpose, we built a coarse grid of dedicated models including eight effective temperatures between 22,000 and 90,000 K. The temperature of the models between 40,000 and 90,000 K varies by steps of 10,000 K, and they have log $g$ = 6.0 and log \nhe\ = $-$1.0, roughly representative of hot sdOs with normal helium content. The two coolest models, in order to be more representative of EHB stars, have the following parameters: $T_{\rm eff}$~= 22,000 K with log $g$ = 5.4, and $T_{\rm eff}$~= 30,000 K with log $g$ = 5.6, and they both have log \nhe\ = $-$2.0.
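The eight parameter sets of this coarse exploratory grid can be laid out explicitly. A hedged sketch (the values follow from the description above; the listing itself is purely illustrative):

```python
# Eight (Teff in kK, log g, log N(He)/N(H)) sets of the coarse grid:
# two EHB-like cool models plus six hot-sdO models from 40 to 90 kK.
cool_sets = [(22.0, 5.4, -2.0), (30.0, 5.6, -2.0)]
hot_sets = [(float(t), 6.0, -1.0) for t in range(40, 100, 10)]  # 40..90 kK
param_sets = cool_sets + hot_sets
print(len(param_sets))  # 8 sets of parameters
```
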
For each of these eight sets of parameters, we built metal-free model atmospheres and models including the line blanketing of C, N, O, and Fe at one, two, five, ten, and fifteen times their solar abundances. All of these models were computed in NLTE, and the synthetic spectra include only hydrogen and helium lines to avoid the presence of strong and unrealistic metallic lines. In order to inspect the metallicity effects, we present in the four panels of Fig. \ref{compline} the line profiles of the five lowest members of the Balmer series (H$\alpha$ to H$\epsilon$), of four He~\textsc{i} lines (4026, 4471, 4713, and 5876 \AA), and of three He~\textsc{ii} lines (4542, 4686, and 5412 \AA). Line profiles are displayed for five metallicities: metal-free, one, five, ten, and fifteen times solar. We omitted the two times solar case from the plot because the line profiles are already quite crowded. We find Fig. \ref{compline} particularly instructive. \begin{figure*}[p] \begin{center} \includegraphics[scale=0.50,angle=0]{Habg.eps} \includegraphics[scale=0.50,angle=0]{Hde5876.eps} \includegraphics[scale=0.50,angle=0]{HeI.eps} \includegraphics[scale=0.50,angle=0]{HeII.eps} \caption{ Comparison of the line profiles for our models with different metallicities: metal-free (black), one (red), five (green), ten (magenta), and fifteen (blue) times the solar abundances of C, N, O, and Fe. Each line profile is normalized and the vertical scale is adjusted for each line in order to allow the best view of the profiles. The synthetic spectra are convolved at a resolution of 1.0 \AA. } \label{compline} \end{center} \end{figure*} The most important result of Fig. \ref{compline} that we want to emphasize is the saturation effect that can clearly be observed in the plot: the line profiles no longer change with increasing metallicity beyond a certain level.
This is particularly evident for the 80,000 K model, of central interest here, for which practically no distinction can be made between the profiles computed with a background metallicity of 5$\times$ (green), 10$\times$ (magenta), or 15$\times$ (blue) the solar abundances of C, N, O, and Fe. This suggests to us a practical recipe to compute some sort of saturated metallicity in model atmospheres, namely multiplying the solar abundances of metals by, say, a factor of 10. This is nothing more than the suggestion already put forward by \citet{otoole06}, \citet{geier07}, and \citet{gia10}, but with the added justification of the saturation effect. Figure \ref{compline} further reveals that the saturation effect holds over the full range of parameters displayed in the plot. The largest differences between the line profiles are found for the 40,000 K model, and, yet, the differences between the 10$\times$ (magenta) profiles and the 15$\times$ (blue) profiles in that particular model remain tiny for all of the lines illustrated. As for the sdB domain, sampled by our two coolest models (with log $g$ and log $N$(He)/$N$(H) values different from those of the hotter models), the line profiles are clearly not very sensitive to the assumed background metallicity. This result is not new in an sdB star context and has been rediscussed recently (see, e.g., Figs. 2 and 4 of \citealt{lat14f}). \subsection{Spectral Analysis with Metal-Enhanced Models} In the light of the results obtained in the previous subsection, we decided to test this concept of enhanced metallicity. We thus computed a third full grid of model atmospheres dedicated to the spectral fitting of BD$+$28$\degr$4211. It is similar to our second grid, except that the ``minimal'' metallicity of the latter (defined by the UV abundances of C, N, O, Mg, Si, S, Fe, and Ni) is replaced by a ``saturated'' metallicity (defined by ten times the solar abundances of these 8 elements).
Technically speaking, it takes substantial amounts of time to build grids of NLTE models with nonzero metallicity, but since the metallicity has to be progressively ``turned on'' in our approach, the availability of the second grid helped us save considerable time in our passage from minimal to saturated opacity. Note also that in the computation of the synthetic spectra of this third grid (which is made with SYNSPEC), the metal abundances were reduced to the ones in BD$+$28$\degr$4211~to avoid unrealistic and strong metallic features in the optical spectra. In other words, the artificially enhanced metallicity was used only in the computation of the atmospheric structures (with TLUSTY). \begin{figure}[t] \resizebox{\hsize}{!}{\includegraphics{fig4.eps}} \caption{Temperature stratification and monochromatic optical depth $\tau_{\nu}$ = 2/3 as functions of depth, where {\it m} is the column density, for NLTE models defined by ${\it T}_{\rm eff}$~=~82,000~K, log $g$ = 6.4, and log {\it N}(He)/{\it N}(H) = $-$1.0. The temperature structure is shown for three model atmospheres having different compositions: with H and He only (black, dotted), with the metallic abundances of BD$+$28$\degr$4211~(red, dashed), and with ten times solar abundances (blue, solid). The $\tau_{\nu}$ = 2/3 curve is from the latter model and shows wavelength intervals corresponding to the Balmer line series. The wavelength sampling of this curve is about 0.3 \AA.} \label{strucbd} \end{figure} It is instructive to compare the temperature stratifications of atmosphere models having the same values of the effective temperature, surface gravity, and helium content, but obtained from the three different grids. As an example, Fig. \ref{strucbd} illustrates the effects of metals on the temperature structure of model atmospheres having fundamental parameters representing BD$+$28$\degr$4211: $T_{\rm eff}$~= 82,000 K, log $g$ = 6.4 and log \nhe = $-$1.0.
The first model is a metal-free one, showing the typical NLTE temperature inversion in the outer layers of the atmosphere (dotted line). Adding the metallic content of BD$+$28$\degr$4211\ causes a drastic cooling of the outer layers while the deeper ones are heated (dashed curve). We showed in Paper I that the cooling is essentially due to the C, N, and O elements, while both these elements and Fe heat the inner layers. When looking at the temperature stratification for a metal-enhanced model (solid curve), the most striking effect is the significant warming, again of the inner layers, that is prominent in the line-forming region (between a depth of $-$1.0 and $-$2.0). This line-forming region can be localized with the help of the $\tau_{\nu}$ = 2/3 curve, which indicates the depth (in column density) at which about half the photons leave the atmosphere at a given wavelength. \begin{figure*} \includegraphics[scale=0.39,angle=270]{mc9.mbdTensol.eps} \includegraphics[scale=0.39,angle=270]{mc1.3.mbdTensol.eps} \includegraphics[scale=0.39,angle=270]{mc1.mbdTensol.eps} \includegraphics[scale=0.39,angle=270]{mongfit.Tensol.eps} \caption{Similar to Fig. 1, but using the NLTE model grid that includes the following elements: C, N, O, Mg, Si, S, Fe, and Ni, with solar abundances multiplied by a factor of 10.} \label{fitgten} \end{figure*} In the last step of the procedure, we fitted our four reference optical spectra using the third grid of models. The resulting fits can be seen in Fig. \ref{fitgten}. These fits are rather remarkable in the sense that they reproduce very well {\sl all} of the available observed line profiles, including details of line core emission in the high resolution UVES spectrum. Moreover, despite using spectra of different sensitivity, spectral coverage, and resolution, the inferred atmospheric parameters fall, in all four cases, within the ranges of values derived from the detailed UV analysis presented in Paper I.
To our knowledge, this is the first time that realistic estimates of the atmospheric parameters of a hot sdO star, especially the effective temperature, have been obtained through the application of a simultaneous fit of all available H and He lines in a given optical spectrum, a method initially put forward by \citet{ber92} in the white dwarf context. With emphasis, we point out that this was possible only under the assumption of an increased metallicity. The resulting parameters of our various fits are summarized in the lower third of Table \ref{res_fitbd}. The mean values are $T_{\rm eff}$ = 81,342 K $\pm$ 1219 K, log $g$ = 6.519 $\pm$ 0.048, and log $N$(He)/$N$(H) = $-$1.185 $\pm$ 0.121. Of particular interest, this new estimate of the effective temperature falls well within the well-constrained range of $T_{\rm eff}$ = 82,000 $\pm$ 5000 K derived from the UV spectrum in Paper I. As for the surface gravity of BD$+$28$\degr$4211, a value of log $g$ = 6.5 is also compatible with the conclusions of Paper I, which led to a formal estimate of log $g$ = 6.2$_{-0.1}^{+0.3}$, although the new spectroscopic value is only barely formally acceptable. In this context, let us remind the reader that the UV metallic lines (mainly iron ones) did not help in constraining very tightly the surface gravity, but a better match was nevertheless obtained with log $g$ = 6.2, hence the adopted value. On the other hand, when we compared the spectroscopic distance of BD$+$28$\degr$4211~for various combinations of masses and surface gravities with the one given by the Hipparcos parallax of the star, a log $g$ $\geq$ 6.4 was needed, unless the mass of the star is significantly lower than the canonical value of $\sim$0.5~$M_{\rm \odot}$\ for a hot subdwarf. This is why the upward uncertainty we adopted on log $g$ allows for a surface gravity of at most 6.5.
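Because the stellar radius scales as $R = \sqrt{GM/g}$, the spectroscopic distance at fixed $T_{\rm eff}$ and observed flux scales as $d \propto \sqrt{M/g}$, which quantifies the trade-off between mass and surface gravity invoked above. A hedged sketch of this standard scaling (illustrative numbers only, not the actual Paper I distance computation, which uses model fluxes):

```python
import math

def distance_scale(mass_ratio, delta_logg):
    """Factor by which the spectroscopic distance changes when the assumed
    mass is multiplied by mass_ratio and log g is raised by delta_logg,
    using d ~ sqrt(M / g) at fixed Teff and observed flux."""
    return math.sqrt(mass_ratio * 10.0 ** (-delta_logg))

# Raising log g from 6.2 to 6.5 at fixed mass shortens d by ~30%...
print(distance_scale(1.0, 0.3))  # ~0.71
# ...roughly the same effect as halving the assumed mass at fixed log g.
print(distance_scale(0.5, 0.0))  # ~0.71
```
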
This upper value gave good spectroscopic distances for masses between 0.4 and 0.5~$M_{\rm \odot}$\ and still did not conflict with the UV metal lines. We also checked the old Hipparcos parallax value for the star \citep{perry97}; the distance derived with this older value is between 88 and 126 pc. This is a little closer to the spectroscopic one but still does not allow for a good match of the distances with models having log $g$ of 6.2, unless the mass is around 0.3~$M_{\rm \odot}$. In the light of the present analysis, it is most likely that the true value of the surface gravity of BD$+$28$\degr$4211\ is closer to 6.5 dex than to the value of 6.2 dex suggested in Paper I\footnote{Assuming $T_{\rm eff}$ = 82,000 K, log $g$ = 6.5, $M$ = 0.5 $M_{\rm \odot}$, and the reddening discussed in Paper I, we find a spectroscopic distance to BD$+$28$\degr$4211\ of $d$ = 112 $\pm$ 40 pc, which is indeed compatible with either the old, 88$-$126 pc, or new, 81$-$106 pc, Hipparcos distance.}. Finally, we note that a better estimate of the helium content is likely obtained if the BOK1.3 spectrum is excluded from the averaging process, since the latter bears a very weak signature of the He abundance. One then gets log $N$(He)/$N$(H) = $-$1.120 $\pm$ 0.049. \begin{figure*} \centering \includegraphics[width=15cm]{fig6Rev.eps} \caption{Comparison between synthetic spectra and the He~\textsc{ii} 4686 \AA\ line from HIRES (1997-08-12). {\it Left Panel}: Synthetic spectra from models at 82,000 K, log $g$ = 6.4, and log \nhe~= $-$1.0. In red, the spectrum comes from a model with the abundances of BD$+$28$\degr$4211\ determined in Paper I. In blue, the spectrum is from a model with ten times the solar metallicity. 
{\it Right Panel}: Spectra from the ten times solar metallicity grid, having various surface gravities.} \label{he4686} \end{figure*} \begin{figure*} \centering \includegraphics[width=15cm]{fig7Rev.eps} \caption{Comparison between synthetic spectra and the H$\beta$ line from HIRES (2011-10-04). {\it Left Panel}: For models having different effective temperatures. {\it Right Panel}: For models having different surface gravities.} \label{hbeta} \end{figure*} \section{Additional Verifications} \subsection{HIRES Spectra} It is interesting to compare selected line profiles for fiducial models of BD$+$28$\degr$4211\ with those gathered from the HIRES archives, as discussed previously in Sect. 2. This is particularly true for the lines showing strong core emission at high resolution, namely He~\textsc{ii} 4686 \AA, H$\beta$, and H$\alpha$. We obtained a very good global fit of the UVES data, including those lines, as shown in the lower right panel of Fig. \ref{fitgten}. However, the HIRES data allow further detailed comparisons because of their higher S/N (up to $\sim$ 300), even though their resolution is slightly degraded ($\sim$ 0.1 \AA) compared to the UVES data. As discussed above, and contrary to the UVES data, the HIRES spectra are not suitable for a formal multiline analysis, but we can still use them to further test our model atmospheres. \begin{figure*}[t] \includegraphics[scale=0.63,angle=0]{fig8aRev.eps} \includegraphics[scale=0.63,angle=0]{fig8bRev.eps} \caption{{\it Left Panel}: Comparison between synthetic spectra having different log $g$ values and the H$\alpha$ line from HIRES (2011-10-04). {\it Right Panel}: Similar, but for the H$\gamma$ line.} \label{figalpha} \end{figure*} Referring to the lower right panel of Fig. 
\ref{compline}, the He~\textsc{ii} $\lambda$4686 line appears not much affected by a variation of the metallic content of the model atmospheres (from 1$\times$ to 15$\times$ the solar abundances), at least at the 1.0 \AA\ resolution of these synthetic spectra. A closer look, with a HIRES spectrum boasting a tenfold increase in resolution, shows otherwise, as there is a definite improvement in the way $\lambda$4686 is reproduced when the metallicity is increased. This is particularly well illustrated in the left panel of Fig. \ref{he4686}. Indeed, the match between a fiducial model spectrum ($T_{\rm eff}$~= 82,000 K, log $g$ = 6.4, and log \nhe~= $-$1.0, metal abundances ten times solar) and the HIRES observation of this helium line is very good. In addition, we mentioned above that at such a high temperature, the strongest Balmer and helium lines are no longer very sensitive to changes in log $g$ or $T_{\rm eff}$. We explicitly illustrate this point in the right panel of Fig. \ref{he4686}, where model spectra having different values of log $g$ between 6.2 and 6.6 are depicted. One can see that this particular line is rather insensitive to such a change, although the comparison favors the higher gravities. We examined H$\beta$ in a similar way in Fig. \ref{hbeta}, where it is compared with models from the enhanced metallicity grid having different values of $T_{\rm eff}$\ and log $g$. The left panel shows that H$\beta$ is rather insensitive to changes of effective temperature, while a change in the surface gravity has a small effect on the depth of the line (right panel). Note that in this comparison, as well as in the following ones in this subsection (unless indicated otherwise), the fixed parameters are set to $T_{\rm eff}$~= 82,000 K, log $g$ = 6.4, and log \nhe~= $-$1.0. 
\begin{figure} \resizebox{\hsize}{!}{\includegraphics{fig9.eps}} \caption{Comparisons between synthetic spectra and the He~\textsc{i} 5875 \AA\ region from HIRES (1997-08-12) for models having different effective temperatures. Top -- The model spectra are taken from our metal-enriched grid. Bottom -- The model spectra come from a metal-free model grid and have log $g$ = 6.2 dex, like those used by \citet{nap93}.} \label{he5876} \end{figure} The effects of varying the assumed value of log $g$ on the line profiles of H$\alpha$ and H$\gamma$ are illustrated in Fig. \ref{figalpha}. As in the cases of He~\textsc{ii} $\lambda$4686 and H$\beta$ just discussed, the effects are rather small, but the lower value of log $g$ = 6.2 is disfavored. Note that the sharp absorption lines in the H$\alpha$ region are telluric lines, so they do not originate from the star. Our comparison shows that the emission peak of our fiducial models with enhanced metallicity is too high compared to the data, but the depth and width of the line are well reproduced. This is also the case for the UVES fit illustrated in the lower right panel of Fig. \ref{fitgten}, but it is more difficult to see at the scale of that other plot. We also compared the line with models from our second grid, having the abundances of BD$+$28$\degr$4211; the emission peak is also higher than the observed one, but the model lines are neither deep nor wide enough. The comparison between H$\gamma$ and some of our models is shown in the right panel of Fig. \ref{figalpha}. This line is also rather well reproduced with our models, but this time a surface gravity of 6.4 dex offers a markedly better match. Noteworthy here is the presence of a tiny emission core in the observed line; such an emission is not often seen in Balmer lines other than H$\alpha$ and H$\beta$. A hint of a very weak emission feature can also be detected in our UVES spectrum. The high resolution is essential to see this type of feature. 
Emission is barely seen in our lowest gravity model, and it is not strong enough to reproduce the one observed. \begin{figure*}[th] \centering \includegraphics[width=15cm]{fig10.eps} \caption{Comparisons between synthetic spectra and the He~\textsc{ii} 1640 \AA\ line from the STIS spectrum. {\it Left Panel}: With models having metallic abundances corresponding to those of BD$+$28$\degr$4211, $T_{\rm eff}$~=~82,000 K, log \nhe~= $-$1.0, and various log $g$. {\it Right Panel}: With models having log $g$ = 6.4 but different metallic contents.} \label{he1640} \end{figure*} There is one last feature that was worth investigating with the HIRES spectra, and that is the He~\textsc{i} 5875 \AA\ line. \citet{nap93} used this line to secure the effective temperature he deduced for BD$+$28$\degr$4211\ using the H$\epsilon$ line. He compared model spectra having different temperatures with his observations and found a best match for an effective temperature between 80 and 85 kK (let us remember that metal-free NLTE models were used at the time). He had at his disposal a 0.4 \AA\ resolution spectrum that appears noisier than the HIRES ones. The feature he associated with He~\textsc{i} $\lambda$5875 is barely visible in his spectrum but should be distinguishable in the HIRES ones. However, after a careful search of three different observations, we did not find any trace of this line. If there is indeed a line, then it is weaker than the noise level. Figure \ref{he5876} shows the region of interest for the observations of the 1997-08-12 night. The top comparison is with three models having different effective temperatures, taken from our metal-enhanced grid. With $T_{\rm eff}$\ $\geq$ 82,000 K, the line is predicted to be within the noise level. 
Since the helium line is not visible in the observations, we used the C~\textsc{iv} emission line at 5811 \AA\ (identified in \citealt{her99}) present in the same order to accurately fix the wavelength scale in that spectral region. The bottom comparison is made with synthetic spectra analogous to those used by Napiwotzki, i.e., metal-free and with a surface gravity of 6.2 dex; the predicted lines are far too strong. The 23 spectra retrieved from the KOA cover 14 years, from August 1997 to October 2011, which is an interesting baseline to look for any long-term radial velocity (RV) variations. The RVs were measured using between 8 and 20 metallic lines, depending on the wavelength coverage and S/N (see \citet{her99} for a list of BD$+$28$\degr$4211\ metallic lines). These individual RVs were then averaged for every observation. The values obtained did not show any significant variations over these 14 years. The mean value of the RV for the 23 observations is 22.1 km s$^{-1}$ with a standard deviation of $\sigma$ = 2.3 km s$^{-1}$, in agreement with the values reported by \citet{her99}. It is thus very unlikely that BD$+$28$\degr$4211\ is part of a binary system, unless the inclination is close to 0$\degr$ or the orbital period is much longer than our baseline; in the latter case it is, evolutionarily speaking, essentially a single star. \subsection{UV Helium Lines} The previous comparisons between high resolution optical lines and our model spectra with ten times solar abundances clearly demonstrated that our metal-enhanced models, overall, match very well the Balmer and helium lines seen in the HIRES spectrum of BD$+$28$\degr$4211. But what about the helium lines in the UV range: are they significantly affected by an enhanced metallicity? 
This point is worth investigating because, referring to Figure 5 of Paper I, in which an oxygen line is located just next to the He~\textsc{ii} 1640 \AA\ line, one can notice that the helium line in question seems quite well reproduced by the model used then, while we just saw that the optical He~\textsc{ii} lines cannot be reproduced with such models. Figure \ref{he1640} shows comparisons of this line, first with models having various log $g$ and the abundances of BD$+$28$\degr$4211, where one can see that the changes thus induced are rather small. The wings are well reproduced by the models, but the central absorption is wider in the observations. This is a bit intriguing, and we verified that no change of parameters ($T_{\rm eff}$, \nhe) much affects the width of the core. The rotational velocity of the star is known to be quite small \citep{her99}, so this option can be disregarded. As for a metallicity effect, the right panel of Fig. \ref{he1640} shows that there is only a slight difference in the line profile between a model having the abundances of BD$+$28$\degr$4211\ and one having ten times the solar metallicity, and this difference is not in the central core. We do not know why the central core is not reproduced correctly; it might have something to do with the theoretical line profile, since the core width remains unaffected by changes in the parameters of the model atmospheres. We did not find in the literature any mention of this problem, but confirming it would at least require a star with fundamental parameters relatively close to those of BD$+$28$\degr$4211\ that displays a similar line profile with a sharp central core absorption. Alternatively, microturbulence (not included in our synthetic spectra) might be at work here. We finally also examined how the He~\textsc{ii} 1085 \AA\ line featured in the FUSE spectrum of BD$+$28$\degr$4211\ is reproduced by our model spectra. 
We compared it with models having the metallic abundances determined in Paper I and various surface gravities in Fig. \ref{he1085}. Again, the variation of log $g$ does not produce important differences in the line profile, and the models having realistic abundances reproduce this helium line well. This is also true for metal-enhanced models. \begin{figure} \resizebox{\hsize}{!}{\includegraphics{fig11.eps}} \caption{ Comparisons between synthetic spectra and the He~\textsc{ii} 1085 \AA\ line from the FUSE spectrum. The models have the metallic content of BD$+$28$\degr$4211\ and different values of log $g$.} \label{he1085} \end{figure} \section{Discussion} We began our comprehensive analysis of BD$+$28$\degr$4211\ with the ultimate goal of retrieving the atmospheric parameters of the star by carrying out a standard fitting procedure of its optical spectrum alone, as is done for the cooler hot subdwarfs. The challenge was to somehow ``overcome'' the Balmer line problem that prevents modeling the observed Balmer series with a unique set of fundamental parameters. As discussed in Sect. 1, this problem is a major one in hot sdO stars and makes it very tricky to determine the fundamental parameters of such stars, especially their effective temperatures. A careful inclusion of metal line-blanketing in model atmospheres seemed a promising way to solve this issue, or at least to diminish the discrepancies between observed and theoretical lines, as was shown by \citet{wer96}. The first part of this analysis, presented in Paper I, focussed on the UV spectrum, which is a standard way of studying very hot stars and usually leads to sound results \citep{rauch07,zie12,rauch13}. By self-consistently fitting the numerous metallic lines in the UV spectra from FUSE and the HST spectrograph STIS, we were able to draw up a realistic chemical composition for the atmosphere of BD$+$28$\degr$4211. 
We also confirmed that the previously estimated fundamental parameters ($T_{\rm eff}$~$\sim$ 82,000 K, log $g$ $\sim$ 6.2, solar $N$(He)/$N$(H)) are in good agreement with the observed UV spectrum. The following step in our study, the subject of this article, was to use the abundances thus determined to build a grid of NLTE line-blanketed model atmospheres specifically suited to BD$+$28$\degr$4211\ and use it to perform a spectroscopic fit of the star's optical spectrum. However, even though significant improvements were obtained in the modeling of the observed optical lines as compared to the case where metal line-blanketing was neglected, we realized, to our disappointment, that these custom-made models were still falling short of the desired results. Indeed, as seen in Fig. \ref{fitg7f} for example, the best fits obtained with our four spectra give effective temperatures too low by about 10,000 K and, in addition, the observed spectral lines are not well reproduced in their details. However, note that with much lower S/N data than our superlative spectra (BOK8.7, BOK1.3, MMT), it would have been very difficult, if not downright impossible, to detect the small but quite significant differences between the observed and computed line profiles in the panels of that figure. In the past, some similar ``fitting'' problems were solved by artificially increasing the metallicity of LTE model atmospheres. This method proved useful in the sdOB transition range ($T_{\rm eff}$\ $\sim$ 35,000 K), where LTE metal-rich models (with ten times the solar metallicity) yield improved matches between the observed lines and the best-fitting models, without much changing the fundamental parameters thus derived \citep{otoole06,geier07}. 
In that temperature range, the mismatch between observed and model spectra is not so much in the Balmer lines themselves but rather in the helium ones, for which the lines originating from both ionization stages cannot be correctly reproduced without this artefact. The Balmer line problem is also seen in hot white dwarfs, and a few of those analyzed in \citet{gia10} were much better reproduced with NLTE model atmospheres including C, N, O with 10 times their solar abundances. It is thought that this approach compensates, at least partially, for some unknown opacity sources, to the point where the atmospheric structure is affected in the ``correct'' way. With this information in mind, we next investigated the effects of varying the background metallicity on the optical lines of interest. In this context, Fig. \ref{compline} is an important result of our present work. The plot indeed shows that the line profiles saturate beyond a certain value of the assumed metallicity. In particular, there are practically no differences in the line profiles obtained in a wide range of $T_{\rm eff}$\ for a metallicity defined by 5$\times$, 10$\times$, or 15$\times$ the solar abundances of C, N, O, and Fe. This suggests that the concept of saturated metallicity could be used as an interim cure for the missing opacity problem in hot subdwarf stars. In practical terms, full grids of model atmospheres should be computed with the help of this artefact in order to analyze various samples of optical spectra, the latter ideally being characterized by a high sensitivity and/or high resolution. As for BD$+$28$\degr$4211, we thus built a dedicated grid of NLTE line-blanketed model atmospheres including the 8 most abundant metals found in Paper I, but having ten times their solar abundances to make sure the saturated regime was reached. This metal-enhanced grid ultimately allowed us to derive satisfactory fundamental parameters for BD$+$28$\degr$4211\ on the basis of our optical spectra. 
Our best estimates, based on the straight average of the results derived from four optical spectra of different sensitivity, spectral range, and resolution, give $T_{\rm eff}$ = 81,342 K $\pm$ 1219 K, log $g$ = 6.519 $\pm$ 0.048, and log $N$(He)/$N$(H) = $-$1.120 $\pm$ 0.049, the uncertainty being the standard deviation of the results. These are perfectly compatible with the results of Paper I ($T_{\rm eff}$ = 82,000 $\pm$ 5,000 K, log $g$ = 6.2$_{-0.1}^{+0.3}$, log $N$(He)/$N$(H) = $-$1.0 [assumed]), which are based on the standard UV approach for very hot stars. The higher suggested surface gravity now solves the apparent conflict discussed in Paper I between the spectroscopic and Hipparcos distances. Specific tests indicated that the most important spectral features ``pulling'' towards a higher surface gravity are H$\delta$, H$\epsilon$, and He~\textsc{ii} at 4542 \AA. As an a posteriori test, we exploited some of the HIRES data of BD$+$28$\degr$4211\ available in the KOA. With their high resolution and good S/N, they provide an incomparable insight into the detailed profiles of several Balmer and helium lines of the star. We were able to overcome the drawback that comes with these data, the wavy continuum, and ended up with observed lines that could be compared with our models. In that way, we tested whether our optimal model atmospheres, with their artificially enhanced metal abundances, could reproduce the detailed observations of HIRES. Our comparisons showed that our metal-enriched models indeed reproduce very well the following lines: He~\textsc{ii} $\lambda$4686, H$\alpha$, H$\beta$, and H$\gamma$. In detail, there is a small discrepancy between our models and the emission peak observed in H$\alpha$, which is predicted to be too high. A tiny emission bump is also discernible in the core of H$\gamma$ but was not fully reproduced by our models. 
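The ``straight average'' and its quoted uncertainty are simply the unweighted sample mean and sample standard deviation over the four per-spectrum fits. For concreteness, a sketch; the four values per parameter below are placeholders standing in for the Table \ref{res_fitbd} entries (BOK8.7, BOK1.3, MMT, UVES), not the published numbers.

```python
import statistics

# Sketch of the straight averaging over the four optical fits.
# All values below are placeholders, not the actual fit results.
fits = {
    "Teff":   [80100.0, 82000.0, 81500.0, 81800.0],  # K
    "logg":   [6.47, 6.55, 6.50, 6.56],
    "log_He": [-1.09, -1.31, -1.12, -1.16],          # log N(He)/N(H)
}

for name, values in fits.items():
    mean = statistics.mean(values)
    sigma = statistics.stdev(values)  # sample standard deviation, quoted as the uncertainty
    print(f"{name}: {mean:.3f} +/- {sigma:.3f}")
```

Excluding a spectrum with a weak He signature (as done in the text for BOK1.3) amounts to dropping its entry from the `log_He` list before averaging.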
The radial velocities measured for a sample of HIRES spectra covering a 14-year time period do not show any significant variation within 5 km s$^{-1}$, thus indicating that the star is most likely a single one. What we learned from our analysis is that despite the fact that the UV spectrum can be very well reproduced by model atmospheres including the metal abundances derived for BD$+$28$\degr$4211, such models fail to reproduce the optical Balmer and helium lines. In order to achieve adequate results in the optical domain, we had to include in our models an artificially enhanced metallicity. The good side to this is that we were then able to derive appropriate fundamental parameters for BD$+$28$\degr$4211\ based only on its optical spectrum. The downside is that to get these results we had to set our metal abundances to unrealistic values. These large abundances somehow affect the atmospheric structure of our models in a way that makes the optical lines correctly reproduced. It is likely that the additional blanketing brought about by the enhanced abundances of the species included in our models accounts for some missing opacities present in the star but not in our models. Our use of what we called a saturated metallicity should only be seen as a proxy for some important missing physics. It is possible that these missing opacities come from atomic species not included in our models (these species should not be dominant in the star numberwise, but their opacities might be important), transition lines not accounted for, or improper broadening of some metal lines. Likewise, it is possible that incorrect opacity sampling might be at work here. 
With the presence of spectral lines originating from trans-iron elements, such as Ge, Ga, As, Sn, and Pb, in the spectra of hot subdwarf stars as well as in those of a few hot white dwarfs (\citealt{otoole04,nas11,wer12,rau15} and references therein), it is possible that the opacity of such elements constitutes part of the missing opacity. \citet{rei15} also showed that Ne in solar abundances can lead to an important change in the temperature structure of the atmosphere in a 100 kK model. However, it seems a little odd that such missing opacities would significantly affect the Balmer and helium optical lines, while the UV ones can be accounted for very well without the induced change in the atmospheric structure. In any case, this knowledge should be very useful for obtaining more accurate fundamental parameters for hot sdO stars (and possibly also for very hot white dwarfs) when observations in the optical range are the only ones available. In the light of our results and of previous investigations, we thus propose, along with earlier researchers, that the atmospheric structures of hot stars be computed with artificially enriched metal abundances as an interim solution for estimating their atmospheric parameters when only optical spectroscopy is available. We propose our concept of saturated metallicity for the whole domain of hot subdwarf stars. In particular, the procedure should be applied to the case of the newly discovered pulsating stars in $\omega$ Cen \citep{ran11}, which are among the rare sdO stars known to pulsate. Their temperature determination via the fitting of their Balmer and helium lines with a grid of NLTE line-blanketed model spectra with normal metallic abundances yields values around 50,000 K. However, preliminary non-adiabatic exploration of the sdO star region did not show pulsational instabilities around this particular effective temperature \citep{ran12}, but only at higher values. 
A legitimate question that might be raised in this case concerns the validity of the spectroscopically derived temperature, which must certainly be underestimated according to the present findings. We hope that the upcoming UV observations of two $\omega$ Cen pulsators will settle the issue and allow us to test our approach with these stars, but we have to keep in mind that their optical spectra are of limited quality given the relative faintness of the stars. The sdO star Feige 34 would also be a good candidate to test our approach; like BD$+$28$\degr$4211, it is a spectroscopic standard for which good observational data (UV and optical) are available. Its optical spectrum suggests that it is cooler than BD$+$28$\degr$4211, but still quite hot ($T_{\rm eff}$ $\sim$ 70,000 K). Finally, to summarize our main results: \begin{itemize} \item We fitted high-quality spectra of BD$+$28$\degr$4211\ using NLTE model atmospheres including the metallicity determined in Paper I via our UV analysis. \item The best fits obtained with these models were improved when compared to fits made with models including only H and He, but they indicate a temperature too low by 10,000 K and do not perfectly reproduce the spectral lines. \item We investigated the effect of increasing the metallicity of our model atmospheres (up to 15$\times$ solar) on spectral lines for a wide range of temperatures. We observed, for most of them, a saturation effect at 10$\times$ solar metallicity; beyond that value the line profiles no longer change. \item We adopted this 10$\times$ solar metallicity to build a new metal-enhanced grid. The fitting procedure using this grid led to very good fits and accurate atmospheric parameters. \item We then compared our new ``best models'' with high-resolution, high S/N observed spectra culled from archived HIRES observations and found very good agreement. 
\item We thus suggest the use of metal-enriched model atmospheres (10$\times$ solar) for determining the fundamental parameters of hot stars when only optical spectroscopy is available. This should lead to more realistic parameters than using models with normal metallic content. \end{itemize} \acknowledgements{This work was supported in part by the NSERC Canada through a fellowship awarded to M.L. and through a research grant awarded to G.F. The latter also acknowledges the contribution of the Canada Research Chair Program. M.L. also acknowledges funding by the Deutsches Zentrum f\"ur Luft- und Raumfahrt (grant 50 OR 1315). We thank L. Fr\"ohling for sharing her RV measurements and P. N\'emeth for interesting discussions. This work has made use of the Keck Observatory Archive (KOA), which is operated by the W. M. Keck Observatory and the NASA Exoplanet Science Institute (NExScI), under contract with the National Aeronautics and Space Administration. This paper also used data obtained from the ESO Science Archive Facility. We are also grateful to the PIs of the HIRES and UVES observations we used. } \bibliographystyle{aa}
\subsection{Spectral gaps via a fractal uncertainty principle} \label{s:spfup} We first reduce the estimate~\eqref{e:essential-gap2} to a fractal uncertainty principle. To state it, let $\Lambda_\Gamma\subset\mathbb S^{n-1}$ be the limit set of the group $\Gamma$, $M=\Gamma\backslash\mathbb H^n$ (see~\eqref{e:Lambda-Gamma}) and denote by $\indic_{\Lambda_\Gamma(\alpha)}$ the indicator function of the set \begin{equation} \label{e:limit-nbhd} \Lambda_\Gamma(\alpha)=\{y\in \mathbb S^{n-1}\mid d(y,\Lambda_\Gamma)\leq\alpha\} \end{equation} where $d(y,y')=|y-y'|$ denotes the Euclidean distance function on $\mathbb S^{n-1}\subset\mathbb R^n$. Note that the Minkowski dimension of $\Lambda_\Gamma$ is equal to $\delta$, therefore (see~\eqref{e:AD-estimate-Lebesgue}) \begin{equation} \label{e:LG-upper-1} \alpha^{n-1-\delta}/C\,\leq\,\mu_L(\Lambda_\Gamma(\alpha))\,\leq\, C \alpha^{n-1-\delta},\quad \alpha\in (0,1) \end{equation} where $\mu_L$ denotes the Lebesgue measure on $\mathbb S^{n-1}$. Define the operator $\mathcal B_\chi=\mathcal B_\chi(h):L^2(\mathbb S^{n-1})\to L^2(\mathbb S^{n-1})$ by \begin{equation} \label{e:B-chi} \mathcal B_\chi v(y)=(2\pi h)^{1-n\over 2}\int_{\mathbb S^{n-1}} |y-y'|^{2i/h} \chi(y,y') v(y')\,dy' \end{equation} where $dy'$ is the standard volume form on $\mathbb S^{n-1}$ and $\chi\in C_0^\infty(\mathbb S^{n-1}_\Delta)$, where \begin{equation} \label{e:s-diag} \mathbb S^{n-1}_\Delta=\{(y,y')\in\mathbb S^{n-1}\times\mathbb S^{n-1}\mid y\neq y'\}. 
\end{equation} \begin{defi} \label{d:fup} We say that $\Lambda_\Gamma$ satisfies the \textbf{fractal uncertainty principle} with exponent $\beta>0$, if for each $\varepsilon>0$ there exists $\rho\in (0,1)$ such that \begin{equation} \label{e:fup-standard} \|\indic_{\Lambda_\Gamma(C_1h^\rho)}\mathcal B_\chi(h) \indic_{\Lambda_\Gamma(C_1h^\rho)}\|_{L^2(\mathbb S^{n-1})\to L^2(\mathbb S^{n-1})} \leq C h^{\beta-\varepsilon},\quad h\in (0,1) \end{equation} for every $h$-independent constant $C_1$ and function $\chi\in C_0^\infty(\mathbb S^{n-1}_\Delta)$, and some $C$ depending on $C_1,\chi$. \end{defi} \begin{figure} \includegraphics{hgap-6.pdf} \caption{The horizontal leaves~\eqref{e:hor-lag}, in red, and the vertical leaves~\eqref{e:ver-lag}, in blue, for $n=2$. The horizontal variable is $y\in\mathbb S^1$ and the vertical variable is $\eta$. For the fractal uncertainty principle, the width of the distorted rectangles is slightly larger than $h$.} \label{f:foliations} \end{figure} \noindent\textbf{Remark}. The fractal uncertainty principle always holds with $\beta=\max(0,{n-1\over 2}-\delta)$, see~\eqref{e:tb-1} and~\eqref{e:tb-2}. On the other hand, by~\eqref{e:JN} the maximal $\beta$ for which~\eqref{e:fup-standard} can be true is $\beta={n-1\over 2}-{\delta\over 2}$, which (in dimension 2) is exactly the value of the essential spectral gap conjectured by Jakobson--Naud~\cite{Jakobson-Naud2}. To explain how the estimate~\eqref{e:fup-standard} represents an uncertainty principle associated to the set $\Lambda_\Gamma$, we consider the extremal case $\rho=1$, put $C_1=1$, and cover $\Lambda_\Gamma(h)$ by a collection of balls of radius $h$ centered at some points $y_1,\dots,y_N\in\Lambda_\Gamma$, where $N\sim h^{-\delta}$ by~\eqref{e:LG-upper-1}. 
Then for each $v\in L^2(\mathbb S^{n-1})$, the function $\mathcal B_\chi(h)\indic_{\Lambda_\Gamma(h)}v$ microlocally concentrates (see~\eqref{e:oppa} below) in an $h$-neighborhood of the union of `horizontal' Lagrangian leaves \begin{equation} \label{e:hor-lag} \bigcup_{j=1}^N \big\{\big(y,\partial_y\log (|y-y_j|^2)\big)\,\big|\, (y,y_j)\in\supp\chi\big\}\ \subset\ T^*\mathbb S^{n-1}, \end{equation} while the operator $\indic_{\Lambda_\Gamma(h)}$ microlocalizes to an $h$-neighborhood of the union of `vertical' Lagrangian leaves \begin{equation} \label{e:ver-lag} \bigcup_{j=1}^N \{(y_j,\eta)\mid \eta\in T^*_{y_j}\mathbb S^{n-1}\}\ \subset\ T^*\mathbb S^{n-1}. \end{equation} The estimate~\eqref{e:fup-standard} with $\beta>0$ then says that no function can be perfectly localized to $h$-neighborhoods of both~\eqref{e:hor-lag} and~\eqref{e:ver-lag}~-- see Figure~\ref{f:foliations}. Note that $h$-neighborhoods here cannot be replaced by, say, $h^{1/2}$-neighborhoods, since Gaussians provide examples of functions that concentrate $h^{1/2}$ close to any fixed leaf of~\eqref{e:hor-lag} and to any fixed leaf of~\eqref{e:ver-lag}. A related statement in the context of normally hyperbolic trapping was proved by Nonnenmacher--Zworski~\cite[Lemma~5.12]{NoZwInv}. If $\Lambda_\Gamma$ satisfies the fractal uncertainty principle, then an essential spectral gap is given by the following \begin{theo} \label{t:fup-reduction} Assume that $\Lambda_\Gamma$ satisfies the fractal uncertainty principle with exponent $\beta>0$. Then~\eqref{e:essential-gap2} holds; in particular, $R(\lambda)$ has finitely many poles in $\{\Im\lambda>-\beta+\varepsilon\}$ for each $\varepsilon>0$. \end{theo} We outline the proof of the resonance free region of Theorem~\ref{t:fup-reduction} (the resolvent bound follows directly from the argument). 
It suffices to show that for $\Re\lambda\gg 1$ and $\Im\lambda\geq -\beta+\varepsilon$, there are no nontrivial \emph{resonant states}, that is solutions to the equation \begin{equation} \label{e:laggie} \Big(-\Delta-{(n-1)^2\over 4}-\lambda^2\Big)u=0 \end{equation} which satisfy certain \emph{outgoing} conditions asymptotically at the infinite ends of $M$. Put $h:=(\Re\lambda)^{-1}$ and assume that $u$ is $L^2$-normalized on a sufficiently large fixed compact subset of $M$. We study concentration of $u$ in the \emph{phase space} $T^*M$ using semiclassical quantization \begin{equation} \label{e:oppa} a\in C^\infty(T^*M)\ \mapsto\ \Op_h(a):C^\infty(M)\to C^\infty(M) \end{equation} where $a$ satisfies certain growth conditions~-- see~\S\ref{s:semiclassical}. Let $\Gamma_+\subset T^*M\setminus 0$ be the \emph{outgoing tail}, consisting of geodesics which are trapped backwards in time; define also the \emph{incoming tail} $\Gamma_-\subset T^*M\setminus 0$ (see~\eqref{e:GpmDef}). The work of Vasy~\cite{vasy1,vasy2} near the infinite ends together with propagation of semiclassical singularities shows that $u$ is microlocalized on $\Gamma_+$ (see~\cite{BonyMichel,NoZwActa} for related results in Euclidean scattering). More precisely, for an $h$-independent symbol $a_+$, \begin{equation} \label{e:g+1} \supp (1-a_+)\cap\Gamma_+=\emptyset\quad \Longrightarrow\quad (1-\Op_h(a_+))u=\mathcal O(h^\infty)_{C^\infty(M)}. \end{equation} Moreover, $u$ has positive mass near $\Gamma_-$; more precisely, for $h$-independent $a_-$ \begin{equation} \label{e:g+2} \supp(1-a_-)\cap \Gamma_-=\emptyset\quad \Longrightarrow\quad \|\Op_h(a_-) u\|_{L^2}\geq C^{-1}>0. \end{equation} (The statement~\eqref{e:g+2} is not quite correct since $\Gamma_-$ extends to the infinite ends of $M$ and thus $a_-$ cannot be compactly supported; however, we may argue in a fixed neighborhood of the trapped set $K=\Gamma_+\cap \Gamma_-$. See Lemma~\ref{l:outgoing} for precise statements.) 
The main idea of the proof is to replace $h$-independent symbols in~\eqref{e:g+1} and~\eqref{e:g+2} with symbols that concentrate $h^\rho$ close to $\Gamma_\pm$: \begin{align} \label{e:g++1} d(\supp(1-a_+),\Gamma_+)>h^\rho\quad &\Longrightarrow\quad (1-\Op_h(a_+))u=\mathcal O(h^\infty)_{C^\infty(M)},\\ \label{e:g++2} d(\supp(1-a_-),\Gamma_-)>h^\rho\quad &\Longrightarrow\quad \|\Op_h(a_-)u\|_{L^2}\geq C^{-1}h^{(-\Im\lambda)\rho}. \end{align} The constant $\rho\in (0,1)$ is taken very close to $1$. See Lemma~\ref{l:second} for precise statements. The proofs of~\eqref{e:g+1} and~\eqref{e:g+2} use propagation estimates for some $h$-independent time. The proofs of~\eqref{e:g++1} and~\eqref{e:g++2} use similar estimates for time $t=\rho\log(1/h)$, and the factor $h^{(-\Im\lambda)\rho}=e^{(\Im\lambda)t}$ results from the imaginary part of the operator in~\eqref{e:laggie}. However, the analysis for~\eqref{e:g++1} and~\eqref{e:g++2} is considerably more complicated since the symbols $a_\pm$ have very rough behavior in the directions transversal to $\Gamma_\pm$, oscillating on the scale $h^\rho$~-- this corresponds to the fact that $t$ is almost \emph{twice} the Ehrenfest time (since the maximal expansion rate for the geodesic flow is equal to 1, the Ehrenfest time is just below ${1\over 2}\log(1/h)$~-- see for instance~\cite[Proposition~3.9]{qeefun}). To solve this problem, we use the fact that $\Gamma_+$ is foliated by the leaves of the weak unstable Lagrangian foliation $L_u$, while $\Gamma_-$ is foliated by the leaves of the weak stable Lagrangian foliation $L_s$; therefore, we can make $a_+$ vary on scale $1$ along $L_u$ and $a_-$ vary on the scale $1$ along $L_s$. Then $a_+$ and $a_-$ can both be quantized to some operators $\Op_h^{L_u}(a_+)$ and $\Op_h^{L_s}(a_-)$; however, these operators will not be part of the same calculus~-- see~\S\ref{s:second-microlocalization} for details. 
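\noindent\textbf{Remark}. Here is a heuristic (under the simplifying assumption that all expansion rates are exactly equal to $1$) for the claim that $t=\rho\log(1/h)$ is almost twice the Ehrenfest time. Propagating a symbol along the flow for time $t$ can increase its derivatives in the transversal directions by a factor of $e^t$, so an $h$-independent symbol evolves into one oscillating on the scale $e^{-t}$. The borderline scale $h^{1/2}$ of the class $\Psi_{1/2}$ is reached when
$$
e^{-t}=h^{1/2}\quad\Longleftrightarrow\quad t=\tfrac12\log(1/h),
$$
that is, at the Ehrenfest time, while the scale $h^\rho$ relevant for $a_\pm$ corresponds to $t=\rho\log(1/h)$, which for $\rho$ close to $1$ is almost twice as long.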
Next, the fractal uncertainty principle gives the following estimate for some $a_\pm$ satisfying the conditions of~\eqref{e:g++1}, \eqref{e:g++2}: \begin{equation} \label{e:fuppie} \|\Op_h(a_-)\Op_h(a_+)\|_{L^2\to L^2}\leq Ch^{\beta-\varepsilon/2}. \end{equation} To see this, we conjugate by a Fourier integral operator whose underlying canonical transformation maps an $h^\rho$ neighborhood of $\Gamma_+,\Gamma_-$ to an $h^\rho$ neighborhood of~\eqref{e:hor-lag}, \eqref{e:ver-lag} respectively (strictly speaking, to the products of \eqref{e:hor-lag}, \eqref{e:ver-lag} with $(T^*\mathbb R^+)_{w,\theta}$ where $w$ corresponds to $|\xi|_g$ and $\partial_\theta$ corresponds to the generator of the geodesic flow). Under this conjugation, $\Op_h(a_-)$ corresponds to $\indic_{\Lambda_\Gamma(h^\rho)}$ and $\Op_h(a_+)$ corresponds to $\mathcal B_\chi \indic_{\Lambda_\Gamma(h^\rho)} \mathcal B_\chi^*$, therefore~\eqref{e:fuppie} follows from~\eqref{e:fup-standard}. See~\S\ref{s:fun} for details. Gathering together~\eqref{e:g++1}, \eqref{e:g++2}, and~\eqref{e:fuppie}, and recalling that $-\Im\lambda<\beta-\varepsilon$, we obtain a contradiction for $\rho$ close enough to $1$ and $h\ll 1$ (thus finishing the proof): $$ \begin{aligned} C^{-1}h^{(\beta-\varepsilon)\rho}&\leq C^{-1}h^{(-\Im\lambda)\rho}\leq \|\Op_h(a_-)u\|_{L^2}\\ &\leq \|\Op_h(a_-)\Op_h(a_+)u\|_{L^2}+\mathcal O(h^\infty) \leq Ch^{\beta-\varepsilon/2}. \end{aligned} $$ It would be interesting to see if Theorem~\ref{t:fup-reduction} could be proved using transfer operator techniques such as the ones in~\cite{Naud}. We however note that the microlocal argument presented above may be easier to adapt to a variable curvature situation (see the Conjecture above) and it also provides an explicit polynomial bound on the resolvent~\eqref{e:essential-gap2}. 
\subsection{Fractal uncertainty principle via additive energy} As remarked before (following Definition~\ref{d:fup}), the fractal uncertainty principle holds with $\beta={n-1\over 2}-\delta$. This corresponds to counting the total area of the intersections of $h$-neighborhoods of~\eqref{e:hor-lag} and~\eqref{e:ver-lag} (which in turn depend on $\delta$ by~\eqref{e:LG-upper-1}) and can be seen via an $L^1\to L^\infty$ norm bound on $\mathcal B_\chi$. On the other hand, an $L^2\to L^2$ norm bound on $\mathcal B_\chi$ gives the fractal uncertainty principle with $\beta=0$. If we only use the volume bound~\eqref{e:LG-upper-1}, then no better value of $\beta$ can be obtained~-- for a non-rigorous explanation, one may replace $\Lambda_\Gamma(C_1h^\rho)$ in~\eqref{e:fup-standard} by a ball of volume $h^{n-1-\delta}$ in $\mathbb R^{n-1}$, replace $\mathcal B_\chi$ by the semiclassical Fourier transform, and calculate the corresponding $L^2\to L^2$ norm. To get a better exponent $\beta$, we thus have to use the fractal structure of $\Lambda_\Gamma$. More precisely, we will rely on the following combinatorial quantity: \begin{defi} \label{d:ae} For $\mathcal X\subset\mathbb R^{n-1}$ and $\alpha>0$, define the \textbf{$\alpha$-additive energy} of $\mathcal X$ by $$ E_A(\mathcal X,\alpha)=\alpha^{4(1-n)}\mu_L(\{(\eta_1,\eta_2,\eta_3,\eta_4)\in \mathcal X(\alpha)^4\mid |\eta_1-\eta_2+\eta_3-\eta_4|\leq\alpha\}) $$ where $\mathcal X(\alpha)$ is the $\alpha$-neighborhood of $\mathcal X$ and $\mu_L$ is the Lebesgue measure. This definition trivially extends from $\mathbb R^{n-1}$ to any $n-1$ dimensional vector space with an inner product. \end{defi} Additive energy is intimately connected with the additive structure of finite sets, and it is one of the central concepts in the field of additive combinatorics. See~\cite{Tao} for further information on additive energy and related topics. 
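\noindent\textbf{Example}. For orientation, consider the discrete analogue of Definition~\ref{d:ae}: for a finite set $A\subset\mathbb Z$, put
$$
E(A)=\#\{(a_1,a_2,a_3,a_4)\in A^4\mid a_1+a_2=a_3+a_4\}.
$$
One always has $|A|^2\leq E(A)\leq |A|^3$: the lower bound follows by counting the quadruples with $a_1=a_3$ and $a_2=a_4$, and the upper bound holds since any three components of a quadruple determine the fourth. The arithmetic progression $A=\{1,\dots,N\}$ has $E(A)\sim N^3$, while a lacunary set such as $A=\{2,4,\dots,2^N\}$ has $E(A)=O(N^2)$, since $2^i+2^j=2^k+2^l$ forces $\{i,j\}=\{k,l\}$. This parallels the bounds~\eqref{e:basicae} with $N\sim\alpha^{-\delta}$.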
To explain the normalization of $E_A$, assume that $\mathcal X(\alpha)$ is the union of $N(\alpha)$ disjoint balls of radius $\alpha$, where the volume of $\mathcal X(\alpha)$ is proportional to $N(\alpha)\alpha^{n-1}$. Then $E_A(\mathcal X,\alpha)$ is proportional to the number of combinations of four such balls such that the sum of the centers of the first two balls is approximately equal to the sum of the centers of the other two. Motivated by~\eqref{e:LG-upper-1}, we assume that $N(\alpha)\sim\alpha^{-\delta}$. Then \begin{equation} \label{e:basicae} \alpha^{-2\delta}\lesssim E_A(\mathcal X,\alpha)\lesssim \alpha^{-3\delta}. \end{equation} Indeed, the upper bound follows from the fact that the first three balls determine the fourth one uniquely, and the lower bound follows from considering combinations of the form $(\eta_1,\eta_1,\eta_3,\eta_3)$. \begin{figure} \includegraphics{hgap-10.pdf} \caption{The stereographic projection map $\mathcal G$.} \label{f:stereographic0} \end{figure} We will use the additive energy of the images of the limit set $\Lambda_\Gamma$ by the map \begin{equation} \label{e:stpro} \mathcal G(y,y')={y'-(y\cdot y')y\over 1-y\cdot y'}\in\mathbb R^n,\quad y,y'\in\mathbb S^{n-1}\subset\mathbb R^n,\quad y\neq y', \end{equation} which is half the stereographic projection of $y'$ with the base point $y$~-- see Figure~\ref{f:stereographic0}. We have $\mathcal G(y,y')\perp y$, therefore we may think of it as a vector in $T_y\mathbb S^{n-1}$, or (pairing with the round metric on the sphere) as a vector in $T_y^*\mathbb S^{n-1}$. Note that $\mathcal G$ is related to the leaves of~\eqref{e:hor-lag} since \begin{equation} \label{e:gide} \partial_y\log (|y-y'|^2)=-\mathcal G(y,y'). 
\end{equation} \begin{defi} \label{d:ae-estimate} We say that $\Lambda_\Gamma$ satisfies the \textbf{additive energy bound} with exponent $\beta_E>0$, if for each $C_1>0$ there exists $C>0$ such that for all $\alpha\in (0,1)$, \begin{equation} \label{e:ae-estimate} \sup_{y_0\in \Lambda_\Gamma}E_A(\mathcal G(y_0,\Lambda_\Gamma)\cap B(0,C_1),\alpha) \leq C\alpha^{-3\delta+\beta_E}. \end{equation} \end{defi} One can also interpret the sets~$\mathcal G(y_0,\Lambda_\Gamma)$ in terms of the dynamics of the geodesic flow on $M$ using horocyclic flows~-- see~\eqref{e:cal-F} and~\eqref{e:cal-F-useful}. Given an additive energy bound, we obtain a fractal uncertainty principle and thus (by Theorem~\ref{t:fup-reduction}) an essential spectral gap: \begin{theo} \label{t:ae-reduction} Assume that $\Lambda_\Gamma$ satisfies the additive energy bound with exponent $\beta_E>0$. Then $\Lambda_\Gamma$ satisfies the fractal uncertainty principle with exponent \begin{equation} \label{e:beta-ae} \beta={3\over 8}\Big({n-1\over 2}-\delta\Big)+{\beta_E\over 16}. \end{equation} \end{theo} \noindent\textbf{Remark}. Note that by~\eqref{e:basicae}, the maximal $\beta_E$ for which Definition~\ref{d:ae-estimate} may hold is $\beta_E=\delta$. Plugged into~\eqref{e:beta-ae}, this gives an essential spectral gap of size ${3(n-1)-5\delta\over 16}$, which improves over~\eqref{e:standard-gap} only when $\delta\in ({5\over 11}(n-1),{3\over 5}(n-1))$. Theorem~\ref{t:ae-reduction} is proved using an $L^4$ estimate on the Fourier transforms of $\indic_{\mathcal G(y_0,\Lambda_\Gamma(h^{\rho/2}))}$ for $y_0\in\Lambda_\Gamma$ obtained from the additive energy bound. Here we have to replace the original $h^\rho$ neighborhood of $\Lambda_\Gamma$ by a bigger $h^{\rho/2}$ neighborhood to approximate correlations between different leaves of~\eqref{e:hor-lag} restricted to $\Lambda_\Gamma(h^{\rho/2})$ using the Fourier transform. 
Roughly speaking, the leaves which are farther than $h^{1/2}$ apart have an $\mathcal O(h^\infty)$ correlation and for the leaves which are closer than $h^{1/2}$ to each other, the difference of the phase functions in the resulting integral can be well approximated by its linear part~-- see the paragraph following~\eqref{e:eddie}. The enlargement of the neighborhood to $\Lambda_\Gamma(h^{\rho/2})$ causes the loss of a factor of $1\over 2$ in the size of the gap; together with a factor of $3\over 4$ coming from the use of the $L^4$ bound (rather than $L^\infty$) this explains the factor of $3\over 8$ in~\eqref{e:beta-ae}. \subsection{Additive energy via Ahlfors-David regularity} \label{s:introad} We now restrict to dimension $n=2$ and show that the limit set $\Lambda_\Gamma\subset\mathbb S^1$ of a convex co-compact Fuchsian group $\Gamma$ with $\delta\in (0,1)$ satisfies the additive energy bound with some positive exponent. For that we use the following regularity property: \begin{defi} \label{d:ad-regular} Let $(\mathcal{M},d)$ be a complete metric space with more than one element. We say a closed set $\mathcal X \subset \mathcal{M}$ is \textbf{$\delta$--regular} with constant $C_{\mathcal X}$ if for all $x\in \mathcal X$ we have \begin{equation}\label{defnADRegular} C_{\mathcal X}^{-1}r^\delta\ \leq\ \mu_{\delta}(\mathcal X\cap B(x,r))\ \leq\ C_{\mathcal X} r^{\delta},\quad 0<r<\diam(\mathcal M) \end{equation} where $B(x,r)$ is the metric ball of radius $r$ centered at $x$ and $\mu_{\delta}$ is the $\delta$--dimensional Hausdorff measure. \end{defi} Sets with this property are also known as \emph{Ahlfors-David regular}. See~\cite{DS} for an introduction to $\delta$--regular sets. While Definition~\ref{d:ad-regular} is phrased using $\delta$--dimensional Hausdorff measure, any other Borel outer measure could be used instead (in particular, for limit sets of convex co-compact Fuchsian groups the Patterson--Sullivan measure could be used). 
This is discussed further in Lemma~\ref{equivOfADRegDefns} below. The limit set $\Lambda_\Gamma\subset \mathbb S^1$ of a convex co-compact Fuchsian group $\Gamma$ is $\delta$--regular with $\delta$ defined in~\eqref{e:delta}~-- see~\cite[Theorem~7]{Sullivan} and~\cite[Lemma~14.13 and Theorem~14.14]{Borthwick}. We denote the associated regularity constant by \begin{equation} \label{e:ad-regular-limit} \mathbf C:=C_{\Lambda_\Gamma}. \end{equation} Using $\delta$--regularity of $\Lambda_\Gamma$, we obtain the following additive energy bound. Combined with Theorems~\ref{t:fup-reduction} and~\ref{t:ae-reduction}, it implies Theorem~\ref{t:main} and thus Theorem~\ref{t:marketing}. \begin{theo} \label{t:ad-reduced} Let $M=\Gamma\backslash\mathbb H^2$ be a convex co-compact hyperbolic surface with limit set $\Lambda_\Gamma\subset\mathbb S^1$ of dimension $\delta\in (0,1)$. Then $\Lambda_\Gamma$ satisfies the additive energy bound in the sense of Definition~\ref{d:ae-estimate} with exponent \begin{equation}\label{e:betaE} \beta_E:=\delta\exp\big[-\mathbf{K}(1-\delta)^{-28}(1+\log^{14}\mathbf{C})\big], \end{equation} where $\mathbf C$ is defined in~\eqref{e:ad-regular-limit} and $\mathbf K$ is a global constant. \end{theo} \noindent\textbf{Remarks}. (i) The specifics of the bound \eqref{e:betaE} are not particularly important. The key point is that the exponent $\beta_E$ in~\eqref{e:ae-estimate} is independent of $\alpha$, and it can be computed explicitly. We did not compute the value of $\mathbf{K}$, but in principle it can be done without much difficulty. \noindent (ii) In dimensions $n>2$, Theorem~\ref{t:ad-reduced} no longer holds in general, as shown by the example of the hyperbolic cylinder in three dimensions (see for instance~\cite[Appendix~A]{fwl}).
In this example, the limit set $\Lambda_\Gamma$ is a great circle on $\mathbb S^2$, and the stereographic projections $\mathcal G(y_0,\Lambda_\Gamma)$ are straight lines, which saturate the upper bound in~\eqref{e:basicae}. See~\S\ref{higherDimRemark} for possible generalizations to higher dimensions. Theorem~\ref{t:ad-reduced} follows from a general result bounding additive energy of Ahlfors-David regular sets, stated as Theorem~\ref{t:ae-combinatorial} in~\S\ref{s:ae-combinatorial}; the proof of Theorem~\ref{t:ae-combinatorial} can schematically be explained as follows (see~\S\ref{s:ae-ideas} for more details): \begin{enumerate} \item Ahlfors-David regular sets cannot contain large subsets of arithmetic progressions. This follows by a direct argument using~\eqref{defnADRegular} and the fact that $\delta<1$. \item A variant of Fre{\u\i}man's theorem from additive combinatorics asserts that any set with large additive energy must contain large subsets of generalized arithmetic progressions. Together with~(1) this implies that Ahlfors-David regular sets cannot have extremely large (i.e.~near maximal) additive energy. \item Ahlfors-David regular sets also have a certain type of coarse self-similarity. This allows us to analyze them at many scales and at many different locations. Since Ahlfors-David regular sets cannot have extremely large additive energy at any scale or at any location, we can perform a multi-scale analysis to conclude that such sets must actually have small additive energy. \end{enumerate} \subsection{Structure of the paper} \begin{itemize} \item In~\S\ref{s:semiclassical}, we review certain notions in semiclassical analysis, in particular pseudodifferential and Fourier integral operators. \item In~\S\ref{s:second-microlocalization}, we study an anisotropic pseudodifferential calculus associated to a Lagrangian foliation. 
\item In~\S\ref{s:hyperbolic}, we study geometric and dynamical properties of hyperbolic manifolds and, using the calculus of~\S\ref{s:second-microlocalization}, prove Theorem~\ref{t:fup-reduction}. \item In~\S\ref{s:fup}, we discuss the fractal uncertainty principle and prove Theorem~\ref{t:ae-reduction}. \item In~\S\ref{s:ae-combinatorial}, we prove that Ahlfors-David regular sets have small additive energy. \item In~\S\ref{s:ae}, we establish Ahlfors-David regularity of the stereographic projections of the limit set and prove Theorem~\ref{t:ad-reduced}. We also obtain locally uniform bounds on the regularity constant for 3-funneled surfaces. \item In Appendix~\ref{s:hyperbolic-technical}, we prove several technical lemmas used in~\S\ref{s:hyperbolic} and~\S\ref{s:ae}. \end{itemize} \section{Semiclassical preliminaries} \label{s:semiclassical} In this section, we give a brief review of semiclassical analysis. For a comprehensive introduction to the subject, the reader is referred to~\cite{e-z}. We partially follow the presentation of~\cite[Appendix~E]{dizzy} and~\cite{qeefun,fwl,nhp}. \subsection{Pseudodifferential operators} Let $M$ be a manifold. For $k\in\mathbb R$, we say that $a(x,\xi)\in C^\infty(T^*M)$ lies in the symbol class $S^k_{1,0}(T^*M)$ if it satisfies the derivative bounds \begin{equation} \label{e:basic-symbol} |\partial^\alpha_x\partial^\beta_\xi a(x,\xi)|\leq C_{\alpha\beta K}\langle\xi\rangle^{k-|\beta|},\quad x\in K, \end{equation} for each compact set $K\subset M$. We restrict ourselves to the subset of \emph{polyhomogeneous, or classical, symbols} $S^k(T^*M)\subset S^k_{1,0}(T^*M)$ which have asymptotic expansions $a(x,\xi)\sim\sum_{j=0}^\infty a_j(x,\xi)$ as $|\xi|\to\infty$ where each $a_j$ is positively homogeneous in $\xi$ of degree $k-j$. 
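For instance, any symbol which in local coordinates is a polynomial in $\xi$, $a(x,\xi)=\sum_{|\alpha|\leq k}a_\alpha(x)\xi^\alpha$ with smooth coefficients $a_\alpha$, lies in $S^k(T^*M)$, with $a_j$ given by the part of the sum which is homogeneous of degree $k-j$; in particular, if $(M,g)$ is a Riemannian manifold, then $|\xi|_g^2\in S^2(T^*M)$.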
A family of symbols $b(x,\xi;h)\in S_{1,0}^k(T^*M)$ depending on a small parameter $h>0$ is said to lie in the class $S^k_h(T^*M)$ if it has the following expansion as $h\to 0$: \begin{equation} \label{e:h-classical} b(x,\xi;h)\sim\sum_{\ell=0}^\infty h^\ell a_\ell(x,\xi),\quad a_\ell\in S^{k-\ell}(T^*M). \end{equation} See for instance~\cite[\S E.1.2]{dizzy} and~\cite[\S2]{vasy2} for details. If $a\in S^k_{1,0}(T^*\mathbb R^n)$ satisfies~\eqref{e:basic-symbol} uniformly in $x\in\mathbb R^n$, then we can quantize it by the following formula (see~\cite[\S4.1.1]{e-z} and~\cite[\S E.1.4]{dizzy}) \begin{equation} \label{e:standard-quantization} \Op_h(a)f(x)=(2\pi h)^{-n}\int_{\mathbb R^{2n}}e^{{i\over h}(x-y)\cdot\xi}a(x,\xi)f(y)\,dyd\xi, \end{equation} which gives an operator $\Op_h(a)$ acting on the space $\mathscr S(\mathbb R^n)$ of Schwartz functions, as well as on the dual space $\mathscr S'(\mathbb R^n)$ of tempered distributions. Following~\cite[\S E.1.5]{dizzy} and~\cite[\S14.2.2]{e-z}, for a general manifold $M$ we consider the class $\Psi^k_h(M)$ of semiclassical pseudodifferential operators with symbols in $S^k_h(T^*M)$. We denote by $$ \sigma_h:\Psi^k_h(M)\to S^k(T^*M) $$ the principal symbol map. Operators in $\Psi^k_h$ act on semiclassical Sobolev spaces $H^s_{h,\comp}\to H^{s-k}_{h,\loc}$, see~\cite[\S E.1.6]{dizzy} and~\cite[\S14.2.4]{e-z}. We will often use the class $\Psi^{\comp}_h(M)$ of operators whose full symbols are essentially compactly supported in $T^*M$ and whose Schwartz kernels are compactly supported in $M\times M$. For $A\in\Psi^k_h(M)$, denote by $\WFh(A)$ its semiclassical wavefront set, which is the essential support of its full symbol~-- see for instance~\cite[\S E.2.1]{dizzy} and~\cite[Appendix~C.1]{zeta}. Then $\WFh(A)$ is a closed subset of the fiber-radially compactified cotangent bundle $\overline T^*M\supset T^*M$, see for instance~\cite[\S E.1.2]{dizzy} or~\cite[\S2]{vasy2}. 
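As basic examples of~\eqref{e:standard-quantization}: if $a=a(x)$ does not depend on $\xi$, then $\Op_h(a)$ is the operator of multiplication by $a$; $\Op_h(\xi_j)=hD_{x_j}={h\over i}\partial_{x_j}$; and more generally
$$
\Op_h\Big(\sum_{|\alpha|\leq k}a_\alpha(x)\xi^\alpha\Big)=\sum_{|\alpha|\leq k}a_\alpha(x)(hD_x)^\alpha,
$$
so that semiclassical differential operators whose coefficients are bounded with all derivatives are quantizations of symbols satisfying~\eqref{e:basic-symbol} uniformly in $x$.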
For $A,B\in\Psi^k_h(M)$ and an open set $U\subset \overline T^*M$, we say that $$ A=B+\mathcal O(h^\infty)\quad\text{microlocally in }U, $$ if $\WFh(A-B)\cap U=\emptyset$. We also use the notion of wavefront sets of $h$-tempered distributions and operators, see for instance~\cite[\S E.2.3]{dizzy} or~\cite[\S2.3]{zeta}. Let $B=B(h):\mathcal D'(M)\to C_0^\infty(M)$ be an $h$-tempered family of smoothing operators and assume that the wavefront set $\WF'_h(B)\subset \overline T^*(M\times M)$ is a compact subset of $T^*(M\times M)$. We say that $B$ is \emph{pseudolocal} if $\WF'_h(B)$ is contained in the diagonal $\Delta(T^*M)\subset T^*(M\times M)$. For a pseudolocal operator $B$, we consider the set $\WF_h(B)\subset T^*M$ defined by \begin{equation} \label{e:wf-pseudolocal} \WF'_h(B)=\{(x,\xi,x,\xi)\mid (x,\xi)\in \WF_h(B)\}. \end{equation} Note that operators in $\Psi^{\comp}_h(M)$ are pseudolocal and their definition of wavefront set given in~\cite[\S E.2.1]{dizzy} agrees with the one given by~\eqref{e:wf-pseudolocal}. \subsection{Fourier integral operators} \label{s:fios} We next introduce semiclassical Fourier integral operators. Let $\varkappa:U_2\to U_1$ be a canonical transformation (that is, a symplectomorphism), where $U_j\subset T^*M_j$ are open sets and $M_j$ are manifolds of the same dimension. Define the graph of $\varkappa$ by \begin{equation} \label{e:Graph} \Graph(\varkappa):=\{(x,\xi,y,\eta)\mid (y,\eta)\in U_2,\ (x,\xi)=\varkappa(y,\eta)\}\ \subset\ T^*(M_1\times M_2). \end{equation} Let $\xi\,dx$ and $\eta\,dy$ be the canonical 1-forms on $T^*U_1$ and $T^*U_2$ respectively. Since $\varkappa$ is a canonical transformation, the restriction $(\xi\,dx-\eta\,dy)|_{\Graph(\varkappa)}$ is a closed 1-form. We require that $\varkappa$ is \emph{exact} in the sense that this restriction is an exact form, and fix an antiderivative \begin{equation} \label{e:antiderivative} F\in C^\infty(\Graph(\varkappa)),\quad (\xi\,dx-\eta\,dy)|_{\Graph(\varkappa)}=dF. 
\end{equation} For a canonical transformation $\varkappa$ with a fixed antiderivative $F$, we consider the class~$I^{\comp}_h(\varkappa)$ of compactly supported and microlocalized Fourier integral operators associated to $\varkappa$~-- see for instance~\cite[Chapter~5]{gu-st0}, \cite[Chapter~8]{gu-st}, \cite[\S3.2]{qeefun}, \cite[\S3.2]{fwl},% \footnote[2]{The presentation in~\cite[\S3.2]{fwl} contained an error because the Fourier integral operators associated to the identity map were not necessarily pseudodifferential operators with classical symbols due to a possible constant phase factor $e^{ic/h}$. We correct it here by fixing the antiderivative, which is always possible locally.} \cite[\S3.2]{nhp}, and the references there. We adopt a convention that operators in $I^{\comp}_h(\varkappa)$ act $\mathcal D'(M_2)\to C_0^\infty(M_1)$. We list some basic properties of the class $I^{\comp}_h(\varkappa)$: \begin{itemize} \item each $B\in I^{\comp}_h(\varkappa)$ is bounded uniformly in~$h$ on the spaces $H^s_{h,\loc}(M_2)\to H^{s'}_{h,\comp}(M_1)$ for all $s,s'\in\mathbb R$, and $\WF'_h(B)\subset \Graph(\varkappa)$; \item if $\varkappa:U_2\to U_1$, $U'\subset U_2$, and $\varkappa':=\varkappa|_{U'}$, then $B\in I^{\comp}_h(\varkappa')$ if and only if $B\in I^{\comp}_h(\varkappa)$ and $\WF'_h(B)\subset\Graph(\varkappa')$; \item if $\varkappa:T^*M\to T^*M$ is the identity map with the zero antiderivative, then $B\in I^{\comp}_h(\varkappa)$ if and only if $B\in\Psi^{\comp}_h(M)$; \item if $B\in I^{\comp}_h(\varkappa)$, then $B^*\in I^{\comp}_h(\varkappa^{-1})$, with the antiderivatives on $\Graph(\varkappa)$ and $\Graph(\varkappa^{-1})$ summing up to zero; \item if $\varkappa:U_2\to U_1$, $\varkappa':U_3\to U_2$, and $B\in I^{\comp}_h(\varkappa)$, $B'\in I^{\comp}_h(\varkappa')$, then $BB'\in I^{\comp}_h(\varkappa\circ\varkappa')$, with the antiderivative on $\Graph(\varkappa\circ\varkappa')$ chosen as the sum of the antiderivatives on $\Graph(\varkappa)$ and 
$\Graph(\varkappa')$. \end{itemize} To give a concrete expression for elements of $I^{\comp}_h(\varkappa)$, assume that $\varkappa$ is parametrized by a nondegenerate phase function $\Phi(x,y,\zeta)\in C^\infty(U_\Phi;\mathbb R)$, $U_\Phi\subset M_1\times M_2\times\mathbb R^m$, in the sense that the differentials $d(\partial_{\zeta_1}\Phi),\dots,d(\partial_{\zeta_m}\Phi)$ are independent on the critical set $$ \mathcal C_\Phi=\{(x,y,\zeta)\in U_\Phi\mid\partial_\zeta\Phi(x,y,\zeta)=0\} $$ and the graph $\Graph(\varkappa)$ is given by \begin{equation} \label{e:kappa-parametrized} \Graph(\varkappa)=j_\Phi(\mathcal C_\Phi),\quad j_\Phi:(x,y,\zeta)\mapsto (x,\partial_x\Phi(x,y,\zeta),y,-\partial_y\Phi(x,y,\zeta)). \end{equation} The corresponding antiderivative is just the pullback of $\Phi$ from $\mathcal C_\Phi$ to $\Graph(\varkappa)$ by the map $j_\Phi$. Then any operator $B\in I^{\comp}_h(\varkappa)$ has the following form modulo $\mathcal O(h^\infty)_{\Psi^{-\infty}}$: \begin{equation} \label{e:fio-general-form} Bf(x)=(2\pi h)^{-{m+n\over 2}}\int_{M_1\times\mathbb R^m} e^{{i\over h}\Phi(x,y,\zeta)}b(x,y,\zeta;h)\,dyd\zeta \end{equation} where $n=\dim M_1=\dim M_2$ and $b$ is a compactly supported symbol on $U_\Phi$, that is an $h$-dependent family of smooth functions with support contained in some $h$-independent compact set which has an asymptotic expansion in nonnegative integer powers of $h$. Moreover, local principal symbol calculus shows that \begin{equation} \label{e:principal-killed} b(x,y,\zeta;0)=0\quad\text{for all }(x,y,\zeta)\in\mathcal C_\Phi\ \Longrightarrow\ B\in hI^{\comp}_h(\varkappa). \end{equation} See for example~\cite[\S3.2]{qeefun} for details. A special case is when $M_2$ is an open subset of $\mathbb R^n$ and $\Graph(\varkappa)$ projects diffeomorphically onto the $(x,\eta)$ variables. 
Let $F\in C^\infty(\Graph(\varkappa))$ be the fixed antiderivative, and define the \emph{generating function} $S(x,\eta)\in C^\infty(U_S;\mathbb R)$ by the formula $S(x,\eta)=F+y\cdot\eta$, where $\Graph(\varkappa)$ is parametrized by $(x,\eta)\in U_S\subset M_1\times\mathbb R^n$. Then \begin{equation} \label{e:canonical-form} \Graph(\varkappa)=\{\xi=\partial_x S(x,\eta),\ y=\partial_\eta S(x,\eta),\ (x,\eta)\in U_S\} \end{equation} implying that $\varkappa$ is parametrized in the sense of~\eqref{e:kappa-parametrized} by the function $(x,y,\zeta)\mapsto S(x,\zeta)-y\cdot\zeta$. Each $B\in I^{\comp}_h(\varkappa)$ has the following form modulo $\mathcal O(h^\infty)_{\mathcal D'(M_2)\to C_0^\infty(M_1)}$: \begin{equation} \label{e:fio-local-form} Bf(x)=(2\pi h)^{-n}\int_{\mathbb R^{2n}}e^{{i\over h}(S(x,\eta)-y\cdot\eta)}b(x,\eta;h)\chi(y)f(y)\,dyd\eta,\quad f\in \mathcal D'(M_2), \end{equation} where $n=\dim M_j$, $b(x,\eta;h)$ is a compactly supported symbol on $U_S$, and $\chi\in C_0^\infty(M_2)$ is any function such that $\chi=1$ near $\partial_\eta S(\supp b)$. (The resulting operator is independent of the choice of $\chi$ modulo $\mathcal O(h^\infty)_{\mathcal D'(M_2)\to C_0^\infty(M_1)}$.) As remarked in~\cite[\S3.2]{fwl}, $\varkappa$ can locally be written in the form~\eqref{e:canonical-form} for some choice of local coordinates on $M_2$ as long as its domain does not intersect the zero section of $T^*M_2$. The latter condition can be arranged locally by composing $\varkappa$ with a transformation of the form $(y,\eta)\mapsto (y,\eta-d\psi(y))$ for some $\psi\in C^\infty(M_2)$, which amounts to multiplying the resulting operators by $e^{i\psi(y)/ h}$~-- see Lemma~\ref{l:gauge-fio} below. We next discuss microlocal inverses of Fourier integral operators. Assume that $B\in I^{\comp}_h(\varkappa),B'\in I^{\comp}_h(\varkappa^{-1})$. 
Then $BB'\in\Psi^{\comp}_h(M_1)$, $B'B\in\Psi^{\comp}_h(M_2)$, $\WFh(BB')\subset U_1$, $\WFh(B'B)\subset U_2$, and (as can be shown in the case of~\eqref{e:fio-local-form} by an explicit application of the method of stationary phase; in general this is a form of Egorov's Theorem) \begin{equation} \label{e:symbol-commutes} \sigma_h(B'B)=\sigma_h(BB')\circ\varkappa. \end{equation} We call $B\in I^{\comp}_h(\varkappa)$ \emph{elliptic} at a point $(x,\xi,y,\eta)\in\Graph(\varkappa)$, if there exists $B'\in I^{\comp}_h(\varkappa^{-1})$ such that $\sigma_h(BB')(x,\xi)\neq 0$ (in fact, this is equivalent to requiring that $\sigma_h(BB^*)(x,\xi)\neq 0$). For $B$ given by~\eqref{e:fio-general-form}, this simply means that $b(x,y,\zeta;0)\neq 0$ where $(x,y,\zeta)=j_\Phi^{-1}(x,\xi,y,\eta)\in \mathcal C_\Phi$. For each point in $\Graph(\varkappa)$, there exist operators in $I^{\comp}_h(\varkappa)$ elliptic at this point. If $V_j\subset U_j$ are compact subsets such that $\varkappa(V_2)=V_1$, then we say that $B,B'$ \emph{quantize}~$\varkappa$ near $V_1\times V_2$ if \begin{equation} \label{e:quantized} \begin{aligned} BB'&=1+\mathcal O(h^\infty)\quad\text{microlocally near }V_1,\\ B'B&=1+\mathcal O(h^\infty)\quad\text{microlocally near }V_2. \end{aligned} \end{equation} Such operators $B,B'$ exist if $V_2=\{(y,\eta)\}$ for any given point $(y,\eta)\in U_2$ (and thus if $V_2$ is a sufficiently small neighborhood of $(y,\eta)$). To show this, take $B\in I^{\comp}_h(\varkappa)$ elliptic at $(\varkappa(y,\eta),y,\eta)$ and $B'_0\in I^{\comp}_h(\varkappa^{-1})$ such that $\sigma_h(BB'_0)\neq 0$ on $V_1$. Multiplying $B'_0$ on the right by an elliptic parametrix of $BB'_0$ (see for instance~\cite[\S E.2.2]{dizzy} and~\cite[Proposition~2.4]{zeta}), we obtain $B'\in I^{\comp}_h(\varkappa^{-1})$ such that $BB'=1+\mathcal O(h^\infty)$ microlocally near $V_1$.
By~\eqref{e:symbol-commutes}, we have $\sigma_h(B'_0B)\neq 0$ on $V_2$, so we can construct $B''\in I^{\comp}_h(\varkappa^{-1})$ such that $B''B=1+\mathcal O(h^\infty)$ microlocally near $V_2$. Then $$ \WF'_h(B'-B'')\cap (V_1\times V_2)\ \subset\ \WF'_h((B''B)B'-B''(BB'))\ =\ \emptyset, $$ therefore~\eqref{e:quantized} holds. One could also define $B,B'$ as solutions of an evolution equation, see~\cite[Theorem~11.5]{e-z} and~\cite[\S3.2]{fwl}. One useful family of Fourier integral operators is given by the following \begin{lemm} \label{l:gauge-fio} Let $\varphi:M_1\to M_2$ be a diffeomorphism and $\psi\in C^\infty(M_1)$. Consider the operator $$ B=B(h):\mathcal D'(M_2)\to \mathcal D'(M_1),\quad Bf(x)=e^{i\psi(x)/h}f(\varphi(x)). $$ Then for each $A_j\in\Psi^{\comp}_h(M_j)$, we have $A_1B,BA_2\in I^{\comp}_h(\varkappa^{-1})$, where $$ \varkappa:T^*M_1\to T^*M_2,\quad \varkappa(x,\xi)=\big(\varphi(x),(d\varphi(x))^{-T}\cdot(\xi-d\psi(x))\big), $$ and the antiderivative is given by $\psi(x)$. \end{lemm} \begin{proof} It suffices to consider the case when $M_1,M_2$ are open subsets of $\mathbb R^n$. Let $A_2=\Op_h(a)\chi$, where $a(y,\eta;h)$ is compactly supported in $M_2\times\mathbb R^n$ and $\chi\in C_0^\infty(M_2)$ is equal to 1 near the projection of $\supp a$. Then $$ BA_2f(x)=(2\pi h)^{-n}\int_{\mathbb R^{2n}}e^{{i\over h}((\varphi(x)-y)\cdot\eta+\psi(x))} a(\varphi(x),\eta;h)\chi(y)f(y)\,dyd\eta. $$ This has the form~\eqref{e:fio-local-form} with $$ S(x,\eta)=\varphi(x)\cdot\eta+\psi(x),\quad b(x,\eta;h)=a(\varphi(x),\eta;h), $$ and it is straightforward to see that $\varkappa^{-1}$ is given by~\eqref{e:canonical-form}. The case of $A_1B$ is reduced to the case of $BA_2$ by considering adjoint operators. \end{proof} \section{Calculus associated to a Lagrangian foliation} \label{s:second-microlocalization} In this section, we define a class of exotic pseudodifferential operators associated to a Lagrangian foliation.
The symbols of these operators are allowed to vary on the constant scale along the foliation and on the scale $h^\rho$, $0\leq \rho<1$, in the directions transversal to the foliation. For $\rho>{1\over 2}$, the resulting operators will not generally lie in the exotic calculus $\Psi_{1/2}$ (see for instance~\cite[\S5.1]{fwl}), yet they form an algebra with properties similar to those of standard pseudodifferential operators. A similar (in fact, sharper in certain ways as it allowed for $\rho=1$ and $\Psi_{1/2}$ behavior in some directions) second microlocal calculus associated to a hypersurface has previously been developed by Sj\"ostrand--Zworski~\cite[\S5]{sj-zw}; for a calculus associated to a Lagrangian submanifold in the analytic category, see~\cite[Chapter~2]{delort-book} and the references given there. \subsection{Foliations and symbols} We start with the definition of a Lagrangian foliation: \begin{defi} \label{d:l-foli} Let $M$ be a manifold, $U\subset T^*M$ be an open set, and $$ L_{(x,\xi)}\ \subset\ T_{(x,\xi)}(T^*M),\quad (x,\xi)\in U $$ a family of subspaces depending smoothly on $(x,\xi)$. We say that $L$ is a \textbf{Lagrangian foliation} on $U$ if \begin{itemize} \item $L_{(x,\xi)}$ is integrable in the sense that if $X,Y$ are two vector fields on $U$ lying in $L$ at each point (we denote this by $X,Y\in C^\infty(U;L)$), then the Lie bracket $[X,Y]$ lies in $C^\infty(U;L)$ as well; \item $L_{(x,\xi)}$ is a Lagrangian subspace of $T_{(x,\xi)}(T^*M)$ for each $(x,\xi)\in U$. \end{itemize} \end{defi} Another way to think about a Lagrangian foliation is in terms of its leaves, which are Lagrangian submanifolds whose tangent spaces are given by $L$. The existence of these leaves follows from Frobenius's Theorem, see Lemma~\ref{l:canonical} below. We consider the following class of symbols: \begin{defi} \label{d:symbols} Let $L$ be a Lagrangian foliation on $U\subset T^*M$, and fix $\rho\in [0,1)$. 
We say that a function $a(x,\xi;h)$ is a (compactly supported) symbol of class $S_\rho$ with respect to $L$, and write $$ a\in S^{\comp}_{L,\rho}(U), $$ if for each $h\in (0,h_0)$, $(x,\xi)\mapsto a(x,\xi;h)$ is a smooth function on $U$ supported inside some $h$-independent compact set and it satisfies the derivative bounds (with the constant $C$ depending on $Y_j,Z_j$, but not on $h$) \begin{equation} \label{e:symbols-def} \sup_{x,\xi} |Y_1\dots Y_m Z_1\dots Z_k a(x,\xi;h)|\leq C h^{-\rho k}, \end{equation} for all vector fields $Y_1,\dots,Y_m,Z_1,\dots,Z_k$ on $U$ such that $Y_1,\dots,Y_m\in C^\infty(U;L)$. \end{defi} The following statement is useful for constructing symbols in the class $S^{\comp}_{L,\rho}$: \begin{lemm} \label{l:symbol-construction} Let $M_1$ be a compact manifold and $V_0(h)\subset V_1(h)\subset M_1$ be $h$-dependent sets satisfying $$ d\big(V_0(h),M_1\setminus V_1(h)\big)>\varepsilon h^\rho $$ for some fixed $\varepsilon>0,\rho\in [0,1)$ and all $h\in (0,1)$. Then there exists $\chi(h)\in C_0^\infty(M_1;[0,1])$ such that for all $h\in (0,1)$, \begin{gather} \label{e:sc-1} \supp(1-\chi(h))\cap V_0(h)=\emptyset,\quad \supp\chi(h)\subset V_1(h);\\ \label{e:sc-2} \sup_{M_1}|\partial^\alpha\chi|\leq C_\alpha h^{-\rho|\alpha|}. \end{gather} \end{lemm} \begin{proof} By a partition of unity we reduce to the case when $V_1(h)$ is contained in a small coordinate neighborhood on $M_1$; therefore, it suffices to consider the case $M_1=\mathbb R^n$. Let $d(\cdot,\cdot)$ be the Euclidean distance function. Put $$ V_2(h):=\{x\in\mathbb R^n\mid d(x,V_0(h))\leq \varepsilon h^\rho/2\}, $$ then (here $B(x,r)$ denotes the ball of radius $r$ centered at $x$) \begin{equation} \label{e:sc-3} \begin{aligned} x\in V_0(h)\quad&\Longrightarrow\quad B(x,\varepsilon h^\rho/2)\subset V_2(h),\\ x\in V_2(h)\quad&\Longrightarrow\quad B(x,\varepsilon h^\rho/2)\subset V_1(h).
\end{aligned} \end{equation} Take nonnegative $\psi\in C_0^\infty(B(0,\varepsilon/2))$ such that $\int\psi=1$, and put (here $m=\dim M_1$) $$ \chi(x;h):=h^{-m\rho}\int_{V_2(h)}\psi\Big({x-y\over h^\rho}\Big)\,dy. $$ It follows immediately from~\eqref{e:sc-3} that $\chi$ satisfies~\eqref{e:sc-1}. Moreover, by putting derivatives on $\psi$ we obtain the derivative bounds~\eqref{e:sc-2}, finishing the proof. \end{proof} To keep track of the essential supports of symbols in $S^{\comp}_{L,\rho}(U)$ in an $h$-dependent way, we use the following \begin{defi} \label{d:rapid-decay} Assume that $a(x,\xi;h)$ is an $h$-dependent family of smooth functions in $(x,\xi)\in U$, and $h_j\to 0$, $(x_j,\xi_j)\in U$ are some sequences. We say that $a$ is $\mathcal O(h^\infty)$ along the sequence $(x_j,\xi_j,h_j)$, if for each $N$ and all vector fields $Z_1,\dots,Z_L$ on $U$, there exists a constant $C$ such that $$ |Z_1\dots Z_L a(x_j,\xi_j;h_j)|\leq C h_j^N. $$ \end{defi} We next introduce local canonical coordinates bringing an arbitrary Lagrangian foliation to a normal form. Let $L_0$ be the Lagrangian foliation on $T^*\mathbb R^n$ given by the fibers of the cotangent bundle; that is, in the standard coordinates $(y,\eta)$ on $T^*\mathbb R^n$, $$ L_0=\Span(\partial_{\eta_1},\dots,\partial_{\eta_n}) $$ is the annihilator of $dy$. \begin{defi} Let $L$ be a Lagrangian foliation on $U\subset T^*M$. A \textbf{Lagrangian chart} is a symplectomorphism $$ \varkappa:U_0\to V,\quad U_0\subset U,\quad V\subset T^*\mathbb R^n, $$ such that $d\varkappa(x,\xi)\cdot L_{(x,\xi)}=(L_0)_{\varkappa(x,\xi)}$ for each $(x,\xi)\in U_0$. \end{defi} The basic properties of Lagrangian charts are given by \begin{lemm} \label{l:canonical} 1. Let $L$ be a Lagrangian foliation on $U\subset T^*M$ and $(x_0,\xi_0)\in U$. Then there exists a Lagrangian chart $\varkappa:U_0\to T^*\mathbb R^n$ on some neighborhood $U_0\subset U$ of $(x_0,\xi_0)$. 2.
Assume that $\varkappa:V\to V'$, where $V,V'\subset T^*\mathbb R^n$ are open, is a symplectomorphism which preserves the foliation $L_0$, and $(y_0,\eta_0)\in V$. Then there exists $\varepsilon>0$ such that \begin{equation} \label{e:gauge-transform} \varkappa(y,\eta)=\big(\varphi(y),(d\varphi(y))^{-T}\cdot (\eta-d\psi(y))\big),\quad (y,\eta)\in B(y_0,\varepsilon)\times B(\eta_0,\varepsilon), \end{equation} for some diffeomorphism $\varphi:B(y_0,\varepsilon)\to\mathbb R^n$ onto its image and some function $\psi\in C^\infty(B(y_0,\varepsilon);\mathbb R)$. \end{lemm} \begin{proof} 1. Since $L$ is integrable, by Frobenius's Theorem~\cite[Theorem~C.1.1]{ho3} there exist local coordinates $(y,\tilde\eta)$ in a neighborhood of $(x_0,\xi_0)$ such that $L$ is the annihilator of $dy$. Moreover, since $L$ is Lagrangian, we have $\{y_j,y_k\}=0$. Now, by the Darboux Theorem~\cite[Theorem~21.1.6]{ho3} there exists a set of functions $\eta_1,\dots,\eta_n$ defined near $(x_0,\xi_0)$ such that $$ \{y_j,y_k\}=\{\eta_j,\eta_k\}=0,\quad \{\eta_j,y_k\}=\delta_{jk}. $$ The map $(x,\xi)\mapsto (y,\eta)$ is a Lagrangian chart in a neighborhood of $(x_0,\xi_0)$. 2. Define the functions $y',\eta'$ on $V$ by setting $\varkappa:(y,\eta)\mapsto (y',\eta')$. Since the annihilators of $dy'$ and $dy$ are the same (and both equal to $L_0$), we have $y'=\varphi(y)$ for $(y,\eta)\in B(y_0,\varepsilon)\times B(\eta_0,\varepsilon)$, some $\varepsilon>0$, and some diffeomorphism onto its image $\varphi:B(y_0,\varepsilon)\to\mathbb R^n$. Since $\{y'_j,\eta'_k\}=\delta_{jk}$, we have $$ \eta'=(d\varphi(y))^{-T}\cdot(\eta-F(y)),\quad (y,\eta)\in B(y_0,\varepsilon)\times B(\eta_0,\varepsilon), $$ for some smooth map $F:B(y_0,\varepsilon)\to\mathbb R^n$. Since $\{\eta'_j,\eta'_k\}=0$, we have $F(y)=d\psi(y)$ for some $\psi:B(y_0,\varepsilon)\to\mathbb R$. \end{proof} \subsection{Calculus on $\mathbb R^n$} We next develop the calculus for the case $L=L_0$.
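To illustrate Definition~\ref{d:symbols} in this model case, here is a minimal example of a symbol in $S^{\comp}_{L_0,\rho}(T^*\mathbb R^n)$ (our own illustration, not taken from the references): fix $h$-independent cutoffs $f,g\in C_0^\infty(\mathbb R^n)$ and rescale only the base variable.

```latex
% A sketch (our example): a symbol varying at scale h^\rho transversally to L_0
% and at unit scale along the fibers. Here f,g \in C_0^\infty(\mathbb R^n) are
% h-independent cutoffs, an assumption made for this illustration only.
\[
a(y,\eta;h) := f\!\Big(\frac{y}{h^\rho}\Big)\,g(\eta),
\qquad
\partial_y^\alpha\partial_\eta^\beta\, a(y,\eta;h)
 = h^{-\rho|\alpha|}\,(\partial^\alpha f)\!\Big(\frac{y}{h^\rho}\Big)\,
   (\partial^\beta g)(\eta)
 = \mathcal O\big(h^{-\rho|\alpha|}\big).
\]
```

Since the $y$-support of $a$ is contained in $h^\rho\supp f$, hence in a fixed compact set, $a$ is compactly supported uniformly in $h$; derivatives along $L_0=\Span(\partial_{\eta_1},\dots,\partial_{\eta_n})$ cost nothing, while each transversal derivative loses a factor $h^{-\rho}$, matching the bound~\eqref{e:symbols-def}.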
We have $a(y,\eta;h)\in S_{L_0,\rho}^{\comp}(T^*\mathbb R^n)$ if and only if $a$ is supported inside some $h$-independent compact set and satisfies the derivative bounds \begin{equation} \label{e:s0-symb} \sup_{y,\eta}|\partial_y^\alpha\partial_\eta^\beta a(y,\eta;h)|\leq C_{\alpha\beta}h^{-\rho|\alpha|}. \end{equation} We derive several basic properties of quantizations of symbols in $S_{L_0,\rho}^{\comp}(T^*\mathbb R^n)$ by the map $\Op_h$ defined in~\eqref{e:standard-quantization}: \begin{lemm} \label{l:l2-bdd} For $a\in S_{L_0,\rho}^{\comp}(T^*\mathbb R^n)$, the operator $\Op_h(a)$ is bounded on $L^2(\mathbb R^n)$ uniformly in $h$. \end{lemm} \begin{proof} We introduce the unitary rescaling operator $$ T_\rho:L^2(\mathbb R^n)\to L^2(\mathbb R^n),\quad T_\rho u(y)=h^{n\rho/4}u(h^{\rho/2} y). $$ It suffices to estimate the $L^2\to L^2$ norm of $$ T_\rho\Op_h(a)T_\rho^{-1}=\Op_h(a_\rho),\quad a_\rho(\tilde y,\tilde \eta;h):=a(h^{\rho/2} \tilde y,h^{-\rho/2}\tilde \eta;h). $$ It follows from~\eqref{e:s0-symb} that $a_\rho\in S_{\rho/2}$, where the classes $S_\delta$, $0\leq \delta\leq 1/2$, are defined in~\cite[(4.4.5)]{e-z}. It remains to apply~\cite[Theorem~4.23(ii)]{e-z}. \end{proof} \begin{lemm} \label{l:quant-basic} Let $a,b\in S_{L_0,\rho}^{\comp}(T^*\mathbb R^n)$. Then: 1. We have $$ \Op_h(a)\Op_h(b)=\Op_h(a\#b)+\mathcal O(h^\infty)_{L^2\to L^2}, $$ where $a\# b\in S_{L_0,\rho}^{\comp}(T^*\mathbb R^n)$ and for each $N$, $$ a\# b(y,\eta;h)=\sum_{j=0}^{N-1}{(-ih)^j\over j!} (\partial_\eta\cdot\partial_{y'})^j \big(a(y,\eta;h)b(y',\eta';h)\big)|_{y'=y\atop \eta'=\eta}+\mathcal O(h^{(1-\rho)N})_{S_{L_0,\rho}^{\comp}(T^*\mathbb R^n)}. $$ 2. We have $$ \Op_h(a)^*=\Op_h(a^*)+\mathcal O(h^\infty)_{L^2\to L^2}, $$ where $a^*\in S_{L_0,\rho}^{\comp}(T^*\mathbb R^n)$ and for each $N$, $$ a^*(y,\eta;h)=\sum_{j=0}^{N-1}{(-ih)^j\over j!} (\partial_\eta\cdot\partial_y)^j \overline{a(y,\eta;h)} +\mathcal O(h^{(1-\rho)N})_{S_{L_0,\rho}^{\comp}(T^*\mathbb R^n)}.
$$ \end{lemm} \begin{proof} It suffices to apply~\cite[Theorems~4.14 and~4.17]{e-z} to the rescaled symbols $a_\rho,b_\rho\in S_{\rho/2}$ introduced in the proof of Lemma~\ref{l:l2-bdd}. The resulting symbols are $\mathcal O(h^\infty)$ outside of a compact set and thus can be cut off to compactly supported symbols. \end{proof} Lemma~\ref{l:quant-basic} (or rather its trivial extension to symbols which are not compactly supported) implies that \begin{equation} \label{e:funny-pseudolocal} \Op_h(b_1)\Op_h(a)\Op_h(b_2)=\mathcal O(h^\infty)_{L^2\to L^2},\quad \supp b_1\cap\supp b_2=\emptyset, \end{equation} for each $a\in S_{L_0,\rho}^{\comp}(T^*\mathbb R^n)$ and $h$-independent $b_1,b_2\in C^\infty(\mathbb R^{2n})$ with all derivatives uniformly bounded. This in turn implies that the operator $\Op_h(a)$ is pseudolocal and $\WF'_h(\Op_h(a))$ is compactly contained in $T^*(\mathbb R^n\times\mathbb R^n)$. The next two lemmas establish invariance of the class of operators of the form $\Op_h(a)$, $a\in S_{L_0,\rho}^{\comp}(T^*\mathbb R^n)$, under conjugation by Fourier integral operators whose canonical transformations preserve the foliation $L_0$: \begin{lemm} \label{l:chvar} Assume that $\varphi:V_1\to V_2$ is a diffeomorphism, where $V_j\subset \mathbb R^n$ are open sets, $\psi\in C^\infty(V_1)$, and $\chi\in C_0^\infty(V_1)$. Define the operators $B:C^\infty(V_1)\to C_0^\infty(V_2)$, $B':C^\infty(V_2)\to C_0^\infty(V_1)$ by $$ Bf(y')=e^{-i\psi(\varphi^{-1}(y'))/h}\chi(\varphi^{-1}(y'))f(\varphi^{-1}(y')),\quad B'f(y)=e^{i\psi(y)/h}\chi(y)f(\varphi(y)). $$ Then for each $a\in S_{L_0,\rho}^{\comp}(T^*\mathbb R^n)$, $$ B' \Op_h(a)B=\Op_h(\tilde a)+\mathcal O(h^\infty)_{L^2\to L^2}, $$ for some $\tilde a\in S_{L_0,\rho}^{\comp}(T^*\mathbb R^n)$ such that for each $N$, $$ \tilde a(y,\eta;h)=\sum_{j=0}^{N-1}h^j L_j\big(\chi(y)\chi(y')a(\varphi(y),\theta;h)\big)\big|_{y'=y,\, \theta=d\varphi(y)^{-T}(\eta-d\psi(y))}+\mathcal O(h^N)_{S^{\comp}_{L_0,\rho}(T^*\mathbb R^n)}.
$$ where $L_j$ are differential operators of order $2j$ in $y',\theta$ depending on $\varphi,\psi$ and $L_0=1$. \end{lemm} \begin{proof} We write $$ \begin{gathered} B'\Op_h(a)B f(y)\\ =(2\pi h)^{-n}\int_{\mathbb R^{2n}} e^{{i\over h}((\varphi(y)-\varphi(y'))\cdot\theta+\psi(y)-\psi(y'))} \chi(y)\chi(y')J_\varphi(y')a(\varphi(y),\theta;h)f(y')\,dy'd\theta \end{gathered} $$ where $J_\varphi(y')=|\det d\varphi(y')|$. By oscillatory testing~\cite[Theorem~4.19]{e-z}, we have $B'\Op_h(a)B=\Op_h(b)$, where $$ \begin{gathered} b(y,\eta;h)=e^{-{i\over h} y\cdot\eta}B'\Op_h(a)B(e^{{i\over h}y'\cdot\eta})\\ =(2\pi h)^{-n}\int_{\mathbb R^{2n}}e^{{i\over h}((\varphi(y)-\varphi(y'))\cdot\theta+\psi(y)-\psi(y')-(y-y')\cdot\eta)} \chi(y)\chi(y')J_\varphi(y')a(\varphi(y),\theta;h)\,dy'd\theta, \end{gathered} $$ as long as all derivatives of $b$ are bounded uniformly on $\mathbb R^{2n}$ for each fixed~$h$. It then remains to establish the asymptotic expansion for $b$, which follows immediately by the method of stationary phase~\cite[Theorem~3.16]{e-z}. The symbol $b$ is $\mathcal O(h^\infty)_{\mathscr S(\mathbb R^{2n})}$ outside of a fixed compact set, therefore it can be cut off to a compactly supported symbol. \end{proof} \begin{lemm} \label{l:gauge} Assume that $\varkappa:V\to V'$, where $V,V'\subset T^*\mathbb R^n$ are open, is a canonical transformation which preserves the foliation $L_0$. Let $$ B\in I^{\comp}_h(\varkappa),\quad B'\in I^{\comp}_h(\varkappa^{-1}). $$ Take $a\in S^{\comp}_{L_0,\rho}(T^*\mathbb R^n)$. Then there exists $b\in S^{\comp}_{L_0,\rho}(T^*\mathbb R^n)$ such that $$ \begin{aligned} B'\Op_h(a)B&=\Op_h(b)+\mathcal O(h^\infty)_{L^2\to L^2},\\ b&=(a\circ\varkappa)\sigma_h(B'B)+\mathcal O(h^{1-\rho})_{S^{\comp}_{L_0,\rho}(T^*\mathbb R^n)}. 
\end{aligned} $$ Moreover, if $h_j\to 0$, $(y_j,\eta_j)\in T^*\mathbb R^n$ are some sequences such that $a$ is $\mathcal O(h^\infty)$ along the sequence $(\varkappa(y_j,\eta_j),h_j)$ (in the sense of Definition~\ref{d:rapid-decay}), then $b$ is $\mathcal O(h^\infty)$ along the sequence $(y_j,\eta_j,h_j)$. \end{lemm} \begin{proof} By applying a partition of unity to $B,B'$ and using pseudolocality of $\Op_h(a)$ (see~\eqref{e:funny-pseudolocal}) and part~2 of Lemma~\ref{l:canonical}, we reduce to the case when $\varkappa$ has the form~\eqref{e:gauge-transform} for some $\varphi:B(y_0,\varepsilon)\to\mathbb R^n$, $\psi\in C^\infty(B(y_0,\varepsilon);\mathbb R)$; we add a constant to $\psi$ to make sure that the fixed antiderivative on $\Graph(\varkappa)$ is equal to $\psi(x)$. By Lemma~\ref{l:gauge-fio} and the composition property of Fourier integral operators, the products $$ A:=e^{{i\over h}\psi}\varphi^* B,\quad A':=B'(\varphi^{-1})^*e^{-{i\over h}\psi}, $$ lie in $\Psi^{\comp}_h(\mathbb R^n)$. (Lemma~\ref{l:gauge-fio} applies since we can insert an element of $\Psi^{\comp}_h$ in between $B,B'$ and other factors.) Since $\WF'_h(B)\subset\Graph(\varkappa)$ and $\WF'_h(B')\subset\Graph(\varkappa^{-1})$ are compact, there exists $\chi\in C_0^\infty(B(y_0,\varepsilon))$ such that $$ B=(\chi\circ\varphi^{-1})B+\mathcal O(h^\infty)_{L^2\to L^2},\quad B'=B'(\chi\circ\varphi^{-1})+\mathcal O(h^\infty)_{L^2\to L^2}. $$ Then we write $$ B'\Op_h(a)B=A'\big(\chi e^{{i\over h}\psi}\varphi^*\Op_h(a)(\varphi^{-1})^*e^{-{i\over h}\psi}\chi\big) A+\mathcal O(h^\infty)_{L^2\to L^2}. $$ By Lemma~\ref{l:chvar}, we can write the operator in parentheses on the right-hand side as $\Op_h(\tilde a)+\mathcal O(h^\infty)_{L^2\to L^2}$ for some $\tilde a\in S^{\comp}_{L_0,\rho}(T^*\mathbb R^n)$; by Lemma~\ref{l:quant-basic}, we have $A'\Op_h(\tilde a)A=\Op_h(b)+\mathcal O(h^\infty)_{L^2\to L^2}$ for some $b\in S^{\comp}_{L_0,\rho}(T^*\mathbb R^n)$.
The expression for the principal part of $b$ and the microlocal vanishing statement follow directly from Lemmas~\ref{l:quant-basic} and~\ref{l:chvar} and the fact that $\sigma_h(A')\sigma_h(A)=\sigma_h(A'A)=\sigma_h(B'B)$. \end{proof} \subsection{General calculus} \label{s:calculus-general} We now introduce pseudodifferential operators associated to general Lagrangian foliations, starting with the following \begin{defi} Let $M$ be a manifold, $U\subset T^*M$ an open set, $L$ a Lagrangian foliation on $U$, and $\rho\in [0,1)$. A family of operators $$ A=A(h):\mathcal D'(M)\to C_0^\infty(M) $$ is called a semiclassical pseudodifferential operator with symbol of class $S_{L,\rho}^{\comp}(U)$ (denoted $A\in\Psi^{\comp}_{h,L,\rho}(U)$) if it can be written in the form \begin{equation} \label{e:represent} A=\sum_{\ell=1}^N B'_\ell \Op_h(a_\ell) B_\ell+\mathcal O(h^\infty)_{\mathcal D'(M)\to C_0^\infty(M)} \end{equation} for some Lagrangian charts $\varkappa_\ell$, Fourier integral operators $B_\ell\in I^{\comp}_h(\varkappa_\ell),B'_\ell\in I^{\comp}_h(\varkappa_\ell^{-1})$, and symbols $a_\ell\in S_{L_0,\rho}^{\comp}(T^*\mathbb R^n)$. \end{defi} \begin{lemm} \label{l:globallem} Let $A\in \Psi^{\comp}_{h,L,\rho}(U)$. Then: 1. $A$ is bounded on $L^2$ uniformly in $h$, pseudolocal, and $\WFh(A)\subset U$ is compact. 2. If $\varkappa:\widetilde U\to T^*\mathbb R^n$ is a Lagrangian chart and $B\in I^{\comp}_h(\varkappa),B'\in I^{\comp}_h(\varkappa^{-1})$, then $$ BAB'=\Op_h(a)+\mathcal O(h^\infty)_{L^2\to L^2} $$ for some $a\in S_{L_0,\rho}^{\comp}(T^*\mathbb R^n)$, $\supp a\subset\varkappa(\widetilde U)$, and for each representation~\eqref{e:represent} of $A$, \begin{equation} \label{e:symbol-transform} a\circ\varkappa=\sigma_h(B'B)\sum_{\ell=1}^N \sigma_h(B'_\ell B_\ell)(a_\ell\circ\varkappa_\ell)+\mathcal O(h^{1-\rho})_{S^{\comp}_{L,\rho}(U)}. 
\end{equation} Moreover, if $h_j\to 0$ and $(x_j,\xi_j)\in U$ are sequences such that for each $\ell$, either $(x_j,\xi_j)\notin \pi_1(\WF'_h (B_\ell))\cap \pi_2(\WF'_h(B'_\ell))$ for all $j$, or $a_\ell\circ\varkappa_\ell$ is $\mathcal O(h^\infty)$ along $(x_j,\xi_j,h_j)$ in the sense of Definition~\ref{d:rapid-decay}, then $a\circ\varkappa$ is $\mathcal O(h^\infty)$ along $(x_j,\xi_j,h_j)$ as well. \end{lemm} \begin{proof} 1. This follows immediately from Lemma~\ref{l:l2-bdd} and the properties of $\Op_h(a)$, $a\in S^{\comp}_{L_0,\rho}(T^*\mathbb R^n)$ established in the paragraph following~\eqref{e:funny-pseudolocal}. 2. We write $A$ in the form~\eqref{e:represent}, then $$ BAB'=\sum_{\ell=1}^N (BB'_\ell)\Op_h(a_\ell)(B_\ell B')+\mathcal O(h^\infty)_{\mathcal D'\to C_0^\infty}. $$ Now, we have $B_\ell B'\in I^{\comp}_h(\varkappa'_\ell)$, where $\varkappa'_\ell=\varkappa_\ell\circ\varkappa^{-1}:\varkappa(\widetilde U\cap U_\ell)\to T^*\mathbb R^n$ is a symplectomorphism onto its image preserving the foliation $L_0$ and $U_\ell$ is the domain of $\varkappa_\ell$. Similarly $BB'_\ell \in I^{\comp}_h((\varkappa'_\ell)^{-1})$. It remains to apply Lemma~\ref{l:gauge}. To see~\eqref{e:symbol-transform}, we use the following corollary of~\eqref{e:symbol-commutes}: $\sigma_h(BB'_\ell B_\ell B')=(\sigma_h(B'_\ell B_\ell)\sigma_h(B'B))\circ\varkappa^{-1}$. \end{proof} We now define the principal symbol and ($h$-dependent) microsupport of an operator in $\Psi^{\comp}_{h,L,\rho}(U)$: \begin{defi} \label{d:lag-prince} Let $A\in \Psi^{\comp}_{h,L,\rho}(U)$. We define the principal symbol $$ \sigma_h^L(A)\in S^{\comp}_{L,\rho}(U)/h^{1-\rho}S^{\comp}_{L,\rho}(U) $$ by the following formula valid for any representation~\eqref{e:represent}: \begin{equation} \label{e:symboldef} \sigma_h^L(A)=\sum_{\ell=1}^N \sigma_h(B'_\ell B_\ell)(a_\ell\circ\varkappa_\ell).
\end{equation} Moreover, if $h_j\to 0$ and $(x_j,\xi_j)\in U$ are some sequences, then we say that $A=\mathcal O(h^\infty)$ microlocally along $(x_j,\xi_j,h_j)$, if for each choice of $\varkappa,B,B'$ in part~2 of Lemma~\ref{l:globallem} and the corresponding symbol $a\in S^{\comp}_{L_0,\rho}(T^*\mathbb R^n)$, the symbol $a\circ\varkappa$ is $\mathcal O(h^\infty)$ along $(x_j,\xi_j,h_j)$. \end{defi} It follows from Lemma~\ref{l:globallem} that $\sigma_h^L(A)$ does not depend on the choice of the representation~\eqref{e:represent} of $A$. To construct an operator with given principal symbol and microsupport, we use the \emph{quantization map} \begin{equation} \label{e:op-h-l} a\in S^{\comp}_{L,\rho}(U)\ \mapsto\ \Op_h^L(a):=\sum_\ell B'_\ell \Op_h(a_\ell)B_\ell, \end{equation} where the sum above has finitely many nonzero terms for each $a$ and \begin{itemize} \item $\varkappa_\ell: U_\ell\to T^*\mathbb R^n$ are Lagrangian charts and $U_\ell$, $\ell\in\mathbb N$, form a locally finite covering of $U$; \item $B_\ell\in I^{\comp}_h(\varkappa_\ell)$, $B'_\ell\in I^{\comp}_h(\varkappa_\ell^{-1})$ are Fourier integral operators such that $\sigma_h(B'_\ell B_\ell)\in C_0^\infty(U_\ell)$ form a partition of unity: $$ \sum_\ell \sigma_h(B'_\ell B_\ell)=1\quad\text{on }U; $$ \item $a_\ell=(\chi_\ell a)\circ\varkappa_\ell^{-1}\in S^{\comp}_{L_0,\rho}(T^*\mathbb R^n)$, where $\chi_\ell\in C_0^\infty(U_\ell)$ are some functions equal to 1 near $\supp \sigma_h(B'_\ell B_\ell)$. \end{itemize} One can choose $\varkappa_\ell$ with the required properties by Lemma~\ref{l:canonical}; for existence of $B_\ell,B'_\ell$ see the discussion following~\eqref{e:quantized}. The quantization map is not canonical as it depends on the choice of $\varkappa_\ell,B_\ell,B'_\ell,\chi_\ell$. Note that if $A\in\Psi^{\comp}_h(M)$ is a pseudodifferential operator in the standard calculus and $\WFh(A)\subset U$, then $A\in\Psi^{\comp}_{h,L,\rho}(U)$ and $\sigma_h^L(A)=\sigma_h(A)$. 
Also, if $a\in S^0_h(T^*M)$ is a symbol in the standard class supported inside $U$, then $\Op_h^L(a)\in\Psi^{\comp}_h(M)$. This follows from the composition property of Fourier integral operators together with~\eqref{e:symbol-commutes}. The basic properties of the symbol map and a quantization map are given by \begin{lemm} \label{l:globalprop} 1. For each $a\in S^{\comp}_{L,\rho}(U)$, $$ \sigma_h^L(\Op_h^L(a))=a+\mathcal O(h^{1-\rho})_{S^{\comp}_{L,\rho}(U)}. $$ 2. For each $A\in\Psi^{\comp}_{h,L,\rho}(U)$, we have $\sigma_h^L(A)=\mathcal O(h^{1-\rho})_{S^{\comp}_{L,\rho}(U)}$ if and only if $A\in h^{1-\rho}\Psi^{\comp}_{h,L,\rho}(U)$. 3. For each $A\in\Psi^{\comp}_{h,L,\rho}(U)$, there exists $a\in S^{\comp}_{L,\rho}(U)$ such that $A=\Op_h^L(a)+\mathcal O(h^\infty)_{\mathcal D'\to C_0^\infty}$. 4. For each $a\in S^{\comp}_{L,\rho}(U)$, if $h_j\to 0$ and $(x_j,\xi_j)\in U$ are sequences such that $a$ is $\mathcal O(h^\infty)$ along $(x_j,\xi_j,h_j)$ (in the sense of Definition~\ref{d:rapid-decay}), then $\Op_h^L(a)$ is $\mathcal O(h^\infty)$ microlocally along $(x_j,\xi_j,h_j)$ (in the sense of Definition~\ref{d:lag-prince}). 5. Let $A,B\in\Psi^{\comp}_{h,L,\rho}(U)$. Then $AB,A^*\in\Psi^{\comp}_{h,L,\rho}(U)$ and $$ \begin{aligned} \sigma_h^L(AB)&=\sigma_h^L(A)\sigma_h^L(B)+\mathcal O(h^{1-\rho})_{S^{\comp}_{L,\rho}(U)},\\ \sigma_h^L(A^*)&=\overline{\sigma_h^L(A)}+\mathcal O(h^{1-\rho})_{S^{\comp}_{L,\rho}(U)}. \end{aligned} $$ \end{lemm} \begin{proof} 1. This follows immediately from~\eqref{e:symboldef}. 2. Assume that $\sigma_h^L(A)=\mathcal O(h^{1-\rho})_{S^{\comp}_{L,\rho}(U)}$; we need to show that $A\in h^{1-\rho}\Psi^{\comp}_{h,L,\rho}(U)$ (the reverse implication follows directly from~\eqref{e:symboldef}). 
Using a pseudodifferential partition of unity, we may assume that $\WFh(A)$ is contained in some open subset $\widetilde U\subset U$ such that there exists a Lagrangian chart $\varkappa:\widetilde U\to T^*\mathbb R^n$ and Fourier integral operators $$ B\in I^{\comp}_h(\varkappa),\ B'\in I^{\comp}_h(\varkappa^{-1});\quad B'B=1+\mathcal O(h^\infty)\quad\text{microlocally near }\WFh(A). $$ Then $$ A=B'(BAB')B+\mathcal O(h^\infty)_{\mathcal D'\to C_0^\infty}. $$ However, by part~2 of Lemma~\ref{l:globallem} we have $$ BAB'=\Op_h(a)+\mathcal O(h^\infty)_{L^2\to L^2},\quad a\in S^{\comp}_{L_0,\rho}(T^*\mathbb R^n), $$ and since $\sigma_h^L(A)=\mathcal O(h^{1-\rho})_{S^{\comp}_{L,\rho}(U)}$, we have $a=\mathcal O(h^{1-\rho})_{S^{\comp}_{L_0,\rho}(T^*\mathbb R^n)}$. Therefore, $a=h^{1-\rho}b$ for some $b\in S^{\comp}_{L_0,\rho}(T^*\mathbb R^n)$ and $$ A=h^{1-\rho}(B'\Op_h(b)B+\mathcal O(h^\infty)_{\mathcal D'\to C_0^\infty})\in h^{1-\rho}\Psi^{\comp}_{h,L,\rho}(U). $$ 3. Put $a_0=\sigma_h^L(A)$. Then $A=\Op_h^L(a_0)+\mathcal O(h^{1-\rho})_{\Psi^{\comp}_{h,L,\rho}(U)}$, therefore $A=\Op_h^L(a_0)+h^{1-\rho}A_1$ for some $A_1\in\Psi^{\comp}_{h,L,\rho}(U)$. By induction we construct a family of operators $A_j\in \Psi^{\comp}_{h,L,\rho}(U)$, $j\in\mathbb N_0$, such that $A_0=A$ and $A_j=\Op_h^L(\sigma_h^L(A_j))+h^{1-\rho}A_{j+1}$. Then we have $A=\Op_h^L(a)+\mathcal O(h^\infty)_{\mathcal D'\to C_0^\infty}$ where $a\in S^{\comp}_{L,\rho}(U)$ is the following asymptotic sum: $$ a\sim\sum_{j=0}^\infty h^{(1-\rho)j}\sigma_h^L(A_j). $$ 4,5. These follow from Lemma~\ref{l:quant-basic} and part~2 of Lemma~\ref{l:globallem}. \end{proof} \subsection{Further properties} We start with an improved bound on the $L^2$ operator norm: \begin{lemm} \label{l:l2-improved} Let $A\in\Psi^{\comp}_{h,L,\rho}(U)$. Then as $h\to 0$, $$ \|A\|_{L^2(M)\to L^2(M)}\leq \sup_U |\sigma_h^L(A)|+o(1). $$ \end{lemm} \begin{proof} Take $\varepsilon>0$ and let $a:=\sigma_h^L(A)$.
It suffices to prove that \begin{equation} \label{e:l2-bdd-new} \limsup_{h\to 0}\|A\|_{L^2(M)\to L^2(M)}\leq C_\varepsilon:=\sup_U |a|+\varepsilon. \end{equation} Define the function $b$ by $$ b=\sqrt{C_\varepsilon^2-|a|^2}. $$ Note that $b=C_\varepsilon$ outside of $\supp a$ and $C_\varepsilon-b\in S^{\comp}_{L,\rho}(U)$. Take the following quantization of $b$: $$ B:=C_\varepsilon-\Op_h^L(C_\varepsilon-b) $$ and note that $B$ is bounded on $L^2$. Since $|a|^2+|b|^2=C_\varepsilon^2$, we have $$ A^*A+B^*B=C_\varepsilon^2+\mathcal O(h^{1-\rho})_{L^2\to L^2}. $$ Applying this to $u\in L^2$ and taking the inner product with $u$, we get $$ \|Au\|_{L^2}^2\leq C_\varepsilon^2\|u\|_{L^2}^2+\mathcal O(h^{1-\rho})\|u\|_{L^2}^2, $$ which implies~\eqref{e:l2-bdd-new}. \end{proof} We next give a version of the elliptic parametrix construction: \begin{lemm} \label{l:lag-elliptic} Assume that $A,B\in\Psi^{\comp}_{h,L,\rho}(U)$ and $B$ is elliptic on the microsupport of $A$ in the following sense: there exists $\varepsilon>0$ such that for all sequences $h_j\to 0$ and $(x_j,\xi_j)\in U$, if $|\sigma_h^L(B)(x_j,\xi_j;h_j)|\leq\varepsilon$, then $A$ is $\mathcal O(h^\infty)$ microlocally along $(x_j,\xi_j,h_j)$. Then there exists $$ Q\in\Psi^{\comp}_{h,L,\rho}(U),\quad A=QB+\mathcal O(h^\infty)_{\mathcal D'\to C_0^\infty}. $$ \end{lemm} \begin{proof} Let $a_0=\sigma_h^L(A),b_0=\sigma_h^L(B)$. We first show that there exists $q_0\in S^{\comp}_{L,\rho}(U)$ such that $a_0=q_0b_0+\mathcal O(h^{1-\rho})_{S^{\comp}_{L,\rho}(U)}$. For that, let $\chi\in C_0^\infty(-\varepsilon,\varepsilon)$ be equal to 1 near the origin. Then from the ellipticity assumption we have $$ a_0 \chi(|b_0|)=\mathcal O(h^{1-\rho})_{S^{\comp}_{L,\rho}(U)} $$ and it remains to put $q_0:=a_0\big(1-\chi(|b_0|)\big)/b_0$. Put $Q_0=\Op_h^L(q_0)$, then $$ A=Q_0B+\mathcal O(h^{1-\rho})_{\Psi^{\comp}_{h,L,\rho}(U)}. $$ We write $A=Q_0B+h^{1-\rho}R_1$ for some $R_1\in \Psi^{\comp}_{h,L,\rho}(U)$.
By part~4 of Lemma~\ref{l:globalprop}, $B$ is elliptic on the microsupport of $Q_0$; therefore, it is elliptic on the microsupport of $R_1$. Repeating the above process, we find a sequence $Q_j=\Op_h^L(q_j),R_j\in \Psi^{\comp}_{h,L,\rho}(U)$ such that $A=R_0$ and $$ R_j=Q_j B+h^{1-\rho}R_{j+1},\quad j=0,1,\dots $$ It remains to take $Q=\Op_h^L(q)$, where $q\in S^{\comp}_{L,\rho}(U)$ is the asymptotic sum $$ q\sim\sum_{j=0}^\infty h^{(1-\rho)j}q_j.\qedhere $$ \end{proof} Finally, we give a version of Egorov's theorem for the $\Psi^{\comp}_{h,L,\rho}$ calculus: \begin{lemm} \label{l:egorov} Let $L$ be a Lagrangian foliation on $U\subset T^*M$, $P\in\Psi^{\comp}_h(M)$, the principal symbol $p=\sigma_h(P)$ be real-valued (and $h$-independent), and \begin{equation} \label{e:egorov-condition} L_{(x,\xi)}\ \subset\ \ker dp(x,\xi)\quad\text{for each }(x,\xi)\in U. \end{equation} Let $A\in\Psi^{\comp}_{h,L,\rho}(U)$ and take $T>0$ such that $e^{-tH_p}(\WFh(A))\subset U$ for all $t\in [0,T]$. Then there exists a family of operators depending smoothly on $t$ $$ A_t\in\Psi^{\comp}_{h,L,\rho}(U),\quad t\in [0,T],\quad A_0=A+\mathcal O(h^\infty)_{\mathcal D'\to C_0^\infty}, $$ such that $\sigma_h^L(A_t)=\sigma_h^L(A)\circ e^{tH_p}+\mathcal O(h^{1-\rho})_{S^{\comp}_{L,\rho}(U)}$ and \begin{equation} \label{e:egorov-equation} ih\partial_t A_t+[P,A_t]=\mathcal O(h^\infty)_{\mathcal D'\to C_0^\infty}. \end{equation} Moreover, if $t$ is fixed and $h_j\to 0$, $(x_j,\xi_j)\in U$ are sequences such that $A$ is $\mathcal O(h^\infty)$ microlocally along $(x_j,\xi_j,h_j)$, then $A_t$ is $\mathcal O(h^\infty)$ microlocally along $(e^{-tH_p}(x_j,\xi_j),h_j)$. \end{lemm} \begin{proof} First of all, by~\eqref{e:egorov-condition} the Hamiltonian vector field $H_p$ lies in $L$. Therefore, the Hamiltonian flow $e^{tH_p}$ preserves $L$, and for each $a\in S^{\comp}_{L,\rho}(U)$, both $H_pa$ and $a\circ e^{tH_p}$ (as long as $e^{-tH_p}(\supp a)\subset U$) lie in $S^{\comp}_{L,\rho}(U)$ as well. 
Next, we have for each $\widetilde A\in\Psi^{\comp}_{h,L,\rho}(U)$ \begin{equation} \label{e:nice-commutator} [P,\widetilde A]={h\over i}\Op_h^L(H_p \sigma_h^L(\widetilde A))+\mathcal O(h^{2-\rho})_{\Psi^{\comp}_{h,L,\rho}(U)}. \end{equation} Indeed, by a pseudodifferential partition of unity and part~2 of Lemma~\ref{l:globallem}, it suffices to prove% \footnote{We do not obtain the error $\mathcal O(h^2)$ in~\eqref{e:nice-commutator} because the right-hand side is in $h\Psi^{\comp}_{h,L,\rho}(U)$ and thus Lemma~\ref{l:globallem} produces an $\mathcal O(h\cdot h^{1-\rho})$ error.} that for each function $f\in C_0^\infty(\mathbb R^n)$, $$ f(y)\#\tilde a-\tilde a\# f(y)={h\over i}\{f(y),\tilde a\}+\mathcal O(h^2)_{S^{\comp}_{L_0,\rho}(T^*\mathbb R^n)} $$ which follows immediately from Lemma~\ref{l:quant-basic}. Take $a\in S^{\comp}_{L,\rho}(U)$ such that $A=\Op_h^L(a)+\mathcal O(h^\infty)_{\mathcal D'\to C_0^\infty}$. Consider the family of operators $$ A^{(0)}_t:=\Op_h^L(a^{(0)}_t),\quad a^{(0)}_t=a\circ e^{tH_p},\quad t\in [0,T]. $$ Then~\eqref{e:nice-commutator} implies $$ ih\partial_t A^{(0)}_t+[P,A^{(0)}_t]=h^{2-\rho}R^{(1)}_t,\quad R^{(1)}_t\in\Psi^{\comp}_{h,L,\rho}(U). $$ Next, put for $t\in [0,T]$, $$ A^{(1)}_t:=\Op_h^L(a^{(1)}_t),\quad a^{(1)}_t:=\int_0^t i\sigma_h^L(R^{(1)}_s)\circ e^{(t-s)H_p}\,ds. $$ Then $$ a^{(1)}_0=0,\quad \partial_t a^{(1)}_t=H_pa^{(1)}_t+i\sigma_h^L(R^{(1)}_t) $$ and thus~\eqref{e:nice-commutator} implies $$ ih\partial_t A^{(1)}_t+[P,A^{(1)}_t]+hR^{(1)}_t=h^{2-\rho}R^{(2)}_t,\quad R^{(2)}_t\in\Psi^{\comp}_{h,L,\rho}(U). $$ Arguing by induction, we construct operators $A^{(j)}_t=\Op_h^L(a^{(j)}_t),R^{(j)}_t\in\Psi^{\comp}_{h,L,\rho}(U)$, $j\in\mathbb N_0$, such that $R^{(0)}_t=0$, $A^{(0)}_0=A+\mathcal O(h^\infty)_{\mathcal D'\to C_0^\infty}$, $A^{(j)}_0=0$ for $j>0$, and $$ ih\partial_t A^{(j)}_t+[P,A^{(j)}_t]+hR^{(j)}_t=h^{2-\rho}R^{(j+1)}_t.
$$ It remains to put $$ A_t:=\Op_h^L(a_t),\quad a_t\sim\sum_{j=0}^\infty h^{(1-\rho)j} a_t^{(j)}.\qedhere $$ \end{proof} \section{Hyperbolic manifolds} \label{s:hyperbolic} In this section, we assume that $(M,g)$ is an $n$-dimensional convex co-compact hyperbolic manifold, that is, a quotient $M=\Gamma\backslash\mathbb H^n$ of the hyperbolic space $\mathbb H^n$ by a convex co-compact (geometrically finite) subgroup $\Gamma$ of the isometry group $\PSO(1,n)$ of $\mathbb H^n$. We refer the reader to~\cite{Borthwick} for the formal definition and properties of these manifolds in the important special case of dimension $n=2$ and to~\cite{perry} for the case of general dimension. We will use the calculus of~\S\ref{s:second-microlocalization} to obtain fine microlocal bounds on the scattering resolvent on $M$, and prove Theorem~\ref{t:fup-reduction} using these bounds. \subsection{Dynamical properties} \label{s:dynamical} Define the function $p\in C^\infty(T^*M\setminus 0)$ by \begin{equation} \label{e:p-symbol} p(x,\xi)=|\xi|_g,\quad (x,\xi)\in T^*M\setminus 0, \end{equation} and let $X$ be the Hamiltonian vector field of $p$. Then \begin{equation} \label{e:g-flow} e^{tX}:T^*M\setminus 0\to T^*M\setminus 0 \end{equation} is the homogeneous rescaling of the geodesic flow. Here homogeneity means that $[X,\xi\cdot\partial_\xi]=0$ where $\xi\cdot\partial_\xi$ is the generator of dilations. In what follows, we will identify the cotangent bundle $T^*M$ with the tangent bundle $TM$ using the metric $g$. 
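As a routine check of the homogeneity statement (a verification we include for convenience; it is not spelled out in the text), let $M_\lambda(x,\xi):=(x,\lambda\xi)$, $\lambda>0$, denote the fiber dilation and $\omega$ the symplectic form on $T^*M$, with the sign convention $\iota_X\omega=dp$ (the opposite convention gives the same conclusion).

```latex
% Sketch: X = H_p commutes with the generator \xi\cdot\partial_\xi of dilations.
% Uses M_\lambda^*\omega = \lambda\omega and M_\lambda^* p = \lambda p
% (degree-one homogeneity of p), together with \iota_X\omega = dp.
\[
\iota_{M_\lambda^*X}\,\omega
 = \lambda^{-1}\,M_\lambda^*\big(\iota_X\omega\big)
 = \lambda^{-1}\,M_\lambda^*(dp)
 = \lambda^{-1}\,d(\lambda p)
 = dp,
\]
```

so $M_\lambda^*X=H_p=X$ for all $\lambda>0$; differentiating $M_{e^s}^*X=X$ at $s=0$ yields $[\xi\cdot\partial_\xi,X]=0$.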
\smallsection{Stable/unstable decomposition} For $(x,\xi)\in T^*M\setminus 0$, we decompose the tangent space at $(x,\xi)$ as follows: \begin{equation} \label{e:sudec} T_{(x,\xi)}(T^*M)=\mathbb RX\oplus \mathbb R(\xi\cdot\partial_\xi)\oplus E_s(x,\xi)\oplus E_u(x,\xi) \end{equation} where $E_s,E_u$ are the $(n-1)$-dimensional stable and unstable bundles, defined in the case $|\xi|_g=1$ for instance in~\cite[(3.14)]{rrh} (recalling the identification $T^*M\simeq TM$), and in general by requiring that they are homogeneous. Note that $E_s,E_u$ are the images of the stable/unstable bundles of $\mathbb H^n$ (which are also denoted $E_s,E_u$) under the covering map \begin{equation} \label{e:pi-Gamma} \pi_\Gamma:T^*\mathbb H^n\to T^*M, \end{equation} and they are tangent to the level sets of $p$. The subbundles $E_s,E_u$ are invariant under the flow $e^{tX}$. Moreover, the projection map $T_{(x,\xi)}T^*M\to T_x M$ is an isomorphism from $E_s(x,\xi)$ onto the space $\{\eta\in T_xM\mid \langle\xi,\eta\rangle=0\}$. Therefore, we can canonically pull back the metric $g_x$ to $E_s(x,\xi)$. The same is true for $E_u(x,\xi)$, and we have (see for instance~\cite[\S3.3]{rrh}) \begin{equation} \label{e:stun} |de^{tX}(x,\xi)v|_g=\begin{cases} e^t |v|_g,&\quad v\in E_u(x,\xi);\\ e^{-t}|v|_g,&\quad v\in E_s(x,\xi). \end{cases} \end{equation} For each $(x,\xi)\in T^*M\setminus 0$, consider the \emph{weak stable/unstable subspaces} \begin{equation} \label{e:L-s-L-u} L_s(x,\xi):=\mathbb R X(x,\xi)\oplus E_s(x,\xi),\quad L_u(x,\xi):=\mathbb R X(x,\xi)\oplus E_u(x,\xi). \end{equation} Define the maps \begin{equation} \label{e:B-pm} B_\pm:T^*\mathbb H^n\setminus 0\to\mathbb S^{n-1} \end{equation} as follows: for $(x,\xi)\in T^*\mathbb H^n$, $B_\pm(x,\xi)$ is the limit of the projection to the ball model of $\mathbb H^n$ of the geodesic $e^{tX}(x,\xi)$ as $t\to\pm\infty$~-- see for instance~\cite[\S3.4]{rrh}.
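We also record two elementary properties of $B_\pm$ which are used implicitly below: the points $e^{tX}(x,\xi)$ and $(x,s\xi)$ project to reparametrizations of the same geodesic as $(x,\xi)$, therefore
\begin{equation*}
B_\pm(e^{tX}(x,\xi))=B_\pm(x,\xi),\qquad B_\pm(x,s\xi)=B_\pm(x,\xi),\qquad t\in\mathbb R,\ s>0,
\end{equation*}
that is, $B_\pm$ depend only on the oriented geodesic through $(x,\xi)$.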
Then the lifts of $L_s,L_u$ to $T^*\mathbb H^n\setminus 0$ are given by~\cite[(3.25)]{rrh} \begin{equation} \label{e:desker} \begin{aligned} \pi_\Gamma^*L_s(x,\xi)&=\ker dB_+(x,\xi)\cap \ker dp(x,\xi),\\ \pi_\Gamma^*L_u(x,\xi)&=\ker dB_-(x,\xi)\cap \ker dp(x,\xi). \end{aligned} \end{equation} \begin{lemm} \label{l:our-ls} $L_s$ and $L_u$ are Lagrangian foliations on $T^*M\setminus 0$ in the sense of Definition~\ref{d:l-foli}. \end{lemm} \begin{proof} We consider the case of $L_s$; the case of $L_u$ is handled similarly. Using the covering map $\pi_\Gamma$, we reduce to the case $M=\mathbb H^n$. The fact that $L_s$ is integrable follows immediately from~\eqref{e:desker}. Since $\dim L_s=n$, it remains to show that $\omega(Y_1,Y_2)=0$ for each $Y_1,Y_2\in L_s(x,\xi)$, where $\omega$ is the symplectic form on $T^*\mathbb H^n$. When $Y_1=X(x,\xi)$, this is immediate since $L_s(x,\xi)\subset\ker dp(x,\xi)$. Therefore, we may assume that $Y_1,Y_2\in E_s(x,\xi)$. Since $e^{tX}$ is a Hamiltonian flow, it is a symplectomorphism, and we find $$ |\omega(Y_1,Y_2)|=|\omega(de^{tX}(x,\xi)Y_1,de^{tX}(x,\xi)Y_2)|\leq C|de^{tX}(x,\xi)Y_1|_g\cdot |de^{tX}(x,\xi)Y_2|_g $$ where the constant $C$ in the last inequality is independent of $t$ since the isometry group $\PSO(1,n)$ acts transitively on $\mathbb H^n$ and the lifted action on $T^*\mathbb H^n\setminus 0$ preserves $\omega$, $E_s$, and the induced metric on $E_s$. Letting $t\to +\infty$ and using~\eqref{e:stun}, we get $\omega(Y_1,Y_2)=0$ as required. \end{proof} The next lemma states that the result of propagating a compactly supported symbol up to almost twice the Ehrenfest time lies in the anisotropic class $S^{\comp}_{L,\rho}$ from Definition~\ref{d:symbols}, where $L=L_s$ or $L=L_u$ depending on the direction of propagation. See Appendix~\ref{s:hyperbolic-technical} for the proof. \begin{lemm} \label{l:propagated-okay} Let $\chi_1,\chi_2\in C_0^\infty(T^*M\setminus 0)$ be independent of $h$ and fix $\rho\in [0,1)$.
Then we have uniformly in $t\in [0,\rho\log(1/h)]$, $$ \chi_2(\chi_1\circ e^{tX})\in S^{\comp}_{L_s,\rho}(T^*M\setminus 0),\quad \chi_2(\chi_1\circ e^{-tX})\in S^{\comp}_{L_u,\rho}(T^*M\setminus 0). $$ \end{lemm} \smallsection{Infinity and trapping} Since $M$ is a convex co-compact hyperbolic manifold, it is \emph{even asymptotically hyperbolic} in the sense of~\cite[Definition~1.2]{guillarmou}; more precisely, it is the interior of a compact manifold with boundary $\overline M$ such that near $\partial\overline M$, $$ g={d\tilde x^2+g_1(\tilde x^2,\tilde y)\over \tilde x^2}, $$ where $\tilde x\geq 0$ is a boundary defining function and $(\tilde x,\tilde y)\in (0,\varepsilon)\times \partial\overline M$ are some product coordinates on a collar neighborhood of $\partial\overline M$. It is shown for example in~\cite[Lemma~7.1]{qeefun} that there exists a function $r:M\to\mathbb R$ such that the sets $\{r\leq R\}$ are compact for all $R$ and strictly convex for all $R\geq 0$; that is, if we restrict $r$ to any geodesic on $M$ and denote by dots derivatives with respect to the geodesic parameter, then at each point of the geodesic we have \begin{equation} \label{e:convex} r\geq 0,\ \dot r=0\quad\Longrightarrow\quad \ddot r>0. \end{equation} In fact, it suffices to take $r:=\tilde x^{-1}-r_1$ for a boundary defining function $\tilde x$ of $\overline M$ and a large enough constant $r_1>0$. We now define the \emph{incoming/outgoing tails} $\Gamma_\pm$ by \begin{equation} \label{e:GpmDef} \Gamma_\pm=\{(x,\xi)\in T^*M\setminus 0\mid r(e^{tX}(x,\xi)) \text{ is bounded as }t\to\mp\infty\}. \end{equation} Define also the \emph{trapped set} $K=\Gamma_+\cap\Gamma_-$. It follows from~\eqref{e:convex} that $\Gamma_\pm$ are closed subsets of $T^*M\setminus 0$ and $K\subset \{r<0\}$, see for instance~\cite[\S4.1]{qeefun}.
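For the reader's orientation, we sketch the elementary consequence of~\eqref{e:convex} behind these claims: if $t_0$ is a critical point of $t\mapsto r(e^{tX}(x,\xi))$ at which $r\geq 0$, then $t_0$ is a strict local minimum, so for each $R\geq 0$ the visit set
\begin{equation*}
\{t\in\mathbb R\mid r(e^{tX}(x,\xi))\leq R\}
\end{equation*}
is an interval (possibly empty or unbounded); otherwise $r$ would have an interior local maximum along the trajectory with value greater than $R\geq 0$. This monotone escape property is the mechanism behind the closedness of $\Gamma_\pm$ and the inclusion $K\subset\{r<0\}$; see~\cite[\S4.1]{qeefun} for the complete argument.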
We assume that $K\neq\emptyset$ (in the case when $K=\emptyset$, $M$ is known to have an arbitrarily large essential spectral gap, see for instance~\cite[(1.1)]{vasy2}). Recall that $M=\Gamma\backslash\mathbb H^n$, where $\Gamma$ is a convex co-compact group of hyperbolic isometries. Define the \emph{limit set} $\Lambda_\Gamma\subset \mathbb S^{n-1}$ as follows: for each $x\in \mathbb H^n$, \begin{equation} \label{e:Lambda-Gamma} \Lambda_\Gamma=\overline{\{\gamma.x\mid \gamma\in\Gamma\}}\cap \mathbb S^{n-1} \end{equation} where we use the ball model of the hyperbolic space and the closure is taken in the closed ball in $\mathbb R^n$. The resulting set is closed and independent of the choice of $x$; see for instance~\cite{Patterson} and~\cite[Lemma~2.8]{Borthwick} for the case of $n=2$ and~\cite{Sullivan} for general $n$. For each $(x,\xi)\in T^*\mathbb H^n\setminus 0$, we have (see Appendix~\ref{s:hyperbolic-technical} for the proof) \begin{equation} \label{e:Gpm-formula} \begin{aligned} \pi_\Gamma(x,\xi)\in\Gamma_+\quad\iff\quad B_-(x,\xi)\in\Lambda_\Gamma,\\ \pi_\Gamma(x,\xi)\in\Gamma_-\quad\iff\quad B_+(x,\xi)\in\Lambda_\Gamma, \end{aligned} \end{equation} where the maps $B_\pm$ are defined in~\eqref{e:B-pm} and $\pi_\Gamma$ is defined in~\eqref{e:pi-Gamma}. The following statement, when combined with~\eqref{e:Gpm-formula}, implies that for a trajectory $(x(t),\xi(t))$ of $e^{tX}$ on $T^*M\setminus 0$ which stays in a fixed compact set for all $t\in [0,T]$, the point $(x(0),\xi(0))$ is $\mathcal O(e^{-T})$ close to $\Gamma_-$ and the point $(x(T),\xi(T))$ is $\mathcal O(e^{-T})$ close to $\Gamma_+$. See Appendix~\ref{s:hyperbolic-technical} for the proof. \begin{lemm} \label{l:close-to-trapping} Let $V\subset T^*\mathbb H^n\setminus 0$ be a compact set. Then there exists a constant $C$ such that for each $t\geq 0$, $$ (x,\xi)\in V,\quad \pi_\Gamma(e^{\pm tX}(x,\xi))\in\pi_\Gamma(V)\quad\Longrightarrow\quad d(B_\pm(x,\xi),\Lambda_\Gamma)\leq Ce^{-t}. 
$$ Here $d(\cdot,\cdot)$ is the Euclidean distance function on $\mathbb S^{n-1}$. \end{lemm} \subsection{Scattering resolvent} \label{s:scattering-resolvent} Consider the Laplace--Beltrami operator $\Delta$ on $(M,g)$ and its $L^2$ resolvent $$ R(\lambda)=\Big(-\Delta-{(n-1)^2\over 4}-\lambda^2\Big)^{-1}:L^2(M)\to H^2(M),\quad \Im\lambda>0, $$ which may have finitely many poles corresponding to eigenvalues of $-\Delta$ on the interval $\big[0,{(n-1)^2\over 4}\big)$. Then $R(\lambda)$ continues meromorphically with poles of finite rank as a family of operators \begin{equation} \label{e:R-lambda} R(\lambda):L^2_{\comp}(M)\to H^2_{\loc}(M),\quad \lambda\in\mathbb C. \end{equation} A related question of continuation of Eisenstein series was studied by Patterson~\cite{patterson1,patterson2} in dimension 2 and Perry~\cite{perry,perry2} in higher dimensions. The continuation of $R(\lambda)$ was established by Mazzeo--Melrose~\cite{mazzeo-melrose} and Guillarmou~\cite{guillarmou} for general (even) asymptotically hyperbolic manifolds, and by Guillop\'e--Zworski~\cite{guillope-zworski} for manifolds of constant curvature near infinity. We refer the reader to~\cite[Chapter~6]{Borthwick} for the proof in dimension 2 and an overview of the history of the subject. To study essential spectral gaps, we write \begin{equation} \label{e:nu-def} \lambda=h^{-1}-i\nu,\quad \nu\in [-1,\beta-\varepsilon], \end{equation} where the semiclassical parameter $h>0$ needs to be small enough for the argument to work. (Resolvent bounds for negative $\Re\lambda$ follow from bounds for positive $\Re\lambda$ since $R(\lambda)^*=R(-\bar\lambda)$.) We introduce the semiclassical resolvent $$ R_h(\omega):=h^{-2}R(\lambda),\quad \omega:=h\lambda=1-ih\nu. $$ To derive high frequency estimates near infinity, we use the construction of the meromorphic continuation of the resolvent due to Vasy~\cite{vasy1,vasy2}. See in particular~\cite[\S5.1]{vasy2} and also~\cite[Lemma~2.1]{fwl}, \cite[\S4.4]{nhp}. 
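Unwinding the definitions, in the regime~\eqref{e:nu-def} we have
\begin{equation*}
\omega=1-ih\nu,\qquad \omega^2=1-h^2\nu^2-2ih\nu,\qquad R_h(\omega)=\Big(-h^2\Delta-{h^2(n-1)^2\over 4}-\omega^2\Big)^{-1}\quad\text{for }\Im\lambda>0,
\end{equation*}
so proving an essential spectral gap of size $\beta$ amounts to bounding this family of operators for a spectral parameter $\omega^2$ lying only $\sim h|\nu|$ away from the real axis.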
The book \cite[Chapter~5]{dizzy} provides a detailed account of a slightly modified version of Vasy's method, which could be used in the present paper, and~\cite{vfd} gives a short self-contained introduction in the nonsemiclassical case. (For the constant curvature case considered here, one could alternatively apply complex scaling, see Zworski~\cite{zworski-inventiones} and Datchev~\cite{trumpet-of-death}.) Specifically, we write $$ R_h(\omega)f=\psi_1 (\mathcal P_h(\omega)^{-1}\psi_2 f)|_M,\quad f\in C_0^\infty(M). $$ Here $\psi_j\in C^\infty(M)$, $j=1,2$, are certain nonvanishing functions depending on $\omega,h$ and $\mathcal P_h(\omega)\in\Psi^2_h(M_{\mathrm{ext}})$ is a certain family of semiclassical pseudodifferential operators on a compact manifold $M_{\mathrm{ext}}$ containing $M$ as an open subset; we have \begin{equation} \label{e:p-h-def} (\mathcal P_h(\omega)u)|_M=\psi_2\Big(-h^2\Delta-{h^2(n-1)^2\over 4}-\omega^2\Big)\psi_1 (u|_M),\quad u\in C^\infty(M_{\mathrm{ext}}). \end{equation} Moreover, $\mathcal P_h(\omega)$ is a Fredholm operator between the spaces $$ \{u\in H^s_h(M_{\mathrm{ext}})\mid \mathcal P_h(\omega)u\in H^{s-1}_h(M_{\mathrm{ext}})\}\ \to\ H^{s-1}_h(M_{\mathrm{ext}}) $$ provided that $s>0$ is large enough depending on $\beta$; the inverse $\mathcal P_h(\omega)^{-1}:H^{s-1}_h(M_{\mathrm{ext}})\to H^s_h(M_{\mathrm{ext}})$ is meromorphic in $\omega$ with poles of finite rank (if we treat $\omega$ and $h$ as independent parameters). For each fixed $r_0>0$ we may arrange so that $\psi_1=\psi_2=1$ on $\{r\leq r_0\}$, see for instance the paragraph preceding~\cite[(3.14)]{vasy2}. 
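Since $\psi_1,\psi_2$ are nonvanishing, the identity $R_h(\omega)f=\psi_1 (\mathcal P_h(\omega)^{-1}\psi_2 f)|_M$ shows that in the regime~\eqref{e:nu-def} every pole of $R_h(\omega)$ is a pole of $\mathcal P_h(\omega)^{-1}$; in other words,
\begin{equation*}
\mathcal P_h(\omega)^{-1}\ \text{holomorphic and bounded near }\omega\quad\Longrightarrow\quad \lambda=\omega/h\ \text{is not a pole of }R(\lambda),
\end{equation*}
so quantitative bounds on $\mathcal P_h(\omega)^{-1}$ both exclude resonances and estimate the scattering resolvent.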
Therefore, to show the resolvent bound~\eqref{e:essential-gap2} it suffices to prove the estimate \begin{equation} \label{e:est0} \|u\|_{H^s_h(M_{\mathrm{ext}})}\leq Ch^{-1-2\max(0,\nu)-\varepsilon}\|f\|_{H^{s-1}_h(M_{\mathrm{ext}})} \end{equation} when $h$ is small enough depending on $\varepsilon$ and \begin{equation} \label{e:eq0} \mathcal P_h(\omega)u=f,\quad u\in H^s_h(M_{\mathrm{ext}}),\quad f\in H^{s-1}_h(M_{\mathrm{ext}}). \end{equation} (The resulting $H^{s-1}_h\to H^s_h$ estimate on $\chi R_h(\omega)\chi$ can be converted to an $L^2\to L^2$ estimate using the elliptic parametrix of $-h^2\Delta-{h^2(n-1)^2\over 4}-\omega^2$ near the fiber infinity, see for instance~\cite[Proposition~3.3]{nhp}.) We use the following outgoing estimates on the operator $\mathcal P_h(\omega)$. Their meaning is as follows: since $R_h(\omega)$ is the \emph{outgoing} resolvent (in the sense that it maps compactly supported functions on $M$ to functions with outgoing behavior at the infinity of $M$), it should be \emph{semiclassically outgoing}, that is propagate singularities in the forward direction along the geodesic flow. In particular if $\tilde u=R_h(\omega)\tilde f$ and $\tilde f=\mathcal O(h^\infty)$, then $\WFh(\tilde u)$ is contained in the outgoing tail $\Gamma_+$ (as follows from~\eqref{e:pest-1} below). Moreover, if we control $u$ near the trapped set then we can bound its norm everywhere (as follows from~\eqref{e:pest-2} below). \begin{lemm} \label{l:outgoing} For each $u,f$ satisfying~\eqref{e:eq0}, we have the following estimates: 1. Assume that $A_1\in\Psi^0_h(M_{\mathrm{ext}})$, $\WFh(A_1)\subset \{r\leq r_0\}\subset\overline T^*M$, and \begin{equation} \label{e:pest-1-cond} \WFh(A_1)\cap\Gamma_+\cap \{|\xi|_g=1\}=\emptyset. \end{equation} Then \begin{equation} \label{e:pest-1} \|A_1u\|_{H^s_h(M_{\mathrm{ext}})}\leq Ch^{-1}\|f\|_{H^{s-1}_h(M_{\mathrm{ext}})}+\mathcal O(h^\infty)\|u\|_{H^s_h(M_{\mathrm{ext}})}. \end{equation} 2. 
Assume that $A_2\in\Psi^{\comp}_h(M_{\mathrm{ext}})$ is elliptic on $K\cap \{|\xi|_g=1\}$. Then \begin{equation} \label{e:pest-2} \|u\|_{H^s_h(M_{\mathrm{ext}})}\leq C\|A_2u\|_{L^2}+Ch^{-1}\|f\|_{H^{s-1}_h(M_{\mathrm{ext}})}. \end{equation} \end{lemm} \noindent\textbf{Remark.} The estimates~\eqref{e:pest-1}, \eqref{e:pest-2} make it possible to treat the infinite ends of our manifold as a black box; see~\cite[\S4]{nhp} for a more formal treatment. In particular, our results would apply to any manifold with the same trapping structure as a convex co-compact hyperbolic quotient and infinite ends which satisfy~\eqref{e:pest-1}, \eqref{e:pest-2}; this includes Euclidean ends~\cite[\S4.3]{nhp} and general even asymptotically hyperbolic ends~\cite[\S4.4]{nhp}. \begin{proof}[Sketch of proof] Both of these statements follow from the elliptic estimate~\cite[Proposition~3.2]{nhp}, propagation of singularities~\cite[Proposition~3.4]{nhp}, and radial points estimates~\cite[Propositions~2.10 and~2.11]{vasy1} applied to the dynamical picture of the Hamiltonian flow of the principal symbol of $\mathcal P_h(\omega)$ as studied in~\cite{vasy1,vasy2}. More precisely, condition~\eqref{e:pest-1-cond} guarantees that each point in $\WFh(A_1)$ either lies in the elliptic set of $\mathcal P_h(\omega)$ or the corresponding backwards Hamiltonian flow line converges to the radial sets, near which $u$ is controlled when $s$ is large enough depending on $\beta$; this yields~\eqref{e:pest-1}. Next, each backwards Hamiltonian flow line of $\mathcal P_h(\omega)$ either passes through its elliptic set, or converges to the radial sets, or passes through the elliptic set of $A_2$; this yields~\eqref{e:pest-2}. The proof (in a modified setting using domains with boundary, which however works equally well for our purposes) is described in detail in~\cite[\S6.2.3]{dizzy}. 
We also refer the reader to~\cite[Lemma~4.4]{fwl} and~\cite[Lemma~4.1]{nhp} for more details on the dynamics of the flow and to~\cite[Lemmas~4.4 and~4.6]{nhp} for slightly different proofs involving a semiclassically outgoing parametrix for the resolvent. \end{proof} Finally, we write a pseudodifferential equation (see~\eqref{e:eq1} below) which is a direct consequence of~\eqref{e:eq0} but more convenient for Lemma~\ref{l:general-propagation} below because the principal symbol of the associated operator is the function $p$ given by~\eqref{e:p-symbol}. Consider the set \begin{equation} \label{e:W-0} W_0:=\{r\leq r_0,\ |\xi|_g\in [1/2,2]\}\subset T^*M. \end{equation} Take $P\in\Psi^{\comp}_h(M)$ such that $P^*=P$ and \begin{equation} \label{e:the-P} \begin{gathered} P^2=-h^2\Delta-{h^2(n-1)^2\over 4}+\mathcal O(h^\infty)\quad\text{microlocally near } W_0,\\ \sigma_h(P)(x,\xi)=p(x,\xi)=|\xi|_g\quad\text{near }W_0. \end{gathered} \end{equation} We can construct such an operator following~\cite[Lemma~4.6]{grigis-sjostrand}: first take $P_0\in\Psi^{\comp}_h(M)$ such that $P_0^*=P_0$ and $\sigma_h(P_0)=p$ near $W_0$. Denote $\mathbf P:=-h^2\Delta-{h^2(n-1)^2\over 4}$, then $\sigma_h(P_0^2)=\sigma_h(\mathbf P)$ near $W_0$ and thus $$ \mathbf P=P_0^2+hR_0+\mathcal O(h^\infty)\quad\text{microlocally near }W_0 $$ for some $R_0\in\Psi^{\comp}_h(M)$ with $R_0^*=R_0$. We next construct $P_1\in\Psi^{\comp}_h(M)$ such that $P_1^*=P_1$ and $$ \mathbf P=(P_0+hP_1)^2+h^2R_1+\mathcal O(h^\infty)\quad\text{microlocally near }W_0 $$ for some $R_1\in\Psi^{\comp}_h(M)$ with $R_1^*=R_1$; to do that, it suffices to put $\sigma_h(P_1)=\sigma_h(R_0)/(2p)$ near $W_0$. Arguing by induction, we construct a family of operators $P_j\in \Psi^{\comp}_h(M)$ such that $P_j^*=P_j$ and $$ \mathbf P=(P_0+hP_1+\dots+h^m P_m)^2+\mathcal O(h^{m+1})\quad\text{microlocally near }W_0; $$ it remains to take as $P$ the asymptotic sum $P\sim\sum_{j=0}^\infty h^jP_j$.
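The choice of $\sigma_h(P_1)$ above comes from expanding the square: microlocally near $W_0$,
\begin{equation*}
(P_0+hP_1)^2=P_0^2+h(P_0P_1+P_1P_0)+h^2P_1^2,\qquad \sigma_h(P_0P_1+P_1P_0)=2p\,\sigma_h(P_1),
\end{equation*}
so the term $hR_0$ is cancelled modulo $\mathcal O(h^2)$ exactly when $2p\,\sigma_h(P_1)=\sigma_h(R_0)$ near $W_0$; the division by $2p$ is harmless since $p=|\xi|_g\in[1/2,2]$ on $W_0$. Each subsequent step divides the symbol of the accumulated error by $2p$ in the same way.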
By~\eqref{e:p-h-def}, and since $\psi_1=\psi_2=1$ near $\{r\leq r_0\}$, we have $$ \|B(P^2-\omega^2)u\|_{L^2}\leq C\|f\|_{H^{s-1}_h(M_{\mathrm{ext}})}+\mathcal O(h^\infty)\|u\|_{H^s_h(M_{\mathrm{ext}})} $$ for $u,f$ satisfying~\eqref{e:eq0} and each $B\in\Psi^{\comp}_h(M)$ such that $\WFh(B)$ is contained in some small neighborhood of $W_0$. We write $(P^2-\omega^2)=(P+\omega)(P-\omega)$ and note that $P+\omega$ is elliptic on $W_0$; therefore, the elliptic estimate~\cite[Proposition~3.2]{nhp} gives \begin{equation} \label{e:eq1} \|A(P-\omega)u\|_{L^2}\leq C\|f\|_{H^{s-1}_h(M_{\mathrm{ext}})}+\mathcal O(h^\infty)\|u\|_{H^s_h(M_{\mathrm{ext}})} \end{equation} for each $A\in\Psi^{\comp}_h(M)$ such that $\WFh(A)\subset W_0$. \subsection{Second microlocalization of the resolvent} We now take the first step towards proving a spectral gap, which is to use the calculus of~\S\ref{s:second-microlocalization} and the Lagrangian foliations $L_u,L_s$ of~\eqref{e:L-s-L-u} to obtain fine microlocal estimates on solutions to~\eqref{e:eq0}. We start with a general propagation estimate: \begin{lemm} \label{l:general-propagation} Let $a,b\in S^{\comp}_{L,\rho}(T^*M\setminus 0)$ where $L\in \{L_u,L_s\}$, $\rho\in[0,1)$, and fix $T>0$. Assume that $|a|\leq 1$ everywhere and $$ e^{-TX}(\supp a)\ \subset\ \{b=1\};\quad e^{-tX}(\supp a)\ \subset\ W_0,\quad t\in [0,T], $$ where $W_0\subset T^*M\setminus 0$ is defined in~\eqref{e:W-0}. Then for each $\varepsilon_0>0$ and each $u,f$ satisfying~\eqref{e:eq0} we have $$ \|\Op_h^L(a) u\|_{L^2}\leq (e^{\nu T}+\varepsilon_0)\|\Op_h^L(b)u\|_{L^2}+Ch^{-1}\|f\|_{H^{s-1}_h(M_{\mathrm{ext}})}+\mathcal O(h^\infty)\|u\|_{H^s_h(M_{\mathrm{ext}})}, $$ where $\nu$ is defined in~\eqref{e:nu-def} and $\Op_h^L$ is a quantization procedure described in~\eqref{e:op-h-l}. \end{lemm} \begin{proof} Let $P\in\Psi^{\comp}_h(M)$ be the operator defined in~\eqref{e:the-P}.
Consider the family of operators $A_t\in\Psi^{\comp}_{h,L,\rho}(T^*M\setminus 0)$, $t\in [0,T]$, constructed in Lemma~\ref{l:egorov}, with $A_0=\Op_h^L(a)+\mathcal O(h^\infty)$; here~\eqref{e:egorov-condition} holds since $\sigma_h(P)=p$ near $W_0$ and $L_u,L_s\subset\ker dp$. Using~\eqref{e:egorov-equation}, \eqref{e:eq1}, and the fact that $P^*=P$, we write $$ \begin{aligned} \partial_t\|A_t u\|_{L^2}^2&=2\Re\langle \partial_t A_t u,A_t u\rangle\\ &=-{2\over h}\Im\langle [P,A_t] u,A_t u\rangle+\mathcal O(h^\infty)\|u\|_{H^s_h(M_{\mathrm{ext}})}\cdot \|A_t u\|_{L^2}\\ &={2\over h}\Im\langle A_t Pu, A_t u\rangle+\mathcal O(h^\infty)\|u\|_{H^s_h(M_{\mathrm{ext}})}\cdot \|A_t u\|_{L^2}\\ &\geq -2\nu\|A_t u\|^2_{L^2}-(Ch^{-1}\|f\|_{H^{s-1}_h(M_{\mathrm{ext}})}+\mathcal O(h^\infty)\|u\|_{H^s_h(M_{\mathrm{ext}})})\|A_tu\|_{L^2}. \end{aligned} $$ Integrating this, we get \begin{equation} \label{e:prop1} \|\Op_h^L(a)u\|_{L^2}\leq e^{\nu T}\|A_Tu\|_{L^2}+Ch^{-1}\|f\|_{H^{s-1}_h(M_{\mathrm{ext}})}+\mathcal O(h^\infty)\|u\|_{H^s_h(M_{\mathrm{ext}})}. \end{equation} Now, it follows from part~4 of Lemma~\ref{l:globalprop} and Lemma~\ref{l:egorov} that for all sequences $h_j\to 0$ and $(x_j,\xi_j)\in T^*M\setminus 0$ such that $e^{TX}(x_j,\xi_j)\notin\supp a(\bullet;h_j)$, the operator $A_T$ is $\mathcal O(h^\infty)$ microlocally along $(x_j,\xi_j,h_j)$ in the sense of Definition~\ref{d:lag-prince}. We then apply Lemma~\ref{l:lag-elliptic} to write $$ A_T=Q\Op_h^L(b)+\mathcal O(h^\infty)_{\mathcal D'\to C_0^\infty},\quad Q\in \Psi^{\comp}_{h,L,\rho}(T^*M\setminus 0). $$ Moreover, Lemma~\ref{l:egorov} and the proof of Lemma~\ref{l:lag-elliptic} give $$ \sigma_h^L(Q)=(a\circ e^{TX})/b+\mathcal O(h^{1-\rho})_{S^{\comp}_{L,\rho}(T^*M\setminus 0)} =a\circ e^{TX}+\mathcal O(h^{1-\rho})_{S^{\comp}_{L,\rho}(T^*M\setminus 0)}. $$ By Lemma~\ref{l:l2-improved}, we have $\|Q\|_{L^2\to L^2}\leq 1+\varepsilon_1$ for each $\varepsilon_1>0$ and $h$ small enough depending on $\varepsilon_1$.
Therefore, $$ e^{\nu T}\|A_T u\|_{L^2}\leq (e^{\nu T}+\varepsilon_0)\|\Op_h^L(b)u\|_{L^2} +\mathcal O(h^\infty)\|u\|_{H^s_h(M_{\mathrm{ext}})} $$ which together with~\eqref{e:prop1} finishes the proof. \end{proof} We can now prove second microlocal estimates on solutions to~\eqref{e:eq0}. Roughly speaking, in the case $f=0$ and $t=\rho\log(1/h)$ the estimate~\eqref{e:second1} below states that $u$ is concentrated $h^\rho$-close to $\Gamma_+$ (for each $\rho<1$) and the estimate~\eqref{e:second2} states that $u$ has to be of size at least $h^{\nu\rho+}$ in an $h^\rho$-neighborhood of $\Gamma_-$~-- see~\eqref{e:g++1} and~\eqref{e:g++2}. In~\S\ref{s:fun}, we will see that the combination of these two facts with the fractal uncertainty principle implies that $\nu$ cannot be too small, giving an essential spectral gap. \begin{lemm} \label{l:second} Let $\chi\in C_0^\infty(T^*M\setminus 0;[0,1])$ be equal to 1 near $K\cap \{|\xi|_g=1\}$. Fix $\rho\in [0,1)$. Then there exists $T>0$ such that we have for each $\varepsilon_0>0$, uniformly in $t\in [T,\rho\log(1/h)]$, and $u,f$ satisfying~\eqref{e:eq0} \begin{gather} \label{e:second1} \big\|\Op_h^{L_u}\big(\chi(1-\chi\circ e^{-tX})\big)u\big\|_{L^2}\ \leq\ Ch^{-1}e^{(\max(0,\nu)+\varepsilon_0)t}\|f\|_{H^{s-1}_h} +\mathcal O(h^\infty)\|u\|_{H^s_h},\\ \label{e:second2} \|u\|_{H^s_h}\ \leq\ Ce^{(\nu+\varepsilon_0)t}\big\|\Op_h^{L_s}\big(\chi(\chi\circ e^{tX})\big)u\big\|_{L^2}+Ch^{-1}e^{(\max(0,\nu)+\varepsilon_0)t}\|f\|_{H^{s-1}_h}. \end{gather} Here $\chi(1-\chi\circ e^{-tX})\in S^{\comp}_{L_u,\rho}(T^*M\setminus 0)$, $\chi(\chi\circ e^{tX})\in S^{\comp}_{L_s,\rho}(T^*M\setminus 0)$ by Lemma~\ref{l:propagated-okay}.
\end{lemm} \begin{proof} Denote $$ \begin{aligned} F_+(t)&:=\big\|\Op_h^{L_u}\big(\chi(1-\chi\circ e^{-tX})\big)u\big\|_{L^2},\\ F_-(t)&:=\big\|\Op_h^{L_s}\big(\chi(\chi\circ e^{tX})\big)u\big\|_{L^2}, \end{aligned} $$ then it suffices to show that for each $\varepsilon_0>0$ there exists $T>0$ such that for all $t_0\in [T/2,T]$ and $t\in [0,\rho\log(1/h)]$, we have (with constants uniform in $t_0,t$) \begin{gather} \label{e:second1.1} F_+(t+t_0)\ \leq\ e^{(\nu+\varepsilon_0)t_0} F_+(t)+Ch^{-1}\|f\|_{H^{s-1}_h}+\mathcal O(h^\infty)\|u\|_{H^s_h},\\ \label{e:second1.2} F_-(t)\ \leq\ e^{(\nu+\varepsilon_0)t_0} F_-(t+t_0)+Ch^{-1}\|f\|_{H^{s-1}_h}+\mathcal O(h^\infty)\|u\|_{H^s_h}. \end{gather} Indeed, iterating these estimates we get for all $t\in [T,\rho\log(1/h)]$ \begin{align} \label{e:second2.1} F_+(t)\ &\leq\ e^{(\nu+\varepsilon_0)t}F_+(T/2)+Ch^{-1}e^{(\max(0,\nu)+\varepsilon_0)t}\|f\|_{H^{s-1}_h}+\mathcal O(h^\infty)\|u\|_{H^s_h},\\ \label{e:second2.2} F_-(0)\ &\leq\ e^{(\nu+\varepsilon_0)t}F_-(t)+Ch^{-1}e^{(\max(0,\nu)+\varepsilon_0)t}\|f\|_{H^{s-1}_h}+\mathcal O(h^\infty)\|u\|_{H^s_h}. \end{align} By~\eqref{e:dynamo-1} below, the wavefront set of $\Op_h^{L_u}\big(\chi(1-\chi\circ e^{-TX/2})\big)\in \Psi^{\comp}_h(M)$ does not intersect $\Gamma_+\cap \{|\xi|_g=1\}$. By~\eqref{e:pest-1} (where $r_0$ is chosen large enough depending on $\chi$) we see that $$ F_+(T/2)\leq Ch^{-1}\|f\|_{H^{s-1}_h}+\mathcal O(h^\infty)\|u\|_{H^s_h} $$ and~\eqref{e:second1} follows from here and~\eqref{e:second2.1}. Next, $\Op_h^{L_s}(\chi^2)\in\Psi^{\comp}_h(M)$ is elliptic on $K\cap \{|\xi|_g=1\}$. By~\eqref{e:pest-2} we get $$ \|u\|_{H^s_h}\leq CF_-(0)+Ch^{-1}\|f\|_{H^{s-1}_h} $$ and~\eqref{e:second2} follows from here and~\eqref{e:second2.2}. \begin{figure} \includegraphics{hgap-1.pdf} \qquad \includegraphics{hgap-2.pdf} \caption{An illustration of the proof of~\eqref{e:second1.1}. The function $\chi(1-\chi\circ e^{-(t+t_0)X})$ is split into two parts. 
The part corresponding to $\chi_2$ is estimated by~\eqref{e:pest-1} and the darker shaded part corresponding to $\chi_1$ is transported backwards by the flow to the right half of the figure, where it is covered by $\chi(1-\chi\circ e^{-tX})$.} \label{f:prop-1} \end{figure} We now prove~\eqref{e:second1.1}. We put $T:=NT_0$, where $N$ is a large constant to be chosen later and $T_0>0$ is fixed so that for each $(x,\xi)\in \{|\xi|_g=1\}$ and all $t,t_1,t_2\geq T_0$ we have \begin{align} \label{e:dynamo-1} (x,\xi)\in\Gamma_+\cap \supp\chi\ &\Longrightarrow\ e^{-tX}(x,\xi)\notin\supp (1-\chi),\\ \label{e:dynamo-2} (x,\xi)\in e^{t_1X}(\supp\chi)\cap e^{-t_2X}(\supp \chi)\ &\Longrightarrow\ (x,\xi)\notin\supp (1-\chi). \end{align} The existence of such $T_0$ follows from~\cite[Lemmas~2.3 and~2.4]{rnc} and the fact that $\chi=1$ near $K\cap \{|\xi|_g=1\}$. We write $\chi=\chi_1+\chi_2$ where $\chi_j\in C_0^\infty(T^*M\setminus 0;[0,1])$, $\supp\chi_2\cap \Gamma_+\cap \{|\xi|_g=1\}=\emptyset$, and for each $t\in [T_0,T+3T_0]$, $t_1,t_2\geq T_0$, and $(x,\xi)\in T^*M\setminus 0$ \begin{align} \label{e:dynamo2-1} (x,\xi)\in\supp\chi_1\ &\Longrightarrow\ e^{-tX}(x,\xi)\notin\supp (1-\chi),\\ \label{e:dynamo2-2} (x,\xi)\in e^{t_1X}(\supp\chi)\cap e^{-t_2X}(\supp \chi_1)\ &\Longrightarrow\ (x,\xi)\notin\supp (1-\chi),\\ \label{e:dynamo2-3} (x,\xi)\in e^{t_1X}(\supp\chi_1)\cap e^{-t_2X}(\supp \chi)\ &\Longrightarrow\ (x,\xi)\notin\supp (1-\chi). \end{align} Note that~\eqref{e:dynamo2-2} and~\eqref{e:dynamo2-3} follow immediately from~\eqref{e:dynamo-2} as long as $\supp\chi_1\subset\supp\chi$. Take $\chi'_2\in C_0^\infty(T^*M)$ such that $\chi'_2=1$ near $\supp\chi_2$ and $\supp\chi'_2\cap \Gamma_+\cap \{|\xi|_g=1\}=\emptyset$.
By Lemma~\ref{l:lag-elliptic} and~\eqref{e:pest-1} we have \begin{equation} \label{e:chi-2-est} \begin{gathered} \big\|\Op_h^{L_u}\big(\chi_2(1-\chi\circ e^{-(t+t_0)X})\big)u\big\|_{L^2} \leq C\|\Op_h^{L_u}(\chi'_2)u\|_{L^2}+\mathcal O(h^\infty)\|u\|_{H^s_h} \\\leq Ch^{-1}\|f\|_{H^{s-1}_h}+\mathcal O(h^\infty)\|u\|_{H^s_h}. \end{gathered} \end{equation} Next, we have (see Figure~\ref{f:prop-1}) \begin{equation} \label{e:prop-containment-1} e^{-(t_0+T_0)X}\big(\supp(\chi_1(1-\chi\circ e^{-(t+t_0)X}))\big)\ \subset\ \{\chi(1-\chi\circ e^{-tX})=1\}. \end{equation} Indeed, let $(x,\xi)\in\supp(\chi_1(1-\chi\circ e^{-(t+t_0)X}))$. Since $t_0+T_0\in [T_0,T+T_0]$, by~\eqref{e:dynamo2-1} we have $\chi(e^{-(t_0+T_0)X}(x,\xi))=1$. It remains to show that $\chi(e^{-(t+t_0+T_0)X}(x,\xi))=0$. This follows from~\eqref{e:dynamo2-2} applied to $e^{-(t+t_0)X}(x,\xi)\in\supp (1-\chi)$, $t_1=T_0$, $t_2=t+t_0$. We now apply Lemma~\ref{l:general-propagation} to~\eqref{e:prop-containment-1} (where we choose $r_0$ large enough depending on $\chi$ and $T$ and arrange that $\supp\chi_1\subset \{|\xi|_g\in [1/2,2]\}$) and get for each fixed $\varepsilon_1>0$, $$ \big\|\Op_h^{L_u}\big(\chi_1(1-\chi\circ e^{-(t+t_0)X})\big)u\big\|_{L^2} \leq (e^{\nu (t_0+T_0)}+\varepsilon_1)F_+(t)+Ch^{-1}\|f\|_{H^{s-1}_h}+\mathcal O(h^\infty)\|u\|_{H^s_h}. $$ Together with~\eqref{e:chi-2-est} this implies~\eqref{e:second1.1} as long as we have \begin{equation} \label{e:rookie} e^{\nu(t_0+T_0)}+\varepsilon_1\leq e^{(\nu+\varepsilon_0)t_0}. \end{equation} By choosing $\varepsilon_1$ small enough, this reduces to $\nu T_0<\varepsilon_0 t_0$, which follows from the fact that $t_0\geq T/2=NT_0/2$ if we choose $N$ large enough depending on $\varepsilon_0,\beta$. \begin{figure} \includegraphics{hgap-3.pdf} \qquad \includegraphics{hgap-4.pdf} \caption{An illustration of the proof of~\eqref{e:second1.2}. The function $\chi(\chi\circ e^{tX})$ is split into two parts.
The part corresponding to $\chi_2$ is estimated by~\eqref{e:pest-1} and the darker shaded part corresponding to $\chi_1$ is transported backwards by the flow to the right half of the figure, where it is covered by $\chi(\chi\circ e^{(t+t_0)X})$.} \label{f:prop-2} \end{figure} To show~\eqref{e:second1.2}, we first note that similarly to~\eqref{e:chi-2-est}, \begin{equation} \label{e:chi-2-est2} \big\|\Op_h^{L_s}\big(\chi_2(\chi\circ e^{tX})\big)u\big\|_{L^2} \leq Ch^{-1}\|f\|_{H^{s-1}_h}+\mathcal O(h^\infty)\|u\|_{H^s_h}. \end{equation} Next, there exists $T_1\in [T_0,3T_0]$ such that (see Figure~\ref{f:prop-2}) \begin{equation} \label{e:prop-containment-2} e^{-(t_0+T_1)X}\big(\supp(\chi_1(\chi\circ e^{tX}))\big)\ \subset\ \{\chi(\chi\circ e^{(t+t_0)X}) =1\}. \end{equation} Indeed, let $(x,\xi)\in \supp(\chi_1(\chi\circ e^{tX}))$. By~\eqref{e:dynamo2-1}, we have $\chi(e^{-(t_0+T_1)X}(x,\xi))=1$. It remains to show that $\chi(e^{(t-T_1)X}(x,\xi))=1$. If $t\leq 2T_0$, then we put $T_1:=t+T_0$ and use~\eqref{e:dynamo2-1}. If $t\geq 2T_0$, then we put $T_1:=T_0$ and apply~\eqref{e:dynamo2-3} to $e^{(t-T_0)X}(x,\xi)$, $t_1=t-T_0$, $t_2=T_0$. Applying Lemma~\ref{l:general-propagation} to~\eqref{e:prop-containment-2} we get for each fixed $\varepsilon_1>0$, $$ \big\|\Op_h^{L_s}\big(\chi_1(\chi\circ e^{tX})\big)u\big\|_{L^2} \leq (e^{\nu(t_0+T_1)}+\varepsilon_1)F_-(t+t_0)+Ch^{-1}\|f\|_{H^{s-1}_h}+\mathcal O(h^\infty)\|u\|_{H^s_h}. $$ Together with~\eqref{e:chi-2-est2} this implies~\eqref{e:second1.2} as long as we have $$ e^{\nu(t_0+T_1)}+\varepsilon_1\leq e^{(\nu+\varepsilon_0)t_0} $$ which is achieved by taking $N$ large enough similarly to~\eqref{e:rookie}. \end{proof} \subsection{Reduction to a fractal uncertainty principle} \label{s:fun} In this section, we prove Theorem~\ref{t:fup-reduction}.
We start by constructing symplectomorphisms \begin{equation} \label{e:kappa-M-domains} \varkappa^\pm:T^*\mathbb H^n\setminus 0\to T^*(\mathbb R^+_w\times\mathbb S^{n-1}_y) \end{equation} which map the weak stable/unstable Lagrangian foliations $L_s,L_u$ defined in~\eqref{e:L-s-L-u} to the vertical foliation on $T^*(\mathbb R^+\times\mathbb S^{n-1})$: \begin{equation} \label{e:lafol} (\varkappa^+)_*L_u=(\varkappa^-)_* L_s=L_V:=\ker(dw)\cap\ker(dy). \end{equation} Recall the symbol $p:T^*\mathbb H^n\setminus 0\to (0,\infty)$ and the maps $B_\pm:T^*\mathbb H^n\setminus 0\to \mathbb S^{n-1}$ defined in~\eqref{e:p-symbol} and~\eqref{e:B-pm}. For $(x,\xi)\in T^*\mathbb H^n\setminus 0$, put $$ G_\pm(x,\xi)=p(x,\xi)\mathcal G(B_\pm(x,\xi),B_\mp(x,\xi))\in T^*_{B_\pm(x,\xi)}\mathbb S^{n-1} $$ where $\mathcal G$ is defined in~\eqref{e:stpro}. See Figure~\ref{f:stereographic}. \begin{figure} \includegraphics{hgap-5.pdf} \caption{The points $B_\pm(x,\xi)$ and the vector $G_+(x,\xi)$ in the ball model of the hyperbolic space.} \label{f:stereographic} \end{figure} Denote by $\mathcal P(x,y)$ the (two-dimensional version of the) Poisson kernel, defined on the ball model of $\mathbb H^n$ by \begin{equation} \label{e:Pker} \mathcal P(x,y)={1-|x|^2\over |x-y|^2},\quad x\in\mathbb H^n,\ y\in\mathbb S^{n-1}. \end{equation} The symplectomorphisms $\varkappa^\pm$ are constructed in the following lemma; see Appendix~\ref{s:hyperbolic-technical} for the proof. Note that~\eqref{e:lafol} follows immediately from~\eqref{e:kappa-M-def} and~\eqref{e:desker}. \begin{lemm} \label{l:kappa-M} The maps \begin{equation} \label{e:kappa-M-def} \varkappa^\pm:(x,\xi)\mapsto \big(p(x,\xi),B_\mp(x,\xi),\pm\log\mathcal P(x,B_\mp(x,\xi)),\pm G_\mp(x,\xi)\big) \end{equation} are exact symplectomorphisms from $T^*\mathbb H^n\setminus 0$ onto $T^*(\mathbb R^+\times\mathbb S^{n-1})$. \end{lemm} \noindent\textbf{Remark.}
The coordinates $\varkappa^\pm(x,\xi)=(w,y,\theta,\eta)$ can be interpreted as follows: \begin{itemize} \item $y,\eta$ determine the geodesic $\gamma(t)=e^{tX}(x,\xi)$ up to shifting $t$ and rescaling $\xi$, in particular $y$ gives the limit of the geodesic $\gamma(t)$ as $t\to\mp\infty$; \item $w$ is the length of $\xi$, corresponding to the energy of the geodesic $\gamma(t)$; \item $\theta$ satisfies $\theta(\gamma(t))=\theta(\gamma(0))-t$ and thus determines the position of $(x,\xi)$ on the geodesic $\gamma(t)$. \end{itemize} We next consider the symplectomorphism \begin{equation} \label{e:kappa-hat} \widehat\varkappa:=\varkappa^+\circ(\varkappa^-)^{-1}:T^*(\mathbb R^+\times\mathbb S^{n-1})\to T^*(\mathbb R^+\times\mathbb S^{n-1}). \end{equation} The next lemma, proved in Appendix~\ref{s:hyperbolic-technical}, constructs a generating function for $\widehat\varkappa$: \begin{lemm} \label{l:kappa-hat} Consider the following function on $\mathbb R^+_w\times \mathbb S^{n-1}_\Delta$: \begin{equation} \label{e:Theta} \Theta(w,y,y')=w\log {|y-y'|^2\over 4} \end{equation} where $|y-y'|$ denotes Euclidean distance on $\mathbb S^{n-1}\subset\mathbb R^n$. Then for each $(w,y,\theta,\eta)$ and $(w,y',\theta',\eta')$ in $T^*(\mathbb R^+\times\mathbb S^{n-1})$, the following two statements are equivalent: \begin{gather} \label{e:kappa-hat1} (w,y',\theta',\eta')=\widehat\varkappa(w,y,\theta,\eta);\\ \label{e:kappa-hat2} \theta-\theta'=\partial_w\Theta(w,y,y'),\quad \eta=\partial_y\Theta(w,y,y'),\quad \eta'=-\partial_{y'}\Theta(w,y,y'). \end{gather} Moreover, the antiderivative for $\widehat\varkappa$ defined as the sum of antiderivatives for $\varkappa^+$ and $(\varkappa^-)^{-1}$ (see~\S\ref{s:fios}) is equal to the pullback of $\Theta$ to the graph $\Graph(\widehat\varkappa)$.
\end{lemm} Using Lemma~\ref{l:kappa-hat} and the theory presented in~\S\ref{s:fios}, we characterize Fourier integral operators associated to $\widehat\varkappa^{-1}$: \begin{lemm} \label{l:B-hat} Assume that $B\in I^{\comp}_h(\widehat\varkappa^{-1})$. Then we have $$ B=A\widetilde{\mathcal B}_\chi+\mathcal O(h^\infty)_{\Psi^{-\infty}} $$ for some $A\in\Psi^{\comp}_h(\mathbb R^+\times\mathbb S^{n-1})$, $\chi\in C_0^\infty(\mathbb S^{n-1}_\Delta)$, and $$ \widetilde{\mathcal B}_\chi v(w,y)=(2\pi h)^{1-n\over 2}\int_{\mathbb S^{n-1}} \Big|{y-y'\over 2}\Big|^{2iw/h}\chi(y,y')v(w,y')\,dy' $$ where $\mathbb S^{n-1}_\Delta$ is defined in~\eqref{e:s-diag}, $|y-y'|$ denotes the Euclidean distance, and $dy'$ is the standard volume form on the sphere. \end{lemm} \begin{proof} For the function $\Theta$ defined in~\eqref{e:Theta}, we have $$ \widetilde{\mathcal B}_\chi v(w,y)=(2\pi h)^{1-n\over 2}\int_{\mathbb S^{n-1}}e^{{i\over h}\Theta(w,y,y')}\chi(y,y')v(w,y')\,dy'. $$ For $A\in\Psi^{\comp}_h(\mathbb R^+\times\mathbb S^{n-1})$, the operator $A\widetilde{\mathcal B}_\chi$ is given by the following formula modulo an $\mathcal O(h^\infty)_{\Psi^{-\infty}}$ remainder: $$ A\widetilde{\mathcal B}_\chi v(w,y)=(2\pi h)^{-{n+1\over 2}}\int\limits_{\mathbb S^{n-1}\times T^*\mathbb R^+}e^{{i\over h}((w-w')\theta+\Theta(w',y,y'))} b(w,\theta,y,y';h) v(w',y')\, dy' dw' d\theta $$ where $b$ is a compactly supported symbol on $\mathbb R^+_w\times\mathbb R_\theta\times \mathbb S^{n-1}_\Delta$ such that $$ b(w,\theta,y,y';0)=\sigma_h(A)(w,y,\theta,\partial_y\Theta(w,y,y'))\chi(y,y'). 
$$ To see this, it suffices to choose some local coordinates on $\mathbb S^{n-1}$, take $A=\Op_h(a)$ for some compactly supported symbol $a(w,y,\theta,\eta;h)$, write $$ \begin{aligned} A\widetilde{\mathcal B}_\chi v(w,y)=\,&(2\pi h)^{1-3n\over 2}\int e^{{i\over h}((w-w')\theta+(y-y'')\cdot\eta+\Theta(w',y'',y'))}\\ &a(w,y,\theta,\eta;h)\chi(y'',y')v(w',y')\,dw'dy''d\theta d\eta dy' \end{aligned} $$ where the integral is taken over $\mathbb R^+_{w'}\times\mathbb S^{n-1}_{y''}\times\mathbb R^n_{\theta,\eta}\times\mathbb S^{n-1}_{y'}$, and apply the method of stationary phase in the $(y'',\eta)$ variables. Now, let $B\in I^{\comp}_h(\widehat\varkappa^{-1})$. Fix $\chi\in C_0^\infty(\mathbb S^{n-1}_\Delta)$ such that \begin{equation} \label{e:WF-B-cond} \WFh(B)\ \subset\ U_\chi:=\varphi_\Theta(\{(w,\theta,y,y')\mid \chi(y,y')\neq 0\}) \end{equation} where $\varphi_\Theta:\mathbb R^+_w\times\mathbb R_\theta\times \mathbb S^{n-1}_\Delta\to\Graph(\widehat\varkappa^{-1})$ is the diffeomorphism constructed using~\eqref{e:kappa-hat2}. By Lemma~\ref{l:kappa-hat}, the function $$ \Phi:(w,y,w',y',\theta)\in \mathbb R^+_w\times\mathbb R^+_{w'}\times \mathbb S^{n-1}_\Delta\times\mathbb R_\theta \ \mapsto\ (w-w')\theta+\Theta(w',y,y') $$ parametrizes $\widehat\varkappa^{-1}$ in the sense of~\eqref{e:kappa-parametrized}. Therefore, by~\eqref{e:fio-general-form} we may write for some compactly supported symbol $\tilde b$ on the domain of $\Phi$, modulo $\mathcal O(h^\infty)_{\Psi^{-\infty}}$, $$ Bv(w,y)=(2\pi h)^{-{n+1\over 2}}\int\limits_{\mathbb S^{n-1}\times T^*\mathbb R^+} e^{{i\over h}((w-w')\theta+\Theta(w',y,y'))}\tilde b(w,y,w',y',\theta;h)v(w',y')\,dy'dw'd\theta. $$ Moreover, by~\eqref{e:WF-B-cond} (which implies that $B$ is a Fourier integral operator associated to a restriction of $\widehat\varkappa^{-1}$) we can take $\tilde b$ supported inside $\{\chi(y,y')\neq 0\}$.
Take $A_0\in\Psi^{\comp}_h(\mathbb R^+\times\mathbb S^{n-1})$ such that for all $(w,y,\theta,\eta,w,y',\theta',\eta')\in \Graph(\widehat\varkappa^{-1})$, $$ \sigma_h(A_0)(w,y,\theta,\eta)={\tilde b(w,y,w,y',\theta;0)\over \chi(y,y')}. $$ Comparing the oscillatory integral expressions for $A_0\widetilde{\mathcal B}_\chi$ and $B$ and using~\eqref{e:principal-killed}, we get $$ B-A_0\widetilde{\mathcal B}_\chi\in hI^{\comp}_h(\widehat\varkappa^{-1}). $$ Moreover, we may choose $A_0$ so that $\WFh(B-A_0\widetilde{\mathcal B}_\chi)\subset U_\chi$. Replacing $B$ with $h^{-1}(B-A_0\widetilde{\mathcal B}_\chi)$ and arguing by induction, we construct $A_j\in \Psi^{\comp}_h(\mathbb R^+\times\mathbb S^{n-1})$ such that $$ B-\sum_{j=0}^N h^jA_j\widetilde{\mathcal B}_\chi\in h^{N+1}I^{\comp}_h(\widehat\varkappa^{-1}). $$ Then $B=A\widetilde{\mathcal B}_\chi+\mathcal O(h^\infty)_{\Psi^{-\infty}}$, where $A\in\Psi^{\comp}_h(\mathbb R^+\times\mathbb S^{n-1})$ is defined by the asymptotic sum $A\sim \sum_{j=0}^\infty h^j A_j$. \end{proof} We now reformulate the fractal uncertainty principle of Definition~\ref{d:fup} as follows: \begin{lemm} \label{l:fup-revisited} Assume that $\beta,\varepsilon>0$ and $\rho\in (0,1)$ are such that~\eqref{e:fup-standard} holds for $\Lambda_\Gamma$ and all $C_1,\chi$. With $L_V$ defined in~\eqref{e:lafol}, let $$ A_\pm\in \Psi^{\comp}_{h,L_V,\rho}(T^*(\mathbb R^+\times\mathbb S^{n-1})),\quad B\in I^{\comp}_h(\widehat\varkappa^{-1}) $$ and assume that for some constant $C_2$, $A_+$ and $A_-$ are $\mathcal O(h^\infty)$, in the sense of Definition~\ref{d:lag-prince}, along every sequence $(w_j,y_j,\theta_j,\eta_j,h_j)$ such that $d(y_j,\Lambda_\Gamma)>C_2h_j^\rho$. Then $$ \|A_-BA_+\|_{L^2\to L^2}\leq Ch^{\beta-\varepsilon}.
$$ \end{lemm} \begin{proof} We first use Lemma~\ref{l:B-hat} to write $B=A\widetilde{\mathcal B}_\chi+\mathcal O(h^\infty)_{L^2\to L^2}$ for some $\chi\in C_0^\infty(\mathbb S^{n-1}_\Delta)$ and $A\in\Psi^{\comp}_h(\mathbb R^+\times\mathbb S^{n-1})$. The operator $A_-A\in\Psi^{\comp}_{h,L_V,\rho}$ satisfies the same microlocal vanishing assumption as $A_-$, therefore it suffices to show that \begin{equation} \label{e:meimei} \|A_-\chi'(w)\widetilde{\mathcal B}_\chi A_+\|_{L^2\to L^2}\leq Ch^{\beta-\varepsilon}. \end{equation} Here we may insert some cutoff function $\chi'\in C_0^\infty(\mathbb R^+)$ since $A$ is compactly supported. Using Lemma~\ref{l:symbol-construction}, take a function $\chi_0(y;h)\in C^\infty(\mathbb S^{n-1})$ such that $$ \supp\chi_0\subset \Lambda_\Gamma(2C_2h^\rho),\quad \supp(1-\chi_0)\cap\Lambda_\Gamma(C_2h^\rho)=\emptyset,\quad |\partial^\alpha_y \chi_0(y)|\leq C_\alpha h^{-\rho|\alpha|}. $$ Here $\Lambda_\Gamma(\cdot)$ is defined in~\eqref{e:limit-nbhd}. We claim that \begin{equation} \label{e:meimei2} A_-(1-\chi_0(y;h))=\mathcal O(h^\infty)_{L^2\to L^2},\quad (1-\chi_0(y;h))A_+=\mathcal O(h^\infty)_{L^2\to L^2}. \end{equation} Indeed, by a partition of unity we may assume that $\WFh(A_-)$ is contained in the cotangent bundle of a coordinate chart on $\mathbb R^+\times\mathbb S^{n-1}$. By part~2 of Lemma~\ref{l:globallem} (where $B,B'$ are pullback operators), we can write $A_-=\Op_h(a_-)+\mathcal O(h^\infty)_{L^2\to L^2}$ for some $a_-\in S^{\comp}_{L_0,\rho}$. Moreover, by the assumption on $A_-$, we see that $a_-$ is $\mathcal O(h^\infty)$, in the sense of Definition~\ref{d:rapid-decay}, along every sequence $(w_j,y_j,\theta_j,\eta_j,h_j)$ such that $y_j\notin\Lambda_\Gamma(C_2h_j^\rho)$. Then the first estimate in~\eqref{e:meimei2} follows from Lemma~\ref{l:quant-basic} (or rather, its trivial adaptation to the non-compactly supported symbol $1-\chi_0(y;h)$); the second estimate in~\eqref{e:meimei2} is proved similarly. 
By~\eqref{e:meimei2}, since $\|A_\pm\|_{L^2\to L^2}$ is bounded uniformly in $h$, in order to prove~\eqref{e:meimei} it suffices to show \begin{equation} \label{e:meimei3} \|\chi_0(y;h)\chi'(w)\widetilde{\mathcal B}_\chi\chi_0(y;h)\|_{L^2(\mathbb R^+\times\mathbb S^{n-1})\to L^2(\mathbb R^+\times\mathbb S^{n-1})}\leq Ch^{\beta-\varepsilon}. \end{equation} We calculate $$ \chi_0(y;h)\chi'(w)\widetilde{\mathcal B}_\chi\chi_0(y;h)v(w,y)=\chi'(w)4^{-iw/h}\mathcal B'_w(v(w,\cdot))(y), $$ where $\mathcal B'_w:L^2(\mathbb S^{n-1})\to L^2(\mathbb S^{n-1})$ is given by $$ \mathcal B'_w f(y)=(2\pi h)^{1-n\over 2}\int_{\mathbb S^{n-1}}|y-y'|^{2iw/h} \chi(y,y')\chi_0(y;h)\chi_0(y';h)f(y')\,dy'. $$ Replacing $h$ by $h/w$ in~\eqref{e:fup-standard} and using that $\chi_0$ is bounded and supported in $\Lambda_\Gamma(2C_2h^\rho)$, we see that uniformly in $w\in \supp\chi'$, $$ \|\mathcal B'_w\|_{L^2(\mathbb S^{n-1})\to L^2(\mathbb S^{n-1})}\leq Ch^{\beta-\varepsilon} $$ and~\eqref{e:meimei3} follows from here by integration. \end{proof} We are now ready to give \begin{proof}[Proof of Theorem~\ref{t:fup-reduction}] In order to prove~\eqref{e:essential-gap2}, it suffices to show the estimate~\eqref{e:est0} for all functions $u,f$ satisfying~\eqref{e:eq0}, where (see~\eqref{e:nu-def}) $$ \omega=1-ih\nu,\quad\nu\in [-1,\beta-\varepsilon]. $$ Take $\chi_\pm\in C_0^\infty(T^*M\setminus 0;[0,1])$ such that $\chi_\pm=1$ near $K\cap \{|\xi|_g=1\}$. We also take $\varepsilon_0>0$ small enough depending on $\varepsilon$ and fix $\rho\in (0,1)$ so that~\eqref{e:fup-standard} is satisfied with $\varepsilon$ replaced by $\varepsilon_0$. Put $$ \begin{gathered} t:=\rho\log(1/h),\quad \nu^+:=\max(0,\nu);\\ A'_+:=\Op_h^{L_u}\big(\chi_+(\chi_+\circ e^{-tX})\big),\quad A_0:=\Op_h^{L_u}(\chi_+),\quad A'_-:=\Op_h^{L_s}\big(\chi_-(\chi_-\circ e^{tX})\big). 
\end{gathered} $$ Note that by Lemma~\ref{l:propagated-okay}, $$ A'_+\in\Psi^{\comp}_{h,L_u,\rho}(T^*M\setminus 0),\quad A_0\in\Psi^{\comp}_h(M),\quad A'_-\in\Psi^{\comp}_{h,L_s,\rho}(T^*M\setminus 0). $$ By Lemma~\ref{l:second} we obtain \begin{gather} \label{e:together1} \|(A_0-A'_+)u\|_{L^2}\leq Ch^{-1-\rho(\nu^++\varepsilon_0)}\|f\|_{H^{s-1}_h} +\mathcal O(h^\infty)\|u\|_{H^s_h},\\ \label{e:together2} \|u\|_{H^s_h}\leq Ch^{-\rho(\nu+\varepsilon_0)}\|A'_-u\|_{L^2} +Ch^{-1-\rho(\nu^++\varepsilon_0)}\|f\|_{H^{s-1}_h}. \end{gather} We choose $\chi_\pm$ so that $\chi_+=1$ near $\supp\chi_-$. Let $Q\in\Psi^{\comp}_h(M)$ be an elliptic parametrix of $A_0$ near $\supp\chi_-$ (see for instance~\cite[\S E.2.2]{dizzy} or~\cite[Proposition~2.4]{zeta}); in particular, $$ QA_0=1+\mathcal O(h^\infty)\quad\text{microlocally near }\supp\chi_-. $$ Since $A'_-$ is pseudolocal and its wavefront set is contained inside $\supp\chi_-$, we have \begin{equation} \label{e:together4} \|A'_-(1-QA_0)u\|_{L^2}=\mathcal O(h^\infty)\|u\|_{H^s_h}. \end{equation} Take $\varepsilon_0<\varepsilon/2$. We claim that it suffices to prove the bound \begin{equation} \label{e:prodib} \|A'_-QA'_+\|_{L^2(M)\to L^2(M)}\leq Ch^{\beta-\varepsilon_0}. \end{equation} Indeed, putting together~\eqref{e:together1}--\eqref{e:prodib} and using that $\nu\leq\beta-\varepsilon$, we have $$ \begin{aligned} \|u\|_{H^s_h}&\leq Ch^{-\rho(\nu+\varepsilon_0)}\|A'_-u\|_{L^2} +Ch^{-1-\rho(\nu^++\varepsilon_0)}\|f\|_{H^{s-1}_h}\\ &\leq Ch^{-\rho(\nu+\varepsilon_0)}\big(\|A'_-QA'_+u\|_{L^2}+ \|A'_-Q(A_0-A'_+)u\|_{L^2}\\ &\quad+\|A'_-(1-QA_0)u\|_{L^2}\big) +Ch^{-1-\rho(\nu^++\varepsilon_0)}\|f\|_{H^{s-1}_h}\\ &\leq Ch^{\varepsilon-2\varepsilon_0}\|u\|_{H^s_h} +Ch^{-1-2(\nu^++\varepsilon_0)}\|f\|_{H^{s-1}_h} +\mathcal O(h^\infty)\|u\|_{H^s_h} \end{aligned} $$ giving~\eqref{e:est0} for $h$ small enough. It remains to deduce~\eqref{e:prodib} from the fractal uncertainty principle. 
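For the reader's convenience, we record the exponent count behind the first term in the last chain of estimates; this is a routine verification, where we may assume $\varepsilon<\beta$, since the statement for larger $\varepsilon$ follows from that for smaller $\varepsilon$. When $\nu+\varepsilon_0\geq 0$, using $\rho<1$ and $\nu\leq\beta-\varepsilon$,
$$
h^{-\rho(\nu+\varepsilon_0)}\cdot h^{\beta-\varepsilon_0}\ \leq\ h^{\beta-\nu-2\varepsilon_0}\ \leq\ h^{\varepsilon-2\varepsilon_0};
$$
when $\nu+\varepsilon_0<0$, the factor $h^{-\rho(\nu+\varepsilon_0)}$ is bounded by $1$ and the bound $h^{\beta-\varepsilon_0}\leq h^{\varepsilon-2\varepsilon_0}$ follows directly from $\varepsilon<\beta$.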
We may assume that $\WFh(Q)$ is contained in a small neighborhood of $\supp\chi_-$; by an appropriate choice of $\chi_-$, we may assume that $\WFh(Q)$ is contained in a small neighborhood of $K\cap \{|\xi|_g=1\}$. By a partition of unity, it suffices to show that for each $(x_0,\xi_0)\in K\cap \{|\xi|_g=1\}$, there exists a neighborhood $V$ of $(x_0,\xi_0)$ such that~\eqref{e:prodib} holds for all $Q\in\Psi^{\comp}_h(M)$ with $\WFh(Q)\subset V$. Fix $(x_0,\xi_0)\in K\cap \{|\xi|_g=1\}$. Composing the maps $\varkappa^\pm$ constructed in Lemma~\ref{l:kappa-M} with a local inverse of the covering map $\pi_\Gamma$ defined in~\eqref{e:pi-Gamma}, we obtain exact symplectomorphisms \begin{equation} \label{e:kappa-0} \varkappa_0^\pm:U\to U'_\pm,\quad U\subset T^* M\setminus 0,\quad U'_\pm\subset T^*(\mathbb R^+\times\mathbb S^{n-1}), \end{equation} for some small neighborhood $U$ of $(x_0,\xi_0)$ and some small neighborhoods $U'_\pm$ of $$ (1,y_0^\pm,\theta_0^\pm,\eta_0^\pm):=\varkappa_0^\pm(x_0,\xi_0). $$ Here $(w,y)$ are coordinates on $\mathbb R^+\times\mathbb S^{n-1}$ and $(\theta,\eta)$ are the corresponding dual variables. Take $\mathcal B_\pm\in I^{\comp}_h(\varkappa_0^\pm)$, $\mathcal B'_\pm\in I^{\comp}_h((\varkappa_0^\pm)^{-1})$ quantizing $\varkappa_0^\pm$ near $V\times \varkappa_0^\pm(V)$ in the sense of~\eqref{e:quantized}, where $V\Subset U$ is a small neighborhood of $(x_0,\xi_0)$. 
Since $\WFh(Q)\subset V$ and $A'_\pm$ are pseudolocal, we have $$ A'_-QA'_+=(\mathcal B'_-\mathcal B_-)A'_-(\mathcal B'_-\mathcal B_-)(\mathcal B'_+\mathcal B_+)QA'_+(\mathcal B'_+\mathcal B_+)+\mathcal O(h^\infty)_{L^2\to L^2}; $$ therefore, to prove~\eqref{e:prodib} it suffices to show that \begin{equation} \label{e:prodib2} \|A_- BA_+\|_{L^2(\mathbb R^+\times\mathbb S^{n-1})\to L^2(\mathbb R^+\times\mathbb S^{n-1})}\leq Ch^{\beta-\varepsilon_0}, \end{equation} where, with $\widehat\varkappa$ defined in~\eqref{e:kappa-hat}, $$ A_-=\mathcal B_-A'_-\mathcal B'_-,\quad A_+=\mathcal B_+QA'_+\mathcal B'_+,\quad B=\mathcal B_-\mathcal B'_+\in I^{\comp}_h(\widehat\varkappa^{-1}). $$ By the construction of $\Psi^{\comp}_{h,L,\rho}$ calculus in~\S\ref{s:calculus-general}, together with~\eqref{e:lafol}, we see that $$ A_\pm\in\Psi^{\comp}_{h,L_V,\rho}(T^*(\mathbb R^+\times\mathbb S^{n-1})). $$ Moreover, $A_\pm=\mathcal O(h^\infty)$ in the sense of Definition~\ref{d:lag-prince} along each sequence $(w_j,y_j,\theta_j,\eta_j,h_j)$ such that \begin{equation} \label{e:killit-1} (w_j,y_j,\theta_j,\eta_j)\ \notin\ \varkappa_0^\pm(U\cap e^{\pm\rho\log(1/h_j)X}(\supp\chi_\pm)). \end{equation} By Lemma~\ref{l:close-to-trapping} and~\eqref{e:kappa-M-def}, we see that~\eqref{e:killit-1} is satisfied when $d(y_j,\Lambda_\Gamma)> C_2h_j^\rho$ for some fixed constant $C_2$. Now~\eqref{e:prodib2} follows from Lemma~\ref{l:fup-revisited}. \end{proof} \noindent\textbf{Remark}. As follows from the proof of Lemma~\ref{l:kappa-M} in Appendix~\ref{s:hyperbolic-technical}, the Fourier integral operators $\mathcal B_\pm$ used in the proof of Theorem~\ref{t:fup-reduction} are microlocalized versions of the Poisson operators $$ v(w,y)\ \mapsto\ u(x)=\int_{\mathbb R^+}\int_{\mathbb S^{n-1}} \mathcal P(x,y)^{\mp i w/h} v(w,y)\,dy dw. 
$$ Therefore, conjugation by $\mathcal B_\pm$ is related to the representation of resonant states as images under the Poisson operator of distributions supported on the limit set, see for instance~\cite[(14.9)]{Borthwick} or~\cite{Bunke-Olbrich2}. \section{Fractal uncertainty principle} \label{s:fup} In this section, we prove Theorem~\ref{t:ae-reduction}. We will not directly use the geometry or the dynamics of the manifold $M$, relying instead on the additive structure of the limit set $\Lambda_\Gamma$ defined in~\eqref{e:Lambda-Gamma} and basic harmonic analysis. \subsection{Basic properties} We start with some basic facts regarding the fractal uncertainty principle of Definition~\ref{d:fup}. First of all, since $\mathcal B_\chi(h)$ is a semiclassical Fourier integral operator associated to a canonical transformation (see~\eqref{e:fio-general-form} and Lemma~\ref{l:kappa-hat}), it is bounded on $L^2$ uniformly in $h$. This gives the bound \begin{equation} \label{e:tb-1} \|\indic_{\Lambda_\Gamma(C_1h^\rho)}\mathcal B_\chi\indic_{\Lambda_\Gamma(C_1 h^\rho)}\|_{L^2\to L^2}\leq C. \end{equation} Combined with Theorem~\ref{t:fup-reduction}, this bound translates to the well-known statement that there are no resonances in the upper half-plane away from the imaginary axis, which is a direct consequence of the self-adjointness of the Laplacian on $L^2(M)$. To formulate the next bound, we introduce the parameter \begin{equation} \label{e:delta} \delta\in [0,n-1] \end{equation} defined as the exponent of convergence of the Poincar\'e series: that is, $\delta$ is the smallest number such that for all $x,x'\in\mathbb H^n$ and $\Re s>\delta$, we have $$ \Sigma_p(s;x,x'):=\sum_{\gamma\in\Gamma} \exp\big(-sd_{\mathbb H^n}(x,\gamma.x')\big)<\infty. $$ Here $d_{\mathbb H^n}(\cdot,\cdot)$ stands for the distance function on $\mathbb H^n$ induced by the hyperbolic metric.
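As a degenerate illustration of the exponent of convergence (this elementary case is not relevant for the main results), suppose $\Gamma$ is cyclic, generated by a single hyperbolic isometry $\gamma_0$ with translation length $\ell>0$. If $q$ lies on the axis of $\gamma_0$, then the triangle inequality gives $d_{\mathbb H^n}(x,\gamma_0^k.x')\geq |k|\ell-d_{\mathbb H^n}(x,q)-d_{\mathbb H^n}(x',q)$, so
$$
\Sigma_p(s;x,x')\ \leq\ e^{s(d_{\mathbb H^n}(x,q)+d_{\mathbb H^n}(x',q))}\sum_{k\in\mathbb Z}e^{-s|k|\ell}\ <\ \infty\quad\text{for all }s>0,
$$
hence $\delta=0$; correspondingly, the limit set $\Lambda_\Gamma$ consists of the two fixed points of $\gamma_0$ on $\mathbb S^{n-1}$.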
The constant $\delta$ is the Hausdorff dimension of the limit set $\Lambda_\Gamma$, see~\cite[Theorem~14.14]{Borthwick} for the case $n=2$ and~\cite{Sullivan} for general dimensions. It is also the Minkowski dimension of $\Lambda_\Gamma$; in fact, we have the following more precise estimate (which is a form of Ahlfors-David regularity): \begin{equation} \label{e:AD-estimate-Lebesgue} \begin{gathered} C^{-1}\alpha^{n-1-\delta}(\alpha')^\delta\ \leq\ \mu_L(\Lambda_\Gamma(\alpha)\cap B(y_0,\alpha')) \ \leq\ C\alpha^{n-1-\delta}(\alpha')^\delta,\\ 0<\alpha\leq \alpha'\leq 1,\quad y_0\in\Lambda_\Gamma, \end{gathered} \end{equation} where $\Lambda_\Gamma(\alpha)$ is defined in~\eqref{e:limit-nbhd}, $B(y_0,\alpha')$ denotes the ball of radius $\alpha'$ centered at $y_0$, $\mu_L$ denotes the Lebesgue measure on $\mathbb S^{n-1}$, and the constant $C>0$ does not depend on $y_0,\alpha,\alpha'$. See~\S\ref{s:minkowski} for the proof. By putting $\alpha'=1$ in~\eqref{e:AD-estimate-Lebesgue}, we obtain in particular~\eqref{e:LG-upper-1}. Given~\eqref{e:LG-upper-1}, we may use Schur's lemma~\cite[Lemma~18.1.12]{ho3}: the estimate $$ \sup_{y\in\Lambda_\Gamma(C_1h^\rho)}\int_{\Lambda_\Gamma(C_1h^\rho)} |\chi(y,y')|\,dy'\leq Ch^{\rho(n-1-\delta)} $$ implies by~\eqref{e:B-chi} that \begin{equation} \label{e:tb-2} \|\indic_{\Lambda_\Gamma(C_1h^\rho)}\mathcal B_\chi\indic_{\Lambda_\Gamma(C_1 h^\rho)}\|_{L^2\to L^2}\leq Ch^{1-n\over 2}h^{\rho(n-1-\delta)}. \end{equation} Since we may choose $\rho$ arbitrarily close to 1, this gives the fractal uncertainty principle with exponent ${n-1\over 2}-\delta$. By Theorem~\ref{t:fup-reduction}, the bounds \eqref{e:tb-1} and~\eqref{e:tb-2} together translate (with a loss of $\varepsilon$) to the standard spectral gap~\eqref{e:standard-gap}. We finally show that the fractal uncertainty principle cannot hold with $\beta>{n-1\over 2}-{\delta\over 2}$. This threshold corresponds to the Jakobson--Naud conjecture, see~\S\ref{s:spfup}. 
More precisely, we claim that for each $\varepsilon>0$, there exists $\chi\in C_0^\infty(\mathbb S^{n-1}_\Delta)$ and a family $v(h)\in L^2(\mathbb S^{n-1})$ such that for $h$ small enough, \begin{equation} \label{e:JN} \|\indic_{\Lambda_\Gamma(h)}\mathcal B_\chi(h) \indic_{\Lambda_\Gamma(h)}v(h)\|_{L^2}\geq h^{{n-1\over 2}-{\delta\over 2}+\varepsilon}\|v(h)\|_{L^2}. \end{equation} To prove~\eqref{e:JN}, take small $\tilde\varepsilon>0$, fix $y_0,y_1\in\Lambda_\Gamma$, $y_0\neq y_1$, and let $v(h)=\indic_{B(y_0,h^{1+\tilde\varepsilon})}$. Then \begin{equation} \label{e:vvvnnn} \|v(h)\|_{L^2}\leq C h^{(1+\tilde\varepsilon)(n-1)/2},\quad \indic_{\Lambda_\Gamma(h)}v(h)=v(h). \end{equation} Using~\eqref{e:B-chi}, we compute $$ \mathcal B_\chi(h) v(y;h)=(2\pi h)^{1-n\over 2}|y-y_0|^{2i/h}\int_{\mathbb S^{n-1}} \Big({|y-y'|\over |y-y_0|}\Big)^{2i/h}\chi(y,y')v(y')\,dy'. $$ We take $h$-independent $\chi$ such that $\chi(y,y')=1$ for $(y,y')$ near $(y_1,y_0)$. For $y$ in a fixed neighborhood of $y_1$ and $y'\in\supp v$, we have $\big({|y-y'|\over |y-y_0|}\big)^{2i/h}=1+\mathcal O(h^{\tilde\varepsilon})$, therefore for $\tilde\varepsilon_1$ small enough, $$ |\mathcal B_\chi(h) v(y;h)|\geq C^{-1}h^{1-n\over 2} h^{(1+\tilde\varepsilon)(n-1)}\quad\text{for } |y-y_1|<\tilde\varepsilon_1. $$ By~\eqref{e:AD-estimate-Lebesgue}, we then have $$ \|\indic_{\Lambda_\Gamma(h)}\mathcal B_\chi(h)v(h)\|_{L^2}\geq C^{-1} h^{1-n\over 2}h^{(1+\tilde\varepsilon)(n-1)}h^{n-1-\delta\over 2}, $$ which together with~\eqref{e:vvvnnn} implies~\eqref{e:JN} as long as $\tilde\varepsilon<{2\over n-1}\varepsilon$. \subsection{Reduction to additive energy} \label{s:fup-ae} We now prove Theorem~\ref{t:ae-reduction}. We will take $\rho$ very close to 1, in particular $\rho>1/2$. 
Using Lemma~\ref{l:symbol-construction}, take $\psi_0(y;h)\in C^\infty(\mathbb S^{n-1};[0,1])$ such that \begin{gather} \label{e:psi0-1} \supp(1-\psi_0)\cap \Lambda_\Gamma(C_1h^\rho)=\emptyset,\quad \supp\psi_0\subset \Lambda_\Gamma(h^{\rho/2});\\ \label{e:psi0-3} \sup_{\mathbb S^{n-1}}|\partial^\alpha\psi_0|\leq C_\alpha h^{-{\rho\over 2}|\alpha|}. \end{gather} To show~\eqref{e:fup-standard}, it suffices to prove that $$ \|\sqrt{\psi_0}\,\mathcal B_\chi(h)\indic_{\Lambda_\Gamma(C_1h^\rho)}\|_{L^2\to L^2}\leq Ch^{\beta-\varepsilon}, $$ in fact it is enough to prove the following $T^*T$-bound: $$ \|\indic_{\Lambda_\Gamma(C_1 h^\rho)}\mathcal B_\chi(h)^*\psi_0\mathcal B_\chi(h)\indic_{\Lambda_\Gamma(C_1h^\rho)}\|_{L^2\to L^2}\leq Ch^{2(\beta-\varepsilon)}. $$ By Schur's lemma~\cite[Lemma~18.1.12]{ho3} and~\eqref{e:B-chi} it is enough to prove the Schwartz kernel bound \begin{equation} \label{e:skb1} \sup_{y''\in\Lambda_\Gamma(C_1 h^\rho)}\int\limits_{\Lambda_\Gamma(C_1 h^\rho)}|\mathcal K(y,y'';h)|\,dy \leq Ch^{2(\beta-\varepsilon)} \end{equation} where the integral kernel $\mathcal K(y,y'';h)$ of the operator $\mathcal B_\chi(h)^*\psi_0\mathcal B_\chi(h)$ is given by \begin{equation} \label{e:skb2} \mathcal K(y,y'';h)=h^{1-n}\int\limits_{\mathbb S^{n-1}} \bigg({|y'-y''|\over |y'-y|}\bigg)^{2i/h} \chi(y',y'')\overline{\chi(y',y)}\psi_0(y';h)\,dy'. \end{equation} Morally speaking, $\mathcal K(y,y'';h)$ is the correlation on $\Lambda_\Gamma(h^{\rho/2})$ between Lagrangian states corresponding to two levels in~\eqref{e:hor-lag} given by $y_j=y$ and $y_j=y''$. To capture cancellations in the expression~\eqref{e:skb1}, we use the following precise version of the method of nonstationary phase (see for instance~\cite[Theorem~7.7.1]{ho1} for the standard version): \begin{lemm} \label{l:ibp} Let $U\subset \mathbb R^m$ be open and bounded, $\widetilde\varphi\in C^\infty(U;\mathbb R)$, and $a\in C_0^\infty(U)$. 
Assume that the following inequalities hold: \begin{equation} \label{e:ibp1} \begin{aligned} C_1^{-1}\tilde h^{-1}\leq |d\widetilde\varphi(x)|\leq C_1\tilde h^{-1}&\quad\text{for all }x\in\supp a;\\ |\partial^\alpha \widetilde\varphi(x)|\leq C_{|\alpha|}\tilde h^{-\tilde\rho|\alpha|}&\quad\text{for all }x\in\supp a\text{ and }|\alpha|\geq 2;\\ |\partial^\alpha a(x)|\leq C_{|\alpha|}\tilde h^{-\tilde\rho|\alpha|}&\quad\text{for all }x\in U\text{ and all }\alpha. \end{aligned} \end{equation} Here $\tilde\rho,\tilde h\in (0,1)$ and $C_0,C_1,C_2,\dots$ are positive constants. Then for each $N\in\mathbb N_0$ \begin{equation} \label{e:ibp2} \bigg|\int_U e^{i\widetilde\varphi(x)}a(x)\,dx\bigg|\leq C'_N\tilde h^{N(1-\tilde\rho)}, \end{equation} where the constant $C'_N$ depends only on $U,C_0,C_1,\dots,C_{N+1}$. \end{lemm} \noindent\textbf{Remark.} Using coordinate charts and a partition of unity for $a(x)$, we see that Lemma~\ref{l:ibp} also applies when $U$ is a manifold; we will typically use it for $U=\mathbb S^{n-1}$. \begin{proof} Consider the first order differential operator $$ L=-i\sum_{j=1}^m {\partial_j\widetilde\varphi\over |d\widetilde\varphi|^2}\partial_j,\quad |d\widetilde\varphi|^2:=\sum_{j=1}^m|\partial_j\widetilde\varphi|^2. $$ Then $e^{i\widetilde\varphi}=L(e^{i\widetilde\varphi})$. Integrating by parts $N$ times, we obtain $$ \bigg|\int_{U} e^{i\widetilde\varphi(x)}a(x)\,dx\bigg|= \bigg|\int_{U} e^{i\widetilde\varphi(x)}(L^t)^Na(x)\,dx\bigg| \leq \mu_L(U)\sup_x |(L^t)^Na(x)|, $$ where $\mu_L$ is the Lebesgue measure and $$ L^t f=i\sum_{j=1}^m\partial_j\Big({\partial_j\widetilde\varphi\over |d\widetilde\varphi|^2}f\Big). $$ Now, the first two bounds in~\eqref{e:ibp1} imply that $$ \sup_{x\in\supp a}\Big|\partial^\alpha\Big({\partial_j\widetilde\varphi\over |d\widetilde\varphi|^2}\Big)\Big|\leq C'_\alpha\tilde h^{1-\tilde\rho|\alpha|} $$ where the constants $C'_\alpha$ depend only on $C_0,C_1,\dots,C_{|\alpha|+1}$. 
This together with the last bound in~\eqref{e:ibp1} implies the estimate~\eqref{e:ibp2}. \end{proof} Armed with Lemma~\ref{l:ibp}, we establish decay of the kernel $\mathcal K$. We first consider the case when $y$ and $y''$ are sufficiently far away from each other, so that the corresponding Lagrangian leaves are essentially uncorrelated: \begin{lemm} \label{l:ibpl1} We have uniformly in $y,y''\in\mathbb S^{n-1}$ such that $|y-y''|>{1\over 2}h^{1/2}$, $$ \mathcal K(y,y'';h)=\mathcal O(h^\infty). $$ \end{lemm} \begin{proof} We rewrite~\eqref{e:skb2} as \begin{equation} \label{e:K-rewritten} \begin{gathered} \mathcal K(y,y'';h)=h^{1-n}\int_{\mathbb S^{n-1}}e^{i\varphi(y,y',y'')/h}a(y,y',y'';h)\,dy',\\ \varphi=2(\log |y'-y''|-\log |y'-y|),\quad a=\chi(y',y'')\overline{\chi(y',y)}\psi_0(y';h). \end{gathered} \end{equation} Due to the cutoff $\chi$, the amplitude $a$ is supported inside some fixed compact set which does not intersect $\{y=y'\}$ and $\{y'=y''\}$; in particular $\varphi$ is smooth near $\supp a$. We next have for all $N$, $$ \|\varphi\|_{C^N_{y'}(\supp a)}\leq C_N |y-y''|, $$ where the constants $C_N$ are independent of $y,y'',h$. Moreover, for some constant $C$ independent of $y,y',y'',h$, $$ |\partial_{y'}\varphi(y,y',y'')|\geq C^{-1}|y-y''|\quad\text{for }(y,y',y'')\in\supp a. $$ This follows immediately from~\eqref{e:gide} and the fact that the map $y\mapsto \mathcal G(y',y)$ is a diffeomorphism from $\mathbb S^{n-1}\setminus \{y'\}$ to $T_{y'}^*\mathbb S^{n-1}$. It remains to apply Lemma~\ref{l:ibp} with $$ \widetilde\varphi:={\varphi\over h},\quad \tilde h:={h\over|y-y''|}<2h^{1/2},\quad \tilde\rho:=\rho, $$ and use~\eqref{e:psi0-3}. \end{proof} Given Lemma~\ref{l:ibpl1}, in order to show~\eqref{e:skb1} it suffices to prove the following bound: \begin{equation} \label{e:skb3} \sup_{y_0\in\Lambda_\Gamma}\,\sup_{y''\in B(y_0,h^{1/2})}\int\limits_{\Lambda_\Gamma(C_1 h^\rho)\cap B(y_0,h^{1/2})}|\mathcal K(y,y'';h)|\,dy \leq Ch^{2(\beta-\varepsilon)}.
\end{equation} We claim that it is enough to prove the following $L^4$ estimate: \begin{equation} \label{e:horror1} \sup_{y_0\in\Lambda_\Gamma}\,\sup_{y''\in B(y_0,h^{1/2})} \int\limits_{B(y_0,h^{1/2})}|\mathcal K(y,y'';h)|^4\,dy \leq Ch^{8(\beta-\varepsilon)-3\rho(n-1-\delta)-{3\delta\over 2}}. \end{equation} Indeed, \eqref{e:skb3} follows by H\"older's inequality from~\eqref{e:horror1} and the following corollary of~\eqref{e:AD-estimate-Lebesgue}: \begin{equation} \label{e:eddie} \|\indic_{\Lambda_\Gamma(C_1h^\rho)\cap B(y_0,h^{1/2})}\|_{L^{4/3}}\leq Ch^{{3\rho\over 4}(n-1-\delta)+{3\delta\over 8}}. \end{equation} The proof of~\eqref{e:horror1} is based on taking the Taylor expansion of the phase function $\varphi$ in~\eqref{e:K-rewritten} around $y=y''=y_0$. The first term in the expansion is linear in $y-y''$ and gives the Fourier transform of a distorted version of $\psi_0$; the next terms are $\mathcal O(|y-y_0|^2+|y''-y_0|^2)=\mathcal O(h)$ and can be put into the amplitude in the integral. The $L^4$ norm of the Fourier transform can then be estimated via the additive energy of the distorted support of $\psi_0$. The proof below relies on this argument, though it does not explicitly use the Fourier transform. Note that to reduce our integral to a Fourier transform we needed to restrict to $y,y''=y_0+\mathcal O(h^{1/2})$. To show that the contributions of the remaining $y,y''$ are negligible in Lemma~\ref{l:ibpl1}, we needed the derivative bounds~\eqref{e:psi0-3}, which explains why $\psi_0$ is supported on an $h^{\rho/2}$ neighborhood of the limit set rather than on an $h^{\rho}$ neighborhood. Using Lemma~\ref{l:symbol-construction}, take $\psi_1(y;h)\in C^\infty(\mathbb S^{n-1};[0,1])$ such that \begin{equation} \label{e:psi1} \begin{gathered} \supp(1-\psi_1)\cap B(y_0,h^{1/2})=\emptyset,\quad \supp\psi_1\subset B(y_0,2h^{1/2});\\ \sup_{\mathbb S^{n-1}}|\partial^\alpha \psi_1|\leq C_\alpha h^{-|\alpha|/2}.
\end{gathered} \end{equation} Then to prove~\eqref{e:horror1} is enough to show that \begin{equation} \label{e:horror2} \sup_{y_0\in\Lambda_\Gamma}\,\sup_{y''\in B(y_0,h^{1/2})} \int\limits_{\mathbb S^{n-1}}\psi_1(y;h)|\mathcal K(y,y'';h)|^4\,dy \leq Ch^{8(\beta-\varepsilon)-3\rho(n-1-\delta)-{3\delta\over 2}}. \end{equation} By Fubini's Theorem and~\eqref{e:skb2} we have $$ \int\limits_{\mathbb S^{n-1}}\psi_1(y;h)|\mathcal K(y,y'';h)|^4\,dy =\int\limits_{(\mathbb S^{n-1})^4}\mathcal K_1(y_1,y_2,y_3,y_4,y'';h)\,dy_1dy_2dy_3dy_4, $$ where $$ \begin{aligned} \mathcal K_1&=h^{4(1-n)}\psi_0(y_1;h)\psi_0(y_2;h)\psi_0(y_3;h)\psi_0(y_4;h)\mathcal K_2,\\ \mathcal K_2&=\Big({|y_1-y''|\cdot |y_3-y''|\over |y_2-y''|\cdot |y_4-y''|}\Big)^{2i/h} \chi(y_1,y'')\overline{\chi(y_2,y'')}\chi(y_3,y'')\overline{\chi(y_4,y'')}\mathcal K_3,\\ \mathcal K_3&=\int_{\mathbb S^{n-1}}\Big({|y_2-y|\cdot |y_4-y|\over |y_1-y|\cdot |y_3-y|}\Big)^{2i/h}\,\, \overline{\chi(y_1,y)}\chi(y_2,y)\overline{\chi(y_3,y)}\chi(y_4,y)\psi_1(y;h)\,dy. \end{aligned} $$ The next statement shows that $\mathcal K_1$ is very small unless $y_1,y_2,y_3,y_4$ satisfy a certain additive relation. The measure of the set of quadruples $(y_1,y_2,y_3,y_4)$ which do satisfy this relation will later be estimated using additive energy. \begin{lemm} \label{l:ibpl2} Let $\eta_j=\mathcal G(y_0,y_j)\in T^*_{y_0}\mathbb S^{n-1}$, with $\mathcal G$ defined in~\eqref{e:stpro}, and assume that \begin{equation} \label{e:etacon} |\eta_1-\eta_2+\eta_3-\eta_4|\geq h^{\rho/2}. \end{equation} Then $\mathcal K_1(y_1,y_2,y_3,y_4,y'';h)=\mathcal O(h^\infty)$, uniformly in $y_0,y_1,y_2,y_3,y_4,y''$. \end{lemm} \begin{proof} It is enough to show that $\mathcal K_3=\mathcal O(h^\infty)$. 
For that, we write $$ \begin{gathered} \mathcal K_3=\int_{\mathbb S^{n-1}}e^{i\varphi/h}a\,dy,\quad \varphi(y_1,y_2,y_3,y_4,y)=2\sum_{j=1}^4 (-1)^j\log |y_j-y|,\\ a(y_1,y_2,y_3,y_4,y;h)=\overline{\chi(y_1,y)}\chi(y_2,y)\overline{\chi(y_3,y)}\chi(y_4,y)\psi_1(y;h). \end{gathered} $$ Put $\eta:=\eta_1-\eta_2+\eta_3-\eta_4\in T^*_{y_0}\mathbb S^{n-1}$. By~\eqref{e:gide}, $$ \partial_y\varphi(y_1,y_2,y_3,y_4,y_0)=\eta. $$ Since $\psi_1$ is supported in $B(y_0,2h^{1/2})$, we have for some global constant $C$, $$ |\eta|-Ch^{1/2}\leq|\partial_y \varphi|\leq |\eta|+Ch^{1/2}\quad\text{on }\supp a. $$ By~\eqref{e:etacon}, we get for $h$ small enough, $$ |\eta|/2\leq |\partial_y \varphi|\leq 2|\eta|\quad\text{on }\supp a. $$ It remains to apply Lemma~\ref{l:ibp} with $$ \widetilde\varphi:={\varphi\over h},\quad \tilde h:={h\over |\eta|}\leq h^{1-\rho/2},\quad \tilde\rho:={1\over 2-\rho}, $$ and use~\eqref{e:psi1}. \end{proof} Since $\chi$ is supported away from the diagonal, there exists a constant $C_1$ such that on $\supp \mathcal K_1$, we have $|\mathcal G(y_0,y_j)|\leq C_1$ for $j=1,2,3,4$. By~\eqref{e:psi0-1}, the additive energy bound~\eqref{e:ae-estimate} with $\alpha=Ch^{\rho/2}$ implies that uniformly in $y''$, \begin{equation} \label{e:ae-mesb} \mu_L\big(\{(y_1,y_2,y_3,y_4)\in \supp\mathcal K_1\mid |\eta_1-\eta_2+\eta_3-\eta_4|\leq h^{\rho/2}\}\big)\leq Ch^{2\rho(n-1)-{3\rho\over 2}\delta+{\rho\over 2}\beta_E} \end{equation} where $\eta_j:=\mathcal G(y_0,y_j)\in T^*_{y_0}\mathbb S^{n-1}$. We also have by~\eqref{e:psi1} $$ \sup|\mathcal K_1|\leq Ch^{{7\over 2}(1-n)}. $$ Together with~\eqref{e:ae-mesb} and Lemma~\ref{l:ibpl2}, this gives $$ \sup_{y_0\in\Lambda_\Gamma}\sup_{y''\in B(y_0,h^{1/2})} \int_{\mathbb S^{n-1}}\psi_1(y;h)|\mathcal K(y,y'';h)|^4\,dy\leq Ch^{(2\rho-{7\over 2})(n-1)-{3\rho\over 2}\delta+{\rho\over 2}\beta_E}. 
$$ This implies~\eqref{e:horror2} as long as $$ \Big(2\rho-{7\over 2}\Big)(n-1)-{3\rho\over 2}\delta+{\rho\over 2}\beta_E\geq 8(\beta-\varepsilon)-3\rho(n-1-\delta)-{3\delta\over 2}. $$ Recalling~\eqref{e:beta-ae}, this inequality becomes $$ \Big(5(n-1)-{9\over 2}\delta+{1\over 2}\beta_E\Big)(1-\rho)\leq 8\varepsilon. $$ The last inequality holds when $\rho$ is close to 1 depending on $\varepsilon$, finishing the proof of Theorem~\ref{t:ae-reduction}. \section{General bounds on additive energy} \label{s:ae-combinatorial} In this section, we prove a new bound (Theorem~\ref{t:ae-combinatorial}) on the additive energy of general Ahlfors-David regular sets (not just those arising from hyperbolic quotients). There is substantial conflict of notation between the current section and the rest of the paper. However, this should not cause a problem since the two are completely decoupled from each other. This makes it possible to use simpler notation in the current section. \input{section6.tex} \section{Regularity and additive energy of limit sets} \label{s:ae} In this section, we study regularity of limit sets of convex co-compact groups and their stereographic projections (\S\ref{s:ae-dynamical}). We next use the results of~\S\ref{s:ae-combinatorial} to prove Theorem~\ref{t:ad-reduced} (\S\ref{s:minkowski}). \subsection{Regularity of limit sets} \label{s:ae-dynamical} In this section, we consider a convex co-compact hyperbolic quotient $M=\Gamma\backslash\mathbb H^n$ and use the Ahlfors-David regularity of the limit set $\Lambda_\Gamma\subset\mathbb S^{n-1}$ to establish Ahlfors-David regularity of the stereographic projections $\mathcal G(y_0,\Lambda_\Gamma)\subset T_{y_0}\mathbb S^{n-1}$, $y_0\in\Lambda_\Gamma$, where $\mathcal G$ is defined in~\eqref{e:stpro}: \begin{lemm} \label{l:adred} Let $\mathbf C$ be the constant in Ahlfors-David regularity of the limit set, as defined in~\eqref{e:ad-regular-limit}. 
Then for each $y_0,y_1\in \Lambda_\Gamma$, $y_0\neq y_1$, we have \begin{equation} \label{e:adred} \mathbf K_0^{-1}\mathbf C^{-1}r^\delta\ \leq\ \mu_\delta\big(\mathcal G(y_0,\Lambda_\Gamma)\cap B(\mathcal G(y_0,y_1),r)\big) \ \leq\ \mathbf K_0\mathbf Cr^\delta,\quad r>0 \end{equation} where $\mu_\delta$ is the $\delta$--Hausdorff measure on $T_{y_0}\mathbb S^{n-1}$ and $\mathbf K_0$ is a global constant (depending only on the dimension). \end{lemm} The case of bounded $r$ in the above lemma is an immediate consequence of the regularity of $\Lambda_{\Gamma}$. To show that the constants in the regularity statement do not deteriorate when $r\to \infty$, we will prove that if we shrink the set $\mathcal G(y_0,\Lambda_{\Gamma})-\mathcal G(y_0,y_1)$, we obtain an isometric image of the set $\mathcal G(y'_{0},\Lambda_{\Gamma})-\mathcal G(y'_0,y'_1)$ for some other $y'_0,y'_1\in\Lambda_{\Gamma}$. For that we use the group action, or equivalently, argue on the quotient manifold $M$ rather than on $\mathbb H^{n}$. We first write the sets $\mathcal G(y_0,\Lambda_\Gamma)\subset T_{y_0}\mathbb S^{n-1}$ as subsets of the unstable spaces on $M$ via a map $\mathscr U_-$ constructed using horocyclic flows (see Appendix~\ref{s:hyperbolic-technical} for the proof): \begin{lemm} \label{l:horocyclic} Let $S^*\mathbb H^n\subset T^*\mathbb H^n$ be the unit cotangent bundle and $E_u$ the unstable foliation, see~\eqref{e:sudec}. 
Then there exists a smooth map $$ \mathscr U_-:\{(x,\xi,\eta)\mid (x,\xi)\in S^*\mathbb H^n,\ \eta\in E_u(x,\xi)\}\to S^*\mathbb H^n $$ such that for $(\tilde x,\tilde\xi):=\mathscr U_-(x,\xi,\eta)$ and $B_\pm:S^*\mathbb H^n\to \mathbb S^{n-1}$ defined in~\eqref{e:B-pm}, \begin{gather} \label{e:scroo-1} B_-(\tilde x,\tilde\xi)=B_-(x,\xi),\quad \mathcal P(\tilde x,B_-(x,\xi))=\mathcal P(x,B_-(x,\xi)),\\ \label{e:scroo-2} \mathcal G\big(B_-(x,\xi),B_+(\tilde x,\tilde\xi)\big)-\mathcal G\big(B_-(x,\xi),B_+(x,\xi)\big)= \mathcal P(x,B_-(x,\xi))\mathcal T_-(x,\xi)\eta, \end{gather} where $\mathcal T_-(x,\xi):E_u(x,\xi)\to T_{B_-(x,\xi)}\mathbb S^{n-1}$ is some linear isometry and $\mathcal P$ is the Poisson kernel defined in~\eqref{e:Pker}. Moreover, the map $\mathscr U_-$ commutes with the natural action of the isometry group $\PSO(1,n)$ and thus descends to a map $$ \mathscr U_-:\{(x,\xi,\eta)\mid (x,\xi)\in S^*M,\ \eta\in E_u(x,\xi)\}\to S^* M. $$ \end{lemm} \begin{figure} \includegraphics{hgap-9.pdf} \caption{The points $(x,\xi)\in S^*\mathbb H^n$ and $(\tilde x,\tilde\xi)=\mathscr U_-(x,\xi,\eta)$. The dashed circle is a horocycle and the solid arcs are geodesics through $(x,\xi)$ and $(\tilde x,\tilde\xi)$.} \label{f:horocyclic} \end{figure} \noindent\textbf{Remarks}. (i) For each $(x,\xi)\in S^*M$, the set $\{\mathscr U_-(x,\xi,\eta)\mid \eta\in E_u(x,\xi)\}$ is the unstable manifold passing through $(x,\xi)$, and the differential of the map $\eta\mapsto \mathscr U_-(x,\xi,\eta)$ at $\eta=0$ is the embedding $E_u(x,\xi)\to T_{(x,\xi)}(S^*M)$. See Figure~\ref{f:horocyclic}. The map $\mathcal T_-$ is related to the parametrization of $E_u(x,\xi)$ by the orthogonal complement $\mathcal E(x,\xi)\subset T_x M$ of~$\xi$ (see the paragraph preceding~\eqref{e:stun}) and to the parallel transport map $\mathcal E(x,\xi)\to T_{B_-(x,\xi)}\mathbb S^{n-1}$ (see~\cite[\S3.6]{rrh}), but we do not need an explicit expression for $\mathcal T_-$ here. 
\noindent (ii) In dimension $2$, $\mathscr U_-$ is given by the flow of the unstable horocyclic vector field $U_-$ (see for instance~\cite[(2.1)]{rrh}): $$ \mathscr U_-(x,\xi,\eta)=e^{sU_-}(x,\xi),\quad \eta=sU_-(x,\xi)\in E_u(x,\xi),\quad s\in\mathbb R. $$ \smallskip For each point $(x,\xi)\in K\cap S^*M$, where $K=\Gamma_+\cap\Gamma_-$ is the trapped set, define \begin{equation} \label{e:cal-F} \mathcal F_{(x,\xi)}:=\{\eta\mid \mathscr U_-(x,\xi,\eta)\in K\}\ \subset\ E_u(x,\xi). \end{equation} Note that $(\tilde x,\tilde \xi):=\mathscr U_-(x,\xi,\eta)$ lies in $\Gamma_+$ for all $\eta$, since the geodesics starting at $(x,\xi)$ and $(\tilde x,\tilde\xi)$ converge to each other as $t\to -\infty$ by~\eqref{e:scroo-1} (see also~\eqref{e:Gpm-formula}); therefore, $K$ can be replaced in~\eqref{e:cal-F} by $\Gamma_-$. By~\eqref{e:scroo-2} and~\eqref{e:Gpm-formula}, for each $(x,\xi)\in S^*\mathbb H^n$ we have \begin{equation} \label{e:cal-F-useful} \mathcal G(y_-,\Lambda_\Gamma)-\mathcal G(y_-,y_+)= \mathcal P(x,y_-)\mathcal T_-(x,\xi)\mathcal F_{\pi_\Gamma(x,\xi)},\quad y_\pm :=B_\pm(x,\xi), \end{equation} with $\pi_\Gamma$ defined in~\eqref{e:pi-Gamma}. \begin{proof}[Proof of Lemma~\ref{l:adred}] Let $y_0,y_1\in\Lambda_\Gamma$ and $y_0\neq y_1$. We first prove~\eqref{e:adred} for the case $0<r<1$. Define the diffeomorphism $$ \Phi:\mathbb S^{n-1}\setminus \{y_0\}\to T_{y_0}\mathbb S^{n-1},\quad \Phi(y):=\mathcal G(y_0,y). $$ Then $d\Phi(y)$ is conformal with factor ${1\over 2}(1+|\Phi(y)|^2)$. It follows that $$ \sup_{\Phi(y)\in B(\eta_1,1)} \|d\Phi(y)\|\leq {3\over 2} (1+|\eta_1|^2),\quad \sup_{\Phi(y)\in B(\eta_1,1)} \|d\Phi(y)^{-1}\|\leq {6\over 1+|\eta_1|^2} $$ where $\eta_1:=\Phi(y_1)$, and therefore $$ B\Big(y_1,{2r\over 3(1+|\eta_1|^2)}\Big)\ \subset\ \Phi^{-1}(B(\eta_1,r))\ \subset\ B\Big(y_1,{6r\over 1+|\eta_1|^2}\Big). 
$$ We now have by~\eqref{defnADRegular} $$ \begin{aligned} \mu_\delta\big(\Phi(\Lambda_\Gamma)\cap B(\eta_1,r)\big)&\leq \Big({3\over 2}(1+|\eta_1|^2)\Big)^\delta\mu_\delta\big(\Lambda_\Gamma \cap \Phi^{-1}(B(\eta_1,r))\big)\\ &\leq \Big({3\over 2}(1+|\eta_1|^2)\Big)^\delta\mu_\delta\Big(\Lambda_\Gamma\cap B\Big(y_1,{6r\over 1+|\eta_1|^2}\Big)\Big)\\ &\leq \mathbf C (9r)^\delta \end{aligned} $$ and similarly $$ \mu_\delta\big(\Phi(\Lambda_\Gamma)\cap B(\eta_1,r)\big) \geq \Big({1+|\eta_1|^2\over 6}\Big)^\delta\mu_\delta\big(\Lambda_\Gamma \cap \Phi^{-1}(B(\eta_1,r))\big) \geq \mathbf C^{-1} (r/9)^\delta $$ which gives~\eqref{e:adred} for $0<r<1$ with $\mathbf K_0:=9^{n-1}\geq 9^\delta$. Now, assume that $r\geq 1$. Take some $(x,\xi)\in S^*\mathbb H^n$ on the geodesic connecting $y_0$ and $y_1$, that is $$ B_-(x,\xi)=y_0,\quad B_+(x,\xi)=y_1. $$ Let $\gamma\in\Gamma$ and put $$ (x',\xi'):=\gamma.(x,\xi),\quad B_-(x',\xi')=y'_0,\quad B_+(x',\xi')=y'_1; $$ note that $y'_0,y'_1\in\Lambda_\Gamma$. We choose $(x,\xi)$ and $\gamma$ such that \begin{equation} \label{e:huangpu0} r':={\mathcal P(x',y_0')\over \mathcal P(x,y_0)}r\ <\ 1. \end{equation} To do that, we first remark that there exists $R>0$ depending on the quotient $M$ such that for each $(\tilde x,\tilde\xi)\in K\cap S^*M$, there exists \begin{equation} \label{e:huangpu} (x',\xi')\in\pi_\Gamma^{-1}(\tilde x,\tilde\xi),\quad \mathcal P(x',B_-(x',\xi'))\leq R. \end{equation} This follows immediately from the compactness of $K\cap S^*M$. To ensure~\eqref{e:huangpu0}, it remains to take $(x,\xi)$ such that $\mathcal P(x,y_0)>rR$ (which is always possible since the function $\mathcal P(x,y_0)$ grows exponentially along the backwards geodesic flow) and choose $(x',\xi')$ using~\eqref{e:huangpu} with $(\tilde x,\tilde\xi):=\pi_\Gamma(x,\xi)$.
From~\eqref{e:cal-F-useful} and the fact that $\pi_\Gamma(x,\xi)=\pi_\Gamma(x',\xi')$, we have \begin{equation} \label{e:huangpu1} \mathcal G(y_0,\Lambda_\Gamma)-\mathcal G(y_0,y_1)={\mathcal P(x,y_0)\over \mathcal P(x',y'_0)}\widetilde{\mathcal T}\big(\mathcal G(y'_0,\Lambda_\Gamma)-\mathcal G(y'_0,y'_1)\big) \end{equation} where $\widetilde{\mathcal T}:T_{y_0'}\mathbb S^{n-1}\to T_{y_0}\mathbb S^{n-1}$ is an isometry. Since~\eqref{e:adred} is already known for $r<1$, we have using~\eqref{e:huangpu0}, $$ \mathbf K_0^{-1}\mathbf C^{-1}(r')^\delta\ \leq\ \mu_\delta\big(\mathcal G(y'_0,\Lambda_\Gamma)\cap B(\mathcal G(y'_0,y'_1),r')\big)\ \leq\ \mathbf K_0\mathbf C (r')^\delta. $$ Combining this with~\eqref{e:huangpu1}, we obtain~\eqref{e:adred}. \end{proof} \subsection{Regularity, additive energy, and Minkowski dimension} \label{s:minkowski} In this section, we state a few results estimating the Lebesgue measure of neighborhoods of Ahlfors-David regular sets. This establishes bounds on Minkowski dimensions of these sets. We rely on the following \begin{defi} Assume that $(\mathcal M,d)$ is a metric space. For $\mathcal X\subset\mathcal M$ and $\alpha>0$, define the maximal number of $\alpha$-separated points in $\mathcal X$: $$ \mathcal N(\mathcal X,\alpha)=\max\{N\mid x_1,\dots, x_N\in \mathcal X,\ d(x_i,x_j)> \alpha \quad\text{for }i\neq j\}. $$ \end{defi} For regular sets, the quantity $\mathcal N(\mathcal X,\alpha)$ establishes a link between the Hausdorff and Minkowski dimensions: \begin{lemm} \label{l:alphasep} Let $(\mathcal M,d)$ be a metric space and let $\mathcal X\subset\mathcal M$ be compact. 1. If $\mathcal X$ is $\delta$--regular in the sense of Definition~\ref{d:ad-regular} with constant $C_\mathcal X$, then for each $x\in \mathcal X$, $$ C_{\mathcal X}^{-2}\Big({\alpha'\over \alpha}\Big)^\delta\leq \mathcal N(\mathcal X\cap B(x,\alpha'),\alpha)\leq C_{\mathcal X}^2\Big(1+{2\alpha'\over \alpha}\Big)^\delta,\quad 0<\alpha,\alpha'<\diam(\mathcal M). 
$$ 2. If $\mathcal M$ is an $m$-dimensional Riemannian manifold and $\mathcal X(\alpha)$ is the $\alpha$-neighborhood of $\mathcal X$, then $$ C^{-1}\alpha^m\mathcal N(\mathcal X,\alpha) \leq \mu_L(\mathcal X(\alpha))\leq C\alpha^m\mathcal N(\mathcal X,\alpha),\quad 0<\alpha<1, $$ where $\mu_L$ is the Lebesgue measure induced by the metric and $C$ is some constant independent of $\alpha$. \end{lemm} \begin{proof} We will begin with the first statement. Fix a ball $B(x,\alpha')$ with $x\in\mathcal{X}$. Let $\{x_1,\ldots,x_N\}\subset \mathcal{X}\cap B(x,\alpha')$ be a maximal collection of $\alpha$--separated points. Then $$ \mathcal{X}\cap B(x,\alpha')\ \subset\ \bigcup_{i=1}^N B(x_i,\alpha ),\qquad \bigsqcup_{i=1}^N B\Big(x_i,{\alpha\over 2}\Big)\ \subset\ B\Big(x,\alpha'+{\alpha\over 2}\Big). $$ It follows that \begin{equation*} \begin{split} N &\geq C_{\mathcal{X}}^{-1} \alpha^{-\delta} \sum_{i=1}^N \mu_{\delta}(\mathcal{X}\cap B(x_i,\alpha)) \geq C_{\mathcal{X}}^{-1} \alpha^{-\delta} \mu_{\delta}(\mathcal{X}\cap B(x,\alpha^\prime))\\&\geq C_{\mathcal{X}}^{-2} \Big({\alpha'\over\alpha}\Big)^{\delta},\\ N &\leq \Big({2\over\alpha}\Big)^{\delta}C_{\mathcal{X}} \sum_{i=1}^N \mu_{\delta}\Big(\mathcal{X}\cap B\Big(x_i,{\alpha\over 2}\Big)\Big) \leq \Big({2\over\alpha}\Big)^{\delta}C_{\mathcal{X}} \mu_{\delta}\Big(\mathcal{X}\cap B\Big(x,\alpha'+{\alpha\over 2}\Big)\Big)\\&\leq C_{\mathcal{X}}^2\Big(1+{2\alpha'\over\alpha}\Big)^{\delta}; \end{split} \end{equation*} this finishes the proof of the first statement. 
The second statement follows similarly from the inclusions $$ \mathcal X(\alpha)\ \subset\ \bigcup_{i=1}^N B(x_i,2\alpha),\qquad \bigsqcup_{i=1}^N B\Big(x_i,{\alpha\over 2}\Big)\ \subset\ \mathcal X(\alpha), $$ where $\{x_1,\dots,x_N\}\subset\mathcal X$ is a maximal collection of $\alpha$-separated points, plus the observation that on any compact subset $\Omega\subset \mathcal M$, the Lebesgue measure of a ball of radius~$\alpha$ is between $C_{\Omega}^{-1}\alpha^m$ and $C_{\Omega}\alpha^m$. \end{proof} As an application, we obtain \begin{proof}[Proof of~\eqref{e:AD-estimate-Lebesgue}] Follows directly from $\delta$-regularity of the limit set (see~\S\ref{s:introad}), Lemma~\ref{l:alphasep}, and the fact that $\Lambda_\Gamma(\alpha)\cap B(y_0,\alpha')$ contains the $\alpha$-neighborhood of $\Lambda_\Gamma\cap B(y_0,\alpha'-\alpha)$ and is contained in the $\alpha$-neighborhood of $\Lambda_\Gamma\cap B(y_0,\alpha'+\alpha)$. \end{proof} \noindent\textbf{Remark}. In Definition \ref{d:ad-regular} of Ahlfors-David regularity we used the Hausdorff measure. However, any other Borel measure could be used instead: \begin{lemm}[\cite{DS}, Lemma 1.2]\label{equivOfADRegDefns} Let $(\mathcal{M},d)$ be a complete metric space with more than one element and let $\mathcal X \subset \mathcal{M}$. Let $\mu$ be a Borel measure on $\mathcal{M}$ with the property that for all $x\in \mathcal X$, \begin{equation} C_{\mathcal{X}}^{-1}r^{\delta}\leq \mu(\mathcal{X}\cap B(x,r))\leq C_{\mathcal{X}}r^{\delta},\quad 0<r<\diam(\mathcal{M}). \end{equation} Then $\mathcal{X}$ is $\delta$--regular. The regularity constant depends only on $\delta$ and $C_{\mathcal X}$. \end{lemm} We now give the proof of Theorem~\ref{t:ad-reduced}. 
The first step is the following \begin{lemm} \label{l:ad-helper} Let $\Lambda_\Gamma$ be the limit set of a convex co-compact hyperbolic surface, $\mathbf C$ be defined in~\eqref{e:ad-regular-limit}, $\mathbf K_0$ be given in Lemma~\ref{l:adred}, and $\mathcal G$ be defined in~\eqref{e:stpro}. Take $y_0\in \Lambda_\Gamma$ and $R>0$. Then there exists an interval $[-1,1] \subset I \subset [-2,2]$ such that $$ \mathcal{X}:=I\cap R^{-1}\mathcal G(y_0,\Lambda_\Gamma)\ \subset\ T_{y_0}\mathbb S^1\simeq \mathbb R $$ is $\delta$--regular with regularity constant $\mathbf{C}_2:=(50\mathbf{K}_0\mathbf{C})^{{1+\delta\over 1-\delta}}$. \end{lemm} \begin{proof} By Lemma~\ref{l:adred}, the set $$ \mathcal Y:=R^{-1}\mathcal G(y_0,\Lambda_\Gamma)\ \subset\ \mathbb R $$ is $\delta$-regular with constant $\mathbf K_0 \mathbf C$. Divide $[-2,-1]$ and $[1,2]$ into $\mathbf C_1$ intervals of size $\mathbf C_1^{-1}$ each, where $\mathbf{C}_1:=\lceil(10\mathbf{K}_0^2\mathbf{C}^2)^{\frac{1}{1-\delta}}\rceil$. By Proposition~\ref{ADStronglyAvoids}, at least one of the sub-intervals in $[-2,-1]$ and at least one of the sub-intervals in $[1,2]$ must be disjoint from~$\mathcal{Y}$. Call these intervals $I_1$ and $I_2$. Let $I$ be the convex hull of the midpoints of $I_1$ and~$I_2$. We will show that $\mathcal{X}:=I\cap\mathcal Y$ is $\delta$--regular. Let $x\in \mathcal{X}$ and $0<r\leq 4$. We immediately have \begin{equation*} \mu_{\delta}\big(\mathcal{X}\cap B(x,r)\big)\leq\mu_{\delta}\big(\mathcal Y\cap B(x,r)\big)\leq \mathbf{K}_0\mathbf{C}r^\delta. \end{equation*} It remains to prove a lower bound. If $r<\mathbf{C}_1^{-1}$ then $ \mathcal Y\cap B(x,r)\subset I$ and thus \begin{equation*} \mu_{\delta}(\mathcal{X}\cap B(x,r))=\mu_{\delta}\big(\mathcal Y\cap B(x,r)\big)\geq (\mathbf{K}_0\mathbf{C})^{-1}r^\delta. 
\end{equation*} On the other hand, if $\mathbf{C}_1^{-1}\leq r\leq 4$, then \begin{equation*} \begin{split} \mu_{\delta}(\mathcal{X}\cap B(x,r))&\geq \mu_{\delta}(\mathcal{X}\cap B(x,\mathbf{C}_1^{-1}))\\ &\geq \mathbf{K}_0^{-1}\mathbf{C}^{-1}\mathbf{C}_1^{-\delta}\\ &\geq \mathbf C_2^{-1} r^{\delta}.\qedhere \end{split} \end{equation*} \end{proof} We finally combine Theorem~\ref{t:ae-combinatorial} from~\S\ref{s:ae-combinatorial}, Lemma~\ref{l:alphasep}, and Lemma~\ref{l:ad-helper} to obtain \begin{proof}[Proof of Theorem~\ref{t:ad-reduced}] Let $\Lambda_\Gamma$ be the limit set of a convex co-compact hyperbolic group. Let $y_0\in\Lambda_\Gamma.$ Let $\alpha>0$ (small) and $C_1\geq 1$ (large), and put $\alpha_1:=C_1^{-1}\alpha$. First, note that \begin{equation}\label{boundOnRescaledBallAE} E_A\big(\mathcal G(y_0,\Lambda_\Gamma)\cap B(0,C_1),\alpha\big) = E_A\big(C_1^{-1}\mathcal G(y_0,\Lambda_\Gamma)\cap B(0,1),\alpha_1\big)\leq E_A(\mathcal X,\alpha_1), \end{equation} where $\mathcal X:=I\cap C_1^{-1}\mathcal G(y_0,\Lambda_\Gamma)\subset[-2, 2]$ is defined in Lemma~\ref{l:ad-helper}. Each point $(\eta_1,\eta_2,\eta_3,\eta_4)\in \mathcal X(\alpha_1)^4$ satisfying $|\eta_1-\eta_2+\eta_3-\eta_4|\leq\alpha_1$ lies in the $4\alpha_1$-neighborhood of the set $$ \mathcal Z_{\alpha_1}=\{(\eta_1,\eta_2,\eta_3,\eta_4)\in\mathcal X^4\mid |\eta_1-\eta_2+\eta_3-\eta_4|\leq 5\alpha_1\}. $$ By Definition~\ref{d:ae} and part~2 of Lemma~\ref{l:alphasep}, we have for some global constant $C$, \begin{equation} \label{e:allai1} E_A(\mathcal X,\alpha_1)\leq C \mathcal N(\mathcal Z_{\alpha_1},4\alpha_1). \end{equation} We next claim that \begin{equation} \label{e:allai2} \mathcal N(\mathcal Z_{\alpha_1},4\alpha_1)\leq C\alpha_1^{-4\delta}\mathcal E_A(\mathcal X,\mu_\delta,9\alpha_1) \end{equation} where $\mathcal E_A$ is given by Definition~\ref{d:aespecial} and $C$ is some constant depending on $\mathbf C$. 
Indeed, let $z_1,\dots,z_N\in \mathcal Z_{\alpha_1}$ be a $4\alpha_1$-separated set of points. Then $$ \bigsqcup_{j=1}^N B(z_j,2\alpha_1)\ \subset\ \{|\eta_1-\eta_2+\eta_3-\eta_4|\leq 9\alpha_1\}. $$ Taking the $\mu_\delta^4$ measure of the intersection of both sides with $\mathcal X^4$ and arguing similarly to the proof of part~1 of Lemma~\ref{l:alphasep}, we obtain~\eqref{e:allai2}. Finally, applying Theorem~\ref{t:ae-combinatorial} to ${1\over 2}\mathcal X$, which is $\delta$-regular by Lemma~\ref{l:ad-helper}, we obtain \begin{equation}\label{boundOnSlightlyEnlargedBallAE} \mathcal E_A(\mathcal X,\mu_\delta,9\alpha_1)\leq C\alpha_1^{\delta+\beta_E}. \end{equation} Here $\beta_E=\delta\exp\big[-\mathbf{K}(1-\delta)^{-28}(1+\log^{14}\mathbf{C})\big],$ where $\mathbf{K}$ is an absolute constant, and $C$ is some constant depending on $\mathbf C$. Combining \eqref{boundOnRescaledBallAE}--\eqref{boundOnSlightlyEnlargedBallAE}, we conclude that $\Lambda_{\Gamma}$ satisfies the additive energy bound with exponent $\beta_E$ in the sense of Definition~\ref{d:ae-estimate}. \end{proof} \subsection{Example: three-funneled surfaces} \label{s:3fun} We now consider a particular family of convex co-compact hyperbolic surfaces and show that the regularity constants in Lemma~\ref{l:adred} for the corresponding limit sets have a uniform upper bound when the surface varies in a compact set in the moduli space. (Similar reasoning is expected to work for general convex co-compact hyperbolic surfaces.) More precisely, we study the family of \emph{three-funneled surfaces} $M_{\ell}$, parametrized by the Fenchel--Nielsen coordinates $\ell:=(\ell_1,\ell_2,\ell_3)\in (0,\infty)^3$. To construct $M_\ell$, we start with a right-angled hyperbolic hexagon with sides $\ell_1/2,q_3,\ell_2/2,q_1,\ell_3/2,q_2$. This hexagon is unique up to isometry, and $q_1,q_2,q_3>0$ are determined by a formula involving only $\ell_1,\ell_2,$ and $\ell_3$ (see for example~\cite[Theorem~3.5.13]{Ratcliffe}). 
Gluing two such hexagons along the $q_3$ side, we obtain a right-angled hyperbolic octagon with sides $\ell_1,q_2,\ell_3/2,q_1,\ell_2,q_1,\ell_3/2,q_2$. Attaching funnel ends along the $\ell_1,\ell_2,\ell_3/2$ sides to the above octagon, we obtain a fundamental domain $\mathcal F_{\ell}\subset\mathbb H^2$. The complement of $\mathcal F_{\ell}$ is the disjoint union of four open geodesic half-disks (i.e. regions of $\mathbb H^2$ bounded by a geodesic) $D_1,D_2,D_3,D_4$, where $D_1,D_3$ are bounded by the geodesics containing the two $q_2$ sides of the octagon and $D_2,D_4$ are bounded by the geodesics containing its $q_1$ sides. See Figure~\ref{f:schottky}. We next define the group $\Gamma_\ell\subset\PSL(2,\mathbb R)$ using the following Schottky representation. Let $\omega_1,\omega_2$ be the geodesics on $\mathbb H^2$ containing the $\ell_1$ and $\ell_2$ sides of the octagon. Then there exist unique $\gamma_1,\gamma_2\in \PSL(2,\mathbb R)$ satisfying \begin{equation} \label{e:schottky} \gamma_j(\overline{D_j})=\mathbb H^2\setminus D_{j+2},\quad \gamma_j(\omega_j)=\omega_j,\quad j=1,2. \end{equation} \begin{figure} \includegraphics{hgap-13.pdf} \qquad \includegraphics{hgap-14.pdf} \caption{A three-funneled surface (on the right) and its fundamental domain in the Poincar\'e disk model (on the left).} \label{f:schottky} \end{figure}% Let $\Gamma_\ell$ be the group generated by $\gamma_1,\gamma_2$. Then $\Gamma_\ell$ is a free group, and the quotient $$ M_\ell:=\Gamma_\ell\backslash\mathbb H^2 $$ is a convex co-compact hyperbolic surface with fundamental domain $\mathcal F_\ell$. The numbers $\ell_1,\ell_2,\ell_3$ are the lengths of the geodesic necks separating the funnels of $M_\ell$ from the convex core. See for instance~\cite[\S15.1]{Borthwick} for details. The octagon constructed above, the disks $D_j$, and the group elements $\gamma_j$ depend continuously on the choice of $\ell=(\ell_1,\ell_2,\ell_3)$. 
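\noindent\textbf{Remark}. The generators $\gamma_1,\gamma_2$ act on $\mathbb S^1$ by M\"obius transformations, and the distortion of their boundary derivatives is what drives the regularity estimates below (see Lemma~\ref{l:bashas}). As a purely illustrative aside (not part of the argument; the matrix entries and step size below are arbitrary choices of ours), the boundary derivative formula from the proof of Lemma~\ref{l:bashas} can be checked numerically:

```python
# Numerical check of the boundary-derivative formula for a Mobius map
# (illustrative only; the SL(2,R) matrix below is an arbitrary choice)

def cayley(z):
    # boundary of upper half-plane -> unit circle, w = (z - i)/(z + i)
    return (z - 1j) / (z + 1j)

def mobius(a, b, c, d, z):
    # action of the matrix [[a, b], [c, d]] on the half-plane boundary
    return (a * z + b) / (c * z + d)

a, b, c, d = 2.0, 1.0, 1.0, 1.0   # ad - bc = 1
z, h = 0.7, 1e-6                  # real z parametrizes the circle minus one point

# |gamma'(w)| via finite differences of the induced circle map w -> cayley(gamma.z)
num = abs(cayley(mobius(a, b, c, d, z + h)) - cayley(mobius(a, b, c, d, z)))
den = abs(cayley(z + h) - cayley(z))
formula = (1 + z**2) / ((a*z + b)**2 + (c*z + d)**2)
norm = a*a + b*b + c*c + d*d      # |gamma| = a^2 + b^2 + c^2 + d^2

print(abs(num / den - formula))   # close to 0: agreement with the formula
print(formula >= 1 / (2 * norm))  # True: consistent with |gamma'(w)|^{-1} <= 2|gamma|
```

The first printed value is the discrepancy between the finite-difference derivative and the closed formula $(1+z^2)/((az+b)^2+(cz+d)^2)$; the second confirms the lower bound $|\gamma'(w)|\geq (2|\gamma|)^{-1}$ used in Lemma~\ref{l:bashas}.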
Denote by $\Lambda_\ell=\Lambda_{\Gamma_\ell}$ the limit set of $\Gamma_\ell$ and by $\delta_\ell$ its dimension. The following proposition states that $\Lambda_\ell$ is Ahlfors--David regular with regularity constant locally uniform in $\ell$; we use the same notation as in Lemma~\ref{l:adred}. \begin{prop} \label{l:3fun} Let $\mathscr K\subset (0,\infty)^3$ be a compact set. Then there exists a constant $\mathbf C_{\mathscr K}>0$ such that for each $\ell\in\mathscr K$ and each $y_0,y_1\in\Lambda_\ell$, $y_0\neq y_1$, we have \begin{equation} \label{e:3fun} \mathbf C_{\mathscr K}^{-1}c_\ell r^\delta\ \leq\ \mu_\delta\big(\mathcal G(y_0,\Lambda_\ell)\cap B(\mathcal G(y_0,y_1),r)\big) \ \leq\ \mathbf C_{\mathscr K}c_\ell r^\delta,\quad r>0 \end{equation} where $c_\ell>0$ depends only on $\ell$. \end{prop} Before proving Proposition~\ref{l:3fun}, we use it to show the following \begin{theo} \label{t:3funny} There exists an open set $\mathscr U\subset (0,\infty)^3$ such that: \begin{itemize} \item if $\ell\in (0,\infty)^3$ and $\delta_\ell=1/2$, then $\ell\in\mathscr U$; \item if $\ell\in\mathscr U$, then $M_\ell$ has an essential spectral gap in the sense of~\eqref{e:essential-gap2} of size $$ \beta=\beta_\ell>\max\Big(0,{1\over 2}-\delta_\ell\Big). $$ \end{itemize} \end{theo} \noindent\textbf{Remark}. It is well-known that $\delta_\ell$ depends continuously on $\ell$~--- see for example~\cite{Anderson-Rocha}. Moreover, there exist $\ell_-,\ell_+\in (0,\infty)^3$ such that $\delta_{\ell_-}<1/2<\delta_{\ell_+}$. In fact, for the case $\ell_1=\ell_2=\ell_3$, we have $\delta_\ell\to 1$ as $\ell_j\to 0$ and $\delta_\ell\to 0$ as $\ell_j\to \infty$~--- see~\cite[Theorem~3.5]{McMullen}. By considering a path connecting $\ell_-$ with $\ell_+$ and applying Theorem~\ref{t:3funny}, we see that there exist $\ell$ such that $\delta_\ell>1/2$, yet $M_\ell$ has an essential spectral gap of size $\beta_\ell>0$.
\begin{proof} It suffices to show that for each $\tilde\ell$ with $\delta_{\tilde\ell}=1/2$, there exists $\tilde\beta>0$ and a neighborhood $U_{\tilde\ell}$ of $\tilde\ell$ such that for each $\ell\in U_{\tilde\ell}$, $M_\ell$ has an essential spectral gap of size $\tilde\beta$. Indeed, it follows from here that there is an open neighborhood $U'_{\tilde\ell}$ of $\tilde\ell$ such that for each $\ell\in U'_{\tilde\ell}$, $M_\ell$ has an essential gap of size $\beta_\ell>\max(0,1/2-\delta_\ell)$. It remains to let $\mathscr U$ be the union of all $U'_{\tilde\ell}$. To show the existence of the neighborhood $U_{\tilde\ell}$, by Theorems~\ref{t:fup-reduction} and~\ref{t:ae-reduction} it suffices to show that there exists a constant $\beta_E>0$ such that for all $\ell$ sufficiently close to $\tilde\ell$, the set~$\Lambda_\ell$ satisfies the additive energy bound with exponent $\beta_E$ in the sense of Definition~\ref{d:ae-estimate}. To show this, we argue as in the proof of Theorem~\ref{t:ad-reduced} in~\S\ref{s:minkowski}. The only difference is that Lemma~\ref{l:adred} is replaced by Proposition~\ref{l:3fun}. The constant $c_\ell$ in~\eqref{e:3fun} can be removed by Lemma~\ref{equivOfADRegDefns}; alternatively, we may argue using the measure $c_\ell^{-1}\mu_\delta$ instead of $\mu_\delta$ since the proof of Theorem~\ref{t:ae-combinatorial} never used that $\mu_\delta$ is the Hausdorff measure. \end{proof} We now prove Proposition~\ref{l:3fun}. Assume that $\ell$ varies in a compact subset of $(0,\infty)^3$; the constants below will depend on that subset. We start with the following \begin{lemm} \label{l:bashas} Assume that $$ \gamma=\begin{pmatrix} a&b\\c&d\end{pmatrix}\in \Gamma_\ell\subset\PSL(2,\mathbb R),\quad |\gamma|:=a^2+b^2+c^2+d^2. $$ Then for each $A\subset \mathbb S^1$, we have \begin{equation} \label{e:bashas} \mu_{\delta_\ell}(\Lambda_\ell\cap \gamma(A))\leq (2|\gamma|)^{\delta_\ell} \mu_{\delta_\ell}(\Lambda_\ell\cap A). 
\end{equation} \end{lemm} \begin{proof} We identify the upper half-plane model $\{\Im z>0\}$ with the disk model $\{|w|<1\}$ by the M\"obius transformation $$ w={z-i\over z+i},\quad z=i{1+w\over 1-w}. $$ With the M\"obius transformation $\gamma$ given in the $z$ variable by $\gamma.z={az+b\over cz+d}$, its derivative in the $w$ variable on $\mathbb S^1$ satisfies $$ |\gamma'(w)|={1+z^2\over (az+b)^2+(cz+d)^2},\quad w\in\mathbb S^1. $$ It follows that $|\gamma'(w)|^{-1}\leq 2|\gamma|$; substituting $\gamma^{-1}$ instead of $\gamma$, we get $|\gamma'(w)|\leq 2|\gamma|$. The estimate~\eqref{e:bashas} follows from here and the fact that $\Lambda_\ell\cap\gamma(A)=\gamma(\Lambda_\ell\cap A)$. \end{proof} Using Lemma~\ref{l:bashas}, we next prove \begin{lemm} \label{l:bashas2} There exists a constant $c>0$ such that \begin{equation} \label{e:bashas2} \mu_{\delta_\ell}(\Lambda_\ell\cap \overline{D_j})\geq c\mu_{\delta_\ell}(\Lambda_\ell),\quad j=1,2,3,4. \end{equation} \end{lemm} \begin{proof} We consider the case $j=1$, the other cases are treated similarly. By~\eqref{e:schottky}, we have $\overline{D_1\cup D_2\cup D_4}\subset\gamma_1(\overline{D_1})$. Since $|\gamma_1|$ is bounded by some constant depending on $\mathscr K$, by Lemma~\ref{l:bashas} we obtain for some constant $C$, \begin{equation} \label{e:bashas2.1} \mu_{\delta_\ell}\big(\Lambda_\ell\cap (\overline{D_1\cup D_2\cup D_4})\big)\leq C\mu_{\delta_\ell}(\Lambda_\ell\cap\overline{D_1}). \end{equation} Next, $\overline{D_3}\subset\gamma_2(\overline{D_2})$. Therefore, similarly to~\eqref{e:bashas2.1} we get \begin{equation} \label{e:bashas2.2} \mu_{\delta_\ell}(\Lambda_\ell\cap \overline{D_3})\leq C\mu_{\delta_\ell}(\Lambda_\ell\cap \overline{D_2}). \end{equation} Combining~\eqref{e:bashas2.1} and~\eqref{e:bashas2.2} and using that $\Lambda_\ell\subset \overline{D_1\cup D_2\cup D_3\cup D_4}$, we obtain~\eqref{e:bashas2}. 
\end{proof} We are now ready to give \begin{proof}[Proof of Proposition~\ref{l:3fun}] We argue similarly to the proof of Lemma~\ref{l:adred}. Take $y_0,y_1\in\Lambda_\ell$, $y_0\neq y_1$, and $r>0$. Fix a large constant $C_1>0$ depending only on $\mathscr K$, to be chosen later. Let $(x,\xi)\in S^*\mathbb H^2$ be the unique point satisfying $$ B_-(x,\xi)=y_0,\quad B_+(x,\xi)=y_1,\quad \mathcal P(x,y_0)=r/C_1. $$ By~\eqref{e:Gpm-formula}, the projection $\pi_\Gamma(x,\xi)\in S^*M$ lies in the trapped set. Take $\gamma\in\Gamma_\ell$ such that for $(x',\xi'):=\gamma.(x,\xi)$, the point $x'$ lies in the fundamental domain $\mathcal F_\ell$, and denote $$ y'_0:=B_-(x',\xi'),\quad y'_1:=B_+(x',\xi'). $$ Since $\pi_\Gamma(x',\xi')$ is in the trapped set, $x'$ lies in the convex core, which is the octagon used in the construction of $\mathcal F_\ell$. Thus there exists a constant $C_2>0$ depending only on $\mathscr K$ such that $$ C_2^{-1}\leq \mathcal P(x',y'_0)\leq C_2. $$ Applying~\eqref{e:huangpu1}, we see that $$ \begin{aligned} &(C_1C_2)^{-\delta} r^\delta \mu_\delta\big(\mathcal G(y'_0,\Lambda_\ell)\cap B(\mathcal G(y'_0,y'_1),C_1/C_2)\big)\\ \leq\ &\mu_\delta\big(\mathcal G(y_0,\Lambda_\ell)\cap B(\mathcal G(y_0,y_1),r)\big)\\ \leq\ & (C_2/C_1)^\delta r^\delta \mu_\delta\big(\mathcal G(y'_0,\Lambda_\ell)\cap B(\mathcal G(y'_0,y'_1),C_1C_2)\big). \end{aligned} $$ Therefore, in order to prove~\eqref{e:3fun} it is enough to verify the inequalities $$ \begin{aligned} \mu_\delta\big(\mathcal G(y'_0,\Lambda_\ell)\cap B(\mathcal G(y'_0,y'_1),C_1C_2)\big)&\leq Cc_\ell,\\ \mu_\delta\big(\mathcal G(y'_0,\Lambda_\ell)\cap B(\mathcal G(y'_0,y'_1),C_1/C_2)\big)&\geq C^{-1}c_\ell \end{aligned} $$ for some constants $C$ depending only on $\mathscr K$ and $c_\ell$ depending on $\ell$. We have $y'_0,y'_1\in\Lambda_\ell$, thus $y'_0\in \overline {D_j},y'_1\in\overline{D_k}$ for some $j,k\in\{1,2,3,4\}$. Moreover, $x'\in\mathcal F_\ell$ implies that $j\neq k$.
Thus $|y'_0-y'_1|$ is bounded away from zero uniformly in $\ell\in\mathscr K$. Since $\mathcal G$ is a smooth map away from the diagonal, we see that it is enough to prove the inequalities \begin{align} \label{e:fuego2} \mu_\delta\big(\{y\in\Lambda_\ell\mid \mathcal G(y'_0,y)\in B(\mathcal G(y'_0,y'_1),C_1C_2)\}\big)&\leq Cc_\ell, \\ \label{e:fuego3} \mu_\delta\big(\{y\in\Lambda_\ell\mid \mathcal G(y'_0,y)\in B(\mathcal G(y'_0,y'_1),C_1/C_2)\}\big)&\geq C^{-1}c_\ell. \end{align} We put $c_\ell:=\mu_\delta(\Lambda_\ell)$; then~\eqref{e:fuego2} follows automatically. To show~\eqref{e:fuego3}, we note that for $C_1$ large enough depending on $\mathscr K$, the set on the left-hand side contains $\Lambda_\ell\cap \overline{D_k}$. It remains to use Lemma~\ref{l:bashas2}. \end{proof} \subsection{Ideas behind the proof of Theorem~\ref{t:ae-combinatorial}} \label{s:ae-ideas} \subsubsection{Ahlfors-David regularity and arithmetic progressions} A $\delta$--regular subset of $[0,1]$ cannot contain long arithmetic progressions. More precisely, suppose $P\subset \mathcal{X}$ is an arithmetic progression of length $|P|$ and spacing $t>0$. Let $I\subset[0,1]$ be the interval of length $t|P|$ centered around $P$. If we place an interval of radius $t/2$ around each point of $P$, then $\mu_\delta(\mathcal{X}\cap I)\geq C_\mathcal{X}^{-1}|P| (t/2)^{\delta}$. On the other hand, $\mu_\delta(\mathcal{X}\cap I)\leq C_\mathcal{X}(|P| t)^\delta$. If $|P|$ is sufficiently large (depending on $C_\mathcal{X}$ and $\delta$), we arrive at a contradiction, provided that $\delta<1$. In fact, more is true. If $P$ is not contained in $\mathcal{X}$ but merely meets $\mathcal{X}$ in many points, the argument still applies. Finally, the argument is not affected if we perturb the points of $P$ slightly. We say that $\mathcal{X}$ \emph{strongly avoids long arithmetic progressions}. Note that this argument relies on the fact that $\delta<1$.
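Indeed, comparing the two estimates makes the bound on $|P|$ explicit: $$ C_\mathcal{X}^{-1}|P|(t/2)^{\delta}\ \leq\ \mu_\delta(\mathcal{X}\cap I)\ \leq\ C_\mathcal{X}(|P|t)^{\delta} \quad\Longrightarrow\quad |P|^{1-\delta}\leq 2^{\delta}C_\mathcal{X}^{2}, $$ so $|P|\leq (2^{\delta}C_\mathcal{X}^{2})^{1/(1-\delta)}$; the exponent $1-\delta$ is positive exactly when $\delta<1$.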
If instead $\mathcal{X}$ is a subset of $[0,1]^n$ for $n>1$ (or a more general metric space) and $\delta\geq 1$ then the argument fails. In~\S\ref{higherDimRemark} we will discuss this phenomenon further. \subsubsection{Small doubling and additive structure} If $A\subset\mathbb{Z}$ is a finite set and $|A+A|<K|A|$, what can we say about $A$? There is a family of theorems in additive combinatorics that say that if $K$ is small then $A$ must have additive structure. The most famous of these is Fre{\u\i}man's theorem~\cite{freiman}, which says that $A$ must be contained in a generalized arithmetic progression. For our purposes however, we will obtain stronger results by using a variant of Fre{\u\i}man's theorem due to Sanders~\cite{Sanders}, which makes the weaker claim that $A$ has large intersection with a convex progression. When combined with the ideas discussed above, we conclude that if $\mathcal{X}$ is a regular set then $\mathcal{X}$ cannot have maximally large additive energy. Unfortunately, the sort of bounds that one obtains from this argument are very weak---far too weak to obtain the polynomial in $\alpha$ improvement of Theorem~\ref{ADRegularSmallAddEnergyThm}\footnote{However, if the polynomial Fre{\u\i}man-Ruzsa conjecture is proved then this theorem may be employed directly, and the subsequent steps would not be needed.}. \subsubsection{Multiscale analysis of Ahlfors-David regular sets} If $\mathcal{X}$ is a regular set, we can examine $\mathcal{X}$ at many intermediate scales between $\alpha$ and 1; there will be roughly $|\log \alpha|$ scales total. We use the arguments above to get a small gain in the scale-$\alpha$ additive energy of $\mathcal{X}$ at each intermediate scale. These gains will compound with each intermediate scale, and the total gain will be large enough to obtain Theorem~\ref{ADRegularSmallAddEnergyThm}. 
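\noindent\textbf{Remark}. The contrast between regular sets and arithmetic progressions is already visible on a toy example (not used in the proofs): the level-$k$ middle-thirds Cantor set, viewed as the $N=2^k$ integers $\sum_i \epsilon_i 3^i$ with $\epsilon_i\in\{0,1\}$, has additive energy exactly $6^k=N^{\log 6/\log 2}\approx N^{2.585}$, since adding two such integers in base $3$ produces no carries and the count factorizes over digits ($1+4+1=6$ choices of digit quadruples per position), while an arithmetic progression of the same cardinality has additive energy $(2N^3+N)/3\sim \tfrac23 N^3$. The following sketch (with function names of our choosing) verifies this:

```python
from itertools import product
from collections import Counter

def cantor_level(k):
    # level-k middle-thirds Cantor set rescaled to integers:
    # all sums of distinct powers of 3 with exponents < k
    return [sum(e * 3**i for i, e in enumerate(bits))
            for bits in product([0, 1], repeat=k)]

def additive_energy(points):
    # E(X) = #{(a,b,c,d) in X^4 : a + b = c + d} = sum_s r(s)^2,
    # where r(s) = #{(a,b) in X^2 : a + b = s}
    r = Counter(a + b for a in points for b in points)
    return sum(v * v for v in r.values())

k = 6
X = cantor_level(k)            # N = 2^6 = 64 points
P = list(range(len(X)))        # arithmetic progression of the same cardinality
print(additive_energy(X))      # 6^6 = 46656, i.e. N^{log 6 / log 2}
print(additive_energy(P))      # (2*64^3 + 64)//3 = 174784, i.e. ~ (2/3) N^3
```

The gap between the two exponents ($\approx 2.585$ versus $3$) is a discrete analogue of the gain $\beta_E$ in the additive energy bound for $\delta$-regular sets.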
\subsection{Ahlfors-David regular sets and additive structure} We start the proof of Theorem~\ref{ADRegularSmallAddEnergyThm} by exploring the implications of $\delta$-regularity for the additive structure of the set $\mathcal X$. \begin{defn} An \textbf{arithmetic progression} is a set of the form $$ \{a-\ell q, a-(\ell-1)q,\ldots, a, a+q,\ldots,a+\ell q\}\subset\mathbb{Z}, $$ where $a,q\in\mathbb{Z}$, $q\neq 0$, and $\ell\geq 0$. \end{defn} \begin{defn} \label{strongAvoid} Let $A\subset\mathbb{Z}$ be a finite set. We say that $A$ \textbf{strongly avoids long arithmetic progressions} (with parameter $S$) if for every $\epsilon>0$ there is a number $S(\epsilon)$ such that every arithmetic progression $P\subset\mathbb{Z}$ with $|P\cap A|\geq\epsilon|P|$ satisfies $|P|\leq S(\epsilon)$. \end{defn} If $\mathcal{X}\subset t\mathbb{Z}$ for $0<t<1$, then we say that $\mathcal{X}$ strongly avoids long arithmetic progressions if $t^{-1}\mathcal{X}$ does, i.e.~we simply re-scale $\mathcal{X}$ so that it lies on the integer lattice. \begin{defn} \label{convexProgression} A ($d$-dimensional) \textbf{centered convex progression} is a triple $(B,\Lambda,\varphi)$, where $B\subset{\mathbb R}^d$ is a centrally symmetric convex set, $\Lambda\subset{\mathbb R}^d$ is a lattice, and $\varphi : \Lambda\to\mathbb{Z}$ is a linear map. We will primarily be interested in the image $\varphi(B\cap\Lambda)$. Sets of this form are generalizations of arithmetic progressions. If $\varphi$ is injective on $B\cap\Lambda$, we say that $(B,\Lambda,\varphi)$ is \textbf{proper}. \end{defn} The next lemma shows that a $d$-dimensional centered convex progression $(B,\Lambda,\varphi)$ of \emph{cardinality} $N:=|\varphi(B\cap\Lambda)|$ can be embedded into a centered convex progression $(B',\Lambda',\varphi')$ whose \emph{size} $|B'\cap\Lambda'|$ is bounded by $N$ times a $d$-dependent constant. \begin{lemm}[Cardinality vs.
size]\label{containingProgressionInWellBehavedSet} Let $(B,\Lambda, \varphi)$ be a $d$-dimensional centered convex progression. Then there is some $d^\prime\leq d$ and a $d^\prime$-dimensional centered convex progression $(B^\prime,\Lambda^\prime,\varphi^\prime)$ with \begin{equation} |B^\prime\cap\Lambda^\prime|\leq 2^{(d+2)^2} |\varphi(B\cap\Lambda)|\label{BpSize} \end{equation} such that \begin{equation} \varphi(B\cap\Lambda)\subset \varphi^\prime(B^\prime\cap\Lambda^\prime).\label{BpInclusion} \end{equation} \end{lemm} \begin{proof} We recall \cite[Corollary 4.2]{Tao2}: \begin{prop} Let $(B,\Lambda,\varphi)$ be a $d$--dimensional centered convex progression. Then there exists a $d'$-dimensional \textbf{proper} centered convex progression $(B^{\prime\prime},\Lambda^{\prime\prime},\varphi^{\prime\prime})$ for some $d'\leq d$ such that we have the inclusions \begin{align} &\varphi(B\cap\Lambda)\subset \varphi^{\prime\prime}\big((2^{d-d'+1}B^{\prime\prime})\cap\Lambda^{\prime\prime}\big),\label{BBppInclusion1}\\ & \varphi\big((2B^{\prime\prime})\cap\Lambda^{\prime\prime}\big)\subset \varphi(B\cap\Lambda),\label{BBppInclusion2} \end{align} where $tB=\{x\in{\mathbb R}^d\colon t^{-1}x\in B\}$. \end{prop} Let $B^\prime:=2^{d-d'+1}B^{\prime\prime}$, $\Lambda^\prime:=\Lambda^{\prime\prime}$, and $\varphi^\prime:=\varphi^{\prime\prime}$. Then \eqref{BpInclusion} is satisfied. Since $(B^{\prime\prime},\Lambda^{\prime\prime},\varphi^{\prime\prime})$ is proper, we have \begin{equation} \label{sizeContainment} \big|(2^{-d+d^\prime-1}B^\prime)\cap\Lambda^\prime\big|=\big|\varphi^\prime\big((2^{-d+d^\prime-1}B^\prime)\cap\Lambda^\prime\big)\big|\leq |\varphi(B\cap\Lambda)|. \end{equation} By~\cite[Lemma 3.3]{Tao2}, we have \begin{equation}\label{ContainBpInDilate} |B^\prime\cap\Lambda^\prime| \leq 2^{(d+2)^2} \big|(2^{-d+d^\prime-1}B^\prime)\cap\Lambda^\prime\big|. \end{equation} Combining \eqref{sizeContainment} and \eqref{ContainBpInDilate} we obtain \eqref{BpSize}. 
\end{proof} We next recall \cite[Lemma 3.36]{Tao}: \begin{prop} Let $B\subset{\mathbb R}^d$ be a centrally symmetric convex set and let $\Lambda\subset{\mathbb R}^d$ be a lattice. Suppose that the ${\mathbb R}$ span of $B\cap\Lambda$ has dimension $r$. Then there exists an $r$--tuple $w=(w_1,\ldots,w_r)$ with $w_1,\ldots,w_r$ linearly independent vectors in $\Lambda$, and an $r$--tuple of integers $N=(N_1,\ldots,N_r)$ so that \begin{equation} [-N,N]\cdot w\ \subset\ B\cap\Lambda\ \subset\ [-r^{2r}N,r^{2r}N]\cdot w. \end{equation} Here \begin{equation*} [-N,N]\cdot w= \{ \ell_1w_1+\ldots+\ell_rw_r\colon -N_j\leq \ell_j\leq N_j,\ j=1,\ldots,r\}. \end{equation*} \end{prop} \begin{corr}\label{trappedInAGAP} Let $B\subset{\mathbb R}^d$ be a centrally symmetric convex set and let $\Lambda\subset{\mathbb R}^d$ be a lattice. Suppose that the ${\mathbb R}$ span of $B\cap\Lambda$ has dimension $r$. Then there exists an $r$--tuple $w=(w_1,\ldots,w_r)$ with $w_1,\ldots,w_r$ linearly independent vectors in $\Lambda$, and an $r$--tuple of integers $N=(N_1,\ldots,N_r)$ so that \begin{equation} B\cap\Lambda\ \subset\ [-N,N]\cdot w, \end{equation} and \begin{equation} \prod_{j=1}^r (2N_j+1) \leq 3^r r^{2r^2}|B\cap\Lambda|. \end{equation} \end{corr} \begin{lemm}[Arithmetic progressions inside convex progressions]\label{convexProgressionsContainArithmeticProgressions} Let $(B,\Lambda,\varphi)$ be a $d$--dimensional centered convex progression and let $$ A\subset \varphi(B\cap\Lambda),\quad |A|\geq\epsilon|\varphi(B\cap\Lambda)|. $$ Then there exists a (one dimensional) arithmetic progression $P\subset\mathbb{Z}$ with \begin{equation}\label{PisBig} |P|\geq |A|^{1/d} \end{equation} and \begin{equation}\label{PMeetsA} |P\cap A|\geq d^{-3(d+2)^2}\epsilon|P|. \end{equation} \end{lemm} \begin{proof} Let $(B^\prime, \Lambda^\prime,\varphi^\prime)$ be a centered convex progression obeying \eqref{BpSize} and \eqref{BpInclusion}. 
In particular, we have $A\subset\varphi^\prime(B^\prime\cap\Lambda^\prime)$ and \begin{equation} |B^\prime\cap\Lambda^\prime|\leq 2^{(d+2)^2}\epsilon^{-1}|A|. \end{equation} By Corollary \ref{trappedInAGAP}, there is a number $r\leq d$, an $r$--tuple $w=(w_1,\ldots,w_r)$ of linearly independent vectors in~$\Lambda'$, and an $r$--tuple of integers $N=(N_1,\ldots,N_r)$ so that \begin{equation} \label{varphiInvInBox} (\varphi^\prime)^{-1}(A)\cap B^\prime\cap\Lambda^\prime\ \subset\ B^\prime\cap\Lambda^\prime\ \subset\ [-N,N]\cdot w, \end{equation} and \begin{equation}\label{boundOnSizeOfNi} \prod_{j=1}^r (2N_j+1)\ \leq\ 3^r r^{2r^2}|B^\prime\cap\Lambda^\prime|\ \leq\ 2^{(d+3)^2} d^{2d^2}\epsilon^{-1} |A|. \end{equation} Now, \eqref{varphiInvInBox} implies that \begin{equation}\label{AInBox} A \subset \varphi'([-N,N]\cdot w). \end{equation} We can assume that $\varphi'(w_i)\neq 0$ for each index $i$ for which $N_i\neq 0$, since if $\varphi'(w_i)=0$ then we could set $N_i=0$ and both \eqref{boundOnSizeOfNi} and \eqref{AInBox} remain satisfied. By re-indexing if necessary, we can assume that $N_1\geq N_j$ for all $j=2,\ldots,r$. Partition the set $[-N,N]\cdot w$ into $\prod_{j=2}^r (2N_j+1)$ disjoint sets of the form \begin{equation*} \Big\{\ell w_1 + \sum_{j=2}^r x_jw_j \,\,\Big|\,\, -N_1 \leq \ell \leq N_1\Big\}. \end{equation*} Each of these sets has cardinality \begin{equation*} (2N_1+1) \geq \Big(\prod_{j=1}^r (2N_j+1)\Big)^{1/r}\geq|A|^{1/d}. \end{equation*} By \eqref{boundOnSizeOfNi}, \eqref{AInBox}, and pigeonholing, at least one of these sets $Y$ must satisfy \begin{equation} |Y\cap (\varphi^\prime)^{-1}(A)|\ \geq\ 2^{-(d+3)^2} d^{-2d^2}\epsilon|Y|\ \geq\ d^{-3(d+2)^2}\epsilon|Y|. \end{equation} For the last inequality we may assume that $d\geq 2$, since otherwise we may put $P:=\varphi(B\cap\Lambda)$. Let $P=\varphi^\prime(Y)$. By assumption $\varphi^\prime$ is injective on $Y$, so $P$ satisfies \eqref{PisBig} and~\eqref{PMeetsA}. 
\end{proof} The following proposition is a direct corollary of~\cite[Theorem 1.4]{Sanders}: \begin{prop}[Additive structure] \label{SchoenProp} Let $A\subset\mathbb{Z}$ and suppose $|A+A|\leq K|A|$. Then there is a $d$-dimensional centered convex progression $(B,\mathbb{Z}^d,\varphi)$ with $d\leq K_0 \log^4 K$ and an offset $x\in\mathbb{Z}$ so that \begin{equation} |\varphi(B\cap\mathbb{Z}^d)|\leq \exp[K_0 \log^4 K]\cdot |A| \end{equation} and \begin{equation} |(A-x)\cap \varphi(B\cap\mathbb{Z}^d)|\geq \exp[-K_0\log^4 K]\cdot |A|. \end{equation} Here $K_0>0$ is an absolute constant. \end{prop} Applying Lemma~\ref{convexProgressionsContainArithmeticProgressions}, we obtain the following corollary. \begin{corr} \label{corrAdditiveStrBigGAPs} Let $A\subset \mathbb{Z}$ and suppose that $|A+A|\leq K|A|$. Then there is an arithmetic progression $P$ so that \begin{equation} |P|\geq |A|^{(K_1 \log^4K)^{-1}}, \end{equation} and \begin{equation}\label{ACapPSize} |A\cap P|\geq e^{-K_1 \log^9 K}|P|. \end{equation} Here $K_1>0$ is an absolute constant. The term $\log^9$ in \eqref{ACapPSize} could be replaced with $\log^{8+o(1)}$, but we will not worry about these small optimizations. \end{corr} \begin{corr} \label{corrAPAP} Let $A\subset\mathbb{Z}$. Suppose that $A$ strongly avoids long arithmetic progressions (with parameter $S(\epsilon)$), and $|A+A|\leq K|A|$. Then \begin{equation}\label{corrBoundOnSizeA} |A|\leq \Big(S(e^{-K_1 \log^9 K})\Big)^{K_1\log^4 K}, \end{equation} where $K_1$ is the absolute constant from Corollary~\ref{corrAdditiveStrBigGAPs}. \end{corr} \begin{prop}[Ahlfors-David regular sets avoid arithmetic progressions] \label{ADStronglyAvoids} Let $\mathcal{X}\subset[0,1]$ be a $\delta$--regular set with regularity constant $C$ and let $0<\alpha<1$. Then $\mathcal{X}(\alpha)\cap \alpha\mathbb{Z}$ strongly avoids long arithmetic progressions (here $\mathcal{X}(\alpha)$ is the $\alpha$-neighborhood of $\mathcal{X}$).
In particular, we can take $S(\epsilon)=(10C^2\epsilon^{-1})^{\frac{1}{1-\delta}}.$ \end{prop} \begin{proof} Let $P\subset[0,1]\cap \alpha\mathbb{Z}$ be a proper arithmetic progression. In particular, $P$ has spacing $t\geq \alpha$. Assume that $|P\cap\mathcal{X}(\alpha)|>\epsilon|P|$. For each point $y\in P\cap \mathcal{X}(\alpha)$, the ball $B(y,2t)$ contains the ball of radius $t$ centered at some point of $\mathcal{X}$. By \eqref{defnADRegular} we have \begin{equation} \sum_{y\in P\cap\mathcal{X}(\alpha)}\mu_{\delta}(\mathcal{X}\cap B(y,2t))\geq C^{-1}\epsilon|P|t^{\delta}. \end{equation} On the other hand, the balls $B(y,2t)$ are at most five-fold overlapping, and $P$ is contained in an interval $J$ of length $(|P|+3)t\leq 2|P|t$ (unless $|P|<3$ in which case $|P|<S(\epsilon)$ automatically). By \eqref{defnADRegular} we have \begin{equation} \sum_{y\in P\cap\mathcal{X}(\alpha)}\mu_{\delta}(\mathcal{X}\cap B(y,2t))\leq 5\mu_{\delta}(\mathcal{X}\cap J)\leq 10C(t|P|)^{\delta}. \end{equation} We conclude that $|P|\leq (10C^2\epsilon^{-1})^{\frac{1}{1-\delta}}$. \end{proof} Combining Corollary~\ref{corrAPAP} and Proposition~\ref{ADStronglyAvoids}, we get \begin{corr}\label{ADRegularSetsInIntervals} Let $\mathcal{X}\subset[0,1]$ be a $\delta$--regular set with regularity constant $C$. Let $0<\alpha<1$ and $A\subset \mathcal{X}(\alpha)\cap\alpha\mathbb{Z}$, and suppose $|A+A|\leq K|A|$. Then \begin{equation} |A|\leq \exp\big[K_3(1-\delta)^{-1}(\log C)\log^{13}K\big] \end{equation} for some absolute constant $K_3$. \end{corr} \subsection{Ahlfors-David regular trees} The key to proving Theorem \ref{ADRegularSmallAddEnergyThm} will be to analyze $\mathcal{X}$ at many scales. Heuristically, if $\alpha=M^{-N}$ for $M,N$ positive integers, then $\mathcal{X}$ can naturally be analyzed at the $N$ scales $1,M^{-1},\ldots,M^{-N}$. On each scale, we will get a small gain in the scale-$\alpha$ additive energy of $\mathcal{X}$. 
In order to keep track of $\mathcal{X}(\alpha)$ at these different scales we will construct an object called a tree. \begin{defn} [Trees] A \textbf{(rooted) tree} of height $\ell\in\mathbb Z_{\geq 0}$ is a connected acyclic graph with a distinguished vertex (called the root). Once we have specified the root of a tree, each vertex has a well-defined height (i.e.~its distance from the root), and we say that one vertex $v$ is a parent of another vertex $v^\prime$ if $v$ and $v^\prime$ are adjacent and $v$ has smaller height. More formally, a (rooted) tree is a quadruple $(V,H,p,\ell)$, where \begin{itemize} \item $V$ is a finite set of \textbf{vertices}; \item $H:V\to \{0,\dots,\ell\}$ is the \textbf{height} function, and $H^{-1}(0)$ consists of a single vertex called the \textbf{root}; \item $p:V\setminus H^{-1}(0)\to V$ is the \textbf{parent} function, and $p(H^{-1}(t))\subset H^{-1}(t-1)$ for $t=1,\dots,\ell$. \end{itemize} We denote by $\mathcal {V}_t(T):=H^{-1}(t)$ the set of vertices of height $t$. \end{defn} Let $T$ be a tree of height $\ell$. The set of \emph{leaves} of $T$ is defined as $$ \mathcal L(T)=H^{-1}(\ell)\ \subset\ V. $$ For each vertex $v\in V$, we say that $v'\in V$ is a \emph{child} of $v$, if $p(v')=v$. More generally, we say that $v'$ is \emph{below} $v$, and write $v'\prec v$, if there is a sequence of vertices $v_1,\ldots,v_m$ so that $v_1=v$, $v_m=v'$, and $v_{i+1}$ is a child of $v_i$ for each $i=1,\ldots,m-1$. If $T$ is a tree and $v\in V$, we define $T_v$ to be the \emph{subtree} of $T$ rooted at $v$. This is a tree of height $\ell-H(v)$. Its vertices are the set $\{v'\in V\mid v'\prec v\}$. The height function is $v'\mapsto H(v')-H(v)$, and the parent function is inherited from the original tree. \begin{defn}[Regular trees] Let $\ell,B,C\geq 0$. 
We say a tree $T$ is an ``Ahlfors-David regular tree of height~$\ell$, branching~$B$, and regularity constant~$C$'' if $T$ is a tree of height~$\ell$ and for each $v\in T$, \begin{equation}\label{ADregularTreeEqn} C^{-1} B^{\ell-H(v)}\leq |\mathcal{L}(T_v)|\leq C B^{\ell-H(v)}, \end{equation} where $T_v$ is the sub-tree of $T$ rooted at $v$. \end{defn} \noindent\textbf{Remark}. If $T$ is an Ahlfors-David regular tree with height $\ell$, branching $B$, and regularity constant $C$, then each vertex of $T$ has between $C^{-2} B$ and $C^2B$ children. However, much more is true: if a vertex of $T$ has (relatively) few children, then \emph{its} children must have many children, and vice versa. Thus the tree $T$ might not be perfectly balanced, but it cannot become extremely unbalanced either. \begin{lemm}\label{subTreeIsADReg} Let $T$ be an Ahlfors-David regular tree with height $\ell$, branching $B$, and regularity constant $C$. Let $v\in T$. Then $T_v$ (the sub-tree rooted at $v$) is an Ahlfors-David regular tree of height $\ell-H(v)$, branching $B$, and regularity constant $C$. \end{lemm} If $T$ is a tree of height $\ell$ and $j\in\mathbb N$, then we can define the $j$-th \emph{power} of $T$, denoted~$T^j$, as the following tree of height $\ell$: \begin{itemize} \item the vertices of $T^j$ are ordered $j$-tuples $(v_1,\dots,v_j)$, where $v_1,\dots,v_j$ are vertices of the tree $T$ of the same height; \item the height of a vertex $(v_1,\dots,v_j)$ is equal to the height of each of $v_i$; \item the parent of a vertex $(v_1,\dots,v_j)$ is equal to $(p(v_1),\dots,p(v_j))$, where $p$ is the parent function of the original tree. \end{itemize} If $T$ is an Ahlfors-David regular tree with height $\ell$, branching $B$, and regularity constant $C$, then $T^j$ is an Ahlfors-David regular tree with height $\ell$, branching $B^j$, and regularity constant $C^j$.
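As a sanity check on these definitions, here is a small Python sketch (illustrative only; the tree below is a made-up example, not one produced by the discretization of the next subsection). It computes the smallest regularity constant of a finite tree for a given branching $B$, and verifies on this example the closing claim that the power tree is regular with branching $B^2$ and constant at most $C^2$.

```python
from itertools import product

def leaves_below(tree, v):
    """Number of leaves in the subtree rooted at v (tree: vertex -> children)."""
    kids = tree.get(v, [])
    return 1 if not kids else sum(leaves_below(tree, c) for c in kids)

def heights(tree, root):
    """Height (distance from the root) of every vertex."""
    h, stack = {root: 0}, [root]
    while stack:
        v = stack.pop()
        for c in tree.get(v, []):
            h[c] = h[v] + 1
            stack.append(c)
    return h

def regularity_constant(tree, root, B, ell):
    """Smallest C with C^-1 B^(ell-H(v)) <= |L(T_v)| <= C B^(ell-H(v)) for all v."""
    h = heights(tree, root)
    C = 1.0
    for v in h:
        ratio = leaves_below(tree, v) / B ** (ell - h[v])
        C = max(C, ratio, 1 / ratio)
    return C

# A made-up height-2 tree whose vertices have 2 or 3 children.
tree = {"r": ["a", "b"], "a": ["a1", "a2", "a3"], "b": ["b1", "b2"]}
C = regularity_constant(tree, "r", B=2, ell=2)   # C = 1.5, attained at vertex "a"

# Power tree T^2: vertices are same-height pairs and leaf counts multiply,
# so the regularity constant for branching B^2 = 4 is at most C^2.
h = heights(tree, "r")
C2 = 1.0
for u, v in product(h, h):
    if h[u] == h[v]:
        ratio = leaves_below(tree, u) * leaves_below(tree, v) / 4 ** (2 - h[u])
        C2 = max(C2, ratio, 1 / ratio)
assert C2 <= C ** 2
```

The inequality $C(T^2)\le C(T)^2$ is attained here by pairing the worst vertex with itself.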
\subsection{Discretization} \label{s:discretization} The trees discussed in the previous section are useful for describing the multi-scale structure of $\delta$--regular sets. Let $\mathcal{X}\subset[0,1]$ be a $\delta$--regular set with regularity constant $C$. Define \begin{equation}\label{defnC1} C_1:=(10C^2)^{1\over 1-\delta}. \end{equation} Let $M,N$ be positive integers; we will fix $M$ and study asymptotic behavior as $N\to\infty$. We will describe a process that divides $[0,1)$ into sub-intervals of length roughly $M^{-j},\ j=0,\ldots,N,$ and assembles these intervals into a tree. For each $j=0,\ldots,N$, divide $[0,1)$ into $M^j$ intervals of the form $[iM^{-j}, (i+1)M^{-j})$. If $I$ is an interval of this form, we say $I$ is \emph{empty} if $I\cap \mathcal{X}=\emptyset$. Otherwise $I$ is \emph{non-empty}. If several non-empty intervals are adjacent, merge them into a single (longer) interval. By Proposition~\ref{ADStronglyAvoids} with $\alpha:=M^{-j}$ and $\varepsilon:=1$, at most $C_1$ intervals can be merged into a single interval in this fashion. Let $\mathcal{I}_j$ be the set of non-empty intervals obtained after the merger process is complete. Each interval in $\mathcal{I}_j$ has length between $M^{-j}$ and $C_1M^{-j}$. Furthermore, if $I$ is an interval in $\mathcal{I}_j$, then there is a gap of size $\geq M^{-j}$ on either side of $I$ that is disjoint from $\mathcal{X}$. \begin{lemm}\label{uniqueParent} Let $I\in \mathcal{I}_j$. Then there is a unique interval $\tilde I\in\mathcal{I}_{j-1}$ that intersects $I$. Furthermore, $I\subset\tilde I$. \end{lemm} \begin{proof} First, note that $I\subset\bigcup_{\tilde I\in \mathcal{I}_{j-1}}\tilde I$. Thus it suffices to show that $I$ intersects at most one interval from $\mathcal{I}_{j-1}$. Suppose that $I$ intersects two intervals, $\tilde I=[\tilde i_0 M^{1-j},\ \tilde i_1 M^{1-j})$ and $\tilde I^\prime=[\tilde i_0^\prime M^{1-j},\ \tilde i_1^\prime M^{1-j})$ from $\mathcal{I}_{j-1}$.
If we write $I=[i_0M^{-j},i_1M^{-j})$, then $i_0 < M\tilde i_1\leq M\tilde i_0^\prime< i_1$. Since no two intervals in $\mathcal{I}_{j-1}$ can be adjacent, there must be some interval of the form $[i^\prime M^{1-j},(i^\prime+1)M^{1-j})$ that is disjoint from $\mathcal{X}$, with $\tilde i_1 \leq i^\prime < i^\prime+1 \leq \tilde i_0^\prime$. This implies that $[i^\prime M^{1-j},(i^\prime+1)M^{1-j})\subset I$, and in particular, $[(Mi^\prime) M^{-j},(Mi^\prime+1)M^{-j})\subset I.$ But this implies that $[(Mi^\prime) M^{-j},(Mi^\prime+1)M^{-j})\cap I\cap\mathcal{X}=\emptyset,$ which is a contradiction---by the construction of the intervals in $\mathcal{I}_j$, every sub-interval of $I$ of the form $[iM^{-j},(i+1)M^{-j})$ must intersect $\mathcal{X}$. We conclude that there is at most one interval from $\mathcal{I}_{j-1}$ that intersects $I$. \end{proof} We now construct the tree $T_{\mathcal{X};M,N}$ as follows. The root vertex of $T_{\mathcal{X};M,N}$ corresponds to the interval $[0,1)\in \mathcal I_0$. For each $j=1,\ldots,N$, the vertices of $T_{\mathcal{X};M,N}$ of height $j$ correspond to the intervals in $\mathcal{I}_j$. The parent of an interval in $\mathcal I_j$ is the unique interval in $\mathcal I_{j-1}$ containing that interval. If $v$ is a vertex of $T_{\mathcal{X};M,N}$, let $I_v$ be the corresponding interval. Note that $v'\prec v$ if and only if $I_{v^\prime}\subset I_v$. See Figure~\ref{f:thetree}. \begin{figure} \includegraphics{hgap-11.pdf} \caption{The discretization of the middle third Cantor set (pictured in red) with $M=4$. The intervals of the discretization for $j=1,\dots, 4$ are pictured in blue and form a tree.} \label{f:thetree} \end{figure} \begin{lemm} \label{l:treecons} Let $\mathcal{X}$ be a $\delta$--regular set with regularity constant $C$.
Then $T_{\mathcal{X};M,N}$ is an Ahlfors-David regular tree with height $N$, branching $M^{\delta}$, and regularity constant \begin{equation}\label{defnC2} C_2:=C^2 C_1^\delta=10^{\delta\over 1-\delta} C^{\frac{2}{1-\delta}}. \end{equation} \end{lemm} \begin{proof} Let $v$ be a vertex of $T=T_{\mathcal{X};M,N}$ and let $I_v\subset[0,1)$ be the corresponding interval. Choose any $x_0\in I_v\cap \mathcal X$. Since $I_v$ has length no more than $C_1M^{-H(v)}$, it is contained in the ball $B(x_0,C_1M^{-H(v)})$. On the other hand,\footnote{This is one of the places where we use the merging of consecutive intervals; otherwise, one of the intervals may intersect $\mathcal X$ at an endpoint and the resulting tree may not be regular.} $B(x_0,M^{-H(v)})\cap \mathcal X\subset I_v$. Since $\mathcal X$ is $\delta$-regular, we have \begin{equation} C^{-1} M^{-H(v)\delta}\leq \mu_{\delta}(\mathcal{X}\cap I_v)\leq C(C_1 M^{-H(v)})^{\delta}. \end{equation} For each leaf $v^\prime\in \mathcal{L}(T)$ with $v^\prime\prec v$, let $I_{v^\prime}$ be the corresponding interval. Again, we have \begin{equation} C^{-1} M^{-N\delta}\leq \mu_{\delta}(\mathcal{X}\cap I_{v^\prime})\leq C(C_1 M^{-N})^{\delta}. \end{equation} On the other hand, by Lemma \ref{uniqueParent} we have \begin{equation*} \mu_{\delta}(\mathcal{X}\cap I_v)=\sum_{v^\prime\in \mathcal{L}(T_v)}\mu_{\delta}(\mathcal{X}\cap I_{v^\prime}). \end{equation*} It follows that \begin{equation*} C_2^{-1}(M^\delta)^{N-H(v)}\leq |\mathcal{L}(T_v)|\leq C_2(M^\delta)^{N-H(v)},\ \quad C_2 = C^2 C_1^\delta. \qedhere \end{equation*} \end{proof} We will mainly be interested in the tree $T_{\mathcal{X};M,N}^3$. A vertex of this tree is a triple of intervals $(I_1,I_2,I_3)$ where each interval $I_i$ meets $\mathcal{X}$ and all three intervals lie in $\mathcal I_j$ for some $j$ (and thus they have comparable lengths).
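The construction above is easy to run. The following Python sketch (an illustration for this discussion, not code from the paper; the helper names are ours) discretizes the middle-third Cantor set with $M=4$, as in Figure~\ref{f:thetree}: it marks the nonempty cells at each level, merges adjacent nonempty cells, and checks the unique-parent property of Lemma~\ref{uniqueParent}.

```python
from fractions import Fraction

def hits_cantor(l, r):
    """Exact test: does the half-open interval [l, r) meet the Cantor set?"""
    if r <= 0 or l > 1:
        return False
    if l <= 0 or r > 1:   # then 0 or 1 lies in [l, r); both are Cantor points
        return True
    # C = C/3 union (C/3 + 2/3); rescale each third back to [0, 1].
    return hits_cantor(3 * l, 3 * r) or hits_cantor(3 * l - 2, 3 * r - 2)

def discretize(M, N):
    """Levels I_0, ..., I_N: nonempty length-M^-j cells merged into maximal runs."""
    levels = []
    for j in range(N + 1):
        w = Fraction(1, M ** j)
        nonempty = [i for i in range(M ** j) if hits_cantor(i * w, (i + 1) * w)]
        merged, run = [], [nonempty[0]]
        for i in nonempty[1:]:
            if i == run[-1] + 1:
                run.append(i)          # adjacent nonempty cell: extend the run
            else:
                merged.append((run[0] * w, (run[-1] + 1) * w))
                run = [i]
        merged.append((run[0] * w, (run[-1] + 1) * w))
        levels.append(merged)
    return levels

levels = discretize(4, 4)
# Unique-parent property: each merged interval lies inside exactly one
# merged interval of the previous level.
for j in range(1, len(levels)):
    for (l, r) in levels[j]:
        assert sum(1 for (L, R) in levels[j - 1] if L <= l and r <= R) == 1
```

Note that at $j=1$ all four quarter-cells meet the Cantor set, so they merge into the single interval $[0,1)$; nontrivial branching starts at $j=2$.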
\subsection{Additive structure and pruning the tree} \label{AddEnergTreePrunSec} Let $\mathcal{X}$ be a $\delta$--regular set with regularity constant $C$ and let $T_{\mathcal{X};M,N}$ be the Ahlfors-David regular tree described in~\S\ref{s:discretization}. Let $v=(I_1,I_2,I_3)$ be a vertex of $T_{\mathcal{X};M,N}^3$. Consider the interval $$ I_1-I_2+I_3:=\{x_1-x_2+x_3\mid x_1\in I_1,\ x_2\in I_2,\ x_3\in I_3\}. $$ We say that $v$ \emph{misses} $\mathcal{X}$ if $(I_1-I_2+I_3)\cap \mathcal{X}(M^{-H(v)})=\emptyset$. Otherwise $v$ \emph{hits} $\mathcal{X}$. The following result shows that if $M$ is sufficiently large then we have an improvement in additive energy on each level of the tree. \begin{prop}\label{missProp} Let $\mathcal{X}$ be a $\delta$--regular set with regularity constant $C$, and let $T_{\mathcal{X};M,N}$ be the Ahlfors-David regular tree described above. Assume that $M\geq M_0$, where \begin{equation}\label{defnC3} M_0:=\exp\big[K_5\delta^{-1}(1-\delta)^{-14}(1+\log^{14}C)\big], \end{equation} and $K_5$ is a large absolute constant. Let $v\in T_{\mathcal{X};M,N}^3$ be a vertex that is not a leaf. Then at least one of the children of $v$ misses $\mathcal{X}$. \end{prop} \begin{proof} Suppose $M\geq M_0$. Let $v\in T_{\mathcal{X};M,N}^3$ be a vertex that is not a leaf and suppose all of the children of $v$ hit $\mathcal{X}$; we will obtain a contradiction. Write $v=(v_1,v_2,v_3)$ and let $I_1,I_2,I_3\in \mathcal{I}_{H(v)}$ be the corresponding triple of intervals. Let $\alpha_1 = M^{-H(v)-1}$ and define $$ \begin{aligned} A_i&:=I_i \cap \mathcal{X}(\alpha_1) \cap \alpha_1\mathbb{Z},\quad i=1,2,3;\\ A_4&:=(I_1-I_2+I_3) \cap \mathcal{X}(4C_1\alpha_1) \cap \alpha_1\mathbb{Z}. \end{aligned} $$ Here $C_1$ is defined in~\eqref{defnC1}. \begin{lemm} If every child of $v$ hits $\mathcal{X}$, then \begin{equation} \label{e:ally} A_1-A_2+A_3\ \subset\ A_4. \end{equation} \end{lemm} \begin{proof} Take $a_i\in A_i$, $i=1,2,3$. 
Then $|a_i-b_i|\leq\alpha_1$ for some $b_i\in\mathcal{X}$; since $I_i$ is surrounded on either side by intervals of length $M\alpha_1$ which do not intersect $\mathcal{X}$, we have $b_i\in I_i\cap\mathcal{X}$. Then $b_i$ lies in some child $I_i'$ of $I_i$, which is an interval of size no more than $C_1\alpha_1$. We have $(I_1'-I_2'+I_3')\cap\mathcal{X}(\alpha_1)\neq\emptyset$ and thus $b_1-b_2+b_3\in\mathcal{X}((3C_1+1)\alpha_1)$. Then $a_1-a_2+a_3\in \mathcal{X}(4C_1\alpha_1)$ and~\eqref{e:ally} follows. \end{proof} We next claim that \begin{equation} \label{e:zuotian} \begin{aligned} |A_i|&\geq (2C^2)^{-1}M^\delta,\quad i=1,2,3;\\ |A_4|&\leq 44C^2C_1M^\delta. \end{aligned} \end{equation} These inequalities follow immediately from~\eqref{defnADRegular} and the following observations: \begin{itemize} \item The balls $B(a,\alpha_1)$ centered at $a\in A_i$ cover $\mathcal X\cap I_i$, and $\mathcal X\cap I_i$ contains the intersection of $\mathcal X$ with a ball of radius $M\alpha_1$ centered at a point of $\mathcal X$. \item For each $a\in A_4$, the ball $B(a,5C_1\alpha_1)$ contains a ball of radius $C_1\alpha_1$ centered at a point of $\mathcal X$, the balls $B(a,5C_1\alpha_1)$ for different $a$ overlap with multiplicity at most~$11C_1$, and their union is contained in an interval of length $(3M+10)C_1\alpha_1\leq 4MC_1\alpha_1$. \end{itemize} We now use the Ruzsa sum inequality~\cite[(4.6)]{Ruzsa} (see also Petridis~\cite{Petridis}): $$ |A+C|\leq {|A+B|\cdot |B+C|\over |B|} $$ valid for nonempty finite sets $A,B,C\subset\mathbb Z$. Putting $A:=A_1,B:=A_3,C:=A_1$ and using~\eqref{e:zuotian} and the fact that $|A_1+A_3|\leq |A_4|$ we obtain $$ |A_1+A_1|\leq {|A_4|^2\over |A_1|\cdot |A_3|}\cdot |A_1|\leq C_3|A_1|, $$ where $$ C_3:=7744C^8C_1^2\leq (10C^2)^{6\over 1-\delta}. $$ Apply Corollary~\ref{ADRegularSetsInIntervals} to the set $A_1$ with $K:=C_3$.
We conclude that \begin{equation} |A_1|\leq \exp\big[K_4(1-\delta)^{-1}(\log C)\big((1-\delta)^{-13}(1+\log C)^{13}\big)\big], \end{equation} where $K_4$ is an absolute constant. If $M\geq M_0$, we obtain a contradiction with the first bound in~\eqref{e:zuotian}. \end{proof} \subsection{Analyzing the pruned tree} \label{s:pruned} To take advantage of Proposition~\ref{missProp}, we prove the following general fact about pruned subtrees of Ahlfors--David regular trees. For two trees $T=(V,H,p,\ell),T'=(V',H',p',\ell)$ of the same height, we say that $T'$ is a \emph{subtree} of $T$ if $V'\subset V$ and $H',p'$ are the restrictions of $H,p$ to $V'$. We say that $T'$ is a \emph{pruned} subtree if for each $v\in V'$ which is not a leaf, there exists a child of $v$ in $T$ which does not lie in $V'$. See Figure~\ref{f:pruned}. \begin{lemm}[Pruned trees have few leaves]\label{treePrudingLem} Let $T$ be an Ahlfors-David regular tree with height $\ell$, branching $B$, and regularity constant $C$. Let $T^\prime$ be a pruned subtree of~$T$. Then \begin{equation} |\mathcal{L}(T^\prime)|\leq\big(1-C^{-2}B^{-1}\big)^{\ell} |\mathcal{L}(T)|. \end{equation} \end{lemm} \begin{proof} We will prove the lemma by induction on $\ell$, the height of the tree. If $\ell=0$ the result is trivial. Now assume the result has been proved for all Ahlfors-David regular trees with height $\ell-1$, branching $B$, and regularity constant $C$. Let $T$ be an Ahlfors-David regular tree with height $\ell$, branching $B$, and regularity constant $C$, and let $T'$ be a pruned subtree of $T$. Then at least one of the vertices in $\mathcal{V}_1(T)$ is missing from~$T^\prime$. We call this vertex $v^*$. By Lemma \ref{subTreeIsADReg}, each of the trees $\{T_v\colon v\in \mathcal{V}_1(T^\prime)\}$ is a regular tree with height $\ell-1$, branching $B$, and regularity constant $C$.
Thus we can apply the induction hypothesis to each such tree to obtain \begin{figure} \includegraphics{hgap-12.pdf} \caption{An Ahlfors-David regular tree and a pruned subtree (in red).} \label{f:pruned} \end{figure} \begin{equation*} \begin{split} |\mathcal L(T^\prime)|&=\sum_{v\in \mathcal{V}_1(T^\prime)}|\mathcal{L}(T_v^\prime)|\\ &\leq (1-C^{-2}B^{-1})^{\ell-1} \sum_{v\in \mathcal{V}_1(T^\prime)}|\mathcal{L}(T_v)|\\ &\leq (1-C^{-2}B^{-1})^{\ell-1}\Big(|\mathcal{L}(T)|-|\mathcal{L}(T_{v^*})|\Big). \end{split} \end{equation*} By \eqref{ADregularTreeEqn} we have \begin{equation} |\mathcal{L}(T_{v^*})|\geq C^{-2}B^{-1} |\mathcal{L}(T)|. \end{equation} Thus \begin{equation*} |\mathcal{L}(T)|-|\mathcal{L}(T_{v^*})|\leq (1-C^{-2}B^{-1})\cdot|\mathcal{L}(T)|, \end{equation*} so \begin{equation*} |\mathcal{L}(T^\prime)|\leq (1-C^{-2}B^{-1})^{\ell}|\mathcal{L}(T)|.\qedhere \end{equation*} \end{proof} \subsection{Finishing the proof of Theorem~\ref{ADRegularSmallAddEnergyThm}} Let $\mathcal{X}$ be a $\delta$--regular set with regularity constant $C$ and let $\alpha>0$. Let $$ M=\lceil M_0 \rceil,\quad N=\bigg\lfloor {\log(1/\alpha)\over \log M}\bigg\rfloor, $$ where $M_0$ is defined in \eqref{defnC3}, and let $T_{\mathcal{X};M,N}$ be the associated Ahlfors-David regular tree constructed in~\S\ref{s:discretization}. Let $T^\prime$ be the subtree of $T_{\mathcal{X};M,N}^3$ which consists of triples of intervals that hit $\mathcal X$ (see~\S\ref{AddEnergTreePrunSec}). By Proposition~\ref{missProp}, $T'$ is pruned in the sense of~\S\ref{s:pruned}. By Lemmas~\ref{subTreeIsADReg} and~\ref{treePrudingLem} and the inequality $(1-t)\leq e^{-t}$, we have \begin{equation} \label{boundOnPrunedTree} \begin{split} |\mathcal L(T^\prime)|&\leq\Big(1-\big(C_2^6M^{3\delta}\big)^{-1}\Big)^{N}C_2^3M^{3\delta N} \\ &\leq C_2^3 M^{N(3\delta-\rho)}, \quad \rho=(C_2^6 M^{3\delta}\log M)^{-1}. 
\end{split} \end{equation} Expanding the definitions of~$C_2$ and~$M_0$ from~\eqref{defnC2} and~\eqref{defnC3}, we have \begin{equation*} \rho\geq \delta\exp\big[-K_6(1-\delta)^{-14}(1+\log^{14}C)\big], \end{equation*} where $K_6$ is a large absolute constant. Now, assume that $(x_1,x_2,x_3)\in\mathcal{X}^3$ satisfies $x_1-x_2+x_3\in \mathcal{X}(\alpha)$. Then $x_i\in I_i$, $i=1,2,3$, where $I_i$ are intervals corresponding to some leaves $v_i$ of $T_{\mathcal{X};M,N}$. Moreover, since $\alpha\leq M^{-N}$, the vertex $(v_1,v_2,v_3)$ hits $\mathcal{X}$ and thus is a leaf of $T'$. By~\eqref{defnADRegular}, for each leaf $(v_1,v_2,v_3)$ of $T'$ we have $$ \mu_\delta^4\big(\big\{(x_1,x_2,x_3,x_4)\in \mathcal{X}^4\mid |x_1-x_2+x_3-x_4|\leq \alpha,\ x_i\in I_{v_i},\ i=1,2,3\big\}\big)\leq C_4\alpha^{4\delta} $$ for some constant $C_4$ depending on $C$ and $\delta$ but not on $\alpha$. Combining this with~\eqref{boundOnPrunedTree} and recalling~\eqref{e:AESpecial}, we conclude that $$ \mathcal E_A(\mathcal X,\mu_\delta,\alpha)\leq C_4C_2^3\alpha^{\delta+\rho}\leq C_4C_2^3\alpha^{\delta+\beta_\mathcal{X}} $$ where $\beta_\mathcal{X}=\delta\exp\big[-\mathbf{K}(1-\delta)^{-14}(1+\log^{14}C)\big]$ for $\mathbf{K}$ a large absolute constant. This finishes the proof of Theorem~\ref{ADRegularSmallAddEnergyThm}. \subsection{Further remarks}\label{furtherRemarks} \subsubsection{A discretized additive energy bound} We have chosen to phrase Theorem \ref{ADRegularSmallAddEnergyThm} in the language of Ahlfors-David regular sets. However, we only examine these sets at scales between $\alpha$ and 1. Our proof of Theorem \ref{ADRegularSmallAddEnergyThm} also gives the following variant: \begin{prop}\label{VarADRegularSmallAddEnergyThm} Let $\mathcal{X}\subset[0,1]$ be a union of intervals of length $\alpha$.
Suppose that for all $\alpha\leq r\leq 1$ and all $x\in \mathcal{X}$ we have the bounds \begin{equation} C_{\mathcal{X}}^{-1} r^{\delta}\alpha^{1-\delta}\leq \mu_L(\mathcal{X}\cap B(x,r))\leq C_{\mathcal{X}} r^{\delta}\alpha^{1-\delta}, \end{equation} where $\mu_L$ is the one-dimensional Lebesgue measure. Then \begin{equation} \mu_L^4\big(\{(x_1,x_2,x_3,x_4)\in\mathcal{X}^4\colon |x_1-x_2+x_3-x_4|<\alpha\}\big)\leq\widetilde C\alpha^{4-3\delta+\beta_{\mathcal{X}}}, \end{equation} where $\beta_{\mathcal{X}}$ is as given in \eqref{finalEpsBd} and $\widetilde C$ is some constant. \end{prop} \subsubsection{Higher dimensions} \label{higherDimRemark} Most of the arguments in this section extend to higher dimensions without difficulty. The real issue is extending Theorem \ref{ADRegularSmallAddEnergyThm} to $\delta\geq 1$. This may be challenging because Proposition \ref{ADStronglyAvoids} may not be true if $\delta\geq 1$. For example, a unit line segment in $[0,1]^2$ is a 1--regular set, but it contains arbitrarily long arithmetic progressions. A unit line segment in $[0,1]^2$ also has maximal additive energy, so no variant of Theorem \ref{ADRegularSmallAddEnergyThm} can hope to hold for that example. If $\delta\geq 1$, it is not clear what the proper hypotheses for the theorem should be. One possible avenue is the following. In \cite{bond}, the authors study $1$--regular sets that satisfy an additional property called $(\rho,C_1)$--unrectifiability. If a 1--regular set is $(\rho,C_1)$--unrectifiable, then for all rectangles $R$ of dimensions $r_1\geq r_2$, we have $\mu_1(\mathcal{X}\cap R)\leq C r_1^{1-\rho}r_2^\rho$. If $\rho>0$ then sets with this property strongly avoid long arithmetic progressions. It is possible that this property can be generalized to other $\delta$--regular sets for $\delta>1$. \subsubsection{Improving the bounds on $\beta_\mathcal{X}$} It is likely that the bound on $\beta_\mathcal{X}$ from Theorem \ref{ADRegularSmallAddEnergyThm} can be substantially improved.
A modest improvement would be to replace the bound in \eqref{finalEpsBd} by $C_{\mathcal{X}}^{-K/(1-\delta)^K}$ where $K$ is an absolute constant. However, the following example shows that the constant $\beta_\mathcal{X}$ must go to 0 as $C\to\infty$. Let $C>1$ be an integer and let \begin{equation} \mathcal{X}=\{x\in[0,1]\colon\ x\ \textrm{has a base } C\ \textrm{expansion of the form}\ x=0.a_10a_30a_5\ldots\}. \end{equation} Then $\mathcal{X}$ is a $1/2$--regular set with regularity constant $C$. On the other hand, \begin{equation*} \mathcal{E}_A(\mathcal{X},\alpha)\geq \alpha^{1/2+\frac{1}{10\log C}}. \end{equation*} A similar example can be constructed for other values of $\delta$. This shows that in Theorem~\ref{ADRegularSmallAddEnergyThm} we cannot take $\beta_\mathcal{X}>0$ to be independent of $C$.
\section{Introduction} Recently, polar codes~\cite{arikan2009channel} have received significant attention due to their capability to achieve the capacity of binary-input memoryless symmetric channels with low-complexity encoding and decoding schemes. Successive cancellation (SC)~\cite{arikan2009channel}, list successive cancellation (List-SC)~\cite{tal2011list} and belief propagation (BP)~\cite{xu2015xj} are the three most commonly proposed decoding schemes. Among these, the SC decoder is the most promising for practical hardware implementation because of its low $O(N\log N)$ complexity, where $N$ is the length of the code. Thus, many relevant hardware designs have been proposed~\cite{leroux2011hardware}~\cite{leroux2013semi}~\cite{mishra2012successive}. However, algorithmically, the SC decoder suffers from high latency. Typically, for the conventional SC decoder, the latency ($2N-2$) increases linearly with the code length. This is a significant challenge since polar codes work well only at very long code lengths. Much work has been done to reduce the latency of SC decoders from both the hardware and the algorithm side. In~\cite{zhang2012reduced}, a pre-computation method is used to reduce the decoding latency from $2N-2$ to $N-1$. In~\cite{yuan2014low}, three approaches, a dedicated 2-bit decoder for the last stage of SC decoding, overlapped scheduling and look-ahead techniques, are applied, which eventually results in a latency of $3N/4-1$. In~\cite{alamdar2011simplified} and~\cite{sarkis2014fast}, by observing the tree structure of SC decoding, certain patterns of constituent codes are identified. These constituent codes can feed back the hard-decision information immediately without traversal, which can significantly reduce the decoding latency of some polar codes with a given architecture. This approach is referred to as the fast-SSC decoder. Moreover, a processor-array based structure for FPGA implementation is also proposed in~\cite{sarkis2014fast}.
In this paper, a novel low-latency hardware architecture for polar code decoding using the fast-SSC algorithm is presented. Although the fast-SSC algorithm naturally lacks flexibility for multiple rates, the proposed design overcomes this disadvantage by utilizing the similarity between the decoding processes of fast constituent polar codes and regular polar codes. The corresponding scheduling plan is presented in this paper. We also provide the design details of the $processing~unit$ (PU), which is compatible with both regular and constituent polar codes. Comparisons with other commonly discussed SC decoders are given. For example, compared with the 2b-SC-Precomputation decoder, to the best of our knowledge the fastest ASIC SC decoder design, the proposed design achieves at least a $60\%$ latency reduction for polar codes of length $N=1024$. An analysis of the latency reduction with respect to the code rate is also presented. It shows that the proposed architecture yields a significant latency reduction, especially at high code rates (code rate $>$ 0.8). This is very promising for modern communication and data storage systems, where high-rate codes are desired. Synthesis results using the $Nangate~FreePDK~45nm$ process show that the proposed design can reach throughputs of up to $5.81~Gbps$ and $2.01~Gbps$ for the $(1024,870)$ and $(1024,512)$ polar codes, respectively. This paper is organized as follows. The relevant background is reviewed in section~\ref{Background}. Then, the hardware implementation of the proposed system is described in section~\ref{Hardware Implementation}. After that, the synthesis results and relevant comparisons are discussed in section~\ref{Hardware Analysis of Comparison}. Finally, the conclusion is given in section~\ref{Conclusion}.
\section{Background} \label{Background} \subsection{Polar Code and Tree Analysis of SC Decoding} \label{Polar Code} \begin{figure}[!t] \centering \subfloat[]{\includegraphics[width=1.5in]{encoder.eps}\label{encoder}} \hfil \subfloat[]{\includegraphics[width=0.75in]{tree.eps}\label{tree}} \hfil \caption{\protect\subref{encoder} Encoder of $(8,4)$ polar code, \protect\subref{tree} Tree presentation of $(8,4)$ SC decoder} \label{polar_code_introduction} \end{figure} As described in \cite{arikan2009channel}, a polar code is constructed by exploiting channel polarization. Mathematically, polar codes are linear block codes of length $N = 2^n$. The transmitted codeword ${\bm{x}}\triangleq {(x_1,x_2,\cdots,x_N)}$ is computed by $\bm{x}=\bm{u}\bm{G}$, where $\bm{G}=\bm{F}^{\otimes n}$ and $\bm{F}^{\otimes n}$ is the $n$-th Kronecker power of $\bm{F} = \begin{bmatrix} 1&0\\ 1&1 \end{bmatrix} $. Each row of $\bm{G}$ corresponds to an equivalent polarized channel. For an $(N,k)$ polar code, the $k$ bits of $\bm{u}$ that carry source information are transmitted over the most reliable channels; these are referred to as information bits. The remaining $N-k$ bits, called frozen bits, are set to zero and placed at the least reliable channels. The locations of the information and frozen bits depend on the channel model and channel quality; their determination is investigated in~\cite{tal2013construct}. Fig.~\ref{encoder} shows an example of an $(8,4)$ polar code encoder, where the black and white nodes stand for the information bits and frozen bits, respectively. Polar codes can be decoded by recursively applying successive cancellation to estimate $\hat{u}_i$ using the channel output $y_{0}^{N-1}$ and the previously estimated bits $\hat{u}_{0}^{i-1}$. This approach is naturally represented by a binary tree in which each node corresponds to a constituent code. The number of bits $N^{m}$ in a constituent node at stage $m$ ($m = 0,1,2,\ldots$) is equal to $2^m$.
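As an aside, the encoding rule $\bm{x}=\bm{u}\bm{G}$ can be computed without forming $\bm{G}$ explicitly, using the standard butterfly structure of the polar transform. The sketch below (plain Python, for illustration only; it assumes the natural, non-bit-reversed bit order) is one way to realize it:

```python
def polar_encode(u):
    """Encode a length-N message (N a power of two) as x = u * F^{(kron) n}
    over GF(2), computed in place with the butterfly structure."""
    x = list(u)
    n = len(x)
    s = 1
    while s < n:
        # at span s, each upper branch becomes u1 XOR u2 (the F kernel)
        for i in range(0, n, 2 * s):
            for j in range(i, i + s):
                x[j] ^= x[j + s]
        s *= 2
    return x

# (8,4) example: frozen positions carry 0, information bits carried in u
x = polar_encode([0, 0, 0, 1, 0, 1, 1, 0])
```

The loop applies the kernel $\bm{F}$ at spans $1,2,\ldots,N/2$, which is equivalent to multiplying by the $n$-th Kronecker power.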
Fig.~\ref{tree} shows an example of an $(8,4)$ polar code. $\bm{\alpha}$ stands for the soft reliability value, typically the log-likelihood ratio (LLR), and $\bm{\beta}$ stands for the hard decision. $\bm{\alpha_l}$ and $\bm{\alpha_r}$ are the messages passed from a parent node to its left and right children, and can be computed according to Eq.~(\ref{left_child}) and Eq.~(\ref{right_child}), respectively. \begin{equation} \begin{aligned} \alpha_{l}[i] &= f(\alpha_{v}[i],\alpha_{v}[i+N^{m}/2])\\ &= sign(\alpha_{v}[i])sign(\alpha_{v}[i+N^{m}/2])\\ &\cdot~min(|\alpha_{v}[i]|,|\alpha_{v}[i+N^{m}/2]|) \label{left_child} \end{aligned} \end{equation} \begin{equation} \begin{aligned} \alpha_{r}[i] &= g(\beta_{l}[i-N^{m}/2],\alpha_{v}[i],\alpha_{v}[i-N^{m}/2]) \\ &= (-1)^{\beta_{l}[i-N^{m}/2]}\cdot\alpha_{v}[i-N^{m}/2]+\alpha_{v}[i] \label{right_child} \end{aligned} \end{equation} At stage 0, $\beta_v$ of a frozen node is always zero, and for an information bit its value is calculated by threshold detection of the soft reliability according to \begin{equation}\label{hard_decision} \beta_v=h(\alpha_v)= \left \{ \begin{array}{c} 0,~if~\alpha_v \geqslant 0 \\ 1,~otherwise \\ \end{array} \right. \end{equation} At intermediate stages, $\bm{\beta_v}$ can be recursively calculated by \begin{equation}\label{feedback} \beta_{v}[i] = \left \{ \begin{array}{ll} \beta_{l}[i] \oplus \beta_{r}[i]~if~i\leq~N^{m}/2 \\ \beta_{r}[i-N^{m}/2]~otherwise \\ \end{array} \right. \end{equation} \subsection{Fast-SSC Algorithm} \label{Fast SC Algorithm} The main idea of the fast-SSC algorithm is illustrated in~\cite{zhang2012reduced},~\cite{alamdar2011simplified} and~\cite{sarkis2014fast}. By identifying constituent polar codes with certain patterns, the hard decision $\bm{\beta_v}$ of a constituent node can be determined immediately, without traversing the entire subtree, once the constituent polar code is activated.
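The elementary SC update rules above translate directly into software. The following minimal sketch (plain Python, illustrative only; the sign of zero is treated as positive, consistent with the threshold rule) mirrors the $f$, $g$ and hard-decision functions:

```python
import math

def f(a1, a2):
    """Left-child update (min-sum form): sign(a1)*sign(a2)*min(|a1|, |a2|)."""
    return math.copysign(1.0, a1) * math.copysign(1.0, a2) * min(abs(a1), abs(a2))

def g(beta_l, a_far, a_near):
    """Right-child update: (-1)^beta_l * alpha_v[i - N^m/2] + alpha_v[i]."""
    return ((-1) ** beta_l) * a_far + a_near

def h(alpha):
    """Stage-0 hard decision: 0 if the LLR is non-negative, else 1."""
    return 0 if alpha >= 0 else 1
```

For example, `f(-2.0, 3.0)` keeps the smaller magnitude with the product of signs, and `g` adds or subtracts the far LLR depending on the partial sum `beta_l`.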
For a length-$N$ constituent code in non-systematic polar codes, $\hat{\bm{u}}_N$ is calculated by $\hat{\bm{u}}_N = \bm{\beta_{vN}} \cdot G_N$, where $G_N$ is the generator matrix of the length-$N$ polar code. We adopt four kinds of constituent polar codes in our design: $\mathcal{N}^0$, $\mathcal{N}^1$, $\mathcal{N}^{SPC}$ and $\mathcal{N}^{REP}$, which are called fast constituent polar codes. $\mathcal{N}^0$ and $\mathcal{N}^1$ refer to constituent codes that contain only frozen bits or only information bits, respectively. For $\mathcal{N}^0$ codes, we can set $\bm{\beta_v}$ to $0$ immediately. For an $\mathcal{N}^1$ node, $\bm{\beta_v}$ can be decided directly via the threshold detection of Eq.~(\ref{hard_decision}). $\mathcal{N}^{SPC}$ and $\mathcal{N}^{REP}$ are two kinds of constituent codes containing both frozen bits and information bits. In a length-$N$ $\mathcal{N}^{SPC}$ code, only the first bit is frozen, which renders the constituent code a rate-$(N-1)/N$ single parity check (SPC) code. This code can be decoded by applying a parity check to the least reliable bit, i.e., the bit with the minimum absolute LLR. First, obtain the hard decision $HD_{v}$ of $\bm{\beta_v}$ via threshold detection. Then, calculate the parity by \begin{equation}\label{HD_SPC} parity = \sum_{i=1}^{N^m} \oplus HD_{v}[i], \end{equation} and find the index of the least reliable bit via \begin{equation}\label{min_SPC} j= arg \min_i|\alpha_{v}[i]|. \end{equation} Eventually, $\bm{\beta_v}$ is decided by \begin{equation}\label{beta_SPC} \beta_{v}[i]= \left \{ \begin{array}{ll} HD_{v}[i]\oplus parity,~when~i=j \\ HD_{v}[i],~otherwise \\ \end{array} \right. \end{equation} In a length-$N$ $\mathcal{N}^{REP}$ code, only the last bit is an information bit. In this case, all the $\beta_{v}[i]$ should be the same and reflect the information contained in the single information bit.
Thus, the decoding algorithm starts by summing all input LLRs, and $\bm{\beta_v}$ is calculated as \begin{equation}\label{beta_REP} \beta_{v}[i]= \left \{ \begin{array}{ll} 0,~when~\sum\alpha_{v}[i] \geqslant 0; \\ 1,~otherwise \\ \end{array} \right. \end{equation} \begin{figure}[] \centering \subfloat[]{\includegraphics[width=1in]{n0n1.eps}\label{n0n1}} \hfil \subfloat[]{\includegraphics[width=1in]{nsnr.eps}\label{nsnr}} \caption{\protect\subref{n0n1} An example of $\mathcal{N}^0$ and $\mathcal{N}^1$ in a 8-bit polar code tree, and \protect\subref{nsnr} An example of $\mathcal{N}^{SPC}$ and $\mathcal{N}^{REP}$ in a 8-bit polar code tree} \label{2tree} \end{figure} Fig.~\ref{2tree} gives examples of the tree presentations of these four kinds of constituent polar codes. \section{Hardware Implementation} \label{Hardware Implementation} In this section, a novel hardware implementation of the fast-SSC decoder is presented. For a polar code of a given length, different code rates yield different distributions of constituent polar codes. A thoughtfully composed architecture should have the capability and flexibility to deal with different rates. By exploiting the similarity between the decoding processes of fast constituent polar codes and regular polar codes, our design supports a variety of rates. The scheduling scheme based on the proposed architecture is also discussed. Additionally, we develop an approach for sharing and reusing computational elements to achieve higher hardware efficiency. \subsection{System Overview} \label{System Overview} As introduced in~\cite{leroux2013semi}, the tree and line architectures are the most common for SC decoders. The line architecture has higher hardware utilization but needs increased complexity in the control module and memory access. Thus, we adopt the tree architecture in our design. Fig.~\ref{ststem_overview} shows an overview of the proposed system for code length 16.
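The SPC and repetition decisions of Eqs.~(\ref{HD_SPC})--(\ref{beta_SPC}) and Eq.~(\ref{beta_REP}) described above can be sketched as a behavioral software model (plain Python, for illustration; not the hardware description itself):

```python
def decode_spc(alpha):
    """SPC node: flip the least reliable hard decision if the parity fails."""
    hd = [0 if a >= 0 else 1 for a in alpha]   # threshold detection
    parity = 0
    for b in hd:
        parity ^= b                            # XOR of all hard decisions
    j = min(range(len(alpha)), key=lambda i: abs(alpha[i]))  # least reliable bit
    beta = list(hd)
    beta[j] ^= parity                          # correct only when parity == 1
    return beta

def decode_rep(alpha):
    """Repetition node: every bit equals the sign decision of the LLR sum."""
    bit = 0 if sum(alpha) >= 0 else 1
    return [bit] * len(alpha)
```

For instance, with LLRs `[1.0, -0.5, 2.0, 3.0]` the hard decisions have odd parity, so the bit with the smallest $|\alpha|$ (index 1) is flipped.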
The $processing~unit$ (PU) performs the $f$ and $g$ functions in Eq.~(\ref{left_child}) and Eq.~(\ref{right_child}), respectively, and its arithmetic part is also used to decode $\mathcal{N}^{SPC}$ and $\mathcal{N}^{REP}$ nodes. The pre-computation technique is also used, which allows the $f$ and $g$ functions to update in the same clock cycle. The PU used in stage 0 differs slightly from the ordinary PU; we denote it PU$_0$ in the figure. According to Eq.~(\ref{min_SPC}), the minimum LLR value needs to be found. A comparator tree is used to perform this, since it inherently exists in the tree architecture of PUs. Judicious scheduling permits obtaining the minimum value at $stage~0$ and recording the choice of the smaller input for each PU at each stage. After that, a backward operation implemented by a series of $parity~transmit~units$ (PTUs) helps to locate the minimum within a length-$N$ $\mathcal{N}^{SPC}$ constituent polar code. Design details are illustrated in section~\ref{Processing Unit Design}. The estimation of the current bit in SC decoding is based on the information of previously decoded bits ($\bm{\beta}$). This information is also called the partial sum. Thus, a $partial~sum~generator$ (PSG) that can cooperate with the decoding pipeline is also needed. We adopt the PSG introduced in~\cite{zhang2013low}, which is compatible with our system; thus, the design of the PSG is not discussed in this paper. \begin{figure}[!t] \centering \includegraphics[width=2.5in]{system_overview.eps} \caption{Overview of the proposed system for code length 16} \label{ststem_overview} \end{figure} \subsection{Dataflow, latency and flexibility analysis} \label{Scheduling of Fast SC Decoder} In the tree representation, a conventional SC decoder processes one node in each clock cycle, so traversing a subtree containing $N$ leaf nodes needs $2N-2$ clock cycles.
By using pre-computation as introduced in~\cite{zhang2012reduced}, which calculates the $f$ function and all possible results of the $g$ function in the same clock cycle, the latency can be reduced to $N-1$. In our design, if a subtree belongs to the fast constituent polar codes, the latency can be reduced further. For $\mathcal{N}^{0}$, the $\bm{\beta_v}$ are all set to $0$, and for $\mathcal{N}^{1}$, the $\bm{\beta_v}$ are determined by the hard decision of the input LLRs. Both computations need only one clock cycle after they are activated. For $\mathcal{N}^{SPC}$, according to Eq.~(\ref{HD_SPC}), Eq.~(\ref{min_SPC}) and Eq.~(\ref{beta_SPC}), only three operations are needed. Finding the minimum LLR can be done by a comparator tree, which naturally exists in a tree-architecture SC decoder since every PU has a comparator for Eq.~(\ref{left_child}). For $N$ LLRs, finding the smallest one takes $\log_2 N$ clock cycles. Meanwhile, we can obtain the parity bit when the minimum LLR is found, as explained in the next subsection. After that, one more clock cycle is needed for the single parity check, which is done by an $XOR$ gate. Thus, in total, decoding a length-$N$ $\mathcal{N}^{SPC}$ constituent polar code needs $\log_2 N + 1$ clock cycles. For $\mathcal{N}^{REP}$, according to Eq.~(\ref{beta_REP}), an accumulation operation is needed. Similar to the comparator tree, an adder tree also exists in the tree-architecture SC decoder since every PU has an adder for Eq.~(\ref{right_child}). A length-$N$ $\mathcal{N}^{REP}$ constituent polar code needs $\log_2 N$ clock cycles to decode. Hence $\mathcal{N}^0$ and $\mathcal{N}^1$ have time complexity $O(1)$, while $\mathcal{N}^{SPC}$ and $\mathcal{N}^{REP}$ have time complexity $O(\log_2 N)$.
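The per-node cycle counts above yield a simple latency model for a pruned decoding tree. The sketch below (plain Python, an illustrative model under the stated assumptions: one cycle per regular node with pre-computation, so a purely regular tree of $N$ leaves costs $N-1$ cycles; stage-0 bit decisions are absorbed into the parent's cycle) estimates the total latency of a given tree:

```python
import math

def node_latency(node):
    """Estimated clock cycles under the fast-SSC scheduling model.
    node = ('bit',)                 stage-0 leaf (cost absorbed by parent)
         | ('N0'|'N1', size)        all-frozen / all-information fast node
         | ('SPC'|'REP', size)      single parity check / repetition node
         | ('R', left, right)       regular node with two subtrees."""
    kind = node[0]
    if kind == 'bit':
        return 0
    if kind in ('N0', 'N1'):
        return 1
    if kind == 'SPC':
        return int(math.log2(node[1])) + 1
    if kind == 'REP':
        return int(math.log2(node[1]))
    # regular node: f and g pre-computed in one cycle, then both children
    return 1 + node_latency(node[1]) + node_latency(node[2])

# e.g., an 8-bit tree split into a REP-4 and an SPC-4 node
tree = ('R', ('REP', 4), ('SPC', 4))
```

Under this model, the example tree costs $1+2+3=6$ cycles instead of the $2\cdot 8-2=14$ cycles of conventional SC decoding.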
Compared with the commonly discussed SC architectures in~\cite{leroux2013semi},~\cite{zhang2012reduced} and~\cite{yuan2014low}, which all have linear time complexity $O(N)$, the proposed scheduling scheme benefits significantly in terms of latency, especially for very large $N$. The latency reduction for $N=1024$ polar codes of different rates is presented in the next section. The main challenge for a fast-SSC decoder is that the architecture is subject to the code rate, because polar codes with different rates do not have a uniform distribution of constituent polar codes. The proposed design overcomes this obstacle by exploiting the similarity between the decoding architectures of fast constituent and regular polar codes. The specially designed PU allows the tree architecture to deal with both fast constituent and regular polar codes, which means the entire decoding process can run smoothly no matter what the distribution of constituent codes is. The architecture does not rely on the distribution of constituent codes; this property provides the flexibility for multiple rates. To switch from one rate to another, only the control signals for the given PUs need to be modified. \subsection{Processing Unit Design} \label{Processing Unit Design} \begin{figure}[!t] \centering \subfloat[]{\includegraphics[width=2.5in]{processing_unit.eps}\label{processing_unit}} \vfil \subfloat[]{\includegraphics[width=2.5in]{processing_unit0.eps}\label{processing_unit0}} \caption{\protect\subref{processing_unit} Design details of PU, \protect\subref{processing_unit0} Design details of $PU_0$} \label{processing unit design} \end{figure} Fig.~\ref{processing_unit} shows the design details of the PU. A single PU can perform the $f$ and $g$ functions in Eq.~(\ref{left_child}) and Eq.~(\ref{right_child}), respectively. A PU tree can also help to find the minimum value or perform accumulation over multiple inputs.
In Fig.~\ref{processing_unit}, $S$ stands for $signed~magnitude~number$ and $C$ stands for $2's~complement~number$. Unlike the PU design in~\cite{yuan2014low}, in which data are initially stored in signed magnitude form, our design uses 2's complement as the initial form. We do this for two reasons. First, according to the synthesis results, the critical path of the PU is along the $g$ function path. By moving the number-system conversion modules to the $f$ function path, i.e., using 2's complement as the initial data form, the critical path remains along the $g$ function path but is significantly shortened. Second, whereas four number-system conversion modules are used in~\cite{yuan2014low}, only three are needed when 2's complement numbers are used, which is more hardware efficient. The benefits of this modification can be seen in section~\ref{Hardware Analysis of Comparison}. For each PU, two LLRs are fed simultaneously. Since we use the pre-computation technique, the $f$ and $g$ functions are calculated at the same time, and which one is output is determined by $mode~select~2$. According to Eq.~(\ref{right_child}), there are only two possible results for the $g$ function: the sum or the difference. The final result depends on the corresponding partial sum, so two registers are used to hold the most recently computed values until the corresponding partial sum is calculated. When the sum is calculated for decoding $\mathcal{N}^{REP}$, only additions are needed. The datapath is decided by the $Mode~select~1$ signal. When the $f$ function is performed, according to Eq.~(\ref{left_child}), both inputs are divided into two parts: the sign bit and the unsigned magnitude. Each part is processed separately first, and then the results of the two parts are combined to obtain the updated value. $C~to~S$ and $S~to~C$ modules are needed before and after the comparisons, respectively.
When the PU deals with $\mathcal{N}^{SPC}$, the result of the comparison is recorded in a register as the $select~signal$ for the PTU. Since the search for the minimum value lasts several clock cycles, the register has a feedback path to hold this value for later clock cycles. The input source is chosen by the $Mode~select~3$ signal. Since every PU applies an exclusive-or operation to the sign bits of its two inputs, according to Eq.~(\ref{HD_SPC}) the sign bit of the final value at stage 0 equals the parity. Eq.~(\ref{beta_SPC}) can then be performed using an $XOR$ gate: the PU that contains the minimum LLR receives the parity check bit and the others receive $0$s. The transmission of the parity check bit is done by the PTU, a two-input two-output module. One input is the $parity~check~bit$ (PCB) and the other is the $select~signal$ (SS). The parity check bit is transmitted via $output~1$ (O1) or $output~2$ (O2) based on the value of SS. Table~\ref{truth table} shows the truth table of the PTU. The logic expressions of O1 and O2 are $O1~=~PCB~and~\overline{SS}$, $O2~=~PCB~and~SS$, which can be implemented by two AND gates and one inverter. \begin{table}[h] \centering \caption{Truth table of PTU} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline PCB & SS & O1 & O2 & PCB & SS & O1 & O2 \\ \hline 0 & 0 & 0 & 0 & 1 & 0 & 1 & 0 \\ \hline 0 & 1 & 0 & 0 & 1 & 1 & 0 & 1 \\ \hline \end{tabular} \label{truth table} \end{table} The PU in $stage~0$, denoted PU$_0$ in~Fig.~\ref{ststem_overview}, has a simpler architecture. Fig.~\ref{processing_unit0} shows the design details of PU$_0$. Since only one more clock cycle is needed for the single parity check, there is no feedback to this register. Furthermore, $\mathcal{N}^{SPC}$ cannot exist at $stage~0$, so the top part of Fig.~\ref{processing_unit}, which relates to the single parity check, can be removed.
For the $g$ function and $\mathcal{N}^{REP}$, the output of the $f$ function can be fed back to it immediately, and the sign bit of the addition result is the partial sum for $\mathcal{N}^{REP}$. \begin{table}[!t] \centering \caption{Hardware comparison of different $(n,k)$ SC decoder with $q$-bit quantization for inner LLRs using tree architecture} \begin{tabular}{|c|c|c|c|c|} \hline Hardware Type & \cite{zhang2012reduced} & \cite{leroux2011hardware} & \cite{yuan2014low} & Proposed Design \\ \hline \# of PU & $n-1$ & $n-1$ & $n-1$ & $n-1$ \\ \hline \# of PTU & $0$ & $0$ & $0$ & $2/n-1$ \\ \hline \# of 1 bit REG & $\thickapprox 3qn$ & $\thickapprox qn$ & $\thickapprox 3qn$ & $\thickapprox (3q+1)n$ \\ \hline HC & $1.3$ & $1$& $1.3$&$1.31$ \\ \hline Latency (clock cycle) & $n-1$ & $2n-2$ & $0.75n-1$ & $\thickapprox (0.1\thicksim 0.3)n$ \\ \hline Throughput & $2$ & $1$ & $2.67$ & $\thickapprox6.69\thicksim 22.26 $ \\ \hline Throughput/HC & $1.53$ & $1$ & $1.74$&$5.1\thicksim 16.99$ \\ \hline \end{tabular} \label{comparison} \end{table} \subsection{Fixed point analysis} \label{Quantization Analysis} Fig.~\ref{fix_point} shows the effect of quantization on the $(1024,512)$ polar code. For channel outputs and inner LLRs, we use separate quantization schemes, shown in $(C,L,F)$ format, where $C$, $L$ and $F$ are the numbers of bits used to represent the channel output, the inner LLRs, and the fraction parts of both, respectively. Since no multiplication or division is used, the fraction length does not change, so channel outputs and inner LLRs use the same fraction precision. As a trade-off between hardware efficiency and decoding performance, we choose the $(4,5,0)$ quantization scheme in our design.
\begin{figure}[!t] \centering \includegraphics[width=3 in]{fix_point.eps} \caption{Effect of quantization on the BER/FER performance of $(1024,512)$ code} \label{fix_point} \end{figure} \section{Hardware Analysis and Comparison} \label{Hardware Analysis of Comparison} In this section, comparisons between the proposed design and other state-of-the-art designs are given, and synthesis results using the $Nangate~FreePDK~45nm$ process are also presented. Table~\ref{comparison} shows the hardware comparison of different $(n,k)$ SC decoders with $q$-bit quantization for inner LLRs using tree architectures. All throughputs and hardware complexities (HC) are normalized to the SC decoder in~\cite{leroux2011hardware}, and the hardware complexity is estimated based on the synthesis results. The latency of the proposed design is given as a range as the code rate varies from $0.05$ to $0.95$. From this table, we can see that the proposed design achieves the highest throughput per unit of hardware complexity. The exact latency depends on the code rate. Fig.~\ref{latency_reduction} shows the latency reduction of the proposed design for code rates from $0.05$ to $0.95$. The reduction is relative to the 2b-SC-Precomputation decoder, to the best of our knowledge the fastest reported. The figure shows that at least a $60\%$ latency reduction can be achieved by the proposed design. This is very promising for applications where high-rate channel codes are needed, such as data storage systems. Additionally, we implemented the proposed design in Verilog for polar codes of length $1024$ and synthesized it using the $Nangate~FreePDK~45nm$ process with $Synopsys~Design~Compiler$. We calculated the throughput for the $(1024,870)$ and $(1024,512)$ polar codes. Table~\ref{syn_result} shows the synthesis results for both codes. Notice that the maximum frequency is higher than that reported in \cite{yuan2014low}, which uses the same process as our design.
In theory, our design should have a lower maximum frequency, since it has one more mux delay to select between regular and fast constituent polar codes. This performance improvement is attributable to the modifications made to the PU, as described in section~\ref{Processing Unit Design}. \begin{table}[h] \centering \caption{Synthesis results for $(1024,870)$ and $(1024,512)$ polar codes} \begin{tabular}{|c|c|} \hline Silicon Area ($\mu m^2$) & 275899 \\ \hline Max Frequency (GHz) & 1.04 \\ \hline Latency (1024,870) (clock cycles) & 156 \\ \hline Throughput (1024,870) (Gbps) & 5.81 \\ \hline Latency (1024,512) (clock cycles) & 266 \\ \hline Throughput (1024,512) (Gbps) & 2.01 \\ \hline \end{tabular} \label{syn_result} \end{table} \begin{figure}[!t] \centering \includegraphics[width=3in]{latency_reduction.eps} \caption{Latency Reduction $vs.$ Code Rate} \label{latency_reduction} \end{figure} \section{Conclusion} \label{Conclusion} In this paper, we proposed a hardware architecture for fast-SSC decoding of polar codes. By exploiting the similarity between the decoding processes of fast constituent and regular polar codes, the proposed design overcomes the fast-SSC decoder's lack of decoding flexibility with respect to multiple code rates. The corresponding scheduling plan and the specially designed PU are also described. Results show that the proposed design significantly increases the decoding throughput of polar codes compared with other state-of-the-art SC decoders. \bibliographystyle{IEEEtran}
\section{Introduction} \label{Introduction} Since the first recorded white-light observations of solar flares (\citealp{Carrington1859, Hodgson1859}), sophisticated ground-based and space-borne solar instruments have been introduced to investigate the physics of the Sun. Recently, space telescopes like SoHO, Yohkoh, RHESSI, Hinode and SDO have provided many detailed observations covering broad wavelength ranges at high temporal, spatial and spectral resolution. It is generally accepted that the energy of solar flares comes from stressed, non-potential, current-carrying coronal magnetic fields being released by magnetic reconnection. About $10$ to $50\%$ of the flare energy may be transferred to energetic electrons and ions (e.g. \citealp{Lin&Hudson1976}). In some cases energetic electrons alone carried away $50\%$ of the flare energy (e.g. \citealp{Milleretal1997}), being accelerated to energies up to $10-100$ $MeV$ (e.g. \citealp{Aschwanden2002}). The prime diagnostic of accelerated electrons in solar flares is the hard X-ray (HXR) radiation they cause. Two main components have been identified in HXR light curves: a sharply increasing component and a slowly varying one. The sharp increase happens within 0.5-5 s after the initial flaring (e.g. \citealp{Holmanetal2011, Zharkovaetal2011}). This indicates that within a fraction of a second electrons are locally accelerated to energies in excess of a few $MeV$. The slowly varying component lasts as long as the flare continues, i.e., as long as electron energization continues. Using high-resolution imaging, HXR source locations on the Sun have been determined with a resolution of a few arcseconds. Solar observations have shown HXR emission from the footpoints of flaring coronal structures. Based on the classical CSHKP solar flare reconnection model (see \citealp{Priest&Forbes2002} for a review) and solar flare observations near the limb of the solar disk, HXRs have recently also been found at flare loop tops (e.g., \citealp{Masuda1994, Gordovskyyetal2010b} with Yohkoh observations).
Although substantial progress has been made in observations, it is still an open question by which mechanisms the flare electrons are accelerated. The most commonly suggested mechanisms can be divided into three classes: (1) acceleration by direct current (DC) electric fields (see, e.g., \citealp{Zharkova&Gordovskyy2004, Zharkova&Gordovskyy2005a, Zharkova&Gordovskyy2005b}), (2) stochastic acceleration (see, e.g., \citealp{Vlahos2009}) and (3) shock acceleration (see, e.g., \citealp{Aschwanden2002, Benz2008}). Observations also show that different flares produce different HXR spectra, changing with time and with their locations relative to the polarity inversion line (PIL) (e.g., \citealp{Zharkovaetal2011}). All these features can hardly be explained by a single acceleration mechanism. Flare particles are therefore perhaps accelerated by different mechanisms at different times and in different places while the flare lasts. In order to validate an acceleration mechanism, it is appropriate to carry out test particle calculations. The electron acceleration in the vicinity of a single reconnection X-point, e.g., was investigated by \citealp{Zharkova&Gordovskyy2004, Zharkova&Gordovskyy2005a, Zharkova&Gordovskyy2005b, Wood&Neukirch2005, Priest&Titov1996}. Analytic reconnection models were used, as well as the results of ideal or resistive MHD numerical simulations. \citealp{Martens&Young1990}, e.g., used MHD simulation results to study particle motion and acceleration in current sheets. In all these models the component of the DC electric field $\vec{E} = -\vec{u} \times \vec{B} + \eta \vec{J}$ parallel to the magnetic field causes strong particle acceleration, provided $\eta$ is chosen appropriately. However, the prescription of $\eta$ in resistive MHD simulations is usually ad hoc and arbitrary. Meanwhile, in the collisionless corona the concept of a collisional resistivity $\eta$ is largely inapplicable; microphysical effects have to be taken into account.
\citealp{Silinetal2005, Buechner&Elkina2006}, e.g., have shown that, considering possible micro-turbulence, strong parallel electric fields must be confined to narrow channels of the ion inertial scale size (see also J. B\"{u}chner and W. Daughton 2007, section 3.5 in \cite{Buechner2007}). Macroscopic MHD simulations, on the other hand, are better suited to investigating electron acceleration in the convective electric fields ($\vec{E}=-\vec{u}\times\vec{B}$). \citealp{Veksteinetal1997} and \citealp{Guoetal2010}, e.g., analysed particle acceleration in the convective electric fields around and at a magnetic null point, respectively. \citealp{Veksteinetal1997} used an analytically prescribed magnetic field with an added uniform electric field in the perpendicular direction to calculate the test particle guiding center motions near a reconnection X-point in a 2D geometry. They restricted the test particle orbits to regions far from the X-point, since the guiding center approximation breaks down at a null point. They considered particle parallel acceleration due to the $\vec{E} \times \vec{B}$ drift effects and neglected the effects of magnetic gradients and curvature by launching only particles with very small initial parallel velocities and magnetic moments. As other authors before them (e.g., \citealp{Burkhartetal1990}), they found that the final kinetic energy of the most accelerated particles is proportional to $E^{4/3}$, where $E = |\vec{u}\times\vec{B}|$. They also assessed the spectral index of the accelerated particles to be about 1.7; the corresponding HXR spectral index would be around 2.7, utilizing the simple relation $\gamma_s=\delta +1$ (where $\delta$ is the electron spectral index and $\gamma_s$ stands for the index of the emitted HXR spectrum), which is valid within the thin-target model (\citealp{Datlowe&Lin1973}).
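To make the role of the convective field concrete, the following minimal sketch (plain Python, illustrative only; not part of any cited simulation) computes $\vec{E}=-\vec{u}\times\vec{B}$ and the guiding-center drift $\vec{v}_d=(\vec{E}\times\vec{B})/|\vec{B}|^2$. For an ideal-MHD field the drift recovers exactly the component of the plasma velocity perpendicular to $\vec{B}$, i.e., the field lines are frozen into the flow:

```python
def cross(a, b):
    """Cross product of two 3-vectors given as tuples."""
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def convective_E(u, B):
    """Ideal-MHD convective electric field E = -u x B."""
    c = cross(u, B)
    return (-c[0], -c[1], -c[2])

def exb_drift(E, B):
    """Guiding-center E x B drift velocity v_d = (E x B) / |B|^2."""
    b2 = B[0]**2 + B[1]**2 + B[2]**2
    c = cross(E, B)
    return tuple(ci / b2 for ci in c)
```

With $\vec{u}$ perpendicular to $\vec{B}$, `exb_drift(convective_E(u, B), B)` returns $\vec{u}$ itself, which is the frozen-in condition used implicitly in these test-particle studies.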
Contrary to \citealp{Veksteinetal1997}, \citealp{Guoetal2010} took the output of a 3D MHD simulation of magnetic null point reconnection to study the electron and proton acceleration at a 3D null point in the convective electric field. Every test particle was traced by solving the full equations of motion. This is necessary since the guiding center approximation breaks down at a magnetic null point. They investigated the influence of the convective speed on the particle acceleration by rescaling it and found that all particles are more efficiently accelerated with a larger convective speed. The particles gained energies of up to the order of $2$ $MeV$ (protons) and $3$ $keV$ (electrons) from initial thermal energies of about $200$ $eV$. The reason is that non-adiabatic (demagnetized) particles can easily be accelerated in the direction perpendicular to the magnetic field. Particles have to undergo strong perpendicular drifts to be substantially accelerated, although the final parallel energy of a particle can still dominate its final total kinetic energy. Because of their much smaller gyroradii, electrons are demagnetized in smaller regions than protons; hence protons are accelerated to higher energies than electrons. These authors also studied the influence of the initial energy on the particle acceleration: higher initial energies lead to stronger proton acceleration, while the final kinetic energy of the electrons was not influenced essentially. In the studies of \citealp{Veksteinetal1997} and \citealp{Guoetal2010}, there was only one magnetic X- or null point. \citealp{Kruckeretal2008} claimed that electron DC-acceleration at only one reconnection X-point (e.g., \citealp{Zharkova&Gordovskyy2005a, Wood&Neukirch2005}) cannot explain the huge number of accelerated electrons inferred from HXR observations. The possibility of particle acceleration by cascading reconnection was first mentioned by \citealp{Shibata&Tanuma2001}.
They conjectured that magnetic islands could be formed at many scales by tearing-mode instabilities of the stretching current sheet. Later this concept was confirmed by theoretical approaches (e.g., \citealp{Loureiroetal2007, Uzdenskyetal2010}), observations (e.g., \citealp{Hoshinoetal1994, Karlicky2004}), AMR MHD simulations (e.g., \citealp{Bartaetal2011}) and particle-in-cell (PIC) simulations (e.g., \citealp{Karlickyetal2012}). The electron acceleration by many reconnection sites was studied by \citealp{Li&Lin2012} and \citealp{Gordovskyyetal2010a, Gordovskyyetal2010b}. In their studies, however, they assumed arbitrary, ad hoc prescribed anomalous resistivity models to obtain the accelerating fields. Moreover, the number of X-points in their studies was obtained by periodically repeating the simulation domain. Only the particle acceleration in the convective electric fields $\vec{E}=-\vec{u}\times\vec{B}$, however, is independent of any ad hoc assumption about an anomalous resistivity. In order to understand its possible acceleration effects, we use the results of AMR-MHD simulations of multiple island formation by cascading reconnection (\citealp{Bartaetal2010, Bartaetal2011}). Those simulations have shown that cascading reconnection forms differently sized magnetic islands in which electrons can be accelerated (see Sect.\ref{MHD-Model}). We use two different magnetic structure resolutions to investigate the influence of the resolution on the electron acceleration. We study the electron acceleration by cascading reconnection not only near the X-points but also in the magnetic islands, in the framework of a guiding center approximation (\citealp{Northrop1963}, see Sect.\ref{Method}). The resulting HXR emission due to non-thermal Bremsstrahlung of the energetic electrons is derived using an optically thin Bremsstrahlung method (\citealp{Brown1971, Tandberg-Hanssen&Emslie1988}) for comparison with flare HXR observations.
In Sect.\ref{Results}, the dependence of the electron acceleration on the initial conditions, the different acceleration terms in the parallel direction, the acceleration in the parallel and perpendicular directions, as well as the electron trajectories, are investigated for trapped (Sect.\ref{0_Trapped}) and precipitating (Sect.\ref{0_Fled}) electrons. Finally the results are discussed and conclusions are drawn in Sect.\ref{Discussion}. \section{Electromagnetic fields of cascading reconnection} \label{MHD-Model} In this study we aim to investigate the particle acceleration in the convective electric fields of differently resolved cascading reconnection current sheets trailing a flaring arcade behind an ejected flux rope (cf. \citealp{Lin&Forbes2000}). The fields of cascading magnetic reconnection are obtained by means of a 2.5D AMR MHD simulation (\citealp{Bartaetal2010, Bartaetal2011}). Traditional MHD simulations use only uniform grids. Unfortunately, the sub-grid physics becomes important when the current sheet width and the non-ideal plasma domain become thinner than the numerical grid size. Hence, traditional coarse MHD simulations cannot capture the smaller-scale processes of the anticipated cascading reconnection. In order to resolve smaller-scale magnetic structures in the thinning current sheets, we use simulation results which cover as large a scale range as possible. The high resolution AMR MHD technique allows the description of smaller-scale magnetic structures: a refined mesh is introduced wherever the current sheet width becomes comparable with the initial coarse grid size. The AMR algorithm works as follows: if at the time-step $t+\Delta t$ some coarse cells are detected to contain a thin current sheet, they are locally split into sub-boxes with $10 \times 10$ grid-points each.
After such refined meshes are initialized, the required finer plasma and field values are obtained by interpolating the parent coarse-mesh values at the previous time step ($t$). Then the dynamics of both the newly created and the pre-existing refined meshes are evolved in time ($t \rightarrow t+\Delta t$) with an accordingly refined time-step. After that, the plasma and field values on the parent coarse mesh are replaced by averages of the quantities obtained from its corresponding refined meshes at time-step $t+\Delta t$. The influence of the global dynamics on the refined meshes is taken into account by interpolating the boundary conditions in time and space. This refinement is repeated until the whole simulation is over (see \citealp{Bartaetal2010}). There are thus two sets of electromagnetic field data obtained by the AMR MHD simulation: one for a simulation on the coarse meshes alone and another including the refined meshes, which provides even smaller-scale structures of the magnetic fields (see Fig.\ref{MyDomain_Zoom}). The MHD simulation is restricted to 2.5D, i.e. a two-dimensional geometry but three-dimensional plasma velocities and magnetic fields. This assumption is reasonable since observations have shown that extended solar flare arcades typically have a much larger extent along the polarity-inversion line (PIL) than across it. The coordinate system is shown in Fig.\ref{wholeView}: the x- and y-axis are directed along and perpendicular to the current sheet, respectively. The current sheet center is located at $y=0$, while the z-axis points along the PIL located at ($x=0$, $y=0$). In this direction every quantity is invariant, i.e. $\partial / \partial z = 0$. The coarse resolution contains 6400 $\times$ 800 points in the vertical (x-axis) and the right half of the horizontal (positive y-axis) direction.
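The refine--evolve--average cycle described above can be illustrated by a minimal sketch (function names and the toy data are hypothetical; the actual AMR implementation of \citealp{Bartaetal2010} is considerably more involved): a flagged coarse cell is filled by bilinear interpolation of its parent values, and after the refined evolution the parent value is replaced by the sub-box average.

```python
import numpy as np

REFINE = 10  # each flagged coarse cell is split into a 10 x 10 sub-box

def refine_cell(q, i, j):
    """Fill a REFINE x REFINE sub-box by bilinearly interpolating the
    coarse values at the four corners of cell (i, j), clamped at edges."""
    i1 = min(i + 1, q.shape[0] - 1)
    j1 = min(j + 1, q.shape[1] - 1)
    q00, q10, q01, q11 = q[i, j], q[i1, j], q[i, j1], q[i1, j1]
    fx = np.linspace(0.0, 1.0, REFINE)[:, None]
    fy = np.linspace(0.0, 1.0, REFINE)[None, :]
    return (q00 * (1 - fx) * (1 - fy) + q10 * fx * (1 - fy)
            + q01 * (1 - fx) * fy + q11 * fx * fy)

def coarsen(sub_box):
    """Replace the parent coarse value by the average over its sub-box."""
    return sub_box.mean()
```

After the refined sub-boxes have been advanced with the accordingly smaller time step (not sketched here), `coarsen` writes their averages back to the parent mesh, exactly as in the cycle described above.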
A mirroring boundary condition is used at $y=0$ for the left half of the box: $\rho$, $u_{x}$, $u_{z}$, $B_{y}$, $B_{z}$ and $U$ are symmetric while $u_{y}$ and $B_{x}$ are anti-symmetric. For the upper and right sides, free boundary conditions are used: all quantities satisfy the von Neumann prescription $\partial / \partial \vec{n} = 0$ except for the normal magnetic field $B_{n}$ and the total energy density $U$. $B_{n}$ and $U$ are used to fulfill $\nabla \cdot \vec{B} = 0$. At the bottom, a symmetric boundary condition ($Q(-y)=Q(y)$) is used for $\rho$, $B_{x}$, $B_{z}$, $U$, and the anti-symmetric relation $Q(-y)=-Q(y)$ is assumed for $B_{y}$. The plasma is kept static ($\vec{u} = 0$) at the bottom. A generalized Harris-type current sheet is chosen as the initial state of the AMR MHD simulation (\citealp{Bartaetal2010, Bartaetal2011}): \begin{align} \nonumber \vec{A}(x,y,z; t=0) &= -B_{x0} \ln \Bigg[\exp \Bigg(\displaystyle\frac{y}{\omega_{cs}(x)}\Bigg) + \exp \Bigg(-\displaystyle\frac{y}{\omega_{cs}(x)}\Bigg) \Bigg] \hat{z} \\ \nonumber B_{z}(x,y,z; t=0) &= B_{z0} \\ \rho(x,y,z; t=0) &= \rho_{0} \exp \Bigg(-\displaystyle\frac{x}{L_{G}}\Bigg) \label{InitialCS} \end{align} where $\omega_{cs}(x)$ (Eq.\eqref{width}) gives the characteristic width of the initial current sheet at different heights and $L_{G}$ =120 $Mm$ is the scale height for a fully ionized hydrogen plasma: \begin{align} \omega_{cs}(x)=\displaystyle\frac{d \cdot x^{2}+x+x_{0}}{x+x_{0}} \label{width} \end{align} and $B_{x0}$, $B_{z0}$, $\rho_{0}$, $d$, $x_{0}$ are normalized quantities: $B_{z0} = 0.2$, $\rho_{0} = 1.0$, $B_{0} = \sqrt{ B_{x0}^{2} + B_{z0}^{2}} =1.0$, $d =0.003$ and $x_{0} = 20.0$. The x and y components of the magnetic field ($B_{x}$, $B_{y}$) are obtained from the magnetic vector potential $\vec{A}$ as $\vec{B} = \nabla \times \vec{A}$.
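With $\vec{A}=A_z\hat{z}$, the curl reduces to $B_x = \partial A_z/\partial y$ and $B_y = -\partial A_z/\partial x$. A small numerical sketch (not the simulation code; the grid sizes below are illustrative) makes the Harris-sheet profile of the initial state explicit:

```python
import numpy as np

# Normalized parameters from the initial state above; B_x0 follows from
# B_0 = sqrt(B_x0^2 + B_z0^2) = 1.0 with B_z0 = 0.2.
B_z0 = 0.2
B_x0 = np.sqrt(1.0 - B_z0**2)
d, x0 = 0.003, 20.0

def w_cs(x):
    """Characteristic current sheet width, Eq. (width)."""
    return (d * x**2 + x + x0) / (x + x0)

def in_plane_field(x, y):
    """B_x = dA_z/dy, B_y = -dA_z/dx via central differences (np.gradient)."""
    X, Y = np.meshgrid(x, y, indexing="ij")
    A_z = -B_x0 * np.log(np.exp(Y / w_cs(X)) + np.exp(-Y / w_cs(X)))
    dA_dx, dA_dy = np.gradient(A_z, x, y)
    return dA_dy, -dA_dx
```

Across $y=0$ the resulting $B_x$ reverses sign, as expected for a Harris-type sheet, while the weak $B_y$ stems only from the $x$-dependence of $\omega_{cs}$.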
Note that the magnetic field strength slightly decreases with height $x$ via $\omega_{cs}(x)$, corresponding to the magnetic field in the solar corona balancing the gravitational force (Eq.\eqref{MHDEquations4}). The initial magnetic field state is displayed in the left panel of Fig.\ref{wholeView}. The compressible, resistive MHD equations (Eqs.\eqref{MHDEquations1} to \eqref{MHDEquations4}) are solved to describe the evolution of the plasma and magnetic fields (e.g., \citealp{Priest1984}): \begin{align} \displaystyle\frac{\partial \rho}{\partial t} + \nabla \cdot (\rho \vec{u}) &= 0 \label{MHDEquations1} \\ \rho \displaystyle\frac{\partial \vec{u}}{\partial t} + \rho (\vec{u} \cdot \nabla) \vec{u} &= -\nabla p + \vec{J} \times \vec{B} + \rho \vec{g} \label{MHDEquations2} \\ \displaystyle\frac{\partial \vec{B}}{\partial t} &= - \nabla \times \vec{E} = \nabla \times (\vec{u} \times \vec{B} - \eta \vec{J}) \label{MHDEquations3} \\ \displaystyle\frac{\partial U}{\partial t} + \nabla \cdot \vec{S} &= \rho \vec{u} \cdot \vec{g} \label{MHDEquations4} \end{align} where $\rho$ is the plasma density, $\vec{u}$ the plasma velocity, $\vec{B}$ the magnetic field strength, $\vec{E}$ the electric field strength, $\eta$ the resistivity, $\vec{g}$ the gravitational acceleration at the photospheric level and $p$ the plasma pressure. The current density $\vec{J}$, the total energy density $U$ and the energy flux $\vec{S}$ are defined as: \begin{align} \vec{J} &= \displaystyle\frac{\nabla \times \vec{B}}{\mu_{0}} \\ U &= \displaystyle\frac{p}{\gamma - 1 } + \displaystyle\frac{1}{2} \rho u^{2} + \displaystyle\frac{B^{2}}{2 \mu_{0}} \\ \vec{S} &= \Bigg(U + p + \displaystyle\frac{B^{2}}{2 \mu_{0}} \Bigg) \vec{u} - \displaystyle\frac{\vec{u} \cdot \vec{B}}{\mu_{0}} \vec{B} + \displaystyle\frac{\eta}{\mu_{0}} \vec{J} \times \vec{B} \label{EquationsS} \end{align} where $\gamma = \frac{5}{3}$ is the adiabatic index and $\mu_{0}$ is the vacuum magnetic permeability.
The anomalous resistivity $\eta$ in Ohm's law (Eq.\eqref{MHDEquations3}) and in the energy flux $\vec{S}$ (Eq.\eqref{EquationsS}) is chosen ad hoc to describe the sub-grid-scale dissipation effects of microphysical (kinetic) processes. It is switched on depending on the strength of the local current-carrier drift velocity $v_{CCD}=|J|/(e \rho)$ compared to a critical threshold velocity $v_{cr}$ (e.g., \citealp{Bartaetal2011}): \begin{align} \eta(\vec{r}, t)= \left \{ \begin{array}{cc} 0 & |v_{CCD}| \leq v_{cr} \\ C\displaystyle\frac{v_{CCD}(\vec{r}, t)-v_{cr}}{v_0} & |v_{CCD}| > v_{cr} \end{array} \right. \label{Resistivity} \end{align} \citealp{Buechner&Elkina2005, Buechner&Elkina2006} and \citealp{Karlicky&Barta2008} have confirmed this behaviour by means of Vlasov and PIC-code numerical simulations and derived both the critical velocity $v_{cr}$ and the coefficient $C$ in Eq.\eqref{Resistivity}. \begin{figure*}[htbp] \centering \mbox{ \includegraphics[width=0.16\textwidth,viewport=193 37 435 483, clip=true]{1_Az_Fine_0.png}
\includegraphics[width=0.16\textwidth,viewport=193 37 435 483, clip=true]{1_Az_Fine_80.png} \includegraphics[width=0.16\textwidth,viewport=193 37 435 483, clip=true]{1_Az_Fine_200.png} \includegraphics[width=0.16\textwidth,viewport=193 37 435 483, clip=true]{1_Az_Fine_300.png} \includegraphics[width=0.16\textwidth,viewport=193 37 435 483, clip=true]{1_Az_Fine_360.png} \includegraphics[width=0.16\textwidth,viewport=193 37 435 483, clip=true]{1_Az_Fine_520.png} } \caption{Evolution of the in-plane magnetic field components of cascading magnetic reconnection in the CME-trailing current sheet obtained by the high resolution 2.5D AMR MHD simulation.
Panels from \textit{left} to \textit{right} show the initial state ($t=0$ $t_{0}$), primary plasmoids ($t=80$ $t_{0}$), secondary plasmoids ($t=200$ $t_{0}$), the third stage of plasmoid formation ($t=300$ $t_{0}$), the mature state of the large-scale magnetic islands ($t=360$ $t_{0}$) and the final state ($t=520$ $t_{0}$), in which the erupted and disconnected magnetic field lines imply the appearance of a CME. } \label{wholeView} \end{figure*} The AMR MHD simulation (Eqs.\eqref{MHDEquations1} to \eqref{MHDEquations4}) is carried out with normalized parameters: the normalizing length scale (the half width of the current sheet at $x=0$) is chosen to be $L_{0} = 6.0 \times 10^{5}$ $m$, the normalizing magnetic field is $B_{0} = 4.0 \times 10^{-2}$ $T$, the normalizing number density is $n_{0}=1.25 \times 10^{16}$ $m^{-3}$, and $q_{0} = |e|$ is taken as the normalizing charge. Other scaling parameters can be derived from these: $V_{0}=B_{0}/\sqrt{\mu_{0} n_{0} m_{0}} = 7.80 \times 10^{6}$ $m/s$ (where $m_{0}=m_{p}$ is the proton mass and $V_{0}$ is the asymptotic value of the Alfv\'{e}n velocity at $y \rightarrow \infty$, $x=0$ and $t=0$). The time is normalized by the Alfv\'{e}n transit time $t_{0} = L_{0}/V_{0} = 7.69 \times 10^{-2}$ $s$. Furthermore, $E_{0}=V_{0} B_{0} = 3.12 \times 10^{5}$ $V/m$ and $\eta_{0}=\mu_{0} L_{0} V_{0} = 5.88 \times 10^{6}$ $\Omega \cdot m$. The asymptotic plasma beta parameter is $\beta=0.1$ at ($y \rightarrow \infty$, $x=0$). In the coarse resolution, the mesh sizes are $\Delta x=\Delta y=0.045$ $L_{0}$. Hence the whole simulation domain extends over $(0, 288) \times (-36, 36)$ $L_{0}^{2}$ in the x-y plane. Fig.\ref{wholeView} depicts the evolution of the in-plane magnetic fields. The simulation is performed over 520 $t_{0}$, by which time a CME is ejected through the upper boundary of the box (last panel of Fig.\ref{wholeView}).
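The switch-on of the anomalous resistivity according to Eq.\eqref{Resistivity} amounts to a simple threshold function. A sketch in normalized units (the values of $v_{cr}$ and $C$ used below are purely illustrative, not those derived in the cited kinetic simulations):

```python
def anomalous_resistivity(v_ccd, v_cr, C, v0=1.0):
    """Eq. (Resistivity): zero below the critical current-carrier drift
    velocity, linear in the excess drift above it (normalized units).
    abs() keeps eta non-negative for either sign of the drift, matching
    the |v_CCD| threshold condition."""
    if abs(v_ccd) <= v_cr:
        return 0.0
    return C * (abs(v_ccd) - v_cr) / v0
```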
We select the already fragmented current sheet at $t=360$ $t_{0}$ as the background electromagnetic field, since at this time step the refined smaller-scale magnetic structures carry the most information and no additional magnetic islands are generated afterwards. No anomalous resistivity is switched on before $t=420$ $t_{0}$. In order to relate the electron acceleration to the resolution of the magnetic structures, we compare the acceleration in the coarsely and finely resolved magnetic fields. Fig.\ref{MyDomain_Zoom} compares the magnetic structures obtained by the coarse (upper panels) and higher (lower panels) resolutions at $t=360$ $t_{0}$. From left to right, increasing zoom levels show the details of the magnetic structures. The bottom-right panel depicts the details of the smaller-scale magnetic structures obtained by the higher resolution. \begin{figure}[htbp] \centering \mbox{ \includegraphics[width=0.45\textwidth]{2_Az_Together.pdf} } \caption{Magnetic field lines at time $t=360$ $t_{0}$. \textit{Top row}: coarsely resolved magnetic structures. \textit{Bottom row}: higher resolution simulation. \textit{Left} to \textit{right}: increasing zoom levels. } \label{MyDomain_Zoom} \end{figure} \section{Methods Used} \label{Method} \subsection{Test Particle Calculations} \label{Method1} If the gyroradius ($r_{gy} = \displaystyle\frac{m v}{q B}$) and the gyroperiod ($\propto 1/\omega_{gy}= \displaystyle\frac{2 \pi m}{q B}$) of a particle are much smaller than the length scale of the transverse gradients ($r_{\perp}$) and the characteristic oscillation periods ($\propto 1/\omega_{os}$) of the ambient electromagnetic fields (i.e., $ r_{gy} / r_{\perp} \ll 1$ and $\omega_{gy} / \omega_{os} \gg 1 $), the guiding center approximation is valid. The motion of a magnetized charged particle can then be decomposed into a drift of its guiding center and a gyration around this center (\citealp{Northrop1963}).
The minimum magnetic field strength obtained by the AMR MHD simulations is $0.19$ $B_{0}$; even for electrons energized to 10 $MeV$, the corresponding gyroradius is only $4.4$ $m$. The grid size even of the refined mesh ($\Delta x = \Delta y = 0.0045$ $L_{0}$ $= 2.7$ km) is much larger. Moreover, in the normalized Eqs.\eqref{GC1} to \eqref{GC5}, with the normalization values given in Sect.\ref{MHD-Model}, the coefficient $\displaystyle\frac{m_{0} V_{0}}{q_{0} B_{0} L_{0}}$ and its reciprocal arise in Eqs.\eqref{GC2} and \eqref{GC3}. $\displaystyle\frac{m_{0} V_{0}}{q_{0} B_{0} L_{0}}$ corresponds to the ratio of the particle gyroradius $\displaystyle\frac{m_{0} V_{0}}{q_{0} B_{0}}$ to the characteristic length $L_{0}$, or of the particle gyro-period $\displaystyle\frac{m_{0}}{q_{0} B_{0}}$ to the scaling time $t_{0}=L_{0}/V_{0}$. Provided that $\displaystyle\frac{m_{0} V_{0}}{q_{0} B_{0} L_{0}}$ is much smaller than unity, the guiding center approximation can be applied. In our study it is only of the order of $10^{-6}$. Hence, we use the guiding center approximation to trace each electron.
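The order-of-magnitude estimate quoted above can be reproduced directly from the normalization constants of Sect.\ref{MHD-Model} (a quick numerical cross-check, not part of the simulation code):

```python
# Normalization constants of the MHD model in SI units
m_0 = 1.6726e-27   # proton mass m_p (kg)
q_0 = 1.6022e-19   # elementary charge |e| (C)
B_0 = 4.0e-2       # normalizing magnetic field (T)
L_0 = 6.0e5        # normalizing length scale (m)
V_0 = 7.80e6       # normalizing Alfven velocity (m/s)

# ratio of the gyro-scale m_0 V_0 / (q_0 B_0) to the characteristic length L_0
epsilon = (m_0 * V_0 / (q_0 * B_0)) / L_0
print(epsilon)  # of the order of 10^-6, as stated in the text
```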
Although only 0.01\% and 0.45\% of the electrons (for the two magnetic field resolutions) are accelerated up to energies $> 100$ $keV$, for high precision a relativistic guiding center approximation is used: \begin{align} \displaystyle\frac{d \vec{R}}{dt} = \vec{v_{D}} &+ \displaystyle\frac{(\gamma v_{\parallel})}{\gamma} \vec{b} \label{GC1} \\ \vec{v_{D}} = \vec{v_{E}} &+ \displaystyle\frac{m}{q} \displaystyle\frac{(\gamma v_{\parallel})^{2}}{\gamma k^{2} B} [\vec{b} \times (\vec{b} \cdot \nabla) \vec{b}] + \displaystyle\frac{m}{q} \displaystyle\frac{\mu } {\gamma k^{2} B} [\vec{b} \times (\nabla(kB))] \nonumber\\& + \displaystyle\frac{m}{q} \displaystyle\frac{(\gamma v_{\parallel})} {\gamma k^{2} B} [\vec{b} \times (\vec{b} \cdot \nabla) \vec{v_{E}}] + \displaystyle\frac{m}{q} \displaystyle\frac{(\gamma v_{\parallel})} {\gamma k^{2} B} [\vec{b} \times (\vec{v_{E}} \cdot \nabla) \vec{b}] \nonumber\\& + \displaystyle\frac{m}{q} \displaystyle\frac{\gamma} {\gamma k^{2} B} [\vec{b} \times (\vec{v_{E}} \cdot \nabla) \vec{v_{E}}] + \displaystyle\frac{1}{\gamma c^{2}} \displaystyle\frac{E_{\parallel}} {\gamma k^{2} B} (\gamma v_{\parallel}) [\vec{b} \times \vec{v_{E}}] \label{GC2}\\ \displaystyle\frac{d (\gamma v_{\parallel})}{dt} =& \displaystyle\frac{q}{m} \vec{E} \cdot \vec{b} - \displaystyle\frac{\mu}{\gamma}[\vec{b} \cdot \nabla(kB)] \nonumber\\& + (\gamma v_{\parallel}) \vec{v_{E}} \cdot [ (\vec{b} \cdot \nabla) \vec{b} ] + \gamma \vec{v_{E}} \cdot [ (\vec{v_{E}} \cdot \nabla) \vec{b}] \label{GC3}\\ \gamma =& \sqrt{\displaystyle\frac{c^{2}+(\gamma v_{\parallel})^{2}+ 2 \mu B}{ c^{2}-v_{D}^{2}}} \label{GC4}\\ \displaystyle\frac{d \mu}{dt} =& 0 \label{GC5} \end{align} where $\vec{R}$, $\vec{v_{D}}$, $v_{\parallel}$, $\gamma$ and $\vec{b}$ are the guiding center position vector, the perpendicular drift velocity, the velocity along the magnetic field, the relativistic factor ($\displaystyle\frac{c}{\sqrt{c^{2}-v^{2}}}$) and the magnetic field direction unit vector $\vec{b} = \displaystyle\frac{\vec{B}}{B}$,
respectively. In the expression for the drift velocity $\vec{v_{D}}$ in Eq.\eqref{GC2}, the term $\vec{v_{E}}$ corresponds to the local $\vec{E} \times \vec{B}$ drift velocity $\vec{v_{E}} = \displaystyle\frac{\vec{E} \times \vec{B}}{B^{2}}$. The other terms are the magnetic curvature drift velocity and the magnetic gradient drift velocity as well as higher order drifts. The factor $k = \sqrt{1-\displaystyle\frac{ \vec{v_{E}}^{2} }{c^{2}}}$ relates the electromagnetic field values to the reference frame moving with the velocity $\vec{v_{E}}$. Finally, $\mu = \displaystyle\frac{(\gamma v_{\perp})^{2}}{2B}$ is the relativistic magnetic moment per unit mass, where $v_{\perp}$ is the particle gyration velocity perpendicular to $\vec{B}$. The electron energy is expressed using the relativistic $\gamma$-factor as $E=(\gamma-1)mc^{2}$. The set of Eqs.\eqref{GC1} to \eqref{GC5} is solved utilizing a fourth-order Runge-Kutta scheme. The field values between the grid points are obtained along the electron trajectories by 2D linear interpolation. $4.752\times10^{5}$ test electrons are initially uniformly distributed along the current sheet ($0 < x < 108$ $L_{0}$, $y =0$) at 2400 points, with 22 different initial velocities from 0.0 to $21.0$ $v_{th}$ and 9 different initial pitch angles from 0 to $\pi$. Here $v_{th}$ is the electron thermal velocity for a typical coronal temperature of $10^6$ $K$: $0.76 V_{0} \cong 6\times 10^{3}$ km/s. Every electron is traced for up to $10 t_{0}$ ($\sim 0.769$ $s$) or until it leaves the simulation domain, whichever happens first. Note that this time is shorter than the time scale of essential magnetic field changes in the MHD simulations. \subsection{Spectrum / Distribution Function of Accelerated Electrons} \label{Method3} To obtain the energetic electron distribution function, we use the fact that the solar corona is practically collisionless.
Hence, according to Liouville's theorem, the particle distribution function remains constant along the particle trajectory: $f(E, A, \vec{r}, t) = f(E_{0}, A_{0}, \vec{r_{0}}, t_{0})$. This allows us to calculate the electron distribution function $f(E, A, \vec{r}, t)$ at the places where HXRs are expected to be generated by Bremsstrahlung of the energetic electrons. \subsection{HXR Emission} \label{Method2} Knowing the local plasma number density and the electron distribution function, the hard X-ray emissivity $I(\epsilon)$ integrated over all contributing electrons can be calculated in the framework of the thin target model (\citealp{Brown1971}) as: \begin{align} I(\epsilon) = \sum^{\infty}_{E>\epsilon} n(\vec{r}) v(\vec{r}) \sigma_{B}(\epsilon, E) f(E, A, \vec{r}, t) \label{HXREquation} \end{align} Here $E$, $A$, $\vec{r}$ and $v(\vec{r})$ are the energy, pitch angle, position and velocity of an electron at the time $t$, $n(\vec{r})$ is the local plasma number density, $\epsilon$ is the radiated photon energy, $f(E, A, \vec{r}, t)$ is the electron distribution function at the place and time of interest, and $\sigma_{B}$ is the cross section of the Bremsstrahlung process. As a simple approximation, we take the Bethe-Heitler formula for the Bremsstrahlung cross section (\citealp{1934RSPSA.146...83B, Brown1971}): \begin{align} \sigma_{B}(\epsilon, E) \propto \displaystyle\frac{1}{\epsilon E} \ln \Bigg[\displaystyle\frac{1+\sqrt{1-\displaystyle\frac{\epsilon}{E}}} {1-\sqrt{1-\displaystyle\frac{\epsilon}{E}}}\Bigg] \label{Sigma} \end{align} Note that the Bethe-Heitler formula applies only to particle energies below $100$ $keV$. In this investigation, for both magnetic field resolutions, more than $99\%$ of the electrons are accelerated to energies below $100$ $keV$, so the Bethe-Heitler formula is still highly accurate here.
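The thin-target sum of Eq.\eqref{HXREquation} together with the Bethe-Heitler cross section of Eq.\eqref{Sigma} can be evaluated directly over the traced electrons. A schematic sketch in arbitrary units (the proportionality constant of $\sigma_B$ is omitted and the input arrays are illustrative):

```python
import numpy as np

def sigma_bh(eps, E):
    """Bethe-Heitler cross section, Eq. (Sigma), up to a constant factor;
    requires photon energy eps < electron energy E."""
    s = np.sqrt(1.0 - eps / E)
    return np.log((1.0 + s) / (1.0 - s)) / (eps * E)

def thin_target_emissivity(eps, E, n, v, f):
    """Eq. (HXREquation): sum of n * v * sigma_B * f over all electrons
    with energy E > eps (arbitrary units)."""
    E, n, v, f = map(np.asarray, (E, n, v, f))
    mask = E > eps
    return float(np.sum(n[mask] * v[mask] * sigma_bh(eps, E[mask]) * f[mask]))
```

Evaluating this sum for a range of photon energies $\epsilon$ yields the synthetic HXR spectrum that can be compared with observations.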
\section{Results} \label{Results} Depending on the locations of the simulated electrons at 10 $t_{0}$, three groups of electrons can be identified: those trapped in the magnetic islands, those precipitating towards the chromosphere, and those ejected into interplanetary space. No electrons escape through the left and right sides of the simulation domain. We concentrate our analysis on the trapped (Sect.\ref{0_Trapped}) and precipitating (Sect.\ref{0_Fled}) electrons. \subsection{Trapped Electrons} \label{0_Trapped} More than 80\% of the simulated electrons are still trapped in the magnetic islands along the current sheet at $10$ $t_{0}$. This highly dynamical and complex magnetic field structure provides a very effective trapping mechanism for the energetic electrons of coronal HXR sources. \subsubsection{Acceleration dependence on initial conditions and magnetic field resolution} The acceleration of electrons in the convective (or induced) electric field $\vec{E}=-\vec{u}\times\vec{B}$ is sensitive to the initial position, velocity and pitch angle of the injected electrons as well as to the fine structure of the magnetic field. The upper panels of Fig.\ref{0_0_Trapped} depict the dependence of the energy gain on the initial conditions and the magnetic field resolution. The lower panels show the corresponding projected results. In general, the electron acceleration is more efficient in magnetic fields with better resolved small-scale structures, because larger magnetic gradients and curvatures become accessible. The maximum final kinetic energy of the trapped electrons is at most of the order of $100$ $keV$ in the coarsely resolved magnetic fields, but it can reach up to $470$ $keV$ if the smaller-scale magnetic structures are taken into account, corresponding to maximum energy gains of $53$ $keV$ and $420$ $keV$ for the coarsely and finely resolved fields, respectively.
\begin{figure}[htbp] \centering \mbox{ \includegraphics[width=0.25\textwidth]{3_0_Coarse_X0A0Ee_Trapped.png} %
\includegraphics[width=0.25\textwidth]{3_0_Fine_X0A0Ee_Trapped.png} } \\ \mbox{ \includegraphics[width=0.50\textwidth]{3_Trapped.png} } \\ \mbox{ \includegraphics[width=0.50\textwidth]{3_0_E0A0X0Ee_C+F_Trapped.png} } \caption{ Dependence of the electron kinetic energy gain ($\Delta E$) on the initial pitch angle ($A_{0}$), velocity ($v_{0}$) and position ($x_{0}$) for the two differently resolved magnetic field structures. Each point represents one electron. The \textit{upper} panels show the 3D result with the initial velocity color-coded. The \textit{lower} panels show the corresponding projected results and the averaged value ($\star$ lines) and standard deviation of the energy gain (\ding{73} lines) in the initial energy and pitch angle spaces. $B_{y}$ along the current sheet center is depicted in the bottom-right panel. The \textit{red} and \textit{blue} colors are used to distinguish the results in the coarsely and finely resolved magnetic fields, respectively. Note that three different scales are used on the y-axis for $\Delta E <0$, $0<\Delta E<3$ $keV$ and $\Delta E > 3$ $keV$. } \label{0_0_Trapped} \end{figure} From left to right, the bottom panels of Fig.\ref{0_0_Trapped} depict the dependence of the acceleration efficiency on the electron initial energy, pitch angle and position, respectively. The kinetic energy gain increases with the initial energy, which is consistent with the results of \citealp{Guoetal2010}. It is interesting to note that both the mean and the standard deviation of the energy gain are roughly proportional to the initial energy, and that the acceleration efficiency in the finely resolved case is about 3 times higher than in the coarse one. The dependence of the electron energy change on the initial pitch angle and position, however, is more or less chaotic due to the complex field structures.
In contrast to \citealp{Karlickyetal2004}, where the betatron process dominates, here the most energetic electron is no longer associated with an initial pitch angle of $90^{\circ}$. The acceleration symmetry with respect to the $90^{\circ}$ pitch angle is also broken when the magnetic fields are better resolved. The lower-right panel of Fig.\ref{0_0_Trapped} also shows the magnetic field component $B_{y}$ along the current sheet center $y=0$, which can be used to identify the magnetic X- and O-points with $B_{y}=0$. The most efficient acceleration appears to be associated with electrons injected close to the X-points, which feature larger magnetic gradients and smaller magnetic curvature radii. \subsubsection{Energy gain} \label{Acceleration} The guiding center approach decomposes the particle energy into components parallel and perpendicular to the magnetic field and the part associated with the guiding center drift in the direction perpendicular to the magnetic field. The maximum drift velocity $v_{D}$ (Eq.\eqref{GC2}) is 1.40 $v_{th}$ and 0.97 $v_{th}$ in the coarsely and finely resolved magnetic fields, respectively, which is negligible compared with the other two components.
Considering that the anomalous resistivity is not switched on ($\vec{E} \cdot \vec{b}=0$) and that $\vec{v_{D}} \cong \vec{v_{E}}$, $k \cong 1$, Eqs.\eqref{GC3}, \eqref{GC5} and $\mu = \displaystyle\frac{(\gamma v_{\perp})^{2}}{2B}$ give: \begin{align} \displaystyle\frac{1}{2}\displaystyle\frac{d (\gamma v_{\parallel})^{2}}{dt} =& - \mu v_{\parallel}[\vec{b} \cdot \nabla B] + (\gamma v_{\parallel})^{2} \vec{v_{E}} \cdot [ (\vec{b} \cdot \nabla) \vec{b} ] \label{Parallel_Gain}\\ \displaystyle\frac{1}{2}\displaystyle\frac{d (\gamma v_{\perp})^{2}}{dt} =& \displaystyle\frac{d \mu B}{dt} = \mu \displaystyle\frac{d B}{dt} \nonumber\\ =& \mu v_{\parallel}[\vec{b} \cdot \nabla B]+ \mu \vec{v_{E}} \cdot \nabla B \label{Perpendicular_Gain} \end{align} So the energy evolution of an electron in the guiding-center limit is given by: \begin{align} \displaystyle\frac{d E_{k} }{dt} \propto & \displaystyle\frac{d [(\gamma v_{\parallel})^{2} +(\gamma v_{\perp})^{2}]}{dt} \nonumber\\ = & \mu \vec{v_{E}} \cdot \nabla B + (\gamma v_{\parallel})^{2} \vec{v_{E}} \cdot [ (\vec{b} \cdot \nabla) \vec{b} ] \label{Total_Gain} \end{align} Fig.\ref{Acc_Factors} exhibits the spatial distributions of the acceleration rates in Eqs.\eqref{Parallel_Gain} and \eqref{Perpendicular_Gain}: $(\vec{b} \cdot \nabla B)/ (2B)$ (middle panel), $(\vec{v_{E}} \cdot \nabla B )/ (2B)$ (left panel) and $\vec{v_{E}} \cdot [(\vec{b} \cdot \nabla) \vec{b}]$ (right panel) in the coarse calculation domain (corresponding to the top-middle panel of Fig.\ref{MyDomain_Zoom}); note that these distributions are quite similar for the coarsely and finely resolved magnetic fields. Eqs.\eqref{Parallel_Gain} and \eqref{Perpendicular_Gain} show that the parallel magnetic gradient ($\vec{b} \cdot \nabla B$) can change both the parallel and perpendicular energies of the electrons, but these contributions cancel each other in the total energy evolution \eqref{Total_Gain}.
The overall energy change is dominated by the perpendicular magnetic gradient term ($\mu \vec{v_{E}} \cdot \nabla B$) and the curvature term $(\gamma v_{\parallel})^{2} \vec{v_{E}} \cdot [ (\vec{b} \cdot \nabla) \vec{b} ]$, which are proportional to the electron energy through $\mu$ and $(v_{\parallel})^{2}$, respectively. The increase of the electron acceleration with increasing initial energy is therefore expected (see the lower-left panel of Fig.\ref{0_0_Trapped}). (In addition, trapped electrons with larger initial velocities bounce and pass through the acceleration regions more frequently and thus gain more energy.) \begin{figure}[htbp] \centering \mbox{ \includegraphics[width=0.50\textwidth]{4_graB_360_White.png} } \caption{Spatial distributions of the perpendicular magnetic gradient (\textit{left} panel), the parallel magnetic gradient (\textit{middle} panel) and the perpendicular curvature (\textit{right} panel) terms of Eqs.\eqref{Parallel_Gain} and \eqref{Perpendicular_Gain} for the coarsely resolved magnetic fields. } \label{Acc_Factors} \end{figure} Also due to the combined action of the magnetic gradient and curvature in Eq.\eqref{Total_Gain}, the favourable initial pitch angles (shown in the lower-middle panel of Fig.\ref{0_0_Trapped}) are not 0, $180^{\circ}$ or $90^{\circ}$, which would correspond to acceleration dominated only by the magnetic curvature ($(\gamma v_{\parallel})^{2} \vec{v_{E}} \cdot [ (\vec{b} \cdot \nabla) \vec{b}]$) or only by the gradient ($\mu \vec{v_{E}} \cdot \nabla B$), respectively. The absence of favourable initial pitch angles at 0, $180^{\circ}$ or $90^{\circ}$ indicates that the perpendicular magnetic gradient and curvature acceleration efficiencies are comparable with each other in this complex magnetic field structure, no matter which magnetic field resolution is used. These favourable initial pitch angles also change with the magnetic field resolution.
The acceleration asymmetry around the initial pitch angle $90^{\circ}$ (the bottom-middle panel of Fig.\ref{0_0_Trapped}) is due to the acceleration factors ($\mu \vec{v_{E}} \cdot \nabla B$) and $(\gamma v_{\parallel})^{2} \vec{v_{E}} \cdot [ (\vec{b} \cdot \nabla) \vec{b} ]$ being non-symmetric around the current sheet center, caused by the third dimension of the electromagnetic fields in the 2.5D symmetric current sheet geometry. For the coarsely resolved case, this symmetry is only weakly broken. \begin{figure}[htbp] \centering \mbox{ \includegraphics[width=0.45\textwidth]{5_V_pp_Trapped_Per_Par.png} } \caption{Parallel ($\Delta E_{\parallel}$, \textit{top} row) and perpendicular ($\Delta E_{\perp}$, \textit{bottom} row) acceleration symmetry about the initial pitch angle $90^{\circ}$. \textit{Left} panels: the coarsely resolved magnetic fields. \textit{Right} panels: the finely resolved ones. Every trapped electron (shown by one \textit{'$\ast$' point}) is color-coded by its initial velocity; the corresponding color code is shown in the middle. } \label{Symmetry} \end{figure} Fig.\ref{Symmetry} shows the electron acceleration symmetry around the initial pitch angle $90^{\circ}$ for the parallel and perpendicular energy gain components. These details show that the parallel and perpendicular acceleration are not exactly symmetric around the initial pitch angle $90^{\circ}$ in either the coarsely or the finely resolved magnetic fields. The symmetry is better preserved in the coarsely resolved fields due to smoothing effects (see the bottom-middle panel of Fig.\ref{0_0_Trapped}). The term $\mu v_{\parallel} (\vec{b} \cdot \nabla B)$ in Eqs.\eqref{Parallel_Gain} and \eqref{Perpendicular_Gain}, however, is not influenced by the third dimension of the electromagnetic fields; hence it is initially symmetric around the current sheet center.
In other words, the non-symmetric acceleration in the parallel and perpendicular directions (see the top and bottom panels of Fig.\ref{Symmetry}, respectively) is due to the terms ($\mu \vec{v_{E}} \cdot \nabla B$) and $(\gamma v_{\parallel})^{2} \vec{v_{E}} \cdot [(\vec{b} \cdot \nabla)\vec{b}]$, respectively, being non-symmetric around the current sheet center. For the coarsely resolved case, the acceleration is dominated by the perpendicular component; the reverse is true for the finely resolved case. Meanwhile, non-symmetric parallel acceleration (i.e., non-symmetric $|v_{\parallel}|$) can enhance the asymmetry of both the parallel and perpendicular acceleration (see Eqs.\eqref{Parallel_Gain} and \eqref{Perpendicular_Gain}). \begin{figure}[htbp] \centering \mbox{ \includegraphics[width=0.45\textwidth]{6_V_0_E_Trapped_End.png} } \caption{Comparison between the final parallel ($E_{\parallel}$) and perpendicular ($E_{\perp}$) kinetic energies of trapped electrons in the coarsely (\textit{left} panel) and finely (\textit{right} panel) resolved magnetic fields. Each electron is color-coded by its total kinetic energy change ($\Delta E$). The \textit{black lines} in each panel correspond to '$E_{\perp}=E_{\parallel}$' and '$E=E_{\perp}+E_{\parallel}=50$ $keV$'. } \label{Parallel_Total} \end{figure} Fig.\ref{Parallel_Total} shows the distribution of the trapped electron acceleration in the ($E_{\parallel}$, $E_{\perp}$) plane. The distribution of the strongly accelerated electrons is highly anisotropic: it is dominated by the perpendicular and parallel kinetic energy components for the coarsely and finely resolved magnetic fields, respectively. The weakly accelerated electrons, in contrast, roughly keep their initial isotropic distribution. Electrons initially moving only along magnetic field lines (with an initial pitch angle of 0 or $180^{\circ}$) experience no acceleration in the perpendicular direction.
In this case, however, the parallel acceleration is slightly stronger in the coarsely resolved magnetic fields (maximum final energy 81 $keV$) than in the finely resolved ones (maximum final energy 79 $keV$). \subsubsection{Characteristic trajectories} To better understand the details of the electron acceleration processes, the first row of Fig.\ref{3Examples_Coarse} and the second row of Fig.\ref{3Examples_Fine} depict the trajectory and energy evolution of the most energetic electrons in the coarsely ($v_{0}=21$ $v_{th}$, $A_{0}=5/8\pi$, $x_{0}=97.56$ $L_{0}$) and finely ($v_{0}=21$ $v_{th}$, $A_{0}=3/4\pi$, $x_{0}=41.58$ $L_{0}$) resolved magnetic fields, respectively; all panels in Fig.\ref{3Examples_Fine} have the same initial conditions as the corresponding panels in Fig.\ref{3Examples_Coarse}. \begin{figure}[htbp] \centering \mbox{ \includegraphics[width=0.45\textwidth]{7_X_Y_Z_E_Coarse.png} } \caption{Trajectory and energy evolution for three characteristic trapped electrons in the coarsely resolved magnetic fields. Panels in the \textit{first} and \textit{second} columns show the electron trajectory in the $xy$ and $yz$ planes, color-coded by the total kinetic energy. Panels in the \textit{third} column show the electron total kinetic (\textit{black} line), parallel (\textit{red} line) and perpendicular (\textit{blue} line) energy evolution. } \label{3Examples_Coarse} \end{figure} For the coarsely resolved case, the most efficient acceleration is dominated by the increase of the perpendicular energy component, due to the positive perpendicular magnetic gradient $\mu \vec{v_{E}} \cdot \nabla B$ above 95 $L_{0}$ (see the left panel of Fig.\ref{Acc_Factors}). The slightly decreasing parallel energy is due to the negative perpendicular curvature $\vec{v_{E}} \cdot [(\vec{b} \cdot \nabla)\vec{b}]$ there (see the right panel of Fig.\ref{Acc_Factors}).
With the same initial conditions, the corresponding electron in the finely resolved magnetic fields gains more energy (see the first row of Fig.\ref{3Examples_Fine}). For the finely resolved case, the most efficient acceleration occurs in the parallel energy component, with slightly increased perpendicular energy, when the electron is trapped in the magnetic island (around $x=37$ $L_{0}$) and accelerated repeatedly during its circulating motion by the positive perpendicular magnetic curvature $\vec{v_{E}} \cdot [(\vec{b} \cdot \nabla)\vec{b}]$ (located in the thin layer in the central current sheet above $x=40$ $L_{0}$, corresponding to the step-like increase of the displacement in the $z$ direction; see the $yz$-trajectory projection color-coded by the final kinetic energy in the second row of Fig.\ref{3Examples_Fine}) and the gradient $\mu \vec{v_{E}} \cdot \nabla B$ (located around $x=40$ $L_{0}$). As expected, in the coarsely resolved magnetic fields, without the smaller-scale magnetic field structures, the trajectory and acceleration of the corresponding electron change completely: this electron is mirror-trapped and can neither circulate in the magnetic island nor move systematically in the $z$ direction. In the right panel of Fig.\ref{Acc_Factors}, one can see that the largest perpendicular curvature acceleration region is located around $x=85$ $L_{0}$; the most energetic electron, however, is not launched there. This is due to the cancellation between the perpendicular curvature acceleration and deceleration when the electron circulates in the magnetic island at $x \sim 90$ $L_{0}$. The electrons in the third rows of Fig.\ref{3Examples_Coarse} and Fig.\ref{3Examples_Fine} are chosen to reveal the acceleration characteristics around $x=90$ $L_{0}$. These two electrons also have the same initial conditions: $v_{0}=21$ $v_{th}$, $A_{0}=7/8\pi$ and $x_{0}=96.075$ $L_{0}$.
Their $yz$-trajectory projections, color-coded by the kinetic energy, in the third rows of Fig.\ref{3Examples_Coarse} and Fig.\ref{3Examples_Fine} confirm the above discussion: the energy gains of these electrons are mainly due to the parallel acceleration by the perpendicular magnetic curvature. Furthermore, these two electrons have the same trajectory and energy evolution, i.e., the magnetic field resolution has no influence here, since no refined smaller-scale magnetic structures are found along their trajectories between $x=81 - 97$ $L_{0}$. \begin{figure}[htbp] \centering \mbox{ \includegraphics[width=0.45\textwidth]{8_X_Y_Z_E_Fine.png} } \caption{Trajectory and energy evolution of electrons launched with the same initial conditions as the corresponding ones in Fig.\ref{3Examples_Coarse}, but in the finely resolved magnetic fields. } \label{3Examples_Fine} \end{figure} The energy oscillation between the parallel and perpendicular energies in each characteristic electron energy profile (the last column of Fig.\ref{3Examples_Coarse} and Fig.\ref{3Examples_Fine}) is due to the parallel magnetic gradient term $v_{\parallel} (\vec{b} \cdot \nabla B)$ in Eqs.\eqref{Parallel_Gain} and \eqref{Perpendicular_Gain}, acting when the electron passes regions of positive and negative parallel magnetic gradient in turn, or when the electron is mirrored with its parallel velocity alternating between the parallel and anti-parallel directions; both situations can be found in Fig.\ref{3Examples_Coarse} and Fig.\ref{3Examples_Fine}. The magnitude of these oscillations is set by the magnitude of the parallel magnetic gradient $\vec{b} \cdot \nabla B$ along the electron trajectory (see the middle panel of Fig.\ref{Acc_Factors}). \subsubsection{Comparison with observations} More than $60\%$ of the trapped electrons are accelerated ($\Delta E>0$) and more than $50\%$ of them reach kinetic energies larger than 10 $keV$. These energetic electrons can produce HXRs by Bremsstrahlung (note that the HXR range is $10 - 400$ $keV$).
In the framework of the thin-target model (\citealp{Brown1971}) and using the Bethe-Heitler formula for the Bremsstrahlung cross section (\citealp{1934RSPSA.146...83B, Brown1971}, see details in Sec.\ref{Method2}), we derive the HXR spectrum of these energetic trapped electrons in order to compare our results with solar flare HXR observations. Since the initial electron distribution function in the solar atmosphere is not known, we consider three different initial distribution functions: \begin{align} f(E_{0}, A_{0}, \vec{r_{0}}, t_{0}) \propto \left \{ \begin{array}{cc} E_{0}^{0} \texttt{ (or Constant)} \\ E_{0}^{-3} \\ \texttt{Maxwell-Boltzmann} \end{array} \right. \label{Initial_Distribution_Function} \end{align} \begin{figure}[htbp] \centering \mbox{ \includegraphics[width=0.45\textwidth]{9_0_ElectronSpectrum_Trapped.png} } \\ \mbox{ \includegraphics[width=0.45\textwidth]{9_0_HXR_Trapped.png} } \caption{Electron (\textit{top}) and HXR (\textit{bottom}) spectra of energetic trapped electrons in the coarsely (\textit{red} lines) and finely (\textit{blue} lines) resolved magnetic fields with three different initial distribution functions: constant (\textit{solid} lines), power law with index $-3$ (\textit{dash-dot} lines) and Maxwellian at $10^6$ $K$ (\textit{dashed} lines). The spectral indices are computed for the ranges marked by the black dashed lines terminated by plus signs at both ends; their values are shown under each panel: the first for electron and photon energies below 50 $keV$, and the second for energies between $50$ $keV$ and $100$ $keV$. } \label{0_HXR} \end{figure} The resulting electron and HXR spectra and spectral indices (below and above 50 $keV$) after the acceleration ($t=10$ $t_{0}$) are depicted in the top and bottom panels of Fig.\ref{0_HXR}, respectively.
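For a power-law electron spectrum, the thin-target model predicts a photon spectral index one larger than the electron index. This can be checked numerically with a Kramers-type simplification of the Bethe-Heitler cross section, $\sigma(\epsilon,E)\propto 1/(\epsilon E)$; the sketch below is illustrative only (our own normalizations and grids, not the code used for Fig.\ref{0_HXR}).

```python
import numpy as np

# Thin-target photon spectrum I(eps) ~ integral over E >= eps of
# F(E) * sigma(eps, E), with a Kramers-type cross section
# sigma ~ 1/(eps*E) approximating Bethe-Heitler. Normalizations arbitrary.
gamma_e = 5.0                           # electron spectral index (assumed)
E = np.logspace(1, 4, 4000)             # electron energies, keV
F = E ** (-gamma_e)                     # power-law electron spectrum

def photon_flux(eps):
    """Trapezoid integration of F(E)/(eps*E) over E >= eps."""
    m = E >= eps
    yv = F[m] / (eps * E[m])
    return np.sum(0.5 * (yv[1:] + yv[:-1]) * np.diff(E[m]))

eps = np.logspace(1, 2, 30)             # photon energies, keV
I = np.array([photon_flux(e) for e in eps])

# Photon spectral index from a log-log fit; expect gamma_e + 1 (thin target)
gamma_hxr = -np.polyfit(np.log(eps), np.log(I), 1)[0]
```

With $\gamma_{e}=5$, the fitted photon index comes out close to $6$, consistent with $\gamma_{HXR}=\gamma_{e}+1$.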
To a good approximation, the relationship between the electron ($\gamma_{e}$) and the corresponding HXR ($\gamma_{HXR}$) spectral indices agrees with the thin-target relation $\gamma_{HXR}=\gamma_{e}+1$. The influence of the ambient plasma number density $n_{\vec{r}}$ (Eq.\eqref{HXREquation}) is very small due to its small normalized range $[0.3 - 1.8]$ in both differently resolved magnetic fields, while the HXR flux at 100 $keV$ differs by more than 4 orders of magnitude between the two cases (see the bottom panel of Fig.\ref{0_HXR}). Note that the HXR spectral indices calculated from an initial Maxwell-Boltzmann distribution function for $T=10^6$ K are too large to match any observed HXR spectrum. For the cases with an initial power-law distribution (e.g., \citealp{Karlicky&Barta2006}), one may treat the electron acceleration as a diffusion in 2D energy space. Since the diffusion coefficient is approximately proportional to the energy, this explains why the spectral indices of the injected and accelerated electrons differ by one below the maximum injection energy of $\sim 50$ $keV$. Above 50 $keV$, the acceleration in the finely resolved magnetic field is much more efficient than in the coarsely resolved one, so that the spectrum is harder ($\sim 6$) for the finely resolved case. The corresponding HXR spectral indices are consistent with the observed values for small flares (whose HXR spectral indices can be as soft as $\geq 7$, see \citealp{Aschwanden2002}). \begin{figure}[htbp] \centering \mbox{ \includegraphics[width=0.45\textwidth]{10_EndPosition_XY__B_E_0_Trapped.png} } \caption{Final locations of trapped electrons with final kinetic energies $> 10$ $keV$ at $t=10t_{0}$, color-coded by their final kinetic energies. \textit{Left} and \textit{right} panels correspond to the coarsely and finely resolved magnetic fields.
Note that electrons with final kinetic energy $>105$ $keV$ are shown only by \textit{red asterisk} points in the better resolved magnetic fields. } \label{EP_Trapped} \end{figure} Besides the HXR spectra, fine structures (bright spots) along the current sheets trailing CMEs or eruptive filaments have also been observed (e.g., by \citealp{Ciaravellaetal2002, Koetal2003, Savage2010}). These bright spots should come from energetic trapped electrons. Fig.\ref{EP_Trapped} shows the final locations of the trapped electrons with final kinetic energies $>10$ $keV$, which will brighten the magnetic islands in the current sheet and may be associated with the observed hot spots. Furthermore, depending on the evolution of these magnetic islands, the bright spot located at $x=90$ $L_{0}$ moves upwards away from the Sun while the others fall back to the Sun. This evolution agrees well with the observed upward (\citealp{Ciaravellaetal2002, Koetal2003, Savage2010}) and downward (\citealp{Koetal2003, Savage2010}) moving bright spots in CME-trailing current sheets. \subsection{Precipitating Electrons} \label{0_Fled} A second observable feature which can be derived from our calculations is the emission produced by the energetic precipitating electrons. They can precipitate into the solar chromosphere and be related to the observed footpoint HXR signatures there. We first study the dependence of the precipitating electron acceleration on the initial conditions (velocity, pitch angle and position). The results are shown in Fig.\ref{0_0_Fled}. \begin{figure}[htbp] \centering \mbox{ \includegraphics[width=0.25\textwidth]{11_0_Coarse_X0A0Ee.png} \includegraphics[width=0.25\textwidth]{11_0_Fine_X0A0Ee.png} } \\ \mbox{ \includegraphics[width=0.50\textwidth]{11_Escaped.png} } \\ \mbox{ \includegraphics[width=0.50\textwidth]{11_0_E0A0X0Ee_C+F_Escaped.png} } \caption{Same as Fig.\ref{0_0_Trapped} but for precipitating electrons.
Here three different scales on the y-axis are also used, for $\Delta E <0$, $0<\Delta E<2$ $keV$ and $\Delta E > 2$ $keV$. } \label{0_0_Fled} \end{figure} \subsubsection{Initial condition dependence} Similar to the electrons trapped in the current sheet, the acceleration of precipitating electrons also strongly depends on their initial conditions (velocity, pitch angle and position). The acceleration efficiency increases with increasing energy, and the overall acceleration is more efficient in the finely resolved magnetic fields. The acceleration is, however, much less efficient than for the trapped electrons: the maximum energy gain is only a few $keV$ and about $10$ $keV$ for the coarsely and finely resolved magnetic fields, respectively. The dependence of the energy gain on the initial pitch angle and position shows that only electrons in a few channels can escape from the acceleration site and be injected into the chromosphere. As expected, electrons moving along magnetic field lines are more likely to escape than those with a pitch angle close to $90^{\circ}$. Altogether, only a small portion ($< 12\%$) of the electrons can precipitate into the chromosphere. \begin{figure}[htbp] \centering \mbox{ \includegraphics[width=0.45\textwidth]{12_V_pp_Escaped_Per_Par.png} } \caption{Same as Fig.\ref{Symmetry} but for precipitating electrons. } \label{Symmetry_Precipitating} \end{figure} The bottom-right panel of Fig.\ref{0_0_Fled} shows that a large portion of the precipitating electrons start near X-points. However, no electron escapes from the X-points near $x=42$ $L_0$ and $x=97$ $L_0$, since the magnetic islands below these two X-points are not symmetric about their centers (the O-points): their upper parts are smaller than their lower parts (see the whole $B_{y}$ plots in the bottom-right panel of Fig.\ref{0_0_Trapped}).
Electrons launched at the current sheet center are easily reflected or trapped by this geometry (see the characteristic trajectories of trapped electrons launched near $x=42$ $L_0$ in the middle rows of Figs.\ref{3Examples_Coarse} and \ref{3Examples_Fine}). \subsubsection{Acceleration properties} Although the asymmetric acceleration around the initial pitch angle $90^{\circ}$ in the finely resolved magnetic fields has the same origin for trapped and precipitating electrons, the acceleration asymmetry of the precipitating electrons is much weaker than that of the trapped ones (compare Fig.\ref{Symmetry} with Fig.\ref{Symmetry_Precipitating}). The total acceleration of precipitating electrons is also much weaker than that of trapped electrons; see the '$E=50$ $keV$' parts of Fig.\ref{Parallel_Total} and Fig.\ref{Parallel_Fled}, especially the coarse case for precipitating electrons in Fig.\ref{Parallel_Fled}. The final kinetic energy $E_{e}$ of the most energetic precipitating electrons is a little more than 50 and 60 $keV$ in the coarsely and finely resolved magnetic fields, respectively, i.e., all precipitating electrons have final kinetic energies $E_{e} <100$ $keV$. Different from the trapped electrons in Fig.\ref{Parallel_Total}, most precipitating electrons here still keep their initial energies, shown as stripes parallel to the '$E=50$ $keV$' line. The acceleration difference between trapped and precipitating electrons mainly comes from the acceleration in the parallel direction. Precipitating electrons experience stronger deceleration than acceleration in the parallel direction (see the top panels of Fig.\ref{Symmetry} and Fig.\ref{Symmetry_Precipitating}), i.e., the acceleration of precipitating electrons comes mainly from the perpendicular direction, independent of the magnetic field resolution (see also Fig.\ref{Parallel_Fled}).
For a stronger parallel acceleration, an electron should stay longer around the current sheet center, where the magnetic curvatures are larger than elsewhere, while precipitating electrons are ejected out of the current sheet before they can reach higher energies. \begin{figure}[htbp] \centering \mbox{ \includegraphics[width=0.45\textwidth]{13_V_0_E_Escaped_End.png} } \caption{Same as Fig.\ref{Parallel_Total}, but for precipitating electrons. } \label{Parallel_Fled} \end{figure} On the whole, the acceleration differences of precipitating electrons between the coarsely and finely resolved magnetic fields are not as large as those of trapped electrons, since precipitating electrons spend most of their time far away from the central current sheet, while the better resolved magnetic structures are located only near the current sheet center. \subsubsection{Characteristic trajectory of a precipitating electron} \begin{figure}[htbp] \centering \mbox{ \includegraphics[width=0.45\textwidth]{14_X_Y_Z_E_Escaped_New.png} } \caption{Trajectory projection and energy evolution of the most energetic precipitating electron with initial velocity $v_{0}=21$ $v_{th}$ and pitch angle $157.5^{\circ}$ in the coarsely (\textit{left} panel) and finely (\textit{right} panel) resolved magnetic fields. Trajectories are color-coded according to the local electron total kinetic energy. } \label{Fled_Test} \end{figure} The top and bottom rows of Fig.\ref{Fled_Test} show the trajectory and energy evolution of the most energetic precipitating electron in the coarsely and finely resolved magnetic fields, respectively. Their $xy$ and $yz$ trajectory projections (along the magnetic field lines only) confirm their strongly magnetized condition.
The energy profiles in the last column of Fig.\ref{Fled_Test} also depict the different acceleration properties in the coarsely and finely resolved magnetic fields: the acceleration sites are still at the current sheet center, where the perpendicular magnetic curvatures $\vec{v_{E}} \cdot [(\vec{b} \cdot \nabla)\vec{b}]$ and gradients $\vec{v_{E}} \cdot \nabla B$ act. After the electrons leave, there is no acceleration any more; there the parallel magnetic gradient term $v_{\parallel} (\vec{b} \cdot \nabla B)$ is stronger than the other two terms (see Fig.\ref{Acc_Factors}). Also, because of the single sign of the parallel magnetic gradient and of the parallel velocity direction along a precipitating electron trajectory, precipitating electrons do not show the frequent energy oscillations of the characteristic trapped electrons in Fig.\ref{3Examples_Coarse} and Fig.\ref{3Examples_Fine}. \subsubsection{Comparison with UV and EUV observations} \begin{figure}[htbp] \centering \mbox{ \includegraphics[width=0.45\textwidth]{15_EndPositionF_ZY__B_E_0.png} } \\ \mbox{ \includegraphics[width=0.45\textwidth]{15_EndPositionF_ZY__B_E_0_Evolution.png} } \caption{Chromospheric locations of precipitating electrons at $t=10$ $t_{0}$ (\textit{top} row) and their evolution (\textit{bottom two} rows), color-coded by their final kinetic energies ($E_{e}$: \textit{blue '$\ast$'} for $E_{e} < 54$ $keV$ and \textit{red '$\ast$'} for $E_{e} > 54$ $keV$), separately for the coarsely and finely resolved magnetic islands. } \label{0_EndPosition} \end{figure} The low energies of the precipitating electrons in the convective electric fields cannot cause HXR emission but produce ribbons of UV and EUV brightening (\citealp{Fletcheretal2011}). Fig.\ref{0_EndPosition} depicts the spatial distribution of the electrons precipitating into the chromosphere at the end of the calculation ($t=10$ $t_{0}$, top panels) and their evolution with time (panels in the last two rows).
As the figure shows, the ribbons exhibit an anti-symmetric geometry around the PIL. These two ribbons are related to the initial pitch angles of the precipitating electrons: electrons with initial pitch angles $> 90^{\circ}$ ($< 90^{\circ}$) precipitate into one (the other) branch. This initial pitch angle dependence is attributed to the weak parallel acceleration by the perpendicular magnetic curvature $(\gamma v_{\parallel})^{2} \vec{v_{E}} \cdot [(\vec{b} \cdot \nabla)\vec{b}]$ (Eq.\eqref{Parallel_Gain}), which cannot accelerate an electron into the direction anti-parallel to its initial velocity. Because of the non-symmetric acceleration around the initial pitch angle $90^{\circ}$ of precipitating electrons in the finely resolved magnetic fields, the more efficiently accelerated precipitating electrons have initial pitch angles of 0.875$\pi$ ($>90^{\circ}$). Hence the more strongly accelerated ($E_{e} > 54$ $keV$) precipitating electrons are located at only one branch of the ribbon geometry when the finely resolved smaller-scale magnetic structures are included (the top-right panel of Fig.\ref{0_EndPosition}). Some of the observed asymmetry between the two footpoints may therefore be attributed to the acceleration process. The chromospheric energy distribution of the precipitating energized electrons accelerated by the coarsely resolved magnetic structures, on the other hand, is more anti-symmetric with respect to the PIL (the top-left panel of Fig.\ref{0_EndPosition}). From the evolution of the chromospheric locations of the precipitating electrons (panels in the last two rows of Fig.\ref{0_EndPosition}), one also finds that their locations along the chromospheric ribbons depend on their initial positions: electrons starting closer to the solar surface precipitate closer to the PIL, arrive earlier in the chromosphere and have shorter displacements along the z-axis (or PIL).
At the same initial position, electrons with larger initial energies attain larger final kinetic energies and parallel velocities, which lead them to reach the chromosphere earlier (see Fig.\ref{Electron_Light}). \begin{figure}[htbp] \centering \mbox{ \includegraphics[width=0.45\textwidth]{16_Electron_Light_Curve_Escaped.png} } \caption{Light curves of precipitating electrons for four energy ranges: $E_{e} < 10$ $keV$ (\textit{dashed} lines), $10< E_{e} < 25$ $keV$ (\textit{dotted} lines), $25< E_{e} < 50$ $keV$ (\textit{solid} lines) and $E_{e} > 50$ $keV$ (\textit{dash-dot} lines), for the acceleration in the coarsely (\textit{red} lines) and finely (\textit{blue} lines) resolved magnetic fields. } \label{Electron_Light} \end{figure} Fig.\ref{Electron_Light} shows that the fluxes of electrons with higher energies evolve faster and reach their peaks earlier than those of lower-energy electrons. The time scale of the flux peak of precipitating electrons with final kinetic energies $> 50$ $keV$ indicates that these electrons are accelerated in less than 1.0 $t_{0}$ ($< 0.1$ $s$). The refined magnetic field structures are mainly located above $x=25$ $L_{0}$; hence at the beginning (before 1.5 $t_{0}$) there is no acceleration difference between the coarsely and finely resolved magnetic fields for the precipitating electrons. \section{Conclusions and Discussion} \label{Discussion} \subsection{Conclusions} In contrast to the acceleration in direct current (DC) parallel electric fields, which in MHD simulations depends on the choice of the resistivity in Ohm's law, we concentrate on the acceleration due to magnetic gradient and curvature drift effects in the cascading reconnection current sheet. We found that both the electrons trapped in the magnetic islands and the precipitating electrons can be accelerated by the perpendicular magnetic gradients and curvatures.
Trapped energetic electrons contribute to the formation of bright spots along the current sheets trailing CMEs or eruptive filaments, as well as to the flare loop-top HXR radiation through their Bremsstrahlung. Precipitating electrons, on the other hand, cause ribbons of UV and EUV brightening in the solar chromosphere. Whether an electron becomes trapped or precipitating depends on its initial conditions (e.g., for precipitation, an electron should start near an X-point, with velocity $> 2$ $v_{th}$ and pitch angle $\neq 90^{\circ}$). Trapped electrons are energized mainly in the magnetic islands in the coarsely resolved magnetic fields, while in the better resolved magnetic fields the strongest trapped electron acceleration takes place close to the X-points, due to the finely resolved larger magnetic curvatures and gradients of the smaller-scale magnetic fields there. Both trapped and precipitating electrons are accelerated or decelerated depending on their initial positions and pitch angles. The final kinetic energy depends on the initial electron energy: larger initial energies lead to stronger acceleration. Moreover, every kind of electron gains more energy if the smaller-scale magnetic structures, obtained by higher resolution MHD simulations, are taken into account. Also because of these smaller-scale structures, the energization of the more strongly accelerated trapped electrons is mainly in the parallel direction in the finer magnetic fields; the other (less accelerated trapped and precipitating) electrons mainly gain energy in the perpendicular direction. Due to the asymmetry of the magnetic curvature drift acceleration term around the center of the 2.5D current sheet, the larger magnetic curvatures in the better resolved magnetic structures cause stronger non-symmetric acceleration around the initial pitch angle $90^{\circ}$ for trapped and precipitating electrons.
In the coarsely resolved magnetic fields, on the contrary, both the trapped and the precipitating electron acceleration are close to symmetric around $90^{\circ}$. With the better resolved small-scale magnetic structures, the maximum energy gain of trapped electrons can reach $421$ $keV$, which already suffices to explain the observed loop-top HXR radiation. Under the thin-target model, together with a simple Bethe-Heitler formula for the Bremsstrahlung cross section and initial distribution functions $\propto E_{0}^{0}$ (i.e., constant) and $\propto E_{0}^{-3}$, the HXR spectral indices of trapped electrons can be as hard as $\sim5$ in the better resolved magnetic fields. This is already hard enough to explain the observed HXR spectra of medium solar flares. For initial Maxwell-Boltzmann distributions with $T=10^6$ $K$, however, the HXR spectra provide just a slight enhancement of the high-energy tail. In the chromospheric ribbon-shaped locations of the precipitating electrons, electrons starting lower in the solar atmosphere precipitate closer to the PIL. The weak parallel acceleration of precipitating electrons leads electrons with initial pitch angles $<90^{\circ}$ to precipitate on one side of the PIL, while those with initial pitch angles $>90^{\circ}$ go to the other side. Generally, the precipitating electron locations in the chromosphere form an anti-symmetric geometry around the PIL. However, because of the stronger acceleration of electrons with initial pitch angles $>90^{\circ}$, the more energetic electrons are located on one side of the PIL only when the better resolved smaller-scale magnetic structures are included. \subsection{Discussion} Solar flare observations imply that a large number of energetic electrons should precipitate into the solar chromosphere, where they cause observable radiation. Our calculations have shown that only $12 \%$ of the electrons precipitate within $10$ $t_{0}$, while the whole current sheet evolution lasts as long as 520 $t_{0}$.
Depending on the magnetic field evolution (see the panels of Fig.\ref{wholeView}), the lower magnetic islands ($x < 70$ $L_{0}$ in Fig.\ref{EP_Trapped}) will eventually merge into one magnetic loop (see the right panel of Fig.\ref{wholeView}). So, in the end, the electrons previously trapped in the lower magnetic islands can later also precipitate into the chromosphere. Taking this merging effect into account, more than $63\%$ of the electrons will finally reach the chromosphere. Furthermore, when the spatial scale collapses to the kinetic one, the guiding center approximation will no longer be valid. Particle motion will become chaotic due to nonlinear resonances between the particle bounce motion and gyration. With the transition to chaos, \citet{Buechner1989} found that trapped nonadiabatic charged particles can escape due to chaotic pitch angle scattering effects. Furthermore, in this study the background electromagnetic fields are kept constant during the $10$ $t_{0}$, so the effects of the field evolution on the electron acceleration are neglected. With the evolution of the electromagnetic fields, some trapped electrons may become precipitating ones; as a result, even more electrons will precipitate. Our study can explain the observed medium and small solar flare loop-top HXR spectra and EUV ribbons based solely on magnetic gradient and curvature effects in magnetic islands, without an ad hoc postulated ``anomalous'' resistivity. The precipitating electrons in our results, however, cannot explain the HXR spectral indices at the footpoints of solar flares, which can be as hard as 1.5 in large solar flares. Precipitating electrons could also reach the energies necessary to explain the observed HXR spectra based on magnetic gradient and curvature effects if more and smaller magnetic field structures are formed by cascading magnetic reconnection.
\section{Introduction and main results} \subsection{} Let $H$ be an associative algebra with unity over a field~$\kk$ and let $\mathscr C$ be a full abelian subcategory of the category $H-\Mod$ of left $H$-modules which is closed under submodules. Suppose that we have a ``finite duality'' functor ${}^\star: \mathscr C\to \Mod-H$ with $V^\star\subseteq V^*=\Hom_\kk(V,\kk)$ (with equality if and only if~$V$ is finite dimensional), where $V^*$ carries its natural right $H$-module structure, such that the restriction of the evaluation pairing $\la\cdot,\cdot\ra_V:V\tensor V^*\to \kk$ to $V\tensor V^\star$ is non-degenerate for all objects $V$ in~$\mathscr C$ (see~\S\ref{subs:pf-main0} for the details). Following~\cite{Fos}, we define $\beta_V:V\tensor_{D(V)} V^\star\to H^*$, where $D(V)=\End_H V^\star=(\End_H V)^{op}$, by $$ \beta_V(v\tensor f)(h)=\la h\lact v, f\ra_V=\la v,f\ract h\ra_V,\qquad v\in V,\,f\in V^\star,\,h\in H, $$ where $\lact$ (respectively, $\ract$) denotes the left (respectively, right) $H$-action. It is easy to see that~$\beta_V$ is well-defined. Set $H_V^*=\Im\beta_V$. Recall that $V\tensor V^\star$ and $H^*$ are naturally $H$-bimodules. The following is essentially proved in~\cite{Fos}*{\S3.1} and~\cite{FD}*{Corollary~1.16}. \begin{proposition} \begin{enumerate}[{\rm(a)}] \item\label{prop:Fos-0.a} For all $V\in \mathscr C$, $\beta_V$ is a homomorphism of $H$-bimodules and $H^*_V$ depends only on the isomorphism class of~$V$. Moreover, if $V,V'\in\mathscr C$ are simple and $H^*_V=H^*_{V'}$ then $V\cong V'$; \item\label{prop:Fos-0.b} $H^*_{V\oplus V'}=H^*_V+H^*_{V'}$ for all $V,V'\in\mathscr C$. In particular, $H^*_{V^{\oplus n}}=H^*_V$ for all $n\in\mathbb N$. \item\label{prop:Fos-0.c} If $V\tensor_{D(V)}V^\star$ is simple as an $H$-bimodule then $\beta_V$ is injective. \item\label{prop:Fos-0.d} If $V$ is simple finite dimensional then $V\tensor_{D(V)}V^\star$ is simple as an $H$-bimodule and hence $\beta_V$ is injective. 
\end{enumerate} \label{prop:Fos-0} \end{proposition} It is natural to call $H^*_V$ a {\em generalized Peter-Weyl component}. Denote $H^*_{\mathscr C}=\sum_{[V]\in\Iso\mathscr C} H^*_V$ and $\ul H^*_{\mathscr C}=\bigoplus_{[V]\in\Iso^\circ\mathscr C} H^*_V$, where $\Iso\mathscr C$ (respectively, $\Iso^\circ\mathscr C$) is the set of isomorphism classes of objects (respectively, simple objects) in~$\mathscr C$. By definition there is a natural homomorphism of $H$-bimodules $\ul H^*_{\mathscr C}\to H^*_{\mathscr C}$. Clearly, under the assumptions of Proposition~\ref{prop:Fos-0}\eqref{prop:Fos-0.c} it is injective. Note that $H^*_{\mathscr C}=\sum_{[V]\in A} H^*_V$ for any subset $A$ of~$\Iso\mathscr C$ which generates it as an additive monoid. The following refinement of~\cite{Fos}*{Theorem~3.10} establishes the generalized Peter-Weyl decomposition. \begin{theorem}\label{thm:semi-simp-new} Suppose that all objects in~$\mathscr C$ have finite length. Then \begin{enumerate}[{\rm(a)}] \item if $H^*_{\mathscr C}=\ul H^*_{\mathscr C}$ then $\mathscr C$ is semisimple; \item if $\mathscr C$ is semisimple and $V\tensor_{D(V)}V^\star$ is simple for every $V\in\mathscr C$ simple then $H^*_{\mathscr C}=\ul H^*_{\mathscr C}$. \end{enumerate} \end{theorem} \subsection{} Henceforth we denote by $\mathscr C^{fin}$ the full subcategory of~$\mathscr C$ consisting of all finite dimensional objects. Clearly $V\tensor V^\star$, $V\in\mathscr C^{fin}$, is a unital algebra with the unity~$1_V$; set $z_V:=\beta_V(1_V)\in H^*_V$. For example, if $H=\kk G$ for a finite group $G$ then for any finite dimensional $H$-module~$V$ we have $z_V(g)=tr_V(g)$, $g\in G$ where $tr_V$ denotes the trace of a linear endomorphism of~$V$. Given an $H$-bimodule~$B$, define the subspace $B^H$ of $H$-invariants in~$B$ by $B^H=\{ b\in B\,:\, h\lact b=b\ract h,\,\forall\, h\in H\}$ ($B^H$ is sometimes referred to as the center of~$B$). 
Clearly, $z_V\in (H^*_V)^H$, $z_V(1_H)=\dim_\kk V\not=0$ and $(H^*_V)^H=\kk z_V$ if $\End_H V=\kk\id_V$. Set $\mathcal Z_{\mathscr C}=\sum_{[V]\in\Iso\mathscr C} \ZZ z_V$. Given~$V\in\mathscr C$, denote by $|V|$ its image in the Grothendieck group $K_0(\mathscr C)$ of~$\mathscr C$. The following result contrasts sharply with Proposition~\ref{prop:Fos-0} and Theorem~\ref{thm:semi-simp-new} for non-semisimple $\mathscr C$. \begin{theorem}\label{thm:char} Suppose that $\mathscr C=\mathscr C^{fin}$. Then the map $K_0(\mathscr C)\to \mathcal Z_{\mathscr C}$ given by $|V|\mapsto z_V$, $[V]\in\Iso\mathscr C$ is an isomorphism of abelian groups. \end{theorem} \subsection{} To introduce a multiplication on~$\mathcal Z_{\mathscr C}\subset (H^*_{\mathscr C})^H\subset H^*_{\mathscr C}$, we assume henceforth that $H=(H,m,\Delta,\varepsilon)$ is a bialgebra and that $\mathscr C$ is a tensor subcategory of $H-\Mod$. Note that $H^*$ is an algebra in a natural way. It is easy to see (Lemma~\ref{lem:inv-subalg}) that $(H^*)^H$ is a subalgebra of~$H^*$. We also assume that there is a natural isomorphism $(V\tensor V')^\star\cong V'{}^\star\tensor V^\star$ in~$\Mod-H$ for all $V,V'\in\mathscr C$. \begin{theorem} \begin{enumerate}[{\rm(a)}] \item\label{thm:main-1.a} $H^*_V\cdot H^*_{V'}=H^*_{V\tensor V'}$ for all $V,V'\in\mathscr C$. In particular, $H^*_{\mathscr C}$ is a subalgebra of~$H^*$; \item\label{thm:main-1.b} $z_V\cdot z_{V'}=z_{V\tensor V'}$ for all $V,V'\in\mathscr C^{fin}$. In particular, if $\mathscr C=\mathscr C^{fin}$ then $\mathcal Z_{\mathscr C}$ is a subring of~$(H_{\mathscr C}^*)^H$ and the map $K_0(\mathscr C)\to \mathcal Z_{\mathscr C}$ from Theorem~\ref{thm:char} is an isomorphism of rings. \end{enumerate} \label{thm:main-1} \end{theorem} \noindent Thus, it is natural to regard $\mathcal Z_{\mathscr C}$ as the character ring of~$\mathscr C$. 
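For $H=\kk G$ with $G$ finite, the identity $z_V\cdot z_{V'}=z_{V\tensor V'}$ of Theorem~\ref{thm:main-1} reduces to the multiplicativity of characters, $tr_{V\tensor V'}(g)=tr_V(g)\,tr_{V'}(g)$, since $g$ acts on $V\tensor V'$ by a Kronecker product. A quick numerical check for the two-dimensional irreducible representation of $S_3$ (a sketch illustrating the statement, not part of the formal development):

```python
import numpy as np

# The 2-dimensional irreducible representation of S_3, realized as the
# dihedral group: r = rotation by 120 degrees, s = a reflection.
c, s_ = np.cos(2 * np.pi / 3), np.sin(2 * np.pi / 3)
r = np.array([[c, -s_], [s_, c]])        # image of a 3-cycle
s = np.array([[1.0, 0.0], [0.0, -1.0]])  # image of a transposition
group = [np.linalg.matrix_power(r, k) @ np.linalg.matrix_power(s, e)
         for k in range(3) for e in range(2)]  # all 6 elements of S_3

# z_V(g) = tr_V(g); on V (x) V the element g acts by the Kronecker
# product g (x) g, so z_{V(x)V}(g) = tr(g (x) g) = tr(g)^2.
chi_v = np.array([np.trace(g) for g in group])
chi_vv = np.array([np.trace(np.kron(g, g)) for g in group])
print(bool(np.allclose(chi_v * chi_v, chi_vv)))  # -> True
```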
\subsection{} It turns out that we can transfer the above structures from~$H^*_{\mathscr C}$ to~$H$ if $H=(H,m,\Delta,\varepsilon,S)$ is a Hopf algebra. For an $H$-bimodule $B$ define left actions $\ad$ and~$\diamond$ on~$B$ via $(\ad h)(b)=h_{(1)}\lact b\ract S(h_{(2)})$ and $h\diamond b=S^2(h_{(2)})\lact b\ract S(h_{(1)})$, $h\in H$, $b\in B$, where $\Delta(b)=b_{(1)}\tensor b_{(2)}$ in Sweedler's notation. Fix a categorical completion $H\widehat\tensor H$ such that $(f\tensor 1)(H\widehat\tensor H)\subset H$ for all $f\in H^*_{\mathscr C}$. Equivalently, $\Phi_P:H^*_{\mathscr C}\to H$, $f\mapsto (f\tensor 1)(P)$ is a well-defined linear map. Denote $\mathscr A(H)$ the set of all $P\in H\widehat\tensor H$ such that $P\cdot (S^2\tensor 1)(\Delta(h))=\Delta(h)\cdot P$ for all $h\in H$. Clearly, $\mathscr A(H)$ is a subalgebra of~$H\widehat\tensor H$. Elements of~$\mathscr A(H)$ are analogous to $M$-matrices (see e.g.~\cite{Sh}). For $V\in\mathscr C^{fin}$, set $c_V=c_{V,P}:=\Phi_P(z_V)\in \Phi_P((H^*_{\mathscr C})^H)$. Let $Z(H)$ be the center of~$H$. \begin{theorem}\label{thm:main-2} Let~$P\in\mathscr A(H)$. Then $\Phi_P:H^*_{\mathscr C}\to H$ is a homomorphism of left $H$-modules, where $H$ acts on~$H^*_{\mathscr C}$ and~$H$ via $\diamond$ and~$\ad$, respectively. Moreover, $\Phi_P((H^*_{\mathscr C})^H)\subset Z(H)$ and the assignment $|V|\mapsto c_V$, $[V]\in\Iso\mathscr C^{fin}$ defines a homomorphism of abelian groups $\ch_{\mathscr C}:K_0(\mathscr C^{fin})\to Z(H)$. \end{theorem} Surprisingly, $\Phi_P$ is often close to being an algebra homomorphism. To make this more precise, we generalize the notion of an algebra homomorphism as follows. Let $A$, $B$ be $\kk$-algebras and let $\mathscr F$ be a collection of subspaces in~$A$. We say that a $\kk$-linear map $\Phi:A\to B$ is an {\em $\mathscr F$-homomorphism} if $\Phi(U)\cdot \Phi(U')\subset \Phi(U\cdot U')$ for all $U,U'\in\mathscr F$. 
We say that $\mathscr F$ is multiplicative if $U\cdot U'\in\mathscr F$ for all $U,U'\in\mathscr F$. It is easy to see that $|\mathscr F|:=\sum_{U\in\mathscr F} U$ is a subalgebra of~$A$ and $\Phi(|\mathscr F|)$ is a subalgebra of~$B$ for any multiplicative family $\mathscr F$. In what follows we denote by $\mathscr F_{\mathscr C}$ the collection of all subspaces of~$H^*$ of the form $H^*_V$ where $V\in\mathscr C$. By Theorem~\ref{thm:main-1}, $\mathscr F_{\mathscr C}$ is multiplicative. \begin{example} Let $H=\kk G$, where $G$ is a finite group, and let $\mathscr C$ be the category of its finite dimensional representations. Then the assignment $\delta_g\mapsto g^{-1}$ where $\delta_g(h)=\delta_{g,h}$, $g,h\in G$ defines an isomorphism of $H$-bimodules $\Phi:H^*\to H$. Let $\mathscr F_G=\{ H^*_V\,:\, [V]\in\Iso^\circ\mathscr C,\, \Hom_G(V,V\tensor V)\not=0\}\subset \mathscr F_{\mathscr C}$. If $|G|\in\kk^\times$ then $\Phi$ is an $\mathscr F_G$-homomorphism since $\Phi(H^*_V)\cdot \Phi(H^*_{V'})=0$ if $[V]\not=[V']\in\Iso^\circ\mathscr C$ and $\Phi(H^*_V)\cdot \Phi(H^*_V)=\Phi(H^*_V)$. \end{example} Denote by $\mathscr M(H)$ the set of all $P\in H\widehat\tensor H$ such that $\Phi_P$ is an $\mathscr F_{\mathscr C}$-homomorphism and by $\mathscr M_0(H)$ the set of all $P\in\mathscr M(H)$ such that $\Phi_P$ restricts to a homomorphism of algebras $(H^*_{\mathscr C})^H\to Z(H)$. We abbreviate $H_{V,P}:=\Phi_P(H^*_V)$ and $H_{\mathscr C,P}:=\Phi_P(H^*_{\mathscr C})=\sum_{[V]\in\Iso\mathscr C} H_{V,P}$. Since $\mathscr F_{\mathscr C}$ is multiplicative, $H_{\mathscr C,P}$ is a subalgebra of~$H$ for~$P\in\mathscr M(H)$. The following is immediate. \begin{proposition}\label{cor:main-cor} Suppose that $P\in\mathscr A(H)\cap\mathscr M(H)$ and $\Phi_P$ is injective. 
Then: \begin{enumerate}[{\rm(a)}] \item\label{cor:main-cor.a'} If $V\tensor_{D(V)}V^\star$ is a simple $H$-bimodule then it is isomorphic to $H_{V,P}$ as a left $H$-module; \item\label{cor:main-cor.a} $H_{\mathscr C,P}=\bigoplus_{[V]\in\Iso^\circ\mathscr C} H_{V,P}$ if $\mathscr C$ is semisimple and $V\tensor_{D(V)}V^\star$ is simple as an $H$-bimodule for each $V\in\mathscr C$ simple; \item\label{cor:main-cor.b} If $P\in\mathscr M_0(H)$ then $\ch_{\mathscr C}:K_0(\mathscr C^{fin})\to Z(H)$ is injective. \end{enumerate} \end{proposition} The following theorem provides a sufficiently large subclass of~$\mathscr A(H)\cap\mathscr M(H)$ and~$\mathscr A(H)\cap\mathscr M_0(H)$. \begin{theorem}\label{thm:main-thm} Suppose that $P\in\mathscr A(H)$ such that $(\Delta\tensor 1)(P)=(m\tensor m\tensor 1)((T\tensor 1) P_{15}P_{35})$ for some $T\in H\widehat\tensor H\widehat\tensor H \widehat\tensor H$. Then $P\in\mathscr M(H)$. Moreover, if $(m^{op}\tensor m^{op})(T)=1\tensor 1$ then $P\in\mathscr M_0(H)$. \end{theorem} It should be noted that $\mathscr M(H)$ and~$\mathscr M_0(H)$ are not exhausted by the above condition. \begin{example}\label{ex:S_3} Suppose that $\operatorname{char}\kk\not=2,3$ and let $P_{\lambda,\mu}=\frac1{6}\sum_{\sigma\in S_3} 1\tensor \sigma+\frac1{36}\big[ s_1\tensor (1+(2\mu-1)s_1-(\mu+1)(s_2+s_1s_2s_1)+ s_1s_2+s_2 s_1)\big]_{S_3}+\frac1{18}\big[ s_1s_2\tensor (2+(\lambda-1)s_1s_2-(\lambda+1)s_2s_1)\big]_{S_3}$, where $\lambda,\mu\in\kk$, $s_i=(i,i+1)$ and we abbreviate $\big[ x\big]_G:=\sum_{g\in G} (g\tensor g)x(g^{-1}\tensor g^{-1})$ for $x\in\kk G\tensor\kk G$. Then one can show that $P_{\lambda,\mu}\in\mathscr A(H)\cap\mathscr M_0(H)$ and that $\Phi_P$ is an isomorphism if and only if $(\lambda,\mu)\in(\kk^\times)^2$. However, there is no $T\in H^{\tensor 4}$ such that the condition of Theorem~\ref{thm:main-thm} holds. 
\end{example} It turns out that $P\in\mathscr A(\kk G)\cap \mathscr M_0(\kk G)$ with $\Phi_P$ injective does not always exist for a given finite group~$G$ (for instance, it does not exist for dihedral groups different from~$S_2\times S_2$ and~$S_3$) and thus it would be interesting to classify all finite groups~$G$ which admit such a~$P$. Its existence provides a decomposition of $\kk G$ into a direct sum of adjoint $G$-modules $H_{V,P}$ over all simple $\kk G$-modules~$V$ (a mock Peter-Weyl decomposition), which is an alternative to the well-known Maschke decomposition into a direct sum of matrix algebras. As a further example, we constructed an 8-parameter family of such~$P$ for $G=S_4$. The answer is rather cumbersome (it involves 34 terms of the form $[g\tensor x]_{S_4}$, $g\in S_4$, $x\in\kk S_4$) and is available at~\href{https://ishare.ucr.edu/jacobg/jdec-example.pdf}{https://ishare.ucr.edu/jacobg/jdec-example.pdf}. Specializing Proposition~\ref{cor:main-cor} and Theorem~\ref{thm:main-thm} to quantized universal enveloping algebras we can recover Joseph's decomposition (\cite{joseph-mock}). Namely, let $H=U_q(\gg)$ for a Kac-Moody algebra~$\gg$ and let $\mathscr C_\gg$ be the (semisimple) category of highest weight integrable $U_q(\gg)$-modules (of type~$\mathbf 1$, see e.g. \cite{CP}); then $V^\star$ is the graded dual. Let $\Lambda^+$ be the monoid of dominant weights for~$\gg$ and denote by $V(\lambda)$ a highest weight simple integrable module of highest weight $\lambda\in\Lambda^+$. We construct $P=P_{\gg}$ with $\Phi_{P_{\gg}}$ injective in Lemma~\ref{lem:P_g} and obtain the following theorem, which refines results of~\cite{joseph-mock}. \begin{theorem} \begin{enumerate}[{\rm(a)}] \item\label{thm:joseph-decom.a} For $\lambda\in \Lambda^+$, $H_{V(\lambda),P}=\ad U_q(\gg)(K_{2\lambda})\cong V(\lambda)\tensor V(\lambda)^\star$. \item\label{thm:joseph-decom.b} $\sum_{\lambda\in\Lambda^+} \ad U_q(\gg)(K_{2\lambda}) $ is direct and is a subalgebra of~$U_q(\gg)$. 
\end{enumerate} \label{thm:joseph-decomp} \end{theorem} Furthermore, part~\eqref{cor:main-cor.b} of~Proposition~\ref{cor:main-cor}, which generalizes a classic result of Drinfeld (\cite{Dr}), yields \begin{theorem}\label{thm:centre} Let $\gg$ be semisimple. Then the assignment $|V|\mapsto c_V$ defines an isomorphism of algebras $\QQ(q)\tensor_\ZZ K_0(\gg-\operatorname{mod})\to Z(U_q(\gg))$. \end{theorem} \noindent This provides the following refinements of classic results of Duflo, Harish-Chandra and Rosso~(\cite{Ros}). \begin{corollary} For $\gg$ semisimple, $Z(U_q(\gg))$ is freely generated by the $c_{V(\omega)}$ where the $\omega$ are fundamental weights of~$\gg$, and $c_{V(\lambda)}c_{V(\mu)}=\sum_{\nu\in\Lambda^+} [V(\lambda)\tensor V(\mu):V(\nu)] c_{V(\nu)}$ for any $\lambda,\mu\in\Lambda^+$. \end{corollary} \subsection*{Acknowledgments} We are grateful to Anthony Joseph for explaining to us his approach to the center of quantized enveloping algebras and to Henning Andersen, David Kazhdan and Victor Ostrik for stimulating discussions. This work was completed during a visit of the second author to the Institut Mittag-Leffler (Djursholm, Sweden) whose support is greatly appreciated. \section{Notation and proofs} Recall that, given an $H$-bimodule $B$, $B^*$ is naturally an $H$-bimodule via $(h\lact f\ract h')(b)=f(h'\lact b\ract h)$, $f\in B^*$, $h,h'\in H$, $b\in B$. In particular, $H^*$ is an $H$-bimodule. \subsection{Proof of Theorem~\ref{thm:char}}\label{subs:pf-main0} The following are immediate. \begin{lemma}\label{lem:0} $\la V,W^\star\ra_{V\oplus W}=0=\la W,V^\star\ra_{V\oplus W}$. \end{lemma} \begin{lemma}\label{lem:A} Let $V$, $W$ be left $H$-modules and let $\rho:H\tensor_\kk W\to V$ be a $\kk$-linear map. 
Then: \begin{enumerate}[{\rm(a)}] \item \label{lem:A.a} the assignment $ h\lact_\rho (v,w)=(h\lact v+\rho(h\tensor w),h\lact w)$, $h\in H$, $v\in V$, $w\in W$, defines a left $H$-module structure $V\oplus_\rho W$ on~$V\oplus W$ if and only if \begin{equation}\label{eq:lem:A.1} \rho(hh'\tensor w)=\rho(h\tensor h'\lact w)+h\lact \rho(h'\tensor w),\qquad h,h'\in H,\,w\in W. \end{equation} In that case $V$ is an $H$-submodule of $V\oplus_\rho W$ and $W=(V\oplus_\rho W)/V$. \item \label{lem:A.b} A short exact sequence of $H$-modules $0\to V\to U\to W\to 0$ is equivalent to $0\to V\xrightarrow{} V\oplus_\rho W\xrightarrow{} W\to 0$ for some~$\rho$ satisfying~\eqref{eq:lem:A.1}. \end{enumerate} \end{lemma} Thus, given $V\subset U$ in~$\mathscr C$, we can replace the natural short exact sequence $0\to V\to U\to U/V\to 0$ by the one from Lemma~\ref{lem:A}. \begin{lemma}\label{lem:B'} Let $V$, $W$ be left $H$-modules and let $\rho$ be as in Lemma~\ref{lem:A}. Then $\beta_{V\oplus_\rho W}(x+y)=\beta_V(x)+\beta_W(y)$ for any $x\in V\tensor V^\star$, $y\in W\tensor W^\star$. \end{lemma} \begin{proof} It suffices to verify the assertion for $x=v\tensor f$ and $y=w\tensor g$, $v\in V$, $w\in W$, $f\in V^\star$, $g\in W^\star$. We have by Lemmata~\ref{lem:0}, \ref{lem:A}\eqref{lem:A.a} \begin{align*} \beta_{V\oplus_\rho W}&(v\tensor f+w\tensor g)(h)=\la h\lact_\rho v, f\ra_{V\oplus W}+\la h\lact_\rho w, g\ra_{V\oplus W} \\&=\la h\lact v,f\ra_V+\la\rho(h\tensor w), g\ra_{V\oplus W}+\la h\lact w,g\ra_{W}=\beta_V(v\tensor f)(h)+\beta_{W}(w\tensor g)(h).\qedhere \end{align*} \end{proof} Since $1_{V\oplus_\rho W}=1_V+1_W$ where $1_V\in V\tensor V^\star$, $1_W\in W\tensor W^\star$, it follows from Lemma~\ref{lem:B'} that $z_{V\oplus_\rho W}=z_V+z_W$ and the map $K_0(\mathscr C)\to \mathcal Z_{\mathscr C}$, $|V|\mapsto z_V$ is a well-defined surjective homomorphism of abelian groups. 
Also, $z_V\in \sum_{[S]\in \Iso^\circ\mathscr C} \ZZ z_{S}$ for each $V\in\mathscr C=\mathscr C^{fin}$ because it has finite length. Since the set $\{z_{V}\}_{[V]\in \Iso^\circ\mathscr C}\subset \ul H^*_{\mathscr C}$ is $\kk$-linearly independent by Proposition~\ref{prop:Fos-0}\eqref{prop:Fos-0.d}, the injectivity follows.\qed \subsection{Algebra structure on~\texorpdfstring{$H^*_{\mathscr C}$}{H*\_C}} Henceforth we assume that $H=(H,m,\Delta,\varepsilon)$ is a bialgebra. Then $H^*$ is a unital algebra with the multiplication defined by $(\phi\cdot\xi)(h)=\phi(h_{(1)})\xi(h_{(2)})$, $h\in H$, $\phi,\xi\in H^*$, where $\Delta(h)=h_{(1)}\tensor h_{(2)}$ in Sweedler's notation, and the unity is~$\varepsilon$. \begin{lemma}\label{lem:inv-subalg} $(H^*)^H$ is a subalgebra of~$H^*$. \end{lemma} \begin{proof} Observe that $\phi\in (H^*)^H$ if and only if $\phi(hh')=\phi(h'h)$ for all~$h,h'\in H$. Then, given $h,h'\in H$ and $\xi,\xi'\in(H^*)^H$ we have \begin{equation*} (\xi\cdot\xi')(hh')=\xi(h_{(1)}h'_{(1)})\xi'(h_{(2)}h'_{(2)})=\xi(h'_{(1)}h_{(1)})\xi'(h'_{(2)}h_{(2)})=(\xi\cdot\xi')(h'h).\qedhere \end{equation*} \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:main-1}] Note that in the category of $\kk$-vector spaces there is a natural isomorphism $\kappa:(V\tensor V^\star)\tensor (V'\tensor V'{}^\star)\to(V\tensor V')\tensor (V\tensor V')^\star$, $\kappa(v\tensor f\tensor v'\tensor f')=v\tensor v'\tensor f'\tensor f$, $v\in V$, $v'\in V'$, $f\in V^\star$, $f'\in V'{}^\star$. Then, clearly, $\la\cdot,\cdot\ra_{V\tensor V'}\circ\kappa=\la \cdot,\cdot\ra_V\tensor \la \cdot,\cdot\ra_{V'}$, which immediately implies that $\tilde\beta_V\tensor \tilde\beta_{V'}=\tilde\beta_{V\tensor V'}\circ \kappa$, where $\tilde\beta_U:=\beta_U\circ\pi_U$ and $\pi_U:U\tensor_\kk U^\star\to U\tensor_{D(U)} U^\star$ is the natural projection. This proves the first assertion, and also the second once we observe that $1_{V\tensor V'}=\kappa(1_V\tensor 1_{V'})$. 
\end{proof} \subsection{The Hopf algebra case} Suppose now that $H=(H,m,\Delta,\varepsilon,S)$ is a Hopf algebra. Since~$H$ is naturally an $H$-bimodule, $\ad:H\to\End_\kk H$ is a homomorphism of algebras. We also define $\ad^*:H^{op}\to \End_\kk H$ by $(\ad^*h)(h')=S(h_{(1)})h'S^2(h_{(2)})$, which is a homomorphism of algebras. Henceforth, given $a\in H^{\tensor n}$ we write it in Sweedler-like notation as $a=a_1\tensor\cdots\tensor a_n$ with summation understood. \begin{proof}[Proof of Theorem~\ref{thm:main-2}] We need the following equivalent descriptions of~$\mathscr A(H)$. \begin{lemma}\label{lem:D} Let $P=P_1\tensor P_2\in H\widehat\tensor H$. The following are equivalent: \begin{enumerate}[{\rm(a)}] \item\label{lem:D.a} $P\cdot (S^2\tensor 1)\circ\Delta(h)=\Delta(h)\cdot P$; \item\label{lem:D.c} $(1\tensor h)\cdot P=(\ad^*h_{(1)})(P_1)\tensor P_2 h_{(2)}$; \item\label{lem:D.d} $(\ad^* h\tensor 1)(P)=(1\tensor\ad h)(P)$. \end{enumerate} \end{lemma} \begin{proof} By~\eqref{lem:D.a} we have $ h_{(1)}\tensor P_1 S^2(h_{(2)})\tensor P_2 h_{(3)}\tensor h_{(4)}=h_{(1)}\tensor h_{(2)}P_1\tensor h_{(3)}P_2\tensor h_{(4)} $ for all $h\in H$. Then~\eqref{lem:D.c} and~\eqref{lem:D.d} follow by applying $m(S\tensor 1)\tensor 1\tensor \varepsilon$ and $m(S\tensor 1)\tensor m(1\tensor S)$, respectively, to both sides. Part~\eqref{lem:D.c} implies~\eqref{lem:D.a} since $h_{(1)}(\ad^* h_{(2)})(h')=h'S^2(h)$. Finally, \eqref{lem:D.d} implies~\eqref{lem:D.c} since $(\ad^*h_{(1)})(P_1)\tensor P_2h_{(2)}=P_1\tensor \ad h_{(1)}(P_2)h_{(2)}=P_1\tensor hP_2$. \end{proof} \begin{lemma}\label{lem:E} Let $B$ be an $H$-bimodule and set $B^{\diamond H}:=\{ b\in B\,:\, h\diamond b=\varepsilon(h)b,\,h\in H\}$. Then $B^H\subset B^{\diamond H}\subset B^{S(H)}$ with the equality if $S$ is invertible. \end{lemma} \begin{proof} Let $h\in H$. 
Then for all $b\in B^H$ we have $h\diamond b=S^2(h_{(2)})\lact b\ract S(h_{(1)})=S^2(h_{(2)})S(h_{(1)})\lact b=S(h_{(1)}S(h_{(2)}))\lact b= \varepsilon(h)b$. On the other hand, for all $b\in B^{\diamond H}$, $ S(h)\lact b=\varepsilon(h_{(1)})S(h_{(2)})\lact b=S(h_{(3)})S^2(h_{(2)})\lact b\ract S(h_{(1)})=S(S(h_{(2)})h_{(3)})\lact b\ract S(h_{(1)})=b\ract S(h) $. \end{proof} The following Lemma is well-known and can be proved similarly. \begin{lemma}\label{lem:F} $Z(H)=H^H=H^{\ad H}:=\{ h'\in H\,:\, (\ad h)(h')=\varepsilon(h)h',\,h\in H\}$.\qed \end{lemma} By Lemma~\ref{lem:D}\eqref{lem:D.d} we have, for all $h\in H$, $\xi\in H^*_{\mathscr C}$ $$ \Phi_P(h\diamond \xi)=(S^2(h_{(2)})\lact\xi\ract S(h_{(1)}))(P_1)P_2=\xi((\ad^*h)P_1)P_2=\xi(P_1)(\ad h)(P_2)=(\ad h)\Phi_P(\xi). $$ Furthermore, if $\xi\in (H^*_{\mathscr C})^H$ then $\Phi_P(h\diamond \xi)=\varepsilon(h)\Phi_P(\xi)=(\ad h)\Phi_P(\xi)$, whence $\Phi_P(\xi)\in Z(H)$. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:main-thm}] Suppose that $P$ satisfies $(\Delta\tensor 1)(P)=t_1 P_1 t_2\tensor t_3 P'_1 t_4\tensor P_2 P'_2$, for some $T=t_1\tensor t_2\tensor t_3\tensor t_4\in H^{\widehat\tensor 4}$ where $P=P_1\tensor P_2=P'_1\tensor P'_2$. Then for any $\xi,\xi'\in H^*_{\mathscr C}$ \begin{equation} \begin{split} \Phi_P(\xi\cdot \xi')=(\xi\cdot \xi')(P_1)P_2=\xi( t_1P_1t_2)\xi'(t_3P'_1t_4)P_2P'_2=(t_2\lact \xi\ract t_1)(P_1)(t_4\lact \xi'\ract t_3)(P'_1)P_2P'_2 \\=\Phi_P(t_2\lact \xi\ract t_1)\cdot\Phi_P(t_4\lact \xi'\ract t_3).\label{eq:Phi-mult} \end{split} \end{equation} Take $\xi\in H^*_V$, $\xi'\in H^*_{V'}$. Then $\xi\cdot \xi'\in H^*_{V\tensor V'}$ by Theorem~\ref{thm:main-1}\eqref{thm:main-1.a} and $\Phi_P(\xi\cdot\xi')\in H_{V,P}\cdot H_{V',P}$ by~\eqref{eq:Phi-mult}. Therefore, $P\in\mathscr M(H)$. Furthermore, assume that $t_2t_1\tensor t_4t_3=1\tensor 1$, and let $\xi,\xi'\in (H^*_{\mathscr C})^H$. 
Then~\eqref{eq:Phi-mult} yields $\Phi_P(\xi\cdot \xi')=\Phi_P(t_2t_1\lact \xi)\cdot\Phi_P(t_4t_3\lact \xi')=\Phi_P(\xi)\cdot\Phi_P(\xi')$. This implies that~$P\in\mathscr M_0(H)$. \end{proof} \subsection{Applications}\label{subs:Joseph} Let $\mathscr R(H)$ be the set of pairs $(R^+,R^-)$, $R^\pm\in H\widehat\tensor H$, such that $R^+_{21}R^-\cdot\Delta(h)=\Delta(h)\cdot R^+_{21}R^-$ for all~$h\in H$ and $ (\Delta\tensor 1)(R^\pm)=R^\pm_{13}R^\pm_{23}$, $(1\tensor\Delta)(R^+)=R^+_{13}R^+_{12} $. Clearly, $(R,R)\in\mathscr R(H)$ if $R$ is an $R$-matrix for~$H$. \begin{lemma}\label{lem:P-R} Suppose that there exists $\mathbf g\in H$ group-like such that $\mathbf gS^2(h)=h\mathbf g$ for all~$h\in H$. Let $(R^+,R^-)\in\mathscr R(H)$. Then $P:=R^+_{21}\cdot R^-\cdot (\mathbf g\tensor 1)\in\mathscr A(H)\cap \mathscr M_0(H)$. \end{lemma} \begin{proof} Write $R^\pm=r^\pm_1\tensor r^\pm_2=s^\pm_1\tensor s^\pm_2$. Since $R^+_{21}R^-\cdot\Delta(h)=\Delta(h)\cdot R^+_{21}R^-$ we have $$P\cdot (S^2\tensor 1)(\Delta(h))=r^+_2r^-_1\mathbf g S^2(h_{(1)})\tensor r^+_1r^-_2 h_{(2)}=r^+_2r^-_1 h_{(1)}\mathbf g\tensor r^+_1r^-_2 h_{(2)} =\Delta(h)\cdot P.$$ Thus, $P\in \mathscr A(H)$. Furthermore, $(\Delta\tensor 1)(P)=R^+_{32}R^+_{31}R^-_{13}R^-_{23}(\mathbf g\tensor \mathbf g\tensor 1)= P_1\tensor r^+_2r^-_1\mathbf g\tensor r^+_1 P_2r^-_2$. Since $(\Delta\tensor 1)(R^+)= r^+_1\tensor s^+_1\tensor r^+_2s^+_2$, by Lemma~\ref{lem:D}\eqref{lem:D.c} we obtain \begin{multline*} (\Delta\tensor 1)(P)=(\ad^* r^+_1)(P_1)\tensor r^+_2s^+_2r^-_1\mathbf g\tensor P_2 s^+_1r^-_2 =(\ad^* r^+_1)(P_1)\tensor r^+_2 P'_1\tensor P_2P'_2\\ =S(r^+_1)P_1S^2(s^+_1)\tensor r^+_2s^+_2 P'_1\tensor P_2P'_2. \end{multline*} Thus, $P\in\mathscr M(H)$ with $T=(S\tensor S^2\tensor 1\tensor 1)(R^+_{13}\cdot R^+_{23})$. Finally, $(m^{op}\tensor m^{op})(T)= S^2(s^+_1)S(r^+_1)\tensor r^+_2s^+_2=(S\tensor 1)( R^+\cdot (S\tensor 1)(R^+))=1\tensor 1$. Thus, $P\in\mathscr M_0(H)$. 
\end{proof} If~$P$ is as in Lemma~\ref{lem:P-R} we obtain \begin{equation}\label{eq:explicit-phi} \Phi_P(\beta_V(v\tensor f))= r^+_1\la r_2^+r^-_1\mathbf g\lact v,f\ra_V r^-_2=r^+_1\la r^-_1\mathbf g\lact v,f\ract r^+_2\ra_V r^-_2,\qquad v\in V,\, f\in V^\star. \end{equation} Let $\kk=\QQ(q)$ and let $U_q(\gg)$ be a quantized enveloping algebra corresponding to a symmetrizable Kac-Moody algebra~$\gg$, which is a Hopf algebra generated by $E_i$, $F_i$, $i\in I$ and $K_\mu$, $\mu\in\Lambda$, where $\Lambda$ is a weight lattice of~$\gg$, with $\Delta(E_i)=1\tensor E_i+E_i\tensor K_{\alpha_i}$, $\Delta(F_i)=F_i\tensor 1+K_{-\alpha_i}\tensor F_i$, $\Delta(K_\mu)=K_\mu\tensor K_\mu$, $\varepsilon(E_i)=\varepsilon(F_i)=0$ and $\varepsilon(K_\mu)=1$, where $\alpha_i$, $i\in I$ are the simple roots of~$\gg$. Let $\mathcal K$ be the subalgebra of~$U_q(\gg)$ generated by the $K_\mu$, $\mu\in\Lambda$. By~\cites{Dr,Lus-book}, there exists a unique $R$-matrix in a weight completion $U_q(\gg)\widehat\tensor U_q(\gg)$ of the form $R=R_0 R_1$ where $R_1\in U_q^+(\gg)\widehat\tensor U_q^-(\gg)$ is essentially $\Theta^{op}$ in the notation of~\cite{Lus-book} and satisfies $(\varepsilon\tensor 1)(R_1)=(1\tensor \varepsilon)(R_1)=1\tensor 1$, while $R_0\in \mathcal K\widehat\tensor \mathcal K$ is determined by the following condition: for any $\mathcal K$-modules $V^\pm$ such that $K_\mu|_{V^\pm}=q^{(\mu,\mu_\pm)}\id_{V^\pm}$, $\mu,\mu_\pm\in\Lambda$, we have $R_0|_{V^-\tensor V^+}=q^{(\mu_-,\mu_+)}\id_{V^-\tensor V^+}$. Here $(\cdot,\cdot)$ is the Kac-Killing form on $\Lambda\times\Lambda$ (\cite{Kac}). The following is immediate. \begin{lemma}\label{lem:P_g} Let $R=r_1\tensor r_2$ be as above. Let $v_\lambda\in V(\lambda)$ ($f_\lambda\in V(\lambda)^\star$) be a highest (respectively, lowest) weight vector of weight~$\lambda$ (respectively, $-\lambda$), $\lambda\in\Lambda^+$. 
Then $r_1\lact v_\lambda\tensor r_2=v_\lambda\tensor K_\lambda$ and $r_1\tensor f_\lambda\ract r_2= K_\lambda\tensor f_\lambda$.\qed \end{lemma} \begin{proof}[Proof of Theorem~\ref{thm:joseph-decomp}] Since $V(\lambda)$ is a simple highest weight module, $D(V(\lambda))\cong \kk$. Note that for any $\lambda,\mu\in\Lambda^+$, $V(\lambda)\tensor V(\mu)$ is a simple $U_q(\lie g\oplus\lie g)=U_q(\lie g)\tensor U_q(\lie g)$-module of highest weight $(\lambda,\mu)$. Twisting $V(\mu)$ with the anti-automorphism of $U_q(\lie g)$ interchanging $F_i$ and~$E_i$, we conclude that $V(\lambda)\tensor V(\lambda)^\star$ is a simple $U_q(\lie g)$-bimodule. Taking into account that $\mathbf g=K_{-2\rho}$ we obtain from Lemma~\ref{lem:P_g} and~\eqref{eq:explicit-phi} that $\Phi_P(\beta_{V(\lambda)}(v_\lambda\tensor f_\lambda))= K_\lambda\la \mathbf g\lact v_\lambda,f_\lambda\ra K_\lambda\in \kk^\times K_{2\lambda}$. Since $V(\lambda)\tensor V(\lambda)^\star$ is cyclic on~$v_\lambda\tensor f_\lambda$ as a $U_q(\gg)$-module with the $\diamond$ action, $H_{V(\lambda),P}$ is cyclic on~$K_{2\lambda}$ as an $\ad U_q(\gg)$-module by the above. Since $\beta_{V(\lambda)}$ is injective by Proposition~\ref{prop:Fos-0}\eqref{prop:Fos-0.c} and $\Phi_P$ is injective by~\cite{Dr}, it follows that $H_{V(\lambda),P}\cong V(\lambda)\tensor V(\lambda)^\star$. This proves~\eqref{thm:joseph-decom.a}. Then the sum in~\eqref{thm:joseph-decom.b} is direct by Proposition~\ref{cor:main-cor}\eqref{cor:main-cor.a} and coincides with $H_{\mathscr C_{\lie g},P}$, which is always a subalgebra of~$H$. 
\end{proof} \begin{proof}[Proof of Theorem~\ref{thm:centre}] Since~$D(V(\lambda))\cong\kk$, Theorem~\ref{thm:joseph-decomp} implies that $Z(H_{\mathscr C_\gg,P_\gg})=\bigoplus_{\lambda\in\Lambda^+} \kk c_{V(\lambda)}$, hence the assignment $|V(\lambda)|\mapsto c_{V(\lambda)}$ is an isomorphism $\kk\tensor_\ZZ K_0(\mathscr C_\gg)\to \Phi_{P_\gg}((H^*_{\mathscr C_\gg})^H)=Z(H_{\mathscr C_\gg,P_{\gg}})$ as in Proposition~\ref{cor:main-cor}\eqref{cor:main-cor.b}. By~\cite{Lus}, $K_0(\mathscr C_\gg)=K_0(\gg-\operatorname{mod})$ where $\gg-\operatorname{mod}$ is the category of finite dimensional $\gg$-modules. On the other hand, each non-zero element of $Z(U_q(\gg))$ is $\ad$-invariant, hence generates a one-dimensional $\ad U_q(\gg)$-module and thus is contained in $H_{\mathscr C_\gg,P_\gg}$ by~\cite{joseph-mock}. Therefore, $Z(U_q(\gg))\subset H_{\mathscr C_\gg,P_\gg}$ hence $Z(U_q(\gg))=Z(H_{\mathscr C_\gg,P_\gg})$. \end{proof} \begin{bibdiv} \begin{biblist} \bib{CP}{book}{ author={Chari, Vyjayanthi}, author={Pressley, Andrew}, title={A guide to quantum groups}, publisher={Cambridge University Press, Cambridge}, date={1994}, } \bib{Dr}{article}{ author={Drinfel{\cprime}d, V. G.}, title={Almost cocommutative Hopf algebras}, language={Russian}, journal={Algebra i Analiz}, volume={1}, date={1989}, number={2}, pages={30--46}, } \bib{FD}{book}{ author={Farb, Benson}, author={Dennis, R. Keith}, title={Noncommutative algebra}, series={Graduate Texts in Mathematics}, volume={144}, publisher={Springer-Verlag, New York}, date={1993}, pages={xiv+223}, isbn={0-387-94057-X}, } \bib{Fos}{thesis}{ author={Foster, John}, title={\href{https://scholarsbank.uoregon.edu/xmlui/bitstream/handle/1794/13269/Foster_oregon_0171A_10698.pdf?sequence=1}{Semisimplicity of certain representation categories}}, type={Ph.D. thesis}, date={2013}, organization={U. 
of Oregon Eugene} } \bib{joseph-mock}{article}{ author={Joseph, Anthony}, title={On the mock Peter-Weyl theorem and the Drinfeld double of a double}, journal={J. Reine Angew. Math.}, volume={507}, date={1999}, pages={37--56}, issn={0075-4102}, } \bib{Kac}{book}{ author={Kac, Victor G.}, title={Infinite-dimensional Lie algebras}, edition={2}, publisher={Cambridge University Press, Cambridge}, date={1985}, isbn={0-521-32133-6}, } \bib{Lus}{article}{ author={Lusztig, George}, title={Quantum deformations of certain simple modules over enveloping algebras}, journal={Adv. in Math.}, volume={70}, date={1988}, number={2}, pages={237--249}, } \bib{Lus-book}{book}{ author={Lusztig, George}, title={Introduction to quantum groups}, series={Progress in Mathematics}, volume={110}, publisher={Birkh\"auser, Boston, MA}, date={1993}, } \bib{Maj}{book}{ author={Majid, Shahn}, title={Foundations of quantum group theory}, publisher={Cambridge University Press, Cambridge}, date={1995}, pages={x+607}, isbn={0-521-46032-8}, } \bib{Ros}{article}{ author={Rosso, Marc}, title={Analogues de la forme de Killing et du th\'eor\`eme d'Harish-Chandra pour les groupes quantiques}, journal={Ann. Sci. \'Ecole Norm. Sup. (4)}, volume={23}, date={1990}, number={3}, pages={445--467}, issn={0012-9593}, } \bib{RST}{article}{ author={Reshetikhin, N. Yu.}, author={Semenov-Tian-Shansky, M. A.}, title={Quantum $R$-matrices and factorization problems}, journal={J. Geom. Phys.}, volume={5}, date={1988}, number={4}, pages={533--550 (1989)}, } \bib{Sch}{article}{ author={Schneider, H.-J.}, title={Some properties of factorizable Hopf algebras}, journal={Proc. Amer. Math. Soc.}, volume={129}, date={2001}, number={7}, pages={1891--1898 (electronic)}, issn={0002-9939}, } \bib{Sh}{article}{ author={Semikhatov, A. M.}, title={Factorizable ribbon quantum groups in logarithmic conformal field theories}, journal={Theoret. and Math. 
Phys.}, volume={154}, date={2008}, number={3}, pages={433--453}, issn={0040-5779}, } \end{biblist} \end{bibdiv} \end{document}
\section{Introduction} Shape optimization problems form a very challenging field of mathematical analysis and have attracted more and more attention in the last decade. One of the oldest and most discussed problems is certainly the task of finding the shape of a body inside a fluid having the least resistance. This problem dates back at least to Newton, who proposed it in a rotationally symmetric setting. Nowadays, there are many important industrial applications leading to questions of this kind. Among others we mention in particular the problem of optimizing the shape of airplanes, cars and wind turbine blades in order to have least resistance, as well as biomechanical applications like bypass constructions. The wide range of applications may be one of the reasons that shape optimization problems in fluids have received growing attention recently. Nevertheless, those problems turn out to be very challenging and so far no overall mathematical concept has been successful in full generality. One of the main difficulties certainly is that shape optimization problems are often not well-posed, i.e., no minimizer exists; compare for instance \cite{book:KawohlCellinaOrnelas00, article:Murat77, incoll:Tartar}. There are some contributions leading to mathematically well-posed problem formulations, see for instance \cite{article:PlotnikovSokolowski10}, but the geometric restrictions are difficult to handle numerically. The most common approaches used in practice parametrize the boundary of the unknown optimal shape by functions, see for instance \cite{incoll:BrandenburgLindemannUlbrichUlbrich09, article:Pironneau74}. However, those formulations do not admit a minimizer in general. For numerical simulations, shape sensitivity analysis is typically used. Here, one uses local boundary variations in order to find a gradient of the cost function with respect to the design variable, which in this case is the shape of the body.
The necessary calculations are carried out without considering the existence or regularity of a minimizer. But in the end one obtains a mathematical structure that can be used for numerical implementations. In \cite{GarckeHechtNS}, a phase field approach was introduced for minimizing general volume functionals in a Navier--Stokes flow. For this purpose, the porous medium approach proposed by Borrvall and Petersson \cite{article:BorrvallPetersson03} and a Ginzburg--Landau regularization as in the work of Bourdin and Chambolle \cite{article:BourdinChambolle03} were combined. The latter is a diffuse interface approximation of a perimeter regularization. This leads to a model where existence of a minimizer can be guaranteed, and at the same time necessary optimality conditions can be derived and used for numerical simulations, see \cite{garckehinzeetal}. In particular, this approach replaces the free boundary $\Gamma$ of the body $B$ by a diffuse interface. Hence, it is a priori not clear how to deal with objective functionals that are defined on the free boundary $\Gamma$. In this work, we study the following boundary objective functional: \begin{align}\label{generalfunctional} \int_{\Gamma} h(x, \nabla \bm{u}, p, \bm{\nu}) \, \mathrm{d} \mathcal{H}^{d-1} \,, \end{align} where $h$ is a given function, $\bm{u}$ denotes the velocity field of the fluid, $p$ denotes the pressure, $\bm{\nu}$ is the \textit{inner} unit normal of the fluid region, i.e., pointing from the body $B$ into the complementary fluid region $E = B^{c}$. 
The velocity $\bm{u}$ and pressure $p$ are assumed to obey the stationary Navier--Stokes equations inside the fluid region $E$, and the no-slip condition on $\Gamma$, namely, \begin{subequations}\label{IntroNS} \begin{alignat}{2} -\div \bm{\sigma} + (\bm{u} \cdot \nabla) \bm{u} & = \bm{f} && \text{ in } E, \\ \div \bm{u} & = 0 && \text{ in } E, \\ \bm{u} & = \bm{0} && \text{ on } \Gamma, \end{alignat} \end{subequations} where $\bm{\sigma} := \mu \left(\nabla \bm{u} + (\nabla \bm{u})^T \right)- p \, \bm{\mathrm{I}}\,$ denotes the stress tensor of the velocity field $\bm{u}$, $\mu > 0$ denotes the viscosity of the fluid, $\bm{f}$ denotes an external body force, and $\, \bm{\mathrm{I}}\,$ denotes the identity tensor. An important example of $h$ is the hydrodynamic force component acting on $\Gamma$ with the force direction defined by the unit vector $\bm{a}$: \begin{align}\label{HydroDynamForce} h(x, \nabla \bm{u}, p, \bm{\nu}) = \bm{a} \cdot (\bm{\sigma}\bm{\nu}) = \bm{a} \cdot (\mu (\nabla \bm{u} + (\nabla \bm{u})^{T}) - p \, \bm{\mathrm{I}}\,) \bm{\nu}, \end{align} and so \eqref{generalfunctional} becomes \begin{align}\label{HydroDynamFunctional} \int_{\Gamma} \bm{a} \cdot (\bm{\sigma} \bm{\nu}) \, \mathrm{d} \mathcal{H}^{d-1} \, = \bm{a} \cdot \left ( \int_{\Gamma} \bm{\sigma} \bm{\nu} \, \mathrm{d} \mathcal{H}^{d-1} \, \right ). \end{align} If $\bm{a}$ is parallel to the direction of the flow, then \eqref{HydroDynamFunctional} represents the drag of the object $B$. If $\bm{a}$ is perpendicular to the direction of the flow, then \eqref{HydroDynamFunctional} represents the lift of the object. In the work at hand we propose an approach on how to deal with boundary objective functionals in the phase field setting. 
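For concreteness, the force density in \eqref{HydroDynamForce} can be evaluated pointwise. The following minimal sketch assembles $\bm{\sigma} = \mu (\nabla \bm{u} + (\nabla \bm{u})^{T}) - p \, \bm{\mathrm{I}}\,$ and computes $\bm{a} \cdot (\bm{\sigma} \bm{\nu})$ at a single point; all numerical values ($\mu$, $p$, $\nabla \bm{u}$, $\bm{\nu}$, $\bm{a}$) are made up for illustration.

```python
# Minimal sketch: evaluate the force density h = a . (sigma nu) at one point,
# with sigma = mu (grad u + (grad u)^T) - p I.  All numerical values below
# (mu, p, grad_u, nu, a) are made up for illustration.
mu = 1.0e-3                          # viscosity
p = 2.5                              # pressure at the point
grad_u = [[0.0, 4.0],                # (grad u)_{ij} = d u_i / d x_j
          [0.0, 0.0]]
nu = [0.0, 1.0]                      # inner unit normal (pointing into the fluid)
a = [1.0, 0.0]                       # force direction, e.g. flow direction (drag)

d = len(nu)
sigma = [[mu * (grad_u[i][j] + grad_u[j][i]) - (p if i == j else 0.0)
          for j in range(d)] for i in range(d)]
sigma_nu = [sum(sigma[i][j] * nu[j] for j in range(d)) for i in range(d)]
h_val = sum(a[i] * sigma_nu[i] for i in range(d))
print(h_val)                         # mu * du1/dx2 = 0.004
```

Since $\bm{a} \perp \bm{\nu}$ in this example, the pressure contribution $-p (\bm{a} \cdot \bm{\nu})$ drops out and only the viscous part of the stress contributes.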
To be precise, we aim to minimize an appropriate phase field approximation of the functional \eqref{generalfunctional}, and also the functional \eqref{HydroDynamFunctional}, which can be considered as one of the most important objectives in shape optimization in fluids. The fluid is assumed to be an incompressible, Newtonian fluid described by the stationary Navier--Stokes equations \eqref{IntroNS}. For this purpose, we first discuss how we model the integral over the free boundary $\Gamma$ if it is replaced by a diffuse interface and how the normal $\bm{\nu}$ can be defined in this setting, see Section \ref{sec:DerivationPhaseField}. Afterwards, we analyze the phase field problem for both \eqref{generalfunctional} and \eqref{HydroDynamFunctional} and discuss the existence of a minimizer and optimality conditions, see Section \ref{sec:AnalysisPhaseField}. In Section \ref{sec:SharpInterfaceAsymp}, we focus on the hydrodynamic force functional \eqref{HydroDynamFunctional} and the corresponding phase field problem is then related to the sharp interface free boundary problem with a perimeter regularization by the method of matched formal asymptotic expansions. We find that the formal sharp interface limit of the optimality system gives the same results as can be found in the shape sensitivity literature. We then solve the phase field problem numerically, see Section \ref{sec:Numerics}. For this purpose, we derive a gradient flow equation for the reduced objective functional and arrive at a Cahn--Hilliard type system. After time discretization, this system is treated in every time step by a Newton method. We numerically solve shape optimization problems involving drag and the lift-to-drag ratio. \section{Notation and problem formulation}\label{sec:SharpInterfaceProblem} Let us assume that $\Omega \subset \mathbb{R}^{d}$, $d \in \{2,3\}$, is a fixed domain with Lipschitz boundary.
Inside this fixed domain $\Omega$ we may have certain parts filled with fluid, denoted by $E$, and the complement $B:= \overline{\Omega} \setminus E$ is some non-permeable medium. In the following we will denote by $\bm{\nu}$ the outer unit normal of $B$, i.e., the inner unit normal of the fluid region. The aim is to minimize the functional given by \eqref{generalfunctional}, where $\Gamma := \partial B \cap \Omega$, subject to the Navier--Stokes equations \eqref{IntroNS}. We additionally impose a volume constraint on the amount of fluid. For this purpose we choose $\beta \in (-1,1)$ and only use fluid regions $E \subset \Omega$ fulfilling the constraint $\abs{E}=\frac{(\beta+1)}{2}\abs{\Omega}$. We prescribe some inflow or outflow regions on the boundary of $\Omega$ and choose for this purpose $\bm{g} \in \bm{H}^{\frac{1}{2}}(\partial \Omega)$ such that $\int_{\partial \Omega} \bm{g} \cdot \bm{\nu}_{\partial \Omega} \; \, \mathrm{d} \mathcal{H}^{d-1} \, = 0$. Additionally, we may have some body force $\bm{f} \in \bm{L}^{2}(\Omega)$ acting on the design domain. Note that throughout this paper we denote $\mathbb{R}^{d}$-valued functions and spaces consisting of $\mathbb{R}^{d}$-valued functions in boldface. As already mentioned in the introduction, problems like this are generally not well-posed in the sense that the existence of a minimizer cannot be guaranteed. Hence, we use an additional perimeter regularization. For this purpose, we add a multiple of the perimeter of the obstacle to the cost functional \eqref{generalfunctional}. In order to properly formulate the resulting problem we introduce a design function $\varphi:\Omega \to \{ \pm1 \}$, where $\{ \varphi = 1 \} = E$ describes the fluid region and $\{ \varphi = -1 \} = B$ is its complement. The volume constraint reads in this setting as $\int_{\Omega} \varphi \, \mathrm{dx}\, = \beta \abs{\Omega}$.
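The bookkeeping behind the volume constraint, $\abs{E} = \frac{(\beta+1)}{2}\abs{\Omega}$ if and only if $\int_{\Omega} \varphi \, \mathrm{dx}\, = \beta \abs{\Omega}$ for a $\pm1$-valued design function, is easily checked numerically. The following 1-D sketch (our own discretization, not part of the model) does so, and also records that the total variation of such a function counts $2$ per interface point.

```python
# 1-D sanity check on Omega = (0, 1): a +-1-valued design function whose
# fluid set {phi = 1} has measure 0.4, i.e. beta = 2*0.4 - 1 = -0.2.
n = 1000
dx = 1.0 / n
phi = [1.0 if 0.3 <= (i + 0.5) * dx < 0.7 else -1.0 for i in range(n)]

volume_integral = sum(phi) * dx               # int_Omega phi dx
beta = 2 * 0.4 - 1                            # predicted from |{phi=1}| = 0.4
print(volume_integral, beta)                  # both -0.2 (up to rounding)

# total variation: each of the two interface points contributes a jump of 2
total_variation = sum(abs(phi[i + 1] - phi[i]) for i in range(n - 1))
print(total_variation)                        # 4.0 = 2 * (number of interfaces)
```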
The design functions are chosen to be functions of bounded variation, such that the fluid region has finite perimeter, i.e., $\varphi \in BV(\Omega,\{\pm1\})$. We shall write $P_{\Omega}(E)$ for the perimeter in $\Omega$ of a set of finite perimeter $E \subseteq \Omega$. Besides, if $\varphi$ is a function of bounded variation, its distributional derivative $ \mathrm{D} \varphi$ is a finite Radon measure and we can define the total variation by $\abs{ \mathrm{D} \varphi}(\Omega)$. For $\varphi \in BV(\Omega,\{\pm1\})$, it holds that \begin{align}\label{HausmeasureRelation} \abs{ \mathrm{D} \varphi}(\Omega)= 2P_{\Omega}(\{\varphi=1\}). \end{align} For a more detailed introduction to the theory of sets of finite perimeter and functions of bounded variation we refer to \cite{book:EvansGariepy, book:Giusti}. We hence arrive at the following space of admissible design functions: \begin{align}\label{defn:PhiAd0} \Phi_{ad}^{0}:= \left \{ \varphi \in BV(\Omega,\{\pm 1\}) \mid \int_{\Omega} \varphi \, \mathrm{dx}\, = \beta \abs{\Omega} \right \}. \end{align} Let $\gamma > 0$ denote the weighting factor for the perimeter regularization.
Then, we arrive at the following shape optimization problem for the functional \eqref{generalfunctional} with additional perimeter regularization: \begin{align}\label{IntroObjFclt} \min_{(\varphi,\bm{u}, p)} J_{0}(\varphi, \bm{u},p) := \int_{\Omega} \frac{1}{2} h(x, \nabla \bm{u}, p, \bm{\nu}_{\varphi}) \, \mathrm{d} \, \abs{ \mathrm{D} \varphi }+ \frac{\gamma}{2} \abs{ \mathrm{D} \varphi }(\Omega ), \end{align} subject to $\varphi \in \Phi_{ad}^{0}$ and $(\bm{u},p) \in \bm{H}^{1}(E) \times L^{2}(E)$ fulfilling \begin{subequations}\label{IntroStateEquSharp} \begin{alignat}{2} -\mu \Delta \bm{u} + ( \bm{u} \cdot \nabla) \bm{u} + \nabla p & = \bm{f} &&\text{ in } E = \{ \varphi =1 \},\\ \div \bm{u} & = 0 &&\text{ in } E, \label{NSdiv} \\ \bm{u} & = \bm{g} && \text{ on } \partial \Omega \cap \partial E,\\ \bm{u} & = \bm{0} && \text{ on } \Gamma = \Omega \cap \partial E. \label{NSnoslip} \end{alignat} \end{subequations} Here, we used the relation \eqref{HausmeasureRelation} to replace the perimeter of $E$ with $\frac{1}{2} \abs{ \mathrm{D} \varphi}(\Omega)$. Furthermore, by the polar decomposition \begin{align}\label{polardecomp} \mathrm{D} \varphi = \bm{\nu}_{\varphi} \abs{ \mathrm{D} \varphi} \text{ for } \varphi \in BV(\Omega, \{ \pm 1 \}), \end{align} of the Radon measure $ \mathrm{D} \varphi$ into a positive measure $\abs{ \mathrm{D} \varphi}$ and an $S^{d-1}$-valued function $\bm{\nu}_{\varphi} \in L^{1}\left(\Omega,\abs{ \mathrm{D} \varphi} \right)^{d}$, see for instance \cite[Corollary 1.29]{book:Ambrosio}, we replace the product of the normal and the Hausdorff measure in \eqref{HydroDynamFunctional} by $\frac{1}{2}\bm{\nu}_{\varphi} \, \mathrm{d} \, \abs{ \mathrm{D} \varphi}$. In particular, $\bm{\nu}_{\varphi}$ can be considered as a generalised unit normal on $\partial E$. We remark that the shape optimization problem \eqref{IntroObjFclt} for the hydrodynamic force component \eqref{HydroDynamForce} has been studied extensively in the literature.
In the work of \cite{article:Bello97}, the boundary integral \eqref{HydroDynamFunctional} is transformed into a volume integral. This is also done in \cite{incoll:BrandenburgLindemannUlbrichUlbrich_advancedNumMeth_DesignNSflow, article:PlotnikovSokolowski10}, but in the latter, the compressible Navier--Stokes equations are considered. We also mention \cite{article:Kondoh12}, which utilises the approach of Borrvall and Petersson \cite{article:BorrvallPetersson03} and the volume integral formulation. The shape derivatives for general volume and boundary objective functionals in Navier--Stokes flow have been derived in \cite{article:SchmidtSchulz10}. Finally, we mention the work of \cite{incoll:Boisgerault}, which bears the most similarity to our set-up. Under the assumption that the set $E = \{ \varphi = 1\}$ is $C^{2}$ and that there is a unique, sufficiently regular solution $\bm{u}$ to \eqref{IntroNS}, it is shown in \cite{incoll:Boisgerault}, via the speed method, that the shape derivative of \begin{align*} J(E) = \int_{\Gamma} \bm{a} \cdot (\mu (\nabla \bm{u} + (\nabla \bm{u})^{T}) - p \bm{I}) \bm{\nu} \, \mathrm{d} \mathcal{H}^{d-1} \, \end{align*} with respect to a vector field $V$ is given by (see \cite[Theorem 4, Equation 39]{incoll:Boisgerault}) \footnote{We remark that in \cite{incoll:Boisgerault}, the normal $\bm{n}$ is pointing from the fluid domain to the obstacle, i.e., in comparison with our set-up, $\bm{n} = - \bm{\nu}$.} \begin{align}\label{ShapeDerivHydroDynamForce} \mathrm{D} J(E)[V] = \int_{\Gamma} \inner{V(0)}{\bm{\nu}}(\bm{f} \cdot \bm{a} + \mu \partial_{\bm{\nu}} \bm{q} \cdot \partial_{\bm{\nu}} \bm{u}) \, \mathrm{d} \mathcal{H}^{d-1} \,, \end{align} where $\bm{q}$ is the solution to the adjoint system (see \cite[Equation 33.2]{incoll:Boisgerault}): \begin{subequations}\label{IntroAjointEquSharp} \begin{alignat}{2} -\mu \Delta \bm{q} + (\nabla \bm{u})^{T} \bm{q} - ( \bm{u} \cdot \nabla) \bm{q} + \nabla \pi & = \bm{0} &&\text{ in } E
,\\ \div \bm{q} & = 0 &&\text{ in } E,\\ \bm{q} & = \bm{a} && \text{ on } \Gamma,\\ \bm{q} & = \bm{0} && \text{ on } \partial \Omega \cap \partial E. \end{alignat} \end{subequations} Here, we denote the normal derivative of a scalar $\alpha$ and of a vector $\bm{\beta}$ as \begin{align}\label{normalderivativevector} \partial_{\bm{\nu}} \alpha := \nabla \alpha \cdot \bm{\nu}, \quad \partial_{\bm{\nu}} \bm{\beta} := (\nabla \bm{\beta}) \bm{\nu}. \end{align} We note that as $\bm{u}$ satisfies the no-slip boundary condition \eqref{NSnoslip}, the tangential derivatives of $\bm{u}$ vanish on $\Omega \cap \partial E$. Thus, we obtain \begin{align}\label{decomposition:nablau:surface} \nabla \bm{u} = \partial_{\bm{\nu}} \bm{u} \otimes \bm{\nu} \text{ on } \Gamma = \Omega \cap \partial E. \end{align} Using the divergence free condition \eqref{NSdiv}, and the no-slip condition \eqref{NSnoslip}, we obtain on $\Gamma$: \begin{align}\label{pdnuUdotnuZero} 0 = \div \bm{u} = \tr{\nabla \bm{u}} = \sum_{i=1}^{d} \partial_{\bm{\nu}} u_{i} \nu_{i} = \partial_{\bm{\nu}} \bm{u} \cdot \bm{\nu} \Longrightarrow (\nabla \bm{u})^{T} \bm{\nu} = (\partial_{\bm{\nu}} \bm{u} \cdot \bm{\nu}) \bm{\nu} = \bm{0}, \end{align} which in turn implies that \begin{align}\label{HydroDynamForceSim} J(E) = \int_{\Gamma} \bm{a} \cdot (\bm{\sigma}\bm{\nu})\, \mathrm{d} \mathcal{H}^{d-1} \, = \int_{\Gamma} \bm{a} \cdot (\mu \nabla \bm{u} - p \, \bm{\mathrm{I}}\,) \bm{\nu} \, \mathrm{d} \mathcal{H}^{d-1} \,.
\end{align} This is similar to the setting of \cite[Remark 12]{article:SchmidtSchulz10} and by following the computations in \cite{article:SchmidtSchulz10} one obtains \eqref{IntroAjointEquSharp} as the adjoint system and the shape derivative of \eqref{HydroDynamForceSim} for a $C^{2}$ domain in the direction of $V$ is \footnote{We remark that in \cite[Remark 12]{article:SchmidtSchulz10} the term $\div_{\Gamma} (\mu (\nabla \bm{u}) \bm{a})$ appears instead of $\div_{\Gamma}( \mu (\nabla \bm{u})^{T} \bm{a})$, which we believe is a typo.} \begin{equation}\label{SchmidtShapeDerivHydroDynamForce} \begin{aligned} \mathrm{D} J(E)[V] & = \int_{\Gamma} \inner{V(0)}{\bm{\nu}} \left ( -\mu \partial_{\bm{\nu}}(\partial_{\bm{\nu}} \bm{u}) \cdot \bm{a} + \partial_{\bm{\nu}} p (\bm{a} \cdot \bm{\nu}) + \mu \partial_{\bm{\nu}} \bm{q} \cdot \partial_{\bm{\nu}} \bm{u} \right ) \, \mathrm{d} \mathcal{H}^{d-1} \, \\ & - \int_{\Gamma} \inner{V(0)}{\bm{\nu}} \div_{\Gamma} \left ( \mu (\nabla \bm{u})^{T} \bm{a} - p \bm{a} \right ) \, \mathrm{d} \mathcal{H}^{d-1} \,, \end{aligned} \end{equation} where $\div_{\Gamma}$ denotes the surface divergence. We introduce the surface gradient of $f$ on $\Gamma$ by $\nabla_{\Gamma}f$ with components $(\underline{D}_{k}f)_{1 \leq k \leq d}$, and with this definition we obtain $\div_{\Gamma} \bm{v} = \sum_{k=1}^{d} \underline{D}_{k} v_{k}$ for a vector field $\bm{v}$. Moreover, in components, we have \begin{align*} \partial_{\bm{\nu}} (\partial_{\bm{\nu}} \bm{u}) \cdot \bm{a} = \sum_{i,j,k=1}^{d} \nu_{i} \partial_{i}(\nu_{j} \partial_{j} u_{k}) a_{k}. \end{align*} \begin{rem}\label{rem:SchmidtBoisgerault} In \cite[Remark 12]{article:SchmidtSchulz10}, the term $\mu \partial_{\bm{\nu}} (\partial_{\bm{\nu}} \bm{u}) \cdot \bm{a}$ appearing on the right hand side of \eqref{SchmidtShapeDerivHydroDynamForce} is originally given as $\sum_{i,j,k=1}^{d} \nu_{i} \frac{\partial^{2} u_{k}}{\partial x_{i} \partial x_{j}} \nu_{j} a_{k}$. 
This is related to $\partial_{\bm{\nu}}(\partial_{\bm{\nu}} \bm{u}) \cdot \bm{a}$ by the formula \begin{align}\label{interchangederivativesnu} \sum_{i,j,k=1}^{d} \nu_{i} \frac{\partial^{2} u_{k}}{\partial x_{i} \partial x_{j}} \nu_{j} a_{k} = \partial_{\bm{\nu}} (\partial_{\bm{\nu}} \bm{u}) \cdot \bm{a} - \sum_{i,j,k = 1}^{d} \nu_{i} \partial_{i} \tilde{\nu}_{j} \partial_{j} u_{k} a_{k}, \end{align} where $\tilde{\bm{\nu}} = (\tilde{\nu}_{j})_{1 \leq j \leq d}$ denotes an extension of $\bm{\nu}$ off the boundary $\Gamma$ to a neighbourhood $U \supset \Gamma$ with $\abs{\tilde{\bm{\nu}}} = 1$ near $\Gamma$ and $\tilde{\bm{\nu}} \mid_{\Gamma} = \bm{\nu}$. By \eqref{decomposition:nablau:surface}, we see that $\partial_{j} u_{k} = \partial_{\bm{\nu}} u_{k} \nu_{j}$ on $\Gamma$, and so \begin{align} \sum_{i,j,k=1}^{d} \nu_{i} \partial_{i} \tilde{\nu}_{j} \partial_{j} u_{k} a_{k} = \sum_{i,j,k=1}^{d} \nu_{i} \partial_{i} \tilde{\nu}_{j} \nu_{j} \partial_{\bm{\nu}} u_{k} a_{k} = \sum_{i,j,k=1}^{d} \tfrac{1}{2} \nu_{i} \partial_{i} (\abs{\tilde{\nu}_{j}}^{2}) \partial_{\bm{\nu}} u_{k} a_{k} = 0. \end{align} Thus, the last term in \eqref{interchangederivativesnu} is zero and we have the relation \begin{align}\label{relation:Schmidt} \sum_{i,j,k=1}^{d} \nu_{i} \frac{\partial^{2} u_{k}}{\partial x_{i} \partial x_{j}} \nu_{j} a_{k} = \partial_{\bm{\nu}} (\partial_{\bm{\nu}} \bm{u}) \cdot \bm{a}, \end{align} when $\bm{u} = \bm{0}$ on $\Gamma$. 
\end{rem} Based on Remark \ref{rem:SchmidtBoisgerault}, if $(\bm{u}, p)$ are sufficiently regular, then a short computation involving \eqref{relation:Schmidt} shows that on $\Gamma$, \begin{align*} & \; -\mu \div_{\Gamma} ((\nabla \bm{u})^{T} \bm{a}) - \mu \partial_{\bm{\nu}}(\partial_{\bm{\nu}} \bm{u}) \cdot \bm{a} + \partial_{\bm{\nu}}p (\bm{a} \cdot \bm{\nu}) + \div_{\Gamma} (p \bm{a})\\ = & \; -\mu \sum_{i,j=1}^{d} \underline{D}_{i} (\partial_{i} u_{j}) a_{j} - \mu \sum_{i,j,k=1}^{d} \nu_{i} \partial_{k} (\partial_{i} u_{j}) \nu_{k} a_{j} + \nabla p \cdot \bm{a} \\ = & \; -\mu \Delta \bm{u} \cdot \bm{a} + \nabla p \cdot \bm{a} = \bm{f} \cdot \bm{a} + (\bm{u} \cdot \nabla) \bm{u} \cdot \bm{a} = \bm{f} \cdot \bm{a}, \end{align*} where we have used the no-slip condition \eqref{NSnoslip}, and hence \eqref{SchmidtShapeDerivHydroDynamForce} is equivalent to \eqref{ShapeDerivHydroDynamForce}. \section{Derivation of the phase field formulation}\label{sec:DerivationPhaseField} The problem derived in the previous section has several drawbacks. First, it is not clear whether this problem is well-posed, i.e., whether for every $\varphi \in \Phi_{ad}^{0}$ there is a solution of the state equations \eqref{IntroStateEquSharp} and whether there exists a minimizer $(\varphi, \bm{u}, p)$ of the overall problem \eqref{IntroObjFclt}-\eqref{IntroStateEquSharp}. Second, optimizing in the space $BV(\Omega)$ is not very practical. Deriving optimality conditions is not easy and it is not clear how to perform numerical simulations for this problem. Hence, we now want to approximate the complex shape optimization problem \eqref{IntroObjFclt}-\eqref{IntroStateEquSharp} by a problem that can be treated by well-known approaches. To this end we introduce a diffuse interface version of the free boundary problem by using a phase field approach.
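Before turning to the phase field formulation, the wall identities of the previous section, $\nabla \bm{u} = \partial_{\bm{\nu}} \bm{u} \otimes \bm{\nu}$ and $(\nabla \bm{u})^{T} \bm{\nu} = \bm{0}$ on a no-slip boundary of a divergence-free field, can be verified on a hand-made test field (chosen by us for illustration; it is not a Navier--Stokes solution):

```python
import math

def u(x, y):
    # hand-made test field: divergence-free, u = 0 on the wall {y = 0}
    return (y * math.cos(x), 0.5 * y * y * math.sin(x))

h = 1e-5
def grad_u(x, y):
    # central finite differences, (grad u)_{ij} = d u_i / d x_j
    g = [[0.0, 0.0], [0.0, 0.0]]
    for i in range(2):
        g[i][0] = (u(x + h, y)[i] - u(x - h, y)[i]) / (2 * h)
        g[i][1] = (u(x, y + h)[i] - u(x, y - h)[i]) / (2 * h)
    return g

x0 = 0.7
g = grad_u(x0, 0.0)
nu = (0.0, 1.0)                                   # inner unit normal of {y > 0}
dnu_u = [g[i][0] * nu[0] + g[i][1] * nu[1] for i in range(2)]

# grad u = (d_nu u) tensor nu on the wall
err_decomp = max(abs(g[i][j] - dnu_u[i] * nu[j]) for i in range(2) for j in range(2))
# (grad u)^T nu = 0 on the wall
err_transp = max(abs(g[0][i] * nu[0] + g[1][i] * nu[1]) for i in range(2))
print(err_decomp, err_transp)                     # both within finite-difference accuracy of 0
```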
\subsection{The state equations in the phase field setting}\label{sec:PFState} In this setting, the design variable $\varphi: \Omega \to \mathbb{R}$ is now allowed to take values in all of $\mathbb{R}$, instead of only the two discrete values $\pm1$, and is required to have $H^{1}(\Omega)$ regularity. In addition to the two phases $\{\varphi = 1 \}$ (fluid region $E$) and $\{\varphi = -1\}$ (solid region $B$), we also have an interfacial region $\{-1< \varphi < 1\}$ whose width is related to a small parameter $\varepsilon >0$. By \cite{article:Modica87}, we know that the Ginzburg--Landau energy \begin{align}\label{defn:GinzburgLandau} \mathcal{E}_{\varepsilon} : H^{1}(\Omega) \to \mathbb{R}, \quad \mathcal{E}_{\varepsilon}(\varphi):= \int_{\Omega} \frac{\varepsilon}{2} \abs{\nabla \varphi}^{2} +\frac{1}{\varepsilon} \psi(\varphi)\, \mathrm{dx}\, \end{align} approximates $\varphi \mapsto c_{0} \abs{ \mathrm{D} \varphi}(\Omega) = 2c_{0} P_{\Omega}(\{ \varphi = 1 \})$ in the sense of $\Gamma$-convergence. Here, \begin{align}\label{defn:c0} c_{0} := \frac{1}{2}\int_{-1}^1\sqrt{2\psi(s)} \, \mathrm{ds}\, \end{align} and $\psi: \mathbb{R} \to \mathbb{R}$ is a potential with two equal minima at $\pm 1$, and in this paper we focus on an arbitrary double-well potential satisfying the assumption below: \begin{assumption}\label{assump:psi} Let $\psi \in C^{1,1}(\mathbb{R})$ be a non-negative function such that $\psi(s) = 0$ if and only if $s \in\{ \pm 1 \}$, and the following growth condition is fulfilled for some constants $c_{1}, c_{2} ,t_{0} > 0$ and $k \geq 2$: \begin{align*} c_{1} t^{k} \leq \psi(t) \leq c_{2} t^{k} \quad \forall t \geq t_{0}. \end{align*} \end{assumption} Additionally, we use the so-called porous medium approach for the state equations, see also \cite{GarckeHechtNS, garckehinzeetal}.
This means that we relax the non-permeability of the solid region $B$ by placing a porous medium of small permeability $(\overline{\alpha}_{\varepsilon})^{-1} \ll 1$ outside the fluid region $E$. In the interfacial region $\{-1< \varphi <1\}$ we interpolate between the equations describing the flow through the porous medium and the stationary Navier--Stokes equations by using an interpolation function $\alpha_{\varepsilon}$ satisfying the following assumption: \begin{assumption}\label{assump:alpha} We assume that $\alpha_{\varepsilon} \in C^{1,1}(\mathbb{R})$ is non-negative, with $\alpha_{\varepsilon}(1) = 0$, $\alpha_{\varepsilon}(-1)= \overline{\alpha}_{\varepsilon} > 0$, and there exist $s_{a}, s_{b} \in \mathbb{R}$ with $s_{a} \leq -1$ and $s_{b} \geq 1$ such that \begin{equation}\label{alphaepsSaSb} \begin{aligned} \alpha_{\varepsilon}(s) & = \alpha_{\varepsilon}(s_{a}) \text{ for } s \leq s_{a}, \\ \alpha_{\varepsilon}(s) & = \alpha_{\varepsilon}(s_{b}) \text{ for } s \geq s_{b}. \end{aligned} \end{equation} Moreover, we assume that the permeability $(\overline{\alpha}_{\varepsilon})^{-1}$ vanishes as $\varepsilon \searrow 0$, i.e., $\lim_{\varepsilon \searrow 0} \overline{\alpha}_{\varepsilon} = \infty$. \end{assumption} In particular, we have that \begin{align*} 0 \leq \alpha_{\varepsilon}(s) \leq \sup_{t \in [s_{a}, s_{b}]} \alpha_{\varepsilon}(t) < \infty \quad \forall s \in \mathbb{R}, \end{align*} i.e., $\alpha_{\varepsilon} \in L^\infty(\mathbb{R})$. The resulting state equations for the phase field problem are then given in the strong form by the following system: \begin{subequations}\label{IntroStateEquPhase} \begin{alignat}{2} \label{state1} \alpha_{\varepsilon}(\varphi) \bm{u} - \mu \Delta \bm{u} + (\bm{u} \cdot \nabla )\bm{u} + \nabla p &=\bm{f} && \text{ in } \Omega,\\ \label{state2} \div \bm{u} & = 0 && \text{ in }\Omega,\\ \label{state3} \bm{u} & = \bm{g} &&\text{ on } \partial \Omega.
\end{alignat} \end{subequations} Later we add $\int_{\Omega} \frac{1}{2} \alpha_{\varepsilon}(\varphi)\abs{\bm{u}}^{2}\, \mathrm{dx}\,$ to the objective functional; this ensures that in the limit $\varepsilon \searrow 0$, the velocity $\bm{u}$ vanishes outside the fluid region, and hence the medium can really be considered as non-permeable again. In what follows, we will use the function spaces \begin{align*} \bm{H}^{1}_{0,\sigma}(\Omega):=\left\{\bm{v} \in \bm{H}^{1}_{0}(\Omega) \mid \div \bm{ v} = 0 \right \},\quad \bm{H}^{1}_{\bm{g},\sigma}(\Omega) := \left\{\bm{v} \in \bm{ H}^{1}(\Omega) \mid \bm{v}|_{\partial \Omega} = \bm{g} ,\,\div \bm{v} = 0 \right \}, \end{align*} and for the pressure we use the space $L^{2}_{0}(\Omega):= \left \{ p \in L^{2}(\Omega)\mid \int_{\Omega} p \, \mathrm{dx}\, = 0 \right\}$. The function space of admissible design functions for the phase field optimization problem will be given correspondingly to \eqref{defn:PhiAd0} as \begin{align*} \Phi_{ad}:=\left \{ \varphi \in H^{1}(\Omega) \mid \int_{\Omega} \varphi \, \mathrm{dx}\, =\beta \abs{\Omega}\right\}. \end{align*} \subsection{The cost functional in the phase field setting}\label{sec:PFCostFunc} It remains to transfer the boundary integral in \eqref{IntroObjFclt} to the diffuse interface setting, where the free boundary $\Gamma$ is replaced by an interfacial region. To this end, we apply a result of \cite{article:Modica87} and approximate the perimeter regularization term with $\frac{1}{2c_{0}} \mathcal{E}_{\varepsilon}(\varphi)$. Meanwhile, keeping in mind the polar decomposition \eqref{polardecomp} and the relation \eqref{HausmeasureRelation}, we consider the vector-valued measure with density $\frac{1}{2} \nabla \varphi$ as an approximation to $\bm{\nu} \, \mathrm{d} \mathcal{H}^{d-1} \,$.
Thus, we may approximate \eqref{IntroObjFclt} with \begin{align*} \int_{\Omega} \frac{1}{2} h(x, \nabla \bm{u}, p, \nabla \varphi) \, \mathrm{dx}\, + \frac{\gamma}{2 c_{0}} \mathcal{E}_{\varepsilon}(\varphi). \end{align*} Alternatively, we may appeal to the property of equipartition for the Ginzburg--Landau energy, i.e., it holds asymptotically that (see for instance \eqref{equipartition} in Section \ref{sec:SharpInterfaceAsymp}, or \cite[Section 5.1]{article:Chen96}): \begin{align*} \int_{\Omega} \abs{\frac{1}{\varepsilon} \psi(\varphi_{\varepsilon})- \frac{\varepsilon}{2} \abs{ \nabla \varphi_{\varepsilon}}^{2}} \, \mathrm{dx}\, \sim 0 \text{ as } \varepsilon \searrow 0. \end{align*} Hence, together with \eqref{HausmeasureRelation}, and the fact that the $\Gamma$-limit of $\mathcal{E}_{\varepsilon}(\varphi)$ is the functional $c_{0} \abs{ \mathrm{D} \varphi}(\Omega)$, defined for functions with values in $\{\pm 1\}$, and $+\infty$ otherwise, we have, loosely speaking, \begin{align}\label{HausdroffPsiApprox} 2c_{0} \mathcal H^{d-1} \lefthalfcup \Gamma \sim c_{0} \abs{ \mathrm{D} \varphi} \sim \frac{\varepsilon}{2} \abs{\nabla \varphi}^{2} + \frac{1}{\varepsilon} \psi(\varphi) \sim \frac{2}{\varepsilon}\psi(\varphi), \end{align} where $\frac{\varepsilon}{2} \abs{\nabla \varphi}^{2} + \frac{1}{\varepsilon} \psi(\varphi)$ and $\frac{2}{\varepsilon} \psi(\varphi)$ are interpreted as measures on $\Omega$, by using their values as densities. Here, we have identified $\Gamma = \partial \{\varphi = 1\} \cap \Omega$ with its reduced boundary; then it holds that $\frac{1}{2} \abs{ \mathrm{D} \varphi} = \abs{ \mathrm{D} \chi_{\{\varphi=1\}}} =\mathcal{H}^{d-1} \lefthalfcup \Gamma$, see for instance \cite[Theorem 3.59]{book:Ambrosio}. The generalised unit normal $\bm{\nu}$ can be approximated by $\frac{\nabla\varphi}{\abs{\nabla\varphi}}$.
To rewrite this into a more convenient form, which is in particular differentiable with respect to $\varphi$, we use equipartition of energy and replace $\abs{\nabla\varphi}$ by $\frac{1}{\varepsilon}\sqrt{2\psi(\varphi)}$, and obtain the approximation \begin{align}\label{normalHausdiffuseapprox} c_{0} \bm{\nu} \mathrm d\mathcal H^{d-1} \sim \varepsilon \frac{\nabla \varphi}{\sqrt{2\psi(\varphi)}} \frac{1}{\varepsilon}\psi(\varphi) \, \mathrm{dx}\, = \sqrt{\frac{\psi(\varphi)}{2}} \nabla \varphi \, \mathrm{dx}\,. \end{align} Hence, we may also approximate \eqref{IntroObjFclt} with \begin{align}\label{ObjFctlDerivationPhaseField1} \frac{1}{c_{0}} \int_{\Omega} \sqrt{\tfrac{\psi(\varphi)}{2}} h(x, \nabla \bm{u}, p, \nabla \varphi) \, \mathrm{dx}\, + \frac{\gamma}{2c_{0}} \mathcal{E}_{\varepsilon}(\varphi), \end{align} when we extend $h(x, \nabla \bm{u}, p, \cdot)$ from unit vectors to all of $\mathbb{R}^{d}$ such that $h$ is positively homogeneous of degree one in its last variable. This allows us to extract the factor $\sqrt{\frac{\psi(\varphi)}{2}}$. We note that in the bulk regions $\{\varphi=\pm1\}$, we have $\psi(\varphi)=0$ and hence the functional \eqref{ObjFctlDerivationPhaseField1} is not differentiable with respect to $\varphi$. Hence, we add a small constant $\delta_{\varepsilon}$ to $\psi$ in order to have $\psi(s) + \delta_{\varepsilon} > 0$ for all $s \in \mathbb{R}$. However, we neglect the addition of this constant for the Ginzburg--Landau regularization $\mathcal{E}_{\varepsilon}(\varphi)$ in the objective functional, because adding a constant to the cost functional will not change the optimization problem. In fact, for the analysis of the phase field problem, it is only important that $\delta_{\varepsilon} > 0$. In Section \ref{sec:SharpInterfaceAsymp}, where we perform a formal asymptotic analysis, we will require that $\delta_{\varepsilon} \to 0$ at a superlinear rate as $\varepsilon \searrow 0$ (see Remark \ref{rem:deltaepslinearscaling}).
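The constant $c_{0}$ from \eqref{defn:c0}, the energy of a single diffuse interface, equipartition, and the mass of the measure in \eqref{normalHausdiffuseapprox} can all be checked in one dimension. The sketch below uses the standard quartic double-well $\psi(s) = \frac{1}{4}(1-s^{2})^{2}$, which is only one admissible choice under Assumption \ref{assump:psi}, together with the associated optimal profile $\tanh(x/(\sqrt{2}\varepsilon))$.

```python
import math

# Sketch with the quartic double-well psi(s) = (1 - s^2)^2 / 4 (one admissible
# choice; the text only assumes a general double-well).  We compute
# c_0 = (1/2) int sqrt(2 psi) ds and check, on the 1-D profile
# phi(x) = tanh(x / (sqrt(2) eps)), that (i) one interface carries
# Ginzburg--Landau energy close to 2 c_0, (ii) equipartition holds, and
# (iii) int sqrt(psi(phi)/2) phi' dx is close to c_0.
psi = lambda s: 0.25 * (1.0 - s * s) ** 2

def midpoint(f, a, b, n=8000):
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

c0 = 0.5 * midpoint(lambda s: math.sqrt(2.0 * psi(s)), -1.0, 1.0)

eps = 0.02
phi = lambda x: math.tanh(x / (math.sqrt(2.0) * eps))
dphi = lambda x: (1.0 - phi(x) ** 2) / (math.sqrt(2.0) * eps)

energy = midpoint(lambda x: 0.5 * eps * dphi(x) ** 2 + psi(phi(x)) / eps, -1.0, 1.0)
equip = midpoint(lambda x: abs(psi(phi(x)) / eps - 0.5 * eps * dphi(x) ** 2), -1.0, 1.0)
normal_mass = midpoint(lambda x: math.sqrt(psi(phi(x)) / 2.0) * dphi(x), -1.0, 1.0)

print(c0, math.sqrt(2.0) / 3.0)   # c_0 = sqrt(2)/3 for this psi
print(energy, 2.0 * c0)           # one interface carries energy ~ 2 c_0
print(equip)                      # ~ 0: equipartition (exact for this profile)
print(normal_mass, c0)            # diffuse normal measure has total mass ~ c_0
```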
\subsection{Optimization problem in the phase field setting} Combining the above ideas, we arrive at the following phase field approximation: \begin{equation}\label{IntroObjFcltPhase} \begin{aligned} \min_{(\varphi,\bm{u}, p)} J_{\varepsilon}^{h} \left(\varphi,\bm{u}, p \right) := & \int_{\Omega} \frac{1}{2} \alpha_{\varepsilon}(\varphi) \abs{\bm{u}}^{2} + \frac{1}{2c_{0}} \left (\frac{\varepsilon}{2} \abs{\nabla\varphi}^{2} + \frac{1}{\varepsilon}\psi(\varphi) \right )\, \mathrm{dx}\, \\ + & \int_{\Omega} \mathcal{M}(\varphi) h(x, \nabla \bm{u}, p, \nabla \varphi) \, \mathrm{dx}\,, \end{aligned} \end{equation} subject to $\varphi \in \Phi_{ad}$ and $(\bm{u},p) \in \bm{H}^{1}_{\bm{g},\sigma}(\Omega)\times L^{2}_{0}(\Omega)$ fulfilling \begin{equation}\label{IntroStateEquPhaseWeak} \int_{\Omega} \alpha_{\varepsilon}(\varphi)\bm{u}\cdot \bm{v} + \mu \nabla \bm{u} \cdot \nabla \bm{v} + \left(\bm{u} \cdot \nabla\right) \bm{u} \cdot\bm{v} - p \div \bm{v} \, \mathrm{dx}\, = \int_{\Omega} \bm{f} \cdot \bm{v} \, \mathrm{dx}\,\quad \forall \bm{v} \in \bm{H}^{1}_{0}(\Omega). \end{equation} Notice that \eqref{IntroStateEquPhaseWeak} is a weak formulation of the state equations \eqref{IntroStateEquPhase}. Moreover, based on the discussions in Section \ref{sec:PFCostFunc}, the function $\mathcal{M}(\varphi)$ can be chosen to be \begin{align}\label{defn:mathcalL} \mathcal{M}(\varphi) = \frac{1}{2} \text{ or } \mathcal{M}(\varphi) = \frac{1}{c_{0}} \sqrt{\tfrac{\psi(\varphi) + \delta_{\varepsilon}}{2}}. \end{align} The phase field approximation for the shape optimization problem with the hydrodynamic force \eqref{HydroDynamForce} is obtained from \eqref{IntroObjFcltPhase} by substituting \begin{align*} h(x, \nabla \bm{u}, p, \nabla \varphi) = \nabla \varphi \cdot (\mu (\nabla \bm{u} + (\nabla \bm{u})^{T}) - p \, \bm{\mathrm{I}}\,) \bm{a}.
\end{align*} That is, \begin{equation}\label{ObjFunctHydroPhase} \begin{aligned} \min_{(\varphi,\bm{u}, p)} J_{\varepsilon} \left(\varphi,\bm{u}, p \right) := & \int_{\Omega} \frac{1}{2} \alpha_{\varepsilon}(\varphi) \abs{\bm{u}}^{2} + \frac{\gamma}{2c_{0}} \left (\frac{\varepsilon}{2} \abs{\nabla\varphi}^{2} + \frac{1}{\varepsilon}\psi(\varphi) \right )\, \mathrm{dx}\, \\ + & \int_{\Omega} \mathcal{M}(\varphi) \nabla \varphi \cdot (\mu (\nabla \bm{u} + (\nabla \bm{u})^{T}) - p \, \bm{\mathrm{I}}\,) \bm{a} \, \mathrm{dx}\,, \end{aligned} \end{equation} subject to $\varphi \in \Phi_{ad}$ and $(\bm{u}, p) \in \bm{H}^{1}_{\bm{g}, \sigma}(\Omega) \times L^{2}_{0}(\Omega)$ fulfilling (\ref{IntroStateEquPhaseWeak}). \subsection{Possible modifications} \subsubsection{Double obstacle potential} We could also use a double obstacle potential $\psi : \mathbb{R} \to \mathbb{R} \cup \{+\infty\}$ instead of the double-well potential in Assumption \ref{assump:psi}, i.e., \begin{align}\label{doubleobstacle} \psi(\varphi)= \begin{cases} \frac{1}{2}(1-\varphi^{2}) & \text{ if } \varphi \in [-1,1], \\ +\infty & \text{ if } \abs{\varphi} > 1. \end{cases} \end{align} Then, one has to treat the constraint $\abs{\varphi} \leq 1$ a.e. in the necessary optimality system either by writing the gradient equation in the form of a variational inequality or by including additional Lagrange parameters. Numerical simulations could be implemented by a Moreau--Yosida relaxation as in \cite{garckehinzeetal}. A Moreau--Yosida relaxation also leads to a differentiable double-well potential; we therefore restrict ourselves to a differentiable potential here, since both settings can then be included in the above-mentioned way.
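One common differentiable substitute produced by such a relaxation can be sketched as follows; the relaxed potential $\psi_{s}$ and the penalization parameter $s$ are our illustration and are not taken from the cited reference:

```latex
% Moreau--Yosida type relaxation of the double obstacle potential
% \eqref{doubleobstacle}: for a penalization parameter s > 0 set
\begin{align*}
\psi_{s}(\varphi) = \frac{1}{2}\left(1 - \varphi^{2}\right)
  + \frac{s}{2}\left( \max(0, \varphi - 1)^{2} + \min(0, \varphi + 1)^{2} \right).
\end{align*}
% \psi_{s} is continuously differentiable on \mathbb{R}, penalizes
% violations of |\varphi| \leq 1 quadratically, and formally recovers
% \eqref{doubleobstacle} in the limit s \to \infty.
```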
\subsubsection{Inequality constraint for fluid volume} Another possible modification of the problem setting would be to replace the equality constraint $\int_{\Omega} \varphi \, \mathrm{dx}\,= \beta \abs{\Omega}$ by an inequality constraint $\int_{\Omega} \varphi \, \mathrm{dx}\, \leq \beta \abs{\Omega}$. This makes sense in settings where a maximal amount of fluid that can be used during the optimization process is prescribed, rather than the exact volume fraction. This would not change the analysis, except that the Lagrange multiplier for this constraint would have a sign and an additional complementarity condition would appear in the optimality system. \subsubsection{Objective functionals with no dependency on the unit normal} We may also consider objective functionals with no dependence on the normal, i.e., the boundary objective functional \eqref{IntroObjFclt} takes the form \begin{align}\label{ObjFucNoDep} \int_{\Gamma} k(x, \nabla \bm{u}, p) \, \mathrm{d} \mathcal{H}^{d-1} \,. \end{align} An example of \eqref{ObjFucNoDep} is the best approximation to a target surface pressure distribution in the sense of least squares: \begin{align*} k(x, \nabla \bm{u}, p) = \frac{1}{2}\abs{p - p_{d}}^{2}, \end{align*} where $p_{d}$ denotes the target surface pressure distribution. Then, using \eqref{HausdroffPsiApprox}, we deduce that the phase field approximation of \eqref{ObjFucNoDep} is given by \begin{align*} \frac{1}{c_{0}} \int_{\Omega} \frac{1}{\varepsilon} \psi(\varphi) k(x, \nabla \bm{u}, p) \, \mathrm{dx}\,. \end{align*} If $k(\cdot, \cdot, \cdot)$ satisfies assumptions similar to Assumptions \ref{assump:generalh} and \ref{assump:regularityh} (see below), one can adapt the proofs of Theorems \ref{t:PhaseFieldExistMin} and \ref{t:GeneralisationOptimality} to obtain existence of a minimizer and the corresponding first order necessary optimality conditions.
\section{Analysis of the phase field problem}\label{sec:AnalysisPhaseField} In this section we want to analyze the phase field problem \eqref{IntroObjFcltPhase}-\eqref{IntroStateEquPhaseWeak} derived in the previous section as a diffuse interface approximation of the shape optimization problem of minimizing \eqref{generalfunctional} for a Navier--Stokes flow. For this purpose, we introduce some notation for the nonlinearity in the stationary Navier--Stokes equations. We define the trilinear form \begin{align*} b & : \bm{H}^{1}(\Omega) \times \bm{H}^{1}(\Omega) \times \bm{H}^{1}(\Omega) \to \mathbb{R}, \\ b(\bm{u}, \bm{v}, \bm{w})& := \int_{\Omega}\left(\bm{u} \cdot \nabla \right)\bm{v} \cdot \bm{w} \, \mathrm{dx}\, = \sum_{i,j=1}^{d} \int_{\Omega} u_{i} \partial_{i} v_{j} w_{j} \, \mathrm{dx}\,.\end{align*} We directly obtain the following properties: \begin{lem}\label{l:PropertiesTrilinearForm} The form $b$ is well-defined and continuous in the space $\bm{H}^{1}(\Omega) \times \bm{H}^{1}(\Omega) \times \bm{H}^{1}_{0} (\Omega)$. Moreover we have: \begin{align}\label{e:ContinuityEstimateTrilinearForm} \abs{b(\bm{u}, \bm{v}, \bm{w})} \leq K_{\Omega} \norm{\nabla\bm{u}}_{\bm{L}^{2}(\Omega)} \norm{\nabla \bm{v}}_{\bm{L}^{2}(\Omega)} \norm{\nabla \bm{w}}_{\bm{L}^{2}(\Omega)} \quad \forall \bm{u}, \bm{w} \in \bm{H}^{1}_{0}(\Omega), \bm{v} \in \bm{H}^{1}(\Omega), \end{align} with \begin{align}\label{KOmega} K_{\Omega} = \begin{cases} \frac{1}{2}\abs{\Omega}^{1/2} & \text{ if } d = 2, \\ \frac{2\sqrt{2}}{3}\abs{\Omega}^{1/6} & \text{ if } d = 3. 
\end{cases} \end{align} Additionally, the following properties are satisfied: \begin{align} \label{e:TrilinearformLastTwoEqualZero} b\left(\bm{u}, \bm{v}, \bm{v}\right) =0 \quad & \forall \bm{u}\in \bm{H}^{1}(\Omega), \div \bm{u} = 0, \quad \bm{v} \in \bm{H}^{1}_{0}(\Omega), \\ \label{e:TrilinearformLastTwoSwitch} b\left(\bm{u}, \bm{v}, \bm{w}\right) = -b\left(\bm{u}, \bm{w}, \bm{v}\right) \quad & \forall \bm{u} \in \bm{H}^{1}(\Omega), \div \bm{u} = 0 , \quad \bm{v}, \bm{w} \in \bm{H}^{1}_{0}(\Omega). \end{align} \end{lem} \begin{proof} The stated continuity and estimate \eqref{e:ContinuityEstimateTrilinearForm} can be found in \cite[Lemma IX.1.1]{book:Galdi} and \eqref{e:TrilinearformLastTwoEqualZero}-\eqref{e:TrilinearformLastTwoSwitch} are considered in \cite[Lemma IX.2.1]{book:Galdi}. \end{proof} Besides, we have the following important continuity property: \begin{lem}\label{l:TrilinearFormStrongCont} Let $(\bm{u}_{n})_{n\in\mathbb{N}}, (\bm{v}_{n})_{n\in\mathbb{N}} , (\bm{w}_{n})_{n \in \mathbb{N}} \subset \bm{H}^{1}(\Omega)$, $ \bm{u},\bm{v}, \bm{w} \in \bm{H}^{1}(\Omega)$ be such that $\bm{u}_{n} \rightharpoonup \bm{u}$, $\bm{v}_{n} \rightharpoonup \bm{v}$ and $\bm{w}_{n} \rightharpoonup \bm{w}$ in $\bm{H}^{1}(\Omega)$ where $\bm{v}_{n}|_{\partial \Omega} = \bm{v}|_{\partial \Omega}$ for all $n \in \mathbb{N}$. Then \begin{align}\label{e:bbilinearformTwoconvergence} \lim_{n\to\infty} b (\bm{u}_{n}, \bm{v}_{n}, \tilde{\bm{w}}) = b(\bm{u}, \bm{v}, \tilde{\bm{w}}) \quad \forall \tilde{\bm{w}} \in \bm{H}^{1}(\Omega). \end{align} Moreover, one can show that \begin{align}\label{e:StatNSStrongCong} \bm{H}^{1}(\Omega) \times \bm{H}^{1}(\Omega) \ni (\bm{u}, \bm{v})\mapsto b(\bm{u}, \cdot, \bm{v}) \in \bm{H}^{-1}(\Omega) \end{align} is strongly continuous, and thus \begin{align}\label{e:bbilinearformThreeconvergence} \lim_{n \to \infty} b(\bm{u}_{n}, \bm{v}_{n}, \bm{w}_{n}) = b(\bm{u}, \bm{v}, \bm{w}). 
\end{align} \end{lem} \begin{proof} We apply the idea of \cite[Lemma 72.5]{book:Zeidler4} and make in particular use of the compact embedding $\bm{H}^{1}(\Omega)\hookrightarrow \bm{L}^{3}(\Omega)$ and the continuous embedding $\bm{H}^{1}(\Omega) \hookrightarrow \bm{L}^{6}(\Omega)$. The strong continuity of \eqref{e:StatNSStrongCong} follows from \cite[Lemma 72.5]{book:Zeidler4}. In addition, from the boundedness of the sequences $(\bm{u}_{n})_{n \in \mathbb{N}}, (\bm{v}_{n})_{n \in \mathbb{N}}, (\bm{w}_{n})_{n \in \mathbb{N}}$, and \eqref{e:StatNSStrongCong}, we have \begin{align*} & \; \abs{b(\bm{u}_{n}, \bm{v}_{n}, \bm{w}_{n}) - b(\bm{u}, \bm{v}, \bm{w})} \\ \leq & \; \abs{b(\bm{u}_{n} - \bm{u}, \bm{v}_{n}, \bm{w}_{n})} + \abs{b(\bm{u}, \bm{v}_{n}, \bm{w}_{n} - \bm{w})} + \abs{b(\bm{u}, \bm{v}_{n} - \bm{v}, \bm{w})} \\ \leq & \; \underbrace{\norm{\bm{u}_{n} - \bm{u}}_{\bm{L}^{3}(\Omega)}}_{\xrightarrow{ n \to \infty}0} \underbrace{\norm{\nabla \bm{v}_{n}}_{\bm{L}^{2}(\Omega)} \norm{\bm{w}_{n}}_{\bm{L}^{6}(\Omega)}}_{\leq C} + \underbrace{\norm{\bm{u}}_{\bm{L}^{6}(\Omega)} \norm{\nabla \bm{v}_{n}}_{\bm{L}^{2}(\Omega)}}_{\leq C} \underbrace{\norm{\bm{w}_{n} - \bm{w}}_{\bm{L}^{3}(\Omega)}}_{\xrightarrow{ n \to \infty}0} \\ + & \; \underbrace{\abs{b(\bm{u}, \bm{v}_{n} - \bm{v}, \bm{w})}}_{\xrightarrow{ n \to \infty} 0 \text{ by } \eqref{e:StatNSStrongCong}}. \end{align*} \end{proof} \subsection{Existence results} In this section, we want to analyze the solvability of the state equations \eqref{IntroStateEquPhaseWeak}. Afterwards, we will show existence of a minimizer for the overall optimization problem \eqref{IntroObjFcltPhase}-\eqref{IntroStateEquPhaseWeak}. \begin{lem}\label{l:StateEquSolvable} Let Assumption \ref{assump:alpha} hold.
Then, for every $\varphi \in L^{1}(\Omega)$ there exists at least one pair $(\bm{u}, p) \in \bm{H}^{1}_{\bm{g},\sigma}(\Omega) \times L^{2}_{0}(\Omega)$ such that the state equations \eqref{IntroStateEquPhase} are fulfilled in the sense of \eqref{IntroStateEquPhaseWeak}. Any such solution $(\bm{u},p)$ fulfils the estimate \begin{align}\label{e:AprioriSolOpE} \norm{\bm{u}}_{\bm{H}^{1}(\Omega)} + \norm{p}_{L^2(\Omega)} \leq C(\mu,\alpha_{\varepsilon},\bm{f},\bm{g},\Omega), \end{align} with a constant $C = C(\mu,\alpha_{\varepsilon}, \bm{f}, \bm{g},\Omega)$ independent of $\varphi$. \end{lem} \begin{proof} We refer to \cite[Lemma 4]{GarckeHechtNS}, where the existence and uniqueness statements for the velocity field $\bm{u}$ are discussed. We point out that the restriction to functions $\varphi \in L^{1}(\Omega)$ with $\abs{\varphi} \leq 1$ a.e. in $\Omega$ used in \cite{GarckeHechtNS} is only necessary because the function $\alpha_{\varepsilon}$ in \cite{GarckeHechtNS} is only defined on the interval $[-1,1]$. Of course, the same arguments apply in our case, where $\alpha_{\varepsilon}$ is bounded and $\varphi \in L^{1}(\Omega)$. Now for every $\varphi \in L^{1}(\Omega)$ and $\bm{u} \in \bm{H}^{1}_{\bm{g},\sigma}(\Omega)$ fulfilling \begin{align*} \int_{\Omega} \alpha_{\varepsilon}(\varphi) \bm{u} \cdot \bm{v} + \mu \nabla \bm{u} \cdot \nabla \bm{v} + (\bm{u} \cdot \nabla) \bm{u} \cdot \bm{v} \, \mathrm{dx}\, =\int_{\Omega} \bm{f} \cdot \bm{v} \, \mathrm{dx}\, \quad \forall \bm{v} \in \bm{H}^{1}_{0,\sigma}(\Omega), \end{align*} we find by \cite[Lemma II.2.1.1]{book:Sohr} a unique $p\in L^{2}_{0}(\Omega)$ such that \eqref{IntroStateEquPhaseWeak} together with \begin{align*} \norm{p}_{L^{2}(\Omega)} \leq C(\Omega) \norm{\alpha_{\varepsilon}(\varphi) \bm{u} - \mu \Delta \bm{u} + (\bm{u} \cdot \nabla) \bm{u} - \bm{f}}_{\bm{H}^{-1}(\Omega)} \end{align*} is fulfilled. Combining this with the previous statements, we conclude the lemma.
\end{proof} This motivates the definition of a set-valued solution operator \begin{align}\label{defn:soluoperator} \bm{S}_{\varepsilon}(\varphi) := \{(\bm{u}, p) \in \bm{H}^{1}_{\bm{g},\sigma}(\Omega) \times L^{2}_{0}(\Omega) \mid (\bm{u}, p) \text{ fulfil }\eqref{IntroStateEquPhaseWeak} \} \text{ for }\varphi\in L^{1}(\Omega). \end{align} \begin{rem}\label{rem:uniquenessSolnOperator} If there is some $(\bm{u}, p) \in \bm{S}_{\varepsilon}(\varphi)$ with $\norm{\nabla \bm{u}}_{\bm{L}^{2}(\Omega)} < \frac{\mu}{K_{\Omega}}$, where $K_{\Omega}$ is defined in \eqref{KOmega}, then $\bm{S}_{\varepsilon}(\varphi) = \{(\bm{u}, p)\}$. That is, there is exactly one solution of \eqref{IntroStateEquPhaseWeak} corresponding to $\varphi$ (see for instance \cite[Lemma 11.5]{thesis:Hecht} or \cite[Lemma 5]{GarckeHechtNS}). \end{rem} Moreover, we show a certain continuity property of the solution operator: \begin{lem}\label{l:SolOpCont} Under Assumption \ref{assump:alpha}, assume $(\varphi_{k})_{k \in \mathbb{N}}\subset L^{1}(\Omega)$ converges strongly to $\varphi \in L^{1}(\Omega)$ in the $L^{1}$-norm and $(\bm{u}_{k} ,p_{k})_{k \in \mathbb{N}} \subset \bm{H}^{1}(\Omega)\times L^{2}(\Omega)$ are given such that $(\bm{u}_{k}, p_{k}) \in \bm{S}_{\varepsilon}(\varphi_{k})$ for all $k \in \mathbb{N}$. Then there is a subsequence, which will be denoted by the same, such that $(\bm{u}_{k}, p_{k})_{k \in \mathbb{N}}$ converges strongly in $\bm{H}^{1}(\Omega) \times L^{2}(\Omega)$ to some element $(\bm{u},p) \in \bm{S}_{\varepsilon}(\varphi)$. \end{lem} \begin{proof} Let $(\varphi_{k})_{k \in \mathbb{N}}$ and $(\bm{u}_{k}, p_{k})_{k \in \mathbb{N}}$ be chosen as in the statement. By passing to another subsequence, denoted the same, we can without loss of generality assume that $\varphi_{k} \to \varphi$ almost everywhere.
Invoking \eqref{e:AprioriSolOpE}, we obtain a uniform bound on $(\bm{u}_{k}, p_{k})$ in $\bm{H}^{1}(\Omega) \times L^{2}(\Omega)$ because $(\bm{u}_{k}, p_{k}) \in \bm{S}_{\varepsilon}(\varphi_{k})$. And so there is a subsequence, which will be denoted by the same, such that $\bm{u}_{k}$ converges weakly in $\bm{H}^{1}(\Omega)$ and strongly in $\bm{L}^{2}(\Omega)$ to some limit element $\bm{u} \in \bm{ H}^{1}_{\bm{g}, \sigma}(\Omega)$ and $p_{k}$ converges weakly in $L^{2}(\Omega)$ to some limit element $p \in L^{2}_{0}(\Omega)$. We now aim to show that \begin{align*} F_{k} & : \bm{H}^{1}_{\bm{g}, \sigma}(\Omega) \to \mathbb{R}, \\ F_{k}(\bm{v}) & := \int_{\Omega} \frac{1}{2}\alpha_{\varepsilon}(\varphi_{k}) \abs{\bm{v}}^{2} +\frac{\mu}{2}\abs{\nabla \bm{v}}^{2} + (\bm{u}_{k} \cdot \nabla) \bm{u}_{k} \cdot \bm{v} - \bm{f} \cdot \bm{v} \, \mathrm{dx}\,, \end{align*} $\Gamma$-converges in $\bm{H}^{1}_{\bm{g},\sigma}(\Omega)$ equipped with the weak topology to \begin{align*} F_{\infty} &: \bm{H}^{1}_{\bm{g},\sigma}(\Omega) \to \mathbb{R}, \\ F_{\infty}(\bm{v}) & := \int_{\Omega}\frac{1}{2} \alpha_{\varepsilon} (\varphi) \abs{\bm{v}}^{2} + \frac{\mu}{2} \abs{\nabla \bm{v}}^{2} + (\bm{u} \cdot \nabla ) \bm{u} \cdot \bm{v} - \bm{f} \cdot \bm{v} \, \mathrm{dx}\,, \end{align*} as $k \to \infty$. To see this we first notice that for any sequence $(\bm{v}_{k})_{k \in \mathbb{N}}\subseteq \bm{H}^{1}_{\bm{g},\sigma}(\Omega)$ converging weakly in $\bm{H}^{1}(\Omega)$ to $\bm{v} \in \bm{H}^{1}_{\bm{g},\sigma}(\Omega)$, by Fatou's lemma it holds that \begin{align*} \int_{\Omega} \alpha_{\varepsilon}(\varphi)\abs{\bm{v}}^{2}\, \mathrm{dx}\, \leq \liminf_{k \to \infty} \int_{\Omega} \alpha_{\varepsilon}(\varphi_{k})\abs{\bm{v}_{k}}^{2} \, \mathrm{dx}\,. 
\end{align*} Applying the boundedness and continuity properties of the trilinear form $b(\cdot, \cdot, \cdot)$, see Lemmas \ref{l:PropertiesTrilinearForm} and \ref{l:TrilinearFormStrongCont}, we can deduce that $\lim_{k \to \infty} b(\bm{u}_{k}, \bm{u}_{k}, \bm{v}_{k}) = b(\bm{u}, \bm{u}, \bm{v})$. As the remaining terms of $F_k$ are weakly lower semicontinuous in $\bm{H}^1(\Omega)$ and independent of $\varphi_{k}$, we directly obtain \begin{align*} F_{\infty}(\bm{v}) \leq \liminf_{k\to\infty} F_{k}(\bm{v}_{k}). \end{align*} Let $\bm{v} \in \bm{H}^{1}_{\bm{g},\sigma}(\Omega)$ be given. We will show that the constant sequence $(\bm{v})_{k \in \mathbb{N}}$ defines a recovery sequence. For this purpose, we notice that due to the boundedness and continuity of $\alpha_{\varepsilon}$, we have from Lebesgue's dominated convergence theorem \begin{align}\label{convergenceAlphaEpsterm} \lim_{k \to \infty} \int_{\Omega} \alpha_{\varepsilon}(\varphi_{k})\abs{\bm{v}}^{2} \, \mathrm{dx}\, = \int_{\Omega} \alpha_{\varepsilon}(\varphi) \abs{\bm{v}}^{2} \, \mathrm{dx}\,. \end{align} Invoking \eqref{e:bbilinearformTwoconvergence} in Lemma \ref{l:TrilinearFormStrongCont}, we deduce that \begin{align*} \lim_{k \to \infty} b(\bm{u}_{k}, \bm{u}_{k}, \bm{v}) = b(\bm{u}, \bm{u}, \bm{v}), \end{align*} and thus, we obtain that $\lim_{k \to \infty} F_{k}(\bm{v}) = F_{\infty}(\bm{v})$. This shows that the $\Gamma$-limit of $(F_{k})_{k \in \mathbb{N}}$ in $\bm{H}^{1}_{\bm{g}, \sigma}(\Omega)$ with respect to the weak topology equals $F_{\infty}$. Now we notice that $\bm{u}_{k}$ is exactly the unique minimizer of $F_{k}$ in $\bm{H}^{1}_{\bm{g},\sigma}(\Omega)$, as it fulfils by definition the necessary and sufficient first order optimality conditions for the convex optimization problem $\min_{\bm{u} \in \bm{H}^{1}_{\bm{g},\sigma}(\Omega)}F_{k}(\bm{u})$.
Hence, the weak $\bm{H}^{1}(\Omega)$ limit of $(\bm{u}_{k})_{k \in \mathbb{N}}$, which is $\bm{u} \in \bm{H}^{1}_{\bm{g},\sigma}(\Omega)$, is the unique solution of $\min_{\bm{u} \in \bm{H}^{1}_{\bm{g},\sigma}(\Omega)} F_{\infty}(\bm{u})$, thus it holds that \begin{align}\label{e:SEpsContinuousStateEquDivForm} \int_{\Omega} \alpha_{\varepsilon}(\varphi) \bm{u} \cdot \bm{v} + \mu \nabla \bm{u} \cdot \nabla \bm{v} + (\bm{u} \cdot \nabla) \bm{u} \cdot \bm{v} \, \mathrm{dx}\, = \int_{\Omega} \bm{f} \cdot \bm{v} \, \mathrm{dx}\, \quad \forall \bm{v} \in \bm{H}^{1}_{0,\sigma}(\Omega). \end{align} By \cite[Lemma II.2.1.1]{book:Sohr} we can associate to \eqref{e:SEpsContinuousStateEquDivForm} a unique $\tilde{p} \in L^{2}_{0}(\Omega)$ such that \eqref{IntroStateEquPhaseWeak} is fulfilled, and hence $\tilde{p} = p$. Altogether we have shown $(\bm{u}, p)\in \bm{S}_{\varepsilon}(\varphi)$. To show the strong convergence in $\bm{H}^{1}(\Omega) \times L^{2}(\Omega)$, we note that from the $\Gamma$-convergence of $(F_{k})_{k \in \mathbb{N}}$ to $F_{\infty}$ we obtain additionally that $\lim_{k \to \infty} F_{k}(\bm{u}_{k}) = F_{\infty}(\bm{u})$. Invoking Lemma \ref{l:ConvPropertiesAlphaEpsTerm} below we find \begin{align*} \lim_{k \to \infty} \int_{\Omega} \alpha_{\varepsilon}(\varphi_{k}) \abs{\bm{u}_{k}}^{2} \, \mathrm{dx}\, = \int_{\Omega} \alpha_{\varepsilon}(\varphi) \abs{\bm{u}}^{2} \, \mathrm{dx}\,. \end{align*} In addition, by means of \eqref{e:bbilinearformThreeconvergence} from Lemma \ref{l:TrilinearFormStrongCont} we have \begin{align*} \lim_{k \to \infty} b(\bm{u}_{k}, \bm{u}_{k}, \bm{u}_{k}) = b(\bm{u}, \bm{u}, \bm{u}). \end{align*} These two results allow us to deduce from the convergence of the minimal functional values of $(F_{k})_{k \in \mathbb{N}}$ that $\lim_{k \to \infty} \int_{\Omega} \abs{\nabla \bm{u}_{k}}^{2} \, \mathrm{dx}\, = \int_{\Omega} \abs{\nabla \bm{u}}^{2} \, \mathrm{dx}\,$. 
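The passage from this norm convergence to strong convergence is the standard Hilbert space argument; a sketch in our notation, using the strong $\bm{L}^{2}(\Omega)$ convergence of $(\bm{u}_{k})_{k \in \mathbb{N}}$ obtained earlier in the proof:

```latex
% Weak convergence plus convergence of norms implies strong convergence:
% the L^2 norms and the gradient L^2 norms of u_k converge, so the full
% H^1 norms converge, and the weak convergence gives
% (u_k, u)_{H^1} -> \norm{u}_{H^1}^2. Hence
\begin{align*}
\norm{\bm{u}_{k} - \bm{u}}_{\bm{H}^{1}(\Omega)}^{2}
= \norm{\bm{u}_{k}}_{\bm{H}^{1}(\Omega)}^{2}
- 2 \left( \bm{u}_{k}, \bm{u} \right)_{\bm{H}^{1}(\Omega)}
+ \norm{\bm{u}}_{\bm{H}^{1}(\Omega)}^{2}
\xrightarrow{k \to \infty} 0.
\end{align*}
```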
Then, together with $\bm{u}_{k} \rightharpoonup \bm{u}$ in $\bm{H}^{1}(\Omega)$ this yields that $\lim_{k\to\infty}\norm{\bm{u}_{k} - \bm{u}}_{\bm{H}^{1}(\Omega)}=0$. Subtracting the state equations \eqref{IntroStateEquPhaseWeak} written for $\varphi$ from the state equations \eqref{IntroStateEquPhaseWeak} written for $\varphi_{k}$, we find from Lemma \ref{l:ConvPropertiesAlphaEpsTerm} below that \begin{align*} \abs{\int_{\Omega} (p_{k} - p) \div \bm{v} \, \mathrm{dx}\,} &=\Big|\int_{\Omega} (\alpha_{\varepsilon}(\varphi_{k}) \bm{u}_{k} - \alpha_{\varepsilon}(\varphi) \bm{u}) \cdot \bm{v} + \mu \nabla( \bm{u}_{k} - \bm{u}) \cdot \nabla \bm{v} \, \mathrm{dx}\, \\ & + b (\bm{u}_{k}, \bm{u}_{k}, \bm{v}) - b(\bm{u}, \bm{u}, \bm{v})\Big| \\ &\leq \underbrace{\norm{\alpha_{\varepsilon}(\varphi_{k}) \bm{u}_{k} - \alpha_{\varepsilon}(\varphi) \bm{u}}_{\bm{L}^{2}(\Omega)}}_{\xrightarrow{k \to \infty}0} \norm{\bm{v}}_{\bm{L}^{2}(\Omega)} + \mu \underbrace{\norm{\bm{u}_{k} - \bm{u}}_{\bm{H}^{1}(\Omega)}}_{\xrightarrow{k \to \infty} 0} \norm{\bm{v}}_{\bm{H}^{1}(\Omega)} \\ &+ \underbrace{\norm{b(\bm{u}_{k}, \bm{u}_{k}, \cdot) - b(\bm{u}, \bm{u}, \cdot)}_{\bm{H}^{-1}(\Omega)}}_{\xrightarrow{k \to \infty}0} \norm{\bm{v}}_{\bm{H}^{1}(\Omega)}. \end{align*} Thus $\lim_{k \to \infty} \norm{\nabla (p_{k} - p)}_{\bm{H}^{-1}(\Omega)} = 0$. Using the pressure estimate, see for instance \cite[Lemma II.1.5.4]{book:Sohr}, we find \begin{align*} \norm{p_{k} - p}_{L^{2}(\Omega)} \leq c \norm{\nabla (p_{k} - p)}_{\bm{H}^{-1}(\Omega)}\xrightarrow{k\to\infty}0. \end{align*} Therefore, we deduce that $(p_{k})_{k \in \mathbb{N}}$ converges strongly in $L^{2}(\Omega)$ to $p$.
\end{proof} In the previous proof we made use of the following lemma: \begin{lem}\label{l:ConvPropertiesAlphaEpsTerm} Under Assumption \ref{assump:alpha}, assume that for $(\varphi_{k})_{k \in \mathbb{N}}\subset L^{1}(\Omega)$, $(\bm{u}_{k})_{k \in \mathbb{N}}\subset \bm{L}^{2}(\Omega)$ and $\varphi \in L^{1}(\Omega)$, $\bm{u} \in \bm{L}^{2}(\Omega)$, \begin{align*} \lim_{k \to \infty} \norm{\varphi_{k} -\varphi}_{L^{1}(\Omega)} = 0, \quad \varphi_{k} \to \varphi \text{ a.e. and } \lim_{k\to\infty} \norm{\bm{u}_{k} - \bm{u}}_{\bm{L}^{2}(\Omega)}=0. \end{align*} Then it holds that \begin{align*} \lim_{k \to \infty} \int_{\Omega} \alpha_{\varepsilon}(\varphi_{k})\abs{\bm{u}_{k}}^{2} \, \mathrm{dx}\, = \int_{\Omega} \alpha_{\varepsilon}(\varphi) \abs{\bm{u}}^{2} \, \mathrm{dx}\, \text{ and } \lim_{k \to \infty} \norm{\alpha_{\varepsilon}(\varphi_{k}) \bm{u}_{k} - \alpha_{\varepsilon}(\varphi) \bm{u}}_{\bm{L}^{2}(\Omega)}=0. \end{align*} \end{lem} \begin{proof} Using the ideas of \cite[Theorem 5.1]{thesis:Hecht} and \cite[Theorem 1]{GarckeHechtNS} we find that \begin{align*} \abs{\int_{\Omega} \alpha_{\varepsilon} (\varphi_{k})\abs{\bm{u}_{k}}^{2} - \alpha_{\varepsilon}(\varphi) \abs{\bm{u}}^{2} \, \mathrm{dx}\,} & \leq \abs{\int_{\Omega} \alpha_{\varepsilon}(\varphi_{k}) \left( \abs{\bm{u}_{k}}^{2} - \abs{\bm{u}}^{2} \right) \, \mathrm{dx}\,} \\ & + \abs{\int_{\Omega} (\alpha_{\varepsilon}(\varphi_{k}) - \alpha_{\varepsilon}(\varphi))\abs{\bm{u}}^{2}\, \mathrm{dx}\,}, \end{align*} and from $\alpha_{\varepsilon} \in L^{\infty}(\mathbb{R})$ we obtain \begin{align*} \abs{\int_{\Omega} \alpha_{\varepsilon}(\varphi_{k}) \left(\abs{\bm{u}_{k}}^{2} - \abs{\bm{u}}^{2} \right) \, \mathrm{dx}\,} \leq \norm{\alpha_{\varepsilon}}_{L^{\infty}(\mathbb{R})} \norm{\bm{u}_{k} + \bm{u}}_{\bm{L}^{2}(\Omega)} \norm{\bm{u}_{k} - \bm{u}}_{\bm{L}^{2}(\Omega)} \xrightarrow{k\to\infty}0.
\end{align*} Moreover, the uniform bound on $\alpha_{\varepsilon}$ yields by Lebesgue's dominated convergence theorem \begin{align*} \lim_{k \to \infty} \int_{\Omega} (\alpha_{\varepsilon} (\varphi_{k}) - \alpha_{\varepsilon}(\varphi)) \abs{\bm{u}}^{2}\, \mathrm{dx}\, = 0, \end{align*} which combined with the previous step yields the first assertion. Using a similar idea we find \begin{align*} \norm{\alpha_{\varepsilon}(\varphi_{k}) \bm{u}_{k} - \alpha_{\varepsilon}(\varphi) \bm{u}}_{\bm{L}^{2}(\Omega)} & \leq \norm{\alpha_{\varepsilon}(\varphi_{k})(\bm{u}_{k} - \bm{u})}_{\bm{L}^{2}(\Omega)} + \norm{(\alpha_{\varepsilon}(\varphi_{k}) - \alpha_{\varepsilon}(\varphi)) \bm{u}}_{\bm{L}^{2}(\Omega)} \\ & \leq \norm{\alpha_{\varepsilon}}_{L^{\infty}(\mathbb{R})}\norm{\bm{u}_{k} - \bm{u}}_{\bm{L}^{2}(\Omega)} + \norm{(\alpha_{\varepsilon}(\varphi_{k}) - \alpha_{\varepsilon}(\varphi))\bm{u}}_{\bm{L}^{2}(\Omega)}\xrightarrow{k\to\infty}0, \end{align*} where we applied Lebesgue's dominated convergence theorem in order to deduce from $\alpha_{\varepsilon} \in L^{\infty}(\mathbb{R})$ that $\lim_{k\to\infty}\norm{(\alpha_{\varepsilon}(\varphi_{k}) - \alpha_{\varepsilon}(\varphi))\bm{u}}_{\bm{L}^{2}(\Omega)}=0$. \end{proof} We make the following assumption regarding $h$: \begin{assumption}\label{assump:generalh} Let $h : \Omega \times \mathbb{R}^{d \times d} \times \mathbb{R} \times \mathbb{R}^{d} \to \mathbb{R}$ be a Carath\'{e}odory function, which fulfils \begin{enumerate} \item $h(\cdot, \bm{A}, s, \bm{w}) : \Omega \to \mathbb{R}$ is measurable for each $\bm{w} \in \mathbb{R}^{d}, s \in \mathbb{R}, \bm{A} \in \mathbb{R}^{d \times d}$, and \item $h(x, \cdot, \cdot, \cdot) : \mathbb{R}^{d \times d} \times \mathbb{R} \times \mathbb{R}^{d} \to \mathbb{R}$ is continuous for almost every $x \in \Omega$. 
\end{enumerate} Moreover, there exist non-negative functions $a \in L^{1}(\Omega)$, $b_{1}, b_{2}, b_{3} \in L^{\infty}(\Omega)$ such that for almost every $x \in \Omega$ it holds that \begin{align*} \abs{h(x, \bm{A}, s, \bm{w})} \leq a(x) + b_{1}(x) \abs{\bm{A}}^{2} + b_{2}(x) \abs{s}^{2} + b_{3}(x) \abs{\bm{w}}^{2}, \end{align*} for all $\bm{w} \in \mathbb{R}^{d}, s \in \mathbb{R}, \bm{A} \in \mathbb{R}^{d \times d}$. Furthermore, the functional $\mathcal{H} : \bm{H}^{1}(\Omega) \times L^{2}(\Omega) \times H^{1}(\Omega) \to \mathbb{R}$ defined as \begin{align*} \mathcal{H}(\bm{u}, p, \varphi) & := \int_{\Omega} \mathcal{M}(\varphi) h(x, \nabla \bm{u}, p, \nabla \varphi) \, \mathrm{dx}\,, \end{align*} satisfies the following properties: \begin{enumerate} \item[(i)] $\mathcal{H} \mid_{\bm{H}^{1}_{\bm{g}, \sigma}(\Omega) \times L^{2}_{0}(\Omega) \times \Phi_{ad}}$ is bounded from below, and \item[(ii)] for all $\varphi_{n} \rightharpoonup \varphi$ in $H^{1}(\Omega)$, $\bm{u}_{n} \to \bm{u}$ in $\bm{H}^{1}(\Omega)$, $p_{n} \to p$ in $L^{2}(\Omega)$, it holds that \begin{align*} \mathcal{H}(\bm{u}, p, \varphi) \leq \liminf_{n \to \infty} \mathcal{H}(\bm{u}_{n}, p_{n}, \varphi_{n}). \end{align*} \end{enumerate} \end{assumption} We then obtain the following existence result for \eqref{IntroObjFcltPhase}-\eqref{IntroStateEquPhaseWeak}: \begin{thm}\label{t:PhaseFieldExistMin} Under Assumptions \ref{assump:psi}, \ref{assump:alpha} and \ref{assump:generalh}, there exists at least one minimizer of the optimal control problem \eqref{IntroObjFcltPhase}-\eqref{IntroStateEquPhaseWeak}. \end{thm} \begin{proof} We may restrict ourselves to considering $\varphi \in \Phi_{ad}$ with $\varphi \in [s_{a}, s_{b}]$ a.e. in $\Omega$.
In fact, we define as in \cite[Proof of Proposition 1]{article:Modica87} for arbitrary $\varphi \in \Phi_{ad}$ the truncated function $\tilde{\varphi} := \max\{s_{a},\min\{\varphi,s_{b}\}\}$ and find $\mathcal{E}_{\varepsilon}(\tilde{\varphi}) \leq \mathcal{E}_{\varepsilon}(\varphi)$, where $\mathcal{E}_{\varepsilon}$ is defined in (\ref{defn:GinzburgLandau}). Moreover, by (\ref{alphaepsSaSb}), we have $\alpha_{\varepsilon}(\varphi) = \alpha_{\varepsilon}(\tilde{\varphi})$ and hence also $\bm{S}_{\varepsilon}(\varphi) = \bm{S}_{\varepsilon}(\tilde{\varphi})$. Therefore we obtain \begin{align*} J_{\varepsilon}^{h}(\tilde{\varphi}, \bm{u}, p)\leq J_{\varepsilon}^{h}(\varphi, \bm{u}, p) \text{ for all } (\bm{u}, p) \in \bm{S}_{\varepsilon}(\varphi) = \bm{S}_{\varepsilon}(\tilde{\varphi}). \end{align*} By Assumption \ref{assump:generalh}, $\mathcal{H} \mid_{\bm{H}^{1}_{\bm{g}, \sigma}(\Omega) \times L^{2}_{0}(\Omega) \times \Phi_{ad}}$ is bounded below by a constant $C_{0}$, and so $J_{\varepsilon}^{h} : \Phi_{ad} \times \bm{H}^{1}_{\bm{g}, \sigma}(\Omega) \times L^{2}_{0}(\Omega) \to \mathbb{R}$ is bounded from below by a constant $C_{1}$. Thus, we can choose a minimizing sequence $(\varphi_{n} , \bm{u}_{n} , p_{n})_{n \in \mathbb{N}} \subset \Phi_{ad} \times \bm{H}^{1}_{\bm{g},\sigma}(\Omega) \times L^{2}_{0}(\Omega)$ with $(\bm{u}_{n}, p_{n}) \in \bm{S}_{\varepsilon}(\varphi_{n})$ for all $n$ and \begin{align*} \lim_{n \to \infty } J_{\varepsilon}^{h}(\varphi_{n}, \bm{u}_{n} , p_{n}) = \inf_{\varphi \in \Phi_{ad}, (\bm{u}, p) \in \bm{S}_{\varepsilon}(\varphi)} J_{\varepsilon}^{h}(\varphi, \bm{u}, p) > -\infty.
\end{align*} In particular, from the non-negativity of $\psi$ and $\alpha_{\varepsilon}$, we see that for any $\rho > 0$, there exists an $N$ such that $n > N$ implies \begin{align*} C_{0} + \frac{\gamma \varepsilon}{4c_{0}}\norm{\nabla \varphi_{n}}_{\bm{L}^{2}(\Omega)}^{2} \leq J_{\varepsilon}^{h}(\varphi_{n}, \bm{u}_{n}, p_{n}) \leq \inf_{\varphi \in \Phi_{ad}, (\bm{u}, p) \in \bm{S}_{\varepsilon}(\varphi)} J_{\varepsilon}^{h}(\varphi, \bm{u}, p) + \rho. \end{align*} Thus, $\{\nabla \varphi_{n}\}_{n \in \mathbb{N}}$ is bounded uniformly in $\bm{L}^{2}(\Omega)$. Moreover, without loss of generality, we may assume that $\varphi_{n}(x) \in [s_{a}, s_{b}]$ for a.e. $x \in \Omega$ and every $n \in \mathbb{N}$. Hence, we deduce that $\{\varphi_{n}\}_{n \in \mathbb{N}}$ is bounded uniformly in $H^{1}(\Omega) \cap L^{\infty}(\Omega)$, and we may choose a subsequence $(\varphi_{n_{k}})_{k \in \mathbb{N}}$ that converges strongly in $L^{2}(\Omega)$ and pointwise almost everywhere in $\Omega$ to some limit element $\varphi \in \Phi_{ad}$. Using Lemma \ref{l:SolOpCont} we can deduce that there is a subsequence of $(\bm{u}_{n_{k}}, p_{n_{k}})_{k \in \mathbb{N}}$, denoted by the same index, such that \begin{align}\label{e:ExistMinProofStrongConvState} \lim_{k \to \infty} \norm{\bm{u}_{n_{k}}- \bm{u}}_{\bm{H}^{1}(\Omega)} = 0, \quad\lim_{k \to \infty} \norm{p_{n_{k}} - p}_{L^{2}(\Omega)} = 0, \end{align} and $(\bm{u}, p) \in \bm{S}_{\varepsilon}(\varphi)$. From Lemma \ref{l:ConvPropertiesAlphaEpsTerm} we deduce additionally that \begin{align}\label{e:ExistMinProofTerm2} \lim_{k \to \infty} \int_{\Omega} \alpha_{\varepsilon}(\varphi_{n_{k}}) \abs{\bm{u}_{n_{k}}}^{2} \, \mathrm{dx}\, = \int_{\Omega} \alpha_{\varepsilon}(\varphi) \abs{\bm{u}}^{2}\, \mathrm{dx}\,.
\end{align} As $\sup_{k \in \mathbb{N}} \norm{\psi(\varphi_{n_{k}})}_{L^{\infty}(\Omega)} < \infty$ we can use Lebesgue's dominated convergence theorem to deduce $\lim_{k \to\infty} \int_{\Omega} \psi (\varphi_{n_{k}} ) \, \mathrm{dx}\, = \int_{\Omega} \psi ( \varphi ) \, \mathrm{dx}\,$. Finally, the weak lower semicontinuity of $H^{1}(\Omega) \ni \varphi \mapsto \int_{\Omega} \abs{\nabla \varphi}^{2}\, \mathrm{dx}\,$ yields \begin{align}\label{e:ExistMinProofTerm3} \int_{\Omega} \frac{\varepsilon}{2} \abs{\nabla \varphi}^{2} +\frac{1}{\varepsilon} \psi(\varphi) \, \mathrm{dx}\, \leq \liminf_{k \to \infty} \int_{\Omega} \frac{\varepsilon}{2} \abs{\nabla \varphi_{n_{k}}}^{2} + \frac{1}{\varepsilon}\psi(\varphi_{n_{k}}) \, \mathrm{dx}\,. \end{align} Together with the lower semicontinuity assumption on $\mathcal{H}$ from Assumption \ref{assump:generalh}, we deduce that \begin{align*} J_{\varepsilon}^{h}(\varphi, \bm{u}, p) \leq \liminf_{k \to \infty} J_{\varepsilon}^{h}(\varphi_{n_{k}}, \bm{u}_{n_{k}}, p_{n_{k}}) = \inf_{\varphi \in \Phi_{ad},(\bm{u}, p) \in \bm{S}_{\varepsilon}(\varphi)} J_{\varepsilon}^{h}(\varphi,\bm{u}, p), \end{align*} and so $(\varphi, \bm{u}, p)$ is a minimizer of \eqref{IntroObjFcltPhase}-\eqref{IntroStateEquPhaseWeak}. \end{proof} By the same arguments, one can show an analogous existence result for the optimal control problem $\{\eqref{IntroStateEquPhaseWeak}, \eqref{ObjFunctHydroPhase}\}$ involving the hydrodynamic force \eqref{HydroDynamForce}: \begin{thm}\label{t:HydroDyanExistMin} Under Assumptions \ref{assump:psi} and \ref{assump:alpha}, there exists at least one minimizer of the optimization problem $\{\eqref{IntroStateEquPhaseWeak}, \eqref{ObjFunctHydroPhase}\}$ involving the hydrodynamic force \eqref{HydroDynamForce}. 
\end{thm} \begin{proof} We will prove the assertion for the choice $\mathcal{M}(\varphi) = \frac{1}{c_{0}}\sqrt{\tfrac{\psi(\varphi) + \delta_{\varepsilon}}{2}}$, and the analogous assertion for the choice $\mathcal{M}(\varphi) = \frac{1}{2}$ follows along the same lines. We first show that $\{ J_{\varepsilon}(\varphi, \bm{u}, p) \mid \varphi \in \Phi_{ad}, (\bm{u}, p) \in \bm{S}_{\varepsilon}(\varphi)\}$ is bounded from below. We may restrict ourselves to considering $\varphi \in \Phi_{ad}$ with $\varphi \in [s_{a}, s_{b}]$ a.e. in $\Omega$ as in the proof of Theorem \ref{t:PhaseFieldExistMin}. Now let $\varphi \in \Phi_{ad}$ with $\varphi(x) \in [s_{a},s_{b}]$ for a.e. $x \in \Omega$ be arbitrarily chosen, and choose $(\bm{u}, p) \in \bm{S}_{\varepsilon}(\varphi)$. From \eqref{e:AprioriSolOpE}, we find a constant $C_{2} > 0$ independent of $\varphi$ such that \begin{align*} \norm{\bm{u}}_{\bm{H}^{1}(\Omega)} + \norm{p}_{L^{2}(\Omega)} < C_{2}. \end{align*} By construction, we have \begin{align*} \varphi \in [s_{a}, s_{b}] \Longrightarrow \norm{\psi(\varphi)}_{L^{\infty}(\Omega)} < C_{3}, \end{align*} for some constant $C_{3} > 0$ independent of $\varphi$.
Then, using the Cauchy--Schwarz and Young inequalities, we have \begin{align*} & \; \frac{1}{c_{0}} \int_{\Omega} \sqrt{\tfrac{\psi(\varphi)+ \delta_{\varepsilon}}{2}} \nabla \varphi \cdot \left(\mu \left ( \nabla \bm{u} + (\nabla \bm{u})^{T} \right ) - p \, \bm{\mathrm{I}}\, \right) \bm{a} \, \mathrm{dx}\, \\ \geq & \; -\frac{1}{c_{0} \sqrt{2}} \norm{\nabla\varphi \sqrt{\psi(\varphi)+ \delta_{\varepsilon}} }_{L^{2}(\Omega)} \norm{\mu \left( \nabla \bm{u} + (\nabla \bm{u})^{T} \right )\bm{a} - p \bm{a} }_{\bm{L}^{2}(\Omega)} \\ \geq & \; - \frac{1}{c_{0}} \sqrt{\tfrac{C_{3} + \delta_{\varepsilon}}{2}} \norm{ \nabla \varphi }_{L^{2}(\Omega)} \left( 2\mu C_{2} + C_{2} \right) \geq - \frac{\gamma \varepsilon}{8 c_{0}}\norm{\nabla \varphi}_{L^{2}(\Omega)}^{2} - C_{4}, \end{align*} with some constant $C_{4} > 0$ independent of $\varphi$. The non-negativity of $\alpha_{\varepsilon}$ and $\psi$ yields that \begin{equation} \label{JepsLowerbdd} \begin{aligned} J_{\varepsilon}(\varphi, \bm{u} , p) & \geq \int_{\Omega} \frac{1}{c_{0}} \sqrt{\tfrac{\psi(\varphi) + \delta_{\varepsilon}}{2}} \nabla \varphi \cdot \left(\mu\left(\nabla \bm{u} + (\nabla \bm{u})^{T} \right) - p\, \bm{\mathrm{I}}\, \right) \bm{a} + \frac{\gamma}{2c_{0}} \frac{\varepsilon}{2} \abs{\nabla \varphi}^{2} \, \mathrm{dx}\, \\ & \geq - \frac{\gamma \varepsilon}{8 c_{0}} \norm{\nabla \varphi}_{L^{2}(\Omega)}^{2} - C_{4} + \frac{\gamma \varepsilon}{4c_{0}} \norm{\nabla \varphi}_{L^{2}(\Omega)}^{2} = \frac{\gamma \varepsilon}{8c_{0}} \norm{\nabla \varphi}_{L^{2}(\Omega)}^{2} - C_{4} \geq -C_{4}. \end{aligned} \end{equation} This shows that $\{J_{\varepsilon}(\varphi,\bm{u} , p) \mid \varphi \in \Phi_{ad}, (\bm{u} , p) \in \bm{S}_{\varepsilon}(\varphi)\}$ is bounded from below.
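The absorption of the mixed term into the Ginzburg--Landau gradient part is an application of Young's inequality; a sketch with the hypothetical abbreviations $a$ and $b$, which are introduced only here:

```latex
% Young's inequality ab <= \eta a^2 + b^2/(4\eta), applied with
%   a = \norm{\nabla\varphi}_{L^{2}(\Omega)},
%   b = \tfrac{1}{c_{0}} \sqrt{(C_{3} + \delta_{\varepsilon})/2}\, (2\mu + 1) C_{2},
%   \eta = \gamma\varepsilon/(8 c_{0}),
% gives
\begin{align*}
- b\, a \geq - \frac{\gamma \varepsilon}{8 c_{0}}\, a^{2}
             - \frac{2 c_{0}\, b^{2}}{\gamma \varepsilon},
\qquad \text{so that one may take } C_{4} = \frac{2 c_{0}\, b^{2}}{\gamma \varepsilon}.
\end{align*}
```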
Hence we may choose a minimizing sequence $(\varphi_{n} , \bm{u}_{n} , p_{n})_{n \in \mathbb{N}} \subset \Phi_{ad} \times \bm{H}^{1}_{\bm{g},\sigma}(\Omega) \times L^{2}_{0}(\Omega)$ with \begin{align*} \lim_{n \to \infty } J_{\varepsilon}(\varphi_{n}, \bm{u}_{n} , p_{n}) = \inf_{\varphi \in \Phi_{ad}, (\bm{u}, p) \in \bm{S}_{\varepsilon}(\varphi)} J_{\varepsilon}(\varphi, \bm{u}, p) > -\infty. \end{align*} As before, we deduce that $\{\varphi_{n}\}_{n \in \mathbb{N}}$ is bounded uniformly in $H^{1}(\Omega) \cap L^{\infty}(\Omega)$. Together with Lemma \ref{l:SolOpCont}, this yields a subsequence $(\varphi_{n_{k}}, \bm{u}_{n_{k}}, p_{n_{k}})_{k \in \mathbb{N}}$ that satisfies \begin{align*} \lim_{k \to \infty} \norm{\varphi_{n_{k}} - \varphi}_{L^{2}(\Omega)} = 0, \quad \lim_{k \to \infty} \norm{\bm{u}_{n_{k}}- \bm{u}}_{\bm{H}^{1}(\Omega)} = 0, \quad \lim_{k \to \infty} \norm{p_{n_{k}} - p}_{L^{2}(\Omega)} = 0, \end{align*} and $(\bm{u}, p) \in \bm{S}_{\varepsilon}(\varphi)$. To deduce that $(\varphi, \bm{u}, p)$ is a minimizer of $\{\eqref{IntroStateEquPhaseWeak}, \eqref{ObjFunctHydroPhase}\}$, we only need to show that \begin{equation}\label{HydroDynamTermWLSC}\begin{aligned} & \; \liminf_{k \to \infty} \int_{\Omega} \sqrt{\psi(\varphi_{n_{k}}) + \delta_{\varepsilon}} \nabla \varphi_{n_{k}} \cdot \left( \mu \left( \nabla \bm{u}_{n_{k}} + (\nabla \bm{u}_{n_{k}})^{T} \right ) - p_{n_{k}} \, \bm{\mathrm{I}}\, \right ) \bm{a} \, \mathrm{dx}\, \\ \geq & \; \int_{\Omega} \sqrt{\psi(\varphi) + \delta_{\varepsilon}} \nabla \varphi \cdot \left( \mu \left( \nabla \bm{u} +(\nabla \bm{u})^T \right) - p \, \bm{\mathrm{I}}\, \right) \bm{a} \, \mathrm{dx}\,, \end{aligned} \end{equation} as the other integrals in \eqref{ObjFunctHydroPhase} are shown to be weakly lower semicontinuous in the proof of Theorem \ref{t:PhaseFieldExistMin}.
We now apply an idea of \cite{article:Modica87} and define \begin{align*} \phi(t) := \int_{s_{a}}^{t} \sqrt{\psi(s)+\delta_{\varepsilon}} \, \mathrm ds, \quad w_{n_{k}}(x) := \phi(\varphi_{n_{k}}(x)). \end{align*} Then we see that \begin{align*} \mathrm{D} w_{n_{k}}(x) = \phi'(\varphi_{n_{k}}(x)) \mathrm{D} \varphi_{n_{k}}(x) = \sqrt{\psi(\varphi_{n_{k}}(x)) + \delta_{\varepsilon}} \, \mathrm{D} \varphi_{n_{k}}(x). \end{align*} By the uniform boundedness of $(\varphi_{n_{k}})_{k \in \mathbb{N}}$ in $H^{1}(\Omega) \cap L^{\infty}(\Omega)$, we find that $(\psi(\varphi_{n_{k}}))_{k \in \mathbb{N}}$ is uniformly bounded in $L^{\infty}(\Omega)$, and so by the Cauchy--Schwarz inequality, \begin{align*} \norm{w_{n_{k}}}_{L^{2}(\Omega)}^{2} & \leq \int_{\Omega} (\varphi_{n_{k}} - s_{a}) \left ( \int_{s_{a}}^{\varphi_{n_{k}}} (\psi(s) + \delta_{\varepsilon}) \, \mathrm{ds}\, \right ) \, \mathrm{dx}\, \\ & \leq \sup_{s \in [s_{a}, s_{b}]} (\psi(s) + \delta_{\varepsilon}) \int_{\Omega} \abs{\varphi_{n_{k}} - s_{a}}^{2} \, \mathrm{dx}\,, \\ \norm{ \mathrm{D} w_{n_{k}}}_{L^{2}(\Omega)}^{2} & \leq \sup_{k \in \mathbb{N}} \, \norm{\psi(\varphi_{n_{k}}) + \delta_{\varepsilon}}_{L^{\infty}(\Omega)} \norm{ \mathrm{D} \varphi_{n_{k}}}_{L^{2}(\Omega)}^{2}. \end{align*} Thus, we deduce that $(w_{n_{k}})_{k \in \mathbb{N}}$ is bounded uniformly in $H^{1}(\Omega)$, and hence there is a subsequence, denoted by the same index, that converges weakly in $H^{1}(\Omega)$ and pointwise almost everywhere in $\Omega$ to some limit element $w \in H^{1}(\Omega)$. Since $\phi$ is continuous and $\lim_{k \to \infty} \varphi_{n_k}(x) = \varphi(x)$ for almost every $x \in \Omega$, we know that $w = \phi(\varphi)$.
In particular, the weak convergence of $ \mathrm{D} w_{n_{k}}$ to $ \mathrm{D} w$ implies that \begin{align} \label{e:ExistMinProofWeakConvTerms} \sqrt{\psi(\varphi_{n_{k}}) + \delta_{\varepsilon}} \nabla \varphi_{n_{k}} \rightharpoonup \sqrt{\psi(\varphi) + \delta_{\varepsilon}} \nabla \varphi \quad\text{ in } \bm{L}^{2}(\Omega). \end{align} Combining \eqref{e:ExistMinProofStrongConvState} and \eqref{e:ExistMinProofWeakConvTerms}, we obtain by weak--strong convergence of the product: \begin{equation}\label{e:ExistMinProofTerm1}\begin{aligned} & \; \lim_{k \to \infty} \int_{\Omega} \sqrt{\psi(\varphi_{n_{k}}) + \delta_{\varepsilon}} \nabla\varphi_{n_{k}} \cdot \left( \mu \left( \nabla \bm{u}_{n_{k}} + (\nabla \bm{u}_{n_{k}})^{T} \right) - p_{n_{k}} \, \bm{\mathrm{I}}\, \right) \bm{a} \, \mathrm{dx}\, \\ = & \; \int_{\Omega} \sqrt{\psi(\varphi) + \delta_{\varepsilon}} \nabla \varphi \cdot \left( \mu \left( \nabla \bm{u} +(\nabla \bm{u})^T \right) - p \, \bm{\mathrm{I}}\, \right) \bm{a} \, \mathrm{dx}\,. \end{aligned} \end{equation} Using \eqref{e:ExistMinProofTerm1}, \eqref{e:ExistMinProofTerm2} and \eqref{e:ExistMinProofTerm3}, we deduce that \begin{align*} J_{\varepsilon}(\varphi, \bm{u}, p) \leq \liminf_{k \to \infty} J_{\varepsilon}(\varphi_{n_{k}}, \bm{u}_{n_{k}}, p_{n_{k}}) = \inf_{\varphi \in \Phi_{ad},(\bm{u}, p) \in \bm{S}_{\varepsilon}(\varphi)} J_{\varepsilon}(\varphi,\bm{u}, p), \end{align*} and so $(\varphi, \bm{u}, p)$ is a minimizer of $\{\eqref{IntroStateEquPhaseWeak}, \eqref{ObjFunctHydroPhase}\}$. \end{proof} \begin{rem} Note that, for the choice $\mathcal{M}(\varphi) = \frac{1}{2}$, the proof of Theorem \ref{t:HydroDyanExistMin} is complete once we have shown that $J_{\varepsilon}$ is bounded from below, which follows similarly to \eqref{JepsLowerbdd}, and once (ii) in Assumption \ref{assump:regularityh} has been verified.
This follows from weak--strong convergence of the product: \begin{equation} \begin{aligned} & \; \lim_{k \to \infty} \int_{\Omega} \nabla\varphi_{n_{k}} \cdot \left( \mu \left( \nabla \bm{u}_{n_{k}} + (\nabla \bm{u}_{n_{k}})^{T} \right) - p_{n_{k}} \, \bm{\mathrm{I}}\, \right) \bm{a} \, \mathrm{dx}\, \\ = & \; \int_{\Omega} \nabla \varphi \cdot \left( \mu \left( \nabla \bm{u} +(\nabla \bm{u})^T \right) - p \, \bm{\mathrm{I}}\, \right) \bm{a} \, \mathrm{dx}\,. \end{aligned} \end{equation} \end{rem} \subsection{Optimality conditions} This section is devoted to the derivation of a first-order necessary optimality system for the optimal control problem \eqref{IntroObjFcltPhase}-\eqref{IntroStateEquPhaseWeak}. For this purpose, we first show Fr\'echet differentiability of the solution operator. We are only able to show differentiability at points where the solution to the state equations is unique, since otherwise we cannot apply the implicit function theorem to deduce the statement. To be precise, we obtain the following result: \begin{lem}\label{l:SolOpDiffable} Under Assumption \ref{assump:alpha}, let $\varphi_{\varepsilon} \in H^{1}(\Omega)\cap L^{\infty}(\Omega)$ be given such that there is $(\bm{u}_{\varepsilon} , p_{\varepsilon}) \in \bm{S}_{\varepsilon}(\varphi_{\varepsilon})$ with $\norm{\nabla \bm{u}_{\varepsilon}}_{\bm{L}^{2}(\Omega)} <\frac{\mu}{K_{\Omega}}$. Then there is a neighborhood $N$ of $\varphi_{\varepsilon}$ in $H^{1}(\Omega) \cap L^{\infty}(\Omega)$ such that for every $\varphi \in N$ the set $\bm{S}_{\varepsilon}(\varphi)$ consists of exactly one pair, and hence we may write $\bm{S}_{\varepsilon} : N \subset H^{1}(\Omega) \cap L^{\infty}(\Omega) \to \bm{H}^{1}(\Omega) \times L^{2}(\Omega)$.
This mapping is then differentiable at $\varphi_{\varepsilon}$ with $ \mathrm{D} \bm{S}_{\varepsilon}(\varphi_{\varepsilon})(\varphi) =: (\bm{u}, p) \in \bm{H}^{1}_{0}(\Omega) \times L^{2}_{0}(\Omega)$ being the unique solution of the linearized state system \begin{subequations}\label{e:PhaseLinearizedState} \begin{alignat}{2} \alpha'_{\varepsilon}(\varphi_{\varepsilon}) \varphi \bm{u}_{\varepsilon} + \alpha_{\varepsilon}(\varphi_{\varepsilon}) \bm{u} - \mu \Delta \bm{u} + (\bm{u} \cdot \nabla )\bm{u}_{\varepsilon} + (\bm{u}_{\varepsilon} \cdot \nabla)\bm{u} + \nabla p & = \bm{0} && \text{ in } \Omega, \\ \div \bm{u} & = 0 && \text{ in } \Omega, \\ \bm{u} & = \bm{0} && \text{ on } \partial \Omega. \end{alignat} \end{subequations} \end{lem} \begin{proof} As already mentioned, we want to apply the implicit function theorem to get the statements of the lemma. For this purpose, we first note that, by \cite[Lemma IX.4.2]{book:Galdi}, there exists a $\bm{G} \in \bm{H}^{1}_{\bm{g},\sigma}(\Omega)$, i.e., $\bm{G}$ satisfies \begin{align*} \div \bm{G} = 0 \text{ in } \Omega, \quad \bm{G} \mid_{\partial \Omega} = \bm{g}. \end{align*} We define \begin{align*} F : (H^{1}(\Omega) \cap L^{\infty}(\Omega)) \times \bm{H}^{1}_{0}(\Omega) \times L^{2}_{0}(\Omega) \to \bm{H}^{-1}(\Omega) \times L^{2}_{0}(\Omega), \quad F = (F_{1}, F_{2}), \end{align*} by \begin{align*} F_{1} (\varphi, \bm{u}, p ) \bm{v} &:= \int_{\Omega} \alpha_{\varepsilon}(\varphi) \bm{u} \cdot \bm{v} + \mu \nabla \bm{u} \cdot \nabla \bm{v} + (\bm{u} \cdot \nabla )\bm{u} \cdot \bm{v} - p \div \bm{v} - \bm{f} \cdot \bm{v} \, \mathrm{dx}\, \\ & + \int_{\Omega} (\bm{u} \cdot \nabla) \bm{G} \cdot \bm{v} + (\bm{G} \cdot \nabla)\bm{u} \cdot \bm{v} + \alpha_{\varepsilon}(\varphi) \bm{G} \cdot \bm{v} + \mu \nabla \bm{G} \cdot \nabla \bm{v} + (\bm{G} \cdot \nabla) \bm{G} \cdot \bm{v} \, \mathrm{dx}\,, \\ F_{2}(\varphi, \bm{u}, p) &:= \div \bm{u}, \end{align*} for all $\bm{v} \in \bm{H}^{1}_{0}(\Omega)$. 
Hence, $F(\varphi,\bm{u} - \bm{G}, p)=0$ if and only if $(\bm{u}, p) \in \bm{S}_{\varepsilon}(\varphi)$. Thus in particular we have $F(\varphi_{\varepsilon}, \bm{u}_{\varepsilon} - \bm{G}, p_{\varepsilon})=0$. Moreover, we directly see that the Fr\'{e}chet differential $ \mathrm{D} _{(\bm{u}, p)}F$ exists and is given at $(\varphi_{\varepsilon}, \bm{u}_{\varepsilon} - \bm{G}, p_{\varepsilon})$ as \begin{align*} \mathrm{D} _{(\bm{u}, p)} F_{1} (\varphi_{\varepsilon}, \bm{u}_{\varepsilon} - \bm{G}, p_{\varepsilon}) (\bm{u}, p) \bm{v} &= \int_{\Omega} \alpha_{\varepsilon}(\varphi_{\varepsilon}) \bm{u} \cdot \bm{v} + \mu\nabla \bm{u} \cdot \nabla \bm{v} + (\bm{u} \cdot \nabla )\bm{u}_{\varepsilon} \cdot \bm{v} \, \mathrm{dx}\, \\ & + \int_{\Omega} (\bm{u}_{\varepsilon} \cdot \nabla ) \bm{u} \cdot \bm{v} - p \div \bm{v} \, \mathrm{dx}\,,\\ \mathrm{D} _{(\bm{u}, p)} F_{2} (\varphi_{\varepsilon}, \bm{u}_{\varepsilon} - \bm{G}, p_{\varepsilon})(\bm{u}, p) & = \div \bm{u}. \end{align*} The assumption $\norm{\nabla \bm{u}_{\varepsilon}}_{\bm{L}^{2}(\Omega)} < \frac{\mu}{K_{\Omega}}$, together with \eqref{e:ContinuityEstimateTrilinearForm} and \eqref{e:TrilinearformLastTwoEqualZero}, ensures that \begin{align*} \bm{H}^{1}_{0,\sigma}(\Omega)\times \bm{H}^{1}_{0,\sigma}(\Omega) \ni (\bm{u}, \bm{v}) \mapsto \int_{\Omega} \alpha_{\varepsilon}(\varphi_{\varepsilon}) \bm{u} \cdot \bm{v} + \mu \nabla \bm{u} \cdot \nabla \bm{v} + (\bm{u} \cdot \nabla )\bm{u}_{\varepsilon} \cdot \bm{v} + (\bm{u}_{\varepsilon} \cdot \nabla) \bm{u} \cdot \bm{v} \, \mathrm{dx}\, \end{align*} defines a coercive, continuous bilinear form. Hence, we may use the Lax--Milgram theorem and standard results for the solvability of the divergence operator, see for instance \cite[Lemma II.2.1.1]{book:Sohr}, in order to obtain that $ \mathrm{D} _{(\bm{u},p)}F(\varphi_{\varepsilon}, \bm{u}_{\varepsilon} - \bm{G}, p_{\varepsilon})$ is an isomorphism.
Next, we want to consider the differentiability of $F$ with respect to its first argument. For this purpose, we have to consider $\alpha_{\varepsilon}: L^{6}(\Omega) \to L^{\frac{3}{2}}(\Omega)$ as a Nemytskii operator, making in particular use of the embedding $H^{1}(\Omega) \hookrightarrow L^{6}(\Omega)$. The results in \cite[Section 4.3.3]{book:Troeltzsch} ensure that $\alpha_{\varepsilon} : L^{6}(\Omega)\to L^{\frac{3}{2}}(\Omega)$ defines a Fr\'{e}chet-differentiable Nemytskii operator, which follows from the assumption $\alpha_{\varepsilon} \in L^{\infty}(\mathbb{R})\cap C^{1,1}(\mathbb{R})$. We can then conclude directly that $F$ is Fr\'{e}chet differentiable with respect to its first argument with \begin{align*} \mathrm{D} _{\varphi} F_{1} (\varphi, \bm{u} - \bm{G}, p)(\tilde{\varphi})\bm{v} = \int_{\Omega} \alpha'_{\varepsilon}(\varphi) \tilde{\varphi} \bm{u} \cdot \bm{v} \, \mathrm{dx}\,, \quad \mathrm{D} _{\varphi} F_{2}(\varphi, \bm{u} - \bm{G}, p) = 0. \end{align*} Additionally, we need that $F$ is Fr\'{e}chet differentiable in a neighborhood of $(\varphi_{\varepsilon}, \bm{u}_{\varepsilon}, p_{\varepsilon})$. To show this, we will use \cite[Proposition 4.14]{book:Zeidler1}, i.e., we show that the partial derivatives are continuous in order to conclude that $F$ is Fr\'{e}chet differentiable. Thus let $(\varphi_{k}, \bm{u}_{k}, p_{k})_{k \in \mathbb{N}} \subset (H^{1}(\Omega) \cap L^{\infty}(\Omega)) \times \bm{H}^{1}_{0}(\Omega) \times L^{2}_{0}(\Omega)$ be sequences with \begin{align*} \lim_{k \to \infty} \norm{\bm{u}_{k} - \bm{u}}_{\bm{H}^{1}(\Omega)} = 0, \quad \lim_{k \to \infty} \norm{p_{k} - p}_{L^{2}(\Omega)} = 0, \quad \lim_{k \to \infty} \norm{\varphi_{k} - \varphi}_{H^{1}(\Omega) \cap L^{\infty}(\Omega)} = 0. 
\end{align*} As $\alpha_{\varepsilon}:L^6(\Omega)\to L^{\frac{3}{2}}(\Omega)$ defines a continuous Nemytskii operator, making additionally use of the continuity properties of the trilinear form as stated in Lemma \ref{l:TrilinearFormStrongCont}, we can deduce that \begin{align*} \lim_{k \to \infty} \norm{ \mathrm{D} _{(\bm{u}, p)} F(\varphi_{k}, \bm{u}_{k}, p_{k}) - \mathrm{D} _{(\bm{u}, p)} F(\varphi, \bm{u}, p)}_{\mathcal{L}(\bm{H}^{1}_{0}(\Omega) \times L^{2}_{0}(\Omega), \bm{H}^{-1}(\Omega) \times L^{2}_{0}(\Omega))} = 0. \end{align*} Moreover, from $\alpha'_{\varepsilon} \in C^{0,1}$ and standard results for Nemytskii operators we find that $L^{6}(\Omega) \ni \varphi \mapsto \alpha'_{\varepsilon}(\varphi) \in L^{6}(\Omega)$ is continuous. Thus we also find by direct calculation that $\lim_{k \to \infty} \norm{ \mathrm{D} _{\varphi}F (\varphi_{k}, \bm{u}_{k}, p_{k}) - \mathrm{D} _{\varphi}F(\varphi, \bm{u}, p)}_{\mathcal{L}(H^{1}(\Omega), \bm{H}^{-1}(\Omega) \times L^{2}_{0}(\Omega))}=0$. Therefore, we obtain that $F$ is Fr\'{e}chet differentiable. Finally, applying the implicit function theorem, we obtain for $\norm{\varphi-\varphi_{\varepsilon}}_{H^{1}(\Omega) \cap L^{\infty}(\Omega)} \ll 1$ the existence and uniqueness of a pair $(\bm{u}, p)$ such that $F(\varphi, \bm{u} - \bm{G}, p) = 0$, i.e., $(\bm{u}, p) \in \bm{S}_{\varepsilon}(\varphi)$. This implies the first part of the statement.
The second part of the lemma is a consequence of the differentiability statement of the implicit function theorem: \begin{align*} \mathrm{D} \bm{S}_{\varepsilon}(\varphi_{\varepsilon}) = -\left( \mathrm{D} _{(\bm{u}, p)} F (\varphi_{\varepsilon}, \bm{u}_{\varepsilon} -\bm{G}, p_{\varepsilon}) \right)^{-1}\circ \mathrm{D} _{\varphi} F( \varphi_{\varepsilon}, \bm{u}_{\varepsilon} - \bm{G}, p_{\varepsilon}), \end{align*} which reads in our setting as $\div \bm{u} = 0$ and \begin{equation}\label{e:PhaseLinearizedStateWeakform} \begin{aligned} & \; \int_{\Omega} \alpha'_{\varepsilon}(\varphi_{\varepsilon})\varphi \bm{u}_{\varepsilon} \cdot \bm{v} + \alpha_{\varepsilon}(\varphi_{\varepsilon}) \bm{u} \cdot \bm{v} + \mu \nabla \bm{u} \cdot \nabla \bm{v} \, \mathrm{dx}\, \\ + & \; \int_{\Omega} (\bm{u} \cdot \nabla )\bm{u}_{\varepsilon}\cdot \bm{v} + (\bm{u}_{\varepsilon} \cdot \nabla)\bm{u} \cdot \bm{v} - p \div \bm{v} \, \mathrm{dx}\, = 0 \quad \forall \bm{v} \in \bm{H}^{1}_{0}(\Omega). \end{aligned} \end{equation} \end{proof} For $i \in \{1, 2, 3, 4\}$, we denote by $ \mathrm{D} _{i}h(x, \bm{A}, s, \bm{w})$ the differential of \begin{align*} \Omega \times \mathbb{R}^{d \times d} \times \mathbb{R} \times \mathbb{R}^{d} \ni (x, \bm{A}, s, \bm{w}) \mapsto h(x, \bm{A}, s, \bm{w}) \end{align*} with respect to the $i$-th variable. \begin{assumption}\label{assump:regularityh} In addition to Assumption \ref{assump:generalh}, assume further that $x \mapsto h(x, \bm{A}, s, \bm{w})$ is in $W^{1,1}(\Omega)$ for all $(\bm{A}, s, \bm{w}) \in \mathbb{R}^{d \times d} \times \mathbb{R} \times \mathbb{R}^{d}$ and the partial derivatives \begin{align*} \mathrm{D} _{2}h(x, \cdot, s, \bm{w}), \; \mathrm{D} _{3}h(x, \bm{A}, \cdot, \bm{w}), \; \mathrm{D} _{4}h(x, \bm{A}, s, \cdot ) \end{align*} exist for all $\bm{w} \in \mathbb{R}^{d}$, $s \in \mathbb{R}$, $\bm{A} \in \mathbb{R}^{d \times d}$, and almost all $x \in \Omega$.
Moreover, we assume that \begin{equation}\label{equ:partialderivativesh} \abs{ \mathrm{D} _{i} h(x, \bm{A}, s, \bm{w})} \leq \tilde{a}(x) + \tilde{b}_{1}(x) \abs{\bm{A}} + \tilde{b}_{2}(x) \abs{s} + \tilde{b}_{3}(x) \abs{\bm{w}}, \text{ for } i \in \{2, 3, 4 \}, \end{equation} for some non-negative $\tilde{a} \in L^{1}(\Omega)$, $\tilde{b}_{1}, \tilde{b}_{2}, \tilde{b}_{3} \in L^{\infty}(\Omega)$. \end{assumption} From Assumption \ref{assump:regularityh} we see that \begin{align*} (L^{2}(\Omega))^{d \times d} \ni \bm{A} & \mapsto \mathrm{D} _{2}h( \cdot, \bm{A}, s, \bm{w}) \in L^{2}(\Omega), \\ L^{2}(\Omega) \ni s & \mapsto \mathrm{D} _{3}h( \cdot, \bm{A}, s, \bm{w}) \in L^{2}(\Omega), \\ (L^{2}(\Omega))^{d} \ni \bm{w} & \mapsto \mathrm{D} _{4}h( \cdot, \bm{A}, s, \bm{w}) \in L^{2}(\Omega), \end{align*} are well-defined Nemytskii operators for $\bm{A} \in (L^{2}(\Omega))^{d \times d}$, $s \in L^{2}(\Omega)$, and $\bm{w} \in (L^{2}(\Omega))^{d}$ if and only if \eqref{equ:partialderivativesh} is fulfilled. Moreover, the operator \begin{align*} (L^{2}(\Omega))^{d \times d} \times L^{2}(\Omega) \times (L^{2}(\Omega))^{d} \ni (\bm{A}, s, \bm{w}) \mapsto h( \cdot, \bm{A}, s, \bm{w}) \in L^{1}(\Omega) \end{align*} is continuously Fr\'{e}chet differentiable. Next, by Assumption \ref{assump:psi}, $\psi \in C^{1,1}(\mathbb{R})$, we have that $ \mathrm{D} _{y}( \sqrt{\psi(y) + \delta_{\varepsilon}})$ is locally Lipschitz and thus the Nemytskii operator \begin{align*} L^{\infty}(\Omega) \ni \varphi \mapsto \sqrt{\psi(\varphi) + \delta_{\varepsilon}} \in L^{\infty}(\Omega) \end{align*} is continuously Fr\'{e}chet differentiable. 
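As an illustration (not needed for the subsequent arguments), the hydrodynamic integrand appearing in \eqref{ObjFunctHydroPhase}, namely
\begin{align*}
h(x, \bm{A}, s, \bm{w}) = \bm{w} \cdot \left( \mu (\bm{A} + \bm{A}^{T}) - s \, \bm{\mathrm{I}}\, \right) \bm{a},
\end{align*}
satisfies the growth condition \eqref{equ:partialderivativesh}: assuming $\abs{\bm{a}} = 1$, we have $\abs{ \mathrm{D} _{2}h} \leq 2 \mu \abs{\bm{w}}$, $\abs{ \mathrm{D} _{3}h} = \abs{\bm{w} \cdot \bm{a}} \leq \abs{\bm{w}}$ and $\abs{ \mathrm{D} _{4}h} \leq 2\mu \abs{\bm{A}} + \abs{s}$, so that one may take $\tilde{a} \equiv 0$, $\tilde{b}_{1} \equiv 2\mu$, $\tilde{b}_{2} \equiv 1$ and $\tilde{b}_{3} \equiv \max\{2\mu, 1\}$.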
Hence, we find that \begin{align*} \mathcal{H} : \bm{H}^{1}(\Omega) \times L^{2}(\Omega) \times (H^{1}(\Omega) \cap L^{\infty}(\Omega)) \ni (\bm{u}, p, \varphi) \mapsto \int_{\Omega} \mathcal{M}(\varphi) h(x, \nabla \bm{u}, p, \nabla \varphi) \, \mathrm{dx}\, \end{align*} is continuously Fr\'{e}chet differentiable and its Fr\'{e}chet derivative is given as \begin{equation}\label{FrechDerivativemathcalH} \begin{aligned} \mathrm{D} \mathcal{H}(\bm{u}, p, \varphi)(\bm{v}, s, \eta) & = \int_{\Omega} \mathcal{M}(\varphi) ( \mathrm{D} _{2}h, \mathrm{D} _{3}h, \mathrm{D} _{4}h)\mid_{(x, \nabla \bm{u}, p, \nabla \varphi)} \cdot (\nabla \bm{v}, s, \nabla \eta) \, \mathrm{dx}\, \\ & + \int_{\Omega} h(x, \nabla \bm{u}, p, \nabla \varphi) \mathcal{M}'(\varphi) \eta \, \mathrm{dx}\,. \end{aligned} \end{equation} We note that for the choice $\mathcal{M}(\varphi) = \frac{1}{2}$, the second integral on the right hand side of \eqref{FrechDerivativemathcalH} vanishes as the Fr\'{e}chet derivative of the constant function $\frac{1}{2}$ is the zero functional. On the other hand, for the choice $\mathcal{M}(\varphi) = \frac{1}{c_{0}} \sqrt{\frac{\psi(\varphi) + \delta_{\varepsilon}}{2}}$, the Fr\'{e}chet derivative is given as \begin{align} \mathcal{M}'(\varphi) = \frac{1}{c_{0}} \frac{\psi'(\varphi)}{2 \sqrt{2 (\psi(\varphi) + \delta_{\varepsilon})}}. \end{align} Before formulating the optimality system, we want to discuss the adjoint system.
The pair of adjoint variables $(\bm{q}_{\varepsilon}, \pi_{\varepsilon}) \in \bm{H}^{1}_{0}(\Omega) \times L^{2}(\Omega)$ is the weak solution of the adjoint system, which is given as follows: find $(\bm{q}_{\varepsilon}, \pi_{\varepsilon}) \in \bm{H}^{1}_{0}(\Omega) \times L^{2}(\Omega)$ such that \begin{subequations}\label{generalh:adjointsystem} \begin{align} \notag \alpha_{\varepsilon} (\varphi_{\varepsilon})(\bm{q}_{\varepsilon} & - \bm{u}_{\varepsilon}) - \mu \div (\nabla \bm{q}_{\varepsilon} + (\nabla \bm{q}_{\varepsilon})^{T}) + (\nabla \bm{u}_{\varepsilon})^{T} \bm{q}_{\varepsilon} - (\bm{u}_{\varepsilon} \cdot \nabla) \bm{q}_{\varepsilon} + \nabla \pi_{\varepsilon} \\ & = - \div \left ( \mathcal{M}(\varphi_{\varepsilon}) \mathrm{D} _{2}h \right ) && \text{ in } \Omega, \\ \div \bm{q}_{\varepsilon} & = - \mathcal{M}(\varphi_{\varepsilon}) \mathrm{D} _{3}h + \vartheta_{\varepsilon} && \text{ in } \Omega, \\ \bm{q}_{\varepsilon}& = \bm{0} && \text{ on } \partial \Omega, \end{align} \end{subequations} where $ \mathrm{D} _{2}h, \mathrm{D} _{3}h$ are evaluated at $(x, \nabla \bm{u}_{\varepsilon}, p_{\varepsilon}, \nabla \varphi_{\varepsilon})$ and \begin{align}\label{defn:vartheta} \vartheta_{\varepsilon} := \strokedint_{\Omega} \mathcal{M}(\varphi_{\varepsilon}) \mathrm{D} _{3}h(x, \nabla \bm{u}_{\varepsilon}, p_{\varepsilon}, \nabla \varphi_{\varepsilon}) \, \mathrm{dx}\,. \end{align} \begin{rem}\label{r:VarThetaAsLagrangeMult} The parameter $\vartheta_{\varepsilon} \in \mathbb{R}$ can be interpreted as a Lagrange multiplier for the constraint $\int_{\Omega} p\, \mathrm{dx}\, = 0$. By carrying out the formal Lagrange method as described for instance in \cite{book:Hinzeetal,book:Troeltzsch} and appending the mean value condition on the pressure $p$ with some Lagrange multiplier $\vartheta_{\varepsilon}$ to the Lagrangian, one obtains that $\vartheta_{\varepsilon}$ appears in the adjoint system as in \eqref{generalh:adjointsystem}.
\end{rem} The next lemma shows that the system \eqref{generalh:adjointsystem} is uniquely solvable: \begin{lem}\label{l:AdjointSysWellDef} Let Assumptions \ref{assump:psi}, \ref{assump:alpha}, and \ref{assump:regularityh} hold, and let $\varphi_{\varepsilon} \in H^{1}(\Omega) \cap L^{\infty}(\Omega)$ and $\bm{u}_{\varepsilon} \in \bm{H}^{1}_{\bm{g}, \sigma}(\Omega)$ with $\norm{\nabla \bm{u}_{\varepsilon}}_{\bm{L}^{2}(\Omega)} < \frac{\mu}{K_{\Omega}}$ be given. Then there exists a unique solution pair $(\bm{q}_{\varepsilon}, \pi_{\varepsilon}) \in \bm{H}^{1}_{0}(\Omega) \times L^{2}(\Omega)$ of the adjoint system \eqref{generalh:adjointsystem}. \end{lem} \begin{proof} First, we notice that, by the definition \eqref{defn:vartheta} of $\vartheta_{\varepsilon}$, it holds that \begin{align*} \int_{\Omega} \mathcal{M}(\varphi_{\varepsilon}) \mathrm{D} _{3}h(x, \nabla \bm{u}_{\varepsilon}, p_{\varepsilon}, \nabla \varphi_{\varepsilon}) - \vartheta_{\varepsilon} \, \mathrm{dx}\, = 0. \end{align*} As $\varphi_{\varepsilon} \in L^{\infty}(\Omega)$, we have $\mathcal{M}(\varphi_{\varepsilon}) \in L^{\infty}(\Omega)$ for either choice. Thus, by Assumption \ref{assump:regularityh}, we obtain that $\mathcal{M}(\varphi_{\varepsilon}) \mathrm{D} _{3}h \in L^2(\Omega)$. So, from standard results, see for instance \cite[Lemma II.2.1.1]{book:Sohr}, we deduce the existence of some $\bm{w} \in \bm{H}^{1}_{0}(\Omega)$ such that \begin{align*} \div \bm{w} = - \mathcal{M}(\varphi_{\varepsilon}) \mathrm{D} _{3}h + \vartheta_{\varepsilon}. \end{align*} Note that, by the density of $\bm{C}^{\infty}_{0,\sigma}(\Omega) := \{ \bm{v} \in (C^{\infty}_{0}(\Omega))^{d} \, | \div \bm{v} = 0 \}$ in $\bm{H}^{1}_{0,\sigma}(\Omega)$ (see \cite[Lemma II.2.2.3]{book:Sohr}), for any $\bm{v} \in \bm{H}^{1}_{0,\sigma}(\Omega)$, there exists a sequence $\{\bm{v}^{n}\}_{n \in \mathbb{N}} \subset \bm{C}^{\infty}_{0,\sigma}(\Omega)$ such that \begin{align*} \norm{\bm{v}^{n} - \bm{v}}_{\bm{H}^{1}(\Omega)} \to 0 \text{ as } n \to \infty.
\end{align*} Thus, for any $\bm{y} \in \bm{H}^{1}_{0}(\Omega), \bm{v} \in \bm{H}^{1}_{0,\sigma}(\Omega)$, we find that by the commutativity of second derivatives, \begin{equation}\label{nablaunablavTranspose} \begin{aligned} & \; \int_{\Omega} \nabla \bm{y} \cdot (\nabla \bm{v})^{T} \, \mathrm{dx}\, = \lim_{n \to \infty} \int_{\Omega} \nabla \bm{y} \cdot (\nabla \bm{v}^{n})^{T} \, \mathrm{dx}\, \\ = & \; \lim_{n \to \infty} \sum_{i,j=1}^{d} \int_{\Omega} \partial_{i} y_{j} \partial_{j} v^{n}_{i} \, \mathrm{dx}\, = \lim_{n \to \infty} \sum_{i,j=1}^{d} \left ( \int_{\partial \Omega} y_{j} \partial_{j} v^{n}_{i} \nu_{\partial \Omega,i} \, \mathrm{d} \mathcal{H}^{d-1} \, - \int_{\Omega} y_{j} \partial_{j} \partial_{i} v^{n}_{i} \, \mathrm{dx}\, \right )\\ = & \; \lim_{n \to \infty} \int_{\partial \Omega} (\bm{y} \cdot \nabla) \bm{v}^{n} \cdot \bm{\nu}_{\partial \Omega} \, \mathrm{d} \mathcal{H}^{d-1} \, - \int_{\Omega} \bm{y} \cdot \nabla (\div \bm{v}^{n}) \, \mathrm{dx}\, = 0. \end{aligned} \end{equation} We define the bilinear form $a: \bm{H}^{1}_{0,\sigma}(\Omega) \times \bm{H}^{1}_{0,\sigma}(\Omega) \to \mathbb{R}$ by \begin{equation}\label{adjoint:bilinearform} \begin{aligned} a(\bm{u}, \bm{v}) & := \int_{\Omega} \alpha_{\varepsilon}(\varphi_{\varepsilon}) \bm{u} \cdot \bm{v} + \mu \nabla \bm{u} \cdot (\nabla \bm{v} + (\nabla \bm{v})^{T}) + (\nabla \bm{u}_{\varepsilon})^{T} \bm{u} \cdot \bm{v} - (\bm{u}_{\varepsilon} \cdot \nabla) \bm{u} \cdot \bm{v} \, \mathrm{dx}\, \\ & = \int_{\Omega} \alpha_{\varepsilon}(\varphi_{\varepsilon}) \bm{u} \cdot \bm{v} + \mu \nabla \bm{u} \cdot \nabla \bm{v} + (\nabla \bm{u}_{\varepsilon})^{T} \bm{u} \cdot \bm{v} - (\bm{u}_{\varepsilon} \cdot \nabla) \bm{u} \cdot \bm{v} \, \mathrm{dx}\,, \end{aligned} \end{equation} where we have used \eqref{nablaunablavTranspose} for $\bm{u}, \bm{v} \in \bm{H}^{1}_{0,\sigma}(\Omega)$.
Making use of $\norm{\nabla \bm{u}_{\varepsilon}}_{\bm{L}^{2}(\Omega)} < \frac{\mu}{K_{\Omega}}$, \eqref{e:ContinuityEstimateTrilinearForm}, \eqref{e:TrilinearformLastTwoEqualZero}, and the Poincar\'{e} inequality, we can establish that $a(\cdot, \cdot)$ is a coercive bilinear form, i.e., there exists a constant $c(\mu, \abs{\Omega}) > 0$ (which also depends on the margin $\mu - K_{\Omega} \norm{\nabla \bm{u}_{\varepsilon}}_{\bm{L}^{2}(\Omega)} > 0$) such that \begin{align*} a(\bm{u}, \bm{u}) & = \int_{\Omega} \underbrace{\alpha_{\varepsilon}(\varphi_{\varepsilon})}_{\geq 0} \abs{\bm{u}}^{2} + \mu \abs{\nabla \bm{u}}^{2} \, \mathrm{dx}\, + b(\bm{u}, \bm{u}_{\varepsilon}, \bm{u}) - \underbrace{b(\bm{u}_{\varepsilon}, \bm{u}, \bm{u})}_{=0 \text{ by }\eqref{e:TrilinearformLastTwoEqualZero}} \\ & \geq \mu \norm{\nabla \bm{u}}_{\bm{L}^{2}(\Omega)}^{2} - K_{\Omega} \norm{\nabla \bm{u}}_{\bm{L}^{2}(\Omega)}^{2} \norm{\nabla \bm{u}_{\varepsilon}}_{\bm{L}^{2}(\Omega)} \geq c(\mu, \abs{\Omega}) \norm{\bm{u}}_{\bm{H}^{1}_{0}(\Omega)}^{2}. \end{align*} Meanwhile, the boundedness of the bilinear form $a(\cdot, \cdot)$ in $\bm{H}^{1}_{0,\sigma}(\Omega) \times \bm{H}^{1}_{0,\sigma}(\Omega)$ can be shown using \eqref{e:ContinuityEstimateTrilinearForm}, the boundedness of $\alpha_{\varepsilon}$, H\"{o}lder's inequality and the assumption $\norm{\nabla \bm{u}_{\varepsilon}}_{\bm{L}^{2}(\Omega)} < \frac{\mu}{K_{\Omega}}$. Thus, by the Lax--Milgram theorem, we obtain a unique $\hat{\bm{q}} \in \bm{H}^{1}_{0,\sigma}(\Omega)$ such that \begin{align}\label{adjoint:weakform} a(\hat{\bm{q}},\bm{v}) = \int_{\Omega}\alpha_{\varepsilon}(\varphi_{\varepsilon}) \bm{u}_{\varepsilon} \cdot \bm{v} + \mathcal{M}(\varphi_{\varepsilon}) ( \mathrm{D} _{2}h \cdot \nabla \bm{v} ) \, \mathrm{dx}\, - a(\bm{w}, \bm{v}) \quad \forall \bm{v} \in \bm{H}^{1}_{0,\sigma}(\Omega). \end{align} We note that the integral terms are well-defined due to Assumption \ref{assump:regularityh} and the boundedness of $\alpha_{\varepsilon}$. We set $\bm{q}_{\varepsilon} := \hat{\bm{q}} + \bm{w}$.
The existence of $\pi_{\varepsilon} \in L^{2}(\Omega)$ follows from standard results, see for instance \cite[Lemma II.2.2.1]{book:Sohr}. Thus, $(\bm{q}_{\varepsilon}, \pi_{\varepsilon})$ is the unique weak solution of the adjoint system \eqref{generalh:adjointsystem}. \end{proof} Now we can formulate necessary optimality conditions for our optimal control problem: \begin{thm}\label{t:GeneralisationOptimality} Let $(\varphi_{\varepsilon}, \bm{u}_{\varepsilon}, p_{\varepsilon}) \in (\Phi_{ad} \cap L^{\infty}(\Omega)) \times \bm{H}^{1}_{\bm{g}, \sigma}(\Omega) \times L^{2}_{0}(\Omega)$ be a minimizer of $J_{\varepsilon}^{h}$ such that $\norm{\nabla \bm{u}_{\varepsilon}}_{\bm{L}^{2}(\Omega)} < \frac{\mu}{K_{\Omega}}$. Then the following optimality system is fulfilled: There exists a Lagrange multiplier $\lambda_{\varepsilon} \in \mathbb{R}$ for the integral constraint such that \begin{equation}\label{generalh:PhaseFieldOptSys} \begin{aligned} &\; \left(\alpha'_{\varepsilon}(\varphi_{\varepsilon}) \left(\frac{1}{2} \abs{\bm{u}_{\varepsilon}}^{2} - \bm{u}_{\varepsilon} \cdot \bm{q}_{\varepsilon} \right) +\frac{\gamma}{2c_{0} \varepsilon} \psi'(\varphi_{\varepsilon}) +\lambda_{\varepsilon} + \mathcal{M}'(\varphi_{\varepsilon}) h(x, \nabla \bm{u}_{\varepsilon}, p_{\varepsilon}, \nabla \varphi_{\varepsilon}) , \zeta \right)_{L^{2}(\Omega)} \\ + \; & \left( \mathcal{M}(\varphi_{\varepsilon}) \mathrm{D} _{4}h(x, \nabla \bm{u}_{\varepsilon}, p_{\varepsilon}, \nabla \varphi_{\varepsilon}) + \frac{\gamma \varepsilon}{2c_{0}} \nabla \varphi_{\varepsilon} , \nabla \zeta \right)_{\bm{L}^{2}(\Omega)} = 0 \quad \forall \zeta \in H^{1}(\Omega) \cap L^{\infty}(\Omega). \end{aligned} \end{equation} Here, $(\bm{q}_{\varepsilon}, \pi_{\varepsilon}) \in \bm{H}^{1}_{0}(\Omega) \times L^{2}(\Omega)$ is the unique weak solution of the adjoint system \eqref{generalh:adjointsystem}. 
\end{thm} \begin{proof} We rewrite the problem \eqref{IntroObjFcltPhase}-\eqref{IntroStateEquPhaseWeak} as a minimizing problem for a reduced objective functional defined on an open set in $H^{1}(\Omega) \cap L^{\infty}(\Omega)$ by making use of Lemma \ref{l:SolOpDiffable}. In particular, at least in a neighborhood $N \subset H^{1}(\Omega) \cap L^{\infty}(\Omega)$ of $\varphi_{\varepsilon}$, the solution operator $\bm{S}_{\varepsilon}$ is single-valued: for every $\varphi \in N$ we have $\bm{S}_{\varepsilon}(\varphi) = \{ (\bm{u}, p) \}$. Thus we may define the reduced functional $j_{\varepsilon}^{h}: N \to \mathbb{R}$ by \begin{align*} j_{\varepsilon}^{h}(\varphi) := J_{\varepsilon}^{h}(\varphi, \bm{S}_{\varepsilon}(\varphi)). \end{align*} Then, $\varphi_{\varepsilon}$ is also a local minimizer of $j_{\varepsilon}^{h}$. Hence, the gradient equation \begin{align}\label{generalh:ProofOptSysEpsVarIn} \mathrm{D} j_{\varepsilon}^{h}(\varphi_{\varepsilon})(\varphi) = 0, \quad \forall \varphi \in H^1(\Omega), \, \int_{\Omega} \varphi \, \mathrm{dx}\,=0, \end{align} would be fulfilled if $j_{\varepsilon}^{h}$ were differentiable. We will show in the next step that $j_{\varepsilon}^{h}$ is differentiable at $\varphi_{\varepsilon}$ as a mapping from $H^{1}(\Omega) \cap L^{\infty}(\Omega)$ to $\mathbb{R}$. Lemma \ref{l:SolOpDiffable} already ensures that the solution operator $\bm{S}_{\varepsilon}$ is differentiable from $H^{1}(\Omega) \cap L^{\infty}(\Omega)$ to $\bm{H}^{1}(\Omega)\times L^{2}(\Omega)$. Thus we now look at the dependence of $J_{\varepsilon}^{h}$ on the first variable.
For this purpose, we first find, as in the proof of Lemma \ref{l:SolOpDiffable}, that $\alpha_{\varepsilon}: L^{6}(\Omega) \to L^{\frac{3}{2}}(\Omega)$ is a Fr\'{e}chet differentiable Nemytskii operator, and hence \begin{align*} H^{1}(\Omega) \ni \varphi \mapsto \int_{\Omega} \alpha_{\varepsilon}(\varphi)\abs{\bm{u}}^{2}\, \mathrm{dx}\, \end{align*} is Fr\'{e}chet differentiable for any $\bm{u}\in \bm{H}^{1}(\Omega)$. With similar arguments, i.e., by making use of \cite[Section 4.3.3]{book:Troeltzsch}, we also find that \begin{alignat*}{3} L^{\infty}(\Omega)\ni \varphi & \mapsto \psi(\varphi) \in L^{\infty}(\Omega), && \quad L^{\infty}(\Omega)\ni \varphi &&\mapsto \int_{\Omega} \psi(\varphi)\, \mathrm{dx}\,, \\ H^{1}(\Omega) \ni \varphi &\mapsto \nabla \varphi \in \bm{L}^{2}(\Omega), && \quad H^{1}(\Omega) \ni \varphi &&\mapsto \int_{\Omega} \abs{\nabla \varphi}^{2} \, \mathrm{dx}\, \end{alignat*} are differentiable. Combining these results and the Fr\'{e}chet differentiability of $\mathcal{H}$, we find that $j_{\varepsilon}^{h} : N \to \mathbb{R}$ is differentiable. Hence we may conclude by the minimizing property of $\varphi_{\varepsilon}$ that the gradient equation \eqref{generalh:ProofOptSysEpsVarIn} is fulfilled. We then find from \eqref{generalh:ProofOptSysEpsVarIn} and the linearity of $ \mathrm{D} j_{\varepsilon}^{h}(\varphi_{\varepsilon})$ that \begin{align}\label{generalh:ProofOptSysEpsVarEq} 0 = \mathrm{D} j_{\varepsilon}^{h}(\varphi_{\varepsilon})\left(\varphi - \strokedint_{\Omega} \varphi \, \mathrm{dx}\, \right) = \mathrm{D} j_{\varepsilon}^{h} (\varphi_{\varepsilon}) (\varphi) + \lambda_{\varepsilon} \int_{\Omega} \varphi \, \mathrm{dx}\, \quad \forall \varphi \in H^{1}(\Omega), \end{align} where we defined \begin{align}\label{defn:lambda} \lambda_{\varepsilon} := -\abs{\Omega}^{-1} \mathrm{D} j_{\varepsilon}^{h} (\varphi_{\varepsilon})(1) \in \mathbb{R}.
\end{align} In particular, we interpret $\lambda_{\varepsilon} \in \mathbb{R}$ as a Lagrange multiplier for the integral constraint $\int_{\Omega} \varphi \, \mathrm{dx}\, = \beta \abs{\Omega}$. We now want to rewrite \eqref{generalh:ProofOptSysEpsVarEq} into a more convenient form by using the adjoint variable $\bm{q}_{\varepsilon}$, which is defined as the solution of \eqref{generalh:adjointsystem}. For this purpose we start by calculating the derivative of $j_{\varepsilon}^{h}$. We find for every $\varphi \in H^{1}(\Omega)$ the following formula: \begin{equation}\label{generalh:DerJEpsiloNFirstForm} \begin{aligned} \mathrm{D} j_{\varepsilon}^{h}(\varphi_{\varepsilon}) \varphi &= \int_{\Omega} \frac{1}{2} \alpha'_{\varepsilon}(\varphi_{\varepsilon})\varphi \abs{\bm{u}_{\varepsilon}}^{2} + \alpha_{\varepsilon}(\varphi_{\varepsilon}) \bm{u}_{\varepsilon} \cdot \bm{u} \, \mathrm{dx}\, \\ & + \frac{\gamma}{2c_{0}} \int_{\Omega} \varepsilon \nabla \varphi_{\varepsilon} \cdot \nabla \varphi + \frac{1}{\varepsilon} \psi'(\varphi_{\varepsilon}) \varphi \, \mathrm{dx}\, \\ & + \int_{\Omega} \mathcal{M}(\varphi_{\varepsilon})( \mathrm{D} _{2}h, \mathrm{D} _{3}h, \mathrm{D} _{4}h)\mid_{(x, \nabla \bm{u}_{\varepsilon}, p_{\varepsilon}, \nabla \varphi_{\varepsilon})} \cdot (\nabla \bm{u}, p, \nabla \varphi) \, \mathrm{dx}\, \\ & + \int_{\Omega} h(x, \nabla \bm{u}_{\varepsilon}, p_{\varepsilon}, \nabla \varphi_{\varepsilon}) \mathcal{M}'(\varphi_{\varepsilon}) \varphi \, \mathrm{dx}\,, \end{aligned} \end{equation} where $\bm{S}_{\varepsilon}(\varphi_{\varepsilon}) = \{ (\bm{u}_{\varepsilon}, p_{\varepsilon})\}$ and $(\bm{u}, p) := \mathrm{D} \bm{S}_{\varepsilon}(\varphi_{\varepsilon}) \varphi$ is the solution of the linearized state equation \eqref{e:PhaseLinearizedState}.
Now we use the adjoint state $\bm{q}_{\varepsilon}$ as a test function in the linearized state equation \eqref{e:PhaseLinearizedState} and find that \begin{equation}\label{generalh:TestLinearizedWithAdjoint}\begin{aligned} & \; \int_{\Omega} \alpha'_{\varepsilon}(\varphi_{\varepsilon})\varphi \bm{u}_{\varepsilon} \cdot \bm{q}_{\varepsilon} + \alpha_{\varepsilon}(\varphi_{\varepsilon}) \bm{u} \cdot \bm{q}_{\varepsilon} + \mu \nabla \bm{u} \cdot \nabla \bm{q}_{\varepsilon} \, \mathrm{dx}\, \\ + & \; \int_{\Omega} (\bm{u} \cdot \nabla ) \bm{u}_{\varepsilon} \cdot \bm{q}_{\varepsilon} + (\bm{u}_{\varepsilon} \cdot \nabla) \bm{u} \cdot \bm{q}_{\varepsilon} + p \left ( \mathcal{M}(\varphi_{\varepsilon}) \mathrm{D} _{3}h - \vartheta_{\varepsilon} \right )\, \mathrm{dx}\, = 0, \end{aligned} \end{equation} where $ \mathrm{D} _{3}h$ is evaluated at $(x, \nabla \bm{u}_{\varepsilon}, p_{\varepsilon}, \nabla \varphi_{\varepsilon})$. Then we use the linearized state $\bm{u} \in \bm{H}^{1}_{0,\sigma}(\Omega)$ as a test function in \eqref{adjoint:weakform} and obtain \begin{equation}\label{generalh:TestAdjointWithLinearized} \begin{aligned} &\; \int_{\Omega} \alpha_{\varepsilon}(\varphi_{\varepsilon}) \bm{q}_{\varepsilon} \cdot \bm{u} + \mu \nabla \bm{q}_{\varepsilon} \cdot \nabla \bm{u} + (\nabla \bm{u}_{\varepsilon})^{T} \bm{q}_{\varepsilon} \cdot \bm{u} - (\bm{u}_{\varepsilon} \cdot \nabla) \bm{q}_{\varepsilon} \cdot \bm{u} \, \mathrm{dx}\, \\ = & \; \int_{\Omega} \alpha_{\varepsilon}(\varphi_{\varepsilon}) \bm{u}_{\varepsilon} \cdot \bm{u} + \mathcal{M}(\varphi_{\varepsilon}) \left ( \mathrm{D} _{2}h \cdot \nabla \bm{u} \right ) \, \mathrm{dx}\,, \end{aligned} \end{equation} where $ \mathrm{D} _{2}h$ is evaluated at $(x, \nabla \bm{u}_{\varepsilon}, p_{\varepsilon}, \nabla \varphi_{\varepsilon})$. 
Comparing \eqref{generalh:TestLinearizedWithAdjoint} and \eqref{generalh:TestAdjointWithLinearized} yields the following identity \begin{equation}\label{generalh:CompareAdjointAndLinearized} \begin{aligned} \int_{\Omega} \alpha_{\varepsilon}'(\varphi_{\varepsilon}) \varphi \bm{u}_{\varepsilon} \cdot \bm{q}_{\varepsilon} + \alpha_{\varepsilon}(\varphi_{\varepsilon}) \bm{u}_{\varepsilon} \cdot \bm{u} + \mathcal{M}(\varphi_{\varepsilon}) \left ( \mathrm{D} _{2}h \cdot \nabla \bm{u} + p \mathrm{D} _{3}h \right ) \, \mathrm{dx}\, = 0, \end{aligned} \end{equation} where we have used that $p \in L^{2}_{0}(\Omega)$, $\div \bm{u}_{\varepsilon} = 0$ in $\Omega$, $\bm{u} = \bm{q}_{\varepsilon} = \bm{0}$ on $\partial \Omega$, and thus \begin{align*} \int_{\Omega} p \vartheta_{\varepsilon} \, \mathrm{dx}\, = \vartheta_{\varepsilon} \int_{\Omega} p \, \mathrm{dx}\, & = 0, \\ \int_{\Omega} (\bm{u}_{\varepsilon} \cdot \nabla) \bm{q}_{\varepsilon} \cdot \bm{u} + (\bm{u}_{\varepsilon} \cdot \nabla) \bm{u} \cdot \bm{q}_{\varepsilon} \, \mathrm{dx}\, = \int_{\Omega} \bm{u}_{\varepsilon} \cdot \nabla (\bm{q}_{\varepsilon} \cdot \bm{u}) \, \mathrm{dx}\, & = 0. 
\end{align*} Hence, by using \eqref{generalh:CompareAdjointAndLinearized}, we can rewrite \eqref{generalh:DerJEpsiloNFirstForm} as follows: \begin{equation}\label{generalh:DerJEpsiloNSecondForm} \begin{aligned} \mathrm{D} j_{\varepsilon}^{h}(\varphi_{\varepsilon}) \varphi &= \int_{\Omega} \alpha'_{\varepsilon}(\varphi_{\varepsilon})\varphi \left ( \frac{1}{2}\abs{\bm{u}_{\varepsilon}}^{2} - \bm{u}_{\varepsilon} \cdot \bm{q}_{\varepsilon} \right ) + \frac{\gamma \varepsilon}{2c_{0}} \nabla \varphi_{\varepsilon} \cdot \nabla \varphi + \frac{\gamma}{2c_{0} \varepsilon} \psi'(\varphi_{\varepsilon}) \varphi \, \mathrm{dx}\, \\ & + \int_{\Omega} \mathcal{M}(\varphi_{\varepsilon}) \mathrm{D} _{4}h(x, \nabla \bm{u}_{\varepsilon}, p_{\varepsilon}, \nabla \varphi_{\varepsilon}) \cdot \nabla \varphi \, \mathrm{dx}\,\\ & + \int_{\Omega} h(x, \nabla \bm{u}_{\varepsilon}, p_{\varepsilon}, \nabla \varphi_{\varepsilon}) \mathcal{M}'(\varphi_{\varepsilon}) \varphi \, \mathrm{dx}\,. \end{aligned} \end{equation} Together with \eqref{generalh:ProofOptSysEpsVarEq}, this yields the statement of the theorem. \end{proof} The analogous optimality condition for the optimization problem $\{\eqref{IntroStateEquPhaseWeak}, \eqref{ObjFunctHydroPhase}\}$ involving the hydrodynamic force \eqref{HydroDynamForce} is given as follows: \begin{thm}\label{t:HydroDynamOptSys} Let $(\varphi_{\varepsilon}, \bm{u}_{\varepsilon}, p_{\varepsilon}) \in (\Phi_{ad} \cap L^{\infty}(\Omega)) \times \bm{H}^{1}_{\bm{g},\sigma}(\Omega) \times L^{2}_{0}(\Omega)$ be a minimizer of the optimization problem $\{\eqref{IntroStateEquPhaseWeak}, \eqref{ObjFunctHydroPhase}\}$ involving the hydrodynamic force \eqref{HydroDynamForce} with $\norm{\nabla \bm{u}_{\varepsilon}}_{\bm{L}^{2}(\Omega)} < \frac{\mu}{K_{\Omega}}$, so that, in particular, $\bm{S}_{\varepsilon}(\varphi_{\varepsilon}) = \{(\bm{u}_{\varepsilon}, p_{\varepsilon})\}$.
Then the following optimality system is fulfilled: There exists a Lagrange multiplier $\lambda_{\varepsilon} \in \mathbb{R}$ for the integral constraint such that \begin{equation}\label{e:PhaseFieldOptSys}\begin{aligned} &\; \left(\alpha'_{\varepsilon}(\varphi_{\varepsilon}) \left(\frac{1}{2} \abs{\bm{u}_{\varepsilon}}^{2} - \bm{u}_{\varepsilon} \cdot \bm{q}_{\varepsilon} \right) + \frac{\gamma}{2c_{0} \varepsilon} \psi'(\varphi_{\varepsilon}) +\lambda_{\varepsilon} + \mathcal{M}'(\varphi_{\varepsilon}) \nabla \varphi_{\varepsilon} \cdot \left( \bm{\sigma}_{\varepsilon} \bm{a} \right), \zeta \right)_{L^{2}(\Omega)} \\ + \; & \left( \mathcal{M}(\varphi_{\varepsilon}) \bm{\sigma}_{\varepsilon} \bm{a} + \frac{\gamma \varepsilon}{2c_{0}} \nabla \varphi_{\varepsilon} , \nabla \zeta \right)_{\bm{L}^{2}(\Omega)} = 0 \quad \forall \zeta \in H^{1}(\Omega) \cap L^{\infty}(\Omega), \end{aligned} \end{equation} where $\bm{\sigma}_{\varepsilon} := \mu \left(\nabla \bm{u}_{\varepsilon} + (\nabla \bm{u}_{\varepsilon})^{T} \right) - p_{\varepsilon} \, \bm{\mathrm{I}}\,$, and $(\bm{q}_{\varepsilon}, \pi_{\varepsilon}) \in \bm{H}^{1}_{0}(\Omega) \times L^{2}(\Omega)$ is the unique weak solution of the adjoint system \begin{subequations}\label{e:AdjointStrong}\begin{align} \label{adjoint1} \notag \alpha_{\varepsilon} (\varphi_{\varepsilon}) (\bm{q}_{\varepsilon}& - \bm{u}_{\varepsilon}) - \mu \nabla \cdot (\nabla \bm{q}_{\varepsilon} + (\nabla \bm{q}_{\varepsilon})^{T}) + (\nabla \bm{u}_{\varepsilon})^{T} \bm{q}_{\varepsilon} - (\bm{u}_{\varepsilon} \cdot \nabla) \bm{q}_{\varepsilon} + \nabla \pi_{\varepsilon} \\ & = - \mu \left ( \div \left ( \mathcal{M}(\varphi_{\varepsilon}) \nabla \varphi_{\varepsilon} \right ) \bm{a} - \nabla \left ( \mathcal{M}(\varphi_{\varepsilon}) \nabla \varphi_{\varepsilon} \right ) \bm{a} \right ) &&\text{in }\Omega,\\ \label{adjoint2} \div \bm{q}_{\varepsilon} & = \mathcal{M}(\varphi_{\varepsilon}) \nabla \varphi_{\varepsilon} \cdot \bm{a} -
\strokedint_{\Omega} \mathcal{M}(\varphi_{\varepsilon}) \nabla \varphi_{\varepsilon} \cdot \bm{a} \, \mathrm{dx}\, && \text{in }\Omega, \\ \label{adjoint3} \bm{q}_{\varepsilon} & = \bm{0} && \text{on }\partial \Omega. \end{align}\end{subequations} \end{thm} \begin{proof} Note that for the hydrodynamic force \eqref{HydroDynamForce}: \begin{align*} h(x, \nabla \bm{u}_{\varepsilon}, p_{\varepsilon}, \nabla \varphi_{\varepsilon}) = \nabla \varphi_{\varepsilon} \cdot (\mu(\nabla \bm{u}_{\varepsilon} + (\nabla \bm{u}_{\varepsilon})^{T}) - p_{\varepsilon} \, \bm{\mathrm{I}}\,) \cdot \bm{a}, \end{align*} and so we compute that \begin{align*} \mathrm{D} _{2}h & = \mu \left ( \nabla \varphi_{\varepsilon} \otimes \bm{a} + \bm{a} \otimes \nabla \varphi_{\varepsilon} \right ), \quad \mathrm{D} _{3}h = -\bm{a} \cdot \nabla \varphi_{\varepsilon}, \\ \mathrm{D} _{4}h & = (\mu (\nabla \bm{u}_{\varepsilon} + (\nabla \bm{u}_{\varepsilon})^{T}) - p_{\varepsilon} \, \bm{\mathrm{I}}\,) \bm{a}. \end{align*} As $\bm{a}$ is a constant vector, \eqref{equ:partialderivativesh} in Assumption \ref{assump:regularityh} is satisfied and the statements follow from the application of Theorem \ref{t:GeneralisationOptimality}. 
\end{proof} \begin{rem}\label{r:phaseStrong} Integrating by parts, we can formally rewrite the gradient equation \eqref{e:PhaseFieldOptSys} for the hydrodynamic force in the strong form as \begin{align}\label{e:PhaseFieldGradientEquStrong} -\frac{\gamma}{2c_{0}} \left ( \varepsilon \Delta \varphi_{\varepsilon} - \frac{1}{\varepsilon} \psi'(\varphi_{\varepsilon}) \right) + \lambda_{\varepsilon} + \alpha'_{\varepsilon}(\varphi_{\varepsilon}) \left ( \frac{1}{2} \abs{\bm{u}_{\varepsilon}}^{2} - \bm{u}_{\varepsilon} \cdot \bm{q}_{\varepsilon} \right ) - \mathcal{M}(\varphi_{\varepsilon}) \div (\bm{\sigma}_{\varepsilon} \bm{a}) =0 \text{ in }\Omega, \end{align} with the boundary condition \begin{align}\label{e:PhaseFieldGradientEquStrong:Bdy} \frac{\gamma}{2c_{0}} \varepsilon \nabla \varphi_{\varepsilon} \cdot \bm{\nu}_{\partial \Omega} + \mathcal{M}(\varphi_{\varepsilon}) \bm{\nu}_{\partial \Omega} \cdot (\bm{\sigma}_{\varepsilon} \bm{a}) = 0 \text{ on }\partial \Omega. \end{align} Moreover, for sufficiently smooth solutions, we can make use of the state equation \eqref{state1} to rewrite \eqref{e:PhaseFieldGradientEquStrong} as: \begin{equation}\label{e:PhaseFieldGradientEquStrongRewrite} \begin{aligned} & \; -\frac{\gamma}{2c_{0}} \left ( \varepsilon \Delta \varphi_{\varepsilon} - \frac{1}{\varepsilon} \psi'(\varphi_{\varepsilon}) \right) + \lambda_{\varepsilon} + \alpha'_{\varepsilon}(\varphi_{\varepsilon}) \left ( \frac{1}{2} \abs{\bm{u}_{\varepsilon}}^{2} - \bm{u}_{\varepsilon} \cdot \bm{q}_{\varepsilon} \right ) \\ + & \; \mathcal{M}(\varphi_{\varepsilon}) \left( \bm{f} - \alpha_{\varepsilon}(\varphi_{\varepsilon}) \bm{u}_{\varepsilon} - (\bm{u}_{\varepsilon} \cdot \nabla ) \bm{u}_{\varepsilon} \right ) \cdot \bm{a} = 0.
\end{aligned} \end{equation} \end{rem} \begin{rem}\label{r:Dirichletbdy} We note that the above analysis of \eqref{IntroObjFcltPhase}-\eqref{IntroStateEquPhaseWeak} can be modified to include a Dirichlet condition for the design function $\varphi_{\varepsilon}$ on $\partial \Omega$, for instance $\varphi_{\varepsilon} = 1$ on $\partial \Omega$. This amounts to changing the space of admissible design functions to \begin{align*} \Phi_{ad} = \left \{ \varphi \in H^{1}(\Omega) \mid \int_{\Omega} \varphi \, \mathrm{dx}\, = \beta \abs{\Omega} \text{ and } \varphi = 1 \text{ on } \partial \Omega \right \}. \end{align*} Then, in the optimality conditions \eqref{generalh:PhaseFieldOptSys} and \eqref{e:PhaseFieldOptSys}, and also in \eqref{generalh:ProofOptSysEpsVarIn} and \eqref{generalh:ProofOptSysEpsVarEq}, we use test functions $\zeta \in H^{1}_{0}(\Omega) \cap L^{\infty}(\Omega)$, and $\varphi \in H^{1}_{0}(\Omega)$. Moreover, from Remark \ref{r:phaseStrong}, the strong form of the resulting gradient equation \eqref{e:PhaseFieldOptSys} remains as \eqref{e:PhaseFieldGradientEquStrong} $($or \eqref{e:PhaseFieldGradientEquStrongRewrite}$)$, but now with the boundary condition \begin{align*} \varphi_{\varepsilon} = 1 \text{ on } \partial \Omega. \end{align*} \end{rem} \section{Sharp interface asymptotics for the hydrodynamic force}\label{sec:SharpInterfaceAsymp} In Section~\ref{sec:DerivationPhaseField}, we introduced the diffuse interface problem \eqref{IntroObjFcltPhase}-\eqref{IntroStateEquPhaseWeak} as an approximation of the shape optimization problem \eqref{IntroObjFclt}-\eqref{IntroStateEquSharp} for a general functional $h$. 
In Section~\ref{sec:AnalysisPhaseField}, the existence of a minimizer $(\varphi_{\varepsilon}, \bm{u}_{\varepsilon}, p_{\varepsilon})$ to \eqref{IntroObjFcltPhase}-\eqref{IntroStateEquPhaseWeak} for every fixed $\varepsilon > 0$ is guaranteed by Theorem \ref{t:PhaseFieldExistMin}, and the first order necessary optimality condition is given in Theorem \ref{t:GeneralisationOptimality}. The analogous results for the hydrodynamic force problem $\{\eqref{IntroStateEquPhaseWeak}, \eqref{ObjFunctHydroPhase}\}$ are also presented in Theorem \ref{t:HydroDyanExistMin} and Theorem \ref{t:HydroDynamOptSys}. In this section, we focus only on the hydrodynamic force problem $\{\eqref{IntroStateEquPhaseWeak}, \eqref{ObjFunctHydroPhase}\}$ and carry out a sharp interface limit of the system $\{\eqref{IntroStateEquPhase}, \eqref{e:AdjointStrong}, \eqref{e:PhaseFieldGradientEquStrongRewrite} \}$ by the method of formally matched asymptotic expansions. We hereby recover the optimality conditions expected by classical shape sensitivity analysis presented in Section \ref{sec:SharpInterfaceProblem} in the limit $\varepsilon \searrow 0$. For an introduction and more detailed discussion of the techniques and basic assumptions used in the method of formally matched asymptotic expansions we refer for instance to \cite{article:FifePenrose95, article:GarckeStinner06}. In the asymptotic analysis, we assume there are sufficiently smooth solutions to the system $\{\eqref{IntroStateEquPhase}, \eqref{e:AdjointStrong}, \eqref{e:PhaseFieldGradientEquStrong}\}$, and hence we consider \eqref{e:PhaseFieldGradientEquStrongRewrite} instead of \eqref{e:PhaseFieldGradientEquStrong} in the sequel as the analysis is comparatively easier. \begin{assumption} We assume that for small $\varepsilon$, the domain $\Omega$ can be divided into two open subdomains $\Omega^{\pm}(\varepsilon)$, separated by an interface $\Gamma(\varepsilon)$.
Furthermore, we assume that there is a family $(\varphi_{\varepsilon}, \bm{u}_{\varepsilon}, p_{\varepsilon}, \bm{q}_{\varepsilon}, \pi_{\varepsilon}, \lambda_{\varepsilon}, \vartheta_{\varepsilon})_{\varepsilon > 0}$ of solutions to $\{\eqref{IntroStateEquPhase}, \eqref{e:AdjointStrong}, \eqref{e:PhaseFieldGradientEquStrongRewrite}\}$, which are sufficiently smooth and have an asymptotic expansion in $\varepsilon$ in the bulk regions away from $\Gamma(\varepsilon)$ (the outer expansion, see Section \ref{sec:OuterExp}), and another expansion in the interfacial region (inner expansions, see Section \ref{sec:InnerExp}), see also \cite{article:FifePenrose95,article:GarckeStinner06} for a detailed formulation. \end{assumption} For the remainder of this section, we will make use of the following assumptions extensively: \begin{assumption} The correction constant $\delta_{\varepsilon}$ and the interpolation function $\alpha_{\varepsilon}$ fulfill \begin{align*} \delta_{\varepsilon} = \varepsilon^{k}, \, k > 1, \quad \alpha_{\varepsilon}(t) = \frac{1}{\varepsilon} \hat{\alpha}(t), \end{align*} where $\hat\alpha\in C^{1,1}(\mathbb{R})\cap L^\infty(\mathbb{R})$ satisfies the following properties: \begin{align}\label{assump:asymptotics:alpha} \hat{\alpha}(-1)>0,\quad \hat{\alpha}(1) = \hat{\alpha}'(1) = 0, \quad \hat{\alpha}(t) \neq 0 \text{ for } t \neq 1. \end{align} Moreover, we assume that the potential $\psi \in C^{2}(\mathbb{R})$ satisfies: \begin{align}\label{assump:asymptotics:psi} \psi(\pm 1) = \psi'(\pm 1) = 0. 
\end{align} \end{assumption} For the terms involving the square root, we make use of the following expansion for $a = a_{0} + \varepsilon a_{1} + \varepsilon^{2} a_{2} + \ldots$, which holds due to Taylor's theorem: \begin{equation}\label{taylorexpansion} \begin{aligned} \sqrt{a + \delta_{\varepsilon}} &= \sqrt{a_{0} +\varepsilon a_{1} + \ldots + \varepsilon^{k}(a_{k} + 1 ) + \ldots} \\ & = \sqrt{a_{0}} + \frac{1}{2 \sqrt{a_{0}}} \left [ \varepsilon a_{1} + \ldots + \varepsilon^{k}(a_{k} + 1) + \ldots \right ] \\ & - \frac{1}{4 \sqrt{a_{0}^{3}}} \left [ \varepsilon a_{1} + \ldots + \varepsilon^{k} (a_{k} + 1) + \ldots \right ]^{2} + \ldots. \end{aligned} \end{equation} \subsection{Outer expansions}\label{sec:OuterExp} We assume that for $v_{\varepsilon} \in \{\varphi_{\varepsilon}, \bm{u}_{\varepsilon}, p_{\varepsilon}, \lambda_{\varepsilon}, \vartheta_{\varepsilon}, \bm{q}_{\varepsilon}, \pi_{\varepsilon} \}$, the following outer expansions hold: \begin{align*} v_{\varepsilon} = v_{0} + \varepsilon v_{1} + \dots. \end{align*} Applying Taylor's theorem and \eqref{taylorexpansion}, for the choice $\mathcal{M}(\varphi_{\varepsilon}) = \frac{1}{\sqrt{2} c_{0}} \sqrt{\psi(\varphi_{\varepsilon}) + \delta_{\varepsilon}}$, we obtain following outer expansion \begin{equation}\label{sqrtPsiOuter} \begin{aligned} & \; \mathcal{M}(\varphi_{\varepsilon}) = \mathcal{M}(\varphi_{0} + \varepsilon \varphi_{1} + \dots) \\ = & \; \frac{1}{\sqrt{2} c_{0}} \left ( \sqrt{ \psi(\varphi_{0}) + \psi'(\varphi_{0}) (\varepsilon \varphi_{1} + l\dots) + \ldots + \psi^{(k)}(\varphi_{0})(\varepsilon \varphi_{1} + \dots)^{k} + \ldots } \right ) \\ = & \; \frac{1}{ \sqrt{2} c_{0}} \left ( \sqrt{\psi(\varphi_{0})} + \frac{\varepsilon \psi'(\varphi_{0}) \varphi_{1} }{2 \sqrt{\psi(\varphi_{0})}}+ \mathcal{O}(\varepsilon^{2}) \right ) =: \mathcal{M}_{0}(\varphi_{0}) + \varepsilon \mathcal{M}_{1}(\varphi_{0}) \varphi_{1} + \text{ h.o.t.}. 
\end{aligned} \end{equation} We remark that, for the classical smooth double-well potential $\psi(\varphi) = \frac{1}{4}(1-\varphi^{2})^{2}$, one can compute that \begin{align*} \lim_{s \searrow -1} \frac{\psi'(s)}{\sqrt{\psi(s)}} = 2, \quad \lim_{s \nearrow 1} \frac{\psi'(s)}{\sqrt{\psi(s)}} = -2, \end{align*} and so $\mathcal{M}_{1}(\pm 1)$ is well-defined for the smooth double-well potential. We write $(\cdot)_{O}^{\beta}$ for the order $\beta$ outer expansion of equation $(\cdot)$. To leading order $(\ref{state1})_{O}^{-1}$ gives \begin{align}\label{state1:leadingorder} \hat{\alpha}(\varphi_{0}) \bm{u}_{0} = \bm{0}. \end{align} By (\ref{assump:asymptotics:alpha}), if $\varphi_{0} \neq 1$, we then obtain $\bm{u}_{0} = \bm{0}$. Similarly, to leading order $(\ref{adjoint1})_{O}^{-1}$ gives \begin{align}\label{adjoint1:leadingorder} \hat{\alpha}(\varphi_{0}) \bm{q}_{0} = \hat{\alpha}(\varphi_{0}) \bm{u}_{0}. \end{align} Thus, if $\varphi_{0} \neq 1$, then $\bm{q}_{0} = \bm{u}_{0} = \bm{0}$. Meanwhile, $(\ref{state2})_{O}^{0}$, $(\ref{state3})_{O}^{0}$, and $(\ref{adjoint3})_{O}^{0}$ give \begin{equation*} \begin{aligned} \div \bm{u}_{0} = 0 &\text{ in } \Omega, \\ \bm{u}_{0} = \bm{g}, \quad \bm{q}_{0} = \bm{0} &\text{ on } \partial \Omega. \end{aligned} \end{equation*} To order $-1$, $(\ref{e:PhaseFieldGradientEquStrongRewrite})_{O}^{-1}$ gives \begin{align}\label{phase:leadingorder} \hat{\alpha}'(\varphi_{0}) \left ( \frac{1}{2} \abs{\bm{u}_{0}}^{2} - \bm{u}_{0} \cdot \bm{q}_{0} \right ) = -\frac{\gamma}{2c_{0}} \psi'(\varphi_{0}). \end{align} If $\varphi_{0} \neq 1$, then from (\ref{state1:leadingorder}), (\ref{adjoint1:leadingorder}), and (\ref{assump:asymptotics:alpha}), we have that \begin{align}\label{psiroots} -\psi'(\varphi_{0}) = 0. \end{align} Hence, $\varphi_{0}$ must be a piecewise constant function that takes values equal to the roots of $\psi'(\cdot)$. The stable solutions to (\ref{psiroots}) are $\varphi_{0} = \pm 1$.
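For illustration, the limits above for the quartic double-well potential follow from a direct computation: for $\abs{s} < 1$ one has \begin{align*} \psi'(s) = -s(1-s^{2}), \qquad \sqrt{\psi(s)} = \frac{1}{2}(1-s^{2}), \qquad \text{and hence } \frac{\psi'(s)}{\sqrt{\psi(s)}} = -2s, \end{align*} which tends to $2$ as $s \searrow -1$ and to $-2$ as $s \nearrow 1$.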
In particular, we can define the fluid region and the solid region by \begin{align*} E := \{x \in \Omega \mid \varphi_{0}(x) = 1 \},\quad B := \{x \in \Omega \mid \varphi_{0}(x)= -1 \}, \end{align*} respectively. Moreover, from (\ref{state1:leadingorder}) and (\ref{adjoint1:leadingorder}) we have \begin{align}\label{s:OuterExpU0ZeroInOmegas}\bm{u}_{0} = \bm{q}_{0} = \bm{0} \text{ in } B. \end{align} Furthermore, as $\varphi_{0} = \pm 1$, we have $\nabla \varphi_{0} = \bm{0}$ in $E$ and $B$, and so it follows from the definition \eqref{defn:vartheta} that $\vartheta_{0} = 0$. From $(\ref{adjoint2})_{O}^{0}$ we have \begin{align}\label{adjoint2:leadingorder} \div \bm{q}_{0} = 0 &\text{ in } E \cup B. \end{align} The next order $(\ref{state1})_{O}^{0}$ gives \begin{align}\label{state1:firstorder} \hat{\alpha}'(\varphi_{0}) \varphi_{1} \bm{u}_{0} + \hat{\alpha}(\varphi_{0}) \bm{u}_{1} - \mu \Delta \bm{u}_{0} + (\bm{u}_{0} \cdot \nabla) \bm{u}_{0} + \nabla p_{0} = \bm{f}. \end{align} By (\ref{assump:asymptotics:alpha}), for $\varphi_{0} = 1$, we obtain \begin{align}\label{NSbulk} - \mu \Delta \bm{u}_{0} + (\bm{u}_{0} \cdot \nabla) \bm{u}_{0} + \nabla p_{0} = \bm{f} \text{ in } E. \end{align} Similarly, $(\ref{adjoint1})_{O}^{0}$ gives \begin{align}\label{adjoint1:firstorder} \notag & \; \hat{\alpha}'(\varphi_{0}) \varphi_{1} (\bm{q}_{0} - \bm{u}_{0}) + \hat{\alpha}(\varphi_{0}) (\bm{q}_{1} - \bm{u}_{1}) - \mu \div ( \nabla \bm{q}_{0} + (\nabla \bm{q}_{0})^{T}) \\ + & \; (\nabla \bm{u}_{0})^{T} \bm{q}_{0} - (\bm{u}_{0} \cdot \nabla) \bm{q}_{0} + \nabla \pi_{0} = \bm{0}. \end{align} For $\varphi_{0} = 1$, we obtain \begin{align*} - \mu \Delta \bm{q}_{0} + (\nabla \bm{u}_{0})^{T} \bm{q}_{0} - (\bm{u}_{0} \cdot \nabla) \bm{q}_{0} + \nabla \pi_{0} = \bm{0} \text{ in } E, \end{align*} where we have used (\ref{adjoint2:leadingorder}) to simplify the divergence term. \subsection{Inner expansions and matching conditions}\label{sec:InnerExp} Now we consider the interfacial region, i.e.
near some free boundary $\Gamma= \partial E \cap\partial B$ which is assumed to be the limiting hypersurface of the zero level sets of $\varphi_{\varepsilon}$. For studying the limiting behaviour in these parts of $\Omega$ we introduce new coordinates. For this purpose we use the signed distance function $d(x)$ to $\Gamma$ and set $z = \frac{d}{\varepsilon}$ as the rescaled distance variable. Here we use the sign convention $d(x)>0$ if $x \in E$. Let $\gamma(s)$ denote a parametrization of $\Gamma$ by arc-length $s$, and let $\bm{\nu}$ denote the outward unit normal of $\Gamma$. Then, in a tubular neighbourhood of $\Gamma$, for a sufficiently smooth function $v(x)$ we have \begin{align*} v(x) = v(\gamma(s) + \varepsilon z \bm{\nu}(\gamma(s))) =: V(s,z). \end{align*} In this new $(s,z)$-coordinate system, the following change of variables applies, see \cite{article:GarckeStinner06}: \begin{align*} \nabla_{x} v = \frac{1}{\varepsilon}\partial_{z} V \bm{\nu} + \nabla_{\Gamma} V + \text{ h.o.t.}, \end{align*} where $\nabla_{\Gamma}f$ denotes the surface gradient of $f$ on $\Gamma$ with components $(\underline{D}_{k}f)_{1 \leq k \leq d}$ and h.o.t. denotes higher order terms with respect to $\varepsilon$. Moreover, if $\bm{v}$ is a vector-valued function, then we obtain \begin{align*} \div_{x} \bm{v} = \frac{1}{\varepsilon}\partial_{z} \bm{V} \cdot \bm{\nu} + \div_{\Gamma} \bm{V} + \text{ h.o.t.}. \end{align*} In particular, using the fact that the normal $\bm{\nu}$ is independent of $z$, we have \begin{align*} \Delta v = \div_{x} (\nabla_{x} v) & = \frac{1}{\varepsilon^{2}} \partial_{zz}V + \frac{1}{\varepsilon}\underbrace{\div_{\Gamma} (\partial_{z}V \bm{\nu})}_{= - \kappa \partial_{z}V} + \text{ h.o.t.}, \end{align*} where $\kappa = - \div_{\Gamma} \bm{\nu}$ is the mean curvature.
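As a consistency check of these formulas, consider a function depending only on the rescaled distance, say $v(x) = W(d(x)/\varepsilon)$ for some smooth profile $W$ (a model example, not one of the unknowns above). Then $V(s,z) = W(z)$, and since $\nabla d = \bm{\nu}$ and $\Delta d = \div_{\Gamma} \bm{\nu} = -\kappa$ on $\Gamma$, we recover \begin{align*} \nabla_{x} v = \frac{1}{\varepsilon} W' \bm{\nu}, \qquad \Delta v = \frac{1}{\varepsilon^{2}} W'' - \frac{1}{\varepsilon} \kappa W' + \text{ h.o.t.}, \end{align*} in agreement with the expansions above.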
We denote the variables $\varphi_{\varepsilon}$, $\bm{u}_{\varepsilon}$, $p_{\varepsilon}$, $\bm{q}_{\varepsilon}$, $\pi_{\varepsilon}$ in the new coordinate system by $\Phi_{\varepsilon}$, $\bm{U}_{\varepsilon}$, $P_{\varepsilon}$, $\bm{Q}_{\varepsilon}$, $\Pi_{\varepsilon}$. We further assume that they have the following inner expansions: \begin{align*} V_{\varepsilon}(s,z) = V_{0}(s,z) + \varepsilon V_{1}(s,z) + \ldots, \end{align*} for $V_{\varepsilon} \in \{ \Phi_{\varepsilon}, \bm{U}_{\varepsilon}, P_{\varepsilon}, \bm{Q}_{\varepsilon}, \Pi_{\varepsilon} \}$. We then obtain \begin{align*} \mathcal{M}(\Phi_{\varepsilon}) = \mathcal{M}_{0}(\Phi_{0}) + \varepsilon \mathcal{M}_{1}(\Phi_{0}) \Phi_{1} + \text{ h.o.t.}, \end{align*} where $\mathcal{M}_{0}$, $\mathcal{M}_{1}$ are as defined in \eqref{sqrtPsiOuter} if we consider $\mathcal{M}(\varphi) = \frac{1}{c_{0}} \sqrt{\frac{\psi(\varphi) + \delta_{\varepsilon}}{2}}$. We remark that, for a sufficiently smooth function $\bm{f}$ independent of $\varepsilon$, \begin{align*} \bm{f}(x) & = \bm{f}(\gamma(s) + \varepsilon z \bm{\nu}(s)) = \bm{f}(\gamma(s)) + \varepsilon z \nabla \bm{f}(\gamma(s)) \cdot \bm{\nu} + \text{ h.o.t. } \\ & =: \bm{F}_{0}(s) + \varepsilon \bm{F}_{1}(s,z) + \text{ h.o.t.}, \end{align*} for $x$ in a neighbourhood of $\Gamma$. As a consequence, we see that \begin{align}\label{pdzF0} \partial_{z} \bm{F}_{0} = \bm{0}. \end{align} As the Lagrange multipliers $\lambda_{\varepsilon}$ and $\vartheta_{\varepsilon}$ are constant, we assume that the inner expansions are the same as the outer expansions. In particular, the leading order expansions of the Lagrange multipliers do not depend on $z$. The assumption that the zero level sets of $\varphi_{\varepsilon}$ converge to $\Gamma$ implies that \begin{align}\label{e:CondPhi} \Phi_{0}(0) = 0.
\end{align} In order to match the inner expansions valid in the interfacial region to the outer expansions of Section \ref{sec:OuterExp} we employ the matching conditions (for their derivation we refer to \cite[Appendix D]{article:GarckeStinner06}): \begin{align} \label{e:MatchingCond1} \lim_{z \to \pm \infty} V_{0}(s,z) &= v_{0}^{\pm}, \\ \label{e:MatchingCond2} \lim_{z \to \pm\infty} \partial_{z}V_{0}(s,z) &= 0, \\ \label{e:MatchingCond3} \lim_{z \to \pm \infty} \partial_{z} V_{1}(s,z) &= \nabla v_{0}^{\pm} \cdot \bm{\nu}, \\ \label{e:MatchingCond4} \lim_{z \to \pm \infty} \partial_{zz} V_{2}(s,z) &= \left(\left(\bm{\nu} \cdot \nabla \right)\left(\bm{\nu} \cdot \nabla \right)v_{0}^{\pm} \right) = \partial_{\bm{\nu}}(\partial_{\bm{\nu}} v_{0}^{\pm}), \end{align} where $v_{0}^{\pm}:= \lim_{\delta \searrow 0} v_{0}(p \pm \delta \bm{\nu})$ for $p \in \Gamma$. Then (\ref{e:MatchingCond3}) and (\ref{e:MatchingCond4}) for vector-valued functions read as \begin{align*} \lim_{z \to \pm \infty} \partial_{z} \bm{V}_{1}(s,z) = \partial_{\bm{\nu}} \bm{v}_{0}^{\pm}, \quad \lim_{z \to \pm \infty} \partial_{zz} \bm{V}_{2}(s,z) = \bm{\nu} \cdot \nabla (\partial_{\bm{\nu}} \bm{v}_{0}^{\pm}) = \partial_{\bm{\nu}} (\partial_{\bm{\nu}} \bm{v}_{0}^{\pm}). \end{align*} As $\div \bm{u}_{\varepsilon} = 0$, we can rewrite \begin{align*} \Delta \bm{u}_{\varepsilon} = \div ( \nabla \bm{u}_{\varepsilon} + (\nabla \bm{u}_{\varepsilon})^{T}). \end{align*} For a tensor $\bm{A}$, let $\mathcal{E}(\bm{A}) = \frac{1}{2}(\bm{A} + \bm{A}^{T})$.
Then we can compute \begin{align*} \Delta \bm{u}_{\varepsilon} & = \frac{2}{\varepsilon^{2}} \partial_{z} ( \mathcal{E}(\partial_{z} \bm{U}_{\varepsilon} \otimes \bm{\nu}) \bm{\nu}) + \frac{2}{\varepsilon} \partial_{z}(\mathcal{E}(\nabla_{\Gamma} \bm{U}_{\varepsilon}) \bm{\nu}) + \frac{2}{\varepsilon} \div_\Gamma(\mathcal{E}(\partial_{z} \bm{U}_{\varepsilon} \otimes \bm{\nu})) + \ldots \\ & = \frac{1}{\varepsilon^{2}} \partial_{zz} \bm{U}_{\varepsilon} + \frac{1}{\varepsilon^{2}}\partial_{z}(\partial_{z} \bm{U}_{\varepsilon} \cdot \bm{\nu}) \bm{\nu} + \frac{2}{\varepsilon} \partial_{z}(\mathcal{E}(\nabla_{\Gamma} \bm{U}_{\varepsilon}) \bm{\nu}) + \frac{2}{\varepsilon} \div_\Gamma(\mathcal{E}(\partial_{z} \bm{U}_{\varepsilon} \otimes \bm{\nu})) + \ldots. \end{align*} We note that the same expansion holds for the divergence term in (\ref{adjoint1}). Similarly as in Section \ref{sec:OuterExp}, we write $(\cdot)_{I}^{\beta}$ for the order $\beta$ inner expansion of equation $(\cdot)$. \subsubsection{Inner expansions of the state equations} To order $-1$, $(\ref{state2})_{I}^{-1}$ gives \begin{align}\label{state2:innerleadingorder} \partial_{z}\bm{U}_{0} \cdot \bm{\nu} = \partial_{z}(\bm{U}_{0} \cdot \bm{\nu}) = 0, \end{align} while to leading order $(\ref{state1})_{I}^{-2}$ gives \begin{align} \label{state1:innerleadingorder} -\mu \partial_{z} ( \partial_{z} \bm{U}_{0} + (\partial_{z} \bm{U}_{0} \cdot \bm{\nu}) \bm{\nu}) = -\mu \partial_{zz} \bm{U}_{0} = \bm{0}, \end{align} where we have used (\ref{state2:innerleadingorder}). Integrating with respect to $z$ from $-\infty$ to $z$ and applying the matching condition \eqref{e:MatchingCond2} leads to \begin{align} \label{pdzU0zero} \partial_{z} \bm{U}_{0}(s,z) = \bm{0}, \end{align} and so $\bm{U}_{0}$ is independent of $z$.
Integrating once more with respect to $z$ from $-\infty$ to $z$ and by the matching condition $\eqref{e:MatchingCond1}$, we hence find that \begin{align} \label{U0zero} \bm{U}_{0}(s,z) \equiv \bm{u}_{0}^{-} = \bm{0}, \end{align} where we made in particular use of $\eqref{s:OuterExpU0ZeroInOmegas}$. This implies \begin{align}\label{velocontinuous} \bm u_{0}^{+} = \bm{u}_{0}^{-} = \bm{0}. \end{align} To first order $(\ref{state2})_{I}^{0}$ gives \begin{align} \label{state2:innerfirstorder} \partial_{z}\bm{U}_{1} \cdot \bm{\nu} + \div_\Gamma\bm{U}_{0} = \partial_{z} \bm{U}_{1} \cdot \bm{\nu} = 0, \end{align} where we have used (\ref{U0zero}). Using (\ref{U0zero}) and (\ref{state2:innerfirstorder}), to first order $(\ref{state1})_{I}^{-1}$ gives \begin{align} \label{state1:innerfirstorder:simple} - \mu \partial_{zz} \bm{U}_{1} + \partial_{z} P_{0} \bm{\nu} = \bm{0}. \end{align} \subsubsection{Phase field equation to leading order} To leading order $(\ref{e:PhaseFieldGradientEquStrongRewrite})_{I}^{-1}$ gives \begin{equation}\label{phase:innerleadingorder} \begin{aligned} -\frac{\gamma}{2c_{0}} (\partial_{zz}\Phi_{0} - \psi'(\Phi_{0})) + \hat{\alpha}'(\Phi_{0})(\tfrac{1}{2} \abs{\bm{U}_{0}}^{2} - \bm{U}_{0} \cdot \bm{Q}_{0}) - \mathcal{M}_{0}(\Phi_{0}) \hat{\alpha}(\Phi_{0}) \bm{U}_{0} \cdot \bm{a} = 0. \end{aligned} \end{equation} Using (\ref{U0zero}), the above simplifies to \begin{align} \label{ODE} \partial_{zz}\Phi_{0} - \psi'(\Phi_{0}) = 0. \end{align} Along with the matching conditions $\eqref{e:MatchingCond1}$ for $\Phi_{0}$: \begin{align*} \Phi_{0}(s, z = \pm \infty) = \pm 1, \end{align*} we can choose $\Phi_{0}$ to be independent of $s$ and as the unique monotone solution to (\ref{ODE}) satisfying $\Phi_{0}(z=0) = 0$ (recall $\eqref{e:CondPhi}$).
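For the quartic double-well potential $\psi(\varphi) = \frac{1}{4}(1-\varphi^{2})^{2}$, this monotone solution is explicit; a standard computation, included here for illustration, shows that $\Phi_{0}(z) = \tanh(z/\sqrt{2})$ satisfies \begin{align*} \Phi_{0}'(z) = \frac{1}{\sqrt{2}}\left(1 - \Phi_{0}(z)^{2}\right) > 0, \qquad \Phi_{0}''(z) = -\Phi_{0}(z)\left(1 - \Phi_{0}(z)^{2}\right) = \psi'(\Phi_{0}(z)), \end{align*} together with $\Phi_{0}(0) = 0$ and $\Phi_{0}(\pm\infty) = \pm 1$. In this case one also computes $c_{0} = \frac{1}{2}\int_{-1}^{1} \sqrt{2\psi(s)} \, \mathrm{ds}\, = \frac{\sqrt{2}}{3}$.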
Moreover, taking the product of (\ref{ODE}) with $\Phi_{0}'(z)$ and integrating with respect to $z$ from $-\infty$ to $z$ leads to the so-called equipartition of energy after matching: \begin{align} \label{equipartition} \frac{1}{2} \abs{\Phi_{0}'(z)}^{2} = \psi(\Phi_{0}(z)) \text{ for } \abs{z} < \infty. \end{align} Moreover, a short calculation using \eqref{equipartition}, the monotonicity of $\Phi_{0}$, and a change of variables $s \mapsto \Phi_{0}(z)$ shows that \begin{align} \label{const:c0} c_{0} = \frac{1}{2} \int_{-1}^{1} \sqrt{2 \psi(s)} \, \mathrm{ds}\, = \frac{1}{2} \int_{\mathbb{R}} \sqrt{2\psi(\Phi_{0}(z))} \Phi_{0}'(z) \, \mathrm{dz}\, = \frac{1}{2} \int_{\mathbb{R}} \abs{\Phi_{0}'(z)}^{2} \, \mathrm{dz}\,. \end{align} \subsubsection{Inner expansions of the adjoint equation} Before we analyse the adjoint equation, we first compute: \begin{equation}\label{adjoint:RHSterm1:expansion} \begin{aligned} \div (\mathcal{M}(\varphi_{\varepsilon}) \nabla \varphi_{\varepsilon}) & = \frac{1}{\varepsilon^{2}} \partial_{z}(\mathcal{M}(\Phi_{\varepsilon}) \partial_{z} \Phi_{\varepsilon}) + \div_\Gamma \left ( \mathcal{M}(\Phi_{\varepsilon}) \left ( \frac{1}{\varepsilon} \partial_{z}\Phi_{\varepsilon} \bm{\nu} + \nabla_{\Gamma} \Phi_{\varepsilon} \right ) \right) \\ & + \text{ h.o.t.}, \end{aligned} \end{equation} and for any $1 \leq j \leq d$, \begin{align*} & \; (\nabla (\mathcal{M}(\varphi_{\varepsilon}) \nabla \varphi_{\varepsilon}) \bm{a})_{j} = \sum_{i=1}^{d} \partial_{i} (\mathcal{M}(\varphi_{\varepsilon}) \partial_{j} \varphi_{\varepsilon}) a_{i} \\ = & \; \sum_{i=1}^{d} \frac{1}{\varepsilon} \nu_{i} \partial_{z} \left ( \mathcal{M}(\Phi_{\varepsilon}) \left ( \frac{1}{\varepsilon} \partial_{z} \Phi_{\varepsilon} \nu_{j} + \underline{D}_{j} \Phi_{\varepsilon} \right ) \right )a_{i} + \underline{D}_{i} \left ( \mathcal{M}(\Phi_{\varepsilon}) \left ( \frac{1}{\varepsilon} \partial_{z}\Phi_{\varepsilon} \nu_{j} + \underline{D}_{j} \Phi_{\varepsilon} \right ) 
\right ) a_{i} + \text{ h.o.t.}, \end{align*} so that \begin{equation}\label{adjoint:RHSterm2:expansion} \begin{aligned} \nabla (\mathcal{M}(\varphi_{\varepsilon}) \nabla \varphi_{\varepsilon}) \bm{a} & = \frac{1}{\varepsilon^{2}} (\bm{\nu} \cdot \bm{a}) \bm{\nu} \partial_{z} (\mathcal{M}(\Phi_{\varepsilon}) \partial_{z} \Phi_{\varepsilon})\\ & + \frac{1}{\varepsilon} \left ( (\bm{\nu} \cdot \bm{a}) \partial_{z} (\mathcal{M}(\Phi_{\varepsilon}) \nabla_{\Gamma} \Phi_{\varepsilon}) + \nabla_{\Gamma} (\mathcal{M}(\Phi_{\varepsilon}) \partial_{z} \Phi_{\varepsilon} \bm{\nu}) \bm{a} \right ) \\ & + \nabla_{\Gamma} (\nabla_{\Gamma} \Phi_{\varepsilon}) \bm{a} + \text{ h.o.t.}. \end{aligned} \end{equation} To leading order $(\ref{adjoint2})_{I}^{-1}$ gives \begin{align} \label{adjoint2:innerleadingorder} \partial_{z} \bm{Q}_{0} \cdot \bm{\nu} = \mathcal{M}_{0}(\Phi_{0}) \Phi_{0}' (\bm{\nu} \cdot \bm{a}), \end{align} while to leading order $(\ref{adjoint1})_{I}^{-2}$ gives \begin{align} \label{adjoint1:innerleadingorder} - \mu \partial_{zz} \bm{Q}_{0} - \mu \partial_{z}(\partial_{z} \bm{Q}_{0} \cdot \bm{\nu}) \bm{\nu} = - \mu \partial_{z} (\mathcal{M}_{0}(\Phi_{0}) \Phi_{0}' ) ((\bm{\nu} \cdot \bm{a}) \bm{\nu} + \bm{a}), \end{align} where we have used \eqref{adjoint:RHSterm1:expansion}, \eqref{adjoint:RHSterm2:expansion} and that $\bm{\nu}$ is independent of $z$ to simplify the right hand side of $(\ref{adjoint1})_{I}^{-2}$. Integrating \eqref{adjoint1:innerleadingorder} with respect to $z$ from $-\infty$ to $z$ and using the matching condition \eqref{e:MatchingCond2} leads to \begin{align*} \partial_{z} \bm{Q}_{0} + (\partial_{z}\bm{Q}_{0} \cdot \bm{\nu}) \bm{\nu} = \mathcal{M}_{0}(\Phi_{0}) \Phi_{0}' ((\bm{\nu} \cdot \bm{a}) \bm{\nu} + \bm{a}), \end{align*} and upon adding the product of \eqref{adjoint2:innerleadingorder} with $\bm{\nu}$ leads to \begin{align} \label{pdzQ0} \partial_{z} \bm{Q}_{0}(s,z) = \mathcal{M}_{0}(\Phi_{0}) \Phi_{0}' \bm{a}. 
\end{align} Integrating \eqref{pdzQ0} with respect to $z$ from $-\infty$ to $z$, using the matching condition \eqref{e:MatchingCond1} and $\bm{q}_{0}^{-} = \bm{0}$ (see \eqref{s:OuterExpU0ZeroInOmegas}), leads to \begin{align}\label{Q0indeps} \bm{Q}_{0}(s,z) = \left ( \int_{-\infty}^{z} \mathcal{M}_{0}(\Phi_{0}(z)) \Phi_{0}'(z) \, \mathrm{dz}\, \right ) \bm{a}. \end{align} In particular, the right hand side is independent of $s$, and so we can deduce that $\bm{Q}_{0}$ is also independent of $s$. Using the matching condition $\eqref{e:MatchingCond1}$, we hence have \begin{align} \label{q0Omegaplus} \bm{q}_{0}^{+} = \left ( \int_{\mathbb{R}} \mathcal{M}_{0}(\Phi_{0}(z)) \Phi_{0}'(z) \, \mathrm{dz}\, \right ) \bm{a}. \end{align} For the choice $\mathcal{M}(\varphi) = \frac{1}{2}$, we see that \begin{align} \int_{\mathbb{R}} \mathcal{M}_{0}(\Phi_{0}(z)) \Phi_{0}'(z) \, \mathrm{dz}\, = \frac{1}{2} \int_{\mathbb{R}} \Phi_{0}'(z) \, \mathrm{dz}\, = 1, \end{align} while for the choice $\mathcal{M}(\varphi) = \frac{1}{c_{0}} \sqrt{\frac{\psi(\varphi) + \delta_{\varepsilon}}{2}}$, we see that by \eqref{sqrtPsiOuter}, \eqref{equipartition}, and \eqref{const:c0}, \begin{align*} \int_{\mathbb{R}} \mathcal{M}_{0}(\Phi_{0}(z)) \Phi_{0}'(z) \, \mathrm{dz}\, = \frac{1}{c_{0}} \int_{\mathbb{R}} \frac{1}{\sqrt{2}} \sqrt{\psi(\Phi_{0}(z))} \Phi_{0}'(z) \, \mathrm{dz}\, = \frac{1}{c_{0}} \int_{\mathbb{R}} \frac{1}{2} \abs{\Phi_{0}'(z)}^{2} \, \mathrm{dz}\, = 1. \end{align*} Thus, in both cases, we obtain \begin{align}\label{q0Omegaplus:complete} \bm{q}_{0}^{+} = \bm{a}.
\end{align} To the next order, we obtain from $(\ref{adjoint2})_{I}^{0}$ \begin{align} \label{adjoint2:innerfirstorder} \partial_{z} \bm{Q}_{1} \cdot \bm{\nu} = \mathcal{M}_{0}(\Phi_{0}) \partial_{z} \Phi_{1} (\bm{\nu} \cdot \bm{a}) + \mathcal{M}_{1}(\Phi_{0}) \Phi_{1} \Phi_{0}' (\bm{\nu} \cdot \bm{a}), \end{align} where we used that $\bm{Q}_{0}$ and $\Phi_{0}$ are functions of $z$ only, and $\vartheta_{0} = 0$ from the outer expansions. Meanwhile, from \eqref{adjoint:RHSterm1:expansion} and \eqref{adjoint:RHSterm2:expansion}, $(\ref{adjoint1})_{I}^{-1}$ gives \begin{equation}\label{adjoint1:innerfirstorder} \begin{aligned} & \; \hat{\alpha}(\Phi_{0}) \bm{Q}_{0} - \mu \partial_{zz} \bm{Q}_{1} - \mu \partial_{z}(\partial_{z} \bm{Q}_{1} \cdot \bm{\nu}) \bm{\nu} - 2 \mu \partial_{z}(\mathcal{E}(\nabla_{\Gamma} \bm{Q}_{0})) \bm{\nu} - 2 \mu \div_\Gamma( \mathcal{E}( \bm{Q}_{0}' \otimes \bm{\nu})) \\ = & \; -\mu (\bm{a} + (\bm{\nu} \cdot \bm{a}) \bm{\nu}) \partial_{z}(\mathcal{M}_{0}(\Phi_{0}) \partial_{z}\Phi_{1} + \mathcal{M}_{1}(\Phi_{0})\Phi_{1} \Phi_{0}') \\ - & \; \mu \div_{\Gamma} (\mathcal{M}_{0}(\Phi_{0}) \Phi_{0}' \bm{\nu}) \bm{a} - \mu \nabla_{\Gamma} (\mathcal{M}_{0}(\Phi_{0}) \Phi_{0}' \bm{\nu}) \bm{a}. \end{aligned} \end{equation} Moreover, we can simplify, thanks to the fact that $\Phi_{0}$ and $\bm{Q}_{0}$ only depend on $z$: \begin{align*} 2\div_\Gamma(\mathcal{E}( \bm{Q}_{0}' \otimes \bm{\nu})) & = \nabla_{\Gamma} ( \bm{Q}_{0}' ) \bm{\nu} + (\div_{\Gamma} \bm{\nu}) \bm{Q}_{0}' + (\nabla_{\Gamma} \bm{\nu}) \bm{Q}_{0}' + (\div_{\Gamma} \bm{Q}_{0}') \bm{\nu} \\ & = -\kappa \bm{Q}_{0}' + (\nabla_{\Gamma} \bm{\nu}) \bm{Q}_{0}', \\ \div_{\Gamma} (\mathcal{M}_{0}(\Phi_{0}) \Phi_{0}' \bm{\nu}) & = -\mathcal{M}_{0}(\Phi_{0}) \Phi_{0}' \kappa, \\ \nabla_{\Gamma}(\mathcal{M}_{0}(\Phi_{0}) \Phi_{0}' \bm{\nu}) & = \mathcal{M}_{0}(\Phi_{0}) \Phi_{0}' \nabla_{\Gamma} \bm{\nu}.
\end{align*} Then, using the relation \eqref{pdzQ0}, we obtain from \eqref{adjoint1:innerfirstorder}: \begin{equation}\label{adjoint1:innerfirstorder:sim} \begin{aligned} & \; \hat{\alpha}(\Phi_{0}) \bm{Q}_{0} - \mu \partial_{zz} \bm{Q}_{1} - \mu \partial_{z}(\partial_{z} \bm{Q}_{1} \cdot \bm{\nu}) \bm{\nu} + \mu \kappa \bm{Q}_{0}' - \mu(\nabla_{\Gamma} \bm{\nu}) \bm{Q}_{0}' \\ = & \; -\mu (\bm{a} + (\bm{\nu} \cdot \bm{a}) \bm{\nu}) \partial_{z}(\mathcal{M}_{0}(\Phi_{0}) \partial_{z}\Phi_{1} + \mathcal{M}_{1}(\Phi_{0})\Phi_{1} \Phi_{0}') + \mu \bm{Q}_{0}' \kappa - \mu (\nabla_{\Gamma} \bm{\nu}) \bm{Q}_{0}', \end{aligned} \end{equation} and thus, upon cancelling the common terms, we have \begin{equation}\label{adjoint1:innerfirstorder:sim:2} \begin{aligned} & \; \hat{\alpha}(\Phi_{0}) \bm{Q}_{0} - \mu \partial_{zz} \bm{Q}_{1} - \mu \partial_{z}(\partial_{z} \bm{Q}_{1} \cdot \bm{\nu}) \bm{\nu} \\ = & \; -\mu (\bm{a} + (\bm{\nu} \cdot \bm{a}) \bm{\nu}) \partial_{z}(\mathcal{M}_{0}(\Phi_{0}) \partial_{z}\Phi_{1} + \mathcal{M}_{1}(\Phi_{0})\Phi_{1} \Phi_{0}'). \end{aligned} \end{equation} \subsubsection{Phase field equation to first order} Using (\ref{U0zero}), we obtain from $(\ref{e:PhaseFieldGradientEquStrongRewrite})_{I}^{0}$ to first order: \begin{equation}\label{phase:innerfirstorder} \begin{aligned} & \frac{\gamma}{2c_{0}} \left ( -\partial_{zz} \Phi_{1} + \kappa \Phi_{0}' + \psi''(\Phi_{0}) \Phi_{1} \right ) + \lambda_{0} \\ & - \hat{\alpha}'(\Phi_{0}) \bm{U}_{1} \cdot \bm{Q}_{0} + \mathcal{M}_{0}(\Phi_{0}) \left ( \bm{F}_{0} -\hat{\alpha}(\Phi_{0}) \bm{U}_{1} \right ) \cdot \bm{a} = 0. 
\end{aligned} \end{equation} Making use of (\ref{pdzQ0}), after taking the product of (\ref{phase:innerfirstorder}) with $\Phi_{0}'$ we have \begin{equation}\label{phase:innerfirstorder:product} \begin{aligned} & \frac{\gamma}{2c_{0}} \left ( -\partial_{zz} \Phi_{1} \Phi_{0}' + \Phi_{1} (\psi'(\Phi_{0}))' + \kappa \abs{\Phi_{0}'}^{2} \right ) + \lambda_{0} \Phi_{0}' \\ & - (\hat{\alpha}(\Phi_{0}))' \bm{U}_{1} \cdot \bm{Q}_{0} + \bm{Q}_{0}' \cdot (\bm{F}_{0} - \hat{\alpha}(\Phi_{0}) \bm{U}_{1}) = 0. \end{aligned} \end{equation} We note that by integrating by parts: \begin{align*} -\int_{\mathbb{R}} (\hat{\alpha}(\Phi_{0}))' \bm{U}_{1} \cdot \bm{Q}_{0} \, \mathrm{dz}\, = \int_{\mathbb{R}} \hat{\alpha}(\Phi_{0}) (\bm{U}_{1} \cdot \bm{Q}_{0}' + \partial_{z} \bm{U}_{1} \cdot \bm{Q}_{0}) \, \mathrm{dz}\, - [\hat{\alpha}(\Phi_{0}) \bm{U}_{1} \cdot \bm{Q}_{0}]_{z=-\infty}^{z=+\infty} . \end{align*} We use that $\hat{\alpha}(1) = 0$, $\bm{Q}_{0}(z = -\infty) = \bm{q}_{0}^{-} = \bm{0}$ to deduce that the jump term is zero. Hence, \begin{align} \label{phase:innerfirstorder:integraltermU1Q0} -\int_{\mathbb{R}} (\hat{\alpha}(\Phi_{0}))' \bm{U}_{1} \cdot \bm{Q}_{0} \, \mathrm{dz}\, = \int_{\mathbb{R}} \hat{\alpha}(\Phi_{0}) (\bm{U}_{1} \cdot \bm{Q}_{0}' + \partial_{z} \bm{U}_{1} \cdot \bm{Q}_{0}) \, \mathrm{dz}\,. \end{align} So, from integrating (\ref{phase:innerfirstorder:product}) over $\mathbb{R}$ and using (\ref{phase:innerfirstorder:integraltermU1Q0}) we obtain \begin{equation}\label{phase:innerfirstorder:integrated} \begin{aligned} &\int_{\mathbb{R}} \frac{\gamma}{2c_{0}} \left ( - \partial_{zz} \Phi_{1} \Phi_{0}' + \Phi_{1} (\psi'(\Phi_{0}))' + \kappa \abs{\Phi_{0}'}^{2} \right ) + \lambda_{0} \Phi_{0}' \, \mathrm{dz}\, \\ & + \int_{\mathbb{R}} \hat{\alpha}(\Phi_{0}) \partial_{z} \bm{U}_{1} \cdot \bm{Q}_{0} + \bm{Q}_{0}' \cdot \bm{F}_{0} \, \mathrm{dz}\, = 0. 
\end{aligned} \end{equation} Considering the first line, we find that, after integrating by parts and applying matching \eqref{e:MatchingCond1}-\eqref{e:MatchingCond2} for $\Phi_{0}$, \begin{equation}\label{phase:innerfirstorder:line1} \begin{aligned} & \; \int_{\mathbb{R}} \frac{\gamma}{2c_{0}} \left ( -\partial_{zz} \Phi_{1} \Phi_{0}' + \Phi_{1} (\psi'(\Phi_{0}))' + \kappa \abs{\Phi_{0}'}^{2} \right ) + \lambda_{0} \Phi_{0}' \, \mathrm{dz}\, \\ = & \; \frac{\gamma}{2c_{0}} \int_{\mathbb{R}} \partial_{z} \Phi_{1} \left( \Phi_{0}'' - \psi'(\Phi_{0}) \right) + \frac{\gamma}{2c_{0}} [\psi'(\Phi_{0}) \Phi_{1} - \Phi_{0}' \partial_{z}\Phi_{1}]_{z=-\infty}^{z=+\infty} \\ + & \; \kappa \frac{\gamma}{2c_{0}} \underbrace{\int_{\mathbb{R}} \abs{\Phi_{0}'}^{2}\, \mathrm{dz}\,}_{=2c_{0}} + \lambda_{0}\underbrace{\int_{\mathbb{R}} \Phi_{0}' \, \mathrm{dz}\,}_{=2} = \kappa \gamma + 2 \lambda_{0}, \end{aligned} \end{equation} where we made use of \eqref{ODE}, the relation \eqref{const:c0}, and that $\kappa$ is independent of $z$. Thus it remains to identify \begin{align}\label{phase:innerfirstorder:unknownterm} \int_{\mathbb{R}} \bm{F}_{0} \cdot \partial_{z} \bm{Q}_{0} + \hat{\alpha}(\Phi_{0}) \partial_{z} \bm{U}_{1} \cdot \bm{Q}_{0} \, \mathrm{dz}\,. \end{align} To this end, we take the scalar product of (\ref{adjoint1:innerfirstorder:sim:2}) with $\partial_{z} \bm{U}_{1}$ and use (\ref{state2:innerfirstorder}) to obtain \begin{equation}\label{adjoint1:innerfirstorder:sim:multipliedpdzU1} \begin{aligned} & \; \hat{\alpha}(\Phi_{0}) \bm{Q}_{0} \cdot \partial_{z} \bm{U}_{1} - \mu \partial_{zz} \bm{Q}_{1} \cdot \partial_{z} \bm{U}_{1} \\ = & \; -\mu \partial_{z} \bm{U}_{1} \cdot \bm{a} \partial_{z}(\mathcal{M}_{0}(\Phi_{0}) \partial_{z}\Phi_{1} + \mathcal{M}_{1}(\Phi_{0})\Phi_{1} \Phi_{0}'). 
\end{aligned} \end{equation} Integrating \eqref{adjoint1:innerfirstorder:sim:multipliedpdzU1} over $\mathbb{R}$ with respect to $z$, and applying integration by parts leads to \begin{equation}\label{adjoint1:innerfirstorder:integrated} \begin{aligned} & \; \int_{\mathbb{R}} \hat{\alpha}(\Phi_{0}) \bm{Q}_{0} \cdot \partial_{z} \bm{U}_{1} \, \mathrm{dz}\, \\ = & \; \mu \int_{\mathbb{R}} \partial_{zz} \bm{Q}_{1} \cdot \partial_{z} \bm{U}_{1} - \partial_{z} \bm{U}_{1} \cdot \bm{a} \partial_{z}(\mathcal{M}_{0}(\Phi_{0})\partial_{z} \Phi_{1} + \mathcal{M}_{1}(\Phi_{0}) \Phi_{1} \Phi_{0}') \, \mathrm{dz}\, \\ = & \; \mu \left [ \partial_{z} \bm{Q}_{1} \cdot \partial_{z} \bm{U}_{1} - \partial_{z} \bm{U}_{1} \cdot \bm{a} (\mathcal{M}_{0}(\Phi_{0}) \partial_{z} \Phi_{1} + \mathcal{M}_{1}(\Phi_{0}) \Phi_{1} \Phi_{0}') \right ]_{z = -\infty}^{z = +\infty} \\ - & \; \mu \int_{\mathbb{R}} \partial_{zz} \bm{U}_{1} \cdot (\partial_{z} \bm{Q}_{1} - \bm{a} (\mathcal{M}_{0}(\Phi_{0}) \partial_{z} \Phi_{1} + \mathcal{M}_{1}(\Phi_{0}) \Phi_{1} \Phi_{0}')) \, \mathrm{dz}\,. 
\end{aligned} \end{equation} Using \eqref{assump:asymptotics:psi}, the matching conditions \eqref{e:MatchingCond1}, \eqref{e:MatchingCond2}, \eqref{e:MatchingCond3} for $\Phi_{0}$, and \eqref{e:MatchingCond3} for $\bm{Q}_{1}$ and $\bm{U}_{1}$, we see that the jump term is \begin{equation}\label{adjoint1:innerfirstorder:integrated:jumpterm} \begin{aligned} \left [ \partial_{z} \bm{Q}_{1} \cdot \partial_{z} \bm{U}_{1} - \partial_{z} \bm{U}_{1} \cdot \bm{a} (\mathcal{M}_{0}(\Phi_{0}) \partial_{z} \Phi_{1} + \mathcal{M}_{1}(\Phi_{0}) \Phi_{1} \Phi_{0}') \right ]_{z = -\infty}^{z = +\infty} = [\partial_{\bm{\nu}} \bm{q}_{0} \cdot \partial_{\bm{\nu}} \bm{u}_{0}]_{-}^{+}, \end{aligned} \end{equation} since, in the case $\mathcal{M}(\varphi) = \frac{1}{2}$, we have $\mathcal{M}_{0}(\Phi_{0}) = \frac{1}{2}$ and $\mathcal{M}_{1}(\Phi_{0}) \Phi_{1} = 0$, while for the case $\mathcal{M}(\varphi) = \frac{1}{c_{0}} \sqrt{\frac{\psi(\varphi) + \delta_{\varepsilon}}{2}}$, using \eqref{equipartition} and the matching conditions, we have \begin{align*} & \; \left [ \partial_{z} \bm{U}_{1} \cdot \bm{a} (\mathcal{M}_{0}(\Phi_{0}) \partial_{z} \Phi_{1} + \mathcal{M}_{1}(\Phi_{0}) \Phi_{1} \Phi_{0}') \right ]_{z = -\infty}^{z = +\infty} \\ = & \; \frac{1}{\sqrt{2} c_{0}} \left [ \partial_{z} \bm{U}_{1} \cdot \bm{a} \left ( \sqrt{\psi(\Phi_{0})} \partial_{z}\Phi_{1} + \frac{\psi'(\Phi_{0}) \Phi_{1}}{\sqrt{2}} \frac{\Phi_{0}'}{\sqrt{2 \psi(\Phi_{0})}} \right ) \right ]_{z = -\infty}^{z = +\infty} = 0. 
\end{align*} Meanwhile, using \eqref{adjoint2:innerfirstorder} and \eqref{state1:innerfirstorder:simple}, the integral term is \begin{equation}\label{adjoint1:innerfirstorder:integrated:integralterm} \begin{aligned} & \; \int_{\mathbb{R}} \mu \partial_{zz} \bm{U}_{1} \cdot (\partial_{z} \bm{Q}_{1} - \bm{a} (\mathcal{M}_{0}(\Phi_{0}) \partial_{z} \Phi_{1} + \mathcal{M}_{1}(\Phi_{0}) \Phi_{1} \Phi_{0}')) \, \mathrm{dz}\, \\ = & \; \int_{\mathbb{R}} - \partial_{z} P_{0} \bm{\nu} \cdot (\partial_{z} \bm{Q}_{1} - \bm{a} (\mathcal{M}_{0}(\Phi_{0}) \partial_{z} \Phi_{1} + \mathcal{M}_{1}(\Phi_{0}) \Phi_{1} \Phi_{0}')) \, \mathrm{dz}\, = 0. \end{aligned} \end{equation} Together with \eqref{pdzF0}, i.e., $\bm{F}_{0}$ is independent of $z$, we obtain from \eqref{adjoint1:innerfirstorder:integrated:jumpterm}, \eqref{adjoint1:innerfirstorder:integrated:integralterm} that \eqref{phase:innerfirstorder:unknownterm} is \begin{equation} \begin{aligned} \int_{\mathbb{R}} \bm{F}_{0} \cdot \partial_{z} \bm{Q}_{0} + \hat{\alpha}(\Phi_{0}) \partial_{z} \bm{U}_{1} \cdot \bm{Q}_{0} \, \mathrm{dz}\, & = \bm{f}_{0} \cdot [\bm{q}_{0}]_{-}^{+} + \mu [\partial_{\bm{\nu}} \bm{q}_{0} \cdot \partial_{\bm{\nu}} \bm{u}_{0}]_{-}^{+} \\ & = \bm{f}_{0} \cdot \bm{a} + \mu \partial_{\bm{\nu}} \bm{q}_{0}^{+} \cdot \partial_{\bm{\nu}} \bm{u}_{0}^{+}, \end{aligned} \end{equation} as $\bm{q}_{0}^{-} = \bm{u}_{0}^{-} = \bm{0}$, and $\bm{q}_{0}^{+} = \bm{a}$ from \eqref{q0Omegaplus:complete}. Thus, we obtain from \eqref{phase:innerfirstorder:integrated} the following solvability condition for $\Phi_{1}$: \begin{equation*} \begin{aligned} 2 \lambda_{0} + \kappa \gamma + \bm{f}_{0} \cdot \bm{a} + \mu \partial_{\bm{\nu}} \bm{q}_{0}^{+} \cdot \partial_{\bm{\nu}} \bm{u}_{0}^{+} = 0 \text{ on } \Gamma.
\end{aligned} \end{equation*} \subsubsection{Sharp interface limit} In summary, we obtain the following sharp interface limit: \begin{subequations}\label{e:StateAndAdjointInLimit}\begin{align} - \mu \Delta \bm{u}_{0} + (\bm{u}_{0} \cdot \nabla) \bm{u}_{0} + \nabla p_{0} = \bm{f} & \text{ in } E, \\ - \mu \Delta \bm{q}_{0} + (\nabla \bm{u}_{0})^{T} \bm{q}_{0} - (\bm{u}_{0} \cdot \nabla) \bm{q}_{0} + \nabla \pi_{0} = \bm{0} & \text{ in } E, \\ \div \bm{u}_{0} = 0, \quad \div \bm{q}_{0} = 0 & \text{ in } E, \\ \bm{u}_{0} = \bm{g}, \quad \bm{q}_{0} = \bm{0} & \text{ on } \partial \Omega \cap E, \\ \bm{u}_{0} = \bm{q}_{0} = \bm{0} & \text{ in } B,\\ \bm{u}_{0} = \bm{0},\quad \bm{q}_{0} = \bm{a} & \text{ on } \Gamma, \label{DirichletFreeBdy} \end{align}\end{subequations} together with the following gradient equation: \begin{equation}\label{e:GradientEquationLimit}\begin{aligned} \kappa \gamma + 2\lambda_{0} + \mu \partial_{\bm{\nu}} \bm{q}_{0} \cdot \partial_{\bm{\nu}} \bm{u}_{0} + \bm{f} \cdot \bm{a} = 0 \text{ on } \Gamma, \end{aligned} \end{equation} which is consistent with the adjoint system \eqref{IntroAjointEquSharp} and the strong form of \eqref{ShapeDerivHydroDynamForce} from \cite{incoll:Boisgerault}, taking into account the volume constraint (see \eqref{generalh:ProofOptSysEpsVarEq}) and the additional perimeter regularization. \begin{rem}[Linear scaling for the correction constant $\delta_{\varepsilon}$]\label{rem:deltaepslinearscaling} Suppose $\delta_{\varepsilon} = \varepsilon$, then we observe from \eqref{taylorexpansion} that \begin{align*} \sqrt{\psi(\Phi) + \varepsilon} = \sqrt{\psi(\Phi_{0})} + \varepsilon \frac{\psi'(\Phi_{0}) \Phi_{1} + 1}{2 \sqrt{\psi(\Phi_{0})}} + \mathrm{ h.o.t.}, \end{align*} i.e., \begin{align*} \mathcal{M}_{0}(\Phi_{0}) = \frac{1}{\sqrt{2} c_{0}} \sqrt{\psi(\Phi_{0})}, \quad \mathcal{M}_{1}(\Phi_{0})\Phi_{1} = \frac{1}{\sqrt{2} c_{0}}\frac{\psi'(\Phi_{0}) \Phi_{1} + 1}{2 \sqrt{\psi(\Phi_{0})}}. 
\end{align*} The presence of this extra factor of $\frac{1}{2 \sqrt{\psi(\Phi_{0})}}$ in $\mathcal{M}_{1}(\Phi_{0}) \Phi_{1}$ alters the jump term of \eqref{adjoint1:innerfirstorder:integrated} to \begin{equation*} \begin{aligned} & \; \left [ \partial_{z} \bm{Q}_{1} \cdot \partial_{z} \bm{U}_{1} - \partial_{z} \bm{U}_{1} \cdot \bm{a} (\mathcal{M}_{0}(\Phi_{0}) \partial_{z} \Phi_{1} + \mathcal{M}_{1}(\Phi_{0}) \Phi_{1} \Phi_{0}') \right ]_{z = -\infty}^{z = +\infty} \\ = & \; [\partial_{\bm{\nu}} \bm{q}_{0} \cdot \partial_{\bm{\nu}} \bm{u}_{0}]_{-}^{+} - \frac{\bm{a}}{2 c_{0}} \cdot \left [ \frac{\Phi_{0}'}{\sqrt{2 \psi(\Phi_{0})}} \partial_{z} \bm{U}_{1} \right ]_{z=-\infty}^{z=+\infty} = \partial_{\bm{\nu}} \bm{q}_{0} \cdot \partial_{\bm{\nu}} \bm{u}_{0} - \frac{\bm{a}}{2 c_{0}} \cdot \partial_{\bm{\nu}} \bm{u}_{0}, \end{aligned} \end{equation*} where we have used \eqref{equipartition}. Thus, instead of \eqref{e:GradientEquationLimit}, we obtain \begin{align*} \kappa \gamma + 2\lambda_{0} + \mu \partial_{\bm{\nu}} \bm{q}_{0} \cdot \partial_{\bm{\nu}} \bm{u}_{0} + \frac{\mu}{2 c_{0}} \partial_{\bm{\nu}} \bm{u}_{0} \cdot \bm{a} + \bm{f} \cdot \bm{a} = 0 \text{ on } \Gamma. \end{align*} \end{rem} \section{Numerical computations}\label{sec:Numerics} In this section we investigate the phase field approach numerically. We minimize the drag and maximize the lift-to-drag ratio of an obstacle in outer flow and apply both phase field approximations of the corresponding surface functionals. Concerning numerical results in the literature we refer to the minimization of the drag functional in \cite{article:SchmidtSchulz10, incoll:BrandenburgLindemannUlbrichUlbrich09}, where a sharp interface approach is used. In \cite{article:Kondoh12} the porous medium approach is used, where the authors argue that the term $\alpha_{\varepsilon}\bm{u}_\varepsilon$ is a valid approximation for the hydrodynamic force. Let us start with defining the free energy $\psi$.
Here we use \begin{equation} \begin{aligned} \tilde{\psi}(y) & = \frac{s}{2} \left({\max}^{2}(0,y - 1) + {\min}^{2}(0,y + 1) \right ) + \frac{1}{2}(1 - y^{2}), \\ \psi(y) & = \tilde{\psi} \left(\frac{s}{s-1} y \right) + \frac{1}{2(s-1)}. \end{aligned} \end{equation} Note that $\tilde{\psi}$ can be obtained by using a Moreau--Yosida relaxation of the double-obstacle free energy (\ref{doubleobstacle}) with the relaxation (or penalization) parameter $s \gg 1$, and the scaling of the argument and the shifting are chosen such that $\psi$ has its minima at $y = \pm 1$ with $\psi(\pm 1) = 0$. We further introduce the convex-concave splitting \begin{align*} \psi & = \psi_{+} + \psi_{-}, \\ \psi_{+}(y) & = \frac{s}{2} \left( {\max}^{2} \left(0,\frac{s}{s-1}y - 1 \right) + {\min}^{2} \left(0,\frac{s}{s-1} y + 1 \right) \right),\\ \psi_{-}(y) & = \frac{1}{2} \left( 1 - \left(\frac{s}{s-1}y \right)^{2}\right) + \frac{1}{2(s-1)}, \end{align*} where $\psi_{+}$ is the convex part of $\psi$ and $\psi_{-}$ is its concave part. Next we define the interpolation function $\alpha_{\varepsilon}$ as \begin{align} \alpha_{\varepsilon}(y) = \frac{\overline{\alpha}}{\varepsilon} \begin{cases} 0 & \text{ if } y \geq 1,\\ \frac{1}{(1-\theta)(3+\theta)}(y-1)^{2} & \text{ if } 1 > y \geq \theta,\\ \min \left( 1+\frac{2}{3+\theta},1-\frac{2}{3+\theta}(y+1) \right) & \text{ if } \theta > y, \end{cases} \end{align} where $\overline{\alpha}$ is a given constant, and we choose $\theta = 0.99$. This function $\alpha_{\varepsilon}(y)$ is linear between $y = -2$ and $y = \theta$ and has a quadratic extension between $y = \theta$ and $y = 1$. This choice fulfills Assumption \ref{assump:alpha} with $s_{a} = -2$ and $s_{b} = 1$. Note that we do not fulfill the regularity $\alpha_{\varepsilon} \in C^{1,1}(\mathbb{R})$ at $s_{a}$.
But this is not a severe violation since in practice it holds that $-2 < \varphi_{\varepsilon}$ and we can control the violation of the bound $-1 \leq \varphi_{\varepsilon}$ by choosing an appropriate relaxation parameter $s$. For solving the optimization problem \eqref{IntroObjFcltPhase} we use a mass conserving $H^{-1}$-gradient flow approach, following \cite{garckehinzeetal}. For this purpose we introduce an artificial time variable $t$ and solve the following evolution equation for the phase field variable $\varphi_{\varepsilon}(t)$, which is obtained from \eqref{generalh:PhaseFieldOptSys}: \begin{equation}\label{eq:num:gradflow} \begin{aligned} \partial_{t} \varphi_{\varepsilon} & = \Delta w_{\varepsilon}, \\ w_{\varepsilon} & = - \gamma \varepsilon \Delta \varphi_{\varepsilon} + \frac{\gamma}{\varepsilon} \psi'(\varphi_{\varepsilon}) + \alpha'_{\varepsilon}(\varphi_{\varepsilon}) \left ( \frac{1}{2} \abs{\bm{u}_{\varepsilon}}^{2} - \bm{u}_{\varepsilon} \cdot \bm{q}_{\varepsilon} \right ) + J_{\varphi},\\ J_{\varphi} & = \mathcal{M}'(\varphi_{\varepsilon}) h(x, \nabla \bm{u}_{\varepsilon}, p_{\varepsilon}, \nabla \varphi_{\varepsilon}) -\div \left( \mathcal{M}(\varphi_\varepsilon) \mathrm{D} _{4}h(x, \nabla \bm{u}_{\varepsilon}, p_{\varepsilon}, \nabla \varphi_{\varepsilon}) \right), \end{aligned} \end{equation} where $\bm{u}_{\varepsilon}$ is obtained from \eqref{IntroStateEquPhase}, $\bm{q}_{\varepsilon}$ is obtained from \eqref{generalh:adjointsystem}, and $J_\varphi$ abbreviates the terms arising from the differentiation of the functional $h$, as shown in Theorem \ref{t:GeneralisationOptimality}. Note that we include the factor $\frac{1}{2c_0}$ in the parameter $\gamma$. The gradient flow approach allows us to treat nonlinear parts of the gradient, for example the derivative of $\psi_{+}$, implicitly in time within a time stepping scheme, which is favorable for stability with the chosen free energy.
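To make the gradient flow concrete, the following pure-Python sketch (all names are ours, for illustration) performs one step of a drastically simplified version of \eqref{eq:num:gradflow} on a one-dimensional periodic grid: the flow coupling terms involving $\alpha_{\varepsilon}'$ and $J_{\varphi}$ are dropped, and the chemical potential is treated fully explicitly, whereas the scheme used in our computations evaluates $\psi_{+}'$ implicitly.

```python
# Pure-Python sketch of one step of the H^{-1} gradient flow
#   phi_t = Lap(w),  w = -gamma*eps*Lap(phi) + (gamma/eps)(psi_+' + psi_-')(phi),
# on a 1D periodic grid.  Simplifications vs. the scheme in the text:
# fully explicit in time and no flow coupling (alpha'_eps and J_phi dropped).
S = 1.0e6                          # relaxation parameter s
T = S / (S - 1.0)                  # argument scaling s/(s-1)

def psi_tilde(y):
    return 0.5 * S * (max(0.0, y - 1.0) ** 2 + min(0.0, y + 1.0) ** 2) \
        + 0.5 * (1.0 - y * y)

def psi(y):                        # relaxed double-obstacle energy
    return psi_tilde(T * y) + 0.5 / (S - 1.0)

def dpsi_plus(y):                  # derivative of the convex part psi_+
    return S * T * (max(0.0, T * y - 1.0) + min(0.0, T * y + 1.0))

def dpsi_minus(y):                 # derivative of the concave part psi_-
    return -T * (T * y)

def lap(f, dx):                    # periodic 3-point Laplacian
    n = len(f)
    return [(f[(i - 1) % n] - 2.0 * f[i] + f[(i + 1) % n]) / dx ** 2
            for i in range(n)]

def gradient_flow_step(phi, tau, gamma, eps, dx):
    lap_phi = lap(phi, dx)
    w = [-gamma * eps * lap_phi[i]
         + (gamma / eps) * (dpsi_plus(phi[i]) + dpsi_minus(phi[i]))
         for i in range(len(phi))]
    lap_w = lap(w, dx)
    return [phi[i] + tau * lap_w[i] for i in range(len(phi))]
```

Because the update adds $\tau\,\Delta w$ with a periodic Laplacian, the discrete mass $\sum_i \varphi_i$ is conserved exactly, mirroring the mass-conserving $H^{-1}$ flow; the splitting identity $\psi = \psi_{+} + \psi_{-}$ and $\psi(\pm 1) = 0$ can be checked directly from these definitions.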
After time discretization with variable time step size $\tau^{k+1}$ we at each time instance solve the following problem:\\ Given $\varphi_{\varepsilon}^{k}$, find $\varphi_{\varepsilon}^{k+1}$, $w_{\varepsilon}^{k+1}$, $\bm{u}_{\varepsilon}$, $p_{\varepsilon}$, $\bm{q}_{\varepsilon}$, and $\pi_{\varepsilon}$ fulfilling the primal system \begin{equation}\label{eq:num:TD_primal} \begin{aligned} \alpha_{\varepsilon}(\varphi_{\varepsilon}^{k}) \bm{u}_{\varepsilon} - \mu \Delta \bm{u}_{\varepsilon} + (\bm{u}_{\varepsilon} \cdot \nabla)\bm{u}_{\varepsilon} + \nabla p_{\varepsilon} & = \bm{f} && \text{ in }\Omega, \\ \div \bm{u}_{\varepsilon} & = 0 && \text{ in }\Omega, \\ \bm{u}_{\varepsilon} &= \bm{g} &&\text{ on } \partial \Omega, \end{aligned} \end{equation} the adjoint system \begin{equation}\label{eq:num:TD_adjoint} \begin{aligned} \alpha_{\varepsilon} & (\varphi_{\varepsilon}^{k})\bm{q}_{\varepsilon} - \mu \div \left (\nabla \bm{q}_{\varepsilon} + (\nabla \bm{q}_{\varepsilon})^{T} \right) + (\nabla \bm{u}_{\varepsilon})^{T} \bm{q}_{\varepsilon} - (\bm{u}_{\varepsilon} \cdot \nabla) \bm{q}_{\varepsilon} + \nabla \pi_{\varepsilon} \\ & = \alpha_{\varepsilon}(\varphi_{\varepsilon}^{k}) \bm{u}_{\varepsilon} - \div \left (\mathcal{M}(\varphi_{\varepsilon}^{k}) \mathrm{D} _{2} h \right) &&\text{ in }\Omega,\\ \div \bm{q}_{\varepsilon} & = - \mathcal{M}(\varphi_{\varepsilon}^{k}) \mathrm{D} _{3} h + \vartheta_{\varepsilon} && \text{ in }\Omega, \\ \bm{q}_{\varepsilon} & = \bm{0} && \text{ on }\partial \Omega, \end{aligned} \end{equation} and the Cahn--Hilliard system \begin{equation}\label{eq:num:TD_phase} \begin{aligned} \varphi_{\varepsilon}^{k+1} & = \tau^{k+1} \Delta w_{\varepsilon}^{k+1} + \varphi_{\varepsilon}^{k} && \text{ in }\Omega,\\ w_{\varepsilon}^{k+1} & = - \gamma \varepsilon \Delta \varphi_{\varepsilon}^{k+1} + \frac{\gamma}{\varepsilon} \left(\psi'_{+}(\varphi_{\varepsilon}^{k+1}) + \psi'_{-}(\varphi_{\varepsilon}^{k})\right) \\ & + \frac{1}{2} 
\alpha'_{\varepsilon}(\varphi_{\varepsilon}^{k+1}) \abs{\bm{u}_{\varepsilon}}^{2} - \alpha'_{\varepsilon}(\varphi_{\varepsilon}^{k}) \bm{u}_{\varepsilon} \cdot \bm{q}_{\varepsilon} + J_{\varphi} && \text{ in }\Omega,\\ J_{\varphi} & = \mathcal{M}'(\varphi_{\varepsilon}^{k}) h(x, \nabla \bm{u}_{\varepsilon}, p_{\varepsilon}, \nabla \varphi_{\varepsilon}^{k+1})\\ & - \div \left( \mathcal{M}(\varphi_{\varepsilon}^{k}) \mathrm{D} _{4}h(x, \nabla \bm{u}_{\varepsilon}, p_{\varepsilon}, \nabla \varphi_{\varepsilon}^{k+1}) \right),\\ 0 & = \gamma\varepsilon \nabla \varphi_{\varepsilon}^{k+1} \cdot \bm{\nu}_{\partial \Omega} + \mathcal M(\varphi_\varepsilon^{k})\bm \nu_{\partial \Omega} \cdot D_4h &&\text{ on } \partial \Omega,\\ 0 &= \nabla w_{\varepsilon}^{k+1}\cdot \bm{\nu}_{\partial \Omega} &&\text{ on } \partial \Omega. \end{aligned} \end{equation} As noted above, we evaluate $\psi'_{+}$ at the new time instance for stability reasons. For the spatial discretization piecewise linear and globally continuous finite elements are used for the variables $\varphi_{\varepsilon}^{k+1}$, $w_{\varepsilon}^{k+1}$, $p_{\varepsilon}$, and $\pi_{\varepsilon}$, while piecewise quadratic and globally continuous elements are used for $\bm{u}_{\varepsilon}$ and $\bm{q}_{\varepsilon}$. The meshes are adapted using the jumps of the normal derivative of $\varphi_{\varepsilon}^{k+1}$ and $w_{\varepsilon}^{k+1}$ over edges of the underlying discretization mesh, see \cite{article:CarstensenVerfuerth99, book:Verfuerth_Adaptivity}, together with a D\"{o}rfler marking \cite{article:Doerfler96}. \subsection{Minimization of the hydrodynamic force of an obstacle}\label{ssec:num:minHydro} We investigate the minimization of the drag of an obstacle of fixed area in a channel flow with block inflow profile. The computational domain is $\Omega = (0,1.7) \times (0,0.4)$. The initial phase field $\varphi^{0}$ is defined as a circle of radius $r = 0.05$ with center at $M = (0.5,0.2)$. 
The boundary velocity is set to $\bm g(x,y) = (1,0)^{T}$. We fix $\delta_{\varepsilon} = 0$, $s= 1 \times 10^{6}$, and $\bm{f} \equiv \bm{0}$. We further set \begin{align*} \tau^{k+1} := \xi \min_{T} (h_{T} \norm{\nabla w_{\varepsilon}^{k}}_{L^{2}(T)}^{-1}), \end{align*} where the minimization is carried out over all triangles $T$. Here, the diameter of triangle $T$ is denoted by $h_{T}$, and $\xi$ is a positive scaling parameter typically set to $\xi=5$. This CFL-like condition prevents the interfacial region from moving too fast for the adaptation process. We restate the definition of the phase field approximation of the hydrodynamic force in a direction $\bm{a}$ as \begin{align}\label{eq:num:hydroForce} F^{\bm{a}} := \int_{\Omega} \mathcal{M}(\varphi_{\varepsilon}) \nabla \varphi_{\varepsilon} \cdot \left( \mu \left( \nabla \bm{u}_{\varepsilon} + (\nabla \bm{u}_{\varepsilon})^{T} \right) - p_{\varepsilon} \, \bm{\mathrm{I}}\, \right) \cdot \bm{a} \, \mathrm{dx}\,. \end{align} When $\bm{a}$ is equal to the direction of the flow, i.e., $\bm{a} = (1,0)^{T}$, we denote the resulting approximation as $F^{D}$, which corresponds to the drag of the obstacle. Meanwhile, if $\bm{a}$ is perpendicular to the direction of the flow, i.e., $\bm{a} = (0,1)^{T}$, then we denote the resulting approximation as $F^{L}$, which corresponds to the lift of the obstacle. 
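Since \eqref{eq:num:hydroForce} is a volume integral, it can in principle be approximated by any quadrature. As an illustration only (our computations use the adaptive finite element discretization described above; the routine and its signature below are hypothetical), the following pure-Python sketch evaluates $F^{\bm{a}}$ with central differences on a uniform grid:

```python
# Hypothetical finite-difference evaluation of the phase field force
# functional F^a (the paper's computations use adaptive finite elements).
# Fields are (n x n) nested lists on a uniform grid with spacing dx;
# derivatives are central differences, summed over interior nodes only.
def hydro_force(phi, u, v, p, M, a, mu, dx):
    n = len(phi)
    total = 0.0
    for i in range(1, n - 1):
        for j in range(1, n - 1):
            dpx = (phi[i + 1][j] - phi[i - 1][j]) / (2.0 * dx)
            dpy = (phi[i][j + 1] - phi[i][j - 1]) / (2.0 * dx)
            dux = (u[i + 1][j] - u[i - 1][j]) / (2.0 * dx)
            duy = (u[i][j + 1] - u[i][j - 1]) / (2.0 * dx)
            dvx = (v[i + 1][j] - v[i - 1][j]) / (2.0 * dx)
            dvy = (v[i][j + 1] - v[i][j - 1]) / (2.0 * dx)
            # stress-like tensor  S = mu*(grad u + (grad u)^T) - p*I
            s00 = 2.0 * mu * dux - p[i][j]
            s01 = mu * (duy + dvx)
            s11 = 2.0 * mu * dvy - p[i][j]
            sa0 = s00 * a[0] + s01 * a[1]      # (S a)_x
            sa1 = s01 * a[0] + s11 * a[1]      # (S a)_y
            total += M(phi[i][j]) * (dpx * sa0 + dpy * sa1) * dx * dx
    return total
```

With $\bm{u} = \bm{0}$, constant pressure and a linear ramp for $\varphi_{\varepsilon}$, the integrand is constant and the quadrature is exact, which gives a quick consistency test; choosing $\bm{a} = (1,0)^{T}$ yields the drag approximation $F^{D}$ and $\bm{a} = (0,1)^{T}$ the lift approximation $F^{L}$.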
From \eqref{e:AdjointStrong} and \eqref{e:PhaseFieldGradientEquStrongRewrite}, the terms arising from the derivatives of $h$ in systems \eqref{eq:num:TD_adjoint} and \eqref{eq:num:TD_phase} in the present setting are given as \begin{align*} (- \div \left (\mathcal{M}(\varphi_{\varepsilon}^{k}) \mathrm{D} _{2}h \right),\bm{v})& = \mu \int_{\Omega} \mathcal{M}(\varphi_{\varepsilon}^{k}) \nabla \varphi_{\varepsilon}^{k} \cdot \left(\nabla \bm{v} + (\nabla \bm{v})^{T} \right) \bm{a} \, \mathrm{dx}\, && \forall \bm{v} \in \bm{H}^{1}_{0}(\Omega), \\ (-\mathcal{M}(\varphi_{\varepsilon}^{k}) \mathrm{D} _{3}h + \vartheta_{\varepsilon}, \eta) &= \int_{\Omega} \left( \mathcal{M}(\varphi_{\varepsilon}^{k}) \nabla \varphi_{\varepsilon}^{k} \cdot \bm{a} - \strokedint_{\Omega} \mathcal{M}(\varphi_{\varepsilon}^{k}) \nabla \varphi_{\varepsilon}^{k} \cdot \bm{a} \, \mathrm{dx}\, \right) \eta \, \mathrm{dx}\, && \forall \eta \in L^{2}_{0}(\Omega), \\ (J_{\varphi}, \zeta) &= \int_{\Omega} \mathcal{M}(\varphi_{\varepsilon}^{k}) \left( - \alpha_{\varepsilon}(\varphi_{\varepsilon}^{k}) \bm{u}_{\varepsilon} - (\bm{u}_{\varepsilon} \cdot \nabla ) \bm{u}_{\varepsilon} \right) \cdot \bm{a} \zeta \, \mathrm{dx}\, && \forall \zeta \in H^{1}(\Omega). \end{align*} Next, we report on the numerical results for the case of minimizing $F^{D}$. The parameters are chosen as $\varepsilon = 0.00025$, $\overline{\alpha} = 0.03$, $\mu = 0.001$, and $\gamma=0.01$. We note that we use path-following with respect to the value of $\mu$, starting from $\mu=0.01$, and also for the value of $\gamma$, starting from $\gamma=0.1$. In Figure \ref{fig:num:results_drag} we show results obtained with our approach. \begin{figure} \centering \includegraphics[height=3cm]{drag_sqrt__0027_sig_1em2_eta_1em3} \includegraphics[height=3cm]{drag_sqrt__0027_velocity} \caption{Result for minimizing the drag using $\mathcal{M}(\varphi_{\varepsilon}) = \frac{1}{c_{0}}\sqrt{\frac{\psi(\varphi_{\varepsilon})}{2}}$. 
In the left plot we show the obstacle (i.e., $\varphi_{\varepsilon} \leq 0$) and streamlines of $\bm{u}_{\varepsilon}$ in black, and the pressure outside of the obstacle in gray. Darker gray means higher pressure. On the right we show $\abs{\bm{u}_{\varepsilon}}$ in gray, where darker gray means lower velocity. The isoline $\varphi_{\varepsilon} \equiv 0$ is shown in white and again streamlines are displayed in black. The results for $\mathcal M(\varphi_{\varepsilon}) = \frac{1}{2}$ are visually indistinguishable from these results. Note that we only show the computational domain in the neighbourhood of the obstacle.} \label{fig:num:results_drag} \end{figure} The drag for $\mathcal{M}(\varphi_{\varepsilon}) = \frac{1}{c_0}\sqrt{\frac{\psi(\varphi_{\varepsilon})}{2}}$ is given by $F^{D} = 3.9454 \times 10^{-2}$ $(3.9492 \times 10^{-2})$, and for $\mathcal{M}(\varphi_{\varepsilon}) = \frac{1}{2}$ we have $F^{D} = 3.9117 \times 10^{-2}$ $(3.9499 \times 10^{-2})$. In brackets we give the drag obtained by evaluating the surface formulation over the isoline $\varphi_{\varepsilon} \equiv 0$. We see that both formulations give very similar results. 
\subsection{Maximization of the lift-to-drag ratio of an obstacle}\label{ssec:maxLD} Based on the results of the previous section we now investigate the maximization of the lift-to-drag ratio given by \begin{align*} R := F^{L} / F^{D}. \end{align*} To this end, we consider \begin{align*} \int_\Omega \mathcal M(\varphi_{\varepsilon}) h(x, \nabla \bm{u}_{\varepsilon}, p_{\varepsilon}, \nabla \varphi_{\varepsilon}) \, \mathrm{dx}\, := - \frac{ \int_{\Omega} \mathcal{M}(\varphi_{\varepsilon}) \nabla \varphi_{\varepsilon} \cdot (\mu (\nabla \bm{u}_{\varepsilon} + (\nabla \bm{u}_{\varepsilon})^{T}) - p_{\varepsilon} \, \bm{\mathrm{I}}\,) \bm{a}^{\perp} \, \mathrm{dx}\,}{\int_{\Omega} \mathcal{M}(\varphi_{\varepsilon}) \nabla \varphi_{\varepsilon} \cdot (\mu(\nabla \bm{u}_{\varepsilon} + (\nabla \bm{u}_{\varepsilon})^{T}) - p_{\varepsilon} \, \bm{\mathrm{I}}\,) \bm{a} \, \mathrm{dx}\,}, \end{align*} with $\bm{a} = (1,0)^{T}$ and $\bm{a}^{\perp} = (0,1)^{T}$. The numerical setup is the same as in the previous section and the parameters are chosen as $\varepsilon = 0.0005$, $\overline{\alpha} = 4$, $\mu = 1/15$, and $\gamma=0.3$. In this example we fix the $y$-coordinate of the center of mass of the obstacle at its initial position by a Lagrange multiplier approach. We define the center of mass of the obstacle as \begin{align*} \mathrm{com} = \frac{ \int_{\Omega} \frac{1-\varphi_{\varepsilon}}{2} x \, \mathrm{dx}\,}{ \int_{\Omega} \frac{1-\varphi_{\varepsilon}}{2}\, \mathrm{dx}\,}. \end{align*} In Figure \ref{fig:num:results_liftdrag} we show results for this parameter set.
\begin{figure} \centering \includegraphics[height=3cm]{liftdrag_0011_sig_3em1_eta_1d15_cy} \includegraphics[height=3cm]{liftdrag_simple_0012_sig_3em1_eta_1d15_cy} \caption{Result for maximizing the lift-to-drag ratio using $\mathcal{M}(\varphi_{\varepsilon}) = \frac{1}{c_{0}}\sqrt{\frac{\psi(\varphi_{\varepsilon})}{2}}$ (left) and $\mathcal{M}(\varphi_{\varepsilon}) = \frac{1}{2}$ (right). The obstacle (i.e., $\varphi_{\varepsilon} \leq 0$) and streamlines are shown in black and the velocity magnitude in gray. Darker gray means larger velocity. Note that we only show the computational domain in the neighbourhood of the obstacle.} \label{fig:num:results_liftdrag} \end{figure} We observe the expected optimal shape for both formulations, but for $\mathcal{M}(\varphi_{\varepsilon}) = \frac{1}{c_{0}}\sqrt{\frac{\psi(\varphi_{\varepsilon})}{2}}$ we obtain a longer and thinner obstacle. The lift-to-drag ratio for $\mathcal{M}(\varphi_{\varepsilon}) = \frac{1}{c_{0}}\sqrt{\frac{\psi(\varphi_{\varepsilon})}{2}}$ is $R = 1.1104$, and for $\mathcal{M}(\varphi_{\varepsilon}) = \frac{1}{2}$ it is $R= 0.9885$. We stress that here we compute with a rather small value of $\mu = 1/15$ and that the minimal velocity magnitude inside the obstacle is $4 \times 10^{-2}$, which is rather large. However, we think that the results are a promising starting point for further investigations.
\section{introduction} Emission line intensity mapping is a technique to access high-$z$ galaxies below the detection limit without losing redshift information, as proposed by e.g. \citet{2010JCAP...11..016V, 2011JCAP...08..010V, 2011ApJ...728L..46G, 2012ApJ...745...49G,2013ApJ...768..130G,2011ApJ...741...70L, 2013ApJ...763..132S,2014ApJ...786..111P}. Optimistically, it only collects radiation from galaxies in a selected redshift range, as the spurious flux due to foregrounds, contaminating radiation and noise can in principle be removed or suppressed. Compared with galaxy surveys, which aim at resolving faint spots in a limited field of view (FOV), the advantage of intensity mapping relies on the fact that, if the galaxy luminosity function has a sufficiently steep faint end, the observed radiation is actually dominated by unresolved sources \citep{2014ApJ...793..116U}. Even if this is not the case, intensity mapping can still be used to study unresolved galaxies once resolved sources are removed (masked). Interestingly, an intensity mapping experiment could be carried out with a modest-aperture but large-FOV telescope. The [\hbox{C~$\scriptstyle\rm II$}] 157.7~$\mu$m fine-structure line arising from the $^2$P$_{3/2}$$\rightarrow$$^2$P$_{1/2}$ transition is the brightest amongst all metal lines emitted by the interstellar medium (ISM) of star-forming galaxies. It is associated with star formation in galaxies \citep{2002A&A...385..454B,2011MNRAS.416.2712D,2014arXiv1402.4075D,2014arXiv1409.7123H} and plays a key role in the energy balance of galaxies, as it provides one of the most efficient cooling processes for the neutral ISM. Compared with the Ly$\alpha$ line, the [\hbox{C~$\scriptstyle\rm II$}] line has the advantage of being unaffected by dust attenuation and neutral hydrogen absorption.
In the local Universe, the [\hbox{C~$\scriptstyle\rm II$}] line has been successfully detected even in galaxies with amazingly low star formation rates (SFR) of $\sim 0.001~M_\odot$yr$^{-1}$ \citep{2014arXiv1402.4075D}. These authors have also derived the relation between the [\hbox{C~$\scriptstyle\rm II$}] line luminosity, $L_{\rm CII}$, and the SFR of \textit{local} galaxy samples \citep{2014arXiv1402.4075D}. Surprisingly, given the rather complicated physics behind the [\hbox{C~$\scriptstyle\rm II$}] emission, $L_{\rm CII}$ scales rather tightly with SFR. However, at high redshift ($z \gsim4$), the [\hbox{C~$\scriptstyle\rm II$}] line has been detected so far only in quasar host galaxies \citep{2005A&A...440L..51M, 2012ApJ...751L..25V,2012A&A...543A.114G,2013ApJ...773...44W,2013ApJ...770...13W,2014arXiv1409.4418C} or ultra-luminous infrared galaxies (ULIRGs, with $L_{\rm IR} > 10^{12}~L_\odot$, where $L_{\rm IR}$ is the in-band luminosity at $8-1000~\mu$m) characterized by SFR $\sim 10 ^{2-3} ~M_\odot$yr$^{-1}$ \citep{2011ApJ...740...63C, 2011A&A...530L...8D, 2014A&A...565A..59D}. For typical normal star-forming galaxies (SFR $\sim 10~M_\odot$yr$^{-1}$), [\hbox{C~$\scriptstyle\rm II$}] emission has not yet been detected \citep{2013ApJ...778..102O, 2014ApJ...792...34O,2014arXiv1407.5793S,2014ApJ...784...99G}. This might indicate that most of the carbon in these galaxies is in a higher ionization state and/or that their ISM is characterized by a very low level of metal enrichment. By applying the $L_{\rm CII} - $SFR relation derived from local galaxy samples to high-redshift Ly$\alpha$ emitters, it is possible to compute the expected [\hbox{C~$\scriptstyle\rm II$}] flux from these galaxies. The fact that their [\hbox{C~$\scriptstyle\rm II$}] line remains undetected even with ALMA provides useful constraints on their internal radiation field, molecular content, gas density, and metallicity \citep{V15,2013MNRAS.433.1567V,2014ApJ...784...99G}.
As probing tools, intensity mapping experiments are affected by the presence of foreground radiation, including that represented by the galaxy continuum redshifted into the observed band. Unfortunately, it is almost always the case that the foreground intensity largely exceeds that of the signal. The typical [\hbox{C~$\scriptstyle\rm II$}] line luminosity is $0.1\% - 1\%$ of $L_{\rm IR}$ \citep{2009A&A...500L...1M}. This implies that even if only one percent of the IR luminosity is redshifted into the observed band, the continuum emission overcomes the [\hbox{C~$\scriptstyle\rm II$}] line. In addition to the far-infrared (FIR) continuum foreground, there are other emission lines emitted from a range of redshifts that fall at the same observed frequency as the [\hbox{C~$\scriptstyle\rm II$}] signal; they act as \textit{contaminants}. These include the [\hbox{O~$\scriptstyle\rm I$}] line at 145~$\mu$m, the two [\NII] lines ($\lambda = 122, 205~\mu$m), the two [\CI] lines ($\lambda= 610, 371~\mu$m), and a handful of CO rotational transition lines in the range 200-2610 $\mu$m. Among these, the CO rotational transition lines are the most relevant here. For example, the CO(4-3) line has a rest-frame wavelength of $651~\rm \mu m$; if emitted from $z = 0.45$ galaxies, it contaminates the [\hbox{C~$\scriptstyle\rm II$}] emission from $z = 5$ galaxies. The emission efficiency\footnote{As a caveat, we note that there is no clear consensus in the literature on this value, see \citet{2014MNRAS.443.3506B}.} of the CO(4-3) line from star-forming galaxies is $\sim 2\%$ of the [\hbox{C~$\scriptstyle\rm II$}] line \citep{2010JCAP...11..016V}. However, the luminosity distance to $z=0.45$ is only $\sim 5\%$ of that to $z=5$.
As the flux is inversely proportional to the square of the luminosity distance, whereas the proper distance interval that corresponds to the same bandwidth is $\propto (1+z)^{-3/2}$, the CO flux can be more than ten times higher than the [\hbox{C~$\scriptstyle\rm II$}] one, even ignoring the cosmological evolution of the star formation rate density. Thus, CO contamination, as well as the continuum foreground, cannot be ignored and must be considered thoroughly. Although the [\hbox{C~$\scriptstyle\rm II$}] signal itself can be computed analytically \citep{2012ApJ...745...49G,2014ApJ...793..116U}, a reliable investigation of the influence of foreground/contamination is only possible based on mock maps that mimic observations as closely as possible. This is the prime motivation of this paper. Using halo catalogs recovered from large-scale N-body simulations, we produce mock maps including (a) [\hbox{C~$\scriptstyle\rm II$}] signal, (b) FIR continuum foreground, (c) CO and [\CI] contamination lines, and (d) instrumental noise. We then test our foreground/contamination removal scheme on these maps to demonstrate the successful recovery of the original [\hbox{C~$\scriptstyle\rm II$}] signal. The layout of this paper is as follows. In Sec. \ref{method}, we describe the model used to compute the [\hbox{C~$\scriptstyle\rm II$}] emission from high-$z$ galaxies and the necessary steps to generate mock maps. We show our forecasts for the [\hbox{C~$\scriptstyle\rm II$}] signal, the extragalactic FIR continuum and the line contamination, and perform foreground/contamination removal experiments on the mocks to recover the original [\hbox{C~$\scriptstyle\rm II$}] signal. The results are presented in Sec. \ref{result}. The conclusions and discussion are found in Sec. \ref{conclusion}.
\section{methods}\label{method} \subsection{[\hbox{C~$\scriptstyle\rm II$}] emission from early galaxies}\label{obs} \citet{2013MNRAS.433.1567V} and \citet{V15} (hereafter V15) have combined high-$z$ galaxy numerical simulations with sub-grid models of the ISM to compute the expected [\hbox{C~$\scriptstyle\rm II$}] luminosity ($L_{\rm CII}$) arising from diffuse neutral gas and photodissociation regions (PDRs). The resulting trend of $L_{\rm CII}$ with SFR and metallicity ($Z$) is consistent with observations of local metal-poor dwarf galaxies \citep{2014arXiv1402.4075D}. For SFR in the range [0.1,100]~$M_\odot$yr$^{-1}$ and $Z$ in [0.05,1.0]~$Z_\odot$, which should encompass most of the sources contributing to the total [\hbox{C~$\scriptstyle\rm II$}] emission at high redshift, the V15 results are well reproduced by the following fitting formula \begin{align} {\rm log}(L_{\rm CII})& = 7.0 + 1.2\times{\rm log(SFR)} + 0.021\times{\rm log(Z)} \nonumber \\ &+ 0.012\times{\rm log(SFR)log(Z)} - 0.74\times{\rm log^2(Z)}, \label{LCII} \end{align} where $L_{\rm CII}$, SFR and $Z$ are in units of $L_\odot$, $M_\odot$yr$^{-1}$ and $Z_\odot$ respectively. The next step is to compute the $L_{\rm CII}-M_{\rm h}$ relation, where $M_{\rm h}$ is the halo mass. To this aim we need to know the SFR$-M_{\rm h}$ and $Z-M_{\rm h}$ relations. We obtain them through the procedure described in the following paragraphs\footnote{Note that, compared to galaxies, the intergalactic medium emits a negligible [\hbox{C~$\scriptstyle\rm II$}] signal (e.g. \citealt{2012ApJ...745...49G}) at $z\gsim2$, and therefore is not considered in this work.}. Since the UV luminosity ($L_{\rm UV}$) of a galaxy scales with its SFR (e.g. \citealt{1998ARA&A..36..189K}), we adopt the observed UV luminosity functions (LFs) to derive the SFR$-M_{\rm h}$ relation.
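The fitting formula above is straightforward to evaluate; as a sanity check, a minimal sketch (the function name is our own):

```python
import math

def log10_L_CII(sfr, Z):
    """V15 fitting formula: log10 of L_CII in L_sun, with SFR in
    M_sun/yr and Z in Z_sun; fitted for SFR in [0.1, 100] and
    Z in [0.05, 1.0]."""
    s, z = math.log10(sfr), math.log10(Z)
    return 7.0 + 1.2 * s + 0.021 * z + 0.012 * s * z - 0.74 * z * z
```

At solar metallicity the formula reduces to ${\rm log}(L_{\rm CII}) = 7.0 + 1.2\,{\rm log(SFR)}$, so a ${\rm SFR}=10~M_\odot$yr$^{-1}$ galaxy has $L_{\rm CII}\approx10^{8.2}~L_\odot$.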
The {\it measured} UV LF is well described by a Schechter parameterization \citep{1976ApJ...203..297S}: \begin{equation} \frac{dn}{dM_{\rm UV}} = 0.4\,{\rm ln}(10)\,\phi_\star \, x^{1+\alpha} e^{-x}, \label{dndMUV} \end{equation} where $x= 10^{0.4(M^\star_{\rm UV}-M_{\rm UV})}$, with $M_{\rm UV}$ the dust-attenuated absolute magnitude. For the rest-frame UV luminosity at 1600 \AA, the redshift-dependent parameters $(M^\star_{\rm UV}, \phi_\star, \alpha)$ that fit observations between $z\sim4-8$ are \citep{2014arXiv1403.4295B} \begin{align} M^\star_{\rm UV} &= -20.96+0.01(z-6) \nonumber \\ \phi_\star&=0.46\times10^{-0.27(z-6)}10^{-3} \\ \alpha&=-1.87-0.10(z-6)\nonumber\,. \end{align} The intrinsic absolute magnitude is $M^\prime_{\rm UV}=M_{\rm UV}-A_{1600}$, where $A_{1600} = 4.43+1.99\beta$ ($\ge 0$) is the dust attenuation at 1600 \AA ~\citep{1999ApJ...521...64M} and $\beta$ is the measured spectral slope ($f_\lambda\propto \lambda^\beta$). Normally $\beta$ depends on $M_{\rm UV}$ and is fitted by \citep{2014ApJ...793..115B} \begin{equation} \beta = \beta_{-19.5}+\frac{d\beta}{dM_{\rm UV}}(M_{\rm UV}+19.5). \end{equation} From Fig. 2 of \citet{2014ApJ...793..115B} we find the following redshift-dependent fit, valid for $4 \lower.5ex\hbox{\ltsima} z\lower.5ex\hbox{\ltsima} 7$ \begin{align} &\beta_{-19.5}=-1.97-0.06(z-6) \nonumber \\ &\frac{d\beta}{dM_{\rm UV}}=-0.18-0.03(z-6). \end{align} The intrinsic UV LF is then connected to the measured UV LF via \begin{equation} \frac{dn^\prime}{dM^\prime_{\rm UV}}(M^\prime_{\rm UV},z)=\frac{dn}{dM_{\rm UV}}(M_{\rm UV},z). 
\end{equation} Assuming that the intrinsic $L^\prime_{\rm UV}$ monotonically increases with $M_{\rm h}$ and that all halos host some star formation activity, we obtain the $L^\prime_{\rm UV}-M_{\rm h}$ relation from \begin{equation} \int_{M^\prime_{\rm UV}} \frac{dn^\prime}{d{M^\prime_{\rm UV}}} d{M^\prime_{\rm UV}} = \int_{M_{\rm h}} \frac{dn}{dM_{\rm h}}dM_{\rm h}, \end{equation} where $dn/dM_{\rm h}$ is the halo mass function \citep{1999MNRAS.308..119S,2001MNRAS.323....1S}. This ``abundance matching'' technique will also be used in Sec. \ref{FIRcontinuum} to derive the relation between the IR luminosity and halo mass. We then derive the SFR from $L^\prime_{\rm UV}$. In principle $L^\prime_{\rm UV}$ depends not only on the SFR, but also on metallicity and stellar age. However, the UV luminosity is insensitive to metallicity and stellar age, unless stars are very young ($\lower.5ex\hbox{\ltsima} 10$~Myr). So we can safely assume that $L^\prime_{\rm UV}$ scales with SFR as \begin{equation} L^\prime_{\rm UV} =l_{\rm UV} \times {\rm SFR}. \end{equation} We compute $l_{\rm UV}$ from {\tt Starburst99}\footnote{http://www.stsci.edu/science/starburst99/docs/default.htm} \citep{1999ApJS..123....3L,2005ApJ...621..695V,2010ApJS..189..309L} by assuming a metallicity $0.1~Z_\odot$, stellar age 10\% of Hubble time, and a Salpeter IMF between $0.1- 100~M_\odot$. We choose the ``continuous star formation'' mode. At 1600 \AA, $l_{\rm UV} =(8.9, 8.6, 8.3)\times10^{27}$~erg s$^{-1}$Hz$^{-1}$($M_\odot $/yr$)^{-1}$ at $z= (5, 6, 7)$, similar to the value of \citet{1998ARA&A..36..189K}. We plot the SFR derived with this procedure at various redshifts as a function of $M_{\rm h}$ in Fig. \ref{SFR_M}. As a comparison we also plot the SFR$-M_{\rm h}$ relation at $z\sim5$ found by \citet{2014arXiv1410.4808S}, who fit the relation using semi-analytical models of galaxy formation. As can be seen by inspecting this figure, the specific SFR, i.e.
the SFR per unit mass, starts to drop at a turnover mass $\sim10^{11}~M_\odot$. This is consistent with semi-analytical model predictions. The final ingredient of Eq. (\ref{LCII}) is $Z$. As the metallicity of high-$z$ galaxies is very poorly constrained at present, we derive it by combining the $L^\prime_{\rm UV} - M_{\rm h}$ relation with the ``fundamental metallicity relation'' (FMR) that relates $Z$ to the stellar mass ($M_\star$) and SFR. The FMR inferred from low-$z$ galaxy observations \citep{2010MNRAS.408.2115M} is given by the following equation: \begin{align} \log(Z) &= 0.21+0.37\log(M_{10})-0.14\log({\rm SFR})\nonumber\\ &-0.19\log^{2}(M_{10})-0.054\log^{2}({\rm SFR})\\ &+0.12\log(M_{10})\log({\rm SFR})\nonumber, \end{align} where $M_{10}=M_\star/10^{10}$, and $M_\star$ and $Z$ are expressed in solar units. No redshift evolution is found at least up to $z= 2.5$ \citep{2010MNRAS.408.2115M}; therefore we apply it also to high-$z$ galaxies, with the caveat that deviations might appear at these redshifts. The stellar mass, $M_\star$, is linked to the UV absolute magnitude via the mass-to-light ratio. From the latest measurements \citep{2014MNRAS.444.2960D}, \begin{equation} {\rm log}(M_\star) ={\rm log}(M_\star^{0})+\frac{dM_\star}{dM_{\rm UV}}(M_{\rm UV}+19.5), \end{equation} where $\log(M_\star^{0}) = (9.00, 8.84, 8.63)$ and $dM_\star/dM_{\rm UV} = (-0.46, -0.54, -0.45)$ in the redshift ranges $4.5 \le z < 5.5$, $5.5 \le z < 6.5$, and $6.5 \le z < 7.5$, respectively, for the model without nebular line contribution. We use the observed UV magnitude $M_{\rm UV}$ (i.e., without dust correction), and we ignore the negligible difference between the UV luminosity at 1500 \AA~and 1600 \AA. Fig. \ref{Z_M} shows the derived $Z-M_{\rm h}$ relation at $z=5, 6$ and 7. We finally compute the [\hbox{C~$\scriptstyle\rm II$}] luminosity of halos with mass $M_{\rm h}$ by substituting the SFR and $Z$ derived above into Eq. (\ref{LCII}).
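The abundance matching step used above (and again in Sec. \ref{FIRcontinuum}) equates cumulative number densities under a monotonic luminosity-mass map. A minimal numerical sketch, assuming the differential densities are tabulated on ascending grids (the names and the trapezoidal quadrature are our choices):

```python
import numpy as np

def cumulative_above(x, f):
    """n(>x_i) = integral of f from x_i to x_max (trapezoidal rule)."""
    seg = 0.5 * (f[1:] + f[:-1]) * np.diff(x)
    n = np.zeros_like(x, dtype=float)
    n[:-1] = seg[::-1].cumsum()[::-1]
    return n

def abundance_match(L, dndL, M, dndM):
    """Assign to each halo mass M[i] the luminosity L with the same
    cumulative number density: n(>L) = n(>M_h)."""
    nL = cumulative_above(L, dndL)   # decreasing in L
    nM = cumulative_above(M, dndM)   # decreasing in M
    # np.interp needs ascending abscissae, hence the reversals
    return np.interp(nM[::-1], nL[::-1], L[::-1])[::-1]
```

With identical power-law luminosity and mass functions the matching is the identity, which is a convenient self-test.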
Although different star formation histories may cause a scatter in the [\hbox{C~$\scriptstyle\rm II$}] luminosities for halos of a given mass, we neglect this effect because it only results in noise as long as the luminosity dispersion is independent of position on large scales. The $L_{\rm CII} - M_{\rm h}$ relation derived with the above procedure is shown in Fig. \ref{logLCII_M} by solid, short dashed and long dashed lines for $z=5, 6$ and 7, respectively. When deriving the above $L_{\rm CII} - M_{\rm h}$ relation, the properties of faint galaxies are extrapolated from those of the observed bright galaxies. We check the validity of this relation by comparing the result obtained through this semi-empirical method with the numerical simulations of cosmic metal enrichment presented by \citet{2014MNRAS.440.2498P} (hereafter P14). P14 used a hydrodynamical simulation to follow the star formation and the Pop III-Pop II transition ($Z > 10^{-4}~Z_\odot$). Hereafter, we only consider the Pop II star formation mode. If a halo has formed stars, it contains a number of stellar particles whose birth date and metallicity are recorded. For a selected halo, the mean stellar age is \begin{equation} t_{\rm age} = \frac{\sum_{i} \Delta t_i m_{\star,i}}{\sum_{i} m_{\star,i}}, \end{equation} where $\Delta t_i$ is the age (time elapsed since birth) of the $i$-th stellar particle, $m_{\star,i}$ is its mass and the sum is extended over all Pop II stellar particles in that halo. The mean metallicity of the halo is \begin{equation} Z = \frac{\sum_{i} Z_i m_{\star,i}}{\sum_{i} m_{\star,i}}, \end{equation} where $Z_i$ is the metallicity of the $i$-th Pop II stellar particle. By dividing the total Pop II stellar mass by the mean age, we obtain the mean SFR of the halo, \begin{equation} {\rm SFR} = \frac{\sum_i m_{\star,i}}{t_{\rm age}}. \end{equation} The SFR vs. $M_{\rm h}$ and $Z$ vs. $M_{\rm h}$ at $z\sim5$ in the P14 simulation are also plotted in Figs. \ref{SFR_M} and \ref{Z_M}, respectively. Using Eq.
(\ref{LCII}) we calculate the [\hbox{C~$\scriptstyle\rm II$}] luminosity of halos from their SFR and $Z$ and group halos into several mass bins at each simulation output. Some halos only have Pop III stars, or are too small to host any star formation. Therefore in each mass bin only a fraction $f_{\rm CII}$ of halos exhibit [\hbox{C~$\scriptstyle\rm II$}] emission. This fraction tends to one as the halo mass increases. We denote the mean log of [\hbox{C~$\scriptstyle\rm II$}] luminosity for {\it halos that exhibit [\hbox{C~$\scriptstyle\rm II$}] emission} by $\langle \log (L_{\rm CII}) \rangle$, and use \begin{equation} {\rm log}(L_{\rm CII}) = \langle{\rm log}(L_{\rm CII})\rangle+{\rm log}(f_{\rm CII}) \end{equation} as the mean [\hbox{C~$\scriptstyle\rm II$}] luminosity of {\it all} halos with mass $M_{\rm h}$; this quantity is plotted in Fig. \ref{logLCII_M}. \begin{figure} \centering{ \includegraphics[scale=0.4]{fig1.eps} \caption{The SFR derived from UV LFs as a function of halo mass at $z = 5,\, 6\,{\rm and}\, 7$, respectively. The SFR$-M_{\rm h}$ relation in \citet{2014arXiv1410.4808S} at $z\sim5$ and the SFR of each halo containing Pop II stars in the P14 simulation at $z\sim5$ are also shown.} \label{SFR_M} } \end{figure} \begin{figure} \centering{ \includegraphics[scale=0.4]{fig2.eps} \caption{Metallicity, derived from the mass-to-light ratio and the FMR, as a function of halo mass at $z = 5,\, 6\,{\rm and}\, 7$. The metallicity of each halo containing Pop II stars in the P14 simulation at $z\sim5$ is also shown. } \label{Z_M} } \end{figure} \begin{figure} \centering{ \includegraphics[scale=0.4]{fig3.eps} \caption{The mean $L_{\rm CII}$ as a function of halo mass, $M_{\rm h}$, at redshifts $\sim$5, 6 and 7, respectively. The lines are the relation derived using observations while the points are for the P14 simulation (the errorbars are the standard deviation of $\langle \log(L_{\rm CII}) \rangle$ for each mass bin).
} \label{logLCII_M} } \end{figure} \subsection{Far-infrared continuum foreground}\label{FIRcontinuum} In this subsection, we model the extragalactic foreground due to the FIR continuum from galaxies at different redshifts. The Milky Way FIR and CMB radiation are assumed to be removed straightforwardly and hence are not considered in this work. The FIR luminosity function of galaxies, including spiral galaxies, starburst galaxies, star-forming galaxies containing AGNs, and sometimes AGNs, has been studied in e.g. \citet{2009A&A...496...57M,2013MNRAS.432...23G,2013A&A...553A.132M}. Following \citet{2013MNRAS.432...23G}, the LF can be written as \begin{equation} \Phi = \Phi_\star\left(\frac{L_{\rm IR}}{L^\star_{\rm IR}}\right)^{1-\alpha} {\rm exp}\left[-\frac{1}{2\sigma^2}{\rm log}^2\left(1+\frac{L_{\rm IR}}{L^\star_{\rm IR}}\right)\right], \label{PhiIR} \end{equation} where $L_{\rm IR}$ is the infrared luminosity between $8 - 1000~{\rm \mu}$m. We use the redshift evolution formulae for the parameters $\alpha$, $\sigma$, $\Phi_\star$ and $L_{\rm IR}^\star$ from \citet{2013MNRAS.432...23G}: $\Phi_\star = 5.7\times10^{-3}(1+z)^{-0.57}$ for $z \le 1.1$, and $\Phi_\star = 6.81\times10^{-2}(1+z)^{-3.92}$ for $z > 1.1$; $L_{\rm IR}^\star = 7.68\times10^9 (1+z)^{3.55}$ for $z \le 1.85$ and $L_{\rm IR}^\star =5.80\times10^{10}(1+z)^{1.62}$ for $z > 1.85$; $\alpha = 1.15$, $\sigma = 0.52$ for $z \le 0.3$, and $\alpha = 1.2$, $\sigma = 0.5$ otherwise. The above LFs are constructed from galaxy samples at $z\lower.5ex\hbox{\ltsima} 4.2$, therefore including the large majority of the sources contributing to the FIR continuum (and CO, which is associated with the FIR continuum, see next subsection). We again use the abundance matching technique \citep{2012A&A...537L...5B} to construct the $L_{\rm IR} - M_{\rm h}$ relation. We suppose that the contribution of subhalos to the IR luminosity function is small and we ignore them.
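Eq. (\ref{PhiIR}) with the piecewise parameter evolution above can be transcribed directly; a minimal sketch (we assume a base-10 logarithm in the exponent, consistent with $\sigma$ being quoted in dex):

```python
import math

def Phi_IR(L_IR, z):
    """Modified-Schechter IR luminosity function with the
    piecewise redshift evolution of the parameters quoted in
    the text. L_IR in L_sun; result in the units of Phi_star."""
    Phi_star = 5.7e-3 * (1 + z) ** -0.57 if z <= 1.1 \
        else 6.81e-2 * (1 + z) ** -3.92
    L_star = 7.68e9 * (1 + z) ** 3.55 if z <= 1.85 \
        else 5.80e10 * (1 + z) ** 1.62
    alpha, sigma = (1.15, 0.52) if z <= 0.3 else (1.2, 0.5)
    x = L_IR / L_star
    return (Phi_star * x ** (1 - alpha)
            * math.exp(-math.log10(1 + x) ** 2 / (2 * sigma ** 2)))
```

The log-normal factor suppresses the bright end far more strongly than a pure Schechter exponential in $x$ would near $L^\star_{\rm IR}$.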
By equating the number density of galaxies with IR luminosity above $L_{\rm IR}$ and the number density of halos above $M_{\rm h}$, \begin{equation} \int_{L_{\rm IR}} \Phi(L_{\rm IR},z)dL_{\rm IR} = \int_{M_{\rm h}} \frac{dn}{dM_{\rm h}}dM_{\rm h}, \label{LIR} \end{equation} the $L_{\rm IR} - M_{\rm h}$ relation is derived. We plot the $L_{\rm IR} - M_{\rm h}$ relation at redshifts 0.5 and 2 in Fig. \ref{L_IR}. For the same reasons given in Sec. \ref{obs}, we do not consider the IR luminosity dispersion among halos with the same mass $M_{\rm h}$. \begin{figure} \centering{ \includegraphics[scale=0.4]{fig4.eps} \caption{FIR continuum luminosity derived from Eq. (\ref{LIR}) as a function of halo mass at redshifts $z=0.5$ and 2. } \label{L_IR} } \end{figure} \subsection{CO and [\CI] emission lines contamination} The CO rotational transition lines from low-redshift galaxies are by far the most important contaminants for the $z>5$ [\hbox{C~$\scriptstyle\rm II$}] signal. Although some studies aimed at measuring CO luminosity functions exist (e.g. \citealt{2003ApJ...582..659K} and references therein), they lack data either for higher rotational transition numbers, $J$, or at high redshift. On the other hand, the CO line luminosity is found to be closely related to the IR luminosity, since both are good tracers of star formation activity \citep{2009MNRAS.399..264B,2014MNRAS.444.1301P}. The CO line luminosity is derived by using the ${\rm log}(L_{\rm IR}) =\alpha{\rm log}(\oline{L}_{\rm CO})+\beta$ relations presented in Tab. 3 of \citet{2014ApJ...794..142G} for lines in the $J$-ladder from 1 to 13. These relations are fitted from samples of local ($z < 0.1$) (U)LIRGs and high-$z$ ($z > 1$) dusty star-forming galaxies (DSFGs). Since CO lines are considered contaminants to be removed when recovering the [\hbox{C~$\scriptstyle\rm II$}] signal, the scatter around the fitting relation should be modeled.
To this aim we assume a Gaussian distribution of the form \begin{equation} p(L_{\rm CO}|~\oline{L}_{\rm CO}) =\frac{1}{\sqrt{2\pi} \sigma} {\rm exp} \left[-\frac{x^2}{2\sigma^2}\right], \label{pCO} \end{equation} where $x= {\rm log}( L_{\rm CO}) - {\rm log}( \oline{L}_{\rm CO} )$ and $\sigma = s/\alpha$ is the standard deviation; $s$ is the scatter of the data around the ${\rm log}(L_{\rm IR}) - {\rm log} (\oline{L}_{\rm CO}) $ fit of \citet{2014ApJ...794..142G}, including the intrinsic dispersion and statistical errors. We further consider the contamination from two [\CI] fine-structure lines: (i) [\CI(1-0)], corresponding to the $^3$P$_{1}$$\rightarrow$$^3$P$_{0}$ transition, at 492~GHz, and (ii) [\CI(2-1)], corresponding to the $^3$P$_{2}$$\rightarrow$$^3$P$_{1}$ transition, at 809~GHz. Several authors have reported [\CI] observational data (e.g. \citealt{2000ApJ...537..644G, 2002A&A...383...82I,2011ApJ...730...18W,2013MNRAS.435.1493A}), finding relations between the [\CI] and CO or IR luminosities. Motivated by observations, \citet{2014MNRAS.444.1301P} have calculated the expected [\CI]/$L_{\rm IR}$ ratios for $0 < z < 2$ galaxies, by combining a semi-analytical galaxy formation model with radiative-transfer and line-tracing calculations. We adopt the outcome of these theoretical calculations and we add a 0.25~dex scatter to the mean ratio, namely the maximum deviation reported by \citet{2014MNRAS.444.1301P}. \subsection{Instrumental noise}\label{sec_instrumental} We have to account for instrumental noise, in order to have predictions that can be fairly compared with observations. The noise level of a radio telescope is given by the standard expression \begin{equation}\label{eq_noise} \sigma_{\rm N} = \frac{2k_BT_{\rm sys}}{A\sqrt{\Delta\nu_0 t}}, \end{equation} where $k_B$ is the Boltzmann constant, $T_{\rm sys}$ is the system temperature, $A$ is the area of the antenna, and $t$ is the integration time per FOV.
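Eq. (\ref{eq_noise}) can be evaluated directly. A minimal sketch in SI units with a conversion to Jy (the 6~m dish area matching the beam size used below is our own assumption):

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K

def sigma_noise_jy(T_sys, A, dnu, t):
    """sigma_N = 2 k_B T_sys / (A sqrt(dnu t)), converted to Jy
    (1 Jy = 1e-26 W m^-2 Hz^-1)."""
    return 2.0 * k_B * T_sys / (A * math.sqrt(dnu * t)) / 1e-26

# Illustrative numbers from the text: T_sys = 150 K, a 6 m dish,
# 1.3 GHz channels; the per-FOV time t is a placeholder.
A_dish = math.pi * 3.0 ** 2  # ~28.3 m^2
```

The radiometer scaling $\sigma_{\rm N}\propto 1/\sqrt{t}$ is worth keeping in mind: quadrupling the integration time halves the noise.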
If the camera has $N_{\rm pix}$ pixels and the observation is performed at wavelength $\lambda_0$, the instrument is normally designed in such a way that $\Omega_{\rm FOV} \sim N_{\rm pix}\lambda_0^2/A$ holds. To cover a sky region of solid angle $\Omega_{\rm map}$, the total integration time is $t_{\rm obs} = t\times \Omega_{\rm map}/\Omega_{\rm FOV}$. We model the instrumental noise as a zero-mean Gaussian random variable without spatial or frequency correlations, and we add such fluctuations to the mock maps. \subsection{Mock maps} The light cone for which we produce the intensity maps is built from the halo catalogs of the {\tt BolshoiP} simulation\footnote{\url{http://www.cosmosim.org/cms/simulations/bolshoip-project/bolshoip/}} ({\tt Bolshoi} simulation with {\tt Planck} cosmology, see the {\tt Bolshoi} simulation paper, \citealt{2011ApJ...740..102K}). In the simulation, the smallest halos resolved are $\approx5\times10^9~M_\odot$, well below the mass of halos that are expected to host the bulk of [\hbox{C~$\scriptstyle\rm II$}] emission (see Sec. \ref{COmasking}). The box of this simulation is $L=250~h^{-1}$cMpc on a side, corresponding to 2.4 degrees when located at $z=7$. When making light cones from a simulation with a box length smaller than the cone depth, a standard protocol is to replicate the same halo catalog along the radial direction, at the same time applying a ``randomization'' procedure made of random translations, rotations and reflections in order to avoid spurious periodicity effects \citep{2005MNRAS.360..159B}. We divide a $2.4\times2.4$~deg$^2$ sky region into $200\times200$ pixels whose angular size is $43''$. This corresponds to the beam size of a 6~m radio telescope for $\lambda_0=\lambda_{\rm CII}(1+7)$. The frequency range [238, 317]~GHz ($z_{\rm CII} = $[7, 5]) is equally divided into 60 bins, each of bandwidth $\Delta\nu_0 = 1.3$~GHz.
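The mapping between the frequency channels just described and the [\hbox{C~$\scriptstyle\rm II$}] redshift slices can be sketched as follows (the rest-frame [\hbox{C~$\scriptstyle\rm II$}] frequency of 1900.54~GHz is standard; the bin edges follow the [238, 317]~GHz range and 60 bins quoted above):

```python
import numpy as np

NU_CII = 1900.54e9  # rest-frame [CII] frequency in Hz (157.7 um)

# 60 channels of width ~1.3 GHz covering [238, 317] GHz
edges = np.linspace(238e9, 317e9, 61)

def z_slice(i):
    """Redshift interval whose [CII] line falls into channel i."""
    nu_lo, nu_hi = edges[i], edges[i + 1]
    return NU_CII / nu_hi - 1.0, NU_CII / nu_lo - 1.0
```

The lowest-frequency channel collects [\hbox{C~$\scriptstyle\rm II$}] emission from $z\approx7$ and the highest from $z\approx5$, consistent with the range quoted above.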
Given a pixel in the [\hbox{C~$\scriptstyle\rm II$}] map, the measured intensity in the frequency bin centered at $\nu_0$ is \begin{equation} I_{\rm CII}(\nu_0) = \frac{1}{(\Delta \theta)^2}\sum_j \frac{1}{\Delta \nu_0}\frac{L^j_{\rm CII}}{4\pi r_j^2(1+z_j)^2}, \label{ICII} \end{equation} where $\Delta \theta$ is the angular size of the pixel and $r_j$ is the comoving distance up to $z_j$; the sum is performed on all halos seen by this beam and with redshift \begin{equation} \frac{\nu_{\rm CII}}{\nu_0+\Delta \nu_0/2}-1 \le z_j \le\frac{\nu_{\rm CII}}{\nu_0-\Delta \nu_0/2}-1. \label{zj} \end{equation} For the CO and [\CI] emission lines the procedure is the same as for [\hbox{C~$\scriptstyle\rm II$}]. For the FIR continuum, Eq. (\ref{ICII}) becomes \begin{equation} I_{\rm FIR}(\nu_0)=\frac{1}{(\Delta \theta)^2}\sum_j \frac{L^j_{\rm IR}{\rm SED}_{\rm IR}(\nu)(1+z_j)}{4\pi r_j^2(1+z_j)^2}, \end{equation} where $\nu = \nu_0(1+z_j)$, and ${\rm SED}_{\rm IR}(\nu)$ is the normalized spectrum template of the galaxy, \begin{equation} \int_{c/1000\mu m}^{c/8\mu m} {\rm SED}_{\rm IR}(\nu) d\nu = 1. \end{equation} For simplicity, we choose the Spi4 spiral galaxy SED template from the {\tt SWIRE} template library\footnote{\url{http://www.iasf-milano.inaf.it/~polletta/templates/swire_templates.html}}\citep{2007ApJ...663...81P} as a typical continuum template for all galaxies (note that only the FIR part is used). For all above radiation, we take into account the redshift distortions produced by peculiar motions along the radial direction. We also generate noise maps at each frequency by adopting\footnote{ We consider the third octile of precipitable water vapour, pwv = 0.913~mm, namely the one assumed by the ALMA Sensitivity Calculator (ASC) in the default case. The reader may also refer to Fig. 
2.14 of the ALMA handbook (\url{https://almascience.eso.org/documents-and-tools/cycle-0/alma-technical-handbook/at_download/file}) } $T_{\rm sys} = 150$~K, $N_{\rm pix} = 128\times128$, and a total integration time $t_{\rm obs}=5000$~hr. The frequency range that we are considering, [238 - 316]~GHz, is sufficiently far from the prominent water atmospheric absorption at 325 GHz. Although there is a deep decreasing trend of transmission with increasing frequency, we note that the transmission is larger than 0.9 at $\nu_0 < 300$~GHz, for pwv=1 mm, meaning that the [\hbox{C~$\scriptstyle\rm II$}] signal from $z > 5.3$ galaxies is not strongly affected by transmission issues. In case of other atmospheric absorption features, we could simply drop the corresponding frequency bins. This treatment would not strongly affect our conclusions. \begin{figure*} \centering{ \includegraphics[width=0.49\textwidth]{fig5a.eps} \includegraphics[width=0.49\textwidth]{fig5b.eps} \includegraphics[width=0.49\textwidth]{fig5c.eps} \includegraphics[width=0.49\textwidth]{fig5d.eps} \caption{Map of the [\hbox{C~$\scriptstyle\rm II$}] signal (left top), FIR continuum foreground (right top), the CO lines contamination (left bottom) and full observed map made by the sum of the signals and instrumental noise (right bottom). All maps are for the $(316\pm0.65)$~GHz frequency bin. As the emission line signal is much weaker than the continuum, the full map looks very similar to the continuum map. } \label{map} } \end{figure*} \begin{figure} \centering{ \includegraphics[scale=0.4]{fig6.eps} \caption{Mean intensity of [\hbox{C~$\scriptstyle\rm II$}] signal, CO and [\CI] contamination and FIR continuum as a function of observed frequency. The $z_{\rm CII}$ is marked on upper abscissa. 
} \label{intensity} } \end{figure} For the $(316\pm 0.65)$~GHz frequency bin, maps of the [\hbox{C~$\scriptstyle\rm II$}] signal, FIR continuum foreground, CO lines contamination and their sum plus the [\CI] lines and instrumental noise are shown separately in Fig. \ref{map}. The CO map includes all CO lines from $J = 1$ to $J = 13$. Fig. \ref{intensity} shows the mean [\hbox{C~$\scriptstyle\rm II$}] signal, FIR continuum, CO lines and [\CI] lines as a function of frequency. The FIR continuum is much stronger than the [\hbox{C~$\scriptstyle\rm II$}] and contamination emission: for instance, at 316~GHz the FIR continuum is $\sim3\times10^5$~Jy~sr$^{-1}$, while the [\hbox{C~$\scriptstyle\rm II$}] signal and the CO contamination are $\sim$1200~Jy~sr$^{-1}$ and $\sim$800~Jy~sr$^{-1}$, respectively. Moreover, although at $z_{\rm CII} \sim 5$ the CO line signal is comparable to the [\hbox{C~$\scriptstyle\rm II$}] one, at $z_{\rm CII} \sim 7$ CO dominates by a factor $\sim20$. At $z_{\rm CII} = 5$, the sum of the two [\CI] line fluxes is $\sim$100~Jy~sr$^{-1}$, therefore negligible with respect to the [\hbox{C~$\scriptstyle\rm II$}] signal. However, at $z_{\rm CII} = 7$ it represents an important contamination for [\hbox{C~$\scriptstyle\rm II$}] intensity mapping, being $\sim$60~Jy~sr$^{-1}$, comparable to the [\hbox{C~$\scriptstyle\rm II$}] signal. The frequency dependence of the fluctuations of the [\hbox{C~$\scriptstyle\rm II$}], FIR, CO, [\CI] and instrumental noise can be qualitatively appreciated by inspecting a single line of sight cut through the mock light cone, as shown in Fig. \ref{line}.
\begin{figure} \centering{ \includegraphics[scale=0.4]{fig7.eps} \caption{The [\hbox{C~$\scriptstyle\rm II$}] signal, the CO and [\CI] line contamination, the instrumental noise (bottom panel), and the FIR continuum foreground (top panel) as a function of frequency along a randomly selected line of sight through the mock light cone.} \label{line} } \end{figure} \subsection{Recovering the [\hbox{C~$\scriptstyle\rm II$}] signal}\label{COmasking} To recover the [\hbox{C~$\scriptstyle\rm II$}] signal from the observed maps it is necessary to subtract the CO and [\CI] contamination and the FIR continuum foreground. For the purpose of contamination subtraction, it is useful to know the fractional contribution of halos of different mass to the total fluctuation signal coming from galaxy clustering. The clustering term of the angular power spectrum of a line emitted from sources located in a narrow redshift range is \begin{equation}\label{eq_clustering} P_{\rm clust}\propto \left[\int L(M_{\rm h})b(M_{\rm h})\frac{dn}{dM_{\rm h}}dM_{\rm h}\right]^2, \end{equation} where $L(M_{\rm h})$ is the line luminosity and $b(M_{\rm h})$ is the halo bias. For $\nu_0 = 316~$GHz, the observed CO lines with $J = 4, 5, 6$ and 7 come from $z\sim 0.45, 0.82, 1.18$, and 1.54, respectively; most of the CO contamination to the 316~GHz map is due to these four lines. The two [\CI] lines come from $z\sim 0.56$ and 1.56, respectively. Fig. \ref{power_fraction} shows, for [\hbox{C~$\scriptstyle\rm II$}], these four CO lines, and the two [\CI] lines, the fractional power spectrum contributed by halos below a given mass. In order to decrease the CO and [\CI] contamination by $> 90\%$, radiation from halos above $\sim 10^{12} - 3\times10^{12}~M_\odot$ must be subtracted. In a $2.4\times2.4$ deg$^2$ field there are $\sim2\times10^5$ halos with $M_{\rm h}>10^{12}~M_\odot$ at all redshifts; however, only those whose CO or [\CI] emission lines are received in the frequency bin centered at $\nu_0$ produce contamination.
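The interloper redshifts quoted above follow from the line rest frequencies (standard values, assumed here; the CO rotational ladder is approximated as harmonic):

```python
# Redshifts from which low-z interloper lines are shifted into an observed bin.
# Rest frequencies (GHz) are standard values assumed here; the CO rotational
# ladder is approximated as harmonic, nu_rest(J -> J-1) ~ J * 115.27 GHz.
NU_CO10 = 115.27
NU_CI = {"CI(1-0)": 492.16, "CI(2-1)": 809.34}

def z_interloper(nu_rest, nu_obs):
    """Redshift placing a line of rest frequency nu_rest at observed nu_obs."""
    return nu_rest / nu_obs - 1.0

nu_obs = 316.0  # GHz, the frequency bin discussed in the text
co_z = {J: z_interloper(J * NU_CO10, nu_obs) for J in (4, 5, 6, 7)}
ci_z = {name: z_interloper(nu, nu_obs) for name, nu in NU_CI.items()}
# co_z ~ {4: 0.46, 5: 0.82, 6: 1.19, 7: 1.55} and ci_z ~ {0.56, 1.56}, close
# to the values quoted above (small offsets from the harmonic approximation).
```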
\begin{figure} \centering{ \includegraphics[scale=0.4]{fig8.eps} \caption{Fractional contribution from halos below $M_{\rm h}$ to the clustering term (Eq. \ref{eq_clustering}) of the angular power spectrum for the $316\pm0.65$~GHz frequency bin for various emission lines. The two thin lines refer to the [\CI(1-0)] and [\CI(2-1)] lines, respectively. To guide the eye we have plotted a horizontal line corresponding to 0.1.} \label{power_fraction} } \end{figure} In principle, de-contaminating the map would be easy if we could measure the contamination line flux of each galaxy in the map and then subtract it from the relevant pixels. However, this procedure is very time consuming. Alternatively, we could identify pixels that are expected to be heavily contaminated by CO or [\CI] lines and discard them completely. We note that the CO or [\CI] sources should be much brighter than the [\hbox{C~$\scriptstyle\rm II$}] sources, say, in the optical/IR band, where they are also more easily resolved. Thus, one can directly drop the pixels in which optical/IR bright sources are detected in the redshift range from which CO or [\CI] lines are redshifted into the observed frequency bin. Of course, this would also cause a loss of [\hbox{C~$\scriptstyle\rm II$}] flux from the dropped pixels. However, as [\hbox{C~$\scriptstyle\rm II$}] (CO or [\CI]) lines come primarily from high-$z$ (low-$z$) galaxies, there is no correlation between the two galaxy populations, i.e., removing CO([\CI])-bright pixels is equivalent to a random masking of the [\hbox{C~$\scriptstyle\rm II$}] signal. The missing [\hbox{C~$\scriptstyle\rm II$}] power is limited as long as the masking fraction is $\lower.5ex\hbox{\ltsima} 30\%$ \citep{2005Natur.438...45K}. How do we select optical/IR bright sources? The natural choice is to use the K-band magnitude \citep{2014arXiv1410.4808S}. However, this quantity may not be a good indicator of the IR and CO or [\CI] luminosities.
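The statement that dropping CO/[\CI]-bright pixels acts as a random masking of the [\hbox{C~$\scriptstyle\rm II$}] field can be illustrated with a toy numerical check (a minimal sketch, not the paper's pipeline):

```python
import numpy as np

# Toy check: randomly zeroing a fraction f of the pixels of a statistically
# homogeneous map suppresses its variance by ~f and introduces no structure
# correlated with the signal -- the sense in which dropping CO/[CI]-bright
# pixels acts as a random masking of the [CII] field.
rng = np.random.default_rng(0)
signal = rng.normal(size=(200, 200))       # stand-in for a [CII] map
f_mask = 0.30                              # masking fraction discussed above
keep = rng.random(signal.shape) >= f_mask  # True = pixel survives masking
masked = signal * keep

ratio = masked.var() / signal.var()        # ~ 1 - f_mask for a random mask
```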
For example, the K-band to total IR flux ratio of the 7 spiral galaxy templates in the {\tt SWIRE} SED library varies by a factor of $\sim$30. The discrepancy for different galaxy types is even larger. To use the specific K-band brightness in contamination removal, the scatter of the $L_K - L_{\rm IR}$ relation should be reasonably modeled. In the samples used in \citet{2014ApJ...794..142G}, we find 51 (U)LIRGs and 15 high-$z$ DSFGs whose K-band fluxes are available on their website. From such data, we fit the following $L_{K^\prime} - L_{\rm IR}$ relation: \begin{equation} {\rm log}\left(\frac{L_{K^\prime}}{\rm erg~s^{-1} Hz^{-1}}\right) = 0.39 \times{\rm log}\left( \frac{L_{\rm IR}}{L_\odot} \right) +25.26, \label{LK} \end{equation} with a standard deviation of the residuals equal to 0.35; $L_{K^\prime}$ is the luminosity at the rest-frame frequency that is redshifted into the K band. For a halo with mass $M_{\rm h}$ and IR luminosity $L_{\rm IR}$ (computed from Eq. \ref{LIR}), $L_{K^\prime}$ is randomly generated from a log-normal distribution with mean given by Eq. (\ref{LK}) and standard deviation $\sigma_{K^\prime}=0.35$. We find that, for example, a halo with typical IR luminosity $10^{11}~L_\odot$ at $z=1$ is as bright as $m_K =21$ when adopting the Spi4 spiral galaxy SED template. We are then confident that K-band luminosities can be safely used to remove the CO and [\CI] contamination. The advantage with respect to relying on UV/optical magnitudes is that dust extinction effects in the K band are much smaller, and can be neglected to a first approximation. The FIR foreground subtraction algorithm exploits the fact that the continuum is a very smooth function of frequency. Such a feature is widely exploited, for example, in HI 21cm intensity mapping \citep{2006ApJ...650..529W,2008MNRAS.389.1319J,2014arXiv1409.8667A}.
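The log-normal assignment of $L_{K^\prime}$ described above (mean from Eq. \ref{LK}, 0.35 dex scatter) can be sketched as follows:

```python
import numpy as np

# Sketch of the log-normal K-band luminosity assignment described above:
# mean relation log10(L_K') = 0.39 * log10(L_IR / L_sun) + 25.26, with a
# scatter of 0.35 dex (the standard deviation of the fit residuals).
rng = np.random.default_rng(1)

def sample_log_LK(log_LIR, sigma=0.35, n=1):
    """Draw log10(L_K' / erg s^-1 Hz^-1) given log10(L_IR / L_sun)."""
    mean = 0.39 * log_LIR + 25.26
    return rng.normal(mean, sigma, size=n)

draws = sample_log_LK(11.0, n=100_000)  # a typical L_IR = 1e11 L_sun halo
# draws.mean() ~ 0.39 * 11 + 25.26 = 29.55, draws.std() ~ 0.35
```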
For this reason, we believe that assuming the same FIR continuum template for all halos is acceptable: smoothness, without any assumption on the slope, is the only property of the foreground required by this algorithm. We checked that adopting a very different SED template, e.g. that of an elliptical or a starburst galaxy, results in a different slope and amplitude of the predicted FIR foreground, but the recovered [\hbox{C~$\scriptstyle\rm II$}] signal is almost identical. In what follows we list the steps for recovering the [\hbox{C~$\scriptstyle\rm II$}] signal from the full, observed map at the 60 frequency bins in which the frequency range [238, 317]~GHz ($z_{\rm CII} = $[7, 5]) is sampled ($\Delta\nu_0 = 1.3$~GHz). They form a set of $200\times200$ lines of sight (one for each pixel of the map). \begin{enumerate} \item Identify the ``CO or [\CI] contaminated'' pixels (pixels containing $m_K < 22$ galaxies whose contamination lines are redshifted into the relevant band) in each line of sight, and replace their flux with the value interpolated from the two neighboring pixels along the same line of sight. \item For each line of sight, subtract its foreground component, estimated by either singular value decomposition (SVD) or a polynomial fitting algorithm (details are given in Appendix \ref{app_foreground}). \item Set the flux of the ``CO or [\CI] contaminated'' pixels identified in step (i) to zero. \end{enumerate} After the above procedure, the final map contains the [\hbox{C~$\scriptstyle\rm II$}] signal, the instrumental noise, and negligible residuals of the FIR continuum foreground and line contamination. We checked that, at angular scales $\sim1000''$, for frequencies corresponding to $z_{\rm CII} =5$, 6 and 7, only 0.1\%, 6\% and 20\% of the CO contamination power spectrum is left as a residual, respectively. For the [\CI] lines the corresponding fractions are 2\%, 5\% and 10\%, respectively. \section{Results}\label{result} Fig.
\ref{angular_power_recovered} shows the recovered [\hbox{C~$\scriptstyle\rm II$}] angular power spectrum (dashed) from $ z\sim 5, 6, 7$, along with the original signal (solid). At $z_{\rm CII}\sim5$, the power spectrum is almost perfectly recovered, with the slight deficiency at large angular scales due to the discarded [\hbox{C~$\scriptstyle\rm II$}] flux in CO or [\CI] contaminated pixels. At $z_{\rm CII}\sim6$, the [\hbox{C~$\scriptstyle\rm II$}] signal is recovered for $\theta \lower.5ex\hbox{\gtsima} 1000''$; at smaller scales the limiting factor is the instrumental noise, which can however be suppressed by a longer integration time. The signal from $z_{\rm CII}\sim7$ remains largely inaccessible, as noise dominates at all scales. \begin{figure} \centering{ \includegraphics[scale=0.4]{fig9.eps} \caption{The remaining angular power spectrum after foreground and contamination removal (open squares connected with lines) and the original [\hbox{C~$\scriptstyle\rm II$}] signal (filled circles connected with lines) as a function of angular scale $\theta = 2\pi/q$, where $q$ is the wavenumber, at $z_{\rm CII} \sim 5$ (solid), 6 (dotted) and 7 (dashed), respectively. The masked percentages are given for each redshift. The assumed observational configuration is a 6~m telescope with $T_{\rm sys} = 150$~K, $N_{\rm pix}=128\times128$, and $t_{\rm obs}=5000$~hr.} \label{angular_power_recovered} } \end{figure} \section{Conclusions and Discussions}\label{conclusion} We have studied the collective [\hbox{C~$\scriptstyle\rm II$}] emission signal from star-forming galaxies at $z \ge 5$, as well as the influence of the FIR continuum foreground and of the CO and [\CI] contamination on the experimental detection of the signal.
To this aim we have combined the predicted [\hbox{C~$\scriptstyle\rm II$}] line luminosity as a function of the galaxy star formation rate and metallicity (derived from single galaxy simulations including a sub-grid treatment of the interstellar medium presented in V15) with a semi-empirical approach to compute the $L_{\rm CII}-M_{\rm h}$ relation. This relation is subsequently applied to halo catalogs built from the large-scale N-body simulation {\tt BolshoiP}, to generate mock maps of the [\hbox{C~$\scriptstyle\rm II$}] signal. To compute the FIR continuum foreground, we derived the $L_{\rm IR} - M_{\rm h}$ relation via the abundance-matching technique. As for the contamination by CO lines emitted from low-redshift galaxies, instead of using the poorly constrained CO luminosity functions, we use, as an intermediate step, $L^{\rm J}_{\rm CO} - L_{\rm IR}$ relations fitted to measurements of both local and high-redshift samples. For the [\CI] line luminosities, we use the theoretical calculations as a function of $L_{\rm IR}$ from the Popping et al. (2014) model mentioned in Sec. 2.5. We generated mock maps for the FIR continuum and the CO and [\CI] emission, in close analogy with the [\hbox{C~$\scriptstyle\rm II$}] mock maps. We carried out FIR foreground removal and contamination masking experiments on the total mock maps (containing the signal + foreground + contamination and also the instrumental noise) to recover the angular power spectrum of the original [\hbox{C~$\scriptstyle\rm II$}] maps. We pointed out that, in order to efficiently subtract the CO and [\CI] contamination, one can discard the pixels likely to be contaminated by these lines. This is feasible if the map has sufficient angular resolution to avoid losing too many pixels. We estimated that if the intensity map has a resolution $\sim40''$, the contamination can be suppressed by dropping all pixels containing galaxies brighter than $m_K =22$ and located in the relevant redshift range.
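The spectrally smooth continuum subtraction used in step (ii) of the recovery procedure can be illustrated with a toy sightline (amplitudes are illustrative only, not taken from our mocks; the polynomial-fit variant is shown):

```python
import numpy as np

# Toy illustration of the line-of-sight continuum removal: the FIR continuum
# is smooth in frequency, so a low-order polynomial fit in log-log space
# captures it, and subtracting the fit leaves the rapidly fluctuating lines.
rng = np.random.default_rng(2)
nu = np.linspace(238.0, 317.0, 60)         # the 60 frequency bins of the text
continuum = 3e5 * (nu / 316.0) ** 3.5      # smooth foreground (toy slope)
lines = 1200.0 * rng.normal(size=nu.size)  # fluctuating line signal (toy)
observed = continuum + lines

coeffs = np.polyfit(np.log10(nu), np.log10(observed), deg=3)
fit = 10.0 ** np.polyval(coeffs, np.log10(nu))
residual = observed - fit  # dominated by the line signal, not the continuum
```

The residual retains the line fluctuations while the continuum, two orders of magnitude larger, is removed.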
We found that the $z > 5$ [\hbox{C~$\scriptstyle\rm II$}] signal comes mainly from halos in the mass range $10^{11-12} ~M_\odot$ (H-band apparent magnitude $\sim 26.8 - 23.8$); as this mass range is narrow, intensity mapping is an ideal experiment to investigate these early galaxies. The [\hbox{C~$\scriptstyle\rm II$}] signal from $z_{\rm CII} \sim 5 - 6$ is detectable by a ground-based, noise-limited telescope with a 6~m aperture, $T_{\rm sys} = 150$~K, and a FIR camera with $128\times128$ pixels in about $5000$~hr of total integration time. Although feasible in principle, the experiment is difficult to perform with currently available telescopes. In addition, the integration time itself could be longer if the atmospheric conditions are on average worse than assumed here. Therefore a dedicated telescope is required, and its location will be crucial. In any case, our study will serve as a robust guideline for the design of future facilities. A further motivation for a [\hbox{C~$\scriptstyle\rm II$}] intensity mapping experiment is to detect the signal from faint galaxies that are unresolved even in the deepest optical/IR surveys. This is important as these faint galaxies are believed to contribute most of the ionizing photons to reionization \citep{2011MNRAS.414..847S,2013MNRAS.434.1486D,2007MNRAS.380L...6C,2011MNRAS.414.1455L, 2012MNRAS.420.1606J, 2012ApJ...752L...5B,2012ApJ...758...93F}. In what follows, we therefore discuss the feasibility of such an experiment. As can be seen from Fig. \ref{power_fraction}, at $z\sim5$, faint galaxies hosted in halos below $10^{11}~M_\odot$ produce less than 1\% of the total [\hbox{C~$\scriptstyle\rm II$}] power spectrum. Therefore an intensity mapping experiment aimed at detecting such faint galaxies could only be carried out by a telescope whose noise level is background-limited, such as a cryogenic space telescope.
In this case the noise is mainly due to Poisson fluctuations of the CMB \citep{2010JCAP...11..016V}: \begin{equation} \sigma_N = \sqrt{\frac{B(T_{\rm CMB},\nu_0) h\nu_0}{\lambda^2_0\Delta \nu_0 t}}, \end{equation} where $B$ is the CMB emission at $\nu_0$. For the following calculation we adopt a 2~m aperture telescope \footnote{A 2~m aperture telescope can be considered typical for a space observatory. While a larger aperture would be helpful in reducing the integration time, the main challenge is to measure the contamination flux of foreground galaxies and the [\hbox{C~$\scriptstyle\rm II$}] flux of bright high-$z$ galaxies.} and $t_{\rm obs} = t\times \Omega_{\rm map}/\Omega_{\rm FOV}=100$~hr, which is appropriate as a reference for a space instrument with a low noise level. To analyze the [\hbox{C~$\scriptstyle\rm II$}] signal of these faint galaxies, the [\hbox{C~$\scriptstyle\rm II$}] flux from bright galaxies needs to be measured by a high-resolution interferometer array, and then subtracted from the relevant pixels. In the $(316\pm0.65)$~GHz frequency bin, our light cone contains $\sim7\times10^3$ halos with [\hbox{C~$\scriptstyle\rm II$}] line flux $>10^{-22}$~Wm$^{-2}$. These halos have $M_{\rm h}\lower.5ex\hbox{\gtsima} 6\times10^{10}~M_\odot$. In addition, there are $1.3\times10^4$ halos with CO and [\CI] contamination line fluxes above $10^{-22}$~Wm$^{-2}$. Assuming a line width of 50~km s$^{-1}$, resolving these halos with a signal-to-noise ratio $> 5$ requires a sensitivity of $4\times10^{-5}$~Jy. For comparison, at 316~GHz and for a channel width of 50~km s$^{-1}$, the ALMA sensitivity with 34 antennas is $\sim 4\times10^{-5}$~Jy in 28 hr. Assuming a $\sim$(20 arcsec)$^2$ FOV for ALMA, covering a sky region of $2.4\times2.4$ deg$^2$ requires $\sim 2\times10^5$ pointings. Therefore, even for a single frequency bin the required integration time is so high ($\sim5\times10^6$~hr!) as to make the experiment unfeasible with current technology.
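The order-of-magnitude estimate above follows directly from the quoted survey geometry and per-pointing time:

```python
# Order-of-magnitude check of the interferometric follow-up quoted above:
# tiling 2.4 x 2.4 deg^2 with an ~(20 arcsec)^2 field of view, at ~28 hr per
# pointing for the required ~4e-5 Jy sensitivity (values taken from the text).
field_side_arcsec = 2.4 * 3600.0
fov_side_arcsec = 20.0
n_pointings = (field_side_arcsec / fov_side_arcsec) ** 2
t_total_hr = n_pointings * 28.0
# n_pointings ~ 1.9e5 and t_total_hr ~ 5.2e6, consistent with the ~2e5
# pointings and ~5e6 hr quoted above.
```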
\begin{figure} \centering{ \includegraphics[scale=0.4]{fig10.eps} \caption{The original angular power spectrum of the [\hbox{C~$\scriptstyle\rm II$}] emission from halos with [\hbox{C~$\scriptstyle\rm II$}] line flux $<10^{-22}$~Wm$^{-2}$ in the $316\pm 0.65$~GHz frequency bin (filled symbols) and the recovered one (open symbols).} \label{angular_power_recovered_faint} } \end{figure} In spite of this, we generate new maps including only the [\hbox{C~$\scriptstyle\rm II$}] signal and contamination from halos with a line flux below $10^{-22}$~Wm$^{-2}$. Such halos have masses $\lower.5ex\hbox{\ltsima} 10^{11}~M_\odot$; we can therefore use the $L_{\rm CII} - M_{\rm h}$ relation from the P14 simulation. We can recover the [\hbox{C~$\scriptstyle\rm II$}] power spectrum using the procedure detailed in Sec. \ref{COmasking}, with the following considerations. Here we have assumed that galaxies with [\hbox{C~$\scriptstyle\rm II$}]/CO/[\CI] line fluxes above $10^{-22}$~Wm$^{-2}$ have already been resolved. Thus, when removing the contamination, we can directly subtract the flux of the resolved sources in the relevant frequency bin, instead of discarding the pixel completely. Additionally, here the [\hbox{C~$\scriptstyle\rm II$}] signal to FIR continuum ratio is even smaller, which leads us to remove the first 15 modes in the continuum subtraction. In Fig. \ref{angular_power_recovered_faint} we show the original and recovered power spectra from $z_{\rm CII}\sim 5$. We can see that the experiment is indeed possible at $z=5$. At higher redshift, though, the recovery of the [\hbox{C~$\scriptstyle\rm II$}] signal from faint galaxies is still hampered by the noise level. During the final stages of this work, \citet{2014arXiv1410.4808S} presented an investigation of the [\hbox{C~$\scriptstyle\rm II$}] signal from high-$z$ galaxies through mock surveys obtained from semi-numerical simulations.
The two studies are in broad agreement, although they differ in the conclusions concerning the FIR continuum, which we found to be much stronger than both the [\hbox{C~$\scriptstyle\rm II$}] signal and the contamination. Hence, accurately subtracting this foreground is vital in order to recover the [\hbox{C~$\scriptstyle\rm II$}] signal. The good news is that, as we showed here, the proposed algorithm to remove a spectrally smooth component from each line of sight, inspired by 21 cm experiments, can be successfully applied to [\hbox{C~$\scriptstyle\rm II$}] intensity mapping as well. Finally, we comment on a possible caveat of our work, related to the fact that the V15 model neglects the contribution of HII regions to the [\hbox{C~$\scriptstyle\rm II$}] emission. We have already shown that HII regions in the diffuse medium do not contribute significantly compared with the cold neutral medium and warm neutral medium (see Fig. 8 in \citealt{2014ApJ...784...99G}), and V15 further show that the [\hbox{C~$\scriptstyle\rm II$}] emission by the cold and warm neutral media is negligible compared with that from PDRs. For what concerns HII regions surrounding molecular clouds (MC), we consider the MC properties predicted by the V15 model: number density, $n_{\rm H}\sim10^{3}$~cm$^{-3}$; size, $r_{\rm MC}\sim1$~pc; ionization parameter\footnote{The quoted value for $U$ is estimated without considering the MC optical depth $\tau_{\rm MC}$ to ionizing photons. If $\tau_{\rm MC}$ is considered, $U$ will become smaller and the [\hbox{C~$\scriptstyle\rm II$}] emission from HII regions will further decrease.} $U \sim10^{-2.7}$. These fiducial values for $n_{\rm H}$ and $U$ are close to the parameters used in Fig. 2 of \citet{2011A&A...526A.149N}, where HII regions represent MC outskirts with column density $4\times10^{20}$~cm$^{-2}$.
From this figure, we can see that at the MC column density expected in the V15 model ($\sim 3\times10^{21}$~cm$^{-2}$) the [\hbox{C~$\scriptstyle\rm II$}] emission is almost 50 times stronger than that at the edge of the HII regions, implying that the latter contribute only $f_{\rm HII}\sim2\%$ of the total emission. This calculation allows us to conclude that in the V15 model HII regions are expected to provide a negligible contribution to the total [\hbox{C~$\scriptstyle\rm II$}] emission compared with PDRs. This conclusion is consistent with the results by \citet{2006MNRAS.368.1949A}. In fact, this author finds that for $n_{\rm H}=10^3$~cm$^{-3}$ and $U=10^{-2.7}$, $f_{\rm HII}\lsim10\%$. The HII region contribution to the [\hbox{C~$\scriptstyle\rm II$}] emission remains, however, a controversial topic. For example, \citet{2006ApJ...652L.125O,2011ApJ...739..100O} find $f_{\rm HII}\sim(30-40)\%$ in the Carina Nebula, a single star-forming region in the Milky Way. \citet{2010MNRAS.404.1910V} report fractions spanning the range $5.5\% - 60\%$ in their samples. \citet{2014ApJ...782L..17D} observed two Lyman Alpha Emitters (LAEs) at $z = 4.7$ and concluded that in these LAEs (which are actually members of an interacting system including quasars) most of the [\hbox{C~$\scriptstyle\rm II$}] emission comes from the ionized medium. However, we also note that \citet{2015arXiv150203131C} found that the HII region contribution is typically less than 15\% for dwarf galaxies. To summarize, the contribution of HII regions to the [\hbox{C~$\scriptstyle\rm II$}] emission is not well known, especially for galaxies at $z > 5$, since [\hbox{C~$\scriptstyle\rm II$}] observations are still scarce at these epochs. Given that $f_{\rm HII}$ is expected to be negligible according to current state-of-the-art theoretical models, we do not consider the [\hbox{C~$\scriptstyle\rm II$}] emission from HII regions as a major component in our calculations.
Accounting for this contribution (e.g. $f_{\rm HII} \sim30\%$) would only enhance the [\hbox{C~$\scriptstyle\rm II$}] power spectrum (by a factor of $\sim2$), making the signal stronger.
\section*{Introduction} The purpose of this paper is the study of anticyclotomic analogs of the results of \cite{EPW} on the variation of Iwasawa invariants in Hida families. Let $\bar\rho:G_\mathbf Q:={\rm Gal}(\overline{\mathbf Q}/\mathbf Q)\rightarrow{\rm GL}_2(\mathbf{F})$ be an odd and absolutely irreducible Galois representation over a finite field $\mathbf{F}$ of characteristic $p$. After the celebrated proof of Serre's conjecture \cite{kw}, we know that $\bar\rho$ is modular. Let $\mathfrak{H}(\bar{\rho})$ denote the set of all $p$-ordinary and $p$-stabilized newforms with mod $p$ Galois representation isomorphic to $\bar\rho$. Let $K$ be an imaginary quadratic field of discriminant prime to $p$. Let $N^-$ be a square-free product of an \emph{odd} number of primes, each inert in $K$, containing all such primes at which $\bar\rho$ is ramified. As in \cite{pollack-weston}, we say that $(\bar\rho,N^-)$ satisfies condition (CR) if the following hold: \begin{ass}[CR]\label{CR} \begin{enumerate} \item{} $\bar{\rho}$ is irreducible, and surjective if $\mathbf{F}=\mathbf{F}_5$. \item{} If $\ell\vert N^-$ and $\ell\equiv\pm{1}\pmod{p}$, then $\bar{\rho}$ is ramified at $\ell$. \end{enumerate} \end{ass} Let $\Gamma$ be the Galois group of the anticyclotomic $\mathbf Z_p$-extension $K_\infty/K$. 
Associated with each $f\in\mathfrak{H}(\bar\rho)$ there is a $p$-adic $L$-function $L_p(f/K)\in\Lambda:={\mathcal O}\pwseries{\Gamma}$, where ${\mathcal O}$ is the ring of integers of a finite extension $F$ of $\mathbf Q_p$ over which $f$ is defined, characterised by an interpolation property of the form \begin{equation}\label{eq:interp} \chi(L_p(f/K))=C_p(f,\chi) \cdot E_p(f,\chi)\cdot\frac{L(f,\chi,k/2)}{\Omega_{f,N^-}}\nonumber \end{equation} as $\chi$ runs over the $p$-adic characters of $\Gamma$ corresponding to certain algebraic Hecke characters of $K$, where $C_p(f,\chi)$ is an explicit nonzero constant, $E_p(f,\chi)$ is a $p$-adic multiplier, and $\Omega_{f,N^-}$ is a complex period (specified up to a $p$-adic unit) making the above ratio algebraic. The anticyclotomic Iwasawa main conjecture gives an arithmetic interpretation of $L_p(f/K)$. More precisely, let \[ \rho_f:G_{\mathbf Q}\longrightarrow{\rm Aut}_F(V_f)\simeq{\rm GL}_2(F) \] be a self-dual twist of the $p$-adic Galois representation associated to $f$, fix an ${\mathcal O}$-stable lattice $T_f\subset V_f$, and set $A_f=V_f/T_f$. Since $f$ is $p$-ordinary, there is a unique one-dimensional $G_{\mathbf Q_p}$-invariant subspace $F_p^+V_f\subset V_f$ where the inertia group at $p$ acts via $\varepsilon_{\rm cyc}^{k/2}\psi$, where $\varepsilon_{\rm cyc}$ is the $p$-adic cyclotomic character and $\psi$ is of finite order. Let $F_p^+A_f$ be the image of $F_p^+V_f$ in $A_f$ and set $F_p^-A_f=A_f/F_p^+A_f$. Define the \emph{minimal Selmer group} of $f$ by \[ {\rm Sel}(K_\infty,f):=\ker\left\{H^1(K_\infty,A_f) \longrightarrow \prod_{w\nmid p}H^1(K_{\infty,w},A_f)\times\prod_{w\vert p}H^1(K_{\infty,w},F_p^-A_f)\right\} \] where $w$ runs over the places of $K_\infty$. By standard arguments (see \cite{greenberg-Iw}, for example), one knows that the Pontryagin dual of ${\rm Sel}(K_\infty,f)$ is finitely generated over $\Lambda$. The \emph{anticyclotomic main conjecture} is then the following.
\begin{introconj}\label{AIMC} ${\rm Sel}(K_\infty,f)^\vee$ is $\Lambda$-torsion, and \[ Ch_{\Lambda}({\rm Sel}(K_\infty,f)^\vee)=(L_p(f/K)). \] \end{introconj} For $f$ corresponding to $p$-ordinary elliptic curves, and under rather stringent assumptions on $\bar\rho_f$ which were later relaxed by Pollack--Weston \cite{pollack-weston}, one of the divisibilities predicted by Conjecture~\ref{AIMC} was obtained by Bertolini--Darmon \cite{bdIMC} using Heegner points and Kolyvagin's method of Euler systems. More recently, after the work of Chida--Hsieh \cite{ChHs2} the divisibility \begin{equation}\label{ES-div} Ch_{\Lambda}({\rm Sel}(K_\infty,f)^\vee)\supseteq(L_p(f/K))\nonumber \end{equation} is known for all newforms $f\in\mathfrak{H}(\bar\rho)$ of weight $k\leq p-2$ and trivial nebentypus, provided the pair $(\bar{\rho},N_f^-)$ satisfies a mild strengthening of condition (CR). Here, $N_f^-$ denotes as usual the product of the prime factors of $N_f$ which are inert in $K$. The restriction to weights $k\leq p-2$ in \cite{ChHs2} comes from the use of the version of Ihara's lemma proved in \cite{DT-inv}. While it seems difficult to directly extend their arguments to higher weights, it might be possible to obtain the above divisibility for all weights by adapting the strategy of Bertolini--Darmon \cite{bdIMC} to the setting of Heegner points in Hida families \cite{LV-MM}. In fact, the results of this paper complete the proof of many new cases of Conjecture~\ref{AIMC} using big Heegner points, but by a rather different approach, as we now explain. Associated with every $f\in\mathfrak{H}(\bar\rho)$ there are \emph{anticyclotomic Iwasawa invariants} $\mu^{\rm an}(K_\infty,f)$, $\lambda^{\rm an}(K_\infty,f)$, $\mu^{\rm alg}(K_\infty,f)$, and $\lambda^{\rm alg}(K_\infty,f)$. The analytic (resp. algebraic) $\lambda$-invariants are the number of zeros of $L_p(f/K)$ (resp.
of a generator of the characteristic ideal of ${\rm Sel}(K_\infty,f)^\vee$), while the $\mu$-invariants are defined as the exponent of the highest power of $p$ dividing the same objects. In this paper we study the behavior of these invariants as $f$ varies over the subset $\mathcal{H}(\bar\rho)$ of $\mathfrak{H}(\bar\rho)$ consisting of newforms with $N_f^-=N^-$. Our main result is then the following. \begin{introthm}\label{thmA} Suppose that $\bar\rho$ is $p$-ordinary, $p$-distinguished, and ramified at all $\ell\vert N^-$, and fix $*\in\{{\rm alg},{\rm an}\}$. \begin{enumerate} \item For all $f\in\mathcal{H}(\bar\rho)$, we have \[ \mu^*(K_\infty,f)=0. \] \item Let $f_1$, $f_2\in\mathcal{H}(\bar{\rho})$ lie on the branches $\mathbb{T}(\mathfrak{a}_1)$, $\mathbb{T}(\mathfrak{a}_2)$, respectively. Then \begin{equation}\label{lambda} \lambda^*(K_\infty,f_1)-\lambda^*(K_\infty,f_2)=\sum_{\ell\mid N^+_1 N^+_2} \left(e_\ell(\mathfrak{a}_2)-e_\ell(\mathfrak{a}_1)\right)\nonumber \end{equation} where the sum is over the split primes in $K$ which divide the tame level of $f_1$ or $f_2$, and $e_\ell(\mathfrak{a}_j)$ is an explicit non-negative invariant of the branch $\mathbb{T}(\mathfrak{a}_j)$ and the prime $\ell$. \end{enumerate} \end{introthm} Provided $p$ splits in $K$, and under the same assumptions on $\bar\rho$ as in Theorem~\ref{thmA}, the deep work of Skinner--Urban \cite{SU} establishes one of the divisibilities in a related ``three-variable'' Iwasawa main conjecture. Combining their work with the main result of this paper, we deduce the following. \begin{introcor}\label{corA} Let $\bar\rho$ be as in Theorem~\ref{thmA} and suppose that $p$ splits in $K$. If the anticyclotomic main conjecture holds for some newform $f_0\in\mathcal{H}(\bar\rho)$ of weight $k_0\equiv 2\pmod{p-1}$ and trivial nebentypus, then it holds for all newforms $f\in\mathcal{H}(\bar\rho)$ of weight $k\equiv 2\pmod{p-1}$ and trivial nebentypus.
\end{introcor} As hinted at above, the proof of our main results closely follows the methods of \cite{EPW}. In fact, on the algebraic side the arguments of \emph{loc.cit.} apply in our context almost verbatim, and the main contribution of this paper is the development of anticyclotomic analogs of their results on the analytic side. Indeed, the proof of the analytic parts of \cite{EPW} is based on the study of certain variants of the two-variable $p$-adic $L$-functions of Mazur--Kitagawa, whose construction relies on the theory of modular symbols on classical modular curves. In contrast, by our assumptions on $N^-$ we are led to work on a family of Shimura curves associated with a (definite) quaternion algebra over $\mathbf Q$ of discriminant $N^->1$, and these curves are well-known to have no cusps. In the cyclotomic case, modular symbols are useful in two ways: they yield a concrete realization of the degree one compactly supported cohomology of open modular curves, and they provide a powerful tool for studying the arithmetic properties of critical values of Hecke $L$-functions. Our basic observation is that in the present anticyclotomic setting, Heegner points on definite Shimura curves provide a similarly convenient way of describing the \emph{central} critical values of the Rankin $L$-series $L(f/K,\chi,s)$. Also fundamental for the method of \cite{EPW} is the possibility to ``deform'' modular symbols in Hida families. In our anticyclotomic context, the construction of big Heegner points in Hida families was obtained in the work \cite{LV-MM} of the third named author in collaboration with Vigni, following an original construction due to Howard \cite{howard-invmath}, while the relation between these points and classical $L$-values was established in the work \cite{cas-longo} by the first and third named authors.
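For the reader's convenience, we recall the classical fact underlying the definition of the invariants $\mu$ and $\lambda$ (Weierstrass preparation; this is standard and not specific to our setting): fixing an isomorphism $\Lambda\simeq{\mathcal O}\pwseries{T}$, any nonzero $f\in\Lambda$ factors uniquely as
\[
f(T)=\varpi^{\mu}\,P(T)\,u(T),
\]
where $\varpi$ is a uniformizer of ${\mathcal O}$, $P(T)$ is a distinguished polynomial of degree $\lambda$, and $u(T)\in\Lambda^{\times}$; the invariants $\mu(f)$ and $\lambda(f)$ are then the exponent $\mu$ and the degree $\lambda$ appearing in this factorization.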
With these key results at hand, and working over appropriate quotients of the Hecke algebras considered in \cite{EPW} via the Jacquet--Langlands correspondence, we are then able to adapt the arguments of \emph{loc.cit.} to our setting, making use of the ramification hypotheses on $\bar\rho$ to ensure a multiplicity one property of certain Hecke modules (among other uses). We conclude this introduction with the following overview of the contents of this paper. In the next section, we briefly recall the Hida theory used in this paper, following the exposition in \cite[\S{1}]{EPW} for the most part. In Section~\ref{sec:heegner}, we describe an extension of the construction of big Heegner points to ``imprimitive'' branches of the Hida family, an extension necessary for the purposes of this paper. In Section~\ref{sec:p-adicL}, we construct two-variable $p$-adic $L$-functions attached to a Hida family and to each of its irreducible components (or branches), and prove Theorem~\ref{thm:3.6.2} relating the two. This theorem is the key technical result of this paper, and the analytic part of Theorem~\ref{thmA} follows easily from it. In Section~\ref{sec:Selmer}, we deduce the algebraic part of Theorem~\ref{thmA} using the residual Selmer groups studied in \cite[\S{3.2}]{pollack-weston}. Finally, in Section~\ref{sec:applications} we give the applications of our results to the anticyclotomic main conjecture. \section{Hida theory}\label{sec:hida-theory} \subsection{Hecke algebras}\label{subsec:hecke} Fix a positive integer $N$ admitting a factorization $N=N^+N^-$ with $(N^+,N^-)=1$ and $N^-$ square-free, and fix a prime $p\nmid N$.
For each integer $k\geq 2$, denote by $\mathfrak{h}_{N,r,k}$ the $\mathbf Z_p$-algebra generated by the Hecke operators $T_\ell$ for $\ell\nmid Np$, the operators $U_\ell$ for $\ell\vert Np$, and the diamond operators $\langle a\rangle$ for $a\in(\mathbf Z/p^r\mathbf Z)^\times$, acting on the space $S_k(\Gamma_{0,1}(N,p^r),\overline{\mathbf Q}_p)$ of cusp forms of weight $k$ on $\Gamma_{0,1}(N,p^r):=\Gamma_0(N)\cap\Gamma_1(p^r)$. For $k=2$, we abbreviate $\mathfrak h_{N,r}:=\mathfrak h_{N,r,2}$. Let $e^{\rm ord}:=\lim_{n\to\infty}U_p^{n!}$ be Hida's ordinary projector, and define \[ \mathfrak{h}_{N,r,k}^{\rm ord}:=e^{\rm ord}\mathfrak{h}_{N,r,k}\qquad \mathfrak{h}_{N,r}^{\rm ord}:=e^{\rm ord}\mathfrak{h}_{N,r}\qquad \mathfrak{h}_{N}^{\rm ord}:=\varprojlim_r\mathfrak{h}^{\rm ord}_{N,r} \] where the limit is over the projections induced by the natural restriction maps. Let $\mathbb{T}^{N^-}_{N,r,k}$ be the quotient of $\mathfrak{h}_{N,r,k}^\mathrm{ord}$ acting faithfully on the subspace of $e^{\rm ord}S_k(\Gamma_{0,1}(N,p^r),\overline{\mathbf Q}_p)$ consisting of forms which are new at all primes dividing $N^-$. Set $\mathbb T^{N^-}_{N,r}:=\mathbb T^{N^-}_{N,r,2}$ and define \[ \mathbb{T}_{N}^{N^-}:=\varprojlim_r\mathbb{T}^{N^-}_{N,r}. \] Each of these Hecke algebras is equipped with a natural $\mathbf Z_p\pwseries{\mathbf Z_p^\times}$-algebra structure via the diamond operators, and by a well-known result of Hida, $\mathfrak{h}_N^{\rm ord}$ is finite and flat over $\mathbf Z_p\pwseries{1+p\mathbf Z_p}$.
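We remark, as a standard fact included here only for convenience, that a choice of topological generator of $1+p\mathbf Z_p$ (e.g. $1+p$ for $p$ odd) induces the usual identification
\[
\mathbf Z_p\pwseries{1+p\mathbf Z_p}\;\simeq\;\mathbf Z_p\pwseries{T},\qquad [1+p]\longmapsto 1+T,
\]
so that finiteness and flatness over $\mathbf Z_p\pwseries{1+p\mathbf Z_p}$ may equivalently be checked over the one-variable power series ring $\mathbf Z_p\pwseries{T}$.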
\subsection{Galois representations on Hecke algebras} For each positive integer $M\vert N$ we may consider the new quotient $\mathbb T_M^\mathrm{new}$ of $\mathfrak h_M^\mathrm{ord}$, and the Galois representation \begin{equation}\label{2.2.1} \rho_M:G_\mathbf Q\longrightarrow\GL_2(\mathbb T_M^\mathrm{new}\otimes\mathcal L)\nonumber \end{equation} described in \cite[Thm.~2.2.1]{EPW}, where $\mathcal L$ denotes the fraction field of $\mathbf Z_p\pwseries{1+p\mathbf Z_p}$. Let $\mathbb T_N'$ be the $\mathbf Z_p\pwseries{1+p\mathbf Z_p}$-subalgebra of $\mathbb T_N^{N^-}$ generated by the image under the natural projection $\mathfrak{h}_{N}^{\rm ord}\rightarrow\mathbb{T}_N^{N^-}$ of the Hecke operators of level prime to $N$. As in \cite[Prop.~2.3.2]{EPW}, one can show that the canonical map \[ \mathbb T_N'\longrightarrow\prod_M\mathbb T_M^\mathrm{new} \] where the product is over all integers $M\geq 1$ with $N^-\vert M\vert N$, becomes an isomorphism after tensoring with $\mathcal L$. Taking the product of the Galois representations $\rho_M$ we thus obtain \[ \rho:G_\mathbf Q\longrightarrow\GL_2(\mathbb T_N'\otimes\mathcal L). \] For any maximal ideal $\mathfrak m$ of $\mathbb T_N'$, let $(\mathbb T_N')_\mathfrak m$ denote the localization of $\mathbb T_N'$ at $\mathfrak m$ and let \[ \rho_\mathfrak m:G_\mathbf Q\longrightarrow \GL_2\left((\mathbb T_N ')_\mathfrak m\otimes\mathcal L\right) \] be the resulting Galois representation. If the residual representation $\bar\rho_\mathfrak m$ is irreducible, then $\rho_{\mathfrak m}$ admits an integral model (still denoted in the same manner) \[ \rho_\mathfrak m:G_\mathbf Q\longrightarrow\GL_2\left((\mathbb T_N')_\mathfrak m\right) \] which is unique up to isomorphism. \subsection{Residual representations} \label{subsec:residual} Let $\bar\rho:G_\mathbf Q\rightarrow\GL_2(\mathbf{F})$ be an odd irreducible Galois representation defined over a finite field $\mathbf{F}$ of characteristic $p$.
By \cite{kw}, $\bar\rho$ is modular, meaning that it arises as the residual representation associated with a modular form of some weight and level defined over $\overline{\mathbf Q}_p$. Consider three more conditions we may impose on $\bar\rho$, where $N^-$ is a fixed square-free product of an odd number of primes. \begin{assSU}[SU] \begin{enumerate} \item $\bar\rho$ is \emph{$p$-ordinary}: the restriction of $\bar\rho$ to a decomposition group $G_p\subset G_{\mathbf Q}$ at $p$ has a one-dimensional unramified quotient over $\mathbf{F}$. \item $\bar\rho$ is \emph{$p$-distinguished}: $\bar\rho_{}\vert_{G_p}\sim\left(\begin{smallmatrix}\bar{\varepsilon}&*\\0&\bar{\delta}\end{smallmatrix}\right)$ with $\bar{\varepsilon}\neq\bar{\delta}$. \item $\bar\rho$ is ramified at all primes $\ell\vert N^-$. \end{enumerate} \end{assSU} Fix once and for all a representation $\bar\rho$ as above satisfying Assumption~(SU), together with a $p$-stabilization of $\bar\rho$ in the sense of \cite[Def.~2.2.10]{EPW}. Let $\overline{V}$ be a two-dimensional $\mathbf{F}$-vector space which affords $\bar\rho$, and for any finite set of primes $\Sigma$ that does not contain $p$, define \begin{equation}\label{Def:N(Sigma)} N(\Sigma):=N(\bar\rho)\prod_{\ell\in\Sigma}\ell^{m_\ell} \end{equation} where $N(\bar{\rho})$ is the tame conductor of $\bar{\rho}$, and $m_\ell:={\rm dim}_{\mathbf{F}}\;\overline{V}_{I_\ell}$. Combining \cite[Thm.~2.4.1]{EPW} and \cite[Prop.~2.4.2]{EPW} with the fact that $\bar\rho$ is ramified at the primes dividing $N^-$, one can see that there exist unique maximal ideals $\mathfrak n$ and $\mathfrak{m}$ of $\mathbb T_{N(\Sigma)}^{N^-}$ and $\mathbb T_{N(\Sigma)}'$, respectively, such that $\mathfrak n$ lifts $\mathfrak m$, $(\mathbb T_{N(\Sigma)}')_\mathfrak m\simeq(\mathbb T^{N^-}_{N(\Sigma)})_\mathfrak n$, and $\bar\rho_\mathfrak m\simeq\bar\rho$. 
Define the ordinary Hecke algebra $\mathbb{T}_\Sigma$ attached to $\bar\rho$ and $\Sigma$ by \[ \mathbb T_\Sigma:=(\mathbb T_{N(\Sigma)}')_{\mathfrak{m}}. \] Thus $\mathbb{T}_\Sigma$ is a local factor of $\mathbb{T}_{N(\Sigma)}'$, and we let \[ \rho_\Sigma:G_\mathbf Q\longrightarrow \GL_2\left(\mathbb T_\Sigma\right) \] denote the Galois representation deduced from $\rho_\mathfrak{m}$. Following the terminology of \cite[\S{2.4}]{EPW}, we shall refer to ${\rm Spec}(\mathbb{T}_\Sigma)$ as ``the Hida family'' $\mathcal{H}(\bar{\rho})$ attached to $\bar\rho$ (and our chosen $p$-stabilization) that is minimally ramified outside $\Sigma$. \subsection{Branches of the Hida family}\label{sec:branches} If $\mathfrak a$ is a minimal prime of $\mathbb T_\Sigma$ (for a finite set of primes $\Sigma$ as above), we put $\mathbb T (\mathfrak a):=\mathbb T_\Sigma /\mathfrak a$ and let \[ \rho(\mathfrak a):G_\mathbf Q\longrightarrow\GL_2(\mathbb T(\mathfrak a)) \] be the Galois representation induced by $\rho_\Sigma$. As in \cite[Prop.~2.5.2]{EPW}, one can show that there is a unique divisor $N(\mathfrak a)$ of $N(\Sigma)$ and a unique minimal prime $\mathfrak a'\subset\mathbb T_{N(\mathfrak a)}^\mathrm{new}$ above $\mathfrak{a}$ such that the diagram \[ \xymatrix{ \mathbb T_\Sigma\ar[r] \ar[d]& \mathbb T'_{N(\Sigma)} \ar[r] & \prod_{N^-\mid M\mid N(\Sigma)}\mathbb T^\mathrm{new}_{M}\ar[d]\\ \mathbb{T}_{\Sigma}/\mathfrak{a} \ar[r]^-{=} & \mathbb T(\mathfrak a)\ar[r] & \mathbb T^\mathrm{new}_{N(\mathfrak a)}/\mathfrak a'} \] commutes. We call $N(\mathfrak a)$ the \emph{tame conductor} of $\mathfrak a$ and set \[ \mathbb T(\mathfrak a)^\circ:=\mathbb T^\mathrm{new}_{N(\mathfrak a)}/\mathfrak a'. \] In particular, note that $N^-\vert N(\mathfrak a)$ by construction, and that the natural map $\mathbb{T}(\mathfrak{a})\rightarrow\mathbb{T}(\mathfrak{a})^\circ$ is an embedding of local domains. 
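To fix ideas, we note an elementary special case of this formalism: if $\mathbb{T}_\Sigma$ happens to be an integral domain, then $\mathfrak{a}=(0)$ is its unique minimal prime, the Hida family $\mathcal{H}(\bar\rho)$ consists of a single branch, and the diagram above reduces to the embedding
\[
\mathbb{T}_\Sigma=\mathbb{T}(\mathfrak{a})\hookrightarrow\mathbb{T}(\mathfrak{a})^\circ=\mathbb{T}^{\rm new}_{N(\mathfrak{a})}/\mathfrak{a}'.
\]
In general, the branches $\mathbb{T}(\mathfrak{a})$ correspond to the irreducible components of ${\rm Spec}(\mathbb{T}_\Sigma)$.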
\subsection{Arithmetic specializations} For any finite $\mathbf Z_p\pwseries{1+p\mathbf Z_p}$-algebra $\mathbb{T}$, we say that a height one prime $\wp$ of $\mathbb{T}$ is an \emph{arithmetic prime} of $\mathbb{T}$ if $\wp$ is the kernel of a $\mathbf Z_p$-algebra homomorphism $\mathbb{T}\rightarrow\overline{\mathbf Q}_p$ such that the composite map \[ 1+p\mathbf Z_p\rightarrow\mathbf Z_p\pwseries{1+p\mathbf Z_p}^\times\rightarrow\mathbb{T}^\times \rightarrow\overline{\mathbf Q}_p^\times \] is given by $\gamma\mapsto\gamma^{k-2}$ on some open subgroup of $1+p\mathbf Z_p$, for some integer $k\geq 2$. We then say that $\wp$ has \emph{weight} $k$. Let $\mathfrak a\subset\mathbb{T}_\Sigma$ be a minimal prime as above. For each $n\geq 1$, let $\mathbf{a}_n\in\mathbb T(\mathfrak a)^\circ$ be the image of $T_n$ under the natural projection $\mathfrak{h}^{\rm ord}_{N(\Sigma)}\rightarrow\mathbb T(\mathfrak a)^\circ$, and form the $q$-expansion \[ \mathbf f(\mathfrak a)=\sum_{n\geq 1}\mathbf{a}_nq^n\in\mathbb T(\mathfrak a)^\circ\pwseries{q}. \] By \cite[Thm.~1.2]{hida86b}, if $\wp$ is an arithmetic prime of $\mathbb T(\mathfrak a)$ of weight $k$, then there is a unique height one prime $\wp'$ of $\mathbb{T}(\mathfrak{a})^\circ$ such that \[ \mathbf{f}_\wp(\mathfrak a):=\sum_{n\geq 1}(\mathbf{a}_n\;{\rm mod}\;\wp')q^n\in{\mathcal O}_\wp^\circ\pwseries{q} \] is the $q$-expansion of a $p$-ordinary eigenform $f_\wp$ of weight $k$ and tame level $N(\mathfrak{a})$, where ${\mathcal O}_\wp^\circ:=\mathbb{T}(\mathfrak{a})^\circ/\wp'$ (see \cite[Prop.~2.5.6]{EPW}). \section{Big Heegner points}\label{sec:heegner} Fix an integer $N\geq 1$ admitting a factorization $N=N^+N^-$ with $(N^+,N^-)=1$ and $N^-$ equal to the square-free product of an \emph{odd} number of primes, and fix a prime $p\nmid 6N$. Also, let $K$ be an imaginary quadratic field of discriminant $-D_K<0$ prime to $Np$ and such that every prime factor of $N^+$ (resp. $N^-$) splits (resp. is inert) in $K$.
In this section we describe a mild extension of the construction in \cite{LV-MM} (following \cite{howard-invmath}) of big Heegner points attached to $K$. Indeed, using the results from the preceding section, we can extend the constructions of \emph{loc.cit.} to branches of the Hida family which are \emph{not necessarily} primitive (in the sense of \cite[\S{1}]{hida86b}). The availability of such an extension is fundamental for the purposes of this paper. \subsection{Definite Shimura curves}\label{subsec:Sh} Let $B$ be the definite quaternion algebra over $\mathbf Q$ of discriminant $N^-$. We fix once and for all an embedding of $\mathbf Q$-algebras $K\hookrightarrow B$, and use it to identify $K$ with a subalgebra of $B$. Denote by $z\mapsto\overline{z}$ the nontrivial automorphism of $K$, and choose a basis $\{1,j\}$ of $B$ over $K$ with \begin{itemize} \item $j^2=\beta\in\mathbf Q^\times$ with $\beta<0$, \item $jt=\bar tj$ for all $t\in K$, \item $\beta\in (\mathbf Z_q^\times)^2$ for $q\mid pN^+$, and $\beta\in\mathbf Z_q^\times$ for $q\mid D_K$. \end{itemize} Fix a square-root $\delta_K=\sqrt{-D_K}$, and define $\boldsymbol{\theta}\in K$ by \[ \boldsymbol{\theta}:=\frac{D'+\delta_K}{2},\quad\textrm{where}\; D':=\left\{ \begin{array}{ll} D_K &\textrm{if $2\nmid D_K$,}\\ D_K/2 &\textrm{if $2\vert D_K$.} \end{array} \right. \] For each prime $q\mid pN^+$, define $i_q:B_q:=B\otimes_\mathbf Q\Q_q \simeq \M_2(\mathbf Q_q)$ by \[ i_q(\boldsymbol{\theta})=\mat{\mathrm{Tr}(\boldsymbol{\theta})}{-\mathrm{Nm}(\boldsymbol{\theta})}10; \quad\quad i_q(j)=\sqrt\beta\mat{-1}{\mathrm{Tr}(\boldsymbol{\theta})}01 \] where $\mathrm{Tr}$ and $\mathrm{Nm}$ are the reduced trace and reduced norm maps on $B$, respectively. On the other hand, for each prime $q\nmid Np$ we fix any isomorphism $i_q:B_q\simeq \M_2(\mathbf Q_q)$ with the property that $i_q(\mathcal O_K\otimes_\mathbf Z\Z_q)\subset\M_2(\mathbf Z_q)$.
For each $r\geq 0$, let $R_{N^+,r}$ be the Eichler order of $B$ of level $N^+p^r$ with respect to the above isomorphisms $\{i_q:B_q\simeq{\rm M}_2(\mathbf Q_q)\}_{q\nmid N^-}$, and let $U_{N^+,r}$ be the compact open subgroup of $\widehat{R}_{N^+,r}^\times$ defined by \[ U_{N^+,r}:=\left\{(x_q)_q\in\widehat{R}_{N^+,r}^\times\;\;\vert\;\;i_p(x_p)\equiv\mat 1*0*\pmod{p^r}\right\}. \] Consider the double coset spaces \begin{equation}\label{def:gross-curve} \widetilde X_{N^+,r}=B^\times\big\backslash\bigl(\Hom_\mathbf Q(K,B)\times\widehat{B}^\times\bigr)\big/U_{N^+,r} \end{equation} where $b\in B^\times$ acts on $(\Psi,g)\in\Hom_\mathbf Q(K,B)\times\widehat B^\times$ by \[ b\cdot(\Psi,g)=(b\Psi b^{-1},bg) \] and $U_{N^+,r}$ acts on $\widehat{B}^\times$ by right multiplication. As explained in \cite[\S{2.1}]{LV-MM}, $\widetilde X_{N^+,r}$ is naturally identified with the set of $K$-rational points of certain genus zero curves defined over $\mathbf Q$. Nonetheless, there is a nontrivial Galois action on $\widetilde X_{N^+,r}$ defined as follows: if $\sigma\in{\rm Gal}(K^{\rm ab}/K)$ and $P\in\widetilde X_{N^+,r}$ is the class of a pair $(\Psi,g)$, then \[ \sigma P:=[(\Psi,\widehat{\Psi}(a)g)] \] where $a\in K^\times\backslash\widehat{K}^\times$ is chosen so that ${\rm rec}_K(a)=\sigma$. It will be convenient to extend this action to an action of $G_K:={\rm Gal}(\overline{\mathbf Q}/K)$ by letting $\sigma\in G_K$ act on $\widetilde X_{N^+,r}$ as $\sigma\vert_{K^{\rm ab}}$. Since $\Gal(K^\mathrm{ab}/K)$ is abelian, we will set $P^\sigma:=\sigma P$ for ease of notation. Finally, we note that $\widetilde X_{N^+,r}$ is also equipped with standard actions of $U_p$, Hecke operators $T_\ell$ for $\ell\nmid Np$, and diamond operators $\langle d \rangle$ for $d\in(\mathbf Z/p^r\mathbf Z)^\times$ (see \cite[\S{2.4}]{LV-MM}, for example).
\subsection{Compatible systems of Heegner points}\label{subsec:construct} For each integer $c\geq 1$, let ${\mathcal O}_c=\mathbf Z+c{\mathcal O}_K$ be the order of $K$ of conductor $c$. \begin{definition} We say that $P\in\widetilde X_{N^+,r}$ is a \emph{Heegner point of conductor $c$} if $P$ is the class of a pair $(\Psi,g)$ with \[ \Psi({\mathcal O}_c)=\Psi(K)\cap(B\cap g\widehat{R}_{N^+,r}g^{-1}) \] and \[ \Psi_p(({\mathcal O}_c\otimes\mathbf Z_p)^\times\cap(1+p^r{\mathcal O}_K\otimes\mathbf Z_p)^\times) =\Psi_p(({\mathcal O}_c\otimes\mathbf Z_p)^\times)\cap g_pU_{N^+,r,p}g_p^{-1} \] where $U_{N^+,r,p}$ denotes the $p$-component of $U_{N^+,r}$. \end{definition} Fix a decomposition $N^+{\mathcal O}_K=\mathfrak{N}^+\overline{\mathfrak{N}^+}$, and define, for each prime $q\neq p$, \begin{itemize} \item{} $\varsigma_q=1$, if $q\nmid N^+$, \item{} $\varsigma_q=\delta_K^{-1}\begin{pmatrix}\boldsymbol{\theta} & \overline{\boldsymbol{\theta}} \\ 1 & 1 \end{pmatrix} \in{\rm GL}_2(K_{\mathfrak{q}})={\rm GL}_2(\mathbf Q_q)$, if $q=\mathfrak{q}\overline{\mathfrak{q}}$ splits with $\mathfrak{q}\vert\mathfrak{N}^+$, \end{itemize} and for each $s\geq 0$, \begin{itemize} \item{} $\varsigma_p^{(s)}=\begin{pmatrix}\boldsymbol{\theta}&-1\\1&0\end{pmatrix}\begin{pmatrix}p^s&0\\0&1\end{pmatrix} \in{\rm GL}_2(K_{\mathfrak{p}})={\rm GL}_2(\mathbf Q_p)$, if $p=\mathfrak{p}\overline{\mathfrak{p}}$ splits in $K$, \item{} $\varsigma_p^{(s)}=\begin{pmatrix}0&1\\-1&0\end{pmatrix}\begin{pmatrix}p^s&0\\0&1\end{pmatrix}$, if $p$ is inert in $K$. \end{itemize} Set $\varsigma^{(s)}:=\varsigma_p^{(s)}\prod_{q\neq p}\varsigma_q\in\widehat{B}^\times$, and let $\imath_K:K\hookrightarrow B$ be the inclusion. For all $n,r\geq 0$, it is easy to see that the point \[ \widetilde{P}_{p^n,r}^{}:=[(\imath_K,\varsigma^{(n+r)})]\in\widetilde X_{N^+,r} \] is a Heegner point of conductor $p^{n+r}$.
Moreover, one can show that the points $\widetilde{P}_{p^n,r}$ enjoy the following properties: \begin{itemize} \item{} \emph{Field of definition}: $\widetilde{P}_{p^n,r}\in H^0(L_{p^n,r},\widetilde X_{N^+,r})$, where $L_{p^n,r}:=H_{p^{n+r}}(\boldsymbol{\mu}_{p^r})$ and $H_{c}$ is the ring class field of $K$ of conductor $c$. \item{} \emph{Galois equivariance}: For all $\sigma\in{\rm Gal}(L_{p^n,r}/H_{p^{n+r}})$, \[ \widetilde{P}_{p^n,r}^\sigma=\langle\vartheta(\sigma)\rangle\cdot\widetilde{P}_{p^n,r} \] where $\vartheta:{\rm Gal}(L_{p^n,r}/H_{p^{n+r}})\rightarrow\mathbf Z_p^\times/\{\pm{1}\}$ is such that $\vartheta^2=\varepsilon_{\rm cyc}$. \item{} \emph{Vertical compatibility}: If $r>1$, then \[ \sum_{\sigma\in{\rm Gal}(L_{p^n,r}/L_{p^{n},r-1})} \widetilde{\alpha}_r(\widetilde{P}_{p^{n},r}^{{\sigma}}) =U_p\cdot\widetilde{P}_{p^{n},r-1} \] where $\widetilde{\alpha}_r:\widetilde X_{N^+,r}\rightarrow\widetilde{X}_{N^+,{r-1}}$ is the map induced by the inclusion $U_{N^+,r}\subset U_{N^+,r-1}$. \item{} \emph{Horizontal compatibility}: If $n>0$, then \[ \sum_{\sigma\in{\rm Gal}(L_{p^n,r}/L_{p^{n-1},r})}\widetilde{P}_{p^{n},r}^{{\sigma}} =U_p\cdot\widetilde{P}_{p^{n-1},r}. \] \end{itemize} (See \cite[Thm.~1.2]{cas-longo} and the references therein.) \begin{remark} Even though it is not reflected in the notation, the points $\widetilde{P}_{p^n,r}$ clearly depend on $N^+$ (and the discriminant $N^-$ of the quaternion algebra $B$). In all the constructions in this paper we will keep $N^-$ fixed, but it will be important to consider different values of $N^+$.
\end{remark} \subsection{Critical character}\label{subsec:crit} Factor the $p$-adic cyclotomic character as \[ \varepsilon_{\rm cyc}=\varepsilon_{\rm tame}\cdot\varepsilon_{\rm wild}: G_\mathbf Q\longrightarrow\mathbf Z_p^\times\simeq\boldsymbol{\mu}_{p-1}\times(1+p\mathbf Z_p) \] and define the \emph{critical character} $\Theta:G_\mathbf Q\rightarrow\mathbf Z_p\pwseries{1+p\mathbf Z_p}^\times$ by \begin{equation}\label{def:crit} \Theta(\sigma):=[\varepsilon^{1/2}_{\rm wild}(\sigma)] \end{equation} where $\varepsilon_{\rm wild}^{1/2}$ is the unique square-root of $\varepsilon_{\rm wild}$ taking values in $1+p\mathbf Z_p$, and $[\cdot]:1+p\mathbf Z_p\rightarrow\mathbf Z_p\pwseries{1+p\mathbf Z_p}^\times$ is the map given by the inclusion as group-like elements. \subsection{Big Heegner points}\label{subsec:bigHP} Recall the Shimura curves $\widetilde X_{N^+,r}$ from Section~\ref{subsec:Sh}, and set \[ \mathfrak{D}_{N^+,r}:=e^{\rm ord}({\rm Div}(\widetilde{X}_{N^+,r})\otimes_{\mathbf Z}\mathbf Z_p); \] by the Jacquet--Langlands correspondence, $\mathfrak{D}_{N^+,r}$ is naturally endowed with an action of the Hecke algebra $\mathbb{T}_{N,r}^{N^-}$. Let $(\mathbb T_{N,r}^{N^-})^\dagger$ be the free $\mathbb T_{N,r}^{N^-}$-module of rank one equipped with the Galois action via the inverse of the critical character $\Theta$, and set \[ \mathfrak{D}_{N^+,r}^\dagger:=\mathfrak{D}_{N^+,r}\otimes_{\mathbb{T}_{N,r}^{N^-}}(\mathbb T_{N,r}^{N^-})^\dagger. \] Let $\widetilde{P}_{p^{n},r}\in\widetilde{X}_{N^+,r}$ be the system of Heegner points of Section~\ref{subsec:construct}, and denote by $\mathcal{P}_{p^{n},r}^{}$ the image of $e^{\rm ord}\widetilde{P}_{p^{n},r}^{}$ in $\mathfrak{D}_{N^+,r}$.
By the Galois equivariance of $\widetilde{P}_{p^{n},r}$ (see \cite[\S{7.1}]{LV-MM}), we have \[ \mathcal{P}_{p^{n},r}^\sigma=\Theta(\sigma)\cdot\mathcal{P}_{p^{n},r} \] for all $\sigma\in{\rm Gal}(L_{p^n,r}/H_{p^{n+r}})$, and hence $\mathcal{P}_{p^{n},r}$ defines an element \begin{equation}\label{eq:n,m-sigma} \mathcal{P}_{p^n,r}\otimes\zeta_r\in H^0(H_{p^{n+r}},\mathfrak{D}_{N^+,r}^\dagger). \end{equation} In the next section we shall see how this system of points, for varying $n$ and $r$, can be used to construct various anticyclotomic $p$-adic $L$-functions. \section{Anticyclotomic $p$-adic $L$-functions}\label{sec:p-adicL} \subsection{Multiplicity one}\label{subsec:periods} Keep the notations introduced in Section~\ref{sec:heegner}. For each integer $k\geq 2$, denote by $L_k(R)$ the set of polynomials of degree less than or equal to $k-2$ with coefficients in a ring $R$, and define \[ \mathfrak{J}_{N^+,r,k}:=e^{\rm ord}H_0(\widetilde{X}_{N^+,r},\mathcal{L}_k(\mathbf Z_p)) \] where $\mathcal{L}_k(\mathbf Z_p)$ is the local system on $\widetilde{X}_{N^+,r}$ associated with $L_k(\mathbf Z_p)$. Note that $\mathfrak{J}_{N^+,r,k}$ is naturally a module over the Hecke algebra $\mathbb{T}_{N,r,k}^{N^-}$. \begin{theorem}\label{thm:3.1.1} Let $\mathfrak{m}$ be a maximal ideal of $\mathbb{T}^{N^-}_{N,r,k}$ whose residual representation is irreducible and satisfies Assumption~(SU). Then $(\mathfrak{J}_{N^+,r,k})_{\mathfrak{m}}$ is free of rank one over $(\mathbb{T}_{N,r,k}^{N^-})_{\mathfrak{m}}$. In particular, there is a $(\mathbb{T}^{N^-}_{N,r,k})_{\mathfrak{m}}$-module isomorphism \[ (\mathfrak{J}_{N^+,r,k})_{\mathfrak{m}}\overset{\alpha_{N,r,k}}\simeq (\mathbb{T}_{N,r,k}^{N^-})_{\mathfrak{m}}. \] \end{theorem} \begin{proof} If $k=2$ and $r=1$, this follows by combining \cite[Thm.~6.2]{pollack-weston} and [\emph{loc.cit.}, Prop.~6.5]. The general case will be deduced from this case in Section~\ref{subsec:period-families} using Hida theory. 
\end{proof} Let $f\in S_k(\Gamma_{0,1}(N,p^r))$ be an $N^-$-new eigenform whose associated maximal ideal in $\mathbb{T}_{N,r,k}^{N^-}$ is $\mathfrak{m}$. Then $f$ gives rise to a $\mathbf Z_p$-algebra homomorphism $(\mathbb{T}_{N,r,k}^{N^-})_{\mathfrak{m}}\rightarrow{\mathcal O}$, where ${\mathcal O}$ is the ring of integers of a finite extension $F/\mathbf Q_p$ generated by the Hecke eigenvalues of $f$. Composing with a fixed isomorphism $\alpha_{N,r,k}$ as in Theorem~\ref{thm:3.1.1}, we thus obtain the functional \[ \delta_f:(\mathfrak{J}_{N^+,r,k})_{\mathfrak{m}}\longrightarrow{\mathcal O}. \] On the other hand, if $\phi_f\in S_k(\widetilde{X}_{N^+,r})$ is a \emph{$p$-adically normalised} Jacquet--Langlands transfer (in the sense of \cite[\S{4.1}]{ChHs1}) of $f$, then by evaluation $\phi_f$ defines another ${\mathcal O}$-valued functional \[ \phi_f:(\mathfrak{J}_{N^+,r,k})_{\mathfrak{m}}\longrightarrow{\mathcal O}. \] By the multiplicity one theorem, $\phi_f$ and $\delta_f$ differ by a nonzero constant $\lambda_f\in F^\times$, which is easily seen to be a $p$-adic unit. Since both $\phi_f$ and $\delta_f$ are themselves defined up to a $p$-adic unit, we may assume $\phi_f=\delta_f$, as we shall do in the following. If $f$ is in fact a newform, following \cite[\S{2.1}]{pollack-weston} and \cite[\S{4.1}]{ChHs1} we define the \emph{Gross period} \begin{equation}\label{def:period} \Omega_{f,N^-}:=\frac{(f,f)_{\Gamma_0(N)}}{\xi_f(N^+,N^-)} \end{equation} where $\xi_f(N^+,N^-)$ is the self-product of $\phi_f$ with respect to a certain ``intersection'' pairing (see \cite[Eq.(3.9)]{ChHs1}). In [\emph{loc.cit.}, \S{5.4}], it is shown that a certain $p$-adic $L$-function $L_p(f/K)$ normalized by the complex period $\Omega_{f,N^-}$ has vanishing $\mu$-invariant. The preceding description of $\phi_f$ in terms of $\delta_f$ will thus allow us to show that this property is preserved over the Hida family.
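We sketch why $\lambda_f$ is a $p$-adic unit, assuming (as the normalisation of \cite[\S{4.1}]{ChHs1} is designed to ensure) that $\phi_f$ takes values in ${\mathcal O}$ but is not identically zero modulo the maximal ideal $\mathfrak{m}_{\mathcal O}$: since $(\mathfrak{J}_{N^+,r,k})_{\mathfrak{m}}$ is free of rank one over $(\mathbb{T}^{N^-}_{N,r,k})_{\mathfrak{m}}$, we may fix a generator $e$, and then $\delta_f(e)\in{\mathcal O}^\times$, being the image of a unit under a homomorphism of local rings; hence
\[
\lambda_f=\frac{\phi_f(e)}{\delta_f(e)}\in{\mathcal O}\setminus\mathfrak{m}_{\mathcal O}={\mathcal O}^\times.
\]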
\subsection{One-variable $p$-adic $L$-functions} Denote by $\Gamma$ the Galois group of the anticyclotomic $\mathbf Z_p$-extension $K_\infty/K$. For each $n$, let $K_n\subset K_\infty$ be defined by $\Gal(K_n/K)\simeq\mathbf Z/p^n\mathbf Z$ and let $\Gamma_n$ be the subgroup of $\Gamma$ such that $\Gamma/\Gamma_n\simeq\Gal(K_n/K)$. Let $\mathcal{P}_{p^{n+1},r}^{}\otimes\zeta_r\in H^0(H_{p^{n+1+r}},\mathfrak{D}^\dagger_{N^+,r})$ be the Heegner point of conductor $p^{n+1}$, and define \begin{equation}\label{def:Q} \mathcal{Q}_{n,r}^{}:={\rm Cor}_{H_{p^{n+1+r}}/K_n}(\mathcal{P}_{p^{n+1},r}^{}\otimes\zeta_r) \in H^0(K_n,\mathfrak{D}^\dagger_{N^+,r}); \end{equation} with a slight abuse of notation, we will still denote by $\mathcal{Q}_{n,r}^{}$ its image under the natural map $H^0(K_n,\mathfrak{D}^\dagger_{N^+,r})\subset\mathfrak{D}_{N^+,r}\rightarrow\mathfrak{J}_{N^+,r}$ composed with localization at $\mathfrak{m}$, where $\mathfrak{J}_{N^+,r}:=\mathfrak{J}_{N^+,r,2}$. \begin{definition}\label{def:1var} For any open subset $\sigma\Gamma_n$ of $\Gamma$, define \[ \mu_r^{}(\sigma\Gamma_n):=U_p^{-n}\cdot\mathcal{Q}_{n,r} \in(\mathfrak{J}_{N^+,r})_{\mathfrak{m}}. \] \end{definition} \begin{proposition} The rule $\mu_r$ is a measure on $\Gamma$. \end{proposition} \begin{proof} This follows immediately from the ``horizontal compatibility'' of Heegner points. \end{proof} \subsection{Gross periods in Hida families}\label{subsec:period-families} Keep the notations of Section~\ref{subsec:periods}, and let \[ (\mathfrak{J}_{N^+})_{\mathfrak{m}}:=\varprojlim_r(\mathfrak{J}_{N^+,r})_{\mathfrak{m}} \] which is naturally equipped with an action of the big Hecke algebra $\mathbb{T}_N^{N^-}=\varprojlim_r\mathbb{T}^{N^-}_{N,r}$. \begin{theorem}\label{thm:3.3.1} Let $\mathfrak{m}$ be a maximal ideal of $\mathbb{T}^{N^-}_{N}$ whose residual representation is irreducible and satisfies Assumption~(SU).
Then $(\mathfrak{J}_{N^+})_{\mathfrak{m}}$ is free of rank one over $(\mathbb{T}_{N}^{N^-})_{\mathfrak{m}}$. In particular, there is a $(\mathbb{T}^{N^-}_{N})_{\mathfrak{m}}$-module isomorphism \[ (\mathfrak{J}_{N^+})_{\mathfrak{m}}\overset{\alpha_{N}}\simeq(\mathbb{T}_{N}^{N^-})_{\mathfrak{m}}. \] \end{theorem} \begin{proof} As in \cite[Prop.~3.3.1]{EPW}. Note that the version of Hida's control theorem in our context is provided by \cite[Thm.~9.4]{hida-annals}. \end{proof} We can now conclude the proof of Theorem~\ref{thm:3.1.1} just as in \cite[\S{3.3}]{EPW}. For the convenience of the reader, we include here the argument. \begin{proof}[Proof of Theorem~\ref{thm:3.1.1}] Let $\wp_{N,r,k}$ be the product of all the arithmetic primes of $\mathbb{T}^{N^-}_N$ of weight $k$ whose associated characters become trivial on $1+p^r\mathbf Z_p$. By \cite[Thm.~9.4]{hida-annals}, we then have \begin{equation}\label{control-H0} (\mathfrak{J}_{N^+})_{\mathfrak{m}}\otimes_{\mathbb{T}^{N^-}_N}\mathbb{T}^{N^-}_N/\wp_{N,r,k} \simeq(\mathfrak{J}_{N^+,r,k})_{\mathfrak{m}_{r,k}} \end{equation} where $\mathfrak{m}_{r,k}$ is the maximal ideal of $\mathbb{T}^{N^-}_{N,r,k}$ induced by $\mathfrak{m}$. Since $(\mathfrak{J}_{N^+})_{\mathfrak{m}}$ is free of rank one over $(\mathbb{T}^{N^-}_N)_{\mathfrak{m}}$ by Theorem~\ref{thm:3.3.1}, it follows that $(\mathfrak{J}_{N^+,r,k})_{\mathfrak{m}_{r,k}}$ is free of rank one over $(\mathbb{T}^{N^-}_{N,r,k})_{\mathfrak{m}_{r,k}}\simeq(\mathbb{T}^{N^-}_N)_{\mathfrak{m}}/\wp_{N,r,k}$, as was to be shown. \end{proof} \begin{remark}\label{remark3.5} In the above proofs we made crucial use of \cite[Thm.~9.4]{hida-annals}, which is stated in the context of totally definite quaternion algebras unramified at all finite places, since this is the only relevant case for the study of Hilbert modular forms over totally real number fields of even degree. However, the proofs immediately extend to the (simpler) situation of definite quaternion algebras over $\mathbf Q$.
\end{remark} \subsection{Two-variable $p$-adic $L$-functions}\label{subsec:2varL} By the ``vertical compatibility'' satisfied by Heegner points, the points \[ U_p^{-r}\cdot\mathcal{Q}_{n,r}\in(\mathfrak{J}_{N^+,r})_{\mathfrak{m}} \] are compatible for varying $r$, thus defining an element \[ \mathcal{Q}_n^{}:=\varprojlim_r U_p^{-r}\cdot\mathcal{Q}_{n,r}^{} \in(\mathfrak{J}_{N^+})_{\mathfrak{m}}. \] \begin{definition}\label{def:2var} For any open subset $\sigma\Gamma_n$ of $\Gamma$, define \[ \mu^{}(\sigma\Gamma_n):=U_p^{-n}\cdot\mathcal{Q}_{n}^{} \in(\mathfrak{J}_{N^+})_{\mathfrak{m}}. \] \end{definition} \begin{proposition} The rule $\mu$ is a measure on $\Gamma$. \end{proposition} \begin{proof} This follows immediately from the ``horizontal compatibility'' of Heegner points. \end{proof} Upon the choice of an isomorphism $\alpha_N$ as in Theorem~\ref{thm:3.3.1}, we may regard $\mu$ as an element \[ \mathcal{L}(\mathfrak{m},N)\in(\mathbb{T}^{N^-}_N)_{\mathfrak{m}}\hat{\otimes}_{\mathbf Z_p}\mathbf Z_p\pwseries{\Gamma}. \] Denoting by $\mathcal{L}(\mathfrak{m},N)^*$ the image of $\mathcal{L}(\mathfrak{m},N)$ under the involution induced by $\gamma\mapsto\gamma^{-1}$ on group-like elements, we set $L(\mathfrak{m},N):=\mathcal{L}(\mathfrak{m},N)\cdot\mathcal{L}(\mathfrak{m},N)^*$, to which we will refer as the \emph{two-variable $p$-adic $L$-function attached to $(\mathbb{T}^{N^-}_N)_{\mathfrak{m}}$}. \subsection{Two-variable $p$-adic $L$-functions on branches of the Hida family}\label{subsec:2var-branch} Let $\mathbf{F}$ be a finite field of characteristic $p$, let $\bar{\rho}:G_\mathbf Q\rightarrow\GL_2(\mathbf{F})$ be an odd irreducible (and hence modular!) 
Galois representation satisfying Assumption~(SU), and let $\mathbb{T}_\Sigma$ be the universal ordinary Hecke algebra \begin{equation}\label{eq:2.4.2} \mathbb{T}_\Sigma:=(\mathbb{T}_{N(\Sigma)}')_{\mathfrak{m}}\simeq(\mathbb{T}_{N(\Sigma)}^{N^-})_{\mathfrak{n}} \end{equation} associated with $\bar{\rho}$ and a finite set of primes $\Sigma$ as described in Section~\ref{subsec:residual}. \begin{remark}\label{rem:split} Note that $N^-\vert N(\bar{\rho})$ by hypothesis. Throughout the following, it will be assumed that $N^-$ contains \emph{all} prime factors of $N(\bar{\rho})$ which are inert in $K$ and at which $\bar\rho$ is ramified, and that every prime factor of $N(\Sigma)/N^-$ splits in $K$. In particular, every prime $\ell\in\Sigma$ splits in $K$. \end{remark} The construction of the preceding section produces a two-variable $p$-adic $L$-function \[ L(\mathfrak{n},N(\Sigma))\in(\mathbb{T}_{N(\Sigma)}^{N^-})_{\mathfrak{n}}\hat{\otimes}_{\mathbf Z_p}\mathbf Z_p\pwseries{\Gamma} \] which combined with the isomorphism $(\ref{eq:2.4.2})$ yields an element \[ L_\Sigma(\bar{\rho})\in\mathbb{T}_\Sigma\hat{\otimes}_{\mathbf Z_p}\mathbf Z_p\pwseries{\Gamma}. \] If $\mathfrak{a}$ is a minimal prime of $\mathbb{T}_\Sigma$, we thus obtain an element \[ L_\Sigma(\bar{\rho},\mathfrak{a})\in\mathbb{T}(\mathfrak{a})^\circ\hat{\otimes}_{\mathbf Z_p}\mathbf Z_p\pwseries{\Gamma} \] by reducing $L_\Sigma(\bar{\rho})$ mod $\mathfrak{a}$ (see \S\ref{sec:branches}). 
On the other hand, if we let $\mathfrak{m}$ denote the inverse image of the maximal ideal of $\mathbb{T}(\mathfrak{a})^\circ$ under the composite surjection \begin{equation}\label{eq:3.7} \mathbb{T}^{N^-}_{N(\mathfrak{a})}\longrightarrow\mathbb{T}_{N(\mathfrak{a})}^{\rm new} \longrightarrow\mathbb{T}_{N(\mathfrak{a})}^{\rm new}/\mathfrak{a}'=\mathbb{T}(\mathfrak{a})^\circ, \end{equation} the construction of the preceding section yields an $L$-function \[ L(\mathfrak{m},N(\mathfrak{a}))\in(\mathbb{T}_{N(\mathfrak{a})}^{N^-})_{\mathfrak{m}}\hat{\otimes}_{\mathbf Z_p}\mathbf Z_p\pwseries{\Gamma} \] giving rise, via $(\ref{eq:3.7})$, to a second element \[ L(\bar\rho,\mathfrak{a})\in\mathbb T(\mathfrak a)^\circ\hat{\otimes}_{\mathbf Z_p}\mathbf Z_p\pwseries{\Gamma}. \] It is natural to compare $L_\Sigma(\bar{\rho},\mathfrak{a})$ and $L(\bar{\rho},\mathfrak{a})$, a task that is carried out in the next section and which provides the key to understanding the variation of \emph{analytic} Iwasawa invariants.
\subsection{Comparison}\label{subsec:comparison} Write $\Sigma=\{\ell_1,\dots,\ell_n\}$ and for each $\ell=\ell_i\in\Sigma$, let $e_\ell$ be the valuation of $N(\Sigma)/N(\mathfrak a)$ at $\ell$, and define the reciprocal Euler factor $E_{\ell}(\mathfrak{a},X)\in\mathbb T(\mathfrak a)^\circ[X]$ by \[ E_{\ell}(\mathfrak{a},X):= \begin{cases} 1& \text{ if $e_\ell=0$}\\ 1-(T_{\ell}\;{\rm mod}\;{\mathfrak{a}'})\Theta^{-1}(\ell)X&\text{ if $e_\ell=1$}\\ 1-(T_{\ell}\;{\rm mod}\;{\mathfrak{a}'})\Theta^{-1}(\ell)X +\ell X^2&\text{ if $e_\ell=2$.} \end{cases} \] Also, writing $\ell=\mathfrak{l}\bar{\mathfrak{l}}$ (recall from Remark~\ref{rem:split} that every $\ell\in\Sigma$ splits in $K$), define $E_\ell(\mathfrak{a})\in\mathbb T(\mathfrak a)^\circ\hat{\otimes}_{\mathbf Z_p}\mathbf Z_p\pwseries{\Gamma}$ by \begin{equation}\label{def:e-a} E_\ell(\mathfrak{a}):=E_\ell(\mathfrak{a},\ell^{-1}\gamma_{\mathfrak{l}}) \cdot E_\ell(\mathfrak{a},\ell^{-1}\gamma_{\bar{\mathfrak{l}}}) \end{equation} where $\gamma_{\mathfrak{l}}$, $\gamma_{\bar{\mathfrak{l}}}$ are arithmetic Frobenii at $\mathfrak{l}$, $\bar{\mathfrak{l}}$ in $\Gamma$, respectively, and put $E_\Sigma(\mathfrak a):=\prod_{\ell\in\Sigma}E_{\ell}(\mathfrak a)$. Recall that $N^-\vert N(\mathfrak{a})\vert N(\Sigma)$ and set \[ N(\mathfrak{a})^+:=N(\mathfrak{a})/N^-;\quad\quad N(\Sigma)^+:=N(\Sigma)/N^- \] both of which consist entirely of prime factors which split in $K$. The purpose of this section is to prove the following result. \begin{theorem}\label{thm:3.6.2} There is an isomorphism of $\mathbb{T}(\mathfrak{a})^\circ$-modules \[ \mathbb{T}(\mathfrak{a})^\circ\otimes_{(\mathbb{T}_{N(\Sigma)}^{N^-})_{\mathfrak{n}}}(\mathfrak{J}_{N(\Sigma)^+})_{\mathfrak{n}}\;\simeq\; \mathbb{T}(\mathfrak{a})^\circ\otimes_{(\mathbb{T}_{N(\mathfrak{a})}^{N^-})_{\mathfrak{m}}}(\mathfrak{J}_{N(\mathfrak{a})^+})_{\mathfrak{m}} \] such that the map induced on the corresponding spaces of measures valued in these modules sends $L_\Sigma(\bar\rho,\mathfrak{a})$ to $E_\Sigma(\mathfrak{a})\cdot L(\bar\rho,\mathfrak a)$.
\end{theorem} \begin{proof} The proof follows very closely the constructions and arguments given in \cite[\S{3.8}]{EPW}. Let $r\geq 1$. If $M$ is any positive integer with $(M,pN^-)=1$, and $d'\vert d$ are divisors of $M$, we have degeneracy maps \[ B_{d,d'}:\widetilde X_{M,r}\longrightarrow\widetilde X_{M/d,r} \] induced by $(\Psi,g)\mapsto(\Psi,g\pi_{d'})$, where $\pi_{d'}\in\widehat{B}^\times$ has local component $\smallmat 100{\ell^{\mathrm{val}_\ell(d')}}$ at every prime $\ell\vert d'$ and local component $1$ at all other primes. We thus obtain a map on homology \[ (B_{d,d'})_*:e^{\rm ord}H_0(\widetilde X_{M,r},\mathbf Z_p)\longrightarrow e^{\rm ord}H_0(\widetilde X_{M/d,r},\mathbf Z_p) \] and we may define \begin{equation}\label{eq:eps_r} \epsilon_r:e^{\rm ord}H_0(\widetilde{X}_{N(\Sigma)^+,r},\mathbf Z_p)\longrightarrow e^{\rm ord}H_0(\widetilde{X}_{N(\mathfrak{a})^+,r},\mathbf Z_p) \end{equation} by $\epsilon_r:=\epsilon(\ell_n)\circ\cdots\circ\epsilon(\ell_1)$, where for every $\ell=\ell_i\in\Sigma$ we put \[ \epsilon(\ell):= \begin{cases}1 & \textrm{if $e_\ell=0$}\\ (B_{\ell,1})_*-(B_{\ell,\ell})_*\ell^{-1}T_\ell & \textrm{if $e_\ell=1$}\\ (B_{\ell^2,1})_*-(B_{\ell^2,\ell})_*\ell^{-1}T_\ell+(B_{\ell^2,\ell^2})_*\ell^{-1} \langle\ell\rangle_{N(\mathfrak{a})p} & \textrm{if $e_\ell=2$}. \end{cases} \] As before, let $M$ be a positive integer with $(M,pN^-)=1$ all of whose prime factors split in $K$, and let $\ell\nmid Mp$ be a prime which also splits in $K$. We shall adopt the following simplifying notation for the system of points $\widetilde{P}_{p^n,r}\in\widetilde{X}_{N^+,r}$ constructed in Section~\ref{subsec:construct}: \[ P:=\widetilde{P}_{p^n,r}^{(M)}\in\widetilde{X}_{M,r},\quad P^{(\ell)}:=\widetilde{P}_{p^n,r}^{(M\ell)}\in\widetilde{X}_{M\ell,r},\quad P^{(\ell^2)}:=\widetilde{P}_{p^n,r}^{(M\ell^2)}\in\widetilde{X}_{M\ell^2,r}.
\] It is easy to check that for a suitable factorization $\ell=\mathfrak{l}\bar{\mathfrak{l}}$ we then have the following relations: \begin{itemize} \item $(B_{\ell, 1})_*(P^{(\ell)}) = P$ \item $(B_{\ell,\ell})_*(P^{(\ell)}) = P^{\sigma_{\mathfrak{l}}}$ \item $(B_{\ell^2, 1})_*(P^{(\ell^2)}) = P$ \item $(B_{\ell^2,\ell})_*(P^{(\ell^2)}) = P^{\sigma_{\mathfrak{l}}}$ \item $(B_{\ell^2,\ell^2})_*(P^{(\ell^2)}) = P^{\sigma_{\mathfrak{l}}^2}$ \end{itemize} in $\widetilde{X}_{M,r}$, where $\sigma_{\mathfrak{l}}\in{\rm Gal}(L_{p^n,r}/K)$ is a Frobenius element at $\mathfrak{l}$. Letting $\mathcal{P}$ denote the image of $e^{\rm ord}P$ in $\mathfrak{D}_{M,r}$, and defining $\mathcal{P}^{(\ell)}\in\mathfrak{D}_{M\ell,r}$ and $\mathcal{P}^{(\ell^2)}\in\mathfrak{D}_{M\ell^2,r}$ similarly, it follows that \begin{itemize} \item $(B_{\ell, 1})_*(\mathcal{P}^{(\ell)}\otimes\zeta_r)= \mathcal P\otimes\zeta_r$ \item $(B_{\ell,\ell})_*(\mathcal P^{(\ell)}\otimes\zeta_r)= \mathcal P^{\sigma_{\mathfrak{l}}}\otimes\zeta_r= \Theta^{-1}(\sigma_\mathfrak{l})\cdot(\mathcal P\otimes\zeta_r)^{\sigma_\mathfrak{l}}$ \item $(B_{\ell^2, 1})_*(\mathcal P^{(\ell^2)}\otimes\zeta_r) = \mathcal P\otimes\zeta_r$ \item $(B_{\ell^2,\ell})_*(\mathcal P^{(\ell^2)}\otimes\zeta_r) = \mathcal P^{\sigma_{\mathfrak{l}}}\otimes\zeta_r= \Theta^{-1}(\sigma_\mathfrak{l})\cdot(\mathcal P\otimes\zeta_r)^{\sigma_\mathfrak{l}}$ \item $(B_{\ell^2,\ell^2})_*(\mathcal P^{(\ell^2)}\otimes\zeta_r) =\mathcal P^{\sigma^2_{\mathfrak{l}}}\otimes\zeta_r= \Theta^{-2}(\sigma_\mathfrak{l})\cdot(\mathcal P\otimes\zeta_r)^{\sigma_{\mathfrak{l}}^2}$ \end{itemize} as elements in $\mathfrak{D}_{M,r}^\dagger$. 
Finally, setting $\mathcal{Q}:={\rm Cor}_{H_{p^{n+1+r}}/K_n}(\mathcal{P})\in H^0(K_n,\mathfrak{D}_{M,r}^\dagger)$, and defining $\mathcal{Q}^{(\ell)}\in H^0(K_n,\mathfrak{D}_{M\ell,r}^\dagger)$ and $\mathcal{Q}^{(\ell^2)}\in H^0(K_n,\mathfrak{D}_{M\ell^2,r}^\dagger)$ similarly, we see that \begin{itemize} \item $(B_{\ell, 1})_*(\mathcal{Q}^{(\ell)})=\mathcal Q$ \item $(B_{\ell,\ell})_*(\mathcal{Q}^{(\ell)})= \Theta^{-1}(\sigma_{\mathfrak{l}})\cdot\mathcal Q^{\sigma_{\mathfrak{l}}}$ \item $(B_{\ell^2, 1})_*(\mathcal Q^{(\ell^2)})=\mathcal Q$ \item $(B_{\ell^2,\ell})_*(\mathcal Q^{(\ell^2)})=\Theta^{-1}(\sigma_\mathfrak{l})\cdot\mathcal Q^{\sigma_{\mathfrak{l}}}$ \item $(B_{\ell^2,\ell^2})_*(\mathcal Q^{(\ell^2)})=\Theta^{-2}(\sigma_\mathfrak{l})\cdot\mathcal Q^{\sigma^2_{\mathfrak{l}}}$ \end{itemize} in $H^0(K_n,\mathfrak{D}_{M,r}^\dagger)$. Each of these equalities is checked by an explicit calculation. For example, for the second one: \begin{align*} (B_{\ell,\ell})_*(\mathcal Q^{(\ell)}) &=(B_{\ell,\ell})_*\left({\rm Cor}_{H_{p^{n+1+r}}/K_n}(\mathcal P^{(\ell)}\otimes\zeta_r)\right)\\ &=(B_{\ell,\ell})_*\left(\Bigl(\sum_{\sigma\in{\rm Gal}(H_{p^{n+1+r}}/K_n)}\Theta(\tilde{\sigma}^{-1})\cdot (\mathcal P^{(\ell)})^{\tilde\sigma}\Bigr)\otimes\zeta_r\right)\\ &=\sum_{\sigma\in{\rm Gal}(H_{p^{n+1+r}}/K_n)}\Theta(\tilde\sigma^{-1}) \cdot(B_{\ell,\ell})_*((\mathcal P^{(\ell)})^{\tilde{\sigma}}\otimes\zeta_r)\\ &=\sum_{\sigma\in{\rm Gal}(H_{p^{n+1+r}}/K_n)}\Theta(\tilde\sigma^{-1}) \Theta^{-1}(\sigma_\mathfrak{l})\cdot(\mathcal P^{\tilde{\sigma}}\otimes\zeta_r)^{\sigma_{\mathfrak{l}}}\\ &=\Theta^{-1}(\sigma_\mathfrak{l})\cdot \mathcal Q^{\sigma_\mathfrak{l}}. \end{align*} Now let $\mathcal{Q}_{n,r}\in\mathfrak{J}_{N(\Sigma)^+,r}$ be as in $(\ref{def:Q})$ with $N=N(\Sigma)$. 
Using the above formulae, we easily see that for any finite order character $\chi$ of $\Gamma$ of conductor $p^n$, the effect of $\epsilon_r$ on the element $\sum_{\sigma\in\Gamma/\Gamma_n}\chi(\sigma)\mathcal{Q}_{n,r}^\sigma$ is given by multiplication by \[ \prod_{\ell_i\colon e_{\ell_i}=1} (1-(\chi\Theta)^{-1}(\sigma_{\mathfrak{l}_i})\ell_i^{-1}T_{\ell_i}) \prod_{\ell_i\colon e_{\ell_i}=2}(1-(\chi\Theta)^{-1}(\sigma_{\mathfrak{l}_i})\ell_i^{-1}T_{\ell_i} +(\chi\Theta)^{-2}(\sigma_{\mathfrak{l}_i})\ell_i^{-1}\langle\ell_i\rangle_{N(\mathfrak{a})p}). \] Similarly, we see that $\epsilon_r$ has the effect of multiplying the element $\sum_{\sigma\in\Gamma/\Gamma_n}\chi^{-1}(\sigma)\mathcal{Q}_{n,r}^\sigma$ by \[ \prod_{\ell_i\colon e_{\ell_i}=1} (1-(\chi^{-1}\Theta)^{-1}(\sigma_{\mathfrak{l}_i})\ell_i^{-1}T_{\ell_i}) \prod_{\ell_i\colon e_{\ell_i}=2}(1-(\chi^{-1}\Theta)^{-1}(\sigma_{\mathfrak{l}_i})\ell_i^{-1}T_{\ell_i} +(\chi^{-1}\Theta)^{-2}(\sigma_{\mathfrak{l}_i})\ell_i^{-1}\langle\ell_i\rangle_{N(\mathfrak{a})p}). \] Hence, using the relations \[ \chi(\sigma_{\bar{\mathfrak{l}}_i})=\chi^{-1}(\sigma_{\mathfrak{l}_i}); \quad\quad\Theta(\sigma_{\mathfrak{l}_i})=\Theta(\sigma_{\bar{\mathfrak{l}}_i})=\theta(\ell_i); \quad\quad\theta^2(\ell_i)=\langle\ell_i\rangle_{N(\mathfrak{a})p} \] it follows that the effect of $\epsilon_r$ on the product of the above two elements is given by multiplication by \[ \prod_{\mathfrak{l}_i\mid \ell_i\colon e_{\ell_i}=1}(1-\chi(\sigma_{\mathfrak{l}_i})\theta^{-1}(\ell_i)\ell_i^{-1}T_{\ell_i}) \prod_{\mathfrak{l}_i\mid \ell_i\colon e_{\ell_i}=2}(1-\chi(\sigma_{\mathfrak{l}_i})\theta^{-1}(\ell_i)\ell_i^{-1}T_{\ell_i}+ \chi^{2}(\sigma_{\mathfrak{l}_i})\ell_i^{-1}). 
\] Taking the limit over $r$, we thus obtain a $\mathbb{T}(\mathfrak{a})^\circ$-linear map \begin{equation}\label{3.11} \mathbb{T}(\mathfrak{a})^\circ\otimes_{(\mathbb{T}_{N(\Sigma)}^{N^-})_{\mathfrak{n}}}(\mathfrak{J}_{N(\Sigma)^+})_{\mathfrak{n}}\longrightarrow \mathbb{T}(\mathfrak{a})^\circ\otimes_{(\mathbb{T}_{N(\mathfrak{a})}^{N^-})_{\mathfrak{m}}}(\mathfrak{J}_{N(\mathfrak{a})^+})_{\mathfrak{m}} \end{equation} whose effect on the corresponding measures is as stated in Theorem~\ref{thm:3.6.2}. Hence to conclude the proof it remains to show that $(\ref{3.11})$ is an isomorphism. By Theorem~\ref{thm:3.3.1}, both the source and the target of this map are free of rank one over $\mathbb{T}(\mathfrak{a})^\circ$, and as in \cite[p.559]{EPW} (using \cite[Thm.~9.4]{hida-annals}), one is reduced to showing the injectivity of the dual map modulo $p$: \begin{align}\label{3.15} H^0(\widetilde{X}_{N(\mathfrak{a})^+,1};\mathbf{F}_p)^{\rm ord}[\mathfrak{m}] &\longrightarrow(\mathbb{T}_{N(\mathfrak{a})}^{N^-}/\mathfrak{m}) \otimes_{\mathbb{T}_{N(\Sigma)}^{N^-}/\mathfrak{n}}(H^0(\widetilde{X}_{N(\mathfrak{a})^+,1};\mathbf{F}_p)^{\rm ord}[\mathfrak{m}'])\\ &\longrightarrow(\mathbb{T}_{N(\mathfrak{a})}^{N^-}/\mathfrak{m}) \otimes_{\mathbb{T}_{N(\Sigma)}^{N^-}/\mathfrak{n}}(H^0(\widetilde{X}_{N(\Sigma)^+,1};\mathbf{F}_p)^{\rm ord}[\mathfrak{m}'])\nonumber\\ &\longrightarrow(\mathbb{T}_{N(\mathfrak{a})}^{N^-}/\mathfrak{m}) \otimes_{\mathbb{T}_{N(\Sigma)}^{N^-}/\mathfrak{n}}(H^0(\widetilde{X}_{N(\Sigma)^+,1};\mathbf{F}_p)^{\rm ord}[\mathfrak{n}]);\nonumber \end{align} or equivalently (by a version of \cite[Lemma~3.8.1]{EPW}), to showing that the composite of the first two arrows in (\ref{3.15}) is injective. 
In turn, the latter injectivity follows from Lemma~\ref{lem:3.8.2}, where the notations are as follows: $M^+$ is any positive integer with $(M^+,pN^-)=1$, $\ell\neq p$ is a prime, $n_\ell=1$ or $2$ according to whether or not $\ell$ divides $M^+$, $N^+:=\ell^{n_\ell}M^+$, and \begin{equation}\label{3.17} \epsilon_\ell^*:H^0(\widetilde{X}_{M^+,1};\mathbf{F}_p)^{\rm ord}[\mathfrak{m}]\longrightarrow(\mathbb{T}^{N^-}_{M^+N^-}/\mathfrak{m}) \otimes_{\mathbb{T}_{N^+N^-}'/\mathfrak{m}'}(H^0(\widetilde{X}_{N^+,1};\mathbf{F}_p)^{\rm ord}[\mathfrak{m}']) \end{equation} is the map defined by \[ \epsilon_\ell^*:= \left\{ \begin{array}{ll} B_{\ell,1}^*-B_{\ell,\ell}^*\ell^{-1}T_\ell & \textrm{if $n_\ell=1$}\\ B_{\ell^2,1}^*-B_{\ell^2,\ell}^*\ell^{-1}T_\ell+B_{\ell^2,\ell^2}^*\ell^{-1} \langle\ell\rangle_{N(\mathfrak{a})p} & \textrm{if $n_\ell=2$}. \end{array} \right. \] \begin{lemma}\label{lem:3.8.2} The map $(\ref{3.17})$ is injective. \end{lemma} \begin{proof}[Proof of Lemma~\ref{lem:3.8.2}] As in the proof of the analogous result \cite[Lemma~3.8.2]{EPW} in the modular curve case, it suffices to show the injectivity of the map \[ (H^0(\widetilde{X}_{M^+,1};\mathbf{F})^{\rm ord}[\mathfrak{m}_{\mathbf{F}}])^{n_\ell+1}\xrightarrow{\;\beta_\ell\;} H^0(\widetilde{X}_{N^+,1};\mathbf{F})^{\rm ord}[\mathfrak{m}_{\mathbf{F}}'] \] defined by \[ \beta_\ell:= \left\{ \begin{array}{ll} B_{\ell,1}^*\pi_1+B_{\ell,\ell}^*\pi_2 & \textrm{if $n_\ell=1$}\\ B_{\ell^2,1}^*\pi_1+B_{\ell^2,\ell}^*\pi_2+B_{\ell^2,\ell^2}^*\pi_3 & \textrm{if $n_\ell=2$}. \end{array} \right. \] But in our quaternionic setting the proof of this injectivity follows from \cite[Lemma~3.26]{SW} for $n_\ell=1$ and [\emph{loc.cit.}, Lemma~3.28] for $n_\ell=2$. \end{proof} Applying Lemma~\ref{lem:3.8.2} inductively to the primes in $\Sigma$ completes the proof of Theorem~\ref{thm:3.6.2}. 
\end{proof} \subsection{Analytic Iwasawa invariants} Upon the choice of an isomorphism \[ \mathbf Z_p\pwseries{\Gamma}\simeq\mathbf Z_p\pwseries{T} \] we may regard the $p$-adic $L$-functions $L_\Sigma(\bar\rho,\mathfrak{a})$ and $L(\bar\rho,\mathfrak{a})$, as well as the Euler factor $E_\Sigma(\mathfrak{a})$, as elements in $\mathbb{T}(\mathfrak{a})^\circ\pwseries{T}$. In this section we apply the main result of the preceding section to study the variation of the Iwasawa invariants attached to the anticyclotomic $p$-adic $L$-functions of $p$-ordinary modular forms. For any power series $f(T)\in R\pwseries{T}$ with coefficients in a ring $R$, recall that the \emph{content} of $f(T)$ is defined to be the ideal $I(f(T))\subseteq R$ generated by the coefficients of $f(T)$. If $\wp$ is a height one prime of $\mathbb{T}_\Sigma$ belonging to the branch $\mathbb T(\mathfrak a)$ (in the sense that $\mathfrak{a}$ is the unique minimal prime of $\mathbb{T}_\Sigma$ contained in $\wp$), we denote by $L(\bar\rho,\mathfrak a)(\wp)$ the element of $\mathcal O_\wp\pwseries{\Gamma}$ obtained from $L(\bar\rho,\mathfrak a)$ via reduction modulo $\wp$. In particular, we note that $L(\bar\rho,\mathfrak{a})(\wp)$ has unit content if and only if its $\mu$-invariant vanishes. \begin{theorem}\label{3.7.5} The following are equivalent: \begin{enumerate} \item $\mu(L(\bar\rho,\mathfrak a)(\wp))=0$ for some newform $f_\wp$ in $\mathcal H(\bar\rho)$. \item $\mu(L(\bar\rho,\mathfrak a)(\wp))=0$ for every newform $f_\wp$ in $\mathcal H(\bar\rho)$. \item $L(\bar{\rho},\mathfrak{a})$ has unit content for some irreducible component $\mathbb{T}(\mathfrak{a})$ of $\mathcal{H}(\bar\rho)$. \item $L(\bar{\rho},\mathfrak{a})$ has unit content for every irreducible component $\mathbb{T}(\mathfrak{a})$ of $\mathcal{H}(\bar\rho)$. 
\end{enumerate} \end{theorem} \begin{proof} The argument in \cite[Thm.~3.7.5]{EPW} applies verbatim, replacing the appeal to [\emph{loc.cit.}, Cor.~3.6.3] by our Theorem~\ref{thm:3.6.2} above. \end{proof} When any of the conditions in Theorem~\ref{3.7.5} holds, we shall write \[ \mu^\mathrm{an}(\bar\rho)=0. \] For a power series $f(T)$ with unit content and coefficients in a local ring $R$, recall that the $\lambda$-invariant $\lambda(f(T))$ is defined to be the smallest degree in which $f(T)$ has a unit coefficient. \begin{theorem}\label{3.7.7} Assume that $\mu^\mathrm{an}(\bar\rho)=0$. \begin{enumerate} \item Let $\mathbb T(\mathfrak a)$ be an irreducible component of $\mathcal{H}(\bar\rho)$. As $\wp$ varies over the arithmetic primes of $\mathbb T(\mathfrak a)$, the $\lambda$-invariant $\lambda(L(\bar\rho,\mathfrak a)(\wp))$ takes on a constant value, denoted $\lambda^{\rm an}(\bar\rho,\mathfrak{a})$. \item For any two irreducible components $\mathbb T(\mathfrak a_1), \mathbb T(\mathfrak a_2)$ of $\mathcal{H}(\bar\rho)$, we have that \[ \lambda^\mathrm{an}(\bar\rho,\mathfrak a_1)-\lambda^\mathrm{an}(\bar\rho,\mathfrak a_2)= \sum_{\ell\neq p}\bigl(e_\ell(\mathfrak a_2)-e_\ell(\mathfrak a_1)\bigr) \] where $e_\ell(\mathfrak a)=\lambda(E_\ell(\mathfrak{a}))$. \end{enumerate} \end{theorem} \begin{proof} The first part follows immediately from the definitions. For the second part, the argument in \cite[Thm.~3.7.7]{EPW} applies verbatim, replacing their appeal to [\emph{loc.cit.}, Cor.~3.6.3] by our Theorem~\ref{thm:3.6.2} above. \end{proof} By Theorem~\ref{3.7.5} and Theorem~\ref{3.7.7}, the Iwasawa invariants of $L(\bar\rho,\mathfrak a)(\wp)$ are well-behaved as $\wp$ varies. However, for the applications of these results to the Iwasawa main conjecture it is of course necessary to relate $L(\bar\rho,\mathfrak a)(\wp)$ to $p$-adic $L$-functions defined by the interpolation of special values of $L$-functions. This question was addressed in \cite{cas-longo}, as we now recall. 
\begin{theorem}\label{thm:3.4.3} If $\wp$ is the arithmetic prime of $\mathbb T(\mathfrak a)$ corresponding to a $p$-ordinary $p$-stabilized newform $f_\wp$ of weight $k\geq 2$ and trivial nebentypus, then \[ L(\bar\rho,\mathfrak a)(\wp)=L_p(f_\wp/K) \] where $L_p(f_{\wp}/K)$ is the $p$-adic $L$-function of Chida--Hsieh \cite{ChHs1}. In particular, if $\chi:\Gamma\rightarrow\mathbf C_p^\times$ is the $p$-adic avatar of an anticyclotomic Hecke character of $K$ of infinity type $(m,-m)$ with $-k/2<m<k/2$, then $L(\bar\rho,\mathfrak a)(\wp)$ interpolates the central critical values \[ \frac{L(f_\wp/K,\chi,k/2)}{\Omega_{f_\wp,N^-}} \] as $\chi$ varies, where $\Omega_{f_\wp,N^-}$ is the complex period $(\ref{def:period})$. \end{theorem} \begin{proof} This is a reformulation of the main result of \cite{cas-longo}. (Note that the constant $\lambda_\wp\in F_\wp^\times$ in \cite[Thm.~4.6]{cas-longo} is not needed here, since the specialization map of [\emph{loc.cit.}, \S{3.1}] is being replaced by the map $(\mathfrak{J}_{N^+})_{\mathfrak{m}}\rightarrow(\mathfrak{J}_{N^+,r,k})_{\mathfrak{m}_{r,k}}$ induced by the isomorphism $(\ref{control-H0})$, which preserves integrality.) \end{proof} \begin{corollary}\label{cor:lambda-an} Let $f_1$, $f_2\in\mathcal{H}(\bar\rho)$ be newforms with trivial nebentypus lying in the branches $\mathbb{T}(\mathfrak{a}_1)$, $\mathbb{T}(\mathfrak{a}_2)$, respectively. Then $\mu^{\rm an}(\bar\rho)=0$ and \[ \lambda(L_p(f_1/K))-\lambda(L_p(f_2/K))= \sum_{\ell\neq p}\bigl(e_\ell(\mathfrak{a}_2)-e_\ell(\mathfrak{a}_1)\bigr) \] where $e_\ell(\mathfrak{a}_j)=\lambda(E_\ell(\mathfrak{a}_j))$. \end{corollary} \begin{proof} By \cite[Thm.~5.7]{ChHs1} (extending Vatsal's result \cite{Vat1} to higher weights), if $f\in\mathcal{H}(\bar\rho)$ has weight $k\leq p+1$ and trivial nebentypus, then $\mu(L_p(f/K))=0$. By Theorem~\ref{3.7.5} and Theorem~\ref{thm:3.4.3}, this implies $\mu^{\rm an}(\bar\rho)=0$. 
The result thus follows from Theorem~\ref{3.7.7}, using again Theorem~\ref{thm:3.4.3} to replace $\lambda^{\rm an}(\bar\rho,\mathfrak{a}_j)$ by $\lambda(L_p(f_j/K))$. \end{proof} \section{Anticyclotomic Selmer groups}\label{sec:Selmer} We continue with the notation of the previous sections. In particular, $\bar\rho:G_\mathbf Q\rightarrow{\rm GL}_2(\mathbf{F})$ is an odd irreducible Galois representation satisfying (SU), $\mathcal{H}(\bar\rho)$ is the associated Hida family, and $\Sigma$ is a finite set of primes split in the imaginary quadratic field $K$. For each $f\in\mathcal{H}(\bar\rho)$, let $V_f$ denote the self-dual Tate twist of the $p$-adic Galois representation associated to $f$, fix an ${\mathcal O}$-stable lattice $T_f\subset V_f$, and set $A_f:=V_f/T_f$. Since $f$ is $p$-ordinary, there is a unique one-dimensional $G_{\mathbf Q_p}$-invariant subspace $F_p^+V_f\subset V_f$ where the inertia group at $p$ acts via $\varepsilon_{\rm cyc}^{k/2}\psi$, with $\psi$ of finite order. Let $F_p^+A_f$ be the image of $F_p^+V_f$ in $A_f$, and define the \emph{minimal Selmer group} of $f$ by \begin{equation}\label{def:Sel-min} {\rm Sel}(K_\infty,f):=\ker\left\{H^1(K_\infty,A_f)\longrightarrow \prod_{w\nmid p}H^1(K_{\infty,w},A_f)\times\prod_{w\vert p}H^1(K_{\infty,w},F_p^-A_f)\right\}\nonumber \end{equation} where $w$ runs over the places of $K_\infty$ and we set $F_p^-A_f:=A_f/F_p^+A_f$. It is well-known that ${\rm Sel}(K_\infty,f)$ is cofinitely generated over $\Lambda$. When it is also $\Lambda$-cotorsion, we define the $\mu$-invariant $\mu({\rm Sel}(K_\infty,f))$ (resp. $\lambda$-invariant $\lambda({\rm Sel}(K_\infty,f))$) to be the largest power of $\varpi$ dividing (resp. the number of zeros of) the characteristic power series of the Pontryagin dual of ${\rm Sel}(K_\infty,f)$. The same remarks and definitions apply to $\mathfrak{Sel}(K_\infty,f)$. 
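Concretely, these invariants can be read off from a Weierstrass factorization; we record this standard fact for the reader's convenience. After fixing a topological generator of $\Gamma$, so that $\Lambda\simeq\mathcal{O}\pwseries{T}$, the Weierstrass preparation theorem yields a factorization of any nonzero $g\in\Lambda$ as
\[
g(T)=\varpi^{\mu}\,u(T)\bigl(T^{\lambda}+c_{\lambda-1}T^{\lambda-1}+\cdots+c_{0}\bigr),
\]
with $u(T)\in\Lambda^{\times}$ a unit and $c_i\equiv 0\pmod{\varpi}$ for all $i$. When $g$ is a characteristic power series of the Pontryagin dual of ${\rm Sel}(K_\infty,f)$, the integers $\mu$ and $\lambda$ appearing in this factorization are precisely the invariants $\mu({\rm Sel}(K_\infty,f))$ and $\lambda({\rm Sel}(K_\infty,f))$ just defined.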
A distinguishing feature of the anticyclotomic setting (in comparison with cyclotomic Iwasawa theory) is the presence of primes which split infinitely in the corresponding $\mathbf Z_p$-extension. Indeed, being inert in $K$, all primes $\ell\vert N^-$ are infinitely split in $K_\infty/K$. As a result, the above Selmer group differs in general from the \emph{Greenberg Selmer group} of $f$, which is defined by \begin{equation}\label{def:Sel-Gr} \mathfrak{Sel}(K_\infty,f):=\ker\left\{H^1(K_\infty,A_f)\longrightarrow \prod_{w\nmid p}H^1(I_{\infty,w},A_f)\times\prod_{w\vert p}H^1(K_{\infty,w},F_p^-A_f)\right\}\nonumber \end{equation} where $I_{\infty,w}\subset G_{K_\infty}$ denotes the inertia group at $w$. If $S$ is a finite set of primes in $K$, we let ${\rm Sel}^S(K_\infty,f)$ and $\mathfrak{Sel}^S(K_\infty,f)$ be the ``$S$-primitive'' Selmer groups defined as above by omitting the local conditions at the primes in $S$ (except those above $p$, when any such prime is in $S$). Moreover, if $S$ consists of the primes dividing a rational integer $M$, we replace the superscript $S$ by $M$ in the above notation. Immediately from the definitions, we see that there is an exact sequence \begin{equation}\label{eq:defs} 0\longrightarrow{\rm Sel}(K_\infty,f)\longrightarrow\mathfrak{Sel}(K_\infty,f) \longrightarrow\prod_{\ell\vert N^-}\mathcal{H}^{\rm un}_\ell \end{equation} where \[ \mathcal{H}_\ell^{\rm un}:={\ker}\left\{\prod_{w\vert\ell}H^1(K_{\infty,w},A_f) \longrightarrow\prod_{w\vert\ell}H^1(I_{\infty,w},A_f)\right\} \] is the subgroup of unramified classes. In \cite{pollack-weston}, Pollack and Weston carried out a careful analysis of the difference between ${\rm Sel}(K_\infty,f)$ and $\mathfrak{Sel}(K_\infty,f)$. Even though in \emph{loc.cit.} they focused on the case where $f$ is associated with an elliptic curve, many of their arguments apply more generally. In fact, the next result follows essentially from their work. 
\begin{theorem}\label{thm:mu-alg} Assume that $\bar\rho$ satisfies (SU). Then the following are equivalent: \begin{enumerate} \item{} ${\rm Sel}(K_\infty,f_0)$ is $\Lambda$-cotorsion with $\mu$-invariant zero for some newform $f_0\in\mathcal{H}(\bar\rho)$. \item{} ${\rm Sel}(K_\infty,f)$ is $\Lambda$-cotorsion with $\mu$-invariant zero for all newforms $f\in\mathcal{H}(\bar\rho)$. \item{} $\mathfrak{Sel}(K_\infty,f)$ is $\Lambda$-cotorsion with $\mu$-invariant zero for all newforms $f\in\mathcal{H}(\bar\rho)$. \end{enumerate} Moreover, in that case ${\rm Sel}(K_\infty,f)\simeq\mathfrak{Sel}(K_\infty,f)$. \end{theorem} \begin{proof} Assume $f_0$ is a newform in $\mathcal{H}(\bar\rho)$ for which ${\rm Sel}(K_\infty,f_0)$ is $\Lambda$-cotorsion with $\mu$-invariant zero, and set $N^+:=N(\Sigma)/N^-$. By \cite[Prop.~5.1]{pollack-weston}, we then have the exact sequences \begin{align} 0\longrightarrow{\rm Sel}(K_\infty,f_0)\longrightarrow &\;{\rm Sel}^{N^+}(K_\infty,f_0) \longrightarrow\prod_{\ell\vert N^+}\mathcal{H}_\ell\longrightarrow 0\label{5.1a}\\ 0\longrightarrow\mathfrak{Sel}(K_\infty,f_0)\longrightarrow&\;\mathfrak{Sel}^{N^+}(K_\infty,f_0) \longrightarrow\prod_{\ell\vert N^+}\mathcal{H}_\ell\longrightarrow 0\label{5.1b} \end{align} where $\mathcal{H}_\ell$ is the product of $H^1(K_{\infty,w},A_{f_0})$ over the places $w\vert\ell$ in $K_\infty$. Since every prime $\ell\vert N^+$ splits in $K$ (see Remark~\ref{rem:split}), the $\Lambda$-cotorsionness and the vanishing of the $\mu$-invariant of $\mathcal{H}_\ell$ can be deduced from \cite[Prop.~2.4]{GV}. Since ${\rm Sel}(K_\infty,f_0)[\varpi]$ is finite by assumption, it thus follows from $(\ref{5.1a})$ that ${\rm Sel}^{N^+}(K_\infty,f_0)[\varpi]$ is finite. Combined with $(\ref{eq:defs})$ and \cite[Cor.~5.2]{pollack-weston}, the same argument using (\ref{5.1b}) then shows that $\mathfrak{Sel}^{N^+}(K_\infty,f_0)[\varpi]$ is also finite. 
On the other hand, following the arguments in the proof of \cite[Prop.~3.6]{pollack-weston} we see that for any $f\in\mathcal{H}(\bar{\rho})$ we have \begin{align*} {\rm Sel}^{N^+}(K_\infty,\bar{\rho}) &\simeq{\rm Sel}^{N^+}(K_\infty,f)[\varpi]\\ \mathfrak{Sel}^{N^+}(K_\infty,\bar{\rho}) &\simeq\mathfrak{Sel}^{N^+}(K_\infty,f)[\varpi]. \end{align*} As a result, the argument in the previous paragraph implies that, for any newform $f\in\mathcal{H}(\bar\rho)$, both ${\rm Sel}^{N^+}(K_\infty,f)[\varpi]$ and $\mathfrak{Sel}^{N^+}(K_\infty,f)[\varpi]$ are finite, from which (using (\ref{5.1a}) and (\ref{5.1b}) with $f$ in place of $f_0$) the $\Lambda$-cotorsionness and the vanishing of the $\mu$-invariants of both ${\rm Sel}(K_\infty,f)$ and $\mathfrak{Sel}(K_\infty,f)$ follow. In view of $(\ref{eq:defs})$ and \cite[Lemma~3.4]{pollack-weston}, the result follows. \end{proof} Let $w$ be a prime of $K_\infty$ above $\ell\neq p$ and denote by $G_{w}\subset G_{K_\infty}$ its decomposition group. Let $\mathbb T(\mathfrak a)$ be the irreducible component of $\mathbb{T}_\Sigma$ passing through $f$, and define \[ \delta_w(\mathfrak a):=\dim_{\mathbf{F}}A_f^{G_{w}}/\varpi. \] (Note that this is well-defined by \cite[Lemma 4.3.1]{EPW}.) Assume $\ell=\mathfrak{l}\bar{\mathfrak{l}}$ splits in $K$ and put \begin{equation}\label{def:delta-ell} \delta_\ell(\mathfrak a):=\sum_{w\mid \ell}\delta_w(\mathfrak a) \end{equation} where the sum is over the (finitely many) primes $w$ of $K_\infty$ above $\ell$. In view of Theorem~\ref{thm:mu-alg}, we write $\mu^\mathrm{alg}(\bar\rho)=0$ whenever any of the $\mu$-invariants appearing in that result vanishes. In that case, for any newform $f$ in $\mathcal{H}(\bar\rho)$ we may consider the $\lambda$-invariants $\lambda({\rm Sel}(K_\infty,f))=\lambda(\mathfrak{Sel}(K_\infty,f))$. \begin{theorem}\label{thm:lambda-alg} Let $\bar\rho$ and $\Sigma$ be as above, and assume that $\mu^{\rm alg}(\bar\rho)=0$. 
If $f_1$ and $f_2$ are any two newforms in the Hida family of $\bar{\rho}$ lying in the branches $\mathbb{T}(\mathfrak{a}_1)$ and $\mathbb{T}(\mathfrak{a}_2)$, respectively, then \[ \lambda({\rm Sel}(K_\infty,f_1))-\lambda({\rm Sel}(K_\infty,f_2))= \sum_{\ell\neq p}\bigl(\delta_\ell(\mathfrak{a}_1)-\delta_\ell(\mathfrak{a}_2)\bigr). \] \end{theorem} \begin{proof} Since $N^-\vert N(\mathfrak{a}_i)\vert N(\Sigma)$ and $N(\Sigma)/N^-$ is only divisible by primes that are split in $K$, the arguments of \cite[\S{4}]{EPW} apply verbatim (cf. \cite[Thm.~7.1]{pollack-weston}). \end{proof} \section{Applications to the main conjecture}\label{sec:applications} \subsection{Variation of Iwasawa invariants} Recall the definition of the analytic invariant $e_\ell(\mathfrak{a})=\lambda(E_\ell(\mathfrak{a}))$, where $E_\ell(\mathfrak{a})$ is the Euler factor from Section~\ref{subsec:comparison}, and of the algebraic invariant $\delta_\ell(\mathfrak{a})$ introduced in $(\ref{def:delta-ell})$. \begin{lemma}\label{5.1.5} Let $\mathfrak a_1$, $\mathfrak a_2$ be minimal primes of $\mathbb T_\Sigma$. For any prime $\ell\neq p$ split in $K$, we have \[ \delta_\ell(\mathfrak a_1)-\delta_\ell(\mathfrak a_2)=e_\ell(\mathfrak a_2)-e_\ell(\mathfrak a_1). \] \end{lemma} \begin{proof} Let $\mathfrak{a}$ be a minimal prime of $\mathbb T_\Sigma$, let $f$ be a newform in the branch $\mathbb{T}(\mathfrak{a})$, and let $\wp_f\subset\mathbb{T}(\mathfrak{a})$ be the corresponding height one prime. 
Since $\ell=\mathfrak{l}\bar{\mathfrak{l}}$ splits in $K$, we have \[ \oplus_{w\vert\ell}H^1(K_{\infty,w},A_f)= \left(\oplus_{w\vert\mathfrak{l}}H^1(K_{\infty,w},A_f)\right) \oplus\left(\oplus_{w\vert\bar{\mathfrak{l}}}H^1(K_{\infty,w},A_f)\right) \] and \cite[Prop.~2.4]{GV} immediately implies that \[ Ch_{\Lambda}\left(\oplus_{w\vert\ell}H^1(K_{\infty,w},A_f)^\vee\right) =E_\ell(f,\ell^{-1}\gamma_{\mathfrak{l}})\cdot E_\ell(f,\ell^{-1}\gamma_{\bar{\mathfrak{l}}}) \] where $E_\ell(f,\ell^{-1}\gamma_{\mathfrak{l}})\cdot E_\ell(f,\ell^{-1}\gamma_{\bar{\mathfrak{l}}})$ is the specialization of $E_\ell(\mathfrak{a})$ at $\wp_f$. The result thus follows from \cite[Lemma 5.1.5]{EPW}. \end{proof} \begin{theorem}\label{thm:variation} Assume that $\bar\rho$ satisfies (SU). If for some newform $f_0\in\mathcal H(\bar\rho)$ we have \[ \mu({\rm Sel}(K_\infty,f_0))=\mu(L_p(f_0/K))=0\quad\textrm{and}\quad \lambda({\rm Sel}(K_\infty,f_0))=\lambda(L_p(f_0/K)) \] then \[ \mu({\rm Sel}(K_\infty,f))=\mu(L_p(f/K))=0\quad\textrm{and}\quad \lambda({\rm Sel}(K_\infty,f))=\lambda(L_p(f/K)) \] for all newforms $f\in\mathcal H(\bar\rho)$. \end{theorem} \begin{proof} Let $f$ be any newform in $\mathcal H(\bar\rho)$. Since the $\mu$-invariants of $f_0$ vanish, the vanishing of $\mu({\rm Sel}(K_\infty,f))$ and $\mu(L_p(f/K))$ follows from Theorem~\ref{thm:mu-alg} and Theorem~\ref{3.7.5}, respectively. On the other hand, combining Theorems~\ref{3.7.7} and \ref{thm:lambda-alg}, and Lemma~\ref{5.1.5}, we see that \[ \lambda({\rm Sel}(K_\infty,f))-\lambda({\rm Sel}(K_\infty,f_0))= \lambda(L_p(f/K))-\lambda(L_p(f_0/K)), \] and hence the equality $\lambda({\rm Sel}(K_\infty,f_0))=\lambda(L_p(f_0/K))$ implies the same equality for $f$. 
\end{proof} \subsection{Applications to the main conjecture} As an immediate consequence of the Weierstrass preparation theorem, our Theorem~\ref{thm:variation} together with one of the divisibilities predicted by the anticyclotomic main conjecture implies the full anticyclotomic main conjecture. \begin{theorem}[Skinner--Urban]\label{thm:SU} Let $f\in S_k(\Gamma_0(N))$ be a newform of weight $k\equiv 2\pmod{p-1}$ and trivial nebentypus, and assume that $\bar{\rho}_f$ satisfies (SU) and that $p$ splits in $K$. Then \[ (L_p(f/K))\supseteq Ch_{\Lambda}({\rm Sel}(K_\infty,f)^\vee). \] \end{theorem} \begin{proof} This follows from specializing the divisibility in \cite[Thm.~3.26]{SU} to the anticyclotomic line. Indeed, let $\mathbf{f}=\sum_{n\geq 1}\mathbf{a}_n(\mathbf{f})q^n\in\mathbb{I}\pwseries{q}$ be the $\Lambda$-adic form with coefficients in $\mathbb{I}:=\mathbb{T}(\mathfrak{a})^\circ$ associated with the branch of the Hida family containing $f$, let $\Sigma$ be a finite set of primes as in Section~\ref{subsec:2var-branch}, let $\Sigma'\supseteq\Sigma$ be a finite set of primes of $K$ containing $\Sigma$ and all primes dividing $pN(\mathfrak{a})D_K$, and assume that $\Sigma'$ contains at least one prime $\ell\neq p$ that splits in $K$. Under these assumptions, in \cite[Thm.~3.26]{SU} it is shown that \begin{equation}\label{eq:SU} (\mathfrak{L}_p^{\Sigma'}(\mathbf{f}/K))\supseteq Ch_{\Lambda_{\mathbf{f}}(L_\infty)}(\mathfrak{Sel}^{\Sigma'}(L_\infty,A_{\mathbf{f}})^\vee) \end{equation} where $L_\infty=K_\infty K_{\rm cyc}$ is the $\mathbf Z_p^2$-extension of $K$, $\Lambda_{\mathbf{f}}(L_\infty)$ is the three-variable Iwasawa algebra $\mathbb{I}\pwseries{{\rm Gal}(L_\infty/K)}$, and $\mathfrak{L}_p^{\Sigma'}(\mathbf{f}/K)$ and $\mathfrak{Sel}^{\Sigma'}(L_\infty,A_{\mathbf{f}})$ are the ``$\Sigma'$-primitive'' $p$-adic $L$-function and Selmer group defined in \cite[\S{3.4.5}]{SU} and \cite[\S\S{3.1.3},10]{SU}, respectively. 
Recall the character $\Theta:G_\mathbf Q\rightarrow\mathbf Z_p[[1+p\mathbf Z_p]]^\times$ from Section~\ref{subsec:crit}, regarded as a character on ${\rm Gal}(L_\infty/K)$, and let \[ {\rm Tw}_{\Theta^{-1}}:\Lambda_{\mathbf{f}}(L_\infty)\longrightarrow\Lambda_{\mathbf{f}}(L_\infty) \] be the $\mathbb{I}$-linear isomorphism induced by ${\rm Tw}_{\Theta^{-1}}(g)=\Theta^{-1}(g)g$ for $g\in{\rm Gal}(L_\infty/K)$. Choose a topological generator $\gamma\in{\rm Gal}(K_{\rm cyc}/K)$, and expand \[ {\rm Tw}_{\Theta^{-1}}(\mathfrak{L}_p^{\Sigma'}(\mathbf{f}/K))= \mathfrak{L}_{p,0}^{\Sigma'}(\mathbf{f}/K)+\mathfrak{L}_{p,1}^{\Sigma'}(\mathbf{f}/K)(\gamma-1)+\cdots \] with $\mathfrak{L}_{p,i}^{\Sigma'}(\mathbf{f}/K)\in\Lambda_{\mathbf{f}}(K_\infty)=\mathbb{I}[[\Gamma]]$. In particular, note that $\mathfrak{L}_{p,0}^{\Sigma'}(\mathbf{f}/K)$ is the restriction of the twisted three-variable $p$-adic $L$-function ${\rm Tw}_{\Theta^{-1}}(\mathfrak{L}_p^{\Sigma'}(\mathbf{f}/K))$ to the ``self-dual'' plane. Because of our assumptions on $f$, the $\Lambda$-adic form $\mathbf{f}$ has trivial tame character, and hence denoting by ${\rm Frob}_\ell$ an arithmetic Frobenius at any prime $\ell\nmid N(\mathfrak{a})p$, the Galois representation \[ \rho(\mathfrak{a}):G_\mathbf Q\longrightarrow{\rm GL}(T_{\mathbf{f}})\simeq{\rm GL}_2(\mathbb{T}(\mathfrak{a})^\circ) \] considered in $\S\ref{sec:branches}$ (which is easily seen to agree with the twisted representation considered in \cite[p.37]{SU}) is such that \[ {\rm det}(X-{\rm Frob}_\ell\vert T_{\mathbf{f}})=X^2-\mathbf{a}_\ell(\mathbf{f})X+\Theta^2(\ell)\ell. \] The twist $T_{\mathbf{f}}^\dagger:=T_{\mathbf{f}}\otimes\Theta^{-1}$ is therefore self-dual. 
Thus combining \cite[Lemma~6.1.2]{Rubin-ES} with a straightforward variant of \cite[Prop.~3.9]{SU} having ${\rm Gal}(K_\infty/K)$ in place of ${\rm Gal}(K_{\rm cyc}/K)$, we see that the divisibility $(\ref{eq:SU})$ implies that \begin{equation}\label{eq:SU-} (\mathfrak{L}_{p,0}^{\Sigma'}(\mathbf{f}/K))\supseteq Ch_{\Lambda_{\mathbf{f}}(K_\infty)}(\mathfrak{Sel}^{\Sigma'}(K_\infty,A_{\mathbf{f}}^\dagger)^\vee). \end{equation} (Here, as above, $A_{\mathbf{f}}$ denotes the Pontryagin dual $T_{\mathbf{f}}\otimes_{\mathbb{I}}{\rm Hom}_{\rm cts}(\mathbb{I},\mathbf Q_p/\mathbf Z_p)$, and $A_{\mathbf{f}}^\dagger$ is the corresponding twist.) We next claim that, setting $\Sigma'':=\Sigma'\smallsetminus\Sigma$, we have \begin{equation}\label{eq:comp} (\mathfrak{L}_{p,0}^{\Sigma'}(\mathbf{f}/K))=(L_\Sigma(\bar\rho,\mathfrak{a})\cdot \prod_{\substack{v\in\Sigma''\\v\nmid p}}E_v(\mathfrak{a})) \end{equation} where $L_\Sigma(\bar\rho,\mathfrak{a})$ is the two-variable $p$-adic $L$-function constructed in $\S\ref{subsec:2varL}$, and if $v$ lies over the rational prime $\ell$, $E_v(\mathfrak{a})$ is the Euler factor given by \[ E_v(\mathfrak{a})=\det({\rm Id}-{\rm Frob}_vX\vert(V_{\mathbf{f}}^\dagger)_{I_v})_{X=\ell^{-1}{\rm Frob}_v} \] where $V_{\mathbf{f}}:=T_{\mathbf{f}}\otimes_{\mathbb{I}}{\rm Frac}(\mathbb{I})$, and ${\rm Frob}_v$ is an arithmetic Frobenius at $v$. (Note that for $\ell=\mathfrak{l}\bar{\mathfrak{l}}$ split in $K$, $E_{\mathfrak{l}}(\mathfrak{a})\cdot E_{\bar{\mathfrak{l}}}(\mathfrak{a})$ is simply the Euler factor (\ref{def:e-a}).) Indeed, combined with Theorem~\ref{thm:3.6.2} and Theorem~\ref{thm:3.4.3}, the equality $(\ref{eq:comp})$ specialized to any arithmetic prime $\wp\subset\mathbb{T}(\mathfrak{a})$ of weight $2$ is shown in \cite[(12.3)]{SU}, and the claim then follows easily from the density of these primes. 
(See also \cite[Thm.~6.8]{pollack-weston} for the comparison between the different periods involved in the two constructions, which differ by a $p$-adic unit under our assumptions.) Finally, $(\ref{eq:SU-})$ and $(\ref{eq:comp})$ combined with Theorem~\ref{thm:3.6.2} and \cite[Props.~2.3,8]{GV} imply that \[ (L(\bar\rho,\mathfrak{a}))\supseteq Ch_{\Lambda_{\mathbf{f}}(K_\infty)}(\mathfrak{Sel}(K_\infty,A^\dagger_{\mathbf{f}})^\vee) \] from which the result follows by specializing at $\wp_f$ using Theorem~\ref{thm:3.4.3} and Theorem~\ref{thm:mu-alg}. \end{proof} \begin{corollary} Suppose that $\bar\rho$ satisfies (SU)~and that $p$ splits in $K$. If the anticyclotomic main conjecture holds for some newform $f_0$ in $\mathcal H(\bar\rho)$ of weight $k_0\equiv 2\pmod{p-1}$ and trivial nebentypus, then it holds for all newforms $f$ in $\mathcal H(\bar\rho)$ of weight $k\equiv 2\pmod{p-1}$ and trivial nebentypus. \end{corollary} \begin{proof} After Theorem~\ref{thm:SU}, to check the anticyclotomic main conjecture for any newform $f$ as in the statement, it suffices to check that \begin{equation}\label{Iw} \mu({\rm Sel}(K_\infty,f))=\mu(L_p(f/K))=0\quad\textrm{and}\quad \lambda({\rm Sel}(K_\infty,f))=\lambda(L_p(f/K)). \end{equation} If the anticyclotomic main conjecture holds for some newform $f_0$ as in the statement, then the first and third equalities in $(\ref{Iw})$ clearly hold for $f_0$, while the vanishing of $\mu(L_p(f_0/K))$ follows from Corollary~\ref{cor:lambda-an}; by Theorem~\ref{thm:variation}, the equalities $(\ref{Iw})$ then also hold for $f$, and hence the anticyclotomic main conjecture for $f$ follows. \end{proof} \emph{Acknowledgements.} During the preparation of this paper, F.C. was partially supported by Grant MTM20121-34611 and by Prof.~Hida's NSF Research Grant DMS-0753991; C.K. was partially supported by AMS-Simons Travel Grants; and M.L. 
was supported by PRIN 2010-11 ``Arithmetic Algebraic Geometry and Number Theory'' and by PRAT 2013 ``Arithmetic of Varieties over Number Fields''. \bibliographystyle{amsalpha}
\section*{Introduction} Let $(X,L)$ be a polarized complex manifold, \ie a smooth complex projective variety $X$ endowed with an ample line bundle $L$. Assuming for simplicity that the reduced automorphism group $\Aut(X,L)/\mathbb{C}^*$ is discrete (and hence finite), the Yau-Tian-Donaldson conjecture predicts that the first Chern class $c_1(L)$ contains a constant scalar curvature K\"ahler metric (cscK metric for short) iff $(X,L)$ satisfies a certain algebro-geometric condition known as \emph{K-stability}. Building on~\cite{Don1,AP}, it was proved in~\cite{Sto} that K-stability indeed follows from the existence of a cscK metric. When $c_1(X)$ is a multiple of $c_1(L)$, the converse was recently established (\cite{CDS15}, see also~\cite{Tian15}); in this case a cscK metric is the same as a K\"ahler-Einstein metric. In the original definition of~\cite{Don2}, $(X,L)$ is K-semistable if the Donaldson-Futaki invariant $\DF(\mathcal{X},\mathcal{L})$ of every (ample) test configuration $(\mathcal{X},\mathcal{L})$ for $(X,L)$ is non-negative, and K-stable if we further have $\DF(\mathcal{X},\mathcal{L})=0$ only when $\mathcal{X}=X\times\mathbb{C}$ is trivial (and hence $\mathcal{L}=p_1^*L$ with $\mathbb{C}^*$ acting through a character). However, as pointed out in~\cite{LX}, $(X,L)$ always admits test configurations $(\mathcal{X},\mathcal{L})$ with $\mathcal{X}$ non-trivial, but \emph{almost trivial} in the sense that its normalization $\widetilde{\mathcal{X}}$ is trivial. Such test configurations automatically satisfy $\DF(\mathcal{X},\mathcal{L})=0$, and the solution adopted in~\cite{Sto2,Oda4} was therefore to replace `trivial' with `almost trivial' in the definition of K-stability. On the other hand, G.~Sz\'ekelyhidi~\cite{Sze1,Sze2} proposed that a \emph{uniform} notion of K-stability should be used to formulate the Yau-Tian-Donaldson conjecture for general polarizations.
In this uniform version, $\DF(\mathcal{X},\mathcal{L})$ is bounded below by a positive multiple of the $L^p$-norm $\|(\mathcal{X},\mathcal{L})\|_p$. Since uniform K-stability should of course imply K-stability, one then faces the problem of showing that test configurations with norm zero are almost trivial. In the first part of the paper, we prove that this is indeed the case. In fact, the $L^p$-norm $\|(\mathcal{X},\mathcal{L})\|_p$ of a test configuration $(\mathcal{X},\mathcal{L})$ can be computed via the \emph{Duister\-maat-Heckman measure} $\DH_{(\mathcal{X},\mathcal{L})}$ associated to the test configuration. We undertake a quite thorough study of Duistermaat-Heckman measures and prove in particular that $\DH_{(\mathcal{X},\mathcal{L})}$ is a Dirac mass iff $(\mathcal{X},\mathcal{L})$ is almost trivial. The second main purpose of the paper is to introduce a \emph{non-Archimedean} perspective on K-stability, in which test configurations for $(X,L)$ are viewed as non-Archimedean metrics on (the Berkovich analytification with respect to the trivial norm of) $L$. We introduce non-Archimedean analogues of many classical functionals in K\"ahler geometry, and interpret uniform K-stability as the non-Archimedean counterpart of the coercivity of the Mabuchi K-energy. Finally, in the third part of the paper, we use this formalism to analyze the interaction between singularities of pairs (in the sense of the Minimal Model Program) and uniform K-stability, revisiting Y.~Odaka's work~\cite{Oda1,Oda3,OSa,OSu}. \medskip We now describe the contents of the paper in more detail. \subsection*{Duistermaat-Heckman measures} Working, for the moment, over an arbitrary alge\-braically closed ground field, let $(X,L)$ be a polarized scheme, \ie a (possibly non-reduced) scheme $X$ together with an ample line bundle $L$ on $X$. Given a $\mathbb{G}_m$-action on $(X,L)$, let $H^0(X,mL)=\bigoplus_{\lambda\in\mathbb{Z}}H^0(X,mL)_\lambda$ be the weight decomposition.
For each $d\in\mathbb{N}$, the finite sum $\sum_{\lambda\in\mathbb{Z}}\lambda^d \dim H^0(X,mL)_\lambda$ is a polynomial function of $m\gg 1$, of degree at most $\dim X+d$ (cf.~Theorem~\ref{thm:equivRR}, as well as Appendix B). Setting $N_m:=\dim H^0(X,mL)$, we get, as a direct consequence, the existence of the \emph{Duistermaat-Heckman measure} $$ \DH_{(X,L)}:=\lim_{m\to\infty}\frac{1}{N_m}\sum_{\lambda\in\mathbb{Z}}\dim H^0(X,mL)_\lambda\d_{m^{-1}\lambda}, $$ a probability measure with compact support in $\mathbb{R}$ describing the asymptotic distribution as $m\to\infty$ of the (scaled) weights of $H^0(X,mL)$, counted with multiplicity. The \emph{Donaldson-Futaki invariant} $\DF(X,L)$ appears in the subdominant term of the expansion $$ \frac{w_m}{mN_m}=\frac{1}{N_m}\sum_{\lambda\in\mathbb{Z}}m^{-1}\lambda\dim H^0(X,mL)_\lambda=\int_\mathbb{R}\lambda\,\DH_{(X,L)}(d\lambda)-(2m)^{-1}\DF(X,L)+O(m^{-2}), $$ where $w_m$ is the weight of the induced action on the determinant line $\det H^0(X,mL)$. \smallskip Instead of a $\mathbb{G}_m$-action on $(X,L)$, consider more generally a \emph{test configuration} $(\mathcal{X},\mathcal{L})$ for $(X,L)$, \ie a $\mathbb{G}_m$-equivariant partial compactification of $(X,L)\times(\mathbb{A}^1\setminus\{0\})$. It comes with a proper, flat, $\mathbb{G}_m$-equivariant morphism $\pi\colon\mathcal{X}\to\mathbb{A}^1$, together with a $\mathbb{G}_m$-linearized $\mathbb{Q}$-line bundle $\mathcal{L}$ extending $p_1^*L$ on $X\times(\mathbb{A}^1\setminus\{0\})$. When the test configuration is ample, \ie $\mathcal{L}$ is $\pi$-ample, the central fiber $(\mathcal{X}_0,\mathcal{L}_0)$ is a polarized $\mathbb{G}_m$-scheme, and the Duistermaat-Heckman measure $\DH_{(\mathcal{X},\mathcal{L})}$ and Donaldson-Futaki invariant $\DF(\mathcal{X},\mathcal{L})$ are defined to be those of $(\mathcal{X}_0,\mathcal{L}_0)$. 
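As a simple illustration of these definitions, consider $X=\P^1$ and $L=\mathcal{O}_{\P^1}(1)$ with the $\mathbb{G}_m$-action $t\cdot[x_0:x_1]=[x_0:tx_1]$, linearized (a standard choice) so that the monomials $x_0^{m-j}x_1^j$, $0\le j\le m$, form a basis of weight vectors of $H^0(X,mL)$, with $x_0^{m-j}x_1^j$ of weight $j$. Every weight space is then one-dimensional, so $N_m=m+1$ and $$ \DH_{(X,L)}=\lim_{m\to\infty}\frac{1}{m+1}\sum_{j=0}^{m}\d_{j/m} $$ is Lebesgue measure on $[0,1]$. Moreover $w_m=\sum_{j=0}^{m}j=\tfrac12 m(m+1)$, so that $w_m/(mN_m)=\tfrac12=\int_\mathbb{R}\lambda\,\DH_{(X,L)}(d\lambda)$ holds exactly, and the expansion above gives $\DF(X,L)=0$; the same invariants are attached to the corresponding product test configuration $(\P^1,\mathcal{O}_{\P^1}(1))\times\mathbb{A}^1$.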
In the previous case, where $(X,L)$ comes with a $\mathbb{G}_m$-action, the Duistermaat-Heckman measure and Donaldson-Futaki invariant as defined above coincide with those of the corresponding \emph{product} test configuration $(X,L)\times\mathbb{A}^1$ with the diagonal action of $\mathbb{G}_m$. Such a test configuration is called \emph{trivial} if the action on $X$ is trivial. Our first main result may be summarized as follows. \begin{thmA} Let $(X,L)$ be a polarized scheme and $(\mathcal{X},\mathcal{L})$ an ample test configuration for $(X,L)$, with Duistermaat-Heckman measure $\DH_{(\mathcal{X},\mathcal{L})}$. \begin{itemize} \item[(i)] The absolutely continuous part of $\DH_{(\mathcal{X},\mathcal{L})}$ has piecewise polynomial density, and its singular part is a finite sum of point masses. \item[(ii)] The measure $\DH_{(\mathcal{X},\mathcal{L})}$ is a finite sum of point masses iff $(\mathcal{X},\mathcal{L})$ is \emph{almost trivial} in the sense that the normalization of each top-dimensional irreducible component of $\mathcal{X}$ is trivial. \end{itemize} \end{thmA} The piecewise polynomiality in (i) generalizes a well-known property of Duistermaat-Heckman measures for polarized complex manifolds with a $\mathbb{C}^*$-action~\cite{DH}. In (ii), the normalization of $\mathcal{X}$ is viewed as a test configuration for the normalization of $X$. The notion of almost triviality is compatible with the one introduced in~\cite{Sto2,Oda4} for $X$ reduced and equidimensional, cf.~Proposition~\ref{prop:almosttriv}. In Theorem~A, $X$ is a possibly non-reduced scheme. If we specialize to the case when $X$ is a (reduced, irreducible) variety, Theorem~A and its proof yield the following characterization of almost trivial test configurations: \begin{corB} Let $(X,L)$ be a polarized variety and $(\mathcal{X},\mathcal{L})$ an ample test configuration for $(X,L)$, with Duistermaat-Heckman measure $\DH_{(\mathcal{X},\mathcal{L})}$.
Then the following conditions are equivalent: \begin{itemize} \item[(i)] the Duistermaat-Heckman measure $\DH_{(\mathcal{X},\mathcal{L})}$ is a Dirac mass; \item[(ii)] for some (or, equivalently, any) $p\in[1,\infty]$, we have $\|(\mathcal{X},\mathcal{L})\|_p=0$; \item[(iii)] $(\mathcal{X},\mathcal{L})$ is almost trivial, that is, the normalization $(\widetilde{\mathcal{X}},\tilde{\mathcal{L}})$ is trivial. \end{itemize} \end{corB} Here the $L^p$-norm $\|(\mathcal{X},\mathcal{L})\|_p$ is defined, following~\cite{Don3,WN12,His12}, as the $L^p$ norm of $\lambda\mapsto\lambda-\bar\lambda$ with respect to $\DH_{(\mathcal{X},\mathcal{L})}$, where $\bar\lambda$ is the barycenter of this measure. \subsection*{Uniform K-stability and non-Archimedean functionals} A polarized scheme $(X,L)$ is \emph{K-semistable} if $\DF(\mathcal{X},\mathcal{L})\ge 0$ for each ample test configuration. It is \emph{K-stable} if, furthermore, $\DF(\mathcal{X},\mathcal{L})=0$ only when $(\mathcal{X},\mathcal{L})$ is almost trivial, in the sense of Theorem~A~(ii). Assume from now on that $X$ is irreducible and normal. By Corollary~B, the almost triviality of an ample test configuration can then be detected by the $L^p$-norm $\|(\mathcal{X},\mathcal{L})\|_p$ with $p\in[1,\infty]$. We say that $(X,L)$ is \emph{$L^p$-uniformly K-stable} if $\DF(\mathcal{X},\mathcal{L})\ge\d\|(\mathcal{X},\mathcal{L})\|_p$ for some uniform constant $\d>0$. For $p=1$, we simply speak of \emph{uniform K-stability}, which is therefore implied by $L^p$-uniform K-stability since $\|(\mathcal{X},\mathcal{L})\|_p\ge\|(\mathcal{X},\mathcal{L})\|_1$. These notions also apply when $((X,B);L)$ is a \emph{polarized pair}, consisting of a normal polarized variety and a $\mathbb{Q}$-Weil divisor on $X$ such that $K_{(X,B)}:=K_X+B$ is $\mathbb{Q}$-Cartier, using the \emph{log Donaldson-Futaki invariant} $\DF_B(\mathcal{X},\mathcal{L})$ of a test configuration $(\mathcal{X},\mathcal{L})$. 
We show that $L^p$-uniform K-stability can in fact only hold for $p\le \tfrac{n}{n-1}$ (cf.~Proposition~\ref{prop:thresh}). One of the points of the present paper is to show that ($L^1$-)uniform K-stability of polarized pairs can be understood in terms of the non-Archimedean counterparts of well-known functionals in K\"ahler geometry. In order to achieve this, we interpret a test configuration for $(X,L)$ as a \emph{non-Archimedean metric} on the Berkovich analytification of $L$ with respect to the \emph{trivial} norm on the ground field, see~\S\ref{sec:NAmetrics}. In this language, ample test configurations become positive metrics. Several classical functionals on the space of Hermitian metrics in K\"ahler geometry have natural counterparts in the non-Archimedean setting. For example, the \emph{non-Archimedean Monge-Amp\`ere energy} is $$ E^{\mathrm{NA}}(\mathcal{X},\mathcal{L})=\frac{(\bar\mathcal{L}^{n+1})}{(n+1)V}=\int_\mathbb{R}\lambda\,\DH_{(\mathcal{X},\mathcal{L})}(d\lambda), $$ where $V=(L^n)$, $(\bar\mathcal{X},\bar\mathcal{L})$ is the natural $\mathbb{G}_m$-equivariant compactification of $(\mathcal{X},\mathcal{L})$ over $\P^1$ and $\DH_{(\mathcal{X},\mathcal{L})}$ is the Duistermaat-Heckman measure of $(\mathcal{X},\mathcal{L})$. The \emph{non-Archimedean J-energy} is $$ J^{\mathrm{NA}}(\mathcal{X},\mathcal{L}) =\lambda_{\max}-E^{\mathrm{NA}}(\mathcal{X},\mathcal{L}) =\lambda_{\max}-\int_\mathbb{R}\lambda\,\DH_{(\mathcal{X},\mathcal{L})}(d\lambda)\ge 0, $$ with $\lambda_{\max}$ the upper bound of the support of $\DH_{(\mathcal{X},\mathcal{L})}$. We show that this quantity is equivalent to the $L^1$-norm in the following sense: $$ c_n J^\mathrm{NA}(\mathcal{X},\mathcal{L})\le\|(\mathcal{X},\mathcal{L})\|_1\le 2 J^\mathrm{NA}(\mathcal{X},\mathcal{L}) $$ for some numerical constant $c_n>0$.
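As a sanity check on these definitions, consider the product test configuration of $(\P^1,\mathcal{O}_{\P^1}(1))$ induced by the $\mathbb{G}_m$-action $t\cdot[x_0:x_1]=[x_0:tx_1]$, linearized (a standard choice) so that the weights on $H^0(\P^1,\mathcal{O}_{\P^1}(m))$ are $0,1,\dots,m$, each with multiplicity one. Its Duistermaat-Heckman measure is Lebesgue measure on $[0,1]$, so $E^{\mathrm{NA}}=\tfrac12$, $\lambda_{\max}=1$ and $J^{\mathrm{NA}}=\tfrac12$, while the $L^1$-norm is $\int_0^1|\lambda-\tfrac12|\,d\lambda=\tfrac14$, consistent with the displayed inequalities.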
Given a boundary $B$, we define the \emph{non-Archimedean Ricci energy} $R_B^{\mathrm{NA}}(\mathcal{X},\mathcal{L})$ in terms of intersection numbers on a suitable test configuration dominating $(\mathcal{X},\mathcal{L})$. The \emph{non-Archimedean entropy} $H_B^{\mathrm{NA}}(\mathcal{X},\mathcal{L})$ is defined in terms of the log discrepancies with respect to $(X,B)$ of certain divisorial valuations, and will be described in more detail below. The \emph{non-Archimedean Mabuchi functional} is now defined so as to satisfy the analogue of the \emph{Chen-Tian formula} (see~\cite{Che2} and also~\cite[Proposition 3.1]{BB}) $$ M_B^{\mathrm{NA}}(\mathcal{X},\mathcal{L})=H_B^{\mathrm{NA}}(\mathcal{X},\mathcal{L}) +R_B^{\mathrm{NA}}(\mathcal{X},\mathcal{L})+\bar S_B E^{\mathrm{NA}}(\mathcal{X},\mathcal{L}) $$ with $$ \bar S_B:=-nV^{-1}\left(K_{(X,B)}\cdot L^{n-1}\right), $$ which, for $X$ smooth over $\mathbb{C}$ and $B=0$, gives the mean value of the scalar curvature of any K\"ahler metric in $c_1(L)$. The whole point of these constructions is that $M_B^{\mathrm{NA}}$ is essentially the same as the log Donaldson-Futaki invariant.\footnote{The interpretation of the Donaldson-Futaki invariant as a non-Archimedean Mabuchi functional has been known to Shou-Wu Zhang for quite some time, cf.~\cite[Remark 6]{PRS}.} We show more precisely that every normal, ample test configuration $(\mathcal{X},\mathcal{L})$ satisfies \begin{equation}\label{equ:Mslope} \DF_B(\mathcal{X},\mathcal{L})=M_B^{\mathrm{NA}}(\mathcal{X},\mathcal{L})+V^{-1}\left((\mathcal{X}_0-\mathcal{X}_{0,\red})\cdot\mathcal{L}^n\right). \end{equation} Further, $M^{\mathrm{NA}}_B$ is homogeneous with respect to $\mathbb{G}_m$-equivariant base change, a property which is particularly useful in relation with semistable reduction, and fails for the Donaldson-Futaki invariant when the central fiber is non-reduced. 
Using this, we show that uniform K-stability of $((X,B);L)$ is equivalent to the apparently stronger condition $M_B^{\mathrm{NA}}\ge\d J^{\mathrm{NA}}$, which we interpret as a counterpart to the \emph{coercivity} of the Mabuchi energy in K\"ahler geometry~\cite{Tian97}. \medskip The relation between the non-Archimedean functionals above and their classical counterparts will be systematically studied in~\cite{BHJ2}. Let us indicate the main idea. Assume $(X,L)$ is a smooth polarized complex variety, and $B=0$. Denote by $\mathcal{H}$ the space of K\"ahler metrics on $L$ and by $\mathcal{H}^{\mathrm{NA}}$ the space of non-Archimedean metrics. The general idea is that $\mathcal{H}^{\mathrm{NA}}$ plays the role of the `Tits boundary' of the (infinite dimensional) symmetric space $\mathcal{H}$. Given an ample test configuration $(\mathcal{X},\mathcal{L})$ (viewed as an element of $\mathcal{H}^{\mathrm{NA}}$) and a smooth ray $(\phi_s)_{s\in(0,+\infty)}$ corresponding to a smooth $S^1$-invariant metric on $\mathcal{L}$, we shall prove in~\cite{BHJ2} that \begin{equation}\label{e301} \lim_{s\to+\infty}\frac{F(\phi_s)}{s}=F^{\mathrm{NA}}(\mathcal{X},\mathcal{L}), \end{equation} where $F$ denotes the Monge-Amp\`ere energy, $J$-energy, entropy, or Mabuchi energy functional and $F^{\mathrm{NA}}$ is the corresponding non-Archimedean functional defined above. In the case of the Mabuchi energy, this result is closely related to~\cite{PT1,PT2,PRS}. \subsection*{Singularities of pairs and uniform K-stability} A key point in our approach to K-stability is to relate the birational geometry of $X$ and that of its test configurations using the language of \emph{valuations}. More specifically, let $(X,L)$ be a normal polarized variety, and $(\mathcal{X},\mathcal{L})$ a normal test configuration. Every irreducible component $E$ of $\mathcal{X}_0$ defines a divisorial valuation $\ord_E$ on the function field of $\mathcal{X}$. 
Since the latter is canonically isomorphic to $k(X\times\mathbb{A}^1)\simeq k(X)(t)$, we may consider the restriction $r(\ord_E)$ of $\ord_E$ to $k(X)$; this is proved to be a divisorial valuation as well when $E$ is non-trivial, \ie not the strict transform of the central fiber of the trivial test configuration. This correspondence between irreducible components of $\mathcal{X}_0$ and divisorial valuations on $X$ is analyzed in detail in~\S\ref{sec:valtest}. In particular, we prove that the Rees valuations of a closed subscheme $Z\subset X$, \ie the divisorial valuations associated to the normalized blow-up of $X$ along $Z$, coincide with the valuations induced on $X$ by the normalization of the deformation to the normal cone of $Z$. Given a boundary $B$ on $X$, we define the non-Archimedean entropy of a normal test configuration $(\mathcal{X},\mathcal{L})$ mentioned above as $$ H^{\mathrm{NA}}_B(\mathcal{X},\mathcal{L})=V^{-1}\sum_E A_{(X,B)}(r(\ord_E))(E\cdot\mathcal{L}^n), $$ the sum running over the non-trivial irreducible components of $\mathcal{X}_0$ and $A_{(X,B)}(v)$ denoting the log discrepancy of a divisorial valuation $v$ with respect to the pair $(X,B)$. Recall that the pair $(X,B)$ is log canonical (lc for short) if $A_{(X,B)}(v)\ge 0$ for all divisorial valuations on $X$, and Kawamata log terminal (klt for short) if the inequality is everywhere strict. Our main result here is a characterization of these singularity classes in terms of the non-Archimedean entropy functional. \begin{thmC} Let $(X,L)$ be a normal polarized variety, and $B$ an effective boundary on $X$. Then $(X,B)$ is lc (resp.\ klt) iff $H_B^{\mathrm{NA}}(\mathcal{X},\mathcal{L})\ge 0$ (resp.\ $>0$) for every non-trivial normal, ample test configuration $(\mathcal{X},\mathcal{L})$. In the klt case, there automatically exists $\d>0$ such that $H_B^{\mathrm{NA}}(\mathcal{X},\mathcal{L})\ge\d J^{\mathrm{NA}}(\mathcal{X},\mathcal{L})$ for all $(\mathcal{X},\mathcal{L})$.
\end{thmC} The strategy to prove the first two points is closely related to that of~\cite{Oda3}. In fact, we also provide a complete proof of the following mild generalization (in the normal case) of the main result of~\textit{loc.cit}: $$ ((X,B);L)\text{ K-semistable }\Longrightarrow (X,B)\text{ lc}. $$ The non-normal case is discussed in \S\ref{sec:slc}. If $(X,B)$ is not lc (resp.\ not klt), then known results from the Minimal Model Program allow us to construct a closed subscheme $Z$ whose Rees valuations have negative (resp.\ non-positive) discrepancies; the normalization of the deformation to the normal cone of $Z$ then provides a test configuration $(\mathcal{X},\mathcal{L})$ with $H^{\mathrm{NA}}_B(\mathcal{X},\mathcal{L})<0$ (resp.\ $\le 0$). To prove uniformity in the klt case, we exploit the strict positivity of the \emph{global log canonical threshold} $\lct((X,B);L)$ of $((X,B);L)$. As a consequence, we are able to analyze uniform K-stability in the \emph{log K\"ahler-Einstein case}, \ie when $K_{(X,B)}$ is numerically proportional to $L$. \begin{corD} Let $(X,L)$ be a normal polarized variety, $B$ an effective boundary, and assume that $K_{(X,B)}\equiv\lambda L$ with $\lambda\in\mathbb{Q}$. \begin{itemize} \item[(i)] If $\lambda>0$, then $((X,B);L)$ is uniformly K-stable iff $(X,B)$ is lc; \item[(ii)] If $\lambda=0$, then $((X,B);L)$ is uniformly K-stable iff $(X,B)$ is klt; \item[(iii)] If $\lambda<0$ and $\lct((X,B);L)>\frac{n}{n+1}|\lambda|$, then $((X,B);L)$ is uniformly K-stable. \end{itemize} \end{corD} This result thus gives `uniform versions' of~\cite{Oda1,OSa}. In the last case, when $-K_{(X,B)}$ is ample, we also prove that uniform K-stability is equivalent to \emph{uniform Ding stability}, defined as $D^\mathrm{NA}_B\ge\delta J^\mathrm{NA}$, where $D^\mathrm{NA}$ is the \emph{non-Archimedean Ding functional} that appeared in the work of Berman~\cite{Berm16}; see also~\cite{BBJ15,Fuj15b,Fuj16}.
\subsection*{Relation to other works} Since we aim to give a systematic introduction to uniform K-stability, and to set up some non-Archimedean terminology, we have tried to make the exposition as self-contained as possible. This means that we reprove or slightly generalize some already known results~\cite{Oda1,Oda3,OSa,Sun,OSu}. During the preparation of the present paper, we were informed of R.~Dervan's independent work~\cite{Der1} (see also~\cite{Der2}), which has a substantial overlap with the present paper. First, when $X$ is normal, ample test configurations with trivial norm were also characterized in~\cite[Theorem 1.3]{Der1}. Next, the \emph{minimum norm} introduced in~\textit{loc.cit} turns out to be equivalent to our non-Archimedean J-functional, up to multiplicative constants (cf.~Remark~\ref{rmk:derJ}). As a result, \emph{uniform K-stability with respect to the minimum norm} as in~\cite{Der1} is the same as our concept of uniform K-stability. Finally, Theorem~C above is to a large extent contained in~\cite[\S3]{Der1} and~\cite{Der2}. Several papers exploring K-stability through valuations have appeared since the first version of this paper. We mention in particular~\cite{Fuj15a,Fuj15b,Fuj16,FO16,Li15,Li16,Liu16}. \subsection*{Structure of the paper} Section~\ref{sec:prelim} gathers a number of preliminary facts on filtrations and valuations, with a special emphasis on the Rees construction and the relation between Rees valuations and integral closure. In Section~\ref{sec:test} we provide a number of elementary facts on test configurations, and discuss in particular some scheme theoretic aspects. Section~\ref{sec:DHDF} gives a fairly self-contained treatment of Duistermaat-Heckman measures and Donaldson-Futaki invariants in the context of polarized schemes. The existence of asymptotic expansions for power sums of weights is established in Theorem~\ref{thm:equivRR}, following an idea of Donaldson.
The correspondence between irreducible components of the central fiber of a normal test configuration and divisorial valuations on $X$ is considered in Section~\ref{sec:valtest}. In particular, Theorem~\ref{thm:reescone} relates Rees valuations and the deformation to the normal cone. Section~\ref{sec:DHfiltr} contains an in-depth study of Duistermaat-Heckman measures in the normal case, leading to the proof of Theorem~A and Corollary~B. Certain non-Archimedean metrics on $L$ are introduced in Section~\ref{sec:NAmetrics} as equivalence classes of test configurations. This is inspired by~\cite{siminag,nama,simons}. In Section~\ref{S201} we introduce non-Archimedean analogues of the usual energy functionals and in Section~\ref{sec:Kstab} we use these to define and study uniform K-stability. In the Fano case, we relate (uniform) K-stability to the notion of (uniform) Ding stability. Section~\ref{sec:kstabsing} is concerned with the interaction between uniform K-stability and singularities of pairs. Specifically, Theorem~\ref{thm:lc} and Theorem~\ref{thm:klt} establish Theorem~C as well as the generalization of~\cite{Oda3} mentioned above. Corollary~D is a combination of Corollary~\ref{cor:canpol}, Corollary~\ref{cor:CY} and Proposition~\ref{prop:alpha}. Finally, Appendix A provides a proof of the two-term Riemann-Roch theorem on a normal variety, whose complete proof we could not locate in the literature, and Appendix B summarizes Edidin and Graham's equivariant version of the Riemann-Roch theorem for schemes, yielding an alternative proof of Theorem~\ref{thm:equivRR}. \begin{ackn} The authors would like to thank Robert Berman, Michel Brion, Ama\"el Broustet, Kento Fujita, Vincent Guedj, Marco Maculan, Mircea Musta\c{t}\u{a}, Yuji Odaka and Ahmed Zeriahi for helpful conversations. We are also grateful to the anonymous referee for several very useful comments. S.B. was partially supported by the ANR projects MACK and POSITIVE\@. T.H. 
was supported by JSPS Research Fellowships for Young Scientists (25-6660). M.J. was partially supported by NSF grant DMS-1266207, the Knut and Alice Wallenberg foundation, and the United States---Israel Binational Science Foundation. \end{ackn} \section{Preliminary facts on filtrations and valuations}\label{sec:prelim} We work over an algebraically closed field $k$, whose characteristic is arbitrary unless otherwise specified. Write $\mathbb{G}_m$ for the multiplicative group over $k$ and $\mathbb{A}^1=\Spec k[t]$ for the affine line. The \emph{trivial absolute value} $|\cdot|_0$ on $k$ is defined by $|0|_0=0$ and $|c|_0=1$ for $c\in k^*$. All schemes are assumed to be separated and of finite type over $k$. We restrict the use of \emph{variety} to denote a reduced and irreducible scheme. A reduced scheme is thus a finite union of varieties, and a normal scheme is a disjoint union of normal varieties. By an \emph{ideal} on a scheme $X$ we mean a coherent ideal sheaf, whereas a \emph{fractional ideal} is a coherent $\mathcal{O}_X$-submodule of the sheaf of rational functions. If $X$ is a scheme and $L$ a line bundle on $X$, then a $\mathbb{G}_m$-action on $(X,L)$ means a $\mathbb{G}_m$-action on $X$ together with a $\mathbb{G}_m$-linearization of $L$. This induces an action on $(X,rL)$ for any $r\in\mathbb{Z}_{>0}$. If $L$ is a $\mathbb{Q}$-line bundle on $X$, then a $\mathbb{G}_m$-action on $(X,L)$ means a compatible family of actions on $(X,rL)$ for all sufficiently divisible $r\in\mathbb{Z}_{>0}$. A polarized scheme (resp.\ variety) is a pair $(X,L)$ where $X$ is a projective scheme (resp.\ variety) and $L$ is an ample $\mathbb{Q}$-line bundle on $X$. \subsection{Norms and filtrations}\label{sec:filtr} Let $V$ be a finite dimensional $k$-vector space. In this paper, a \emph{filtration} of $V$ will mean a decreasing, left-continuous, separating and exhaustive $\mathbb{R}$-indexed filtration $F^\bullet V$.
In other words, it is a family of subspaces $(F^\lambda V)_{\lambda\in\mathbb{R}}$ of $V$ such that \begin{itemize} \item[(i)] $F^\lambda V\subset F^{\lambda'}V$ when $\lambda\ge\lambda'$; \item[(ii)] $F^\lambda V=\bigcap_{\lambda'<\lambda} F^{\lambda'}V$; \item[(iii)] $F^\lambda V=0$ for $\lambda\gg 0$; \item[(iv)] $F^\lambda V=V$ for $\lambda\ll 0$. \end{itemize} A \emph{$\mathbb{Z}$-filtration} is a filtration $F^\bullet V$ such that $F^\lambda V=F^{\lceil\lambda\rceil} V$ for $\lambda\in\mathbb{R}$. Equivalently, it is a family of subspaces $(F^\lambda V)_{\lambda\in\mathbb{Z}}$ satisfying (i), (iii) and (iv) above. With these conventions, filtrations are in one-to-one correspondence with non-Archimedean norms on $V$ compatible with the trivial absolute value on $k$, \ie functions $\|\cdot\|\colon V\to\mathbb{R}_+$ such that \begin{itemize} \item[(i)] $\|s+s'\|\le\max\left\{\|s\|,\|s'\|\right\}$ for all $s,s'\in V$; \item[(ii)] $\|c s\|=|c|_0\cdot\|s\|=\|s\|$ for all $s\in V$ and $c\in k^*$; \item[(iii)] $\|s\|=0\Longleftrightarrow s=0$. \end{itemize} The correspondence is given by \begin{equation*} -\log \|s\|=\sup\left\{\lambda\in\mathbb{R}\mid s\in F^\lambda V\right\} \quad\text{and}\quad F^\lambda V=\left\{s\in V\mid \|s\|\le e^{-\lambda}\right\}. \end{equation*} The \emph{successive minima} of the filtration $F^\bullet V$ are the decreasing sequence $$ \lambda_{\max}=\lambda_1\ge\dots\ge\lambda_N=\lambda_{\min} $$ where $N=\dim V$, defined by $$ \lambda_j=\max\left\{\lambda\in\mathbb{R}\mid\dim F^\lambda V\ge j\right\}. $$ From the point of view of the norm, they are indeed the analogues of the (logarithmic) successive minima in Minkowski's geometry of numbers. Choosing a basis $(s_j)$ compatible with the flag $F^{\lambda_1} V\subset\dots\subset F^{\lambda_N}V$ diagonalizes the associated norm $\|\cdot\|$, in the sense that $$ \|\sum c_i s_i\|=\max|c_i|_0 e^{-\lambda_i}.
$$ \medskip Next let $R:=\bigoplus_{m\in\mathbb{N}} R_m$ be a graded $k$-algebra with finite dimensional graded pieces $R_m$. A filtration $F^\bullet R$ of $R$ is defined as the data of a filtration $F^\bullet R_m$ for each $m$, satisfying $$ F^\lambda R_m\cdot F^{\lambda'}R_{m'}\subset F^{\lambda+\lambda'}R_{m+m'} $$ for all $\lambda,\lambda'\in\mathbb{R}$ and $m,m'\in\mathbb{N}$. The data of $F^\bullet R$ is equivalent to the data of a non-Archimedean submultiplicative norm $\|\cdot\|$ on $R$, \ie a non-Archimedean norm $\|\cdot\|_m$ as above on each $R_m$, satisfying $$ \|s\cdot s'\|_{m+m'}\le\|s\|_m\|s'\|_{m'} $$ for all $s\in R_m$, $s'\in R_{m'}$. We will use the following terminology. \begin{defi}\label{defi:fingen} We say that a $\mathbb{Z}$-filtration $F^\bullet R$ of a graded algebra $R$ is \emph{finitely generated} if the bigraded algebra $$ \bigoplus_{(\lambda,m)\in\mathbb{Z}\times\mathbb{N}} F^\lambda R_m $$ is finitely generated over $k$. \end{defi} The condition equivalently means that the graded $k[t]$-algebra $$ \bigoplus_{m\in\mathbb{N}}\left(\bigoplus_{\lambda\in\mathbb{Z}}t^{-\lambda}F^\lambda R_m\right) $$ is finitely generated. \subsection{The Rees construction}\label{sec:reesfiltr} We review here a classical construction due to Rees, which yields a geometric interpretation of $\mathbb{Z}$-filtrations. Start with a $\mathbb{G}_m$-linearized vector bundle $\mathcal{V}$ on $\mathbb{A}^1$, and set $V=\mathcal{V}_1$. The weight decomposition $$ H^0(\mathbb{A}^1,\mathcal{V})=\bigoplus_{\lambda\in\mathbb{Z}} H^0(\mathbb{A}^1,\mathcal{V})_\lambda $$ yields a $\mathbb{Z}$-filtration $F^\bullet V$, with $F^\lambda V$ defined as the image of the weight-$\lambda$ part of $H^0(\mathbb{A}^1,\mathcal{V})$ under the restriction map $H^0(\mathbb{A}^1,\mathcal{V})\to V$. 
Since $t$ has weight $-1$ with respect to the $\mathbb{G}_m$-action on $\mathbb{A}^1$, multiplication by $t$ induces an injection $F^{\lambda+1} V\subset F^\lambda V$, so that this is indeed a decreasing filtration. Conversely, consider a $\mathbb{Z}$-filtration $F^\bullet V$ of a $k$-vector space $V$. Then $\bigoplus_{\lambda\in\mathbb{Z}} t^{-\lambda}F^\lambda V$ is a torsion free, finitely generated $k[t]$-module. It can thus be written as the space of global sections of a unique vector bundle $\mathcal{V}$ on $\mathbb{A}^1=\Spec k[t]$. The grading provides a $\mathbb{G}_m$-linearization of $\mathcal{V}$, and the corresponding weight spaces are given by $H^0(\mathbb{A}^1,\mathcal{V})_\lambda\simeq t^{-\lambda} F^\lambda V$. \begin{lem}\label{lem:graded} In the above notation, we have a $\mathbb{G}_m$-equivariant vector bundle isomorphism, \begin{equation}\label{equ:reesgen} \mathcal{V}|_{\mathbb{A}^1\setminus\{0\}}\simeq V\times\left(\mathbb{A}^1\setminus\{0\}\right) \end{equation} as well as \begin{equation}\label{equ:reessp} \mathcal{V}_0\simeq\mathrm{Gr}^F_\bullet V=\bigoplus_{\lambda\in\mathbb{Z}}F^\lambda V/F^{\lambda+1}V. \end{equation} \end{lem} Intuitively, this says that $\mathcal{V}$ may be thought of as a way to degenerate the filtration to its graded object. \begin{proof} To see that (\ref{equ:reesgen}) holds, consider the $k$-linear map $\pi\colon H^0(\mathbb{A}^1,\mathcal{V})\to V$ sending $\sum_\lambda t^{-\lambda}v_\lambda$ to $\sum_\lambda v_\lambda$. This map is surjective since $F^\lambda V=V$ for $\lambda\ll0$. If $\sum_\lambda t^{-\lambda}v_\lambda$ lies in the kernel, then $v_\lambda=w_{\lambda+1}-w_\lambda$ for all $\lambda$, where $w_\lambda=-\sum_{\mu\ge\lambda}v_\mu\in F^\lambda V$. Conversely, any element of the form $\sum_\lambda t^{-\lambda}(w_{\lambda+1}-w_\lambda)$, where $w_\lambda\in F^\lambda V$, is in the kernel of $\pi$, and the set of such elements is equal to $(t-1)H^0(\mathbb{A}^1,\mathcal{V})$. 
Thus $\pi$ induces an isomorphism between $\mathcal{V}_1=H^0(\mathbb{A}^1,\mathcal{V})/(t-1)H^0(\mathbb{A}^1,\mathcal{V})$ and $V$, which induces~\eqref{equ:reesgen} using the $\mathbb{G}_m$-action. The proof of (\ref{equ:reessp}) is similar. \end{proof} Using this, it is easy to verify that the two constructions above are inverse to each other, and actually define an equivalence of categories between $\mathbb{Z}$-filtered, finite dimensional vector spaces $F^\bullet V$ and $\mathbb{G}_m$-linearized vector bundles $\mathcal{V}$ on $\mathbb{A}^1$, related by the $\mathbb{G}_m$-equivariant isomorphism $$ H^0(\mathbb{A}^1,\mathcal{V})\simeq\bigoplus_{\lambda\in\mathbb{Z}} t^{-\lambda} F^\lambda V. $$ Every filtered vector space admits a basis compatible with the filtration, and is thus (non-canonically) isomorphic to its graded object. On the vector bundle side, this yields (compare~\cite[Lemma 2]{Don3}): \begin{prop}\label{prop:reesfiltr} Every $\mathbb{G}_m$-linearized vector bundle $\mathcal{V}$ on $\mathbb{A}^1$ is $\mathbb{G}_m$-equivariantly trivial, \ie $\mathbb{G}_m$-isomorphic to $\mathcal{V}_0\times\mathbb{A}^1$ with $\mathcal{V}_0$ the fiber at $0$. \end{prop} For line bundles, the trivialization admits the following particularly simple description. \begin{cor}\label{cor:trivline} Let $\mathcal{L}$ be a $\mathbb{G}_m$-linearized line bundle on $\mathbb{A}^1$, and let $\lambda\in\mathbb{Z}$ be the weight of the $\mathbb{G}_m$-action on $\mathcal{L}_0$. For each non-zero $v\in\mathcal{L}_1$, setting $s(t):=t^{-\lambda}(t\cdot v)$ defines a weight-$\lambda$ trivialization of $\mathcal{L}$. \end{cor} \begin{proof} While this is a special case of the above construction, it can be directly checked as follows. The section $s'\in H^0(\mathbb{A}^1\setminus\{0\},\mathcal{L})$ defined by $s'(t):=t\cdot v$ defines a rational section of $\mathcal{L}$. 
If we set $\mu:=\ord_0(s')$, then $v_0:=\lim_{z\to 0}z^{-\mu} s'(z)$ is a non-zero element of $\mathcal{L}_0$, which satisfies $$ t\cdot v_0=\lim_{z\to 0}z^{-\mu}\left((t z)\cdot v\right)=t^{\mu}\lim_{z\to 0}(t z)^{-\mu}\left((t z)\cdot v\right)=t^{\mu}v_0. $$ It follows that $\mu$ coincides with the weight $\lambda$ of the $\mathbb{G}_m$-action on $\mathcal{L}_0$. \end{proof} We introduce the following piece of terminology. \begin{defi}\label{defi:weight} Let $W=\bigoplus_{\lambda\in\mathbb{Z}} W_\lambda$ be the weight decomposition of a $\mathbb{G}_m$-module. The \emph{weight measure} of $W$ is defined as the probability measure $$ \mu_W:=\frac{1}{\dim W}\sum_{\lambda\in\mathbb{Z}}(\dim W_\lambda)\d_\lambda. $$ \end{defi} For later use, we record the following immediate consequence of (\ref{equ:reessp}). \begin{lem}\label{lem:tail} Let $\mathcal{V}$ be a $\mathbb{G}_m$-linearized vector bundle over $\mathbb{A}^1$, and $F^\bullet V$ the corresponding $\mathbb{Z}$-filtration of the fiber $V=\mathcal{V}_1$. The weight measure $\mu_{\mathcal{V}_0}$ of the $\mathbb{G}_m$-module $\mathcal{V}_0$ then satisfies $$ \mu_{\mathcal{V}_0}\{x\ge\lambda\}=\frac{\dim F^{\lceil\lambda\rceil} V}{\dim V} $$ for all $\lambda\in\mathbb{R}$. \end{lem} \subsection{Valuations}\label{sec:val} Let $K$ be a finitely generated field extension of $k$, with $n:=\trdeg K/k$, so that $K$ may be realized as the function field of a (normal, projective) $n$-dimensional variety. Since we only consider real-valued valuations, we simply call a \emph{valuation} on $K$ any group homomorphism $v:K^*\to(\mathbb{R},+)$ such that $v(f+g)\ge\min\left\{v(f),v(g)\right\}$ and $v|_{k^*}\equiv 0$~\cite{ZS}. It is convenient to set $v(0)=+\infty$. The \emph{trivial valuation} $v_\triv$ is defined by $v_\triv(f)=0$ for all $f\in K^*$. To each valuation $v$ is attached the following list of invariants. The \emph{valuation ring} of $v$ is $\mathcal{O}_v:=\left\{f\in K\mid v(f)\ge 0\right\}$.
This is a local ring with maximal ideal $\mathfrak{m}_v:=\left\{f\in K\mid v(f)>0\right\}$, and the \emph{residue field} of $v$ is $k(v):=\mathcal{O}_v/\mathfrak{m}_v$. The \emph{transcendence degree} of $v$ (over $k$) is $\trdeg(v):=\trdeg k(v)/k$. Finally, the \emph{value group} of $v$ is $\Gamma_v:=v(K^*)\subset\mathbb{R}$, and the \emph{rational rank} of $v$ is $\ratrk(v):=\dim_\mathbb{Q}\left(\Gamma_v\otimes\mathbb{Q}\right)$. If $k\subset K'\subset K$ is an intermediate field extension, $v$ is a valuation on $K$ and $v'$ is its restriction to $K'$, the Abhyankar-Zariski inequality states that \begin{equation}\label{equ:Abhy} \trdeg(v)+\ratrk(v)\le\trdeg(v')+\ratrk(v')+\trdeg K/K'. \end{equation} Taking $K'=k$, we get $\trdeg(v)+\ratrk(v)\le n$, and we say that $v$ is an \emph{Abhyankar valuation} if equality holds; such valuations can be geometrically characterized, see~\cite{ELS,KK05,JM}. In particular, the trivial valuation is Abhyankar; it is the unique valuation with transcendence degree $n$. We say that $v$ is \emph{divisorial} if $\ratrk(v)=1$ and $\trdeg(v)=n-1$. By a theorem of Zariski, this is the case iff there exists a normal projective variety $Y$ with $k(Y)=K$ and a prime divisor $F$ of $Y$ such that $v=c\ord_F$ for some $c>0$. We then have $k(v)=k(F)$ and $\Gamma_v=c\mathbb{Z}$. If $X$ is a variety with $k(X)=K$, a valuation $v$ is \emph{centered on $X$} if there exists a scheme point $\xi\in X$ such that $v\ge 0$ on the local ring $\mathcal{O}_{X,\xi}$ and $v>0$ on its maximal ideal. We also say $v$ is a valuation on $X$ in this case. By the valuative criterion of separatedness, the point $\xi$ is unique, and is called the \emph{center} of $v$ on $X$. If $X$ is proper, the valuative criterion of properness guarantees that any $v$ is centered on $X$. 
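For concreteness, the following standard example illustrates these invariants; the notation $v_{a,b}$ is introduced here for illustration only. \begin{exam} Let $K=k(x,y)$, so that $n=2$. Given $a,b>0$, the \emph{monomial valuation} $v_{a,b}$ is defined on $k[x,y]$ by $$ v_{a,b}\left(\sum_{i,j}c_{ij}x^iy^j\right):=\min\left\{ai+bj\mid c_{ij}\ne 0\right\}, $$ and extended to $K^*$ by $v_{a,b}(f/g):=v_{a,b}(f)-v_{a,b}(g)$; its center on $\mathbb{A}^2$ is the origin. The value group is $\Gamma_{v_{a,b}}=a\mathbb{Z}+b\mathbb{Z}$. If $a,b$ are linearly independent over $\mathbb{Q}$, then $\ratrk(v_{a,b})=2$ and $\trdeg(v_{a,b})=0$, while if $b/a\in\mathbb{Q}$, then $\ratrk(v_{a,b})=1$ and $\trdeg(v_{a,b})=1$; in both cases $v_{a,b}$ is Abhyankar. For instance, $v_{1,1}=\ord_E$ with $E$ the exceptional divisor of the blow-up of $\mathbb{A}^2$ at the origin, so that $v_{1,1}$ is divisorial. \end{exam}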
If a divisorial valuation $v$ is centered on $X$, then $v=c\ord_F$ where $F$ is a prime divisor on a normal variety $Y$ with a proper birational morphism $\mu\colon Y\to X$; the center of $v$ on $X$ is then the generic point of $\mu(F)$. For any valuation $v$ centered on $X$, we can make sense of $v(s)\in\mathbb{R}_+$ for a (non-zero) section $s\in H^0(X,L)$ of a line bundle $L$ on $X$, by trivializing $L$ at the center $\xi$ of $v$ on $X$ and evaluating $v$ on the local function corresponding to $s$ in this trivialization. Since any two such trivializations differ by a unit at $\xi$, $v(s)$ is well-defined, and $v(s)>0$ iff $s(\xi)=0$. Similarly, given an ideal $\mathfrak{a}\subset\mathcal{O}_X$ we set $$ v(\mathfrak{a})=\inf\{v(f)\mid f\in\mathfrak{a}_\xi\}. $$ It is in fact enough to take the min over any finite set of generators of $\mathfrak{a}_\xi$. We also set $v(Z):=v(\mathfrak{a})$, where $Z$ is the closed subscheme defined by $\mathfrak{a}$. \medskip Finally, for later use we record the following simple variant of~\cite[Theorem 10.1.6]{HS}. \begin{lem}\label{lem:irredundant} Assume that $X=\Spec A$ is affine. Let $S$ be a finite set of valuations on $X$, which is irredundant in the sense that for each $v\in S$ there exists $f\in A$ with $v(f)<v'(f)$ for all $v'\in S\setminus\{v\}$. Then $S$ is uniquely determined by the function $h_S(f):=\min_{v\in S} v(f)$. \end{lem} \begin{proof} Let $S$ and $T$ be two irredundant finite sets of valuations with $h_S=h_T=:h$. For each $v\in S$, $w\in T$ set $C_v:=\left\{f\in A\mid h(f)=v(f)\right\}$ and $D_w:=\left\{f\in A\mid h(f)=w(f)\right\}$, and observe that these sets are stable under finite products. For each $v\in S$, we claim that there exists $w\in T$ with $C_v\subset D_w$. Otherwise, for each $w$ there exists $f_w\in C_v\setminus D_w$, \ie $v(f_w)=h(f_w)<w(f_w)$. 
Setting $f=\prod_{w\in T} f_w$, we get for each $w'\in T$ $$ w'(f)=\sum_{w\in T} w'(f_w)>\sum_{w\in T} h(f_w)=\sum_{w\in T} v(f_w)=v(f)\ge h(f), $$ and taking the min over $w'\in T$ yields a contradiction. We next claim that $C_v\subset D_w$ implies that $v=w$. This will prove that $S\subset T$, and hence $S=T$ by symmetry. Note first that $v(f)=h(f)=w(f)$ for each $f\in C_v$. Now choose $g_v\in A$ with $v(g_v)<v'(g_v)$ for all $v'\ne v$ in $S$, so that $g_v\in C_v\subset D_w$. For each $f\in A$, we then have $v(g_v^m f)<v'(g_v^m f)$ for $m\gg 1$, and hence $g_v^m f\in C_v\subset D_w$. It follows that $$ m v(g_v)+v(f)=v(g_v^m f)=w(g_v^m f)=m w(g_v)+w(f)=m v(g_v)+w(f), $$ and hence $v(f)=w(f)$. \end{proof} \subsection{Integral closure and Rees valuations}\label{sec:rees} We assume in this section that $X$ is a normal variety. Let $Z\subset X$ be a closed subscheme with ideal $\mathfrak{a}\subset\mathcal{O}_X$. On the one hand, the \emph{normalized blow-up} $\pi\colon\widetilde{X}\to X$ along $Z$ is the composition of the blow-up of $Z$ in $X$ with the normalization morphism. On the other hand, the \emph{integral closure} $\overline\mathfrak{a}$ of $\mathfrak{a}$ is the set of elements $f\in\mathcal{O}_X$ satisfying a monic equation $f^d+a_1f^{d-1}+\dots+a_d=0$ with $a_j\in\mathfrak{a}^j$. The following well-known connection between normalized blow-ups and integral closures shows in particular that $\overline\mathfrak{a}$ is a coherent ideal sheaf. \begin{lem}\label{lem:normblow} Let $Z\subset X$ be a closed subscheme, with ideal $\mathfrak{a}\subset\mathcal{O}_X$, and let $\pi\colon\widetilde{X}\to X$ be the normalized blow-up along $Z$.
Then $D:=\pi^{-1}(Z)$ is an effective Cartier divisor with $-D$ $\pi$-ample, and we have for each $m\in\mathbb{N}$: \begin{itemize} \item[(i)] $\mathcal{O}_{\widetilde{X}}(-mD)$ is $\pi$-globally generated; \item[(ii)] $\pi_*\mathcal{O}_{\widetilde{X}}(-mD)=\overline{\mathfrak{a}^m}$; \item[(iii)] $\mathcal{O}_{\widetilde{X}}(-mD)=\mathcal{O}_{\widetilde{X}}\cdot\overline{\mathfrak{a}^m}=\mathcal{O}_{\widetilde{X}}\cdot\mathfrak{a}^m$; \end{itemize} In particular, $\pi$ coincides with the normalized blow-up of $\overline{\mathfrak{a}}$, and also with the (usual) blow-up of $\overline{\mathfrak{a}^m}$ for any $m\gg 1$. \end{lem} We recall the brief argument for the convenience of the reader. \begin{proof} Let $\mu\colon X'\to X$ be the blow-up along $Z$, so that $\mu^{-1}(Z)=D'$ is a Cartier divisor on $X'$ with $-D'$ $\mu$-very ample, and hence $\mathcal{O}_{X'}(-mD')$ $\mu$-globally generated for all $m\in\mathbb{N}$. Denoting by $\nu\colon\widetilde{X}\to X'$ the normalization morphism, we have $\nu^*D'=D$. Since $\nu$ is finite, it follows that $-D$ is $\pi$-ample and satisfies (i), which reads $\mathcal{O}_{\widetilde{X}}(-mD)=\mathcal{O}_{\widetilde{X}}\cdot\mathfrak{a}_m$ with $$ \mathfrak{a}_m:=\pi_*\mathcal{O}_{\widetilde{X}}(-mD). $$ It therefore remains to establish (ii). By normality of $\widetilde{X}$, $\mathcal{O}_{\widetilde{X}}(-mD)$ is integrally closed, hence so is $\mathfrak{a}_m$. As $\mathfrak{a}\subset\mathfrak{a}_1$, we have $\mathfrak{a}^m\subset\mathfrak{a}_1^m\subset\mathfrak{a}_m$, and hence $\overline{\mathfrak{a}^m}\subset\mathfrak{a}_m$. The reverse inclusion requires more work; we reproduce the elegant geometric argument of~\cite[II.11.1.7]{Laz}. Fix $m\ge 1$. As the statement is local over $X$, we may choose a system of generators $(f_1,\dots,f_p)$ for $\mathfrak{a}^m$. 
This defines a surjection $\mathcal{O}_X^{\oplus p}\to\mathfrak{a}^m$, which induces, after pull-back and twisting by $-lD$, a surjection $$ \mathcal{O}_{\widetilde{X}}(-lD)^{\oplus p}\to\mathcal{O}_{\widetilde{X}}\left(-(m+l)D\right)=\mathfrak{a}^m\cdot\mathcal{O}_{\widetilde{X}}(-lD) $$ for any $l\ge 1$. Since $-D$ is $\pi$-ample, Serre vanishing implies that the induced map $$ \mathfrak{a}_{l}^{\oplus p}=\pi_*\mathcal{O}_{\widetilde{X}}\left(-lD\right)^{\oplus p}\to\mathfrak{a}_{m+l}=\pi_*\mathcal{O}_{\widetilde{X}}\left(-(m+l)D\right) $$ is also surjective for $l\gg 1$, \ie $\mathfrak{a}^m\cdot\mathfrak{a}_l=\mathfrak{a}_{m+l}$. But then $\mathfrak{a}_m\cdot\mathfrak{a}_l\subset\mathfrak{a}_{m+l}=\mathfrak{a}^m\cdot\mathfrak{a}_l$, so multiplication by each element of $\mathfrak{a}_m$ maps the finitely generated $\mathcal{O}_X$-module $\mathfrak{a}_l$ into $\mathfrak{a}^m\cdot\mathfrak{a}_l$, and the usual determinant trick therefore yields $\mathfrak{a}_m\subset\overline{\mathfrak{a}^m}$. \end{proof} \begin{defi}\label{defi:rees} Let $Z\subset X$ be a closed subscheme with ideal $\mathfrak{a}$, and let $\pi\colon\widetilde{X}\to X$ be the normalized blow-up of $Z$, with $D:=\pi^{-1}(Z)$. The \emph{Rees valuations} of $Z$ (or $\mathfrak{a}$) are the divisorial valuations $v_E=\frac{\ord_E}{\ord_E(D)}$, where $E$ runs over the irreducible components of $D$.\end{defi} Note that $v_E(Z)=v_E(\mathfrak{a})=v_E(D)=1$ for all $E$. We now show that the present definition of Rees valuations coincides with the standard one in valuation theory (see for instance~\cite[Chapter 5]{HS}). The next result is a slightly more precise version of~\cite[Theorem 2.2.2, (3)]{HS}. \begin{thm}\label{thm:rees} The set of Rees valuations of $\mathfrak{a}$ is the unique finite set $S$ of valuations such that: \begin{itemize} \item[(i)] $\overline{\mathfrak{a}^m}=\bigcap_{v\in S}\left\{f\in\mathcal{O}_X\mid v(f)\ge m \right\}$ for all $m\in\mathbb{N}$; \item[(ii)] $S$ is minimal with respect to (i).
\end{itemize} \end{thm} \begin{proof}[Proof of Theorem~\ref{thm:rees}] For each finite set of valuations $S$, set $h_S(f):=\min_{v\in S}v(f)$. Using that $h_S(f^m)=m h_S(f)$, it is straightforward to check that any two sets $S$, $S'$ satisfying (i) have $h_S=h_{S'}$. If $S$ and $S'$ further satisfy (ii), then they are irredundant in the sense of Lemma~\ref{lem:irredundant}, which therefore proves that $S=S'$. It remains to check that the set $S$ of Rees valuations of $Z$ satisfies (i) and (ii). The first property is merely a reformulation of Lemma~\ref{lem:normblow}. Now pick an irreducible component $E$ of $D$. It defines a fractional ideal $\mathcal{O}_{\widetilde{X}}(E)$. Since $-D$ is $\pi$-ample, $\mathcal{O}_{\widetilde{X}}(-mD)$ and $\mathcal{O}_{\widetilde{X}}(-mD)\cdot\mathcal{O}_{\widetilde{X}}(E)$ both become $\pi$-globally generated for $m\gg 1$. Since $\mathcal{O}_{\widetilde{X}}(-mD)$ is strictly contained in $\mathcal{O}_{\widetilde{X}}(-mD)\cdot\mathcal{O}_{\widetilde{X}}(E)$, it follows that $\overline{\mathfrak{a}^m}=\pi_*\mathcal{O}_{\widetilde{X}}(-mD)$ is strictly contained in $$ \pi_*\left(\mathcal{O}_{\widetilde{X}}(-mD)\cdot\mathcal{O}_{\widetilde{X}}(E)\right)\subset\bigcap_{E'\ne E}\left\{f\in\mathcal{O}_X\mid v_{E'}(f)\ge m\right\}, $$ which proves (ii). \end{proof} \begin{exam}\label{ex:rees} The Rees valuations of an effective Weil divisor $D=\sum_{i=1}^m a_i D_i$ on a normal variety $X$ are given by $v_i:=\frac{1}{a_i}\ord_{D_i}$, $1\le i\le m$. \end{exam} We end this section on Rees valuations with the following result. \begin{prop}\label{prop:reesexc} Let $\pi:Y\to X$ be a projective birational morphism between normal varieties, and assume that $Y$ admits a Cartier divisor that is both $\pi$-exceptional and $\pi$-ample. 
Then $\pi$ is isomorphic to the blow-up of $X$ along a closed subscheme $Z$ of codimension at least $2$, and the divisorial valuations $\ord_F$ defined by the $\pi$-exceptional prime divisors $F$ on $Y$ coincide, up to scaling, with the Rees valuations of $Z$. \end{prop} This is indeed a direct consequence of the following well-known facts. \begin{lem}\label{lem:ample} Let $\pi:Y\to X$ be a projective birational morphism between varieties with $X$ normal. If $G$ is a $\pi$-exceptional, $\pi$-ample Cartier divisor, then: \begin{itemize} \item[(i)] $-G$ is effective; \item[(ii)] $\supp G$ coincides with the exceptional locus of $\pi$; \item[(iii)] for $m$ divisible enough, $\pi$ is isomorphic to the blow-up of the ideal $\mathfrak{a}_m:=\pi_*\mathcal{O}_Y(mG)$, whose zero locus has codimension at least $2$. \end{itemize} Conversely, the blow-up of $X$ along a closed subscheme of codimension at least $2$ admits a $\pi$-exceptional, $\pi$-ample Cartier divisor. \end{lem} Assertion (i) is a special (=ample) case of the Negativity Lemma~\cite[Lemma 3.39]{KM}. The simple direct argument given here is taken from the alternative proof of the Negativity Lemma provided in~\cite[Proposition 2.12]{BdFF}. \begin{proof} Set $\mathfrak{a}_m:=\pi_*\mathcal{O}_Y(mG)$, viewed as a fractional ideal on $X$. Since $G$ is $\pi$-exceptional, every rational function in $\pi_*\mathcal{O}_Y(mG)$ is regular in codimension $1$, and $\mathfrak{a}_m$ is thus an ideal whose zero locus has codimension at least $2$, by the normality of $X$. If we choose $m\gg 1$ such that $\mathcal{O}_Y(mG)$ is $\pi$-globally generated, then we have $\mathcal{O}_Y(mG)=\mathcal{O}_Y\cdot\mathfrak{a}_m\subset\mathcal{O}_Y$, which proves (i). By assumption, $\supp G$ is contained in the exceptional locus $E$ of $\pi$. Since $X$ is normal, $\pi$ has connected fibers by Zariski's main theorem, so $E$ is the union of all projective curves $C\subset Y$ that are mapped to a point of $X$. 
Any such curve satisfies $G\cdot C>0$ by the relative ampleness of $G$, and hence $C\subset \supp G$ since $-G$ is effective. Thus $\supp G=E$, proving~(ii). Finally, the relative ampleness of $G$ implies that the $\mathcal{O}_X$-algebra $\bigoplus_{m\in\mathbb{N}}\mathfrak{a}_m$ is finitely generated, and its relative $\Proj$ over $X$ is isomorphic to $Y$. The finite generation implies $\bigoplus_{l\in\mathbb{N}}\mathfrak{a}_{ml}=\bigoplus_{l\in\mathbb{N}}\mathfrak{a}_m^l$ for all $m$ divisible enough, and applying $\Proj_X$ shows that $Y$ is isomorphic to the blow-up of $X$ along $\mathfrak{a}_m$, which proves~(iii). \end{proof} \subsection{Boundaries and log discrepancies}\label{sec:bound} Let $X$ be a normal variety. In the Minimal Model Program (MMP) terminology, a \emph{boundary} $B$ on $X$ is a $\mathbb{Q}$-Weil divisor (\ie a codimension one cycle with rational coefficients) such that $K_{(X,B)}:=K_X+B$ is $\mathbb{Q}$-Cartier. Alternatively, one says that $(X,B)$ is a \emph{pair} to describe this condition, and $K_{(X,B)}$ is called the \emph{log canonical divisor} of this pair. In particular, $0$ is a boundary iff $X$ is $\mathbb{Q}$-Gorenstein. If $(X',B')$ and $(X,B)$ are pairs and $f\colon X'\to X$ is a morphism, then we set $K_{(X',B')/(X,B)}=K_{(X',B')}-f^*K_{(X,B)}$. To any divisorial valuation $v$ on $X$ is associated its \emph{log discrepancy} with respect to the pair $(X,B)$, denoted by $A_{(X,B)}(v)$ and defined as follows. For any proper birational morphism $\mu\colon Y\to X$, with $Y$ normal, and any prime divisor $F$ of $Y$ such that $v=c\ord_F$, we set $$ A_{(X,B)}(v):=c\left(1+\ord_F\left(K_{Y/(X,B)}\right)\right) $$ with $K_{Y/(X,B)}:=K_{Y}-\mu^*K_{(X,B)}$. This is well-defined (\ie independent of the choice of $\mu$), by compatibility of canonical divisor classes under push-forward.
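The following elementary computations may help fix ideas. If $X$ is smooth and $B=0$, then for any prime divisor $D$ on $X$ we may take $\mu=\mathrm{id}$ and $F=D$ in the above definition, so that $A_{(X,0)}(\ord_D)=1$. If instead $\mu\colon Y\to X$ is the blow-up of a closed point of the smooth $n$-dimensional variety $X$, with exceptional divisor $E$, then $K_{Y/X}=(n-1)E$, and hence $$ A_{(X,0)}(\ord_E)=1+\ord_E\left(K_{Y/X}\right)=n. $$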
By construction, $A_{(X,B)}$ is homogeneous with respect to the natural action of $\mathbb{R}_+^*$ on divisorial valuations by scaling, \ie $A_{(X,B)}(c\,v)=c A_{(X,B)}(v)$ for all $c>0$. As a real valued function on $k(X)^*$, $c\,v$ converges pointwise to the trivial valuation $v_{\triv}$ as $c\to 0$. It is thus natural to set $A_{(X,B)}(v_{\triv}):=0$. The pair $(X,B)$ is \emph{sublc} if $A_{(X,B)}(v)\ge0$ for all divisorial valuations $v$. It is \emph{subklt} if the inequality is strict. If $B$ is furthermore effective, then $(X,B)$ is \emph{lc} (or log canonical) and \emph{klt} (Kawamata log terminal), respectively. If $\mu\colon X'\to X$ is a birational morphism and $B'$ is defined by $K_{(X',B')}=\mu^*K_{(X,B)}$ and $\mu_*B'=B$, then $A_{(X',B')}=A_{(X,B)}$, so $(X',B')$ is subklt (resp.\ sublc) iff $(X,B)$ is subklt (resp.\ sublc), but the corresponding equivalence may fail for klt or lc pairs, since $B'$ is not necessarily effective even when $B$ is. If $(X,B)$ is a pair and $D$ is an effective $\mathbb{Q}$-Cartier divisor on $X$, then we define the \emph{log canonical threshold} of $D$ with respect to $(X,B)$ as \begin{equation*} \lct_{(X,B)}(D):=\sup\left\{t\in\mathbb{Q}\mid(X,B+tD)\ \text{is subklt}\right\}, \end{equation*} with the convention $\lct_{(X,B)}(D)=-\infty$ if $(X,B+tD)$ is not subklt for any $t$. Assume $\lct_{(X,B)}(D)>-\infty$. Since $A_{(X,B+tD)}(v)=A_{(X,B)}(v)-t v(D)$ for all divisorial valuations $v$ on $X$, we have \begin{equation*} \lct_{(X,B)}(D)=\inf_v\frac{A_{(X,B)}(v)}{v(D)}, \end{equation*} where the infimum is taken over $v$ with $v(D)>0$. When $k$ has characteristic zero, we can compute $\lct_{(X,B)}(D)$ using resolution of singularities. Pick a birational morphism $\mu\colon X'\to X$, with $X'$ a smooth projective variety, such that if $B'$ and $D'$ are defined by $K_{(X',B')}=\mu^*K_{(X,B)}$, $\mu_*B'=B$ and $D':=\mu^*D$, then the union of the supports of $B'$ and $D'$ has simple normal crossings. 
Then $\lct_{(X,B)}(D)=\lct_{(X',B')}(D')=\min_iA_{(X',B')}(\ord_{E_i})/\ord_{E_i}(D')$, where $E_i$ runs over the irreducible components of $D'$. \section{Test configurations}\label{sec:test} In what follows, $X$ is a projective scheme over $k$, and $L$ is a $\mathbb{Q}$-line bundle on $X$. Most often, $L$ will be ample, but it is sometimes useful to consider the general case. Similarly, it will be convenient to allow some flexibility in the definition of test configurations. \begin{defi}\label{defi:test1} A test configuration $\mathcal{X}$ for $X$ consists of the following data: \begin{itemize} \item[(i)] a flat and proper morphism of schemes $\pi\colon\mathcal{X}\to\mathbb{A}^1$; \item[(ii)] a $\mathbb{G}_m$-action on $\mathcal{X}$ lifting the canonical action on $\mathbb{A}^1$; \item[(iii)] an isomorphism $\mathcal{X}_1\simeq X$. \end{itemize} \end{defi} By Proposition~\ref{prop:basic} below, $\mathcal{X}$ is automatically a variety (\ie reduced and irreducible) when $X$ is. The central fiber $\mathcal{X}_0:=\pi^{-1}(0)$ is an effective Cartier divisor on $\mathcal{X}$ by the flatness of~$\pi$. Given test configurations $\mathcal{X}$, $\mathcal{X}'$ for $X$, the isomorphism $\mathcal{X}'_1\simeq X\simeq\mathcal{X}_1$ induces a canonical $\mathbb{G}_m$-isomorphism $\mathcal{X}'\setminus\mathcal{X}'_0\simeq\mathcal{X}\setminus\mathcal{X}_0$. We say that $\mathcal{X}'$ \emph{dominates} $\mathcal{X}$ if this isomorphism extends to a morphism $\mathcal{X}'\to\mathcal{X}$. When it is an isomorphism, we abuse notation slightly and write $\mathcal{X}'=\mathcal{X}$ (which is reasonable given that the isomorphism is canonical). Any two test configurations $\mathcal{X}_1$, $\mathcal{X}_2$ can be dominated by a third, for example the graph of $\mathcal{X}_1\dashrightarrow\mathcal{X}_2$. 
\begin{defi}\label{defi:test2} A test configuration $(\mathcal{X},\mathcal{L})$ for $(X,L)$ consists of a test configuration $\mathcal{X}$ for $X$, together with the following data: \begin{itemize} \item[(iv)] a $\mathbb{G}_m$-linearized $\mathbb{Q}$-line bundle $\mathcal{L}$ on $\mathcal{X}$; \item[(v)] an isomorphism $(\mathcal{X}_1,\mathcal{L}_1)\simeq (X,L)$ extending the one in~(iii). \end{itemize} \end{defi} By a $\mathbb{G}_m$-linearized $\mathbb{Q}$-line bundle $\mathcal{L}$ as in~(iv), we mean that $r\mathcal{L}$ is an actual $\mathbb{G}_m$-linearized line bundle for some $r\in\mathbb{Z}_{>0}$ that is not part of the data. The isomorphism in (v) then means $(\mathcal{X},r\mathcal{L}_1)\simeq(X,rL)$. We say that $(\mathcal{X},\mathcal{L})$ is ample, semiample,\dots (resp.~normal, $S_1$,\dots) when $\mathcal{L}$ (resp.~$\mathcal{X}$) has the corresponding property. A \emph{pull-back} of a test configuration $(\mathcal{X},\mathcal{L})$ for $(X,L)$ is a test configuration $(\mathcal{X}',\mathcal{L}')$ where $\mathcal{X}'$ dominates $\mathcal{X}$ and $\mathcal{L}'$ is the pull-back of $\mathcal{L}$. For each $c\in\mathbb{Q}$, the $\mathbb{G}_m$-linearization of the $\mathbb{Q}$-line bundle $\mathcal{L}$ may be twisted by $ t^c$, in the sense that the $\mathbb{G}_m$-linearization of $r\mathcal{L}$ is twisted by the character $ t^{rc}$ with $r$ divisible enough. The resulting test configuration can be identified with $(\mathcal{X},\mathcal{L}+c\mathcal{X}_0)$. If $(\mathcal{X},\mathcal{L}_1)$ and $(\mathcal{X},\mathcal{L}_2)$ are test configurations for $(X,L_1)$, $(X,L_2)$, respectively, and $c_1,c_2\in\mathbb{Q}_{>0}$, then $(\mathcal{X},c_1\mathcal{L}_1+c_2\mathcal{L}_2)$ is a test configuration for $(X,c_1L_1+c_2L_2)$. 
If $(\mathcal{X},\mathcal{L})$ is a test configuration for $(X,\mathcal{O}_X)$, then there exist $r\in\mathbb{Z}_{>0}$ and a Cartier divisor $D$ on $\mathcal{X}$ supported on $\mathcal{X}_0$ such that $r\mathcal{L}=\mathcal{O}_{\mathcal{X}}(D)$. \begin{exam}\label{ex:product} Every $\mathbb{G}_m$-action on $X$ induces a diagonal $\mathbb{G}_m$-action on $X\times\mathbb{A}^1$, thereby defining a \emph{product test configuration} $\mathcal{X}$ for $X$. Similarly, a $\mathbb{G}_m$-linearization of $rL$ for some $r\ge 1$ induces a product test configuration $(\mathcal{X},\mathcal{L})$ for $(X,L)$, which is simply $(X,L)\times\mathbb{A}^1$ with diagonal action of $\mathbb{G}_m$. \end{exam} We denote by $X_{\mathbb{A}^1}$ (resp.~$(X_{\mathbb{A}^1},L_{\mathbb{A}^1})$) the product test configuration induced by the trivial $\mathbb{G}_m$-action on $X$ (resp.~$(X,L)$). \begin{exam}\label{ex:defnorm} The \emph{deformation to the normal cone} of a closed subscheme $Z\subset X$ is the blow-up $\rho\colon\mathcal{X}\to X_{\mathbb{A}^1}$ along $Z\times\{0\}$. Thus $\mathcal{X}$ is a test configuration dominating $X_{\mathbb{A}^1}$. By~\cite[Chapter 5]{Ful}, its central fiber splits as $\mathcal{X}_0=E+F$, where $E=\rho^{-1}(Z\times\{0\})$ is the exceptional divisor and $F$ is the strict transform of $X\times\{0\}$, which is isomorphic to the blow-up of $X$ along $Z$. \end{exam} \begin{exam}\label{E201} More generally we can blow up any $\mathbb{G}_m$-invariant ideal $\mathfrak{a}$ on $X\times\mathbb{A}^1$ supported on the central fiber. We discuss this further in \S\ref{FlagIdeals}. \end{exam} \subsection{Scheme theoretic features} Recall that a scheme $Z$ satisfies \emph{Serre's condition $S_k$} iff \begin{equation*} \depth\mathcal{O}_{Z,\xi}\ge\min\{\codim \xi,k\} \quad\text{for every point $\xi\in Z$}. \end{equation*} In particular, $Z$ is $S_1$ iff it has no embedded points.
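For instance, the subscheme $Z=\Spec k[x,y]/(x^2,xy)$ of the affine plane (the line $\{x=0\}$ with an embedded point at the origin) fails to be $S_1$: the maximal ideal $(x,y)$ is an embedded associated prime of $(x^2,xy)$, so that $\depth\mathcal{O}_{Z,0}=0$ while the origin has codimension $1$ in $Z$.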
While we will not use it, one can show that $Z$ is $S_2$ iff it has no embedded points and satisfies the Riemann extension property across closed subsets of codimension at least $2$. On the other hand, $Z$ is \emph{regular in codimension $k$} ($R_k$ for short) iff $\mathcal{O}_{Z,\xi}$ is regular for every point $\xi\in Z$ of codimension at most $k$. Equivalently, $Z$ is $R_k$ iff its singular locus has codimension greater than $k$. Note that $Z$ is $R_0$ iff it is generically reduced. Serre's criterion states that $Z$ is normal iff it is $R_1$ and $S_2$. Similarly, $Z$ is reduced iff it is $R_0$ and $S_1$ (in other words, iff $Z$ is generically reduced and without embedded points). \begin{prop}\label{prop:basic} Let $\mathcal{X}$ be a test configuration for $X$. \begin{itemize} \item[(i)] $\mathcal{X}$ is reduced iff so is $X$. \item[(ii)] $\mathcal{X}$ is $S_2$ iff $X$ is $S_2$ and $\mathcal{X}_0$ is $S_1$ (\ie without embedded points). \item[(iii)] If $X$ is $R_1$ and $\mathcal{X}_0$ is generically reduced (that is, `without multiple components'), then $\mathcal{X}$ is $R_1$. \item[(iv)] If $X$ is normal and $\mathcal{X}_0$ is reduced, then $\mathcal{X}$ is normal. \item[(v)] Every irreducible component $\mathcal{Y}$ (with its reduced structure) of $\mathcal{X}$ is a test configuration for a unique irreducible component $Y$ of $X$. Further, the multiplicities of $\mathcal{X}$ along $\mathcal{Y}$ and those of $X$ along $Y$ are equal. \item[(vi)] $\mathcal{X}$ is a variety iff so is $X$. \end{itemize} \end{prop} Recall that the multiplicity of $X$ along $Y$ is defined as the length of $\mathcal{O}_X$ at the generic point of $Y$. \begin{proof} Flatness of $\pi\colon\mathcal{X}\to\mathbb{A}^1$ implies that $\mathcal{X}_0$ is a Cartier divisor and that every associated (\ie generic or embedded) point of $\mathcal{X}$ belongs to $\mathcal{X}\setminus\mathcal{X}_0$ (cf.~\cite[Proposition III.9.7]{Har}).
The proposition is a simple consequence of this fact and of the isomorphism $\mathcal{X}\setminus\mathcal{X}_0\simeq X\times(\mathbb{A}^1\setminus\{0\})$. More specifically, since $\mathbb{A}^1\setminus\{0\}$ is smooth, $\mathcal{X}\setminus\mathcal{X}_0$ is $R_k$ (resp.~$S_k$) iff $X$ is. Since $\mathcal{X}_0$ is a Cartier divisor, we also have $$ \depth\mathcal{O}_{\mathcal{X}_0,\xi}=\depth\mathcal{O}_{\mathcal{X},\xi}-1 $$ for each $\xi\in\mathcal{X}_0$, so that $\mathcal{X}$ is $S_k$ iff $X$ is $S_k$ and $\mathcal{X}_0$ is $S_{k-1}$. It remains to show that $\mathcal{X}_0$ being generically reduced and $X$ being $R_1$ imply that $\mathcal{X}$ is $R_1$. But every codimension one point $\xi\in\mathcal{X}$ either lies in the open subset $\mathcal{X}\setminus\mathcal{X}_0$, in which case $\mathcal{X}$ is regular at $\xi$, or is a generic point of the Cartier divisor $\mathcal{X}_0$. In the latter case, the closed point of $\Spec\mathcal{O}_{\mathcal{X},\xi}$ is a reduced Cartier divisor; hence $\mathcal{O}_{\mathcal{X},\xi}$ is regular. Now, $\mathcal{X}\setminus\mathcal{X}_0$ is Zariski dense in $\mathcal{X}$ since $\mathcal{X}_0$ is a Cartier divisor. Hence $\mathcal{X}$ is isomorphic to $X\times\mathbb{A}^1$ at each generic point, and (v) easily follows. Finally,~(vi) is a consequence of~(i) and~(v). \end{proof} \subsection{Compactification}\label{sec:compact} For many purposes it is convenient to compactify test configurations. The following notion provides a canonical way of doing so. \begin{defi}\label{defi:comp} The \emph{compactification} $\bar\mathcal{X}$ of a test configuration $\mathcal{X}$ for $X$ is defined by gluing together $\mathcal{X}$ and $X\times(\P^1\setminus\{0\})$ along their respective open subsets $\mathcal{X}\setminus\mathcal{X}_0$ and $X\times(\mathbb{A}^1\setminus\{0\})$, which are identified using the canonical $\mathbb{G}_m$-equivariant isomorphism $\mathcal{X}\setminus\mathcal{X}_0\simeq X\times(\mathbb{A}^1\setminus\{0\})$.
\end{defi} The compactification comes with a $\mathbb{G}_m$-equivariant flat morphism $\bar\mathcal{X}\to\P^1$, still denoted by $\pi$. By construction, $\pi^{-1}(\P^1\setminus\{0\})$ is $\mathbb{G}_m$-equivariantly isomorphic to $X_{\P^1\setminus\{0\}}$ over $\P^1\setminus\{0\}$. Similarly, a test configuration $(\mathcal{X},\mathcal{L})$ for $(X,L)$ admits a compactification $(\bar\mathcal{X},\bar\mathcal{L})$, where $\bar\mathcal{L}$ is a $\mathbb{G}_m$-linearized $\mathbb{Q}$-line bundle on $\bar\mathcal{X}$. Note that $\bar\mathcal{L}$ is relatively (semi)ample iff $\mathcal{L}$ is. \begin{exam}\label{ex:comp} When $\mathcal{X}$ is the product test configuration defined by a $\mathbb{G}_m$-action on $X$, the compactification $\bar\mathcal{X}\to\P^1$ may be alternatively described as the locally trivial fiber bundle with typical fiber $X$ associated to the principal $\mathbb{G}_m$-bundle $\mathbb{A}^2\setminus\{0\}\to\P^1$, \ie $$ \bar\mathcal{X}=\left((\mathbb{A}^2\setminus\{0\})\times X\right)/\mathbb{G}_m $$ with $\mathbb{G}_m$ acting diagonally. Note in particular that $\bar\mathcal{X}$ is not itself a product in general. For instance, the $\mathbb{G}_m$-action $ t\cdot[x:y]=[t^d x:y]$ on $X=\P^1$ gives rise to the Hirzebruch surface $\bar\mathcal{X}\simeq\P\left(\mathcal{O}_{\P^1}\oplus\mathcal{O}_{\P^1}(d)\right)$. \end{exam} \subsection{Ample test configurations and one-parameter subgroups} Let $(X,L)$ be a polarized projective scheme. Fix $r\ge 1$ such that $rL$ is very ample, and consider the corresponding closed embedding $X\hookrightarrow\P V^*$ with $V:=H^0(X,rL)$. Every one-parameter subgroup ($1$-PS for short) $\rho\colon\mathbb{G}_m\to\mathrm{PGL}(V)$ induces a test configuration $\mathcal{X}$ for $X$, defined as the schematic closure in $\P V^*\times\mathbb{A}^1$ of the image of the closed embedding $X\times\mathbb{G}_m\hookrightarrow\P V^*\times\mathbb{G}_m$ mapping $(x, t)$ to $(\rho(t)x, t)$. 
In other words, $\mathcal{X}_0$ is defined as the `flat limit' as $t\to 0$ of the image of $X$ under $\rho(t)$, cf.~\cite[Proposition 9.8]{Har}. By Proposition~\ref{prop:basic}, the schematic closure is simply given by the Zariski closure when $X$ is reduced. If we are now given $\rho\colon\mathbb{G}_m\to\GL(V)$, then $\mathcal{O}_\mathcal{X}(1)$ is $\mathbb{G}_m$-linearized, and we get an ample test configuration $(\mathcal{X},\mathcal{L})$ for $(X,L)$ by setting $\mathcal{L}:=\tfrac 1r\mathcal{O}_\mathcal{X}(1)$. \smallskip Conversely, every ample test configuration is obtained in this way, as was originally pointed out in~\cite[Proposition 3.7]{RT}. Indeed, let $(\mathcal{X},\mathcal{L})$ be an ample test configuration, and pick $r\ge 1$ such that $r\mathcal{L}$ is (relatively) very ample. The direct image $\mathcal{V}:=\pi_*\mathcal{O}_\mathcal{X}(r\mathcal{L})$ under $\pi\colon\mathcal{X}\to\mathbb{A}^1$ is torsion-free by flatness of $\pi$, and hence a $\mathbb{G}_m$-linearized vector bundle on $\mathbb{A}^1$ with an equivariant embedding $\mathcal{X}\hookrightarrow\P(\mathcal{V}^*)$ such that $r\mathcal{L}=\mathcal{O}_\mathcal{X}(1)$. By Proposition~\ref{prop:reesfiltr}, $\mathcal{V}$ is $\mathbb{G}_m$-equivariantly isomorphic to $V\times\mathbb{A}^1$ for a certain $\mathbb{G}_m$-action $\rho\colon\mathbb{G}_m\to\GL(V)$, and it follows that $(\mathcal{X},\mathcal{L})$ is the ample test configuration attached to $\rho$. \subsection{Trivial and almost trivial test configurations}\label{sec:triv} The normalization $\nu\colon\widetilde{X}\to X$ of a (possibly non-reduced) scheme $X$ is defined as the normalization of the reduction $X_\red$ of $X$. Denoting by $X_\red=\bigcup_\a X^\a$ the irreducible decomposition, we have $\widetilde{X}=\coprod_\a \widetilde{X}^\a$, the disjoint union of the normalizations $\widetilde{X}^\a\to X^\a$. 
If $L$ is a $\mathbb{Q}$-line bundle and $\tilde{L}:=\nu^*L$, we call $(\widetilde{X},\tilde{L})$ the \emph{normalization} of $(X,L)$. If $L$ is ample, then so is $\tilde{L}$ (cf.~\cite[\S4]{Har2}). The normalization $(\widetilde{\mathcal{X}},\tilde{\mathcal{L}})$ of a test configuration $(\mathcal{X},\mathcal{L})$ is similarly defined (the flatness of $\widetilde{\mathcal{X}}\to\mathbb{A}^1$ being a consequence of~\cite[Proposition III.9.7]{Har}), and is a test configuration for $(\widetilde{X},\tilde{L})$. By Proposition~\ref{prop:basic}, we have $\widetilde{\mathcal{X}}=\coprod_\a\widetilde{\mathcal{X}}^\a$ with $(\widetilde{\mathcal{X}}^\a,\tilde{\mathcal{L}}^\a)$ a test configuration for $(\widetilde{X}^\a,\tilde{L}^\a)$. \begin{defi}\label{defi:triv} A test configuration $(\mathcal{X},\mathcal{L})$ for $(X,L)$ is \emph{trivial} if $\mathcal{X}=X_{\mathbb{A}^1}$. We say that $(\mathcal{X},\mathcal{L})$ is \emph{almost trivial} if the normalization $\widetilde{\mathcal{X}}^\a$ of each top-dimensional irreducible component $\mathcal{X}^\a$ is trivial. \end{defi} Note that (almost) triviality does not a priori bear on $\mathcal{L}$. However, we have: \begin{lem}\label{lem:triv} A test configuration $(\mathcal{X},\mathcal{L})$ is almost trivial iff for each top-dimensional irreducible component $X^\a$ of $X$, the corresponding irreducible component $\widetilde{\mathcal{X}}^\a$ of the normalization of $\mathcal{X}$ satisfies $(\widetilde{\mathcal{X}}^\a,\tilde{\mathcal{L}}^\a+c_\a\widetilde{\mathcal{X}}^\a_0)=(\widetilde{X}^\a_{\mathbb{A}^1},\tilde{L}^\a_{\mathbb{A}^1})$ for some $c_\a\in\mathbb{Q}$. \end{lem} \begin{proof} We may assume that $\mathcal{X}$ (and hence $X$) is normal and irreducible. Replacing $\mathcal{L}$ with $\mathcal{L}-L_{\mathbb{A}^1}$, we may also assume that $L=\mathcal{O}_X$, and we then have $\mathcal{L}=D$ for a unique $\mathbb{Q}$-Cartier divisor $D$ supported on $\mathcal{X}_0$. 
If $(\mathcal{X},\mathcal{L})$ is almost trivial, then $\mathcal{X}=X_{\mathbb{A}^1}$, and $\mathcal{X}_0=X\times\{0\}$ is thus irreducible. It follows that $D$ is a multiple of $\mathcal{X}_0$, hence the result. \end{proof} The next result shows that the current notion of almost triviality is compatible with the one introduced in~\cite{Sto2,Oda4}. \begin{prop}\label{prop:almosttriv} Assume that $L$ is ample, and let $(\mathcal{X},\mathcal{L})$ be an ample test configuration for $(X,L)$. \begin{itemize} \item[(i)] If $X$ is normal, then $(\mathcal{X},\mathcal{L})$ is almost trivial iff $X_{\mathbb{A}^1}$ dominates $\mathcal{X}$. \item[(ii)] If $X$ is reduced and equidimensional, then $(\mathcal{X},\mathcal{L})$ is almost trivial iff the canonical birational map $X_{\mathbb{A}^1}\dashrightarrow\mathcal{X}$ is an isomorphism in codimension one. \end{itemize} \end{prop} \begin{proof} Consider first the case where $\mathcal{X}$ is normal and irreducible, and assume that $X_{\mathbb{A}^1}\dashrightarrow\mathcal{X}$ is an isomorphism in codimension one. The strict transform $\mathcal{L}'$ of $\mathcal{L}$ (viewed as a $\mathbb{Q}$-Weil divisor class) on $X_{\mathbb{A}^1}$ coincides with $L_{\mathbb{A}^1}$ outside $X\times\{0\}$. The latter being irreducible, we thus have $\mathcal{L}'=L_{\mathbb{A}^1}+c X\times\{0\}$. This shows that $\mathcal{L}'$ is ($\mathbb{Q}$-Cartier and) relatively ample. Since the normal varieties $X_{\mathbb{A}^1}$ and $\mathcal{X}$ are isomorphic outside a Zariski closed subset of codimension at least $2$, we further have $H^0(\mathcal{X},m\mathcal{L})\simeq H^0(X_{\mathbb{A}^1},m\mathcal{L}')$ for all $m$ divisible enough, and we conclude by ampleness that $(\mathcal{X},\mathcal{L})\simeq(X_{\mathbb{A}^1},\mathcal{L}')$ is trivial. We now treat the reduced case, as in (ii). Observe first that $X_{\mathbb{A}^1}$ is regular at each generic point of $X\times\{0\}$, because $X$ is regular in codimension zero, being reduced. 
As a result, $\widetilde{X}_{\mathbb{A}^1}\to X_{\mathbb{A}^1}$ is an isomorphism in codimension one. Now assume that $X_{\mathbb{A}^1}\dashrightarrow\mathcal{X}$ is an isomorphism in codimension one. Then $\mathcal{X}$ is isomorphic to $X_{\mathbb{A}^1}$ at each generic point $\xi$ of $\mathcal{X}_0$. By the previous observation, $\mathcal{X}$ is regular at $\xi$, so that $\widetilde{\mathcal{X}}\to\mathcal{X}$ is an isomorphism at each generic point of $\widetilde{\mathcal{X}}_0$. The same therefore holds for $\widetilde{\mathcal{X}}\dashrightarrow\widetilde{X}_{\mathbb{A}^1}$, which means that $\widetilde{\mathcal{X}}$ is almost trivial. Applying the first part of the proof to each irreducible component of $\widetilde{\mathcal{X}}$ shows that $\widetilde{\mathcal{X}}=\widetilde{X}_{\mathbb{A}^1}$. Assume conversely that $(\mathcal{X},\mathcal{L})$ is almost trivial, \ie $\widetilde{\mathcal{X}}=\widetilde{X}_{\mathbb{A}^1}$. Since $\widetilde{\mathcal{X}}\to\mathbb{A}^1$ factors through $\mathcal{X}\to\mathbb{A}^1$ and $\widetilde{X}_{\mathbb{A}^1}\to X_{\mathbb{A}^1}$ is an isomorphism in codimension one, we see that the coordinate $t$ on $\mathbb{A}^1$ is a uniformizing parameter on $\mathcal{X}$ at each generic point of $\mathcal{X}_0$, and it follows easily that $\mathcal{X}\dashrightarrow X_{\mathbb{A}^1}$ is an isomorphism in codimension one. Finally, (i) is a consequence of (ii). \end{proof} At the level of one-parameter subgroups, almost triviality admits the following simple characterization, which completes~\cite[Proposition~3.5]{Oda4}. \begin{prop}\label{prop:triv1PS} Assume that $(X,L)$ is a polarized normal variety, and pick $r\ge 1$ with $rL$ very ample. Let $(s_i)$ be a basis of $H^0(X,rL)$, pick integers $$ a_1=\dots=a_p<a_{p+1}\le\dots\le a_{N_r}, $$ and let $\rho\colon\mathbb{G}_m\to\GL(H^0(X,rL))$ be the $1$-parameter subgroup such that $\rho(t)s_i=t^{a_i} s_i$. 
The test configuration $(\mathcal{X},\mathcal{L})$ defined by $\rho$ is then almost trivial iff $\bigcap_{1\le i\le p}(s_i=0)=\emptyset$ in $X$. \end{prop} This recovers the key observation of~\cite[\S3.1]{LX} that almost trivial, nontrivial test configurations always exist, and gives a simple explicit way to construct them. \begin{proof} The canonical rational map $$ \phi\colon X\times\mathbb{A}^1\dashrightarrow\mathcal{X}\hookrightarrow\P^{N_r-1}\times\mathbb{A}^1 $$ is given by \begin{multline*} \phi(x,t)=(\rho(t)[s_i(x)],t)=([t^{a_i}s_i(x)],t)\\ =([s_1(x):\dots:s_p(x):t^{a_{p+1}-a_1}s_{p+1}(x):\dots:t^{a_{N_r}-a_1}s_{N_r}(x)],t), \end{multline*} where $a_j-a_1\ge 1$ for $j>p$. By (i) of Proposition~\ref{prop:almosttriv}, $(\mathcal{X},\mathcal{L})$ is almost trivial iff $\phi$ extends to a morphism $X\times\mathbb{A}^1\to\P^{N_r-1}\times\mathbb{A}^1$, and this is clearly the case iff $\bigcap_{1\le i\le p}(s_i=0)=\emptyset$. \end{proof} \subsection{Test configurations and filtrations}\label{sec:filtrtc} By the reverse Rees construction of~\S\ref{sec:reesfiltr}, every test configuration $(\mathcal{X},\mathcal{L})$ for $(X,L)$ induces a $\mathbb{Z}$-filtration of the graded algebra $$ R(X,rL):=\bigoplus_{m\in\mathbb{N}} H^0(X,mrL) $$ for $r$ divisible enough. More precisely, for each $r$ such that $r\mathcal{L}$ is a line bundle, we define a $\mathbb{Z}$-filtration on $R(X,rL)$ by letting $F^\lambda H^0(X,mrL)$ be the (injective) image of the weight-$\lambda$ part $H^0(\mathcal{X},mr\mathcal{L})_\lambda$ of $H^0(\mathcal{X},mr\mathcal{L})$ under the restriction map $$ H^0(\mathcal{X},mr\mathcal{L})\to H^0(\mathcal{X},mr\mathcal{L})_{t=1}=H^0(X,mrL).
$$ Alternatively, we have \begin{equation}\label{equ:filtr} F^\lambda H^0(X,mrL)=\left\{s\in H^0(X,mrL)\mid t^{-\lambda}\bar s\in H^0(\mathcal{X},mr\mathcal{L})\right\} \end{equation} where $\bar s\in H^0(\mathcal{X}\setminus\mathcal{X}_0,mr\mathcal{L})$ denotes the $\mathbb{G}_m$-invariant section defined by $s\in H^0(X,mrL)$. As a direct consequence of the projection formula, we get the following invariance property. \begin{lem}\label{lem:inv} Let $(\mathcal{X},\mathcal{L})$ be a test configuration, and let $(\mathcal{X}',\mathcal{L}')$ be a pull-back of $(\mathcal{X},\mathcal{L})$ such that the corresponding morphism $\mu\colon\mathcal{X}'\to\mathcal{X}$ satisfies $\mu_*\mathcal{O}_{\mathcal{X}'}=\mathcal{O}_\mathcal{X}$. Then $(\mathcal{X},\mathcal{L})$ and $(\mathcal{X}',\mathcal{L}')$ define the same filtrations on $R(X,rL)$. \end{lem} Note that $\mu_*\mathcal{O}_{\mathcal{X}'}=\mathcal{O}_\mathcal{X}$ holds automatically when $\mathcal{X}$ (and hence $X$) is normal, by Zariski's main theorem. For later use, we also record the following direct consequence of the $\mathbb{G}_m$-equivariant isomorphism (\ref{equ:reessp}). \begin{lem}\label{L201} Let $(\mathcal{X},\mathcal{L})$ be a test configuration, with projection $\pi\colon\mathcal{X}\to\mathbb{A}^1$. For each $m$ with $m\mathcal{L}$ a line bundle, the multiplicities of the $\mathbb{G}_m$-module $\pi_*\mathcal{O}_\mathcal{X}(m\mathcal{L})_0$ satisfy \begin{equation*} \dim\left(\pi_*\mathcal{O}_\mathcal{X}(m\mathcal{L})_0\right)_\lambda=\dim F^\lambda H^0(X,mL)/F^{\lambda+1} H^0(X,mL) \end{equation*} for all $\lambda\in\mathbb{Z}$. In particular, the weights of $\pi_*\mathcal{O}_\mathcal{X}(m\mathcal{L})_0$ coincide with the successive minima of $F^\bullet H^0(X,mL)$. \end{lem} \smallskip \begin{prop}\label{prop:filtrtest} Assume $L$ is ample. 
Then the above construction sets up a one-to-one correspondence between ample test configurations for $(X,L)$ and finitely generated $\mathbb{Z}$-filtrations of $R(X,rL)$ for $r$ divisible enough. \end{prop} \begin{proof} When $(\mathcal{X},\mathcal{L})$ is an ample test configuration, the $\mathbb{Z}$-filtration it defines on $R(X,rL)$ is finitely generated in the sense of Definition~\ref{defi:fingen}, since $$ \bigoplus_{m\in\mathbb{N}}\left(\bigoplus_{\lambda\in\mathbb{Z}}t^{-\lambda} F^\lambda H^0(X,mrL)\right)=R(\mathcal{X},r\mathcal{L}) $$ is finitely generated over $k[t]$. Conversely, let $F^\bullet$ be a finitely generated $\mathbb{Z}$-filtration of $R(X,rL)$ for some $r$. Replacing $r$ with a multiple, we may assume that the graded $k[t]$-algebra $$ \bigoplus_{m\in\mathbb{N}}\left(\bigoplus_{\lambda\in\mathbb{Z}}t^{-\lambda} F^\lambda H^0(X,mrL)\right) $$ is generated in degree $m=1$, and taking the Proj over $\mathbb{A}^1$ defines an ample test configuration for $(X,rL)$, hence also one for $(X,L)$. Using \S\ref{sec:reesfiltr}, it is straightforward to see that the two constructions are inverse to each other. \end{proof} Still assuming $L$ is ample, let $(\mathcal{X},\mathcal{L})$ now be a merely semiample test configuration. The $\mathbb{Z}$-filtration it defines on $R(X,rL)$ is still finitely generated, as $$ \bigoplus_{m\in\mathbb{N}}\left(\bigoplus_{\lambda\in\mathbb{Z}}t^{-\lambda} F^\lambda H^0(X,mrL)\right)=R(\mathcal{X},r\mathcal{L}) $$ is finitely generated over $k[t]$. \begin{defi}\label{defi:amplemodel} The \emph{ample model} of a semiample test configuration $(\mathcal{X},\mathcal{L})$ is defined as the unique ample test configuration $(\mathcal{X}_\amp,\mathcal{L}_\amp)$ corresponding to the finitely generated $\mathbb{Z}$-filtration defined by $(\mathcal{X},\mathcal{L})$ on $R(X,rL)$ for $r$ divisible enough. \end{defi} Ample models admit the following alternative characterization.
\begin{prop}\label{prop:amplemodel} The ample model $(\mathcal{X}_\amp,\mathcal{L}_\amp)$ of a semiample test configuration $(\mathcal{X},\mathcal{L})$ is the unique ample test configuration such that: \begin{itemize} \item[(i)] $(\mathcal{X},\mathcal{L})$ is a pull-back of $(\mathcal{X}_\amp,\mathcal{L}_\amp)$; \item[(ii)] the canonical morphism $\mu\colon\mathcal{X}\to\mathcal{X}_\amp$ satisfies $\mu_*\mathcal{O}_{\mathcal{X}}=\mathcal{O}_{\mathcal{X}_\amp}$. \end{itemize} \end{prop} Note that (ii) implies that $\mathcal{X}_\amp$ is normal whenever $\mathcal{X}$ (and hence $X$) is. \begin{proof} Choose $r\ge 1$ such that $r\mathcal{L}$ is a globally generated line bundle. By Proposition~\ref{prop:reesfiltr}, the vector bundle $\pi_*\mathcal{O}_\mathcal{X}(r\mathcal{L})$ is $\mathbb{G}_m$-equivariantly trivial over $\mathbb{A}^1$, and we thus get an induced $\mathbb{G}_m$-equivariant morphism $f\colon\mathcal{X}\to\P^N_{\mathbb{A}^1}$ over $\mathbb{A}^1$ for some $N$ with the property that $f^*\mathcal{O}(1)=r\mathcal{L}$. The Stein factorization of $f$ thus yields an ample test configuration $(\mathcal{X}',\mathcal{L}')$ satisfying (i) and (ii). By Lemma~\ref{lem:inv}, these properties guarantee that $(\mathcal{X},\mathcal{L})$ and $(\mathcal{X}',\mathcal{L}')$ induce the same $\mathbb{Z}$-filtration on $R(X,rL)$, and hence $(\mathcal{X}',\mathcal{L}')=(\mathcal{X}_\amp,\mathcal{L}_\amp)$ by Proposition~\ref{prop:filtrtest}. \end{proof} \subsection{Flag ideals}\label{FlagIdeals} In this final section, we discuss (a small variant of) the \emph{flag ideal} point of view of~\cite{Oda1,Oda3}. We assume that $X$ is normal, and use the following terminology. \begin{defi}\label{defi:pull} A \emph{determination} of a test configuration $\mathcal{X}$ for $X$ is a normal test configuration $\mathcal{X}'$ dominating both $\mathcal{X}$ and $X_{\mathbb{A}^1}$. 
\end{defi} Note that a determination always exists: just pick $\mathcal{X}'$ to be the normalization of the graph of the canonical birational map $\mathcal{X}\dashrightarrow X_{\mathbb{A}^1}$. Similarly, a determination of a test configuration $(\mathcal{X},\mathcal{L})$ for $(X,L)$ is a normal test configuration $(\mathcal{X}',\mathcal{L}')$ such that $\mathcal{X}'$ is a determination of $\mathcal{X}$ and $\mathcal{L}'$ is the pull-back of $\mathcal{L}$ under the morphism $\mathcal{X}'\to\mathcal{X}$ (\ie $(\mathcal{X}',\mathcal{L}')$ is a pull-back of $(\mathcal{X},\mathcal{L})$). In this case, denoting by $\rho\colon\mathcal{X}'\to X_{\mathbb{A}^1}$ the canonical morphism, we have $\mathcal{L}'=\rho^*L_{\mathbb{A}^1}+D$ for a unique $\mathbb{Q}$-Cartier divisor $D$ supported on $\mathcal{X}_0'$, by the normality of $\mathcal{X}'$. \begin{defi}\label{defi:flag} Let $(\mathcal{X},\mathcal{L})$ be a test configuration for $(X,L)$. For each $m$ such that $m\mathcal{L}$ is a line bundle, we define the \emph{flag ideal} of $(\mathcal{X},m\mathcal{L})$ as $$ \mathfrak{a}^{(m)}:=\rho_*\mathcal{O}_{\mathcal{X}'}(mD), $$ viewed as a $\mathbb{G}_m$-invariant, integrally closed fractional ideal of the normal variety $X_{\mathbb{A}^1}$. \end{defi} By Lemma~\ref{lem:flag} below, $\mathfrak{a}^{(m)}$ is indeed independent of the choice of a determination. In particular, $\mathfrak{a}^{(m)}$ is also the flag ideal of $(\mathcal{X}',m\mathcal{L}')$ for every normal pull-back $(\mathcal{X}',\mathcal{L}')$ of $(\mathcal{X},\mathcal{L})$.
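To make Definition~\ref{defi:flag} more concrete, the following sketch computes the flag ideal in the model case of a (normalized) deformation to the normal cone; the choice of center $\mathfrak{b}$ and the identifications via integral closure are illustrative assumptions made here, not statements taken from the surrounding text.

```latex
% Illustrative sketch (assumptions: X normal, \mathfrak{b} an ideal of O_X).
% Let \mathcal{X}' be the normalized blow-up of X_{A^1} along \mathfrak{b}+(t),
% with exceptional Cartier divisor E, and take D := -E. For a normalized
% blow-up, \rho_* O_{\mathcal{X}'}(-mE) is the integral closure of the m-th
% power of the center, so the flag ideal is
$$
\mathfrak{a}^{(m)}
  = \rho_*\mathcal{O}_{\mathcal{X}'}(-mE)
  = \overline{(\mathfrak{b}+(t))^m}
  \supseteq \sum_{j=0}^{m} t^{\,m-j}\,\mathfrak{b}^{\,j}.
$$
% In the weight decomposition \mathfrak{a}^{(m)} = \sum_\lambda t^{-\lambda}
% \mathfrak{a}^{(m)}_\lambda, this gives \mathfrak{a}^{(m)}_\lambda = O_X for
% \lambda <= -m, \mathfrak{a}^{(m)}_\lambda containing the integral closure of
% \mathfrak{b}^{m+\lambda} for -m <= \lambda <= 0, and
% \mathfrak{a}^{(m)}_\lambda = 0 for \lambda > 0.
```

This is the kind of flag ideal underlying the blow-up formalism of~\cite{Oda1,Oda3}; note that the containments above need not be equalities in general.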
Since $\mathfrak{a}^{(m)}$ is a $\mathbb{G}_m$-invariant fractional ideal on $X_{\mathbb{A}^1}$ that is trivial outside the central fiber, it is of the form \begin{equation}\label{equ:flag} \mathfrak{a}^{(m)}=\sum_{\lambda\in\mathbb{Z}} t^{-\lambda}\mathfrak{a}^{(m)}_\lambda \end{equation} where $(\mathfrak{a}^{(m)}_\lambda)_{\lambda\in\mathbb{Z}}$ is a non-increasing sequence of integrally closed ideals $\mathfrak{a}^{(m)}_\lambda\subset\mathcal{O}_X$ with $\mathfrak{a}^{(m)}_\lambda=0$ for $\lambda\gg 0$ and $\mathfrak{a}^{(m)}_\lambda=\mathcal{O}_X$ for $\lambda\ll 0$ (see Proposition~\ref{prop:filtrflag} below for the choice of sign). \begin{lem}\label{lem:flag} The flag ideal $\mathfrak{a}^{(m)}$ is independent of the choice of a determination $(\mathcal{X}',\mathcal{L}')$. \end{lem} \begin{proof} Let $(\mathcal{X}'',\mathcal{L}'')$ be another determination of $(\mathcal{X},\mathcal{L})$ (and recall that $\mathcal{X}'$ and $\mathcal{X}''$ are normal, by definition). Since any two determinations of $(\mathcal{X},\mathcal{L})$ are dominated by a third one, we may assume that $\mathcal{X}''$ dominates $\mathcal{X}'$. Denoting by $\mu'\colon\mathcal{X}''\to\mathcal{X}'$ the corresponding morphism, the fractional ideal attached to $(\mathcal{X}'',\mathcal{L}'')$ is then given by $$ (\rho\circ\mu')_*\mathcal{O}_{\mathcal{X}''}(m\mu'^*D). $$ By the projection formula we have $$ \mu'_*\mathcal{O}_{\mathcal{X}''}(m\mu'^*D)=\mathcal{O}_{\mathcal{X}'}(mD)\otimes\mu'_*\mathcal{O}_{\mathcal{X}''}, $$ and we get the desired result since $\mu'_*\mathcal{O}_{\mathcal{X}''}=\mathcal{O}_{\mathcal{X}'}$ by normality of $\mathcal{X}'$. \end{proof} \begin{prop}\label{prop:filtrflag} Let $(\mathcal{X},\mathcal{L})$ be a normal, semiample test configuration for $(X,L)$. For each $m$ with $m\mathcal{L}$ a line bundle, let $F^\bullet H^0(X,mL)$ be the corresponding $\mathbb{Z}$-filtration and $\mathfrak{a}^{(m)}$ the flag ideal of $(\mathcal{X},m\mathcal{L})$.
Then, for $m$ sufficiently divisible and $\lambda\in\mathbb{Z}$, the $\mathcal{O}_X$-module $\mathcal{O}_X(mL)\otimes\mathfrak{a}^{(m)}_\lambda$ is globally generated and \begin{equation*} F^\lambda H^0(X,mL)=H^0\left(X,\mathcal{O}_X(mL)\otimes\mathfrak{a}^{(m)}_\lambda\right). \end{equation*} In particular, the successive minima of $F^\bullet H^0(X,mL)$ (see~\S\ref{sec:prelim}) are exactly the $\lambda\in\mathbb{Z}$ with $\mathfrak{a}^{(m)}_\lambda\ne\mathfrak{a}^{(m)}_{\lambda+1}$, with the largest one being $\lambda^{(m)}_{\max}=\max\left\{\lambda\in\mathbb{Z}\mid\mathfrak{a}^{(m)}_\lambda\ne 0\right\}$. \end{prop} \begin{proof} Let $(\mathcal{X}',\mathcal{L}')$ be a determination of $(\mathcal{X},\mathcal{L})$, \ie a pull-back such that $\mathcal{X}'$ is normal and dominates $X_{\mathbb{A}^1}$. By normality of $\mathcal{X}$, the morphism $\mu\colon\mathcal{X}'\to\mathcal{X}$ satisfies $\mu_*\mathcal{O}_{\mathcal{X}'}=\mathcal{O}_\mathcal{X}$, and the projection formula therefore shows that $(\mathcal{X}',\mathcal{L}')$ and $(\mathcal{X},\mathcal{L})$ define the same $\mathbb{Z}$-filtration of $R(X,rL)$ for $r$ divisible enough. Since $\mathfrak{a}^{(m)}$ is also the flag ideal of $(\mathcal{X}',m\mathcal{L}')$, we may assume to begin with that $\mathcal{X}$ dominates $X_{\mathbb{A}^1}$. Denoting by $\rho\colon\mathcal{X}\to X_{\mathbb{A}^1}$ the canonical morphism, we then have $\mathcal{L}=\rho^*L_{\mathbb{A}^1}+D$ and \begin{equation*} \mathfrak{a}^{(m)}=\rho_*\mathcal{O}_{\mathcal{X}}(mD), \end{equation*} and hence \begin{equation*} \rho_*\mathcal{O}_{\mathcal{X}}(m\mathcal{L})=\mathcal{O}_{X_{\mathbb{A}^1}}(mL_{\mathbb{A}^1})\otimes\mathfrak{a}^{(m)} \end{equation*} by the projection formula. As a consequence, $H^0(X_{\mathbb{A}^1},\mathcal{O}_{X_{\mathbb{A}^1}}(mL_{\mathbb{A}^1})\otimes t^{-\lambda}\mathfrak{a}^{(m)}_\lambda)$ is isomorphic to the weight-$\lambda$ part of $H^0(\mathcal{X},m\mathcal{L})$, and the first point follows.
For $m$ divisible enough, $m\mathcal{L}$ is globally generated on $\mathcal{X}$, and hence so is $\rho_*\mathcal{O}_\mathcal{X}(m\mathcal{L})$ on $X_{\mathbb{A}^1}$. Decomposing into weight spaces thus shows that $\mathcal{O}_X(mL)\otimes\mathfrak{a}^{(m)}_\lambda$ is globally generated on $X$ for all $\lambda\in\mathbb{Z}$. We therefore have $\mathfrak{a}^{(m)}_\lambda\ne \mathfrak{a}^{(m)}_{\lambda+1}$ iff $F^\lambda H^0(X,mL)\ne F^{\lambda+1} H^0(X,mL)$, hence the second point. \end{proof} \section{Duistermaat-Heckman measures and Donaldson-Futaki invariants}\label{sec:DHDF} In this section, $(X,L)$ is a polarized\footnote{As before we allow $L$ to be an (ample) $\mathbb{Q}$-line bundle on $X$.} scheme over $k$. Our goal is to provide an elementary, self-contained treatment of Duistermaat-Heckman measures and Donaldson-Futaki invariants. Most arguments are inspired by those in~\cite{Don3,RT,Wan,Oda2,LX}. \subsection{The case of a $\mathbb{G}_m$-action}\label{sec:DHDFaction} First assume that $L$ is an ample line bundle (as opposed to a $\mathbb{Q}$-line bundle) and that $(X,L)$ is given a $\mathbb{G}_m$-action. For each $d\in\mathbb{N}$, the principal $\mathbb{G}_m$-bundle $\mathbb{A}^{d+1}\setminus\{0\}\to\P^d$ induces a projective morphism $\pi_d:X_d\to\P^d$, locally trivial in the Zariski topology and with typical fiber $X$, as well as a relatively ample line bundle $L_d$ on $X_d$. For $d=1$, we recover the compactified product test configuration, cf.~Example~\ref{ex:comp}. Following~\cite[p.470]{Don3}, we use this construction to prove the following key result, which is often claimed to follow from `general theory' in the K-stability literature. Another proof relying on the equivariant Riemann-Roch theorem is provided in Appendix B. \begin{thm}\label{thm:equivRR} Let $(X,L)$ be a polarized scheme with a $\mathbb{G}_m$-action, and set $n=\dim X$.
For each $d,m\in\mathbb{N}$, the finite sum $$ \sum_{\lambda\in\mathbb{Z}}\frac{\lambda^d}{d!}\dim H^0(X,mL)_\lambda $$ is a polynomial function of $m\gg 1$, of degree at most $n+d$. The coefficient of $m^{n+d}$ is further equal to $(L_d^{n+d})/(n+d)!$. \end{thm} Here we write as usual $(L_d^{n+d})=c_1(L_d)^{n+d}\cdot[X_d]$, with $[X_d]\in\CH_{n+d}(X_d)$ the fundamental class. Granting this result for the moment, we get as a first consequence: \begin{cor}\label{cor:total} Let $w_m\in\mathbb{Z}$ be the weight of the $\mathbb{G}_m$-action on the determinant line $\det H^0(X,mL)$, and $N_m:=h^0(X,mL)$. Then we have an asymptotic expansion \begin{equation}\label{equ:Futaki} \frac{w_m}{mN_m}=F_0+m^{-1}F_1+m^{-2}F_2+\dots \end{equation} with $F_i\in\mathbb{Q}$. \end{cor} Indeed, $w_m=\sum_{\lambda\in\mathbb{Z}}\lambda\dim H^0(X,mL)_\lambda$ is a polynomial of degree at most $n+1$ by Theorem~\ref{thm:equivRR}, while $N_m$ is a polynomial of degree $n$ by Riemann-Roch. \begin{defi}\label{defi:DFaction} The \emph{Donaldson-Futaki invariant} $\DF(X,L)$ of the polarized $\mathbb{G}_m$-scheme $(X,L)$ is defined as $$ \DF(X,L)=-2F_1. $$ \end{defi} The factor $2$ in the definition is here just for convenience, while the sign is chosen so that K-semistability will later correspond to $\DF\ge 0$, cf.~Definition~\ref{defi:Kstab}. As a second consequence of Theorem~\ref{thm:equivRR}, we will prove: \begin{cor}\label{cor:DH} The rescaled weight measures (cf.~Definition~\ref{defi:weight}) $$ \mu_m:=(1/m)_*\mu_{H^0(X,mL)} $$ have uniformly bounded support, and converge weakly to a probability measure $\DH_{(X,L)}$ on $\mathbb{R}$ as $m\to\infty$. Its moments are further given by \begin{equation}\label{equ:dmoment} \int_{\mathbb{R}} \lambda^d\,\DH_{(X,L)}(d\lambda)=\binom{n+d}{n}^{-1}\frac{(L_d^{n+d})}{(L^n)} \end{equation} for each $d\in\mathbb{N}$. 
\end{cor} \begin{defi}\label{defi:DHaction} We call $\DH_{(X,L)}$ the \emph{Duistermaat-Heckman measure} of the polarized $\mathbb{G}_m$-scheme $(X,L)$. \end{defi} For any $r\in\mathbb{Z}_{>0}$, the $\mathbb{G}_m$-action on $(X,L)$ induces an action on $(X,rL)$. It follows immediately from the definition that $\DH(X,rL)=r_*\DH(X,L)$ and $\DF(X,rL)=\DF(X,L)$. This allows us to define the Duistermaat-Heckman measure and Donaldson-Futaki invariant for $\mathbb{G}_m$-actions on polarized schemes $(X,L)$, where $L$ is an (ample) $\mathbb{Q}$-line bundle. \begin{defi}\label{D101} For any polarized scheme $(X,L)$ with a $\mathbb{G}_m$-action, we define \begin{equation*} \DH(X,L):=(1/r)_*\DH(X,rL) \quad\text{and}\quad \DF(X,L):=\DF(X,rL) \end{equation*} for any sufficiently divisible $r\in\mathbb{Z}_{>0}$. \end{defi} \begin{proof}[Proof of Theorem~\ref{thm:equivRR}] Let $\pi_d:X_d\to\P^d$ be the fiber bundle defined above. The key observation is that the $\mathbb{G}_m$-decomposition $H^0(X,mL)=\bigoplus_{\lambda\in\mathbb{Z}} H^0(X,mL)_\lambda$ implies that $$ (\pi_d)_*\mathcal{O}_{X_d}(mL_d)=\bigoplus_{\lambda\in\mathbb{Z}} H^0(X,mL)_\lambda\otimes\mathcal{O}_{\P^d}(\lambda). $$ By relative ampleness of $L_d$, the higher direct images of $mL_d$ vanish for $m\gg 1$; the Leray spectral sequence and the asymptotic Riemann-Roch theorem (cf.~\cite[\S1]{Kle}) therefore yield $$ \sum_{\lambda\in\mathbb{Z}}\chi(\P^d,\mathcal{O}_{\P^d}(\lambda))\dim H^0(X,mL)_\lambda=\chi(\P^d,(\pi_d)_*\mathcal{O}_{X_d}(mL_d)) $$ $$ =\chi(X_d,mL_d)=\frac{(L_d^{n+d})}{(n+d)!}m^{n+d}+O(m^{n+d-1}). $$ Now $\chi(\P^d,\mathcal{O}_{\P^d}(\lambda))=\binom{\lambda+d}{d}=\frac{(\lambda+d)(\lambda+d-1)\cdots(\lambda+1)}{d!}=\frac{\lambda^d}{d!}+O(\lambda^{d-1})$, and we get the result by induction on~$d$. \end{proof} \begin{proof}[Proof of Corollary~\ref{cor:DH}] Since $L$ is ample, $R(X,L)$ is finitely generated.
It follows that the weights of $H^0(X,mL)$ grow at most linearly with $m$, which proves that $\mu_m$ has uniformly bounded support. Since $\mu_m$ is a probability measure, it therefore converges to a probability measure iff the moments $\int_\mathbb{R}\lambda^d\mu_m(d\lambda)$ converge for each $d\in\mathbb{N}$. We have, by definition, $$ \int_\mathbb{R}\frac{\lambda^d}{d!}\mu_m(d\lambda)=\frac{1}{m^d N_m}\sum_{\lambda\in\mathbb{Z}}\frac{\lambda^d}{d!}\dim H^0(X,mL)_\lambda $$ with $N_m=h^0(X,mL)$. Theorem~\ref{thm:equivRR} shows that $$ \sum_{\lambda\in\mathbb{Z}}\frac{\lambda^d}{d!}\dim H^0(X,mL)_\lambda=\frac{(L_d^{n+d})}{(n+d)!}m^{n+d}+O(m^{n+d-1}), $$ while $$ N_m=\frac{(L^n)}{n!}m^n+O(m^{n-1}), $$ hence the result. \end{proof} \begin{rmk}\label{rmk:DH} In order to explain the terminology, consider the case where $X$ is a smooth complex variety with an $S^1$-invariant hermitian metric on $L$ with positive curvature form $\omega$. We then get a Hamiltonian function $H:X\to\mathbb{R}$ for the $S^1$-action on the symplectic manifold $(X,\omega)$. The Duistermaat-Heckman measure as originally defined in~\cite{DH} is $H_*(\omega^n)$, but this is known to coincide (up to normalization of the mass) with $\DH_{(X,L)}$ as defined above (see for instance~\cite[Theorem 9.1]{WN12} and~\cite[Proposition 4.1]{BWN}). See also~\cite{Bern09,WN12,His12} for an analytic approach to Duistermaat-Heckman measures via geodesic rays. \end{rmk} \begin{rmk}\label{rmk:Ok} When $X$ is a variety, the existence part of Corollary~\ref{cor:DH} is a rather special case of~\cite{Ok}, which also shows that $\DH_{(X,L)}$ can be written as a linear projection of the Lebesgue measure of some convex body. This implies in particular that $\DH_{(X,L)}$ is either absolutely continuous or a point mass. 
Its density is claimed to be piecewise polynomial in~\cite[p.1]{Ok}, but while this is a classical result of Duistermaat and Heckman when $X$ is a smooth complex variety as in Remark~\ref{rmk:DH}, we were not able to locate a proof in the literature when $X$ is singular. In particular, the proof of~\cite[Proposition 3.4]{BP} is incomplete. Piecewise polynomiality will be established in Theorem~\ref{thm:PP} below. \end{rmk} We gather here the first few properties of Duistermaat-Heckman measures. \begin{prop}\label{prop:DHaction} Let $(X,L)$ be a polarized $\mathbb{G}_m$-scheme of dimension $n$, and set $V=(L^n)$. \begin{itemize} \item[(i)] Denote by $(X,L(\lambda))$ the result of twisting the action on $L$ by the character $t^\lambda$. Then $\DH_{(X,L(\lambda))}=\lambda+\DH_{(X,L)}$. \item[(ii)] If $X^\a$ are the irreducible components of $X$ (with their reduced scheme structure), then \begin{equation*} \DH_{(X,L)}=\sum_\a c_\a\DH_{(X^\a,L|_{X^\a})}, \end{equation*} where $c_\a=m_\a\frac{c_1(L)^n\cdot[X^\a]}{c_1(L)^n\cdot[X]}$, with $m_\a$ the multiplicity of $X$ along $X^\a$. \end{itemize} \end{prop} Note that $c_\a>0$ iff $X^\a$ has dimension $n$, and that $\sum_\a c_\a=1$ since $[X]=\sum_\a m_\a [X^\a]$. \begin{proof} Property (i) is straightforward. Since $X_d\to\P^d$ is locally trivial, its irreducible components are of the form $X^\a_d$, with multiplicity $m_\a$. It follows that $[X_d]\in\CH_{n+d}(X_d)=\bigoplus\mathbb{Z}[X^\a_d]$ decomposes as $[X_d]=\sum_\a m_\a[X^\a_d]$. Assertion (ii) is now a direct consequence of (\ref{equ:dmoment}). \end{proof} \subsection{The case of a test configuration}\label{sec:DHDFtest} We still denote by $(X,L)$ a polarized scheme (where $L$ is allowed to be a $\mathbb{Q}$-line bundle), but now without any \emph{a priori} given $\mathbb{G}_m$-action. \begin{defi}\label{defi:DHsemi} Let $(\mathcal{X},\mathcal{L})$ be an ample test configuration for $(X,L)$.
We define the \emph{Duistermaat-Heckman measure} $\DH_{(\mathcal{X},\mathcal{L})}$ and the \emph{Donaldson-Futaki invariant} $\DF(\mathcal{X},\mathcal{L})$ of $(\mathcal{X},\mathcal{L})$ as those of the polarized $\mathbb{G}_m$-scheme $(\mathcal{X}_0,\mathcal{L}_0)$. \end{defi} \begin{defi}\label{defi:Kstab} A polarized scheme $(X,L)$ is \emph{K-semistable} if $\DF(\mathcal{X},\mathcal{L})\ge 0$ for all ample test configurations $(\mathcal{X},\mathcal{L})$. It is \emph{K-stable} if we further have $\DF(\mathcal{X},\mathcal{L})=0$ only when $(\mathcal{X},\mathcal{L})$ is almost trivial in the sense of Definition~\ref{defi:triv}. \end{defi} \begin{prop}\label{prop:DHDFsemi} Let $(\mathcal{X},\mathcal{L})$ be an ample test configuration for $(X,L)$, with $\mathbb{G}_m$-equivariant projection $\pi\colon\mathcal{X}\to\mathbb{A}^1$ and compactification $(\bar\mathcal{X},\bar\mathcal{L})$, and set $V:=(L^n)$. \begin{itemize} \item[(i)] For each $c\in\mathbb{Q}$, we have $\DH_{(\mathcal{X},\mathcal{L}+c\mathcal{X}_0)}=\DH_{(\mathcal{X},\mathcal{L})}+c$. \item[(ii)] Let $X^\a$ be the top-dimensional irreducible components of $X$ (with their reduced scheme structure), and $m_\a$ be the multiplicity of $X$ along $X^\a$. Then \begin{equation*} \DH_{(\mathcal{X},\mathcal{L})}=\sum_\a c_\a\DH_{(\mathcal{X}^\a,\mathcal{L}|_{\mathcal{X}^\a})}, \end{equation*} where $\mathcal{X}^\a$ is the irreducible component of $\mathcal{X}$ corresponding to $X^\a$, and where $c_\a=V^{-1} m_\a c_1(L)^n\cdot[X^\a]$.
\item[(iii)] The barycenter of the Duistermaat-Heckman measure satisfies \begin{equation}\label{equ:DHbar} \int_{\mathbb{R}} \lambda\,\DH_{(\mathcal{X},\mathcal{L})}(d\lambda)=\frac{(\bar\mathcal{L}^{n+1})}{(n+1)V}. \end{equation} \item[(iv)] If $\mathcal{X}$ (and hence $X$) is normal, then \begin{equation}\label{equ:DF} \DF(\mathcal{X},\mathcal{L})=\frac{(K_{\bar\mathcal{X}/\P^1}\cdot\bar\mathcal{L}^n)}{V}+\bar S\frac{(\bar\mathcal{L}^{n+1})}{(n+1)V} \end{equation} with $\bar S:=nV^{-1}(-K_X\cdot L^{n-1})$. \end{itemize} \end{prop} In (iv), $K_X$ and $K_{\bar\mathcal{X}/\P^1}=K_{\bar\mathcal{X}}-\pi^*K_{\P^1}$ are understood as Weil divisor classes on the normal schemes $X$ and $\bar\mathcal{X}$, respectively. This intersection-theoretic expression is originally due to~\cite{Wan,Oda2}, see also~\cite{LX}. \begin{rmk} When $X$ is smooth, $k=\mathbb{C}$ and $L$ is a line bundle, $\bar S$ is the mean value of the scalar curvature $S(\omega)$ of any K\"ahler form $\omega\in c_1(L)$ (hence the chosen notation). \end{rmk} \begin{proof}[Proof of Proposition~\ref{prop:DHDFsemi}] After passing to a multiple, we may assume that $L$ and $\mathcal{L}$ are line bundles. By flatness of $\mathcal{X}\to\mathbb{A}^1$, the decomposition $[X]=\sum_\a m_\a[X^\a]$ in $\CH_n(X)$ implies $[\mathcal{X}_0]=\sum_\a m_\a[\mathcal{X}^\a_0]$, where $\mathcal{X}^\a_0$ denotes the (possibly reducible) central fiber of $\mathcal{X}^\a$. We now get (ii) as a consequence of Proposition~\ref{prop:DHaction}, which also implies (i). We now turn to the proof of the last two points. By relative ampleness, $\pi_*\mathcal{O}_{\bar\mathcal{X}}(m\bar\mathcal{L})$ is a vector bundle on $\P^1$ of rank $N_m=h^0(mL)$ for $m\gg 1$, with fiber at $0$ isomorphic to $H^0(\mathcal{X}_0,m\mathcal{L}_0)$.
As a result, $w_m$ is the weight of $\det\pi_*\mathcal{O}_{\bar\mathcal{X}}(m\bar\mathcal{L})_0$, and hence \begin{equation*} w_m =\deg\det\pi_*\mathcal{O}_{\bar\mathcal{X}}(m\bar\mathcal{L}) =\deg\pi_*\mathcal{O}_{\bar\mathcal{X}}(m\bar\mathcal{L}), \end{equation*} since $\pi_*\mathcal{O}_{\bar\mathcal{X}}(m\bar\mathcal{L})$ is $\mathbb{G}_m$-equivariantly trivial away from $0$ by construction of the compactification. By the usual Riemann-Roch theorem on $\P^1$, we infer \begin{equation*} w_m= \chi(\P^1,\pi_*\mathcal{O}_{\bar\mathcal{X}}(m\bar\mathcal{L}))-N_m. \end{equation*} By relative ampleness again, the higher direct images of $m\bar\mathcal{L}$ vanish for $m$ divisible enough, and the Leray spectral sequence and the asymptotic Riemann-Roch theorem give as in the proof of Theorem~\ref{thm:equivRR} \begin{equation*} w_m=\chi(\bar\mathcal{X},m\bar\mathcal{L})-N_m=\frac{m^{n+1}}{(n+1)!}(\bar\mathcal{L}^{n+1})+O(m^n), \end{equation*} which yields (iii) since $N_m=\frac{m^n}{n!}V+O(m^{n-1})$. When $\mathcal{X}$ (and hence $\bar\mathcal{X}$) is normal, the two-term asymptotic Riemann-Roch theorem on a normal variety (cf.~Theorem~\ref{thm:RR} in the appendix) yields \begin{equation*} N_m=V\frac{m^n}{n!}\left[1+\frac{\bar S}{2} m^{-1}+O(m^{-2})\right], \end{equation*} and \begin{align*} w_m&=-N_m+\frac{(\bar\mathcal{L}^{n+1})}{(n+1)!}m^{n+1} -\frac{(K_{\bar\mathcal{X}}\cdot\bar\mathcal{L}^n)}{2n!}m^{n}+O(m^{n-1})\\ &=\frac{(\bar\mathcal{L}^{n+1})}{(n+1)!}m^{n+1} -\frac{(K_{\bar\mathcal{X}/\P^1}\cdot\bar\mathcal{L}^n)}{2n!}m^{n}+O(m^{n-1}), \end{align*} using that $(\pi^*K_{\P^1}\cdot\bar\mathcal{L}^n)=-2V$ since $\deg K_{\P^1}=-2$. Dividing these expansions, we get $$ \frac{w_m}{mN_m}=\frac{(\bar\mathcal{L}^{n+1})}{(n+1)V}-\left(\frac{(K_{\bar\mathcal{X}/\P^1}\cdot\bar\mathcal{L}^n)}{2V}+\frac{\bar S}{2}\,\frac{(\bar\mathcal{L}^{n+1})}{(n+1)V}\right)m^{-1}+O(m^{-2}), $$ and the formula for $\DF(\mathcal{X},\mathcal{L})=-2F_1$ in (iv) follows. \end{proof} \subsection{Behavior under normalization}\label{sec:DHDFnormal} We now study the behavior of Duistermaat-Heckman measures and Donaldson-Futaki invariants under normalization.
Recall that the normalization of the polarized scheme $(X,L)$ is the normal polarized scheme $(\widetilde{X},\tilde{L})$ obtained by setting $\tilde{L}=\nu^*L$ with $\nu\colon\widetilde{X}\to X$ the normalization morphism. Similarly, the normalization of an ample test configuration $(\mathcal{X},\mathcal{L})$ for $(X,L)$ is the ample test configuration $(\widetilde{\mathcal{X}},\tilde{\mathcal{L}})$ for $(\widetilde{X},\tilde{L})$ obtained by setting $\tilde{\mathcal{L}}=\nu^*\mathcal{L}$ with $\nu\colon\widetilde{\mathcal{X}}\to\mathcal{X}$ the normalization morphism. We first prove that Duistermaat-Heckman measures are invariant under normalization, in the reduced case. \begin{thm}\label{thm:DHinv} If $X$ is reduced, then $\DH_{(\mathcal{X},\mathcal{L})}=\DH_{(\widetilde{\mathcal{X}},\tilde{\mathcal{L}})}$ for every ample test configuration $(\mathcal{X},\mathcal{L})$ for $(X,L)$. \end{thm} \begin{proof} By Proposition~\ref{prop:DHDFsemi}~(i), after twisting the $\mathbb{G}_m$-action on $\mathcal{L}$ by $t^\lambda$ with $\lambda\gg 1$, we may assume $\mu:=\DH_{(\mathcal{X},\mathcal{L})}$ and $\tilde{\mu}:=\DH_{(\widetilde{\mathcal{X}},\tilde{\mathcal{L}})}$ are supported in $\mathbb{R}_+$. For $m$ divisible enough, let \begin{equation*} \mu_m:=(1/m)_*\mu_{\pi_*\mathcal{O}_\mathcal{X}(m\mathcal{L})_0} \end{equation*} be the scaled weight measure of the $\mathbb{G}_m$-module $\pi_*\mathcal{O}_\mathcal{X}(m\mathcal{L})_0$. Thus $\mu_m$ converges weakly to $\mu$ as $m\to\infty$, by Lemma~\ref{L201} and Corollary~\ref{cor:DH}. By Lemma~\ref{lem:tail}, the tail distribution of $\mu_m$ is given by \begin{equation*} \mu_m\{x\ge\lambda\}=\frac{1}{N_m}\dim F^{\lceil m\lambda\rceil} H^0(X,mL), \end{equation*} where $F^\bullet H^0(X,mL)$ is the Rees filtration induced by $(\mathcal{X},\mathcal{L})$, and $N_m=\dim H^0(X,mL)=\dim H^0(\mathcal{X}_0,m\mathcal{L}_0)$ for $m\gg 1$, by flatness and Serre vanishing.
Denoting by $\tilde{\mu}_m$ and $F^\bullet H^0(\widetilde{X},m\tilde{L})$ the scaled weight measure and filtration defined by $(\widetilde{\mathcal{X}},\tilde{\mathcal{L}})$, we similarly have \begin{equation*} \tilde{\mu}_m\{x\ge\lambda\}=\frac{1}{\tilde{N}_m}\dim F^{\lceil m\lambda\rceil} H^0(\widetilde{X},m\tilde{L}). \end{equation*} Since $\mathcal{X}$ is reduced by Proposition~\ref{prop:basic}, the canonical morphism $\mathcal{O}_\mathcal{X}\to\nu_*\mathcal{O}_{\widetilde{\mathcal{X}}}$ is injective, and the projection formula yields a $\mathbb{G}_m$-equivariant inclusion $H^0(\mathcal{X},m\mathcal{L})\hookrightarrow H^0(\widetilde{\mathcal{X}},m\tilde{\mathcal{L}})$. For each $\lambda\in\mathbb{Z}$, we thus have $H^0(\mathcal{X},m\mathcal{L})_\lambda\hookrightarrow H^0(\widetilde{\mathcal{X}},m\tilde{\mathcal{L}})_\lambda$, and hence $F^\lambda H^0(X,mL)\hookrightarrow F^\lambda H^0(\widetilde{X},m\tilde{L})$, which implies \begin{equation*} \tilde{\mu}_m\{x\ge\lambda\}\ge\frac{N_m}{\tilde{N}_m}\mu_m\{x\ge\lambda\}. \end{equation*} Since $X$ is reduced, $\nu_*\mathcal{O}_{\widetilde{X}}/\mathcal{O}_X$ is supported on a nowhere dense Zariski closed subset, and hence \begin{equation*} \tilde{N}_m=h^0(\widetilde{X},m\tilde{L})=h^0(X,\mathcal{O}_X(mL)\otimes\nu_*\mathcal{O}_{\widetilde{X}})=N_m+O(m^{n-1}). \end{equation*} Since the weak convergence of probability measures $\mu_m\to\mu$ implies (in fact, is equivalent to) the a.e.\ convergence of the tail distributions, we conclude \begin{equation}\label{e203} \tilde{\mu}\{x\ge\lambda\}\ge\mu\{x\ge\lambda\} \end{equation} for a.e.\ $\lambda\in\mathbb{R}$.
By (iii) of Proposition~\ref{prop:DHDFsemi} and the projection formula, $\mu$ and $\tilde{\mu}$ have the same barycenter $\bar\lambda$, and hence \begin{equation}\label{e204} \int_{\mathbb{R}_+}\mu\{x\ge\lambda\}d\lambda =\int_{\mathbb{R}_+}\lambda\,d\mu =\bar\lambda =\int_{\mathbb{R}_+}\lambda\,d\tilde{\mu} =\int_{\mathbb{R}_+}\tilde{\mu}\{x\ge\lambda\}d\lambda \end{equation} since $\mu$ and $\tilde{\mu}$ are supported in $\mathbb{R}_+$. By (\ref{e203}), we thus have $\tilde{\mu}\{x\ge\lambda\}=\mu\{x\ge\lambda\}$ for a.e.\ $\lambda\in\mathbb{R}$, and hence $\tilde{\mu}=\mu$ (by taking for instance the distributional derivatives), which concludes the proof. \end{proof} Regarding Donaldson-Futaki invariants, we prove the following explicit version of~\cite[Proposition 5.1]{RT} and~\cite[Corollary 3.9]{ADVLN}. \begin{prop}\label{prop:DFnorm} Let $(\mathcal{X},\mathcal{L})$ be an ample test configuration for a polarized scheme $(X,L)$. Let $\mathcal{X}'$ be another test configuration for $X$ dominating $\mathcal{X}$, such that $\mu\colon\mathcal{X}'\to\mathcal{X}$ is finite, and set $\mathcal{L}':=\mu^*\mathcal{L}$. Then \begin{equation*} \DF(\mathcal{X},\mathcal{L})=\DF(\mathcal{X}',\mathcal{L}')+2 V^{-1}\sum_E m_E\left(E\cdot\mathcal{L}^n\right), \end{equation*} where $E$ ranges over the irreducible components of $\mathcal{X}_0$ contained in the singular locus of $\mathcal{X}$ and $m_E\in\mathbb{N}^*$ is the length of the sheaf $\mathcal{F}:=\left(\mu_*\mathcal{O}_{\mathcal{X}'}\right)/\mathcal{O}_{\mathcal{X}}$ at the generic point of $E$. \end{prop} When $X$ is normal, the result applies to the normalization of a test configuration; hence \begin{cor}\label{cor:Kstabnorm} If $X$ is normal, then $(X,L)$ is K-semistable iff $\DF(\mathcal{X},\mathcal{L})\ge 0$ for all \emph{normal} ample test configurations. \end{cor} \begin{proof}[Proof of Proposition~\ref{prop:DFnorm}] Let $m$ be sufficiently divisible. 
Denoting by $w_m$ and $w'_m$ the $\mathbb{G}_m$-weights of $\det H^0(\mathcal{X}_0,m\mathcal{L}_0)$ and $\det H^0(\mathcal{X}'_0,m\mathcal{L}'_0)$, the proof of Proposition~\ref{prop:DHDFsemi} yields $$ w'_m-w_m =\chi(\bar\mathcal{X}',m\bar\mathcal{L}')-\chi(\bar\mathcal{X},m\bar\mathcal{L}). $$ Since $\mu$ is finite, we have $R^q\mu_*\mathcal{O}_{\bar\mathcal{X}'}=0$ for all $q\ge 1$, and the Leray spectral sequence gives \begin{equation*} \chi(\bar\mathcal{X}',m\bar\mathcal{L}') =\chi\left(\bar\mathcal{X},\mathcal{O}_{\bar\mathcal{X}}(m\bar\mathcal{L})\otimes\mu_*\mathcal{O}_{\bar\mathcal{X}'}\right). \end{equation*} By additivity of the Euler characteristic in exact sequences and~\cite[\S2]{Kle}, we infer \begin{align*} w'_m-w_m &=\chi\left(\bar\mathcal{X},\mathcal{O}_{\bar\mathcal{X}}(m\bar\mathcal{L})\otimes\mathcal{F}\right)\\ &=\frac{m^n}{n!}\sum_E m_E\left(E\cdot\mathcal{L}^n\right)+O(m^{n-1}), \end{align*} which yields the desired result in view of Definition~\ref{defi:DFaction}. \end{proof} \subsection{The logarithmic case}\label{sec:DFlog} Assume that $X$ is normal, let $B$ be a boundary on $X$ and write $K_{(X,B)}:=K_X+B$ (see~\S\ref{sec:bound}). Let $L$ be an ample $\mathbb{Q}$-line bundle on $X$. We then introduce a log version of the `mean scalar curvature' $\bar S$ by setting \begin{equation*} \bar S_B:=nV^{-1}\left(-K_{(X,B)}\cdot L^{n-1}\right). \end{equation*} If $\mathcal{X}$ is a normal test configuration for $X$, denote by $\mathcal{B}$ (resp.\ $\bar\mathcal{B}$) the $\mathbb{Q}$-Weil divisor on $\mathcal{X}$ (resp.\ $\bar\mathcal{X}$) obtained as the (component-wise) Zariski closure in $\mathcal{X}$ (resp.\ $\bar\mathcal{X}$) of the $\mathbb{Q}$-Weil divisor $B\times(\mathbb{A}^1\setminus\{0\})$ with respect to the open embedding of $X\times(\mathbb{A}^1\setminus\{0\})$ into $\mathcal{X}$ (resp.\ $\bar\mathcal{X}$).
We then set \begin{equation*} K_{(\mathcal{X},\mathcal{B})}:=K_\mathcal{X}+\mathcal{B}, \quad K_{(\bar\mathcal{X},\bar\mathcal{B})}:=K_{\bar\mathcal{X}}+\bar\mathcal{B}, \end{equation*} and \begin{equation*} K_{(\mathcal{X},\mathcal{B})/\mathbb{A}^1}:=K_{(\mathcal{X},\mathcal{B})}-\pi^*K_{\mathbb{A}^1}, \quad K_{(\bar\mathcal{X},\bar\mathcal{B})/\P^1}:=K_{(\bar\mathcal{X},\bar\mathcal{B})}-\pi^*K_{\P^1}. \end{equation*} Note that these $\mathbb{Q}$-Weil divisor classes may not be $\mathbb{Q}$-Cartier in general. The intersection theoretic formula for $\DF$ in Proposition~\ref{prop:DHDFsemi} suggests the following generalization for pairs (compare~\cite[Theorem 3.7]{OSu}, see also~\cite{Don4,LS}). \begin{defi}\label{defi:DFpairs} Let $B$ be a boundary on $X$. For each normal test configuration $(\mathcal{X},\mathcal{L})$ for $(X,L)$, we define the \emph{log Donaldson-Futaki invariant} of $(\mathcal{X},\mathcal{L})$ as \begin{equation*} \DF_B(\mathcal{X},\mathcal{L}):=V^{-1}(K_{(\bar\mathcal{X},\bar\mathcal{B})/\P^1}\cdot\bar\mathcal{L}^n) +\bar S_BV^{-1}\frac{(\bar\mathcal{L}^{n+1})}{n+1}. \end{equation*} \end{defi} In view of Corollary~\ref{cor:Kstabnorm}, we may then introduce the following notion: \begin{defi}\label{defi:logKstab} A polarized pair $((X,B);L)$ is \emph{K-semistable} if $\DF_B(\mathcal{X},\mathcal{L})\ge 0$ for all normal ample test configurations. It is \emph{K-stable} if we further have $\DF_B(\mathcal{X},\mathcal{L})=0$ only when $(\mathcal{X},\mathcal{L})$ is trivial. \end{defi} Note that $((X,B);L)$ is K-semistable (resp.\ K-stable) iff $((X,B);rL)$ is K-semistable (resp.\ K-stable) for some (or, equivalently, any) $r\in\mathbb{Z}_{>0}$. \begin{rmk}\label{rmk:slc} Let $X$ be a \emph{deminormal} scheme, \ie reduced, of pure dimension $n$, $S_2$ and with at most normal crossing singularities in codimension one, and let $\nu\colon\widetilde{X}\to X$ be the normalization.
If $K_X$ is $\mathbb{Q}$-Cartier, then $\nu^*K_X=K_{\widetilde{X}}+\widetilde{B}$, where $\widetilde{B}$, the inverse image of the conductor, is a reduced Weil divisor on $\widetilde{X}$ by the deminormality assumption. By definition, $X$ has \emph{semi-log canonical singularities} (slc for short) if $(\widetilde{X},\widetilde{B})$ is lc. (See~\cite[\S5]{KollarBook} for details.) Now let $L$ be an ample $\mathbb{Q}$-line bundle on $X$, and let $(\mathcal{X},\mathcal{L})$ be an ample test configuration for $(X,L)$, with normalization $(\widetilde{\mathcal{X}},\tilde{\mathcal{L}})$. In~\cite[Proposition 3.8]{Oda2} and~\cite[\S5]{Oda3}, Odaka introduces the \emph{partial normalization} $\widetilde{\mathcal{X}}\to\mathcal{X}'\to\mathcal{X}$ by requiring that $$ \mathcal{O}_{\mathcal{X}'}=\mathcal{O}_{\widetilde{\mathcal{X}}}\cap \mathcal{O}_{X\times\left(\mathbb{A}^1\setminus\{0\}\right)}. $$ We get this way an ample test configuration $(\mathcal{X}',\mathcal{L}')$ for $(X,L)$, with the extra property that $\widetilde{\mathcal{X}}\to\mathcal{X}'$ is an isomorphism over the generic points of $\mathcal{X}'_0$, cf.~\cite[Lemma 3.9]{Oda2}. Arguing as in the proof of Proposition~\ref{prop:DFnorm}, we may then check that \begin{equation}\label{equ:DFslc} \DF_{\widetilde{B}}(\widetilde{\mathcal{X}},\tilde{\mathcal{L}})=\DF(\mathcal{X}',\mathcal{L}')\le\DF(\mathcal{X},\mathcal{L}). \end{equation} This shows that $(X,L)$ is K-semistable iff $\DF_{\widetilde{B}}(\widetilde{\mathcal{X}},\tilde{\mathcal{L}})\ge 0$ for the normalization $(\widetilde{\mathcal{X}},\tilde{\mathcal{L}})$ of every ample test configuration $(\mathcal{X},\mathcal{L})$ for $(X,L)$. \end{rmk} \section{Valuations and test configurations}\label{sec:valtest} In what follows, $X$ denotes a normal variety of dimension $n$, with function field $K=k(X)$. The function field of any test configuration for $X$ is then isomorphic to $K(t)$.
We shall relate valuations on $K$ and $K(t)$ from both an algebraic and geometric point of view. \subsection{Restriction and Gauss extension} First consider a valuation $w$ on $K(t)$. We denote by $r(w)$ its restriction to $K$. \begin{lem}\label{lem:div} If $w$ is an Abhyankar valuation, then so is $r(w)$. If $w$ is divisorial, then $r(w)$ is either divisorial or trivial. \end{lem} \begin{proof} The first assertion follows from Abhyankar's inequality (\ref{equ:Abhy}). Indeed, if $w$ is Abhyankar, then $\trdeg(w)+\ratrk(w)=n+1$, so~\eqref{equ:Abhy} gives $\trdeg(r(w))+\ratrk(r(w))\ge n$. As the opposite inequality always holds, we must have $\trdeg(r(w))+\ratrk(r(w))=n$, \ie $r(w)$ is Abhyankar. We also have $\trdeg(r(w))\le\trdeg(w)$, so if $w$ is divisorial, then $\trdeg(r(w))=n$ or $\trdeg(r(w))=n-1$, corresponding to $r(w)$ being trivial or divisorial, respectively. \end{proof} \medskip The restriction map $r$ is far from injective, but we can construct a natural one-sided inverse by exploiting the $k^*$-action (or $\mathbb{G}_m$-action) on $K(t)=k(X_{\mathbb{A}^1})$ defined by $(a\cdot f)(t)=f(a^{-1}t)$ for $a\in k^*$ and $f\in K(t)$. In terms of the Laurent polynomial expansion \begin{equation}\label{equ:Laurent} f=\sum_{\lambda\in\mathbb{Z}}f_\lambda t^\lambda \end{equation} with $f_\lambda\in K$, the $k^*$-action on $K(t)$ reads \begin{equation}\label{equ:action} a\cdot f=\sum_{\lambda\in\mathbb{Z}}a^{-\lambda}f_\lambda t^\lambda. \end{equation} \begin{lem}\label{lem:gauss} A valuation $w$ on $K(t)$ is $k^*$-invariant iff \begin{equation}\label{equ:gauss} w(f)=\min_{\lambda\in\mathbb{Z}}\left(r(w)(f_\lambda)+\lambda w(t)\right) \end{equation} for all $f\in K(t)$ with Laurent polynomial expansion (\ref{equ:Laurent}). In particular, $r(w)$ is trivial iff $w$ is a multiple of the $t$-adic valuation. \end{lem} \begin{proof} In view of (\ref{equ:action}), it is clear that (\ref{equ:gauss}) implies $k^*$-invariance.
Conversely, let $w$ be a $k^*$-invariant valuation on $K(t)$. The valuation property of $w$ shows that $$ w(f)\ge\min_{\lambda\in\mathbb{Z}}\left(r(w)(f_\lambda)+\lambda w(t)\right). $$ Set $\Lambda:=\left\{\lambda\in\mathbb{Z}\mid f_\lambda\ne 0\right\}$ and pick distinct elements $a_\mu\in k^*$, $\mu\in\Lambda$ (recall that $k$ is algebraically closed, and hence infinite). The Vandermonde matrix $(a_\mu^\lambda)_{\lambda,\mu\in\Lambda}$ is then invertible, and each term $f_\lambda t^\lambda$ with $\lambda\in\Lambda$ may thus be expressed as a $k$-linear combination of $(a_\mu\cdot f)_{\mu\in\Lambda}$. Using the valuation property of $w$ again, we get for each $\lambda\in\Lambda$ $$ r(w)(f_\lambda)+\lambda w(t)=w\left(f_\lambda t^\lambda\right)\ge\min_{\mu\in\Lambda} w(a_\mu\cdot f)=w(f), $$ where the right-hand equality holds by $k^*$-invariance of $w$. The result follows. \end{proof} \begin{defi}\label{defi:Gauss} The \emph{Gauss extension} of a valuation $v$ on $K$ is the valuation $G(v)$ on $K(t)$ defined by $$ G(v)(f)=\min_{\lambda\in\mathbb{Z}}\left(v(f_\lambda)+\lambda\right) $$ for all $f$ with Laurent polynomial expansion (\ref{equ:Laurent}). \end{defi} Note that $r(G(v))=v$ for all valuations $v$ on $K$, while a valuation $w$ on $K(t)$ satisfies $w=G(r(w))$ iff it is $k^*$-invariant and $w(t)=1$, by Lemma~\ref{lem:gauss}. Further, the Gauss extension of $v$ is the smallest extension $w$ with $w(t)=1$. For instance, if $f=f_0+f_1t$ with $v(f_0)=2$ and $v(f_1)=0$, then $G(v)(f)=\min\{2+0,0+1\}=1$. \subsection{Geometric interpretation} We now relate the previous algebraic considerations to test configurations. For each test configuration $\mathcal{X}$ for $X$, the canonical birational map $\mathcal{X}\dashrightarrow X_{\mathbb{A}^1}$ yields an isomorphism $k(\mathcal{X})\simeq K(t)$. When $\mathcal{X}$ is normal, every irreducible component $E$ of $\mathcal{X}_0$ therefore defines a divisorial valuation $\ord_E$ on $K(t)$. \begin{defi}\label{defi:nontriv} Let $\mathcal{X}$ be a normal test configuration for $X$.
For each irreducible component $E$ of $\mathcal{X}_0$, we set $v_E:=b_E^{-1}r(\ord_E)$ with $b_E=\ord_E(\mathcal{X}_0)=\ord_E(t)$. We say that $E$ is \emph{nontrivial} if it is not the strict transform of $X\times\{0\}$. \end{defi} Since $E$ is preserved under the $\mathbb{G}_m$-action on $\mathcal{X}$, $\ord_E$ is $k^*$-invariant, and we infer from Lemma~\ref{lem:div} and Lemma~\ref{lem:gauss}: \begin{lem}\label{lem:div2} For each irreducible component $E$ of $\mathcal{X}_0$, we have $b_E^{-1}\ord_E=G(v_E)$, \ie $$ b_E^{-1}\ord_E(f)=\min_\lambda\left(v_E(f_\lambda)+\lambda\right) $$ in terms of the Laurent polynomial expansion (\ref{equ:Laurent}). Further, $E$ is nontrivial iff $v_E$ is nontrivial, and hence a divisorial valuation on $X$. \end{lem} By construction, divisorial valuations on $X$ of the form $v_E$ have value group $\Gamma_v=v(K^*)$ contained in $\mathbb{Q}$. Thus they are of the form $v_E=c\,\ord_F$ with $c\in\mathbb{Q}_{>0}$ and $F$ a prime divisor on a normal variety $Y$ mapping birationally to $X$. Conversely, we prove: \begin{thm}\label{thm:restrdiv} A divisorial valuation $v$ on $X$ is of the form $v=v_E$ for a non-trivial irreducible component $E$ of a normal test configuration iff $\Gamma_v$ is contained in $\mathbb{Q}$. In this case, we may recover $b_E$ as the denominator of the generator of $\Gamma_v$. \end{thm} \begin{lem}\label{lem:divtest} A divisorial valuation $w$ on $K(t)$ satisfying $w(t)>0$ is $k^*$-invariant iff $w=c\,\ord_E$ with $c>0$ and $E$ an irreducible component of the central fiber $\mathcal{X}_0$ of a normal test configuration $\mathcal{X}$ of $X$. \end{lem} \begin{proof} If $E$ is an irreducible component of $\mathcal{X}_0$, then $\ord_E(t)>0$, and the $\mathbb{G}_m$-invariance of $E$ easily implies that $\ord_E$ is $k^*$-invariant. Conversely, let $w$ be a $k^*$-invariant divisorial valuation on $K(t)$ satisfying $w(t)>0$.
The center $\xi$ of $w$ on $X\times\mathbb{A}^1$ is then $\mathbb{G}_m$-invariant and contained in $X\times\{0\}$. If we let $\mathcal{Y}_1$ be the test configuration obtained by blowing-up the closure of $\xi$ in $X\times\mathbb{A}^1$, then the center $\xi_1$ of $w$ on $\mathcal{Y}_1$ is again $\mathbb{G}_m$-invariant by $k^*$-invariance of $w$, and the blow-up $\mathcal{Y}_2$ of $\mathcal{Y}_1$ along the closure of $\xi_1$ is thus a test configuration. Continuing this way, we get a tower of test configurations $$ X\times\mathbb{A}^1\leftarrow\mathcal{Y}_1\leftarrow\mathcal{Y}_2\leftarrow\dots\leftarrow\mathcal{Y}_i\leftarrow\dots $$ Since $w$ is divisorial, a result of Zariski (cf.~\cite[Lemma 2.45]{KM}) guarantees that the closure of the center $\xi_i$ of $w$ on $\mathcal{Y}_i$ has codimension $1$ for $i\gg 1$. We then have $w=c\,\ord_E$ with $E$ the closure of the center of $w$ on the normalization $\mathcal{X}$ of $\mathcal{Y}_i$. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:restrdiv}] Let $E$ be a non-trivial irreducible component of $\mathcal{X}_0$ for a normal test configuration $\mathcal{X}$ of $X$. Since the value group of $\ord_E$ on $k(\mathcal{X})=K(t)$ is $\mathbb{Z}$, the value group of $v_E$ on $k(X)=K$ is of the form $\frac{c}{b_E}\mathbb{Z}$ for some positive integer $c$. Lemma~\ref{lem:div2} yields $\mathbb{Z}=c\mathbb{Z}+b_E\mathbb{Z}$, so that $c$ and $b_E$ are coprime. Conversely, let $v$ be a divisorial valuation on $X$ with $\Gamma_v=\frac{c}{b}\mathbb{Z}$ for some coprime positive integers $b,c$. Then $w:=b G(v)$ is a $k^*$-invariant divisorial valuation on $K(t)$ with value group $c\mathbb{Z}+b\mathbb{Z}=\mathbb{Z}$. By Lemma~\ref{lem:divtest}, we may thus find a normal test configuration $\mathcal{X}$ for $X$ and a non-trivial irreducible component $E$ of $\mathcal{X}_0$ such that $\ord_E=w$. We then have $b_E=w(t)=b$, and hence $v=v_E$.
\end{proof} \subsection{Rees valuations and deformation to the normal cone}\label{sec:reesdef} Our goal in this section is to relate the Rees valuations of a closed subscheme $Z\subset X$ to the valuations associated to the normalization of the deformation to the normal cone of $Z$, see Example~\ref{ex:defnorm}. \begin{thm}\label{thm:reescone} Let $Z\subset X$ be a closed subscheme, $\mathcal{X}$ the deformation to the normal cone of $Z$, and $\widetilde{\mathcal{X}}$ its normalization, so that $\mu\colon\widetilde{\mathcal{X}}\to X_{\mathbb{A}^1}$ is the normalized blow-up of $Z\times\{0\}$. Then the Rees valuations of $Z$ coincide with the valuations $v_E$, where $E$ runs over the non-trivial irreducible components of $\widetilde{\mathcal{X}}_0$. \end{thm} In other words, the Rees valuations of $Z$ are obtained by restricting to $k(X)\subset k(X)(t)$ those of $Z\times\{0\}$. If we denote by $E_0$ the strict transform of $X\times\{0\}$ in $\mathcal{X}$, one can show that $\mathcal{X}\setminus E_0$ is isomorphic to the Spec over $X$ of the \emph{extended Rees algebra} $\mathcal{O}_X[ t^{-1}\mathfrak{a}, t]$, where $\mathfrak{a}$ is the ideal of $Z$, cf.~\cite[pp.87--88]{Ful}. We thus see that Theorem~\ref{thm:reescone} is equivalent to the well-known fact that the Rees valuations of $\mathfrak{a}$ coincide with the restrictions to $X$ of the Rees valuations of the principal ideal $(t)$ of the extended Rees algebra (see for instance~\cite[Exercise 10.5]{HS}). We nevertheless provide a proof for the benefit of the reader. \begin{lem}\label{lem:intinv} Let $\mathfrak{b}=\sum_{\lambda\in\mathbb{N}}\mathfrak{b}_\lambda t^\lambda$ be a $\mathbb{G}_m$-invariant ideal of $X\times\mathbb{A}^1$, and let $$ \overline{\mathfrak{b}}=\sum_{\lambda\in\mathbb{N}}(\overline{\mathfrak{b}})_\lambda t^\lambda $$ be its integral closure. For each $\lambda$ we then have $\overline{\mathfrak{b}_\lambda}\subset(\overline{\mathfrak{b}})_\lambda$, with equality for $\lambda=0$.
\end{lem} \begin{proof} Each $f\in\overline{\mathfrak{b}_\lambda}$ satisfies a monic equation $f^d+\sum_{j=1}^d b_j f^{d-j}=0$ with $b_j\in\mathfrak{b}_\lambda^j$. Then $$ (t^\lambda f)^d+\sum_{j=1}^d(t^{\lambda j}b_j)(t^\lambda f)^{d-j}=0 $$ with $t^{\lambda j}b_j\in (t^\lambda\mathfrak{b}_\lambda)^j\subset\mathfrak{b}^j$. It follows that $t^\lambda f\in\overline{\mathfrak{b}}$, which proves the first assertion. Conversely, we may choose $l\gg 1$ such that the $\mathbb{G}_m$-invariant ideal $\mathfrak{c}:=\overline{\mathfrak{b}^l}$ satisfies $\overline{\mathfrak{b}}\cdot\mathfrak{c}=\mathfrak{b}\cdot\mathfrak{c}$ (\cf proof of Lemma~\ref{lem:normblow}). Write $\mathfrak{c}=\sum_{\lambda\ge\lambda_0}\mathfrak{c}_\lambda t^\lambda$ with $\mathfrak{c}_{\lambda_0}\ne 0$. Then $(\overline{\mathfrak{b}})_0\cdot\mathfrak{c}_{\lambda_0} t^{\lambda_0}$ is contained in the weight $\lambda_0$ part of $\mathfrak{b}\cdot\mathfrak{c}$, which is equal to $(\mathfrak{b}_0\cdot\mathfrak{c}_{\lambda_0}) t^{\lambda_0}$. We thus have $(\overline{\mathfrak{b}})_0\cdot\mathfrak{c}_{\lambda_0}\subset\mathfrak{b}_0\cdot\mathfrak{c}_{\lambda_0}$, and hence $(\overline{\mathfrak{b}})_0\subset\overline{\mathfrak{b}_0}$ by the determinant trick. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:reescone}] Let $\mathfrak{a}$ be the ideal defining $Z$. By Theorem~\ref{thm:rees}, we are to check that: \begin{itemize} \item[(i)] $\overline{\mathfrak{a}^m}=\bigcap_E\left\{f\in\mathcal{O}_X\mid v_E(f)\ge m\right\}$ for all $m\in\mathbb{N}$; \item[(ii)] no $E$ can be omitted in (i). \end{itemize} Set $D:=\mu^{-1}(Z\times\{0\})$. Since $\ord_E$ is $k^*$-invariant, Lemma~\ref{lem:gauss} yields $$ \ord_E(D)=\ord_E(\mathfrak{a}+(t))=\min\{r(\ord_E)(\mathfrak{a}),b_E\}. $$ We claim that we have in fact $\ord_E(D)=b_E$. 
As recalled in Example~\ref{ex:defnorm}, the blow-up $\rho\colon\mathcal{X}\to X\times\mathbb{A}^1$ along $Z\times\{0\}$ satisfies $\mathcal{X}_0=\rho^{-1}(Z\times\{0\})+F$, with $F$ the strict transform of $X\times\{0\}$. Denoting by $\nu\colon\widetilde{\mathcal{X}}\to\mathcal{X}$ the normalization morphism, we infer $\widetilde{\mathcal{X}}_0=D+\nu^*F$, and hence $b_E=\ord_E(\widetilde{\mathcal{X}}_0)=\ord_E(D)$. This shows in particular that the valuations $b_E^{-1}\ord_E$ are the Rees valuations of $\mathfrak{a}+(t)$. We also get that $v_E(\mathfrak{a})=b_E^{-1}r(\ord_E)(\mathfrak{a})\ge 1$, and hence $\overline{\mathfrak{a}^m}\subset\bigcap_E\left\{f\in\mathcal{O}_X\mid v_E(f)\ge m\right\}$. Conversely, assume $f\in\mathcal{O}_X$ satisfies $v_E(f)\ge m$ for all $E$. Since the $b_E^{-1}\ord_E$ are the Rees valuations of $\mathfrak{a}+(t)$, applying Theorem~\ref{thm:rees} on $X\times\mathbb{A}^1$ yields $f\in\overline{(\mathfrak{a}+(t))^m}$. Since $\mathfrak{a}^m$ is the weight $0$ part of $(\mathfrak{a}+(t))^m$, Lemma~\ref{lem:intinv} yields $f\in\overline{\mathfrak{a}^m}$, and we have thus established (i). Finally, let $S$ be any finite set of $k^*$-invariant valuations $w$ on $K(t)$ such that $$ \overline{\mathfrak{a}^m}=\bigcap_{w\in S}\left\{f\in\mathcal{O}_X\mid r(w)(f)\ge m\right\} $$ for all $m\in\mathbb{N}$. We claim that we then have $$ \overline{(\mathfrak{a}+(t))^m}=\bigcap_{w\in S}\left\{f\in\mathcal{O}_\mathcal{X}\mid w(f)\ge m\right\} $$ for all $m\in\mathbb{N}$. This will prove (ii), by the minimality of the set of Rees valuations of $\mathfrak{a}+(t)$. So assume that $f\in\mathcal{O}_\mathcal{X}$ satisfies $w(f)\ge m$ for all $w\in S$. In terms of the Laurent expansion (\ref{equ:Laurent}), we get $r(w)(f_\lambda)+\lambda\ge m$ for all $\lambda$, $w$, and hence $f_\lambda\in\overline{\mathfrak{a}^{m-\lambda}}$ by assumption. By Lemma~\ref{lem:intinv}, we conclude as desired that $f\in\overline{(\mathfrak{a}+(t))^m}$.
\end{proof} \begin{cor}\label{cor:rees} Let $(X,L)$ be a normal polarized variety and $Z\subset X$ a closed subscheme. Then there exists a normal, ample test configuration $(\mathcal{X},\mathcal{L})$ such that the Rees valuations of $Z$ are exactly the divisorial valuations $v_E$ on $X$ associated to the non-trivial irreducible components of $\mathcal{X}_0$. \end{cor} \begin{proof} Let $\mu\colon\mathcal{X}\to X\times\mathbb{A}^1$ be the normalized blow-up of $Z\times\{0\}$, so that $\mathcal{X}$ is the normalization of the deformation to the normal cone of $Z$. As recalled in Lemma~\ref{lem:normblow}, $D:=\mu^{-1}(Z\times\{0\})$ is a Cartier divisor with $-D$ $\mu$-ample. We may thus choose $0<c\ll 1$ such that $\mathcal{L}:=\mu^*L_{\mathbb{A}^1}-c D$ is ample, and $(\mathcal{X},\mathcal{L})$ is then a normal, ample test configuration. The rest follows from Theorem~\ref{thm:reescone}. \end{proof} \subsection{Log discrepancies and log canonical divisors}\label{sec:logdisc} In this section we assume that $k$ has characteristic $0$. Let $B$ be a boundary on $X$. Recall the definition of $A_{(X,B)}$ from~\S\ref{sec:bound}. \begin{prop}\label{prop:discr} For every irreducible component $E$ of $\mathcal{X}_0$, the log discrepancies of $v_E$ and $\ord_E$ (with respect to the pairs $(X,B)$ and $(X_{\mathbb{A}^1},B_{\mathbb{A}^1})$, respectively) are related by \begin{align*} A_{(X,B)}(v_E) &=A_{\left(X_{\mathbb{A}^1},B_{\mathbb{A}^1}\right)}\left(b_E^{-1}\ord_E\right)-1\\ &=A_{\left(X_{\mathbb{A}^1},B_{\mathbb{A}^1}+X\times\{0\}\right)}(b_E^{-1}\ord_E). \end{align*} \end{prop} Recall that $A_{(X,B)}(v_\triv)$ is defined to be $0$, and that $b_E=\ord_E(\mathcal{X}_0)=\ord_E(t)$. \begin{proof} If $E$ is the strict transform of $X\times\{0\}$, then $A_{(X,B)}(v_E)=A_{(X,B)}(v_\triv)=0$, while $A_{(X_{\mathbb{A}^1},B_{\mathbb{A}^1})}(\ord_E)=b_E=1$. Assume now that $E$ is non-trivial.
Since $v_E$ is a divisorial valuation on $X$, we may find a proper birational morphism $\mu\colon X'\to X$ with $X'$ smooth and a smooth irreducible divisor $F\subset X'$ such that $v_E=c\ord_F$ for some rational $c>0$. By Lemma~\ref{lem:div2}, the divisorial valuation $\ord_E$ is monomial on $X'_{\mathbb{A}^1}$ with respect to the snc divisor $X'\times\{0\}+F_{\mathbb{A}^1}$, with weights $\ord_E(X'\times\{0\})=b_E$ and $\ord_E(F_{\mathbb{A}^1})=b_Ev_E(F)=b_E c$. It follows (see~\eg~\cite[Prop.~5.1]{JM}) that \begin{multline*} A_{(X_{\mathbb{A}^1},B_{\mathbb{A}^1})}(\ord_E) =b_E A_{(X_{\mathbb{A}^1},B_{\mathbb{A}^1})}\left(\ord_{X\times\{0\}}\right) +b_E c A_{(X_{\mathbb{A}^1},B_{\mathbb{A}^1})}\left(\ord_{F_{\mathbb{A}^1}}\right)\\ =b_E+b_E c A_{(X,B)}\left(\ord_F\right) =b_E\left(1+A_{(X,B)}(v_E)\right), \end{multline*} which completes the proof. \end{proof} Now consider a normal test configuration $\mathcal{X}$ for $X$, with compactification $\bar\mathcal{X}$. As in~\S\ref{sec:DFlog}, let $\mathcal{B}$ (resp.\ $\bar\mathcal{B}$) be the closure of $B\times(\mathbb{A}^1\setminus\{0\})$ in $\mathcal{X}$ (resp. $\bar\mathcal{X}$). The log canonical divisors on $\mathbb{A}^1$ and $\P^1$ are defined as \begin{equation*} K^\mathrm{log}_{\mathbb{A}^1}:=K_{\mathbb{A}^1}+[0] \quad\text{and}\quad K^\mathrm{log}_{\P^1}:=K_{\P^1}+[0]+[\infty], \end{equation*} respectively. We now set \begin{align*} K^{\mathrm{log}}_{(\mathcal{X},\mathcal{B})} :&=K_\mathcal{X}+\mathcal{B}+\mathcal{X}_{0,\red},\\ K^\mathrm{log}_{(\bar\mathcal{X},\bar\mathcal{B})} :&=K_{\bar\mathcal{X}}+\bar\mathcal{B}+\bar\mathcal{X}_{0,\red}+\bar\mathcal{X}_{\infty,\red}\\ &=K_{\bar\mathcal{X}}+\bar\mathcal{B}+\mathcal{X}_{0,\red}+\bar\mathcal{X}_\infty, \end{align*} and call these the \emph{log canonical divisors} of $(\mathcal{X},\mathcal{B})$ and $(\bar\mathcal{X},\bar\mathcal{B})$, respectively. 
Similarly, \begin{align*} K^\mathrm{log}_{(\mathcal{X},\mathcal{B})/\mathbb{A}^1} :&=K^\mathrm{log}_{(\mathcal{X},\mathcal{B})}-\pi^*K^\mathrm{log}_{\mathbb{A}^1}\\ &=K_{(\mathcal{X},\mathcal{B})/\mathbb{A}^1}-(\mathcal{X}_0-\mathcal{X}_{0,\red}) \end{align*} and \begin{align*} K^\mathrm{log}_{(\bar\mathcal{X},\bar\mathcal{B})/\P^1} :&=K^\mathrm{log}_{(\bar\mathcal{X},\bar\mathcal{B})}-\pi^*K^\mathrm{log}_{\P^1}\\ &=K_{(\bar\mathcal{X},\bar\mathcal{B})/\P^1}-(\mathcal{X}_0-\mathcal{X}_{0,\red}) \end{align*} are the \emph{relative log canonical divisors}. Again we emphasize that these $\mathbb{Q}$-Weil divisor classes may not be $\mathbb{Q}$-Cartier in general. \smallskip There are two main reasons for introducing the relative log canonical divisors. First, they connect well with the log discrepancy function on divisorial valuations on $X$. Namely, consider normal test configurations $\mathcal{X}$ and $\mathcal{X}'$ for $X$, with $\mathcal{X}'$ dominating $\mathcal{X}$ via $\mu\colon\mathcal{X}'\to\mathcal{X}$. Suppose that $K^\mathrm{log}_{(\mathcal{X},\mathcal{B})}$ is $\mathbb{Q}$-Cartier. Then \begin{align}\label{equ:Klog} K^\mathrm{log}_{(\bar\mathcal{X}',\bar\mathcal{B}')/\P^1}-\mu^*K^\mathrm{log}_{(\bar\mathcal{X},\bar\mathcal{B})/\P^1} =K^\mathrm{log}_{(\mathcal{X}',\mathcal{B}')/\mathbb{A}^1}-\mu^*K^\mathrm{log}_{(\mathcal{X},\mathcal{B})/\mathbb{A}^1} =\sum_{E'} A_{(\mathcal{X},\mathcal{B}+\mathcal{X}_{0,\red})}(\ord_{E'}){E'}, \end{align} where $E'$ ranges over the irreducible components of $\mathcal{X}'_0$. 
Combining this with Proposition~\ref{prop:discr}, we infer: \begin{cor}\label{cor:discr} For any normal test configuration $\mathcal{X}$ dominating $X_{\mathbb{A}^1}$ via $\rho\colon\mathcal{X}\to X_{\mathbb{A}^1}$, we have \begin{equation}\label{equ:Klogbis} K^\mathrm{log}_{(\bar\mathcal{X},\bar\mathcal{B})/\P^1}-\rho^*K^\mathrm{log}_{(X_{\P^1},B_{\P^1})/\P^1} =K^\mathrm{log}_{(\mathcal{X},\mathcal{B})/\mathbb{A}^1}-\rho^*K^\mathrm{log}_{(X_{\mathbb{A}^1},B_{\mathbb{A}^1})/\mathbb{A}^1} =\sum_E b_E A_{(X,B)}(v_E) E, \end{equation} with $E$ ranging over the irreducible components of $\mathcal{X}_0$. \end{cor} Second, the relative log canonical divisors behave well under base change. Namely, let $(\mathcal{X}_d,\mathcal{L}_d)$ be the normalized base change of $(\mathcal{X},\mathcal{L})$, and denote by $f_d\colon\P^1\to\P^1$ and $g_d\colon\bar\mathcal{X}_d\to\bar\mathcal{X}$ the induced finite morphisms, both of which have degree $d$. The pull-back formula for log canonical divisors (see~\eg~\cite[\S2.42]{KollarBook}) then yields \begin{equation}\label{e404} K^\mathrm{log}_{(\bar\mathcal{X}_d,\bar\mathcal{B}_d)}=g_d^*K^\mathrm{log}_{(\bar\mathcal{X},\bar\mathcal{B})} \quad\text{and}\quad K^\mathrm{log}_{\P^1}=f_d^*K^\mathrm{log}_{\P^1}, \end{equation} so that $K^\mathrm{log}_{(\bar\mathcal{X}_d,\bar\mathcal{B}_d)/\P^1}=g_d^*K^\mathrm{log}_{(\bar\mathcal{X},\bar\mathcal{B})/\P^1}$. Note that while the (relative) log canonical divisors above may not be $\mathbb{Q}$-Cartier, we can pull them back under the finite morphism $g_d$, see~\cite[\S2.40]{KollarBook}. \section{Duistermaat-Heckman measures and filtrations}\label{sec:DHfiltr} In this section, we analyze in detail the limit measure of a filtration, a concept closely related to Duistermaat-Heckman measures. This allows us to establish Theorem A and Corollary~B. \subsection{The limit measure of a filtration}\label{sec:limit} Let $X$ be a variety of dimension $n$, $L$ an ample line bundle on $X$, and set $R=R(X,L)$. 
Let us review and complement the study in~\cite{BC} of a natural measure on $\mathbb{R}$ associated to a general $\mathbb{R}$-filtration $F^\bullet R$ on $R$. Recall that the \emph{volume} of a graded subalgebra $S\subset R$ is defined as \begin{equation}\label{equ:vol} \vol(S):=\limsup_{m\to\infty}\frac{n!}{m^n}\dim S_m\in\mathbb{R}_{\ge0}. \end{equation} The following result is proved using Okounkov bodies~\cite{LM,KK} (see also the first author's appendix in~\cite{Sze2}). \begin{lem}\label{lem:vol} Let $S\subset R$ be a graded subalgebra containing an ample series, \ie \begin{itemize} \item[(i)] $S_m\ne 0$ for all $m\gg 1$; \item[(ii)] there exist $\mathbb{Q}$-divisors $A$ and $E$, ample and effective respectively, such that $L=A+E$ and $H^0(X,mA)\subset S_m\subset H^0(X,mL)$ for all $m$ divisible enough. \end{itemize} Then $\vol(S)>0$, and the limsup in (\ref{equ:vol}) is a limit. For each $m\gg 1$, let $\mathfrak{a}_m\subset\mathcal{O}_X$ be the base ideal of $S_m$, \ie the image of the evaluation map $S_m\otimes\mathcal{O}_X(-mL)\to\mathcal{O}_X$, and let $\mu_m\colon X_m\to X$ be the normalized blow-up of $X$ along $\mathfrak{a}_m$, so that $\mathcal{O}_{X_m}\cdot\mathfrak{a}_m=\mathcal{O}_{X_m}(-F_m)$ with $F_m$ an effective Cartier divisor. Then we also have $$ \vol(S)=\lim_{m\to\infty}\left(\mu_m^*L-\tfrac{1}{m}F_m\right)^n. $$ \end{lem} Now let $F^\bullet R$ be an $\mathbb{R}$-filtration of the graded ring $R$, as defined in \S\ref{sec:filtr}. We denote by $$ \lambda^{(m)}_{\max}=\lambda^{(m)}_{1}\ge\dots\ge\lambda^{(m)}_{N_m}=\lambda^{(m)}_{\min} $$ the successive minima of $F^\bullet H^0(X,mL)$.
As $R$ is an integral domain, the sequence $(\lambda^{(m)}_{\max})_{m\in\mathbb{N}}$ is superadditive in the sense that $\lambda^{(m+m')}_{\max}\ge\lambda^{(m)}_{\max}+\lambda^{(m')}_{\max}$, and this implies, by Fekete's lemma, that $$ \lambda_{\max}=\lambda_{\max}(F^{\bullet} R):=\lim_{m\to\infty}\frac{\lambda^{(m)}_{\max}}{m}=\sup_{m\ge 1}\frac{\lambda^{(m)}_{\max}}{m}\in(-\infty,+\infty]. $$ By definition, we have $\lambda_{\max}<+\infty$ iff there exists $C>0$ such that $F^\lambda H^0(X,mL)=0$ for any $\lambda,m$ such that $\lambda\ge Cm$, and we then say that $F^\bullet R$ has \emph{linear growth}. For example, it follows from~\cite[Lemma~3.1]{PS07} that the filtration associated to a test configuration (see~\S\ref{sec:filtrtc}) has linear growth. \begin{rmk}\label{R403} In contrast, there always exists $C>0$ such that $F^\lambda H^0(X,mL)=H^0(X,mL)$ for any $\lambda,m$ such that $\lambda\le-Cm$. This is a simple consequence of the finite generation of $R$, cf.~\cite[Lemma 1.5]{BC}. \end{rmk} For each $\lambda\in\mathbb{R}$, we define a graded subalgebra of $R$ by setting \begin{equation}\label{equ:Rla} R^{(\lambda)}:=\bigoplus_{m\in\mathbb{N}}F^{m\lambda} H^0(X,mL). \end{equation} The main result of~\cite{BC} may be summarized as follows. \begin{thm}\label{thm:BC} Let $F^\bullet R$ be a filtration with linear growth. \begin{itemize} \item[(i)] For each $\lambda<\lambda_{\max}$, $R^{(\lambda)}$ contains an ample series. \item[(ii)] The function $\lambda\mapsto \vol(R^{(\lambda)})^{1/n}$ is concave on $(-\infty,\lambda_{\max})$, and vanishes on $(\lambda_{\max},+\infty)$.
\item[(iii)] If we introduce, for each $m$, the probability measure \begin{equation}\label{equ:num} \nu_m:=\frac{1}{N_m}\sum_j\delta_{m^{-1}\lambda^{(m)}_{j}}=-\frac{d}{d\lambda}\frac{\dim F^{m\lambda} H^0(X,mL)}{N_m} \end{equation} on $\mathbb{R}$, then $\nu_m$ has uniformly bounded support and converges weakly as $m\to\infty$ to the probability measure \begin{equation}\label{equ:limmeas} \nu:=-\frac{d}{d\lambda}V^{-1}\vol(R^{(\lambda)}). \end{equation} \end{itemize} \end{thm} We call $\nu$ the \emph{limit measure} of the filtration $F^\bullet R$. The concavity property of $\vol(R^{(\lambda)})^{1/n}$ immediately yields: \begin{cor}\label{cor:supp} The support of the limit measure $\nu$ is given by $\supp\nu=[\lambda_{\min},\lambda_{\max}]$ with $$ \lambda_{\min}:=\inf\left\{\lambda\in\mathbb{R}\mid\vol(R^{(\lambda)})<V\right\}. $$ Further, $\nu$ is absolutely continuous with respect to the Lebesgue measure, except perhaps for a point mass at $\lambda_{\max}$. \end{cor} More precisely, the mass of $\nu$ on $\{\lambda_{\max}\}$ is equal to $V^{-1}\lim_{\lambda\to(\lambda_{\max})_-}\vol(R^{(\lambda)})$. \begin{rmk} While we trivially have $\lambda_{\min}\ge\limsup_{m\to\infty}m^{-1}\lambda^{(m)}_{\min}$, the inequality can be strict in general. It will, however, be an equality for the filtrations considered in~\S\ref{sec:PP} and~\S\ref{sec:filtval}. \end{rmk} \begin{rmk}\label{R101} Let $F^\bullet R(X,L)$ be a filtration with linear growth and limit measure $\nu$. For any $r\in\mathbb{Z}_{>0}$, we obtain a filtration $F^\bullet R(X,rL)$ by restriction. This filtration also has linear growth, and its limit measure is given by $r_*\nu$. \end{rmk} \subsection{Limit measures and Duistermaat-Heckman measures}\label{S102} Now suppose $L$ is an ample $\mathbb{Q}$-line bundle on $X$.
To simplify the terminology, we introduce: \begin{defi}\label{defi:DHsemi} We define the \emph{Duistermaat-Heckman} measure of any semiample test configuration $(\mathcal{X},\mathcal{L})$ of $(X,L)$ as $\DH_{(\mathcal{X},\mathcal{L})}:=\DH_{(\mathcal{X}_\amp,\mathcal{L}_\amp)}$, where $(\mathcal{X}_\amp,\mathcal{L}_\amp)$ is the ample model of $(\mathcal{X},\mathcal{L})$ as in Proposition~\ref{prop:amplemodel}. \end{defi} With this definition, Duistermaat-Heckman measures are invariant under normalization: \begin{cor}\label{C301} If $(\mathcal{X},\mathcal{L})$ is a semiample test configuration for $(X,L)$, and $(\widetilde{\mathcal{X}},\tilde{\mathcal{L}})$ is its normalization, then $\DH_{(\mathcal{X},\mathcal{L})}=\DH_{(\widetilde{\mathcal{X}},\tilde{\mathcal{L}})}$. \end{cor} \begin{proof} Let $(\mathcal{X}_{\amp},\mathcal{L}_{\amp})$ be the ample model of $(\mathcal{X},\mathcal{L})$ and $(\widetilde{\mathcal{X}}_{\amp},\tilde{\mathcal{L}}_{\amp})$ its normalization. The composition $\widetilde{\mathcal{X}}\to\mathcal{X}_\amp$ lifts to a map $\widetilde{\mathcal{X}}\to\widetilde{\mathcal{X}}_\amp$, under which $\tilde{\mathcal{L}}$ is the pullback of $\tilde{\mathcal{L}}_\amp$. By the uniqueness statement of Proposition~\ref{prop:amplemodel}, $(\widetilde{\mathcal{X}}_\amp,\tilde{\mathcal{L}}_\amp)$ is the ample model of $(\widetilde{\mathcal{X}},\tilde{\mathcal{L}})$. By definition, we thus have $\DH_{(\mathcal{X},\mathcal{L})}=\DH_{(\mathcal{X}_{\amp},\mathcal{L}_{\amp})}$ and $\DH_{(\widetilde{\mathcal{X}},\tilde{\mathcal{L}})}=\DH_{(\widetilde{\mathcal{X}}_{\amp},\tilde{\mathcal{L}}_{\amp})}$, whereas Theorem~\ref{thm:DHinv} yields $\DH_{(\widetilde{\mathcal{X}}_{\amp},\tilde{\mathcal{L}}_{\amp})}=\DH_{(\mathcal{X}_{\amp},\mathcal{L}_{\amp})}$, concluding the proof. \end{proof} We now relate Duistermaat-Heckman measures and limit measures. Recall from~\S\ref{sec:filtrtc} that any test configuration for $(X,L)$ induces a filtration of $R(X,rL)$ for $r$ sufficiently divisible.
\begin{prop}\label{prop:DHlimit} If $(\mathcal{X},\mathcal{L})$ is semiample, then, for $r$ sufficiently divisible, the limit measure of the filtration on $R(X,rL)$ induced by $(\mathcal{X},\mathcal{L})$ is equal to $r_*\DH_{(\mathcal{X},\mathcal{L})}$. \end{prop} \begin{proof}[Proof of Proposition~\ref{prop:DHlimit}] Using homogeneity (see Definition~\ref{D101} and Remark~\ref{R101}), we may assume $r=1$. Further, $(\mathcal{X},\mathcal{L})$ and $(\mathcal{X}_\amp,\mathcal{L}_\amp)$ induce the same filtration on $R(X,L)$, so we may assume $(\mathcal{X},\mathcal{L})$ is ample. By the projection formula and the ampleness of $\mathcal{L}$, it then follows that the fiber at $0$ of the vector bundle $\pi_*\mathcal{O}_\mathcal{X}(m\mathcal{L})$ can be identified with $H^0(\mathcal{X}_0,m\mathcal{L}_0)$, and Lemma~\ref{L201} therefore shows that the weight measure of the latter $\mathbb{G}_m$-module is given by \begin{equation*} \mu_{H^0(\mathcal{X}_0,m\mathcal{L}_0)} =\frac{1}{N_m}\sum_{\lambda\in\mathbb{Z}}\dim\left(F^\lambda H^0(X,mL)/F^{\lambda+1} H^0(X,mL)\right)\d_\lambda. \end{equation*} As a result, $\mu_m:=(1/m)_*\mu_{H^0(\mathcal{X}_0,m\mathcal{L}_0)}$ satisfies \begin{equation*} \mu_m=-\frac{d}{d\lambda}\frac{\dim F^{m\lambda} H^0(X,mL)}{N_m}, \end{equation*} and hence converges to the limit measure of $F^\bullet R$ by Theorem~\ref{thm:BC}. \end{proof} \subsection{Piecewise polynomiality in the normal case}\label{sec:PP} \begin{thm}\label{thm:PP} Let $X$ be an $n$-dimensional normal variety, $L$ an ample line bundle on $X$, and $F^\bullet R$ a finitely generated $\mathbb{Z}$-filtration of $R=R(X,L)$. Then $F^\bullet R$ has linear growth, and the density of its limit measure $\nu$ is a piecewise polynomial function, of degree at most $n-1$. \end{thm} By the density of $\nu$ we mean the density of the absolutely continuous part, see Corollary~\ref{cor:supp}.
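For instance, for $X=\mathbb{P}^1$, $L=\mathcal{O}(1)$ and the (finitely generated) filtration by vanishing order at a point $p\in\mathbb{P}^1$, we have $\dim F^{m\lambda}H^0(\mathbb{P}^1,\mathcal{O}(m))=m+1-\lceil m\lambda\rceil$ for $0\le\lambda\le1$, hence $$ \vol(R^{(\lambda)})=\lim_{m\to\infty}\frac{1}{m}\left(m+1-\lceil m\lambda\rceil\right)=1-\lambda \quad\text{for }\lambda\in[0,1], $$ and (\ref{equ:limmeas}) gives $\nu=\mathbf{1}_{[0,1]}(\lambda)\,d\lambda$, the Lebesgue measure on $[0,1]$: here $\vol(R^{(\lambda)})$ is piecewise polynomial of degree $1=n$, and the density of $\nu$ is piecewise polynomial of degree $0=n-1$, as predicted by Theorem~\ref{thm:PP}.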
Since a semiample test configuration of a polarized variety $(X,L)$ induces a finitely generated filtration of $R(X,rL)$ for $r$ sufficiently divisible, we get: \begin{cor}\label{cor:PP} Let $(X,L)$ be a polarized normal variety. Then the Duistermaat-Heckman measure $\DH_{(\mathcal{X},\mathcal{L})}$ of any semiample test configuration $(\mathcal{X},\mathcal{L})$ for $(X,L)$ is the sum of a point mass and an absolutely continuous measure with piecewise polynomial density. \end{cor} The general case where $(X,L)$ is an arbitrary polarized scheme will be treated in Theorem~\ref{thm:PPscheme}, by reducing to the normal case studied here. \begin{proof}[Proof of Theorem~\ref{thm:PP}] The following argument is inspired by the proof of~\cite[Proposition 4.13]{ELMNP}.\footnote{While the base field in~\textit{loc.cit.}\ is $\mathbb{C}$, the results we use are valid over an arbitrary algebraically closed field.} For $\tau=(m,\lambda)\in\mathbb{N}\times\mathbb{Z}$, let $\mathfrak{a}_\tau$ be the base ideal of $F^\lambda H^0(X,mL)$, \ie the image of the evaluation map $F^\lambda H^0(X,mL)\otimes\mathcal{O}_X(-mL)\to\mathcal{O}_X$. Let $\mu_\tau\colon X_\tau\to X$ be the normalized blow-up of $\mathfrak{a}_\tau$, which is also the normalized blow-up of its integral closure $\overline{\mathfrak{a}_\tau}$. Then $$ \mathcal{O}_{X_\tau}\cdot\mathfrak{a}_\tau=\mathcal{O}_{X_\tau}\cdot\overline{\mathfrak{a}_\tau}=\mathcal{O}_{X_\tau}(-F_\tau), $$ with $F_\tau$ a Cartier divisor, and we set $$ V_\tau:=\left(\mu_\tau^*L-\tfrac{1}{m}F_\tau\right)^n. $$ Since $R^{(\lambda)}$ contains an ample series for $\lambda\in(-\infty,\lambda_{\max})$, Lemma~\ref{lem:vol} yields $$ \vol(R^{(\lambda)})=\lim_{m\to\infty}V_{(m,\lceil m\lambda\rceil)}. $$ Now, we use the finite generation of $F^\bullet R$, which implies that the $\mathbb{N}\times\mathbb{Z}$-graded $\mathcal{O}_X$-algebra $\bigoplus_{\tau\in\mathbb{N}\times\mathbb{Z}}\mathfrak{a}_\tau$ is finitely generated.
By~\cite[Proposition 4.7]{ELMNP}, we may thus find a positive integer $d$ and finitely many vectors $e_i=(m_i,\lambda_i)\in\mathbb{N}\times\mathbb{Z}$, $1\le i\le r$, with the following properties: \begin{itemize} \item[(i)] $e_1=(0,-1)$, $e_r=(0,1)$, and the slopes $a_i:=\lambda_i/m_i$ are strictly increasing with $i$; \item[(ii)] Every $\tau\in\mathbb{N}\times\mathbb{Z}$ may be written as $\tau=p_i e_i+p_{i+1}e_{i+1}$ with $i$, $p_i,p_{i+1}\in\mathbb{N}$ uniquely determined, and the integral closures of $\mathfrak{a}_{d\tau}$ and $\mathfrak{a}_{de_i}^{p_i}\cdot\mathfrak{a}_{de_{i+1}}^{p_{i+1}}$ coincide. \end{itemize} Choose a projective birational morphism $\mu\colon X'\to X$ with $X'$ normal and dominating the blow-up of each $\mathfrak{a}_{d e_i}$, so that there is a Cartier divisor $E_i$ with $\mathcal{O}_{X'}\cdot\mathfrak{a}_{de_i}=\mathcal{O}_{X'}(-E_i)$. For all $\tau=(m,\lambda)\in\mathbb{N}\times\mathbb{Z}$ written as in (ii) as $\tau=p_ie_i+p_{i+1}e_{i+1}$, we get $$ \mathcal{O}_{X'}\cdot\mathfrak{a}_{d e_i}^{p_i}\cdot\mathfrak{a}_{d e_{i+1}}^{p_{i+1}}=\mathcal{O}_{X'}(-(p_i E_i+p_{i+1}E_{i+1})), $$ and the universal property of normalized blow-ups therefore shows that $\mu$ factors through the normalized blow-up of $\mathfrak{a}_{d e_i}^{p_i}\cdot\mathfrak{a}_{d e_{i+1}}^{p_{i+1}}$. By Lemma~\ref{lem:normblow}, the latter is also the normalized blow-up of $$ \overline{\mathfrak{a}_{d e_i}^{p_i}\cdot\mathfrak{a}_{d e_{i+1}}^{p_{i+1}}}=\overline{\mathfrak{a}_{d\tau}}, $$ so we infer that $$ \mathfrak{a}_{d\tau}\cdot\mathcal{O}_{X'}=\mathcal{O}_{X'}\left(-(p_i E_i+p_{i+1}E_{i+1})\right), $$ with $p_i E_i+p_{i+1}E_{i+1}$ the pull-back of $F_{d\tau}$. As a result, we get $$ V_{d\tau}=\left(\mu^*L-\tfrac{1}{dm}\left(p_i E_i+p_{i+1}E_{i+1}\right)\right)^n. $$ Pick $\lambda\in(0,\lambda_{\max})$, so that $\lambda\in[a_i,a_{i+1})$ for some $i$. 
We infer from the previous discussion that $$ \vol(R^{(\lambda)})=\lim_{m\to\infty} V_{(m,\lceil m\lambda\rceil)}=\left(\mu^*L-(f_i(\lambda) E_i+f_{i+1}(\lambda)E_{i+1})\right)^n $$ for some affine functions $f_i,f_{i+1}$, and we conclude that $\vol(R^{(\lambda)})$ is a piecewise polynomial function of $\lambda\in(-\infty,\lambda_{\max})$, of degree at most $n$. The result follows by (\ref{equ:limmeas}). \end{proof} \begin{rmk}\label{rmk:finitetype} For a finitely generated $\mathbb{Z}$-filtration $F^\bullet R$, the graded subalgebra $R^{(\lambda)}$ is finitely generated for each $\lambda\in\mathbb{Q}$~\cite[Lemma 4.8]{ELMNP}. In particular, $\vol(R^{(\lambda)})\in\mathbb{Q}$ for all $\lambda\in\mathbb{Q}\cap(-\infty,\lambda_{\max})$. \end{rmk} \subsection{The filtration defined by a divisorial valuation}\label{sec:filtval} Let $X$ be a normal projective variety and $L$ an ample line bundle on $X$. Any valuation $v$ on $X$ defines a filtration $F_v^\bullet R$ by setting $$ F_v^\lambda H^0(X,mL):=\left\{s\in H^0(X,mL)\mid v(s)\ge\lambda\right\}. $$ As a special case of~\cite[Proposition 2.12]{BKMS}, $F_v^\bullet R$ has linear growth for any divisorial valuation $v$. The following result will be needed later on. \begin{lem}\label{lem:suppdiv} Let $v$ be a divisorial valuation on $X$, and let $\nu$ be the limit measure of the corresponding filtration $F_v^\bullet R$. Then $\supp\nu=[0,\lambda_{\max}]$. In other words, we have $$ \lim_{m\to\infty}\frac{\dim\left\{s\in H^0(X,mL)\mid v(s)\ge\lambda m\right\}}{N_m}<1 $$ for any $\lambda>0$. \end{lem} The equivalence between the two statements follows from Corollary~\ref{cor:supp} above. \begin{proof} Let $Z\subset X$ be the closure of the center $c_X(v)$ of $v$ on $X$, and $w$ a Rees valuation of $Z$. Since the center of $w$ on $X$ belongs to $Z=\overline{c_X(v)}$, the general version of Izumi's theorem in~\cite{HS01} yields a constant $C>0$ such that $v(f)\le C w(f)$ for all $f\in\mathcal{O}_{X,c_X(w)}$. 
Let $\mu\colon X'\to X$ be the normalized blow-up of $Z$ and set $E:=\mu^{-1}(Z)$. By definition, the Rees valuations of $Z$ are given up to scaling by vanishing order along the irreducible components of $E$. Given $\lambda>0$, we infer that $$ \left\{v\ge\lambda m\right\}\subset\mu_*\mathcal{O}_{X'}(-m\d E) $$ for all $0<\d\ll 1$ and all $m\ge 1$. It follows that $$ \left\{s\in H^0(X,mL)\mid v(s)\ge\lambda m\right\}\hookrightarrow H^0\left(X',m\left(\mu^*L-\d E\right)\right), $$ so that $\vol(R^{(\lambda)})\le(\mu^*L-\d E)^n$. But since $-E$ is $\mu$-ample, $\mu^*L-\d E$ is ample on $X'$ for $0<\d\ll 1$, so that $$ \frac{d}{d\d}(\mu^*L-\d E)^n=-n\left(E\cdot(\mu^*L-\d E)^{n-1}\right)<0. $$ It follows that $$ \vol(R^{(\lambda)})\le (\mu^*L-\d E)^n<(\mu^*L)^n=V $$ for $0<\d\ll 1$, hence the result. \end{proof} \begin{rmk}\label{rmk:div} At least in characteristic zero, the continuity of the volume function shows that $\vol(R^{(\lambda)})\to 0$ as $\lambda\to\lambda_{\max}$ from below, so that $\nu$ has no atom at $\lambda_{\max}$, and is thus absolutely continuous on $\mathbb{R}$ (cf.~\cite[Proposition 2.25]{BKMS}). On the other hand, $F^\bullet R$ is not finitely generated in general. Indeed, well-known examples of irrational volume show that $\vol(R^{(1)})$ can sometimes be irrational (compare Remark~\ref{rmk:finitetype}). \end{rmk} \begin{rmk} The filtration defined by a valuation and its relation to $K$-stability has been recently studied by Fujita~\cite{Fuj15a,Fuj15b,Fuj16}, Li~\cite{Li15,Li16} and Liu~\cite{Liu16}. \end{rmk} \subsection{The support of a Duistermaat-Heckman measure}\label{sec:support} The following precise description of the support of a Duistermaat-Heckman measure is the key to the characterization of almost trivial ample test configurations to be given below.
\begin{thm}\label{thm:supp} Let $(\mathcal{X},\mathcal{L})$ be a normal, semiample test configuration dominating $X_{\mathbb{A}^1}$, and write $\mathcal{L}=\rho^*L_{\mathbb{A}^1}+D$ with $\rho\colon\mathcal{X}\to X_{\mathbb{A}^1}$ the canonical morphism. Then the support $[\lambda_{\min},\lambda_{\max}]$ of its Duistermaat-Heckman measure satisfies \begin{equation*} \lambda_{\min}=\min_Eb_E^{-1}\ord_E(D) \quad\text{and}\quad \lambda_{\max}=\max_Eb_E^{-1}\ord_E(D)=\ord_{E_0}(D), \end{equation*} where $E$ runs over the irreducible components of $\mathcal{X}_0$, $b_E:=\ord_E(\mathcal{X}_0)=\ord_E(t)$, and $E_0$ is the strict transform of $X\times\{0\}$ (which has $b_{E_0}=1$). \end{thm} \begin{lem}\label{lem:filtr} In the notation of Theorem~\ref{thm:supp}, the induced filtration of $R$ satisfies, for all $m$ divisible enough and all $\lambda\in\mathbb{Z}$, $$ F^\lambda H^0(X,mL)=\bigcap_E\left\{s\in H^0(X,mL)\mid v_E(s)+m\,b_E^{-1}\ord_E(D)\ge\lambda\right\}, $$ where $E$ runs over the irreducible components of $\mathcal{X}_0$. \end{lem} According to Lemma~\ref{lem:div2}, $v_E$ is a divisorial valuation on $X$ for $E\ne E_0$, while $v_{E_0}$ is the trivial valuation (so that $v_{E_0}(s)$ is either $0$ for $s\ne 0$, or $+\infty$ for $s=0$). \begin{proof} Pick any $m$ such that $m\mathcal{L}$ is a line bundle. By (\ref{equ:filtr}), a section $s\in H^0(X,mL)$ is in $F_{(\mathcal{X},\mathcal{L})}^\lambda H^0(X,mL)$ iff $\bar s t^{-\lambda}\in H^0(\mathcal{X},m\mathcal{L})$, with $\bar s$ the $\mathbb{G}_m$-invariant rational section of $m\mathcal{L}$ induced by $s$. By normality of $\mathcal{X}$, this amounts in turn to $\ord_E\left(\bar s t^{-\lambda}\right)\ge 0$ for all $E$, \ie $\ord_E(\bar s)\ge\lambda b_E$ for all $E$. The result follows since $m\mathcal{L}=\rho^*(mL_{\mathbb{A}^1})+m D$ implies that $$ \ord_E(\bar s)=r(\ord_E)(s)+m \ord_E(D)=b_Ev_E(s)+m\ord_E(D). 
$$ \end{proof} \begin{lem}\label{lem:minmax} In the notation of Theorem~\ref{thm:supp}, the filtration $F^\bullet H^0(X,mL)$ satisfies \begin{equation*} \frac{\lambda^{(m)}_{\min}}{m}=\min_Eb_E^{-1}\ord_E(D) \quad\text{and}\quad \frac{\lambda^{(m)}_{\max}}{m}=\ord_{E_0}(D)=\max_Eb_E^{-1}\ord_E(D) \end{equation*} for all $m$ divisible enough. \end{lem} \begin{proof} Set $c:=\min_Eb_E^{-1}\ord_E(D)$, and pick $m$ divisible enough (so that $mc$ is in particular an integer). The condition $v_E(s)+m\,b_E^{-1}\ord_E(D)\ge m c$ automatically holds for all $s\in H^0(X,mL)$, since $v_E(s)\ge 0$ and $b_E^{-1}\ord_E(D)\ge c$. By Lemma~\ref{lem:filtr}, we thus have $F^{mc} H^0(X,mL)=H^0(X,mL)$, and hence $m c\le\lambda^{(m)}_{\min}$. We may assume $mL$ is globally generated, so for every $E$ we may find a section $$ s\in H^0(X,mL)=F^{\lambda^{(m)}_{\min}} H^0(X,mL) $$ that does not vanish at the center of $v_E$ on $X$, \ie $v_E(s)=0$. By Lemma~\ref{lem:filtr}, it follows that $m\ord_E(D)\ge\lambda^{(m)}_{\min}b_E$. Since this holds for every $E$, we conclude that $mc\ge\lambda^{(m)}_{\min}$. We next use that $m\mathcal{L}=\rho^*(mL_{\mathbb{A}^1})+mD$ is globally generated. This implies in particular that $\mathcal{O}_\mathcal{X}(mD)$ is $\rho$-globally generated, which reads $$ \mathcal{O}_\mathcal{X}(mD)=\rho_*\mathcal{O}_\mathcal{X}(mD)\cdot\mathcal{O}_{\mathcal{X}} $$ as fractional ideals. But we trivially have $\rho_*\mathcal{O}_\mathcal{X}(mD)\subset\mathcal{O}_{X_{\mathbb{A}^1}}(m\rho_*D)$, and we infer $$ D\le\rho^*\rho_*D. $$ Now $\rho_*D=\ord_{E_0}(D) X\times\{0\}$, hence $\rho^*\rho_*D=\ord_{E_0}(D)\,\mathcal{X}_0$, which yields $\ord_E(D)\le\ord_{E_0}(D)b_E$, hence $\ord_{E_0}(D)=\max_Eb_E^{-1}\ord_E(D)$.
Since $\rho_*\mathcal{O}_\mathcal{X}(mD)$ is the flag ideal $\mathfrak{a}^{(m)}$ of Definition~\ref{defi:flag}, we also see that \begin{multline*} m\max_Eb_E^{-1}\ord_E(D)=\min\left\{\lambda\in\mathbb{Z}\mid mD\le\lambda\mathcal{X}_0\right\}\\ =\min\left\{\lambda\in\mathbb{Z}\mid t^{-\lambda}\in\mathfrak{a}^{(m)}\right\} =\max\left\{\lambda\in\mathbb{Z}\mid\mathfrak{a}^{(m)}_\lambda\ne 0\right\}, \end{multline*} and we conclude thanks to Proposition~\ref{prop:filtrflag}. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:supp}] In view of Corollary~\ref{cor:supp}, the description of the supremum of the support of $\nu=\DH_{(\mathcal{X},\mathcal{L})}$ follows directly from Lemma~\ref{lem:minmax}. We now turn to the infimum. The subtle point of the argument is that it is not a priori obvious that the stationary value $$ \frac{\lambda^{(m)}_{\min}}{m}=\min_Eb_E^{-1}\ord_E(D) $$ given by Lemma~\ref{lem:minmax}, which is of course the infimum of the support of $\nu_m$ as in (\ref{equ:num}), should also be the infimum of the support of their weak limit $\nu=\lim_m\nu_m$. What is trivially true is the inequality $$ \min_Eb_E^{-1}\ord_E(D)=\inf\supp\nu_m\le\inf\supp\nu. $$ Now pick $\lambda>\min_Eb_E^{-1}\ord_E(D)$. According to Corollary~\ref{cor:supp}, it remains to show that \begin{equation}\label{equ:volnegl} \lim_{m\to\infty}\frac{\dim F^{m\lambda}H^0(X,mL)}{N_m}<1. \end{equation} Note that $\varepsilon:=\lambda b_E-\ord_E(D)>0$ for at least one irreducible component $E$. By Lemma~\ref{lem:filtr}, it follows that \begin{equation}\label{equ:finc} F^{m\lambda}H^0(X,mL)\subset\left\{s\in H^0(X,mL)\mid v_E(s)\ge m\varepsilon\right\}. \end{equation} By Lemma~\ref{lem:div2}, $v_E$ is either the trivial valuation or a divisorial valuation. In the former case, the right-hand side of (\ref{equ:finc}) consists of the zero section only, while in the latter case we get (\ref{equ:volnegl}) thanks to Lemma~\ref{lem:suppdiv}. 
\end{proof} \subsection{Proof of Theorem A}\label{sec:thmA} Now let $(X,L)$ be an arbitrary polarized scheme, and $(\mathcal{X},\mathcal{L})$ an ample test configuration for $(X,L)$. Theorem~A and Corollary~B in the introduction are consequences of the following two results. \begin{thm}\label{thm:PPscheme} The density of the absolutely continuous part of the Duister\-maat-Heckman measure $\DH_{(\mathcal{X},\mathcal{L})}$ is a piecewise polynomial function, while the singular part is a finite sum of point masses. \end{thm} \begin{thm}\label{thm:normzero} The measure $\DH_{(\mathcal{X},\mathcal{L})}$ is a finite sum of point masses iff $(\mathcal{X},\mathcal{L})$ is almost trivial. In this case, $\DH_{(\mathcal{X},\mathcal{L})}$ is a Dirac mass when $X$ is irreducible. \end{thm} Recall that almost trivial but nontrivial test configurations always exist when $X$ is, say, a normal variety (cf.~\cite[\S3.1]{LX} and Proposition~\ref{prop:triv1PS}). \begin{proof}[Proof of Theorem~\ref{thm:PPscheme}] Since $\DH_{(\mathcal{X},\mathcal{L})}$ is defined as the Duistermaat-Heckman measure of some polarized $\mathbb{G}_m$-scheme by Definition~\ref{defi:DHsemi}, it is enough to show the result for the Duister\-maat-Heckman measure $\DH_{(X,L)}$ of a polarized $\mathbb{G}_m$-scheme $(X,L)$ (\ie the special case of Theorem~\ref{thm:PPscheme} where $(\mathcal{X},\mathcal{L})$ is a product test configuration). By (ii) in Proposition~\ref{prop:DHaction}, we may further assume that $X$ is a variety, \ie reduced and irreducible. By the invariance property of Theorem~\ref{thm:DHinv}, we are even reduced to the case where $X$ is a normal variety, which is treated in Corollary~\ref{cor:PP}. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:normzero}] Using again (ii) in Proposition~\ref{prop:DHaction} and Theorem~\ref{thm:DHinv}, we may assume that $\mathcal{X}$ (and hence $X$) is a normal variety. 
By Lemma~\ref{lem:triv}, our goal is then to show that the support of $\DH_{(\mathcal{X},\mathcal{L})}$ is reduced to a point iff $(\mathcal{X},\mathcal{L}+c\mathcal{X}_0)=(X_{\mathbb{A}^1},L_{\mathbb{A}^1})$ for some $c\in\mathbb{Q}$. In order to deduce this from Theorem~\ref{thm:supp}, let $(\mathcal{X}',\mathcal{L}')$ be a pull-back of $(\mathcal{X},\mathcal{L})$ with $\mathcal{X}'$ normal and dominating $X_{\mathbb{A}^1}$. Since $(\mathcal{X},\mathcal{L})$ is the ample model of $(\mathcal{X}',\mathcal{L}')$, we have $\DH_{(\mathcal{X},\mathcal{L})}=\DH_{(\mathcal{X}',\mathcal{L}')}$. In the notation of Theorem~\ref{thm:supp}, this measure is a Dirac mass iff $b_E^{-1}\ord_E(D)=c$ is independent of $E$, \ie $D=c\mathcal{X}'_0$. But this means that $(X_{\mathbb{A}^1},L_{\mathbb{A}^1})$ is the ample model of $(\mathcal{X}',\mathcal{L}'-c\mathcal{X}'_0)$, \ie $(\mathcal{X},\mathcal{L}-c\mathcal{X}_0)=(X_{\mathbb{A}^1},L_{\mathbb{A}^1})$ by uniqueness of the ample model. \end{proof} \section{Non-Archimedean metrics}\label{sec:NAmetrics} From now on, $X$ will always denote a normal projective variety, unless otherwise specified, and $L$ will be a $\mathbb{Q}$-line bundle on $X$. \subsection{Test configurations as non-Archimedean metrics} Motivated by Berkovich space considerations (see~\S\ref{S202} below) we introduce the following notion. \begin{defi}\label{defi:equiv} Two test configurations $(\mathcal{X}_1,\mathcal{L}_1)$, $(\mathcal{X}_2,\mathcal{L}_2)$ for $(X,L)$ are \emph{equivalent} if there exists a test configuration $(\mathcal{X}_3,\mathcal{L}_3)$ that is a pull-back of both $(\mathcal{X}_1,\mathcal{L}_1)$ and $(\mathcal{X}_2,\mathcal{L}_2)$. An equivalence class is called a \emph{non-Archimedean metric} on $L$, and is denoted by $\phi$. We denote by $\phi_\triv$ the equivalence class of $(X_{\mathbb{A}^1},L_{\mathbb{A}^1})$. 
\end{defi} A non-Archimedean metric $\phi$ on the trivial line bundle $\mathcal{O}_X$ can be viewed as a function $\phi:X^{\mathrm{div}}\to\mathbb{Q}$ on the set of divisorial valuations on $X$. Indeed, by Theorem~\ref{thm:restrdiv}, every divisorial valuation on $X$ is of the form $v_E=b_E^{-1}r(\ord_E)$, where $E$ is an irreducible component of $\mathcal{X}_0$ for some normal test configuration $\mathcal{X}$ of $X$. We may assume $\phi$ is represented by $(\mathcal{X},\mathcal{O}_{\mathcal{X}}(D))$ for some $\mathbb{Q}$-Cartier divisor $D$ supported on $\mathcal{X}_0$, and then $\phi(v_E)=b_E^{-1}\ord_E(D)$. When $D=\mathcal{X}_0$, we get the constant function $\phi\equiv1$. In general, there exists $C>0$ such that $D\pm C\mathcal{X}_0$ is effective; hence $|\phi|\le C$, so $\phi$ is a bounded function. To the trivial metric on $\mathcal{O}_X$ corresponds the zero function. \subsection{Operations on non-Archimedean metrics}\label{S101} If $\phi_i$ is a non-Archimedean metric on a $\mathbb{Q}$-line bundle $L_i$ on $X$ and $r_i\in\mathbb{Q}$ for $i=1,2$, then we get a naturally defined non-Archimedean metric $r_1\phi_1+r_2\phi_2$ on $L:=r_1L_1+r_2L_2$ as follows: if $\phi_i$ is represented by a test configuration $(\mathcal{X},\mathcal{L}_i)$ with the same $\mathcal{X}$, then $r_1\phi_1+r_2\phi_2$ is represented by $(\mathcal{X},r_1\mathcal{L}_1+r_2\mathcal{L}_2)$. In particular, if $\phi,\phi'$ are non-Archimedean metrics on the same line bundle $L$, then $\phi-\phi'$ is a non-Archimedean metric on $\mathcal{O}_X$, which will thus be viewed as a function on $X^{\mathrm{div}}$. If we choose a normal representative $(\mathcal{X},\mathcal{L})$ of $\phi$ that dominates $X_{\mathbb{A}^1}$ and write as before $\mathcal{L}=\rho^*L_{\mathbb{A}^1}+D$ with $D$ a $\mathbb{Q}$-Cartier divisor supported on $\mathcal{X}_0$, then \begin{equation}\label{equ:divfunc} (\phi-\phi_\triv)(v_E)=b_E^{-1}\ord_E(D) \end{equation} for each component $E$ of $\mathcal{X}_0$.
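For example, if $D=c\,\mathcal{X}_0$ with $c\in\mathbb{Q}$, then $\ord_E(D)=c\,b_E$ for every irreducible component $E$ of $\mathcal{X}_0$, and (\ref{equ:divfunc}) yields $(\phi-\phi_\triv)(v_E)=c$ for all $E$; in other words, $\phi=\phi_\triv+c$ corresponds to the constant function $c$ on $X^{\mathrm{div}}$.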
If $f\colon Y\to X$ is a surjective morphism, with $Y$ normal and projective, then any non-Archimedean metric $\phi$ on $L$ induces a non-Archimedean metric $f^*\phi$ on $f^*L$. Indeed, suppose $\phi$ is represented by a test configuration $(\mathcal{X},\mathcal{L})$. We can find a test configuration $\mathcal{Y}$ of $Y$ and a projective $\mathbb{G}_m$-equivariant morphism $\mathcal{Y}\to\mathcal{X}$ compatible with $f$ via the identifications $\mathcal{X}_1\simeq X$, $\mathcal{Y}_1\simeq Y$. Define $f^*\phi$ as the metric represented by the pullback of $\mathcal{L}$ to $\mathcal{Y}$. The pullback of the trivial metric on $L$ is the trivial metric on $f^*L$. \subsection{Translation and scaling} The operations above give rise to two natural actions on the space of non-Archimedean metrics on a fixed line bundle. First, if $\phi$ is a non-Archimedean metric on a line bundle $L$, then so is $\phi+c$, for any $c\in\mathbb{Q}$. Thus we obtain a \emph{translation action} by $(\mathbb{Q},+)$ on the set of non-Archimedean metrics. Second, we have a \emph{scaling action} by the semigroup $\mathbb{N}^*$ which to a non-Archimedean metric $\phi$ on $L$ associates a new non-Archimedean metric $\phi_d$ on $L$ for every $d\in\mathbb{N}^*$. If $\phi$ is represented by a test configuration $(\mathcal{X},\mathcal{L})$, then $\phi_d$ is represented by the base change of $(\mathcal{X},\mathcal{L})$ under $t\mapsto t^d$. This scaling action is quite useful and a particular feature of working over a trivially valued ground field. Note that $\phi_\triv$ is fixed by the scaling action. Viewing, as above, a metric $\phi$ on the trivial line bundle $\mathcal{O}_X$ as a function on divisorial valuations, we have \begin{equation}\label{e405} \phi_d(dv)=d\phi(v) \end{equation} for any divisorial valuation $v$ on $X$. \subsection{Positivity} Next we introduce positivity notions for metrics. \begin{defi} Assume $L$ is ample.
Then a non-Archimedean metric $\phi$ on $L$ is called \emph{positive} if some representative $(\mathcal{X},\mathcal{L})$ of $\phi$ is semiample. We denote by $\mathcal{H}^{\mathrm{NA}}(L)$ the set of all non-Archimedean positive metrics on $L$, \ie the quotient of the set of semiample test configurations by the above equivalence relation. \end{defi} We sometimes write $\mathcal{H}^{\mathrm{NA}}$ when no confusion is possible. The notation mimics $\mathcal{H}=\mathcal{H}(L)$ for the space of smooth, positively curved Hermitian metrics on $L$ when working over~$\mathbb{C}$. \begin{lem}\label{lem:amp} When $L$ is ample, every metric $\phi\in\mathcal{H}^{\mathrm{NA}}(L)$ is represented by a unique normal, ample test configuration $(\mathcal{X},\mathcal{L})$. Every normal representative of $\phi$ is a pull-back of $(\mathcal{X},\mathcal{L})$. \end{lem} \begin{proof} We first prove uniqueness. Let $(\mathcal{X}_i,\mathcal{L}_i)$, $i=1,2$, be equivalent normal ample test configurations, so that there exists $(\mathcal{X}_3,\mathcal{L}_3)$ as in Definition~\ref{defi:equiv}. For $i=1,2$, the birational morphism $\mu_i\colon\mathcal{X}_3\to\mathcal{X}_i$ satisfies $(\mu_i)_*\mathcal{O}_{\mathcal{X}_3}=\mathcal{O}_{\mathcal{X}_i}$, by normality of $\mathcal{X}_i$. It follows that $(\mathcal{X}_1,\mathcal{L}_1)$ and $(\mathcal{X}_2,\mathcal{L}_2)$ are both ample models of $(\mathcal{X}_3,\mathcal{L}_3)$, and hence $(\mathcal{X}_1,\mathcal{L}_1)=(\mathcal{X}_2,\mathcal{L}_2)$ by the uniqueness part of Proposition~\ref{prop:amplemodel}. Now pick a normal representative $(\mathcal{X},\mathcal{L})$ of $\phi$. By Proposition~\ref{prop:amplemodel}, its ample model $(\mathcal{X}_\amp,\mathcal{L}_\amp)$ is a normal, ample representative, and $(\mathcal{X},\mathcal{L})$ is a pull-back of $(\mathcal{X}_\amp,\mathcal{L}_\amp)$. This proves the existence part, as well as the final assertion. \end{proof} It is sometimes convenient to work with a weaker positivity notion. 
\begin{defi} Assume $L$ is nef. Then a non-Archimedean metric $\phi$ on $L$ is \emph{semipositive} if some (or, equivalently, any) representative $(\mathcal{X},\mathcal{L})$ of $\phi$ is relatively nef with respect to $\mathcal{X}\to\mathbb{A}^1$. \end{defi} In this case, $\bar\mathcal{L}$ is relatively nef for $\bar\mathcal{X}\to\P^1$, where $(\bar\mathcal{X},\bar\mathcal{L})$ is the compactification of $(\mathcal{X},\mathcal{L})$. \smallskip When $L$ is nef (resp.\ ample), the translation and scaling actions preserve the subset of semipositive (resp.\ positive) non-Archimedean metrics on $L$. Positivity of metrics is also preserved under pull-back, as follows. If $f:Y\to X$ is a surjective morphism with $Y$ normal and projective, then $f^*L$ is nef for any nef line bundle $L$ on $X$, and $f^*\phi$ is semipositive for any semipositive metric $\phi$ on $L$. If $L$ and $f^*L$ are further ample (which implies that $f$ is finite), then $f^*\phi$ is positive for any positive metric $\phi$ on $L$.\footnote{This seemingly inconsistent property is explained by the fact that the (analytification of the) ramification locus of $f$ does not meet the Berkovich skeleton where $\phi$ is determined.} \subsection{Duistermaat-Heckman measures and $L^p$-norms}\label{sec:DHmetric} In this section, $L$ is ample. \begin{defi}\label{defi:DHmetric} Let $\phi\in\mathcal{H}^\mathrm{NA}(L)$ be a positive non-Archimedean metric on $L$. \begin{itemize} \item[(i)] The \emph{Duistermaat-Heckman measure} of $\phi$ is defined by setting $\DH_\phi:=\DH_{(\mathcal{X},\mathcal{L})}$ for any semiample representative of $\phi$. \item[(ii)] The \emph{$L^p$-norm} of $\phi$ is defined as the $L^p(\nu)$-norm of $\lambda-\bar\lambda$, with $\bar\lambda:=\int_\mathbb{R}\lambda\,d\nu$ the barycenter of $\nu=\DH_\phi$. \end{itemize} \end{defi} This is indeed well-defined, thanks to the following result.
\begin{lem}\label{lem:DHmetric} For any two equivalent semiample test configurations $(\mathcal{X}_1,\mathcal{L}_1)$, $(\mathcal{X}_2,\mathcal{L}_2)$, we have $\DH_{(\mathcal{X}_1,\mathcal{L}_1)}=\DH_{(\mathcal{X}_2,\mathcal{L}_2)}$. \end{lem} \begin{proof} By Corollary~\ref{C301} we may assume that $\mathcal{X}_i$ is normal for $i=1,2$. Since any two normal test configurations are dominated by a third, we may also assume that $\mathcal{X}_1$ dominates $\mathcal{X}_2$. In this case, $(\mathcal{X}_1,\mathcal{L}_1)$ and $(\mathcal{X}_2,\mathcal{L}_2)$ have the same ample model, and hence the same Duistermaat-Heckman measure. \end{proof} In view of (\ref{equ:divfunc}), Theorem~\ref{thm:supp} can be reformulated as follows. \begin{thm}\label{thm:suppNA} If $\phi$ is a positive metric on $L$, then $$ \sup_{X^{\mathrm{div}}}(\phi-\phi_\triv)=(\phi-\phi_\triv)(v_\triv)=\sup\supp\DH_\phi. $$ \end{thm} The key property of $L^p$-norms is to characterize triviality, as follows. \begin{thm}\label{thm:trivmetric} Let $\phi\in\mathcal{H}^{\mathrm{NA}}(L)$ be a positive non-Archimedean metric on $L$. Then the following conditions are equivalent: \begin{itemize} \item[(i)] the Duistermaat-Heckman measure $\DH_\phi$ is a Dirac mass; \item[(ii)] for some (or, equivalently, any) $p\in[1,\infty]$, $\|\phi\|_p=0$; \item[(iii)] $\phi=\phi_\triv+c$ for some $c\in\mathbb{Q}$. \end{itemize} \end{thm} \begin{lem}\label{lem:DHtrans} Let $\phi\in\mathcal{H}^\mathrm{NA}(L)$ be a positive non-Archimedean metric on $L$. For $c\in\mathbb{Q}$ and $d\in\mathbb{N}^*$, we have \begin{itemize} \item[(i)] $\DH_{\phi+c}$ and $\DH_{\phi_d}$ are the pushforwards of $\DH_\phi$ by $\lambda\mapsto\lambda+c$ and $\lambda\mapsto d\lambda$, respectively. \item[(ii)] $\|\phi+c\|_p=\|\phi\|_p$ and $\|\phi_d\|_p=d\|\phi\|_p$. \end{itemize} \end{lem} \begin{proof} The first property in (i) follows from Proposition~\ref{prop:DHDFsemi}~(i).
Let $(\mathcal{X},\mathcal{L})$ be the unique normal, ample representative of $\phi$, and denote by $(\mathcal{X}',\mathcal{L}')$ the base change of $(\mathcal{X},\mathcal{L})$ by $t\mapsto t^d$. Then $(\mathcal{X}'_0,\mathcal{L}'_0)\simeq(\mathcal{X}_0,\mathcal{L}_0)$, but with the $\mathbb{G}_m$-action composed with $t\mapsto t^d$. As a result, the $\mathbb{G}_m$-weights of $H^0(\mathcal{X}'_0,m\mathcal{L}'_0)$ are obtained by multiplying those of $H^0(\mathcal{X}_0,m\mathcal{L}_0)$ by $d$, and the second property in (i) follows. Part (ii) is a formal consequence of (i). \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:trivmetric}] The equivalence between (i) and (ii) is immediate, and (iii)$\Longrightarrow$(ii) follows from Lemma~\ref{lem:DHtrans}. Conversely, assume that $\DH_\phi$ is a Dirac mass. By Theorem~\ref{thm:normzero}, the unique normal ample representative $(\mathcal{X},\mathcal{L})$ of $\phi$ is (almost) trivial. By Lemma~\ref{lem:triv}, this means that $(\mathcal{X},\mathcal{L}+c\mathcal{X}_0)=(X_{\mathbb{A}^1},L_{\mathbb{A}^1})$ for some $c\in\mathbb{Q}$, \ie $\phi+c=\phi_\triv$. \end{proof} \begin{rmk}\label{R304} For each ample representative $(\mathcal{X},\mathcal{L})$ of $\phi$, we have, by definition, $$ \|\phi\|_p^p=\lim_{m\to\infty}\frac{1}{N_m}\sum_{\lambda\in\mathbb{Z}}|m^{-1}\lambda-\bar\lambda|^p\dim H^0(\mathcal{X}_0,m\mathcal{L}_0)_\lambda $$ with $$ \bar\lambda=\lim_{m\to\infty}\frac{1}{mN_m}\sum_{\lambda\in\mathbb{Z}}\lambda\dim H^0(\mathcal{X}_0,m\mathcal{L}_0)_\lambda. $$ This shows that the present definition generalizes the $L^p$-norm of an ample test configuration introduced in~\cite{Don2} for $p$ an even integer. \end{rmk} \subsection{Intersection numbers}\label{S402} Various operations on test configurations descend to non-Archimedean metrics. As a first example, we discuss intersection numbers. Every finite set of test configurations $\mathcal{X}_i$ for $X$ is dominated by some test configuration $\mathcal{X}$.
Given finitely many non-Archimedean metrics $\phi_i$ on $\mathbb{Q}$-line bundles $L_i$, we may thus find representatives $(\mathcal{X}_i,\mathcal{L}_i)$ for $\phi_i$ with $\mathcal{X}_i=\mathcal{X}$ independent of $i$. \begin{defi}\label{defi:int} Let $\phi_i$ be a non-Archimedean metric on $L_i$ for $0\le i\le n$. We define the \emph{intersection number} of the $\phi_i$ as \begin{equation}\label{e201} (\phi_0\cdot\ldots\cdot\phi_n):=(\bar\mathcal{L}_0\cdot\ldots\cdot\bar\mathcal{L}_n), \end{equation} where $(\mathcal{X}_i,\mathcal{L}_i)$ is any representative of $\phi_i$ with $\mathcal{X}_i=\mathcal{X}$ independent of $i$, and where $(\bar\mathcal{X},\bar\mathcal{L}_i)$ is the compactification of $(\mathcal{X},\mathcal{L}_i)$. \end{defi} By the projection formula, the right-hand side of~\eqref{e201} is independent of the choice of representatives. Note that the intersection number $(\phi_0\cdot\ldots\cdot\phi_n)$ may be negative even when the $L_i$ are ample and the $\phi_i$ are positive, since in this case $\bar\mathcal{L}_i$ is only \emph{relatively} semiample with respect to $\bar\mathcal{X}\to\P^1$. \begin{rmk}\label{R401} When $L_0=\mathcal{O}_X$, we can compute the intersection number in~\eqref{e201} without passing to the compactification. Indeed, if we write $\mathcal{L}_0=\mathcal{O}_\mathcal{X}(D)$ and $D=\sum_Er_EE$, then \begin{equation*} (\phi_0\cdot\ldots\cdot\phi_n) =\sum_Er_E(\mathcal{L}_1|_E\cdot\ldots\cdot\mathcal{L}_n|_E). \end{equation*} If $\phi_0\equiv 1$, that is, $D=\mathcal{X}_0$, then $(\phi_0\cdot\ldots\cdot\phi_n)=(L_1\cdot\ldots\cdot L_n)$ by flatness of $\bar\mathcal{X}\to\P^1$. \end{rmk} The intersection pairing $(\phi_0,\dots,\phi_n)\mapsto(\phi_0\cdot\ldots\cdot\phi_n)$ is $\mathbb{Q}$-multilinear in its arguments in the sense of~\S\ref{S101}.
By the projection formula, it is invariant under pullbacks: if $Y$ is a projective normal variety of dimension $n$ and $f\colon Y\to X$ is a surjective morphism of degree $d$, then $(f^*\phi_0\cdot\ldots\cdot f^*\phi_n)=d(\phi_0\cdot\ldots\cdot\phi_n)$. \begin{lem}\label{lem:int} For non-Archimedean metrics $\phi_0,\dots,\phi_n$ on $\mathbb{Q}$-line bundles $L_0,\dots,L_n$ we have \begin{equation*} ((\phi_0+c)\cdot\phi_1\cdot\ldots\cdot\phi_n) =(\phi_0\cdot\ldots\cdot\phi_n) +c(L_1\cdot\ldots\cdot L_n) \end{equation*} and \begin{equation*} ((\phi_0)_d\cdot\ldots\cdot(\phi_n)_d)=d(\phi_0\cdot\ldots\cdot\phi_n) \end{equation*} for all $d\in\mathbb{N}^*$ and $c\in\mathbb{Q}$. \end{lem} \begin{proof} The first equality is a consequence of the discussion above, and the second formula follows from the projection formula. \end{proof} The following inequality is crucial. See~\cite{YZ16} for far-reaching generalizations. \begin{lem}\label{lem:monotone} Let $L_2,\dots,L_n$ be nef $\mathbb{Q}$-line bundles on $X$, $\phi$ a non-Archimedean metric on $\mathcal{O}_X$, and $\phi_i$ a semipositive non-Archimedean metric on $L_i$ for $2\le i\le n$. Then \begin{equation}\label{e302} \left(\phi\cdot\phi\cdot\phi_2\cdot\ldots\cdot\phi_n\right)\le 0. \end{equation} \end{lem} \begin{proof} Choose normal representatives $(\mathcal{X},\mathcal{L})$ of $\phi$ and $(\mathcal{X},\mathcal{L}_i)$ of $\phi_i$, with the same test configuration $\mathcal{X}$ for $X$. We have $\mathcal{L}=\mathcal{O}_\mathcal{X}(D)$ for a $\mathbb{Q}$-Cartier divisor $D$ supported on $\mathcal{X}_0$. Then~\eqref{e302} amounts to $\left(D\cdot D\cdot\bar\mathcal{L}_2\cdot\ldots\cdot\bar\mathcal{L}_n\right)\le 0$, which follows from a standard Hodge Index Theorem argument; see~\eg~\cite[Lemma~1]{LX}. \end{proof} \subsection{The non-Archimedean Monge-Amp\`ere measure}\label{S401} Let $L$ be a big and nef $\mathbb{Q}$-line bundle on $X$, and set $V:=(L^n)$.
Then any $n$-tuple $(\phi_1,\dots,\phi_n)$ of non-Archimedean metrics on $L$ induces a signed finite atomic \emph{mixed Monge-Amp\`ere measure} on $X_{\mathrm{div}}$ as follows. Pick representatives $(\mathcal{X},\mathcal{L}_i)$ of $\phi_i$, $1\le i\le n$, with the same test configuration $\mathcal{X}$ for $X$ and set \begin{equation*} \MA^\mathrm{NA}(\phi_1,\dots,\phi_n)=V^{-1}\sum_Eb_E(\mathcal{L}_1|_E\cdot\ldots\cdot\mathcal{L}_n|_E)\delta_{v_E}, \end{equation*} where $E$ ranges over irreducible components of $\mathcal{X}_0=\sum_Eb_EE$, and $v_E=r(b_E^{-1}\ord_E)\in X_{\mathrm{div}}$. Note that \begin{equation*} \int_{X_{\mathrm{div}}}\MA^\mathrm{NA}(\phi_1,\dots,\phi_n) =V^{-1}(\mathcal{X}_0\cdot\mathcal{L}_1\cdot\ldots\cdot\mathcal{L}_n) =V^{-1}(\mathcal{X}_1\cdot\mathcal{L}_1\cdot\ldots\cdot\mathcal{L}_n) =V^{-1}(L^n)=1, \end{equation*} where the second equality follows from the flatness of $\mathcal{X}\to\mathbb{A}^1$. When the $\phi_i$ are semipositive, the mixed Monge-Amp\`ere measure is therefore a probability measure. As in the complex case, we also write $\MA^\mathrm{NA}(\phi)$ for $\MA^\mathrm{NA}(\phi,\dots,\phi)$. Note that $\MA^\mathrm{NA}(\phi+c)=\MA^\mathrm{NA}(\phi)$ for any $c\in\mathbb{Q}$. \subsection{Berkovich space interpretation}\label{S202} Let us now briefly explain the term ``non-Archime\-dean metric''. See~\cite{siminag,simons,trivval} for more details. Equip the base field $k$ with the trivial absolute value $|\cdot|_0$, \ie $|a|_0=1$ for $a\in k^*$. Also equip the field $K:=k\lau{t}$ of Laurent series with the non-Archimedean norm in which $|t|=e^{-1}$ and $|a|=1$ for $a\in k^*$. The Berkovich analytification $X^{\mathrm{an}}$ is a compact Hausdorff space equipped with a structure sheaf~\cite{BerkBook}. It contains the set of valuations $v:k(X)^*\to\mathbb{R}$ on the function field of $X$ as a dense subset. Similarly, any line bundle $L$ on $X$ has an analytification $L^{\mathrm{an}}$.
The valued field extension $K/k$ further gives rise to analytifications $X_K^{\mathrm{an}}$ and $L_K^{\mathrm{an}}$, together with a natural morphism $X_K^\mathrm{an}\to X^{\mathrm{an}}$ under which $L^{\mathrm{an}}$ pulls back to $L_K^\mathrm{an}$. The Gauss extension in~\S\ref{sec:valtest} gives a section $X^{\mathrm{an}}\to X_K^{\mathrm{an}}$, whose image exactly consists of the $k^*$-invariant points. After the base change $k[t]\to k\cro{t}$, any test configuration $(\mathcal{X},\mathcal{L})$ defines a model of $(X_K,L_K)$ over the valuation ring $k\cro{t}$ of $K=k\lau{t}$. When $\mathcal{X}$ is normal, this further induces a continuous metric on $L_K^{\mathrm{an}}$, \ie a function on the total space satisfying certain natural conditions. Using the Gauss extension, we obtain a metric also on $L^{\mathrm{an}}$. Replacing a normal test configuration $(\mathcal{X},\mathcal{L})$ by a pullback does not change the induced metric on $L^{\mathrm{an}}$, and one may in fact show that two normal test configurations induce the same metric iff they are equivalent in the sense of Definition~\ref{defi:equiv}. This justifies the name non-Archimedean metric for an equivalence class of test configurations. Further, in the analysis of~\cite{siminag,nama}, positive metrics play the role of K\"ahler potentials in complex geometry. However, we abuse terminology a little since there are natural metrics on $L^{\mathrm{an}}$ that do not come from test configurations. For example, any filtration on $R(X,L)$ defines a metric on $L^{\mathrm{an}}$. Metrics arising from test configurations can be viewed as analogues of smooth metrics on a holomorphic line bundle. For some purposes it is important to work with a more flexible notion of metrics, but we shall not do so here.
\section{Non-Archimedean functionals}\label{S201} The aim of this section is to introduce non-Archimedean analogues of several classical functionals in K\"ahler geometry; as indicated in the introduction, the analogy will be turned into a precise connection in~\cite{BHJ2}. Throughout this section, $X$ is a normal projective variety and $L$ a $\mathbb{Q}$-line bundle on $X$. We shall assume that $L$ is big and nef, so that $V:=(L^n)>0$. The most important case is of course when $L$ is ample. \begin{defi} Let $\mathcal{M}$ be a set of non-Archimedean metrics on $L$ that is closed under translation and scaling. Then a functional $F\colon\mathcal{M}\to\mathbb{R}$ is \emph{homogeneous} if $F(\phi_d)=d F(\phi)$ for $\phi\in\mathcal{M}$ and $d\in\mathbb{N}^*$, and \emph{translation invariant} if $F(\phi+c)=F(\phi)$ for $\phi\in\mathcal{M}$ and $c\in\mathbb{Q}$. \end{defi} For example, Lemma~\ref{lem:DHtrans} shows that when $L$ is ample, the $L^p$-norm is a homogeneous and translation invariant functional on $\mathcal{H}^{\mathrm{NA}}(L)$. \subsection{The non-Archimedean Monge-Amp\`ere energy}\label{sec:E} \begin{defi}\label{defi:E} The \emph{non-Archimedean Monge-Amp\`ere energy functional} is defined by \begin{equation*} E^{\mathrm{NA}}(\phi):=\frac{\left(\phi^{n+1}\right)}{(n+1)V} \end{equation*} for any non-Archimedean metric $\phi$ on $L$. \end{defi} Here $(\phi^{n+1})$ denotes the intersection number defined in~\S\ref{S402}. Note that $E^\mathrm{NA}(\phi_\triv)=0$ since $(\phi_\triv^{n+1})=(L_{\P^1}^{n+1})=0$. Lemma~\ref{lem:int} and Proposition~\ref{prop:DHDFsemi} imply: \begin{lem}\label{lem:ENA} The functional $E^{\mathrm{NA}}$ is homogeneous and satisfies \begin{equation}\label{equ:transE} E^{\mathrm{NA}}(\phi+c)=E^{\mathrm{NA}}(\phi)+c \end{equation} for any non-Archimedean metric $\phi$ on $L$ and any $c\in\mathbb{Q}$. We further have $$ E^{\mathrm{NA}}(\phi)=\int_\mathbb{R}\lambda\,\DH_\phi(d\lambda) $$ when $L$ is ample and $\phi\in\mathcal{H}^{\mathrm{NA}}(L)$ is positive.
\end{lem} \begin{lem}\label{lem:EMA} For every non-Archimedean metric $\phi$ on $L$ we have \begin{equation*} E^{\mathrm{NA}}(\phi) =\frac{1}{(n+1)V}\sum_{j=0}^n\left((\phi-\phi_\triv)\cdot\phi^j\cdot\phi_\triv^{n-j}\right). \end{equation*} Further, when $\phi$ is semipositive, we have, for $j=0,\dots,n-1$, \begin{equation}\label{e202} \left((\phi-\phi_\triv)\cdot\phi^j\cdot\phi_\triv^{n-j}\right) \ge\left((\phi-\phi_\triv)\cdot\phi^{j+1}\cdot\phi_\triv^{n-j-1}\right). \end{equation} \end{lem} \begin{proof} Since $(\phi_\triv^{n+1})=0$, we get $$ (n+1)V E^{\mathrm{NA}}(\phi)=(\phi^{n+1})-(\phi_\triv^{n+1})=\sum_{j=0}^n\left((\phi-\phi_\triv)\cdot\phi^j\cdot\phi_\triv^{n-j}\right). $$ The inequality~\eqref{e202} is now a consequence of Lemma~\ref{lem:monotone}. \end{proof} \begin{rmk}\label{R402} In view of Remark~\ref{R401} we can write the energy functional as \begin{equation*} E^\mathrm{NA}(\phi)=\frac{1}{n+1}\sum_{j=0}^n\frac1V\int_{X_{\mathrm{div}}}(\phi-\phi_\triv)\mu_j, \end{equation*} where $\mu_j=\MA^\mathrm{NA}(\phi,\dots,\phi,\phi_\triv,\dots,\phi_\triv)$ is a mixed Monge-Amp\`ere measure with $j$ copies of $\phi$. Note that this formula is identical to its counterpart in K\"ahler geometry. \end{rmk} \subsection{The non-Archimedean $I$ and $J$-functionals}\label{sec:J} \begin{defi}\label{defi:J} The \emph{non-Archimedean $I$ and $J$-functionals} are defined by $$ I^{\mathrm{NA}}(\phi):=V^{-1}\left(\phi\cdot\phi_\triv^n\right)-V^{-1}\left((\phi-\phi_\triv)\cdot\phi^n\right) $$ and $$ J^{\mathrm{NA}}(\phi):=V^{-1}(\phi\cdot\phi_\triv^n)-E^{\mathrm{NA}}(\phi) $$ for any non-Archimedean metric $\phi$ on $L$. \end{defi} \begin{lem}\label{lem:sup} When $L$ is ample, we have $$ V^{-1}(\phi\cdot\phi_\triv^n)=(\phi-\phi_\triv)(v_\triv)=\sup_{X^{\mathrm{div}}}(\phi-\phi_\triv)=\sup\supp\DH_\phi $$ for every positive metric $\phi\in\mathcal{H}^{\mathrm{NA}}(L)$. 
\end{lem} \begin{proof} Choose a normal, semiample test configuration $(\mathcal{X},\mathcal{L})$ representing $\phi$ and such that $\mathcal{X}$ dominates $X_{\mathbb{A}^1}$. Denote by $\rho\colon\mathcal{X}\to X_{\mathbb{A}^1}$ the canonical morphism, so that $\mathcal{L}=\rho^*L_{\mathbb{A}^1}+D$ for a unique $\mathbb{Q}$-Cartier divisor $D$ supported on $\mathcal{X}_0$. Then $$ (\phi\cdot\phi_\triv^n)=\left((\phi-\phi_\triv)\cdot\phi_\triv^n\right)=\left(D\cdot\rho^*L_{\mathbb{A}^1}^n\right)=(\rho_*D\cdot L_{\mathbb{A}^1}^n)=V\ord_{E_0}(D), $$ with $E_0$ the strict transform of $X\times\{0\}$ on $\mathcal{X}$. Theorem~\ref{thm:suppNA} yields the desired conclusion. \end{proof} \begin{prop}\label{prop:J} The non-Archimedean functionals $I^{\mathrm{NA}}$ and $J^{\mathrm{NA}}$ are translation invariant and homogeneous. On the space of semipositive metrics, they are nonnegative and satisfy \begin{equation}\label{e303} \tfrac 1 nJ^{\mathrm{NA}}\le I^{\mathrm{NA}}-J^{\mathrm{NA}}\le n J^{\mathrm{NA}}. \end{equation} When $L$ is ample, we further have $$ J^{\mathrm{NA}}(\phi)=\sup\supp\DH_{\phi}-\int_\mathbb{R}\lambda\,\DH_{\phi}(d\lambda) $$ for all positive metrics $\phi\in\mathcal{H}^{\mathrm{NA}}(L)$. \end{prop} \begin{proof} Translation invariance and homogeneity follow directly from Lemma~\ref{lem:int}. Now assume $\phi$ is semipositive. Then~\eqref{e202} shows that $I^{\mathrm{NA}}(\phi)\ge0$, $J^{\mathrm{NA}}(\phi)\ge0$, and \begin{multline*} V^{-1}\left((\phi-\phi_\triv)\cdot\phi_\triv^n\right) +nV^{-1}\left((\phi-\phi_\triv)\cdot\phi^n\right)\\ \le (n+1)E^{\mathrm{NA}}(\phi) \le n V^{-1}\left((\phi-\phi_\triv)\cdot\phi_\triv^n\right) +V^{-1}\left((\phi-\phi_\triv)\cdot\phi^n\right). 
\end{multline*} This implies \begin{multline*} n\left(I^{\mathrm{NA}}(\phi)-J^{\mathrm{NA}}(\phi)\right) =n\left(E^{\mathrm{NA}}(\phi)-V^{-1}\left((\phi-\phi_\triv)\cdot\phi^n\right)\right)\\ \ge V^{-1}\left((\phi-\phi_\triv)\cdot\phi_\triv^n\right)-E^{\mathrm{NA}}(\phi)=J^{\mathrm{NA}}(\phi), \end{multline*} and similarly for the second inequality in~\eqref{e303}. The final assertion is a consequence of Lemma~\ref{lem:ENA} and Lemma~\ref{lem:sup}. \end{proof} The above result shows that the functionals $J^\mathrm{NA}$ and $I^\mathrm{NA}$ are equivalent on the space of semipositive metrics, in the following sense: $$ \frac{n+1}{n} J^\mathrm{NA}\le I^\mathrm{NA}\le(n+1) J^\mathrm{NA}. $$ We next show that they are also equivalent to the $L^1$-norm $\|\cdot\|_1$ on positive metrics. \begin{thm}\label{thm:J} Assume $L$ is ample. Then, for every positive metric $\phi\in\mathcal{H}^{\mathrm{NA}}(L)$, we have $$ c_n J^{\mathrm{NA}}(\phi)\le\|\phi\|_1\le 2 J^\mathrm{NA}(\phi) $$ with $c_n:=2n^n/(n+1)^{n+1}$. In particular, $J^{\mathrm{NA}}(\phi)=0$ iff $\phi=\phi_\triv+c$ for some $c\in\mathbb{Q}$. \end{thm} \begin{proof} The final assertion follows from Theorem~\ref{thm:trivmetric}. By translation invariance, we may assume, after replacing $\phi$ with $\phi+c$, that $\nu:=\DH_\phi$ has barycenter $\bar\lambda=0$. By Proposition~\ref{prop:J} and Definition~\ref{defi:DHmetric}, we then have $$ J^\mathrm{NA}(\phi)=\lambda_{\max}=\sup\supp\nu $$ and $\|\phi\|_1=\int_\mathbb{R}|\lambda|d\nu$. By Theorem~\ref{thm:BC}, $f(\lambda)=\nu\{x\ge\lambda\}^{1/n}$ is further concave on $(-\infty,\lambda_{\max})$. Theorem~\ref{thm:J} is now a consequence of Lemma~\ref{lem:logconc} below. \end{proof} \begin{lem}\label{lem:logconc} Let $\nu$ be a probability measure on $\mathbb{R}$ with compact support and such that $\int_\mathbb{R}\lambda\,d\nu=0$. Assume also that $f(\lambda):=\nu\{x\ge\lambda\}^{1/n}$ is concave on $(-\infty,\lambda_{\max})$, with $\lambda_{\max}=\max\supp\nu$. 
Then \begin{equation}\label{equ:J1} c_n\lambda_{\max} \le\int|\lambda|d\nu \le2\lambda_{\max}, \end{equation} with $c_n$ as above. \end{lem} \begin{proof} Since $\int_\mathbb{R}\lambda\,d\nu=0$, we have $$ \int_\mathbb{R}|\lambda|d\nu=2\int_0^{\lambda_{\max}}\lambda\,d\nu, $$ giving the right-hand inequality in (\ref{equ:J1}). Our goal is to show that $$ \int_0^{\lambda_{\max}}\lambda\,d\nu\ge\frac{n^n}{(n+1)^{n+1}}\lambda_{\max}. $$ After scaling, we may and do assume for simplicity that $\lambda_{\max}=1$. Since $\nu$ is the distributional derivative of $-f(\lambda)^n$, it is easy to check that $$ \int_0^1\lambda\,d\nu=\int_0^1f(\lambda)^nd\lambda=\int_{-\infty}^0(1-f(\lambda)^n)d\lambda. $$ Set $a:=f'(0_+)<0$ and $b:=f(0)\in(0,1)$. By concavity of $f$ on $(-\infty,1)$, we have $f(\lambda)\le a\lambda+b$ on $(-\infty,1)$ and $$ f(\lambda)\ge b(1-\lambda)+\lambda f(1_-)\ge b(1-\lambda) $$ on $(0,1)$. This last inequality yields \begin{equation}\label{equ:fb} \int_0^1\lambda\,d\nu=\int_0^1{f(\lambda)}^n d\lambda\ge b^n\int_0^1(1-\lambda)^n d\lambda=\frac{b^n}{n+1}. \end{equation} The first one shows that \begin{equation*} \int_0^1\left(a\lambda+b\right)^n d\lambda\ge\int_0^1 f(\lambda)^n d\lambda =\int_{-\infty}^0\left(1-f(\lambda)^n\right)d\lambda\ge\int_{\lambda_0}^0\left(1-(a\lambda+b)^n\right) d\lambda, \end{equation*} with $\lambda_0<0$ defined by $a\lambda_0+b=1$. Computing the integrals, we infer $$ \frac{1}{a(n+1)}\left((a+b)^{n+1}-b^{n+1}\right)\ge-\lambda_0+\frac{1}{a(n+1)}\left(1-b^{n+1}\right), $$ \ie $$ (a+b)^{n+1}-b^{n+1}\le-a \lambda_0(n+1)+1-b^{n+1}. $$ Since $-a\lambda_0=b-1$ and $a+b\ge f(1_-)\ge 0$, this shows that $0\le(b-1)(n+1)+1$, \ie $b\ge\frac{n}{n+1}$. Plugging this into (\ref{equ:fb}) yields the desired result. \end{proof} \begin{rmk} The inequalities in Theorem~\ref{thm:J} can be viewed as non-Archimedean analogues of~\cite[(61)]{Dar15} and~\cite[Proposition~5.5]{DR15}.
\end{rmk} \begin{rmk}\label{rmk:derJ} In our notation, the expression for the \emph{minimum norm} $\|(\mathcal{X},\mathcal{L})\|_m$ given in~\cite[Remark 3.11]{Der1} reads $\|(\mathcal{X},\mathcal{L})\|_m=\tfrac{1}{n+1}(\phi^{n+1})-\left((\phi-\phi_\triv)\cdot\phi^n\right)$, \ie $$ V^{-1}\|(\mathcal{X},\mathcal{L})\|_m=I^\mathrm{NA}(\phi)-J^{\mathrm{NA}}(\phi), $$ where $\phi\in\mathcal{H}^{\mathrm{NA}}(L)$ denotes the metric induced by $(\mathcal{X},\mathcal{L})$. It therefore follows from our results that the minimum norm is equivalent to the $L^1$-norm on positive metrics. \end{rmk} \subsection{The non-Archimedean Mabuchi functional}\label{sec:MNA} From now on we assume that the base field $k$ has characteristic $0$. We still assume that $X$ is a normal projective variety and $L$ a nef and big $\mathbb{Q}$-line bundle on $X$. Fix a boundary $B$ on $X$. Recall the notation introduced in~\S\ref{sec:logdisc} for the relative canonical and log canonical divisors. When $L$ is ample, we can rewrite the definition of the Donaldson-Futaki invariant with respect to $((X,B);L)$ of a normal test configuration $(\mathcal{X},\mathcal{L})$ (see Definition~\ref{defi:DFpairs}) as \begin{equation}\label{equ:DFB} \DF_B(\mathcal{X},\mathcal{L}) =V^{-1}(K_{(\bar\mathcal{X},\bar\mathcal{B})/\P^1}\cdot\bar\mathcal{L}^n)+\bar S_B E^{\mathrm{NA}}(\mathcal{X},\mathcal{L}). \end{equation} This formula also makes sense when $L$ is not ample. Since canonical divisor classes are compatible under push-forward, the projection formula shows that $\DF_B$ is invariant under pull-back, hence descends to a functional, also denoted $\DF_B$, on non-Archimedean metrics on $L$. While it is straightforward to see that $\DF_B$ is translation invariant, it is, however, \emph{not} homogeneous, and we therefore introduce an `error term' to recover this property.
\begin{defi}\label{defi:M} The \emph{non-Archimedean Mabuchi functional} with respect to $((X,B);L)$ is \begin{align} M_B^{\mathrm{NA}}(\phi) :&=\DF_B(\phi) +V^{-1}\left((\mathcal{X}_{0,\red}-\mathcal{X}_0)\cdot\mathcal{L}^n\right)\label{e403}\\ &=V^{-1}\left(K^{\log}_{(\bar\mathcal{X},\bar\mathcal{B})/\P^1}\cdot\bar\mathcal{L}^n\right) +\bar S_B E^{\mathrm{NA}}(\mathcal{X},\mathcal{L}),\label{e308} \end{align} for any normal test configuration $(\mathcal{X},\mathcal{L})$ representing $\phi$. \end{defi} \begin{prop}\label{prop:M} The non-Archimedean Mabuchi functional $M_B^{\mathrm{NA}}$ is translation invariant and homogeneous. \end{prop} \begin{proof} Translation invariance is straightforward to verify. As for homogeneity, it is enough to prove it for \begin{equation*} (\mathcal{X},\mathcal{L})\mapsto\left(K^\mathrm{log}_{(\bar\mathcal{X},\bar\mathcal{B})/\P^1}\cdot\bar\mathcal{L}^n\right). \end{equation*} As in~\cite[\S3]{LX}, this, in turn, is a consequence of the pull-back formula for log canonical divisors. More precisely, let $(\mathcal{X}_d,\mathcal{L}_d)$ be the normalized base change of $(\mathcal{X},\mathcal{L})$ by $t\mapsto t^d$, and denote by $f_d\colon\P^1\to\P^1$ and $g_d\colon\bar\mathcal{X}_d\to\bar\mathcal{X}$ the induced finite morphisms, both of which have degree $d$. By~\eqref{e404} we have $K^\mathrm{log}_{(\bar\mathcal{X}_d,\bar\mathcal{B}_d)/\P^1}=g_d^*K^\mathrm{log}_{(\bar\mathcal{X},\bar\mathcal{B})/\P^1}$. Hence we get \begin{equation}\label{e401} \left(K^\mathrm{log}_{(\bar\mathcal{X}_d,\bar\mathcal{B}_d)/\P^1}\cdot\bar\mathcal{L}_d^n\right) =d\left(K^\mathrm{log}_{(\bar\mathcal{X},\bar\mathcal{B})/\P^1}\cdot\bar\mathcal{L}^n\right) \end{equation} by the projection formula. \end{proof} \begin{prop}\label{P201} We have $M_B^{\mathrm{NA}}(\phi)\le\DF_B(\phi)$ when $\phi$ is semipositive. Further, equality holds if $\phi$ is represented by a normal test configuration $(\mathcal{X},\mathcal{L})$ with $\mathcal{X}_0$ reduced.
\end{prop} Indeed, the `error term' in~\eqref{e403} is nonpositive since $\bar\mathcal{L}$ is relatively semiample. While equality does not always hold in Proposition~\ref{P201}, we have the following useful result. \begin{prop}\label{prop:weaksemi} For every non-Archimedean metric $\phi$ on $L$ there exists $d_0=d_0(\phi)\in\mathbb{Z}_{>0}$ such that $\DF_B(\phi_d)=M_B^{\mathrm{NA}}(\phi_d)=dM_B^{\mathrm{NA}}(\phi)$ for all $d$ divisible by $d_0$. \end{prop} \begin{proof} Let $(\mathcal{X},\mathcal{L})$ be any normal representative of $\phi$. Then a normal representative $(\mathcal{X}_d,\mathcal{L}_d)$ of $\phi_d$ is given by the normalization of the base change of $(\mathcal{X},\mathcal{L})$ by $t\mapsto t^d$. It is well-known (see~\eg~\cite[\href{stacks.math.columbia.edu/tag/09IJ}{Tag 09IJ}]{stacks}) that the central fiber of $\mathcal{X}_d$ is reduced for $d$ sufficiently divisible. Then $\DF_B(\phi_d)=M_B^{\mathrm{NA}}(\phi_d)$ by Proposition~\ref{P201}, whereas $M_B^{\mathrm{NA}}(\phi_d)=dM_B^{\mathrm{NA}}(\phi)$ by Proposition~\ref{prop:M}. \end{proof} \subsection{Entropy and Ricci energy}\label{S301} Next we define non-Archimedean analogues of the entropy and Ricci energy functionals, and prove that the Chen-Tian formula holds. \begin{defi} We define the \emph{non-Archimedean entropy} $H_B^{\mathrm{NA}}(\phi)$ of a non-Archimedean metric $\phi$ on $L$ by \begin{equation*} H_B^{\mathrm{NA}}(\phi):=\int_{X_{\mathrm{div}}}A_{(X,B)}(v)\,\MA^{\mathrm{NA}}(\phi), \end{equation*} where $\MA^\mathrm{NA}(\phi)$ is the non-Archimedean Monge-Amp\`ere measure of $\phi$, defined in~\S\ref{S401}. \end{defi} Concretely, pick a normal test configuration $(\mathcal{X},\mathcal{L})$ for $(X,L)$ representing $\phi$, and write $\mathcal{X}_0=\sum_Eb_EE$ and $v_E=b_E^{-1}r(\ord_E)$. Then \begin{equation}\label{e402} H_B^{\mathrm{NA}}(\phi)=V^{-1}\sum_E A_{(X,B)}(v_E)b_E(E\cdot\mathcal{L}^n). \end{equation} Note that $H_B^{\mathrm{NA}}(\phi)\ge0$ whenever $(X,B)$ is lc and $\phi$ is semipositive.
Indeed, in this case we have $A_{(X,B)}(v_E)\ge0$ and $(E\cdot\mathcal{L}^n)\ge0$ for all $E$. See~\S\ref{S304} for much more precise results. As an immediate consequence of~\eqref{e402} and Corollary~\ref{cor:discr}, we have \begin{cor}\label{C402} If $(\mathcal{X},\mathcal{L})$ is a normal representative of $\phi$, with $\mathcal{X}$ dominating $X_{\mathbb{A}^1}$, then \begin{equation}\label{e305} H^\mathrm{NA}_B(\phi) =V^{-1}\left(K^\mathrm{log}_{(\bar\mathcal{X},\bar\mathcal{B})/\P^1}\cdot\bar\mathcal{L}^n\right) -V^{-1}\left(\rho^*K^\mathrm{log}_{(X_{\P^1},B_{\P^1})/\P^1}\cdot\bar\mathcal{L}^n\right), \end{equation} where $\rho\colon\bar\mathcal{X}\to X_{\P^1}$ is the canonical morphism. \end{cor} \begin{cor}\label{C401} The non-Archimedean entropy functional $H_B^{\mathrm{NA}}$ is translation invariant and homogeneous. \end{cor} \begin{proof} Translation invariance is clear from the definition, since $\MA^\mathrm{NA}(\phi+c)=\MA^\mathrm{NA}(\phi)$, and homogeneity follows from~\eqref{e401} and~\eqref{e305}. \end{proof} \begin{defi} The \emph{non-Archimedean Ricci energy} $R_B^{\mathrm{NA}}(\phi)$ of a non-Archimedean metric $\phi$ on $L$ is \begin{equation*} R^{\mathrm{NA}}_B(\phi):=V^{-1}\left(\psi_\triv\cdot\phi^n\right), \end{equation*} with $\psi_\triv$ the trivial non-Archimedean metric on $K_{(X,B)}$. \end{defi} More concretely, if $(\mathcal{X},\mathcal{L})$ is a normal representative of $\phi$, with $\mathcal{X}$ dominating $X_{\mathbb{A}^1}$, then \begin{equation}\label{e307} R^{\mathrm{NA}}_B(\phi) =V^{-1}\left(\rho^*K^\mathrm{log}_{(X_{\P^1},B_{\P^1})/\P^1}\cdot\bar\mathcal{L}^n\right) =V^{-1}\left(p^*K_{(X,B)}\cdot\bar\mathcal{L}^n\right), \end{equation} with $p\colon\bar\mathcal{X}\to X$ the composition of $\rho\colon\bar\mathcal{X}\to X_{\P^1}$ with $X_{\P^1}\to X$.
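As a quick sanity check of~\eqref{e307}, $R^{\mathrm{NA}}_B$ vanishes on the trivial metric: taking $\mathcal{X}=X_{\mathbb{A}^1}$, we have $\bar\mathcal{X}=X\times\P^1$ and $\bar\mathcal{L}=L_{\P^1}=p^*L$, so that
\begin{equation*}
R^{\mathrm{NA}}_B(\phi_\triv)=V^{-1}\left(p^*K_{(X,B)}\cdot p^*L^n\right)=0,
\end{equation*}
since an intersection of $n+1$ classes pulled back from the $n$-dimensional variety $X$ vanishes.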
\begin{prop}\label{P401} The non-Archimedean Ricci energy functional $R_B^{\mathrm{NA}}$ is homogeneous and satisfies $R_B^\mathrm{NA}(\phi+c)=R_B^\mathrm{NA}(\phi)-\bar{S}_B c$ for any $c\in\mathbb{Q}$. \end{prop} \begin{proof} Homogeneity follows from~\eqref{e401} and~\eqref{e307}. The formula for $R_B^\mathrm{NA}(\phi+c)$ also follows from~\eqref{e307}. Indeed, set $\bar\mathcal{M}:=\rho^*K^\mathrm{log}_{(X_{\P^1},B_{\P^1})/\P^1}$. Then \begin{multline*} R^\mathrm{NA}_B(\phi+c)-R^\mathrm{NA}_B(\phi) =V^{-1}(\bar\mathcal{M}\cdot(\bar\mathcal{L}+c\mathcal{X}_0)^n) -V^{-1}(\bar\mathcal{M}\cdot\bar\mathcal{L}^n)\\ =cnV^{-1}(\bar\mathcal{M}\cdot\bar\mathcal{L}^{n-1}\cdot\mathcal{X}_0) =cnV^{-1}(K_{(X,B)}\cdot L^{n-1}) =-\bar{S}_Bc \end{multline*} by flatness of $\bar\mathcal{X}\to\P^1$, since $\bar\mathcal{M}|_{\mathcal{X}_1}\simeq K_{(X,B)}$, $\bar\mathcal{L}|_{\mathcal{X}_1}\simeq L$, and $\mathcal{X}_0\cdot\mathcal{X}_0=0$. \end{proof} As an immediate consequence of~\eqref{e308},~\eqref{e305} and~\eqref{e307} we get \begin{prop}\label{prop:DFChen} The following version of the Chen-Tian formula holds: \begin{equation*} M_B^{\mathrm{NA}}=H_B^{\mathrm{NA}}+R^{\mathrm{NA}}_B+\bar S_B E^{\mathrm{NA}}. \end{equation*} \end{prop} \begin{rmk} In the terminology of~\cite{Oda1}, $H_B^{\mathrm{NA}}(\phi)+V^{-1}\left((\mathcal{X}_0-\mathcal{X}_{0,\red})\cdot\mathcal{L}^n\right)$ coincides (up to a multiplicative constant) with the `discrepancy term' of the Donaldson-Futaki invariant, while $\bar S_B\,E^{\mathrm{NA}}(\phi)+R^{\mathrm{NA}}_B(\phi)$ corresponds to the `canonical divisor part'. \end{rmk} \subsection{Functoriality}\label{S305} Consider a birational morphism $\mu\colon X'\to X$, with $X'$ a normal projective variety. Set $L':=\mu^*L$ and define a boundary $B'$ on $X'$ by $K_{(X',B')}=\mu^*K_{(X,B)}$ and $\mu_*B'=B$. For any non-Archimedean metric $\phi$ on $L$, let $\phi'=\mu^*\phi$ be the pullback, see~\S\ref{S101}. Note that $L'$ is big and nef.
By the projection formula, we have $V':=((L')^n)=V$ and $\bar S_{B'}:=n(V')^{-1}\left(-K_{(X',B')}\cdot (L')^{n-1}\right)=\bar S_B$. Let us say that a functional $F=F_{X,B}$ on non-Archimedean metrics is \emph{pull-back invariant} if $F_{X',B'}(\phi')=F_{X,B}(\phi)$ for every non-Archimedean metric $\phi$ on $L$. \begin{prop} The functionals $E^\mathrm{NA}$, $I^\mathrm{NA}$, $J^\mathrm{NA}$, $\DF_B$, $M_B^\mathrm{NA}$, $H_B^\mathrm{NA}$ and $R_B^\mathrm{NA}$ are all pull-back invariant. \end{prop} \begin{proof} Let $(\mathcal{X},\mathcal{L})$ be a normal representative of $\phi$ such that $\mathcal{X}$ dominates $X_{\mathbb{A}^1}$. Pick a normal test configuration $\mathcal{X}'$ that dominates $X'_{\mathbb{A}^1}$ and such that the unique $\mathbb{G}_m$-equivariant birational map $\mathcal{X}'\to\mathcal{X}$ extending $\mu$ is a morphism. Then $\phi'$ is represented by $(\mathcal{X}',\mathcal{L}')$, where $\mathcal{L}'$ is the pullback of $\mathcal{L}$. The pull-back invariance of all the functionals now follows from the projection formula for the induced map $\bar\mathcal{X}'\to\bar\mathcal{X}$. \end{proof} Recall from~\S\ref{S101} that if $\phi$ is a non-Archimedean metric on $L$, then $r\phi$ is a non-Archimedean metric on $rL$ for any $r\in\mathbb{Q}_{>0}$. One directly verifies that the functionals $E^{\mathrm{NA}}$, $I^{\mathrm{NA}}$ and $J^\mathrm{NA}$ are homogeneous of degree 1 in the sense that $E^\mathrm{NA}(r\phi)=rE^\mathrm{NA}(\phi)$ etc, whereas the functionals $\DF_B$, $M_B^\mathrm{NA}$, $H_B^\mathrm{NA}$ and $R_B^\mathrm{NA}$ are homogeneous of degree 0, that is, $\DF_B(r\phi)=\DF_B(\phi)$ etc. \subsection{The log K\"ahler-Einstein case}\label{S302} In the \emph{log K\"ahler-Einstein case}, \ie when $K_{(X,B)}$ is proportional to $L$, the formula for $M_B^{\mathrm{NA}}$ takes the following alternative form. \begin{lem}\label{lem:MKE} Assume that $K_{(X,B)}\equiv\lambda L$ for some $\lambda\in\mathbb{Q}$.
Then $$ M_B^{\mathrm{NA}}=H_B^{\mathrm{NA}}+\lambda\left(I^{\mathrm{NA}}-J^{\mathrm{NA}}\right). $$ \end{lem} \begin{proof} Let $\psi_\triv$ and $\phi_\triv$ be the trivial non-Archimedean metrics on $K_{(X,B)}$ and $L$, respectively. Since $K_{(X_{\P^1},B_{\P^1})}\equiv\lambda L_{\P^1}$ we get \begin{equation*} R^{\mathrm{NA}}_B(\phi)=V^{-1}(\psi_\triv\cdot\phi^n)=\lambda V^{-1}\left(\phi_\triv\cdot\phi^n\right). \end{equation*} Further, $\bar S_B=-n\lambda$, so we infer \begin{multline*} R^{\mathrm{NA}}_B(\phi)+\bar S_B E^{\mathrm{NA}}(\phi) =\lambda V^{-1}\left[\left(\phi_\triv\cdot\phi^n\right) -\frac{n}{n+1}(\phi^{n+1}) \right]\\ =\lambda V^{-1}\left[\frac{1}{n+1}(\phi^{n+1}) -\left((\phi-\phi_\triv)\cdot\phi^n\right)\right]\\ =\lambda\left[E^{\mathrm{NA}}(\phi)-V^{-1}\left((\phi-\phi_\triv)\cdot\phi^n\right)\right] =\lambda\left(I^{\mathrm{NA}}(\phi)-J^{\mathrm{NA}}(\phi)\right), \end{multline*} which completes the proof in view of the Chen-Tian formula. \end{proof} \subsection{The non-Archimedean Ding functional}\label{sec:Ding} In this section, $(X,B)$ denotes a \emph{weak log Fano pair}, \ie $X$ is a normal, projective variety and $B$ is a $\mathbb{Q}$-Weil divisor such that $(X,B)$ is subklt with $L:=-K_{(X,B)}$ big and nef. For example, $X$ could be smooth, with $-K_X$ ample (and $B=0$). The following non-Archimedean version of the Ding functional first appeared in~\cite{Berm16}.\footnote{This appears in~\cite[Proposition~3.8]{Berm16}. See also Proposition~\ref{P403} below.} It plays a crucial role in the variational approach to the Yau-Tian-Donaldson conjecture in~\cite{BBJ15}; see also~\cite{Fuj15b,Fuj16}. The usual Ding functional was introduced in~\cite{Din88}. 
\begin{defi} The \emph{non-Archimedean Ding functional} is defined by \begin{equation*} D^{\mathrm{NA}}_B:=L^{\mathrm{NA}}_B-E^{\mathrm{NA}}, \end{equation*} with \begin{equation*} L^\mathrm{NA}_B(\phi):=\inf_v(A_{(X,B)}(v)+(\phi-\phi_{\triv})(v)), \end{equation*} the infimum taken over all valuations $v$ on $X$ that are divisorial or trivial. \end{defi} Recall that $\phi-\phi_\triv$ is a non-Archimedean metric on $\mathcal{O}_X$, which we identify with a bounded function on divisorial valuations. \begin{prop}\label{P402} The non-Archimedean Ding functional $D_B^{\mathrm{NA}}$ is translation invariant, homogeneous, and pullback invariant. \end{prop} \begin{proof} By the corresponding properties of the functional $E^\mathrm{NA}$, it suffices to prove that $L_B^\mathrm{NA}$ is homogeneous, pullback invariant, and satisfies $L_B^\mathrm{NA}(\phi+c)=L_B^\mathrm{NA}(\phi)+c$ for $c\in\mathbb{Q}$. The latter equality is clear from the definition, and the homogeneity of $L_B^\mathrm{NA}$ follows from~\eqref{e405} applied to the metric $\phi-\phi_\triv$ on $\mathcal{O}_X$, together with the fact that $A_{(X,B)}(tv)=tA_{(X,B)}(v)$ for $t\in\mathbb{Q}_+$. Pullback invariance is also clear. Indeed, with notation as in~\S\ref{S305}, and with the identification of divisorial (or trivial) valuations on $X$ and $X'$, we have, by construction, $\phi'-\phi'_\triv=\phi-\phi_\triv$ and $A_{(X,B)}=A_{(X',B')}$. Thus $L^\mathrm{NA}_{B'}(\phi')=L^\mathrm{NA}_B(\phi)$. \end{proof} \begin{prop}\label{P404} For every non-Archimedean metric $\phi$ on $L$, we have $D^{\mathrm{NA}}_B(\phi)\le J^{\mathrm{NA}}(\phi)$. \end{prop} \begin{proof} The trivial valuation $v_\triv$ on $X$ satisfies $A_{(X,B)}(v_\triv)=0$ and $E^\mathrm{NA}(\phi)+J^\mathrm{NA}(\phi)=(\phi-\phi_\triv)(v_\triv)$. Hence \begin{equation*} L^\mathrm{NA}_B(\phi) \le A_{(X,B)}(v_\triv)+(\phi-\phi_\triv)(v_\triv) =E^\mathrm{NA}(\phi)+J^\mathrm{NA}(\phi), \end{equation*} which yields $D^\mathrm{NA}_B(\phi)\le J^\mathrm{NA}(\phi)$.
\end{proof} In the definition of the Ding functional, we take the infimum over all divisorial valuations on $X$. As the next result shows, this is neither practical nor necessary. \begin{prop}\label{P403} Let $\phi$ be a non-Archimedean metric on $L=-K_{(X,B)}$ represented by a normal test configuration $(\mathcal{X},\mathcal{L})$ for $(X,L)$, such that $(\mathcal{X},\mathcal{B}+\mathcal{X}_{0,\red})$ is a sublc pair. Write \begin{equation*} \mathcal{L}+K_{(\mathcal{X},\mathcal{B})/\mathbb{A}^1}^\mathrm{log}=\mathcal{O}_\mathcal{X}(D), \end{equation*} for a $\mathbb{Q}$-Cartier divisor $D$ on $\mathcal{X}$ supported on $\mathcal{X}_0$. Then \begin{align*} L^{\mathrm{NA}}_B(\phi) &=\lct_{(\mathcal{X},\mathcal{B}+\mathcal{X}_{0,\red}-D)}(\mathcal{X}_0)\\ &=\min_E\left(A_{(X,B)}(v_E)+(\phi-\phi_{\triv})(v_E)\right), \end{align*} where $E$ ranges over the irreducible components of $\mathcal{X}_0$. \end{prop} Note that the assumption that $(\mathcal{X},\mathcal{B}+\mathcal{X}_{0,\red})$ be sublc is satisfied when $(\mathcal{X},\mathcal{B}+\mathcal{X}_0)$ is log smooth (even when $\mathcal{X}_0$ is not necessarily reduced). Proposition~\ref{P403} shows in particular that the definition of $D^\mathrm{NA}_B$ given above is compatible with~\cite{Berm16,Fuj15b}. By~\cite[Proposition 3.8]{Berm16}, the non-Archimedean Ding functional is thus the limit of the usual Ding functional in the sense of~\eqref{e301}; hence the name. \begin{lem}\label{L401} Let $w$ be a divisorial valuation on $\mathcal{X}$ centered on $\mathcal{X}_0$ and normalized by $w(\mathcal{X}_0)=1$, and let $v=r(w)$ be the associated divisorial (or trivial) valuation on $X$. Then \begin{equation}\label{e407} A_{(\mathcal{X},\mathcal{B}+\mathcal{X}_{0,\red})}(w)+w(D) =A_{(X,B)}(v)+(\phi-\phi_\triv)(v). \end{equation} In particular, $\ord_E(D)=b_E(A_{(X,B)}(v_E)+(\phi-\phi_\triv)(v_E))$ for every irreducible component $E$ of $\mathcal{X}_0$.
\end{lem} \begin{proof} Pick any normal test configuration $\mathcal{X}'$ for $X$ dominating both $\mathcal{X}$ and $X_{\mathbb{A}^1}$ via $\mu\colon\mathcal{X}'\to\mathcal{X}$ and $\rho\colon\mathcal{X}'\to X_{\mathbb{A}^1}$, respectively, such that $w=b_{E'}^{-1}\ord_{E'}$ for an irreducible component $E'$ of $\mathcal{X}'_0$. By~\eqref{equ:Klog} and~\eqref{equ:Klogbis} we have \begin{equation*} K^\mathrm{log}_{(\mathcal{X}',\mathcal{B}')/\mathbb{A}^1}-\mu^*K^\mathrm{log}_{(\mathcal{X},\mathcal{B})/\mathbb{A}^1} =\sum_{E'}A_{(\mathcal{X},\mathcal{B}+\mathcal{X}_{0,\red})}(\ord_{E'})E', \end{equation*} and \begin{equation*} K^\mathrm{log}_{(\mathcal{X}',\mathcal{B}')/\mathbb{A}^1}-\rho^*K^\mathrm{log}_{(X_{\mathbb{A}^1},B_{\mathbb{A}^1})/\mathbb{A}^1} =\sum_{E'} b_{E'}A_{(X,B)}(v_{E'})E', \end{equation*} respectively. We also have \begin{equation*} \mu^*\mathcal{L} =-\rho^*K^\mathrm{log}_{(X_{\mathbb{A}^1},B_{\mathbb{A}^1})/\mathbb{A}^1} +\sum_{E'}b_{E'}(\phi-\phi_\triv)(v_{E'})E'. \end{equation*} Putting this together, and using $D=\mathcal{L}+K_{(\mathcal{X},\mathcal{B})/\mathbb{A}^1}^\mathrm{log}$, we get \begin{equation*} \mu^*D+\sum_{E'}A_{(\mathcal{X},\mathcal{B}+\mathcal{X}_{0,\red})}(\ord_{E'})E' =\sum_{E'}b_{E'}(A_{(X,B)}(v_{E'})+(\phi-\phi_\triv)(v_{E'}))E', \end{equation*} and taking the coefficient along $E'$ and dividing by $b_{E'}$ yields~\eqref{e407}. Finally, the last assertion follows since $A_{(\mathcal{X},\mathcal{B}+\mathcal{X}_{0,\red})}(w)=0$ when $w=b_E^{-1}\ord_E$ for any irreducible component $E$ of $\mathcal{X}_0$. \end{proof} \begin{proof}[Proof of Proposition~\ref{P403}] Recall that $\lct_{(\mathcal{X},\mathcal{B}+\mathcal{X}_{0,\red}-D)}(\mathcal{X}_0)$ is the supremum of $c\in\mathbb{R}$ such that \begin{equation*} 0\le A_{(\mathcal{X},\mathcal{B}+\mathcal{X}_{0,\red}-D+c\mathcal{X}_0)}(w) =A_{(\mathcal{X},\mathcal{B}+\mathcal{X}_{0,\red})}(w)+w(D)-cw(\mathcal{X}_0) \end{equation*} for all divisorial valuations $w$ on $\mathcal{X}$.
Here it suffices to consider $w$ centered on $\mathcal{X}_0$. Indeed, otherwise $w(D)=w(\mathcal{X}_0)=0$ and $A_{(\mathcal{X},\mathcal{B}+\mathcal{X}_{0,\red})}(w)=A_{(X_{\mathbb{A}^1},B_{\mathbb{A}^1})}(w)\ge 0$, since $(X_{\mathbb{A}^1},B_{\mathbb{A}^1})$ is sublc. If $w$ is centered on $\mathcal{X}_0$, then we may after scaling assume that $w(\mathcal{X}_0)=1$. In this case,~\eqref{e407} applies, and shows that $\lct_{(\mathcal{X},\mathcal{B}+\mathcal{X}_{0,\red}-D)}(\mathcal{X}_0)=L^\mathrm{NA}_B(\phi)$. It remains to prove that $L^\mathrm{NA}_B(\phi)=\ell:=\min_E(A_{(X,B)}(v_E)+(\phi-\phi_{\triv})(v_E))$, where $E$ ranges over irreducible components of $\mathcal{X}_0$. The inequality $L^\mathrm{NA}_B(\phi)\le\ell$ is obvious. For the reverse inequality, note that Lemma~\ref{L401} implies $D\ge\ell\mathcal{X}_0$. We now use the assumption that $(\mathcal{X},\mathcal{B}+\mathcal{X}_{0,\red})$ is sublc. Consider $w$ and $v$ as above. On the one hand, $(\mathcal{X},\mathcal{B}+\mathcal{X}_{0,\red})$ being sublc implies $A_{(\mathcal{X},\mathcal{B}+\mathcal{X}_{0,\red})}(w)\ge0$. On the other hand, we have $w(D)\ge\ell$ since $D\ge\ell\mathcal{X}_0$. Thus~\eqref{e407} yields $A_{(X,B)}(v)+(\phi-\phi_\triv)(v)\ge\ell$. Since this is true for all divisorial or trivial valuations on $X$, we get $L^\mathrm{NA}_B(\phi)\ge\ell$, which completes the proof. \end{proof} \subsection{Ding vs Mabuchi} We continue to assume that $(X,B)$ is a weak log Fano pair. By Lemma~\ref{lem:MKE}, applied with $\lambda=-1$, the non-Archimedean Mabuchi functional is given by \begin{equation}\label{e109} M^\mathrm{NA}_B=H^\mathrm{NA}_B-(I^\mathrm{NA}-J^\mathrm{NA}).
\end{equation} For any normal test configuration $(\mathcal{X},\mathcal{L})$ representing a non-Archimedean metric $\phi$ on $L$, we can write this as \begin{align} M^\mathrm{NA}_B(\phi) &=V^{-1}((K^\mathrm{log}_{(\mathcal{X},\mathcal{B})/\mathbb{A}^1}+\mathcal{L})\cdot\mathcal{L}^n) -E^\mathrm{NA}(\phi)\label{e408}\\ &=\sum_E c_E(A_{(X,B)}(v_E)+(\phi-\phi_\triv)(v_E)) -E^\mathrm{NA}(\phi),\label{e409} \end{align} where $E$ ranges over the irreducible components of $\mathcal{X}_0$ and $c_E:=V^{-1} b_E(\mathcal{L}^n\cdot E)$. Note that $\sum_E c_E=1$ and that $c_E\ge0$ if $\phi$ is semipositive. \begin{defi} A non-Archimedean metric $\phi$ on $L=-K_{(X,B)}$ is \emph{anticanonical} if it is represented by a normal test configuration $(\mathcal{X},\mathcal{L})$ for $(X,-K_{(X,B)})$ such that $(\mathcal{X},\mathcal{B}+\mathcal{X}_{0,\red})$ is sublc and such that $\mathcal{L}=-K^\mathrm{log}_{(\mathcal{X},\mathcal{B})/\mathbb{A}^1}+c\mathcal{X}_0$ for some $c\in\mathbb{Q}$. \end{defi} Note that if $\phi$ is anticanonical, then so is $\phi+c$ for any $c\in\mathbb{Q}$. \begin{prop}\label{P102} For every semipositive non-Archimedean metric $\phi$ on $L$, we have \begin{equation*} D_B^\mathrm{NA}(\phi)\le M^\mathrm{NA}_B(\phi), \end{equation*} with equality if $\phi$ is anticanonical. \end{prop} \begin{rmk} In K\"ahler geometry, the inequality $D_B(\phi)\le M_B(\phi)$ is well-known, and equality holds iff $\phi$ is a K\"ahler-Einstein metric, see~\eg~\cite[Lemma~4.4]{BBEGZ}. Proposition~\ref{P102} therefore suggests that semipositive anticanonical non-Archimedean metrics on $-K_{(X,B)}$ play the role of (weak) non-Archimedean K\"ahler-Einstein metrics. \end{rmk} \begin{proof} Consider the expression~\eqref{e409} for $M^\mathrm{NA}(\phi)$. Since $\phi$ is semipositive, we have $c_E\ge 0$ and $\sum_Ec_E=1$. 
This implies \begin{multline*} M^\mathrm{NA}_B(\phi) \ge\min_E(A_{(X,B)}(v_E)+(\phi-\phi_\triv)(v_E))-E^\mathrm{NA}(\phi)\\ \ge\inf_v(A_{(X,B)}(v)+(\phi-\phi_\triv)(v))-E^\mathrm{NA}(\phi) =D^\mathrm{NA}_B(\phi). \end{multline*} Now suppose $\phi$ is anticanonical and let $(\mathcal{X},\mathcal{L})$ be a normal test configuration for $(X,L)$ such that $(\mathcal{X},\mathcal{B}+\mathcal{X}_{0,\red})$ is sublc and such that $\mathcal{L}=-K^\mathrm{log}_{(\mathcal{X},\mathcal{B})/\mathbb{A}^1}+c\mathcal{X}_0$ for some $c\in\mathbb{Q}$. On the one hand,~\eqref{e408} gives $M^\mathrm{NA}_B(\phi)=c-E^\mathrm{NA}(\phi)$. On the other hand, Lemma~\ref{L401} yields $A_{(X,B)}(v_E)+(\phi-\phi_\triv)(v_E)=c$ for all irreducible components $E$ of $\mathcal{X}_0$. Thus Proposition~\ref{P403} implies that $D_B^\mathrm{NA}(\phi)=c-E^\mathrm{NA}(\phi)$, which completes the proof. \end{proof} \section{Uniform K-stability}\label{sec:Kstab} We continue working with a pair $(X,B)$, where $X$ is a normal projective variety over an algebraically closed field $k$ of characteristic zero. In this section, we further assume that the $\mathbb{Q}$-line bundle $L$ is ample. \subsection{Uniform K-stability} In the present language, Definition~\ref{defi:logKstab} says that $((X,B);L)$ is K-semistable iff $\DF_B(\phi)\ge 0$ for all positive metrics $\phi\in\mathcal{H}^\mathrm{NA}(L)$, while K-stability further requires that $\DF_B(\phi)=0$ only when $\phi=\phi_\triv+c$ for some $c\in\mathbb{Q}$. In line with the point of view of~\cite{Sze2}, we introduce: \begin{defi}\label{defi:unifKstab} The polarized pair $((X,B);L)$ is \emph{$L^p$-uniformly K-stable} if $\DF_B\ge\d\|\cdot\|_p$ on $\mathcal{H}^\mathrm{NA}(L)$ for some uniform constant $\d>0$. For $p=1$, we simply speak of \emph{uniform K-stability}. \end{defi} Since $\|\cdot\|_p\ge\|\cdot\|_1$, $L^p$-uniform K-stability implies ($L^1$-)uniform K-stability for any $p\ge1$. Note also that uniform K-stability implies (as it should!)
K-stability, thanks to Theorem~\ref{thm:trivmetric}. \begin{prop}\label{prop:coer} The polarized pair $((X,B);L)$ is K-semistable iff $M_B^\mathrm{NA}\ge 0$ on $\mathcal{H}^\mathrm{NA}(L)$. It is $L^p$-uniformly K-stable iff $M_B^\mathrm{NA}\ge\d\|\cdot\|_p$ on $\mathcal{H}^\mathrm{NA}(L)$ for some $\d>0$. For $p=1$, this is also equivalent to $M_B^\mathrm{NA}\ge\d J^\mathrm{NA}$ for some $\d>0$. \end{prop} \begin{proof} We prove the second point, the first one being similar (and easier). The if part is clear, since $M_B^{\mathrm{NA}}\le\DF_B$. For the reverse implication, let $\phi\in\mathcal{H}^{\mathrm{NA}}(L)$. By Proposition~\ref{P201} we can pick $d\ge 1$ such that $M_B^{\mathrm{NA}}(\phi_d)=\DF_B(\phi_d)$. By assumption, $\DF_B(\phi_d)\ge\d \|\phi_d\|_p$, and we conclude by homogeneity of $M_B^{\mathrm{NA}}$ and $\|\cdot\|_p$. The final assertion is now a consequence of the equivalence between $J^\mathrm{NA}$ and $\|\cdot\|_1$ proved in Theorem~\ref{thm:J}. \end{proof} \begin{rmk} By Remark~\ref{rmk:derJ}, our notion of uniform K-stability is also equivalent to uniform K-stability with respect to the minimum norm in the sense of~\cite{Der1}. \end{rmk} \begin{rmk}\label{R303} It is clear that, for any $r\in\mathbb{Q}_{>0}$ and $p\ge 1$, $((X,B);L)$ is K-semistable (resp.\ $L^p$-uniformly K-stable) iff $((X,B);rL)$ is K-semistable (resp.\ $L^p$-uniformly K-stable). \end{rmk} The next result confirms G.~Sz\'ekelyhidi's expectation that $p=\tfrac{n}{n-1}$ is a threshold value for $L^p$-uniform K-stability, cf.~\cite[\S3.1.1]{Sze1}. \begin{prop}\label{prop:thresh} A polarized pair $((X,B);L)$ cannot be $L^p$-uniformly K-stable unless $p\le\frac{n}{n-1}$. More precisely, any polarized pair $((X,B);L)$ admits a sequence $\phi_\varepsilon\in\mathcal{H}^{\mathrm{NA}}(L)$, parametrized by $0<\varepsilon\ll 1$ rational, such that $M_B^{\mathrm{NA}}(\phi_\varepsilon)\sim\varepsilon^n$ and $\|\phi_\varepsilon\|_p\sim\varepsilon^{1+\frac{n}{p}}$ for each $p\ge 1$.
\end{prop} \begin{proof} We shall construct $\phi_\varepsilon$ as a small perturbation of the trivial metric. By Remark~\ref{R303} we may assume that $L$ is an actual line bundle. Let $x\in X\smallsetminus\supp B$ be a regular closed point, and let $\rho\colon\mathcal{X}\to X_{\mathbb{A}^1}$ be the blow-up of $(x,0)$ (\ie the deformation to the normal cone), with exceptional divisor $E$. For each rational $\varepsilon>0$ small enough, $\mathcal{L}_\varepsilon:=\rho^*L_{\mathbb{A}^1}-\varepsilon E$ is relatively ample, and hence defines a normal, ample test configuration $(\mathcal{X},\mathcal{L}_\varepsilon)$ for $(X,L)$, with associated non-Archimedean metric $\phi_\varepsilon\in\mathcal{H}^{\mathrm{NA}}(L)$. Lemma~\ref{lem:filtr} gives the following description of the filtration $F^\bullet_\varepsilon R$ attached to $(\mathcal{X},\mathcal{L}_\varepsilon)$: \begin{equation*} F_\varepsilon^{m\lambda} H^0(X,mL)=\left\{s\in H^0(X,mL)\mid v_E(s)\ge m(\lambda+\varepsilon)\right\} \end{equation*} for $\lambda\le 0$, and $F_\varepsilon^{m\lambda}H^0(X,mL)=0$ for $\lambda>0$. If we denote by $F$ the exceptional divisor of the blow-up $\pi\colon X'\to X$ at $x$, then $v_E=\ord_F$, and the Duistermaat-Heckman measure $\DH_\varepsilon$ is thus given by \begin{equation*} \DH_\varepsilon\{x\ge\lambda\}=V^{-1}\left((\pi^*L-(\lambda+\varepsilon)F)^n\right)=1-V^{-1}(\lambda+\varepsilon)^n \end{equation*} for $\lambda\in(-\varepsilon,0)$, $\DH_\varepsilon\{x\ge\lambda\}=1$ for $\lambda\le-\varepsilon$, and $\DH_\varepsilon\{x\ge\lambda\}=0$ for $\lambda>0$. Hence \begin{equation*} \DH_\varepsilon=nV^{-1}{\bf 1}_{[-\varepsilon,0]}(\lambda+\varepsilon)^{n-1}d\lambda+(1-V^{-1}\varepsilon^n)\d_0.
\end{equation*} We see from this that $\lambda_{\mathrm{max}}=0$, \begin{equation*} J^{\mathrm{NA}}(\phi_\varepsilon) =-E^{\mathrm{NA}}(\phi_\varepsilon) =-\int_\mathbb{R}\lambda\,\DH_\varepsilon(d\lambda) =-\frac{n}{V}\int_{-\varepsilon}^0\lambda(\lambda+\varepsilon)^{n-1}\,d\lambda =O(\varepsilon^{n+1}), \end{equation*} and \begin{multline*} \|\phi_\varepsilon\|_p^p=\int_\mathbb{R}\left|\lambda-E^{\mathrm{NA}}(\phi_\varepsilon)\right|^p\,\DH_\varepsilon(d\lambda)\\ =nV^{-1}\int_{-\varepsilon}^0\left|\lambda+O(\varepsilon^{n+1})\right|^p(\lambda+\varepsilon)^{n-1}d\lambda +(1-V^{-1}\varepsilon^n)O(\varepsilon^{p(n+1)})\\ =\varepsilon^{p+n}\left[nV^{-1}\int_0^1\left|t+O(\varepsilon^n)\right|^p(1-t)^{n-1}dt +O(\varepsilon^{n(p-1)})+o(1)\right]\\ =\varepsilon^{p+n}(c+o(1)) \end{multline*} for some $c>0$. The estimate for $M_B^{\mathrm{NA}}(\phi_\varepsilon)$ is a special case of Proposition~\ref{prop:neartriv} below, but let us give a direct proof. By~\eqref{e308} it suffices to prove that $(K^{\log}_{(\bar\mathcal{X},\bar\mathcal{B})/\P^1}\cdot\bar\mathcal{L}_\varepsilon^n)\sim\varepsilon^n$. Here $K^{\log}_{(\bar\mathcal{X},\bar\mathcal{B})/\P^1}=\rho^*K^\mathrm{log}_{(X_{\P^1},B_{\P^1})/\P^1}+nE$, since $E$ has log discrepancy $n+1$ with respect to $X_{\mathbb{A}^1}$ and appears with multiplicity $1$ in $\mathcal{X}_0$. Since $\rho_*E^j=0$ for $1\le j\le n$ and $((-E)^{n+1})=-1$, the projection formula yields $(K^{\log}_{(\bar\mathcal{X},\bar\mathcal{B})/\P^1}\cdot\bar\mathcal{L}_\varepsilon^n)=n\varepsilon^n$. Finally, since $1+\frac{n}{p}<n$ for $p>\frac{n}{n-1}$, these estimates give $M_B^{\mathrm{NA}}(\phi_\varepsilon)=o(\|\phi_\varepsilon\|_p)$ as $\varepsilon\to 0$ for such $p$, which proves the first assertion. \end{proof} \subsection{Uniform Ding stability} Now consider the log Fano case, that is, $(X,B)$ is klt and $L:=-K_{(X,B)}$ is ample. We can then consider stability with respect to the non-Archimedean Ding functional $D_B^\mathrm{NA}$ on $\mathcal{H}^\mathrm{NA}$ defined in~\S\ref{sec:Ding}. Namely, following~\cite{BBJ15} (see also Fujita~\cite{Fuj15b,Fuj16}) we say that $(X,B)$ is \emph{Ding semistable} if $D_B^\mathrm{NA}\ge0$, and \emph{uniformly Ding stable} if $D_B^\mathrm{NA}\ge\d J^\mathrm{NA}$ for some $\d>0$.
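Note that $D_B^\mathrm{NA}\le J^\mathrm{NA}$ on $\mathcal{H}^\mathrm{NA}$ by Proposition~\ref{P404}, so uniform Ding stability can only hold with constant $\d\le1$: for every $\phi\in\mathcal{H}^\mathrm{NA}$ we then have
\begin{equation*}
\d J^\mathrm{NA}(\phi)\le D_B^\mathrm{NA}(\phi)\le J^\mathrm{NA}(\phi),
\end{equation*}
while $J^\mathrm{NA}(\phi)>0$ whenever $\phi$ is not a translate of $\phi_\triv$.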
A proof of the following result in the case when $X$ is smooth and $B=0$ appears in~\cite{BBJ15}. The general case is treated in~\cite{Fuj16}. \begin{thm}\label{T101} Let $X$ be a normal projective variety and $B$ an effective boundary on $X$ such that $(X,B)$ is klt and $L:=-K_{(X,B)}$ is ample. Then, for any $\d\in[0,1]$, we have $M^\mathrm{NA}_B\ge\delta J^\mathrm{NA}$ on $\mathcal{H}^\mathrm{NA}$ iff $D^\mathrm{NA}_B\ge\delta J^\mathrm{NA}$ on $\mathcal{H}^\mathrm{NA}$. In particular, $((X,B);L)$ is K-semistable (resp.\ uniformly K-stable) iff $(X,B)$ is Ding-semistable (resp.\ uniformly Ding-stable). \end{thm} \section{Uniform K-stability and singularities of pairs}\label{sec:kstabsing} In this section, the base field $k$ is assumed to have characteristic $0$. We still assume $X$ is a normal variety, unless otherwise stated. \subsection{Odaka-type results for pairs}\label{S304} Let $B$ be an effective boundary on $X$. Recall that the pair $(X,B)$ is lc (log canonical) if $A_{(X,B)}(v)\ge 0$ for all divisorial valuations $v$ on $X$, while $(X,B)$ is klt if $A_{(X,B)}(v)>0$ for all such $v$. \begin{thm}\label{thm:lc} Let $(X,L)$ be a normal polarized variety, and $B$ an effective boundary on $X$. Then $$ (X,B)\text{ lc}\Longleftrightarrow H_B^{\mathrm{NA}}\ge 0\text{ on }\mathcal{H}^{\mathrm{NA}}(L) $$ and $$ ((X,B);L)\text{ K-semistable}\Longrightarrow(X,B)\text{ lc}. $$ \end{thm} The proof of this result, given in~\S\ref{S203}, follows rather closely the line of argument of~\cite{Oda3}. The second implication is also observed in~\cite[Theorem 6.1]{OSu}. The general result of~\cite{Oda3}, dealing with the non-normal case, is discussed in \S\ref{sec:slc}. \begin{thm}\label{thm:klt} Let $(X,L)$ be a normal polarized variety and $B$ an effective boundary on $X$. 
Then the following assertions are equivalent: \begin{itemize} \item[(i)] $(X,B)$ is klt; \item[(ii)] there exists $\d>0$ such that $H_B^{\mathrm{NA}}\ge\d J^{\mathrm{NA}}$ on $\mathcal{H}^{\mathrm{NA}}(L)$; \item[(iii)] $H_B^{\mathrm{NA}}(\phi)>0$ for every $\phi\in\mathcal{H}^{\mathrm{NA}}(L)$ that is not a translate of $\phi_\triv$. \end{itemize} \end{thm} We prove this in~\S\ref{S204}. The proof of (iii)$\Longrightarrow$(i) is similar to that of~\cite[Theorem 1.3]{Oda3} (which deals with the Fano case), while that of (i)$\Longrightarrow$(ii) relies on an Izumi-type estimate (Theorem~\ref{thm:izumi}). As we shall see, (ii) holds with $\d$ equal to the global log canonical threshold of $((X,B);L)$ (cf.~Proposition~\ref{prop:lct} below). \smallskip The above results have the following consequences in the `log K\"ahler-Einstein case', \ie when $K_{(X,B)}\equiv\lambda L$ for some $\lambda\in\mathbb{Q}$. After scaling $L$, we may assume $\lambda=0$ or $\lambda=\pm1$. First, we have a uniform version of~\cite[Theorem 4.1, (i)]{OSu}. Closely related results were independently obtained in~\cite[\S3.4]{Der1}. \begin{cor}\label{cor:canpol} Let $X$ be a normal projective variety and $B$ an effective boundary on $X$ such that $L:=K_{(X,B)}$ is ample. Then the following assertions are equivalent: \begin{itemize} \item[(i)] $(X,B)$ is lc; \item[(ii)] $((X,B);L)$ is uniformly K-stable, with $M_B^{\mathrm{NA}}\ge\frac1{n}J^{\mathrm{NA}}$ on $\mathcal{H}^{\mathrm{NA}}(L)$; \item[(iii)] $((X,B);L)$ is K-semistable. \end{itemize} \end{cor} Next, in the log Calabi-Yau case we get a uniform version of~\cite[Theorem 4.1, (ii)]{OSu}: \begin{cor}\label{cor:CY} Let $(X,L)$ be a normal polarized variety, $B$ an effective boundary on $X$, and assume that $K_{(X,B)}\equiv 0$. Then $((X,B);L)$ is K-semistable iff $(X,B)$ is lc.
Further, the following assertions are equivalent: \begin{itemize} \item[(i)] $(X,B)$ is klt; \item[(ii)] $((X,B);L)$ is uniformly K-stable; \item[(iii)] $((X,B);L)$ is K-stable. \end{itemize} \end{cor} \begin{rmk} By~\cite[Corollary~3.3]{Oda1}, there exist polarized K-stable Calabi-Yau orbifolds $(X,L)$ (which have log terminal singularities) that are not asymptotically Chow (or, equivalently, Hilbert) semistable. In view of Corollary~\ref{cor:CY}, it follows that uniform K-stability does not imply asymptotic Chow stability in general. \end{rmk} Finally, in the log Fano case we obtain: \begin{cor}\label{cor:Fano} Let $X$ be a normal projective variety and $B$ an effective boundary on $X$ such that $L:=-K_{(X,B)}$ is ample. If $((X,B);L)$ is K-semistable, then $H_B^{\mathrm{NA}}\ge\frac1nJ^{\mathrm{NA}}$ on $\mathcal{H}^{\mathrm{NA}}(L)$; in particular, $(X,B)$ is klt. \end{cor} A partial result in the reverse direction can be found in Proposition~\ref{prop:alpha}. See also~\cite[Theorem 6.1]{OSu} and~\cite[Theorem 3.39]{Der1} for closely related results. Corollaries~\ref{cor:canpol},~\ref{cor:CY} and~\ref{cor:Fano} are proved in~\S\ref{S205}. \subsection{Lc and klt blow-ups} The following result, due to Y.~Odaka and C.~Xu, deals with lc blow-ups. The proof is based on an ingenious application of the MMP. \begin{thm}\label{thm:OX}\cite[Theorem 1.1]{OX} Let $B$ be an effective boundary on $X$ with coefficients at most $1$. Then there exists a unique projective birational morphism $\mu\colon X'\to X$ such that, denoting by $B'$ the strict transform of $B$ on $X'$: \begin{itemize} \item[(i)] the exceptional locus of $\mu$ is a (reduced) divisor $E$; \item[(ii)] $(X',E+B')$ is lc and $K_{X'}+E+B'$ is $\mu$-ample. \end{itemize} \end{thm} \begin{cor}\label{cor:lc} Let $B$ be an effective boundary on $X$, and assume that $(X,B)$ is not lc. Then there exists a closed subscheme $Z\subset X$ whose Rees valuations $v$ all satisfy $A_{(X,B)}(v)<0$.
\end{cor} \begin{proof} If $B$ has an irreducible component $F$ with coefficient $>1$, then $A_{(X,B)}(\ord_F)<0$, and $Z:=F$ has the desired property, since $\ord_F$ is its unique Rees valuation (cf.~Example~\ref{ex:rees}). If not, Theorem~\ref{thm:OX} applies. Denoting by $A_i:=A_{(X,B)}(\ord_{E_i})$ the log discrepancies of the irreducible components $E_i$ of $E$, we have \begin{equation}\label{equ:disc} K_{X'}+E+B'=\mu^*K_{(X,B)}+\sum_i A_i E_i, \end{equation} which proves that $\sum_i A_i E_i$ is $\mu$-ample, and hence $A_i<0$ by the negativity lemma (or Lemma~\ref{lem:ample}). Proposition~\ref{prop:reesexc} now yields the desired subscheme. \end{proof} We next prove an analogous result for klt pairs, using a well-known and easy consequence of the MMP as in~\cite{BCHM}. \begin{prop}\label{prop:klt} Let $B$ be an effective boundary, and assume that $(X,B)$ is not klt. Then there exists a closed subscheme $Z\subset X$ whose Rees valuations $v$ all satisfy $A_{(X,B)}(v)\le 0$. \end{prop} \begin{proof} If $B$ has an irreducible component $F$ with coefficient at least $1$, then $A_{(X,B)}(\ord_F)\le 0$, and we may again take $Z=F$. Assume now that $B$ has coefficients $<1$. Let $\pi\colon X'\to X$ be a log resolution of $(X,B)$. This means $X'$ is smooth, the exceptional locus $E$ of $\pi$ is a (reduced) divisor, and $E+B'$ has snc support, with $B'$ the strict transform of $B$. If we denote by $A_i:=A_{(X,B)}(\ord_{E_i})$ the log discrepancies of the irreducible components $E_i$ of $E$, then~\eqref{equ:disc} holds with $\pi$ in place of $\mu$, and hence \begin{equation}\label{equ:knum} K_{X'}+(1-\varepsilon)E+B'=\pi^*(K_X+B)+\sum_i(A_i-\varepsilon)E_i \end{equation} for any $0<\varepsilon<1$. If we pick $\varepsilon$ smaller than $\min_{A_i>0}A_i$, then the $\mathbb{Q}$-divisor $D:=\sum_i(A_i-\varepsilon)E_i$ is $\pi$-big (since the generic fiber of $\pi$ is a point), and $\pi$-numerically equivalent to the log canonical divisor of the klt pair $(X',(1-\varepsilon)E+B')$ by~\eqref{equ:knum}.
Picking any $m_0\ge 1$ such that $m_0D$ is a Cartier divisor,~\cite[Theorem 1.2]{BCHM} shows that the $\mathcal{O}_X$-algebra of relative sections $$ R(X'/X,m_0D):=\bigoplus_{m\in\mathbb{N}}\pi_*\mathcal{O}_{X'}(mm_0D) $$ is finitely generated. Its relative $\Proj$ over $X$ yields a projective birational morphism $\mu\colon Y\to X$, with $Y$ normal, such that the induced birational map $\phi\colon X'\dashrightarrow Y$ is surjective in codimension one (\ie $\phi^{-1}$ does not contract any divisor) and $\phi_*D=\sum_i(A_i-\varepsilon)\phi_*E_i$ is $\mu$-ample. Since $D$ is $\pi$-exceptional and $\phi$ is surjective in codimension $1$, $\phi_*D$ is also $\mu$-exceptional. By Lemma~\ref{lem:ample}, $-\phi_*D$ is effective and its support coincides with the exceptional locus of $\mu$. Hence the $\mu$-exceptional prime divisors are exactly the strict transforms of those $E_i$'s with $A_i-\varepsilon<0$, \ie $A_i\le 0$ by the definition of $\varepsilon$. As before, we conclude using Proposition~\ref{prop:reesexc}. \end{proof} \subsection{Proof of Theorem~\ref{thm:lc}}\label{S203} If $(X,B)$ is lc, then it is clear from the definition of the non-Archimedean entropy functional that $H^{\mathrm{NA}}_B\ge0$ on $\mathcal{H}^{\mathrm{NA}}(L)$. Now assume that $(X,B)$ is not lc. By Corollary~\ref{cor:lc}, there exists a closed subscheme $Z\subset X$ whose Rees valuations $v$ all satisfy $A_{(X,B)}(v)<0$. Corollary~\ref{cor:rees} then yields a normal, ample test configuration $(\mathcal{X},\mathcal{L})$ for $(X,L)$ such that \begin{equation*} \left\{v_E\mid E\ \text{a nontrivial irreducible component of }\mathcal{X}_0\right\} \end{equation*} coincides with the (nonempty) set of Rees valuations of $Z$. Thus $A_{(X,B)}(v_E)<0$ for all nontrivial irreducible components $E$ of $\mathcal{X}_0$. Denote by $\phi\in\mathcal{H}^{\mathrm{NA}}$ the non-Archimedean metric defined by $(\mathcal{X},\mathcal{L})$.
We directly get $H_B^{\mathrm{NA}}(\phi)<0$, so $H_B^\mathrm{NA}\not\ge0$ on $\mathcal{H}^\mathrm{NA}$. Further, Proposition~\ref{prop:neartriv} implies that the positive metric $\phi_\varepsilon:=\varepsilon\phi+(1-\varepsilon)\phi_\triv$ satisfies $M_B^{\mathrm{NA}}(\phi_\varepsilon)<0$ for $0<\varepsilon\ll 1$. Hence $((X,B);L)$ cannot be K-semistable. This completes the proof. \begin{defi}\label{defi:c_E} Let $(\mathcal{X},\mathcal{L})$ be a normal, semiample test configuration for $(X,L)$ representing a positive metric $\phi\in\mathcal{H}^{\mathrm{NA}}(L)$. For each irreducible component $E$ of $\mathcal{X}_0$, let $Z_E\subset X$ be the closure of the center of $v_E$ on $X$, and set $r_E:=\codim_X Z_E$. Then the canonical birational map $\mathcal{X}\dashrightarrow X_{\mathbb{A}^1}$ maps $E$ onto $Z_E\times\{0\}$. Let $F_E$ be the generic fiber of the induced map $E\dashrightarrow Z_E$, and define the \emph{local degree} $\deg_E(\phi)$ of $\phi$ at $E$ as $$ \deg_E(\phi):=(F_E\cdot\mathcal{L}^{r_E}). $$ \end{defi} Since $\mathcal{L}$ is semiample on $E\subset\mathcal{X}_0$, we have $\deg_E(\phi)\ge 0$, and $\deg_E(\phi)>0$ iff $E$ is not contracted on the ample model of $(\mathcal{X},\mathcal{L})$. The significance of these invariants is illustrated by the following estimate, whose proof is straightforward. \begin{lem}\label{lem:estim} With the above notation, assume that $\mathcal{X}$ dominates $X_{\mathbb{A}^1}$ via $\rho\colon\mathcal{X}\to X_{\mathbb{A}^1}$. Given $0\le j\le n$ and line bundles $M_1,\dots,M_{n-j}$ on $X$, we have, for $0<\varepsilon\ll1$ rational: \begin{multline*} \left(E\cdot\left(\rho^*L_{\mathbb{A}^1}+\varepsilon D\right)^j \cdot\rho^*\left(M_{1,\mathbb{A}^1}\cdot\ldots\cdot M_{n-j,\mathbb{A}^1}\right)\right)\\ =\begin{cases} \varepsilon^{r_E}\left[\deg_E(\phi)\binom{j}{r_E}\left(Z_E\cdot L^{j-r_E}\cdot M_1\cdot\ldots\cdot M_{n-j}\right)\right]+O(\varepsilon^{r_E+1}) &\text{for $j\ge r_E$}\\ 0&\text{for $j<r_E$}. 
\end{cases} \end{multline*} \end{lem} \begin{prop}\label{prop:neartriv} Pick $\phi\in\mathcal{H}^{\mathrm{NA}}(L)$ that is not a translate of $\phi_\triv$, and let $(\mathcal{X},\mathcal{L})$ be its unique normal ample representative. Set $r:=\min_E r_E$, with $r_E=\codim_X Z_E$ and $E$ running over all non-trivial irreducible components of $\mathcal{X}_0$ (and hence $r\ge 1$). Let further $B$ be a boundary on $X$. Then $\phi_\varepsilon:=\varepsilon\phi+(1-\varepsilon)\phi_\triv$ satisfies \begin{equation*} J^{\mathrm{NA}}(\phi_\varepsilon)=O(\varepsilon^{r+1}), \quad R_B^{\mathrm{NA}}(\phi_\varepsilon)=O(\varepsilon^{r+1}), \end{equation*} and \begin{align*} M_B^{\mathrm{NA}}(\phi_\varepsilon) &=H_B^{\mathrm{NA}}(\phi_\varepsilon)+O(\varepsilon^{r+1})\\ &=\varepsilon^r\left[V^{-1}\sum_{r_E=r}\deg_E(\phi)b_E \left(Z_E\cdot L^{n-r}\right)A_{(X,B)}(v_E)\right]+O(\varepsilon^{r+1}). \end{align*} \end{prop} \begin{proof} Let $(\mathcal{X}',\mathcal{L}')$ be a normal test configuration dominating $(\mathcal{X},\mathcal{L})$ and $(X_{\mathbb{A}^1},L_{\mathbb{A}^1})$. Write $\mathcal{L}'=\rho^*L_{\mathbb{A}^1}+D$, where $\rho\colon\mathcal{X}'\to X_{\mathbb{A}^1}$ is the morphism. Note that $(\mathcal{X}',\mathcal{L}'_\varepsilon)$, with $\mathcal{L}'_\varepsilon=\rho^*L_{\mathbb{A}^1}+\varepsilon D$, is a representative of $\phi_\varepsilon$. By translation invariance of $J^{\mathrm{NA}}$ and $M_B^{\mathrm{NA}}$, we may assume $(\phi\cdot\phi_\triv^n)=0$, \ie $\ord_{E_0}(D)=0$ for the strict transform $E_0$ of $X\times\{0\}$ to $\mathcal{X}'$, by Theorem~\ref{thm:supp}. Then $(\phi_\varepsilon\cdot\phi_\triv^n)=0$, and hence $J^{\mathrm{NA}}(\phi_\varepsilon)=-E^{\mathrm{NA}}(\phi_\varepsilon)$. Lemma~\ref{lem:EMA} yields \begin{equation*} (n+1)VE^{\mathrm{NA}}(\phi_\varepsilon)=\sum_{j=0}^n\left(\varepsilon D\cdot\left(\rho^*L_{\mathbb{A}^1} +\varepsilon D\right)^j\cdot\rho^*L_{\mathbb{A}^1}^{n-j}\right).
\end{equation*} Since we have normalized $D$ by $\ord_{E_0}(D)=0$, Lemma~\ref{lem:estim} implies $E^{\mathrm{NA}}(\phi_\varepsilon)=O(\varepsilon^{r+1})$, and hence $J^{\mathrm{NA}}(\phi_\varepsilon)=O(\varepsilon^{r+1})$. Similarly, \begin{multline*} V R^{\mathrm{NA}}_B(\phi_\varepsilon) =\left(\rho^*K^\mathrm{log}_{(X_{\P^1},B_{\P^1})/\P^1}\cdot(\bar\mathcal{L}'_\varepsilon)^n\right)\\ =\left(\rho^*K^\mathrm{log}_{(X_{\P^1},B_{\P^1})/\P^1}\cdot(\bar\mathcal{L}'_\varepsilon)^n\right) -\left(\rho^*K^\mathrm{log}_{(X_{\P^1},B_{\P^1})/\P^1}\cdot\rho^*L_{\P^1}^n\right)\\ =\sum_{j=0}^{n-1}\left(\varepsilon D\cdot\left(\rho^*L_{\mathbb{A}^1} +\varepsilon D\right)^j\cdot\rho^*L_{\mathbb{A}^1}^{n-j-1}\cdot\rho^*K^\mathrm{log}_{(X_{\P^1},B_{\P^1})/\P^1}\right) =O(\varepsilon^{r+1}). \end{multline*} The expression for $M^{\mathrm{NA}}_B$ now follows from the Chen-Tian formula (see Proposition~\ref{prop:DFChen}) and Lemma~\ref{lem:estim} applied to \begin{equation*} H_B^{\mathrm{NA}}(\phi_\varepsilon) =V^{-1}\sum_EA_{(X,B)}(v_E)b_E\left(E\cdot\left(\rho^*L_{\mathbb{A}^1}+\varepsilon D\right)^n\right) \end{equation*} where $E$ runs over the non-trivial irreducible components of $\mathcal{X}'_0$. \end{proof} \subsection{The non-normal case}\label{sec:slc} In this section we briefly sketch the proof of the following general result, due to Odaka~\cite[Theorem 1.2]{Oda3}. \begin{thm}\label{thm:scl} Let $X$ be a deminormal scheme with $K_X$ $\mathbb{Q}$-Cartier. Let $L$ be an ample line bundle on $X$, and assume that $(X,L)$ is K-semistable. Then $X$ is slc. \end{thm} Recall from Remark~\ref{rmk:slc} that $(X,L)$ is K-semistable iff $\DF_{\widetilde{B}}(\widetilde{\mathcal{X}},\tilde{\mathcal{L}})\ge 0$ for all ample test configurations $(\mathcal{X},\mathcal{L})$ for $(X,L)$.
Here $(\widetilde{X},\tilde{L})$ and $(\widetilde{\mathcal{X}},\tilde{\mathcal{L}})$ denote the normalizations of $(X,L)$ and $(\mathcal{X},\mathcal{L})$, respectively, and $\widetilde{B}$ is the conductor, viewed as a reduced Weil divisor on $\widetilde{X}$. On the other hand, $X$ is slc iff $(\widetilde{X},\widetilde{B})$ is lc, by definition. Assuming that $X$ is not slc, \ie that $(\widetilde{X},\widetilde{B})$ is not lc, our goal is thus to produce an ample test configuration $(\mathcal{X},\mathcal{L})$ for $(X,L)$ such that $\DF_{\widetilde{B}}(\widetilde{\mathcal{X}},\tilde{\mathcal{L}})<0$. By Theorem~\ref{thm:OX}, the non-lc pair $(\widetilde{X},\widetilde{B})$ admits an lc blow-up $\mu\colon\widetilde{X}'\to\widetilde{X}$. As explained on~\cite[p.332]{OX}, Koll\'ar's gluing theorem implies that $\widetilde{X}'$ is the normalization of a reduced scheme $X'$, with a morphism $X'\to X$. As a consequence, we can find a closed subscheme $Z\subset X$ whose inverse image $\widetilde{Z}\subset\widetilde{X}$ is such that $A_{(\widetilde{X},\widetilde{B})}(v)<0$ for each Rees valuation $v$ of $\widetilde{Z}$. Let $\nu\colon\mathcal{X}\to X\times\mathbb{A}^1$ be the deformation to the normal cone of $Z$, with exceptional divisor $E$, and set $\mathcal{L}_\varepsilon=\nu^*L_{\mathbb{A}^1}-\varepsilon E$ with $0<\varepsilon\ll 1$. Since the normalization $\widetilde{\mathcal{X}}$ of $\mathcal{X}$ is also the normalization of the deformation to the normal cone of $\widetilde{Z}$, we have $A_{(\widetilde{X},\widetilde{B})}(v_E)<0$ for each non-trivial irreducible component $E$ of $\widetilde{\mathcal{X}}_0$, and Proposition~\ref{prop:neartriv} gives, as desired, $\DF_{\widetilde{B}}(\widetilde{\mathcal{X}},\tilde{\mathcal{L}}_\varepsilon)<0$ for $0<\varepsilon\ll 1$. 
\subsection{The global log canonical threshold and proof of Theorem~\ref{thm:klt}}\label{S204} Recall from~\S\ref{sec:bound} the definition of the log canonical threshold of an effective $\mathbb{Q}$-Cartier divisor $D$ with respect to a subklt pair $(X,B)$: \begin{equation*} \lct_{(X,B)}(D)=\inf_v\frac{A_{(X,B)}(v)}{v(D)}. \end{equation*} Similarly, given an ideal $\mathfrak{a}$ and $c\in\mathbb{Q}_+$, we set \begin{equation*} \lct_{(X,B)}(\mathfrak{a}^c):=\inf_v\frac{A_{(X,B)}(v)}{v(\mathfrak{a}^c)}, \end{equation*} with $v(\mathfrak{a}^c):=c v(\mathfrak{a})$. The main ingredient in the proof of (i)$\Longrightarrow$(ii) of Theorem~\ref{thm:klt} is the following result. \begin{thm}\label{thm:izumi} If $((X,B);L)$ is a polarized subklt pair, then \begin{equation}\label{equ:infima} \inf_D\lct_{(X,B)}(D)=\inf_{\mathfrak{a},c}\lct_{(X,B)}(\mathfrak{a}^c), \end{equation} where the left-hand infimum is taken over all effective $\mathbb{Q}$-Cartier divisors $D$ on $X$ that are $\mathbb{Q}$-linearly equivalent to $L$, and the right-hand one is over all non-zero ideals $\mathfrak{a}\subset\mathcal{O}_X$ and all $c\in\mathbb{Q}_+$ such that $L\otimes\mathfrak{a}^c$ is nef. Further, these two infima are strictly positive. \end{thm} Here we say that $L\otimes\mathfrak{a}^c$ is nef if $\mu^*L-cE$ is nef on the normalized blow-up $\mu\colon X'\to X$ of $\mathfrak{a}$, with $E$ the effective Cartier divisor such that $\mathfrak{a}\cdot\mathcal{O}_{X'}=\mathcal{O}_{X'}(-E)$. \begin{defi}\label{defi:glct} The \emph{global log canonical threshold} $\lct((X,B);L)$ of a polarized subklt pair $((X,B);L)$ is the common value of the two infima in Theorem~\ref{thm:izumi}. \end{defi} \begin{proof}[Proof of Theorem~\ref{thm:izumi}] Let us first prove that the two infima coincide. Let $D$ be an effective $\mathbb{Q}$-Cartier divisor $\mathbb{Q}$-linearly equivalent to $L$. Pick $m\ge 1$ such that $mD$ is Cartier, and set $\mathfrak{a}:=\mathcal{O}_X(-mD)$ and $c:=1/m$. 
Then $v(\mathfrak{a}^c)=v(D)$ for all $v$, and $L\otimes\mathfrak{a}^c$ is nef since $L-cmD$ is even numerically trivial. Hence $\inf_D\lct_{(X,B)}(D)\ge\inf_{\mathfrak{a},c}\lct_{(X,B)}(\mathfrak{a}^c)$. Conversely, assume that $L\otimes\mathfrak{a}^c$ is nef. Let $\mu\colon X'\to X$ be the normalized blow-up of $X$ along $\mathfrak{a}$ and $E$ the effective Cartier divisor on $X'$ such that $\mathcal{O}_{X'}(-E)=\mathfrak{a}\cdot\mathcal{O}_{X'}$, so that $\mu^*L-c E$ is nef. Since $-E$ is $\mu$-ample, we can find $0<c'\ll 1$ such that $\mu^*L-c' E$ is ample. Setting $c_\varepsilon:=(1-\varepsilon)c+\varepsilon c'$, we then have that $\mu^*L-c_\varepsilon E$ is ample for all $0<\varepsilon<1$. Let also $B'$ be the unique $\mathbb{Q}$-Weil divisor on $X'$ such that $\mu^*K_{(X,B)}=K_{(X',B')}$ and $\mu_*B'=B$, so that $(X',B')$ is a pair with $A_{(X,B)}=A_{(X',B')}$. If we choose a log resolution $\pi\colon X''\to X'$ of $(X',B'+E)$ and let $F=\sum_i F_i$ be the sum of all $\pi$-exceptional primes and of the strict transform of $B'_\red+E_\red$, then $$ \lct_{(X,B)}(\mathfrak{a}^{c_\varepsilon})=\lct_{(X',B')}(c_\varepsilon E)=\min_i\frac{A_{(X',B')}(\ord_{F_i})}{\ord_{F_i}(c_\varepsilon E)}. $$ Given $0<\varepsilon<1$, pick $m\gg 1$ such that \begin{itemize} \item[(i)] $mc_\varepsilon\in\mathbb{N}$; \item[(ii)] $m(\mu^*L-c_\varepsilon E)$ is very ample; \item[(iii)] $m\ge\lct_{(X,B)}(\mathfrak{a}^{c_\varepsilon})$. \end{itemize} Let $H\in |m(\mu^*L-c_\varepsilon E)|$ be a general element, and set $D:=\mu_*(c_\varepsilon E+m^{-1}H)$, so that $D$ is $\mathbb{Q}$-Cartier, $\mathbb{Q}$-linearly equivalent to $L$, and $\mu^*D=c_\varepsilon E+m^{-1}H$. By Bertini's theorem, $\pi$ is also a log resolution of $(X',B'+E+H)$, and hence \begin{equation*} \lct_{(X,B)}(D) =\lct_{(X',B')}(c_\varepsilon E+m^{-1}H) =\min\left\{\frac{A_{(X',B')}(v)}{v(c_\varepsilon E+m^{-1}H)}\ \bigg|\ v=\ord_{F_i}\ \text{or $v=\ord_H$}\right\}. 
\end{equation*} But $H$, being general, does not contain the center of $\ord_{F_i}$ on $X'$ and is not contained in $\supp E$, \ie $\ord_{F_i}(H)=0$ and $\ord_H(E)=0$, and (iii) above shows that \begin{equation*} \lct_{(X,B)}(D)=\min\left\{\lct_{(X,B)}(\mathfrak{a}^{c_\varepsilon}),m\right\}=\lct_{(X,B)}(\mathfrak{a}^{c_\varepsilon}). \end{equation*} Since we have $\lct_{(X,B)}(\mathfrak{a}^{c_\varepsilon})=\frac{c}{c_\varepsilon}\lct_{(X,B)}(\mathfrak{a}^c)$ with $c_\varepsilon/c$ arbitrarily close to $1$, we conclude that the two infima in (\ref{equ:infima}) are indeed equal. \medskip We next show that the left-hand infimum in (\ref{equ:infima}) is strictly positive, in two steps. \smallskip \noindent{\bf Step 1}. We first treat the case where $X$ is smooth and $B=0$. By Skoda's theorem (see for instance~\cite[Proposition 5.10]{JM}), we then have $$ v(D)\le\ord_p(D) A_X(v) $$ for every effective $\mathbb{Q}$-Cartier divisor $D$ on $X$, every divisorial valuation $v$, and every closed point $p$ in the closure of the center of $v$ on $X$. It is thus enough to show that $\ord_p(D)$ is uniformly bounded when $D\sim_\mathbb{Q} L$. Let $\mu\colon X'\to X$ be the blow-up at $p$, with exceptional divisor $E$. Since $L$ is ample, there exists $\varepsilon>0$ independent of $p$ such that $L_\varepsilon:=\mu^*L-\varepsilon E$ is ample, by Seshadri's theorem. Since $D$ is effective, we have $\mu^*D\ge\ord_p(D)E$, and hence $$ (L^n)=\left(\mu^*L\cdot L_\varepsilon^{n-1}\right)\ge\ord_p(D)(E\cdot L_\varepsilon^{n-1})=\varepsilon^{n-1}\ord_p(D), $$ which yields the desired bound on $\ord_p(D)$. \noindent{\bf Step 2}. Suppose now that $(X,B)$ is a subklt pair. Pick a log resolution $\mu\colon X'\to X$, and let $B'$ be the unique $\mathbb{Q}$-divisor such that $\mu^*K_{(X,B)}=K_{(X',B')}$ and $\mu_*B'=B$, so that $$ A_{(X,B)}(v)=A_{(X',B')}(v)=A_{X'}(v)-v(B') $$ for all divisorial valuations $v$. 
Since $(X,B)$ is subklt, $B'$ has coefficients less than $1$, so there exists $0<\varepsilon\ll 1$ such that $B'\le(1-\varepsilon)B'_\red$. Since $B'_\red$ is a reduced snc divisor, the pair $(X',B'_\red)$ is lc, and hence $v(B'_\red)\le A_{X'}(v)$ for all divisorial valuations $v$. It follows that $v(B')\le(1-\varepsilon)A_{X'}(v)$, \ie $$ \varepsilon A_{X'}(v)\le A_{(X,B)}(v) $$ for all $v$. Pick any very ample effective divisor $H$ on $X'$ such that $L':=\mu^*L+H$ is ample. For each effective $\mathbb{Q}$-Cartier divisor $D\sim_\mathbb{Q} L$, $D':=\mu^*D+H$ is an effective $\mathbb{Q}$-Cartier divisor on $X'$ with $D'\sim_\mathbb{Q} L'$. By Step 1, there is a uniform constant $C>0$ such that $$ v(D)\le v(D')\le C A_{X'}(v)\le C\varepsilon^{-1}A_{(X,B)}(v), $$ which completes the proof. \end{proof} \begin{prop}\label{prop:lct} For each polarized subklt pair $((X,B);L)$, we have $$ H^{\mathrm{NA}}\ge\d I^{\mathrm{NA}}\ge\frac{\d}{n}J^{\mathrm{NA}} $$ on $\mathcal{H}^{\mathrm{NA}}$ with $\d:=\lct((X,B);L)>0$. \end{prop} \begin{proof} Pick $\phi\in\mathcal{H}^{\mathrm{NA}}$, and let $(\mathcal{X},\mathcal{L})$ be a normal representative such that $\mathcal{X}$ dominates $X_{\mathbb{A}^1}$ via $\rho\colon\mathcal{X}\to X_{\mathbb{A}^1}$, and write $\mathcal{L}=\rho^*L_{\mathbb{A}^1}+D$. Choose $m\ge 1$ such that $m\mathcal{L}$ is a globally generated line bundle, and let $$ \rho_*\mathcal{O}_\mathcal{X}(mD)=\mathfrak{a}^{(m)}=\sum_{\lambda\in\mathbb{Z}}\mathfrak{a}^{(m)}_\lambda t^{-\lambda} $$ be the corresponding flag ideal. By Proposition~\ref{prop:filtrflag}, $\mathcal{O}_X(mL)\otimes\mathfrak{a}^{(m)}_\lambda$ is globally generated on $X$ for all $\lambda\in\mathbb{Z}$. In particular, $L\otimes(\mathfrak{a}^{(m)}_\lambda)^{1/m}$ is nef, and hence $$ v(\mathfrak{a}^{(m)}_\lambda)\le m\d^{-1}A_{(X,B)}(v) $$ whenever $\mathfrak{a}^{(m)}_\lambda$ is non-zero. Now let $E$ be a non-trivial irreducible component of $\mathcal{X}_0$. 
By Lemma~\ref{lem:div2}, we have $$ \ord_E(\mathfrak{a}^{(m)})=\min_\lambda\left(v_E(\mathfrak{a}^{(m)}_\lambda)-\lambda b_E\right) $$ with $b_E=\ord_E(\mathcal{X}_0)$, and hence $$ \ord_E(\mathfrak{a}^{(m)})\le m \d^{-1} A_{(X,B)}(v_E)-b_E\max\left\{\lambda\in\mathbb{Z}\mid\mathfrak{a}^{(m)}_\lambda\ne 0\right\} $$ By Proposition~\ref{prop:filtrflag}, we have $$ \max\left\{\lambda\in\mathbb{Z}\mid\mathfrak{a}^{(m)}_\lambda\ne 0\right\}=\lambda_{\max}^{(m)}, $$ which is bounded above by $$ m\lambda_{\max}=m(\phi\cdot\phi_\triv^n), $$ by Lemma~\ref{lem:sup}. We have thus proved that \begin{equation}\label{equ:efa} m^{-1}\ord_E(\mathfrak{a}^{(m)})\le \d^{-1} A_{(X,B)}(v_E)-b_E V^{-1}(\phi\cdot\phi_\triv^n). \end{equation} But since $mD$ is $\rho$-globally generated, we have $\mathcal{O}_\mathcal{X}(mD)=\mathcal{O}_{\mathcal{X}}\cdot\mathfrak{a}^{(m)}$, and hence $$ m^{-1}\ord_E(\mathfrak{a}^{(m)})=-\ord_E(D). $$ Using (\ref{equ:efa}) and $\sum_E b_E(E\cdot\mathcal{L}^n)=(\mathcal{X}_0\cdot\mathcal{L}^n)=V$, we infer $$ -V^{-1}((\phi-\phi_\triv)\cdot\phi^n)=-V^{-1}\left(D\cdot\mathcal{L}^n\right)\le \d^{-1} H^{\mathrm{NA}}(\phi)-V^{-1}(\phi\cdot\phi_\triv^n) $$ and the result follows by the definition of $I^{\mathrm{NA}}$ and by Proposition~\ref{prop:J}. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:klt}] The implication (i)$\Longrightarrow$(ii) follows from Proposition~\ref{prop:lct}, and (ii)$\Longrightarrow$(iii) is trivial. Now assume that (iii) holds. If $(X,B)$ is not klt, Proposition~\ref{prop:klt} yields a closed subscheme $Z\subset X$ with $A_{(X,B)}(v)\le 0$ for all Rees valuations $v$ of $Z$. By Corollary~\ref{cor:rees}, we can thus find a normal, ample test configuration $(\mathcal{X},\mathcal{L})$ such that $A_{(X,B)}(v_E)\le 0$ for each non-trivial irreducible component $E$ of $\mathcal{X}_0$. The corresponding non-Archimedean metric $\phi\in\mathcal{H}^{\mathrm{NA}}$ therefore satisfies $H^{\mathrm{NA}}_B(\phi)\le 0$, which contradicts (iii). 
\end{proof} \subsection{The K\"ahler-Einstein case}\label{S205} \begin{proof}[Proof of Corollary~\ref{cor:canpol}] The implication (iii)$\Longrightarrow$(i) follows from Theorem~\ref{thm:lc}, and (ii)$\Longrightarrow$(iii) is trivial. Now assume (i), so that $H_B^{\mathrm{NA}}\ge 0$ on $\mathcal{H}^{\mathrm{NA}}$ by Theorem~\ref{thm:lc}. By Lemma~\ref{lem:MKE}, we have $M_B^{\mathrm{NA}}=H_B^{\mathrm{NA}}+\left(I^{\mathrm{NA}}-J^{\mathrm{NA}}\right)$, while $I^{\mathrm{NA}}-J^{\mathrm{NA}}\ge\frac{1}{n}J^{\mathrm{NA}}$ by Proposition~\ref{prop:J}. We thus get $M_B^{\mathrm{NA}}\ge\frac1n J^{\mathrm{NA}}$, which proves (ii). \end{proof} \begin{proof}[Proof of Corollary~\ref{cor:CY}] If $K_{(X,B)}$ is numerically trivial, then Lemma~\ref{lem:MKE} gives $M_B^{\mathrm{NA}}=H_B^{\mathrm{NA}}$. The result is thus a direct consequence of Theorem~\ref{thm:lc} and Theorem~\ref{thm:klt}. \end{proof} \begin{proof}[Proof of Corollary~\ref{cor:Fano}] Lemma~\ref{lem:MKE} yields $M_B^{\mathrm{NA}}=H_B^{\mathrm{NA}}-(I^{\mathrm{NA}}-J^{\mathrm{NA}})$. The K-semistability of $(X,B)$ thus means that $H_B^{\mathrm{NA}}\ge I^{\mathrm{NA}}-J^{\mathrm{NA}}$, and hence $H_B^{\mathrm{NA}}\ge\frac1nJ^{\mathrm{NA}}$ by Proposition~\ref{prop:J}. By Theorem~\ref{thm:klt}, this implies that $(X,B)$ is klt. \end{proof} The following result gives a slightly more precise version of the computations of~\cite[Theorem 1.4]{OSa} and~\cite[Theorem 3.24]{Der1}. \begin{prop}\label{prop:alpha} Let $B$ be an effective boundary on $X$ such that $(X,B)$ is klt and $L:=-K_{(X,B)}$ is ample. Assume also that $\varepsilon:=\lct((X,B);L)-\frac{n}{n+1}>0$. Then we have \begin{equation*} M_B^{\mathrm{NA}}\ge\varepsilon I^{\mathrm{NA}}\ge\frac{n+1}{n}\varepsilon J^{\mathrm{NA}}. \end{equation*} In particular, the polarized pair $((X,B);L)$ is uniformly K-stable. 
\end{prop} \begin{proof} By Proposition~\ref{prop:lct} we have $H^{\mathrm{NA}}\ge\left(\frac{n}{n+1}+\varepsilon\right)I^{\mathrm{NA}}$, and hence $$ M_B^{\mathrm{NA}}\ge\varepsilon I^{\mathrm{NA}}+\left(J^{\mathrm{NA}}-\frac{1}{n+1}I^{\mathrm{NA}}\right). $$ The result follows since we have $$ \frac{1}{n+1} I^{\mathrm{NA}}\le J^{\mathrm{NA}}\le\frac{n}{n+1}I^{\mathrm{NA}} $$ by Proposition~\ref{prop:J}. \end{proof}
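As a sanity check for Proposition~\ref{prop:alpha} (a standard computation, not carried out in the text above): for $X=\mathbb{P}^n$, $B=0$ and $L:=-K_{\mathbb{P}^n}=\mathcal{O}_{\mathbb{P}^n}(n+1)$, the divisor $D=(n+1)H$, with $H$ a hyperplane, is $\mathbb{Q}$-linearly equivalent to $L$ and gives

```latex
\[
\lct_{(\mathbb{P}^n,0)}\big((n+1)H\big)
=\frac{A_{\mathbb{P}^n}(\ord_H)}{(n+1)\,\ord_H(H)}
=\frac{1}{n+1},
\]
```

so $\lct((\mathbb{P}^n,0);L)\le\frac{1}{n+1}\le\frac{n}{n+1}$, and the hypothesis $\varepsilon>0$ fails. This is consistent with the fact that $\mathbb{P}^n$ carries continuous automorphisms and is therefore not uniformly K-stable.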
\section{Introduction} Distributed algorithms are a classic discipline of computer science and continue to be an active field of research \cite{Lynch:1996,Fokkink2013}. A distributed algorithm employs several processes, which perform one and the same program to achieve a common goal. It is required to be correct independently of the number of processes. Prominent examples are leader-election algorithms, whose task is to determine a unique leader process and to announce it to all other processes. Those algorithms are often studied for ring architectures. One practical motivation comes from local-area networks that are based on a token-ring protocol. Moreover, rings generally allow one to nicely illustrate the main conceptual ideas of an algorithm. However, it is well-known that there is no (deterministic) distributed algorithm over rings that elects a leader under the assumption of anonymous processes. Therefore, classical algorithms, such as Franklin's algorithm \cite{Franklin:1982} or the Dolev-Klawe-Rodeh algorithm \cite{DolevKR82}, assume that every process is equipped with a unique process identifier (pid) from an infinite, totally ordered domain. In this paper, we consider such distributed algorithms, which work on ring architectures and can access unique pids as well as the associated total order. Distributed algorithms are intrinsically hard to analyze. Correctness proofs are often intricate and use subtle inductive arguments. Therefore, it is worthwhile to consider automatic verification methods such as model checking \cite{ClarkeGP2001}. Besides a formal model of an algorithm, this requires a generic specification language that is feasible from an algorithmic point of view but expressive enough to formulate correctness properties. In this paper, we propose a language that can reason about processes, states, and pids. 
In particular, it will allow us to formalize when a leader-election algorithm is correct: \emph{At the end of an execution, every process stores, in register $r$, the maximum pid among all processes}. Our language is inspired by Data-XPath, which can reason about trees over infinite alphabets \cite{BenediktFG08,BojanczykMSS09,FS11}. However, formal verification of distributed algorithms cumulates various difficulties that already arise, separately, in more standard verification: First, the number of processes is unknown, which amounts to parameterized verification \cite{Esparza14}; second, processes manipulate data from an infinite domain \cite{BojanczykMSS09,FS11}. In each case, even simple verification questions are undecidable, and so is the combination of both. In various other contexts, a successful approach to retrieving decidability has been a form of \emph{bounded model checking}. The idea is to consider correctness up to some parameter, which restricts the set of runs of the algorithm in a non-trivial way. In multi-threaded recursive programs, for example, one may restrict the number of control switches between different threads \cite{Qadeer:TACAS05}. Actually, this idea seems even more natural in the context of distributed algorithms, which usually proceed in \emph{rounds}. In each round, a process may emit some messages (here: pids) to its neighbors, and then receive messages from its neighbors. Pids can be stored in registers, and a process can check the relation between stored pids before it moves to a new state and is ready for a new round. It turns out that the number of rounds is often exponentially smaller than the number of processes (cf.\ the above-mentioned leader-election algorithms). Thus, roughly speaking, a small number of rounds allows us to verify correctness of an algorithm for a large number of processes. 
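The round/process trade-off can be made concrete with a small simulation (a plain Python sketch of ours, not the formal model introduced later; it hard-codes the Franklin-style rule that only processes whose pid is a local maximum among the currently active processes stay in the race):

```python
def franklin_rounds(pids):
    """Rounds needed by Franklin-style elimination on a ring with these pids.

    Illustration only: in each round, every active process compares its pid
    with the pids of its nearest active neighbors; only local maxima survive.
    """
    assert len(set(pids)) == len(pids), "pids must be unique"
    active = list(range(len(pids)))
    rounds = 0
    while len(active) > 1:
        rounds += 1
        m = len(active)
        # A process survives iff its pid exceeds both active neighbors' pids.
        active = [active[k] for k in range(m)
                  if pids[active[k]] > pids[active[(k - 1) % m]]
                  and pids[active[k]] > pids[active[(k + 1) % m]]]
    return rounds
```

For instance, the ring $(4:4,1,5,2)$ is settled in $2$ rounds, within the logarithmic bound: at least half of the active processes drop out per round.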
The key idea of our method is to interpret a (round-bounded) execution of a distributed algorithm symbolically as a word-like structure over a finite alphabet. The finite alphabet is constituted by the transitions that occur in the algorithm and possibly contain tests of pids wrt.\ equality or the associated total order. To determine feasibility of a symbolic execution (i.e., \emph{is there a ring that satisfies all the guards employed?}), we use propositional dynamic logic with loop and converse (LCPDL) over words \cite{Goeller2009}. Basically, we translate a given distributed algorithm into a formula that detects cyclic (i.e., contradictory) smaller-than tests. Its models are precisely the feasible symbolic executions. A specification is translated into LCPDL as well so that verification amounts to checking satisfiability of a single formula. The latter can be reduced to an emptiness problem for alternating two-way automata over words so that we obtain a PSPACE procedure for round-bounded model checking. \paragraph{Related Work.} Considerable effort has been devoted to the verification of fault-tolerant algorithms, which have to cope with faults such as lost or corrupted messages (e.g., \cite{Merz:2009,KonnovVW14}). After all, there have been only very few generic approaches to model checking distributed algorithms. In \cite{KVW12}, several possible reasons for this are identified, among them the presence of unbounded data types and an unbounded number of processes, which we have to treat simultaneously in our framework. Parameterized model checking of ring-based systems where communication is subject to a token policy and the message alphabet is finite has been studied in \cite{EmersonN03,AminofJKR14}. 
The theory of words and trees over infinite alphabets (aka data words/trees) provides an elegant formal framework for database-related notions such as XML documents \cite{BojanczykMSS09}, or for the analysis of programs with data structures such as lists and arrays \cite{Alur:2011,Alur:2012}. Notably, streaming transducers \cite{Alur:2011} also work over an infinite, totally ordered domain. The difference to our work is that we model distributed algorithms and provide a logical specification language. Recall that the latter borrows concepts from \cite{BenediktFG08,BojanczykMSS09,FS11}, whose logic is designed to reason about XML documents. A fragment of MSO logic over \emph{ordered} data trees was studied in \cite{Tan14}. The paper \cite{BCGK-fossacs12} pursued a symbolic model-checking approach to systems involving data. But the model was purely sequential and pids could only be compared for equality. The ordering on the data domain actually has a subtle impact on the choice of the specification language. \paragraph{Outline.} In Section~\ref{sec:algorithms}, we present our model of a distributed algorithm. Section~\ref{sec:spec} introduces the specification language to express correctness criteria. In Section~\ref{sec:verification}, we show how to solve the round-bounded model-checking problem in polynomial space. We conclude in Section~\ref{sec:conclusion}. Some proof details are omitted but can be found in the appendix. \section{Distributed Algorithms}\label{sec:algorithms} \newcommand{\mathbf{left}}{\mathbf{left}} \newcommand{\mathbf{right}}{\mathbf{right}} \newcommand{\mathit{snd}}{\mathit{snd}} \newcommand{\mathit{rec}}{\mathit{rec}} \newcommand{\mathit{upd}}{\mathit{upd}} \newcommand{\mathit{grd}}{\mathit{grd}} \newcommand{\mathcal{R}}{\mathcal{R}} \newcommand{\Pos}[2]{\mathit{Pos}(#1,#2)} \newcommand{\Coord}[1]{\mathit{Pos}(#1)} By $\mathbb{N} = \{0,1,2,\ldots\}$, we denote the set of natural numbers. 
For $n \in \mathbb{N}$, we set $\set{n} = \{1,\ldots,n\}$ and $\setz{n} = \{0,1,\ldots,n\}$. The set of finite words over an alphabet $A$ is denoted by $A^\ast$, and the set of nonempty finite words by $A^+$. \paragraph{Syntax of Distributed Algorithms.} We consider distributed algorithms that run on arbitrary ring architectures. A ring consists of a finite number of processes, each having a unique process identifier (pid). Every process has a unique left neighbor (referred to by $\mathbf{left}$) and a unique right neighbor (referred to by $\mathbf{right}$). Formally, a \emph{ring} is a tuple $\mathcal{R}=(n:p_1,\ldots,p_n)$, given by its size $n \ge 1$ and the pids $p_i \in \mathbb{N}$ assigned to process $i \in \set{n}$. We require that pids are unique, i.e., $p_i \neq p_j$ whenever $i \neq j$. For a process $i < n$, process $i+1$ is the right neighbor of $i$. Moreover, $1$ is the right neighbor of $n$. Analogously, if $i > 1$, then $i-1$ is the left neighbor of $i$. Moreover, $n$ is the left neighbor of $1$. Thus, processes $1$ and $n$ must not be considered as the ``first'' or ``last'' process. Actually, a distributed algorithm will not be able to distinguish between, for example, $(4:4,1,5,2)$ and $(4:5,2,4,1)$. One given distributed algorithm can be run on \emph{any} ring. It is given by a single program $\mathcal{D}$, and each process runs a copy of $\mathcal{D}$. It is convenient to think of $\mathcal{D}$ as a (finite) automaton. Processes proceed in synchronous rounds. In one round, every process executes one transition of its program. In addition to the change of state, a process may optionally perform the following phases within a transition: (i) send some pids to its neighbors, (ii) receive pids from its neighbors and store them in registers, (iii) compare register contents with one another, (iv) update its registers. For example, consider the transition $t=\datrans{s}{\sendleft{r} \,\text{;}\, \sendright{r'}}{\recright{r'}}{r<r'}{\updcmd{r}{r'}}{s'}$. 
A process can execute $t$ if it is in state $s$. It then sends the contents of register $r$ to its left neighbor and the contents of $r'$ to its right neighbor. If, afterwards, it receives a pid $p$ from its right neighbor, it stores $p$ in $r'$. If $p$ is greater than what has been stored in $r$, it sets $r$ to $p$ and goes to state $s'$. Otherwise, the transition is not applicable. The first phase can, alternatively, be filled with a special command $\mathbf{fwd}$. Then, a process will just forward any pid it receives. Note that a message can be forwarded, in one and the same round, across several processes executing $\mathbf{fwd}$. \begin{definition}\label{def:da} A \emph{distributed algorithm} $\mathcal{D}=({S},s_0,\mathit{Reg},\Delta)$ consists of a nonempty finite set ${S}$ of \emph{(local) states}, an \emph{initial state} $s_0 \in {S}$, a nonempty finite set $\mathit{Reg}$ of \emph{registers}, and a nonempty finite set $\Delta$ of \emph{transitions}. A transition is of the form $\datrans{s}{\mathit{send}}{\mathit{rec}}{\mathit{guard}}{\mathit{update}}{s'}$ where $s,s' \in {S}$ and the components $\mathit{send}$, $\mathit{rec}$, $\mathit{guard}$, and $\mathit{update}$ are built as follows: \begin{itemize}\itemsep=0.4ex \item[] $\mathit{send} ~::=~ \mathbf{skip} ~\mid~ \mathbf{fwd} ~\mid~ \sendleft{r} ~\mid~ \sendright{r} ~\mid~ \sendleft{r}\,\text{;}\, \sendright{r'}$ \item[] $\mathit{rec} ~::=~ \mathbf{skip} ~\mid~ \recleft{r} ~\mid~ \recright{r} ~\mid~ \recleft{r}\,\text{;}\, \recright{r'}$ \item[] $\mathit{guard} ~::=~ \mathbf{skip} ~\mid~ r < r' ~\mid~ r = r' ~\mid~ \mathit{guard} \,\text{;}\, \mathit{guard}$ \item[] $\mathit{update} ~::=~ \mathbf{skip} ~\mid~ \updcmd{r}{r'} ~\mid~ \mathit{update}\,\text{;}\,\mathit{update}$ \end{itemize} with $r$ and $r'$ ranging over $\mathit{Reg}$. 
We require that \begin{itemize} \item[(1)] in a $\mathit{rec}$ statement of the form $\recleft{r}\,\text{;}\, \recright{r'}$, we have $r \neq r'$ (actually, the order of the two receive actions does not matter), and \item[(2)] in an $\mathit{update}$ statement, every register occurs at most once as a left-hand side. \end{itemize} In the following, occurrences of ``$\mathbf{skip}\,\text{;}$'' are omitted; this does not affect the semantics. \hfill\ensuremath{\lhd} \end{definition} Note that a guard $r \le r'$ can be simulated in terms of guards $r < r'$ and $r = r'$, using several transitions. We separate $<$ and $=$ for convenience. They are actually quite different in nature, as we will see later in the proof of our main result. At the beginning of an execution of an algorithm, every register contains the pid of the respective process. We also assume, wlog., that there is a special register $\mathit{id} \in \mathit{Reg}$ that is never updated, i.e., no transition contains a command of the form $\recleft{\mathit{id}}$, $\recright{\mathit{id}}$, or $\updcmd{\mathit{id}}{r}$. A process can thus, at any time, access its own pid in terms of $\mathit{id}$. In the semantics, we will suppose that all updates of a transition happen simultaneously, i.e., after executing $\updcmd{r}{r'} \,\text{;}\, \updcmd{r'}{r}$, the values previously stored in $r$ and $r'$ will be swapped (and do not necessarily coincide). As, moreover, the order of two sends and the order of two receives within a transition do not matter, this will allow us to identify a transition with the set of states, commands (apart from $\mathbf{skip}$), and guards that it contains. For example, $t=\datrans{s}{\sendleft{r} \,\text{;}\, \sendright{r'}}{\recright{r'}}{r<r'}{\updcmd{r}{r'}}{s'}$ is considered as the set $t=\{s\,,\,\sendleft{r}\,,\,\sendright{r'}\,,\,\recright{r'}\,,\,r<r'\,,\,\updcmd{r}{r'}\,,\,\gotocmd{s'}\}$. 
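The simultaneous-update convention can be sketched in a few lines of Python (an illustration of ours, not part of the formal model; the function name, register names, and the pair encoding of updates are arbitrary):

```python
def apply_updates(regs, updates):
    """Apply the update phase of one transition to a register valuation.

    `regs` maps register names to pids; `updates` is a list of pairs
    (lhs, rhs), each meaning `lhs := rhs`, with every register occurring
    at most once as a left-hand side.  All right-hand sides are read
    from a snapshot first, so the assignments happen simultaneously.
    """
    snapshot = dict(regs)          # read all right-hand sides first
    for lhs, rhs in updates:
        regs[lhs] = snapshot[rhs]  # then write, without ordering effects
    return regs

# r := r' ; r' := r swaps the stored pids instead of duplicating one:
print(apply_updates({"r": 3, "r2": 7}, [("r", "r2"), ("r2", "r")]))
# → {'r': 7, 'r2': 3}
```

A sequential reading of the same two commands would instead leave both registers equal to the initial value of one of them, which is exactly what the snapshot avoids.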
\begin{figure}[t] \centering \parbox{\textwidth}{ \scalebox{0.85}{ $\begin{array}{lcl} \textbf{states: } \mathit{active},\mathit{passive} & & t_1 = \langle \mathit{active}\textup{:}~\sendleft{\mathit{id}} \,\text{;}\, \sendright{\mathit{id}} \,\text{;}\, \recleft{r_1} \,\text{;}\, \recright{r_2} \,\text{;}\, r_1 < \mathit{id} \,\text{;}\, r_2 < \mathit{id} \,\text{;}\, \gotocmd{\mathit{active}}\rangle\\[0.5ex] \phantom{\textbf{states: }} {\mathit{found}} & & t_2 = \langle \mathit{active}\textup{:}~\rule{\widthof{$\sendleft{\mathit{id}} \,\text{;}\, \sendright{\mathit{id}} \,\text{;}\, \recleft{r_1} \,\text{;}\, \recright{r_2}$}}{0.4pt} \,\text{;}\, \mathit{id} < r_1 \,\text{;}\, \gotocmd{\mathit{passive}}\rangle\\[0.5ex] \textbf{initial state: } \mathit{active} & & t_3 = \langle \mathit{active}\textup{:}~\rule{\widthof{$\sendleft{\mathit{id}} \,\text{;}\, \sendright{\mathit{id}} \,\text{;}\, \recleft{r_1} \,\text{;}\, \recright{r_2}$}}{0.4pt} \,\text{;}\, \mathit{id} < r_2 \,\text{;}\, \gotocmd{\mathit{passive}}\rangle\\[0.5ex] \textbf{registers: } \mathit{id},r,r_1,r_2 & & t_4 = \langle \mathit{active}\textup{:}~\rule{\widthof{$\sendleft{\mathit{id}} \,\text{;}\, \sendright{\mathit{id}} \,\text{;}\, \recleft{r_1} \,\text{;}\, \recright{r_2}$}}{0.4pt} \,\text{;}\, \mathit{id} = r_1 \,\text{;}\, \updcmd{r}{\mathit{id}} \,\text{;}\, \gotocmd{\mathit{found}}\rangle\\[0.5ex] & & t_5 = \langle \mathit{passive}\textup{:}~\mathbf{fwd} \,\text{;}\, \recleft{r}\,\text{;}\, \gotocmd{\mathit{passive}}\rangle \end{array}$ }} \caption{Franklin's leader-election algorithm $\mathcal{D}_\mathsf{Franklin}$\label{fig:franklin}} \hspace{3ex} \centering \parbox{\textwidth}{ \scalebox{0.85}{ $\begin{array}{lcl} \textbf{states: } \mathit{active}_0,\mathit{active}_1 & & t_1 = \langle \mathit{active}_0\textup{:}~\sendright{r} \,\text{;}\, \recleft{r'} \,\text{;}\, \gotocmd{\mathit{active}_1}\rangle\\[0.5ex] \phantom{\textbf{states: }} {\mathit{passive},\mathit{found}} & & t_2 = \langle 
\mathit{active}_1\textup{:}~\sendright{r'} \,\text{;}\, \recleft{r''} \,\text{;}\, r'' < r' \,\text{;}\, r < r' \,\text{;}\, \updcmd{r}{r'} \,\text{;}\, \gotocmd{\mathit{active}_0}\rangle\\[0.5ex] \textbf{initial state: } \mathit{active}_0 & & t_3 = \langle \mathit{active}_1\textup{:}~\rule{\widthof{$\sendright{r'} \,\text{;}\, \recleft{r''}$}}{0.4pt} \,\text{;}\, r' < r \,\text{;}\, \gotocmd{\mathit{passive}}\rangle\\[0.5ex] \textbf{registers: } \mathit{id},r,r',r'' & & t_4 = \langle \mathit{active}_1\textup{:}~\rule{\widthof{$\sendright{r'} \,\text{;}\, \recleft{r''}$}}{0.4pt} \,\text{;}\, r' < r'' \,\text{;}\, \gotocmd{\mathit{passive}}\rangle\\[0.5ex] & & t_5 = \langle \mathit{active}_1\textup{:}~\rule{\widthof{$\sendright{r'} \,\text{;}\, \recleft{r''}$}}{0.4pt} \,\text{;}\, r = r' \,\text{;}\, \gotocmd{\mathit{found}}\rangle\\[0.5ex] \phantom{\textbf{states: } \mathit{active},\mathit{passive},\mathit{found}} & & t_6 = \langle \mathit{passive}\textup{:}~\mathbf{fwd} \,\text{;}\, \recleft{r}\,\text{;}\, \gotocmd{\mathit{passive}}\rangle \end{array}$ }} \caption{Dolev-Klawe-Rodeh leader-election algorithm $\mathcal{D}_\mathsf{DKR}$\label{fig:dkr}} \end{figure} Before defining the semantics of a distributed algorithm, we will look at two examples. \begin{example}[Franklin's Leader-Election Algorithm]\label{ex:franklin} Consider Franklin's algorithm $\mathcal{D}_\mathsf{Franklin}$ to determine a leader in a ring \cite{Franklin:1982}. It is given in Figure~\ref{fig:franklin}. The goal is to assign leadership to the process with the highest pid. To do so, every process sends its own pid to both neighbors, receives the pids of its left and right neighbor, and stores them in registers $r_1$ and $r_2$, respectively (transitions $t_1,\ldots,t_4$). If a process is a local maximum, i.e., $r_1 < \mathit{id}$ and $r_2 < \mathit{id}$ hold, it is still in the race for leadership and stays in state $\mathit{active}$. 
Otherwise, it has to take $t_2$ or $t_3$ and goes into state $\mathit{passive}$. In $\mathit{passive}$, a process will just forward any pid it receives and store the message coming from the left in $r$ (transition $t_5$). When an active process receives its own pid (transition $t_4$), it knows it is the only remaining active process. It copies its own pid into $r$, which henceforth refers to the leader. We may say that a run is accepting (or terminating) when all processes terminate in $\mathit{passive}$ or $\mathit{found}$. Then, at the end of any accepting run, (i) there is exactly one process $i_0$ that terminates in $\mathit{found}$, (ii) all processes store the pid of $i_0$ in register $r$, and (iii) the pid of $i_0$ is the maximum of all pids in the ring. Since, in every round, at least half of the active processes become passive, the algorithm terminates after at most $\lfloor \log_2 n\rfloor +1$ rounds where $n$ is the number of processes. \hfill\ensuremath{\lhd} \end{example} \begin{example}[Dolev-Klawe-Rodeh Leader-Election Algorithm]\label{ex:dolev} The Dolev-Klawe-Rodeh leader-election algorithm \cite{DolevKR82} is an adaptation of Franklin's algorithm to cope with unidirectional rings, where a process can only, say, send to the right and receive from the left. The algorithm, denoted $\mathcal{D}_\mathsf{DKR}$, is given in Figure~\ref{fig:dkr}. The idea is that the local maximum among the processes $i-2,i-1,i$ is determined by $i$ (rather than $i-1$). Therefore, each process $i$ will execute two transitions, namely $t_1$ and $t_2$, and store the pids sent by $i-2$ and $i-1$ in $r''$ and $r'$, respectively. After two rounds, since $r$ still contains the pid of $i$ itself, $i$ can test if $i-1$ is a local maximum among $i-2,i-1,i$ using the guards in transition $t_2$. If both guards are satisfied, $i$ stores the pid sent by $i-1$ in $r$. It henceforth ``represents'' process $i-1$, which is still in the race, and goes to state $\mathit{active}_0$. 
Otherwise, it enters $\mathit{passive}$, which has the same task as in Franklin's algorithm. The algorithm is correct in the following sense: At the end of an accepting run (each process ends in $\mathit{passive}$ or $\mathit{found}$), (i) there is exactly one process that terminates in $\mathit{found}$ (but not necessarily the one with the highest pid), and (ii) all processes store the maximal pid in register $r$. The algorithm terminates after at most $2\lfloor \log_2 n\rfloor+2$ rounds. Note that the correctness of $\mathcal{D}_\mathsf{DKR}$ is less clear than that of $\mathcal{D}_\mathsf{Franklin}$. \hfill\ensuremath{\lhd} \end{example}
\makeatletter
\def\hlinewd#1{%
  \noalign{\ifnum0=`}\fi\hrule \@height #1 \futurelet
  \reserved@a\@xhline}
\makeatother
\newcommand{\confrel}[1]{\stackrel{#1}{\rightsquigarrow}}
\paragraph{Semantics of Distributed Algorithms.} Now, we give the formal semantics of a distributed algorithm $\mathcal{D}=({S},s_0,\mathit{Reg},\Delta)$. Recall that $\mathcal{D}$ can be run on any ring $\mathcal{R}=(n:p_1,\ldots,p_n)$. An ($\mathcal{R}$-)configuration of $\mathcal{D}$ is a tuple $(s_1,\ldots,s_n,\mathit{pid}_1,\ldots,\mathit{pid}_n)$ where $s_i$ is the current state of process $i$ and $\mathit{pid}_i: \mathit{Reg} \to \{p_1,\ldots,p_n\}$ maps each register to a pid. The configuration is called \emph{initial} if, for all processes $i \in \set{n}$, we have $s_i = s_0$ and $\mathit{pid}_i(r) = p_i$ for all $r \in \mathit{Reg}$. Note that there is a unique initial $\mathcal{R}$-configuration. In one round, the algorithm moves from one configuration to another.
This is described by a relation $C \confrel{t} C'$ where $C=(s_1,\ldots,s_n,\mathit{pid}_1,\ldots,\mathit{pid}_n)$ and $C'=(s_1',\ldots,s_n',\mathit{pid}_1',\ldots,\mathit{pid}_n')$ are $\mathcal{R}$-configurations and $t = (t_1,\ldots,t_n) \in \Delta^n$ is a tuple of transitions where $t_i$ is executed by process $i$. To determine when $C \confrel{t} C'$ holds, we first define two auxiliary relations. For registers $r,r' \in \mathit{Reg}$ and processes $i,j \in \set{n}$, we write $\auxright{r}{i}{r'}{j}$ if the content of $r$ is sent to the right from $i$ to $j$, where it is stored in $r'$. Thus, we require that \begin{center} $\sendright{r} \in t_i ~\wedge~ \recleft{r'} \in t_j ~\wedge~ \mathbf{fwd} \in t_k$ for all $k \in \mathit{Between}(i,j)$ \end{center} where $\mathit{Between}(i,j)$ means $\{i+1,\ldots,j-1\}$ if $i<j$ or $\{1,\ldots,j-1,i+1,\ldots,n\}$ if $j\leq i$. Note that, due to the $\mathbf{fwd}$ command, $\auxright{r}{i}{r'}{j}$ may hold for several $r'$ and $j$. The meaning of $\auxleft{r}{i}{r'}{j}$ is analogous; we just replace ``right direction'' by ``left direction'': \begin{center} $\sendleft{r} \in t_i ~\wedge~ \recright{r'} \in t_j ~\wedge~ \mathbf{fwd} \in t_k$ for all $k \in \mathit{Between}(j,i)$.
\end{center} \begin{figure}[t] \centering \scalebox{0.85}{ \gusepicture{symbrun} } \caption{Run of Dolev-Klawe-Rodeh algorithm and runs of path automata\label{fig:symbrun}} \end{figure} The guards in the transitions $t_1,\ldots,t_n$ are checked against ``intermediate'' register assignments $\hat{\mathit{pid}}_1,\ldots,\hat{\mathit{pid}}_n: \mathit{Reg} \to \{p_1,\ldots,p_n\}$, which are defined as follows: $$\hat{\mathit{pid}}_j(r') = \begin{cases} \mathit{pid}_i(r) & \text{ if } \auxright{r}{i}{r'}{j} \text{ or } \auxleft{r}{i}{r'}{j} \\ \mathit{pid}_j(r') & \text{ if, for all } r,i\text{, neither } \auxright{r}{i}{r'}{j} \text{ nor } \auxleft{r}{i}{r'}{j} \end{cases}$$ Note that this is well-defined, due to condition (1) in Definition~\ref{def:da}. Now, we write $C \confrel{t} C'$ if, for all $j \in \set{n}$ and $r,r' \in \mathit{Reg}$, the following hold: \begin{enumerate}\itemsep=1ex \item $s_j \in t_j$ and $(\gotocmd{s_j'}) \in t_j$, \item $\hat{\mathit{pid}}_j(r) < \hat{\mathit{pid}}_j(r')$ ~~if $(r<r') \in {t_j}$, \item $\hat{\mathit{pid}}_j(r) = \hat{\mathit{pid}}_j(r')$ ~~if $(r=r') \in {t_j}$, \item $\mathit{pid}_j'(r) = \begin{cases} \hat{\mathit{pid}}_j(r') & \text{ if } (\updcmd{r}{r'}) \in t_j \\ \hat{\mathit{pid}}_j(r) & \text{ if } t_j \text{ does not contain an update of the form } \updcmd{r}{r''} \\ \end{cases}$ \end{enumerate} Again, 4.\ is well-defined thanks to condition (2) in Definition~\ref{def:da}. An ($\mathcal{R}$-)\emph{run} of $\mathcal{D}$ is a sequence $\chi = {C_0 \confrel{t^1} C_1 \confrel{t^2} \ldots \confrel{t^k} C_k}$ where $k \ge 1$, $C_0$ is the initial $\mathcal{R}$-configuration, and $t^j = (t_{1}^j,\ldots,t_{n}^j) \in \Delta^n$ for all $j \in \set{k}$. We call $k$ the \emph{length} of $\chi$. Note that $\chi$ uniquely determines the underlying ring $\mathcal{R}$. \begin{myremark} A receive command is always non-blocking even if there is no corresponding send. 
As an alternative semantics, one could require that it can only be executed if there has been a matching send, or vice versa. One could even include tags from a finite alphabet that can be sent along with pids. All this will not change any of the forthcoming results. \hfill\ensuremath{\lhd} \end{myremark} \begin{example} A run of $\mathcal{D}_{\mathsf{DKR}}$ from Example~\ref{ex:dolev} on the ring $\mathcal{R}=(7:4,8,3,1,6,5,7)$ is depicted in Figure~\ref{fig:symbrun} (for the moment, we may ignore the blue and violet lines). A colored row forms a configuration. The three pids in a cell refer to registers $r,r',r''$, respectively (we ignore $\mathit{id}$). Moreover, a non-colored row forms, together with the states above and below, a transition tuple. When looking at the step from $C_3$ to $C_4$, we have, for example, $\auxright{r'}{3}{r}{4}$ and $\auxright{r'}{3}{r''}{6}$. Moreover, $\auxright{r'}{6}{r}{7}$ and $\auxright{r'}{6}{r''}{1}$ (recall that we are in a ring). Note that the run conforms to the correctness property formulated in Example~\ref{ex:dolev}. In particular, in the final configuration, all processes store the maximum pid in register $r$. 
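This run can be reproduced by a small Python sketch (our own abstraction, not the formal semantics): passive forwarding is left implicit, and we only track the values represented by the still-active processes; each loop iteration corresponds to one execution of $t_1$ followed by $t_2$.

```python
def dkr_leader(pids):
    """Abstract simulation of the Dolev-Klawe-Rodeh algorithm (sketch).

    pids: the distinct pids in ring order.  Process j survives the two
    rounds t_1, t_2 (henceforth representing its left active neighbour)
    iff the value of j-1 is a local maximum among j-2, j-1, j.
    Returns the elected pid and the number of communication rounds.
    """
    vals = list(pids)
    rounds = 0
    while len(vals) > 1:
        n = len(vals)
        vals = [vals[j - 1] for j in range(n)
                if vals[j - 1] > vals[j] and vals[j - 1] > vals[j - 2]]
        rounds += 2   # each iteration spends two message rounds
    return vals[0], rounds
```

On the ring $(7:4,8,3,1,6,5,7)$, the sketch elects $8$ after four rounds, in line with the bound $2\lfloor \log_2 n\rfloor+2$.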
\hfill\ensuremath{\lhd} \end{example} \section{The Specification Language}\label{sec:spec} \newcommand{\existsp}[5]{\langle #3 \rangle#1 #5 \langle #4 \rangle#2} \newcommand{\forallp}[5]{[#3]#1 #5 [#4]#2} \newcommand{\existseq}[4]{\existsp{#1}{#2}{#3}{#4}{=}} \newcommand{\existsleq}[4]{\existsp{#1}{#2}{#3}{#4}{\le}} \newcommand{\existsless}[4]{\existsp{#1}{#2}{#3}{#4}{<}} \newcommand{\foralleq}[4]{[#1@#3 = #2@#4]} \newcommand{\forallleq}[4]{[#1@#3 \le #2@#4]} \newcommand{\forallless}[4]{[#1@#3 < #2@#4]} \newcommand{\Allrings}[1]{\forall_{\!\mathit{rings}}\forall_{\!\mathit{runs}}\forall_{\mathsf{m}}#1} \newcommand{\Allexistsrings}[1]{\forall_{\!\mathit{rings}}\exists_{\mathit{run}}\forall_{\mathsf{m}}#1} In Examples~\ref{ex:franklin} and \ref{ex:dolev}, we informally stated the correctness criterion for the presented algorithms (e.g., ``at the end, all processes store the maximal pid in register $r$''). Now, we introduce a \emph{formal} language to specify correctness properties. It is defined wrt.\ a given distributed algorithm $\mathcal{D}=({S},s_0,\mathit{Reg},\Delta)$, which we fix for the rest of this section. Typically, one requires that a distributed algorithm is correct no matter what the underlying ring is. Since we will bound the number of rounds, we moreover study a form of partial correctness. Accordingly, a property is of the form $\Allrings{\phi}$, which has to be read as ``for all rings, all runs, and all processes $\mathsf{m}$, we have $\phi$''. The marking $\mathsf{m}$ is used to avoid ``getting lost'' in a ring when writing the property $\phi$. This is like placing a pebble in the ring that can be retrieved at any time.
Actually, $\phi$ allows us to ``navigate'' back and forth ($\mathord{\uparrow}$ and $\mathord{\downarrow}$) in a run, i.e., from one configuration to the previous or next one (similar to a temporal logic with past operators). By means of $\mathord{\leftarrow}$ and $\mathord{\rightarrow}$, we may also navigate horizontally within a configuration, i.e., from one process to a neighboring one. Essentially, a sequence of configurations is interpreted as a cylinder (cf.\ Figure~\ref{fig:symbrun}) that can be explored using regular expressions $\pi$ over $\{\epsilon,\mathord{\leftarrow},\mathord{\rightarrow},\mathord{\uparrow},\mathord{\downarrow}\}$ (where $\epsilon$ means ``stay''). At a given position/coordinate of the cylinder, we can check \emph{local (or positional)} properties like the state taken by a process, or whether we are on the marked process $\mathsf{m}$. Such a property can be combined with a regular expression $\pi$: The formula $\forallpath{\pi}\phi$ says that $\phi$ holds at every position that is reachable through a $\pi$-path (a path matching $\pi$). Dually, $\existspath{\pi}\phi$ holds if there is a $\pi$-path to some position where $\phi$ is satisfied. The most interesting construct in our logic is $\existsp{r}{r'}{\pi}{\pi'}{\bowtie}$, where ${\bowtie} \in \{\mathord{=},\mathord{\neq},\mathord{<},\mathord{\le}\}$, which has been used for reasoning about XML documents \cite{BenediktFG08,BojanczykMSS09,FS11}. It says that, from the current position, there are a $\pi$-path and a $\pi'$-path that lead to positions $y$ and $y'$, respectively, such that the pid stored in register $r$ at $y$ and the pid stored in $r'$ at $y'$ satisfy the relation $\bowtie$. We will now introduce our logic in full generality. Later, we will restrict the use of $<$- and $\le$-guards to obtain positive results. 
\begin{definition}\label{def:datapdl} The logic $\ensuremath{\textup{DataPDL}}\xspace(\mathcal{D})$ is given by the following grammar: \begin{align*} \Phi &::= \Allrings{\phi}\\ \phi,\phi' &::= \mathsf{m} \,\mid\, s \,\mid\, \neg\phi \,\mid\, \phi \wedge \phi' \,\mid\, \phi \Rightarrow \phi' \,\mid\, \forallpath{\pi}\phi \,\mid\, \existsp{r}{r'}{\pi}{\pi'}{\bowtie}\\ \pi,\pi' &::= \test{\phi} \,\mid\, d \,\mid\, \pi + \pi' \,\mid\, \pi \cdot \pi' \,\mid\, \pi^{\ast} \end{align*} where $s \in {S}$, $r,r' \in \mathit{Reg}$, ${\bowtie} \in \{\mathord{=},\mathord{\neq},\mathord{<},\mathord{\le}\}$, and $d \in \{\epsilon,\mathord{\leftarrow},\mathord{\rightarrow},\mathord{\uparrow},\mathord{\downarrow}\}$. \hfill\ensuremath{\lhd} \end{definition} We call $\phi$ a \emph{local formula}, and $\pi$ a \emph{path formula}. We use common abbreviations such as $\mathit{false} = \mathsf{m} \wedge \neg\mathsf{m}$, $\existspath{\pi}\phi = \neg\forallpath{\pi}\neg\phi$, and $\phi \vee \phi' = \neg(\neg\phi \wedge \neg\phi')$, and we may write $\pi\pi'$ instead of $\pi \cdot \pi'$. Implication $\Rightarrow$ is included explicitly in view of the restriction defined below. Next, we define the semantics. Consider a run $\chi = {C_0 \confrel{t^1} C_1 \confrel{t^2} \ldots \confrel{t^k} C_k}$ of $\mathcal{D}$ where $C_j = (s_1^j,\ldots,s_n^j,\mathit{pid}_1^j,\ldots,\mathit{pid}_n^j)$, i.e., $n$ is the number of processes in the underlying ring. A local formula $\phi$ is interpreted over $\chi$ wrt.\ a marked process $m \in \set{n}$ and a position $(i,j) \in \Coord{\chi}$ where $\Coord{\chi} = \set{n} \times \setz{k}$. Let us define when $\chi,m,(i,j) \models \phi$ holds. The operators $\neg$, $\wedge$, and $\Rightarrow$ are as usual. Moreover, $\chi,m,(i,j) \models \mathsf{m}$ if $i = m$, and $\chi,m,(i,j) \models s$ if $s_i^j = s$. The other local formulas use path formulas. 
The semantics of a path formula $\pi$ is given in terms of a binary relation $\Sem{\pi}{\chi,m} \subseteq \Coord{\chi} \times \Coord{\chi}$, which we define below. First, we set: \begin{itemize}\itemsep=1ex \item $\chi,m,(i,j) \models \forallpath{\pi}\phi$ if $\forall (i',j')$ such that $((i,j),(i',j')) \in \Sem{\pi}{\chi,m}$, we have $\chi,m,(i',j') \models \phi$ \item $\chi,m,(i,j) \models \existsp{r}{r'}{\pi}{\pi'}{\bowtie}$ (where ${\bowtie} \in \{=,\neq,\mathord{<},\mathord{\le}\}$) if $\exists (i_1,j_1),(i_2,j_2)$ such that $((i,j),(i_1,j_1)) \in \Sem{\pi}{\chi,m}$ and $((i,j),(i_2,j_2)) \in \Sem{\pi'}{\chi,m}$ and $\mathit{pid}_{i_1}^{j_1}(r) \bowtie \mathit{pid}_{i_2}^{j_2}(r')$ \end{itemize} It remains to define $\Sem{\pi}{\chi,m}$ for a path formula $\pi$. First, a local test and a stay $\epsilon$ do not ``move'' at all: $\Sem{\test{\phi}}{\chi,m} = \{(x,x) \mid x \in \Coord{\chi}$ such that $\chi,m,x \models \phi\}$, and $\Sem{\epsilon}{\chi,m} = \{(x,x) \mid x \in \Coord{\chi}\}$. Using $\mathord{\rightarrow}$, we move to the right neighbor of a process: $\Sem{\mathord{\rightarrow}}{\chi,m} = \{((i,j),(i+1,j)) \mid i \in \set{n-1}$ and $j \in \setz{k}\} \cup \{((n,j),(1,j)) \mid j \in \setz{k}\}$. We define $\Sem{\mathord{\leftarrow}}{\chi,m}$ accordingly. Moreover, $\Sem{\mathord{\downarrow}}{\chi,m} = \{((i,j),(i,j+1)) \mid i \in \set{n}$ and $j \in \setz{k-1}\}$, and similarly for $\Sem{\mathord{\uparrow}}{\chi,m}$. The regular constructs, $+$, $\cdot$, and $\ast$ are as expected and refer to the union, relation composition, and star over binary relations. Finally, $\mathcal{D}$ satisfies the \ensuremath{\textup{DataPDL}}\xspace formula $\Allrings{\phi}$, written $\mathcal{D} \models \Allrings{\phi}$, if, for all rings $\mathcal{R}=(n:\ldots)$, all $\mathcal{R}$-runs $\chi$, and all processes $m \in \set{n}$, we have $\chi,m,(m,0) \models \phi$. Thus, $\phi$ is evaluated at the first configuration, wrt.\ all processes $m$. 
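For illustration, the relational semantics of the basic moves, together with union, composition, and iteration, can be sketched in a few lines of Python (our notation; coordinates range over $\set{n} \times \setz{k}$):

```python
from itertools import product

def eps(n, k):
    # identity relation on the coordinates {1..n} x {0..k}
    return {(x, x) for x in product(range(1, n + 1), range(k + 1))}

def right(n, k):
    # move to the right neighbour, wrapping around the ring
    return {((i, j), (i % n + 1, j))
            for i, j in product(range(1, n + 1), range(k + 1))}

def down(n, k):
    # move to the next configuration (no move beyond the last one)
    return {((i, j), (i, j + 1))
            for i, j in product(range(1, n + 1), range(k))}

def compose(R, S):
    # relation composition R ; S
    return {(x, z) for x, y in R for y2, z in S if y == y2}

def star(R, n, k):
    # reflexive-transitive closure as a least fixpoint
    result = eps(n, k)
    while True:
        new = result | compose(result, R)
        if new == result:
            return result
        result = new
```

For instance, `star(right(n, k), n, k)` relates each position to every position in the same configuration, which is exactly why $\mathord{\rightarrow}^\ast$ fails to be unambiguous.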
Next, we define a restricted logic, $\ensuremath{\textup{DataPDL}^{\ominus}}\xspace(\mathcal{D})$, for which we later present our main result. We say that a path formula $\pi$ is \emph{unambiguous} if, from a given position, it defines at most one reference point. Formally, for all rings $\mathcal{R}=(n:\ldots)$, $\mathcal{R}$-runs $\chi$ of $\mathcal{D}$, processes $m \in \set{n}$, and positions $x \in \Coord{\chi}$, there is at most one $x' \in \Coord{\chi}$ such that $(x,x') \in \Sem{\pi}{\chi,m}$. For example, $\epsilon$, $\mathord{\downarrow}$, $\mathord{\rightarrow}$, and $\mathord{\rightarrow}^\ast\test{\mathsf{m}}$ are unambiguous, while $\mathord{\rightarrow}^\ast$ and $\mathord{\leftarrow} + \mathord{\rightarrow}$ are not. \begin{definition} A $\ensuremath{\textup{DataPDL}}\xspace(\mathcal{D})$ formula is contained in $\ensuremath{\textup{DataPDL}^{\ominus}}\xspace(\mathcal{D})$ if every subformula $\phi=\existsp{r}{r'}{\pi}{\pi'}{\bowtie}$ with ${\bowtie} \in \{<,\le\}$ is such that $\pi$ and $\pi'$ are \emph{unambiguous}. Moreover, $\phi$ must \emph{not} occur (i) in the scope of a negation, (ii) on the left-hand side of an implication $\underline{~\;} \!\Rightarrow\! \underline{~\;}\,$, or (iii) within a test $\test{\,\underline{~\;}\,}$. Note that guards using $=$ and $\neq$ are still unrestricted. \hfill\ensuremath{\lhd} \end{definition} \begin{example}\label{ex:datapdl} Let us \emph{formalize}, in $\ensuremath{\textup{DataPDL}^{\ominus}}\xspace(\mathcal{D})$, the correctness criteria for $\mathcal{D}_\mathsf{Franklin}$ and $\mathcal{D}_\mathsf{DKR}$ that we stated informally in Examples~\ref{ex:franklin} and \ref{ex:dolev}.
Consider the following local formulas: \[\begin{array}{ll} \phi_\mathsf{last}=\forallpath{\mathord{\downarrow}}\mathit{false} & \phi_\mathsf{max} = \forallpath{\mathord{\rightarrow}^\ast} \bigl(\existsleq{\mathit{id}}{r}{\epsilon}{\pi_\mathsf{found}}\bigr)\\[1ex] \phi_\mathsf{acc} = \forallpath{\mathord{\rightarrow}^\ast}(\mathit{passive} \vee \mathit{found}) & \phi_{r=\mathit{id}} = \existspath{\pi_\mathsf{found}}\bigl(\existsp{r}{\mathit{id}}{\epsilon}{\epsilon}{=}\bigr)\\[1ex] \phi_\mathsf{found} = \existspath{\pi_\mathsf{found}\mathord{\rightarrow}(\test{\neg\mathit{found}}\mathord{\rightarrow})^\ast}\mathsf{m} ~~~~~~ & \phi_{r=r} = \neg\bigl(\existsp{r}{r}{\epsilon}{\mathord{\rightarrow}^\ast}{\neq}\bigr) \end{array}\] where $\pi_\mathsf{found}=(\test{\neg\mathit{found}}\mathord{\rightarrow})^\ast\test{\mathit{found}}$. Note that $\pi_\mathsf{found}$ is unambiguous: while going to the right, it always stops at the \emph{nearest} process that is in state $\mathit{found}$. Thus, $\phi_\mathsf{max}$ is indeed a local \ensuremath{\textup{DataPDL}^{\ominus}}\xspace formula. 
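To see why $\pi_\mathsf{found}$ is unambiguous, consider the following Python sketch (ours) that computes its unique target within a single configuration, given the states of the processes in ring order:

```python
def pi_found(states, i):
    """Target of the path formula (!found? ->)* found? from position i.

    states: state names in ring order (one configuration).
    Returns the unique matching position, or None if no process is in
    state 'found'.  Scanning to the right, the path must stop at the
    nearest 'found' process: to move past a position, the test !found
    has to hold there.
    """
    n = len(states)
    for step in range(n):
        j = (i + step) % n
        if states[j] == "found":
            return j
        # otherwise !found holds at j and the path may move right
    return None
```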
Consider the $\ensuremath{\textup{DataPDL}^{\ominus}}\xspace$ formula \[\Phi_1 = \Allrings{ \forallpath{\mathord{\downarrow}^\ast}\bigl((\phi_\mathsf{last}\wedge\phi_\mathsf{acc}) \Rightarrow (\phi_\mathsf{found} \wedge \phi_\mathsf{max} \wedge \phi_{r=r} \wedge \phi_{r=\mathit{id}})\bigr) }\,.\] It says that, at the end (i.e., in the last configuration) of each accepting run, expressed by $\forallpath{\mathord{\downarrow}^\ast}\bigl((\phi_\mathsf{last}\wedge\phi_\mathsf{acc}) \Rightarrow{\ldots}\bigr)$, we have that \begin{itemize}\itemsep=0.5ex \item[(i)] there is exactly one process $i_0$ that ends in state $\mathit{found}$ (guaranteed by $\phi_\mathsf{found}$), \item[(ii)] register $r$ of $i_0$ contains the maximum over all pids ($\phi_\mathsf{max}$), \item[(iii)] register $r$ of $i_0$ contains the pid of $i_0$ itself ($\phi_{r=\mathit{id}}$), and \item[(iv)] all processes store the same pid in $r$ ($\phi_{r=r}$). \end{itemize} Thus, $\mathcal{D}_\mathsf{Franklin} \models \Phi_1$. On the other hand, we have $\mathcal{D}_\mathsf{DKR} \not\models \Phi_1$, because in $\mathcal{D}_\mathsf{DKR}$ the process that ends in $\mathit{found}$ is not necessarily the process with the maximum pid. However, we still have $\mathcal{D}_\mathsf{DKR} \models \Phi_2$ where \[\Phi_2=\Allrings{ \forallpath{\mathord{\downarrow}^\ast}\bigl((\phi_\mathsf{last}\wedge\phi_\mathsf{acc}) \Rightarrow (\phi_\mathsf{found} \wedge \phi_\mathsf{max} \wedge \phi_{r=r})\bigr) }\,.\] The next example formulates the correctness constraint for a distributed sorting algorithm. We would like to say that, at the end of an accepting run, the pids stored in registers $\mathit{r}$ are strictly totally ordered. Suppose $\phi_\mathsf{acc}$ represents an acceptance condition and $\phi_\mathsf{least}$ says that there is exactly one process that terminates in some dedicated state $\mathit{least}$, similarly to $\phi_\mathsf{found}$ above. 
Then, \[\Phi_3 = \Allrings{ \forallpath{\mathord{\downarrow}^\ast}\bigl((\phi_\mathsf{last}\wedge\phi_\mathsf{acc}) \Rightarrow (\phi_\mathsf{least} \wedge \forallpath{\mathord{\rightarrow}^\ast\test{\neg\mathit{least}}} (\existsless{r}{r}{\mathord{\leftarrow}}{\epsilon}))\bigr)}\] makes sure that, whenever process $j$ is not terminating in $\mathit{least}$, its left neighbor $i$ stores a smaller pid in $r$ than $j$ does. Note that $\Phi_1$, $\Phi_2$, and $\Phi_3$ are indeed \ensuremath{\textup{DataPDL}^{\ominus}}\xspace formulas. \hfill\ensuremath{\lhd} \end{example} Unsurprisingly, model checking distributed algorithms against \ensuremath{\textup{DataPDL}^{\ominus}}\xspace is undecidable: \begin{theorem}\label{thm:undecidable} The following problem is undecidable: Given a distributed algorithm $\mathcal{D}$ and $\Phi \in \ensuremath{\textup{DataPDL}^{\ominus}}\xspace(\mathcal{D})$, do we have $\mathcal{D} \models \Phi$\,? (Actually, this even holds for formulas $\Phi$ that express simple state-reachability properties and do not use any guards on pids.) \end{theorem} \section{Round-Bounded Model Checking} \label{sec:verification} In the realm of multithreaded concurrent programs, where model checking is undecidable in general, a fruitful approach has been to underapproximate the behavior of a system \cite{Qadeer:TACAS05}. The idea is to introduce a parameter that measures a characteristic of a run such as the number of thread switches it performs. One then imposes a bound on this parameter and explores all behaviors up to that bound. In numerous distributed algorithms, the number $b$ of rounds needed to conclude is exponentially smaller than the number of processes (cf.\ Examples~\ref{ex:franklin} and \ref{ex:dolev}). Therefore, $b$ seems to be a promising parameter for bounded model checking of distributed algorithms. 
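As a baseline, round-bounded exploration can be sketched generically in Python (this is only the na{\"i}ve explicit-state view for a fixed ring, not the decision procedure developed below):

```python
def bounded_reach(initial, step, b):
    """Generic round-bounded exploration (sketch).

    initial: the unique initial configuration; step: maps a
    configuration to the set of its one-round successors; b: the round
    bound.  Returns all configurations reachable within b rounds.
    """
    seen = {initial}
    frontier = {initial}
    for _ in range(b):
        frontier = {c2 for c in frontier for c2 in step(c)} - seen
        if not frontier:
            break
        seen |= frontier
    return seen
```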
For a distributed algorithm $\mathcal{D}$, a formula $\Phi=\Allrings{\phi} \in \ensuremath{\textup{DataPDL}}\xspace(\mathcal{D})$, and $b \ge 1$, we write $\mathcal{D} \models_b \Phi$ if, for all rings $\mathcal{R}=(n:\ldots)$, all $\mathcal{R}$-runs $\chi$ of length $k \le b$, and all processes $m \in \set{n}$, we have $\chi,m,(m,0) \models \phi$. We now present our main result: \begin{theorem}\label{thm:main} The following problem is PSPACE-complete: Given a distributed algorithm $\mathcal{D}$, $\Phi \in \ensuremath{\textup{DataPDL}^{\ominus}}\xspace(\mathcal{D})$, and a natural number $b \ge 1$ (encoded in unary), do we have $\mathcal{D} \models_b \Phi$\,? \end{theorem} The lower-bound proof, a reduction from the intersection-emptiness problem for a list of finite automata, can be found in the appendix. Before we prove the upper bound, let us discuss the result in more detail. We will first compare it with ``na{\"i}ve'' approaches to solve related questions. Consider the problem to determine whether a distributed algorithm satisfies its specification for all rings up to size $n$ and all runs up to length $b$. This problem is in coNP: We guess a ring (i.e., essentially, a permutation of pids) and a run, and we check, using \cite{Lange06}, whether the run does \emph{not} satisfy the formula. Next, suppose only $b$ is given and the question is whether, for all rings up to size $2^b$ and all runs up to length $b$, the property holds. Then, the above procedure gives us a coNEXPTIME algorithm. Thus, our result is interesting complexity-wise, but it offers some other advantages. First, it actually checks correctness (up to round number $b$) for \emph{all} rings. This is essential when verifying distributed \emph{protocols} against safety properties. 
Second, it reduces to a satisfiability check in the well-studied propositional dynamic logic with loop and converse (LCPDL) \cite{Goeller2009}, which in turn can be reduced to an emptiness check of alternating two-way automata (A2As) over words \cite{Vardi1998}. The ``na{\"i}ve'' approaches, on the other hand, do not seem to give rise to viable algorithms. Finally, our approach is uniform in the following sense: We will construct, in polynomial time, an A2A that recognizes precisely the symbolic abstractions of runs (over arbitrary rings) that violate (or satisfy) a given formula. Our construction is \emph{independent} of the parameter $b$. The emptiness check then requires a bound on the number of rounds (or on the number of processes), which can be adjusted gradually without changing the automaton. \paragraph{Proof Outline for Upper Bound of Theorem~\ref{thm:main}.} \newcommand{\Trans}{\Delta} \newcommand{\RunsTR}[2]{\mathit{Runs}_{#1,#2}} \newcommand{\Runs}[1]{\mathit{Runs}(#1)} \newcommand{\posRuns}[1]{\mathit{Runs}^+(#1)} \newcommand{\negRuns}[1]{\mathit{Runs}^-(#1)} Let $\mathcal{D}$ be the given distributed algorithm and $\Phi \in \ensuremath{\textup{DataPDL}^{\ominus}}\xspace(\mathcal{D})$. We will reduce model checking to the satisfiability problem for LCPDL \cite{Goeller2009}. While \ensuremath{\textup{DataPDL}^{\ominus}}\xspace is interpreted over runs, containing pids from an infinite alphabet, the new logic will reason about symbolic abstractions over a \emph{finite} alphabet. A symbolic abstraction of a run only keeps the transitions and discards pids. Thus, it can be seen as a table (or picture) whose entries are transitions (cf.\ Figure~\ref{fig:symbrun}). First, we translate $\mathcal{D}$ into an LCPDL formula. Essentially, it checks that guards are not used in a contradictory way. To compare $\mathcal{D}$ with $\Phi$, the latter is translated into an LCPDL formula, too. However, there is a subtle point here.
For simplicity, let us write $r < r'$ instead of $\existsp{r}{r'}{\epsilon}{\epsilon}{<}$. Satisfaction of a formula $r < r'$ can only be guaranteed in a symbolic execution if the flow of pids provides \emph{evidence} that $r < r'$ really holds. More concretely, the (hypothetical) formula $(r < r') \vee (r = r') \vee (r' < r)$ is a tautology, but it may not be possible to prove any of its disjuncts on the basis of a symbolic run. This is the reason why $\ensuremath{\textup{DataPDL}^{\ominus}}\xspace$ restricts $<$- and $\le$-tests. It is then indeed enough to reason about symbolic runs (cf.\ Lemma~\ref{lem:lcpdl} below). We leave open whether one can deal with full \ensuremath{\textup{DataPDL}}\xspace. Overall, we reduce model checking to satisfiability of the conjunction of two \ensuremath{\textup{LCPDL}}\xspace formulas of polynomial size: the formula representing the algorithm, and the negation of the formula representing the specification. Satisfiability of LCPDL over symbolic runs (of bounded height) can be checked in PSPACE \cite{Goeller2009} by a reduction to the emptiness problem for A2As over words \cite{Vardi1998}. Our approach is, thus, automata-theoretic in spirit, though the power of alternation is used differently than in \cite{Vardi1996}, which translates LTL formulas into automata. Next, we present the logic LCPDL over symbolic runs. Then, in separate subsections, we translate $\mathcal{D}$ as well as its \ensuremath{\textup{DataPDL}^{\ominus}}\xspace specification into LCPDL. For the remainder of this section, we fix a distributed algorithm $\mathcal{D}=({S},s_0,\mathit{Reg},\Delta)$. \paragraph{PDL with Loop and Converse (LCPDL).} As mentioned before, a symbolic abstraction of a run of $\mathcal{D}$ is a table, whose entries are transitions from the finite alphabet $\Delta$.
A \emph{table} is a triple $T=(n,k,\lambda)$ where $n,k \ge 1$ and $\lambda: \Coord{T} \to \Delta$ labels each position/coordinate from $\Coord{T}= \set{n} \times \setz{k}$ with a transition. Thus, we may consider that $T$ has $n$ columns and $k+1$ rows. In the following, we will write $\coord{T}{i}{j}$ for $\lambda(i,j)$, and $\col{T}{i}$ for the $i$-th column of $T$, i.e., $\col{T}{i}=T[i,0] \ldots T[i,k] \in \Delta^+$. Let $\Trans^{++}$ denote the set of all tables. Formulas $\psi \in \ensuremath{\textup{LCPDL}}\xspace(\mathcal{D})$ are interpreted over tables. Their syntax is given as follows: \[\begin{array}{rcl} \psi,\psi' &\!\!::=\!\!& t \mid s \mid \gotocmd{s} \mid \mathbf{fwd} \mid \sendleft{r} \mid \sendright{r} \mid \recleft{r} \mid \recright{r} \mid r<r' \mid r=r' \mid \updcmd{r}{r'} \mid\\[0.5ex] & & \neg \psi \mid \psi \wedge \psi' \mid \Existspath{\pi}{\psi} \mid \loopform{\pi} \\[1ex] \pi,\pi' &\!\!::=\!\!& \test{\psi} \mid d \mid \pi + \pi' \mid \pi \cdot \pi' \mid \pi^{\ast} \mid \pi^{-1} \mid \mathcal{A} \end{array}\] where $t \in \Delta$, $s \in {S}$, $r,r' \in \mathit{Reg}$, $d \in \{\epsilon,\mathord{\rightarrow},\mathord{\downarrow}\}$, and $\mathcal{A}$ is a \emph{path automaton}: a non-deterministic finite automaton whose transitions are labeled with path formulas $\pi$. Again, $\psi$ is called a \emph{local formula}. We use common abbreviations to include disjunction, implication, $\mathit{true}$, and $\mathit{false}$, and we let $\pi^+ = \pi \cdot \pi^\ast$, $\forallpath{\pi}\psi = \neg\existspath{\pi}\neg\psi$, $\existspath{\pi} = \existspath{\pi}\mathit{true}$, $\mathord{\leftarrow} = \mathord{\rightarrow}^{-1}$, and $\mathord{\uparrow} = \mathord{\downarrow}^{-1}$.
The semantics of \ensuremath{\textup{LCPDL}}\xspace is very similar to that of \ensuremath{\textup{DataPDL}}\xspace. A local formula $\psi$ is interpreted over a table $T=(n,k,\lambda)$ and a position $x \in \Coord{T}$. When it is satisfied, we write $T,x \models \psi$. Moreover, a path formula $\pi$ determines a binary relation $\Sem{\pi}{T} \subseteq \Coord{T} \times \Coord{T}$, relating those positions that are connected by a path matching $\pi$. We consider only the most important cases: We have $T,(i,j) \models t$ if $T[i,j]=t$. For a state, command, guard, or update $\gamma$, let $T,(i,j) \models \gamma$ if $\gamma \in T[i,j]$. Loop and converse are as expected: $T,x \models \loopform{\pi}$ if $(x,x) \in \Sem{\pi}{T}$, and $\Sem{\pi^{-1}}{T} = \{(y,x) \mid (x,y) \in \Sem{\pi}{T}\}$. The semantics of $\mathord{\rightarrow}$ (and $\mathord{\leftarrow}$) is slightly different than in \ensuremath{\textup{DataPDL}}\xspace, since we are not allowed to go beyond the last and first column. Thus, $\Sem{\mathord{\rightarrow}}{T} = \{((i,j),(i+1,j)) \mid i \in \set{n-1}$ and $j \in \setz{k}\}$. However, we can simulate the ``roundabout'' of a ring and set ${\hookrightarrow} = \mathord{\rightarrow} + \test{\neg\existspath{\mathord{\rightarrow}}}\mathord{\leftarrow}^\ast\test{\neg\existspath{\mathord{\leftarrow}}}$ as well as ${\hookleftarrow} = {\hookrightarrow^{-1}}$. Actually, the first column of a table will play the role of a marked process in a ring (later, $\mathsf{m}$ will be translated to $\neg\existspath{\mathord{\leftarrow}}$). 
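For illustration, the edges of the simulated ring step $\hookrightarrow$ on an $n$-column table can be computed as follows (our sketch, with coordinates as above):

```python
def hook_right(n, k):
    """Edges of the simulated ring step on an n-column, (k+1)-row table.

    Ordinary right moves stay within the table; from the last column we
    jump back to the first one, mimicking the roundabout path formula
    -> + (no right)? <-* (no left)?.
    """
    edges = {((i, j), (i + 1, j)) for i in range(1, n)
             for j in range(k + 1)}
    edges |= {((n, j), (1, j)) for j in range(k + 1)}
    return edges
```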
Finally, the semantics of path automata is given by $\Sem{\mathcal{A}}{T} = \{(x,y) \mid$ there is $\pi_1 \ldots \pi_\ell \in L(\mathcal{A})$ with $(x,y) \in \Sem{\pi_1 \cdot \ldots \cdot \pi_\ell}{T}\}$ where $L(\mathcal{A})$ contains a \emph{sequence} $\pi_1 \ldots \pi_\ell$ of path formulas if $\mathcal{A}$ admits a path $q_0 \xrightarrow{\pi_1} q_1 \xrightarrow{\pi_2} \ldots \xrightarrow{\pi_\ell} q_\ell$ from its initial state $q_0$ to a final state $q_\ell$. A formula $\psi \in \ensuremath{\textup{LCPDL}}\xspace(\mathcal{D})$ defines the language $L(\psi) = \{T \in \Trans^{++} \mid T,(1,0) \models \psi\}$. For $b \ge 1$, we denote by $L_b(\psi)$ the set of tables $(n,k,\lambda) \in L(\psi)$ such that $k \le b$. \begin{theorem}[essentially \cite{Goeller2009}]\label{thm:icpdl} The following problem is PSPACE-complete: Given a distributed algorithm $\mathcal{D}$, a formula $\psi \in \ensuremath{\textup{LCPDL}}\xspace(\mathcal{D})$, and $b \ge 1$ (encoded in unary), do we have $L_b(\psi) = \emptyset$\,? (The input $\mathcal{D}$ is only needed to determine the signature of the logic.) 
\end{theorem} \paragraph{From Distributed Algorithms to \ensuremath{\textup{LCPDL}}\xspace.} \begin{figure}[t] \centering \begin{tabular}{l} $\localform{r}{r'} = \begin{cases} \test{\bigwedge_{\bar{r} \in \mathit{Reg}}\!\neg\existspath{(\msgform{\bar{r}}{r})^{-1}}} & \textup{if~} r = r'\\[0.5ex] \test{\mathit{false}} & \textup{if~} r \neq r' \end{cases}$ \qquad $\updform{r}{r'} = \begin{cases} \test{ \bigwedge_{\bar{r} \neq r} \neg(\updcmd{r}{\bar{r}}) } & \textup{if~} r = r'\\[0.5ex] \test{\updcmd{r'}{r}} & \textup{if~} r \neq r' \end{cases}$\\[5ex] $\msgform{r}{r'} = \left( \begin{array}{rl} & \test{\sendright{r}} \cdot (\hookrightarrow \cdot \test{\mathbf{fwd}})^\ast \cdot \hookrightarrow \cdot\test{\recleft{{r'}}}\\[0.5ex] \!+ \!\!\!\!& \test{\sendleft{r}} \cdot (\hookleftarrow \cdot \test{\mathbf{fwd}})^\ast \cdot \hookleftarrow \cdot\test{\recright{{r'}}} \end{array}\right)$ \qquad $\nextform{r}{r'} = \begin{cases} \mathord{\downarrow} & \textup{if~} r = r'\\[0.5ex] \test{\mathit{false}} & \textup{if~} r \neq r' \end{cases}$ \end{tabular} \caption{Path formulas to trace back transmission of pids\label{fig:pathform}} \end{figure} W.l.o.g., we assume that $\Delta$ contains $\mathsf{t}=\datrans{\mathsf{s}}{\mathbf{skip}}{\mathbf{skip}}{\mathbf{skip}}{\mathbf{skip}}{s_0}$ where $\mathsf{s} \neq s_0$ does not occur in any other transition. Let $\mathcal{R}=(n:p_1,\ldots,p_n)$ be a ring and $\chi = {C_0 \confrel{t^1} C_1 \confrel{t^2} \ldots \confrel{t^k} C_k}$ be an $\mathcal{R}$-run of $\mathcal{D}$, where $t^j = (t_{1}^j,\ldots,t_{n}^j) \in \Delta^n$ for all $j \in \set{k}$. From $\chi$, we extract the \emph{symbolic run} $\pictofrun{\chi}=(n,k,\lambda) \in \Trans^{++}$ given by its columns $\pictofrun{\chi}[i] = \mathsf{t}\, t_i^1 \ldots t_i^k$. The purpose of the dummy transition $\mathsf{t}$ at the beginning of a column is to match the number of configurations in a run.
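The extraction of a symbolic run can be sketched as follows (ours; transitions are just opaque labels):

```python
def symbolic_run(transitions, n, dummy="t0"):
    """Extract the symbolic run from a concrete run (sketch).

    transitions: list of k tuples (t_1^j, ..., t_n^j), one per round.
    The dummy transition is prepended so that each of the n columns has
    k+1 entries, matching the k+1 configurations C_0, ..., C_k.
    """
    return [[dummy] + [round_[i] for round_ in transitions]
            for i in range(n)]
```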
We will construct, in polynomial time, a formula $\psi_\mathcal{D} \in \ensuremath{\textup{LCPDL}}\xspace(\mathcal{D})$ such that $L(\psi_\mathcal{D}) = \{\pictofrun{\chi} \mid \chi$ is a run of $\mathcal{D}\}$. In particular, $\psi_\mathcal{D}$ will verify that (i) there are no cyclic dependencies that arise from $<$-guards, and (ii) registers in equality guards can be traced back to the same origin. In that case, the symbolic run is consistent and corresponds to a ``real'' run of $\mathcal{D}$. The main ingredients of $\psi_\mathcal{D}$ are some path formulas that describe the transmission of pids in a symbolic run. They are depicted in Figure~\ref{fig:pathform}. For $\theta \in \{\mathit{loc},\mathit{msg},\mathit{upd},\mathit{next}\}$ and $h,h' \in \{0,1,2\}$, the meaning of $(x,y) \in \Sem{\theta_{r,r'}^{h,h'}}{T}$ is that the pid stored in $r$ at \emph{stage} $h$ of position/transition $x$ has been propagated to register $r'$ at stage $h'$ of $y$. Here, $h=0$ means ``after sending'', $h=1$ ``after receiving'', and $h=2$ ``after register update''. The interpretation of ``propagated'' depends on $\theta$. Formula $\localform{r}{r'}$ says that the value of register $r$ is not affected by reception. Similarly, $\smash{\updform{r}{r'}}$ takes care of updates. Formula $\smash{\nextform{r}{r'}}$ allows us to switch to the next transition of a process, preserving the value of $r (= r')$. The most interesting case is $\smash{\msgform{r}{r'}}$, which describes paths across several processes. It relates the sending of $r$ and a corresponding receive in $r'$, which requires that all intermediate transitions are forward transitions. All path formulas are illustrated in Figure~\ref{fig:symbrun}. Since pids can be transmitted along several transitions and messages, the formulas $\smash{\theta_{r,r'}^{h,h'}}$ will be composed using path automata.
For $\textup{h} \in \{1,2\}$ and $\textup{r} \in \mathit{Reg}$, we define a path automaton $\smash{\mathcal{A}_{\textup{r}}^{\textup{h}}}$ that, in $T_\chi$, connects some positions $(i,0)$ and $(i',j')$ iff, in $\chi$, register $\textup{r}$ stores $p_i$ at stage $\textup{h}$ of position $(i',j')$. Its set of states is $\{\iota\} \cup (\{0,1,2\} \times \mathit{Reg})$. For all $\mathit{r} \in \mathit{Reg}$, there is a transition from the initial state $\iota$ to $(0,r)$ with transition label $\test{\neg\existspath{\mathord{\uparrow}}}$. Thus, the automaton starts at the top row and non-deterministically chooses some register $r$. From state $(h,r)$, it can read any transition label $\smash{\theta_{r,r'}^{h,h'}}$ and move to $(h',r')$. The only final state is $(\textup{h},\textup{r})$. Figure~\ref{fig:symbrun} describes (partial) runs of $\mathcal{A}_{r'}^{1}$ and $\mathcal{A}_{r''}^{1}$, which allow us to identify the origin of $r'$ and $r''$ when applying the guard $r'<r''$. Now, consistency of equality guards can indeed be verified by an \ensuremath{\textup{LCPDL}}\xspace formula. It says that, whenever an equality check $r = r'$ occurs in the symbolic run, then the pids stored in $r$ and $r'$ have a common origin. This can be conveniently expressed in terms of loop and converse. Note that guards are checked at stage $h=1$ of the corresponding transition: \[\psi_= ~=~ \forallpath{(\mathord{\rightarrow} + \mathord{\downarrow})^\ast} \textstyle\bigwedge_{r,r' \in \mathit{Reg}} \Bigl(r = r' ~\Rightarrow~ \loopform{(\mathcal{A}_{r}^{1})^{-1} \cdot \mathcal{A}_{r'}^{1}}\Bigr)\,.\] The next path formula connects the first coordinate of a process $i$ with the first coordinate of another process $i'$ if some guard forces the pid of $i$ to be smaller than that of $i'$: \[\pi_< ~= \Bigl(\textstyle\sum_{r,r' \in \mathit{Reg}} \mathcal{A}_{r}^{1} \cdot \test{r < r'} \cdot (\mathcal{A}_{r'}^{1})^{-1}\Bigr)^+\,.\] Note that, here, we use the (strict) transitive closure.
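Semantically, $\pi_<$ collects ``pid of $i$ must be smaller than pid of $i'$'' constraints between processes, and such constraints are satisfiable by an assignment of distinct pids iff they are acyclic. A minimal Python sketch of this acyclicity check over a hypothetical edge list (the paper instead expresses the check logically, via a loop formula):

```python
def has_forced_cycle(n, edges):
    """DFS cycle detection: edges contains (i, j) meaning pid(i) < pid(j) is forced."""
    adj = {i: [] for i in range(n)}
    for i, j in edges:
        adj[i].append(j)
    state = {i: 0 for i in range(n)}  # 0 = unvisited, 1 = on stack, 2 = finished

    def dfs(u):
        state[u] = 1
        for v in adj[u]:
            # a back edge to a vertex on the stack closes a cycle
            if state[v] == 1 or (state[v] == 0 and dfs(v)):
                return True
        state[u] = 2
        return False

    return any(state[i] == 0 and dfs(i) for i in range(n))
```

For instance, the constraints $\{0<1, 1<2, 2<0\}$ are detected as cyclic, while the chain $\{0<1, 1<2\}$ is not.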
Consistency of $<$-guards now reduces to saying that there is no $\pi_<$-loop: $\psi_< ~=~ \neg\existspath{\mathord{\rightarrow}^\ast} \loopform{\pi_<}$. Moreover, we can easily write an \ensuremath{\textup{LCPDL}}\xspace formula $\psi_{\mathsf{col}}$ that checks whether every column $T[i] \in \Delta^+$ (ignoring $\mathsf{t}$) is a valid transition sequence of $\mathcal{D}$. Finally, let $\psi_\mathcal{D} = \psi_= \wedge \psi_< \wedge \psi_{\mathsf{col}}$. \begin{lemma}\label{lem:datapdl} We have $L(\psi_\mathcal{D}) = \{\pictofrun{\chi} \mid \chi$ is a run of $\mathcal{D}\}$. \end{lemma} \paragraph{From \ensuremath{\textup{DataPDL}^{\ominus}}\xspace to \ensuremath{\textup{LCPDL}}\xspace.} \newcommand{\transl}[1]{\widetilde{#1}} Next, we inductively translate every local $\ensuremath{\textup{DataPDL}^{\ominus}}\xspace(\mathcal{D})$ formula $\phi$ into an $\ensuremath{\textup{LCPDL}}\xspace(\mathcal{D})$ formula $\transl{\phi}$. The translation is given in Figure~\ref{fig:transl}. As mentioned before, the first column in a table plays the role of a marked process so that $\transl{\mathsf{m}} = \neg\existspath{\mathord{\leftarrow}}$. The standard formulas are translated as expected. Now, consider $\transl{~\smash{\existsp{r}{r'}{\pi}{\pi'}{<}}}$ (the remaining cases are similar). To ``prove'' $\existsp{r}{r'}{\pi}{\pi'}{<}$ at a given position in a symbolic run, we require that there are a $\transl{\pi}$-path and a $\transl{\pi}'$-path to coordinates $x$ and $x'$, respectively, whose registers $r$ and $r'$ satisfy $r < r'$. To guarantee the latter, the pids stored in $r$ and $r'$ have to trace back to coordinates that are connected by a $\pi_<$-path. Again, using converse, this can be expressed as a loop (cf.\ Figure~\ref{fig:existsless}). Note that, hereby, $\mathcal{A}_{r}^{2}$ and $\mathcal{A}_{r'}^{2}$ refer to stage $h=2$, which reflects the fact that $\ensuremath{\textup{DataPDL}}\xspace$ speaks about \emph{configurations} (determined after updates).
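The inductive translation of Figure~\ref{fig:transl} can also be rendered operationally. Below is a minimal Python sketch over a toy tuple-based AST; all constructor names (`"m"`, `"state"`, `"eq"`, \ldots) are hypothetical shorthand for the paper's operators, and only a few representative cases are shown:

```python
# Formulas are nested tuples, e.g. ("eq", r, r1, pi, pi1) stands for the
# DataPDL equality modality E r=r1 (pi, pi1). Constructor names are ours.
def transl(phi):
    op = phi[0]
    if op == "m":                       # marked process: no left neighbour
        return ("not", ("exists", ("left",)))
    if op == "state":                   # s in S, kept as-is
        return phi
    if op == "not":
        return ("not", transl(phi[1]))
    if op == "and":
        return ("and", transl(phi[1]), transl(phi[2]))
    if op == "eq":                      # loop( ~pi . inv(A_r^2) . A_r1^2 . inv(~pi1) )
        _, r, r1, pi, pi1 = phi
        return ("loop", ("cat", transl_path(pi), ("inv", ("A", r, 2)),
                         ("A", r1, 2), ("inv", transl_path(pi1))))
    raise ValueError(f"unhandled constructor: {op}")

def transl_path(pi):
    # replace process edges by message edges and translate embedded tests
    if pi == ("right",):
        return ("hookright",)
    if pi == ("left",):
        return ("hookleft",)
    if pi[0] == "test":
        return ("test", transl(pi[1]))
    return (pi[0],) + tuple(transl_path(p) for p in pi[1:])
```

The sketch mirrors the figure: Boolean connectives are translated homomorphically, while the equality modality becomes a loop formula composed of translated paths, inverses, and the path automata for stage $h=2$.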
\begin{figure}[t] \centering { \parbox[b]{0.65\textwidth}{ \centering \begin{tabular}{l} $\transl{\mathsf{m}} = \neg\existspath{\mathord{\leftarrow}}$ \qquad $\transl{s} = \gotocmd{s}$ ~for all $s \in {S}$\\[0.5ex] $\transl{\neg\phi} = \neg\transl{\phi}$ \quad $\transl{\phi_1 \wedge \phi_2} = \transl{\phi_1} \wedge \transl{\phi_2}$ \quad $\transl{\phi_1 \Rightarrow \phi_2} = \transl{\phi_1} \Rightarrow \transl{\phi_2}$ \quad $\transl{\forallpath{\pi}\phi} = \forallpath{\transl{\pi}}\transl{\phi}$\\[0.5ex] $\transl{\existsp{r}{r'}{\pi}{\pi'}{<}} = \loopform{\transl{\pi} \cdot (\mathcal{A}_{r}^{2})^{-1} \cdot \pi_< \cdot \mathcal{A}_{r'}^{2} \cdot (\transl{\pi}')^{-1}}$\\[0.5ex] $\transl{\existsp{r}{r'}{\pi}{\pi'}{\le}} = \loopform{\transl{\pi} \cdot (\mathcal{A}_{r}^{2})^{-1} \cdot (\pi_< + \epsilon)\cdot \mathcal{A}_{r'}^{2} \cdot (\transl{\pi}')^{-1}}$\\[0.5ex] $\transl{\existsp{r}{r'}{\pi}{\pi'}{=}} = \loopform{\transl{\pi} \cdot (\mathcal{A}_{r}^{2})^{-1} \cdot \mathcal{A}_{r'}^{2} \cdot (\transl{\pi}')^{-1}}$\\[0.5ex] $\transl{\existsp{r}{r'}{\pi}{\pi'}{\neq}} = \loopform{\transl{\pi} \cdot (\mathcal{A}_{r}^{2})^{-1} \cdot (\mathord{\leftarrow}^+ + \mathord{\rightarrow}^+) \cdot \mathcal{A}_{r'}^{2} \cdot (\transl{\pi}')^{-1}}$\\[0.5ex] $\transl{\pi}$ is inductively obtained from $\pi$ by replacing tests $\test{\phi}$ by $\test{\transl{\phi}}$,\\[-0.3ex] \qquad $\mathord{\rightarrow}$ by $\hookrightarrow$, and $\mathord{\leftarrow}$ by $\hookleftarrow$ \end{tabular} \caption{From \ensuremath{\textup{DataPDL}^{\ominus}}\xspace to LCPDL\label{fig:transl}} }} ~~~ { \parbox[b]{0.28\textwidth}{ \centering \scalebox{0.9}{ \begin{gpicture} \gasset{Nframe=y,Nh=1.4,Nw=1.4,AHLength=1.8,AHlength=1.5} \unitlength=0.7mm \node(current)(0,0){} \node(r)(-20,20){} \node(rp)(20,20){} \node(p1)(-10,40){} \node(p2)(10,40){} \drawedge[curvedepth=0,AHnb=1](current,r){$\transl{\pi}$} \drawedge[ELside=r,ELpos=60,curvedepth=-5,AHnb=1](current,rp){$\transl{\pi}'$} 
\drawedge[ELside=r,ELpos=50,ELdist=-1.2,curvedepth=0,AHnb=1](rp,current){$(\transl{\pi}')^{-1}$} \drawedge[ELside=r,ELpos=50,curvedepth=-5,AHnb=1](p1,r){$\mathcal{A}_{r}^{2}$} \drawedge[ELside=l,ELpos=50,curvedepth=0,AHnb=1](p2,rp){$\mathcal{A}_{r'}^{2}$} \drawedge[ELside=r,ELpos=60,ELdist=0.2,curvedepth=0,AHnb=1](r,p1){$(\mathcal{A}_{r}^{2})^{-1}$} \drawedge[ELside=l,ELpos=50,curvedepth=0,AHnb=1](p1,p2){$\pi_<$} \end{gpicture} } \caption{$\transl{\existsp{r}{r'}{\pi}{\pi'}{<}}$\label{fig:existsless}} }} \end{figure} \begin{lemma}\label{lem:lcpdl} Let $T \in \{\pictofrun{\chi} \mid \chi$ is a run of $\mathcal{D}\}$ and $\phi$ be a local $\ensuremath{\textup{DataPDL}^{\ominus}}\xspace(\mathcal{D})$ formula. We have $T,(1,0) \models \transl{\phi} \;\Longleftrightarrow\, \bigl(\chi,1,(1,0) \models \phi \text{ for all runs } \chi \text{ of } \mathcal{D} \text{ such that } T_\chi = T\bigr)$. \end{lemma} Using Lemmas~\ref{lem:datapdl} and \ref{lem:lcpdl}, we can now prove Lemma~\ref{lem:satisf} below. Together with Theorem~\ref{thm:icpdl}, the upper bound of Theorem~\ref{thm:main} follows. \begin{lemma}\label{lem:satisf} Let $\mathcal{D}$ be a distributed algorithm, $\Phi=\Allrings{\phi} \in \ensuremath{\textup{DataPDL}^{\ominus}}\xspace(\mathcal{D})$, and $b \ge 1$. We have (a) $\mathcal{D} \models \Phi \,\Longleftrightarrow\, L(\psi_\mathcal{D} \wedge \neg\transl{\phi}) = \emptyset$, and (b) $\mathcal{D} \models_b \Phi \,\Longleftrightarrow\, L_b(\psi_\mathcal{D} \wedge \neg\transl{\phi}) = \emptyset$. \end{lemma} \section{Conclusion}\label{sec:conclusion} In this paper, we provided a conceptually new approach to the verification of distributed algorithms that is robust against small changes of the model. Actually, we made some assumptions that simplify the presentation, but are not crucial to the approach and results. 
For example, we assumed that an algorithm is synchronous, i.e., there is a global clock that, at every clock tick, triggers a round, in which every process participates. This can be relaxed to handle communication via (bounded) channels. Moreover, messages are pids, but they could contain message contents from a finite alphabet as well. Though the restriction to the class of rings is crucial for the complexity of our algorithm, the logical framework we developed is largely independent of concrete (ring) architectures. Essentially, we could choose any class of architectures for which \ensuremath{\textup{LCPDL}}\xspace is decidable. We leave open whether round-bounded model checking can deal with full \ensuremath{\textup{DataPDL}}\xspace, or with properties of the form $\Allexistsrings{\phi}$, which are branching-time in spirit.
\section{Introduction} In the last 20 years, evolutionary computation has developed a number of algorithmic techniques for the analysis of evolutionary and genetic algorithms. These methods typically focus on runtime, and allow for rigorous bounds on the time required to reach a global optimum, or other well-specified high-fitness solutions. The runtime analysis of evolutionary algorithms has become one of the dominant concepts in evolutionary computation, leading to a plethora of results for evolutionary algorithms~\cite{Auger2011,Jansen2013,NeumannWitt2010} as well as novel optimisation paradigms such as swarm intelligence~\cite{NeumannWitt2010} and artificial immune systems~\cite{Jansen2011a}. Interestingly, although evolutionary algorithms are heavily inspired by natural evolution, these methods have seldom been applied to natural evolution as studied in mathematical population genetics. This is a missed opportunity: the time it takes for a natural population to reach a fitness peak is an important question for the study of natural evolution. The kinds of results obtained from runtime analysis, namely how the runtime scales with genome size and mutation rate, are of general interest to population genetics. Moreover, recently there has been a renewed interest in applying computer science methods to problems in evolutionary biology, with contributions from fields as diverse as game theory~\cite{chastain_algorithms_2014}, machine learning~\cite{valiant_evolvability_2009} and Markov chain theory~\cite{chatterjee_time_2014}. Here, we present a first attempt at applying runtime analysis to the so-called Strong Selection Weak Mutation regime of natural populations. The Strong Selection Weak Mutation model applies when the population size, mutation rate, and selection strength are such that the time between occurrence of new mutations is long compared to the time a new genotype takes to replace the parent genotype~\cite{gillespie_molecular_1984}.
Under these conditions, only one genotype is present in the population most of the time, and evolution occurs through ``jumps'' between different genotypes, corresponding to a new mutation replacing the resident genotype in the population. The relevant dynamics can then be characterised by a (1+1)-type stochastic process. This model is obtained as a limit of many other models, such as the Wright-Fisher model. One important aspect of this model is that new solutions are accepted with a probability $\frac{1-e^{-2\beta\Delta f}}{1-e^{-2 N\beta \Delta f}}$ that depends on the fitness difference $\Delta f$ between the new mutation and the resident genotype. Here $N$ reflects the size of the underlying population, and $\beta$ represents the selection strength. One can think of $f$ as defining a phenotype that is under selection to be maximised; $\beta$ quantifies how strongly a unit change in $f$ is favoured. This probability was first derived by Kimura~\cite{kimura_probability_1962} for a population of $N$ individuals that are sampled binomially in proportion to their fitness. This choice of acceptance function introduces two main differences to the (1+1)~EA\xspace: First, solutions of lower fitness (worsenings) may be accepted with some positive probability. This is reminiscent of the Metropolis algorithm (Simulated Annealing with constant temperature), which can also accept worsenings (see, e.\,g.~\cite{Jansen2007}). Second, solutions of higher fitness can be rejected, since they are accepted with a probability that is roughly proportional to the relative advantage they have over the current solution. We cast this model of natural evolution in a (1+1)-type algorithm referred to as SSWM, using common mutation operators from evolutionary algorithms. We then present first runtime analyses of this process.
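The limiting behaviour of this acceptance probability can be checked numerically; a minimal sketch (function name is ours) of the rule quoted above:

```python
import math

def p_fix(delta_f, beta, N):
    """Kimura's fixation probability, used by SSWM as acceptance probability."""
    if delta_f == 0:
        return 1.0 / N  # continuous extension at delta_f = 0
    return (1 - math.exp(-2 * beta * delta_f)) / (1 - math.exp(-2 * N * beta * delta_f))
```

For $N=1$ every offspring is accepted (`p_fix(3, 0.5, 1) == 1.0`), while for large $N$ worsenings are essentially never accepted; `p_fix(-1, 0.5, 100)` is already below $10^{-6}$.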
Our aims are manifold: \begin{itemize} \item to explore the performance of natural evolution in the context of runtime, comparing it against simple evolutionary algorithms like the (1+1)~EA\xspace, \item to investigate the non-elitist selection mechanism implicit in SSWM and its usefulness in the context of evolutionary algorithms, and \item to show that techniques for the analysis of evolutionary algorithms can be applied to simple models of natural evolution, aiming to open up a new research field at the intersection of evolutionary computation and population genetics. \end{itemize} Our results are summarised as follows. For the simple function \text{\sc OneMax}\xspace we show in Section~\ref{sec:onemax} that with suitably large population sizes, when $N \beta \ge \frac{1}{2}\ln(11n)$, SSWM is an effective hill climber as it optimises \text{\sc OneMax}\xspace in expected time $O((n \log n)/\beta)$. However, when $N\beta$ is smaller than this threshold by any constant factor, we encounter a phase transition and SSWM requires exponential time even on \text{\sc OneMax}\xspace. We then illustrate the particular features of the selection rule in more depth. In Section~\ref{sec:cliff} we consider a function $\cliff{d}$ where a fitness valley of Hamming distance~$d$ needs to be crossed. For $d = \omega(\log n)$ the (1+1)~EA\xspace needs time $\Theta(n^d)$, but SSWM is faster by a factor of $e^{\Omega(d)}$ because of its ability to accept worse solutions. Finally, in Section~\ref{sec:balance} we illustrate on the function \text{\sc Balance}\xspace~\cite{RohlfshagenLehreYao2009} that SSWM can drastically outperform the (1+1)~EA\xspace because the fitness-dependent selection drives it to follow the steepest gradient. While the (1+1)~EA\xspace needs exponential time in expectation, SSWM with overwhelming probability finds an optimum in polynomial time.
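As a concrete illustration of the process analysed below, here is a minimal, runnable sketch of SSWM hill-climbing \text{\sc OneMax}\xspace with local mutations; the parameter choice follows the regime $N\beta \ge \frac{1}{2}\ln(11n)$ under which Section~\ref{sec:onemax} proves an $O((n\log n)/\beta)$ bound (all function names are ours):

```python
import math
import random

def p_fix(delta_f, beta, N):
    """Acceptance probability of SSWM (Kimura's fixation probability)."""
    if delta_f == 0:
        return 1.0 / N
    return (1 - math.exp(-2 * beta * delta_f)) / (1 - math.exp(-2 * N * beta * delta_f))

def sswm_onemax(n, beta, N, rng):
    """Run SSWM with local mutations until OneMax is optimised; return step count."""
    x = [rng.randrange(2) for _ in range(n)]
    steps = 0
    while sum(x) < n:
        steps += 1
        i = rng.randrange(n)              # local mutation: flip one uniformly random bit
        delta_f = 1 if x[i] == 0 else -1  # OneMax fitness change of the mutant
        if rng.random() < p_fix(delta_f, beta, N):
            x[i] ^= 1                     # accept the mutant
        # note: improvements can be rejected and worsenings accepted
    return steps
```

With $n=20$, $\beta=1$ and $N=3$ we have $N\beta = 3 \ge \frac{1}{2}\ln(220) \approx 2.7$, and the loop terminates quickly; for much smaller $N\beta$ the phase transition of Section~\ref{sec:onemax} makes the same loop stall near the optimum.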
The main technical difficulties are that, in contrast to the simple (1+1)~EA\xspace, SSWM is a non-elitist algorithm, hence fitness-level arguments based on elitism are not applicable. Level-based theorems for non-elitist populations~\cite{Corus2014} are not applicable either because they require population sizes larger than~1. Moreover, while for the (1+1)~EA\xspace transition probabilities to better solutions are solely determined by probabilities for flipping bits during mutation, for SSWM these additionally depend on the probability of fixation and hence the absolute fitness difference. The analysis of SSWM is more challenging than the analysis of the (1+1)~EA\xspace, and requires tailored proof techniques. We hope that these techniques will be helpful for analysing other evolutionary algorithms with fitness-based selection schemes. \section{Preliminaries} We define the optimisation time of SSWM as the first generation where the optimum is accepted as the new individual. As can be seen from the description above, the model resembles the (1+1)~EA\xspace in that it only maintains one genotype that may be replaced by mutated versions of it. The candidate solutions are accepted with probability \begin{equation} \ensuremath{p_\mathrm{fix}}(\Delta f)=\frac{1-e^{-2\beta\Delta f}}{1-e^{-2 N\beta \Delta f}} \end{equation} where $\Delta f \neq 0$ is the fitness difference to the current solution and $N\geq 1$ is the size of the underlying population. For $\Delta f = 0$ we define $\ensuremath{p_\mathrm{fix}}(0) := \lim_{\Delta f\rightarrow 0} \ensuremath{p_\mathrm{fix}}(\Delta f)=\frac{1}{N}$, so that $\ensuremath{p_\mathrm{fix}}$ is continuous and well defined for all $\Delta f$. If $N=1$, this probability is $\ensuremath{p_\mathrm{fix}}(\Delta f)=1$, meaning that any offspring is accepted, and as $N\rightarrow\infty$, only solutions with $\Delta f>0$ are accepted.
This expression was first derived by Kimura~\cite{kimura_probability_1962} and represents the \emph{probability of fixation}, that is, the probability that a gene that is initially present in one copy in a population of $N$ individuals is eventually present in all individuals. Since the acceptance function in this algorithm depends on the absolute difference in fitness between genotypes, we include a parameter $\beta\in (0,1]$ that effectively scales the fitness function and that, in population genetics, models the strength of selection on a phenotype. By incorporating $\beta$ as a parameter of this function (and hence of the algorithm) we avoid having to explicitly rescale the fitness functions we analyse, while allowing us to explore the performance of this algorithm on a family of functions. This function has a sigmoid shape (strictly increasing; see Lemma~\ref{lemma:pfix-strictly-increasing}) with limits $\lim_{\Delta f\rightarrow -\infty}\ensuremath{p_\mathrm{fix}}(\Delta f)=0$ and $\lim_{\Delta f\rightarrow \infty}\ensuremath{p_\mathrm{fix}}(\Delta f)=1$. As such, for large $\vert\beta \Delta f\vert$ this probability of acceptance is close to the one in the (1+1)~EA\xspace, as long as $N>1$, defeating the purpose of the comparison. By bounding $\beta$ by $1$, we avoid artefactual results obtained by inflating the fitness differences between genotypes. We can then cast the SSWM regime as Algorithm~\ref{alg:sswm}, where the function $\textrm{mutate}(x)$ can be either standard bit mutation (all bits are mutated independently with probability $p_m=1/n$, which we call \emph{global mutations}) or flipping a single bit chosen uniformly at random (which we call \emph{local mutations}). SSWM is valid when the expected number of new mutants in the population is much less than one, which implies that local mutations are a better approximation for this regime.
However, we also consider global mutations in order to facilitate a comparison with evolutionary algorithms such as the (1+1)~EA\xspace (Algorithm~\ref{alg:ea}), which uses global mutations. \begin{algorithm}[h] \caption{SSWM} \label{alg:sswm} \begin{algorithmic} \STATE {Choose $x\in \{0,1\}^n$ uniformly at random } \REPEAT \STATE {$y\leftarrow \mathrm{mutate}(x) $} \STATE {$\Delta f =f(y)-f(x)$ } \STATE {Choose $r\in\left[0,1\right]$ uniformly at random} \IF {$r<\ensuremath{p_\mathrm{fix}} (\Delta f)$} \STATE {$x\leftarrow y$} \ENDIF \UNTIL {stop} \end{algorithmic} \end{algorithm} \begin{algorithm}[h] \caption{(1+1)~EA\xspace} \label{alg:ea} \begin{algorithmic} \STATE {Choose $x\in \{0,1\}^n$ uniformly at random } \REPEAT \STATE {$y\leftarrow \mathrm{mutate}(x)$} \IF {$f(y) \ge f(x)$} \STATE {$x\leftarrow y$} \ENDIF \UNTIL {stop} \end{algorithmic} \end{algorithm} Next, we derive upper and lower bounds for $\ensuremath{p_\mathrm{fix}}(\Delta f) $ that will be useful throughout the manuscript. \begin{lemma} For every $\beta \in \ensuremath{{\mathbb R}}^+$ and $N \in \ensuremath{{\mathbb N}}^+$ the following inequalities hold. If $\Delta f \ge 0$ then \[ \frac{2\beta \Delta f}{1+2\beta \Delta f} \le \ensuremath{p_\mathrm{fix}}(\Delta f) \le \frac{2\beta \Delta f}{1-e^{-2N\beta \Delta f}}. \] If $\Delta f \le 0$ then \[ \frac{-2\beta\Delta f }{e^{-2 N \beta \Delta f}}\le \ensuremath{p_\mathrm{fix}}(\Delta f) \le \frac{e^{-2\beta\Delta f}}{e^{-2N\beta\Delta f}-1}. \] \label{lem:p.fix.bounds} \end{lemma} \begin{proof} In the following we frequently use $1 + x \le e^x$ and $1-e^{-x}\leq 1$ for all $x \in \ensuremath{{\mathbb R}}$ as well as $e^x \le \frac{1}{1-x}$ for $x < 1$. 
If $\Delta f\geq 0$, \begin{eqnarray*} \ensuremath{p_\mathrm{fix}}(\Delta f) &=& \frac{1-e^{-2\beta\Delta f}}{1-e^{-2N\beta\Delta f}} \geq 1-e^{-2\beta\Delta f}\\ &\geq& 1-\frac{1}{1+2\beta\Delta f} = \frac{2\beta \Delta f}{1+2\beta \Delta f} \end{eqnarray*} as well as \begin{eqnarray*} \ensuremath{p_\mathrm{fix}}(\Delta f) &=& \frac{1-e^{-2\beta\Delta f}}{1-e^{-2N\beta\Delta f}} \leq \frac{2\beta \Delta f}{1-e^{-2N\beta\Delta f}}. \end{eqnarray*} If $\Delta f \leq 0$, \begin{eqnarray*} \ensuremath{p_\mathrm{fix}}(\Delta f) &=& \frac{e^{-2\beta\Delta f}-1}{e^{-2N\beta\Delta f}-1} \leq\frac{e^{-2\beta\Delta f}}{e^{-2N\beta\Delta f}-1}. \end{eqnarray*} Using the fact that $e^{-x}-1\leq e^{-x}$: \begin{eqnarray*} \ensuremath{p_\mathrm{fix}}(\Delta f) &=& \frac{e^{-2\beta\Delta f}-1}{e^{-2N\beta\Delta f}-1} \geq \frac{e^{-2\beta\Delta f}-1}{e^{-2N\beta\Delta f}} \geq \frac{-2\beta \Delta f }{e^{-2 N \beta \Delta f}}. \end{eqnarray*} \end{proof} The previous bounds for $\Delta f>0$ show that \ensuremath{p_\mathrm{fix}}\xspace is roughly proportional to the scaled fitness difference $\beta \Delta f$. \section{SSWM on OneMax} \label{sec:onemax} The function $\text{\sc OneMax}\xspace(x) := \sum_{i=1}^n x_i$ has been studied extensively in natural computation because of its simplicity. It represents an easy hill climbing task, and it is the easiest function with a unique optimum for all evolutionary algorithms that only use standard bit mutation for variation~\cite{Sudholt2012c}. Showing that SSWM can optimise \text{\sc OneMax}\xspace efficiently serves as proof of concept that SSWM is a reasonable optimiser. It further sheds light on how to set algorithmic parameters such as the selection strength~$\beta$ and the population size~$N$. To this effect, we first show a polynomial upper bound for the runtime of SSWM on \text{\sc OneMax}\xspace.
We then show that SSWM exhibits a phase transition on its runtime as a function of $N\beta$; changing this parameter by a constant factor leads to exponential runtimes on \text{\sc OneMax}\xspace. Another reason why studying \text{\sc OneMax}\xspace for SSWM makes sense is that not all evolutionary algorithms that use a fitness-dependent selection perform well on \text{\sc OneMax}\xspace. Oliveto and Witt~\cite{Oliveto2013a} showed that the Simple Genetic Algorithm, which uses fitness-proportional selection, fails badly on \text{\sc OneMax}\xspace even within exponential time, with very high probability. \subsection{Upper Bound for SSWM on OneMax} We first show the following simple lemma, which gives an upper bound on the probability of increasing or decreasing the number of ones in a search point by~$k$ in one mutation. \begin{lemma} \label{lem:mutations-decreasing-ones} For any integer $k > 0$, let $\mut(i, i\pm k)$ for $0 \le i \le n$ be the probability that a global mutation of a search point with $i$ ones creates an offspring with $i\pm k$ ones. Then \begin{align*} \mut(i, i+k) & \le \left(\frac{n-i}{n}\right)^k \left(1-\frac{1}{n}\right)^{n-k} \cdot \frac{1.14}{k!} \\ \mut(i, i-k) & \le \left(\frac{i}{n}\right)^k \left(1-\frac{1}{n}\right)^{n-k} \cdot \frac{1.14}{k!}. \end{align*} \end{lemma} The proof is omitted due to space restrictions; it uses arguments from the proof of Lemma~2 in~\cite{Sudholt2012c}. The second inequality follows immediately from the first one due to the symmetry $\mut(i,i-k)=\mut(n-i,n-i+k)$. Now we introduce the concept of drift and derive bounds on its forward and backward components.
\begin{definition} Let $X_t$ be the number of ones in the current search point. For all $1\le i\le n$ the forward and backward drifts are \begin{align*} \Delta^{+}(i)=\;& E[X_{t+1}-i \mid X_{t}=i, X_{t+1}>i]\cdot P(X_{t+1}>i \mid X_{t}=i)\\ \Delta^{-}(i)=\;& E[X_{t+1}-i \mid X_{t}=i, X_{t+1}<i]\cdot P(X_{t+1}<i \mid X_{t}=i)\\ \shortintertext{and the net drift is the expected increase in the number of ones} \Delta (i) =\;& \Delta^+(i) + \Delta^-(i). \end{align*} \end{definition} \begin{lemma} \label{lem:bounds.drift} Consider SSWM on \text{\sc OneMax}\xspace and mutation probability $p_{m}=\frac{1}{n}$. Then for global mutations, the forward and backward drifts can be bounded by \begin{align*} \Delta^{+}(i) \ge\;& \frac{n-i}{n}\left(1-\frac{1}{n}\right)^{n-1} \ensuremath{p_\mathrm{fix}}(1)\\ |\Delta^{-}(i)| \le\;& 1.14\left( 1-\frac{1}{n}\right)^{n-1}\cdot \left(\ensuremath{p_\mathrm{fix}}(-1) + e\cdot \ensuremath{p_\mathrm{fix}}(-2)\right). \shortintertext{For local mutations the relations are as follows} \Delta^{+}(i) =\;& \frac{n-i}{n}\cdot \ensuremath{p_\mathrm{fix}}(1)\\ |\Delta^-(i)| \le\;& \ensuremath{p_\mathrm{fix}}(-1). \end{align*} \label{lem:drift.bounds} \end{lemma} \begin{proof} For global mutations, we first compute the lower bound for the forward drift, \begin{equation} \notag \Delta^{+}(i)=\sum_{j=1}^{n-i}\mut(i,i+j)\cdot j \cdot \ensuremath{p_\mathrm{fix}}(j)\label{eq:Drift.pos} \end{equation} where $\mut(i,i+j)$ is the probability of mutation increasing the \text{\sc OneMax}\xspace value by $j$ and $i$ is the number of ones of the current search point. \begin{align*} \Delta^{+}(i)\ge\;& \mut(i,i+1)\cdot \ensuremath{p_\mathrm{fix}}(1)\\ \ge\;& \frac{n-i}{n}\left(1-\frac{1}{n}\right)^{n-1} \ensuremath{p_\mathrm{fix}}(1). \end{align*} Second, we calculate the upper bound for the backward drift \begin{align} |\Delta^{-}(i)|=&\sum_{j=1}^{i}\mut(i,i-j)\cdot j \cdot \ensuremath{p_\mathrm{fix}}(-j) \notag \intertext{where $j$ is now the number of new zeros.
We can upper bound $\mut(i,i-j)$ by the probability of flipping any $j$ bits, which by Lemma~\ref{lem:mutations-decreasing-ones} yields}\notag \notag \le& \sum_{j=1}^{i}\frac{1.14}{j!}\cdot \left( 1-\frac{1}{n}\right)^{n-1}\cdot j\cdot \ensuremath{p_\mathrm{fix}}(-j). \notag \intertext{Separating the case $j=1$ and bounding the remaining fixation probabilities by $\ensuremath{p_\mathrm{fix}}(-2)$} \notag \le&\; 1.14\left( 1-\frac{1}{n}\right)^{n-1}\ensuremath{p_\mathrm{fix}}(-1) \\ \notag &+ 1.14\left(1-\frac{1}{n}\right)^{n-1} \cdot \sum_{j=2}^{i} \frac{1}{(j-1)!}\cdot \ensuremath{p_\mathrm{fix}}(-2)\\ \notag \le&\; 1.14\left( 1-\frac{1}{n}\right)^{n-1} (\ensuremath{p_\mathrm{fix}}(-1) + e\cdot \ensuremath{p_\mathrm{fix}}(-2)). \notag \end{align} Finally, the case for local mutations is straightforward since the probability of a local mutation increasing the number of ones is $\frac{n-i}{n}$ and that of decreasing it is at most $1$. \end{proof} The following theorem shows that SSWM is efficient on \text{\sc OneMax}\xspace for $N\beta \ge \frac{1}{2}\ln (11n)$, since then $\ensuremath{p_\mathrm{fix}}(1)$ exceeds $n\cdot \ensuremath{p_\mathrm{fix}}(-1)$, allowing for a positive drift even on the hardest fitness level ($n-1$ ones). The upper bound increases with $1/\beta$; this makes sense as for small values of~$\beta$ we have $\ensuremath{p_\mathrm{fix}}(1) \approx 2\beta$ (cf.\ Lemma~\ref{lem:p.fix.bounds}). In this regime absolute fitness differences are small and improvements are only accepted with a small probability. \begin{theorem} \label{the:upper-onemax} For $N\beta \ge \frac{1}{2}\ln(11n)$ and $\beta \in (0,1]$, the expected optimisation time of SSWM on \text{\sc OneMax}\xspace with local or global mutations is $O\left(\frac{n\log n}{\beta} \right)$ for every initial search point.
\end{theorem} \begin{proof} The fixation probabilities can be bounded as follows \begin{align} \ensuremath{p_\mathrm{fix}}(1) &= \frac{1-e^{-2\beta}}{1-e^{-2N\beta}} \ge 1-e^{-2\beta} \notag \intertext{and for $N\beta \ge \frac{1}{2}\ln (11n)$} \ensuremath{p_\mathrm{fix}}(-1) &= \frac{e^{2\beta}-1}{e^{2N\beta}-1} \le \frac{e^{2\beta}-1}{11n-1}\label{eq:upper-bound-on-pfix-minus-one}\\ \ensuremath{p_\mathrm{fix}}(-2) &= \frac{e^{4\beta}-1}{e^{4N\beta}-1} \le \frac{e^{4\beta}-1}{(11n)^{2}-1} = O(n^{-2}). \notag \end{align} Using Lemma~\ref{lem:bounds.drift}, \[ \Delta(i) \ge \frac{1}{e}\left[ \frac{n-i}{n} \cdot (1-e^{-2\beta}) - 1.14\frac{e^{2\beta}-1}{11n-1} - O(n^{-2})\right]. \] We need a positive net drift even in the last step $(n-i=1)$ \begin{align*} \Delta(n-1) &\ge \frac{1}{e}\left[ \frac{1}{n} \cdot (1-e^{-2\beta}) - 1.14\frac{e^{2\beta}-1}{11n-1} - O(n^{-2})\right] \\ &\ge \frac{1}{e}\left[ \frac{1}{11n-1}\left( \frac{11n-1}{n} \cdot (1-e^{-2\beta}) - 1.14(e^{2\beta}-1) \right) \right] \\ &\ge \frac{1}{e}\left[ \frac{1-e^{-2\beta}}{11n-1}\left( 11 -\frac{1}{n} - 1.14\cdot \frac{e^{2\beta}-1}{1-e^{-2\beta}} \right) \right] \intertext{using the relation $e^x=\frac{e^{x}-1}{1-e^{-x}}$} &\ge \frac{1}{e}\left[ \frac{1-e^{-2\beta}}{11n-1}\left( 11 -\frac{1}{n} - 1.14\cdot e^{2\beta} \right) \right] \\ \intertext{since for $\beta \in (0,1]$ we have $1.14\cdot e^{2\beta}\le 1.14\cdot e^{2} < 8.5$} &\ge \frac{1}{e}\left[ \frac{1-e^{-2\beta}}{11n-1}\left( 2.5 -\frac{1}{n} \right) \right] \\ &\ge \frac{1.5}{e}\cdot \frac{1-e^{-2\beta}}{11n-1} \intertext{also for $\beta \in (0,1]$ we have $1.5(1-e^{-2\beta})\ge \beta$} &\ge \frac{\beta}{e}\cdot \frac{1}{11n-1} \end{align*} which is positive for large enough $n$. Therefore we can lower bound the drift at any point as \begin{equation} \Delta(i) = \Omega\left(\frac{n-i}{n}\cdot \beta\right) \label{eq:onemax.drift} \end{equation} Now we apply Johannsen's variable drift theorem~\cite{Johannsen2010} to the number of zeros.
With ${h(z) := E(X_t-X_{t+1}\mid X_{t}=z)}$, the drift theorem yields \begin{align*} E(T \mid X_0) &\leq \dfrac{z_{\min}}{h(z_{\min})}+ \int^{X_0}_{z_{\min}} \dfrac{1}{h(z)} dz \intertext{where $z$ is the number of zeros, $X_t$ the current state and $T$ the optimisation time. Introducing $z_{\min}=1$, $X_0=n$ and} \notag \Delta(i) &=\Omega\left(\frac{z}{n}\cdot \beta\right) =h(z) \end{align*} we obtain an upper bound for the runtime \begin{align*} E(T \mid X_0) &\le \dfrac{1}{h(1)}+ \int^{n}_{1} \dfrac{1}{h(z)} dz = O\left( \frac{n}{\beta} \right) + O\left( \int^{n}_{1} \frac{n}{\beta z}dz \right) \\ &= O\left( \frac{n}{\beta } (1+\log n) \right) = O\left(\frac{n\log n}{\beta} \right). \qedhere \end{align*} \end{proof} \subsection{A Critical Threshold for SSWM on OneMax} The upper bound from Theorem~\ref{the:upper-onemax} required $N \beta \ge \frac{1}{2} \ln(11n) = \frac{1}{2} \ln(n) + O(1)$. This condition is vital since if $N \beta$ is chosen too small, the runtime of SSWM on \text{\sc OneMax}\xspace is exponential with very high probability, as we show next. If $N \beta$ is smaller than $\frac{1}{2} \ln n$ by a factor of $1-\varepsilon$, for some constant~$\varepsilon > 0$, the optimisation time is exponential in~$n$ with overwhelming probability. SSWM therefore exhibits a phase transition behaviour: changing $N \beta$ by a constant factor makes a difference between polynomial and exponential expected optimisation times on \text{\sc OneMax}\xspace. \begin{theorem} \label{the:lower-onemax} If $1 \le N\beta \le \frac{1-\varepsilon}{2} \ln n$ for some ${0 < \varepsilon < 1}$, then the optimisation time of SSWM with local or global mutations on \text{\sc OneMax}\xspace is at least $2^{c n^{\varepsilon/2}}$ with probability ${1-2^{-\Omega(n^{\varepsilon/2})}}$, for some constant $c > 0$.
\end{theorem} The idea behind the proof of Theorem~\ref{the:lower-onemax} is to show that for all search points with at least $n - n^{\varepsilon/2}$ ones, there is a negative drift for the number of ones. This is because for small $N \beta$ the selection pressure is too weak, and worsenings in fitness are more likely than steps where mutation leads the algorithm closer to the optimum. We then use the negative drift theorem with self-loops presented in Rowe and Sudholt~\cite{Rowe2013} (an extension of the negative drift theorem~\cite{Oliveto2011} to stochastic processes with large self-loop probabilities). It is stated in the following for the sake of completeness. \begin{theorem}[Negative drift with self-loops~\cite{Rowe2013}] \label{thm:simplified-drift} Consider a Markov process $X_0, X_1, \dots$ on $\{0, \dots, m\}$ and suppose there exist integers $a, b$ with $0 < a < b \leq m$ and $\varepsilon > 0$ such that for all $a \leq k \leq b$ the expected drift towards~0 is \[ E(k - X_{t+1} \mid X_t = k) < -\varepsilon \cdot (1-p_{k,k}) \] where $p_{k, k}$ is the self-loop probability at state~$k$. Further assume there exist constants $r, \delta > 0$ (i.\,e., independent of $m$) such that for all $k \geq 1$ and all $d \ge 1$ \[ p_{k, k - d}, p_{k, k + d} \leq \frac{r (1 - p_{k,k})}{(1 + \delta)^d}. \] Let $T$ be the first hitting time of a state at most~$a$, starting from $X_1 \geq b$. Let $\ell = b-a$. Then there is a constant $c > 0$ such that \[ \Prob{T \leq 2^{c \ell /r}} = 2^{-\Omega(\ell / r)}. \] \end{theorem} The proof of Theorem~\ref{the:lower-onemax} applies Theorem~\ref{thm:simplified-drift} with respect to the number of zeros on the interval $[0, n^{\varepsilon/2}]$. \begin{proof}[Proof of Theorem~\ref{the:lower-onemax}] We only give a proof for global mutations; the same analysis goes through for local mutations with similar, but simpler calculations.
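As a numerical sanity check (illustrative only, not part of the formal analysis), Kimura's fixation probability $\ensuremath{p_\mathrm{fix}}(\Delta f) = \frac{1-e^{-2\beta\Delta f}}{1-e^{-2N\beta\Delta f}}$, which underlies the bounds in the proof of Theorem~\ref{the:upper-onemax}, can be evaluated directly; the concrete parameter values below are example choices.

```python
import math

def p_fix(delta_f, beta, N):
    # Kimura's fixation probability, used as the acceptance rule of SSWM;
    # for delta_f = 0 the formula degenerates to its limit 1/N
    if delta_f == 0:
        return 1.0 / N
    return (1 - math.exp(-2 * beta * delta_f)) / (1 - math.exp(-2 * N * beta * delta_f))

n = 1000                               # illustrative problem size
beta = 1.0
N = 0.5 * math.log(11 * n) / beta      # chosen so that N * beta = (1/2) ln(11n)
eps = 1e-9                             # slack for floating-point rounding

# the three bounds from the start of the proof of the upper bound on OneMax
assert p_fix(1, beta, N) >= 1 - math.exp(-2 * beta)
assert p_fix(-1, beta, N) <= (math.exp(2 * beta) - 1) / (11 * n - 1) * (1 + eps)
assert p_fix(-2, beta, N) <= (math.exp(4 * beta) - 1) / ((11 * n) ** 2 - 1) * (1 + eps)
```

The checks confirm that, for this parametrisation, accepting a one-step worsening is roughly a factor $n$ less likely than accepting a one-step improvement, which is what drives the positive drift.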
Let $p_{k, j}$ be the probability that SSWM will make a transition from a search point with $k$ ones to one with $j$ ones. We start by pessimistically estimating the transition probabilities and apply the negative drift theorem with regard to pessimistic transition probabilities $p_{k, j}'$ defined later. The drift theorem will be applied taking the number of zeros as distance function to the optimum; our notation refers to numbers of ones for simplicity. Throughout the remainder of the proof we assume $k \ge n - n^{\varepsilon/2}$. From Lemma~\ref{lem:mutations-decreasing-ones}, for every $1 \le j \le n-k$ we have \begin{align} p_{k, k+j} \le\;& \frac{1.14}{j!} \cdot \left(\frac{n-k}{n} \right)^{j} \cdot \left(1 - \frac{1}{n}\right)^{n-j} \cdot \ensuremath{p_\mathrm{fix}}(j)\notag\\ \le\;& \frac{1.14}{j!} \cdot \left(\frac{n-k}{n}\right)^j \cdot \ensuremath{p_\mathrm{fix}}(j)\label{eq:bound-on-onemax-increase}\\ \le\;& 1.14\cdot \left(n^{\varepsilon/2-1}\right)^j \cdot \ensuremath{p_\mathrm{fix}}(j)\notag. \end{align} By Lemma~\ref{lem:p.fix.bounds} we estimate $\ensuremath{p_\mathrm{fix}}(j)$ by $\ensuremath{p_\mathrm{fix}}(j) \le \frac{2\beta j}{1-e^{-2N \beta j}}$. This gives \[ p_{k, k+j} \le \left(n^{\varepsilon/2-1}\right)^j \cdot \frac{3\beta j}{1-e^{-2N\beta j}} =: p_{k, k+j}'. \] The expected drift towards the optimum, $\Delta^+(k)$, is then bounded as follows: \begin{align*} \Delta^+(k) \le\;& \sum_{j=1}^{n-k} j \cdot p_{k, k+j}'\\ \le\;& \sum_{j=1}^{n-k} j \cdot \left(n^{\varepsilon/2-1}\right)^j \cdot \frac{3\beta j}{1-e^{-2N \beta j}}\\ \le\;& \frac{3\beta}{1-e^{-2N \beta}} \sum_{j=1}^{\infty} j^2 \cdot \left(n^{\varepsilon/2-1}\right)^j.\\ \intertext{Using $\sum_{j=1}^\infty j^2 \cdot x^j = \frac{x(1+x)}{(1-x)^3} \le x (1+ 5x)$ for $0 \le x \le 0.09$ (this holds for large enough~$n$ as $x= n^{\varepsilon/2-1} = o(1)$) as well as $N\beta \ge 1$} \le\;& \frac{3\beta}{1-e^{-2}} \cdot n^{\varepsilon/2-1} \cdot \left(1+5n^{\varepsilon/2-1}\right).
\end{align*} On the other hand, \begin{align*} p_{k, k-1} \ge\;& \frac{k}{n} \cdot \left(1 - \frac{1}{n}\right)^{n-1} \cdot \ensuremath{p_\mathrm{fix}}(-1) \ge\; \frac{n-n^{\varepsilon/2}}{en} \cdot \ensuremath{p_\mathrm{fix}}(-1)\\ =\;& \frac{\ensuremath{p_\mathrm{fix}}(-1)}{e} \cdot \left(1 - n^{\varepsilon/2-1}\right) \ge\; \frac{1}{e} \cdot \frac{2\beta}{e^{2N \beta}} \cdot \left(1 - n^{\varepsilon/2-1}\right)\\ \intertext{using $e^{2N \beta} \le e^{(1-\varepsilon)\ln n} = n^{1-\varepsilon}$} \ge\;& \frac{2\beta \cdot n^{\varepsilon}}{en} \cdot \left(1 - n^{\varepsilon/2-1}\right) =: p_{k, k-1}'. \end{align*} We further define $p_{k, k-j}' := 0$ for $j \ge 2$. The expected increase in the number of ones at state~$k$, denoted $\Delta'(k)$, with regard to the pessimistic Markov chain defined by $p_{k, j}'$ is hence at most \begin{align*} & \Delta'(k)\\ \le & \sum_{j=1}^{n-k} j \cdot p_{k, k+j}' - p_{k, k-1}'\\ \le\;& \frac{3\beta}{1-e^{-2}} \cdot n^{\varepsilon/2-1} \cdot \left(1+5n^{\varepsilon/2-1}\right) - \frac{2\beta \cdot n^{\varepsilon}}{en} \cdot \left(1 - n^{\varepsilon/2-1}\right)\\ =\;& 2\beta \cdot n^{\varepsilon/2-1} \cdot \left( \frac{3}{2}\cdot \frac{1+5n^{\varepsilon/2-1}}{1-e^{-2}} - \frac{n^{\varepsilon/2}}{e} \cdot \left(1 - n^{\varepsilon/2-1}\right)\right)\\ =\;& -\Omega(\beta \cdot n^{\varepsilon-1}). \end{align*} Now, the self-loop probability for the pessimistic Markov chain is at least $p_{k, k}' \ge 1 - \sum_{j =1}^{n-k} p_{k, k+j}' - p_{k, k-1}' \ge 1 - \sum_{j =1}^{n-k} j \cdot p_{k, k+j}' - p_{k, k-1}' = 1 - O(\beta n^{\varepsilon-1})$, hence the first condition of the drift theorem is satisfied.
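The series estimate $\sum_{j\ge1} j^2 \cdot x^j = \frac{x(1+x)}{(1-x)^3} \le x(1+5x)$ for $0 \le x \le 0.09$, used in the bound on $\Delta^+(k)$, can be confirmed numerically; the following small sketch is illustrative only.

```python
def weighted_series(x, terms=200):
    # partial sum of sum_{j>=1} j^2 * x^j; converges very fast for x <= 0.09
    return sum(j * j * x ** j for j in range(1, terms))

for k in range(91):                     # x in {0, 0.001, ..., 0.090}
    x = k / 1000
    closed_form = x * (1 + x) / (1 - x) ** 3
    # the closed form matches the series ...
    assert abs(weighted_series(x) - closed_form) < 1e-12
    # ... and is bounded by x(1+5x) on [0, 0.09]
    assert closed_form <= x * (1 + 5 * x) + 1e-15
```

At the endpoint $x = 0.09$ the two sides are $\approx 0.13018$ and $0.1305$, so the constant $5$ cannot be lowered much.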
The second condition on exponentially decreasing transition probabilities follows from $p_{k, k-1}' \le 1-p_{k,k}'$, $p_{k, k-j}' = 0$ for $j \ge 2$ and, for all $j \in \ensuremath{{\mathbb N}}$, \begin{align*} p_{k, k+j}' =\;& \left(n^{\varepsilon/2-1}\right)^j \cdot \frac{3\beta j}{1-e^{-2N \beta j}} \le\; \left(n^{\varepsilon/2-1}\right)^j \cdot \frac{3\beta j}{1-e^{-2}}\\ \intertext{multiplying by $p_{k,k-1}'/p_{k,k-1}'$} =\;& p_{k, k-1}' \cdot \frac{\left(n^{\varepsilon/2-1}\right)^j \cdot \frac{3\beta j}{1-e^{-2}}}{\frac{2\beta \cdot n^{\varepsilon}}{en} \cdot \left(1 - n^{\varepsilon/2-1}\right)}\\ =\;& p_{k, k-1}' \cdot n^{-\varepsilon/2} \cdot \left(n^{\varepsilon/2-1}\right)^{j-1} \cdot \frac{e}{1-n^{\varepsilon/2-1}} \cdot \frac{3}{2}\cdot \frac{j}{1-e^{-2}}\\ \le\;& p_{k, k-1}' \cdot 2^{-j} \le\; (1-p_{k, k}') \cdot 2^{-j} \end{align*} where the penultimate inequality holds for large enough~$n$. This proves the second condition for $\delta := 1$ and $r := 2$. Now the negative drift theorem, applied to the number of zeros, proves the claimed result. \end{proof} \section{On Traversing Fitness Valleys} \label{sec:cliff} We have shown that with the right parameters, SSWM is an efficient hill climber. On the other hand, in contrast to the (1+1)~EA\xspace, SSWM can accept worse solutions with a probability that depends on the magnitude of the fitness decrease. This is reminiscent of the Metropolis algorithm---the latter, however, accepts every improvement with probability~1, whereas SSWM may reject improvements. Jansen and Wegener~\cite{Jansen2007} compared the ability of the (1+1)~EA\xspace and a Metropolis algorithm to cross fitness valleys and found that both showed similar performance on \emph{smooth integer} functions: functions where two Hamming neighbours have a fitness difference of at most~1~\cite[Section~6]{Jansen2007}.
We consider a similar function, generalising a construction by J{\"a}gersk{\"u}pper and Storch~\cite{Jagerskupper2007a}: the function $\cliff{d}$ is defined such that non-elitist algorithms have a chance to jump down a ``cliff'' of height roughly~$d$ and to traverse a fitness valley of Hamming distance~$d$ to the optimum (see Figure \ref{fig-cliff}). \input{scheme.tex} \vskip -2em \begin{definition}[Cliff]\label{def-cliff} \[ \cliff{d}(x) = \begin{cases} \ones{x} &\mbox{if } \ones{x} \leq n-d \\ \ones{x}-d+\frac{1}{2} & \mbox{otherwise} \end{cases} \] where $\ones{x} = \sum_{i=1}^n x_i$ counts the number of ones. \end{definition} The (1+1)~EA\xspace typically optimises $\cliff{d}$ through a direct jump from the top of the cliff to the optimum, which takes expected time $\Theta(n^d)$. \begin{theorem} \label{the:ea-on-cliff} The expected optimisation time of the (1+1)~EA\xspace on $\cliff{d}$, for $2 \le d \le n/2$, is $\Theta(n^d)$. \end{theorem} In order to prove Theorem~\ref{the:ea-on-cliff}, the following lemma will be useful for showing that the top of the cliff is reached with good probability. More generally, it shows that the conditional probability of increasing the number of ones in a search point to~$j$, given that it is increased to~$j$ or higher, is at least $1/2$. \begin{lemma} \label{lem:conditional-mut} For all $0 \le i < j \le n$, \[ \frac{\mut(i, j)}{\sum_{k=j}^n \mut(i, k)} \ge \frac{1}{2}. \] \end{lemma} The proof of this lemma is presented in the appendix. \begin{proof}[Proof of Theorem~\ref{the:ea-on-cliff}] From any search point with $i < n-d$ ones, the probability of reaching a search point with higher fitness is at least $\frac{n-i}{en}$. The expected time for accepting a search point with at least $n-d$ ones is at most $\sum_{i=0}^{n-d-1} \frac{en}{n-i} = O(n \log n)$. Note that this is $O(n^d)$ since $d \ge 2$.
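For concreteness, Definition~\ref{def-cliff} can be implemented directly; the following sketch (the bit-list encoding and the instance size are illustrative choices) confirms that the top of the cliff has higher fitness than every valley point, while the all-ones string remains the unique optimum.

```python
def cliff(x, d):
    # CLIFF_d from the definition above: |x|_1 if |x|_1 <= n-d, else |x|_1 - d + 1/2
    n = len(x)
    ones = sum(x)
    return ones if ones <= n - d else ones - d + 0.5

n, d = 20, 5
top = [1] * (n - d) + [0] * d          # top of the cliff: exactly n-d ones
valley = [1] * (n - 1) + [0]           # a valley point with n-1 ones
optimum = [1] * n

assert cliff(top, d) == n - d                  # fitness 15
assert cliff(valley, d) == n - 1 - d + 0.5     # fitness 14.5, below the top
assert cliff(optimum, d) == n - d + 0.5        # fitness 15.5, the unique maximum
```

The $+\frac{1}{2}$ offset guarantees that every valley point has strictly lower fitness than the top of the cliff, so an elitist algorithm standing on the top can only proceed via a direct jump to the optimum.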
We claim that with probability $\Omega(1)$, the first such search point has $n-d$ ones: with probability at least $1/2$ the initial search point will have at most $n-d$ ones. Invoking Lemma~\ref{lem:conditional-mut} with $j := n-d$, with probability at least $1/2$ the top of the cliff is reached before any other search point with at least $n-d$ ones. Once on top of the cliff, the algorithm has to jump directly to the optimum to overcome it. The probability of such a jump is $\frac{1}{n^d} (1-\frac{1}{n})^{n-d}$ and therefore the expected time to make this jump is $\Theta(n^d)$. \end{proof} SSWM with global mutations also has an opportunity to make a direct jump to the optimum. However, compared to the (1+1)~EA\xspace its performance improves slightly by making shorter jumps and accepting a search point of inferior fitness. The following theorem shows that for large enough cliffs, $d = \omega(\log n)$, the expected optimisation time is smaller by a factor of $e^{\Omega(d)}$ than that of the (1+1)~EA\xspace. Although both algorithms need a long time for large~$d$, the speedup of SSWM is significant. \begin{theorem} The expected optimisation time of SSWM with global mutations and $\beta=1, N = \frac{1}{2}\ln(11 n)$ on $\cliff{d}$ with $d = \omega(\log n)$ is at most~$n^{d}/e^{\Omega(d)}$. \end{theorem} \begin{proof} We define $R$ as the expected time for reaching a search point with either $n-d$ or $n$ ones, when starting with a worst possible non-optimal search point. Let $T_{\mathrm{peak}}$ be the random optimisation time when starting with any search point of $n-d$ ones, hereinafter called a \emph{peak}. Then the expected optimisation time from any initial point is at most $R + \E{T_{\mathrm{peak}}}$. Let $p_\mathrm{success}$ be the probability that SSWM, starting at a peak, reaches the optimum before reaching a peak again. We call such a time period a \emph{trial}.
After the end of a trial, taking at most $R$ expected generations, with probability $1-p_\mathrm{success}$ SSWM returns to a peak again, so \begin{align} & \E{T_{\mathrm{peak}}} \le R + (1-p_\mathrm{success}) \cdot \E{T_{\mathrm{peak}}}\notag\\ \Leftrightarrow\; & \E{T_{\mathrm{peak}}} \le \frac{R}{p_\mathrm{success}}.\label{eq:Tpeak-recurrence} \end{align} We first bound the worst-case time to return to a peak or a global optimum as $R = O(n \log n)$. Let $S_1$ be the set of all search points with at most $n-d$ ones and ${S_2 := \{0, 1\}^n \setminus S_1}$. As long as the current search point remains within $S_2$, SSWM essentially behaves like on \onemax. Repeating arguments from the proof of Theorem~\ref{the:upper-onemax}, in expected time $O((n \log n)/\beta) = O(n \log n)$ (as here $\beta=1$) SSWM either finds a global optimum or a search point in~$S_1$. Likewise, as long as the current search point remains within $S_1$, SSWM essentially behaves like on \onemax and within expected time $O(n \log n)$ either a peak or a search point in $S_2$ is found. SSWM can switch indefinitely between $S_1$ and $S_2$ within one trial, as long as no optimum or peak is reached. The conditional probability of creating a peak---when from a search point with $i < n-d$ ones either a peak or a non-optimal search point in $S_2$ is reached---is \begin{align*} \frac{\mut(i, n-d) \cdot \ensuremath{p_\mathrm{fix}}(n-d-i)}{\mut(i, n-d) \cdot \ensuremath{p_\mathrm{fix}}(n-d-i) + \sum_{k=n-d+1}^{n-1} \mut(i, k) \cdot \ensuremath{p_\mathrm{fix}}(k-i-d+1/2)} \ge\;& \frac{\mut(i, n-d)}{\sum_{k=n-d}^n \mut(i, k)} \end{align*} as $\ensuremath{p_\mathrm{fix}}(n-d-i) \ge \ensuremath{p_\mathrm{fix}}(k-i-d+1/2)$ for all $n-d < k < n$. By Lemma~\ref{lem:conditional-mut}, applied with $j := n-d$, the latter fraction is at least~$1/2$. Hence SSWM in expectation only makes $O(1)$ transitions from $S_1$ to $S_2$, and the overall expected time spent in $S_1$ and $S_2$ is at most $R = O(1) \cdot O(n \log n)$.
The remainder of the proof now shows a lower bound on $p_\mathrm{success}$, the probability of a trial being successful. A sufficient condition for a successful trial is that the next mutation creates a search point with $n-d+k$ ones, for some integer $1 \le k \le d$, that this point is accepted, and that from there the global optimum is reached before returning to a peak. We estimate the probabilities for these events separately in order to get an overall lower bound on the probability of a trial being successful. From any peak there are $\binom{d}{k}$ search points at Hamming distance~$k$ that have $n-d+k$ ones. Considering only such mutations, the probability of a mutation increasing the number of ones from $n-d$ by~$k$ is at least \begin{align*} \mut(n-d, n-d+k) \ge\;& \frac{1}{n^k} \cdot \left(1 - \frac{1}{n}\right)^{n-1} \cdot \binom{d}{k}\\ \ge\;& \frac{1}{en^k} \cdot \left(\frac{d}{k}\right)^k. \end{align*} The probability of accepting such a move is \[ \ensuremath{p_\mathrm{fix}}(k-d+1/2) = \frac{e^{2\beta(d-k-1/2)}-1}{e^{2N \beta(d-k-1/2)}-1} \ge \frac{e^{2(d-k-1/2)}-1}{(11 n)^{(d-k-1/2)}}. \] We now fix $k := \lfloor d/e\rfloor$ and estimate the probability of making and accepting a jump of length~$k$: \begin{align*} & \mut(n-d, n-d+k) \cdot \ensuremath{p_\mathrm{fix}}(k-d+1/2)\\ \;&\ge \frac{1}{en^k} \cdot \left(\frac{d}{k}\right)^k \cdot \frac{e^{2(d-k-1/2)}-1}{(11 n)^{(d-k-1/2)}}\\ \;&= \Omega\left(n^{-d+1/2} \cdot \left(\frac{d}{k}\right)^k \cdot \left(\frac{e^2}{11}\right)^{d-k}\right)\\ \;&= \Omega\left(n^{-d+1/2} \cdot \left({e^{1/e}} \cdot \left(\frac{e^2}{11}\right)^{1-1/e}\right)^{d}\right)\\ \;&= \Omega\left(n^{-d+1/2} \cdot \left(\frac{10}{9}\right)^{d}\right). \end{align*} Finally, we show that, if SSWM does make this accepted jump, with high probability it climbs up to the global optimum before returning to a search point in $S_1$. 
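The constant $10/9$ obtained in the last step can be double-checked numerically: with $k = \lfloor d/e \rfloor$ the base of the exponential term is $e^{1/e} \cdot (e^2/11)^{1-1/e} \approx 1.123 > 10/9$. An illustrative check (the value $d = 100$ is an arbitrary example):

```python
import math

# base of the exponential term (d/k)^k * (e^2/11)^(d-k) with the idealised k = d/e
base = math.exp(1 / math.e) * (math.e ** 2 / 11) ** (1 - 1 / math.e)
assert 10 / 9 < base < 1.13

# sanity check with the integer choice k = floor(d/e) for a concrete d
d = 100
k = math.floor(d / math.e)
value = (d / k) ** k * (math.e ** 2 / 11) ** (d - k)
assert value > (10 / 9) ** d
```

Any jump length of the form $k = cd$ with constant $0 < c < 1$ gives an exponential base of $c^{-c} \cdot (e^2/11)^{1-c}$; the choice $c = 1/e$ maximises the first factor.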
To this end we work towards applying the negative drift theorem to the number of ones in the interval $[a := \lceil n-d + k/2 \rceil, b := n-d+k]$ and show that, since we start in state~$b$, a state of at most~$a$ is unlikely to be reached in polynomial time. We first show that the drift is typically equal to that on \onemax. For every search point with more than $a$ ones, in order to reach $S_1$, at least $k/2$ bits have to flip. Until this happens, SSWM behaves like on \text{\sc OneMax}\xspace and hence reaches either a global optimum or a point in $S_1$ in expected time $O(n \log n)$. The probability of a mutation flipping at least $k/2$ bits is at most $1/((\ln n)/(2e))! = n^{-\Omega(\log n)}$, so the probability that this happens during a phase of expected length $O(n \log n)$ is still $n^{-\Omega(\log n)}$. Assuming such jumps do not occur, we can then use drift bounds from the analysis of \text{\sc OneMax}\xspace for states with at least $a$ ones. From the proof of Theorem~\ref{the:upper-onemax} and \eqref{eq:onemax.drift} we know that the drift at $i$ ones for $\beta=1$ satisfies \[ \Delta(i) \ge \Omega\left(\frac{n-i}{n}\right). \] Let $p_{i, j}$ denote the transition probability from a state with $i$ ones to one with $j$ ones. The probability of decreasing the current state is at most $\ensuremath{p_\mathrm{fix}}(-1) = O(1/n)$ due to~\eqref{eq:upper-bound-on-pfix-minus-one}. The probability of increasing the current state is at most $(n-i)/n$ as a necessary condition is that one out of $n-i$ zeros needs to flip. Hence for $i \le b$, which implies $n-i = \omega(1)$, the self-loop probability is at least \[ p_{i, i} \ge 1 - O\left(\frac{1}{n}\right) - \frac{n-i}{n} = 1 - O\left(\frac{n-i}{n}\right). \] Together, we get $\Delta(i) \ge \Omega(1-p_{i, i})$, establishing the first condition of Theorem~\ref{thm:simplified-drift}.
Note that $\ensuremath{p_\mathrm{fix}}(1) = \frac{1-e^{-2}}{1-(11n)^{-1}} = \Omega(1)$, hence \begin{equation} \label{eq:bound-on-1-pii} 1 - p_{i, i} \ge p_{i, i+1} \ge \frac{n-i}{en} \cdot \ensuremath{p_\mathrm{fix}}(1) = \Omega\left(\frac{n-i}{n}\right). \end{equation} The second condition follows for improving jumps from $i$ to $i+j$, $j \ge 1$, from Lemma~\ref{lem:mutations-decreasing-ones} and~\eqref{eq:bound-on-1-pii}: \[ p_{i, i+j} \le 1.14 \cdot \left(\frac{n-i}{n}\right)^j \cdot \frac{1}{j!} \cdot \ensuremath{p_\mathrm{fix}}(j) \le 1.14 \cdot \frac{n-i}{n} \cdot \frac{1}{j!} \le (1-p_{i, i}) \cdot \frac{O(1)}{2^j}. \] For backward jumps we get, for $1 \le j \le k/2$, and $n$ large enough, \[ p_{i, i-j} \le \ensuremath{p_\mathrm{fix}}(-j) \le \frac{e^{2j}}{e^{2Nj}-1} = \frac{e^{2j}}{(11 n)^{j}-1} \le 2^{-j}. \] Now Theorem~\ref{thm:simplified-drift} can be applied with $r = O(1)$ and $\delta = 1$, yielding that the probability of reaching a state of $a$ or less in $n^{\omega(1)}$ steps is $n^{-\omega(1)}$. This implies that following a length-$k$ jump, a trial is successful with probability $1-n^{-\omega(1)}$. This establishes $p_\mathrm{success} = \Omega\left(n^{-d+1/2} \cdot \left(\frac{10}{9}\right)^{d}\right)$. Plugging this into~\eqref{eq:Tpeak-recurrence}, adding the time $R$ to reach the peak initially, and using that $O(n^{1/2}\log n) \cdot (9/10)^d = e^{-\Omega(d)}$ for $d = \omega(\log n)$ yields the claimed bound. \end{proof} \section{SSWM Outperforms (1+1)~EA on Balance} \label{sec:balance} Finally, we investigate a feature that distinguishes SSWM from the (1+1)~EA\xspace as well as the Metropolis algorithm: the fact that larger improvements are more likely to be accepted than smaller improvements. To this end, we consider the function \text{\sc Balance}\xspace, originally introduced by Rohlfshagen, Lehre, and Yao~\cite{RohlfshagenLehreYao2009} as an example where rapid dynamic changes in dynamic optimisation can be beneficial.
The function has also been studied in the context of stochastic ageing by Oliveto and Sudholt~\cite{Oliveto2014}. In its static (non-dynamic) form, \text{\sc Balance}\xspace can be illustrated by a two-dimensional plane, whose coordinates are determined by the number of leading ones (LO) in the first half of the bit string, and the number of ones in the second half, respectively. The former has a steeper gradient than the latter, as the leading ones part is weighted by a factor of~$n$ in the fitness (see Figure \ref{fig-balance}). \begin{definition}[\text{\sc Balance}\xspace~\cite{RohlfshagenLehreYao2009}]\label{def-balance} Let $a,b \in \{0,1\}^{n/2}$ and $x = ab \in \{0,1\}^n$. Then, $\text{\sc Balance}\xspace(x) = $ \begin{equation*} \begin{cases} n^3 & \text{if } \lo{a} = n/2, \text{else}\\ \ones{b} + n \cdot \lo{a} & \text{if } n/16 < \ones{b} < 7n/16 , \text{else}\\ n^2 \cdot \lo{a} & \text{if } \zeros{a} > \sqrt{n}, \text{else}\\ 0 & \text{otherwise.} \end{cases} \end{equation*} where $\ones{x} = \sum_{i=1}^{n/2} x_i$ counts the number of ones, $\zeros{x}$ the number of zeros, and $\lo{x} :=\sum_{i=1}^{n/2} \prod_{j=1}^i x_j$ the number of leading ones.
\end{definition} \begin{figure} \center \begin{tikzpicture}[scale=0.4] \draw (0,0) rectangle (10,5); \draw (0,1) -- (9,1); \draw (0,4) -- (9,4); \draw (9,0) -- (9,5); \draw (8,0) -- (8,1); \draw (8,4) -- (8,5); \node at (8.5,0.5) {0}; \node at (8.5,4.5) {0}; \node at (9.5,2.5) {$n^3$}; \node at (4,0.5) {$n^2 \cdot \lo{a}$}; \node at (4,4.5) {$n^2 \cdot \lo{a}$}; \node at (4,2.5) {$n \cdot \lo{a} + \ones{b}$}; \draw[->] (0,-0.5) -- node[below]{$\lo{a}$} (10,-0.5); \draw[->] (-0.5,0) --node[left]{$\ones{b}$} (-0.5,5); \end{tikzpicture} \caption{\boldmath Visualisation of \text{\sc Balance}\xspace~\cite{RohlfshagenLehreYao2009}.\label{fig-balance}} \end{figure} The function is constructed in such a way that all points with a maximum number of leading ones are global optima, whereas increasing the number of ones in the second half beyond a threshold of $7n/16$ (or decreasing it below a symmetric threshold of $n/16$) leads to a trap, a region of local optima that is hard to escape from. Rohlfshagen, Lehre, and Yao~\cite[Theorem~3]{RohlfshagenLehreYao2009} showed the following lower bound for the (1+1)~EA\xspace, specialised to non-dynamic optimisation: \begin{theorem}[\cite{RohlfshagenLehreYao2009}] The expected optimisation time of the (1+1)~EA\xspace on \text{\sc Balance}\xspace is $n^{\Omega(n^{1/2})}$. \end{theorem} We next show that SSWM with high probability finds an optimum in polynomial time. For appropriately small~$\beta$ we have sufficiently many successes on the LO-part to find an optimum before the \text{\sc OneMax}\xspace-part reaches the region of local optima. This is because for small~$\beta$ the probability of accepting small improvements is small. The fact that SSWM is slower than the (1+1)~EA\xspace on \onemax by a factor of $O(1/\beta)$ turns into an advantage over the (1+1)~EA\xspace on \text{\sc Balance}\xspace.
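A direct implementation of Definition~\ref{def-balance} makes the case distinctions explicit; the following sketch (the bit-list encoding and the instance size $n = 32$ are illustrative choices) exercises the optimum, the central plateau, the $n^2 \cdot \lo{a}$ region and the value~$0$.

```python
import math

def leading_ones(a):
    # LO(a): length of the prefix of ones
    lo = 0
    for bit in a:
        if bit != 1:
            break
        lo += 1
    return lo

def balance(x):
    # BALANCE from the definition above; x = ab with |a| = |b| = n/2
    n = len(x)
    a, b = x[:n // 2], x[n // 2:]
    lo, ones_b = leading_ones(a), sum(b)
    zeros_a = len(a) - sum(a)
    if lo == n // 2:
        return n ** 3
    if n / 16 < ones_b < 7 * n / 16:
        return ones_b + n * lo
    if zeros_a > math.sqrt(n):
        return n ** 2 * lo
    return 0

n = 32
b_mid = [1] * 8 + [0] * 8                                    # ones(b) = 8 in (n/16, 7n/16)
assert balance([1] * 16 + b_mid) == n ** 3                   # optimum: LO(a) = n/2
assert balance([1] * 3 + [0] * 13 + b_mid) == 8 + n * 3      # central plateau
assert balance([1] * 3 + [0] * 13 + [0] * 16) == n ** 2 * 3  # zeros(a) > sqrt(n)
assert balance([1] * 12 + [0] * 4 + [0] * 16) == 0           # trap value 0
```

Note the ordering of the cases: once $\ones{b}$ leaves the interval $(n/16, 7n/16)$, the fitness no longer rewards progress in~$b$, which is exactly the trap structure exploited in the lower bound for the (1+1)~EA\xspace.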
The following lemma shows that SSWM effectively uses elitist selection on the LO-part of the function in the sense that every decrease is rejected, with overwhelming probability. \begin{lemma} \label{lem:lo-elitism} For every~$x = ab$ with $n/16 < \ones{b} < 7n/16$ and $\beta = n^{-3/2}$ and $N \beta = \ln n$, the probability of SSWM with local or global mutations accepting a mutant~$x'=a'b'$ with $\lo{a'} < \lo{a}$ and $n/16 < \ones{b'} < 7n/16$ is $O(n^{-n})$. \end{lemma} \begin{proof} The loss in fitness is at least $n-(\ones{b'}-\ones{b}) \ge n/2$. The probability of SSWM accepting such a loss is at most \begin{align} \notag \ensuremath{p_\mathrm{fix}}(-n/2) & = \frac{1-e^{-2\beta(-n/2)}}{1-e^{-2N \beta(-n/2)}} \le \frac{e^{2\beta(n/2)}}{e^{2N \beta(n/2)}-1}. \end{align} Assuming $\beta = n^{-3/2}$ and $N \beta = \ln n$, this is at most \begin{align*} \frac{e^{\frac{\sqrt{n}}{n}}}{n^{n}-1} \le \frac{e}{n^{n}-1} = O(n^{-n}). \qquad\qedhere \end{align*} \end{proof} The following lemma establishes the optimisation time of the SSWM algorithm on either the \text{\sc OneMax}\xspace or the LO-part of \text{\sc Balance}\xspace. For global mutations we restrict our considerations to \emph{relevant steps}, defined as steps where no leading one in the first half of the bit string is flipped. The probability of a relevant step is always at least $(1-1/n)^{n/2} \approx e^{-1/2}$. When using local mutations, all steps are defined as relevant. \begin{lemma} \label{lem:time-on-lo-part} Let $\beta = n^{-3/2}$ and $N \beta = \ln n$. With probability ${1-e^{-\Omega(n^{1/2})}}$, SSWM with either local or global mutations either optimises the LO part or reaches the trap (all search points with fitness $n^2 \cdot \lo{a}$) within \[ T := \frac{n^2}{4} \cdot \frac{1}{\ensuremath{p_\mathrm{fix}}(n-\sqrt{n})} \cdot \left(1 + n^{-1/4}\right) \] relevant steps.
\end{lemma} \begin{proof} Consider a relevant step, implying that global mutations will leave all leading ones intact. With probability $1/n$ a local or global mutation will flip the first 0-bit. This increases the fitness by $k \cdot n - \Delta_{\mathrm{OM}}$, where $\Delta_{\mathrm{OM}}$ is the difference in the \text{\sc OneMax}\xspace-value of~$b$ caused by this mutation and $k$ is the number of consecutive 1-bits following this bit position after mutation. The latter bits are called \emph{free riders} and it is well known (see~\cite[Lemma~1 and proof of Theorem~2]{Lehre2012}) that the number of free riders follows a geometric distribution with parameter~$1/2$, only capped by the number of bits to the end of the bit string~$a$. The probability of flipping at least $\sqrt{n}$ bits in one global mutation is at most $1/(\sqrt{n})! = e^{-\Omega(\sqrt{n})}$ and the probability that this happens at least once in $T$ relevant steps is still of the same order (using that $T = \poly{n}$ as $\ensuremath{p_\mathrm{fix}}(n-\sqrt{n}) \ge 1/N \ge 1/\poly{n}$). We assume in the following that this does not happen, which allows us to assume $\Delta_{\mathrm{OM}} \le \sqrt{n}$. We also assume that the number of leading ones is never decreased during non-relevant steps as the probability of accepting such a fitness decrease is $O(n^{-n})$ by Lemma~\ref{lem:lo-elitism} and the expected number of non-relevant steps before $T$ relevant steps have occurred is $O(T)$. Under these assumptions the number of leading ones can never decrease, and any increase by mutation is accepted with probability at least $\ensuremath{p_\mathrm{fix}}(n-\sqrt{n})$. In a relevant step, the probability of increasing the number of leading ones is hence at least $1/n \cdot \ensuremath{p_\mathrm{fix}}(n-\sqrt{n})$ and the expected number of such improvements in \[ T := \frac{n^2}{4} \cdot \frac{1}{\ensuremath{p_\mathrm{fix}}(n-\sqrt{n})} \cdot (1+n^{-1/4}) \] relevant steps is at least $n/4 + n^{3/4}/4$.
By Chernoff bounds~\cite{Doerr2011chapter}, the probability that fewer than $n/4 + n^{3/4}/8$ improvements happen is $e^{-\Omega(n^{1/2})}$. Also the probability that during this number of improvements fewer than $n/4 - n^{3/4}/8$ free riders occur is $e^{-\Omega(n^{1/2})}$. If these two rare events do not happen, an LO-value of $n/2$ is reached before time~$T$. Taking the union bound over all rare failure probabilities proves the claim. \end{proof} We now show that the \text{\sc OneMax}\xspace part is not optimised before the LO part. \begin{lemma} \label{lem:time-on-om-part} Let $\beta = n^{-3/2}$, $N \beta = \ln n$, and $T$ be as in Lemma~\ref{lem:time-on-lo-part}. The probability that SSWM starting with $a_0b_0$ such that $n/4 \le \ones{b_0} \le n/4 + n^{3/4}$ creates a search point $ab$ with $\ones{b} \le n/16$ or $\ones{b} \ge 7n/16$ in~$T$ relevant steps is $e^{-\Omega(n^{1/2})}$. \end{lemma} It will become obvious that in $T$ relevant steps SSWM typically makes progress of $O(n)$ on the \text{\sc OneMax}\xspace part. The proof of Lemma~\ref{lem:time-on-om-part} requires a careful and delicate analysis to show that the constant factors are small enough such that the stated thresholds for $\ones{b}$ are not surpassed. \begin{proof}[Proof of Lemma~\ref{lem:time-on-om-part}] We only prove that, with the claimed probability, no search point with $\ones{b} \ge 7n/16$ is reached. The probability of reaching a search point with $\ones{b} \le n/16$ is clearly no larger, and a union bound for these two events leads to a factor of 2 absorbed in the asymptotic notation. Note that for $\beta = n^{-3/2}$ we have \[ \ensuremath{p_\mathrm{fix}}(n-\sqrt{n}) \ge \frac{2\beta(n-\sqrt{n})}{1+2\beta(n-\sqrt{n})} \ge 2\beta n \cdot (1-O(n^{-1/2})). \] Hence \[ T \le \frac{n^2}{4} \cdot \frac{1}{2\beta n} \cdot \left(1 + O(n^{-1/2})\right) = \frac{n}{8\beta} \cdot \left(1 + O(n^{-1/2})\right).
\] We call a relevant step \emph{improving} if the number of ones in~$b$ increases and the step is accepted. We first consider only steps where the number of leading ones stays the same. Then the probability that the \text{\sc OneMax}\xspace value increases from~$k$ by~$j$, adapting Lemma~\ref{lem:mutations-decreasing-ones} to a string of length~$n/2$, is at most \begin{align*} & \left(\frac{n/2-k}{n}\right)^j \cdot \frac{1.14}{j!} \cdot \ensuremath{p_\mathrm{fix}}(j)\\ \intertext{using $n/2 - k \le n/4$} \le\;& \frac{1.14 \cdot 4^{-j}}{j!} \cdot \ensuremath{p_\mathrm{fix}}(j) \leq \frac{1.14 \cdot 4^{-j}}{j!} \cdot \frac{2\beta j}{1-e^{-2N \beta j}}\\ \le\;& 2.28\beta \cdot 4^{-j} \cdot \frac{1}{1-e^{-2N \beta j}} =: p_j. \end{align*} In the following, we work with the pessimistic transition probabilities~$p_j$. Note that for all $j \ge 1$ \begin{align*} \frac{p_j}{p_1} = 4^{-(j-1)} \cdot \frac{1-e^{-2N \beta}}{1-e^{-2N \beta j}} \le 4^{-(j-1)}. \end{align*} Let $p^+ := p_1 \cdot \frac{4}{3}$; then $p^+$ is an upper bound on the probability of an improving step since \begin{align*} \sum_{j=1}^{\infty} p_j \le\;& p_1 \cdot \sum_{j=1}^\infty 4^{-(j-1)} = p_1 \cdot \frac{4}{3} = p^+. \end{align*} The conditional probability of advancing by~$j$, given an improving step, is then bounded by \begin{align*} \frac{p_j}{p^+} \le 4^{-(j-1)} \cdot \frac{p_1}{p^+} = \left(1 - \frac{3}{4}\right)^{j-1} \cdot \frac{3}{4}, \end{align*} which corresponds to a geometric distribution with parameter~$3/4$. Now, by Chernoff bounds, the probability of having more than $S := (1+n^{-1/4}) \cdot p^+ \cdot T$ improving steps in $T$ relevant steps is $e^{-\Omega(n^{1/2})}$. Using a Chernoff bound for geometric random variables~\cite[Theorem~1.14]{Doerr2011chapter}, the probability of $S$ improving steps yielding a total progress of at least ${(1+n^{-1/4}) \cdot 4/3 \cdot S}$ is $e^{-\Omega(n^{1/2})}$.
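The geometric decay of the pessimistic probabilities $p_j$ and the role of $p^+ = \frac{4}{3} p_1$ can be checked numerically; an illustrative sketch (the value of $n$ is an arbitrary example):

```python
import math

n = 10 ** 6                   # illustrative problem size
beta = n ** -1.5
N_beta = math.log(n)          # N * beta = ln n, as in the lemma

def p_pessimistic(j):
    # pessimistic transition probability p_j = 2.28 * beta * 4^{-j} / (1 - e^{-2 N beta j})
    return 2.28 * beta * 4.0 ** -j / (1 - math.exp(-2 * N_beta * j))

p_plus = (4 / 3) * p_pessimistic(1)
slack = 1 + 1e-9              # slack for floating-point rounding

# p^+ upper-bounds the total probability of an improving step
assert sum(p_pessimistic(j) for j in range(1, 200)) <= p_plus * slack

# conditional advance probabilities are dominated by a Geometric(3/4) law
for j in range(1, 50):
    assert p_pessimistic(j) / p_plus <= (1 / 4) ** (j - 1) * (3 / 4) * slack
```

Since a geometric distribution with parameter $3/4$ has mean $4/3$, this is exactly why the expected progress per improving step is bounded by $4/3$ in the Chernoff argument above.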
If none of these rare events happen, the progress is at most \begin{align*} & (1+O(n^{-1/4})) \cdot \frac{4}{3} \cdot p^+ \cdot T\\ =\;& (1+O(n^{-1/4})) \cdot \frac{16}{9} \cdot p_1 \cdot T\\ \le\;& (1+O(n^{-1/4})) \cdot \frac{1.14}{9} \cdot n. \end{align*} We also have at most $n/2$ steps where the number of leading ones increases. If the number of leading ones increases by $\delta \ge 1$, the fitness increase is $\delta n + \ones{b'} - \ones{b}$. Hence the above estimations of jump lengths are not applicable. We call these \emph{special} steps; they are unorthodox as the large fitness increase makes it likely that any mutation on the \text{\sc OneMax}\xspace part is accepted. We show that the progress on the \text{\sc OneMax}\xspace part across all special steps is $O(n^{3/4})$ with high probability. We grant the algorithm an advantage if we assume that, after initialising with $\ones{b} \ge n/4$, no search point with $\ones{b} < n/4$ is ever reached\footnote{Otherwise, we restart our considerations from the first point in time where $\ones{b} \ge n/4$ again, replacing $T$ with the number of remaining steps. With overwhelming probability we will then again have $\ones{b} \le n/4 + n^{3/4}$.}. Under this assumption we always have at least as many 1-bits as 0-bits in $b$, and mutation in expectation flips at least as many 1-bits to 0 as 0-bits to 1. Then the progress in $\ones{b}$ in one special step increasing the number of leading ones by $d$ can be described as follows. Imagine a matching (pairing) between all bits in $b$ such that each pair contains at least one 1-bit. Let $X_i$ denote the random change in $\ones{b}$ by the $i$-th pair. If the pair has two 1-bits, $X_i \le 0$ with probability~1. Otherwise, we have $X_i = 1$ if the 0-bit in the pair is flipped, the 1-bit in the pair is not flipped, and the mutant is accepted (which depends on the overall $\ones{b}$-value in the mutant). 
The potential fitness increase is at most $d n + n/2$ as the range of $\ones{b}$-values is $n/2$. Likewise, we have $X_i = -1$ if the 0-bit is not flipped, the 1-bit is flipped, and the mutant is accepted (which again depends on the overall $\ones{b}$-value in the mutant). The fitness increase is at least $d n - n/2$. With the remaining probability we have $X_i = 0$. Hence for global mutations (for local mutations simply drop the $1-1/n$ term) the total progress in a special step increasing $\lo{a}$ by~$d$ is stochastically dominated by a sum of independent variables $Y_1, \dots, Y_{n/4}$ where $\Prob{Y_i = \pm 1} = 1/n \cdot (1-1/n) \cdot \ensuremath{p_\mathrm{fix}}(d n \pm n/2)$ and $Y_i = 0$ with the remaining probability. There is a bias towards increasing the number of ones due to differences in the arguments of $\ensuremath{p_\mathrm{fix}}$: $\E{Y_i} = 1/n \cdot (1-1/n) \cdot (\ensuremath{p_\mathrm{fix}}(d n + n/2) - \ensuremath{p_\mathrm{fix}}(d n - n/2))$. Using the definition of $\ensuremath{p_\mathrm{fix}}$ and the preconditions $\beta = n^{-3/2}$, $N \beta = \ln n$, this difference is bounded as \begin{align*} & \ensuremath{p_\mathrm{fix}}(d n + n/2) - \ensuremath{p_\mathrm{fix}}(d n - n/2)\\ =\;& \frac{1-e^{-2d n^{-1/2} - n^{-1/2}}}{1-n^{-2d n - n}} - \frac{1-e^{-2d n^{-1/2} + n^{-1/2}}}{1-n^{-2d n + n}}\\ =\;& (1+o(1)) \left(\left(1-e^{-2d n^{-1/2} - n^{-1/2}}\right) - \left(1-e^{-2d n^{-1/2} + n^{-1/2}}\right)\right)\\ =\;& (1+o(1)) \cdot e^{-2d n^{-1/2}} \left(e^{n^{-1/2}}-e^{- n^{-1/2}}\right)\\ \le\;& (1+o(1)) \cdot e^{-2d n^{-1/2}} \left((1+2n^{-1/2})-(1-n^{-1/2})\right)\\ =\;& (1+o(1)) \cdot e^{-2d n^{-1/2}} \cdot 3n^{-1/2} \end{align*} where in the last inequality we have used $1+x \le e^x$ for all~$x$ and $e^x \le 1+2x$ for $0 \le x \le 1$.
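The bound just derived on $\ensuremath{p_\mathrm{fix}}(dn+n/2) - \ensuremath{p_\mathrm{fix}}(dn-n/2)$ can be confirmed for concrete values; an illustrative numerical sketch (the instance size is an example choice):

```python
import math

def p_fix(delta_f, beta, N):
    # Kimura's fixation probability
    return (1 - math.exp(-2 * beta * delta_f)) / (1 - math.exp(-2 * N * beta * delta_f))

n = 10 ** 4
beta = n ** -1.5
N = math.log(n) / beta        # so that N * beta = ln n

for d in range(1, 6):
    gap = p_fix(d * n + n / 2, beta, N) - p_fix(d * n - n / 2, beta, N)
    # derived bound: e^{-2 d n^{-1/2}} * 3 n^{-1/2}
    bound = math.exp(-2 * d * n ** -0.5) * 3 * n ** -0.5
    assert 0 < gap <= bound
```

For these parameters the gap is of order $2n^{-1/2} e^{-2dn^{-1/2}}$, comfortably within the stated bound, and it shrinks as $d$ grows, matching the observation below that the bias is largest for $d=1$.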
Note that the expectation, and hence the bias, is largest for $d=1$, in which case we get, using $e^{-2d n^{-1/2}} \le e^{-2n^{-1/2}} \le 1$, \begin{equation*} \E{Y_i} \le (1+o(1)) \cdot 1/n \cdot (1-1/n) \cdot 3n^{-1/2} \le 4n^{-3/2} \end{equation*} for $n$ large enough. The total progress in all $m \le n/2$ special steps is hence stochastically dominated by a sequence of $m \cdot n/4$ random variables $Y_i$ as defined above, with $d := 1$. Invoking Lemma~\ref{lem:sum-of-Yi}, stated in the appendix, with $\delta := n^{3/4}$, the total progress in all special steps is at most $\delta + m \cdot n/4 \cdot \E{Y_i} = \delta + O(n^{1/2}) = O(n^{3/4})$ with probability $1-e^{-\Omega(n^{1/2})}$. Hence the net gain in the number of ones in all special steps is at most $n^{3/4} + O(mn/4 \cdot n^{-3/2}) = O(n^{3/4})$ with probability ${1-e^{-\Omega(n^{1/2})}}$. Together with all regular steps, the progress on the \text{\sc OneMax}\xspace part is at most $1.14n/9 + O(n^{3/4})$, which for large enough~$n$ is less than the distance $7n/16 - (n/4+n^{3/4})$ to reach a point with $\ones{b} \ge 7n/16$ from initialisation. This proves the claim. \end{proof} Finally, we put the previous lemmas together into our main theorem that establishes that SSWM can optimise \text{\sc Balance}\xspace in polynomial time. \begin{theorem} With probability $1-e^{-\Omega(n^{1/2})}$ SSWM with $\beta = n^{-3/2}$ and $N \beta = \ln n$ optimises \text{\sc Balance}\xspace in time $O(n/\beta) = O(n^{5/2})$. \end{theorem} \begin{proof} By Chernoff bounds, the probability that for the initial solution $x_0 = a_0 b_0$ we have $n/4 - n^{3/4} \le \ones{b_0} \le n/4 +n^{3/4}$ is $1-e^{-\Omega(n^{1/2})}$. We assume pessimistically that $n/4 \le \ones{b_0} \le n/4 +n^{3/4}$. Then Lemma~\ref{lem:time-on-om-part} is in force, and with probability $1-e^{-\Omega(n^{1/2})}$ within $T$ relevant steps, $T$ as defined in Lemma~\ref{lem:time-on-lo-part}, SSWM does not reach a trap or a search point with fitness~$0$.
Lemma~\ref{lem:time-on-lo-part} then implies that with probability $1-e^{-\Omega(n^{1/2})}$ an optimal solution with $n/2$ leading ones is found. The time bound follows from the fact that $T = O(n/\beta)$ and that, again by Chernoff bounds, we have at least $T$ relevant steps in $3T$ iterations of SSWM, with probability $1-e^{-\Omega(n^{1/2})}$. \end{proof} \section{Conclusions} The field of evolutionary computation has matured to the point where techniques can be applied to models of natural evolution. Our analyses have demonstrated that runtime analysis of evolutionary algorithms can be used to analyse a simple model of natural evolution, opening new opportunities for interdisciplinary research with population geneticists and biologists. Our conclusions are highly relevant for biology, and open the door to the analysis of more complex fitness landscapes in this field and to quantifying the efficiency of evolutionary processes in more realistic scenarios of evolution. One interesting aspect of our results is that they impose conditions on population size ($N$) and strength of selection ($\beta$) which represent fundamental limits to what is possible by natural selection. We hope that these results may inspire further research on the similarities and differences between natural and artificial evolution. From a computational perspective, we have shown that SSWM can overcome obstacles such as posed by $\cliff{d}$ and $\text{\sc Balance}\xspace$ in different ways to the (1+1)~EA\xspace, due to its non-elitistic selection mechanism. We have seen how the probability of accepting a mutant can be tuned to enable hill climbing, where fitness-proportional selection fails, as well as tunnelling through fitness valleys, where elitist selection fails. For \text{\sc Balance}\xspace we showed that SSWM can take advantage of information about the steepest gradient. The selection rule in SSWM hence seems to be a versatile and useful mechanism. 
Future work could investigate its usefulness in the context of population-based evolutionary algorithms. \bigskip \textbf{Acknowledgments:} The research leading to these results has received funding from the European Union Seventh Framework Programme (FP7/2007-2013) under grant agreement no 618091 (SAGE). The authors thank the anonymous GECCO reviewers for their many constructive comments. \bibliographystyle{abbrv}
\section{Introduction}\label{sec: introduction} In this paper we study a twisting operation on algebras that can be formulated as a Zhang twist \cite{zhang1998twisted} or defined in the context of group actions as in \cite[\S 7.5]{montgomery1993hopf}. We will be primarily concerned with whether certain properties are preserved under such twists, following the work of Montgomery in \cite{montgomery2005algebra}. If an algebra $A$ is acted on by a finite abelian group $G$ then a $G$-grading is induced. We will denote a cocycle twist of this grading using a 2-cocycle $\mu$ by $A^{G,\mu}$. Such twists can be described in another manner, with the following result of Bazlov and Berenstein showing their equivalence. \begin{proposition}[{\cite[Lemma 3.6]{bazlov2012cocycle}}] The algebra $A^{G,\mu}$ is isomorphic to the fixed ring $(AG_{\mu})^G$ for some action of $G$ on the twisted group algebra $AG_{\mu}$. \end{proposition} We show that applying such twists --- in particular to connected graded algebras when the action of $G$ respects this structure --- yields some interesting results. Our main theorem is as follows. It brings together Proposition \ref{prop: uninoeth}, Theorem \ref{thm: asreg} and Propositions \ref{prop: koszul} and \ref{prop: cohenmac}. \begin{theorem}\label{thm: maintheorem} Assume that $A$ is a noetherian connected graded algebra and $G$ a finite abelian group that acts on $A$ by graded automorphisms. If $A$ has one of the following properties then the cocycle twist $A^{G,\mu}$ shares that property: \begin{itemize} \item[(i)] it is strongly noetherian; \item[(ii)] it is AS-regular; \item[(iii)] it is Koszul; \item[(iv)] it is Auslander regular; \item[(v)] it is Cohen-Macaulay. \end{itemize} \end{theorem} Moreover, some of the twists we have uncovered have not been studied previously (see \cite{davies2014cocycle2}). 
Our ability to prove that properties are preserved under twisting stems from the fact that one of the constructions of such twists has not been fully exploited (it was briefly remarked upon in \cite[\S 3.4]{bazlov2012cocycle}). This formulation generalises an example of Odesskii from \cite[pg. 89-90]{odesskii2002elliptic}, and is especially useful since it allows the use of faithful flatness arguments via the following lemma. \begin{lemma}[{Lemma \ref{lem: fflat}}]\label{lem: fflat-intro} As an $(A^{G,\mu},A^{G,\mu})$-bimodule there is a decomposition \begin{equation*} AG_{\mu} \cong \bigoplus_{g \in G} {^{\text{id}}(A^{G,\mu})^{\phi_{g}}}, \end{equation*} for some automorphisms $\phi_{g}$ of $A^{G,\mu}$, with $\phi_e=\text{id}$. Each summand is free as a left and right $A^{G,\mu}$-module. Consequently $AG_{\mu}$ is a faithfully flat extension of both $A^{G,\mu}$ and $A$ on both the left and the right. \end{lemma} Let us describe Odesskii's example, which uses a 4-dimensional Sklyanin algebra. Such an object is important in noncommutative algebraic geometry in the sense of \cite{artin1990some}; its construction can be phrased in terms of an elliptic curve and a point upon it. Consider a 4-dimensional Sklyanin algebra over $\ensuremath{\mathbb{C}}$, which we denote by $A$; its parameters and relations are unimportant for the purpose of the example. There is an action of the Klein four-group $G=(C_2)^2$ on $A$ by graded algebra automorphisms and also on $M_2(\ensuremath{\mathbb{C}})$, the ring of $2 \times 2$ complex matrices. For $M \in M_2(\ensuremath{\mathbb{C}})$ and generators $g_1, g_2 \in G$, the action is defined by \begin{equation}\label{eq: matrixaction} M^{g_{1}} =\begin{pmatrix} -1 & 0 \\ 0 & 1 \end{pmatrix}M\begin{pmatrix} -1 & 0 \\ 0 & 1 \end{pmatrix},\; M^{g_{2}}=\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}M\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}.
\end{equation} Now take the tensor product of $\ensuremath{\mathbb{C}}$-algebras $A \otimes_{\ensuremath{\mathbb{C}}} M_2(\ensuremath{\mathbb{C}})$. The example is then given by taking the invariant ring under the diagonal action of $G$, $\left(A \otimes_{\ensuremath{\mathbb{C}}} M_2(\ensuremath{\mathbb{C}})\right)^G$. It is natural to wonder if the properties of $A$ are shared by this twisted algebra and if this construction can be generalised to any algebra or group. Our attempts to answer these questions motivated the work in this paper. In the subsequent paper \cite{davies2014cocycle2} we study twists of 4-dimensional Sklyanin algebras and related algebras from the perspective of noncommutative algebraic geometry. Theorem \ref{thm: maintheorem} applies to such algebras, providing new examples of AS-regular algebras of global dimension 4. In a paper appearing on the arXiv in January 2015, Chirvasitu and Smith \cite{chirvasitu2015exotic} prove some of the same results. As noted in their paper, our results had already appeared on the internet in July 2014 \cite{davies2014thesis}. \subsection{Contents} We now give a brief description of the contents of this paper. In \S\ref{sec: background} we define classical cocycle twists of group-graded algebras, while in \S\ref{sec: construction} we construct the cocycle twists that we will study in two different ways. Our main results appear in \S\ref{sec: preservation}, where we prove both Theorem \ref{thm: maintheorem} and Lemma \ref{lem: fflat-intro}. To end the paper we look for cocycle twists in the context of Rogalski and Zhang's classification of AS-regular algebras of dimension 4 with three generators and a proper $\ensuremath{\mathbb{Z}}^{2}$-grading \cite{rogalski2012regular}. We show in \S\ref{sec: rogzhang} that several of the families in their classification are related via cocycle twists.
\subsection{Notation}\label{subsec: notation} Throughout $k$ will denote an algebraically closed field and $G$ a finite abelian group, unless otherwise stated. Further assumptions on the characteristic of $k$ will be made at the beginning of \S\ref{sec: construction}. By $A$ we will denote an associative $k$-algebra, often with the additional assumption that it is $\N$-graded. This means that $A = \bigoplus_{n \in \N} A_n$ with $ab \in A_{n+m}$ for all $a \in A_n$ and $b \in A_m$. Such an algebra is said to be \emph{connected graded} (or \emph{c.g.}\ for brevity) if $A_0=k$ and $\text{dim}_k A_n < \infty$ for all $n \in \N$. The \emph{Hilbert series} of a c.g.\ algebra is the power series $H_A(t)=\sum_{n \in \N} (\text{dim}_k A_n)t^n$. When describing relations in such an algebra, we will use shorthand notation for two kinds of commutator. For $x,y \in A$, define \begin{equation*} [x,y]:=xy-yx \text{ and } [x,y]_+ := xy+yx. \end{equation*} By $\text{Mod}(A)$ we shall denote the category of $A$-modules and by $\text{GrMod}(A)$ the category of $\N$-graded $A$-modules. If necessary we will specify whether these are categories of left $A$-modules or right $A$-modules. By $\text{lgldim }A$ and $\text{rgldim }A$ we will denote the left and right global dimensions of $A$ respectively, and by $\text{pdim }M_A$ ($\text{idim }M_A$) the projective (injective) dimension of a right $A$-module $M$. When $\text{lgldim }A = \text{rgldim }A$ we will write $\text{gldim }A$. We will often consider an action of the group $G$ on $A$ by $k$-algebra automorphisms. The action of an element $g\in G$ on $a \in A$ will be denoted by the superscript $a^g$. We will denote the group of group automorphisms of $G$ by $\text{Aut}(G)$, and similarly use $\text{Aut}_{\N\text{-alg}}(A)$ to denote the group of $\N$-graded $k$-algebra automorphisms of $A$ when it is graded. The tensor product $\otimes$ will denote the tensor product over $k$, $\otimes_k$, if no other subscript appears. 
\section*{Acknowledgements} The contents of this paper form part of the author's PhD thesis \cite{davies2014thesis}, completed under the supervision of Professor Toby Stafford. The author wishes to express their gratitude to Professor Stafford for his support and guidance throughout both the preparation of this paper and their PhD, as well as to the EPSRC for funding their study. \section{Background}\label{sec: background} We begin by defining cocycle twists of group-graded algebras, a classical construction that underpins the twists that we construct in \S\ref{sec: construction}. Assume that $A$ is an associative $k$-algebra and $G$ a finite group, not necessarily abelian, such that $A$ admits a $G$-graded structure $A = \bigoplus_{g \in G} A_g$. Consider all functions $\mu: G \times G \rightarrow k^{\times}$ satisfying the following relations for all $g,h,l \in G$: \begin{equation}\label{eq: cocycleid} \mu(g,h)\mu(gh,l)=\mu(g,hl)\mu(h,l),\; \mu(e,g)=\mu(g,e)=1. \end{equation} Such a function is called a \emph{2-cocycle of $G$ with values in $k^{\times}$} (or more formally a \emph{normalised} 2-cocycle). One can define a group structure on the set of 2-cocycles of $G$, denoted by $Z^2(G,k^{\times})$, via pointwise multiplication. A new multiplication $\ast_{\mu}$ can be defined for all homogeneous elements $a \in A_g$ and $b \in A_h$ by \begin{equation*} a \ast_{\mu} b := \mu(g,h) ab, \end{equation*} where juxtaposition denotes the `old' multiplication in $A$. The new multiplication is then extended to the whole of $A$ by linearity. \begin{defn}\label{defn: basiccocycletwist} With $\mu$ as above, set $A_{\mu} := (A, \ast_{\mu})$. \end{defn} 2-cocycles are precisely those functions $G \times G \rightarrow k^{\times}$ that preserve associativity when deforming the multiplication in an algebra in this manner. Moreover, twisting by a 2-cocycle preserves the identity element of an algebra.
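The defining conditions are easy to check mechanically. The following sketch (our own illustration; the choice of bilinear form on $G = \mathbb{Z}_2 \times \mathbb{Z}_2$ is hypothetical) verifies the normalisation and the cocycle identity \eqref{eq: cocycleid} for a concrete $\mu$ on the Klein four-group, and confirms that this $\mu$ is not a coboundary: on an abelian group every coboundary $\rho(g)\rho(h)\rho(gh)^{-1}$ is symmetric in $g$ and $h$, while this $\mu$ is not.

```python
from itertools import product

# Klein four-group as Z2 x Z2, written additively
G = list(product([0, 1], repeat=2))
e = (0, 0)

def mul(g, h):                 # group operation
    return ((g[0] + h[0]) % 2, (g[1] + h[1]) % 2)

def mu(g, h):                  # candidate 2-cocycle (hypothetical choice)
    return (-1) ** (g[1] * h[0])

# normalisation: mu(e,g) = mu(g,e) = 1
assert all(mu(e, g) == 1 and mu(g, e) == 1 for g in G)

# cocycle identity: mu(g,h) mu(gh,l) = mu(g,hl) mu(h,l) for all triples
for g, h, l in product(G, repeat=3):
    assert mu(g, h) * mu(mul(g, h), l) == mu(g, mul(h, l)) * mu(h, l)

# mu is NOT a coboundary: coboundaries on an abelian group are symmetric,
# but mu is asymmetric on the two generators, so kG_mu is noncommutative
g1, g2 = (1, 0), (0, 1)
assert mu(g1, g2) != mu(g2, g1)
```

The asymmetry in the last check is exactly what makes the twisted group algebra $kG_{\mu}$ noncommutative for this choice of cocycle.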
The simplest examples of such twists are twisted group algebras, denoted by $kG_{\mu}$. Now consider a 2-cocycle $\phi$ for which there exists a function $\rho: G \rightarrow k^{\times}$ such that \begin{equation*} \phi(g,h)=\rho(g)\rho(h)\rho(gh)^{-1}, \end{equation*} for all $g, h \in G$. Such a cocycle is called a \emph{2-coboundary}. As \cite[Corollary 33.6]{karpilovsky1987structure} shows in a more general situation, the twisted group algebras $kG_{\mu}$ and $kG_{\phi}$ are isomorphic as $G$-graded algebras if and only if $\mu, \phi \in Z^2(G,k^{\times})$ lie in the same coset modulo the subgroup of coboundaries, $B^2(G,k^{\times})$. Thus the isomorphism classes of $G$-graded deformations of $kG$ are parameterised by $Z^2(G,k^{\times})/B^2(G,k^{\times})$, which is called the \emph{Schur multiplier} of $G$ \cite{karpilovsky1987schur}. As \cite[Example 2.9]{zhang1998twisted} shows, the cocycle twists defined in Definition \ref{defn: basiccocycletwist} can be formulated as Zhang twists. For cocycle twists of $G$-graded algebras one therefore has an equivalence between their categories of $G$-graded modules by \cite[Theorem 3.1]{zhang1998twisted}. Finally, it will be useful for us to consider the twisted group algebra $AG_{\mu}$ as a crossed product, whose definition appears in \cite[\S 1.5.8]{mcconnell2001noncommutative}. \section{Constructions}\label{sec: construction} Let us fix the base assumptions under which we will work. \begin{hyp}[General case]\label{hyp: generalcase} Let $A$ be a $k$-algebra where $k$ is an algebraically closed field. Assume that a finite abelian group $G$ acts on $A$ by algebra automorphisms, where $\text{char}(k) \nmid |G|$. Fix an isomorphism between $G$ and its group of characters $G^{\vee}$, mapping $g \mapsto \chi_g$. \end{hyp} Our primary interest in cocycle twists is to apply them to $\N$-graded algebras. As such, we record the following additional assumptions that will often be used.
\begin{hyp}[$\N$-graded case]\label{hyp: gradedcase} Further to Hypotheses \ref{hyp: generalcase}, assume that $A$ is $\N$-graded and $G$ acts on $A$ by $\N$-graded algebra automorphisms, i.e. $G \rightarrow \text{Aut}_{\N\text{-alg}}(A)$. \end{hyp} \subsection{Cocycle twists from group actions}\label{subsec: coycletwistgroupaction} In \S\ref{sec: background} we defined twists of a $G$-graded algebra. We will now show that when $G$ is finite abelian a $G$-grading is induced by an action of $G$ on $A$ by algebra automorphisms. Assume that Hypotheses \ref{hyp: generalcase} hold. By Maschke's Theorem, $A$ splits into a direct sum of 1-dimensional irreducible $kG$-submodules. One defines a grading on $A$ by setting $A_g:=A^{\chi_{g^{-1}}}$, the isotypic component of $A$ corresponding to the character $\chi_{g^{-1}}$. To see that this defines a grading, note that for all homogeneous elements $a \in A_{g_{1}}, b \in A_{g_{2}}$ and $h \in G$, \begin{equation*} (ab)^h=a^hb^h=\chi_{g_{1}^{-1}}(h)a \chi_{g_{2}^{-1}}(h)b=\chi_{(g_1g_2)^{-1}}(h)ab, \end{equation*} which implies that $ab \in A_{g_{1}g_{2}}$. \begin{defn}\label{defn: fullcocycletwist} With the $G$-graded structure described above and a cocycle $\mu$, the resulting twisted algebra is written $A^{G,\mu}:= (A = \bigoplus_{g \in G} A_g, \ast_{\mu})$. Thus for all $a \in A_{g}, b \in A_{h}$ one has $a \ast_{\mu} b = \mu(g,h)ab$. \end{defn} We now describe another construction, once again working under Hypotheses \ref{hyp: generalcase}. Define an action of $G$ on the twisted group algebra $kG_{\mu}$ by $g^{h}:=\chi_g(h)g$ for all $g, h \in G$ and extending $k$-linearly. Observe that under this action $kG_{\mu}$ is the regular representation of $G$, with isotypic components of the form $\left(kG_{\mu}\right)^{\chi_{g}}=kg$. Given this action we can consider the diagonal action of $G$ on the tensor product $A \otimes kG_{\mu} = AG_{\mu}$, where $(ag)^h=a^h g^h= \chi_{g}(h)a^h g$ for all $a \in A$, $g, h \in G$.
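For the Klein four-group these claims can be checked concretely inside $M_2(\ensuremath{\mathbb{C}})$. The sketch below (our own illustration; the basis matrices, the cocycle, and the pairing $g \mapsto \chi_g$ are illustrative choices) verifies that four matrices multiply exactly as a twisted group algebra $kG_{\mu}$ for a nontrivial cocycle, and that the conjugation action of \eqref{eq: matrixaction} scales each basis vector by a character, so that $kG_{\mu}$ carries the regular representation.

```python
import numpy as np
from itertools import product

# Klein four-group G = Z2 x Z2; represent u_{(a,b)} = Z^a X^b in M_2(C)
Z = np.diag([1.0, -1.0])
X = np.array([[0.0, 1.0], [1.0, 0.0]])

def u(g):
    return np.linalg.matrix_power(Z, g[0]) @ np.linalg.matrix_power(X, g[1])

def mu(g, h):                  # a nontrivial 2-cocycle (hypothetical choice)
    return (-1) ** (g[1] * h[0])

G = list(product([0, 1], repeat=2))

# The four matrices u_g form a basis of M_2(C) and multiply as in kG_mu:
# u_g u_h = mu(g,h) u_{gh}, exhibiting M_2(C) as a twisted group algebra.
for g, h in product(G, repeat=2):
    gh = ((g[0] + h[0]) % 2, (g[1] + h[1]) % 2)
    assert np.allclose(u(g) @ u(h), mu(g, h) * u(gh))

# Conjugation (the action in eq. (matrixaction), up to harmless signs) scales
# each basis vector by a character: u_h^g = chi_h(g) u_h with
# chi_{(c,d)}((a,b)) = (-1)^{ad+bc}, so kG_mu is the regular representation.
for g, h in product(G, repeat=2):
    chi = (-1) ** (h[0] * g[1] + h[1] * g[0])
    assert np.allclose(u(g) @ u(h) @ np.linalg.inv(u(g)), chi * u(h))
```

This is the subtlety behind Odesskii's example: the ring $M_2(\ensuremath{\mathbb{C}})$ acted on in the introduction is itself a twisted group algebra over the Klein four-group.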
The algebra in which we are interested is the invariant ring under this action, $(A \otimes kG_{\mu})^G=(AG_{\mu})^G$. The constructions just defined are related by the following result of Bazlov and Berenstein. \begin{proposition}[{\cite[Lemma 3.6]{bazlov2012cocycle}}]\label{prop: twoconstrequal} Assume that Hypotheses \ref{hyp: generalcase} hold. Then $A^{G,\mu} \cong (AG_{\mu})^G$ as $k$-algebras. \end{proposition} Henceforth, our use of the term \emph{cocycle twist} and the notation $A^{G,\mu}$ will refer to either of the equivalent twists in this proposition. Odesskii's example from the introduction is an illustration of the invariant ring construction of a cocycle twist. The only subtlety is that the ring $M_2(\ensuremath{\mathbb{C}})$ is in fact isomorphic to a twisted group algebra over the Klein four-group for a nontrivial 2-cocycle. \subsection{Twisting the $G$-grading}\label{subsec: twistggrading} In this section we investigate the effect of twisting a $G$-grading by a group automorphism. This idea is described in \cite[Example 3.8]{zhang1998twisted}. Given a $G$-graded algebra $A$ and $\sigma \in \text{Aut}(G)$, one can define a new grading on $A$ by $A_{\sigma}=\bigoplus_{g \in G} B_g$ where $B_g :=A_{\sigma(g)}$ for all $g \in G$. When $G$ is abelian this grading corresponds to another action of $G$ on $A$ by $k$-algebra automorphisms. We wish to connect this new grading with a cocycle twist. Given a cocycle $\mu$ we can define an action of $\sigma$ on $\mu$ by \begin{equation*} \mu^{\sigma}(g,h):=\mu(\sigma(g),\sigma(h)), \end{equation*} for all $g,h \in G$. It is clear that this is also a 2-cocycle. Moreover, the action of $\sigma$ preserves 2-coboundaries and so there is an action of $\text{Aut}(G)$ on the Schur multiplier of $G$. \begin{lemma}\label{lem: autoncocycle} For a 2-cocycle $\mu$ the cocycle twist $(A_{\sigma},\ast_{\mu})$ is isomorphic as a $k$-algebra to $(A,\ast_{\mu^{\left(\sigma^{-1}\right)}})$.
\end{lemma} \begin{proof} In the twist $(A_{\sigma},\ast_{\mu})$ consider homogeneous elements $a \in B_g$ and $b \in B_h$. Under the graded structure in $(A,\ast_{\mu^{\left(\sigma^{-1}\right)}})$ one has $a \in A_{\sigma(g)}$ and $b \in A_{\sigma(h)}$. Writing the multiplication of $a$ and $b$ in $(A_{\sigma},\ast_{\mu})$ gives \begin{equation}\label{eq: autactoncocyclemult} a \ast_{\mu} b = \mu(g,h)ab = \mu^{(\sigma^{-1})}(\sigma(g),\sigma(h))ab. \end{equation} Notice that the right-hand side of \eqref{eq: autactoncocyclemult} is precisely the multiplication $a \ast_{\mu^{\left(\sigma^{-1}\right)}} b$ in $(A,\ast_{\mu^{\left(\sigma^{-1}\right)}})$. \end{proof} We now examine the choice of isomorphism $G \rightarrow G^{\vee}$. Let us use the notation $(A,\phi,\mu)$ for a triple consisting of an algebra, an isomorphism $G \rightarrow G^{\vee}$ and a 2-cocycle. When $G$ acts on $A$ by algebra automorphisms each such triple can be naturally associated to a cocycle twist. \begin{proposition}\label{prop: benign} Let $G$ act on $A$ by algebra automorphisms. Let $\phi$ and $\rho$ be isomorphisms $G \rightarrow G^{\vee}$ and $\mu$ be a 2-cocycle. Then there exists an automorphism of $G$, $\tau$ say, such that the cocycle twists corresponding to the triples $(A,\phi,\mu)$ and $(A,\rho,\mu^{(\tau^{-1})})$ are isomorphic as $k$-algebras. \end{proposition} \begin{proof} Given $\phi$, we will identify $\rho$ with an automorphism of $G$ as follows. Firstly, there exists an automorphism $\psi: G^{\vee} \rightarrow G^{\vee}$ such that $\phi = \psi \circ \rho$. Suppose we have an element $x \in A_g$, where the grading is determined under the duality given by $\phi$. This means that for all $h \in G$, \begin{equation*} x^h = \phi(g)^{-1}(h)x=\psi(\rho(g))^{-1}(h)x. \end{equation*} Since all maps involved are isomorphisms, for all $g \in G$ there exists $k_g \in G$ such that $\psi(\rho(g))=\rho(k_g)$.
We claim that the map $\tau:\; g \mapsto k_g$ defines an isomorphism of $G$. To see this, note that for all $g,h \in G$ one has \begin{equation*} \rho(k_{gh})=\psi(\rho(gh))=\psi(\rho(g)) \psi(\rho(h)) =\rho(k_{g})\rho(k_{h})=\rho(k_{g}k_{h}). \end{equation*} As $\rho$ is an isomorphism, it follows that $\tau$ is also an isomorphism as claimed. Under the duality isomorphism $\rho$ one has $x \in A_{k_{g}}$, since \begin{equation}\label{eq: dualgrading1} x^h =\psi(\rho(g))^{-1}(h)x=\rho(k_g)^{-1}(h)x, \end{equation} for all $h \in G$. Suppose that $x \in A_g$ and $y \in A_h$ for some $g,h \in G$ under the duality given by $\phi$. Thus $x \ast_{\mu} y=\mu(g,h)xy$ in $(A,\phi,\mu)$. Under the duality given by $\rho$ one has $x \in A_{k_{g}}$ by \eqref{eq: dualgrading1}, and in a similar manner $y \in A_{k_{h}}$. Thus in $(A,\rho,\mu^{(\tau^{-1})})$ the multiplication is \begin{equation*} x \ast_{\mu^{\left(\tau^{-1}\right)}} y = \mu^{(\tau^{-1})}(k_g,k_h)xy=\mu(\tau^{-1}(k_g),\tau^{-1}(k_h))xy=\mu(g,h)xy. \end{equation*} Since the multiplications agree on homogeneous elements, this completes the proof. \end{proof} \section{Preservation of properties}\label{sec: preservation} In this section we prove that many properties are preserved by the twists defined in \S\ref{sec: construction}. \subsection{Basic properties}\label{subsec: basicprops} \emph{Unless otherwise stated we assume that Hypotheses \ref{hyp: generalcase} hold for all results in this section.} We first state a useful result regarding the behaviour of regular and normal elements under a cocycle twist. This result is not stated explicitly in \cite{zhang1998twisted}, although the proof is essentially contained in that of Proposition 2.2(1) op.\ cit. \begin{lemma}\label{lem: stillregular} Any element $a \in A$ that is homogeneous with respect to the $G$-grading is regular (normal) in $A$ if and only if it is regular (normal) in $A^{G,\mu}$.
\end{lemma} The next two lemmas (the latter from \cite{montgomery2005algebra}) will be particularly useful when working with algebras defined by generators and relations. \begin{lemma}\label{lem: defrelns} Let $I$ be a $G$-graded ideal of $A$. Then $I$ remains an ideal in $A^{G,\mu}$. Furthermore, a generating set for $I$ that is homogeneous with respect to the $G$-grading is also a generating set for the ideal under twisting. \end{lemma} \begin{proof} That $I$ is still an ideal in the twist is proved in \cite[Proposition 3.1(2)]{montgomery2005algebra}. To complete the proof it suffices to deal with the case that $I=(f)$ for some homogeneous element $f \in A_g$. Suppose that $a \in A$ with homogeneous decomposition $a = \sum_{h \in G} a_h$. One has \begin{equation*} fa = f \ast_{\mu} \left(\sum_{h \in G} \frac{a_h}{\mu(g,h)} \right)\; \text{ and }\; af = \left(\sum_{h \in G} \frac{a_h}{\mu(h,g)} \right) \ast_{\mu} f, \end{equation*} which proves the result. \end{proof} \begin{lemma}[{\cite[ Proposition 3.1(1)]{montgomery2005algebra}}]\label{lemma: finitelygenerated} $A$ is finitely generated as a $k$-algebra if and only if $A^{G,\mu}$ is also finitely generated. Furthermore, if Hypotheses \ref{hyp: gradedcase} hold then $A$ is finitely generated in degree 1 if and only if $A^{G,\mu}$ is too. \end{lemma} \begin{proof} The first part of the statement is proved by Montgomery. By consulting the proof in \cite{montgomery2005algebra}, one can see that a generating set for $A^{G,\mu}$ can be obtained as follows: take a generating set of $A$ and find a vector space $V$ which contains this generating set and is preserved by the action of $G$. Then $A^{G,\mu}$ will be generated by $V$ under the new multiplication on the shared underlying vector space. One may therefore conclude that under the additional hypotheses of the second statement of this lemma, the property of being finitely generated in degree 1 is preserved. 
\end{proof} We now show that gradings are sometimes preserved under cocycle twists. \begin{lemma}\label{lem: autpresgrad} Suppose that $A$ has an $H$-grading for some arbitrary group $H$ and that a finite abelian group $G$ acts on $A$ by $H$-graded algebra automorphisms. Then any cocycle twist $A^{G,\mu}$ will inherit the $H$-grading from $A$. \end{lemma} \begin{proof} We must show that for all $h_1,h_2 \in H$ and homogeneous elements $x \in A_{h_{1}}$ and $y \in A_{h_{2}}$ one has $x \ast_{\mu} y \in A_{h_{1}h_{2}}$, since then $A^{G,\mu}_{h_{1}} \cdot A^{G,\mu}_{h_{2}} \subseteq A^{G,\mu}_{h_{1}h_{2}}$. As $G$ acts on $A$ by $H$-graded algebra automorphisms, one can apply Maschke's theorem to the $H$-graded components of $A$ under the action of $G$. This allows us to further assume that $x$ and $y$ are homogeneous with respect to the $G$-grading, thus $x \in A_{g_{1}}$ and $y \in A_{g_{2}}$ for some $g_1,g_2 \in G$. Then \begin{equation*} x \ast_{\mu} y = \mu(g_1,g_2)xy \in A_{h_{1}h_{2}}, \end{equation*} which completes the proof. \end{proof} \begin{rem} In particular, Lemma \ref{lem: autpresgrad} implies that $A^{G,\mu}$ inherits the $G$-grading from $A$. \end{rem} Before stating our next result, we recall the concept of twisting a module by an automorphism. Let $A$ be a $k$-algebra and $\phi$ be a $k$-algebra automorphism. For a right $A$-module $M$, one can define a new right $A$-module $M^{\phi}$ via the multiplication $m \ast_{\phi} a=m\phi(a)$ for all $a \in A$, $m \in M$. One can twist both sides of an $(A,A)$-bimodule in this manner simultaneously; in particular, if such an $(A,A)$-bimodule is free on each side and the same generator can be used in each module structure, then one may assume that the bimodule is untwisted on one side (see \cite[\S2.3]{brown2008dualising}).
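The module axiom for $M^{\phi}$ follows from $\phi$ being an algebra homomorphism: $(m \ast_{\phi} a) \ast_{\phi} b = m\phi(a)\phi(b) = m\phi(ab) = m \ast_{\phi} (ab)$. A minimal computational illustration (our own sketch; taking $A = M = M_2(\ensuremath{\mathbb{C}})$ with $\phi$ a conjugation automorphism is a purely illustrative choice):

```python
import numpy as np

rng = np.random.default_rng(0)
Z = np.diag([1.0, -1.0])

def phi(a):
    # an algebra automorphism of M_2(C): conjugation by the
    # self-inverse matrix Z (illustrative choice)
    return Z @ a @ Z

# right module M = A = M_2(C), twisted action m *_phi a = m phi(a)
m, a, b = (rng.standard_normal((2, 2)) for _ in range(3))

lhs = (m @ phi(a)) @ phi(b)    # (m *_phi a) *_phi b
rhs = m @ phi(a @ b)           #  m *_phi (ab)
assert np.allclose(lhs, rhs)   # M^phi is again a right A-module
```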
\begin{lemma}\label{lem: fflat} As an $(A^{G,\mu},A^{G,\mu})$-bimodule there is a decomposition \begin{equation*} AG_{\mu} \cong \bigoplus_{g \in G} {^{\text{id}}(A^{G,\mu})^{\phi_{g}}}, \end{equation*} for some automorphisms $\phi_{g}$ of $A^{G,\mu}$, with $\phi_e=\text{id}$. Each summand is free of rank 1 as a left and right $A^{G,\mu}$-module. Consequently, $AG_{\mu}$ is a faithfully flat extension of $A^{G,\mu}$ on both the left and the right. Similarly, $_A(AG_{\mu})$ and $(AG_{\mu})_A$ are free modules of finite rank, thus $AG_{\mu}$ is a faithfully flat extension of $A$ on both the left and the right. \end{lemma} \begin{proof} We will proceed as in the proof of the main theorem of \cite{smith1989can}. Let $AG_{\mu}=\bigoplus_{g \in G} M^{\chi_{g}}$ be the isotypic decomposition of $AG_{\mu}$ under the action of $G$. Observe that $A^{G,\mu}=M^{\chi_{e}}$ and $M^{\chi_{g}}M^{\chi_{h}}=M^{\chi_{gh}}$ for all $g,h \in G$, since $G$ acts by algebra automorphisms. This means that each isotypic component $M^{\chi_{g}}$ has an $(A^{G,\mu},A^{G,\mu})$-bimodule structure. The isotypic component $M^{\chi_{g}}$ contains the element $1 \otimes g$. An arbitrary element in this component has the form $a \otimes h$ for some $a \in A_{g^{-1}h}=A^{\chi_{gh^{-1}}}$. Thus $a \otimes g^{-1}h \in A^{G,\mu}$ and therefore \begin{equation*} a \otimes h= \left(\frac{a \otimes g^{-1}h}{\mu(g^{-1}h,g)}\right) \cdot (1 \otimes g)=(1 \otimes g) \cdot \left(\frac{a \otimes g^{-1}h}{\mu(g,g^{-1}h)}\right). \end{equation*} Consequently, $M^{\chi_{g}}$ is cyclic as a left or a right $A^{G,\mu}$-module. Note that $1 \otimes g$ is regular in $AG$, therefore by Lemma \ref{lem: stillregular} it is also regular in $AG_{\mu}$. This proves that $M^{\chi_{g}}$ is a free $A^{G,\mu}$-module of rank 1 on both the left and the right.
By the discussion prior to the statement of the lemma, we know that the bimodule generated by $1 \otimes g$ is isomorphic to ${^{\text{id}}}(A^{G,\mu})^{\phi_{g}}$ for some algebra automorphism $\phi_{g}$. To describe $\phi_g$ it suffices to look at the left action of a homogeneous element in $A^{G,\mu}$ on $1 \otimes g$, which can be taken to be a free generator for the left $A^{G,\mu}$-module structure. Consider a homogeneous element $a \otimes h \in A^{G,\mu}_h$. One has \begin{equation*} (a \otimes h) \cdot (1 \otimes g) = \mu(h,g) a \otimes hg = (1 \otimes g) \cdot \frac{\mu(h,g)}{\mu(g,h)}(a \otimes h). \end{equation*} Define a map $\phi_g: A^{G,\mu} \rightarrow A^{G,\mu}$ by $a \otimes h \mapsto \frac{\mu(h,g)}{\mu(g,h)}(a \otimes h)$ on homogeneous elements and extending $k$-linearly. To see that this is a $G$-graded automorphism, consider homogeneous elements $a \otimes h \in A^{G,\mu}_h$ and $b \otimes l \in A^{G,\mu}_l$. Then \begin{equation}\label{eq: phighom} \phi_g(a \otimes h)\phi_g(b \otimes l) = \frac{\mu(h,g)\mu(l,g)\mu(h,l)}{\mu(g,h)\mu(g,l)}(ab\otimes hl). \end{equation} On the other hand, one can use \eqref{eq: cocycleid} to see that \begin{gather} \begin{aligned}\label{eq: phighom1} \phi_g(\mu(h,l)(ab\otimes hl)) &= \frac{\mu(hl,g)\mu(h,l)}{\mu(g,hl)} (ab\otimes hl) \\ &= \frac{\mu(h,lg)\mu(l,g)\mu(h,l)}{\mu(g,h)\mu(gh,l)} (ab\otimes hl). \end{aligned} \end{gather} Observe that $\frac{\mu(h,lg)}{\mu(gh,l)} = \frac{\mu(h,g)}{\mu(g,l)}$, which follows from $G$ being abelian together with another use of \eqref{eq: cocycleid}. Substituting this expression into \eqref{eq: phighom1} produces the expression in \eqref{eq: phighom}. It is clear that $\phi_g$ is injective, therefore it must be a $G$-graded automorphism of $A^{G,\mu}$ as claimed. The result is trivial for $A$ by the definition of $AG_{\mu}$. \end{proof} The previous result allows us to begin proving that various properties are preserved under twisting, beginning with GK dimension.
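Before turning to GK dimension, note that what the computation in \eqref{eq: phighom} and \eqref{eq: phighom1} establishes is that the scaling factor $c(g,h) = \mu(h,g)/\mu(g,h)$ is a character of $G$ in its second argument. This can be checked directly for a concrete cocycle (our own sketch; the nontrivial cocycle on the Klein four-group below is a hypothetical choice):

```python
from itertools import product

G = list(product([0, 1], repeat=2))   # Klein four-group, written additively

def mul(g, h):
    return ((g[0] + h[0]) % 2, (g[1] + h[1]) % 2)

def mu(g, h):                         # a nontrivial 2-cocycle (hypothetical)
    return (-1) ** (g[1] * h[0])

def c(g, h):                          # phi_g scales the h-component by c(g,h)
    return mu(h, g) / mu(g, h)

# phi_g is multiplicative iff c(g, -) is a character of G:
# c(g,h) c(g,l) = c(g, hl) for all g, h, l
for g, h, l in product(G, repeat=3):
    assert c(g, h) * c(g, l) == c(g, mul(h, l))
```

This is precisely the abelian-$G$ cancellation used in the proof above; for a nonabelian group the same ratio need not be multiplicative.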
Part (ii) of the following lemma is implicit in \cite[pg. 89-90]{odesskii2002elliptic}. \begin{lemma}\label{lem: hilbseries} The following statements are true: \begin{itemize} \item[(i)] $\text{GKdim }A= \text{GKdim }A^{G,\mu}$; \item[(ii)] Under Hypotheses \ref{hyp: gradedcase} one has $H_A(t)=H_{A^{G,\mu}}(t)$. In particular, if $A$ is connected graded then so is $A^{G,\mu}$. \end{itemize} \end{lemma} \begin{proof} By Lemma \ref{lem: fflat}, $AG_{\mu}$ is a f.g.\ module over $A$ and $A^{G,\mu}$ on both sides. Applying \cite[Proposition 5.5]{krause2000growth} twice, first to $A \subset AG_{\mu}$, then to $A^{G,\mu} \subset AG_{\mu}$, proves part (i) of the lemma. Now assume that Hypotheses \ref{hyp: gradedcase} hold. By Lemma \ref{lem: autpresgrad}, $A^{G,\mu}$ possesses the same $\N$-graded structure as $A$, thus the dimensions of the graded components are the same for both. \end{proof} We now turn to the strongly noetherian property which, as demonstrated by results in \cite{rogalski2008canonical}, has strong geometric consequences. \begin{defn}[{\cite[cf. \S 4]{artin1999generic}}] \label{defn: strongnoeth} Let $A$ be a noetherian $k$-algebra. $A$ is \emph{strongly (right) noetherian} if for all commutative noetherian $k$-algebras $R$, $A \otimes R$ is (right) noetherian. \end{defn} Our result is a partial generalisation of \cite[Proposition 3.1(3)]{montgomery2005algebra} which was concerned with the noetherian condition. \begin{proposition}\label{prop: uninoeth} $A$ is strongly noetherian if and only if $A^{G,\mu}$ is. \end{proposition} \begin{proof} We prove that $A^{G,\mu}$ is strongly right noetherian. Assume that $A$ is strongly noetherian. Then $AG_{\mu}$ is a f.g.\ right $A$-module by the proof of Lemma \ref{lem: fflat}, hence by \cite[Proposition 4.1(1a)]{artin1999generic} $AG_{\mu}$ is strongly right noetherian. 
Using Lemma \ref{lem: fflat} again, the extension $A^{G,\mu} \subset AG_{\mu}$ is faithfully flat on the right, therefore we can apply \cite[Proposition 4.1(2a)]{artin1999generic} to show that $A^{G, \mu}$ is strongly right noetherian. In the other direction we can use the same argument but with $AG_{\mu}$ replaced with $A^{G,\mu}G_{\mu^{-1}}$; Lemma \ref{lem: fflat} tells us that $A^{G,\mu}G_{\mu^{-1}}$ is a right faithfully flat extension of both $A^{G,\mu}$ and of $A \cong (A^{G,\mu}G_{\mu^{-1}})^G$. An identical proof works for the left-sided part of the result. \end{proof} \subsection{AS-regularity}\label{subsec: asregular} The aim of this section is to prove that being AS-regular is preserved under cocycle twists. Although this property only concerns c.g.\ algebras, we will prove that certain components of the definition are preserved without any assumption on graded structure. \begin{defn}[{\cite{artin1987graded}}]\label{defn: asregular} Let $d \in \mathbb{N}$. A c.g.\ $k$-algebra $A$ is said to be \emph{AS-regular of dimension $d$} if the following conditions are satisfied: \begin{itemize} \item[(i)] $A$ has finite GK dimension; \item[(ii)] $\text{gldim }A=d$; \item[(iii)] $A$ is AS-Gorenstein, that is, $\text{Ext}^i_A(k,A)\cong k \delta_{i,d}$ when $k$ and $A$ are considered as left or as right $\N$-graded $A$-modules. \end{itemize} \end{defn} One can calculate the Ext group in (iii) in either $\text{Mod}(A)$ or in $\text{GrMod}(A)$; since $A$ and $k$ are f.g.\ modules, the discussion in \cite[\S 1.4]{levasseur1992some} shows that the two Ext groups in question are the same. Let us first consider condition (ii) regarding global dimension. For the purposes of this section, we only need to show that finite global dimension is preserved under cocycle twists. However, we will prove the more general result that left and right global dimension are preserved, regardless of whether they are equal or not.
We will need the following technical result to compare the global dimension of $A^{G,\mu}$ with that of the twisted group algebra $AG_{\mu}$. There is an analogous version for left modules. \begin{theorem}[{\cite[Theorem 7.2.8(i)]{mcconnell2001noncommutative}}]\label{theorem: mcrobtechnical} Let $R,S$ be rings with $R \subseteq S$ such that $R$ is an $(R,R)$-bimodule direct summand of $S$. Then \begin{equation*} \text{rgldim }R \leq \text{rgldim }S +\text{pdim }S_R. \end{equation*} \end{theorem} Without further ado, we now show that left and right global dimension are preserved under cocycle twists. \begin{proposition}\label{prop: gldim} One has $\text{rgldim }A=\text{rgldim }A^{G,\mu}$ and $\text{lgldim }A=\text{lgldim }A^{G,\mu}$. \end{proposition} \begin{proof} We will give the proof for right global dimension, from which a left-sided proof can easily be derived. Since $AG_{\mu}$ has the structure of a crossed product we can apply \cite[Theorem 7.5.6(iii)]{mcconnell2001noncommutative} to conclude that $\text{rgldim }A=\text{rgldim }AG_{\mu}$. By Lemma \ref{lem: fflat} we know that $A^{G,\mu}$ is an $(A^{G,\mu},A^{G,\mu})$-bimodule direct summand of $AG_{\mu}$. We may therefore apply Theorem \ref{theorem: mcrobtechnical}, which tells us that \begin{equation*} \text{rgldim }A^{G,\mu} \leq \text{rgldim }AG_{\mu} + \text{pdim }(AG_{\mu})_{A^{G,\mu}}= \text{rgldim }AG_{\mu}. \end{equation*} Here $\text{pdim }(AG_{\mu})_{A^{G,\mu}}=0$ since the module is free by Lemma \ref{lem: fflat}. We have proved that $\text{rgldim }A^{G,\mu} \leq \text{rgldim }A$. Now we repeat the argument with the roles of $A$ and $A^{G,\mu}$ reversed, considering them as subrings of $A^{G,\mu}G_{\mu^{-1}}$. One obtains the opposite inequality, which completes the proof. \end{proof} Let us now address the AS-Gorenstein property. Our main tool to prove the preservation of this property will be the following result of Brown and Levasseur.
\begin{proposition}[{\cite[Proposition 1.6]{brown1985cohomology}}]\label{prop: brownlevass} Let $R$ and $S$ be rings and $R \rightarrow S$ a ring homomorphism such that $S$ is flat as a left and right $R$-module. Let $X$ be an $(R,R)$-bimodule such that the $(R,S)$-bimodule $X \otimes_R S$ is an $(S,S)$-bimodule. Then for every f.g.\ left $R$-module $M$ and all $i \geq 0$, there are isomorphisms of right $S$-modules, \begin{equation*} \text{Ext}_R^i(M,X)\otimes_R S \cong \text{Ext}_S^i(S \otimes_R M,X \otimes_R S). \end{equation*} \end{proposition} \begin{rem}\label{rem: x=s} For some of our applications of Proposition \ref{prop: brownlevass} we will take $R=X$. In that case $X \otimes_R S \cong S$ as an $(R,S)$-bimodule, from which $X \otimes_R S$ inherits a natural $(S,S)$-bimodule structure. \end{rem} \begin{proposition}\label{prop: asgor} Assume that Hypotheses \ref{hyp: gradedcase} hold and that $A$ is connected graded. Then $A$ is AS-Gorenstein of global dimension $d$ if and only if $A^{G,\mu}$ shares this property. \end{proposition} \begin{proof} We will give the proof in the only if direction when $k$ and $A$ are considered as left $A$-modules. The proof in the opposite direction is identical by untwisting, while the proof for right modules is almost identical to that below; the only difference is that it requires the use of a right-handed version of Proposition \ref{prop: brownlevass}. First, observe that under the $(\N,G)$-bigrading on $AG_{\mu}$ the subalgebra consisting of elements that have degree zero under the \N-grading is the twisted group algebra $kG_{\mu}$. As right $AG_{\mu}$-modules one has isomorphisms \begin{equation}\label{eq: degree1factor} k \otimes_A AG_{\mu} \cong k \otimes_{A^{G,\mu}} AG_{\mu} \cong kG_{\mu}, \end{equation} and as left $AG_{\mu}$-modules we have \begin{equation}\label{eq: degree1factor1} AG_{\mu} \otimes_A k \cong AG_{\mu} \otimes_{A^{G,\mu}} k \cong kG_{\mu}.
\end{equation} We now proceed by applying Proposition \ref{prop: brownlevass} with $R=X=A$, $S=AG_{\mu}$ and $M=k$. To see that the hypotheses of that result are satisfied, observe that $A \subset AG_{\mu}$ is flat by Lemma \ref{lem: fflat} and recall Remark \ref{rem: x=s}. Applying Proposition \ref{prop: brownlevass} gives \begin{gather} \begin{aligned}\label{eq: asgorfirststep} \text{Ext}^i_A(k,A) \otimes_A AG_{\mu} &\cong \text{Ext}^i_{AG_{\mu}}(AG_{\mu} \otimes_A k,A \otimes_A AG_{\mu}) \\ &\cong \text{Ext}^i_{AG_{\mu}}(kG_{\mu},AG_{\mu}), \end{aligned} \end{gather} by using \eqref{eq: degree1factor1}. Since $A$ is AS-Gorenstein of global dimension $d$ we know that the left hand side is non-zero only for $i=d$. In that case it is equal to $k \otimes_A AG_{\mu} \cong kG_{\mu}$ by using \eqref{eq: degree1factor}. We would now like to apply Proposition \ref{prop: brownlevass} a second time, using $R=X=A^{G,\mu}$, $S=AG_{\mu}$ and $M=k$. We may apply the same argument as used earlier in the proof, mutatis mutandis, to see that the hypotheses of that result are satisfied. Applying Proposition \ref{prop: brownlevass} we obtain \begin{gather} \begin{aligned}\label{eq: asgorsecondstep} \text{Ext}^i_{A^{G,\mu}}(k,A^{G,\mu}) \otimes_{A^{G,\mu}} AG_{\mu} & \cong \text{Ext}^i_{AG_{\mu}}(AG_{\mu} \otimes_{A^{G,\mu}} k, A^{G,\mu} \otimes_{A^{G,\mu}} AG_{\mu}) \\ & \cong \text{Ext}^i_{AG_{\mu}}(kG_{\mu}, AG_{\mu}). \end{aligned} \end{gather} Combining the information from \eqref{eq: asgorfirststep} and \eqref{eq: asgorsecondstep} gives \begin{equation*} \text{Ext}^i_{A^{G,\mu}}(k,A^{G,\mu}) \otimes_{A^{G,\mu}} AG_{\mu} \cong \left\{ \begin{array}{cl} kG_{\mu} & \text{if }i=d, \\ 0 & \text{if }i \neq d. \end{array}\right. \end{equation*} As $A^{G,\mu} \subset AG_{\mu}$ is a faithfully flat extension on the left, $\text{Ext}^i_{A^{G,\mu}}(k,A^{G,\mu})$ must vanish in all degrees for which $i \neq d$.
When $i=d$ we have \begin{equation}\label{eq: asgorfinalstep} \text{Ext}^d_{A^{G,\mu}}(k,A^{G,\mu}) \otimes_{A^{G,\mu}} AG_{\mu} \cong kG_{\mu}, \end{equation} as right $AG_{\mu}$-modules. Since $k$ and $A^{G,\mu}$ are f.g.\ $\N$-graded left $A^{G,\mu}$-modules, the module structure defined in \cite[Theorem 1.15]{rotman2008introduction} implies that $\text{Ext}^i_{A^{G,\mu}}(k,A^{G,\mu})$ is a $\ensuremath{\mathbb{Z}}$-graded group. This $\ensuremath{\mathbb{Z}}$-grading is compatible with the right $A^{G,\mu}$-module structure, in which case the graded module structure on $\text{Ext}^i_{A^{G,\mu}}(k,A^{G,\mu})$ allows us to complete the proof as follows. Consider the $(A^{G,\mu},A^{G,\mu})$-bimodule structure of $AG_{\mu}$ described in Lemma \ref{lem: fflat}. Upon restricting the isomorphism in \eqref{eq: asgorfinalstep} to $A^{G,\mu}$, one obtains \begin{equation*} \bigoplus_{g\in G} \text{Ext}^d_{A^{G,\mu}}(k,A^{G,\mu})^{\phi_{g}} \cong (kG_{\mu})_{A^{G,\mu}}. \end{equation*} By considering the $G$-graded components of this isomorphism and noting that $\phi_e$ is the identity, one obtains the isomorphism of right $A^{G,\mu}$-modules $\text{Ext}^d_{A^{G,\mu}}(k,A^{G,\mu}) \cong k$, which proves the result. \end{proof} We can now combine several previous results to prove the main theorem of this section. \begin{theorem}\label{thm: asreg} Assume that Hypotheses \ref{hyp: gradedcase} hold and $A$ is connected graded. Then $A$ is AS-regular if and only if $A^{G,\mu}$ is. If in addition $A$ has global and GK dimension $\leq 4$, then $A$ is a domain if and only if $A^{G,\mu}$ is a domain. \end{theorem} \begin{proof} The statement about AS-regularity follows from Lemma \ref{lem: hilbseries} and Propositions \ref{prop: gldim} and \ref{prop: asgor}. The second part of the theorem follows from \cite[Theorem 3.9]{artin1991modules}. \end{proof} The property of being a domain is \emph{not} preserved by Zhang twists.
Such twists preserve this property when $G$ is an ordered semigroup \cite[Proposition 5.2]{zhang1998twisted} but not in general. See \cite{davies2014cocycle2} for further examples. \subsection{The Koszul property}\label{subsec: koszul} Our next aim is to show that the Koszul property is preserved under cocycle twists. We begin by giving one of several equivalent definitions for the property. \begin{defn}[{\cite[Proposition 2.1.3]{koszul1996beilinson}}]\label{defn: koszulcomplex} A c.g.\ $k$-algebra $A$ is \emph{Koszul} if and only if for all $i \geq 0$ the $\ensuremath{\mathbb{Z}}$-graded components of $\text{Ext}_A^i(k,k)$ vanish in all degrees other than degree $i$. \end{defn} Let us now prove our result concerning preservation of the Koszul property. \begin{proposition}\label{prop: koszul} In addition to Hypotheses \ref{hyp: gradedcase}, assume that $A$ is connected graded and its defining relations are in degree 2. Then $A$ is Koszul if and only if $A^{G,\mu}$ is. \end{proposition} \begin{proof} We wish to apply Proposition \ref{prop: brownlevass} with $R=A$, $S=AG_{\mu}$, $X={_A}k_A$ and $M={_A}k$. Let us check that the hypotheses are satisfied: observe that $A \subset AG_{\mu}$ is flat by Lemma \ref{lem: fflat}, while $X \otimes_R S = kG_\mu$ by \eqref{eq: degree1factor}, whence it has a natural $(AG_{\mu},AG_{\mu})$-bimodule structure. We may therefore apply Proposition \ref{prop: brownlevass}, in which case one has \begin{gather} \begin{aligned}\label{eq: koszulbrown} \text{Ext}_A^i(k,k)\otimes_A AG_{\mu} &\cong \text{Ext}_{AG_{\mu}}^i(AG_{\mu} \otimes_A k,k \otimes_A AG_{\mu}) \\ &\cong \text{Ext}_{AG_{\mu}}^i(kG_{\mu},kG_{\mu}), \end{aligned} \end{gather} using \eqref{eq: degree1factor} and \eqref{eq: degree1factor1} to pass from the first line of this equation to the second. Now set $R= A^{G,\mu}$, $S=AG_{\mu}$, $X={_A}k_A$ and $M={_A}k$.
One can use the same argument as earlier in the proof, mutatis mutandis, to see that the hypotheses of Proposition \ref{prop: brownlevass} are satisfied. Applying that result we obtain \begin{gather} \begin{aligned}\label{eq: koszulbrown1} \text{Ext}_{A^{G,\mu}}^i(k,k)\otimes_{A^{G,\mu}} AG_{\mu} &\cong \text{Ext}_{AG_{\mu}}^i(AG_{\mu} \otimes_{A^{G,\mu}} k,k \otimes_{A^{G,\mu}} AG_{\mu}) \\ &\cong \text{Ext}_{AG_{\mu}}^i(kG_{\mu},kG_{\mu}), \end{aligned} \end{gather} using \eqref{eq: degree1factor} and \eqref{eq: degree1factor1} once again. The $\ensuremath{\mathbb{Z}}$-grading on $\text{Ext}_A^i(k,k)$ and $\text{Ext}_{A^{G,\mu}}^i(k,k)$ is compatible with their right $A$- and $A^{G,\mu}$-module structures respectively. Thus the tensor products $\text{Ext}_A^i(k,k)\otimes_A AG_{\mu}$ and $\text{Ext}_{A^{G,\mu}}^i(k,k)\otimes_{A^{G,\mu}} AG_{\mu}$ are naturally $\ensuremath{\mathbb{Z}}$-graded right $AG_{\mu}$-modules. The $\ensuremath{\mathbb{Z}}$-grading on the cohomology group $\text{Ext}_{AG_{\mu}}^i(kG_{\mu},kG_{\mu})$ is also compatible with its right $AG_{\mu}$-module structure. Moreover, one can see from the proof of Proposition \ref{prop: brownlevass} (in \cite[Proposition 1.6]{brown1985cohomology}) that the isomorphisms in \eqref{eq: koszulbrown} and \eqref{eq: koszulbrown1} respect these $\ensuremath{\mathbb{Z}}$-graded structures. We may therefore conclude that there is an isomorphism \begin{equation}\label{eq: comparedims} \text{Ext}_A^i(k,k)\otimes_A AG_{\mu} \cong \text{Ext}_{A^{G,\mu}}^i(k,k)\otimes_{A^{G,\mu}} AG_{\mu} \end{equation} of $\ensuremath{\mathbb{Z}}$-graded right $AG_{\mu}$-modules. Using the free module structures of $_A(AG_{\mu})$ and $_{A^{G,\mu}}(AG_{\mu})$ described in Lemma \ref{lem: fflat}, we may express the isomorphism in \eqref{eq: comparedims} as \begin{equation}\label{eq: comparedims1} \bigoplus_{|G|}\text{Ext}_A^i(k,k) \cong \bigoplus_{|G|}\text{Ext}_{A^{G,\mu}}^i(k,k), \end{equation} at the level of vector spaces. 
Furthermore, as $AG_{\mu}$ is an $\N$-graded left module over $A$ and over $A^{G,\mu}$, the isomorphism in \eqref{eq: comparedims1} respects the $\ensuremath{\mathbb{Z}}$-graded structure. Since $A$ is Koszul, we know that the $\ensuremath{\mathbb{Z}}$-graded components of the left hand side of \eqref{eq: comparedims1} vanish in all degrees other than degree $i$. It follows that $\text{Ext}_{A^{G,\mu}}^i(k,k)$ must also vanish in all degrees other than degree $i$, hence $A^{G,\mu}$ must be Koszul. \end{proof} \subsection{The Cohen-Macaulay property and Auslander regularity}\label{subsec: cohenmac} In this section we will prove that several more homological properties of algebras are preserved under cocycle twists. The definitions that follow can all be found in \cite[\S 1.2]{levasseur1993modules}. The \emph{grade} of a finitely generated left or right $A$-module $M$ is defined to be the value \begin{equation*} j_A(M)=\text{inf}\{i: \text{Ext}_A^i(M,A)\neq 0\} \in \N \cup \{+\infty\}. \end{equation*} \begin{defn}\label{def: cm} A ring $A$ is said to satisfy the \emph{Cohen-Macaulay property} or be \emph{Cohen-Macaulay} (CM) if for all non-zero finitely generated $A$-modules $M$, one has \begin{equation*} \text{GKdim }M+j_A(M)=\text{GKdim }A. \end{equation*} \end{defn} \begin{defn}\label{def: auslanderprops} The ring $A$ satisfies the \emph{Auslander-Gorenstein condition} if for every finitely generated left or right module $M$, all $i \geq 0$ and every $A$-submodule $N$ of $\text{Ext}^i_A(M,A)$, one has $j_A(N)\geq i$. The ring is \emph{Auslander-Gorenstein} if it satisfies the Auslander-Gorenstein condition and it has finite left and right injective dimension. It is said to be \emph{Auslander regular} if in addition to satisfying the Auslander-Gorenstein condition it has finite global dimension. \end{defn} The following result shows that all of the properties defined above are preserved under a cocycle twist.
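As a minimal commutative sanity check of these definitions (not one of the twisted algebras considered in this paper), take $A=k[x,y]$ and $M=A/(x)$. The free resolution
\begin{equation*}
0 \longrightarrow A \xrightarrow{\cdot x} A \longrightarrow M \longrightarrow 0
\end{equation*}
gives $\text{Ext}^0_A(M,A)=\text{Hom}_A(M,A)=0$ and $\text{Ext}^1_A(M,A)\cong A/(x)$, so $j_A(M)=1$, and indeed $\text{GKdim }M+j_A(M)=1+1=2=\text{GKdim }A$, as the Cohen-Macaulay property requires.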
\begin{proposition}\label{prop: cohenmac} In addition to Hypotheses \ref{hyp: generalcase}, assume that $A$ is noetherian. Then $A$ has one of the following properties if and only if $A^{G,\mu}$ does as well: \begin{itemize} \item[(i)] it is Cohen-Macaulay; \item[(ii)] it is Auslander-Gorenstein; \item[(iii)] it is Auslander regular. \end{itemize} \end{proposition} \begin{proof} (i) Assume that $A$ is Cohen-Macaulay. As we proved in Lemma \ref{lem: hilbseries}(i), $\text{GKdim }A=\text{GKdim }AG_{\mu}=\text{GKdim }A^{G,\mu}$. Let $M$ be a f.g.\ right $AG_{\mu}$-module. It must also be f.g.\ as an $A$-module since the extension $A \subset AG_{\mu}$ is finite by Lemma \ref{lem: fflat}. By \cite[Lemma 5.4]{ardakov2007primeness} it is clear that the grades of $M_{AG_{\mu}}$ and $M_A$ are equal. One can then apply \cite[Lemma 1.6]{lorenz1988on} to conclude that $\text{GKdim }M_{AG_{\mu}}=\text{GKdim }M_{A}$. Piecing this together, we find that \begin{gather} \begin{aligned}\label{eq: cohenaagmu} \text{GKdim }M_{AG_{\mu}}+ j_{AG_{\mu}}(M)= \text{GKdim }M_A+j_A(M) &=\text{GKdim }A \\ &=\text{GKdim }AG_{\mu}, \end{aligned} \end{gather} and therefore $AG_{\mu}$ is Cohen-Macaulay. Now let $M$ be a f.g.\ right $A^{G,\mu}$-module. By applying a right-sided version of Proposition \ref{prop: brownlevass} with $R=X=A^{G,\mu}$ and $S=AG_{\mu}$ we obtain \begin{equation*} AG_{\mu} \otimes_{A^{G,\mu}} \text{Ext}_{A^{G,\mu}}^i \left(M,A^{G,\mu}\right) \cong \text{Ext}_{AG_{\mu}}^i\left(M \otimes_{A^{G,\mu}} AG_{\mu},AG_{\mu}\right). \end{equation*} When combined with faithful flatness of the extension $A^{G,\mu} \subset AG_{\mu}$ by Lemma \ref{lem: fflat}, this implies that \begin{equation*} j_{A^{G,\mu}}(M)=j_{AG_{\mu}}(M \otimes_{A^{G,\mu}} AG_{\mu}). \end{equation*} By faithful flatness of the extension $A^{G,\mu} \subset AG_{\mu}$, $M$ is contained in $M \otimes_{A^{G,\mu}} AG_{\mu}$.
Therefore by the definition of GK dimension one has \begin{equation}\label{eq: gkineq1} \text{GKdim }M_{A^{G,\mu}} \leq \text{GKdim }(M \otimes_{A^{G,\mu}} AG_{\mu})_{A^{G,\mu}}. \end{equation} By \cite[Proposition 5.6]{krause2000growth} one has the inequality \begin{equation}\label{eq: gkineq2} \text{GKdim }M_{A^{G,\mu}} \geq \text{GKdim }(M \otimes_{A^{G,\mu}} AG_{\mu})_{AG_{\mu}}. \end{equation} Applying \cite[Lemma 1.6]{lorenz1988on} to $M \otimes_{A^{G,\mu}} AG_{\mu}$ and using the inequalities in \eqref{eq: gkineq1} and \eqref{eq: gkineq2} shows that $\text{GKdim }M_{A^{G,\mu}}=\text{GKdim }(M \otimes_{A^{G,\mu}} AG_{\mu})_{AG_{\mu}}$. One can then see from an equality like that in \eqref{eq: cohenaagmu} that the Cohen-Macaulay property is preserved under cocycle twists. (ii) Using \cite[Proposition 3.9(i)]{yi1995injective} one can see that if $A$ satisfies the Auslander condition then so must $AG_{\mu}$. The twist $A^{G,\mu}$ then satisfies the Auslander condition by \cite[Theorem 2.2(iv)]{teo1996homological}, since the only hypothesis needed is that the extension be flat -- this is true by Lemma \ref{lem: fflat}. It remains to show that injective dimension is preserved and, by symmetry, it suffices to show this for left injective dimension. Consider the $G$-grading on $AG_{\mu}$ for which $(AG_{\mu})_g = A \otimes g$ for all $g \in G$. Under this grading $AG_{\mu}$ is a strongly $G$-graded ring, thus one can apply \cite[Corollary 2.7]{nastasescu1983strongly} with $R = N = AG_{\mu}$ and $\sigma = e$. That result implies that \begin{equation*} \text{idim }_{AG_{\mu}}AG_{\mu} = \text{idim }_{(AG_{\mu})_{e}}(AG_{\mu})_{e}= \text{idim }_{A}A. \end{equation*} Now consider the $G$-grading on $AG_{\mu}$ under which $(AG_{\mu})_g = A^{G,\mu}(1 \otimes g)$ for all $g \in G$. This $G$-grading is induced by the diagonal action of $G$ on $AG_{\mu}$. It is clear that $AG_{\mu}$ is a strongly $G$-graded ring under this grading as well.
One can therefore apply \cite[Corollary 2.7]{nastasescu1983strongly} once again to see that \begin{equation*} \text{idim }_{AG_{\mu}}AG_{\mu} = \text{idim }_{(AG_{\mu})_{e}}(AG_{\mu})_{e}= \text{idim }_{A^{G,\mu}}A^{G,\mu}. \end{equation*} (iii) We saw in the proof of (ii) that the Auslander condition is preserved. One can then use Proposition \ref{prop: gldim} to show that the global dimensions of $A$ and $A^{G,\mu}$ are equal, which completes the proof. \end{proof} \section{Twists in relation to Rogalski and Zhang's classification}\label{sec: rogzhang} We will now apply the prior theory to the work in \cite{rogalski2012regular}. As such, we will assume that $\text{char}(k)=0$ for the duration of this section. Rogalski and Zhang classify AS-regular domains of dimension 4 satisfying two extra conditions: they are generated by three degree 1 elements and admit a proper $\ensuremath{\mathbb{Z}}^{2}$-grading. Properness of such a grading, $A= \bigoplus_{n,m \in \ensuremath{\mathbb{Z}}} A_{m,n}$ say, means that $A_{0,1}\neq 0$ and $A_{1,0}\neq 0$. Their main results are summarised in the following theorem. \begin{theorem}[{\cite[Theorems 0.1 and 0.2]{rogalski2012regular}}]\label{theorem: rogzhangmain} Let $A$ be an AS-regular domain of dimension 4 which is generated by three degree 1 elements and properly $\ensuremath{\mathbb{Z}}^{2}$-graded. Then either $A$ is a normal extension of an AS-regular algebra of dimension 3, or up to isomorphism it falls into one of eight 1 or 2 parameter families, $\mathcal{A}-\mathcal{H}$. Moreover, any such algebra is strongly noetherian, Auslander regular and Cohen-Macaulay. \end{theorem} In order to study cocycle twists of such algebras we require graded algebra automorphisms, where in this case graded refers to the c.g.\ structure rather than the additional $\ensuremath{\mathbb{Z}}^{2}$-grading. Section 5 of Rogalski and Zhang's paper is concerned with precisely this topic.
The key result is the following, where generic means avoiding some finite set of parameters given in the statement of \cite[Lemma 5.1]{rogalski2012regular}. \begin{theorem}[{\cite[Theorem 5.2(a)]{rogalski2012regular}}]\label{theorem: rogzhangauts} Consider a generic AS-regular algebra $A$ in one of the families $\mathcal{A} - \mathcal{H}$. The graded automorphism group of $A$ is isomorphic either to $k^{\times}\times k^{\times}$ or to $k^{\times}\times k^{\times} \times C_2$. The first case occurs for the families $\mathcal{A}(b,q)$ with $q \neq -1$, $\mathcal{D}(h,b)$ with $h \neq b^4$, $\mathcal{F}$ and $\mathcal{H}$. The second case occurs if $A$ belongs to one of the families $\mathcal{A}(b,-1)$, $\mathcal{B}$, $\mathcal{C}$, $\mathcal{D}(h,b)$ with $h=b^4$, $\mathcal{E}$ or $\mathcal{G}$. \end{theorem} The automorphisms corresponding to $k^{\times}\times k^{\times}$ come from scaling components of the $\ensuremath{\mathbb{Z}}^{2}$-grading. Let us fix some notation for the remainder of the section. The algebra $A$ will be generated by the three degree 1 elements $x_1,x_2$ and $x_3$, where $x_1, x_2 \in A_{1,0}$ and $x_3 \in A_{0,1}$. We will follow Rogalski and Zhang in referring to the extra automorphism of order 2 as the \emph{quasi-trivial} automorphism. This automorphism interchanges $x_1$ and $x_2$ whilst fixing $x_3$. We now move onto the main result of this section, in which we show that the presence of the quasi-trivial automorphism implies the existence of cocycle twists relating algebras in different families. We will use the notation $v_i$ to denote generators of a cocycle twist when we wish to suppress the new multiplication symbol $\ast_{\mu}$. \begin{theorem}\label{theorem: rogzhangmymain} Let $G=(C_2)^2=\langle g_1, g_2\rangle$ and let $\mu$ denote the 2-cocycle on $G$ defined by \begin{equation*} \mu(g_1^p g_2^q, g_1^r g_2^s) = (-1)^{ps} \end{equation*} for all $p,q,r,s \in \{0,1\}$. 
Fix the isomorphism $G \cong G^{\vee}$ given by $g \mapsto \chi_g$, where \begin{equation*} \chi_g(h) = \left\{ \begin{array}{cl} 1 & \text{if }g=e\text{ or }h \in \{e, g\} \\ -1 & \text{otherwise}, \end{array}\right. \end{equation*} for all $g, h \in G$. Then there are $k$-algebra isomorphisms \begin{align*} \mathcal{A}(1,-1)^{G,\mu}\cong \mathcal{D}(1,1),\;\; \mathcal{B}(1)^{G,\mu} &\cong \mathcal{C}(1), \;\; \mathcal{E}(1,\gamma)^{G,\mu}\cong \mathcal{E}(1,-\gamma), \\ \mathcal{G}(1,\gamma)^{G,\mu} &\cong \mathcal{G}(1,\overline{\gamma}). \end{align*} \end{theorem} In order to save space we will not write out the defining relations of these algebras, but recommend that the reader has \cite{rogalski2012regular} to refer to when reading the proof. \begin{proofof}{\ref{theorem: rogzhangmymain}} Let us begin by defining the action of $G$ which we will use for each of the cocycle twists we perform. Note that all of the algebras in the statement of the result admit the quasi-trivial automorphism. Therefore we can let $g_1$ act via the quasi-trivial automorphism and $g_2$ act by multiplying $x_3$ by $-1$ and fixing the other two generators. Since the standard generators are not diagonal with respect to this action, we will instead use the generators \begin{equation*} w_1=x_1+x_2,\;\; w_2=x_1-x_2,\;\; w_3=x_3, \end{equation*} which are homogeneous with respect to any induced $G$-grading, since all automorphisms act on them diagonally. Denoting the algebra we wish to twist by $A$, the induced $G$-grading on the new generators is given by \begin{equation*} w_1 \in A_e,\;\; w_2 \in A_{g_{2}},\;\; w_3 \in A_{g_{1}}. \end{equation*} The defining relations of any algebra in one of the eight families belong to different components of the $\ensuremath{\mathbb{Z}}^2$-grading.
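The claimed $G$-degrees of $w_1$, $w_2$ and $w_3$ can be verified mechanically: writing the two automorphisms as matrices on $\mathrm{span}(x_1,x_2,x_3)$, each $w_i$ is a simultaneous eigenvector, and its pair of eigenvalues $(\chi(g_1),\chi(g_2))$ singles out one of the characters $\chi_g$ fixed above. A short numeric sketch (illustrative only):

```python
import numpy as np

# Action of the generators of G = (C_2)^2 on the ordered basis (x1, x2, x3):
g1 = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 1]])  # quasi-trivial: swaps x1 and x2
g2 = np.diag([1, 1, -1])                          # multiplies x3 by -1

w1 = np.array([1, 1, 0])   # w1 = x1 + x2
w2 = np.array([1, -1, 0])  # w2 = x1 - x2
w3 = np.array([0, 0, 1])   # w3 = x3

def char_pair(w):
    """Return (chi(g1), chi(g2)) for a simultaneous eigenvector w."""
    vals = []
    for m in (g1, g2):
        v = m @ w
        lam = 1 if np.array_equal(v, w) else -1
        assert np.array_equal(v, lam * w)  # check w really is an eigenvector
        vals.append(lam)
    return tuple(vals)

assert char_pair(w1) == (1, 1)    # trivial character: w1 lies in A_e
assert char_pair(w2) == (-1, 1)   # chi_{g2}: w2 lies in A_{g2}
assert char_pair(w3) == (1, -1)   # chi_{g1}: w3 lies in A_{g1}
```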
Observe that the algebras $\mathcal{A}(1,-1)$, $\mathcal{B}(1)$, $\mathcal{C}(1)$ and $\mathcal{D}(1,1)$ share three relations, only being distinguished from each other by their relations in the $(2,1)$-component. Writing the shared relations in terms of the diagonal basis we show that they are left invariant under the twist. The first two are quadratic relations: \begin{align*} 0 &= w_1^2 - w_2^2 = \frac{w_1 \ast_{\mu} w_1}{\mu(e,e)} - \frac{w_2 \ast_{\mu} w_2}{\mu(g_2,g_2)} = w_1 \ast_{\mu} w_1 - w_2 \ast_{\mu} w_2 = v_1^2 - v_2^2,\\ 0 &= w_3 w_1 - w_1 w_3 = \frac{w_3 \ast_{\mu} w_1}{\mu(g_1,e)} - \frac{w_1 \ast_{\mu} w_3}{\mu(e,g_1)} = w_3 \ast_{\mu} w_1 - w_1 \ast_{\mu} w_3 = v_3 v_1 - v_1 v_3, \end{align*} while the third relation is cubic: \begin{align*} 0 = w_3^2w_2 - w_2 w_3^2 &= \frac{w_3 \ast_{\mu} w_3 \ast_{\mu} w_2}{\mu(g_1,g_1)\mu(e,g_2)} - \frac{w_2 \ast_{\mu} w_3 \ast_{\mu} w_3}{\mu(g_2,g_1)\mu(g_1g_2,g_1)} \\ &= w_3 \ast_{\mu} w_3 \ast_{\mu} w_2 - w_2 \ast_{\mu} w_3 \ast_{\mu} w_3 \\ &= v_3^2 v_2 - v_2 v_3^2. \end{align*} Thus, to verify the first two isomorphisms in the statement of the result it suffices to consider the behaviour under the twist of the only relation they do not share. We first twist this relation in the algebra $\mathcal{A}(1,-1)$, having once again written it in terms of the new generators beforehand: \begin{align*} 0 &= [w_3,[w_1,w_2]_+] \\ &= w_3 w_1 w_2 + w_3 w_2 w_1 - w_1 w_2 w_3 - w_2 w_1 w_3 \\ &= \frac{w_3 \ast_{\mu} w_1 \ast_{\mu} w_2}{\mu(g_1,e)\mu(g_1,g_2)} + \frac{w_3 \ast_{\mu} w_2 \ast_{\mu} w_1}{\mu(g_1,g_2)\mu(g_1g_2,e)} - \frac{w_1 \ast_{\mu} w_2 \ast_{\mu} w_3}{\mu(e,g_2)\mu(g_2,g_1)} - \frac{w_2 \ast_{\mu} w_1 \ast_{\mu} w_3}{\mu(g_2,e)\mu(g_2,g_1)} \\ &= -w_3 \ast_{\mu} w_1 \ast_{\mu} w_2 - w_3 \ast_{\mu} w_2 \ast_{\mu} w_1 - w_1 \ast_{\mu} w_2 \ast_{\mu} w_3 - w_2 \ast_{\mu} w_1 \ast_{\mu} w_3 \\ &= -[v_3,[v_1,v_2]_+]_+.
\end{align*} This relation is the same as that in $\mathcal{D}(1,1)$ under the new generators, which proves the first isomorphism. Let us now move on to $\mathcal{B}(1)$. Twisting the non-shared relation we see that \begin{align*} 0 &= [w_3,[w_2,w_1]]_+ \\ &= w_3 w_2 w_1 - w_3 w_1 w_2 + w_2 w_1 w_3 - w_1 w_2 w_3 \\ &= \frac{w_3 \ast_{\mu} w_2 \ast_{\mu} w_1}{\mu(g_1,g_2)\mu(g_1g_2,e)} - \frac{w_3 \ast_{\mu} w_1 \ast_{\mu} w_2}{\mu(g_1,e)\mu(g_1,g_2)} + \frac{w_2 \ast_{\mu} w_1 \ast_{\mu} w_3}{\mu(g_2,e)\mu(g_2,g_1)} - \frac{w_1 \ast_{\mu} w_2 \ast_{\mu} w_3}{\mu(e,g_2)\mu(g_2,g_1)} \\ &= -w_3 \ast_{\mu} w_2 \ast_{\mu} w_1 + w_3 \ast_{\mu} w_1 \ast_{\mu} w_2 + w_2 \ast_{\mu} w_1 \ast_{\mu} w_3 - w_1 \ast_{\mu} w_2 \ast_{\mu} w_3 \\ &= [v_3,[v_1,v_2]]. \end{align*} This relation is shared by $\mathcal{C}(1)$ under the new generating set, which proves the second isomorphism. We now move on to the remaining two isomorphisms. The algebras in the relevant families share three relations, two of which we have already shown are preserved under the cocycle twist. This is also true for the third relation, which as yet we have not encountered: \begin{align*} 0 = w_3^2w_2 +w_2w_3^2 &= \frac{w_3 \ast_{\mu} w_3 \ast_{\mu} w_2}{\mu(g_1,g_1)\mu(e,g_2)} + \frac{w_2 \ast_{\mu} w_3 \ast_{\mu} w_3}{\mu(g_2,g_1)\mu(g_1g_2,g_1)} \\&= w_3 \ast_{\mu} w_3 \ast_{\mu} w_2 + w_2 \ast_{\mu} w_3 \ast_{\mu} w_3 \\ &= v_3^2 v_2 +v_2 v_3^2. \end{align*} Once again, it suffices to see what happens to the non-shared relation.
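The sign bookkeeping in the displayed computations reduces to evaluating products of values of $\mu$. The following brute-force sketch (illustrative only) recomputes the denominators appearing above for $\mu(g_1^p g_2^q, g_1^r g_2^s)=(-1)^{ps}$ and confirms the claimed signs.

```python
e, G1, G2, G12 = (0, 0), (1, 0), (0, 1), (1, 1)  # exponent pairs (p, q)

def mu(g, h):
    # mu(g1^p g2^q, g1^r g2^s) = (-1)^(p*s)
    return (-1) ** (g[0] * h[1])

# Shared relations: every denominator equals +1, so they are twist-invariant.
assert mu(e, e) == 1 and mu(G2, G2) == 1    # w1^2 - w2^2
assert mu(G1, e) == 1 and mu(e, G1) == 1    # w3 w1 - w1 w3
assert mu(G1, G1) * mu(e, G2) == 1          # w3^2 w2 term
assert mu(G2, G1) * mu(G12, G1) == 1        # w2 w3^2 term

# Non-shared relation of A(1,-1): the four denominators give the signs above.
assert mu(G1, e) * mu(G1, G2) == -1         # w3 w1 w2 flips sign
assert mu(G1, G2) * mu(G12, e) == -1        # w3 w2 w1 flips sign
assert mu(e, G2) * mu(G2, G1) == 1          # w1 w2 w3 keeps its sign
assert mu(G2, e) * mu(G2, G1) == 1          # w2 w1 w3 keeps its sign
```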
In $\mathcal{E}(1,\gamma)$, where $i = \sqrt{-1}$ and $\gamma = \pm i$, one has \begin{align*} 0 &= w_3w_2w_1 - w_1w_3w_2 +\gamma w_1w_2w_3 - \gamma w_2w_1w_3 \\ &= \frac{w_3 \ast_{\mu} w_2 \ast_{\mu} w_1}{\mu(g_1,g_2)\mu(g_1g_2,e)} - \frac{w_1 \ast_{\mu} w_3 \ast_{\mu} w_2}{\mu(e,g_1)\mu(g_1,g_2)} + \gamma \frac{w_1 \ast_{\mu} w_2 \ast_{\mu} w_3}{\mu(e,g_2)\mu(g_2,g_1)} - \gamma \frac{w_2 \ast_{\mu} w_1 \ast_{\mu} w_3}{\mu(g_2,e)\mu(g_2,g_1)} \\ &= -w_3 \ast_{\mu} w_2 \ast_{\mu} w_1 + w_1 \ast_{\mu} w_3 \ast_{\mu} w_2 + \gamma w_1 \ast_{\mu} w_2 \ast_{\mu} w_3 - \gamma w_2 \ast_{\mu} w_1 \ast_{\mu} w_3 \\ &= -v_3v_2v_1 + v_1v_3v_2 +\gamma v_1v_2v_3 - \gamma v_2v_1v_3. \end{align*} This is the final relation in $\mathcal{E}(1,-\gamma)$ under the new generators, which proves the penultimate isomorphism. We now twist the final relation of $\mathcal{G}(1,\gamma)$, where $\gamma=\frac{1 + i}{2}$ and so $\overline{\gamma}=\frac{1}{2 \gamma}$: \begin{align*} 0 &= w_3 w_1 w_2 +w_3 w_2 w_1 +i w_1w_2w_3 + i w_2w_1w_3 \\ &= \frac{w_3 \ast_{\mu} w_1 \ast_{\mu} w_2}{\mu(g_1,e)\mu(g_1,g_2)} + \frac{w_3 \ast_{\mu} w_2 \ast_{\mu} w_1}{\mu(g_1,g_2)\mu(g_1g_2,e)} +i\frac{w_1 \ast_{\mu} w_2 \ast_{\mu} w_3}{\mu(e,g_2)\mu(g_2,g_1)} + i \frac{w_2 \ast_{\mu} w_1 \ast_{\mu} w_3}{\mu(g_2,e)\mu(g_2,g_1)} \\ &= -w_3 \ast_{\mu} w_1 \ast_{\mu} w_2 - w_3 \ast_{\mu} w_2 \ast_{\mu} w_1 + i w_1 \ast_{\mu} w_2 \ast_{\mu} w_3 + i w_2 \ast_{\mu} w_1 \ast_{\mu} w_3 \\ &= -v_3 v_1 v_2 - v_3 v_2 v_1 + i v_1 v_2 v_3 + i v_2 v_1 v_3. \end{align*} This is precisely the final relation of $\mathcal{G}(1,\overline{\gamma})$ under the new generators, which proves the last isomorphism in the statement of the theorem.
\end{proofof} Combined with the fact that $\mathcal{A}(b,-1)$, $\mathcal{B}(b)$, $\mathcal{C}(b)$, $\mathcal{D}(b,b^4)$, $\mathcal{E}(b,\gamma)$ and $\mathcal{G}(b,\gamma)$ are Zhang twists of the respective algebras in the statement of Theorem \ref{theorem: rogzhangmymain} for any parameter $b \in k^{\times}$ \cite[\S 3]{rogalski2012regular}, this result gives further information about such algebras up to Zhang twists. \bibliographystyle{abbrv}
\section{I. Introduction} A three-dimensional (3D) topological insulator (TI) is a phase of matter with topologically protected Dirac-type surface states centered at time-reversal invariant momenta \cite{Fu07prb,Hsieh08,Hsieh09,Hasan09,ZSC09,Chen09,Xia09,Ando10,KaneReview,MooreReview,ZhangReview,AndoReview}. When coupled to a ferromagnet (F), the Dirac fermions show many exotic properties such as the magnetoelectric effect \cite{Qi08prb,Qi08np,Moore09,MacD10,Franz10,Nomura11}. By the proximity effect to a superconductor (S), the 3D TI surface states may become a topological superconductor \cite{Kane2008}. When F and S coexist on a 3D TI surface, chiral Majorana edge states can be generated at the boundary between them \cite{Kane2008,Kane2009,Beenakker09}, which leads to the formation of a zero-bias conductance peak (ZBCP) \cite{TK95} as an experimental signature \cite{Tanaka2009,BeenakkerReview,AliceaReview,Tanakareview,Snelder15}. Intrinsic topological superconductivity has also been found in doped 3D TIs, e.g., Cu$_{x}$Bi$_{2}$Se$_{3}$ \cite{hor10,097001,Sasak11,Kriener11,levy13}. On the other hand, a variety of interesting phenomena related to the Josephson effect in TI materials have been discovered \cite{Zhang11,Sacepe,Brinkman,Williams,Wang12,Snelder14,Moler15}. Recently, a non-sinusoidal current-phase relation has been reported in a 3D TI HgTe junction \cite{Moler15}. In 3D TI heterojunctions like Nb/Bi$_{1.5}$Sb$_{0.5}$Te$_{1.7}$Se$_{1.3}$/Nb, the temperature dependence of the critical current is almost linear over most of the range \cite{Snelder14}. Also, novel Josephson effects involving Majorana fermions have been predicted theoretically \cite{Tanaka2009,Linderprl10,Linderprb10,Yokoyama12,Snelder13}; however, there has been no experimental report yet. The rapid development in experiments calls for a theoretical approach which can deal with realistic structures for Josephson junctions on the 3D TI surface.
In this article, we show how to construct the Green's function from wave functions on a superconducting 3D TI surface. Using the resulting formalism, one can analyze the spatial dependence of physical quantities such as the local density of states (LDOS) and the pair potential. This approach also provides an efficient way to calculate the Josephson current for realistic junctions on 3D TI surfaces. In this work, we consider the S/normal metal (N)/ferromagnetic insulator (FI)/N' junction and the S/N/FI/N'/S Josephson junction as examples. Since a direct contact between the F and S regions is not easily realized in actual experiments, the presence of an N interlayer between S and F is a more realistic setup to study Majorana fermions. In the S/N/FI/N' junction, we find that the conductance spectra and the LDOS exhibit spikes as functions of the bias voltage and the quasiparticle energy $E$, respectively. The resulting LDOS shows an asymmetric energy dependence around $E=0$. For the S/N/FI/N'/S junction, we find that increasing the length of the N (or N') interlayer decreases the critical current monotonically. The junctions with or without FI show a non-sinusoidal current-phase relation at low temperatures. The paper is organized as follows: In Section II, we introduce our model and construct the Green's function. In Section III, we show numerical results for the S/N/FI/N' and S/N/FI/N'/S junctions and discuss them. Concluding remarks are given in Section IV. \section{II. Model} We consider the ballistic S/N/FI/N' and S/N/FI/N'/S junctions shown in Fig.\ref{fig1}. \begin{figure}[tbph] \begin{center} \includegraphics[width = 66 mm]{fig1.eps} \end{center} \begin{center} \caption{Schematics of the system: (a) S/N/FI/N' and (b) S/N/FI/N'/S junctions formed on the surface of a 3D topological insulator. The local density of states can be detected by an STM tip. The differential conductance and the supercurrent can be obtained from the leads on the two sides.
} \label{fig1} \end{center} \end{figure} The system can be described by the BdG Hamiltonian \cite{BdG,Tanaka2009}
\begin{equation}
\hat{H}=\left[
\begin{array}{cc}
h(k_{x},k_{y})+M & i\hat{\sigma}_{y}\Delta \\
-i\hat{\sigma}_{y}\Delta^{\ast} & -h^{\ast}(-k_{x},-k_{y})-M^{\ast}
\end{array}
\right], \label{Hamiltonian}
\end{equation}
in the $\left( \Psi_{\uparrow},\Psi_{\downarrow},\Psi_{\uparrow}^{\dag},\Psi_{\downarrow}^{\dag}\right)^{T}$ basis, where $h(k_{x},k_{y})=v_{f}(k_{y}\hat{\sigma}_{x}-k_{x}\hat{\sigma}_{y})-\mu(\Theta\left(-x+L_{n\left(n1\right)}\right)+\Theta\left(x-L_{n\left(n1\right)}-L_{f}\right))$ for the S/N/FI/N' (S/N/FI/N'/S) junction. $\hat{\sigma}_{i=x,y,z}$ are the Pauli matrices in spin space and $\mu$ is the chemical potential. Throughout the paper, we set $\hbar=1$. The exchange field in the F region is $M=\sum_{i=x,y,z}m_{i}\hat{\sigma}_{i}\Theta\left(x-L_{n\left(n1\right)}\right)\Theta\left(L_{n\left(n1\right)}+L_{f}-x\right)$ for the S/N/FI/N' (S/N/FI/N'/S) junction. The pair potential $\Delta$ is given by $\Delta_{0}\Theta(-x)$ for the S/N/FI/N' junction and $\Delta_{0}[\Theta(-x)+e^{-i\phi}\Theta(x-L_{n1}-L_{f}-L_{n2})]$ for the S/N/FI/N'/S junction, where $\phi$ is the macroscopic superconducting phase. In this article, we use the standard formula of tunneling spectroscopy \cite{BTK,TK95}, as in Ref.\cite{Tanaka2009}, to obtain the differential conductance spectra of the S/N/FI/N' junction. Here, we present the construction of the retarded Green's function, which has recently been applied to relativistic systems such as graphene \cite{Asano08,Burst10} and 1D helical states on a TI \cite{Pablo15}. In our system, the translational invariance along the $y$-axis is preserved; thus the retarded Green's function associated with Eq.(\ref{Hamiltonian}) has the form $\check{G}(x,x^{\prime},y,y^{\prime})=\sum\nolimits_{k_{y}}G^{k_{y}}(x,x^{\prime})e^{ik_{y}(y-y^{\prime})}$.
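As a quick numerical sanity check (ours, not part of the original derivation), the $4\times 4$ BdG Hamiltonian of Eq.(\ref{Hamiltonian}) can be assembled for a uniform S region; the parameter values below are purely illustrative.

```python
import numpy as np

# Pauli matrices in spin space
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def h_dirac(kx, ky, vf=1.0, mu=1.0):
    """Surface Dirac Hamiltonian h(kx, ky) = vf*(ky*sx - kx*sy) - mu."""
    return vf * (ky * sx - kx * sy) - mu * np.eye(2)

def bdg(kx, ky, delta=1e-3, m=(0.0, 0.0, 0.0), vf=1.0, mu=1.0):
    """BdG Hamiltonian of Eq.(1) in the (up, down, up^+, down^+) basis."""
    M = m[0] * sx + m[1] * sy + m[2] * sz          # exchange field
    pair = 1j * sy * delta                          # i*sigma_y*Delta
    return np.block([
        [h_dirac(kx, ky, vf, mu) + M, pair],
        [-1j * sy * np.conj(delta),
         -np.conj(h_dirac(-kx, -ky, vf, mu)) - np.conj(M)],
    ])

# H is Hermitian; at k = 0 and without exchange field its eigenvalues
# are +/- sqrt(mu^2 + Delta0^2), each doubly degenerate.
H = bdg(0.0, 0.0)
```

This check confirms the block structure and Hermiticity of the Hamiltonian before any scattering problem is set up.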
The retarded Green's function can be written as \cite{McMillan,FT,TanakaD,Tanaka2000,Burst10}
\begin{equation}
\begin{array}{r}
G^{k_{y}}(x,x^{\prime})=\alpha_{1}\psi_{1}(x)\tilde{\psi}_{3}^{T}(x^{\prime})+\alpha_{2}\psi_{1}(x)\tilde{\psi}_{4}^{T}(x^{\prime}) \\
+\alpha_{3}\psi_{2}(x)\tilde{\psi}_{3}^{T}(x^{\prime})+\alpha_{4}\psi_{2}(x)\tilde{\psi}_{4}^{T}(x^{\prime})
\end{array}
\end{equation}
for $x>x^{\prime}$ and
\begin{equation}
\begin{array}{r}
G^{k_{y}}(x,x^{\prime})=\beta_{1}\psi_{3}(x)\tilde{\psi}_{1}^{T}(x^{\prime})+\beta_{2}\psi_{4}(x)\tilde{\psi}_{1}^{T}(x^{\prime}) \\
+\beta_{3}\psi_{3}(x)\tilde{\psi}_{2}^{T}(x^{\prime})+\beta_{4}\psi_{4}(x)\tilde{\psi}_{2}^{T}(x^{\prime})
\end{array}
\end{equation}
for $x<x^{\prime}$. $\psi_{i=1\sim 4}\left(x\right)$ are wave functions of Eq.(\ref{Hamiltonian}) with wave vector $k_{y}$. $\psi_{1(2)}(x)$ is the wave function for an incident electron-like (hole-like) particle from the left side. $\psi_{3(4)}(x)$ is the wave function for an incident electron-like (hole-like) particle from the right side. $\tilde{\psi}_{i=1\sim 4}(x^{\prime})$ are the wave functions corresponding to the conjugate processes under the Hamiltonian
\begin{equation}
\tilde{H}=\left[
\begin{array}{cc}
\tilde{h}(k_{x},k_{y})+M^{\ast} & i\sigma_{y}\Delta^{\ast} \\
-i\sigma_{y}\Delta & -\tilde{h}^{\ast}(-k_{x},-k_{y})-M
\end{array}
\right] \label{cHamiltonian}
\end{equation}
with wave vector $-k_{y}$, where $\tilde{h}(k_{x},k_{y})$ is given by $\tilde{h}(k_{x},k_{y})=v_{f}(-k_{y}\sigma_{x}-k_{x}\sigma_{y})-\mu[\Theta(-x+d_{n1})+\Theta(x-d_{n1}-d_{f})]$.
For example, in the left S region, the wave functions are
\begin{subequations}
\begin{alignat}{4}
& \psi_{1}(x)=\hat{A}_{1}e^{ik_{+}x}+a_{1}\hat{A}_{4}e^{ik_{-}x}+b_{1}\hat{A}_{3}e^{-ik_{+}x}, & & & & & & \\
& \psi_{2}(x)=\hat{A}_{2}e^{-ik_{-}x}+a_{2}\hat{A}_{3}e^{-ik_{+}x}+b_{2}\hat{A}_{4}e^{ik_{-}x}, & & & & & & \\
& \psi_{3}(x)=c_{3}\hat{A}_{3}e^{-ik_{+}x}+d_{3}\hat{A}_{4}e^{ik_{-}x}, & & & & & & \\
& \psi_{4}(x)=c_{4}\hat{A}_{4}e^{ik_{-}x}+d_{4}\hat{A}_{3}e^{-ik_{+}x}, & & & & & &
\end{alignat}
\end{subequations}
and
\begin{subequations}
\begin{alignat}{4}
& \tilde{\psi}_{1}(x^{\prime})=\hat{B}_{1}e^{ik_{+}x^{\prime}}+\tilde{a}_{1}\hat{B}_{4}e^{ik_{-}x^{\prime}}+\tilde{b}_{1}\hat{B}_{3}e^{-ik_{+}x^{\prime}}, & & & & & & \\
& \tilde{\psi}_{2}(x^{\prime})=\hat{B}_{2}e^{-ik_{-}x^{\prime}}+\tilde{a}_{2}\hat{B}_{3}e^{-ik_{+}x^{\prime}}+\tilde{b}_{2}\hat{B}_{4}e^{ik_{-}x^{\prime}}, & & & & & & \\
& \tilde{\psi}_{3}(x^{\prime})=\tilde{c}_{3}\hat{B}_{3}e^{-ik_{+}x^{\prime}}+\tilde{d}_{3}\hat{B}_{4}e^{ik_{-}x^{\prime}}, & & & & & & \\
& \tilde{\psi}_{4}(x^{\prime})=\tilde{c}_{4}\hat{B}_{4}e^{ik_{-}x^{\prime}}+\tilde{d}_{4}\hat{B}_{3}e^{-ik_{+}x^{\prime}}. & & & & & &
\end{alignat}
The corresponding wave vectors are $k_{\pm}=\sqrt{(\mu\pm\sqrt{E^{2}-\Delta_{0}^{2}})^{2}/v_{f}^{2}-k_{y}^{2}}\equiv q_{e(h)}\cos\theta_{\pm}$ with $q_{e(h)}=(\mu\pm\sqrt{E^{2}-\Delta_{0}^{2}})/v_{f}$.
\end{subequations}
The spinors are given by
\begin{subequations}
\begin{alignat}{4}
& \hat{A}_{1}(\hat{B}_{3}) & =& [iu,\pm e^{\pm i\theta_{+}}u,\mp e^{\pm i\theta_{+}}v,iv]^{T}, & & & & \\
& \hat{A}_{2}(\hat{B}_{4}) & =& [ie^{\pm i\theta_{-}}v,\mp v,\pm u,ie^{\pm i\theta_{-}}u]^{T}, & & & & \\
& \hat{A}_{3}(\hat{B}_{1}) & =& [ie^{\pm i\theta_{+}}u,\mp u,\pm v,ie^{\pm i\theta_{+}}v]^{T}, & & & & \\
& \hat{A}_{4}(\hat{B}_{2}) & =& [iv,\pm e^{\pm i\theta_{-}}v,\mp e^{\pm i\theta_{-}}u,iu]^{T}, & & & &
\end{alignat}
\end{subequations}
where $u$ and $v$ are given by $u(v)=\sqrt{(E\pm\sqrt{E^{2}-\Delta_{0}^{2}})/2E}$. Other wave functions can be found in the Appendix. The coefficients $a_{i}$, $b_{i}$, $\tilde{a}_{i}$ and $\tilde{b}_{i}$ can be solved from the boundary conditions for relativistic systems. For example, in the S/N/FI/N' junction, the boundary conditions are $\psi_{i}(x=0_{+})=\psi_{i}(x=0_{-})$, $\psi_{i}(x=d_{n+})=\psi_{i}(x=d_{n-})$, $\psi_{i}(x=d_{n}+d_{f+})=\psi_{i}(x=d_{n}+d_{f-})$, and similarly for the other processes. $\alpha_{i=1\sim 4}$ and $\beta_{i=1\sim 4}$ are determined by the boundary condition of the Green's function
\begin{equation}
G^{k_{y}}(x+0,x)-G^{k_{y}}(x-0,x)=v_{f}^{-1}(i\hat{\tau}_{z}\hat{\sigma}_{y}),
\end{equation}
where $\hat{\tau}_{i=x,y,z}$ are the Pauli matrices in electron-hole space. In real materials, the magnitude of the superconducting gap is much smaller than the chemical potential, $\Delta_{0}\sim 10^{-3}\mu$, so we can use the quasiclassical approximation $q_{e}\sim q_{h}$ and $\theta_{+}\sim\theta_{-}\equiv\theta$.
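The quality of the quasiclassical approximation can be checked directly; the short sketch below is our own illustration with the representative values $\mu=v_{f}=1$ and $\Delta_{0}=10^{-3}$ used elsewhere in the text.

```python
import numpy as np

mu, vf, delta0 = 1.0, 1.0, 1e-3      # gap much smaller than chemical potential
E = 2 * delta0                        # quasiparticle energy above the gap
omega = np.sqrt(E**2 - delta0**2)     # sqrt(E^2 - Delta0^2)

# Electron-like and hole-like wave numbers q_{e(h)} = (mu +/- omega)/vf
q_e = (mu + omega) / vf
q_h = (mu - omega) / vf

# Propagation angles from k_pm = q_{e(h)} cos(theta_pm) at fixed k_y
ky = 0.5
theta_p = np.arccos(np.sqrt(q_e**2 - ky**2) / q_e)
theta_m = np.arccos(np.sqrt(q_h**2 - ky**2) / q_h)
```

With these numbers, $q_{e}$ and $q_{h}$ (and likewise $\theta_{+}$ and $\theta_{-}$) differ only at the per-mille level, which is why setting $q_{e}\sim q_{h}$ and $\theta_{+}\sim\theta_{-}$ is harmless.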
Then one can easily obtain the values of $\alpha_{i=1\sim 4}$ and $\beta_{i=1\sim 4}$,
\begin{subequations}
\begin{eqnarray}
\alpha_{1(4)} &=&[2iv_{f}\cos\theta(u^{2}-v^{2})(\tilde{d}_{3}\tilde{d}_{4}-\tilde{c}_{3}\tilde{c}_{4})]^{-1}\tilde{c}_{4(3)}, \\
\alpha_{2(3)} &=&[2iv_{f}\cos\theta(u^{2}-v^{2})(\tilde{c}_{3}\tilde{c}_{4}-\tilde{d}_{3}\tilde{d}_{4})]^{-1}\tilde{d}_{3(4)}, \\
\beta_{1(4)} &=&[2iv_{f}\cos\theta(u^{2}-v^{2})(d_{3}d_{4}-c_{3}c_{4})]^{-1}c_{4(3)}, \\
\beta_{2(3)} &=&[2iv_{f}\cos\theta(u^{2}-v^{2})(c_{3}c_{4}-d_{3}d_{4})]^{-1}d_{3(4)}.
\end{eqnarray}
\end{subequations}
From the Green's function, we can obtain the local density of states for electrons, $\rho_{e}(x,E)$, and that for holes, $\rho_{h}(x,E)$,
\begin{equation}
\rho_{e\left(h\right)}(x,E)=\rho_{e\left(h\right),\uparrow}(x,E)+\rho_{e\left(h\right),\downarrow}(x,E),
\end{equation}
where the spin-resolved LDOSs are given by
\begin{eqnarray}
\rho_{e,\uparrow\left(\downarrow\right)}(x,E) &=&-\frac{1}{\pi}\sum_{k_{y}}\mathrm{Im}[G_{11\left(22\right)}^{k_{y}}(x,x,E)], \\
\rho_{h,\uparrow\left(\downarrow\right)}(x,E) &=&-\frac{1}{\pi}\sum_{k_{y}}\mathrm{Im}[G_{33\left(44\right)}^{k_{y}}(x,x,E)].
\end{eqnarray}
The dc Josephson current is determined by the electric charge conservation law
\begin{equation}
\partial_{t}P+\partial_{x}J_{x}+S=0,
\end{equation}
where $P=\Psi_{\uparrow}^{\dag}\Psi_{\uparrow}+\Psi_{\downarrow}^{\dag}\Psi_{\downarrow}$, $J_{x}=iv_{f}(\Psi_{\uparrow}^{\dag}\Psi_{\downarrow}-\Psi_{\downarrow}^{\dag}\Psi_{\uparrow})$ and $S=\mathrm{Im}[\Delta^{\ast}\Psi_{\downarrow}\Psi_{\uparrow}-\Delta^{\ast}\Psi_{\uparrow}\Psi_{\downarrow}]$ are the electric charge density, electric current and source term, respectively.
After straightforward calculations following Refs.\cite{FT,Tanaka2000}, we find that the total Josephson current is
\begin{equation}
J_{x}=ek_{B}T\sum_{k_{y},\omega_{n}}\frac{\Delta}{2}\frac{\mathrm{sgn}(\omega_{n})}{\sqrt{\omega_{n}^{2}+\Delta^{2}}}\left[a_{1}(i\omega_{n})-a_{2}(i\omega_{n})\right], \label{ft}
\end{equation}
where $\omega_{n}$ is the Matsubara frequency, $\omega_{n}=\pi k_{B}T(2n+1)$ $(n=0,\pm 1,\pm 2,\ldots)$. Eq.(\ref{ft}) shows that the Furusaki-Tsukada formula \cite{FT} is also applicable to ballistic Dirac-like electron systems on 3D TI surfaces \cite{Benj,bYang}. It enables us to calculate the dc Josephson current directly, even in more complicated or longer Josephson junctions on a 3D TI surface, without starting from the energy levels of the Andreev bound states \cite{Tanaka2009,Tkachov}. \section{III. Numerical Results} \subsection{A. S/N/FI/N' junction} First, we show the conductance $\sigma_{s}$ (see Appendix) of the S/N/FI/N' junction in Fig.\ref{fig2}. We normalize $\sigma_{s}$ by $\sigma_{n}$, the conductance when S is in the normal state. We only consider the exchange field along the $z$- and $x$-axes, since a magnetization along the $y$-axis does not change the conductance \cite{Tanaka2009}. The length of the N layer between S and FI is denoted by $L_{n}$; direct contact between S and FI corresponds to $L_{n}=0$. For sufficiently large $m_{z}$ ($m_{x}$), the normalized conductance has a ZBCP similar to that of a chiral $p$-wave superconductor \cite{Tanaka2009} when the magnetization is along the $z$-axis, as shown in Fig.\ref{fig2}(a). As seen in Fig.\ref{fig2}(b), a ZBCP also appears when the magnetization is along the $x$-axis. As $L_{n}$ increases, sub-gap resonant peaks show up (Figs.\ref{fig2}(c)$\sim$(f)). The number of such peaks grows with $L_{n}$.
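The growth of the number of sub-gap peaks with $L_{n}$ admits a simple round-trip phase estimate. The sketch below is our own back-of-the-envelope count at normal incidence ($k_{y}=0$), where the electron (hole) wave number in N is $k_{n}^{\pm}=(\mu\pm E)/v_{f}$, so resonances occur whenever $(k_{n}^{+}-k_{n}^{-})L_{n}=2EL_{n}/v_{f}$ is a multiple of $2\pi$.

```python
import numpy as np

def resonance_energies(L_n, delta0=1e-3, vf=1.0):
    """Resonance energies E_m = pi*m*vf/L_n below the gap delta0,
    from the condition (k_n^+ - k_n^-)*L_n = 2*E*L_n/vf = 2*pi*m."""
    if L_n <= 0:
        return np.array([])
    m_max = int(np.floor(delta0 * L_n / (np.pi * vf)))
    return np.pi * np.arange(1, m_max + 1) * vf / L_n

# The number of sub-gap resonances grows linearly with L_n,
# consistent with the peak counting in the conductance spectra.
```

Doubling $L_{n}$ doubles the number of sub-gap resonances in this estimate.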
\begin{figure}[tbph] \begin{center} \includegraphics[width = 76 mm]{fig2.eps} \end{center} \begin{center} \caption{Normalized tunneling conductance as a function of bias voltage $(eV/\Delta_{0})$ for S/N/FI/N' junctions. (a), (b): $L_{n}=0$; (c), (d): $L_{n}=\xi$; and (e), (f): $L_{n}=3\xi$. Black curves: $m_{z}/\mu=1$; red curves: $m_{x}/\mu=1$. $\mu=1$, $v_{f}=1$, $\Delta_{0}=0.001$ and $L_{f}=0.001\xi$ are chosen for all the panels.} \label{fig2} \end{center} \end{figure} This oscillatory phenomenon can also be seen in the local density of states $\rho_{e(h)}\left(x,E\right)$. We normalize $\rho_{e(h)}\left(x,E\right)$ to the electron density of states of the bulk normal metal $\rho_{n}$ at the Fermi energy. Here, we choose the position in the middle of the FI, $x_{0}=L_{f}/2+L_{n}$, and show the density of states in Fig.\ref{fig3}. When $L_{n}\neq 0$, we obtain the subgap peaks again, as shown in Figs.\ref{fig3}(c)$\sim$(f). The formation of such peaks can be explained as follows. The wave vector for an electron (hole) is $k_{n}^{\pm}=\sqrt{(\mu\pm E)^{2}/v_{f}^{2}-k_{y}^{2}}$. The condition for forming Andreev bound states in the N layer can be estimated from the Bohr-Sommerfeld quantization condition
\begin{equation}
e^{i(k_{n}^{+}-k_{n}^{-})L_{n}}=1,
\end{equation}
which shows that the number of peaks is proportional to $L_{n}$. A similar formation of Andreev bound states was also revealed in junctions with 1D helical edge states \cite{Crepin}. \begin{figure}[tbph] \begin{center} \includegraphics[width = 76 mm]{fig3.eps} \end{center} \begin{center} \caption{Local density of states in the middle of the FI in S/N/FI/N' junctions as a function of energy $(E/\Delta_{0})$: (a), (b): $L_{n}=0$; (c), (d): $L_{n}=\xi$; and (e), (f): $L_{n}=3\xi$. Solid lines: electron density of states; dashed lines: hole density of states. Other parameters are the same as in Fig.\ref{fig2}.
} \label{fig3} \end{center} \end{figure} We also find an asymmetric $E$ dependence of the LDOSs near the S/FI interface, e.g., $\rho_{e}\left(x_{0},E\right)$ ($\rho_{h}\left(x_{0},E\right)$) in Fig.\ref{fig3}. The asymmetry becomes prominent when the magnetization is along the $z$-axis (Figs.\ref{fig3}(a), (c) and (e)). It is known that $\rho_{e}\left(x,E\right)$ and $\rho_{h}\left(x,E\right)$ are symmetric functions of $E$ for a chiral $p$-wave superconductor when $\Delta_{0}$ is much smaller than $\mu$ \cite{Sigrist}. In that case, time-reversal symmetry is already broken in the bulk states of the $p$-wave superconductor. On the other hand, the superconductor on a TI is time-reversal invariant and cannot support a chiral edge mode without an attached ferromagnet. Therefore, the chiral edge mode studied here has a nature similar to Shiba-type bound states \cite{Yu,Shiba,Rusinov} induced by magnetic impurity scattering. In the usual case, where the spin degree of freedom is degenerate, the emerging Shiba states still obey the relation $\rho_{e\left(h\right)}(x,E)=\rho_{e\left(h\right)}(x,-E)$, although the decomposed LDOS in each spin sector, $\rho_{e\left(h\right),\sigma}(x,E)$, does not satisfy $\rho_{e,\sigma}(x,E)=\rho_{e,\sigma}(x,-E)$. Since $\rho_{e\left(h\right),\sigma}(x,E)=\rho_{e\left(h\right),-\sigma}(x,-E)$ holds, summing over the spin components gives $\rho_{e\left(h\right)}(x,E)=\rho_{e\left(h\right),\uparrow}(x,E)+\rho_{e\left(h\right),\downarrow}(x,E)=\rho_{e\left(h\right),\downarrow}(x,-E)+\rho_{e\left(h\right),\uparrow}(x,-E)=\rho_{e\left(h\right)}(x,-E)$, so the resulting LDOS is symmetric around $E=0$. On the other hand, if the spin degeneracy is lifted in the superconductor, the LDOS can become asymmetric. In the present case, there is strong spin-momentum locking in the superconducting region due to spin-orbit coupling.
Then the asymmetric energy dependence of $\rho_{e\left(h\right)}(x,E)$ appears near the S/FI interface. In a recent scanning tunneling spectroscopy (STS) experiment, a similar asymmetric behavior of the LDOS was observed in a 1D S/F system \cite{Perge2014}. We can regard our finding in Fig.\ref{fig3} as another example of asymmetric LDOS in a planar S/F junction which can be detected by STS. To see the spatial dependence of the Majorana states in such junctions, we show the zero-energy density of states $\rho_{e}(x,E=0)$ throughout the junction. Because $\rho_{e}(x,E=0)$ vanishes in both the isolated S and FI regions, a significant enhancement of $\rho_{e}(x,E=0)$ at the S/FI interface of the S/FI/N junction can be regarded as an experimental signature of the chiral Majorana fermion. In the S region, we estimate that the characteristic length governing the spatial change of $\rho_{e}(x,E=0)$ is of the order of the macroscopic length scale $\xi$. This suggests a good chance of detecting the presence of the Majorana fermion experimentally by STS, since positioning the STM tip exactly at the S/N or S/F boundary with high resolution is not easy. Also, as seen in Fig.\ref{fig4}(b), even if there is a normal layer between S and FI, the enhancement of $\rho_{e}(x,E=0)$ in both F and S is not affected. In the N layer between S and FI, $\rho_{e}(x,E=0)$ is almost constant. In the right N layer, we find oscillations of $\rho_{e}(x,E=0)$ on the scale of the inverse Fermi momentum. However, this oscillatory behavior may be difficult to detect in actual experiments. \begin{figure}[tbph] \begin{center} \includegraphics[width = 58 mm]{fig4.eps} \end{center} \begin{center} \caption{Spatial dependence of the zero-energy states in (a) the S/FI/N junction and (b) the S/N/FI/N' junction. The width of the F layer is $L_{f}=0.001\xi$ and that of the N layer in (b) is $L_{n}=\xi$. Other parameters are the same as in Fig.\ref{fig2}. The scale of the horizontal axis is different in each region.
} \label{fig4} \end{center} \end{figure} \subsection{B. Josephson effect} \begin{figure}[tbph] \begin{center} \includegraphics[width = 73 mm]{revisefig5.eps} \end{center} \begin{center} \caption{S/N/S Josephson junction: (a) current-phase relation and (b) critical current for $L_{n}=0.01\xi$; (c) and (d) are those for $L_{n}=\xi$. Dependence on the length $L_{n}$ for $T=0.1T_{c}$: (e) current-phase relation and (f) critical current. Other parameters are the same as in Fig.\ref{fig2}. } \label{fig5} \end{center} \end{figure} Before discussing the S/N/FI/N'/S junction, let us first look at the S/N/S junction. Using Eq.(\ref{ft}), we plot the dc Josephson current in Fig.\ref{fig5}. It is normalized as $eR_{N}J/\Delta_{0}$, where $R_{N}$ is the interface resistance per unit area in the normal state. In panel (a), we can see that the current-phase relation is non-sinusoidal for the short junction at low temperature. This characteristic remains in the long junction, as shown in panel (c). We note that in a recent experiment on Nb/3D-HgTe/Nb Josephson junctions, the current-phase relation was found to be non-sinusoidal \cite{Moler15}. The experimental conditions correspond to the low-temperature, long-junction limit of our calculation, and we find a similar result in that limit, as shown in panel (c). The temperature dependence of the critical current $J_{c}$ for the short and long junctions is given in panels (b) and (d), respectively. We observe that at high temperature $J_{c}$ is a concave function of $T$ for small $L_{n}$, while it becomes a convex function for large $L_{n}$. It is also interesting to note that over a wide range of low temperatures, $J_{c}$ is nearly a linear function of $T$ in both the short and long junctions. This result is in good agreement with recent experiments on long Nb/Bi$_{1.5}$Sb$_{0.5}$Te$_{1.7}$Se$_{1.3}$/Nb Josephson junctions \cite{Snelder14}. In Figs.\ref{fig5}(e) and (f), we plot the length dependence of the Josephson current.
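The Matsubara-sum machinery behind Eq.(\ref{ft}) can be sanity-checked against a closed form known from BCS theory. The sketch below is our own illustration (with $k_{B}=1$); it does not reproduce the junction coefficients $a_{1,2}$, only the convergence of sums over $\omega_{n}=\pi k_{B}T(2n+1)$.

```python
import numpy as np

def matsubara_sum(delta, T, nmax=200000):
    """k_B*T times the sum over omega_n = pi*T*(2n+1) of 1/(omega_n^2 + delta^2)."""
    n = np.arange(-nmax, nmax)
    wn = np.pi * T * (2 * n + 1)
    return T * np.sum(1.0 / (wn**2 + delta**2))

def closed_form(delta, T):
    """Standard identity: T * sum_n 1/(omega_n^2 + delta^2) = tanh(delta/(2T))/(2*delta)."""
    return np.tanh(delta / (2 * T)) / (2 * delta)
```

The truncated sum reproduces the closed form to high accuracy, and, like the critical current, it decreases with increasing temperature.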
We now consider S/N/FI/N'/S Josephson junctions. The lengths of the N layers on the two sides of the FI are denoted by $L_{n1}$ and $L_{n2}$. When $L_{n1}$ and $L_{n2}$ are of the order of the superconducting coherence length, the junctions become long junctions. The influence of the N layers between FI and S is shown in Fig.\ref{fig6}. From Figs.\ref{fig6}(a) and (b), we can see that the current-phase relation still retains its non-sinusoidal shape for different values of $L_{n1}$ and $L_{n2}$ in the low-temperature limit. Throughout our study, however, we have not found the sawtooth-like current-phase relation of Fig.\ref{fig5}(c) once magnetization is involved, even in the long-junction and low-temperature limit. This is because the magnetization gaps the Andreev bound states for most values of $k_{y}$ \cite{Tanaka2009}, so the derivative of the energy dispersion, which generates the Josephson current, is a smoother function of the phase than in the S/N/S junction. As for the temperature dependence of the critical current, it behaves qualitatively differently in the low-temperature limit for $m_{z}$ and $m_{x}$, as shown in Figs.\ref{fig6}(c) and (d), respectively. For the $m_{z}$ case, the critical current $J_{c}$ saturates at a constant value, as revealed in previous work \cite{Linderprb10}. For the $m_{x}$ case, however, it shows a Kulik-Omel'yanchuk type of critical current \cite{KO} with a linear low-temperature behavior. We interpret this as a result of the enhanced zero-energy LDOS for the $m_{x}$ magnetization, as illustrated in Figs.\ref{fig3}(b), (d) and (f). In the high-temperature limit, for both the $m_{z}$ and $m_{x}$ cases, $J_{c}$ is a concave function which crosses over to a convex function with increasing $L_{n1}$ (or $L_{n2}$). This behavior is similar to that of the S/N/S junction. Figs.\ref{fig6}(e) and (f) show the critical current as a function of the lengths $L_{n1}$ and $L_{n2}$ for different directions of the magnetization.
\begin{figure}[tbph] \begin{center} \includegraphics[width = 86 mm]{revisefig6.eps} \end{center} \begin{center} \caption{S/N/FI/N'/S junction: current-phase relation for three choices of the N layer lengths $L_{n1}$ and $L_{n2}$ for (a) $m_{z}/\mu=1$ and (b) $m_{x}/\mu=1$. (c), (d) Temperature dependence of the critical current corresponding to (a) and (b), respectively. (e), (f) Critical Josephson current as a function of $L_{n1}$ and $L_{n2}$. The temperature is chosen as $T=0.1T_{c}$. Other parameters are the same as in Fig.\ref{fig5}. } \label{fig6} \end{center} \end{figure} It is worth noting that, although the N interlayer in the S/N/FI/N' junction generates resonant spikes in the transport properties, e.g., the spikes in Figs.\ref{fig2} and \ref{fig3}, we find no oscillatory behavior in either the current-phase relation or the critical current as a function of the N (or N') length. The critical current decreases monotonically with the length $L_{n1}+L_{n2}$. \section{IV. SUMMARY} In summary, we have theoretically studied the S/N/FI/N', S/N/S and S/N/FI/N'/S junctions on the surface of a 3D topological insulator. We have constructed the Green's function from the wave functions. The conductance spectra and the local density of states in the S/N/FI/N' junction show resonant spikes due to the Andreev bound states. The calculated current-phase relation and temperature dependence of the critical current in the Josephson junctions are consistent with recent experiments on S/N/S junctions. We have also calculated the current-phase relation and the temperature dependence of the critical current in the S/N/FI/N'/S junction. A non-sinusoidal current-phase relation can be expected for short junctions. We hope the obtained results will be confirmed by experiments in the near future. \section{ACKNOWLEDGEMENTS} We thank V.V. Ryazanov, A.A. Golubov, Y. Asano and M. Sato for valuable discussions.
This work was supported in part by Grants-in-Aid for Scientific Research from the Ministry of Education, Culture, Sports, Science and Technology of Japan (Topological Quantum Phenomena No.22103005 and No.25287085), by the German-Japanese research unit FOR1483 on ``Topotronics'', and by the Ministry of Education and Science of the Russian Federation, Grant No.14Y.26.31.0007. \section{APPENDIX: WAVE FUNCTIONS} The wave functions in the N interlayer are
\begin{eqnarray}
\psi_{i}(x) &=&\sum\limits_{\lambda=1}^{4}s_{\lambda}^{i}\hat{N}_{\lambda}e^{ik_{n,\lambda}x}, \\
\tilde{\psi}_{i}(x) &=&\sum\limits_{\lambda=1}^{4}\tilde{s}_{\lambda}^{i}\hat{N}_{\lambda}e^{ik_{n,\lambda}x},
\end{eqnarray}
where
\begin{eqnarray}
\hat{N}_{1\left(2\right)} &=&\left[v_{f}\left(ik_{n,1\left(2\right)}+k_{y}\right),\mu+E,0,0\right]^{T}, \\
\hat{N}_{3\left(4\right)} &=&\left[0,0,v_{f}\left(ik_{n,3\left(4\right)}-k_{y}\right),-\mu+E\right]^{T},
\end{eqnarray}
with
\begin{eqnarray}
k_{n,1\left(2\right)} &=&\pm\sqrt{(\mu+E)^{2}/v_{f}^{2}-k_{y}^{2}}=\pm k_{n}^{+}, \\
k_{n,3\left(4\right)} &=&\pm\sqrt{(\mu-E)^{2}/v_{f}^{2}-k_{y}^{2}}=\pm k_{n}^{-}.
\end{eqnarray}
For the FI interlayer, we find that
\begin{eqnarray}
\psi_{i}(x) &=&\sum_{\lambda=1}^{4}f_{\lambda}^{i}\hat{F}_{\lambda}e^{ik_{\lambda}^{f}x}, \\
\tilde{\psi}_{i}(x) &=&\sum_{\lambda=1}^{4}\tilde{f}_{\lambda}^{i}\hat{F}_{\lambda}e^{ik_{\lambda}^{f}x},
\end{eqnarray}
where
\begin{subequations}
\begin{alignat}{4}
\hat{F}_{1}& =\left[iv_{f}k_{1}^{f}+\left(v_{f}k_{y}+m_{x}\right),E-m_{z},0,0\right]^{T}, & & & & & & \\
\hat{F}_{2}& =\left[E+m_{z},-iv_{f}k_{2}^{f}+\left(v_{f}k_{y}+m_{x}\right),0,0\right]^{T}, & & & & & & \\
\hat{F}_{3}& =\left[0,0,-iv_{f}k_{3}^{f}+\left(v_{f}k_{y}-m_{x}\right),E+m_{z}\right]^{T}, & & & & & & \\
\hat{F}_{4}& =\left[0,0,E-m_{z},iv_{f}k_{4}^{f}+\left(v_{f}k_{y}-m_{x}\right)\right]^{T}, & & & & & &
\end{alignat}
\end{subequations}
with
\begin{subequations}
\begin{alignat}{4}
& k_{1}^{f}=-\varsigma_{1}\sqrt{E^{2}-m_{z}^{2}-\left(v_{f}k_{y}+m_{x}\right)^{2}}, & & & & & & \\
& k_{2}^{f}=\varsigma_{1}\sqrt{E^{2}-m_{z}^{2}-\left(v_{f}k_{y}+m_{x}\right)^{2}}, & & & & & & \\
& k_{3}^{f}=\varsigma_{2}\sqrt{E^{2}-m_{z}^{2}-\left(v_{f}k_{y}-m_{x}\right)^{2}}, & & & & & & \\
& k_{4}^{f}=-\varsigma_{2}\sqrt{E^{2}-m_{z}^{2}-\left(v_{f}k_{y}-m_{x}\right)^{2}}, & & & & & &
\end{alignat}
\end{subequations}
and $\varsigma_{1\left(2\right)}=\mathrm{sgn}\left(v_{f}k_{y}\pm m_{x}\right)$. The wave functions in the N' region of the S/N/FI/N' junction are
\begin{subequations}
\begin{alignat}{4}
& \psi_{1}(x)=c_{1}\hat{C}_{1}e^{ik_{n}^{+}x}+d_{1}\hat{C}_{2}e^{-ik_{n}^{-}x}, & & \\
& \psi_{2}(x)=c_{2}\hat{C}_{2}e^{-ik_{n}^{-}x}+d_{2}\hat{C}_{1}e^{ik_{n}^{+}x}, & & \\
& \psi_{3}(x)=\hat{C}_{3}e^{-ik_{n}^{+}x}+a_{3}\hat{C}_{2}e^{-ik_{n}^{-}x}+b_{3}\hat{C}_{1}e^{ik_{n}^{+}x}, & & \\
& \psi_{4}(x)=\hat{C}_{4}e^{ik_{n}^{-}x}+a_{4}\hat{C}_{1}e^{ik_{n}^{+}x}+b_{4}\hat{C}_{2}e^{-ik_{n}^{-}x}.
& &
\end{alignat}
\end{subequations}
and
\begin{subequations}
\begin{alignat}{4}
& \tilde{\psi}_{1}(x^{\prime})=\tilde{c}_{1}\hat{D}_{1}e^{ik_{n}^{+}x^{\prime}}+\tilde{d}_{1}\hat{D}_{2}e^{-ik_{n}^{-}x^{\prime}}, & & & & & & \\
& \tilde{\psi}_{2}(x^{\prime})=\tilde{c}_{2}\hat{D}_{2}e^{-ik_{n}^{-}x^{\prime}}+\tilde{d}_{2}\hat{D}_{1}e^{ik_{n}^{+}x^{\prime}}, & & & & & & \\
& \tilde{\psi}_{3}(x^{\prime})=\hat{D}_{3}e^{-ik_{n}^{+}x^{\prime}}+\tilde{a}_{3}\hat{D}_{2}e^{-ik_{n}^{-}x^{\prime}}+\tilde{b}_{3}\hat{D}_{1}e^{ik_{n}^{+}x^{\prime}}, & & & & & & \\
& \tilde{\psi}_{4}(x^{\prime})=\hat{D}_{4}e^{ik_{n}^{-}x^{\prime}}+\tilde{a}_{4}\hat{D}_{1}e^{ik_{n}^{+}x^{\prime}}+\tilde{b}_{4}\hat{D}_{2}e^{-ik_{n}^{-}x^{\prime}}, & & & & & &
\end{alignat}
\end{subequations}
with
\begin{equation}
k_{n}^{\pm}\equiv\sqrt{(\mu\pm E)^{2}/v_{f}^{2}}\cos\theta_{n}^{\pm}.
\end{equation}
The spinors are given by
\begin{subequations}
\begin{alignat}{4}
& \hat{C}_{1}(\hat{D}_{3}) & =& [i,\pm e^{\pm i\theta_{n}^{+}},0,0]^{T}, & & & & \\
& \hat{C}_{2}(\hat{D}_{4}) & =& [0,0,\pm 1,ie^{\pm i\theta_{n}^{-}}]^{T}, & & & & \\
& \hat{C}_{3}(\hat{D}_{1}) & =& [ie^{\pm i\theta_{n}^{+}},\mp 1,0,0]^{T}, & & & & \\
& \hat{C}_{4}(\hat{D}_{2}) & =& [0,0,\mp e^{\pm i\theta_{n}^{-}},i]^{T}. & & & &
\end{alignat}
\end{subequations}
Also, the conductance can be written as
\begin{equation}
\sigma_{s}=\sigma_{0}\int dk_{y}\,\mathrm{Re}\left[1+\frac{k_{n}^{-}}{k_{n}^{+}}\left\vert a_{3}\right\vert^{2}-\left\vert b_{3}\right\vert^{2}\right],
\end{equation}
where $\sigma_{0}$ is a constant parameter determined by the geometry of the junction.
\section{Introduction} A fibred surface, or simply a fibration, is a surjective proper morphism $f:X \to B$ from a non-singular projective surface $X$ onto a non-singular projective curve $B$ with connected fibers. The general fiber of $f$ is a smooth curve of genus $g$, which will be assumed to be at least $2$ throughout the paper. We always assume that $f$ is relatively minimal, i.e., there is no $(-1)$-curve contained in the fibers of $f$. Here a curve $C$ is called a $(-k)$-curve if it is a smooth rational curve with self-intersection $C^2=-k$. The fibration $f$ is called smooth if all its fibers are smooth, isotrivial if all its smooth fibers are isomorphic to each other, locally trivial if it is both smooth and isotrivial, and semi-stable if all its singular fibers are reduced nodal curves. Let $\omega_X$ be the canonical sheaf of $X$, and $\omega_{f}=\omega_X\otimes f^*\omega_B^{\vee}$ the relative canonical sheaf of $f$. The relative minimality of $f$ implies that $\omega_f$ is numerically effective (nef), i.e., $\omega_f\cdot C \geq 0$ for any curve $C\subseteq X$. Let $b=g(B)$, $p_g=h^0(X,\,\omega_X)$, $q=h^1(X,\,\omega_X)$, $\chi(\mathcal O_X)=p_g-q+1$, and let $\chit(X)$ be the topological Euler characteristic of $X$. Then we consider the following relative invariants of $f$: \begin{equation}\label{eqn-invarians-f} \left\{\begin{aligned} \chi_f&=\deg f_*\omega_{f}=\chi(\mathcal O_X)-(g-1)(b-1),\\ \omega_{f}^2&=\omega_X^2-8(g-1)(b-1),\\ e_f&=\chit(X)-4(g-1)(b-1). \end{aligned}\right. \end{equation} They satisfy the following properties: \begin{eqnarray} &&12\chi_f=\omega_f^2+e_f.\label{eqnnoether}\\ &&e_f\geq 0;~\text{moreover, $e_f=0$ iff $f$ is smooth}.\nonumber\\ &&\chi_f\geq 0;~\text{moreover, $\chi_f=0$ iff $f$ is locally trivial}.\qquad\nonumber \end{eqnarray} If $f$ is not locally trivial, the slope of $f$ is defined to be $$\lambda_f=\frac{\omega_f^2}{\chi_f}.$$ It follows immediately that $0< \lambda_f\leq 12$.
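For the reader's convenience, the relation \eqref{eqnnoether} follows from Noether's formula $12\chi(\mathcal O_X)=\omega_X^2+\chit(X)$ for surfaces together with the definitions in \eqref{eqn-invarians-f}:

```latex
\begin{align*}
12\chi_f &= 12\chi(\mathcal O_X)-12(g-1)(b-1)\\
         &= \bigl(\omega_X^2+\chit(X)\bigr)-8(g-1)(b-1)-4(g-1)(b-1)\\
         &= \omega_f^2+e_f.
\end{align*}
```

In particular, $e_f\geq 0$ then forces $\lambda_f\leq 12$, and $\omega_f^2>0$ for $f$ not locally trivial gives $\lambda_f>0$.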
The main known result is the slope inequality: \begin{theorem}[Cornalba-Harris-Xiao, \cite{cornalba-harris-88,xiao-87a}]\label{thm-chx} If $f$ is not locally trivial, then $$\lambda_f\geq \frac{4(g-1)}{g}.$$ \end{theorem} It is then natural to investigate the influence of various properties of the fibration on the behaviour of the slope. For instance, according to \cite{konno-99,barja-stoppino-08}, the Clifford index of the general fiber bears on the lower bound of the slope. We would like to pay attention to the following conjecture of Barja and Stoppino (cf. \cite[Conjecture\,1.1]{barja-stoppino-08}) about the influence of the relative irregularity $q_f:=q-b$ on the lower bound of the slope. \begin{conjecture}[Barja-Stoppino]\label{conjecturebs} If $f$ is not locally trivial and $q_f < g-1$, then \begin{equation}\label{conjectureequ} \lambda_f\geq \frac{4(g-1)}{g-q_f}. \end{equation} \end{conjecture} The first result in this direction is due to Xiao \cite[Theorem\,3]{xiao-87a}, who proved that if $q_f > 0$, then $\lambda_f \geq 4$, with equality possible only when $q_f = 1$. In \cite[Theorem\,1.3]{barja-stoppino-08}, Barja and Stoppino considered the influence of the Clifford index of the general fiber and of the relative irregularity on the lower bound of the slope simultaneously, and obtained a lower bound which is close to the conjectured bound when the Clifford index is large. We proved the above conjecture for hyperelliptic fibrations in \cite[Corollary\,1.5]{lu-zuo-13}. In this paper, we show the following \begin{theorem}\label{thm-main} Let $f$ be a fibration of genus $g\geq 2$ which is not locally trivial. {\rm(i)} \autoref{conjecturebs} holds if $q_f \leq g/9$. {\rm(ii)} There exist fibrations with $q_f=(g+1)/2$ violating \eqref{conjectureequ} whenever $g$ is odd.
\end{theorem} Pirola constructed in \cite{pirola-92} the first example which does not satisfy \eqref{conjectureequ}, see also \cite[Remark\,4.6]{barja-stoppino-08}. To our knowledge, the only known counterexamples to the bound \eqref{conjectureequ} belong to the extremal case $q_f=g-1$. According to \cite[Corollary\,4]{xiao-87a}, the genus of fibrations with $q_f=g-1$ is bounded from above ($g\leq 7$). In our construction, the genus has no upper bound. Counterexamples will be given in \autoref{sec-examples}. The main idea of the proof of \autoref{thm-main}(i) is a combination of Xiao's technique \cite{xiao-87a} and the second multiplication map (see \autoref{sec-multi-map}). It turns out that our conclusion follows directly from these two techniques if $f$ is not a double cover fibration. \begin{definition}\label{def-double-cover-fibration} The fibration $f$ is said to be a double (resp. triple) cover fibration of type $(g, \gamma)$ if there are morphisms $h':\,Y' \to B$ and $\pi:\,X \to Y'$ ($Y'$ may be singular) such that the general fiber of $h'$ is a genus-$\gamma$ curve, $\deg\pi=2$ (resp. $\deg\pi=3$) and $h'\circ\pi=f$. $$\xymatrix{ X \ar[rr]^-{\pi}\ar[dr]_-{f} && Y'\ar[dl]^-{h'}\\ &B& }$$ \end{definition} Double cover fibrations were studied earlier by many authors, see \cite{barja-01,barja-zucconi-01,cornalba-stoppino-08}, etc. We define certain local relative invariants for the double cover $\pi$ and show that the relative invariants \eqref{eqn-invarians-f} of $f$ can be expressed in terms of these local relative invariants and the relative invariants of the quotient fibration (cf. \autoref{thminvariants-double-fibration}). Based on these formulas, we complete the proof of \autoref{thm-main}(i). Our paper is organized as follows. In \autoref{sec-pre}, we recall Xiao's method in the study of the lower bound on the slope, and develop certain inequalities relying on the second multiplication map.
In \autoref{sec-slope}, we prove \autoref{thm-main}(i), based on a combination of these two techniques, for all fibrations which are not double cover fibrations. In \autoref{sec-double}, we treat the double cover fibrations. Meanwhile, we obtain various lower bounds on the slope of double cover fibrations in this section. Finally, in \autoref{sec-examples} we provide counterexamples to \eqref{conjectureequ}. \section{Preliminaries}\label{sec-pre} \subsection{Harder-Narasimhan filtration of the direct image sheaf} In this subsection, we briefly recall the Harder-Narasimhan filtration on the direct image sheaf $f_*\omega_f$ and Xiao's technique. Let $\mathcal E$ be a (non-zero) locally free sheaf over $B$. {\it The slope of $\mathcal E$} is defined to be the rational number $$\mu(\mathcal E)=\frac{\deg \mathcal E}{\rank \mathcal E}.$$ The sheaf $\mathcal E$ is said to be {\it stable} (resp. {\it semi-stable}), if for any coherent subsheaf $0\neq\mathcal E'\subsetneq\mathcal E$ we have $\mu(\mathcal E')<\mu(\mathcal E)$ (resp. $\mu(\mathcal E')\leq\mu(\mathcal E)$); it is said to be {\it positive} (resp. {\it semi-positive}), if for any quotient sheaf $\mathcal E \twoheadrightarrow \mathcal Q \neq 0$, one has $\deg \mathcal Q >0$ (resp. $\deg \mathcal Q \geq0$). It is well-known that the locally free sheaf $\mathcal E:=f_*\omega_f$ has a unique filtration, called the Harder-Narasimhan (H-N) filtration: \begin{equation}\label{eqnharder-nara} 0=\mathcal E_0\subset \mathcal E_1 \subset \cdots \subset \mathcal E_n=\mathcal E, \end{equation} such that: \begin{list}{} {\setlength{\labelwidth}{0.6cm} \setlength{\leftmargin}{0.7cm}} \item[(i)] the quotient $\mathcal E_i/\mathcal E_{i-1}$ is a locally free semi-stable sheaf for each $i$; \item[(ii)] the slopes are strictly decreasing: $\mu_i:=\mu(\mathcal E_i/\mathcal E_{i-1})>\mu_j:=\mu(\mathcal E_j/\mathcal E_{j-1})$ if $i<j$.
\end{list} Note that we have $\mu_n\geq 0$ due to the semi-positivity of $f_*\omega_f$, and \begin{equation}\label{eqn-degree-chi_f} \chi_f=\sum_{i=1}^{n}r_i(\mu_i-\mu_{i+1}), \quad\text{where~}r_i:=\rank \mathcal E_i\text{~and~}\mu_{n+1}:=0. \end{equation} \begin{definition}[\cite{xiao-87a}]\label{defofN(F)} Let $\mathcal E'$ be a locally free subsheaf of $f_*\omega_f$. The fixed and moving parts of $\mathcal E'$, denoted by $Z(\mathcal E')$ and $M(\mathcal E')$ respectively, are defined as follows. Let $\call$ be a sufficiently ample line bundle on $B$ such that the sheaf $\mathcal E'\otimes\call$ is generated by its global sections, and $\Lambda(\mathcal E')\subseteq |\omega_f\otimes f^*\call|$ be the linear subsystem corresponding to sections in $H^0(B,\,\mathcal E'\otimes\call)$. Then we define $Z(\mathcal E')$ to be the fixed part of $\Lambda(\mathcal E')$, and $M(\mathcal E')=\omega_f-Z(\mathcal E')$. Note that the definitions do not depend on the choice of $\call$. \end{definition} Let $d_i=M(\mathcal E_i)\cdot F$, where $\mathcal E_i\subseteq f_*\omega_f$ is any subsheaf in the H-N filtration of $f_*\omega_f$ in \eqref{eqnharder-nara}, and $F$ is a general fiber of $f$. The next proposition, which is due to Xiao, is crucial to the study of the slope of fibrations (cf. \cite{barja-stoppino-08,barja-zucconi-01,konno-93,xiao-87a}). 
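Before stating it, we record the short computation behind \eqref{eqn-degree-chi_f}: since $\chi_f=\deg f_*\omega_f$ and $\deg\big(\mathcal E_i/\mathcal E_{i-1}\big)=(r_i-r_{i-1})\mu_i$, a summation by parts (using $r_0=0$ and $\mu_{n+1}=0$) gives
$$
\chi_f=\sum_{i=1}^{n}(r_i-r_{i-1})\mu_i=\sum_{i=1}^{n}r_i\mu_i-\sum_{i=1}^{n}r_i\mu_{i+1}=\sum_{i=1}^{n}r_i(\mu_i-\mu_{i+1}).
$$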
\begin{proposition}[\cite{xiao-87a}]\label{prop-key-inequality-xiao} For any sequence of indices $1\leq i_1 <\cdots <i_{k}\leq n$, one has \begin{equation}\label{eqn-key-inequality-xiao} \omega_f^2\geq \sum_{j=1}^{k}\big(d_{i_j}+d_{i_{j+1}}\big)\big(\mu_{i_j}-\mu_{i_{j+1}}\big),\quad\text{where $i_{k+1}=n+1$.} \end{equation} \end{proposition} \subsection{Second multiplication map}\label{sec-multi-map} In this subsection, we derive some inequalities based on the second multiplication map $$\varrho:\,S^2(f_*\omega_f) \lra f_*\big(\omega_f^{\otimes 2}\big),$$ where $S^2(f_*\omega_f)$ is the second symmetric power of $f_*\omega_f$, and the map $\varrho$ is induced by the canonical multiplication on the general fibers of $f$. We assume in this subsection that $f$ is non-hyperelliptic. Under this assumption, it is known that $\varrho$ is generically surjective (by Max Noether's theorem). Thus by studying the image of the map $\varrho$, one may obtain a lower bound on $\deg f_*\big(\omega_f^{\otimes 2}\big)$, hence also a lower bound on $\omega_f^2$, due to the following simple fact: \begin{equation}\label{eqn-deg-f-omega^2} \deg f_*\big(\omega_f^{\otimes 2}\big)=\omega_f^2+\chi_f. \end{equation} For any locally free sheaf $\mathcal E$ over $B$, we define $$\mu_f(\mathcal E)=\max\{\deg \mathcal F~|~\mathcal F\in\Pic(B) \text{~and~} \mathcal E \otimes \mathcal F^{\vee} \text{~is semi-positive}\}.$$ For those $\mathcal E_i$'s occurring in the H-N filtration \eqref{eqnharder-nara} of $f_*\omega_f$, it is easy to see that $\mu_f(\mathcal E_i)=\mu_i$. In particular, $\mu_f(f_*\omega_f)=\mu_n$. \begin{proposition}\label{lemma-second-1} Let $\tilde \mu_1>\cdots >\tilde \mu_k\geq 0$ be any decreasing sequence of rational numbers, and $0<\tilde r_1<\cdots <\tilde r_k\leq 3g-3$ any increasing sequence of integers. Assume that there exists a subsheaf $\mathcal F_i \subseteq f_*\big(\omega_f^{\otimes 2}\big)$ such that $\mu_f(\mathcal F_i)\geq \tilde \mu_i$ and $\rank\mathcal F_i \geq \tilde r_i$ for each $i$.
Then \begin{equation}\label{eqn-compute-deg-f_*omega_f^2} \omega_f^2+\chi_f \geq \sum_{i=1}^k\tilde r_i(\tilde \mu_i-\tilde \mu_{i+1}), \qquad\text{where~}\tilde\mu_{k+1}=0. \end{equation} \end{proposition} \begin{proof} Consider the H-N filtration of $\mathcal E'=f_*\big(\omega_f^{\otimes 2}\big)$: $$0=\mathcal E_0'\subset \mathcal E_1' \subset \cdots \subset \mathcal E_{m}'=\mathcal E'.$$ Let $\mu_i'=\mu(\mathcal E_i'/\mathcal E_{i-1}')$ and $r_i'=\rank \mathcal E_i'$ for $1\leq i\leq m$, and $\mu_{m+1}'=0$. Due to the semi-positivity of $f_*\big(\omega_f^{\otimes 2}\big)$, one has $\mu_{m}'\geq0$. Hence by \eqref{eqn-deg-f-omega^2}, one has $$\omega_f^2+\chi_f=\deg f_*\big(\omega_f^{\otimes 2}\big)=\sum_{i=1}^{m}r_i'(\mu_i'-\mu_{i+1}').$$ If we view the $(r_i',\mu_i')$'s as points in the two-dimensional coordinate system, then it is easy to see that $\deg f_*\big(\omega_f^{\otimes 2}\big)$ is nothing but the area of the shaded region in the following picture. \begin{center} \setlength{\unitlength}{1.3mm} \begin{picture}(50,17) \put(10,2){\vector(1,0){40}} \put(15,0){\vector(0,1){17}} \put(15,2){\circle*{0.5}} \put(12,15){$\mu$} \put(47,0){$r$} \put(15,13){\line(1,0){4}} \put(19,13){\circle*{0.5}} \put(17.5,14){\tiny $(r_1',\mu_1')$} \put(19,13){\line(0,-1){2}} \put(19,11){\line(1,0){7}} \put(26,11){\circle*{0.5}} \put(25,12){\tiny $(r_2',\mu_2')$} \multiput(26,11)(0,-0.6){6}{\line(0,-1){0.3}} \multiput(26,8)(0.6,0){12}{\line(0,-1){0.3}} \put(33,8){\circle*{0.5}} \put(31,9.5){\tiny $(r_{m-1}',\mu_{m-1}')$} \put(33,8){\line(0,-1){2}} \put(33,6){\line(1,0){10}} \put(43,6){\circle*{0.5}} \put(43,7){\tiny $(r_m',\mu_m')$} \put(43,6){\line(0,-1){4}} \put(15,9.5){\line(1,1){3.5}} \put(15,7){\line(1,1){4}} \put(15,4.5){\line(1,1){6.5}} \put(15,2){\line(1,1){9}} \put(17.5,2){\line(1,1){8.5}} \put(20,2){\line(1,1){6}} \put(22.5,2){\line(1,1){6}} \put(25,2){\line(1,1){6}} \put(27.5,2){\line(1,1){5.5}} \put(30,2){\line(1,1){4}} \put(32.5,2){\line(1,1){4}} \put(35,2){\line(1,1){4}}
\put(37.5,2){\line(1,1){4}} \put(39.5,2){\line(1,1){3.5}} \end{picture} \end{center} If we also view the $(\tilde r_i, \tilde\mu_i)$'s as points in the above coordinate system, the assumptions of the proposition imply that every point $(\tilde r_i, \tilde\mu_i)$ lies in the shaded region. Hence \eqref{eqn-compute-deg-f_*omega_f^2} follows immediately. \end{proof} The next lemma provides subsheaves of $f_*\big(\omega_f^{\otimes 2}\big)$ with `large' slopes. \begin{lemma}\label{lemma-second-2} Consider the H-N filtration of $f_*\omega_f$ in \eqref{eqnharder-nara}. Let $\mu_i=\mu(\mathcal E_i/\mathcal E_{i-1})$, $r_i=\rank \mathcal E_i$, $\Lambda(\mathcal E_i)$ the linear subsystem defined in \autoref{defofN(F)}, and $g_0$ the geometric genus of the image of $F$ under the map defined by $\Lambda(\mathcal E_i)$, where $F$ is a general fiber of $f$. Then there exists a subsheaf $\mathcal F_{i}\subseteq f_*\big(\omega_f^{\otimes 2}\big)$ such that $$ \mu_f(\mathcal F_{i})\geq 2\mu_i, \qquad \rank\mathcal F_{i}\, \geq \min\big\{3(r_i-1),~2r_i+g_0-1\big\}\geq 2r_i-1. $$ In particular, if $\Lambda(\mathcal E_i)\big|_F$ defines a birational map for a general fiber $F$ of $f$, then there exists a subsheaf $\mathcal F_{i}\subseteq f_*\big(\omega_f^{\otimes 2}\big)$ such that $$ \mu_f(\mathcal F_{i})\geq 2\mu_i, \qquad \rank\mathcal F_{i}\, \geq 3(r_i-1). $$ \end{lemma} \begin{proof} Consider the composition map $$\varrho_{i}:\,\mathcal E_i \otimes \mathcal E_i \lra S^2\big(f_*\omega_f\big) \lra f_*\big(\omega_f^{\otimes 2}\big).$$ It is clear that $\mu_f \big(\im\,(\varrho_{i})\big)\geq 2\mu_f\big(\mathcal E_i\big)= 2\mu_i$. Hence it suffices to show that \begin{equation}\label{eqn-pf-pre-large-1} \rank \big(\im\,(\varrho_{i})\big)\geq \min\big\{3(r_i-1),~2r_i+g_0-1\big\}.
\end{equation} Let $Z_i$ be the normalization of the image of $F$ under the map defined by $\Lambda(\mathcal E_i)$, $D_i=M(\mathcal E_i)|_F$, and $V_i\subseteq H^0(F,D_i)$ the subspace corresponding to $\Lambda(\mathcal E_i)\big|_F$. Then by assumption, there exist a divisor $D_i'$ on $Z_i$ and a subspace $V_i'\subseteq H^0(Z_i,D_i')$ such that $D_i=\psi_i^*D_i'$ and $V_i=\psi_i^*V_i'$, where $\psi_i:\,F \to Z_i$ is the corresponding map. Moreover, $V_i'$ is base-point-free and defines a birational map on $Z_i$. Consider the natural maps $$\rho_{i}:~V_i\otimes V_i \lra H^0(F,2D_i),\qquad\rho_{i}':~V_i'\otimes V_i' \lra H^0(Z_i,2D_i').$$ Then $\dim V_i=\dim V_i'=r_i$ and \begin{equation}\label{eqn-pf-pre-large-3} \dim \im\,(\rho_{i}')=\dim \im\,(\rho_{i})= \rank \big(\im\,(\varrho_{i})\big). \end{equation} Therefore \eqref{eqn-pf-pre-large-1} follows from the next lemma. The proof is complete. \end{proof} \begin{lemma} Let $D\in \Pic(Z)$ be an effective divisor on a smooth curve $Z$ of genus $g_0$, $V\subseteq H^0(Z,D)$ be a subspace with $\dim V=r$, and $$\rho:~V \otimes V \lra H^0(Z,2D)$$ be the natural multiplication map. Assume that $V$ is base-point-free and induces a birational map $\phi_V$ on $Z$. Then \begin{equation}\label{eqn-pre-1} \dim \big(\im(\rho)\big)\geq \min\big\{3(r-1),~2r+g_0-1\big\}. \end{equation} \end{lemma} \begin{proof} Since $\phi_V$ is birational, according to the general position theorem (cf. \cite[\S\,III.1]{acgh-85}), there exist $r$ points $\{p_1,\cdots,p_{r}\}\subseteq Z$ such that any $r-1$ of them impose linearly independent conditions on the vector space $V$. Hence there exist $\{v_1,\cdots,v_{r}\}\subseteq V$ such that $$v_i(p_i)\neq 0,\quad\text{and}\quad v_i(p_j)=0,~\qquad\forall\, 1\leq j \leq r \text{~and~} j\neq i.$$ Since $\{p_1,\cdots,p_{r}\}\subseteq Z$ are in general position, one has $$\dim H^0\big(Z,~p_3+\cdots+p_r\big)= \left\{\begin{aligned} &1,&\quad&\text{if~}r\leq g_0+1;\\ &r-1-g_0, &&\text{if~}r> g_0+1.
\end{aligned}\right.$$ Let $V_{12}\subseteq V$ be the subspace generated by $v_1$ and $v_2$. Consider the subspace $W \subseteq \im(\rho)$ generated by $v_3^2,\cdots, v_{r}^2$, and the restriction $$\varphi:~V_{12}\,\otimes V \lra H^0(Z,2D)$$ of the multiplication map $\rho$. According to the base-point-free pencil trick (cf. \cite[\S\,III.3]{acgh-85}), one has $$\dim \im(\varphi) = 2r-\dim H^0\big(Z,~p_3+\cdots+p_r\big)= \left\{\begin{aligned} &2r-1,&\quad&\text{if~}r\leq g_0+1;\\ &r+g_0+1, &&\text{if~}r> g_0+1. \end{aligned}\right.$$ By evaluating at the points $\{p_3,\cdots,p_{r}\}$, we see that $$\dim W=r-2,\qquad W \cap \im(\varphi)=0.$$ Hence \[\dim \big(\im(\rho)\big)\geq \dim W+\dim \big(\im(\varphi)\big)= \left\{\begin{aligned} &3r-3,&\quad&\text{if~}r\leq g_0+1;\\ &2r+g_0-1, &&\text{if~}r> g_0+1. \end{aligned}\right.\] Therefore \eqref{eqn-pre-1} follows. \end{proof} \section{Slope of fibrations}\label{sec-slope} In this section, we prove certain lower bounds on the slope of fibrations, based on the combination of Xiao's method and the second multiplication map. To illustrate this new idea, we first prove the following lower bound for non-triple and non-double cover fibrations. \begin{theorem}\label{thm-slope-1} Let $f:\,X \to B$ be a fibration of genus $g\geq 2$, which is not locally trivial.\\[0.1cm] (i). If $f$ is neither a triple nor a double cover fibration, then \begin{equation}\label{eqn-slope-non-td-cover} \lambda_f > \frac{14(g-1)}{3(g+1)}. \end{equation} (ii). If $f$ is not a double cover fibration, then \begin{equation}\label{eqn-slope-non-double-cover} \lambda_f > \frac{18(g-1)}{4g+3}. \end{equation} \end{theorem} \begin{proof} (i). Consider the H-N filtration \eqref{eqnharder-nara} of $f_*\omega_f$. Let $\Lambda(\mathcal E_i)$ and $M(\mathcal E_i)$ be as defined in \autoref{defofN(F)}. By assumption, $f$ is not a double cover fibration. In particular, it is non-hyperelliptic, which implies that $\Lambda(\mathcal E_n)|_F$ defines a birational map for a general fiber $F$.
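Indeed, for a general fiber $F$, the restriction of $\Lambda(\mathcal E_n)$ to $F$ is the complete canonical system $|K_F|$, so that
$$
d_n=M(\mathcal E_n)\cdot F=\deg K_F=2g-2,
$$
and the canonical map of a non-hyperelliptic curve is an embedding, hence in particular birational.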
Define \begin{equation}\label{eqn-def-of-l} l=\min\left\{i~\big|~\text{$\Lambda(\mathcal E_i)|_F$ defines a birational map for a general fiber $F$ of $f$}\right\}. \end{equation} According to \autoref{prop-key-inequality-xiao}, one gets \begin{equation}\label{sect3-l2} \begin{aligned} \omega_f^2 &\geq \sum_{i=1}^{n}\big(d_{i}+d_{i+1}\big)\big(\mu_{i}-\mu_{i+1}\big). \end{aligned} \end{equation} By \autoref{lemma-second-1} and \autoref{lemma-second-2} with the decreasing sequence $$\big\{2\mu_1,\cdots,\,2\mu_{l-1},\,2\mu_l,\cdots,\,2\mu_n\big\},$$ and the increasing sequence $$\big\{2r_1-1,\cdots,\,2r_{l-1}-1,\,3(r_l-1),\cdots,\,3(r_n-1)\big\},$$ we obtain \begin{equation}\label{sect3-l3} \omega_f^2 \geq \sum_{i=1}^{l-1}\big(3r_i-2\big)\big(\mu_{i}-\mu_{i+1}\big)+\sum_{i=l}^{n}\big(5r_i-6\big)\big(\mu_{i}-\mu_{i+1}\big). \end{equation} Note that \eqref{eqn-degree-chi_f} was used above. Combining \eqref{sect3-l3} with \eqref{sect3-l2}, we obtain \begin{equation}\label{sect3-l5} \begin{aligned} \omega_f^2 &~\geq~ \sum_{i=1}^{l-1}\bigg(\frac{2}{3}(3r_i-2)+\frac13(d_i+d_{i+1})\bigg)\big(\mu_{i}-\mu_{i+1}\big)\\ &\quad+\sum_{i=l}^{n}\bigg(\frac{2}{3}(5r_i-6)+\frac13(d_i+d_{i+1})\bigg)\big(\mu_{i}-\mu_{i+1}\big). \end{aligned} \end{equation} As $f$ is neither a triple nor a double cover fibration, by Castelnuovo's bound (cf. \cite[\S\,III.2]{acgh-85}), we have \begin{equation}\label{eqn-3-2} \left\{\begin{aligned} d_i&\geq 4(r_i-1), &~& \forall~i<l;\\ d_i&\geq \frac{g}{m_i}+\frac{(m_i+1)s_i}{2} -m_i, && \forall~i\geq l, \end{aligned}\right. \end{equation} where $s_i=h^0\big(F,\,M(\mathcal E_i)|_F\big)\geq r_i$ and $m_i=\left[\frac{d_i-1}{s_i-2}\right]$.
Hence it is easy to check that $$\begin{aligned} \frac{2}{3}(3r_i-2)+\frac13(d_i+d_{i+1}) &\geq \frac{14}{3}r_i-\frac{8}{3},&& \forall~i<l;\\[0.1cm] \frac{2}{3}(5r_i-6)+\frac13(d_i+d_{i+1})&\geq \frac{14}{3}r_i-\frac{11}{3}, && \forall~l\leq i <n~\&~r_i\neq g-1; \\[0.1cm] \frac{2}{3}(5r_{n-1}-6)+\frac13(d_{n-1}+d_{n})&= \frac{14}{3}r_{n-1}-\frac{13}{3}, && \text{if~} r_{n-1}=g-1; \\[0.1cm] \frac{2}{3}(5r_n-6)+\frac13(d_n+d_{n+1})&=\frac{14}{3}g-\frac{16}{3}.&& \end{aligned}$$ Combining the above inequalities with \eqref{sect3-l5}, we obtain \begin{equation}\label{sect3-l6} \omega_f^2 \geq \left\{ \begin{aligned} &\frac{14}{3}\chi_f-\frac{11}{3}\mu_1-\frac23\mu_{n-1}-\mu_n,&\quad&\text{if~}r_{n-1}=g-1;\\ &\frac{14}{3}\chi_f-\frac{11}{3}\mu_1-\frac53\mu_n,&\quad&\text{if~}r_{n-1}\neq g-1. \end{aligned}\right. \end{equation} We use the simple fact that $\mu_1\geq \mu_l$ above. On the other hand, by \autoref{prop-key-inequality-xiao} one obtains \begin{equation}\label{sect3-l7} \left\{\begin{aligned} \omega_f^2 &~\geq~ (d_1+d_n)(\mu_1-\mu_n)+(d_n+d_{n+1})\mu_n\\ &~\geq~ (2g-2)(\mu_1-\mu_n)+(4g-4)\mu_n=(2g-2)(\mu_1+\mu_n);\\[0.1cm] \omega_f^2 &~\geq~ (2g-3)\mu_1+(2g-2)\mu_{n-1},\qquad \text{if~}r_{n-1}=g-1. \end{aligned}\right. \end{equation} Hence \eqref{eqn-slope-non-td-cover} follows immediately from \eqref{sect3-l6} and \eqref{sect3-l7}. (ii). First of all, according to \cite{konno-93}, we may assume that $g\geq 6$. From \eqref{sect3-l2} and \eqref{sect3-l3} it follows that \begin{equation}\label{sect3-l8} \begin{aligned} \omega_f^2 &~\geq~ \sum_{i=1}^{l-1}\bigg(\frac{1}{2}(3r_i-2)+\frac12(d_i+d_{i+1})\bigg)\big(\mu_{i}-\mu_{i+1}\big)\\ &\quad+\sum_{i=l}^{n}\bigg(\frac{1}{2}(5r_i-6)+\frac12(d_i+d_{i+1})\bigg)\big(\mu_{i}-\mu_{i+1}\big). 
\end{aligned} \end{equation} We claim that \begin{equation}\label{eqn-3-1} \left\{\begin{aligned} \frac{1}{2}(3r_i-2)+\frac12(d_i+d_{i+1}) &> \frac{9}{2}r_i-2,&\quad& \forall~i<l;\\[0.1cm] \frac{1}{2}(5r_i-6)+\frac12(d_i+d_{i+1})&\geq \frac{9}{2}r_i-\frac72, && \forall~l\leq i <n; \\[0.1cm] \frac{1}{2}(5r_n-6)+\frac12(d_n+d_{n+1})&=\frac{9}{2}g-5.&& \end{aligned}\right. \end{equation} Assume that \eqref{eqn-3-1} is true. Then \eqref{eqn-slope-non-double-cover} follows easily from \eqref{eqn-3-1} together with \eqref{sect3-l7} and \eqref{sect3-l8}. Hence it suffices to show \eqref{eqn-3-1}. If $\Lambda(\mathcal E_{i})\big|_F:\,F \to \Gamma_i$ is a finite map of degree at least $4$ for any $i<l$, then we have \eqref{eqn-3-2}, from which \eqref{eqn-3-1} follows immediately. Let \begin{equation}\label{eqn-def-tilde-k} \tilde l=\min\left\{i~\big|~\text{$\Lambda(\mathcal E_i)|_F$ is a finite map of degree $3$ for a general fiber $F$}\right\}. \end{equation} By assumption, such an $\tilde l$ exists. Let $g_{\tilde l}$ be the genus of the image of $F$ under the map defined by $\Lambda(\mathcal E_{\tilde l})|_F$. Then we may assume that $g_{\tilde l}\geq 1$: otherwise, $f$ is a trigonal fibration, in which case \eqref{eqn-slope-non-double-cover} follows from the result of \cite{konno-96}. Hence by Castelnuovo's bound (cf. \cite[\S\,III.2]{acgh-85}), we have \begin{equation*} \left\{\begin{aligned} d_i&\geq 6(r_i-1), &~& \forall~i<\tilde l;\\ d_i&\geq 3r_i, && \forall~\tilde l\leq i<l;\\ d_i&\geq \frac{g}{m_i}+\frac{(m_i+1)s_i}{2} -m_i, && \forall~i\geq l, \end{aligned}\right. \end{equation*} where $s_i=h^0\big(F,\,M(\mathcal E_i)|_F\big)\geq r_i$ and $m_i=\left[\frac{d_i-1}{s_i-2}\right]$. Note that the first inequality above follows from the fact that the map defined by $\Lambda(\mathcal E_{i})\big|_F$ factors through that defined by $\Lambda(\mathcal E_{j})\big|_F$ for $i<j$. By a direct computation, we obtain \eqref{eqn-3-1} from the above inequalities. The proof is complete.
\end{proof} \begin{remark} Since the double covers are well-studied in surface theory (cf. \cite{bhpv-04}), the above theorem turns out to be useful if one can construct a fibration with $\lambda_f<\frac{18(g-1)}{4g+3}$. For instance, in \cite{lu-zuo-15b} we apply the above theorem to study the geography of irregular surfaces of general type and maximal Albanese dimension. \end{remark} For fibrations with a double cover fibration structure, we also have the following conditional result. \begin{proposition} Let $f:\,X \to B$ be a fibration of genus $g\geq 2$, which is not locally trivial. If $\gamma \geq g/3$ for any double cover fibration structure of type $(g,\gamma)$ on $f$, then \begin{equation}\label{eqn-slope-d-c} \lambda_f > \frac{18(g-1)}{4g+3}. \end{equation} \end{proposition} \begin{proof} Let \begin{equation}\label{eqn-def-k'} l'=\min\left\{i~\big|~\text{$\Lambda(\mathcal E_i)|_F$ is a finite map of degree $2$ for a general fiber $F$}\right\}. \end{equation} According to the proof of \autoref{thm-slope-1}, we may assume that such an $l'$ exists. Note that the map defined by $\Lambda(\mathcal E_{i})\big|_F$ factors through that defined by $\Lambda(\mathcal E_{j})\big|_F$ for $i<j$. Hence \begin{equation}\label{sect3-l9} \deg\big(\Lambda(\mathcal E_i)|_F\big)=2,\quad\forall~l'\leq i<l;\qquad \deg\big(\Lambda(\mathcal E_i)|_F\big)\geq 4,\quad \forall~i<l', \end{equation} where $l$ is defined in \eqref{eqn-def-of-l}.
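In more detail, if $\Gamma_i$ denotes the image of $F$ under the map defined by $\Lambda(\mathcal E_i)|_F$, then for $i<j$ the factorization $F\to \Gamma_j \to \Gamma_i$ gives
$$
\deg\big(\Lambda(\mathcal E_i)|_F\big)=\deg\big(\Lambda(\mathcal E_j)|_F\big)\cdot \deg\big(\Gamma_j\to\Gamma_i\big).
$$
Hence for $l'\leq i<l$, the degree $\deg\big(\Lambda(\mathcal E_i)|_F\big)$ divides $\deg\big(\Lambda(\mathcal E_{l'})|_F\big)=2$ and is not $1$ by the minimality of $l$, so it equals $2$; and for $i<l'$, it is an even number different from $2$ by the minimality of $l'$, so it is at least $4$.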
Let $\gamma$ be the geometric genus of the image of $F$ under $\Lambda(\mathcal E_{l'})|_F$, and $$\theta_i=\min\big\{3r_i-3,\,2r_i+\gamma-1\big\},\qquad \forall~l'\leq i \leq l-1.$$ By \autoref{lemma-second-1} and \autoref{lemma-second-2} with the decreasing sequence $$\big\{2\mu_1,\cdots,\,2\mu_n\big\},$$ and the increasing sequence $$\big\{2r_1-1,\cdots,\,2r_{l'-1}-1,\,\theta_{l'},\cdots,\,\theta_{l-1},\,3(r_l-1),\cdots,\,3(r_n-1)\big\},$$ we obtain \begin{equation}\label{sect3-l3'} \begin{aligned} \omega_f^2 ~\geq&~~\sum_{i=1}^{l'-1}\big(3r_i-2\big)\big(\mu_{i}-\mu_{i+1}\big) +\sum_{i=l'}^{l-1}\big(2\theta_i-r_i\big)\big(\mu_{i}-\mu_{i+1}\big)\\ &\hspace{-2mm}+\sum_{i=l}^{n}\big(5r_i-6\big)\big(\mu_{i}-\mu_{i+1}\big). \end{aligned} \end{equation} Combining \eqref{sect3-l3'} with \eqref{sect3-l2}, we obtain \begin{equation}\label{sect3-l5'} \begin{aligned} \omega_f^2 ~\geq&~ \sum_{i=1}^{l'-1}\bigg(\frac{1}{2}(3r_i-2)+\frac12(d_i+d_{i+1})\bigg)\big(\mu_{i}-\mu_{i+1}\big)\\ &\hspace{-2mm}+\sum_{i=l'}^{l-1}\bigg(\frac{1}{2}(2\theta_i-r_i)+\frac12(d_i+d_{i+1})\bigg)\big(\mu_{i}-\mu_{i+1}\big)\\ &\hspace{-2mm}+\sum_{i=l}^{n}\bigg(\frac{1}{2}(5r_i-6)+\frac12(d_i+d_{i+1})\bigg)\big(\mu_{i}-\mu_{i+1}\big). \end{aligned} \end{equation} By \eqref{sect3-l9} one has $$d_i\geq 4(r_i-1), \quad \forall\,i<l';\qquad d_i\geq 2\min\big\{2(r_i-1),~r_i+\gamma-1\big\},\quad \forall\, l'\leq i <l.$$ Combining this with the assumption $\gamma \geq g/3$ and Castelnuovo's bound (cf. \cite[\S\,III.2]{acgh-85}), we have \begin{equation}\label{eqn-3-3} \left\{\begin{aligned} \frac{1}{2}(3r_i-2)+\frac12(d_i+d_{i+1})&~\geq~ \frac92r_i-2, &~& \forall~i<l';\\ \frac{1}{2}(2\theta_i-r_i)+\frac12(d_i+d_{i+1})&~\geq~ \frac92r_i-2, && \forall~l'\leq i< l;\\ \frac{1}{2}(5r_i-6)+\frac12(d_i+d_{i+1})&~\geq~ \frac{9}{2}r_i-\frac72, && \forall~l\leq i <n; \\ \frac{1}{2}(5r_n-6)+\frac12(d_n+d_{n+1})&~=~\frac{9}{2}g-5.&& \end{aligned}\right.
\end{equation} Hence \eqref{eqn-slope-d-c} follows from the above inequalities together with \eqref{sect3-l5'} and \eqref{sect3-l7}. \end{proof} \begin{proof}[Proof of \autoref{thm-main}(i)] According to \autoref{thm-chx} and \cite[Theorem\,3]{xiao-87a}, one may assume that $q_f\geq 2$, which implies that $g\geq 9q_f\geq 18$ by assumption. Consider first the case when $f$ is a double cover fibration of type $(g,\gamma)$ with $g\geq 3\gamma+1$: \begin{enumerate} \item[$\bullet$] If $g\geq 4\gamma+1$, then according to \autoref{thm-double-1} we may assume that $g>9\gamma$ and $q_f>\gamma$. Hence $f$ is an irregular double cover \big(cf. \eqref{eqn-def-irreg}\big), and $g\geq 6\gamma+7$ since $g\geq \max\{18,\,9\gamma+1\}$. Therefore \eqref{conjectureequ} follows from \eqref{eqn-slope>6}.\vspace{1mm} \item[$\bullet$] If $4\gamma+1>g \geq 3\gamma+1$, then \eqref{conjectureequ} follows from \eqref{eqn-4.3-2}, since in this case $$\frac{4(g-1)(3g+1-4\gamma)}{(g+1-2\gamma)^2+4\gamma(2g+1-3\gamma)}>\frac{9(g-1)}{2g}\geq \frac{4(g-1)}{g-q_f}.$$ \end{enumerate} Hence we may assume that $\gamma \geq g/3$ for any double cover fibration structure of type $(g,\gamma)$ on $f$. Consider the H-N filtration of $f_*\omega_f$ as in \eqref{eqnharder-nara}. According to \cite[Theorem\,3.1]{fujita-78}, one sees that $\mu_n=0$ and $r_{n-1}\leq g-q_f\leq g-2$. Let $l$ be defined as in \eqref{eqn-def-of-l}. Then according to Castelnuovo's bound (cf. \cite[\S\,III.2]{acgh-85}), we have $$\begin{aligned} \frac{1}{2}(5r_i-6)+\frac12(d_i+d_{i+1})&~\geq~ \min\Big\{\frac92r_i-2,\,\frac{8r_i+g-8}{2}\Big\},\quad\forall~l\leq i<n-1,\\ \frac{1}{2}(5r_{n-1}-6)+\frac12(d_{n-1}+d_n)&~\geq~ \min\Big\{\frac92r_{n-1}-2,\,\frac{13r_{n-1}+5g-20}{4}\Big\}. \end{aligned}$$ Hence if $r_{n-1}\leq g-3$, then \begin{equation}\label{eqn-3-4} \frac{1}{2}(5r_i-6)+\frac12(d_i+d_{i+1}) \geq \frac92r_i-2,\qquad \forall~l\leq i\leq n-1.
\end{equation} Therefore, under the assumption that $r_{n-1}\leq g-3$, we obtain from \eqref{sect3-l8}, \eqref{eqn-3-1}, \eqref{sect3-l5'} and \eqref{eqn-3-3} that $$\omega_f^2\geq \frac92\chi_f-2\mu_1.$$ Combining this with \eqref{sect3-l7}, we obtain that \begin{equation}\label{eqn-3-5} \lambda_f\geq \frac{9(g-1)}{2g}\geq \frac{4(g-1)}{g-q_f}. \end{equation} Finally, we consider the case when $r_{n-1}=g-2$. By assumption, $f$ is not hyperelliptic, hence $d_{n-1}\geq 2g-4=2r_{n-1}$. Thus one checks that we still have \eqref{eqn-3-4}. Therefore, similarly to the above, \eqref{eqn-3-5} holds. The proof is complete. \end{proof} \begin{remarks} (i) One sees from the above proof that the condition $q_f\leq g/9$ might be relaxed a little, but the proof would require a much more complicated computation. (ii) We only deal with the case when $q_f$ is small with respect to $g$. If $q_f$ is large, we refer to \cite[Theorem\,3.2]{barja-zucconi-01} for the case when $f$ is not a double cover fibration. \end{remarks} \section{Double cover fibrations}\label{sec-double} In this section, we treat the double cover fibrations. So we always assume in this section that $f:\,X\to B$ is a locally non-trivial double cover fibration of type $(g, \gamma)$ as in \autoref{def-double-cover-fibration}. Since the case where $\gamma=0$ has been studied in \cite{xiao-91,lu-zuo-13} (see also \cite{cornalba-harris-88,lu-zuo-14} for the semi-stable case), $\gamma$ is assumed to be positive in this section unless explicitly stated otherwise. \subsection{Invariants of double cover fibrations}\label{section-invarians-double-cover} In this subsection, we first define the local invariants of the induced double cover, and then show in \autoref{thminvariants-double-fibration} that the relative invariants of $f$ can be expressed in terms of these local invariants and the relative invariants of the quotient fibration. The degree-two morphism $\pi$ in \autoref{def-double-cover-fibration} induces an involution $\sigma$ on $X$.
Let $\vartheta:\,\wt X \to X$ be the composition of the blowing-ups at the isolated fixed points of $\sigma$, and $\tilde \sigma$ the induced involution on $\wt X$. Then the quotient $\wt Y:=\wt X /\langle\tilde\sigma\rangle$ is a smooth surface with a natural fibration $\wt h:\,\wt Y \to B$ of genus $\gamma$, which may not be relatively minimal. Let $h:\,Y \to B$ be its relatively minimal model. $$ \xymatrix{ X \ar[drr]_-{f} & \wt X \ar[l]_-{\vartheta}\ar[dr]^-{\tilde f}\ar[rr]^-{\wt\pi}&&\wt Y \ar[dl]_-{\tilde h}\ar[r]^-{\psi} &Y \ar[dll]^-{h}\\ &&B&& }$$ The double cover $\tilde \pi$ induces a double cover $\pi_0:\,X_0 \to Y_0:=Y$, which is determined by the relation $\mathcal O_{Y}(R) \equiv L^{\otimes 2}$ with $R=\psi(\wt R)$ and $\wt R$ being the branch locus of $\tilde\pi$. According to the Hurwitz formula, one has \begin{equation}\label{eqnhurwitz-formula} R\cdot \Gamma=2g+2-4\gamma\geq 0,\qquad ~\text{for any fiber~$\Gamma$ of $h$}. \end{equation} The surface $X_0$ is normal but not necessarily smooth. Moreover, $\tilde\pi$ is in fact the canonical resolution of $\pi_0$ (cf.
\cite[\S\,III.7]{bhpv-04}): \begin{figure}[H] $$\mbox{} \xymatrix{ \wt X\ar@{=}[r] & X_{t} \ar[r]^-{\phi_t}\ar[d]_-{\tilde \pi=\pi_t}& X_{t-1}\ar[r]^-{\phi_{t-1}}\ar[d]^-{\pi_{t-1}}& \cdots \ar[r]^-{\phi_2} & X_1\ar[r]^-{\phi_1}\ar[d]_-{\pi_1} & X_0 \ar[d]^-{\pi_0}\\ \wt Y\ar@{=}[r] & Y_{t} \ar[r]^-{\psi_t}& Y_{t-1}\ar[r]^-{\psi_{t-1}}& \cdots \ar[r]^-{\psi_2} & Y_1\ar[r]^-{\psi_1} & Y_0 \ar@{=}[r] & Y} $$ \caption{Canonical resolution.} \label{figure-1} \end{figure} \noindent Here the $\psi_i$'s are successive blowing-ups resolving the singularities of $R$, and $\pi_{i}:\,X_i \to Y_i$ is the double cover determined by $\mathcal O_{Y_i}(R_i) \equiv L_i^{\otimes 2}$ with $$R_i=\psi_i^*(R_{i-1})-2[m_{i-1}/2]\, \mathcal E_i,\qquad L_i=\psi_i^*(L_{i-1})\otimes \mathcal O_{Y_i}\left(-[m_{i-1}/2]\,\mathcal E_i\right),$$ where $\mathcal E_i$ is the exceptional divisor of $\psi_i$, $m_{i-1}$ is the multiplicity of the singular point $y_{i-1}$ in $R_{i-1}$ (also called the multiplicity of the blowing-up $\psi_i$), $[~]$ stands for the integral part, $R_0=R$ and $L_0=L$. A singularity $y_j \in R_{j}\subseteq Y_{j}$ is said to be {\it infinitely near to} $y_{i}\in R_{i}\subseteq Y_{i}$ ($j>i$), if $\psi_{i+1}\circ\cdots\circ\psi_j(y_j)=y_{i}\,.$ We remark that the order of these blowing-ups contained in $\psi$ is not unique. If $y_{i-1}$ is a singular point of $R_{i-1}$ of odd multiplicity $2k+1$ ($k\geq 1$) and there is a unique singular point $y$ of $R_i$ on the exceptional curve $\mathcal E_i$ of multiplicity $2k+2$, then we always assume that $\psi_{i+1}: Y_{i+1} \to Y_{i}$ is the blowing-up at $y_i=y$. We call such a pair $(y_{i-1},y_{i})$ a {\it singularity of $R$ of type $(2k+1 \to 2k+1)$}, and $y_{i-1}$ (resp. $y_i$) the first (resp. second) component.
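Note that the recursion preserves the defining relation of a double cover: writing the relation additively, $R_{i-1}\equiv 2L_{i-1}$ gives
$$
R_i=\psi_i^*R_{i-1}-2\left[\frac{m_{i-1}}{2}\right]\mathcal E_i\equiv 2\left(\psi_i^*L_{i-1}-\left[\frac{m_{i-1}}{2}\right]\mathcal E_i\right)=2L_i,
$$
so that each $\pi_i:\,X_i\to Y_i$ is again a double cover branched along $R_i$.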
\begin{definition}\label{definitionofs_i} For any singular fiber $F$ of $f$ and $j\geq 2$, we define \begin{list}{} {\setlength{\labelwidth}{0.3cm} \setlength{\leftmargin}{0.4cm}} \item[$\bullet$] if $j$ is odd, $s_j(F)$ equals the number of singularities of $R$ of type $(j\to j)$ over the image $f(F)$; \item[$\bullet$] if $j$ is even, $s_j(F)$ equals the number of singularities of multiplicity $j$ or $j+1$ of $R$ over the image $f(F)$, belonging neither to the second component of a type $(j-1 \to j-1)$ singularity nor to the first component of a type $(j+1 \to j+1)$ singularity. \end{list} Let $\omega_{\tilde h}=\omega_{\wt Y}\otimes\tilde h^*\omega_{B}^{-1}$ and $\wt R'=\wt R \setminus\wt V$, where $\wt V$ is the union of vertical isolated $(-2)$-curves in $\wt R$. Here a curve $C\subseteq \wt R$ is said to be {\it isolated} in $\wt R$, if there is no other curve $C'\subseteq \wt R$ such that $C\cap C' \neq \emptyset$. We define $$\begin{aligned} s_2&:=\left(\omega_{\tilde h}+\wt R'\right)\cdot \wt R'+2\sum_{F \text{~is singular}} s_2(F),\\ s_j&:=\sum_{F \text{~is singular}} s_j(F),\qquad\qquad \forall~ j\geq 3. \end{aligned}$$ Note that the contraction $\psi$ is unique since $\gamma>0$ (although the order of these blowing-ups contained in $\psi$ is not unique). Hence the invariants $s_j$ are well-defined. By definition, $s_j$ is non-negative for $j\geq 3$, but it is not clear whether $s_2$ is non-negative or not. \end{definition} \begin{lemma}\label{numberofcontraction} Let $F$ be a singular fiber of the fibration $f$, and $\wt F$ (resp. $\wt \Gamma$, resp. $\Gamma$) the corresponding fiber in $\wt X$ (resp. $\wt Y$, resp. $Y$). Then the $(-1)$-curves in $\wt F$ are in one-to-one correspondence with the isolated $(-2)$-curves of $\wt R$ which are also contained in $\wt \Gamma$.
And the number of these $(-1)$-curves is equal to $$n_2(F)+\sum_{k\geq 1} s_{2k+1}(F),$$ where $n_2(F)$ is the number of isolated $(-2)$-curves of $R$, which are also contained in $\Gamma$. \end{lemma} \begin{proof} Note that the $(-1)$-curves in $\wt F$ are exactly the inverse image of the isolated fixed points of $\sigma$ on $F$, hence fixed by $\tilde\sigma$. It follows that these $(-1)$-curves in $\wt F$ are in one-to-one correspondence with the isolated $(-2)$-curves of $\wt R$, which are also contained in $\wt \Gamma$. Let $E$ be such a $(-2)$-curve of $\wt R$. Then it is the strict inverse image of either an exceptional curve $\mathcal E_i$ or an irreducible curve $C$ on $\Gamma$. In the first case, it is easy to see that $y_{i-1}=\psi_i(\mathcal E_i)$ is a singularity of $R_{i-1}$ with odd multiplicity $2k+1$, and that $R_i$ has a unique singularity on $\mathcal E_i$ with multiplicity $2k+2$. Equivalently, it corresponds to a singularity of $R$ of type $(2k+1 \to 2k+1)$. In the latter case, let $$E=\psi^*(C)-\sum a_j\mathcal E_j,\quad \text{with~}a_j\geq 0.$$ Then $$-2=E^2=C^2-\sum a_j^2,\qquad 0=\omega_{\wt Y}\cdot E=\omega_{Y}\cdot C+\sum a_j.$$ On the other hand, one has $C^2\leq 0$ and $C^2=0$ if and only if $\Gamma=nC$ for some $n$, since $C\subseteq \Gamma$. Hence it follows that $C^2\neq 0$ since $\gamma>0$, and that $C^2\neq -1$; otherwise by construction $C$ must be smooth and hence is a $(-1)$-curve, which is impossible due to the relative minimality of $h$. Therefore, $C$ must be an isolated $(-2)$-curve of $R$, which is also contained in $\Gamma$. Conversely, it is clear that each singularity of $R$ of type $(2k+1\to 2k+1)$ creates an isolated $(-2)$-curve contained in $\wt R$, and that the inverse image of each isolated $(-2)$-curve in $R$ is still an isolated $(-2)$-curve in $\wt R$. The proof is complete.
\end{proof} \begin{theorem}\label{thminvariants-double-fibration} Let $f$ be a double cover fibration of type $(g, \gamma)$, and $s_i$'s the singularity indices as above. Then $$\begin{aligned} (2g+1-3\gamma)\omega_f^2~=&~\,x\cdot\frac{\omega_h^2}{\gamma-1}+yT+zs_2+\sum_{k\geq 1} a_ks_{2k+1}+\sum_{k\geq 2}b_ks_{2k},\\ (2g+1-3\gamma)\chi_f~=&~\,\bar x\cdot\frac{\omega_h^2}{\gamma-1}+2(2g+1-3\gamma)\chi_h+\bar yT\\ &\,+\bar zs_2-\frac{2g+1-3\gamma}{4}\cdot n_2 +\sum_{k\geq 1} \bar a_ks_{2k+1}+\sum_{k\geq 2}\bar b_ks_{2k},\\ e_f~=&~\,2e_h+s_2-3n_2+\sum_{k\geq 1} s_{2k+1}+\sum_{k\geq 2}2s_{2k}, \end{aligned}$$ where we set $\frac{\omega_h^2}{\gamma-1}=0$ if $\gamma=1$,~ $n_2=\sum\limits_{F~\text{is singular}} n_2(F),$ and $$\begin{aligned} &x=\frac{(3g+1-4\gamma)(g-1)}{2},&\,\quad\quad\quad\,&~ y=\frac{3}{2},&\qquad&~z=g-1;\qquad\quad\\ &\bar x=\frac{(g+1-2\gamma)^2}{8},&~&~ \bar y=\frac{1}{8},&&~\bar z=\frac{g-\gamma}{4}. \end{aligned}$$\vspace{-0.2cm} $$\begin{aligned} &a_k\,=\,12\bar a_k-(2g+1-3\gamma), &&b_k\,=\,12\bar b_k-2(2g+1-3\gamma),\\ &\bar a_k\,=\,k\big(g-1+(k-1)(\gamma-1)\big), &\quad&\bar b_k\,=\,\frac{k\big(g-1+(k-2)(\gamma-1)\big)}{2},~ \end{aligned}~$$\vspace{-0.2cm} $$~T=-\frac{\big((g+1-2\gamma)\omega_h-(\gamma-1)R\big)^2}{\gamma-1}-2(\gamma-1)n_2\geq 0.\qquad\qquad\qquad $$ \end{theorem} \begin{proof} Recall the canonical resolution $\psi$ exhibited in \autoref{figure-1}. By \autoref{numberofcontraction}, one has $$\begin{aligned} &\left(\omega_{\tilde h}+\wt R'\right)\cdot \wt R'-2\left(n_2+\sum_{k\geq 1} s_{2k+1}\right)\\=\,&\left(\omega_{\tilde h}+\wt R\right)\cdot \wt R =\left(\omega_{h}+R\right)\cdot R-\sum_{i=1}^t\left(2\left[\frac{m_{i-1}}2\right]-1\right)\cdot2\left[\frac{m_{i-1}}2\right]\\ =\,&\left(\omega_{h}+R\right)\cdot R-\sum_{k\geq1}(8k^2+4k+2)s_{2k+1}-\sum_{k\geq 2}(4k^2-2k)s_{2k}-2\sum_{F~\text{is singular}} s_2(F).
\end{aligned}$$ Combining this with the definition of $s_2$, we get \begin{equation}\label{eqn(omegah+R)R} (\omega_{h}+R)\cdot R=(s_2-2n_2)+\sum_{k\geq1}4k(2k+1)s_{2k+1}+\sum_{k\geq 2}2k(2k-1)s_{2k}. \end{equation} Thus by the formulas for double covers (cf. \cite[\S\,V.22]{bhpv-04}), one obtains: \begin{eqnarray} \hspace{-0.2cm}\omega_{\tilde f}^2&\hspace{-0.2cm}=\hspace{-0.2cm}&2\left(\omega_h^2+\omega_h\cdot R +\frac{R^2}{4}\right)-2\left(\sum_{k\geq1}(2k^2-2k+1)s_{2k+1}+\sum_{k\geq 2}(k-1)^2s_{2k}\right)\nonumber\\ &\hspace{-0.2cm}=\hspace{-0.2cm}&x'\cdot\frac{\omega_h^2}{\gamma-1}+y'\big(T+2(\gamma-1)n_2\big)+z'(\omega_{h}+R)\cdot R\label{eqn-omega-tilde-f^2}\\ &\hspace{-0.2cm}\hspace{-0.2cm}&-2\left(\sum_{k\geq1}(2k^2-2k+1)s_{2k+1}+\sum_{k\geq 2}(k-1)^2s_{2k}\right),\nonumber \end{eqnarray} \begin{eqnarray} \hspace{-0.2cm}\chi_{\tilde f}&\hspace{-0.2cm}=\hspace{-0.2cm}&2\chi_{h}+\frac12\left(\frac{\omega_h\cdot R}{2}+\frac{R^2}{4}\right) -\left(\sum_{k\geq1}k^2s_{2k+1}+\sum_{k\geq 2}\frac{k(k-1)}{2}s_{2k}\right)\qquad\qquad\,\nonumber\\ &\hspace{-0.2cm}=\hspace{-0.2cm}&2\chi_{h}+\bar x'\cdot\frac{\omega_h^2}{\gamma-1}+\bar y'\big(T+2(\gamma-1)n_2\big)+\bar z'(\omega_{h}+R)\cdot R\label{eqn-chi-tilde-f}\\ &\hspace{-0.2cm}\hspace{-0.2cm}&-\left(\sum_{k\geq1}k^2s_{2k+1}+\sum_{k\geq 2}\frac{k(k-1)}{2}s_{2k}\right),\nonumber \end{eqnarray} where $\ast'=\frac{\ast}{2g+1-3\gamma}$ for $\ast=x,y,z,\bar x,\bar y$ or $\bar z$. Note that $\omega_f^2=\omega_{\tilde f}^2+n_2+\sum\limits_{k\geq 1} s_{2k+1}$ and $\chi_{f}=\chi_{\tilde f}$ by \autoref{numberofcontraction}. Therefore, the formulas in our theorem follow from the above equalities together with \eqref{eqn(omegah+R)R} and \eqref{eqnnoether}. Note that $T=2(g-1)\omega_h\cdot R \geq 0$ if $\gamma=1$. It remains to show that $T\geq 0$ if $\gamma>1$. For this purpose, let $V\subseteq R$ be the union of the isolated $(-2)$-curves of $R$ that are contracted by $h$, and $R'=R\setminus V$.
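Note that for a general fiber $\Gamma$ of $h$ one has $\omega_h\cdot \Gamma=2\gamma-2$ and $R\cdot \Gamma=2(g+1-2\gamma)$ by the Hurwitz formula; since $V$ is vertical, also $R'\cdot\Gamma=R\cdot\Gamma$, and hence $$\Gamma\cdot\big((g+1-2\gamma)\omega_h-(\gamma-1)R'\big)=(g+1-2\gamma)(2\gamma-2)-(\gamma-1)\cdot 2(g+1-2\gamma)=0.$$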
By \autoref{numberofcontraction}, the number of components contained in $V$ is $n_2$. Since $\Gamma\cdot \big((g+1-2\gamma)\omega_h-(\gamma-1)R'\big)=0$, one gets by the Hodge index theorem that $$0\geq \big((g+1-2\gamma)\omega_h-(\gamma-1)R'\big)^2=\big((g+1-2\gamma)\omega_h-(\gamma-1)R\big)^2+2(\gamma-1)^2n_2.$$ Hence $T\geq 0$ as required. \end{proof} \subsection{Irregular double cover fibrations} In this subsection, we would like to prove the following restrictions on the invariants of irregular double cover fibrations. Here, the double cover fibration $f$ of type $(g,\gamma)$ is called irregular if \begin{equation}\label{eqn-def-irreg} q_{\pi}:=q(\wt X)-q(\wt Y) >0, \end{equation} where $\wt X$ and $\wt Y$ are the same as in the last subsection. \begin{proposition}\label{prop-restriction-invariants} Let $f:\,X\to B$ be a double cover fibration of type $(g, \gamma)$ with the double cover $\pi$ as in \autoref{def-double-cover-fibration}. {\rm (i)} If the double cover $\pi$ is irregular, i.e., $q_{\pi}>0$, then \begin{eqnarray} \hspace{-0,3cm}&&\hspace{-0,3cm}2(g+1-2\gamma)s_2 \label{eqnq_pi>0}\\ \hspace{-0,3cm}&\leq&\hspace{-0,3cm}(g+1-2\gamma)^2\cdot \frac{\omega_h^2}{\gamma-1}+ T+\sum_{k\geq 1} 2(4\bar a_k+2g+1-3\gamma)s_{2k+1}+\sum_{k\geq 2}8\bar b_ks_{2k}.\qquad\nonumber \end{eqnarray} {\rm (ii)} If the image $J_0(\wt X)\subseteq \Alb_0(\wt X)$ is a curve of geometric genus $g'>0$, then \begin{eqnarray} \hspace{-0,3cm}&&\hspace{-0,3cm}2(g+1-2\gamma)\left(s_2+\sum_{k=1}^{g'-1} 4(2k+1)ks_{2k+1}+\sum_{k=2}^{g'} 2(2k-1)ks_{2k}\right) \label{eqnq_pi>g'}\\ \hspace{-0,3cm}&\leq&\hspace{-0,3cm}(g+1-2\gamma)^2\cdot \frac{\omega_h^2}{\gamma-1}+ T+\sum_{k\geq g'} 2(4\bar a_k+2g+1-3\gamma) s_{2k+1}+\sum_{k\geq g'+1} 8\bar b_ks_{2k};\qquad\nonumber \end{eqnarray} where $\bar a_k$'s, $\bar b_k$'s are defined in \autoref{thminvariants-double-fibration}, and $J_0$ will be defined in \eqref{eqndef-of-J_0}.
\end{proposition} The main tool in the proof of the above proposition is the Albanese variety. We first review Albanese varieties and show that the ramified divisor is contracted by $J_0$. Then the proposition follows from the semi-negativity of the divisors contracted by some non-trivial map. Let $\wt {\mathcal R}=\tilde\pi^{-1}(\wt R)\subseteq \wt X$ be the ramified divisor. Let $\Alb(\wt X)$ (resp. $\Alb(\wt Y)$) be the Albanese variety of $\wt X$ (resp. $\wt Y$), and $\tau$ the generator of the Galois group ${\rm Gal}(\wt X/\wt Y)\cong \mathbb Z/2\mathbb Z$. Then we have a natural map $\Alb(\tilde\pi):\, \Alb(\wt X)\to \Alb(\wt Y)$ and $\tau$ has a natural action on $\Alb(\wt X)$. Let $\Alb_0(\wt X)$ be the kernel of the action of $\tau$ on $\Alb(\wt X)$. Then it is clear that $\Alb(\wt X)$ is isogenous to $\Alb_0(\wt X) \oplus \Alb(\tilde\pi)^{-1}\big(\Alb(\wt Y)\big)$ and $\dim \Alb_0(\wt X)=q_{\pi}$. Denote by \begin{equation}\label{eqndef-of-J_0} J_0:\,\wt X \to \Alb_0(\wt X) \end{equation} the induced map. \begin{lemma}\label{lemmawtR-contracted} The ramified divisor $\wt {\mathcal R}$ is contracted by the map $J_0$. \end{lemma} \begin{proof} Let $C\subseteq \wt {\mathcal R}$ be any irreducible component, $\wt C$ its normalization, $j:\,\wt C \to \wt X$ the induced map and $\varphi=J_0\circ j:\,\wt C \to \Alb_0(\wt X)$ the composition. We have to prove that $\varphi(\wt C)$ is a point. We argue by contradiction. Assume that $\varphi(\wt C)$ is not a point. Then the induced map $$\varphi^*:~H^0\left(\Alb_0(\wt X),\,\Omega_{\Alb_0(\wt X)}^1\right) \lra H^0\left(\wt C,\,\Omega_{\wt C}^1\right)$$ is non-zero.
On the other hand, it is clear that $\varphi^*$ factors through $$H^0\left(\Alb_0(\wt X),\,\Omega_{\Alb_0(\wt X)}^1\right) \overset{J_0^*}\lra H^0\left(\wt X,\,\Omega_{\wt X}^1\right) \overset{j^*}\lra H^0\left(\wt C,\,\Omega_{\wt C}^1\right).$$ Note that the generator $\tau$ of the Galois group ${\rm Gal}(\wt X/\wt Y)$ acts on $H^0\left(\wt X,\,\Omega_{\wt X}^1\right)$. Let $$H^0\left(\wt X,\,\Omega_{\wt X}^1\right)_{-1} \oplus H^0\left(\wt X,\,\Omega_{\wt X}^1\right)_{1}$$ be the eigenspace decomposition. Then by construction, the image of $J_0^*$ is contained in $H^0\left(\wt X,\,\Omega_{\wt X}^1\right)_{-1}$. To deduce a contradiction, it suffices to prove that the restricted map $$j^*\big|_{H^0\left(\wt X,\,\Omega_{\wt X}^1\right)_{-1}}:~H^0\left(\wt X,\,\Omega_{\wt X}^1\right)_{-1} \lra H^0\left(\wt C,\,\Omega_{\wt C}^1\right)$$ is zero. In fact, let $p\in C$ be an arbitrary smooth point of $C$. Locally around $p$, there exist local coordinates $(x,y)$ such that the action of $\tau$ is given by $\tau(x,y)=(x,-y)$ and $C$ is defined by $y=0$. For any $1$-form $$\omega=\alpha(x,y)dx+\beta(x,y)dy \in H^0\left(\wt X,\,\Omega_{\wt X}^1\right),$$ one has $$\omega\in H^0\left(\wt X,\,\Omega_{\wt X}^1\right)_{-1} \Longleftrightarrow \alpha(x,y)=y\tilde \alpha(x,y^2), ~\beta(x,y)=\tilde \beta(x, y^2).$$ Hence if $\omega \in H^0\left(\wt X,\,\Omega_{\wt X}^1\right)_{-1}$, one gets that $j^*\omega\big|_{j^{-1}(p)}=0$, from which it follows that $j^*\omega=0$ since $p$ is arbitrary. The proof is complete. \end{proof} \begin{lemma}\label{lemma-order-decrease} Let $y_j \in R_{j}\subseteq Y_{j}$ be a singularity infinitely near to $y_{i}\in R_{i}\subseteq Y_{i}$ as in the canonical resolution in Figure {\rm\ref{figure-1}}. Then \begin{equation*} m_j\leq m_i,\quad\text{if $m_i$ is even;}\qquad\quad m_j\leq m_i+1,\quad\text{if $m_i$ is odd.} \end{equation*} \end{lemma} \begin{proof} It suffices to consider the case where $j=i+1$ and $\psi_{i+1}(y_{i+1})=y_i$.
But this is clear because if $m_i$ is even, then $\mathcal E_{i+1}\nsubseteq R_{i+1}$; and if $m_i$ is odd, then $\mathcal E_{i+1}\subseteq R_{i+1}$. \end{proof} \begin{proof}[Proof of \autoref{prop-restriction-invariants}] Recall that those blowing-ups $\psi_i$'s are contained in the canonical resolution $\psi$. For convenience, we view $\psi_i\circ\psi_{i+1}:\,Y_{i+1}\to Y_{i-1}$ as a single blowing-up (but with two exceptional curves) if $$Y_{i+1}\overset{\psi_{i+1}}\lra Y_{i} \overset{\psi_{i}}\lra Y_{i-1}$$ are blowing-ups of a type-$(2k+1 \to 2k+1)$ singularity. For a blowing-up $\psi'$ contained in $\psi$, the order of $\psi'$ is defined to be $k+1$ if $\psi'$ is a blowing-up of a type-$(2k+1 \to 2k+1)$ singularity, and to be $[m'/2]$ if $\psi'$ is a blowing-up of a singularity of the branch divisor with multiplicity $m'$. Now we introduce a partial order on these blowing-ups contained in $\psi$: we say $\psi'\geq \psi''$ if $k'\geq k''$, where $k'$ (resp. $k''$) is the order of $\psi'$ (resp. $\psi''$). According to \autoref{lemma-order-decrease}, we can reorder these blowing-ups contained in $\psi$ such that $\psi_i\geq \psi_j$ if $i<j$. Let $M$ be the maximal order of these blowing-ups contained in $\psi$. Then $\psi$ can be decomposed as $$\xymatrix{\wt Y\ar@{=}[r] &\hat Y_{M}\ar@/_7mm/"1,5"^-{\psi} \ar[r]^-{\hat\psi_{M}} & \cdots\cdots \ar[r]^-{\hat\psi_2} & \hat Y_1 \ar[r]^-{\hat\psi_1} & \hat Y_0 \ar@{=}[r]& Y} $$ such that the order of each blowing-up contained in $\hat \psi_i$ is $M+1-i$. Consider any blowing-up $\psi'$ contained in $\hat\psi_i$. If it is a blowing-up of a type-$\big(2(M-i)+1 \to 2(M-i)+1\big)$ singularity, let $\mathcal E_1$ and $\mathcal E_2$ be the two exceptional curves. 
By construction, one of them, say $\mathcal E_1$, is contained in the branch divisor, hence its strict inverse image on $\wt X$ is a rational curve; the other one, say $\mathcal E_2$, is not contained in the branch divisor and intersects the branch divisor in at most $2\big(M-i\big)+2$ points, hence the geometric genus of its strict inverse image on $\wt X$ is at most $M-i$ by the Hurwitz formula (cf. \cite[\S\,IV.2]{hartshorne-77}). If $\psi'$ is an ordinary blowing-up with one exceptional curve $\mathcal E$, then one can prove similarly that the geometric genus of its strict inverse image on $\wt X$ is also at most $M-i$. In any case, we obtain that the strict inverse image of any exceptional curve of $\hat \psi_i$ has geometric genus at most $M-i$. Consider first the case when $J_0(\wt X)$ is a curve of geometric genus $g'>0$. In this case, any curve of geometric genus less than $g'$ is contracted by $J_0$. Hence combining this with the above arguments and \autoref{lemmawtR-contracted}, we conclude that the total inverse image of $\hat R_{M-g'}$ in $\wt X$ is contracted by $J_0$, where $\hat R_{M-g'}\subseteq \hat Y_{M-g'}$ is the image of $\wt R$. In particular, the total inverse image of $\hat R_{M-g'}$ is semi-negative definite, which implies that $\hat R_{M-g'}$ is also semi-negative definite. By construction, each blowing-up contained in $$\hat \psi_{M-g'+1}\circ\cdots\circ\hat\psi_{M}:~\wt Y=\hat Y_M \lra \hat Y_{M-g'}$$ has order less than or equal to $g'$. Thus there exist $n_2+\sum\limits_{k\geq g'} s_{2k+1}$ vertical isolated $(-2)$-curves contained in $\hat R_{M-g'}$ by \autoref{numberofcontraction}, since the image of any isolated $(-2)$-curve contained in $\wt R$ is still an isolated $(-2)$-curve contained in $\hat R_{M-g'}$. Therefore \begin{equation}\label{eqnpfq_pi>g'1} \hat R_{M-g'}^2 \leq -2\left(n_2+\sum\limits_{k\geq g'} s_{2k+1}\right).
\end{equation} By construction, we have $$\begin{aligned} \hat R_{M-g'}^2=&\,R^2-\left(\sum_{k\geq g'} 4(2k^2+2k+1)s_{2k+1}+\sum_{k\geq g'+1}4k^2s_{2k}\right)\\ =&\,\hat x\cdot\frac{\omega_h^2}{\gamma-1}+\hat y\big(T+2(\gamma-1)n_2\big)+\hat z\left(\omega_{h}+R\right)\cdot R\\ &~-\left(\sum_{k\geq g'} 4(2k^2+2k+1)s_{2k+1}+\sum_{k\geq g'+1}4k^2s_{2k}\right), \end{aligned}$$ where $$\hat x=\frac{-(g+1-2\gamma)^2}{(2g+1-3\gamma)},\qquad \hat y=\frac{-1}{(2g+1-3\gamma)},\qquad \hat z=\frac{2g+2-4\gamma}{2g+1-3\gamma}. $$ Hence \eqref{eqnq_pi>g'} follows from the above equation together with \eqref{eqn(omegah+R)R} and \eqref{eqnpfq_pi>g'1}. Finally, let us consider the case when $q_{\pi}>0$. In this case, $J_0(\wt X)$ is of positive dimension since $J_0(\wt X)$ generates $\Alb_0(\wt X)$ by construction, and any rational curve in $\wt X$ is contracted by $J_0$. Hence, arguing as above, one sees that $\hat R_{M-1}$ is semi-negative definite and \begin{equation}\label{eqnpfq_pi>01} \hat R_{M-1}^2 \leq -2\left(n_2+\sum\limits_{k\geq 1} s_{2k+1}\right). \end{equation} Therefore, \eqref{eqnq_pi>0} follows from a similar argument as above. \end{proof} \subsection{Slope of double cover fibrations}\label{sect-slope-double} In this subsection, we would like to consider the question of the lower bound of the slope for double cover fibrations. The main techniques are \autoref{thminvariants-double-fibration} and \autoref{prop-restriction-invariants}. Based on \autoref{thminvariants-double-fibration}, we can reprove the following lower bound of the slope for a double cover fibration, which was proved earlier by Barja, Zucconi, Cornalba and Stoppino. \begin{theorem}[{\cite[Cor.\,2.6]{barja-zucconi-01} \& \cite[Thm.\,2.1]{barja-01} \& \cite[Thm.\,3.1]{cornalba-stoppino-08}}]\label{thm-double-1} Let $f$ be a double cover fibration of type $(g,\gamma)$. If $h$ is locally trivial or $g\geq 4\gamma+1$, then \begin{equation}\label{eqn4(g-1)/(g-gamma)} \lambda_f\geq \frac{4(g-1)}{g-\gamma}.
\end{equation} \end{theorem} \begin{proof} By \autoref{thminvariants-double-fibration}, for any $\lambda$, one has \begin{eqnarray} &&~(2g+1-3\gamma)(\omega_f^2-\lambda\cdot\chi_f)\label{eqnomega_f^2-lambda-chi_f}\\[0.15cm] &=\hspace{-0.2cm}&\left(\frac{(3g+1-4\gamma)(g-1)}{2}-\frac{(g+1-2\gamma)^2\lambda}{8}\right)\cdot\frac{\omega_h^2}{\gamma-1}-2(2g+1-3\gamma)\lambda\cdot\chi_h\qquad\nonumber\\ &&\hspace{-0.2cm}+\frac{12-\lambda}{8}\cdot T+\frac{4(g-1)-(g-\gamma)\lambda}{4}\cdot s_2+\frac{(2g+1-3\gamma)\lambda}{4}\cdot n_2\nonumber\\ &&\hspace{-0.2cm}+\sum_{k\geq 1} \Big((12-\lambda)k\big((g-1)+(k-1)(\gamma-1)\big)-(2g+1-3\gamma)\Big)\cdot s_{2k+1}\nonumber\\ &&\hspace{-0.2cm}+\sum_{k\geq 2}\left(\frac{(12-\lambda)k\big((g-1)+(k-2)(\gamma-1)\big)}{2}-2(2g+1-3\gamma)\right)\cdot s_{2k}.\nonumber \end{eqnarray} Taking $\lambda=\frac{4(g-1)}{g-\gamma}$ in \eqref{eqnomega_f^2-lambda-chi_f}, it is easy to see that the coefficients of $n_2$ and $s_j$'s for $j\geq 3$ are all non-negative due to \eqref{eqnhurwitz-formula}. Since $T$, $n_2$ and $s_j$'s for $j\geq 3$ are also all non-negative by definition, it follows from \eqref{eqnomega_f^2-lambda-chi_f} that \begin{equation}\label{eqnpf-4(g-1)/(g-gamma)-1} \omega_f^2-\frac{4(g-1)}{g-\gamma}\cdot\chi_f\geq \frac{1}{2(g-\gamma)}\left((g-1)^2\cdot\frac{\omega_h^2}{\gamma-1}+T-16(g-1)\cdot\chi_h\right). \end{equation} If $h$ is locally trivial, then $\frac{\omega_h^2}{\gamma-1}=\chi_h=0$ and $T\geq 0$, from which together with \eqref{eqnpf-4(g-1)/(g-gamma)-1} the inequality \eqref{eqn4(g-1)/(g-gamma)} follows immediately. 
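Note that $s_2$, whose sign is unclear a priori, does not appear in \eqref{eqnpf-4(g-1)/(g-gamma)-1}: the choice $\lambda=\frac{4(g-1)}{g-\gamma}$ annihilates its coefficient in \eqref{eqnomega_f^2-lambda-chi_f}, since $$\frac{4(g-1)-(g-\gamma)\lambda}{4}=0\quad\Longleftrightarrow\quad \lambda=\frac{4(g-1)}{g-\gamma}.$$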
If $g\geq 4\gamma+1$ and $\gamma=1$, then by \cite[\S\,V-Theorem\,12.1]{bhpv-04}, one has \begin{equation}\label{eqnpf-4(g-1)/(g-gamma)-2} \omega_h \sim_{\text{(numerically equivalent)}} \left(\chi_h+\sum_{i=1}^{n}\frac{l_i-1}{l_i}\right)\Gamma, \end{equation} where $\Gamma$ is a general fiber of $h$ and $\{\Gamma_i\}_{i=1,\cdots,n}$ are the multiple fibers of $h$ with multiplicities $\{l_i\}_{i=1,\cdots,n}$. Hence $T=2(g-1)\omega_h\cdot R\geq 4(g-1)^2\chi_h$. Therefore, it follows from \eqref{eqnpf-4(g-1)/(g-gamma)-1} that $\omega_f^2-4\chi_f\geq 2(g-5)\chi_h\geq 0$. If $g\geq 4\gamma+1$ and $\gamma>1$, then one has $\omega_h^2\geq \frac{4(\gamma-1)}{\gamma}\cdot\chi_h\geq 0$ and $T\geq 0$. Hence by \eqref{eqnpf-4(g-1)/(g-gamma)-1}, we get \[\omega_f^2-\frac{4(g-1)}{g-\gamma}\cdot\chi_f\geq \frac{4(g-1)(g-4\gamma-1)}{2(g-\gamma)\gamma}\cdot\chi_h\geq 0 \,\text{~\,as required.}\qedhere\] \end{proof} When $f$ is an irregular double cover, we have the following better bounds, which generalize \cite[Theorem\,1.4]{lu-zuo-13}. \begin{theorem}\label{thm-irreg-doub} Let $f$ be an irregular double cover fibration of type $(g,\gamma)$, and \begin{equation} F(g,\gamma,q_{\pi})=(g-1)^2-4(g-1)(\gamma q_{\pi}+\gamma+q_{\pi})-4q_{\pi}^2(\gamma^2-1). \end{equation} {\rm(i)} If $h$ is locally trivial or $F(g,\gamma,1)\geq 0$, then \begin{equation}\label{eqn6+4(gamma-1)/(g-1)} \lambda_f\geq 6+\frac{4(\gamma-1)}{g-1}. \end{equation} {\rm(ii)} Assume moreover that $J_0(\wt X)$ is a curve, where $J_0$ is defined in \eqref{eqndef-of-J_0}. If $h$ is locally trivial or $F(g,\gamma,q_{\pi})\geq 0$, then \begin{equation}\label{eqn-lambda_f>lambda_g,q_pi} \lambda_f\geq \lambda_{g,\gamma,q_{\pi}}:=8-\frac{4(g+1-2\gamma)}{(q_{\pi}+1)\big((g-1)+(q_{\pi}-1)(\gamma-1)\big)}. \end{equation} \end{theorem} \begin{proof} We only prove (ii) here, since the proof of (i) is completely the same, except that the usage of \eqref{eqnq_pi>g'} is replaced by \eqref{eqnq_pi>0} in the following.
Note that $J_0(\wt X)$ generates $\Alb_0(\wt X)$ by construction. Hence the geometric genus of $J_0(\wt X)$ is at least $q_{\pi}=\dim \Alb_0(\wt X)$. Note also that $\lambda_{g,\gamma,q_{\pi}}\geq \frac{4(g-1)}{g-\gamma}$, since $g+1-2\gamma\geq 0$ by \eqref{eqnhurwitz-formula}. Hence by \eqref{eqnq_pi>g'} and \eqref{eqnomega_f^2-lambda-chi_f} with $\lambda=\lambda_{g,\gamma,q_{\pi}}$, we obtain \begin{eqnarray} \hspace{-0.2cm}&&\omega_f^2-\lambda_{g,\gamma,q_{\pi}}\cdot\chi_f\label{eqn-pf-lambda_f>lambda_g,q_pi-1}\\ \hspace{-0.2cm}&\geq\hspace{-0.1cm}&\frac{8(g-1)-(g+1-2\gamma)\lambda_{g,\gamma,q_{\pi}}}{8}\cdot \frac{\omega_h^2}{\gamma-1}-2\lambda_{g,\gamma,q_{\pi}}\cdot\chi_h+\frac{8-\lambda_{g,\gamma,q_{\pi}}}{8(g+1-2\gamma)}\cdot T\quad\nonumber\\ \hspace{-0.2cm}&&\hspace{-0.2cm}+\frac{\lambda_{g,\gamma,q_{\pi}}}{4}\cdot n_2+\sum_{k=1}^{q_{\pi}-1} \xi_k\cdot s_{2k+1}+\sum_{k=2}^{q_{\pi}}\eta_k\cdot s_{2k} +\sum_{k\geq q_{\pi}} \mu_k\cdot s_{2k+1}+\sum_{k\geq q_{\pi}+1}\nu_k\cdot s_{2k},\nonumber \end{eqnarray} where $$\begin{aligned} \xi_k&\,=\,k^2\lambda_{g,\gamma,q_{\pi}}-(2k-1)^2,\\ \eta_k&\,=\,\frac{(k-1)\big(k\lambda_{g,\gamma,q_{\pi}}-4(k-1)\big)}{2},\\ \mu_k&\,=\,\frac{\big(4k(g-1)+(2k-1)^2(\gamma-1)\big)(8-\lambda_{g,\gamma,q_{\pi}})-(g+1-2\gamma)\lambda_{g,\gamma,q_{\pi}}}{4(g+1-2\gamma)},\\ \nu_k&\,=\,\frac{k\big((g-1)+(k-2)(\gamma-1)\big)(8-\lambda_{g,\gamma,q_{\pi}})-4(g+1-2\gamma)}{2(g+1-2\gamma)}. \end{aligned}$$ It is easy to see that $\xi_k\geq 0$ for any $1\leq k\leq q_{\pi}-1$, $\eta_k\geq 0$ for any $2\leq k\leq q_{\pi}$, and $$\begin{aligned} \mu_k&\geq \mu_{q_{\pi}}=\frac{2(q_{\pi}-1)}{q_{\pi}+1}+\frac{g-\gamma}{(q_{\pi}+1)\big((g-1)+(q_{\pi}-1)(\gamma-1)\big)}\geq0,&~&\forall~k\geq q_{\pi},\\ \nu_k&\geq \nu_{q_{\pi}+1}=0,&&\forall~k\geq q_{\pi}+1. 
\end{aligned}$$ Hence by \eqref{eqn-pf-lambda_f>lambda_g,q_pi-1}, one has \begin{eqnarray} \hspace{-0.2cm}&&\omega_f^2-\lambda_{g,\gamma,q_{\pi}}\cdot\chi_f\label{eqn-pf-lambda_f>lambda_g,q_pi-2}\\ \hspace{-0.2cm}&\geq\hspace{-0.1cm}&\frac{8(g-1)-(g+1-2\gamma)\lambda_{g,\gamma,q_{\pi}}}{8}\cdot \frac{\omega_h^2}{\gamma-1}-2\lambda_{g,\gamma,q_{\pi}}\cdot\chi_h+\frac{8-\lambda_{g,\gamma,q_{\pi}}}{8(g+1-2\gamma)}\cdot T\quad\nonumber \end{eqnarray} If $h$ is locally trivial, then $\frac{\omega_h^2}{\gamma-1}=\chi_h=0$ and $T\geq 0$. Hence \eqref{eqn-lambda_f>lambda_g,q_pi} is clearly true. If $F(g,\gamma,q_{\pi})\geq 0$ and $\gamma=1$, then by \eqref{eqnpf-4(g-1)/(g-gamma)-2} one has $T=2(g-1)\omega_h\cdot R\geq 4(g-1)^2\chi_h$. Hence it follows from \eqref{eqn-pf-lambda_f>lambda_g,q_pi-2} that $$\omega_f^2-\lambda_{g,1,q_{\pi}}\cdot\chi_f\geq \frac{2(g-8q_{\pi}-5)}{q_{\pi}+1}\cdot \chi_h.$$ Note that the assumption $F(g,\gamma,q_{\pi})\geq 0$ implies that $g\geq 8q_{\pi}+5$ when $\gamma=1$. Thus the above inequality implies that \eqref{eqn-lambda_f>lambda_g,q_pi} holds if $\gamma=1$. Finally, we consider the case when $F(g,\gamma,q_{\pi})\geq 0$ and $\gamma>1$. In this case one has $\omega_h^2\geq \frac{4(\gamma-1)}{\gamma}\cdot\chi_h\geq 0$ and $T\geq 0$. Hence by \eqref{eqn-pf-lambda_f>lambda_g,q_pi-2}, we get \[\omega_f^2-\lambda_{g,\gamma,q_{\pi}}\cdot\chi_f\geq \frac{2F(g,\gamma,q_{\pi})}{\gamma(q_{\pi}+1)\big((g-1)+(q_{\pi}-1)(\gamma-1)\big)}\cdot\chi_h \geq 0.\qedhere\] \end{proof} \begin{remark} Let $f$ be an irregular double cover fibration of type $(g,\gamma)$. Similar to the above proof, one can show that \begin{equation}\label{eqn-slope>6} \lambda_f \geq 6, \qquad \text{if~} g\geq 6\gamma+7. \end{equation} \end{remark} In the following, we would like to consider Conjecture {\rm\ref{conjecturebs}} for double cover fibrations, i.e., consider the influence of $q_f$ on the lower bound of $\lambda_f$. 
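Before doing so, we remark that the two bounds in \autoref{thm-irreg-doub} are compatible: for $q_{\pi}=1$, the right hand side of \eqref{eqn-lambda_f>lambda_g,q_pi} coincides with that of \eqref{eqn6+4(gamma-1)/(g-1)}, since $$\lambda_{g,\gamma,1}=8-\frac{4(g+1-2\gamma)}{2(g-1)}=\frac{6(g-1)+4(\gamma-1)}{g-1}=6+\frac{4(\gamma-1)}{g-1}.$$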
In order to use \autoref{prop-restriction-invariants} and \autoref{thm-irreg-doub}, we first have to know when $J_0(\wt X)$ is a curve, where $J_0$ is defined in \eqref{eqndef-of-J_0}. \begin{lemma}[\cite{cai-98}]\label{lem-cai} If $q_{\pi}>\gamma+1$, then the image $J_0(\wt X)\subseteq \Alb_0(\wt X)$ is a curve. \end{lemma} \begin{proof} Let $\wt F$ be a general fiber of $\tilde f$, and $\wt \Gamma=\tilde\pi(\wt F) \subseteq \wt Y$. Consider the linear map $$\varsigma:\,\wedge^2 H^{1,0}\big(\Alb_0(\wt X)\big) \cong H^{2,0}\big(\Alb_0(\wt X)\big) \to H^{1,0}(\wt F)$$ obtained by composing the linear map $$H^{2,0}\big(\Alb_0(\wt X)\big) \lra H^{2,0}(\wt X)$$ with the restriction map $$H^{2,0}(\wt X) \cong H^{0}\big(\wt X,\,\omega_{\wt X}\big) \lra H^{0}\big(\wt F,\,\omega_{\wt F}\big) \cong H^{1,0}(\wt F),$$ where $\omega_{\wt X}$ (resp. $\omega_{\wt F}$) is the canonical sheaf of $\wt X$ (resp. $\wt F$). Note that the generator $\tau$ of the Galois group ${\rm Gal}(\wt X/\wt Y)$ acts on $H^{1,0}\big(\Alb_0(\wt X)\big)$ by multiplication by $-1$, from which it follows that the image $\im(\varsigma)$ is contained in the invariant subspace $H^{0}\big(\wt F,\,\omega_{\wt F}\big)^{\tau} \cong H^{0}\big(\wt \Gamma,\,\omega_{\wt \Gamma}\big)$. In particular, one has $$\dim \im(\varsigma) \leq \dim H^{0}\big(\wt \Gamma,\,\omega_{\wt \Gamma}\big)=\gamma.$$ On the other hand, if $J_0(\wt X)$ is a surface, then it is proved by Xiao (cf. \cite[Theorem\,2]{xiao-87}, see also \cite[Lemma\,1]{pirola-89} by Pirola) that $$\dim \im(\varsigma) \geq q_{\pi} -1.$$ From the above two inequalities it follows that $J_0(\wt X)$ is a curve if $q_{\pi}>\gamma+1$. \end{proof} \begin{theorem} Let $f$ be a double cover fibration of type $(g,\gamma)$.
Then \autoref{conjecturebs} holds for $f$ if one of the following is satisfied: \begin{list}{} {\setlength{\labelwidth}{0.6cm} \setlength{\leftmargin}{0.7cm}} \item[{\rm(i)}] $h$ is locally trivial, $g\geq 2(\gamma+1)+\sqrt{8\gamma^2+1}$, and $q_f<(g+1)/2$; \item[{\rm(ii)}] $h$ is locally non-trivial, $g\geq 2\gamma+2q_f+1>6\gamma+3$. \end{list} \end{theorem} \begin{proof} (i) Note that $q_f=q_\pi+q_h\leq q_\pi+\gamma$. If $q_\pi=0$, then $q_f\leq \gamma$, and hence \autoref{conjecturebs} follows from \autoref{thm-double-1}. Now assume that $q_\pi>0$, i.e., $f$ is an irregular double cover. In this case, if $q_\pi\leq \gamma+1$, then $q_f\leq 2\gamma+1$. Hence according to \autoref{thm-irreg-doub}(i), one gets $$\lambda_f \geq 6+\frac{4(\gamma-1)}{g-1}\geq \frac{4(g-1)}{g-2\gamma-1} \geq \frac{4(g-1)}{g-q_f}.$$ If $q_{\pi}>\gamma+1$, then the image $J_0(\wt X)\subseteq \Alb_0(\wt X)$ is a curve by \autoref{lem-cai}. Hence according to \autoref{thm-irreg-doub}(ii), one obtains $$\begin{aligned} \lambda_f\geq~& 8-\frac{4(g+1-2\gamma)}{(q_{\pi}+1)\big((g-1)+(q_{\pi}-1)(\gamma-1)\big)}\\[1.5mm] \geq~&\left\{\begin{aligned} &\frac{4(g-1)}{g-q_\pi-\gamma},&\quad&\text{~if~}q_\pi+\gamma<\frac{g+1}{2},\\[1mm] &8-\frac{8}{g},&&\text{~if~}q_\pi+\gamma\geq\frac{g+1}{2}, \end{aligned}\right.\\[1.5mm] \geq~&\frac{4(g-1)}{g-q_f},\qquad\text{since~}q_f<\frac{g+1}{2}. \end{aligned}$$ (ii) The assumption implies that the image $J_0(\wt X)\subseteq \Alb_0(\wt X)$ is a curve by \autoref{lem-cai}, where $J_0$ is defined in \eqref{eqndef-of-J_0}. Let $\lambda_0=\frac{4(g-1)}{g-q_f}=\frac{4(g-1)}{g-q_\pi-q_h}$. Then $\lambda_0\geq \frac{4(g-1)}{g-\gamma}$ by assumption. 
Hence, similarly to the proof of \autoref{thm-irreg-doub}(ii), one obtains \begin{eqnarray} \hspace{-0.2cm}&&\omega_f^2-\lambda_{0}\cdot\chi_f\label{eqn-pf-double-1}\\ \hspace{-0.2cm}&\geq\hspace{-0.1cm}&\frac{8(g-1)-(g+1-2\gamma)\lambda_{0}}{8}\cdot \frac{\omega_h^2}{\gamma-1}-2\lambda_{0}\cdot\chi_h+\frac{8-\lambda_{0}}{8(g+1-2\gamma)}\cdot T\quad\nonumber \end{eqnarray} If $\gamma=1$, then by \eqref{eqnpf-4(g-1)/(g-gamma)-2} one has $T=2(g-1)\omega_h\cdot R\geq 4(g-1)^2\chi_h$. Hence it follows from \eqref{eqn-pf-double-1} that $$\omega_f^2-\lambda_{0}\cdot\chi_f\geq \frac{8(g-1)-(g+3)\lambda_0}{2}\cdot \chi_h\geq 0.$$ If $\gamma>1$, then by \eqref{eqn-pf-double-1} with $\omega_h^2\geq \frac{4(\gamma-1)}{\gamma}\cdot\chi_h\geq 0$ and $T\geq 0$ we get \[\omega_f^2-\lambda_{0}\cdot\chi_f\geq \frac{8(g-1)-(g+1+2\gamma)\lambda_0}{2\gamma}\cdot\chi_h \geq 0.\qedhere\] \end{proof} \begin{remark} There do exist double cover fibrations of type $(g,\gamma)$ with $q_f=(g+1)/2$ but violating \autoref{conjecturebs}, see \autoref{example-3}. \end{remark} We end this section with the following lower bound on the slope of double cover fibrations, which was used in \autoref{sec-slope} for the proof of \autoref{thm-main}(i). It can be viewed as a supplement to \autoref{thm-double-1}. \begin{theorem} Let $f$ be a double cover fibration of type $(g, \gamma)$. If $g\leq 4\gamma+1$ and $(g+1-2\gamma)^2\geq 2(2g+1-3\gamma)$, then \begin{equation}\label{eqn-4.3-2} \lambda_f\geq \frac{4(g-1)(3g+1-4\gamma)}{(g+1-2\gamma)^2+4\gamma(2g+1-3\gamma)}. \end{equation} \end{theorem} \begin{proof} Let $\lambda_0:=\frac{4(g-1)(3g+1-4\gamma)}{(g+1-2\gamma)^2+4\gamma(2g+1-3\gamma)}$. Then $4\leq \lambda_0 \leq \frac{4(g-1)}{g-\gamma}$ by assumptions. If $\gamma=1$, then the assumptions imply that $\lambda_0=4$ and $g=5$. Hence \eqref{eqn-4.3-2} follows from \eqref{eqn4(g-1)/(g-gamma)}.
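Indeed, if $\gamma=1$, then the assumption $g\leq 4\gamma+1$ gives $g\leq 5$, while $(g+1-2\gamma)^2\geq 2(2g+1-3\gamma)$ reads $(g-1)^2\geq 4(g-1)$, i.e., $g\geq 5$; hence $g=5$, and then $$\lambda_0=\frac{4\cdot 4\cdot 12}{4^2+4\cdot 8}=\frac{192}{48}=4.$$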
If $\gamma>1$, taking $\lambda=\lambda_0$ in \eqref{eqnomega_f^2-lambda-chi_f} and using \autoref{lemma-4-1} below to eliminate $s_2$, one obtains \begin{eqnarray*} &&\omega_f^2-\lambda_0\cdot\chi_f\\[0.15cm] &\geq \hspace{-0.2cm}&\left(\frac{(3g+1-4\gamma)(g-1)}{2(2g+1-3\gamma)}-\frac{(g+1-2\gamma)^2\lambda_0}{8(2g+1-3\gamma)}\right) \cdot\frac{\omega_h^2}{\gamma-1}-2\lambda_0\cdot\chi_h+\frac{(\lambda_0-4)}{8(\gamma-1)}\cdot T\\ &&\hspace{-0.2cm}+\frac{\lambda_0}{4}\cdot n_2 +\sum_{k\geq 1} \big(k^2\lambda_0-(2k-1)^2\big)\cdot s_{2k+1} +\sum_{k\geq 2}\Big(\frac{k(k-1)}{2}\lambda_0-2(k-1)^2\Big)\cdot s_{2k}\\ &\geq \hspace{-0.2cm}&\left(\frac{(3g+1-4\gamma)(g-1)}{2(2g+1-3\gamma)}-\frac{(g+1-2\gamma)^2\lambda_0}{8(2g+1-3\gamma)}\right) \cdot\frac{\omega_h^2}{\gamma-1}-2\lambda_0\cdot\chi_h\\ &\geq \hspace{-0.2cm}&\Bigg(\left(\frac{(3g+1-4\gamma)(g-1)}{2(2g+1-3\gamma)}-\frac{(g+1-2\gamma)^2\lambda_0}{8(2g+1-3\gamma)}\right) \cdot\frac{4}{\gamma}-2\lambda_0\Bigg)\cdot\chi_h=0, \end{eqnarray*} where the second inequality follows from the non-negativity of $T,\,n_2$ and $s_j$'s for $j\geq 3$; and the third inequality comes from the slope inequality $\omega_h^2\geq \frac{4(\gamma-1)}{\gamma}\chi_h$ of the fibration $h$. The proof is complete. \end{proof} \begin{lemma}\label{lemma-4-1} \begin{equation}\label{eqn-4.3-1} T+(\gamma-1)\left(s_2+\sum_{k\geq1}4k(2k+1)s_{2k+1}+\sum_{k\geq 2}2k(2k-1)s_{2k}\right)\geq 0. \end{equation} \end{lemma} \begin{proof} We may assume that $\gamma>1$. By \eqref{eqn(omegah+R)R}, the inequality \eqref{eqn-4.3-1} is equivalent to \begin{equation}\label{eqn-4.3-lin2} T+(\gamma-1)\big((\omega_h+R)\cdot R+2n_2\big)\geq 0. \end{equation} Let $R=\sum\limits_{i=1}^{m}D_i$ be the decomposition into connected components, such that $$D_i\cdot \Gamma> 0, \quad\forall~1\leq i\leq l;~\qquad\, D_i\cdot \Gamma= 0, \quad\forall~l+1\leq i\leq m,$$ where $\Gamma$ is a general fiber of $h$.
We claim that \begin{equation}\label{eqn-4.3-lin3} (\omega_h+D_i)\cdot D_i\geq 0, ~\,\forall~1\leq i\leq l;\quad (\omega_h+D_i)\cdot D_i\geq -2, ~\,\forall~l+1\leq i\leq m. \end{equation} Indeed, let $\wt D_i=\sum\limits_{j=1}^{k_i}\wt D_{ij} \to D_i$ be the normalization, and $\sum\limits_{j=1}^{l_i}\wt D_{ij}$ be the irreducible components which are mapped surjectively onto $B$. Then $$\begin{aligned} (\omega_h+D_i)\cdot D_i&~= -\big(2g(B)-2\big)\Gamma\cdot D_i+(\omega_Y+D_i)\cdot D_i\\ &~\geq -\big(2g(B)-2\big)\Gamma\cdot D_i+\sum_{j=1}^{k_i}\big(2g(\wt D_{ij})-2\big)+2(k_i-1)\\ &~\geq \sum_{j=l_i+1}^{k_i}\big(2g(\wt D_{ij})-2\big)+2(k_i-1) \geq 2(l_i-1), \end{aligned}$$ where the second inequality follows from the Hurwitz formula applied to the maps $\wt D_{ij}\to B$ with $1\leq j\leq l_i$, which gives $\sum\limits_{j=1}^{l_i}\big(2g(\wt D_{ij})-2\big)\geq \big(2g(B)-2\big)\Gamma\cdot D_i$. Since $l_i\geq 1$ for $1\leq i\leq l$ and $l_i=0$ for $l+1\leq i\leq m$, the inequality \eqref{eqn-4.3-lin3} follows. Let $D=\sum\limits_{i=1}^{l}D_i$ and $D'=\sum\limits_{i=l+1}^{m}D_i$. Then $(\omega_h+D)\cdot D\geq 0$ by \eqref{eqn-4.3-lin3}. Since $\Gamma\cdot \big((g+1-2\gamma)\omega_h-(\gamma-1)D\big)=0$, one gets by the Hodge index theorem that $$\begin{aligned} 0&~\geq \big((g+1-2\gamma)\omega_h-(\gamma-1)D\big)^2\\ &~=\big((g+1-2\gamma)\omega_h-(\gamma-1)R\big)^2-(\gamma-1)^2(\omega_h+D')\cdot D'\\ &~\quad+(\gamma-1)(2g+1-3\gamma)\omega_h\cdot D'\\ &~\geq \big((g+1-2\gamma)\omega_h-(\gamma-1)R\big)^2-(\gamma-1)^2(\omega_h+D')\cdot D'. \end{aligned}$$ Combining this with the fact that $$(\omega_h+R)\cdot R=(\omega_h+D)\cdot D+(\omega_h+D')\cdot D'\geq (\omega_h+D')\cdot D',$$ we obtain \eqref{eqn-4.3-lin2}, and hence complete the proof. \end{proof} \section{Examples}\label{sec-examples} \begin{example}\label{example-2} We construct a relatively minimal fibration $f:\,X \to E$ of curves of odd genus $g\geq 3$ over an elliptic curve $E$ with $q_f=\frac{g+1}{2}$ and \begin{equation*} \lambda_f = 8-\frac{4}{g-1}<8=\frac{4(g-1)}{g-q_f}.
\end{equation*} Let $E$ be any elliptic curve, and $C$ be any smooth curve of genus $g_0\geq 3$ which admits a double cover to $E$: $$ \xymatrix{\eta:~C \ar[r]^-{2:1} & E.}$$ Let $\Delta\subseteq C\times C$ be the diagonal, $\sigma$ the involution on $C\times C$ defined by exchanging the two factors, and $X=C\times C/\langle\sigma\rangle$ the quotient surface. Since $\sigma$ has no isolated fixed point, $X$ is smooth. According to \cite[\S\,2.4-Example\,(b)]{lopes-pardini-12}, we know that $X$ is minimal of general type with $q(X)=g_0$ and $$\chi(\calo_X)=\frac{(g_0-1)^2-(g_0-1)}{2},\qquad \omega_X^2=4(g_0-1)^2-5(g_0-1).$$ To obtain a fibration on $X$, we first consider the fibration on $C\times C$ defined by $$h:~C\times C \lra E,\qquad (x_1,x_2) \mapsto \eta(x_1)+\eta(x_2),$$ where `$+$' is the addition on the elliptic curve $E$. It is easy to see that the morphism $h$ factors through $X$ and so induces a fibration $f:\,X \to E$: $$\xymatrix{ C\times C\ar[rr]^-{\pi}\ar[dr]_-{h}&&X\ar[dl]^-{f}\\ &E& }$$ It is clear that $f$ is relatively minimal since $X$ is minimal, and $q_f=q(X)-g(E)=g_0-1$. To compute the genus $g$ of a general fiber of $f$, let $H$ be a general fiber of $h$, $F=\pi(H)\subseteq X$, $p=h(H)\in E$, and $pr_1$ (resp. $pr_2$) be the projection of $C\times C$ to the first (resp. the second) factor $C$. Then for any $(x_1,x_2)\in H$, one has $\eta(x_1)+\eta(x_2)=p$, i.e., $\eta(x_1)=-\eta(x_2)+p$. In other words, one has the following commutative diagram $$\xymatrix{ H\ar[rr]^{pr_1|_{H}}\ar[d]_-{pr_2|_{H}}&&C\ar[d]^-{\eta}\\ C\ar[rr]^-{-\eta+p}&&E }$$ The maps in the above diagram are all double covers, and the branch divisor of $pr_2|_{H}$ is $$T=\big\{x\in C ~\big|~y:=-\eta(x)+p\text{~is a branch point of~}\eta:\,C \to E\big\},$$ which is of degree $4g_0-4$. Hence one obtains that $g(H)=4g_0-3$. Note that $H\cdot \Delta=8$. Thus by the Hurwitz formula, we get that $$2g(H)-2=2(2g(F)-2)+8.$$ Hence $g=g(F)=2g_0-3$.
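Both genus computations can be checked directly with the Hurwitz formula: $pr_2|_{H}$ is a double cover of $C$ branched over $T$ with $\deg T=4g_0-4$, and $\pi|_{H}:H\to F$ is a double cover ramified at the $H\cdot\Delta=8$ points of $H\cap\Delta$, so

```latex
2g(H)-2=2\big(2g_0-2\big)+\deg T=8g_0-8
\;\Longrightarrow\; g(H)=4g_0-3,
\qquad
8g_0-8=2\big(2g(F)-2\big)+8
\;\Longrightarrow\; g(F)=\frac{8g_0-12}{4}=2g_0-3.
```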
Therefore $q_f=g_0-1=\frac{g+1}{2}$, and $$\lambda_f=\frac{\omega_f^2}{\chi_f}=\frac{\omega_X^2}{\chi(\calo_X)}=\frac{8g_0-18}{g_0-2}=8-\frac{4}{g-1}<8=\frac{4(g-1)}{g-q_f},\text{~as required.}$$ \end{example} \begin{example}\label{example-3} We construct a relatively minimal double cover fibration $f:X \to \mathbb P^1$ of type $(g,\gamma)$ with $0<\gamma<(g+1)/2$, $q_f=(g+1)/2$, and $$\lambda_f=8-\frac{4}{(g+1-2\gamma)\gamma}<8=\frac{4(g-1)}{g-q_f}.$$ Consider the ruled surface $\eta_0:\,\mathbb P^1 \times \mathbb P^1 \to \mathbb P^1.$ Let $\Lambda_0$ be a pencil on $\mathbb P^1 \times \mathbb P^1$ such that $H_0$ is a section of $\eta_0$ and $H_0^2=2$ for a general member $H_0\in \Lambda_0$. Assume that $\Lambda_0$ has two distinct base-points, which are mapped to $\{p,\,p'\}\subseteq \mathbb P^1$ by $\eta_0$. Let $\psi:\,\mathbb P^1 \to \mathbb P^1$ be a double cover branched exactly over $\{p,\,p'\}$, and consider the Cartesian product $$\xymatrix{\mathbb P^1 \times \mathbb P^1 \ar[rr] \ar[d]_-{\eta} && \mathbb P^1 \times \mathbb P^1 \ar[d]^{\eta_0}\\ \mathbb P^1 \ar[rr]^{\psi} && \mathbb P^1 }$$ Let $\Lambda$ be the pull-back of $\Lambda_0$. Then $\Lambda$ also has two distinct base-points ($H$ and $H'$ are tangent to each other at each of these two base-points for any two general $H,H'\in\Lambda$). Let $\xi:\,\mathbb P^1 \times \mathbb P^1 \to \mathbb P^1$ be another fibration, and $\{D_1,D_2,\cdots, D_{2\gamma+2}\}$ be $2\gamma+2$ fibers of $\xi$ such that these two base-points of $\Lambda$ are contained in $D_1$ and $D_2$ respectively. Let $\Gamma \to \mathbb P^1$ be the double cover branched over $\big\{\xi(D_1),\xi(D_2),\cdots, \xi(D_{2\gamma+2})\big\}$, and $$Y=\big(\mathbb P^1 \times \mathbb P^1\big) \times_{\mathbb P^1} \Gamma=\mathbb P^1 \times \Gamma$$ the fiber-product. Let $\Lambda_Y$ be the inverse image of $\Lambda$ on $Y$. Then $\Lambda_Y$ also has exactly two base-points (each of the base-points is of multiplicity two).
Blowing up the base-points of the pencil $\Lambda_Y$, we obtain a fibration $$\varphi:~ \wt Y \to \mathbb P^1.$$ By construction, the strict inverse images of $D_1$ and $D_2$ in $\wt Y$ are contracted by $\varphi$. Let $\tilde p,\, \tilde p'$ be the images, and $\Gamma' \to \mathbb P^1$ the double cover branched over $\{\tilde p,\,\tilde p',\,x_1,\cdots,x_{2\gamma'}\}$, where $\gamma'=(g+1)/2-\gamma$, and $x_1,\cdots,x_{2\gamma'}$ are distinct general points on $\mathbb P^1$. Let $X$ be the normalization of the fiber-product $\wt Y \times_{\mathbb P^1} \Gamma'$ and $f:\,X \to \mathbb P^1$ the induced fibration as follows $$\xymatrix{ \Gamma' \ar[d] && X \ar[ll]_-{\phi'} \ar[d]_-{\pi} \ar@/^1mm/"4,4"^-{f} \ar@/^2mm/"2,7"^-{\phi}&& &&\\ \mathbb P^1 && \wt Y \ar[ll]_-{\varphi} \ar[rr] && Y= \mathbb P^1\times \Gamma \ar[d]\ar[rr] \ar@/_5mm/"4,4"_-{h} && \Gamma \ar[d]\\ && && \mathbb P^1\times \mathbb P^1 \ar[rr]^-{\xi} \ar[ld]^-{\eta} && \mathbb P^1\\ &&&\mathbb P^1&&& }$$ Let $\wt C_i=\varphi^*(x_i)$ be the fibers of $\varphi$ for $1\leq i\leq 2\gamma'$. Then it is clear that $$\omega_{\wt Y}^2=-8(\gamma-1)-2,\qquad \chi(\mathcal O_{\wt Y})=-(\gamma-1), \qquad \omega_{\wt Y}\cdot \wt C_i=4\gamma-4.$$ Note that the fibers of $\varphi$ over $\tilde p$ and $\tilde p'$ are of multiplicity two. Hence $\pi$ is a double cover branched exactly over $\wt R=\wt C_1+\cdots+\wt C_{2\gamma'}$. Therefore, $f$ is a relatively minimal fibration of genus $g$, and \begin{eqnarray*} \omega_f^2&=&2\left(\omega_{\wt Y}+\frac12\wt R\right)^2+8(g-1)=8(g+1-2\gamma)\gamma-4,\\ \chi_f&=&2\chi(\mathcal O_{\wt Y})+\frac12\left(\omega_{\wt Y}+\frac12\wt R\right)\cdot\frac{\wt R}{2}+(g-1)=(g+1-2\gamma)\gamma. \end{eqnarray*} Hence $f$ has the required slope. Note that $q(\wt Y)=\gamma$ and $q_{\pi}=\gamma'$ since $\pi$ is the normalization of the fiber-product $\wt Y \times_{\mathbb P^1} \Gamma'$. Therefore $q_f=\gamma+\gamma'=(g+1)/2$ as required.
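Combining the two displayed values of $\omega_f^2$ and $\chi_f$, and using $q_f=(g+1)/2$, the slope claimed at the beginning of the example follows by a direct computation:

```latex
\lambda_f=\frac{\omega_f^2}{\chi_f}
=\frac{8(g+1-2\gamma)\gamma-4}{(g+1-2\gamma)\gamma}
=8-\frac{4}{(g+1-2\gamma)\gamma}
\,<\,8=\frac{4(g-1)}{(g-1)/2}=\frac{4(g-1)}{g-q_f}.
```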
\end{example} \begin{example} We construct a relatively minimal double cover fibration $f:X \to \mathbb P^1$ of type $(g,\gamma)$ with $q_{\pi}=\frac{g+1-2\gamma}{d}-1$ ($d\geq 2$) and \begin{equation*} \lambda_f= \lambda_{g,\gamma,q_{\pi}}:=8-\frac{4(g+1-2\gamma)}{(q_{\pi}+1)\big((g-1)+(q_{\pi}-1)(\gamma-1)\big)}. \end{equation*} Let $C$ be a smooth genus-$\gamma$ curve admitting a morphism $\zeta:\,C\to \mathbb P^1$ of degree $d\geq 2$. Consider the following diagram $$\xymatrix{Y\triangleq C\times \mathbb P^1 \ar[rr]^-{(\zeta,\,id)}\ar[dr]_-{h} && \mathbb P^1\times \mathbb P^1 \ar[dl]^-{\eta}\\ &\mathbb P^1&}$$ Let $H_0\subseteq \mathbb P^1\times \mathbb P^1$ be any section of $\eta$ with $H_0^2=2b_0$ for some $b_0>0$. It is well-known that $H_0$ is very ample (cf. \cite[\S\,V.2]{hartshorne-77}). Take two general members $D,D' \in |H_0|$, and let $\Lambda$ be the pencil on $Y$ generated by $(\zeta,\,id)^*(D)$ and $(\zeta,\,id)^*(D')$. Then $\Lambda$ defines a rational map $\varphi_{\Lambda}:\,Y \dashrightarrow \mathbb P^1$. By blowing up the base points of $\Lambda$, we get a fibration $\tilde h':\,\wt Y \to \bbp^1$. Let $\Delta \subseteq \bbp^1$ be a set of $2(q_{\pi}+1)$ general points, $\wt R=(\tilde h')^*(\Delta)$ the corresponding fibers of $\tilde h'$, $\pi':\,B'\to \bbp^1$ the double cover branched over $\Delta$, and $\wt X$ the normalization of the fiber-product $\wt Y\times_{\bbp^1} B'$: $$\xymatrix{&& B'\ar[rr]^-{\pi'} && \mathbb P^1\\ \wt X \ar[rr]^-{\tilde \pi} \ar[rru]\ar@/_3mm/"3,5"_-{f} &&\wt Y \ar[rr]\ar[rrd]^-{\tilde h} \ar[rru]_-{\tilde h'} && Y \ar[d]^-{h}\ar@{-->}[u]_-{\varphi_{\Lambda}} \\ && && \mathbb P^1 }$$ Since $\Delta$ is general on $\bbp^1$, $\wt R$ is both reduced and smooth. Hence $\wt X$ is also smooth. Let $\wt\Gamma'$ be a general fiber of $\tilde h'$.
By construction, one has $q(\wt X)-q(\wt Y)=g(B')=q_{\pi}$, and $$\omega_{\tilde h}^2=-2db_0,\qquad \omega_{\tilde h}\cdot \wt\Gamma' =2(\gamma-1+d)b_0, \qquad \big(\wt \Gamma'\big)^2=0.$$ Hence by the standard formulas for double covers (cf. \cite[\S\,V.22]{bhpv-04}), we obtain $$\begin{aligned} \omega_{f}^2&= 2\left(\omega_{\tilde h}+\frac{\wt R}{2}\right)^{2}=4\big(2(q_{\pi}+1)(\gamma-1+d)-d\big)b_0,\\ \chi_{f}&=2\chi_{\tilde h}+\frac12\left(\omega_{\tilde h}+\frac{\wt R}{2}\right)\cdot\frac{\wt R}{2}=(q_{\pi}+1)(\gamma-1+d)b_0. \end{aligned}$$ Actually, $f$ is relatively minimal. To see this, let $R$ be the image of $\wt R$ in $Y$. Then the singular points of $R$ are all of multiplicity $2(q_{\pi}+1)$. Hence $s_{2k+1}=0$ for all $k\geq 1$. Combining this with the triviality of $h$, we obtain that $f$ is relatively minimal by \autoref{numberofcontraction}. Hence $f$ is a relatively minimal double cover fibration $f:X \to \mathbb P^1$ of type $(g,\gamma)$ with $q_{\pi}=\frac{g+1-2\gamma}{d}-1$ and $$\lambda_f=\frac{\omega_f^2}{\chi_f}=\frac{4\big(2(q_{\pi}+1)(\gamma-1+d)-d\big)}{(q_{\pi}+1)(\gamma-1+d)}=\lambda_{g,\gamma,q_{\pi}},~\text{~as required.}$$ \end{example}
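For the reader's convenience, we record the arithmetic behind the last equality. A general fiber of $\tilde h$ is a copy of $C$, and $\tilde \pi$ restricts on it to a double cover branched at $\wt R\cdot F=2(q_{\pi}+1)d$ points (each of the $2(q_{\pi}+1)$ components of $\wt R$ meets a fiber of $\tilde h$ in $d$ points, $H_0$ being a section of $\eta$ and $(\zeta,\,id)$ of degree $d$), so $2g-2=2(2\gamma-2)+2(q_{\pi}+1)d$, i.e. $(q_{\pi}+1)d=g+1-2\gamma$. Substituting this into the denominator of $\lambda_{g,\gamma,q_{\pi}}$ gives

```latex
(g-1)+(q_{\pi}-1)(\gamma-1)
=2(\gamma-1)+(q_{\pi}+1)d+(q_{\pi}-1)(\gamma-1)
=(q_{\pi}+1)(\gamma-1+d),
```

whence $\lambda_f=8-\frac{4d}{(q_{\pi}+1)(\gamma-1+d)}=8-\frac{4(g+1-2\gamma)}{(q_{\pi}+1)\big((g-1)+(q_{\pi}-1)(\gamma-1)\big)}=\lambda_{g,\gamma,q_{\pi}}$.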
\section{Introduction} \label{sec:intro} The {\em Ramsey number} of a graph $G$, denoted $r(G)$, is the minimum number $n$ such that every edge-coloring of $K_n$ using two colors admits a monochromatic copy of $G$. It was first studied in the seminal paper of Ramsey \cite{Ramsey} which established that the Ramsey number of the complete graph $K_k$ on $k$ vertices is finite for all positive integers $k$. Since then, Ramsey theory, the study of various results that can be grouped under the common theme ``every large system has a well-organized subsystem'', flourished and became one of the most active fields of research in combinatorics. It is a beautiful field with many questions still remaining to be answered and has deep connections to other fields such as logic, geometry, and computer science. See the classical book of Graham, Rothschild, and Spencer \cite{GrRoSp} for a comprehensive overview of the field, or a survey of Conlon, Fox, and Sudakov \cite{CoFoSu15} for recent developments in graph Ramsey theory. In this paper we study the Ramsey number of bounded degree graphs. The history of such study can be traced back to a paper of Burr and Erd\H{o}s \cite{BuEr} from 1975 which predicted that the behavior of Ramsey numbers of sparse graphs would be dramatically different from that of the complete graph (the Ramsey number of complete graphs is exponential in terms of the number of vertices \cite{ErSz, Erdos}). A graph $G$ is {\em $d$-degenerate} if all its subgraphs have a vertex of degree at most $d$. In their paper, Burr and Erd\H{o}s conjectured that for all $d$, there exists a constant $c = c(d)$ such that $r(G) \le c(d) n$ for all $n$-vertex $d$-degenerate graphs $G$. This conjecture is still open (see \cite{FoSu, Lee}). In 1983, Chv\'atal, R\"odl, Szemer\'edi, and Trotter \cite{ChRoSzTr} showed that the Burr-Erd\H{o}s conjecture holds if the degeneracy condition is replaced with a bounded degree condition.
More precisely, they showed that for all $\Delta$, there exists a constant $c = c(\Delta)$ such that $r(G) \le c(\Delta) n$ for all $n$-vertex graphs $G$ of maximum degree at most $\Delta$. Their proof relied on the regularity lemma and gave a tower-type dependency between $c(\Delta)$ and $\Delta$. Since then, the bound on $c(\Delta)$ has been improved by Eaton \cite{Eaton}, Graham, R\"odl, and Ruci\'nski \cite{GrRoRu00, GrRoRu01}, and by Conlon, Fox, and Sudakov \cite{CoFoSu} who showed that there exists a constant $c$ such that $r(G) \le c^{\Delta \log \Delta} n$ holds for $n$-vertex graphs $G$ of maximum degree at most $\Delta$. For bipartite graphs, independently, Conlon \cite{Conlon}, and Fox and Sudakov \cite{FoSu0} showed that a better bound $r(G) \le c^{\Delta}n$ holds (for some other constant $c$). On the other hand, Graham, R\"odl, and Ruci\'nski \cite{GrRoRu01} showed that there exists a constant $c > 1$ such that for all large enough $n$, there exists a bipartite graph $G$ with maximum degree at most $\Delta$ satisfying $r(G) \ge c^{\Delta}n$. In some cases the constant is known to be significantly smaller. The {\em bandwidth} of an $n$-vertex graph $G$ is the minimum integer $b$ for which there exists a labelling of the vertices by $[n]$ such that $|i-j| \le b$ holds for all edges $\{i,j\} \in E(G)$. Allen, Brightwell, and Skokan \cite{AlBrSk} showed that if $G$ is an $n$-vertex graph with maximum degree at most $\Delta$ and bandwidth at most $\beta n$, then $r(G) \le (2\chi(G) + 4)n \le (2\Delta + 6)n$. Despite the rather wide gap between the two constants $c^{\Delta}$ and $2\Delta+6$, not much is known about the constant that dictates the Ramsey number of graphs of bounded maximum degree.
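Two standard examples may help calibrate the bandwidth parameter: labelling a path consecutively along the path shows that paths have bandwidth $1$ (and bandwidth $1$ forces a disjoint union of paths), while assigning labels $1,2,3,\dots$ to the vertices $v_1,v_2,v_n,v_3,v_{n-1},\dots$ of a cycle $v_1v_2\cdots v_n$ places the two neighbors of every vertex within label-distance $2$, so

```latex
\mathrm{bw}(P_n)=1,\qquad \mathrm{bw}(C_n)=2\quad(n\ge 3).
```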
Since it is known that a graph has small bandwidth if and only if it has poor expansion properties (see \cite{BoPrTaWu} for more detail) and the example of Graham, R\"odl, and Ruci\'nski is a good expander, one can reasonably guess that the constant $c_G$ for which $r(G) \le c_G \cdot n$ depends on the expansion property of $G$. In this paper, we study the relation between the constant $c_G$ and the structure of the graph $G$ in further depth. \medskip Throughout the paper, when considering a 2-edge-coloring of a graph, we tacitly assume that the two colors are red/blue, respectively, and refer to the subgraph consisting of the red edges as the {\em red graph}, and of the blue edges as the {\em blue graph}. A {\em (vertex) weighted graph} is a pair $(G, w)$ of a graph $G$ and a weight function $w : V(G) \rightarrow [0,1]$. For a set $X \subseteq V(G)$, define $w(X) = \sum_{x \in X} w(x)$. For two graphs $G$ and $H$, a {\em homomorphism} from $G$ to $H$ is a map $f : V(G) \rightarrow V(H)$ such that $\{f(u), f(v)\} \in E(H)$ whenever $\{u,v\} \in E(G)$. \begin{dfn} The {\em Ramsey number of a weighted graph $(G,w)$}, denoted $\hat{r}(G,w)$, is the minimum integer $n$ satisfying the following: for every 2-edge-coloring of $K_n$, there exists a homomorphism $f$ from $G$ to the red graph, or to the blue graph, for which $w(f^{-1}(v)) \le 1$ holds for all $v \in V(K_n)$. We simply denote $\hat{r}(G, w)$ as $\hat{r}(G)$ when the weight function is clear from the context. \end{dfn} Note that the Ramsey number of weighted graphs generalizes the Ramsey number of graphs since given a graph we can always consider a constant weight function assigning weight one to all vertices. For this weight function, the Ramsey number in the traditional sense is equal to the Ramsey number as defined above. This observation in particular implies that $\hat{r}(G,w)$ is finite for all weighted graphs $(G,w)$.
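For instance, take $G=C_4$ with the constant weight $w\equiv\frac{1}{2}$: collapsing the two color classes of $C_4$ yields a homomorphism onto a single edge in which both fibers have weight $1$, and every 2-edge-coloring of $K_2$ makes its unique edge monochromatic, so

```latex
\hat{r}(C_4,w)=2, \qquad\text{while}\qquad r(C_4)=6,
```

the lower bound $\hat{r}(C_4,w)\ge 2$ being trivial since $K_1$ has no edges.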
Further note that if $G$ is the complete graph, then for all weight functions $w$, the Ramsey number of $(G,w)$ equals the Ramsey number of $G$ since a homomorphism from a complete graph to a graph with no loops is necessarily injective. On the other hand, suppose that $G$ is $k$-colorable and suppose that $w$ is a weight function where $w(X) \le 1$ for each of the $k$ color classes $X$ of $G$. Then one can easily check that $\hat{r}(G) \le r(K_k) \le {2k \choose k}$. Therefore both the structure of $G$ and the weight function $w$ play an important role in determining the Ramsey number of weighted graphs. However, we will later see that for bounded degree graphs $G$, the Ramsey number of $(G,w)$ is mostly determined by the total weight $w(V(G))$ of the graph. We consider another generalization of Ramsey numbers, implicitly studied in \cite{AlBrSk}, where the host graph is a graph of large minimum degree instead of the complete graph. \begin{dfn} For a positive real $\varepsilon$ and a weighted graph $(G,w)$, define the {\em $\varepsilon$-stable Ramsey number} $\hat{r}_\varepsilon(G,w)$ as the minimum integer $n$ satisfying the following: for every graph $\Gamma$ on $n$ vertices of minimum degree at least $(1-\varepsilon)n$, for every 2-edge-coloring of $\Gamma$, there exists a homomorphism $f$ from $G$ to the red graph, or to the blue graph, for which $w(f^{-1}(v)) \le 1$ holds for all $v \in V(\Gamma)$. \end{dfn} The stable Ramsey number generalizes the Ramsey number since $\hat{r}(G,w) = \hat{r}_{\varepsilon}(G,w)$ holds for every weighted graph $(G,w)$ if $\varepsilon < \frac{1}{\hat{r}(G,w) - 1}$. However, given a weighted graph $(G,w)$, the $\varepsilon$-stable Ramsey number does not necessarily exist. For example if $G$ is an $r$-partite graph, then $\hat{r}_{\varepsilon}(G,w)$ does not exist for $\varepsilon \ge \frac{1}{r-1}$ since we can take the host graph $\Gamma$ to be a complete $(r-1)$-partite graph.
In fact, $\hat{r}_{\varepsilon}(G,w)$ is finite if and only if $\varepsilon < \frac{1}{r(K_{\chi(G)})-1}$ (see Section~\ref{sec:remarks}). The following theorem extends a theorem of Conlon, Fox, and Sudakov and shows that for bounded degree graphs, the stable Ramsey number is mostly determined by the total weight of the graph. \begin{thm} \label{thm:weight_cfs} There exists a constant $c$ such that the following holds for every natural number $\Delta$ and positive real number $\varepsilon$ satisfying $\varepsilon < c^{-\Delta \log \Delta}$. If $(G,w)$ is a weighted graph with maximum degree at most $\Delta$, then $\hat{r}_\varepsilon(G) \le c^{\Delta \log \Delta} \cdot w(V(G))$. \end{thm} The following result is the main theorem of this paper studying the Ramsey number of bounded degree graphs. It roughly asserts that if $G$ is a subgraph of a blow-up of $H$, then the Ramsey number of $G$ can be described in terms of the Ramsey number of $H$. \begin{thm} \label{thm:transference} For all $\Delta, \xi$ and $\varepsilon$, there exist $\beta$ and $n_0$ such that the following holds for all $n \ge n_0$. Let $G$ and $H$ be graphs where $G$ has $n$ vertices and maximum degree at most $\Delta$. Suppose that there exists a homomorphism $f$ from $G$ to $H$ for which $|f^{-1}(v)| \le \beta n$ for all $v \in V(H)$. Then for the weight-function of $H$ defined by $w(v) = \frac{1}{\beta n}|f^{-1}(v)|$ we have $r(G) \le (1+\xi)\hat{r}_\varepsilon(H,w) \cdot \beta n$. \end{thm} Note that the weight function $w$ defined in Theorem~\ref{thm:transference} satisfies $w(V(H)) = \frac{1}{\beta}$. A {\em wheel graph} $W_{k}$ is a graph with $k$ vertices consisting of a cycle on $k-1$ vertices and a vertex adjacent to all vertices on the cycle. In Section~\ref{sec:weighted_ramsey}, we will see that there exist constants $\varepsilon$ and $c$ such that $\hat{r}_\varepsilon(W_k, w) \le c \cdot w(V(W_k))$ for all $k$ and all weight functions $w : V(W_k) \rightarrow [0,1]$.
Hence if $G$ has maximum degree at most $\Delta$ and admits a homomorphism $f$ into a wheel graph $W_k$ where $|f^{-1}(v)| \le \beta n$ for all $v \in V(W_k)$, then by Theorem~\ref{thm:transference} above with $\xi = 1$, we obtain $r(G) \le 2\hat{r}_\varepsilon(W_k,w) \cdot \beta n \le 2c n$ (see Figure~\ref{fig:fig_wheel}). Note in particular that the constant does not depend on $\Delta$. This is in sharp contrast with the bound $r(G) \le c^{\Delta \log \Delta} n$ (the constant $c$ is different from above) that we obtain through the theorem of Conlon, Fox, and Sudakov. \begin{figure} \centering \begin{tabular}{ccc} \input{fig-hom} \end{tabular} \caption{A graph of maximum degree at most $\Delta$ and a homomorphism into a wheel graph.} \label{fig:fig_wheel} \end{figure} As another example, if $H$ has maximum degree at most $d$, then by Theorem~\ref{thm:weight_cfs} we see that $\hat{r}_\varepsilon(H,w) \le c^{d \log d}\frac{1}{\beta}$ for small enough $\varepsilon$. By applying Theorem~\ref{thm:transference} with $\xi = 1$ and $\varepsilon < c^{-d\log d}$, we obtain the following corollary. \begin{cor} \label{cor:transference} There exists a constant $c$ such that for all $\Delta$ and $d$, there exist $\beta$ and $n_0$ such that the following holds for all $n \ge n_0$. Let $G$ be an $n$-vertex graph with maximum degree at most $\Delta$, and $H$ be a graph with maximum degree at most $d$. Suppose that there exists a homomorphism $f$ from $G$ to $H$ for which $|f^{-1}(v)| \le \beta n$ for all $v \in V(H)$. Then $r(G) \le c^{d \log d} n$. \end{cor} It is known (implicitly in \cite{BoScTa}) that for all $r$, there exists a constant $c > 1$ such that if $G$ is an $r$-partite graph of bandwidth at most $\beta n$, then there exists a homomorphism $f$ from $G$ to the $r$-th power of a path of length $\frac{1}{c \beta}$ where $|f^{-1}(v)| \le c \beta n$ for all $v$.
Allen, Brightwell, and Skokan's result $r(G) \le (2\chi(G)+4)n$ mentioned above then follows from Theorem~\ref{thm:transference} and a bound on the Ramsey number of powers of paths (in fact the proof of Theorem~\ref{thm:transference} is based on a generalization of their proof). The necessity of forcing $|f^{-1}(v)| \le \beta n$ for some small constant $\beta$ can be seen from the example of Graham, R\"odl, and Ruci\'nski. They proved that there exists a constant $c < 1$ such that for all $\Delta$ and large enough $n$, there exists a $c^{\Delta}n$-vertex bipartite graph $G$ of maximum degree at most $\Delta$ for which $r(G) > n$. If $\beta \ge 20c^{\Delta}$ in Theorem~\ref{thm:transference}, then there exists a homomorphism $f$ from $G$ to $K_2$ such that $|f^{-1}(v)| \le \frac{1}{20}\beta n$ for both vertices $v$ of $K_2$. Since $\hat{r}_\varepsilon(K_2,w) = 2$, Theorem~\ref{thm:transference} (if true) would imply that $r(G) \le (1+\xi)2\beta n < n$ which is a contradiction. Thus we see that $\beta$ must be at most $20c^{\Delta}$ in Theorem~\ref{thm:transference}. On the other hand, the bound on $\beta$ that we obtain in Theorem~\ref{thm:transference} has a tower-type dependency on $\Delta$. It would be interesting to determine the best possible value of $\beta$ that we can take. The following density embedding theorem has an interesting implication towards this problem. \begin{thm} \label{thm:densityembedding} Let $G$ be an $n$-vertex graph of minimum degree at least $(\delta + \alpha)n$. Then $G$ contains all bipartite graphs $H$ on at most $\delta n$ vertices with maximum degree at most $\Delta$ and bandwidth at most $\frac{1}{256\Delta}\alpha^{6\Delta+1}n$. \end{thm} Theorem~\ref{thm:densityembedding} can be seen as an extension of density embedding theorems of bipartite graphs proved by Conlon \cite{Conlon}, and Fox and Sudakov \cite{FoSu0}, and may be of independent interest.
Note that Theorem~\ref{thm:densityembedding} is asymptotically tight in terms of the number of vertices of $H$. As we will later see (Corollary~\ref{cor:bandwidth}), it implies that $r(G) \le (4+\varepsilon)n$ if $G$ is a bipartite graph with maximum degree at most $\Delta$ and bandwidth at most $c^{\Delta}n$ for some positive constant $c < 1$. A similar result can be obtained from the theorem of Allen, Brightwell, and Skokan but with a worse bound on the bandwidth. This corollary implies that a transference-type result holds even when $\beta$ is as large as $c^\Delta$ for the special case when $G$ is a bipartite graph with small bandwidth. A $d$-dimensional hypercube $Q_d$ is a graph with vertex set $\{0,1\}^d$ where two vertices are adjacent if and only if they differ in exactly one coordinate. Since the bandwidth of $Q_d$ is known to be $O(\frac{2^d}{\sqrt{d}})$, Theorem~\ref{thm:densityembedding} is closely related to another conjecture of Burr and Erd\H{o}s stating that there exists a constant $c$ such that $r(Q_d) \le c 2^d$ holds for all natural numbers $d$. Unfortunately, the bandwidth condition in Corollary~\ref{cor:bandwidth} is too weak to imply the conjecture. \medskip The rest of the paper is organized as follows. In Section~\ref{sec:transference} we prove Theorem~\ref{thm:transference} using a variant of the blow-up lemma whose proof we defer to a later section. In Section~\ref{sec:weighted_ramsey} we establish a bound on the weighted Ramsey number of the wheel graph and prove Theorem \ref{thm:weight_cfs}. In Section~\ref{sec:blowup} we prove the variant of the blow-up lemma used in Section~\ref{sec:transference}. In Section~\ref{sec:bipartite_ramsey}, we prove Theorem~\ref{thm:densityembedding} and then conclude with some remarks in Section~\ref{sec:remarks}. \medskip \noindent \textbf{Notation}. A graph $G = (V,E)$ is given by a pair of vertex set $V$ and edge set $E$.
For a vertex $v \in V$, define $\deg(v)$ as the degree of $v$, and for a set $X \subseteq V$, define $\codeg(X)$ as the number of common neighbors of the vertices in $X$. For two vertices $v,w \in V$, we define $\codeg(v,w) = \codeg(\{v,w\})$. For a set $X \subset V$, define $G[X]$ as the subgraph of $G$ induced on $X$. For a pair of sets $X, Y \subseteq V$, define $e(X,Y)$ as the number of pairs $(x,y) \in X \times Y$ that form an edge in $G$. When $X,Y$ are disjoint sets, define $d(X,Y) = \frac{e(X,Y)}{|X||Y|}$. When there are several graphs under consideration, we often use subscripts such as in $e_G(X,Y)$ to clarify the graph that we are referring to. For two graphs $H$ and $G$, an {\em embedding} of $H$ to $G$ is an injective map $f : V(H) \rightarrow V(G)$ for which $\{f(u), f(v)\} \in E(G)$ whenever $\{u,v\} \in E(H)$. An {\em embedding} of a weighted graph $(H,w)$ into a graph $G$ is a map $f : V(H) \rightarrow V(G)$ for which $\{f(u), f(v)\} \in E(G)$ whenever $\{u,v\} \in E(H)$ and $w(f^{-1}(v)) \le 1$ for all $v \in V(G)$. For a set $X \subseteq V(H)$, a {\em partial embedding on $X$} is an embedding of $H[X]$ to $G$. For a finite set $X$ and a natural number $n$, we use the notation $X^n$ to denote the product space $X \times X \times \cdots \times X$ where the product is taken $n$ times. Equivalently, $X^n$ is the set of ordered $n$-tuples of elements of $X$. We use $\log$ without subscript to denote the base 2 logarithm. We omit floors and ceilings whenever they are not crucial. Throughout the paper, we use constants with subscripts such as in $\beta_{2.3}$ to indicate that $\beta$ is the constant coming from Theorem/Corollary/Lemma/Proposition 2.3. \section{Transference principle} \label{sec:transference} Let $G$ be a graph on $n$ vertices.
A pair of disjoint vertex subsets $(X,Y)$ is {\em $\varepsilon$-regular} if for all $X' \subseteq X$ and $Y' \subseteq Y$ satisfying $|X'| \ge \varepsilon|X|$ and $|Y'| \ge \varepsilon|Y|$, we have $\left| d(X,Y) - d(X',Y') \right| \le \varepsilon$. A partition $V(G) = V_0 \cup V_1 \cup \cdots \cup V_k$ is {\em $\varepsilon$-regular} if (i) $|V_0| \le \varepsilon n$, (ii) $|V_i| = |V_j|$ for all $i,j \ge 1$, and (iii) for each $i \in [k]$, there exist at most $\varepsilon k$ indices $j \in [k]$ for which $(V_i, V_j)$ is not $\varepsilon$-regular. \footnote{We deviate from the standard practice of enforcing at most $\varepsilon k^2$ pairs that are not $\varepsilon$-regular.} We define the {\it $\varepsilon$-reduced graph} of a partition $\{V_i\}_{i =0}^{k}$ as the graph with vertex set $[k]$ where $V_i$ and $V_j$ form an edge if and only if the pair $(V_i, V_j)$ is $\varepsilon$-regular. Note that condition (iii) is equivalent to saying that the $\varepsilon$-reduced graph of the partition has minimum degree at least $(1-\varepsilon)k$. For a real number $\delta$, we define the {\it $(\varepsilon,\delta)$-reduced graph} of a partition $\{V_i\}_{i=0}^{k}$ as the graph with vertex set $[k]$ where $V_i$ and $V_j$ form an edge if and only if the pair $(V_i, V_j)$ is $\varepsilon$-regular with density at least $\delta$. The celebrated regularity lemma asserts that all large graphs admit an $\varepsilon$-regular partition (see \cite{KoSi} for the version of the regularity lemma as stated here). \begin{thm} \label{thm:regularity} For all $\varepsilon$ and $t$, there exists $n_0 = n_0(\varepsilon,t)$ and $T = T(\varepsilon,t)$ such that the following holds for all $n \ge n_0$. Every $n$-vertex graph $G$ admits an $\varepsilon$-regular partition into $k$ parts where $t \le k \le T$. \end{thm} We will later need an $\varepsilon$-regular partition with a prescribed number of parts.
Such a partition can be produced by taking a random refinement of an $\varepsilon$-regular partition obtained through the regularity lemma. The following lemma, proved in \cite{GeKoRoSt}, can be used to verify that such a partition indeed works. It asserts that a typical pair of subsets of a regular pair is regular. \begin{lem} \label{lem:inherit_regular} For $0 < \beta, \varepsilon < 1$, there exist $\varepsilon_0 = \varepsilon_0(\beta, \varepsilon)$ and $C = C(\varepsilon)$ such that for all $\varepsilon' \le \varepsilon_0$ and $\delta$, every $\varepsilon'$-regular pair $(X,Y)$ of density at least $\delta$ satisfies that, for every $q \ge C \delta^{-1}$, the number of sets $Q \subseteq X$ of cardinality $q$ that form an $\varepsilon$-regular pair of density at least $\delta$ with $Y$ is at least $(1-\beta^{q}){|X| \choose q}$. \end{lem} By combining Theorem~\ref{thm:regularity} and Lemma~\ref{lem:inherit_regular}, we can prove a regularity lemma which outputs a partition with a prescribed number of parts. \begin{lem} \label{lem:regularity_fixednumber} For all $\varepsilon$, there exists $T = T(\varepsilon)$ such that for all $k \ge T$ there exists $n_0(\varepsilon, k)$ such that the following holds for all $n \ge n_0$. Every $n$-vertex graph $G$ admits an $\varepsilon$-regular partition $V_0 \cup V_1 \cup \cdots \cup V_k$. \end{lem} \begin{proof} Let $\varepsilon_0 = \min\{(\varepsilon_0)_{\ref{lem:inherit_regular}}(\frac{1}{2}, \varepsilon), \frac{\varepsilon}{2}\}$, $C = C_{\ref{lem:inherit_regular}}(\varepsilon)$, and $T = \frac{2}{\varepsilon} T_{\ref{thm:regularity}}(\varepsilon_0,\frac{1}{\varepsilon_0})$. Suppose that an integer $k \ge T$ is given. Apply Theorem~\ref{thm:regularity} with $\varepsilon_{\ref{thm:regularity}} = \varepsilon_0$ and $t_{\ref{thm:regularity}} = \frac{1}{\varepsilon_0}$ to find an $\varepsilon_0$-regular partition $V_0 \cup V_1 \cup \cdots \cup V_r$ where $\frac{1}{\varepsilon_0} \le r \le \frac{\varepsilon}{2}T$.
Define $s = \left\lceil \frac{k}{r} \right\rceil$ and note that $s \ge \frac{T}{r} \ge \frac{2}{\varepsilon}$. For each $i \in [r]$, we may assume that $|V_i|$ is divisible by $s$ by moving at most $s-1$ vertices from $V_i$ to $V_0$ if necessary. For each $i \in [r]$, let $V_i = V_{i,1} \cup V_{i,2} \cup \cdots \cup V_{i,s}$ be a partition chosen uniformly at random where $|V_{i,j}| = \frac{1}{s}|V_i|$ for all $j \in [s]$. By Lemma~\ref{lem:inherit_regular}, if $(V_i, V_{i'})$ is $\varepsilon_0$-regular, then for all $j,j' \in [s]$, the probability that $(V_{i,j}, V_{i',j'})$ forms an $\varepsilon$-regular pair is at least $1 - 2^{-\Omega(n)}$. Hence by the union bound, we can find partitions $V_i = V_{i,1} \cup V_{i,2} \cup \cdots \cup V_{i,s}$ for each $i \in [r]$ so that $(V_{i,j}, V_{i',j'})$ forms an $\varepsilon$-regular pair whenever $(V_{i}, V_{i'})$ forms an $\varepsilon_0$-regular pair. Thus each $V_{i,j}$ forms an $\varepsilon$-regular pair with at least $(1-\varepsilon_0)rs$ other sets $V_{i',j'}$. Arbitrarily remove $rs - k \le r-1$ parts $(i,j) \in [r] \times [s]$, combine the removed sets with $V_0$ and re-label the sets so that we obtain a partition $U_0 \cup U_1 \cup \cdots \cup U_k$ of the vertex set. For each $i \in [k]$, there are at most $\varepsilon_0 rs \le \frac{\varepsilon}{2} rs \le \varepsilon k$ other indices $i' \in [k]$ for which $(U_i, U_{i'})$ is not $\varepsilon$-regular. Moreover, $|U_0| \le \varepsilon_0 n + (s-1)r + (r - 1)\frac{n}{sr} \le \varepsilon n$ and therefore we found a partition with the desired properties. \end{proof} The blow-up lemma, developed by Koml\'os, S\'ark\"ozy, and Szemer\'edi \cite{KoSaSz}, is a powerful tool used in embedding large subgraphs. Informally, quoting Koml\'os, S\'ark\"ozy, and Szemer\'edi, it asserts that ``regular pairs behave like complete bipartite graphs from the point of view of bounded degree subgraphs''. We use the following version of the blow-up lemma.
\begin{lem} \label{lem:blowup} For all $\xi, \delta, \Delta$, there exists $\varepsilon = \varepsilon(\xi, \delta, \Delta)$ such that the following holds for all natural numbers $k$ if $m \ge m_0$ for some sufficiently large $m_0 = m_0(k,\varepsilon)$. Let $G$ be a graph with maximum degree at most $\Delta$. Let $\Gamma$ be a graph with a vertex partition $\{V_i\}_{i=1}^{k}$ satisfying $|V_i| \ge (1+\xi)m$ for all $i \in [k]$, and let $R$ be its $(\varepsilon, \delta)$-reduced graph. Suppose that there exists a homomorphism $f$ from $G$ to $R$ where $|f^{-1}(i)| \le m$ for all $i \in [k]$. Then there exists an embedding of $G$ to $\Gamma$. \end{lem} Lemma~\ref{lem:blowup} differs from the original version of the blow-up lemma in that the restriction on $\varepsilon$ does not depend on $R$. It is a subtle but crucial difference. The proof of Lemma~\ref{lem:blowup} is rather technical and hence, to avoid unnecessary distraction, we provide it in Section~\ref{sec:blowup}. Theorem~\ref{thm:transference} follows from Lemmas~\ref{lem:regularity_fixednumber} and \ref{lem:blowup}. We re-state the theorem here. \begin{thm*} For all $\Delta, \xi$ and $\varepsilon$, there exist $\beta$ and $n_0$ such that the following holds for all $n \ge n_0$. Let $G$ and $H$ be graphs where $G$ has $n$ vertices and maximum degree at most $\Delta$. Suppose that there exists a homomorphism $f$ from $G$ to $H$ for which $|f^{-1}(v)| \le \beta n$ for all $v \in V(H)$. Then for the weight-function of $H$ defined by $w(v) = \frac{1}{\beta n}|f^{-1}(v)|$ we have $r(G) \le (1+\xi)\hat{r}_\varepsilon(H,w) \cdot \beta n$. \end{thm*} \begin{proof} By reducing $\varepsilon$ if necessary, we may assume that $\varepsilon \le \min\left\{\varepsilon_{\ref{lem:blowup}}(\frac{\xi}{2}, \frac{1}{2}, \Delta), \frac{\xi}{2(1+\xi)}\right\}$. By the theorem of Conlon, Fox, and Sudakov mentioned in the introduction, there exists a constant $c$ for which $r(G) \le c^{\Delta \log \Delta}n$.
Define $\beta = T_{\ref{lem:regularity_fixednumber}}(\varepsilon)^{-1}$ and $k = \hat{r}_\varepsilon(H)$. Define $n_0 = \max \{(n_0)_{\ref{lem:regularity_fixednumber}}(\varepsilon, \beta^{-1}c^{\Delta \log \Delta}), (m_0)_{\ref{lem:blowup}}(k, \varepsilon) \}$. By definition we have $k = \hat{r}_{\varepsilon}(H) \ge w(V(H)) = \frac{1}{\beta n} |V(G)| = \beta^{-1}$. Furthermore, since $r(G) \le c^{\Delta \log \Delta}n$, the conclusion holds if $\hat{r}_\varepsilon(H) \ge \beta^{-1}c^{\Delta \log \Delta}$. Thus we may assume that $k = \hat{r}_\varepsilon(H) < \beta^{-1}c^{\Delta \log \Delta}$, and hence $\beta^{-1} \le k < \beta^{-1} c^{\Delta \log \Delta}$. Let $N = (1+\xi)\hat{r}_\varepsilon(H) \cdot \beta n$. Suppose that we are given a red/blue coloring of $K_N$. Let $\Gamma_r$ and $\Gamma_b$ be the red graph and blue graph, respectively. By Theorem \ref{thm:regularity}, there exists an $\varepsilon$-regular partition $V_0 \cup V_1 \cup \cdots \cup V_k$ of $\Gamma_r$ (note that it is also an $\varepsilon$-regular partition of $\Gamma_b$). Consider the $\varepsilon$-reduced graph $R$ of the partition, and color the edges with red and blue so that an edge $\{i,j\}$ is red if the red edge density of the pair $(V_i, V_j)$ is at least $\frac{1}{2}$ and blue otherwise. Since $R$ has minimum degree at least $(1-\varepsilon)k$, by the definition of $\hat{r}_{\varepsilon}(H)$, there exists a homomorphism $g$ from $H$ to the red subgraph of $R$ (or the blue subgraph of $R$) such that $|g^{-1}(i)| \le 1$, and thus $|f^{-1}(g^{-1}(i))| \le \beta n$ for all $i \in [k]$. Without loss of generality, assume that it is to the red subgraph of $R$. Note that $h := g \circ f$ is a homomorphism from $G$ to the red subgraph of $R$ satisfying $|h^{-1}(i)| \le \beta n$ for all $i \in [k]$. Further note that for each $i \in [k]$, we have $|V_i| \ge \frac{1-\varepsilon}{k}N \ge (1+\frac{\xi}{2})\beta n$. Therefore by Lemma~\ref{lem:blowup}, we can find a copy of $G$ in $\Gamma_r$.
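For the record, the lower bound on the part sizes unwinds as follows; this is a routine check using $N = (1+\xi)k\beta n$ and the assumption $\varepsilon \le \frac{\xi}{2(1+\xi)}$:

```latex
\begin{align*}
|V_i| \;\ge\; \frac{(1-\varepsilon)N}{k}
      \;=\; (1-\varepsilon)(1+\xi)\beta n
      \;\ge\; \left(1+\xi-\frac{\xi}{2}\right)\beta n
      \;=\; \left(1+\frac{\xi}{2}\right)\beta n,
\end{align*}
```

since $\varepsilon(1+\xi) \le \frac{\xi}{2}$; hence each part is large enough to apply Lemma~\ref{lem:blowup} with $m = \beta n$ and $\xi$ replaced by $\frac{\xi}{2}$.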
\end{proof} \section{Weighted Ramsey number} \label{sec:weighted_ramsey} \subsection{Wheel graph} Before proving Theorem~\ref{thm:weight_cfs}, we first show that the weighted Ramsey number of wheel graphs is small, as claimed in the introduction without proof. Recall that a wheel graph $W_{k}$ is a graph with $k$ vertices consisting of a cycle on $k-1$ vertices and a vertex adjacent to all vertices on the cycle. Let $w : V(W_k) \rightarrow [0,1]$ be a weight function and define $m = w(V(W_k))$ as the total weight. Let $C$ be the cycle obtained from $W_k$ by removing the vertex of degree $k-1$. By restricting the domain of $w$, we may assume that $(C,w)$ is a weighted graph. Since a cycle has maximum degree 2, Theorem~\ref{thm:weight_cfs} that we will prove in the next subsection implies that there exist constants $\varepsilon$ and $c$ such that $\hat{r}_{\varepsilon}(C, w) \le c \cdot w(V(C))$ (where $c$ is a constant not depending on the order of $C$). By increasing $c$ and decreasing $\varepsilon$ if necessary, we may assume that $c \ge 2$ and $\varepsilon < \frac{1}{8c}$. For simplicity, we assume that $cm$ is an integer. For $N = 8c m$, consider a red/blue edge coloring of $\Gamma$, where $\Gamma$ is a graph with $N$ vertices and minimum degree at least $(1-\frac{\varepsilon}{8})N$. Without loss of generality, we may assume that there exists a vertex $v_1$ of red degree at least $\lceil \frac{N-1}{2} \rceil \ge 4c m$. Let $X$ be an arbitrary set of red neighbors of $v_1$ of size exactly $4c m$ and let $\Gamma_1$ be the graph induced on $X$. If there exists a vertex $v_2$ of blue degree at least $c m$ in $\Gamma_1$, then let $Y$ be an arbitrary set of blue neighbors of $v_2$ in $\Gamma_1$ of size exactly $cm$ and let $\Gamma_2$ be the subgraph of $\Gamma_1$ induced on $Y$. Note that $\Gamma_2$ has minimum degree at least $cm - \frac{\varepsilon}{8} N = (1-\varepsilon)cm$. Therefore, we can find a monochromatic copy of $(C, w)$ in $\Gamma_2$.
If it is red, then together with $v_1$, it forms a monochromatic copy of $(W_k, w)$, and if it is blue, then together with $v_2$, it forms a monochromatic copy of $(W_k, w)$. Hence we may assume that all vertices of $\Gamma_1$ have blue degree at most $cm - 1$ in $\Gamma_1$. Since $\Gamma$ has minimum degree at least $(1-\frac{\varepsilon}{8})N$, it follows that $\Gamma_1$ has minimum degree at least $4cm - \varepsilon N$. Therefore $\Gamma_1$ has minimum red degree at least $3cm - \varepsilon N$. Let the vertices of $C$ be $x_1, x_2, \cdots, x_{k-1}$ in decreasing order of weight (where ties are broken arbitrarily). We will greedily embed the vertices of $C$ according to this order. Suppose that we have finished embedding $x_1, \cdots, x_{i-1}$ and let $\phi$ denote the partial embedding. Note that $x_i$ has at most two neighbors in $x_1, \cdots, x_{i-1}$. Suppose that it has two neighbors, and let $v, v'$ be the images of these vertices in $\Gamma_1$. Let $R$ be the set of common red neighbors of $v$ and $v'$. By the minimum degree condition of $\Gamma_1$, we know that $|R| \ge 2cm - 2\varepsilon N$. If there exists a vertex $v'' \in R$ such that $w(\phi^{-1}(v'')) + w(x_i) \le 1$, then we define $\phi(x_i) = v''$. Otherwise, we have $w(\phi^{-1}(v'')) > 1 - w(x_i) \ge 0$ for all $v'' \in R$. If $w(x_i) \le \frac{1}{2}$, then it implies that $w(\phi^{-1}(v'')) > \frac{1}{2}$ for all $v'' \in R$. On the other hand, if $w(x_i) > \frac{1}{2}$, then we have $w(x_j) > \frac{1}{2}$ for all $j < i$. Therefore $w(\phi^{-1}(v'')) > 0$ implies that $\phi^{-1}(v'') \neq \emptyset$, and thus $w(\phi^{-1}(v'')) > \frac{1}{2}$ for all $v'' \in R$. In both cases, we have $w(\phi^{-1}(v'')) > \frac{1}{2}$ for all $v'' \in R$. Hence $w(\phi^{-1}(R)) > \frac{1}{2} |R| \ge cm - \varepsilon N > m$, which contradicts the fact that $w(\phi^{-1}(R)) \le w(V(C)) \le m$. Therefore we can define $\phi(x_i)$ as above and continue the process.
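To make the final contradiction explicit, note that under the standing assumptions $N = 8cm$ and $c \ge 2$, and with $\varepsilon$ sufficiently small (say $\varepsilon < \frac{1}{16}$), we indeed have

```latex
\[
w(\phi^{-1}(R)) \;>\; \frac{1}{2}|R| \;\ge\; cm - \varepsilon N
\;=\; cm(1-8\varepsilon) \;\ge\; 2m(1-8\varepsilon) \;>\; m,
\]
```

while on the other hand $w(\phi^{-1}(R)) \le w(V(C)) \le m$.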
The case when $x_i$ has fewer than two neighbors among $x_1, \cdots, x_{i-1}$ can be handled similarly. \subsection{Bounded degree graphs} In this section, we adapt the proof of Conlon, Fox, and Sudakov \cite{CoFoSu} to prove Theorem~\ref{thm:weight_cfs}. We say that a graph $\Gamma$ is {\em bi-$(\varepsilon,\delta)$-dense} if for all disjoint pairs of vertex subsets $X, Y \subseteq V(\Gamma)$ of sizes $|X|, |Y| \ge \varepsilon|V(\Gamma)|$, we have $d(X,Y) \ge \delta$. The following definition is essentially from \cite{CoFoSu} (we added an additional parameter $\delta$). \begin{dfn} A graph $\Gamma$ on $N$ vertices is $(\alpha, \beta, \rho, \delta, \Delta)$-dense if there is a sequence $U_1, U_2, \cdots, U_s$ of disjoint vertex subsets each of cardinality at least $\alpha N$ and non-negative integers $d_1, \ldots, d_s$ such that $d_1 + \cdots + d_s = \Delta - s + 1$, and the following holds: \begin{itemize} \setlength{\itemsep}{1pt} \setlength{\parskip}{0pt} \setlength{\parsep}{0pt} \item[(i)] For all $i \in [s]$, the induced subgraph $\Gamma[U_i]$ is bi-$(\rho^{2d_i}, \delta)$-dense, and \item[(ii)] for $1 \le i < j \le s$, each vertex in $U_i$ has at least $(1-\beta)|U_j|$ neighbors in $U_j$. \end{itemize} \end{dfn} Note that monotonicity holds in the sense that if a graph is $(\alpha', \beta', \rho', \delta', \Delta')$-dense and $\alpha' \ge \alpha, \beta' \le \beta, \rho' \le \rho, \delta' \ge \delta, \Delta' \ge \Delta$, then it is also $(\alpha, \beta, \rho, \delta, \Delta)$-dense. The following lemma was proved in \cite[Lemma 2.2]{CoFoSu}. \begin{lem} \label{lem:cfs} Let $D = 2^{h} - 1$ for a non-negative integer $h$, and let $\rho$ be a fixed positive real number. For all $N \ge 1$ and every edge-coloring of $K_N$ with two colors red and blue, the red graph or the blue graph is $(2^{-2h}\rho^{6D-4h}, 2(D+1)\rho, \rho, \rho, D)$-dense. \end{lem} The `stable version' of the lemma above immediately follows for small values of $\varepsilon$.
\begin{lem} \label{lem:cfs_stable} Let $D = 2^{h} - 1$ for a non-negative integer $h$, and $\rho, \varepsilon$ be positive real numbers satisfying $\varepsilon < 2^{-2h-1}\rho^{8D-4h+1}$. If $\Gamma$ is a graph on $N$ vertices with minimum degree at least $(1-\varepsilon)N$, then for every edge-coloring of $\Gamma$ with two colors red and blue, the red graph or the blue graph is $(2^{-2h}\rho^{6D-4h}, 4(D+1)\rho, \rho, \frac{1}{2}\rho, D)$-dense. \end{lem} \begin{proof} Define $\alpha = 2^{-2h}\rho^{6D-4h}$ and $\beta = 2(D+1)\rho$. Consider an edge-coloring of $K_N$ with three colors, where an edge has color red (blue) if it is an edge of color red (blue) in $\Gamma$, and has color green if it is not an edge in $\Gamma$. Let $\Gamma_r, \Gamma_b, \Gamma_g$ be the graphs consisting of the red, blue, and green edges, respectively. By Lemma~\ref{lem:cfs}, either $\Gamma_r \cup \Gamma_g$ or $\Gamma_b$ is $(\alpha, \beta, \rho, \rho, D)$-dense. If $\Gamma_b$ is $(\alpha, \beta, \rho, \rho, D)$-dense, then the conclusion immediately follows by monotonicity. Hence we may assume that $\Gamma' = \Gamma_r \cup \Gamma_g$ is $(\alpha, \beta, \rho, \rho, D)$-dense. By definition, there exists a sequence $U_1, U_2, \cdots, U_s$ of disjoint vertex subsets each of cardinality at least $\alpha N$ and non-negative integers $d_1, \ldots, d_s$ such that $d_1 + \cdots + d_s = D - s + 1$ where the following holds: \begin{itemize} \setlength{\itemsep}{1pt} \setlength{\parskip}{0pt} \setlength{\parsep}{0pt} \item[(i)] For all $i \in [s]$, the induced subgraph $\Gamma'[U_i]$ is bi-$(\rho^{2d_i}, \rho)$-dense, and \item[(ii)] for $1 \le i < j \le s$, each vertex in $U_i$ has at least $(1-\beta)|U_j|$ neighbors in $U_j$ in $\Gamma'$. \end{itemize} We claim that $\Gamma_r[U_i]$ is bi-$(\rho^{2d_i}, \frac{1}{2}\rho)$-dense for all $i \in [s]$, and that for $1 \le i < j \le s$, each vertex in $U_i$ has at least $(1-2\beta)|U_j|$ neighbors in $U_j$ in $\Gamma_r$. Note that this proves the lemma.
To prove the first part of the claim, fix an index $i \in [s]$ and consider a pair of disjoint vertex subsets $X, Y \subseteq U_i$ of sizes $|X|, |Y| \ge \rho^{2d_i} |U_i| \ge \rho^{2D}\alpha N = 2^{-2h} \rho^{8D-4h} N$. By Property (i), we have $e_{\Gamma'}(X,Y) \ge \rho |X||Y|$. Therefore since $\varepsilon N \le \frac{1}{2}\rho |Y|$, \[ e_{\Gamma_r}(X,Y) \ge e_{\Gamma'}(X,Y) - |X| \cdot \varepsilon N \ge \rho |X||Y| - \frac{1}{2}\rho |X||Y| = \frac{1}{2}\rho |X||Y|. \] To prove the second part of the claim, fix two indices $i,j \in [s]$ satisfying $i < j$. By Property (ii), each vertex $u \in U_i$ has at least $(1-\beta)|U_j|$ neighbors in $U_j$ in $\Gamma'$. Since $\varepsilon \le \alpha \beta$, we see that $u$ has at least $(1-\beta)|U_j| - \varepsilon N \ge (1-2\beta)|U_j|$ neighbors in $U_j$ in $\Gamma_r$, thus proving the claim. \end{proof} We also need the following theorem proved by Lov\'asz \cite{Lovasz}. \begin{lem} \label{lem:Lovasz} Let $G$ be a graph of maximum degree at most $\Delta$, and $d_1, \cdots, d_s$ be non-negative integers satisfying $d_1 + \cdots + d_s \ge \Delta - s + 1$. Then there exists a vertex partition $V(G) = V_1 \cup \cdots \cup V_s$ such that for all $i \in [s]$, the induced subgraph $G[V_i]$ has maximum degree at most $d_i$. \end{lem} The following lemma is an embedding lemma for $(\alpha, \beta, \rho, \delta, \Delta)$-dense graphs. It is a variant of \cite[Lemma 2.5]{CoFoSu} for weighted graphs. \begin{lem} \label{lem:wr_embedding} Let $\alpha, \rho, \delta$ be fixed positive real numbers satisfying $\rho \le \frac{1}{16}$ and $\delta \ge \frac{\rho}{2}$. If $\Gamma$ is an $(\alpha, \frac{1}{2\Delta}, \rho, \delta, \Delta)$-dense graph on $N \ge 8\alpha^{-1}(\frac{2}{\delta})^{\Delta}n$ vertices, then $\Gamma$ contains a copy of every weighted graph $(G,w)$ of total weight at most $n$ and maximum degree at most $\Delta$.
\end{lem} \begin{proof} Note that if $\Delta=0$, then the conclusion trivially holds, and hence we may assume that $\Delta \ge 1$. By definition, there exists a sequence $U_1, U_2, \cdots, U_s$ of disjoint vertex subsets of $\Gamma$ each of cardinality at least $\alpha N$ and non-negative integers $\Delta_1, \ldots, \Delta_s$ such that $\Delta_1 + \cdots + \Delta_s = \Delta - s + 1$ for which \begin{itemize} \setlength{\itemsep}{1pt} \setlength{\parskip}{0pt} \setlength{\parsep}{0pt} \item[(i)] the induced subgraph $\Gamma[U_j]$ is bi-$(\rho^{2\Delta_j}, \delta)$-dense for each $j \in [s]$, and \item[(ii)] for $1 \le j < j' \le s$, each vertex in $U_{j}$ has at least $\left(1-\frac{1}{2\Delta}\right)|U_{j'}|$ neighbors in $U_{j'}$. \end{itemize} Let $(G,w)$ be a weighted graph of total weight at most $n$ and maximum degree at most $\Delta$. By Lemma~\ref{lem:Lovasz}, there exists a vertex partition $V(G) = V_1 \cup \cdots \cup V_s$ such that for all $j \in [s]$, the induced subgraph $G[V_j]$ has maximum degree at most $\Delta_j$. Let $v_1, v_2, \cdots, v_n$ be an enumeration of the vertices of $G$ with the following properties: \begin{itemize} \setlength{\itemsep}{1pt} \setlength{\parskip}{0pt} \setlength{\parsep}{0pt} \item[(a)] For all $1 \le j < j' \le s$, the vertices in $V_j$ come before the vertices in $V_{j'}$, and \item[(b)] for all $j \in [s]$, the vertices in $V_j$ are ordered so that their weights form a non-increasing sequence. \end{itemize} For each $t \in [n]$, define $\pi(t) \in [s]$ as the index for which $v_t \in V_{\pi(t)}$. Consider the following greedy algorithm for embedding the vertices of $G$, where the $t$-th step of the algorithm selects the image of $v_t$ in $V(\Gamma)$. At time $t$, the algorithm is given as input a partial embedding $f$ defined on $\{v_1, \cdots, v_{t-1}\}$, where at the initial step $f$ is the empty partial embedding.
For $t \in [n]$ and $i \ge t$, define $N_i^{(t)} = N(v_i) \cap \{v_1, \cdots, v_{t-1}\}$ as the set of neighbors of $v_i$ that precede $v_t$. Define $W_i^{(t)} = U_{\pi(i)} \cap \bigcap_{v \in N_i^{(t)}} N(f(v))$ and note that each vertex in $W_t^{(t)}$ can be used as the image of $v_t$ to extend the partial embedding. For each $i \ge t$, define $d_i^{(t)} = |N_i^{(t)} \cap V_{\pi(i)}|$ as the number of neighbors of $v_i$ preceding itself in its own part. Throughout the process, we will maintain the following property: \begin{align} \label{eq:sizebound} \forall i \ge t, \quad |W_i^{(t)}| \ge \frac{1}{2}\left(\frac{\delta}{2}\right)^{d_i^{(t)}} |U_{\pi(i)}|. \end{align} Note that if $d_i^{(t)} = 0$, then \eqref{eq:sizebound} follows from Property (ii) since $|N^{(t)}_i| \le \deg(v_i) \le \Delta$ holds and \begin{align} \label{eq:sizebound_free} |W_i^{(t)}| \ge |U_{\pi(i)}| - \frac{1}{2\Delta}|U_{\pi(i)}| \cdot |N_i^{(t)}| \ge \frac{1}{2} |U_{\pi(i)}|. \end{align} Initially at $t=1$, we define $W_i^{(1)} = U_{\pi(i)}$ for all $i \in [n]$. Moreover, since $d_i^{(1)} = 0$ for all $i \in [n]$, equation \eqref{eq:sizebound} holds. Suppose that we are at the $t$-th step of the algorithm for some $t \in [n]$. Define $I^+ = \{i > t \,:\, v_i \in N(v_t) \cap V_{\pi(t)} \}$. Let $W \subseteq W_t^{(t)}$ be the set of vertices $u \in W_t^{(t)}$ such that for all $i \in I^+$, $|N(u) \cap W_i^{(t)}| \ge \frac{\delta}{2} |W_i^{(t)}|$. If there exists a vertex $u \in W$ such that $w(f^{-1}(u)) \le 1 - w(v_t)$, then define $f(v_t) = u$. This is a partial embedding since $f(v_t) \in W_t^{(t)}$ and $w(f^{-1}(u)) \le 1$. Furthermore, \eqref{eq:sizebound} is satisfied for all $i > t$ with $\pi(i) > \pi(t)$ by \eqref{eq:sizebound_free}, and for those with $\pi(i) = \pi(t)$ but $i \notin I^+$ since $W_i^{(t+1)} = W_i^{(t)}$. If $\pi(i) = \pi(t)$ and $i \in I^+$, then $d_i^{(t+1)} = d_i^{(t)} + 1$ and therefore \eqref{eq:sizebound} holds since $W_i^{(t+1)} = N(u) \cap W_i^{(t)}$.
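Spelled out, the last case is the following one-line computation: if $\pi(i) = \pi(t)$ and $i \in I^+$, then by the choice of $u \in W$,

```latex
\[
|W_i^{(t+1)}| \;=\; |N(u) \cap W_i^{(t)}|
\;\ge\; \frac{\delta}{2}\,|W_i^{(t)}|
\;\ge\; \frac{\delta}{2}\cdot\frac{1}{2}\left(\frac{\delta}{2}\right)^{d_i^{(t)}}|U_{\pi(i)}|
\;=\; \frac{1}{2}\left(\frac{\delta}{2}\right)^{d_i^{(t+1)}}|U_{\pi(i)}|,
\]
```

which is exactly \eqref{eq:sizebound} at time $t+1$.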
Therefore it suffices to prove the existence of a vertex $u \in W$ satisfying $w(f^{-1}(u)) \le 1 - w(v_t)$. Suppose that all vertices $u \in W$ satisfy $w(f^{-1}(u)) > 1 - w(v_t)$. Recall that $w(v_j) \ge w(v_t)$ for all $j \le t$ satisfying $\pi(j) = \pi(t)$ by Property (b). Since $w(f^{-1}(u)) > 1 - w(v_t) \ge 0$ implies that $f^{-1}(u) \neq \emptyset$, if $w(v_t) > \frac{1}{2}$, then it follows that $w(f^{-1}(u)) \ge w(v_t) > \frac{1}{2}$. On the other hand, if $w(v_t) \le \frac{1}{2}$, then $w(f^{-1}(u)) > 1 - w(v_t) \ge \frac{1}{2}$. Therefore for all vertices $u \in W$, we have $w(f^{-1}(u)) > \frac{1}{2}$. Since \[ \frac{1}{2}|W| < \sum_{u \in W} w(f^{-1}(u)) \le w(V(G)) \le n, \] we see that $|W| < 2n$. For all vertices $u \in W_t^{(t)} \setminus W$, there exists $i \in I^+$ such that $|N(u) \cap W_i^{(t)}| < \frac{\delta}{2} |W_i^{(t)}|$. For notational simplicity, define $k = \Delta_{\pi(t)}$. Since $|I^+| \le k$, by the pigeonhole principle, there exists an index $i_0 \in I^+$ such that $|N(u) \cap W_{i_0}^{(t)}| < \frac{\delta}{2} |W_{i_0}^{(t)}|$ holds for at least $\frac{1}{k}|W_t^{(t)} \setminus W|$ vertices $u \in W_t^{(t)} \setminus W$. Let $X_1$ be the set of these vertices and note that \[ |X_1| \ge \frac{1}{k}(|W_t^{(t)}| - 2n) \ge \frac{1}{k}\left(\frac{1}{2}\left(\frac{\delta}{2}\right)^{k} |U_{\pi(t)}| - 2n\right) \ge \frac{1}{4k}\left(\frac{\delta}{2}\right)^{k} |U_{\pi(t)}| \ge \rho^{2k}|U_{\pi(t)}|, \] where the second to last inequality follows since $|U_{\pi(t)}| \ge \alpha N$ and the last inequality follows since $\rho \le \frac{1}{16}$ and $\rho \le 2\delta$. Define $X_2 = W_{i_0}^{(t)}$ and note that \eqref{eq:sizebound} implies $|X_2| \ge \frac{1}{2}(\frac{\delta}{2})^{k}|U_{\pi(t)}| \ge 2\rho^{2k} |U_{\pi(t)}|$. Let $X_1' \subseteq X_1$ be an arbitrary subset of size exactly $\rho^{2k}|U_{\pi(t)}|$, and define $X_2' = X_2 \setminus X_1'$. Then $|X_2'| \ge \frac{1}{2}|X_2| \ge \rho^{2k}|U_{\pi(t)}|$.
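As an aside, the last inequality in the displayed bound on $|X_1|$ can be verified directly from the assumptions $\delta \ge \frac{\rho}{2}$ and $\rho \le \frac{1}{16}$: for all $1 \le k \le \Delta$,

```latex
\[
\frac{1}{4k}\left(\frac{\delta}{2}\right)^{k}
\;\ge\; \frac{1}{4k}\left(\frac{\rho}{4}\right)^{k}
\;=\; \frac{\rho^{2k}}{4k\,(4\rho)^{k}}
\;\ge\; \rho^{2k},
\]
```

because $4\rho \le \frac{1}{4}$ and $4k \le 4^{k}$, so $4k\,(4\rho)^{k} \le 1$.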
Furthermore, each vertex $u \in X_1'$ has at most $\frac{\delta}{2}|X_2| \le \delta |X_2'|$ neighbors in $X_2'$. This contradicts the fact that $\Gamma[U_{\pi(t)}]$ is bi-$(\rho^{2k}, \delta)$-dense. Therefore there exists a vertex $u \in W$ satisfying $w(f^{-1}(u)) \le 1 - w(v_t)$. \end{proof} Theorem~\ref{thm:weight_cfs} straightforwardly follows from Lemmas~\ref{lem:cfs_stable} and \ref{lem:wr_embedding}. \begin{thm*} There exists a constant $c > 1$ such that the following holds for all $\Delta$ and $\varepsilon$ satisfying $\varepsilon < c^{-\Delta \log \Delta}$. If $(G, w)$ is a weighted graph with maximum degree at most $\Delta$ and total weight at most $n$, then $\hat{r}_\varepsilon(G, w) \le c^{\Delta \log \Delta} n$. \end{thm*} \begin{proof} Let $N = c^{\Delta \log \Delta} n$ for a constant $c$ to be chosen later. Let $(G,w)$ be a weighted graph given as above. Suppose that $\Gamma$ is a graph on $N$ vertices with minimum degree at least $(1-\varepsilon)N$, and consider an edge-coloring with two colors red and blue. Let $h$ be the integer satisfying $2^{h-1} \le \Delta \le 2^{h}-1$ and note that $h \le \log(2\Delta)$. Define $D = 2^{h}-1 \le 2\Delta$ and $\rho = \frac{1}{2^{h+4}\Delta} \ge \frac{1}{32\Delta^2}$. If $c$ is sufficiently large, then $\varepsilon < 2^{-2h-1}\rho^{8D-4h+1}$ and thus by Lemma~\ref{lem:cfs_stable}, we see that the red graph or the blue graph is $(2^{-2h}\rho^{6D-4h}, 4(D+1)\rho, \rho, \frac{1}{2}\rho, D)$-dense. Without loss of generality, assume that it is the red graph. Then by monotonicity, the red graph is $(\rho^{12D}, \frac{1}{2\Delta}, \rho, \frac{1}{2}\rho, \Delta)$-dense. Therefore if $c$ is large enough, then $N \ge 8 \rho^{-12D}(\frac{4}{\rho})^{\Delta}n$ and by Lemma~\ref{lem:wr_embedding}, the red graph contains a copy of $(G, w)$. \end{proof} \section{A variant of the blow-up lemma} \label{sec:blowup} In this section, we prove Lemma~\ref{lem:blowup}, a variant of the blow-up lemma.
We will use a simplified version of the Random Greedy Algorithm (RGA) developed by Koml\'os, S\'ark\"ozy, and Szemer\'edi \cite{KoSaSz}. Their original algorithm consisted of two phases. In Phase 1, they embed the vertices one at a time, where at each step one considers all possible images that are consistent with the previous embedding and chooses a random vertex among them. Phase 1 continues until almost all vertices of the graph have been embedded. In Phase 2, they finish the embedding by invoking Hall's theorem. For our proof, we do not need the second phase, since we only need an almost spanning embedding. One can prove Lemma~\ref{lem:blowup} by carefully making this adjustment in their proof. It is rather straightforward to incorporate this change, but we include the proof here for completeness. \medskip Let $G$ be a graph with maximum degree at most $\Delta$. Let $\Gamma$ be a graph with a vertex partition $\{V_i\}_{i=1}^{k}$ satisfying $|V_i| \ge (1+\xi)m$ for all $i \in [k]$, and let $R$ be its $(\varepsilon, \delta)$-reduced graph. Suppose that there exists a homomorphism $f$ from $G$ to $R$ where $|f^{-1}(i)| \le m$ for all $i \in [k]$. For simplicity we assume that $|f^{-1}(i)| = m$ for all $i \in [k]$ by adding isolated vertices if necessary. In order to avoid confusion, we will refer to the vertices in $G$ using $x,y$ and the vertices in $\Gamma$ using $v,w$. Let $\varepsilon, \varepsilon_1, \varepsilon_2$ be positive real numbers satisfying $\varepsilon \ll \varepsilon_2 \ll \varepsilon_1$ where $\varepsilon_1$ is small enough depending on $\delta$ and $\xi$. We first embed $f^{-1}(1)$ to $V_1$, then $f^{-1}(2)$ to $V_2$, and continue until we embed $f^{-1}(k)$ to $V_k$. Suppose that we finished embedding $f^{-1}(i-1)$ to $V_{i-1}$ for some $i \in [k]$. Define $A_0 = f^{-1}(1) \cup \cdots \cup f^{-1}(i-1)$ and $B_0 = V(G) \setminus A_0$.
For each $y \in B_0$, define $N_0(y) = N(y) \cap A_0$ as the set of neighbors of $y$ already embedded, and let $d_0(y) = |N_0(y)|$. Define $U_0(y) = V_{f(y)} \cap \bigcap_{z \in N_0(y)} N(\phi(z))$. Consider the following property: \begin{quote} $\mathcal{P}(i)$ : For all $X \subseteq V_i$ of size $\varepsilon_1 |V_i| \le |X| \le m$, there are less than $\varepsilon_1 m$ vertices $y \in f^{-1}(i)$ such that $|U_0(y) \cap X| \ge (1- \varepsilon_1) |U_0(y)|$. \end{quote} We will show that there exists a random embedding algorithm that embeds $f^{-1}(i)$ to $V_i$ so that the probability that $\mathcal{P}(1), \cdots, \mathcal{P}(i)$ hold but $\mathcal{P}(i+1)$ does not is small. Fix an arbitrary enumeration of the vertices in $f^{-1}(i)$. We will iteratively embed the vertices of $f^{-1}(i)$ mainly following the order of this enumeration. For some $s \ge 0$, suppose that we finished embedding $s$ vertices of $f^{-1}(i)$ and let $A_s \subseteq V(G)$ be the set of embedded vertices and $B_s = V(G) \setminus A_s$ be its complement. Hence $|A_s \setminus A_0| = s$. Let $\phi$ be the partial embedding of $G$ to $\Gamma$ defined on $A_s$. We will maintain a first-in first-out queue $Q$ throughout the process, where initially $Q = \emptyset$. At the next step, if $Q \neq \emptyset$, then we let $x_{s}$ be the first vertex in $Q$, and if $Q = \emptyset$, then we let $x_{s}$ be the first non-embedded vertex according to the enumeration given above. We will define the image of $x_{s}$ in the next step. For each vertex $y \in B_s$, define $N_s(y) = N(y) \cap A_s$ as the set of neighbors of $y$ already embedded, and let $d_s(y) = |N_s(y)|$. Define $U_s(y) = V_{f(y)} \cap \bigcap_{z \in N_s(y)} N(\phi(z))$.
Throughout the process, we will maintain the following properties: \begin{itemize} \setlength{\itemsep}{1pt} \setlength{\parskip}{0pt} \setlength{\parsep}{0pt} \item[(i)] for all $y \in B_s$, we have $|U_s(y)| \ge (\delta - \varepsilon)^{d_s(y)}|V_{f(y)}|$, \item[(ii)] for all $y \in f^{-1}(i) \setminus Q$, we have $|U_s(y) \setminus \phi(A_s)| \ge \varepsilon_2 |V_{i}|$, and \item[(iii)] $|Q| \le \varepsilon_1 m$. \end{itemize} We will add a vertex to $Q$ when and only when (ii) fails. Thus Property (ii) always holds. We will later show that Property (i) is maintained by how we choose the embedding $\phi$. Note that since we are embedding vertices in $f^{-1}(i)$ to $V_i$, the definition of $Q$ implies $Q \subseteq f^{-1}(i)$. Furthermore since $f^{-1}(i)$ is an independent set, for all $y \in B_s \cap f^{-1}(i)$, we have $N_s(y) = N_0(y)$ and hence $U_s(y) = U_0(y)$. Thus $|U_t(y) \setminus \phi(A_t)|$ is non-increasing in time $t$. Since $|U_t(y) \setminus \phi(A_t)| < \varepsilon_2|V_{i}|$ at the time $t$ that $y$ was added to $Q$, it follows that if $y \in Q$ at time $s$, then $|U_s(y) \setminus \phi(A_s)| < \varepsilon_2 |V_{i}|$. For all $y \in Q$, since $N_s(y) = N_0(y)$ and $d_s(y) = d_0(y)$, by Property (i), \begin{align*} |U_s(y) \cap \phi(A_s)| =&\, |U_s(y)| - |U_s(y) \setminus \phi(A_s)| \\ =&\, \left(1 - \frac{|U_s(y) \setminus \phi(A_s)|}{|U_s(y)|} \right) |U_0(y)| \\ \ge&\, \left(1 - \frac{\varepsilon_2}{(\delta - \varepsilon)^{\Delta}} \right) |U_0(y)| \ge \left(1 - \varepsilon_1\right) |U_0(y)|. \end{align*} If $|V_i \cap \phi(A_s)| = s \ge \varepsilon_1 |V_i|$, then by Property $\mathcal{P}(i)$ it follows that $|Q| \le \varepsilon_1 m$. On the other hand if $s < \varepsilon_1 |V_i|$, then for all $y \in B_s$, by Property (i) we have $|U_s(y) \setminus \phi(A_s)| \ge (\delta - \varepsilon)^{d_s(y)}|V_{f(y)}| - \varepsilon_1|V_{f(y)}| > \varepsilon_1|V_{f(y)}|$ and therefore $y \notin Q$.
This implies that $Q = \emptyset$ if $s < \varepsilon_1|V_i|$. Therefore Property (iii) holds if $\mathcal{P}(i)$ holds. Define $U = U_s(x_s)$. Note that if $x_s \notin Q$, then $|U \setminus \phi(A_s)| \ge \varepsilon_2 |V_i|$. On the other hand, suppose that $x_s \in Q$. As observed above, $d_t(x_s)$ is constant for $t = 0, 1, 2, \cdots, m-1$. Therefore the size of $U \setminus \phi(A_s)$ can change by at most one at each step. Since $Q$ is a first-in first-out queue, by Property (iii), there are at most $\varepsilon_1 m$ steps between the time that $x_s$ was first added to the queue and time $s$. This implies that $|U \setminus \phi(A_s)| \ge \varepsilon_2 |V_i| - \varepsilon_1 m \ge \frac{1}{2} \varepsilon_2 |V_i|$. Therefore in both cases, we have $|U \setminus \phi(A_s)| \ge \frac{1}{2}\varepsilon_2 |V_i|$. For each $y \in N(x_s) \cap B_s$, since $f$ is a homomorphism from $G$ to $R$, we know that the pair $(V_i, V_{f(y)})$ is $\varepsilon$-regular of density at least $\delta$. Moreover since $U \subseteq V_i, U_s(y) \subseteq V_{f(y)}$, and $|U_s(y)| \ge (\delta - \varepsilon)^{d_s(y)}|V_{f(y)}|$ (by Property (i)), the set of vertices $Z_y \subseteq U$ with less than $(\delta - \varepsilon)|U_s(y)|$ neighbors in $U_s(y)$ has size $|Z_y| \le \varepsilon |V_{i}|$. Define $U' = (U \setminus \phi(A_s)) \setminus \bigcup_{y \in N(x_s) \cap B_s} Z_y$ and note that \[ |U'| \ge \frac{1}{2}\varepsilon_2 |V_i| - \Delta\varepsilon |V_i| \ge \frac{1}{4}\varepsilon_2 |V_i|. \] Let $\phi(x_s)$ be a vertex in $U'$ chosen uniformly at random. The following lemma shows that Property $\mathcal{P}(i+1)$ holds with high probability after we finish embedding $f^{-1}(i)$ to $V_i$. \begin{lem} \label{lem:prob} The probability that $\mathcal{P}(i+1)$ does not hold but $\mathcal{P}(1), \cdots, \mathcal{P}(i)$ hold is at most $e^{-\Omega(m)}$. \end{lem} Given this lemma, by taking the union bound, we see that the probability that $\mathcal{P}(i)$ does not hold for some $i$ is at most $ke^{-\Omega(m)} = o(1)$.
Hence with non-zero probability, the algorithm will successfully terminate and embed $G$ to $\Gamma$. \begin{proof}[Proof of Lemma~\ref{lem:prob}] Let $\mathcal{E}$ be the event that $\mathcal{P}(1), \cdots, \mathcal{P}(i)$ hold. Fix a set $X \subseteq V_{i+1}$ of size $\varepsilon_1 |V_{i+1}| \le |X| \le m$. Define $A = \bigcup_{j=1}^{i} f^{-1}(j)$ and $B = V(G) \setminus A$. For each $y \in f^{-1}(i+1)$, define $U(y) = V_{i+1} \cap \bigcap_{z \in N(y) \cap A} N(\phi(z))$. Let $R \subseteq f^{-1}(i+1)$ be a fixed set of size at least $\varepsilon_1 m$. We first compute the probability that \begin{quote} (*) every vertex $y \in R$ satisfies $|U(y) \cap X| \ge (1-\varepsilon_1)|U(y)|$. \end{quote} Note that $\mathcal{P}(i+1)$ holds if there is no such pair of sets $(X,R)$. Since $G$ has maximum degree at most $\Delta$, we can find a subset $R' \subseteq R$ of size at least $\frac{|R|}{\Delta^2+1}$ whose pairwise distance is at least 3 in $G$. In other words, the sets $N(y)$ are pairwise disjoint for vertices $y \in R'$. Fix a vertex $y \in R'$. We examine the probability that $|U(y) \cap X| \ge (1-\varepsilon_1)|U(y)|$. Let $z_1, z_2, \cdots, z_d$ be the vertices in $A \cap N(y)$ in the order of embedding (note that $d \le \Delta$). Then \[ U(y) = V_{i+1} \cap N(\phi(z_1)) \cap N(\phi(z_2)) \cdots \cap N(\phi(z_d)). \] For $j=0,1,2,\cdots, d$, define $W_j(y) = V_{i+1} \cap N(\phi(z_1)) \cap \cdots \cap N(\phi(z_j))$. By the definition of our embedding algorithm, either $\mathcal{E}$ does not hold, or we have $|W_j(y)| \ge (\delta - \varepsilon)^j |V_{i+1}|$ for all $j=1,2,\cdots, d$. Since $U(y) = W_d(y)$, if $|U(y) \cap X| \ge (1-\varepsilon_1)|U(y)|$ holds, then \[ |U(y) \cap X| \ge (1-\varepsilon_1) |U(y)| \ge (1-\varepsilon_1) \cdot (\delta - \varepsilon)^d |V_{i+1}| > (\delta + \varepsilon)^{d} |X|. \] Therefore there exists some $t$ such that \begin{align} \label{eq:breakpoint} |W_{t}(y) \cap X| > (\delta+\varepsilon)^{t} |X| \quad \textrm{ but } \quad |W_{t-1}(y) \cap X| \le (\delta+\varepsilon)^{t-1} |X|.
\end{align} Since $|X| \ge \varepsilon_1 |V_{i+1}|$ and $(\delta+\varepsilon)^{\Delta}\varepsilon_1 \ge \varepsilon$, the above can hold only if $|W_{t}(y) \cap X| \ge \varepsilon |V_{i+1}|$. This implies that $|W_{t-1}(y) \cap X| \ge \varepsilon |V_{i+1}|$. Furthermore, since $f$ is a homomorphism from $G$ to $R$, the pair $(V_{i+1}, V_{f(z_t)})$ is $\varepsilon$-regular with density at least $\delta$. Since $X \subseteq V_{i+1}$, there are at most $\varepsilon|V_{f(z_t)}|$ vertices $z \in V_{f(z_t)}$ for which defining $\phi(z_t) = z$ would cause \eqref{eq:breakpoint}. Thus, for each $y \in R'$, \eqref{eq:breakpoint} can occur only if there exists $z_y \in A \cap N(y)$ whose image $\phi(z_y)$ was chosen in a set of size at most $\varepsilon |V_{f(z_y)}|$. Therefore (*) holds only if such a vertex $z_y$ exists for each $y \in R'$. On the other hand if $\mathcal{E}$ holds, then $\phi(z_y)$ was chosen inside a subset of $V_{f(z_y)}$ of size at least $\frac{1}{4}\varepsilon_2 |V_{f(z_y)}|$. Since the vertices in $R'$ have pairwise distance at least 3, all of the vertices $z_y$ are distinct. Moreover, the number of choices of these vertices $z_y$ is at most $\Delta^{|R'|}$ and thus the probability that $\mathcal{E}$ holds and (*) holds is at most \[ \Delta^{|R'|} \cdot \left( \frac{\varepsilon}{\varepsilon_2 / 4} \right)^{|R'|} \le \left( \frac{4\varepsilon \Delta}{\varepsilon_2} \right)^{|R|/(\Delta^2 + 1)} \le \left( \frac{4\varepsilon \Delta}{\varepsilon_2} \right)^{\varepsilon_1 m/(\Delta^2 + 1)}. \] The number of choices for $R$ is at most $2^m$. Since the size of $X$ satisfies $\varepsilon_1|V_{i+1}| \le |X| \le m$, we must have $|V_{i+1}| \le \varepsilon_1^{-1}m$ or otherwise the lemma is vacuously true. Therefore the number of choices for the set $X$ is at most $2^{\varepsilon_1^{-1}m}$. Hence if $\varepsilon$ is sufficiently small, then the lemma follows from the union bound.
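For completeness, the union bound in the last step reads as follows; this is where the hierarchy $\varepsilon \ll \varepsilon_2$ enters quantitatively. The probability that $\mathcal{E}$ holds but some pair $(X,R)$ satisfies (*) is at most

```latex
\[
2^{m}\cdot 2^{\varepsilon_1^{-1}m}\cdot
\left(\frac{4\varepsilon\Delta}{\varepsilon_2}\right)^{\varepsilon_1 m/(\Delta^{2}+1)}
\;=\; e^{-\Omega(m)},
\]
```

provided $\varepsilon$ is small enough that $\frac{\varepsilon_1}{\Delta^{2}+1}\log\frac{\varepsilon_2}{4\varepsilon\Delta} > 1+\varepsilon_1^{-1}$.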
\end{proof} \section{Bipartite graphs of small bandwidth} \label{sec:bipartite_ramsey} In this section we prove Theorem~\ref{cor:bandwidth}. Our proof is based on a variant of the idea independently used by Conlon \cite{Conlon}, and by Fox and Sudakov \cite{FoSu0}, based on dependent random choice. This variant of dependent random choice has been recently used in \cite{Lee} to establish some embedding results for degenerate graphs. The following lemma is the main ingredient of the proof. \begin{lem} \label{lem:maxdegree_drc} Let $G$ be an $n$-vertex graph of minimum degree at least $\alpha n$ and let $X_0$ be a subset of vertices. For every positive real number $\beta$ and positive integer $\Delta$, there exists a set $X \subseteq V(G)$ satisfying the following properties: \begin{itemize} \setlength{\itemsep}{1pt} \setlength{\parskip}{0pt} \setlength{\parsep}{0pt} \item[(i)] $|X| \ge \frac{1}{2}\alpha^{2\Delta} |V(G)|$, \item[(ii)] $|X \cap X_0| \ge \frac{1}{2}\alpha^{2\Delta} |X_0|$, and \item[(iii)] the number of $\Delta$-tuples in $X^\Delta$ with less than $\beta n$ common neighbors is at most $(\frac{2\beta}{\alpha^{2\Delta}}|X|)^{\Delta}$. \end{itemize} \end{lem} \begin{proof} Define $V = V(G)$. Choose $\Delta$ vertices ${\bf v}_1, \ldots, {\bf v}_\Delta \in V$ independently and uniformly at random, and let ${\bf X} = \bigcap_{i=1}^{\Delta} N({\bf v}_i)$. By linearity of expectation, \begin{align} \BBE\left[ \big| X_0 \cap {\bf X} \big| \cdot \big| {\bf X} \big| \right] &= \sum_{x \in X_0, \,y \in V} \BFP(x, y \in {\bf X}) = \sum_{x \in X_0, \,y \in V} \left(\frac{\codeg(x,y)}{n}\right)^{\Delta} \nonumber \\ &\ge |X_0| n \left(\frac{1}{|X_0| n^2} \sum_{x \in X_0} \sum_{y \in V} \codeg(x,y) \right)^{\Delta}, \label{eq:maxdegree_drc_product} \end{align} where the inequality follows from convexity. For a fixed vertex $x \in X_0$, the sum $\sum_{y\in V} \codeg(x,y)$ counts the number of walks of length 2 in $G$ that start at $x$.
Since $G$ has minimum degree at least $\alpha n$, for all $x \in X_0$, we have $\sum_{y \in V}\codeg(x,y) \ge (\alpha n)^2$. Hence from \eqref{eq:maxdegree_drc_product}, \[ \BBE\left[ \big| X_0 \cap {\bf X} \big| \cdot \big| {\bf X} \big| \right] \ge |X_0| n \left(\frac{ |X_0| \cdot \alpha^2 n^2}{|X_0| n^2} \right)^{\Delta} \ge \alpha^{2\Delta} |X_0|n, \] and by convexity, \[ \BBE\left[ \big| X_0 \cap {\bf X} \big|^{\Delta} \cdot \big| {\bf X} \big|^{\Delta} \right] \ge \alpha^{2\Delta^2} |X_0|^{\Delta} n^{\Delta}. \] Call a $\Delta$-tuple of vertices {\em bad} if it has less than $\beta n$ common neighbors. For a set $A$, define $\xi(A)$ as the number of bad $\Delta$-tuples in $A^\Delta$. The probability of a fixed bad $\Delta$-tuple $T$ being in ${\bf X}^\Delta$ is at most $(\frac{\codeg(T)}{n})^\Delta \le \beta^\Delta$. Hence by linearity of expectation, $\BBE[\xi({\bf X})] \le \beta^{\Delta} \cdot n^\Delta$. Since \begin{align*} \BBE\left[ \frac{\big| X_0 \cap {\bf X} \big|^{\Delta} \cdot \big| {\bf X} \big|^{\Delta}}{\BBE[\big| X_0 \cap {\bf X} \big|^{\Delta} \cdot \big| {\bf X} \big|^{\Delta}]} - \frac{\xi({\bf X}) \cdot \big|X_0 \cap {\bf X}\big|^{\Delta}}{2\BBE[\xi({\bf X})\cdot \big|X_0 \cap {\bf X}\big|^{\Delta}]} \right] = \frac{1}{2}, \end{align*} there exists a set $X$ for which \begin{align*} \frac{\big| X_0 \cap X \big|^{\Delta} \cdot \big| X \big|^{\Delta}}{\BBE[\big| X_0 \cap {\bf X} \big|^{\Delta} \cdot \big| {\bf X} \big|^{\Delta}]} - \frac{\xi(X) \cdot \big|X_0 \cap X\big|^{\Delta}}{2\BBE[\xi({\bf X}) \cdot \big|X_0 \cap {\bf X}\big|^{\Delta}]} \ge \frac{1}{2}. 
\end{align*} In particular, \[ \big| X_0 \cap X \big|^{\Delta} \cdot \big| X \big|^{\Delta} \ge \frac{1}{2} \BBE[\big| X_0 \cap {\bf X} \big|^{\Delta} \cdot \big| {\bf X} \big|^{\Delta}] \ge \frac{1}{2} \alpha^{2\Delta^2} |X_0|^{\Delta} n^{\Delta}, \] and since $|X_0 \cap X| \le |X_0|$ and $|X| \le n$, this implies that $|X| \ge \frac{\alpha^{2\Delta}}{2^{1/\Delta}} n$ and $|X_0 \cap X| \ge \frac{\alpha^{2\Delta}}{2^{1/\Delta}} |X_0|$, thus proving Properties (i) and (ii). Furthermore, \begin{align*} \xi(X) &\le \big| X \big|^{\Delta} \frac{2\BBE[\xi({\bf X}) \cdot \big|X_0 \cap {\bf X}\big|^{\Delta}]}{\BBE[\big| X_0 \cap {\bf X} \big|^{\Delta} \cdot \big| {\bf X} \big|^{\Delta}]} \le \big| X \big|^{\Delta} \frac{2\beta^{\Delta} n^\Delta |X_0|^{\Delta}}{\alpha^{2\Delta^2} |X_0|^{\Delta} n^{\Delta}} \le |X|^{\Delta} \left(\frac{2\beta}{\alpha^{2\Delta}}\right)^{\Delta}, \end{align*} and thus Property (iii) holds. \end{proof} We now prove Theorem~\ref{thm:densityembedding} using Lemma~\ref{lem:maxdegree_drc}. \begin{thm*} Let $\delta$ and $\alpha$ be positive real numbers. Let $G$ be an $n$-vertex graph of minimum degree at least $(\delta + \alpha)n$. Then $G$ contains all bipartite graphs $H$ on at most $\delta n$ vertices with maximum degree at most $\Delta$ and bandwidth at most $\frac{1}{256\Delta}\alpha^{6\Delta+1}n$. \end{thm*} \begin{proof} Let $G$ and $H$ be graphs given as above. Define $m = |V(H)|$. Since $|V(H)| \le |V(G)|$, we can always embed the isolated vertices at the end. Thus we may assume for simplicity that $H$ has no isolated vertex. Let $V = V(G)$ and let $A \cup B$ be the bipartition of $H$. Define $\beta = \frac{1}{256\Delta}\alpha^{6\Delta+1}$ and label the vertices of $H$ using $[m]$ so that $|i - j| \le \beta n$ whenever the vertices with labels $i$ and $j$ are adjacent. For $t \ge 0$, define $B_t := [2t\beta n] \cap B$ and define $A_t$ as the set of vertices $a \in A$ for which $N_H(a) \subseteq B_t$.
Note that since $H$ has bandwidth at most $\beta n$, we have $(A_{t+1} \cup B_{t+1}) \setminus (A_t \cup B_t) \subseteq ((2t-3)\beta n, (2t+1)\beta n]$. Therefore \begin{align} \label{eq:bandwidth_space} |(A_{t+1} \cup B_{t+1}) \setminus (A_t \cup B_t)| < 4\beta n \end{align} for all $t \ge 0$. Note that $A_0 = B_0 = \emptyset$ since $H$ has no isolated vertex. We embed $H$ into $G$ using an iterative algorithm. Define $\gamma = 16\beta \alpha^{-2\Delta}$. As an initialization, apply Lemma \ref{lem:maxdegree_drc} to $G$ with $(X_0)_{\ref{lem:maxdegree_drc}} = V$, $\beta_{\ref{lem:maxdegree_drc}} = 8\beta$, and $\alpha_{\ref{lem:maxdegree_drc}} = \alpha$ to obtain a set $X_0$ (which is the set $X$ that we obtain by applying the lemma) of size $|X_0| \ge \frac{1}{2}\alpha^{2\Delta}n$, where the number of $\Delta$-tuples in $X_0^{\Delta}$ with less than $8\beta n$ common neighbors is at most $(\gamma |X_0|)^{\Delta}$. Define $\phi$ as the trivial partial embedding of $H$ to $G$ defined on $A_0 \cup B_0 = \emptyset$. For $t \ge 0$, at the $t$-th step of the algorithm, we are given as input a set $X_{t}$ and a partial embedding $\phi$ of $H$ to $G$ defined on $A_{t} \cup B_{t}$. Define $V_{t} = V \setminus \phi(A_{t} \cup B_{t})$. We say that a $\Delta$-tuple of vertices $T$ is {\em $V_{t}$-bad} if the number of common neighbors of $T$ in $V_t$ is less than $8\beta n$. Otherwise, we say that $T$ is {\em $V_{t}$-good}. The given input satisfies the following properties: \begin{itemize} \setlength{\itemsep}{1pt} \setlength{\parskip}{0pt} \setlength{\parsep}{0pt} \item[(a)] $X_{t} \subseteq V_{t-1}$, \item[(b)] $|X_{t}| \ge \frac{1}{2}\alpha^{2\Delta+1}n$, \item[(c)] $\phi(B_{t} \setminus B_{t-1}) \subseteq X_{t}$, and \item[(d)] for all $a \in A_{t+1} \setminus A_{t}$, the set $\phi(N(a) \cap B_{t})$ is contained in at most $(\gamma|X_{t}|)^{\Delta - |N(a) \cap B_t|}$ $V_{t-1}$-bad $\Delta$-tuples in $X_{t}$.
\end{itemize} Note that the above properties hold for $t=0$ since $N(a) \cap B_0 = \emptyset$ for all vertices $a$ (where we define $B_{-1} = \emptyset$ and $V_{-1} = V$). For some $t \ge 0$, suppose that we are given a set $X_{t}$ and a map $\phi$ defined on $A_{t} \cup B_{t}$ that satisfies the above properties. Define $G_t$ as the subgraph of $G$ induced on $V_t = V \setminus \phi(A_t \cup B_t)$. Since $|A_t \cup B_t| \le |V(H)| \le \delta n$, the given minimum degree condition on $G$ implies that $G_t$ has minimum degree at least $\alpha n \ge \alpha |V(G_t)|$. In particular, this implies that $G_t$ has at least $\alpha n$ vertices. Apply Lemma \ref{lem:maxdegree_drc} to $G_t$ with $(X_0)_{\ref{lem:maxdegree_drc}} = X_t \setminus \phi(A_t \cup B_t)$, $\beta_{\ref{lem:maxdegree_drc}} = 8\beta$, and $\alpha_{\ref{lem:maxdegree_drc}} = \alpha$ to obtain a set $X_{t+1}$ satisfying the following properties: \begin{itemize} \setlength{\itemsep}{1pt} \setlength{\parskip}{0pt} \setlength{\parsep}{0pt} \item[(i)] $|X_{t+1}| \ge \frac{1}{2} \alpha^{2\Delta} |V(G_t)| \ge \frac{1}{2} \alpha^{2\Delta+1}n$, \item[(ii)] $|X_t \cap X_{t+1}| \ge \frac{1}{2} \alpha^{2\Delta} | (X_t \setminus \phi(A_t \cup B_t)) | \ge \frac{1}{2} \alpha^{2\Delta} (| X_t | - 4\beta n) \ge \frac{1}{8} \alpha^{4\Delta+1}n$, and \item[(iii)] the number of $V_{t}$-bad $\Delta$-tuples in $X_{t+1}^{\Delta}$ is at most $(\gamma|X_{t+1}|)^{\Delta}$. \end{itemize} Note that Properties (a) and (b) immediately follow. To extend $\phi$ to $A_{t+1} \cup B_{t+1}$, we first extend $\phi$ to $B_{t+1} \setminus B_t$. We embed vertices in $B_{t+1} \setminus B_t$ one at a time according to the order given by the labelling. Let $b \in B_{t+1} \setminus B_t$ be the current vertex, where we identify $b$ with the integer in $[m]$. Define $B[b] = B \cap [b]$ and for each vertex $a \in A$, define $d_b(a) = |N(a) \cap B[b]|$.
We maintain the following three properties while extending $\phi$: \begin{itemize} \setlength{\itemsep}{1pt} \setlength{\parskip}{0pt} \setlength{\parsep}{0pt} \item[(c')] $\phi(B[b] \setminus B_{t}) \subseteq X_{t} \cap X_{t+1}$, \item[(d1)] for all $a \in A_{t+1} \setminus A_{t}$, the set $\phi(N(a) \cap B[b])$ is contained in at most $(\gamma|X_t|)^{\Delta - d_b(a)}$ $V_{t-1}$-bad $\Delta$-tuples of $X_t$, and \item[(d2)] for all $a \in A_{t+2} \setminus A_{t+1}$, the set $\phi(N(a) \cap B[b])$ is contained in at most $(\gamma|X_{t+1}|)^{\Delta - d_b(a)}$ $V_t$-bad $\Delta$-tuples of $X_{t+1}$. \end{itemize} Initially, we may assume that $b = 2t\beta n$ so that $B[b] = B_t$. Then Property (c') holds vacuously, and Property (d1) holds by Property (d) of the previous iteration. Moreover, note that if $a \in A_{t+2} \setminus A_{t+1}$, then $a$ is adjacent to a vertex in $B_{t+2} \setminus B_{t+1}$, thus to a vertex with label at least $2(t+1)\beta n + 1$. Hence by the definition of bandwidth, it cannot be adjacent to a vertex in $B_{t}$, implying that $N(a) \cap B_t = \emptyset$. This implies (d2) at the initial stage, by Property (iii). Let $b \in B_{t+1} \setminus B_t$ be the next vertex to embed. Let $a_1, a_2, \ldots, a_d$ be the neighbors of $b$ (for $d \le \Delta$). Note that by the definition of $A_t$, we have $a_i \notin A_t$ for all $i \in [d]$. On the other hand for each $i \in [d]$, since $b \in B_{t+1} \subseteq [(2t+2)\beta n]$ and $H$ has bandwidth at most $\beta n$, the vertex $a_i$ cannot be adjacent to a vertex in $((2t+4)\beta n, m]$. This implies that $a_i \in A_{t+2} \setminus A_t$. For each $i \in [d]$, define $N_i = N(a_i) \cap [b-1]$ and note that $\phi$ is already defined on $N_i$. For each $i \in [d]$, since $a_i$ is adjacent to $b$ and $H$ has bandwidth at most $\beta n$, the vertex $a_i$ cannot be adjacent to a vertex in $[b-2\beta n-1] \cap B \supseteq B_{t-1}$, thus implying that $N_i \subseteq B_{t+1} \setminus B_{t-1}$. Fix an index $i \in [d]$.
If $a_i \in A_{t+1} \setminus A_t$, then Property (c') implies that $\phi(N_i) \subseteq X_{t}$, and Property (d1) implies that $\phi(N_i)$ is contained in at most $(\gamma|X_t|)^{\Delta - |N_i|}$ $V_{t-1}$-bad $\Delta$-tuples of $X_t$. Hence there are less than $\gamma|X_t|$ vertices $x \in X_t$ for which the $(|N_i|+1)$-tuple $N_i \cup \{x\}$ is contained in more than $(\gamma|X_t|)^{\Delta - |N_i|-1}$ $V_{t-1}$-bad $\Delta$-tuples of $X_t$. If $a_i \in A_{t+2} \setminus A_{t+1}$, then Property (c') implies that $\phi(N_i) \subseteq X_{t+1}$. Hence, similarly to the above, Property (d2) implies that there are less than $\gamma|X_{t+1}|$ vertices $x \in X_{t+1}$ for which the $(|N_i|+1)$-tuple $N_i \cup \{x\}$ is contained in more than $(\gamma|X_{t+1}|)^{\Delta - |N_i|-1}$ $V_t$-bad $\Delta$-tuples of $X_{t+1}$. Since \[ |X_t \cap X_{t+1}| \ge \frac{1}{8} \alpha^{4\Delta+1}n \ge 2\beta n + \frac{1}{16} \alpha^{4\Delta+1} n \ge 2\beta n + d \gamma n \] (the last inequality uses $d\gamma \le \Delta \cdot 16\beta\alpha^{-2\Delta} = \frac{1}{16}\alpha^{4\Delta+1}$, which follows from the choice $\beta = \frac{1}{256\Delta}\alpha^{6\Delta+1}$), we have $|(X_t \cap X_{t+1}) \setminus \phi(B[b-1])| \ge |X_t \cap X_{t+1}| - (2\beta n-1) \ge d \gamma n + 1$ (by Property (c')). Therefore we can choose $\phi(b)=x$ to maintain Properties (d1) and (d2) by avoiding the vertices identified above for each $i=1,2,\cdots,d$. Once we finish embedding $B_{t+1}$, we greedily embed the vertices $a \in A_{t+1}$ one at a time. Note that $\phi(N(a) \cap B)$ is contained in less than $(\gamma |X_{t+1}|)^{\Delta - |N(a) \cap B|} < |X_{t+1}|^{\Delta - |N(a) \cap B|}$ $V_{t-1}$-bad $\Delta$-tuples. Since the number of $\Delta$-tuples containing $\phi(N(a) \cap B)$ is $|X_{t+1}|^{\Delta - |N(a) \cap B|}$, this in particular implies that there exists a $V_{t-1}$-good $\Delta$-tuple containing $\phi(N(a) \cap B)$. Since every $V_{t-1}$-good tuple has at least $8\beta n$ common neighbors in $V_{t-1}$, we thus see that $\phi(N(a) \cap B)$ has at least $8\beta n$ common neighbors in $V_{t-1}$.
By \eqref{eq:bandwidth_space}, we see that $|V_{t-1} \setminus V_t| < 4\beta n$ and thus $\phi(N(a) \cap B)$ has at least $4\beta n$ common neighbors in $V_t$. Therefore again by \eqref{eq:bandwidth_space}, we will never run out of vertices while greedily embedding the vertices in $A_{t+1}$ to appropriate vertices in $V_t$. Note that Property (c) for the next step is satisfied by Property (c'), and Property (d) for the next step is satisfied by Property (d2). \end{proof} Theorem~\ref{thm:densityembedding} has the following interesting corollary which shows that a transference-type result holds even if $\beta$ is as large as $c^{\Delta}$ for some constant $c$ when the given graph is bipartite and has small bandwidth. \begin{cor} \label{cor:bandwidth} For every positive real number $\varepsilon$, there exists a real number $c < 1$ such that the following holds. If $G$ is an $n$-vertex bipartite graph of maximum degree at most $\Delta$ and bandwidth at most $c^{\Delta}n$, then $r(G) \le (4+\varepsilon)n$. \end{cor} \begin{proof} Define $c < 1$ by $c^{\Delta} = \frac{1}{256\Delta}\left(\frac{\varepsilon}{4(4+\varepsilon)}\right)^{6\Delta+1}$. Let $N = (4+\varepsilon)n$ and suppose that the edge set of $K_N$ has been two-colored using red and blue. Without loss of generality, we may assume that the red graph has density at least $\frac{1}{2}$. Then we can find a subgraph of the red graph having minimum degree at least $(1 + \frac{\varepsilon}{4})n$. Apply Theorem~\ref{thm:densityembedding} to this graph with $n_{\ref{thm:densityembedding}} = N$, $\delta_{\ref{thm:densityembedding}} = \frac{1}{4+\varepsilon}$, and $\alpha_{\ref{thm:densityembedding}} = \frac{\varepsilon}{4(4+\varepsilon)}$ to find a monochromatic copy of $G$; note that $(\delta + \alpha)N = n + \frac{\varepsilon}{4}n$ and $\delta N = n$, so the hypotheses of the theorem are indeed satisfied. \end{proof} \section{Concluding Remarks} \label{sec:remarks} The main theorem of this paper (Theorem~\ref{thm:transference}) is a transference principle for Ramsey numbers of bounded degree graphs.
It asserts that for all $\Delta, \xi$ and $\varepsilon$, there exist $\beta$ and $n_0$ such that the following holds for all $n \ge n_0$: if $G$ is an $n$-vertex graph of maximum degree at most $\Delta$ having a homomorphism $f$ to $H$ such that $|f^{-1}(v)| \le \beta n$ for all $v \in V(H)$, then $r(G) \le (1+\xi)\hat{r}_{\varepsilon}(H,w) \cdot \beta n$. A similar result can be proved for more than two colors and for off-diagonal Ramsey numbers using the same approach. The bound on $\beta$ that we obtain is of tower-type, which is unlikely to be best possible. For example, Corollary~\ref{cor:bandwidth} shows that we may take $\beta \le c^{\Delta}$ in a special case. It might be the case that the transference principle holds for classes of graphs more general than bounded degree graphs. \begin{ques} \label{ques:degen} Can Theorem~\ref{thm:transference} be extended to degenerate graphs? \end{ques} The main difficulty in following the same strategy used in this paper lies in developing a variant of the blow-up lemma that we used. In fact there has been some recent work on extending the blow-up lemma to classes of graphs beyond bounded degree graphs. For an integer $a$, a graph is called $a$-arrangeable if its vertices can be ordered as $x_1, \cdots, x_n$ such that $|N(N(x_i) \cap R_i) \cap L_i| \le a$ for all $i \in [n]$, where $R_i = \{x_{i+1}, \cdots, x_n\}$ and $L_i = \{x_1, \cdots, x_i\}$. B\"ottcher, Taraz, and W\"urfl \cite{BoTaWu} extended the blow-up lemma to arrangeable graphs (after adding a weak constraint on the maximum degree). Their result implies that a transference-type result holds if the target graph $H$ is a bounded degree graph. There has also been some partial success towards extending the blow-up lemma to degenerate graphs \cite{Lee} but only when the bandwidth is small and for almost spanning subgraphs. It is plausible that some of the ideas used in these papers will help in answering Question~\ref{ques:degen}.
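To get some feeling for the arrangeability condition above, here is a simple check (an observation of ours, not taken from the cited papers): every graph of maximum degree at most $\Delta$ is $\Delta^2$-arrangeable with respect to any ordering of its vertices, since
\[
|N(N(x_i) \cap R_i) \cap L_i| \;\le\; \sum_{y \in N(x_i) \cap R_i} |N(y)| \;\le\; \Delta \cdot \Delta \;=\; \Delta^2.
\]
On the other hand, a star is $1$-arrangeable once its center is placed first in the ordering, so arrangeability is a genuine weakening of bounded maximum degree.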
\medskip Recall that for a given weighted graph $(G,w)$, $\hat{r}_\varepsilon(G,w)$ is not necessarily finite if $\varepsilon$ is large. In fact $\hat{r}_\varepsilon(G,w)$ is finite if and only if $\varepsilon < \frac{1}{r(\chi(G))-1}$ (where $r(k)$ is the Ramsey number of $K_k$). Let $s = r(\chi(G)) - 1$. If $\varepsilon \ge \frac{1}{s}$, then one can consider a red/blue coloring of $K_s$ with no monochromatic copy of $K_{\chi(G)}$ and take a balanced blow-up of this coloring to find an arbitrarily large $n$-vertex graph with minimum degree at least $(1 - \frac{1}{s})n$ having no monochromatic subgraph of chromatic number at least $\chi(G)$. In particular, it does not contain a monochromatic copy of $G$. On the other hand if $\varepsilon < \frac{1}{s}$, then one can show that by supersaturation, for sufficiently large $n$ there exist $\Omega(n^{\chi(G)})$ monochromatic copies of $K_{\chi(G)}$ in every red/blue coloring of an $n$-vertex graph $\Gamma$ of minimum degree at least $(1-\varepsilon)n$. Without loss of generality, assume that at least half of such copies of $K_{\chi(G)}$ are red. Consider a $\chi(G)$-uniform hypergraph over the vertex set of $\Gamma$ where we place a hyperedge on the vertex set of each red copy of $K_{\chi(G)}$ in the coloring above. By the K\H{o}v\'ari--S\'os--Tur\'an theorem for hypergraphs, we can find a complete $\chi(G)$-partite graph with $|V(G)|$ vertices in each part if $n$ is sufficiently large. This implies that we can find a monochromatic copy of $G$ in $\Gamma$. \medskip \noindent {\bf Acknowledgements}. I thank David Conlon, Jacob Fox, and Benny Sudakov for fruitful discussions.
\section{Introduction} Let $R$ be a commutative noetherian local ring with maximal ideal $\m$, and let $\mod R$ denote the category of all finitely generated $R$-modules. For any non-negative integer $n$, the elements of $R$ annihilating $\Ext_R^n(M, N)$, for all $M$ and $N$ in $\mod R$, form an ideal which, following \cite{ua}, we denote by $\ca^n(R)$. It is easy to see that there is a tower of ideals $$ \cdots\subseteq\ca^n(R)\subseteq\ca^{n+1}(R)\subseteq\cdots, $$ so their union $\ca(R)$ is also an ideal of $R$, which is called the {\em cohomology annihilator} of $R$. As $R$ is noetherian, there exists an integer $s$ such that $\ca(R)=\ca^s(R)$. Unless $R$ is regular, $\ca(R)$ is a proper ideal. The notion of the cohomology annihilator was introduced and studied independently by Dieterich \cite{di} and Yoshino \cite{y} in connection with the Brauer--Thrall conjectures for maximal Cohen--Macaulay modules, where they proved that the cohomology annihilator of a $d$-dimensional Cohen--Macaulay complete local ring with perfect coefficient field is $\m$-primary, provided that the ring is an isolated singularity. Later on, Popescu and Roczen \cite{pr} removed the assumption of being an isolated singularity from the result of Dieterich and Yoshino. A nice theorem of Auslander \cite{A} states that every complete Cohen--Macaulay local ring of finite Cohen--Macaulay representation type is an isolated singularity. In fact, he essentially proved that in this case, the cohomology annihilator $\ca(R)$ is $\m$-primary. This result was extended by Leuschke and Wiegand \cite{LW} to the case where the ring is excellent, and by Huneke and Leuschke \cite{HL} to all Cohen--Macaulay local rings. Recently, Iyengar and Takahashi \cite{ua} considered the cohomology annihilator of a noetherian ring that is finitely generated as a module over its center, that is, a noether algebra.
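As a simple illustration of these notions (this example is only meant for orientation and is a standard computation), consider $R=k[x]/(x^2)$ with maximal ideal $\m=(x)$. Every finitely generated $R$-module is a direct sum of copies of $R$ and $k$, and from the minimal free resolution $\cdots\xrightarrow{\,x\,}R\xrightarrow{\,x\,}R\to k\to0$ one obtains, for every $R$-module $N$ and every $i\ge1$,
\[
\Ext_R^i(k,N)\cong\{n\in N : xn=0\}/xN,
\]
on which $x$ acts as zero. Since $\Ext_R^1(k,k)\cong k\neq0$, it follows that $\ca(R)=\ca^1(R)=\m$; in particular $\ca(R)$ is $\m$-primary, in accordance with the results quoted above for rings of finite Cohen--Macaulay representation type.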
On the other hand, the notion of dimension for triangulated categories was introduced by Bondal--Van den Bergh and Rouquier in \cite{BV, R} and analogues for abelian categories by Dao--Takahashi \cite{radius, dim}. These essentially indicate the number of {\em extensions} necessary to build all objects out of a single object. Using the dimension of a bounded derived category, Rouquier presented the first example of an artin algebra of representation dimension greater than three \cite{R1}. Since then, various studies concerning the finiteness of the dimension of a bounded derived category have been made; see Remark \ref{hist}. Iyengar and Takahashi \cite{ua} investigated the relationship between the existence of non-trivial cohomology annihilators and the existence of strong generators for the category of finitely generated modules and its derived categories. The main theme of this paper is to study cohomology annihilators of commutative noetherian rings and investigate their connections with the dimensions of a bounded derived category and a singularity category, as well as the radius of a subcategory of finitely generated modules. It is shown that having a non-trivial cohomology annihilator guarantees the existence of a non-negative integer $n$ such that a given $R$-module $M$ is built out of the syzygies of an $R/{\ca(R)}$-module by taking $n$ {\em extensions}, up to finite direct sums and direct summands. It also turns out that the subcategory of $\mod R$ consisting of modules whose nonfree loci are contained in the set of prime ideals containing $\ca(R)$ is the extension closure of syzygies of $R/{\p}$, where $\p$ runs over the prime ideals containing $\ca(R)$. Indeed, we shall prove a more general result in Theorem \ref{mthm}, which is the main result of this paper; all of the other results given here are deduced from it.
As the first application of this result, we show that $\ca(R)$ is $\m$-primary if and only if for some integer $n\ge 0$, the subcategory consisting of $n$-th syzygies, $\syz^n(\mod R)$, is built out of syzygies of the residue field of $R$ in a finite number of extensions, direct sums and direct summands. Moreover, these statements imply that the dimensions of the bounded derived category and the singularity category are finite. Surprisingly, it is shown that all of these statements are equivalent, assuming $R$ is Gorenstein; see Theorem \ref{5cond}. It is well-known that the singular locus of $R$ is contained in the closed subset defined by the cohomology annihilator of $R$, $V(\ca(R))$. In Theorem \ref{sing}, we investigate when this containment becomes an equality and when the cohomology annihilator of $R$ is non-trivial. Another application of Theorem \ref{mthm} shows that the subcategory of $R$-modules that are locally free on the punctured spectrum of $R$ is built out of syzygies of modules of finite length by taking $d$ {\em extensions} in $\mod R$ up to finite direct sums and direct summands, where $d$ is the Krull dimension of $R$, and consequently, this subcategory is contained in the extension closure of syzygies of the residue field $k$; see Theorem \ref{tdthm}. So, this result removes the assumption that $R$ is Cohen--Macaulay from \cite[Theorem 2.4]{stcm}. Also, it is shown that $\ca(R)$ is $\m$-primary if and only if the cohomology annihilator of the ($\m$-adic) completion $\widehat R$ of $R$, $\ca(\widehat{R})$, is $\widehat{\m}$-primary, provided that $\widehat{R}$ is an isolated singularity; see Theorem \ref{cahat}. The importance of this result comes from the fact that it has long been known that many nice properties of local rings need not be inherited by their completions, and this has always been regarded as a theoretical limitation.
It is also proved that finiteness and countability of the set of isomorphism classes of indecomposable maximal Cohen--Macaulay modules ascends and descends between a Henselian local ring $R$ and its completion, whenever $\widehat{R}$ is an isolated singularity; see Corollary \ref{fcmth}. So, in this case Schreyer's conjecture \cite[Conjecture 7.3]{S} holds true. The proof of this result enables us to show that, under the same assumption, $R$ satisfies the Auslander--Reiten conjecture if and only if its completion does so. \section{Basic definitions} This section is devoted to stating the definitions and basic properties of notions which we will freely use in the later sections. Let us start with our convention. \begin{conv} Throughout the paper, let $R$ be a commutative noetherian ring with identity. We assume that all modules are finitely generated and that all subcategories are full and strict (i.e., closed under isomorphism). \end{conv} \begin{dfn} (1) We denote by $\mod R$ the category of (finitely generated) $R$-modules.\\ (2) The {\em singular locus} of $R$, denoted by $\sing R$, is by definition the set of prime ideals $\p$ of $R$ such that $R_{\p}$ is not a regular local ring.\\ (3) A local ring $(R,\m)$ is called an {\em isolated singularity} if $R_\p$ is regular for all nonmaximal prime ideals $\p$, that is, $\sing R\subseteq \{\m\}$. \end{dfn} \begin{dfn} Let $M$ be an $R$-module.\\ (1) The {\em nonfree locus} $\nf(M)$ of $M$ is defined as the set of prime ideals $\p$ of $R$ such that $M_\p$ is a nonfree $R_\p$-module. It is well-known (and easy to see) that $\nf(M)$ is a closed subset of $\spec R$ in the Zariski topology.\\ (2) We say that $M$ is {\em locally free on the punctured spectrum of $R$} if $M_\p$ is a free $R_\p$-module for all nonmaximal prime ideals $\p$, namely, $\nf(M)\subseteq\{\m\}$.\\ (3) Let $\xx$ be a sequence of elements of $R$. Then $\K(\xx, M)$ denotes the {\em Koszul complex} of $M$ with respect to $\xx$.
For each integer $i$ the $i$-th homology $\H_i(\xx, M):=\H_i(\K(\xx, M))$ is called the {\em $i$-th Koszul homology} of $M$ with respect to $\xx$. The direct sum $\H(\xx, M):=\bigoplus_{i\in\Z}\H_i(\xx, M)$ is called the {\em Koszul homology} of $M$ with respect to $\xx$. \end{dfn} \begin{dfn} Let $\X$ be a subcategory of $\mod R$.\\ (1) We say that $\X$ is a {\em resolving} subcategory of $\mod R$ if it contains projective modules and is closed under direct summands, extensions and kernels of epimorphisms. This notion has been introduced by Auslander and Bridger \cite{ab}.\\ (2) $\X$ is said to be a {\em Serre} subcategory if it is closed under submodules, quotient modules and extensions. This is equivalent to saying that for each short exact sequence $0\to L\to M\to N\to 0$ in $\mod R$ the module $M$ is in $\X$ if and only if $L,N$ are in $\X$.\\ (3) We say that $\X$ is a {\em thick} subcategory of $\mod R$ if it is closed under direct summands and short exact sequences. The latter condition means that for each short exact sequence $0\to L\to M\to N\to 0$ in $\mod R$, if two of $L,M,N$ are in $\X$, then so is the third.\\ (4) For each integer $n\ge 0$, let $\syz^n\X$ denote the subcategory of $\mod R$ consisting of $n$-th syzygies of $R$-modules in $\X$, namely, those modules $M$ which admit an exact sequence $0 \to M \to P_{n-1} \to \cdots \to P_0 \to X \to 0$ with each $P_i$ free and $X\in\X$.\\ (5) The {\em additive closure} $\add\X$ (respectively, {\em extension closure} $\ext\X$) of $\X$ is by definition the smallest subcategory of $\mod R$ containing $\X$ and closed under finite direct sums and direct summands (respectively, direct summands and extensions). We denote by $\thick\X$ the {\em thick closure} of $\X$, namely, the smallest thick subcategory of $\mod R$ containing $\X$.\\ (6) Let $\CT$ be a triangulated category. A {\em thick subcategory} of $\CT$ is by definition a triangulated subcategory of $\CT$ closed under direct summands.
The {\em thick closure} of a subcategory $\Y$ of $\CT$, denoted $\thick\Y$, is defined as the smallest thick subcategory of $\CT$ containing $\Y$. When $\Y$ consists of a single object $M$, we denote it by $\thick M$. \end{dfn} \begin{rem}\label{rem} If $n\ge1$, then the subcategory $\syz^n\X$ is closed under direct sums with free modules, that is to say, if $M$ is an $R$-module in $\syz^n\X$, then so is $M\oplus F$ for all free $R$-modules $F$. In fact, if there is an exact sequence $0 \to M \xrightarrow{f} P_{n-1} \xrightarrow{g} P_{n-2} \to \cdots \to P_0 \to X \to 0$ with each $P_i$ free and $X\in\X$, then the sequence $0 \to M\oplus F \xrightarrow{\left(\begin{smallmatrix} f&0\\ 0&1 \end{smallmatrix}\right)} P_{n-1}\oplus F \xrightarrow{\left(\begin{smallmatrix} g&0 \end{smallmatrix}\right)} P_{n-2} \to \cdots \to P_0 \to X \to 0$ is exact. \end{rem} \begin{dfn} Let $W$ be a subset of $\spec R$.\\ (1) The {\em dimension} of $W$, denoted $\dim W$, is defined as the supremum of $\dim R/{\p}$ where $\p$ runs through all prime ideals in $W$. Hence $\dim W=-\infty$ if and only if $W$ is empty. \\ (2) We denote by $\supp^{-1}(W)$ (respectively, $\nf^{-1}(W)$) the subcategory of $\mod R$ consisting of $R$-modules whose supports (respectively, nonfree loci) are contained in $W$. Note that $\supp^{-1}(W)$ (respectively, $\nf^{-1}(W)$) is a Serre (respectively, resolving) subcategory of $\mod R$. \end{dfn} \begin{dfn} For subcategories $\X_1,\dots,\X_n$ of $\mod R$, we denote by $\bigoplus_{i=1}^n\X_i$ the subcategory of $\mod R$ consisting of modules of the form $\bigoplus_{i=1}^nX_i$ with $X_i\in\X_i$. For a family ${\{\X_\lambda\}}_{\lambda\in\Lambda}$ of subcategories of $\mod R$ we denote by $\bigcup_{\lambda\in\Lambda}\X_\lambda$ the subcategory of $\mod R$ consisting of modules $M$ such that $M\in\X_\lambda$ for some $\lambda\in\Lambda$.
\end{dfn} \begin{dfn}\cite[Definition 3.2]{R} Let $\CT$ be a triangulated category.\\ (1) Let $\CI,\CI_1,\CI_2$ be subcategories of $\CT$. Let $\CI_1 \ast\CI_2$ denote the subcategory of $\CT$ consisting of objects $M$ such that there exists an exact triangle $I_1\to M\to I_2\to I_1[1]$ with $I_1\in\CI_1$ and $I_2\in\CI_2$. Denote by $\langle\CI\rangle$ the smallest subcategory of $\CT$ that contains $\CI$ and is closed under taking finite direct sums, direct summands and shifts. Inductively one defines $\langle\CI\rangle_0=0$ and $\langle\CI\rangle_r=\langle\langle\CI\rangle_{r-1}\ast \langle\CI\rangle\rangle$ for $r\geq 1$. For $\CI=\{M\}$ we simply denote it by $\langle M\rangle_r$.\\ (2) The dimension $\dim\CT$ of $\CT$ is defined as the infimum of integers $n\ge0$ such that $\CT=\langle M\rangle_{n+1}$ for some $M\in\CT$. \end{dfn} \begin{dfn}\cite[Definition 2.3]{radius}\label{raddef} Let $\X,\Y,\C$ be subcategories of $\mod R$.\\ (1) We denote by $[\X]$ the smallest subcategory of $\mod R$ containing $\{R\}\cup\X$ that is closed under finite direct sums, direct summands and syzygies, i.e., $[\X]=\add\{\,\syz^iX\mid i\ge0,\,X\in\X\,\}$. When $\X$ consists of a single object $M$, we simply denote it by $[M]$.\\ (2) We denote by $\X \circ \Y$ the subcategory of $\mod R$ consisting of objects $M$ that fit into an exact sequence $0\lrt X\lrt M\lrt Y\lrt 0$ in $\mod R$ with $X\in\X$ and $Y\in\Y$. We set $\X \bullet \Y=[[\X]\circ [\Y]]$.\\ (3) The {\em ball of radius $r$ centered at $\C$}, denoted by $[\C]_r$, is defined by $[\C]_0=0$ and $[\C]_r=[\C]_{r-1}\bullet[\C]=[[\C]_{r-1}\circ[\C]]$ for $r\ge1$. For $\C=\{M\}$ we simply denote it by $[M]_r$.\\ (4) The {\em radius of $\X$}, denoted by $\radius\X$, is defined as the infimum of integers $n\ge 0$ such that there exists a ball of radius $n+1$ centered at a module containing $\X$. \end{dfn} \begin{dfn}\cite[Definition 5.1]{radius} Let $\X,\Y$ be subcategories of $\mod R$.
Denote by $\X \circ \Y$ the subcategory of $\mod R$ consisting of objects $M$ that fit into an exact sequence $0\lrt X\lrt M\lrt Y\lrt 0$ in $\mod R$ with $X\in\X$ and $Y\in\Y$. Set $\X \bullet \Y=||\X|\circ |\Y||$, where $|\X|:=\add\X$. Define $|\X|_r$ for each $r\ge0$ analogously to Definition \ref{raddef}(3). This is nothing but the subcategory of $\mod R$ consisting of the $R$-modules $M$ that are direct summands of an $R$-module $N$ admitting a filtration $0=N_0\subseteq N_1\subseteq \cdots\subseteq N_r=N$ of $R$-submodules whose subquotients are in $\add\X$. Note that $\ext\X={|\X|}_\infty:=\bigcup_{r\ge0}{|\X|}_r$. \end{dfn} \begin{dfn} (1) We denote by $\db(R)$ the derived category of bounded complexes of $R$-modules, and identify $\mod R$ with the subcategory of $\db(R)$ consisting of complexes concentrated in degree zero. Recall that the {\sl derived dimension} of $R$, denoted $\der.\dim R$, is defined to be the dimension of the triangulated category $\db(R)$.\\ (2) An $R$-complex is called {\it perfect} if it is a bounded complex of projective $R$-modules. The {\it singularity category} $\ds(R)$ of $R$, which is also called the {\em stable derived category} of $R$, is defined to be the Verdier quotient of $\db(R)$ by the perfect complexes. For the definition of the Verdier quotient, we refer to \cite[Remark 2.1.9]{Ne}. Whenever the singularity category $\ds(R)$ is discussed, we identify each object or subcategory of $\mod R$ with its image in $\ds(R)$ by the composition of the canonical functors $\mod R\rt\db(R)\rt\ds(R)$. The category $\ds(R)$ has been introduced and studied by Buchweitz \cite{bu} in connection with maximal Cohen--Macaulay modules over Gorenstein rings. In recent years, it has been investigated by Orlov \cite{o1} in relation to the Homological Mirror Symmetry Conjecture.\\ (3) Let $R$ be a {\em (Iwanaga-)Gorenstein} ring, that is, $R$ has finite injective dimension as an $R$-module.
The {\em stable category} $\lcm(R)$ of maximal Cohen--Macaulay modules over $R$ is defined as follows. The objects are the maximal Cohen--Macaulay $R$-modules, i.e., the $R$-modules $M$ with $\Ext^i_R(M, R)=0$ for all $i>0$. The hom-set $\Hom_{\lcm(R)}(M, N)$ is the quotient of $\Hom_R(M, N)$ by the $R$-submodule consisting of the homomorphisms $M\to N$ factoring through a projective $R$-module. It is known that $\lcm(R)$ is triangulated and triangle equivalent to $\ds(R)$. \end{dfn} \begin{rem}\label{hist} The importance of the notion of derived dimension was first recognized by Bondal and Van den Bergh \cite{BV} in relation to representability of functors. In fact, they proved that smooth proper commutative/non-commutative varieties have finite derived dimension, which yields that every contravariant cohomological functor of finite type to vector spaces is representable. Rouquier \cite{R} proved that the category of coherent sheaves on a separated scheme of finite type over a field has finite derived dimension. Using the notion of derived dimension, Rouquier \cite{R1} constructed the first example of an artin algebra of representation dimension greater than three, thereby solving a long-standing open problem originating in Auslander's Queen Mary notes of 1971. It is known that artinian algebras have finite derived dimension \cite{R}. Christensen, Krause and Kussin \cite{C, KK} showed that rings of finite global dimension have finite derived dimension, see also \cite[Proposition 8.3]{R}. More recently, Aihara and Takahashi \cite{AT} proved that the derived dimension of a complete local ring with perfect coefficient field is finite. For small values of derived dimension, a number of definitive results have been obtained. Rings of derived dimension zero have been classified.
It is shown by Chen, Ye and Zhang \cite{CYZ} (see also \cite[Theorem 12.20]{Be}) that a finite dimensional algebra over an algebraically closed field has derived dimension zero if and only if it is an iterated tilted algebra of Dynkin type. Recently, Iyengar and Takahashi \cite{ua} proved that an equicharacteristic excellent local ring and a commutative ring essentially of finite type over a field have finite derived dimension. \end{rem} \begin{dfn}\cite[Definition 2.1]{ua} For each integer $n\ge0$, we set $$ \textstyle\ca^n(R)=\ann_R\Ext_R^{\geqslant n}(\mod R,\mod R)=\bigcap_{M,N\in\mod R,\,i\ge n}\ann_R\Ext_R^i(M,N), $$ and call $\ca(R)=\bigcup_{n\ge0}\ca^n(R)$ the {\em cohomology annihilator} of $R$. There is an ascending chain $0=\ca^0(R)\subseteq\ca^1(R)\subseteq\ca^2(R)\subseteq\cdots$ of ideals of $R$, and this chain stabilizes as $R$ is noetherian, namely, $\ca(R)=\ca^n(R)$ for $n\gg0$. Note also that $\ca^n(R)=\ann_R\Ext_R^n(\mod R,\mod R)=\bigcap_{M,N\in\mod R}\ann_R\Ext_R^n(M,N)$. As we have mentioned in the introduction, Iyengar and Takahashi \cite{ua} mainly investigated the relation between the existence of non-trivial cohomology annihilators and the existence of strong generators for the category of finitely generated modules. Recall that a finitely generated $R$-module $G$ is a {\em strong generator} for $\mod R$ if there exist non-negative integers $s$ and $n$ such that $\syz^s(\mod R)\subseteq|G|_n$. \end{dfn} \section{annihilation of cohomology and finiteness of derived dimension} This section reveals a close link between the notion of cohomology annihilator and finiteness of the derived dimension as well as the dimension of the singularity category. We start by stating and proving the most general structure theorem of this paper; all of the other results are deduced from it. \begin{theorem}\label{mthm} Let $I$ be an ideal of $R$. Let $\xx=x_1,\dots,x_n$ be a system of generators of $I$.
\begin{enumerate}[\rm(1)] \item Let $M$ be an $R$-module such that $I\Ext_R^1(M,\syz_RM)=0$. Then there exists an $R/I$-module $L$ such that $M$ belongs to ${|\bigoplus_{i=0}^n\syz_R^iL|}_{n+1}$. \item One has \begin{align*} \nf^{-1}(\v(I))&=\textstyle\ext\left\{\syz_R^i(R/\p)\mid0\le i\le n,\,\p\in\v(I)\right\}\\ &=\textstyle{\left|\bigoplus_{i=0}^n\ext\syz_R^i\{R/\p\}_{\p\in\v(I)}\right|}_{n+1}\\ &=\textstyle{\left|\bigcup_{e>0,\,0\le i\le n}\syz_R^i(\mod R/I^{[e]})\right|}_{n+1}\\ &=\textstyle\bigcup_{e>0}\bigcup_{X\in\,\bigoplus_{i=0}^n\syz_R^i(\mod R/I^{[e]})}{|X|}_{n+1}, \end{align*} where $I^{[e]}$ is the ideal generated by $\xx^e=x_1^e,\dots,x_n^e$. \end{enumerate} \end{theorem} \begin{proof} (1) Set $H_i=\H_i(\xx,M)$ and $L=H_0\oplus\cdots\oplus H_n$. Since $\xx$ annihilates each $H_i$, one can regard $L$ as a module over $R/I$. Using \cite[Lemma 2.13]{ua} and \cite[Corollary 3.2(2)]{kos}, we find exact sequences $$ 0 \to H_i \to E_i \to \syz_RE_{i-1} \to 0\quad(1\le i\le n) $$ of $R$-modules with $E_0=H_0$ such that $M$ is a direct summand of $E_n$. Hence for each $1\le i\le n$ there is an exact sequence $$ 0 \to \syz_R^{n-i}H_i \to \syz_R^{n-i}E_i \to \syz_R^{n-i+1}E_{i-1} \to 0, $$ and an inductive argument shows that $M$ is in ${|\bigoplus_{i=0}^n\syz_R^iL|}_{n+1}$. (2) We begin by showing the inclusion \begin{equation}\label{1} \nf^{-1}(\v(I))\supseteq\ext\{\syz_R^i(R/\p)\mid0\le i\le n,\,\p\in\v(I)\}. \end{equation} Since $\nf^{-1}(\v(I))$ is resolving, it is enough to check that $\syz_R^i(R/\p)$ belongs to $\nf^{-1}(\v(I))$ for $0\le i\le n$ and $\p\in\v(I)$. Let $\q$ be a prime ideal of $R$. The $R_\q$-module $\syz_R^i(R/\p)_\q$ is stably isomorphic to $\syz_{R_\q}^i(R_\q/\p R_\q)$, and hence if $\q$ does not contain $\p$, then it is $R_\q$-free. Hence we have $\nf(\syz_R^i(R/\p))\subseteq\v(\p)\subseteq\v(I)$, which implies that $\syz_R^i(R/\p)$ is in $\nf^{-1}(\v(I))$. Thus \eqref{1} holds.
It is easy to see that the following two inclusions hold. \begin{equation}\label{1.5} \textstyle\ext\{\syz_R^i(R/\p)\mid0\le i\le n,\,\p\in\v(I)\}\supseteq{\left|\bigoplus_{i=0}^n\ext\syz_R^i\{R/\p\}_{\p\in\v(I)}\right|}_{n+1}, \end{equation} \begin{equation}\label{3} \textstyle{\left|\bigcup_{e>0,\,0\le i\le n}\syz_R^i(\mod R/I^{[e]})\right|}_{n+1}\supseteq\bigcup_{e>0}\bigcup_{X\in\,\bigoplus_{i=0}^n\syz_R^i(\mod R/I^{[e]})}{|X|}_{n+1}. \end{equation} Next, let us prove the inclusion \begin{equation}\label{2} \textstyle{\left|\bigoplus_{i=0}^n\ext\syz_R^i\{R/\p\}_{\p\in\v(I)}\right|}_{n+1}\supseteq{\left|\bigcup_{e>0,\,0\le i\le n}\syz_R^i(\mod R/I^{[e]})\right|}_{n+1}. \end{equation} Fix integers $e>0$ and $0\le i\le n$. Pick an $R$-module $M$ in $\mod R/I^{[e]}$. Take a filtration $$ 0=M_0\subseteq M_1\subseteq\cdots\subseteq M_r=M $$ of $R$-submodules such that for each $1\le j\le r$ one has $M_j/M_{j-1}\cong R/\p_j$ with $\p_j\in\supp M$. As $I^{[e]}$ annihilates $M$, the support of $M$ is contained in $\v(I)$. Hence $\p_j$ is in $\v(I)$. Applying the syzygy functor $\syz_R^i$ to the above filtration, we observe that $\syz_R^iM$ belongs to $\ext\syz_R^i\{R/\p\}_{\p\in\v(I)}$. Now \eqref{2} follows. Finally, we show that \begin{equation}\label{4} \textstyle\bigcup_{e>0}\bigcup_{X\in\,\bigoplus_{i=0}^n\syz_R^i(\mod R/I^{[e]})}{|X|}_{n+1}\supseteq\nf^{-1}(\v(I)). \end{equation} Let $M$ be an $R$-module whose nonfree locus is contained in $\v(I)$. It then follows from \cite[Lemma 3.4]{kos} that there exists an integer $e>0$ such that the sequence $\xx^e=x_1^e,\dots,x_n^e$ annihilates $\Ext_R^i(M,N)$ for all $i>0$ and all $N\in\mod R$. By (1) there is an $R/I^{[e]}$-module $L$ such that $M$ belongs to ${|X|}_{n+1}$, where $X:=\bigoplus_{i=0}^n\syz_R^iL$. This shows the inclusion \eqref{4}. Combining \eqref{1}, \eqref{1.5}, \eqref{3}, \eqref{2} and \eqref{4} completes the proof of the theorem. 
\end{proof} The theorem below highlights the benefit of considering cohomology annihilators for commutative rings. In fact, this result makes precise the close link between the notion of cohomology annihilator and other well-studied notions such as derived dimension, singularity dimension and strong generators for module categories. \begin{theorem}\label{5cond} Let $(R,\m,k)$ be a $d$-dimensional local ring. Consider the following five conditions. \begin{enumerate}[\rm(1)] \item $\ca(R)$ is $\m$-primary. \item $\syz^n(\mod R)\subseteq{|\bigoplus_{i=0}^d\syz^ik|}_r$ for some $n\ge0$ and $r\ge1$. \item $R$ is an isolated singularity, and $\syz^n(\mod R)$ has finite radius for some $n\ge0$. \item $R$ is an isolated singularity, and $\db(R)$ has finite dimension. \item $R$ is an isolated singularity, and $\ds(R)$ has finite dimension. \end{enumerate} Then the implications {\rm(1)} $\Leftrightarrow$ {\rm(2)} $\Leftrightarrow$ {\rm(3)} $\Rightarrow$ {\rm(4)} $\Rightarrow$ {\rm(5)} hold. If $R$ is Gorenstein, then the five conditions are equivalent. \end{theorem} \begin{proof} (1) $\Rightarrow$ (2): There are integers $n\ge0$ and $t\ge1$ such that $\m^t\Ext_R^{>n}(\mod R,\mod R)=0$. Take a parameter ideal $Q$ of $R$ contained in $\m^t$. Then $Q\Ext_R^{>0}(\syz^n(\mod R),\mod R)=0$. Let $s\ge1$ be the Loewy length of the artinian ring $R/Q$, i.e., the minimal integer $i$ with $\m^i(R/Q)=0$. The first assertion of Theorem \ref{mthm} implies that for each $R$-module $M$ in $\syz^n(\mod R)$ there exists an $R/Q$-module $L$ such that $M$ is in ${|\bigoplus_{i=0}^d\syz^iL|}_{d+1}$. Note that $L$ belongs to ${|k|}_s$ as an $R$-module. Hence $M$ is in ${|\bigoplus_{i=0}^d\syz^ik|}_{s(d+1)}$. (2) $\Rightarrow$ (3): For each nonmaximal prime ideal $\p$ of $R$ we have $\syz^n(\mod R_\p)\subseteq{|(\bigoplus_{i=0}^d\syz^ik)_\p|}_r=\add R_\p$, which shows that $R_\p$ is regular (of dimension at most $n$). Hence $R$ is an isolated singularity. 
It is obvious that $\syz^n(\mod R)$ has finite radius. (3) $\Rightarrow$ (1): We see from \cite[Theorem 4.3]{ua} that $\sing R=\v(\ca(R))$. Since $R$ is an isolated singularity, the ideal $\ca(R)$ is $\m$-primary. (2) $\Rightarrow$ (4): Set $G=\bigoplus_{i=0}^d\syz^ik$ and pick any module $M$ in $\mod R$. Condition (2) implies that $\syz^nM$ is in $|G|_r$. It is easy to see that in $\db(R)$ the module $M$ belongs to $\langle G\oplus R\rangle_{r+n}$. Hence $\db(R)=\langle G\oplus R\rangle_{2(r+n)}$ by \cite[Proposition 2.6]{AAITY}. (4) $\Rightarrow$ (5): The implication holds trivially. Now, suppose that $R$ is Gorenstein, and let us show the implication (5) $\Rightarrow$ (1). By \cite[Proposition 4.3]{BFK} (see also \cite[Lemma 5.3(2)]{dim}), there exists an integer $t\ge1$ such that $\m^t\Hom_{\ds(R)}(X,Y)=0$ for all $X,Y\in\ds(R)$. Since $R$ is Gorenstein, Theorem 4.4.1 of \cite{bu} yields that the singularity category $\ds(R)$ is equivalent to the stable category $\lcm(R)$ of maximal Cohen--Macaulay $R$-modules as an $R$-linear triangulated category, and hence $\m^t\Hom_{\lcm(R)}(M,N)=0$ for all $M,N\in\lcm(R)$. For $R$-modules $A,B$ there are isomorphisms $$ \Ext_R^{d+1}(A,B)\cong\Ext_R^1(\syz^dA,B)\cong\Ext_R^2(\syz^dA,\syz B)\cong\cdots\cong\Ext_R^{d+1}(\syz^dA,\syz^dB) $$ of $R$-modules; the first isomorphism is clear, and the other isomorphisms follow from the fact that $\syz^dA$ is a maximal Cohen--Macaulay module over the Gorenstein local ring $R$. Since there is an isomorphism $\Ext_R^{d+1}(\syz^dA,\syz^dB)\cong\Hom_{\lcm(R)}(\syz^dA,\syz^{-d-1}\syz^dB)$, we observe $\m^t\Ext_R^{d+1}(\mod R,\mod R)=0$, which implies that $\m^t$ is contained in $\ca^{d+1}(R)$, whence in $\ca(R)$. \end{proof} It is fairly easy to see that the singular locus of $R$ is contained in the defining closed subset of the cohomology annihilator ideal of $R$, $V(\ca(R))$; see \cite[Lemma 2.9(2)]{ua}. 
So it is natural to ask: for which rings is there an equality $\sing R=V(\ca(R))$? As an attempt in this direction, Iyengar and Takahashi have shown that the equality holds for a commutative noetherian ring $R$ which is either a finitely generated algebra over a field or an equicharacteristic excellent local ring; see \cite[Theorems 5.3, 5.4]{ua} (see also \cite[Theorem 4.3]{ua}). The result below can be considered as a partial answer to the raised question. \begin{theorem}\label{sing} Let $(R, \m)$ be a $d$-dimensional Gorenstein local ring with $\dim\ds(R)<\infty$ (e.g., with finite derived dimension). Then $V(\ca(R))=\sing R$. In particular, if $R$ is reduced, then the ideal $\ca(R)$ contains a nonzerodivisor. \end{theorem} \begin{proof} If $R$ is reduced, then $\dim\sing R<d$. The last assertion follows from this. Let us show the first assertion. Combining our assumption with \cite[Theorem 4.4.1]{bu}, we have $\dim\lcm(R)<\infty$. Take an object $M\in\cm(R)$ such that $\lcm(R)={\langle M\rangle}_n$ for some integer $n>0$. Set $I=\ann_{R}\Ext_{R}^1(M, \syz^1M)$, and apply \cite[Lemma 2.13]{ua} to conclude that $I\Ext_{R}^i(M, \cm(R))=0$ for all $i>0$. It is seen that $I\Hom_{\lcm(R)}({\langle M\rangle},\lcm(R))=0$. Pick any $X,Y\in\lcm(R)={\langle M\rangle}_n$. Then there is an exact triangle $A \to B \to C \to A[1]$ in $\lcm(R)$ with $A\in{\langle M\rangle}_{n-1}$ and $C\in\langle M\rangle$ such that $X$ is a direct summand of $B$. There is an exact sequence $$ \Hom_{\lcm(R)}(C,Y)\to\Hom_{\lcm(R)}(B,Y)\to\Hom_{\lcm(R)}(A,Y). $$ By induction on $n$, the ideals $I$ and $I^{n-1}$ annihilate $\Hom_{\lcm(R)}(C,Y)$ and $\Hom_{\lcm(R)}(A,Y)$ respectively. Hence $I^n$ annihilates $\Hom_{\lcm(R)}(B,Y)$, and therefore also $\Hom_{\lcm(R)}(X,Y)$. It follows that $I^n\Hom_{\lcm(R)}(\lcm(R),\lcm(R))=0$, which implies $I^n\Ext_R^{>0}(\cm(R),\cm(R))=0$. We thus obtain $V(\ca(R))\subseteq V(I)$. In view of \cite[Lemma 2.9(2)]{ua}, it suffices to show that $V(I)\subseteq\sing R$.
To see this, let $\p$ be a prime ideal which is not in $\sing R$. Then $R_{\p}$ is a regular ring. As $M$ is maximal Cohen--Macaulay, $M_{\p}$ is a free $R_{\p}$-module, implying that $\p$ does not belong to $V(I)$. \end{proof} \begin{rem} A more general version of Theorem \ref{sing} (for not necessarily local rings) appears in \cite{jc}. \end{rem} The next theorem asserts that the existence of an ideal of $R$ whose defining closed subset covers the nonfree locus of $\syz^n(\mod R)$ for some $n\ge 0$ guarantees the existence of a subcategory $\X$ of $\mod R$ such that $\syz^n(\mod R)$ is contained in the extension closure of syzygies of $\X$, as well as in the thick closure of $\X\cup\{R\}$; this in turn ensures that the singular locus of $R$ has finite dimension. We first state a lemma, which is essentially included in \cite[Lemma 3.4]{kos}. \begin{lem}\label{nfis} Let $I$ be an ideal of $R$. Let $M$ be an $R$-module. The following are equivalent for each integer $n\ge0$. \begin{enumerate}[\rm(1)] \item The syzygy $\syz^nM$ is in $\nf^{-1}(\v(I))$. \item There exists an integer $t>0$ such that $I^t\Ext_R^{>n}(M,\mod R)=0$. \end{enumerate} \end{lem} \begin{proof} (1) $\Rightarrow$ (2): It follows from \cite[Lemma 3.4]{kos} that $I^t\Ext_R^{>0}(\syz^nM,\mod R)=0$ for some $t>0$. Hence $I^t\Ext_R^{>n}(M,\mod R)=0$. (2) $\Rightarrow$ (1): Setting $N=\syz^nM$, we have $I^t\Ext_R^1(N,\syz N)=0$. Therefore $\v(I)$ contains the support of the $R$-module $\Ext_R^1(N,\syz N)$, which coincides with $\nf(N)$. \end{proof} For a prime ideal $\p$ of $R$ we denote by $\X_\p$ the subcategory of $\mod R_\p$ consisting of modules of the form $X_\p$ with $X\in\X$. \begin{theorem}\label{1dim} Let $R$ be a local ring, and let $r\ge0$ be an integer. Consider the following four conditions. \begin{enumerate}[\rm(1)] \item There exist an ideal $I$ of $R$ with $\dim R/I\le r$ and an integer $n\ge0$ such that for each $R$-module $M$ there is an integer $t>0$ with $I^t\Ext_R^{>n}(M,\mod R)=0$.
\item There exist a subcategory $\X$ of $\mod R$ with $\dim X\le r$ for all $X\in\X$ and an integer $n\ge0$ such that $\syz^n(\mod R)\subseteq{\left|\bigoplus_{i=0}^n\ext\syz^i\X\right|}_{n+1}$. \item There exist a subcategory $\X$ of $\mod R$ with $\dim X\le r$ for all $X\in\X$ and an integer $n\ge0$ such that $\syz^n(\mod R)\subseteq\thick(\X\cup\{R\})$. \item One has $\dim\sing R\le r$. \end{enumerate} Then the implications {\rm(1)} $\Rightarrow$ {\rm(2)} $\Rightarrow$ {\rm(3)} $\Rightarrow$ {\rm(4)} hold. If $\sing R$ is closed, the four conditions are equivalent. \end{theorem} \begin{proof} (1) $\Rightarrow$ (2): Lemma \ref{nfis} implies that $\syz^n(\mod R)$ is contained in $\nf^{-1}(\v(I))$. Putting $\X=\{R/\p\}_{\p\in\v(I)}$, we see from the second assertion of Theorem \ref{mthm} that $\nf^{-1}(\v(I))={\left|\bigoplus_{i=0}^n\ext\syz^i\X\right|}_{n+1}$. For each $\p\in\v(I)$ we have $\dim R/\p\le\dim R/I\le r$. (2) $\Rightarrow$ (3): It is straightforward that ${\left|\bigoplus_{i=0}^n\ext\syz^i\X\right|}_{n+1}$ is contained in $\thick(\X\cup\{R\})$. (3) $\Rightarrow$ (4): Take a prime ideal $\p$ of $R$ with $\dim R/\p>r$. Localization at $\p$ shows that $\syz^n(\mod R_\p)$ is contained in $\thick(\X_\p\cup\{R_\p\})$, which coincides with the subcategory of $\mod R_\p$ consisting of modules of finite projective dimension since $\X_\p=0$. Therefore $R_\p$ is regular. This shows that $\sing R$ has dimension at most $r$. Now assume that $\sing R$ is closed, and let us show that (4) implies (1). There is an ideal $I$ of $R$ with $\sing R=\v(I)$. As $\dim\sing R\le r$, we have $\dim R/I\le r$. Let $\p$ be a prime ideal in $\nf(\syz^d(\mod R))$, where $d=\dim R$. Then $\syz^dM_\p$ is not $R_\p$-free for some $R$-module $M$. Hence $R_\p$ is not regular, that is, $\p$ is in $\sing R$. Thus $\syz^d(\mod R)$ is contained in $\nf^{-1}(\v(I))$, and Lemma \ref{nfis} completes the proof. 
\end{proof} \section{extension-closed subcategories and annihilation of cohomology} In this section we will obtain a structure theorem on modules over a local ring $(R,\m)$ that are locally free on the punctured spectrum. Using this result, we study extension closures of syzygies of the residue field and investigate the relationships between cohomology annihilators of $R$ and its $\m$-adic completion. We denote by $\fl R$ the subcategory of $\mod R$ consisting of $R$-modules of finite length and by $\mod_0R$ the subcategory of $\mod R$ consisting of $R$-modules that are locally free on the punctured spectrum of $R$. The following result, which is a consequence of the second assertion of Theorem \ref{mthm}, concerns the structure of modules which are locally free on the punctured spectrum. We will observe that this result plays a key role in the proofs of other results in this section. \begin{theorem}\label{tdthm} Let $(R,\m,k)$ be a local ring of dimension $d$. Let $M$ be an $R$-module that is locally free on the punctured spectrum of $R$. Then $M$ belongs to ${|\bigcup_{i=t}^d\syz^i\fl R|}_{d-t+1}$, where $t=\depth M$. In particular, $M$ is in $\ext(\bigoplus_{i=t}^d\syz^ik)$. \end{theorem} \begin{proof} Take any system of parameters $\xx=x_1,\dots,x_d$ of $R$. Then we have $\nf(M)\subseteq\{\m\}\subseteq\v(\xx)$, so $M$ is in $\nf^{-1}(\v(\xx))$. It follows from the second assertion of Theorem \ref{mthm} that \begin{equation}\label{in} M\in{\left|\bigcup_{i=0}^d\syz^i\fl R\right|}_{d+1}. \end{equation} We can choose a sequence $\yy=y_1,\dots,y_t$ of elements in $R$ that is both an $M$-sequence and a subsystem of parameters of $R$. As $\nf(M)\subseteq\{\m\}\subseteq\v(\yy)$, applying \cite[Lemma 3.4]{kos} again, we obtain $\yy^e\Ext_R^{>0}(M,\mod R)=0$ for some $e>0$. Replacing $\yy$ with $\yy^e$, we may assume $\yy\Ext_R^{>0}(M,\mod R)=0$. By \cite[Corollary 3.2(1)]{kos} the module $M$ is isomorphic to a direct summand of $\syz_R^t(M/\yy M)$. 
In view of the containment \eqref{in} for the $R/\yy R$-module $M/\yy M$, it is seen that $M/\yy M$ is in ${|\bigoplus_{i=0}^{d-t}\syz_{R/\yy R}^i\fl(R/\yy R)|}_{d-t+1}$, where $|\ |$ is taken in $\mod R/\yy R$. Applying the $t$-th syzygy functor over $R$ yields $$ M\in{\left|\bigcup_{i=0}^{d-t}\syz_R^t\syz_{R/\yy R}^i\fl(R/\yy R)\right|}_{d-t+1}. $$ Let $L$ be a module in $\fl(R/\yy R)$, and take an exact sequence $$ 0 \to \syz_{R/\yy R}^iL \to P_{i-1} \to \cdots \to P_0 \to L \to 0 $$ of $R/\yy R$-modules with $P_0,\dots,P_{i-1}$ free. Applying the $t$-th syzygy functor over $R$ to this, we get an exact sequence $$ 0 \to \syz_R^t\syz_{R/\yy R}^iL \to \syz_R^tP_{i-1} \to \cdots \to \syz_R^tP_0 \to \syz_R^tL \to 0. $$ Note that $\syz_R^tP_0,\dots,\syz_R^tP_{i-1}$ are free $R$-modules. It follows that $\syz_R^t\syz_{R/\yy R}^iL$ is isomorphic to $\syz_R^{t+i}L\oplus F$ for some free $R$-module $F$. As $L$ is also of finite length over $R$, we obtain $$ M\in{\left|\bigcup_{i=0}^{d-t}\syz_R^{t+i}\fl R\cup\{R\}\right|}_{d-t+1}={\left|\bigcup_{i=t}^d\syz_R^i\fl R\cup\{R\}\right|}_{d-t+1}. $$ We claim that $R$ belongs to $\add(\syz^d\fl R)$. Indeed, if $d=0$, then $R$ has finite length, and belongs to $\add(\syz^d\fl R)=\mod R$. If $d\ge1$, then taking any $R$-module $K$ of finite length, we see from Remark \ref{rem} that $\syz^dK\oplus R\in\syz^d\fl R$, and hence $R\in\add(\syz^d\fl R)$. Consequently, $M$ belongs to ${|\bigcup_{i=t}^d\syz_R^i\fl R|}_{d-t+1}$, and the proof of the first assertion of the theorem is completed. The second assertion follows from the first assertion and the fact that $\fl R=\ext(k)$. \end{proof} \begin{rem} In the case where $R$ is Cohen--Macaulay, the last assertion in Theorem \ref{tdthm} is nothing but \cite[Theorem 2.4]{stcm}. Theorem \ref{tdthm} not only removes the Cohen--Macaulay assumption from \cite[Theorem 2.4]{stcm} but also gives a more precise structure of the module $M$.
\end{rem} We record here an immediate consequence of Theorem \ref{tdthm}. \begin{cor}\label{strmod0} Let $(R,\m,k)$ be a local ring of dimension $d$. Then $$ \mod_0R={\left|\bigcup_{i=0}^d\syz^i\fl R\right|}_{d+1}=\ext\left(\bigoplus_{i=0}^d\syz^ik\right). $$ \end{cor} \begin{proof} Theorem \ref{tdthm} implies $\mod_0R\subseteq{|\bigcup_{i=0}^d\syz^i\fl R|}_{d+1}\subseteq\ext(\bigoplus_{i=0}^d\syz^ik)$. Since $\mod_0R$ contains the module $\bigoplus_{i=0}^d\syz^ik$ and is closed under direct summands and extensions, it also contains $\ext(\bigoplus_{i=0}^d\syz^ik)$. \end{proof} The following result is a consequence of Corollary \ref{strmod0}. An analogous result is obtained in \cite[Theorem 3.2]{stcm} over a Cohen--Macaulay local ring. \begin{cor}\label{comp0} Let $(R,\m,k)$ be a $d$-dimensional local ring. For every $M\in\mod_0\widehat R$ there exists $N\in\mod_0R$ such that $M$ is a direct summand of $\widehat N$. \end{cor} \begin{proof} Set $G=\bigoplus_{i=0}^d\syz^ik\oplus R$. The module $M$ is in $\ext_{\widehat R}(\widehat G)$ by Corollary \ref{strmod0}. Using \cite[Proposition 3.1]{stcm}, we observe that there is an $R$-module $N$ in $\ext_R(G)$ such that $M$ is a direct summand of $\widehat N$, and $\ext_R(G)=\mod_0R$ by Corollary \ref{strmod0} again. \end{proof} The result below, which compares cohomology annihilators of a local ring $R$ and its completion, is an application of Corollary \ref{comp0}. \begin{theorem}\label{cahat} Let $R$ be a $d$-dimensional local ring. Let $n\ge0$ be an integer. \begin{enumerate}[\rm(1)] \item One has $\ca^n(\widehat R)\cap R\subseteq\ca^n(R)$. \item Suppose that $\widehat R$ is an isolated singularity. Then $\ca^n(R)\subseteq\ca^{n+d}(\widehat R)\cap R$. Hence $$ \ca(R)=\ca(\widehat R)\cap R. $$ In particular, $\ca(R)$ is $\m$-primary if and only if $\ca(\widehat R)$ is $\widehat\m$-primary. \end{enumerate} \end{theorem} \begin{proof} (1) Let $a\in\ca^n(\widehat R)\cap R$.
Then $a$ is an element of $R$ with $a\Ext_{\widehat R}^n(\mod\widehat R,\mod\widehat R)=0$. Let $M,N$ be $R$-modules. The element $a$ annihilates $\Ext_{\widehat R}^n(\widehat M,\widehat N)$, which is isomorphic to the completion of $\Ext_R^n(M,N)$. Since the canonical map $R\to\widehat R$ is faithfully flat (and hence pure), $a$ annihilates $\Ext_R^n(M,N)$. Therefore $a\in\ca^n(R)$. (2) There is nothing to show for $n=0$, so let us assume $n>0$. Take an element $a\in\ca^n(R)$. Let $X,Y$ be $\widehat R$-modules. Since $\widehat R$ is an isolated singularity, it is seen that $\syz_{\widehat R}^dX,\syz_{\widehat R}^dY$ are in $\mod_0\widehat R$. Corollary \ref{comp0} implies that there exist $R$-modules $M,N\in\mod_0R$ such that $\syz_{\widehat R}^dX,\syz_{\widehat R}^dY$ are direct summands of $\widehat M,\widehat N$ respectively. The element $a$ annihilates $\Ext_R^n(M,N)$, and completion shows that it also annihilates $\Ext_{\widehat R}^n(\widehat M,\widehat N)$. Hence $a$ annihilates $\Ext_{\widehat R}^n(\syz_{\widehat R}^dX,\syz_{\widehat R}^dY)$, which implies $$ a\Ext_{\widehat R}^{n+d}(X,\syz_{\widehat R}^dY)=0 $$ as $n>0$. Letting $Y=\syz_{\widehat R}^nX$, we get $a\Ext_{\widehat R}^{n+d}(X,\syz_{\widehat R}^{n+d}X)=0$, and $a\Ext_{\widehat R}^{\geqslant(n+d)}(X,\mod\widehat R)=0$ by \cite[Lemma 2.13]{ua}. Thus we obtain $a\in\ca^{n+d}(\widehat R)$. Now we have proved $\ca^n(R)\subseteq\ca^{n+d}(\widehat R)\cap R$. Combining this with (1) gives rise to the equality $\ca(R)=\ca(\widehat R)\cap R$. The last assertion is straightforward from this. \end{proof} \begin{rem} As we have mentioned in the introduction, rings whose cohomology annihilator is $\m$-primary can be viewed as a generalization of rings of finite CM-type. So it seems interesting to ask whether results of this kind remain true if one replaces the condition that $R$ has finite CM-type with the condition that the cohomology annihilator of $R$ is $\m$-primary.
Theorem \ref{cahat} answers this question positively without the Cohen--Macaulay assumption on the base ring. \end{rem} As a direct consequence of the above theorem in conjunction with Theorem \ref{5cond}, we include the following result. \begin{cor}Let $(R, \m)$ be a Gorenstein local ring such that $\widehat{R}$ is an isolated singularity. Then $\db(R)$ has finite dimension if and only if so does $\db(\widehat{R})$. \end{cor} \begin{proof}Assume that $\db(R)$ has finite dimension. Since $\widehat{R}$ is an isolated singularity, it is an elementary fact that the same is true for $R$. So Theorem \ref{5cond} forces $\ca(R)$ to be $\m$-primary, and then one may apply Theorem \ref{cahat} to conclude that $\ca(\widehat{R})$ is $\widehat{\m}$-primary as well. Now, another use of Theorem \ref{5cond} finishes the proof. \end{proof} In the Henselian case, we have the following result, which is stronger than Corollary \ref{comp0}. \begin{prop}\label{twice} Let $R$ be a Henselian local ring. \begin{enumerate}[\rm(1)] \item If $M$ is an indecomposable $R$-module, then $\widehat M$ is an indecomposable $\widehat R$-module. \item For each $M\in\mod_0\widehat R$ there exists $N\in\mod_0R$ such that $M\cong\widehat N$. \end{enumerate} \end{prop} \begin{proof} (1) This result is essentially proved in \cite[Proposition 3.1]{LW2}. For the convenience of the reader, we give a proof. Let $E=\End_{R}(M)$ be the endomorphism ring of $M$, and let $J$ be the Jacobson radical of $E$. Since $E$ is a module-finite $R$-algebra, it follows from \cite[Lemma 1.7]{lw} that $\m E\subseteq J$. Therefore $E/J$ is an $R$-module of finite length, and so we obtain isomorphisms $\widehat{E}/{\widehat{J}}\cong {{E}/J}\otimes_{R}\widehat{R}\cong E/J$. As $R$ is Henselian, according to \cite[Theorem 1.8]{lw}, $E/J$ is a division ring and consequently $\widehat{E}/{\widehat{J}}$ is a division ring as well, meaning that $\widehat{J}$ is a maximal ideal of $\widehat{E}$.
On the other hand, as $E/{\m E}$ is artinian, $J/{\m E}$ is a nilpotent ideal of $E/{\m E}$ and so the isomorphisms $\widehat{E}/\m\widehat{E}\cong E/\m E$ and $\widehat{J}/{\m\widehat{E}}\cong J/{\m E}$ yield that $\widehat{J}/{\m\widehat{E}}$ is also a nilpotent ideal of $\widehat{E}/{\m\widehat{E}}$. Hence $\widehat{J}/{\m\widehat{E}}$ is contained in the Jacobson radical of $\widehat{E}/\m\widehat{E}$, which is equal to $J'/{\m\widehat{E}}$ with $J'$ the Jacobson radical of $\widehat{E}$. Thus we obtain $\widehat{J}=J'$, ensuring that $\widehat{E}$ is a local ring, whence $\widehat{M}$ is an indecomposable $\widehat{R}$-module. (2) We may assume that $M$ is an indecomposable $\widehat R$-module. Corollary \ref{comp0} implies that there exists a module $X\in\mod_0R$ such that $M$ is a direct summand of $\widehat X$. Let $X=\bigoplus_{i=1}^nX_i$ be an indecomposable decomposition. Note that $X_i$ belongs to $\mod_0R$ for all $i$. Since $R$ is Henselian, assertion (1) guarantees that each $\widehat R$-module $\widehat{X_i}$ is indecomposable. We have $\widehat X\cong\bigoplus_{i=1}^n\widehat{X_i}$, and as $\widehat R$ is Henselian, $M\cong\widehat{X_t}$ for some $1\le t\le n$ by the Krull--Schmidt theorem. \end{proof} Recall that a commutative noetherian ring $R$ is said to be of {\em finite} (respectively, {\em countable}) {\em Cohen--Macaulay representation type} (CM-type, for short) if there exist only finitely (respectively, countably) many isomorphism classes of indecomposable maximal Cohen--Macaulay $R$-modules. It was conjectured by Schreyer \cite[Conjecture 7.3]{S} that a local ring $R$ is of finite CM-type if and only if the ($\m$-adic) completion of $R$ is of finite CM-type. Examples discovered by Leuschke and Wiegand \cite[Examples 2.1, 2.2]{LW} disproved this conjecture. However, several classes of rings do satisfy Schreyer's conjecture.
Namely, the conjecture was answered affirmatively by Wiegand \cite[Theorem 2.9(4)]{W} in the case where $R$ is a Cohen--Macaulay ring whose completion is an isolated singularity, and by Leuschke and Wiegand \cite[Main Theorem]{LW} in the case where $R$ is a Cohen--Macaulay excellent ring. We should also point out that Schreyer's conjecture holds true for all one-dimensional local rings; see \cite[Theorem 2.9(2)]{W}. The next result guarantees the validity of Schreyer's conjecture in the case where $R$ is a Henselian local ring whose completion $\widehat R$ is an isolated singularity. \begin{cor}\label{fcmth} Let $(R, \m)$ be a $d$-dimensional local ring such that $\widehat{R}$ is an isolated singularity. Then the following statements hold true. \begin{enumerate}[\rm(1)] \item Suppose that $R$ is Henselian. Then $R$ has finite (respectively, countable) CM-type if and only if so does $\widehat{R}$. \item Let $n\ge0$ be an integer. If $\add\syz^n(\mod R)$ contains only finitely (respectively, countably) many nonisomorphic indecomposable modules, then so does $\add\syz^{n+d}(\mod \widehat{R})$. \end{enumerate} \end{cor} \begin{proof} We only deal with the finite case; the statements for the countable case are shown similarly. (1) The `if' part holds true regardless of the assumption that $\widehat{R}$ is an isolated singularity, thanks to the fact that $\widehat{R}$ is faithfully flat over $R$; see \cite[Corollary 1.6]{W}. So we have only to prove the `only if' part. Let $\{X_1,\dots,X_t\}$ be a complete list of representatives for the isomorphism classes of indecomposable maximal Cohen--Macaulay $R$-modules. Since $\widehat{R}$ is an isolated singularity, we see from Proposition \ref{twice} that $\{\widehat{X_1},\dots,\widehat{X_t}\}$ is a complete list of representatives for the isomorphism classes of indecomposable maximal Cohen--Macaulay $\widehat{R}$-modules.
(2) Let $\{X_1,\dots,X_t\}$ be a complete list of representatives for the isomorphism classes of indecomposable $R$-modules in $\add\syz^n(\mod R)$. Let $M$ be an indecomposable $\widehat R$-module in $\add\syz^{n+d}(\mod \widehat{R})$. Then $M$ is a direct summand of $\syz_{\widehat R}^{n+d}N$ for some $\widehat R$-module $N$. Since $\widehat R$ is an isolated singularity, $\syz_{\widehat R}^dN$ is locally free on the punctured spectrum of $\widehat R$. Hence there exists an $R$-module $L$ such that $\syz_{\widehat R}^dN$ is a direct summand of $\widehat L$ by Corollary \ref{comp0}. Therefore $M$ is a direct summand of $\syz^n_{\widehat R}\widehat L=\widehat{\syz_R^nL}$. Setting $X=X_1\oplus\cdots\oplus X_t$, one has that $\syz_R^nL$ belongs to $\add(X)$, and $M$ is in $\add(\widehat X)$. Thus $\add\syz^{n+d}(\mod\widehat R)\subseteq\add(\widehat{X})$. Since $\widehat R$ is Henselian, the assertion follows from this. \end{proof} \begin{rem} The {\sl Auslander--Reiten conjecture \cite{ar}} claims that, over an artin algebra $\Lambda$, if $M$ is a finitely generated $\Lambda$-module such that $\Ext_{\Lambda}^{i>0}(M,M\oplus \Lambda)=0$, then $M$ is projective. This long-standing conjecture, which is rooted in a conjecture of Nakayama \cite{Na}, is known to be true for several classes of algebras including algebras of finite representation type \cite{ar} and symmetric artin algebras with radical cube zero \cite{Ho}. The Auslander--Reiten conjecture actually makes sense for any noetherian ring. In particular, Auslander, Ding and Solberg \cite{ads} studied the following condition on a commutative noetherian ring, not necessarily an artin algebra. \begin{enumerate} \item[\textsf{(ARC)}] Let $R$ be a commutative noetherian ring and $M$ a finitely generated $R$-module. If $\Ext_{R}^i(M, M\oplus R)=0$ for all $i>0$, then $M$ is projective.
\end{enumerate} \noindent There are already some results in the study of classes of commutative rings satisfying \textsf{(ARC)}; see for instance \cite{a, bfs, ct, ch, hl}. In the next result, by using the above corollary, we investigate ascent and descent of \textsf{(ARC)} between a local ring and its completion. \end{rem} \begin{cor}\label{arc} Let $(R, \m)$ be a $d$-dimensional Henselian local ring such that $\widehat{R}$ is an isolated singularity. Then $R$ satisfies \textsf {(ARC)} if and only if so does $\widehat{R}$. \end{cor} \begin{proof} Since $\widehat{R}$ is faithfully flat over $R$, it is easy to verify that descent holds true even without assuming that $R$ is Henselian or that $\widehat{R}$ is an isolated singularity. So let us show the ascent. To this end, assume that $M$ is an arbitrary $\widehat{R}$-module such that $\Ext_{\widehat{R}}^{>0}(M, M\oplus\widehat{R})=0$. We then deduce $\Ext_{\widehat{R}}^{>0}(\syz^nM, \syz^nM\oplus\widehat{R})=0$ for each $n\ge0$. Moreover, since $\Ext_{\widehat{R}}^{i>0}(M, \widehat{R})=0$, one concludes that $M$ is free if and only if so is $\syz^nM$. Thus, replacing $M$ with $\syz^dM$, we may assume $M\in\mod_0{\widehat{R}}$, since $\widehat R$ is an isolated singularity. By Proposition \ref{twice} we have $M\cong\widehat N$ for some $N\in\mod_0R$. Hence $\Ext_{R}^i(N, N\oplus R)\otimes_{R}\widehat{R}\cong\Ext_{\widehat{R}}^i(\widehat{N}, \widehat{N}\oplus\widehat{R})\cong\Ext_{\widehat R}^i(M,M\oplus\widehat R)=0$ for all $i>0$, and therefore $\Ext_{R}^i(N, N\oplus R)=0$ for all $i>0$, as $\widehat R$ is a pure extension of $R$. Our hypothesis yields that $N$ is a free $R$-module, so that $M=\widehat{N}$ is a free $\widehat{R}$-module, as desired. \end{proof} \begin{ac} The authors thank Srikanth Iyengar for his valuable comments. \end{ac}
\section{Introduction} Graphene is a two-dimensional (2D) monolayer of graphite atoms~\cite{gei,neto}. It has a honeycomb lattice structure of carbon atoms packed in a 2D system. Its single layer has a band structure analogous to that of a massless relativistic particle, where the valence and the conduction bands meet at two inequivalent points $K$ and $K'$, called {\it Dirac} points, at the corners of the Brillouin zone. The quantum Hall effect (QHE)~\cite{Geim, Zheng} in graphene is one of the most remarkable phenomena, not only because the Hall conductivity is quantized in plateaus and the magnetoconductance vanishes in a magnetic field, but also because it provides a bridge between condensed matter physics and quantum electrodynamics~\cite{Novo}. The successful experimental works~\cite{Geim, Zheng} and several theoretical attempts~\cite{Gui, Shara, Ben} established the Hall conductivity expression as $\sigma_{\sf H}=4\left(n+{1\over 2}\right){e^2\over h}$, where $n$ is an integer including zero; this characterizes the integer quantum Hall effect (IQHE) in a monolayer graphene. The prefactor 4 reflects the two-fold spin and two-fold valley degeneracy in the graphene band structure. The term $\frac{1}{2}$ comes from the Berry phase due to the pseudospin (or valley) precession when a massless (chiral) Dirac particle undergoes cyclotron motion~\cite{Champel}. The conduction in a graphene device may be produced by two types of charge carriers: electrons and holes. The Fermi energy changes position with the type of charge carriers~\cite{Novo}, such that this energy lies in the valence (conduction) band when the holes (electrons) are responsible for conduction. The quantization of the Hall conductivity is also determined by the number of edge-state bands crossing the Fermi level~\cite{fertig}. Our main objective is to introduce a new approach based on the thermodynamical properties~\cite{jellalTPG} to study the quantum Hall effect in graphene.
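Before proceeding, it is worth writing out the plateau sequence implied by the quantization rule above (a standard consequence, stated here for orientation):

```latex
\[
\sigma_{\sf H}=4\left(n+\tfrac{1}{2}\right)\frac{e^2}{h}
=\pm 2\,\frac{e^2}{h},\ \pm 6\,\frac{e^2}{h},\ \pm 10\,\frac{e^2}{h},\ \ldots
\]
```

In particular, there is no plateau at $\sigma_{\sf H}=0$, in contrast with the conventional integer sequence of a non-relativistic two-dimensional electron gas.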
For this, we consider {\it Dirac} particles living in a rectangular plane under the action of a very weak transverse electric field and a strong perpendicular magnetic field. Taking into account a continuum pseudo-potential varying along the $x$-axis, we explicitly evaluate the Hall conductivity. As an interesting result, we end up with the quantized plateaux characterizing the integer quantum Hall effect in graphene. This letter is organized as follows. In section 2, we formulate our problem by setting the Hamiltonian describing a {\it Dirac} particle in the presence of the electromagnetic fields and involving a pseudo-potential along the $x$-axis. After some algebra, we diagonalize our Hamiltonian to get the solutions of the energy spectrum. In section 3, we use {\it Fermi-Dirac} statistics and the Mellin transformation to explicitly evaluate the grand thermodynamical potential. In section 4, we calculate the particle number to end up with the Hall conductivity and therefore the corresponding filling factors. Finally, we conclude in the last section. \section{Solutions of the energy spectrum} We consider a rectangular sheet of graphene parameterized by two sides $(L_x,L_y)$ and subjected to an electromagnetic field $(\vec E, \vec B)$. To deal with our task, we describe the present system by the Hamiltonian \begin{equation}\lb{1} H=v_F \vec{\sigma}\cdot\vec{\pi}+\sigma_y eEy+\Delta{\tilde p}+g\mu_B\vec{B} \cdot \vec{S} \end{equation} where the first term is the Dirac operator in the presence of $\vec B$ and the second results from an applied electric field along the $y$-direction, i.e. $\vec E= E_y \vec e_y$. The continuum pseudo-potential $\Delta{\tilde p}$ reflects the edge effect contribution and the last term is the magnetic coupling. $\vec \sigma$ are the Pauli matrices, $g$ is the Land\'e factor, $v_F\app{c\over 100}$ is the Fermi velocity and $\mu_B$ is the Bohr magneton.
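For concreteness, the first two terms of \eqref{1} can be written as an explicit $2\times 2$ matrix acting on the sublattice spinor (a sketch, using $\vec{\sigma}\cdot\vec{\pi}=\sigma_x\pi_x+\sigma_y\pi_y$ and the standard Pauli matrices):

```latex
\[
v_F\,\vec{\sigma}\cdot\vec{\pi}+\sigma_y\, eEy
=\begin{pmatrix}
0 & v_F(\pi_x-i\pi_y)-ieEy\\[2pt]
v_F(\pi_x+i\pi_y)+ieEy & 0
\end{pmatrix},
\]
```

whose off-diagonal ladder structure is what produces the $\sqrt{|n|}$ spacing of the Landau levels derived below.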
It is convenient to consider the Landau gauge $\vec A=(-By,0)$, where the momentum operators read as $ \pi_x=p_x-{eB\over c}y$ and $\pi_y=p_y$. For simplicity, we decompose~(\ref{1}) into three parts. These are \beq\lb{4} H=H_0+ \Delta{\tilde p}+g\mu_B\vec{B} \cdot \vec{S} \eeq where $H_0$ corresponds to the first two terms in \eqref{1}. This decomposition is helpful in the sense that we can treat each part separately and therefore easily derive the spectrum of~(\ref{4}). Solving the eigenvalue equation, we end up with the eigenvalues \begin{equation}\label{landaulevels} E_{n \tilde{p}s}={\rm sgn}(n)\sqrt{2\left(\frac{\hbar v_F}{l_B}\right)^2 |n|} + \Delta{\tilde p} + g \mu_B B m_s \end{equation} as well as the corresponding eigenfunctions \begin{equation}\lb{efn} \Psi_{n\neq 0, k, m_s}={1\over \sqrt{2}}\left(\begin{array}{c} -{\rm sgn}(n)i\phi_{|n|-1} \\ \phi_{|n|} \end{array}\right) e^{ikx} {\mathcal\Omega}\ \al_{m_s} \end{equation} where \begin{equation}\lb{herp} \phi_{n}= \sqrt{1\over 2^n \pi^{1/2} n!l_B}\ e^{-{(y-y_0)^2\over 2l_{B}^{2}}} H_n\left({y-y_0\over l_B}\right) \end{equation} with $|n|=0, 1, 2,\cdots$ the LL index, $y_0=-k l_{B}^{2}$, the magnetic length $l_{B}=\sqrt{\hbar c\over eB}$ and $H_n$ the Hermite polynomial. The zero-energy mode is \begin{equation}\lb{ef0} \Psi_{n=0, k, m_s}=\left(\begin{array}{c} 0 \\ \phi_{0} \end{array}\right) e^{ikx} {\mathcal\Omega}\ \al_{m_s} \end{equation} where $m_s=\pm{1\over 2}$ is the azimuthal quantum number of the spin operator $S_z$, whose associated states are \beq \al_{1\over 2}=\left(\begin{array}{c} 1 \\ 0 \end{array}\right),\qquad \al_{-{1\over 2}}=\left(\begin{array}{c} 0 \\ 1 \end{array}\right) \eeq and by convention we choose ${\rm sgn}(0)=0$.
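To fix the scales in Eq. (\ref{landaulevels}), the magnetic length and the first Landau gap can be estimated numerically (a rough estimate on our part, assuming the commonly quoted value $v_F\approx 10^{6}\ \mathrm{m/s}$):

```latex
\[
l_B=\sqrt{\frac{\hbar c}{eB}}\approx \frac{26\ \mathrm{nm}}{\sqrt{B\,[\mathrm{T}]}},
\qquad
E_{1}=\sqrt{2}\,\frac{\hbar v_F}{l_B}\approx 36\sqrt{B\,[\mathrm{T}]}\ \mathrm{meV},
\]
```

so that at $B=10$ T the gap between the $n=0$ and $n=1$ levels is roughly $0.11$ eV, much larger than $k_BT\approx 25$ meV at room temperature; this large level spacing underlies the robustness of the QHE in graphene.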
\section{Grand thermodynamical potential} To achieve our goal we start by determining the grand thermodynamical potential (GTP) \beq\lb{GTP1} \Omega=-k_BT \ln\left({\mathcal Z}\right) \eeq where the partition function associated to our system is given by the {\it Fermi-Dirac} distribution \begin{equation}\lb{pf} {\mathcal Z}=\prod_{\tau,\tau n, {\tilde p}, m_s} \left[1+e^{{\be(\tilde{\mu}- E_{\tau, \tau n, {\tilde p}, m_s})}}\right] \end{equation} where ${\tilde{\mu}}$ is the chemical potential of the particles, and $\tau$ is $+1$ when $n>0$ and $-1$ otherwise. Here $\be={1\over k_BT}$, with $k_B$ the Boltzmann constant and $T$ the temperature. We define the shorthand notation $\{{\bf l}\}=\tau, \tau n, {\tilde p}, m_s$ to be used in what follows. Using~(\ref{pf}), we write \eqref{GTP1} as \begin{equation} \Omega=-\frac{1}{\be}\sum_{\{{\bf l}\}}\ln\left[ 1+e^{\beta({\tilde{\mu}-E_{n{\tilde p}s}})}\right]. \end{equation} It is convenient to adopt the dimensionless variables $\mu={\tilde{\mu}\over mc^2},\ \varepsilon_{n{\tilde p}s}= {E_{n{\tilde p}s}\over mc^2}\ \textrm{and} \ \theta={1\over \be mc^2}$. Requiring $\Delta\tilde p=-c{E\over B}{\tilde p}$ and assuming that $\arrowvert{\tilde p}\arrowvert\leq {eBL_y\over 2c}$ is fulfilled, we write the GTP as \begin{equation} \Omega=-mc^2\theta N_{\phi} \int_{-b/2}^{b/2} \frac{d\tilde p}{b}\sum_{\{{\bf l}\}}\ln\left[ 1+e^{{\mu\over \theta}}\ e^{-{\varepsilon_{n{\tilde p}s} \over \theta}}\right]. \end{equation} where $b=\frac{eBL_y}{mc^2}$ and $N_{\phi}= \frac{eBS}{hc}$ is the number of quantum electron states in the magnetic field for a given $n$ in an area $S=L_xL_y$. To evaluate the GTP, we use the Mellin transformation method with respect to the variable $e^{\mu\over \theta}$.
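The mechanism of the Mellin/residue representation can be checked on a single term of the sum (a sketch; write $y=e^{(\mu-\varepsilon_{\{{\bf l}\}})/\theta}$ and take $0<y<1$): the poles of $\pi/\left(s\sin(\pi s)\right)$ at the positive integers $s=l$ reproduce the Taylor series of the logarithm,

```latex
\[
\mathrm{Res}_{s=l}\left[\frac{\pi\,y^{s}}{s\sin(\pi s)}\right]
=\frac{(-1)^{l}}{l}\,y^{l},
\qquad
-\sum_{l=1}^{\infty}\frac{(-1)^{l}}{l}\,y^{l}=\ln(1+y),
\]
```

so summing the residues term by term recovers $\ln(1+y)$, which is the content of the representation obtained after the calculation below.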
After calculation, we obtain \begin{equation} \Omega=\mp 2\epsilon\te\sum_{s=-\infty}^{\infty}{\sf{Res}}\left [\sum_{\{\bf l\}} {\pi e^{s\mu\over \theta}\over s\sin\left(\pi s\right)} e^{-s{\varepsilon_{\{{\bf l}\}}\over \theta}}\right] \end{equation} where the minus (plus) sign refers to the closing sense of the contour to the left (right) of the imaginary axis for $\mu>0$ ($\mu<0$). We then obtain \begin{widetext} \begin{eqnarray} \Omega = \mp \epsilon\te N_{\phi}\sum_{s=-\infty}^{\infty}{\sf{Res}} \left[{\pi e^{s{\kappa\over \theta}{z\over 2\pi}} \over s\sin\left(\pi s\right)}\left\{ -1+2\sum_{(\tau^3n)=0}^{+\infty}\left(e^{-{s\over \te}\sqrt{2\kappa v_{F}^{2}}}\right)^{\sqrt{\tau^3n}}\right\}\right. \left. \sum_{m_s=\pm {1\over 2}}\left(e^{-s{g^{\ast}\kappa\over\theta}}\right)^{m_s} \int_{-b/2}^{b/2}e^{s{ec{\tilde p}\hbar E\over \epsilon^2\te\kappa}}{d{\tilde p}\over b}\right]. \end{eqnarray} \end{widetext} where $g^{\ast}={g\epsilon\mu_B\over e\hbar c^2}$, $\kappa={ec\hbar B\over \epsilon^2}$ and $\epsilon=mc^2$. After integration, we end up with \begin{widetext} \begin{equation}\lb{GTP2} \Omega=\mp 2 \epsilon \te N_{\phi}\sum_s {\sf{Res}}\left[{\pi e^{s{\kappa\over \theta}{z\over 2\pi}}\over s\sin\left(\pi s\right)}\coth\left({s\over \theta}\sqrt{\kappa v_{F}^{2}\over2}\right)\cosh\left({sg^{\ast}\kappa\over2\theta}\right){\sinh\left(seEL_y/2\epsilon \te\right)\over seEL_y/2\epsilon \te}\right] \end{equation} \end{widetext} where $z={2\pi\mu\over\kappa}$. For the residue calculations, we distinguish two special parts of the GTP, $\Omega=\Omega_{\sf mon}+\Omega_{\sf osci}$. The first, concerning the real poles, is called the monotonic part $(\Omega_{\sf mon})$, while the second, related to the imaginary poles, is called the oscillating part $(\Omega_{\sf osci})$. In our analysis, we restrict the calculation of $\Omega_{\sf mon}$ and $\Omega_{\sf osci}$ to the minus sign only, i.e. for $\mu>0$.
We calculate $\Omega_{\sf mon}$ at $s=0$ and neglect the contribution of the other real poles. This gives \begin{widetext} \begin{equation} \Omega_{\sf mon}\approx -2\epsilon N_{\phi}\lam\left[{1\over 3}+{ {g^{\ast}}^2\over 8}\left({\kappa\over \lam}\right)^2+{z^2\over 8\pi^2}\left({\kappa\over \lam}\right)^2+{\al^2 \over 24}\left({\kappa\over \lam}\right)^2+{\al\pi^2\over 6}\left({\te\over \lam}\right)^2\right]. \end{equation} \end{widetext} Let us evaluate $\Omega_{\sf osci}$ at the poles $s_l={i\pi l\theta\over \lam}$ with $l=1, 2, 3,\cdots$. Indeed, at low temperature and strong magnetic field, i.e. $\te\ll\kappa$, we obtain \begin{widetext} \begin{equation} \Omega_{\sf osci}\approx-4 \epsilon N_{\phi}\lam\sum_{l=1}^{\infty}{(-1)^{l+1}\over\pi^2l^2}\cos\left({z\kappa\over 2\lam}l\right)\cos\left({\kappa g^{\ast}\pi\over \lam}l\right) {\sin\left({\al\pi\kappa/2\lam}\right)\over{\al\pi\kappa/ 2\lam}} \end{equation} \end{widetext} where $\al={eEL_y\over\kappa \epsilon}$ and $\lam=\sqrt{\kappa v_{F}^{2}\over 2}$. Now combining all terms and using the assumption of a very weak electric field ($\al\ll 1$), we write \eqref{GTP2} as \begin{equation}\lb{omi} \Omega\approx- \epsilon N_{\phi}\lam\left[{2\over 3}+ \left({g^{\ast}\kappa\over 2\lam}\right)^2+\left({z\kappa\over \pi\lam}\right)^2 +{4\over \pi^2}\Gamma(z)\right] \end{equation} where $\Gamma(z)$ is a periodic function of ${z\kappa\over 2\lam}$ defined as \beq\lb{ga} \Gamma(z)=\sum_{l=1}^{\infty}{(-1)^{l+1}\over l^2}\cos\left({z\kappa\over 2\lam}l\right)\cos\left({\kappa g^{\ast}\pi\over \lam}l\right). \eeq In what follows, the above function will play a crucial role in getting the quantized Hall plateaux for {\it Dirac} particles in graphene. \section{Hall conductivity} To evaluate the Hall conductivity, we determine the number of charge carriers responsible for conduction in our system through the relation \begin{equation} N=-{1\over \epsilon}{\partial\Omega\over \partial\mu}.
\end{equation} Thus, according to~(\ref{omi}), $N$ can be easily derived as \begin{equation}\lb{n} N={N_{\phi}\over\pi}\left( {\kappa\over \lam}\right) \left[ z+8\left({\lam\over\kappa}\right)^2{d\Gamma(z)\over dz}\right]. \end{equation} To proceed further, let us write (\ref{ga}) as \begin{widetext} \beq\lb{rol} \Gamma(z)={1\over 2}\sum_{l=1}^{\infty}{(-1)^{l+1}\over l^2}\left\{\cos\left(\left[{\kappa\over \lam}\left({z\over 2}+g^{\ast}\pi\right)-2\pi\right]l\right) +\cos\left({\kappa\over \lam}\left({z\over 2}-g^{\ast}\pi\right)l\right)\right\} \eeq \end{widetext} and use the relation \cite{grad} \beq\lb{rl} \sum_{l=1}^{\infty}{(-1)^{l+1}\over l^2} \cos(lx)={\pi^2\over 12}-{x^2\over 4},\qquad -\pi\leq x\leq\pi \eeq to obtain \begin{widetext} \beq\lb {gam} \Gamma(z)=\left\{ \begin{array}{ll} -{5\pi^2\over12} - {1\over 4}\left({\kappa\over \lam}\right)^2 \left({z^2\over 4}+ { g^\ast}^2\pi^2\right) +{\pi\over 2}\left({\kappa\over \lam}\right)\left({z\over 2}+ g^\ast\pi\right) \qquad \qquad \qquad \qquad\qquad\qquad\ \ \textrm{if }\ z\in{\mathbf{ I_1}}\cap {\mathbf{ I_2}} \\ {\pi^2\over12} - {1\over 4}\left({\kappa\over \lam}\right)^2\left({z^2\over 4}+ { g^\ast}^2\pi^2\right) - \left({i^2+\left(i+2\right)^2\over 8}+{1\over 2}\left({\kappa\over \lam}\right)g^\ast\right)\pi^2+{1\over 4}\left(i+1\right)\left({\kappa\over \lam}\right)\pi z \qquad \textrm{if }\ z\in{\mathbf{ I_3}}\cap {\mathbf{ I_4}} \end{array} \right.
\eeq \end{widetext} where $i$ is an even integer and ${\mathbf{ I_j}},\ j=1, 2, 3, 4$ are intervals defined as \newpage \begin{widetext} \bqr &&{\mathbf{I_1}}=\left[2\left({\lam\over\kappa}-g^\ast\right)\pi, 2\left({3\lam\over\kappa}-g^\ast\right)\pi \right], \qquad {\mathbf{I_3}}=\left[2\left({\lam\over \kappa}\left(i+1\right)-g^\ast\right)\pi, 2\left({\lam\over \kappa}\left(i+3\right)-g^\ast\right)\pi \right] \nonumber\\ &&{\mathbf{I_2}} = \left[2\left( g^\ast-{\lam\over \kappa}\right)\pi, 2\left( g^\ast+{\lam\over \kappa}\right)\pi \right], \qquad {\mathbf{I_4}}= \left[2\left({\lam\over \kappa}\left(i-1\right)+g^\ast\right)\pi,2\left({\lam\over \kappa}\left(i+1\right)+g^\ast\right)\pi \right]\nonumber \eqr \end{widetext} To describe the quantum Hall effect, it is essential to evaluate the Hall conductivity $\sigma_{\sf H}$. Hence, using the Drude model, $\sigma_{\sf H}$ is given by \beq\lb{hc} \sigma_{\sf H}=-{\rho c e\over B} \eeq where $\rho$ is the particle number per unit area. In terms of the degeneracy $N_{\phi}$ of each LL and the particle number, $\sigma_{\sf H}$ is expressed in units of the {\it von Klitzing} conductance ${e^2\over h}$ as \beq\lb{hc1} \sigma_{\sf H}=-{N\over N_{\phi}}{e^2\over h}=-\nu{e^2\over h} \eeq where $\nu$ is the filling factor of the LL. Using (\ref{n}), we obtain \beq\lb{hc2} \sigma_{\sf H}=-{1\over\pi}\left( {\kappa\over \lam}\right)\left[ z+8\left({\lam\over\kappa}\right)^2{d\Gamma(z)\over dz}\right]{e^2\over h}. \eeq Comparing this to \eqref{hc1} gives \beq\lb{ff} \nu={1\over\pi}\left( {\kappa\over \lam}\right) \left[ z+8\left({\lam\over\kappa}\right)^2{d\Gamma(z)\over dz}\right]. \eeq Now using (\ref{gam}), we find \beq \nu=\left\{ \begin{array}{ll} 2 \qquad \qquad \qquad \qquad\textrm{if }\ z\in{\mathbf{ I_1}}\cap {\mathbf{ I_2}} \\ 2\left(i+1\right) \qquad \qquad \ \textrm{if }\ z\in{\mathbf{ I_3}}\cap {\mathbf{ I_4}} \end{array} \right.
\eeq Recall that $i$ takes even values, so we can write $i=2n$ to recover the famous result $\nu=4\left(n+\frac{1}{2}\right)$, with $n$ an integer. This clearly shows how one can describe the integer quantum Hall effect in graphene based on the thermodynamical properties of our system. \section{Conclusion} By taking into account the edge effects in terms of a pseudo-potential in a monolayer graphene, we have shown that the corresponding Hall conductivity exhibits a sequence of plateaux. Its quantization is obtained by making use of {\it Fermi-Dirac} statistics. This has been done by considering a system of Dirac fermions in graphene subjected to an electromagnetic field and evaluating the grand thermodynamical potential as well as related physical quantities such as the number of particles. \section{Acknowledgment} The generous support provided by the Saudi Center for Theoretical Physics (SCTP) is highly appreciated by the author.
\section{Introduction} \textit{Introduction.-} Random number generators are ubiquitous, finding applications in varied domains such as statistical sampling, computer simulations and gambling scenarios. While certain physical phenomena such as radioactive decay or thermal radiation have high natural entropy, there are also computational algorithms that produce sequences of apparently random bits. In many cryptographic tasks, however, it is necessary to have trustworthy sources of randomness. As such, developing device-independent protocols for generating random bits is of paramount importance. We consider the task of randomness amplification: converting a source of partially random bits into one of fully random bits. The paradigmatic model of a source of randomness is the Santha-Vazirani (SV) source \cite{SV}, a model of a biased coin where the individual coin tosses are not independent but rather the bits $Y_i$ produced by the source obey \begin{equation} \label{SVdef} \frac{1}{2} - \varepsilon \leq P(Y_i = 0 | Y_{i-1}, \dots, Y_1) \leq \frac{1}{2} + \varepsilon. \end{equation} Here $0 \leq \varepsilon < \frac{1}{2}$ is a parameter describing the reliability of the source, the task being to convert a source with $\varepsilon < \frac{1}{2}$ into one with $\varepsilon \rightarrow 0$. Interestingly, this task is known to be impossible with classical resources: a single SV source cannot be amplified \cite{SV}. In \cite{Renner}, the non-local correlations of quantum mechanics were shown to provide an advantage in the task of amplifying an SV source. A device-independent protocol for generating truly random bits was demonstrated starting from a critical value of $\varepsilon$ ($\approx 0.06$) \cite{Renner, Grudka}, where device-independence refers to the fact that one need not trust the internal workings of the device.
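As a simple illustration (ours, not taken from the cited works), consider the i.i.d. biased coin that saturates the bound in Eq. (\ref{SVdef}); each SV bit still carries positive min-entropy,

```latex
\[
P(Y_i=0)=\tfrac{1}{2}+\varepsilon\ \ \forall i
\quad\Longrightarrow\quad
H_{\min}(Y_i)=-\log_2\!\left(\tfrac{1}{2}+\varepsilon\right)>0
\qquad\text{for }\varepsilon<\tfrac{1}{2},
\]
```

yet, in essence, the impossibility result of \cite{SV} states that for any function $f$ of $n$ SV bits there exists an $\varepsilon$-SV source under which the output of $f$ retains bias $\varepsilon$, so classical post-processing alone cannot reduce the bias.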
An improvement was made in \cite{Acin}, where it was shown that, using an arbitrarily large number of spatially separated devices, one could amplify randomness starting from any initial $\varepsilon < \frac{1}{2}$. In \cite{our2}, we demonstrated a device-independent protocol which uses a \textit{constant} number of spatially separated components and amplifies sources of arbitrary initial $\varepsilon < \frac{1}{2}$ while simultaneously tolerating a constant amount of noise in its implementation. All of these protocols were shown to be secure against general adversaries restricted only by the no-signaling principle of relativity, under a technical assumption of independence between the source and the device. In \cite{CSW14}, a randomness amplification protocol was formulated for general min-entropy sources and shown to be secure against quantum adversaries without the independence assumption; the drawback of this protocol is that it requires a device with a large number of spatially separated components for its implementation. Other protocols have also been proposed \cite{Pawlowski, Plesch}, for which full security proofs are missing. For fundamental as well as practical reasons, it is vitally important to minimize the number of spatially separated components in the protocol. As such, devising a protocol with the minimum possible number of components (two space-like separated ones for a protocol based on a Bell test), while at the same time allowing for robustness to errors in its implementation, is crucial. Let $\textbf{U}, \textbf{X}$ denote the input and output sets, respectively, of the honest parties in a device-independent Bell-based protocol for randomness amplification.
A necessary condition for obtaining randomness against general no-signaling (NS) attacks is that for some input $\textbf{u}^* \in \textbf{U}$, output $\textbf{x}^* \in \textbf{X}$ and a constant $c < 1$, \textit{every} no-signaling box $\{P(\textbf{x} | \textbf{u})\}$ that obtains the observed Bell violation has $P(\textbf{x} = \textbf{x}^* | \textbf{u} = \textbf{u}^*) \leq c$, i.e., \begin{eqnarray} \label{min-req} \exists (\textbf{x}^*, \textbf{u}^*) \; \; \text{s.t.} \; \; &&\forall \{P(\textbf{x} | \textbf{u})\} \; \; \text{with} \; \; \textbf{B} \cdot \{P(\textbf{x} | \textbf{u})\} = 0 \nonumber \\ &&P(\textbf{x} = \textbf{x}^* | \textbf{u} = \textbf{u}^*) \leq c < 1, \end{eqnarray} where $\textbf{B}$ is an indicator vector (with entries $B(\textbf{x}, \textbf{u})$) encoding the Bell expression and $\textbf{B} \cdot \{P(\textbf{x} | \textbf{u})\} = \sum_{\textbf{x}, \textbf{u}} B(\textbf{x}, \textbf{u}) P(\textbf{x} | \textbf{u}) = 0$ denotes that the box $\{P(\textbf{x} | \textbf{u})\}$ algebraically violates the inequality. Note that while the Bell inequality violation guarantees Eq. (\ref{min-req}) for some $\textbf{x}^*, \textbf{u}^*$ for each NS box, here the requirement is for a strictly bounded \textit{common} entry $P(\textbf{x} = \textbf{x}^* | \textbf{u} = \textbf{u}^*)$ for all boxes leading to the observed Bell violation. It is straightforward to see that if Eq. (\ref{min-req}) is not met, then the observed Bell violation does not guarantee any randomness and a device-independent protocol to obtain randomness cannot be built on the basis of this violation. If in addition to the necessary condition in Eq.
(\ref{min-req}), we also had for the same input-output pair $(\textbf{u}^*, \textbf{x}^*)$ that \begin{equation} \label{eq:suff-cond} \tilde{c} \leq P(\textbf{x} = \textbf{x}^* | \textbf{u} = \textbf{u}^*) \end{equation} for some constant $\tilde{c} > 0$, then clearly all the outputs for input $\textbf{u}^*$ possess randomness and extraction of this randomness may be feasible. Here, we present a fully device-independent protocol that allows to amplify the randomness of any $\varepsilon$-SV source under the minimal necessary condition in Eq. (\ref{min-req}). A novel element of the protocol is an additional test (in addition to the usual Bell test), akin to partial tomography of the boxes, that the honest parties perform to lower bound (in a linear number of runs) $P(\textbf{x} = \textbf{x}^* | \textbf{u} = \textbf{u}^*) =: \textbf{D} \cdot \{P(\textbf{x} | \textbf{u}) \}$. Here $\textbf{D}$ is an indicator vector with entries $D(\textbf{x}, \textbf{u})$ such that $D(\textbf{x}, \textbf{u}) = 1$ iff $(\textbf{x}, \textbf{u}) = (\textbf{x}^*, \textbf{u}^*)$. This test ensures, when passed, that Eq. (\ref{eq:suff-cond}) is additionally met for a sufficient number of runs; a detailed description is provided in the Supplemental Material. The protocol uses a device consisting of only two no-signaling components and tolerates a constant error rate. We show that the output bits from the protocol satisfy universally-composable security, the strongest form of cryptographic security, for any adversary limited only by the no-signaling principle.
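To see why the two-sided bound yields usable randomness, note the following elementary consequence (our illustration): for the indicator bit $b:=\mathbf{1}[\textbf{x}=\textbf{x}^*]$ on input $\textbf{u}^*$, Eqs. (\ref{min-req}) and (\ref{eq:suff-cond}) together give

```latex
\[
\tilde{c}\leq P(b=1\,|\,\textbf{u}^*)\leq c
\quad\Longrightarrow\quad
H_{\min}(b)\geq -\log_2\max\{c,\,1-\tilde{c}\}>0,
\]
```

so every run with input $\textbf{u}^*$ contributes a constant amount of min-entropy, which the final extraction step can then distill into nearly uniform bits.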
\textit{Main Result.-} We present a two-party protocol to amplify the randomness of SV sources against no-signaling adversaries; formally, we show the following (the detailed security proof is presented in the Supplemental Material): \begin{thm} [informal] \label{mainthm} For every $\varepsilon < \frac{1}{2}$, there is a protocol using an $\varepsilon$-SV source and a device consisting of two no-signaling components with the following properties: \begin{itemize} \item Using the device ${\sf{poly}}(n, \log(1/\gamma))$ times, the protocol either aborts or produces $n$ bits which are $\gamma$-close to uniform and independent of any no-signaling side information about the device and classical side information about the source (e.g. held by an adversary). \item Local measurements on many copies of a two-party entangled state, with ${\sf{poly}}(1 - 2\varepsilon)$ error rate, give rise to a device that does not abort the protocol with probability larger than $1 - 2^{- \Omega(n)}$. \end{itemize} The protocol is non-explicit and runs in ${\sf{poly}}(n, \log{(1/\gamma)})$ time. {Alternatively it can use an explicit extractor to produce a single bit of randomness that is $\gamma$-close to uniform in ${\sf{poly}}(\log{(1/\gamma)})$ time.} \end{thm} \begin{figure} \begin{protocol*}{Protocol I} \begin{enumerate} \item The $\varepsilon$-SV source is used to choose the measurement settings $u = ({\it \textbf{u}}^1_{\leq n}, {\it \textbf{u}}^2_{\leq n})$ for $n$ runs on the single device consisting of two components. The device produces output bits $x = (\textbf{x}^1_{\leq n}, \textbf{x}^2_{\leq n})$. \item The parties perform an estimation of the violation of the Bell inequality in the device by computing the empirical average $L_n(x,u) \mathrel{\mathop\ordinarycolon}= \frac{1}{n} \sum_{i=1}^{n} B(\textbf{x}_i, \textbf{u}_i)$. The protocol is aborted unless $L_n(x,u) \leq \delta$ for a fixed constant $\delta > 0$.
\item Conditioned on not aborting in the previous step, the parties subsequently check if $\textit{S}_n(x,u) \mathrel{\mathop\ordinarycolon}= \frac{1}{n} \sum_{i=1}^{n} D(\textbf{x}_i, \textbf{u}_i) \geq \mu_1$. The protocol is aborted if this condition is not met for fixed $\mu_1 > 0$. \item Conditioned on not aborting in the previous steps, the parties apply an independent source extractor \cite{CG, one_bit_extr} to the sequence of outputs from the device and further $n$ bits from the SV source. \end{enumerate} \end{protocol*} \caption{Protocol for device-independent randomness amplification from a single device with two no-signaling components.} \label{protocolsingle} \end{figure} \textit{Protocol.-} The protocol for the task of randomness amplification from $\varepsilon$-SV sources is given explicitly in Fig. \ref{protocolsingle} and illustrated in Fig. \ref{fig:bip-fig-prot}; its structure is as follows. The two honest parties Alice and Bob use bits from the $\varepsilon$-SV source to choose the inputs to their no-signaling boxes in multiple runs of a Bell test and obtain their respective outputs. They check for the violation of a Bell inequality and abort the protocol if the test condition is not met. The novel part of the protocol is a subsequent test that the honest parties perform, which ensures, when passed, that a sufficient number of runs were performed with boxes that have randomness in their outputs. If both tests are passed, the parties apply a randomness extractor to the output bits and some further bits taken from the SV source. The output bits of the extractor constitute the output of the protocol, which we show to be close to being fully random and uncorrelated from any no-signaling adversary. \begin{figure}[t] \includegraphics[scale=0.4]{bip-fig-prot.pdf} \caption{An illustration of the protocol for randomness amplification using two no-signaling components.
The bits from the SV source (black arrows) are used as inputs $(\textbf{u}^1_j, \textbf{u}^2_j)$ for the $j$-th run of the two spatially separated devices, with $1 \leq j \leq n$, and the corresponding outputs $(\textbf{x}^1_j, \textbf{x}^2_j)$ are obtained. The inputs and outputs of all the $n$ runs $(u,x)$ are subjected to two tests: a Bell test for the violation of a specific Bell inequality and a (partial) tomographic test counting a specific number of input-output pairs $(\textbf{u}^*, \textbf{x}^*)$. If both tests are passed (denoted by $\text{ACC}$), the outputs $x$ (orange arrows) are hashed together with further $n$ bits $t$ from the SV source using an extractor.} \label{fig:bip-fig-prot} \end{figure} \textit{Description of the setup.-} The setup of the protocol is as follows. The honest parties and Eve share a no-signaling box $\{p(x, z|u', w)\}$ where $u' = \textbf{u'}_{\leq n} := (\textbf{u'}_{1}, \dots, \textbf{u'}_n)$ and $x = \textbf{x}_{\leq n} := (\textbf{x}_1, \dots, \textbf{x}_n)$ denote the input and output, respectively, of the honest parties for the $n$ runs of the protocol, with $w$ and $z$ being the inputs and outputs of the adversary Eve. The devices held by the honest parties are separated into $2$ components with corresponding inputs and outputs $u'^i$ and $x^i$, respectively, for $i = 1, 2$, i.e., $u' = (u'^1, u'^2)$ and $x = (x^1, x^2)$. Note that $u'^i, x^i$ themselves denote the inputs and outputs of the $n$ runs of the protocol for party $i$, i.e., $u'^i = \textbf{u'}^i_{\leq n} := (\textbf{u'}^i_1, \dots, \textbf{u'}^i_n)$ and $x^i = \textbf{x}^i_{\leq n} = (\textbf{x}^i_1, \dots, \textbf{x}^i_n)$. 
Here, for the $j$-th run of the Bell test, we have labeled the measurement settings of Alice $\textbf{u'}^1_j$ and those of Bob $\textbf{u'}^2_j$ with the corresponding outcomes $\textbf{x}^1_j$ and $\textbf{x}^2_j$, and denoted the joint inputs of Alice and Bob in this run as $\textbf{u'}_j = (\textbf{u'}^1_j, \textbf{u'}^2_j)$ with corresponding joint output $\textbf{x}_j = (\textbf{x}^1_j, \textbf{x}^2_j)$. The honest parties draw bits $u$ from the SV source to input into the box, i.e., they set $u' = u$. They also draw further $n$ bits $t$, which will be fed along with the outputs $x$ into the randomness extractor to obtain the output of the protocol $s := \text{Ext}(x,t)$. The adversary has classical information $e$ correlated to $u, t$. The box we consider for the protocol is therefore given by the family of probability distributions $\{p(x,z,u,t,e|u',w)\}$. \textit{Assumptions.-} Let us first state formally the assumptions on $\{p(x,z,u,t,e|u',w)\}$, see also \cite{our2}. \begin{itemize} \item {\bf No-signaling (shielding) assumption:} The box satisfies the constraint of no-signaling between the honest parties and Eve as well as between the different components of the device \begin{eqnarray} \label{eq:ns1} p(x|u', w) &=& p(x|u'), \nonumber \\ p(z|u',w) &=& p(z|w), \nonumber \\ p(x^i|u') &=& p(x^i|u'^i) \; \; \; i = 1, 2. \label{eq:as-nosig} \end{eqnarray} Each device component also obeys a time-ordered no-signaling (\texttt{tons}) condition for the $k \in [n]$ runs performed on it: \begin{eqnarray} \label{eq:tons1} &&p(x^i_{k}|z,u'^i, w, u, t, e) = \nonumber \\ && \; \; \; p(x^i_{k} | z,u'^i_{\leq k},w,u, t, e) \; \; \; \forall k \in [n] \label{eq:tons} \end{eqnarray} where $u'^i_{\leq k} := u'^i_{1}, \ldots, u'^i_{k}$. \item {\bf SV conditions:} The variables $(u,t,e)$ form an SV source, that is satisfy Eq. (\ref{SVdef}). In particular, $p(t|u,e)$ and $p(u|e)$ also obey the SV source conditions. 
The fact that $e$ cannot be perfectly correlated to $u$, $t$ is called the \textit{private SV source assumption} \cite{our2}. \item {\bf Assumption A1:} The devices do not signal to the SV source, i.e., the distribution of $(u,t,e)$ is independent of the inputs $(u',w)$: \begin{eqnarray} &&\sum_{x,z} p(x,z,u,t,e|u',w) = p(u,t,e) \; \; \; \forall{(u, t, e, u', w)}. \label{eq:assumption1} \end{eqnarray} \item {\bf Assumption A2:} The box is fixed independently of the SV source: \begin{eqnarray} &&p(x,z|u',w,u,t,e) = p(x,z|u',w) \; \; \forall{(x, z, u',w, u, t,e)}. \nonumber \\ \label{eq:assumption2} \end{eqnarray} \end{itemize} In words, the main assumptions are that the different components of the device do not signal to each other and to the adversary Eve. Additionally, there is also a time-ordered no-signaling ($tons$) structure assumed on different runs of a single component, the outputs in any run may depend on the previous inputs within the component but not on future inputs. Moreover, we also assume that the structure of the box $p(x,z|u',w)$ is fixed independently of the SV source $p(u,t,e)$, i.e., the box is an unknown and arbitrary input-output channel independent of the SV source. This precludes malicious correlations such as in the scenario where for each bit string $u$ taken from the source, a different (possibly local) box tuned to $u$ is supplied, in which case the Bell test may be faked by local boxes \cite{our3}. Finally, it is worth noting that no randomness may be extracted under the assumptions stated above in a classical setting, whereas the Bell violation by quantum boxes allows to amplify randomness in a device-independent setting. \textit{Security definition.-} For $L_n(x,u) = \frac{1}{n} \sum_{i=1}^{n} B(\textbf{x}_i, \textbf{u}_i)$, the first (Bell) test in the protocol is passed when $L_n(x,u) \leq \delta$. 
We define the set $\text{ACC}_1$ as the set of $(x,u)$ such that this test is passed: \begin{equation} \text{ACC}_1 \mathrel{\mathop\ordinarycolon}= \{(x,u) : L_n(x,u) \leq \delta\}. \end{equation} Here $\delta$ is the noise parameter of the Bell test, chosen to be a positive constant depending on the initial $\varepsilon$ of the SV source and going to zero in the limit $\varepsilon \rightarrow \frac{1}{2}$ (see Theorem 8 in the Supplemental Material). Similarly, we define $\text{ACC}_2$ as the set of $(x,u)$ for which the second test is passed, i.e., \begin{equation} \text{ACC}_2 \mathrel{\mathop\ordinarycolon}= \left\lbrace (x,u) : \textit{S}_n(x,u) \geq \mu_1 \right\rbrace. \end{equation} We also define the set $\text{ACC} = \text{ACC}_1 \cap \text{ACC}_2$ of $(x,u)$ for which both tests in the protocol are passed and $\text{ACC}_u$ as the cut \begin{equation} \text{ACC}_u \mathrel{\mathop\ordinarycolon}= \{ x : (x,u) \in \text{ACC} \}. \end{equation} After $u$ is input as $u'$ and conditioned on the acceptance of the tests $\text{ACC}$, applying the independent source extractor \cite{CG, Xin-Li, one_bit_extr} $s=\text{Ext}(x,t)$, one gets the box \begin{eqnarray} &&p(s,z,e|w,\text{ACC} ) \nonumber \\ &&\; \; \equiv \sum_{u}\sum_{\text{Ext}(x,t)=s} p(x,z,u,t,e|w,\text{ACC} ). \end{eqnarray} The composable security criterion is now defined in terms of the distance of $p(s,z,e|w, \text{ACC} )$ to an ideal box $p^{id} = \frac{1}{|S|} p(z,e|w,\text{ACC})$ with $p(z,e|w,\text{ACC}) = \sum_{s} p(s,z,e|w,\text{ACC})$. Formally, the security is given by the distance $d_c$ defined as \begin{equation} \label{dist-comp} d_c :=\sum_{s,e} \max_{w } \sum_{z} \left |p(s,z,e|w, \text{ACC} ) - \frac{1}{|S|}p(z,e|w, \text{ACC} )\right|.
\end{equation} \textit{Outline of the proof.-} The proof of security of the protocol is a modification of the proof we presented in \cite{our2}, with the crucial differences being due to the weak randomness that the two-party Bell inequality violation gives and an additional partial tomographic test imposed on the device. To amplify SV sources, one needs Bell inequalities where quantum theory can achieve the maximal no-signaling value of the inequality \cite{Renner}, failing which, for $\varepsilon$ sufficiently close to $\frac{1}{2}$, the observed correlations may be faked with classical deterministic boxes. However, Bell inequalities with this property are not sufficient; this is exemplified by the tripartite Mermin inequality \cite{Mermin, Renner}. This inequality is algebraically violated in quantum theory using a GHZ state; however, for any function of the measurement outcomes one can find no-signaling boxes which achieve its maximum violation and for which this particular function is deterministic, thereby providing an attack for Eve to predict the final output bit with certainty. While \cite{Acin} and \cite{our2} considered Bell inequalities with more parties, the problem of finding two-party algebraically violated Bell inequalities (known as pseudo-telepathy games) \cite{RMP-Bell14} with the property of randomness for some function of the measurement outcomes was open. Unfortunately, none of the bipartite Bell inequalities tested so far have the property that \textit{all} no-signaling boxes which maximally violate the inequality have randomness in any function of the measurement outcomes $f(\textbf{x})$ for some input $\textbf{u}$ in the sense that for all such boxes \begin{equation} \label{eq:strong-rand} \frac{1}{2} - \kappa \leq p(f(\textbf{x})|\textbf{u}) \leq \frac{1}{2} + \kappa \end{equation} for some $0 < \kappa < \frac{1}{2}$. We say that Bell inequalities with property (\ref{eq:strong-rand}) guarantee \textit{strong randomness}.
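To make the distinction concrete, consider the familiar PR box (used here only as a stand-in, not the inequality employed in the protocol), which attains the no-signaling maximum of the CHSH inequality: every single-party outcome is uniformly random for every input, yet the XOR of the two outcomes is completely deterministic — exactly the kind of function an adversary could predict with certainty. A minimal sketch:

```python
from itertools import product

def pr_box(x1, x2, u1, u2):
    """PR-box distribution: p(x1, x2 | u1, u2) = 1/2 iff x1 XOR x2 = u1 AND u2."""
    return 0.5 if (x1 ^ x2) == (u1 & u2) else 0.0

for u1, u2 in product((0, 1), repeat=2):
    # Uniform single-party marginals: p(x1 | u1, u2) = 1/2 for every input pair,
    # so the function f(x) = x1 satisfies the strong-randomness bound with kappa = 0.
    for x1 in (0, 1):
        marginal = sum(pr_box(x1, x2, u1, u2) for x2 in (0, 1))
        assert abs(marginal - 0.5) < 1e-12
    # ...but the XOR of the outcomes is deterministic: p(x1 ^ x2 = u1 * u2) = 1,
    # so an adversary predicts f(x) = x1 XOR x2 with certainty.
    xor_prob = sum(pr_box(x1, x2, u1, u2)
                   for x1, x2 in product((0, 1), repeat=2)
                   if (x1 ^ x2) == (u1 & u2))
    assert abs(xor_prob - 1.0) < 1e-12
```

This mirrors the Mermin-inequality problem described above: maximal no-signaling violation does not by itself guarantee randomness in every function of the outcomes.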
The Bell inequality we consider for the task of randomness amplification is a modified version of a Kochen-Specker game from \cite{Aolita}. The inequality involves two parties Alice and Bob, each making one of nine possible measurements and obtaining one of four possible outcomes; it is explained further in the Supplemental Material. Even though it does not guarantee the strong randomness in Eq.~(\ref{eq:strong-rand}) for any function of the measurement outcomes $f(\textbf{x})$ for any input $\textbf{u}$, it has the redeeming feature of guaranteeing \textit{weak randomness} in the following sense. For all no-signaling boxes which algebraically violate the inequality, there exists one measurement setting $\textbf{u}^*$ and one outcome $\textbf{x}^*$ for this setting such that \begin{eqnarray} &&0 \leq p(\textbf{x} = \textbf{x}^* | \textbf{u} = \textbf{u}^*) \leq \gamma \nonumber \\ && \forall \{p(\textbf{x}| \textbf{u})\} \; \; \; \text{s.t.} \; \; \; \textbf{B} \cdot \{p(\textbf{x}| \textbf{u})\} = 0 \end{eqnarray} for some $0 < \gamma < 1$. The above fact is checked by linear programming and is shown in Lemma 1 in the Supplemental Material. The novel technique in the form of a partial tomographic test, subsequent to the Bell test, allows us to extract randomness in this minimal scenario of weak randomness. This test simply counts the number of times a particular input-output pair $(\textbf{u}^*, \textbf{x}^*)$ appears; its analysis is done by an application of the Azuma-Hoeffding inequality. We show that the SV source obeys a generalized Chernoff bound which ensures that, with high probability, when the inputs are chosen with such a source the measurement setting $\textbf{u}^*$ appears in a linear fraction of the runs.
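The quantitative point behind the tomographic test is elementary: any fixed $k$-bit setting drawn from an $\varepsilon$-SV source occurs with probability at least $(1/2-\varepsilon)^k$ in each run, whatever the history, so $\textbf{u}^*$ is expected in a constant fraction of the $n$ runs, with fluctuations controlled by an Azuma-Hoeffding tail of the form $\exp(-2nt^2)$. A rough numerical sketch (the parameter values are illustrative, not those of the Supplemental Material):

```python
import math

def sv_setting_prob_bounds(eps, k):
    """Any fixed k-bit setting u* drawn from an eps-SV source has per-run
    probability in [(1/2 - eps)^k, (1/2 + eps)^k], conditioned on any history."""
    return (0.5 - eps) ** k, (0.5 + eps) ** k

def hoeffding_tail(n, t):
    """Azuma-Hoeffding: P(observed fraction < expected fraction - t) <= exp(-2 n t^2)."""
    return math.exp(-2 * n * t ** 2)

eps, k, n = 0.2, 4, 1_000_000        # illustrative numbers only
lo, hi = sv_setting_prob_bounds(eps, k)
expected_min = n * lo                 # u* appears in a linear fraction of the runs
tail = hoeffding_tail(n, lo / 2)      # falling short by half that fraction

assert 0 < lo < hi < 1
assert expected_min > 0.008 * n       # (0.3)^4 = 0.0081: a constant fraction
assert tail < 1e-5                    # exponentially unlikely shortfall
```

Even a strongly biased source ($\varepsilon = 0.2$) thus guarantees that the pair $(\textbf{u}^*, \textbf{x}^*)$ is sampled often enough for the partial tomography to be statistically meaningful.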
Thus, conditioned on both tests in the protocol being passed (which happens with large probability with the use of the SV source and good quantum boxes by the honest parties), we obtain that, with high probability over the input, the output is a source of linear min-entropy. This allows us to use known results on randomness extractors for two independent sources of linear min-entropy \cite{CG, one_bit_extr}, namely one given by the outputs of the measurement and the other given by the SV source. As shown in Proposition 16 of \cite{our2}, one can use extractors secure against classical side information even in the scenario of general no-signaling adversaries by accepting a loss in the rate of the protocol, i.e., increasing the output error. The randomness extractor used in the protocol is a non-explicit extractor from \cite{CG}. Alternatively, a recently found explicit extractor \cite{one_bit_extr} can be employed in the protocol, but it then produces just one bit of randomness. It also follows from \cite{our2} that there exists a protocol to obtain more bits with an explicit extractor using a device with three no-signaling components by additionally employing a de Finetti theorem for no-signaling devices \cite{Brandao} (see Protocol II in \cite{our2}). \textit{Conclusion and Open Questions.-} We presented a device-independent protocol to amplify randomness under the minimal conditions in which such a task is possible, and used it to obtain secure random bits from an arbitrarily (but not fully) deterministic Santha-Vazirani source. The protocol uses a device consisting of only two non-signaling components, and works with correlations attainable by noisy quantum mechanical resources. Moreover, its correctness is not based on quantum mechanics and only requires the no-signaling principle. Important open questions still remain.
One interesting question is whether the requirement of strict independence between the SV source and the devices can be relaxed to only require limited independence \cite{our3}. Another is to amplify the randomness of more general min-entropy sources that do not possess the structure of the Santha-Vazirani source. Finally, a significant open problem is to realize device-independent quantum key distribution with an imperfect source of randomness, tolerating a constant error rate and achieving a constant key rate. {\it Acknowledgments.} This work was supported by the ERC AdG grant QOLAPS, the EU grant RAQUEL and by the Foundation for Polish Science TEAM project co-financed by the EU European Regional Development Fund. FB acknowledges support from EPSRC and Polish Ministry of Science and Higher Education Grant no. IdP2011 000361. Part of this work was done in the National Quantum Information Center of Gda\'{n}sk. Part of this work was done when F. B., R. R., K. H. and M. H. attended the program ``Mathematical Challenges in Quantum Information" at the Isaac Newton Institute for Mathematical Sciences at the University of Cambridge. \bibliographystyle{apsrev}
\section{Introduction} \label{sec:intro} The existence of a nonperturbative intrinsic heavy quark component in the nucleon is a rigorous prediction of Quantum Chromodynamics (QCD). An unambiguous experimental confirmation is still missing and would represent a major discovery. The goal of this article is to summarize our current understanding of this subject with a particular focus on the potential of a high energy and high luminosity fixed-target experiment using the LHC beams (AFTER@LHC) \cite{Brodsky:2012vg,Lansberg:2012kf,Lansberg:2013wpx,Rakotozafindrabe:2013cmt} to search for intrinsic charm. Production processes sensitive to the intrinsic heavy quark distributions of protons and nuclei are among the most interesting hadronic physics topics that can be investigated with AFTER@LHC. In contrast to the familiar extrinsic contributions which arise from gluon splitting in perturbative QCD, the intrinsic heavy quarks have multiple connections to the valence quarks of the proton and thus are sensitive to its nonperturbative structure. For example, if the gluon-gluon scattering box diagram, $gg \to Q \overline Q \to gg$ (the analog of QED light-by-light scattering), is inserted into the proton self-energy, the cut of this amplitude generates five-quark Fock states of the proton $|uud Q\overline Q\rangle$, see Fig.~\ref{fig:IQ}. \begin{figure} \begin{center} \includegraphics[width=7cm]{Fig_uudQQbar.png} \end{center} \caption{Five-quark Fock state $|uudQ\overline Q\rangle$ of the proton and the origin of the intrinsic sea.} \label{fig:IQ} \end{figure} Intrinsic strange, charm, and bottom quarks are thus a fundamental property of the wavefunctions of hadronic bound states~\cite{Brodsky:1980pb,Brodsky:1984nx,Harris:1995jx,Franz:2000ee}. 
While the extrinsic contributions to the heavy quark parton distribution functions (PDFs) are most important at low $x$ and depend logarithmically on the heavy quark mass $M_Q$, the intrinsic heavy quark contributions are dominant at high $x$ and depend on $1/M^2_Q$. Because the extrinsic heavy quarks are generated by gluon splitting, their PDFs are always softer than those of the parent gluon by a factor of $(1-x)$. In contrast, the high $x$ intrinsic heavy quark contributions are kinematically dominated by the regime where the $|uud Q \overline Q \rangle$ state is minimally off shell, corresponding to equal rapidities of the constituent quarks. The resulting momentum and spin distributions of the intrinsic $Q$ and $\overline Q$ can be distinct, {\it e.g.}, $s(x) \ne \overline s(x)$ since the comoving $uud Q \overline Q$ quarks are sensitive to the global quantum numbers of the proton. A finite intrinsic charm contribution to the nucleon has been extracted from lattice QCD. An analysis by the MILC collaboration~\cite{Freeman:2012ry} yields a probability for the charm matrix element $\langle N| c\overline c |N \rangle$ in the range of $5 - 6$\%, consistent with a four-loop perturbative QCD calculation~\cite{Kryjevski:2003mh}. While the first experimental evidence of intrinsic heavy quarks came from the EMC measurement of the large $x$ charm structure function \cite{Aubert:1982tt}, a variety of other charm hadron and charmonium measurements are consistent with the existence of intrinsic charm.
Open charm observables in hadroproduction include forward $\Lambda_c$ production at the ISR \cite{Bari:1991ty}\footnote{Similarly, the coalescence of comoving $b$, $u$ and $d$ quarks from the $|uud b \bar b\rangle$ intrinsic bottom Fock state in the proton can explain the high $x_F$ production of the $\Lambda_b(udb)$ baryon, as observed at the ISR~\cite{Bari:1991ty}.} and asymmetries between leading and nonleading charm ($\overline D$ mesons which share valence quarks with the projectile and $D$ mesons which do not, respectively) measured as functions of $x_F$ and $p_T$ in fixed-target experiments, WA89 and WA82 at CERN; E791 and SELEX at Fermilab, see Refs.~\cite{Vogt:1992ki,Vogt:1995fsa,Gutierrez:1998bc} and references therein. Previous fixed-target $J/\psi$ measurements also give indications of important intrinsic charm contributions, particularly from the nuclear mass, or $A$, dependence, as measured by NA3 at CERN as well as E772 and, later, E866 at Fermilab, see {\it e.g.} \cite{Vogt:1991qd}. Indeed, the $A$ dependence, proportional to $A^\alpha$, is quite different from the $\alpha \sim 1$ expected from extrinsic-type production \cite{Hoyer:1990us}. At large $x_F$, there are indications of an $A^{2/3}$ dependence, consistent with a nuclear surface-type interaction instead of the volume dependence of pQCD. In addition, the NA3 collaboration measured double $J/\psi$ production at forward $x_F$ in $\pi A$ interactions, difficult to explain without an intrinsic charm mechanism \cite{Vogt:1995tf}. All of these observables can be studied with higher energies and luminosities at AFTER@LHC, making precision measurements possible for the first time.
In addition to the typical observables for intrinsic heavy quarks, intrinsic heavy quarks also contribute to a number of more exotic observables, such as inclusive and diffractive Higgs production $pp \to p p H$, in which the Higgs boson carries a significant fraction of the projectile proton momentum \cite{Brodsky:2006wb,Brodsky:2007yz}. There are also important implications for intrinsic charm and bottom quarks in Standard Model physics, as in the weak decays of the $B$-meson~\cite{Brodsky:2001yt} and a novel solution to the $J/\psi \to \rho \pi$ problem~\cite{Broadsky:2012rw}. AFTER@LHC could also shed light on these topics. The rest of this paper is organized as follows. In Sec.\ \ref{sec:theory}, we give an overview of the theoretical models predicting the $x$-shape (but not the normalization) of the intrinsic charm and bottom parton distribution functions. In Sec.\ \ref{sec:pdfs}, we discuss the constraints on the normalization of the intrinsic charm (IC) obtained in global analyses of PDFs. Section \ref{sec:ib} is devoted to the intrinsic bottom (IB) content of the nucleon for which there are currently no quantitative constraints. In Sec.\ \ref{sec:observables} we review collider observables sensitive to an intrinsic charm or bottom PDF. Finally, in Sec.\ \ref{sec:conclusions} we present our conclusions. \section{Theoretical models} \label{sec:theory} The QCD wavefunction of a hadron can be represented as a superposition of quark and gluon Fock states. For example, at fixed light-front time, a hadron wavefunction can be expanded as a sum over the complete basis of free quark and gluon states: $\vert \Psi_h \rangle = \sum_m \vert m \rangle \, \psi_{m/h}(x_i, k_{T,i})$ where the color-singlet states, $\vert m \rangle$, represent the fluctuations in the hadron wavefunction with the Fock components $\vert q_1 q_2 q_3 \rangle$, $\vert q_1 q_2 q_3 g \rangle$, $\vert q_1 q_2 q_3 c \overline c \rangle$, {\it etc}.
The boost-invariant light-front wavefunctions, $\psi_{m/h}(x_i, k_{T,i})$, are functions of the relative momentum coordinates $x_i = k_i^+/P^+$ and $k_{T,i}$ where $k_i$ denotes the parton momenta and $P$ the hadron momentum. Momentum conservation demands $\sum_{i=1}^n x_i = 1$ and $\sum_{i=1}^n \vec{k}_{T,i}=0$ where $n$ is the number of partons in state $\vert m \rangle$. For example, in the BHPS model of Brodsky and collaborators, intrinsic charm fluctuations \cite{Brodsky:1980pb,Brodsky:1981se} can be liberated by a soft interaction which breaks the coherence of the Fock state \cite{Brodsky:1991dj}, provided the system is probed during the characteristic time that such fluctuations exist. Microscopically, the intrinsic heavy quark Fock component in the proton wavefunction, $|u u d c \overline c \rangle$, is generated by virtual interactions such as $g g \rightarrow Q \overline Q$ where the gluons couple to two or more valence quarks. The probability for $c \overline c$ fluctuations to exist in a hadron is higher twist since it scales as $1/m_c^2$ relative to the extrinsic, EC, leading-twist production by photon-gluon fusion \cite{Vogt:1995tf}. The dominant Fock state configurations are not far off shell and thus have minimal invariant mass, $M^2 = \sum_i^n \widehat{m}_i^2/ x_i$ where $\widehat{m}_i^2 = m_i^2 + \langle \vec k_{T, i}^2 \rangle$ is the square of the average transverse mass of parton $i$. The general form of the Fock state wavefunction for a hadron with mass $m_h$ appropriate to any frame at fixed light-front time is \begin{equation} \Psi(x_i, \vec k_{\perp i}) = \frac{\Gamma(x_i, \vec k_{\perp i}) }{m_h^2 - M^2 } \, \, \end{equation} where $\Gamma$ is a vertex function, expected to be a slowly-varying, decreasing function of $m_h^2 - M^2$. The particle distributions are then controlled by the light-front energy denominator and phase space.
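The statement about minimal off-shellness can be made quantitative: minimizing $M^2 = \sum_i \widehat{m}_i^2/x_i$ subject to $\sum_i x_i = 1$ gives, by Lagrange multipliers, $x_i = \widehat{m}_i/\sum_j \widehat{m}_j$ (equal rapidities), so the heavy constituents carry the largest momentum fractions. A small numerical check, with purely illustrative transverse masses (in GeV):

```python
def invariant_mass_sq(mhat, x):
    """Light-front invariant mass squared: M^2 = sum_i mhat_i^2 / x_i."""
    return sum(m * m / xi for m, xi in zip(mhat, x))

# Illustrative transverse masses for |uud c cbar>: three light quarks plus c, cbar.
mhat = [0.3, 0.3, 0.3, 1.5, 1.5]

# Stationary point of M^2 subject to sum(x) = 1: x_i proportional to mhat_i.
total = sum(mhat)
x_min = [m / total for m in mhat]

# The charm quarks carry the largest momentum fractions in this configuration.
assert x_min[3] == x_min[4] == max(x_min)

# M^2 is convex in the x_i, so any probability-conserving perturbation raises it.
m2_min = invariant_mass_sq(mhat, x_min)
for i in range(5):
    for j in range(5):
        if i != j:
            x = list(x_min)
            x[i] += 0.01
            x[j] -= 0.01
            assert invariant_mass_sq(mhat, x) > m2_min
```

With these (assumed) masses the charm quarks each take $x \approx 0.38$, illustrating why the intrinsic heavy quarks sit at large $x$.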
This form for the higher Fock components is applicable to an arbitrary number of light and heavy partons. Intrinsic $c \overline c$ Fock components with minimum invariant mass correspond to configurations with equal rapidity constituents. Thus, unlike extrinsic heavy quarks generated from a single parton, intrinsic heavy quarks carry a larger fraction of the parent momentum than the light quarks in the state \cite{Brodsky:1980pb,Brodsky:1981se}. The parton distributions reflect the underlying shape of the Fock state wavefunction. Assuming it is sufficient to use $\langle k_T^2 \rangle$ for the transverse momentum, the probability distribution as a function of $x$ in a general $n$--particle intrinsic $c \overline c$ Fock state is \begin{equation} \label{icprobtot} \frac{dP_{\rm IC}}{dx_1 \cdots dx_n} = N_n \ \frac{\delta(1-\sum_{i=1}^n x_i)}{(m_h^2 - \sum_{i=1}^n (\widehat{m}_i^2/x_i) )^2} \, \, , \end{equation} where $N_n$ normalizes the $n$-particle Fock state probability. At LO in the heavy quark limit, $\widehat{m}_c$, $\widehat{m}_{\overline c} \gg m_h$, $\widehat{m}_q$, \begin{equation} \frac{dP_{\rm IC}}{dx_1 \cdots dx_n} = N_n \frac{x_c^2 x_{\overline c}^2}{(x_c + x_{\overline c})^2} \ \delta\Big(1-\sum_{i=1}^n x_i\Big) \, , \label{massless1} \end{equation} leading to \begin{eqnarray} F_{2 \, c}^{\rm IC \, LO}(x) & = & \frac{8}{9} xc(x) \nonumber\\ &=& \frac{8}{9}x \int dx_1 \cdots dx_{\overline c} \frac{dP_{\rm IC}}{dx_1 \cdots dx_{\overline c} dx_c} \, \, . \label{massless2} \end{eqnarray} There are many applications of intrinsic charm in charm hadron production. See, e.g., Refs.~\cite{Vogt:1995tf,Vogt:1991qd,Vogt:1992ki,Vogt:1995fsa,Gutierrez:1998bc} for more details. Paiva {\it et al.} have also calculated an intrinsic charm component of the nucleon sea within the context of the meson cloud model \cite{Paiva:1996dd}. They assumed that the nucleon can fluctuate into $\overline D \Lambda_c$.
The $\overline c$ distribution in the nucleon is then \begin{equation} x {\overline c}_N (x) = \int_{x}^{1} dy\, f_{\overline D} \left(y\right)\, \frac{x}{y} \, {\overline c}_{\overline D} \left(\frac{x}{y}\right)\; . \label{cn} \end{equation} where \begin{equation} f_{\overline D} (y) = \frac{g^2_{ \overline D N\Lambda_c}}{16 \pi^2} \, y \, \int_{-\infty}^{t_{\rm max}}dt \, \frac{[-t+(m_{\Lambda_c}-m_N)^2]}{[t- m_{\overline D}^2]^2}\, F^2 (t)\; , \label{fdbar} \end{equation} with $F(t)$ a form factor at the $\overline D N \Lambda_c$ vertex and $t_{\rm max} = m^2_N y- m^2_{\Lambda_c} y/(1-y)$. In this case they chose a monopole form factor with $\Lambda_m = 1.2$ GeV. The coupling constant was assumed to be $g_{\overline D N \Lambda_c} = -3.795$. From heavy quark effective theories \cite{Neubert:1993mb}, the $\overline c$ distribution in the $\overline D$ is expected to be hard because, in the bound state, the $\overline c$ exchanges momenta much smaller than $m_c$. They made the extreme assumption that the entire $\overline D$ momentum is carried by the charm quark, $\overline c_{\overline D} = x \delta(x-y)$. Next, Steffens {\it et al.}\ investigated all the charm structure function data with two variants of intrinsic charm \cite{Steffens:1999hx}. The first was that of Eq.~(\ref{massless2}), called IC1 in their paper, while the second was a meson cloud model, IC2. In the second approach, the $\overline c$ distribution is obtained from the light-front distribution of $\overline D^0$ mesons in the nucleon, \begin{eqnarray} \overline c^{\rm IC2}(x) & \approx & f_{\overline D}(x) = \frac{1}{16\pi^2} \int_0^\infty dk_\perp^2 \frac{g^2(x,k_\perp^2)}{x(1-x)(s_{\overline D \Lambda_c}-m_N^2)^2} \nonumber\\ &&\times \frac{k_\perp^2 + (m_{\Lambda_c} - (1-x)m_N)^2}{1-x} \, \, . \label{cbaric2} \end{eqnarray} A hard charm momentum distribution was assumed in the $\overline D$, similar to that of Ref.~\cite{Paiva:1996dd}.
The vertex function $g^2(x,k_\perp^2)$ is parameterized as $g^2 = g_0^2(\Lambda^2 + m_N^2)/(\Lambda^2 + s_{\overline D \Lambda_c})$ where $ s_{\overline D \Lambda_c}$ is the square of the center of mass energy of the $\overline D \Lambda_c$ system and $g_0^2$ the coupling constant at $ s_{\overline D \Lambda_c} = m_N^2$. For an intrinsic charm probability of 1\%, $\Lambda \approx 2.2$ GeV. The charm distribution is then \begin{equation} c^{\rm IC2}(x) \approx \frac{3}{2} f_{\Lambda_c} \left(\frac{3x}{2} \right) \label{cic2} \end{equation} where the charm distribution in the $\Lambda_c$ is assumed to be $c_{\Lambda_c} \sim \delta(x - 2/3)$ and $f_{\Lambda_c}(x) = f_{\overline D}(1-x)$. Pumplin \cite{Pumplin:2005yf} considered a model where a point scalar particle of mass $m_0$ couples with strength $g$ to $N$ scalar particles with mass $m_1$, $m_2$, $\cdots$, $m_N$. The probability density is then \begin{eqnarray} dP &=& \frac{g^2}{(16\pi^2)^{N-1}(N-2)!} \prod_{j=1}^N dx_j \delta \bigg(1 - \sum_{j=1}^N x_j \bigg) \times \nonumber\\ && \int_{s_0}^\infty ds \frac{(s - s_0)^{N-2}}{(s - m_0^2)^2} |F(s)|^2\, , \label{Pumplin_eq} \end{eqnarray} where $s_0 = \sum_{j=1}^N (m_j^2/x_j)$. The form factor $F(s)$ suppresses higher mass state contributions. If the quark transverse momenta are neglected, with $m_c$ much greater than all other mass scales, and $F(s) = 1$, then the BHPS model is recovered. Two types of form factors were studied, an exponential, $|F(s)|^2 = \exp[-(s-m_0^2)/\Lambda^2]$, and a power law, $|F(s)|^2 = 1/(s + \Lambda^2)^n$, where the cutoff $\Lambda$ was varied between 2 and 10 GeV. Hobbs {\it et al.} employed a meson cloud type approach but specified the spin and parity of all lowest mass charm meson-baryon combinations from the 5-particle $|uudc \overline c \rangle$ Fock states of the proton \cite{Hobbs:2013bia}.
They pointed out that treating quarks as scalar point-like particles, as in {\it e.g.} Ref.~\cite{Pumplin:2005yf}, does not conserve spin and parity. They calculated the appropriate meson-baryon splitting functions for the meson-baryon combinations and found that the production of charm mesons would proceed almost entirely through $D^*$ mesons. To study the phenomenological distributions of charm mesons and baryons in this approach, they considered exponential and confining vertex functions, $\propto \exp[-(s - m_D^2)/\Lambda^2]$ and $(s-m_D^2)\exp[-(s - m_D^2)/\Lambda^2]$ respectively. They used these results to compare to the $\Lambda_c$ distribution from the ISR \cite{Chauvat:1987kb} and the $\Lambda_c/\overline \Lambda_c$ asymmetry from SELEX \cite{Garcia:2001xj}. See Ref.~\cite{Hobbs:2013bia} for details. \section{Global analyses of PDFs with intrinsic charm} \label{sec:pdfs} In the standard approach employed by almost all global analyses of PDFs, the heavy quark distributions are generated {\em radiatively}, according to DGLAP evolution equations~\cite{Altarelli:1977zs,Gribov:1972ri,Dokshitzer:1977sg}, starting with a perturbatively calculable boundary condition \cite{Collins:1986mp,Buza:1996wv} at a scale of the order of the heavy quark mass. In other words, there are no free fit parameters associated with the heavy quark distribution and it is entirely determined by the gluon distribution function at the scale of the boundary condition. As a consequence, the PDF uncertainties of the heavy quark and gluon PDFs are strongly correlated, as has been discussed in the context of inclusive Higgs production at the Tevatron and the LHC \cite{Belyaev:2005nu}. However, a purely perturbative treatment might not be adequate, in particular for the charm quark with a mass $m_c \simeq 1.3$ GeV, which is not much bigger than typical hadronic scales, but also for the bottom quark with a mass $m_b \simeq 4.5$ GeV.
Indeed, as discussed above, light-front models predict a nonperturbative (`intrinsic') heavy quark component in the proton wavefunction~\cite{Brodsky:1980pb,Brodsky:1981se}. Motivated by the theoretical predictions of the BHPS light-front model, analyses of the charm distribution in the proton going beyond the common assumption of purely radiatively generated charm date back almost as far as the BHPS predictions themselves. For definiteness, in the following we refer to the radiatively generated charm by $c_0(x,Q)$ and to the intrinsic charm by $c_1(x,Q)$. The full charm parton distribution is then given by the sum $c(x,Q)=c_0(x,Q)+c_1(x,Q)$. Strictly speaking, this decomposition is defined at the initial scale $Q_0 \simeq m_c$ of the DGLAP evolution but holds to a good approximation at any scale since the intrinsic component $c_1$ is governed (to a very good approximation) by a standalone non-singlet evolution equation \cite{Lyonnet:2015dca}. A similar decomposition is understood for the bottom quark which will be discussed in Sec.\ \ref{sec:ib}. The BHPS model of the $|uud c \overline c \rangle$ Fock state predicts a simple form for $F_{2 \, c}(x)$, \begin{eqnarray} \label{eq:IC} F_{2\, c}^{\rm IC}(x) &=& \left(\frac{8}{9} x\right) \frac{1}{2} N_5 x^2 \times \\ && \left[\frac{1}{3}(1-x)(1 + 10x + x^2)+ 2x(1+x)\ln x\right] \, \, . \nonumber \end{eqnarray} If there is a 1\% intrinsic charm contribution to the proton PDF, $N_5 = 36$. Hoffmann and Moore incorporated mass effects and introduced next-to-leading order corrections as well as scale evolution \cite{Hoffmann:1983ah}. They compared their result to the EMC $F_{2 \, c}$ data from muon scattering on iron at high $x$ and $Q^2$ with the intrinsic charm contribution added to the leading order calculation of $F_{2 \, c}$ by photon-gluon fusion.
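The normalization quoted after Eq.~(\ref{eq:IC}) is easy to verify numerically: integrating the BHPS density $c_1(x) = \frac{1}{2} N_5 x^2 [\frac{1}{3}(1-x)(1+10x+x^2) + 2x(1+x)\ln x]$ over $x \in (0,1)$ gives exactly $N_5/3600$, so $N_5 = 36$ corresponds to a 1\% intrinsic charm probability. A quadrature sketch:

```python
import math

def bhps_density(x, n5=36.0):
    """BHPS intrinsic charm probability density c1(x); N5 = 36 gives P_IC = 1%."""
    bracket = (1.0 / 3.0) * (1 - x) * (1 + 10 * x + x * x) \
              + 2 * x * (1 + x) * math.log(x)
    return 0.5 * n5 * x * x * bracket

# Midpoint-rule quadrature on (0, 1); the integrand vanishes at both endpoints.
n = 100_000
p_ic = sum(bhps_density((i + 0.5) / n) for i in range(n)) / n

assert abs(p_ic - 0.01) < 1e-6     # total intrinsic charm probability of 1%
```

The same routine reproduces $F_{2\,c}^{\rm IC}(x)$ via the prefactor $\frac{8}{9}x$ of Eq.~(\ref{eq:IC}).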
A complete next-to-leading order analysis of both the `extrinsic' radiatively-generated charm component and the intrinsic component was later carried out by Harris {\it et al.} \cite{Harris:1995jx}. The EMC data with $\overline{\nu} = \overline{Q^2} / 2m_p \overline{x} = 53,\, 95, \,$ and 168 GeV were fit by a sum of the extrinsic and intrinsic components \cite{Harris:1995jx}. The normalizations of the two components were left as free parameters, \begin{eqnarray} \label{f2ccombo} F_{2 \,c}(x,\mu^2,m_c^2) &=& \epsilon F_{2 \, c}^{\gamma p}(x,\mu^2,m_c^2) + \delta F_{2 \, c}^{\rm IC}(x,\mu^2,m_c^2)\, , \nonumber\\ \end{eqnarray} with the scale $\mu = \sqrt{m_{c \overline c}^2 + Q^2}$. The parameter $\epsilon$, typically larger than unity, was considered to be an estimate of the NNLO contribution to the extrinsic component. Since a 1\% normalization of the IC component was assumed in Eq.~(\ref{f2ccombo}), the fitted value of $\delta$ is the fraction of this normalization. Given the quality of the data, no statement could be made about the intrinsic charm content of the proton when $\overline{\nu}=53$ and $95 \: {\rm GeV}$. However, with $\overline{\nu}=168 \: {\rm GeV}$ an intrinsic charm contribution of $(0.86\pm0.60)\%$ was indicated. These results were consistent with those of the original analysis by Hoffmann and Moore \cite{Hoffmann:1983ah}. The BHPS light-front model assumes that $c_1(x) = \overline c_1(x)$. Meson cloud models, introduced later, treat the 5-particle Fock state as a combination of (predominantly) $\overline D^0 \Lambda_c^+$. In this case, of course, $c_1(x) \neq \overline c_1(x)$ with the $\overline c$ quark in the $\overline D^0$ carrying more momentum than the $c$ quark in the charm baryon. An analysis by Steffens {\it et al.} in the context of the meson cloud model and using a hybrid scheme to interpolate between massless evolution at high $Q^2$ and `extrinsic' production at low $Q^2$ found a limit of $\sim 0.4$\% \cite{Steffens:1999hx}.
Regardless of whether or not the models predict $\overline c_1(x) - c_1(x) > 0$, intrinsic charm should provide the dominant contribution to the charm density in the proton at large $x$ \cite{Pumplin:2005yf}. For some time, no other analyses of the charm structure function were made. The EMC data remain the only measurement of the charm structure function in the relevant $(x,Q^2)$ regime and are the only DIS data cited as evidence for intrinsic charm. The HERA data on $F_{2 \, c}$ were at too low $x$ to address the issue. The first global analyses of the proton PDFs with an intrinsic charm contribution included were performed by members of the CTEQ collaboration \cite{Pumplin:2007wg,Nadolsky:2008zw}. In addition to the BHPS and meson cloud approaches, they also allowed for a `sea-like' contribution with the same shape as the radiatively-generated charm distribution. They characterized the magnitude of the intrinsic charm component ($c_1(x,Q^2)$) by the first moment of the charm distribution at the input scale $Q_0=m_c=1.3$ GeV:\footnote{Note that at $Q_0=m_c$ the radiatively generated charm component ($c_0(x,Q^2)$) vanishes at NLO in the ${\overline{\rm MS}}$ scheme so that $c(x,Q_0^2)=c_1(x,Q_0^2)$.} \begin{equation} c_1(N=1,Q_0^2) = \int_0^1 dx\ c_1(x,Q_0^2) = 0.01\, , \label{eq:norm} \end{equation} which translates into a momentum fraction \begin{equation} \langle x \rangle_{c_1 + \overline c_1} = \int_0^1 dx \, x[c_1(x,Q_0^2) + \overline c_1(x,Q_0^2)] = 0.0057 \,\, . \label{Eq:avex} \end{equation} They found that the global analyses of hard-scattering data provided no evidence for or against the existence of intrinsic charm up to $\langle x \rangle_{c_1 + \overline c_1} = 0.0057$, {\it i.e.} the quality of the fit is insensitive to $\langle x \rangle_{c_1 + \overline c_1}$ in this interval. 
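For the BHPS shape, the relation between the two normalization conventions in Eqs.~(\ref{eq:norm}) and (\ref{Eq:avex}) can be checked directly: a first moment of 0.01 with $c_1 = \overline c_1$ translates into a momentum fraction $2\int x\, c_1(x)\, dx \simeq 0.0057$, since the mean momentum fraction of a BHPS charm quark is $2/7 \approx 0.286$. A short numerical sketch:

```python
import math

def bhps_density(x, n5=36.0):
    """BHPS intrinsic charm density c1(x); N5 = 36 gives a 1% first moment."""
    bracket = (1.0 / 3.0) * (1 - x) * (1 + 10 * x + x * x) \
              + 2 * x * (1 + x) * math.log(x)
    return 0.5 * n5 * x * x * bracket

# Midpoint-rule quadrature on (0, 1).
n = 100_000
xs = [(i + 0.5) / n for i in range(n)]
norm = sum(bhps_density(x) for x in xs) / n          # first moment, Eq. (eq:norm)
mom = 2 * sum(x * bhps_density(x) for x in xs) / n   # <x>_{c1 + c1bar}, Eq. (Eq:avex)

assert abs(norm - 0.01) < 1e-6
assert abs(mom - 0.0057) < 1e-4
assert abs(mom / (2 * norm) - 2 / 7) < 1e-4   # mean x of a BHPS charm quark
```

The exact value is $2/350 \approx 0.00571$, which rounds to the 0.0057 quoted in Eq.~(\ref{Eq:avex}).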
They also found that the allowed range was greatest for the sea-like IC, expected since this shape is rather easily interchangeable with other sea quark components while the other, harder, charm distributions are not \cite{Pumplin:2007wg}. In addition, they concluded that the enhancement due to IC relative to analyses without it persisted up to scales of $\sim 100$ GeV and could have an influence on charm-initiated processes at the LHC, as is discussed later. The CTEQ6.6C proton PDFs were generated as a result of this analysis \cite{Nadolsky:2008zw}. There are two recent updates to the global analyses, reaching different conclusions about the importance of intrinsic charm. The first, by Dulat {\it et al.} \cite{Dulat:2013hea}, follows the previous work in the context of the CTEQ collaboration \cite{Pumplin:2007wg,Nadolsky:2008zw}. The second, by Jimenez-Delgado {\it et al.} \cite{Jimenez-Delgado:2014zga}, included more lower energy data than the previous global analyses. The result of Dulat {\it et al.} \cite{Dulat:2013hea} was based on the CT10 NNLO parton densities. Here the strong coupling, $\alpha_S(Q^2)$, the evolution equations and the matrix elements are calculated at NNLO. Only the inclusive jet data still required NLO expressions. Their analysis included DIS data from BCDMS, NMC, CDHSW, and CCFR; SIDIS data from NuTeV and CCFR; the combined DIS and $F_{2\,c}$ data from HERA; Drell-Yan production; the $W$ charge asymmetry and $Z^0$ rapidity from CDF and D0; and the inclusive jet measurements from CDF and D0, see Ref.~\cite{Dulat:2013hea} for a complete list. Two models of IC were considered: the BHPS light-front model and the sea-like IC introduced in Ref.~\cite{Pumplin:2007wg}. They found a broader possible probability range for IC in this analysis, $\langle x \rangle_{\rm IC} = \langle x \rangle_{c_1 + \overline c_1}(Q_0^2) \lesssim 0.025$ for BHPS and $\langle x \rangle_{\rm IC} \lesssim 0.015$ for the sea-like IC, summarized in Fig.~\ref{fig:Dulat}. 
This finding differs from the previous work, which found a larger upper limit on IC for the sea-like model. They believe that the difference is caused by the improved treatment of the charm quark mass in the later study \cite{Dulat:2013hea}. \begin{figure}[thp!] \centering \includegraphics[width=0.45\textwidth]{IC_702_chi2t+T2-l2.pdf} \caption{ (Color online) The global chi-square function versus charm momentum fraction $\langle x \rangle_{\rm IC}$. The two curves are determined from fits with many values of $\langle x \rangle_{\rm IC}$. Two exemplary fits for each IC model are shown as dots. Blue dots denote the BHPS model; they have $\langle x \rangle_{\rm IC}=0.57\%$ and 2\%, denoted BHPS1 and BHPS2. Red denotes the SEA model; the dots have $\langle x \rangle_{\rm IC}=0.57\%$ and 1.5\%, denoted SEA1 and SEA2. Additionally, the dotted lines show the global chi-square function with the additional penalty, $T_2(i)$, used to set the upper limits on the allowed IC component. \\(Figure taken from \cite{Dulat:2013hea})} \label{fig:Dulat} \end{figure} In addition to the global fit, they also tested the sensitivity of their result to individual experiments by introducing a penalty factor, $T_2(i)$, for each experiment $i$. This penalty factor is designed to increase more rapidly than the $\chi^2_i$ for that experiment when $\chi^2_i$ goes beyond the 90\% confidence level. The penalty factor employs an equivalent Gaussian variable $S_n$ which measures the goodness of fit for each individual data set. Values of $|S_n| \leq 1$ are considered good fits, $S_n > 3$ is considered a poor fit, and values of $S_n < -3$ are better fits than expected from usual statistical analyses. Using the $S_n$ dependence on $\langle x \rangle_{\rm IC}$, they determined which of the data sets used in the global analyses are most sensitive to intrinsic charm.
The upper limit on the BHPS value of $\langle x \rangle_{\rm IC}$ comes from the CCFR structure function data while the HERA combined charm data sets the upper limit on IC from the sea-like model \cite{Dulat:2013hea}. They also studied the sensitivity of their sea-like result to the charm quark mass and found that, if the charm quark mass were raised from 1.3 GeV, as in the CT10 fits, to 1.67 GeV, then the minimum $\chi^2$ for the global analyses would support $\langle x \rangle_{\rm IC} = 0.01$ rather than 0, although the global $\chi^2$ is worse for the larger charm mass \cite{Dulat:2013hea}. Finally, they showed how $W$ and $Z$ production at the LHC might be affected by a nonzero IC contribution. In the most recent study, Jimenez-Delgado {\it et al.} \cite{Jimenez-Delgado:2014zga} included the full range of high energy scattering data by using looser kinematic cuts $Q^2 \geq 1$ GeV$^2$ and $W^2 \geq 3.5$ GeV$^2$. In particular, they included the lower energy SLAC fixed-target data which did not pass the more stringent standard DIS cuts on the $(Q^2,W^2)$ plane applied in the previous work \cite{Pumplin:2007wg,Nadolsky:2008zw,Dulat:2013hea}. The EMC $F_{2\,c}$ data, cited as the strongest evidence for intrinsic charm in DIS, are used as a consistency check. The low energy, high-$x$, fixed target data lie precisely in the region where IC is expected to be most important. Thus including these data could enhance the sensitivity of the global fit to IC. Note, however, that some of these newly-added data are on heavier targets than the deuteron and thus target mass corrections, nuclear corrections for $A>2$, and higher-twist effects need to be taken into account \cite{Jimenez-Delgado:2014zga}. They followed the framework of the JR14 \cite{Jimenez-Delgado:2014twa} global fit which decomposed $F_2$ into light and heavy components. The charm component is itself separated into the `extrinsic' and intrinsic charm components.
The fixed-flavor number scheme is used to compute the extrinsic contribution. In this scheme, the charm quark mass enters the PDF evolution only indirectly through the running of $\alpha_s$ \cite{Jimenez-Delgado:2014zga}. They employed a charm quark mass of 1.3 GeV, as did Dulat {\it et al.} \cite{Dulat:2013hea}. They used all three intrinsic charm models previously considered: BHPS, the meson-cloud model (this time including pseudoscalar and vector mesons as well as spin $1/2$ and spin $3/2$ charm baryons -- the CTEQ analyses only included the scalar $\overline D \Lambda_c$ fluctuation), and the sea-like component \cite{Jimenez-Delgado:2014zga}. The IC contribution was evolved up to NLO. They found that the total $\chi^2$ is minimized for $\langle x \rangle_{\rm IC} = 0$ with $\langle x \rangle_{\rm IC} < 0.1$\% at the $5\sigma$ level. When a hadron suppression factor to suppress charm contributions near threshold is applied, they find a minimum $\chi^2$ at $\langle x \rangle_{\rm IC} = (0.15 \pm 0.09)$\% for the full data set. The SLAC $F_2$ (large $x$), NMC cross sections (medium $x$) and HERA $F_{2\, c}$ (low $x$) display the greatest sensitivity to IC, see Fig.~\ref{fig:Delgado} for details. However, fits without the SLAC data still give a low IC contribution \cite{Jimenez-Delgado:2014zga}. The difference between their results and previous results is in part due to the very different tolerance criteria, $\Delta \chi^2 = 1$ for their fit and $\Delta \chi^2 = 100$ for Dulat {\it et al.} \cite{Dulat:2013hea}. Increasing the tolerance to $\Delta \chi^2 = 100$ would also accommodate $\langle x \rangle_{\rm IC} = 1$\% at the $1\sigma$ level \cite{Jimenez-Delgado:2014zga}.\footnote{For a critical discussion of the analysis in \protect\cite{Jimenez-Delgado:2014twa} and in particular of the tolerance criterion $\Delta \chi^2 = 1$ see Ref.~\protect\cite{Brodsky:2015uwa}.} \begin{figure}[thp!] 
\centering \includegraphics[width=0.45\textwidth]{global_fig1.pdf} \caption{ (Color online) Contributions to the total $\chi^2$ (black circles), relative to the value $\chi_0^2$ for no IC, of various data sets as a function of the momentum fraction $\langle x \rangle_{\rm IC}$.\\ (Figure taken from \cite{Jimenez-Delgado:2014zga})} \label{fig:Delgado} \end{figure} When checked against the EMC $F_{2\, c}$ data, a clear preference for IC is found, as expected, for the highest-$x$ data. However, these data are typically not included in global analyses due to their greater tension with other global data sets. Given that the two most recent analyses set significantly different limits on IC, it is important to collect further large-$x$ data, particularly on $F_{2\, c}$ to try and place greater confidence on the limit of IC in the nucleon. This would be an important measurement at the future electron-ion collider. \section{Predictions for intrinsic bottom} \label{sec:ib} In contrast to the case of intrinsic charm, there is currently no global analysis available that investigates the possibility of an intrinsic bottom (IB) content of the nucleon. The main reason for this is the lack of experimental data that could constrain it. The BHPS light-front model \cite{Brodsky:1980pb} predicts the existence of IB with an $x$-shape very similar to the one of IC given in Eq.~\eqref{eq:IC} but with a normalization which is parametrically suppressed by the ratio $m_c^2/m_b^2$. This fact, together with the observation that the IB PDF is governed (to an excellent approximation) by an independent non-singlet evolution equation \cite{Lyonnet:2015dca}, can be used to investigate IB in a flexible way without the need of a dedicated global analysis. 
Such a study has been done in Ref.~\cite{Lyonnet:2015dca} where a set of decoupled IB (and IC) PDFs has been provided and used together with the CTEQ6.6 PDFs~\cite{Nadolsky:2008zw} to estimate the impact of the IB on new physics searches at the LHC. The advantage of this approach is that the provided IB (IC) PDF can be used with any standard set of PDFs and the normalization of the intrinsic component can be freely adjusted. This is especially useful for studies of possible IB effects, since there are currently no experimental limits on the allowed amount of IB. In the following we show some of the results found in Ref.~\cite{Lyonnet:2015dca}. In this work, the boundary condition for the IB distribution was modeled using the IC distributions in the CTEQ analyses \cite{Pumplin:2007wg,Nadolsky:2008zw} scaled down by the mass factor $m_c^2/m_b^2$. The result of such an intrinsic bottom distribution $b_1(x,Q^2)$, with normalization $\int_0^1 dx\, b_1(x,m_c^2) = 0.01\times m_c^2/m_b^2$, is shown in Fig.~\ref{fig:b1-b0}, where the ratio of the intrinsic ($b_1$) and the radiatively generated ($b_0$) component of the bottom PDF is plotted. \begin{figure}[thp!] \centering \includegraphics[width=0.45\textwidth]{./Figures/ratio_b0_b1_v3.pdf} \caption{ (Color online) Ratio of intrinsic ($b_1$) and dynamically generated ($b_0$) bottom PDFs for various $Q$ scales. The perturbative bottom PDF from CTEQ6.6c0 \cite{Nadolsky:2008zw} is used; the normalization of the IB is taken to be such that $\int_0^1 dx\, b_1(x,m_c^2) = 0.01 \times m_c^2/m_b^2$. \\(Figure taken from \cite{Lyonnet:2015dca}) } \label{fig:b1-b0} \end{figure} As always in light-front models, the intrinsic component is mostly present at large $x$ values. We can see that for low scales $Q\sim10$ GeV the modification of the bottom PDF, $\kappa_b=1+b_1/b_0$, can reach $\kappa_b=2.5$. However, it decreases rapidly with the rising scale.
Since $b_1$ evolves independently of the other PDFs, a change in the normalization of the IB component in Fig.~\ref{fig:b1-b0} amounts to simply rescaling the curves in the figure. If we allowed for a $0.035 \times m_c^2/m_b^2$ normalization of the IB, the modification of the bottom PDF would be given by $\kappa_b=1+3.5\,b_1/b_0$, which for high $x$ and $Q\sim10$ GeV would result in an enhancement of the bottom PDF by a factor $\sim6.25$. However, at a scale of around 100 GeV and $x$ below 0.2-0.3, even with the higher IB normalization, the effect becomes negligible. In Fig.~\ref{fig:intrinsic_bottom_fullpdf} we show the sum of the intrinsic bottom PDF $b_1$ and the dynamically generated PDF $b_0$ from CTEQ6.6 for different normalizations of the IB component, namely 0.01 and 0.035 $\times m_c^2/m_b^2$. We compare this sum to the asymmetric uncertainties\footnote{The asymmetric errors are computed following \cite{Stump:2003yu,Nadolsky:2001yg}.} of the CTEQ6.6 PDF set (upper panel). The same figure also shows the ratio of the same PDFs to the central value of CTEQ6.6 (lower panel). As can be seen, the IB curve with the 0.035 $\times m_c^2/m_b^2$ normalization clearly lies outside the uncertainty band whereas the one with the smaller normalization is marginally outside the band (up to $x\lesssim 0.6$). \begin{figure}[thp!] \centering \includegraphics[width=0.45\textwidth]{./Figures/combine_IB} \caption{ (Color online) CTEQ6.6 + $b_1$ for different normalizations of the intrinsic bottom-quark PDF at the scale $Q=10$ GeV, compared to the asymmetric PDF errors from the same set (upper panel). Also shown is the ratio of the same PDF sets to the central value of CTEQ6.6 (lower panel).} \label{fig:intrinsic_bottom_fullpdf} \end{figure} If we are looking for new physics with couplings proportional to the mass, the suppression of IB compared to IC would be partly compensated by the square of the coupling.
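The rescaling arithmetic above is simple enough to spell out. A trivial sketch, where the ratio $r = b_1/b_0 = 1.5$ at large $x$ and $Q \sim 10$ GeV is read off Fig.~\ref{fig:b1-b0} for the default $0.01 \times m_c^2/m_b^2$ normalization:

```python
def kappa_b(r, norm_scale=1.0):
    """Bottom-PDF modification factor kappa_b = 1 + s * (b1/b0),
    where s rescales the default IB normalization 0.01 * mc^2/mb^2."""
    return 1.0 + norm_scale * r

r = 1.5  # b1/b0 at large x, Q ~ 10 GeV, default normalization
print(kappa_b(r))        # 2.5: the default enhancement
print(kappa_b(r, 3.5))   # 6.25: for a 0.035 * mc^2/mb^2 normalization
```

Because the IB component evolves by an independent non-singlet equation, this one-parameter rescaling is exact at the level of the ratio plot and no refit is needed.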
For a more detailed study of the relevant parton-parton luminosities please see Ref.\ \cite{Lyonnet:2015dca}. \section{Collider observables} \label{sec:observables} Several collider observables receive large contributions from heavy quark initiated subprocesses and are hence potentially sensitive to an intrinsic charm content in the nucleon. For optimal sensitivity, the heavy quark PDF should be probed at large $x \gtrsim 0.2$ (for light-front models) and not too large factorization scales. This kinematic region is best accessible at lower energies in the center-of-mass system (cms) and/or large rapidities. Therefore, a fixed target experiment like AFTER@LHC \cite{Brodsky:2012vg,Lansberg:2012kf,Lansberg:2013wpx,Rakotozafindrabe:2013cmt} operating at a cms energy $\sqrt{s}=115$ GeV with a high luminosity is ideally suited for searches for IC effects. In the following we review some of the collider processes which have been studied in the literature in this respect. \subsection{Open heavy flavor production} Inclusive charm hadron ($D^0, D^+, D^{\star +}, \Lambda_c, \ldots$) production in hadronic collisions was advocated in Ref.\ \cite{Kniehl:2009ar} as a laboratory to probe IC inside the colliding hadrons. In this analysis, predictions for the differential cross section as a function of the transverse momentum $p_T$ were obtained in the general-mass variable-flavor-number scheme (GM-VFNS) \cite{Kniehl:2004fy,Kniehl:2005mk,Kniehl:2005st} at next-to-leading order (NLO). In this scheme, the charm quark is an active parton and the differential cross sections of inclusive charm meson production depend heavily on the PDF of the charm quark. The sensitivity of these cross sections to IC was studied for the Tevatron at a cms energy of 1960 GeV and the Relativistic Heavy Ion Collider (RHIC) at cms energies of 200 GeV (RHIC200) and 500 GeV (RHIC500).
The different IC models from the CTEQ6.5c global analysis \cite{Pumplin:2007wg} were employed together with the fragmentation functions for charm mesons from Ref.~\cite{Kneesch:2007ey}. While the effects at the Tevatron were found to be very moderate and likely not testable, large enhancements were found at RHIC200 reaching values of $\sim 3$ at $p_T=20$ GeV. Unfortunately, the measurements at RHIC200 are limited by the luminosity. At RHIC500 the cross section is increased by about a factor 3.6. However, the sensitivity to IC for the light-front models is greatly reduced. More recently, the GM-VFNS was applied to obtain predictions for the production of inclusive $D$ mesons at the LHC for a cms energy of 7 TeV (LHC7) \cite{Kniehl:2012ti}. It was found that the production cross sections at large rapidities $y \gtrsim 4$ are sensitive to an IC component. These predictions can be tested by measurements at forward rapidities with the LHCb detector. The ideal experiment to search for the effects of IC would be a high luminosity fixed target experiment such as AFTER@LHC operating at a cms energy of 115 GeV. In Fig.\ \ref{fig:D+X} we show results for inclusive $D^\star$ meson production as a function of the transverse momentum of the $D^\star$ meson and integrated over the rapidity range $2 < y < 5$ (in the laboratory frame) in essentially the same setup as in Ref.\ \cite{Kniehl:2009ar} to which we refer for details. The only difference is that, following Ref.~\cite{Kniehl:2015fla}, the default choice for the renormalization and factorization scales is $\mu_R = m_T$, $\mu_F = \mu_F' = m_T / 2$ where $m_T = \sqrt{p_T^2 + m^2}$ is the transverse mass. The theoretical predictions are shown on an absolute scale in Fig.~\ref{fig:D+X} (left) and as a ratio with respect to the default results in Fig.~\ref{fig:D+X} (right). 
In both figures, the black dotted lines have been obtained by varying the renormalization scale around the central choice to $\mu_R= m_T/2$ (upper line) and $\mu_R = 2 m_T$ (lower line). In the right figure we repeat the calculation of the central prediction in turn with PDF sets CTEQ6.5Cn for $n=1,\ldots,6$ and normalize the outcome to the default prediction with zero IC of Fig.~\ref{fig:D+X} (left). We observe that the ratios for $n=1,2,3,4$ corresponding to the BHPS ($n=1,2$) or meson-cloud ($n=3,4$) models become very large at large $p_T$. Indeed, the default cross section can be increased by more than a factor 5 at $p_T= 20$ GeV in scenarios with maximally allowed intrinsic charm ($n=2,4$). Even for the IC sets with smaller normalization ($n=1,3$) corresponding to $\langle x \rangle_{c_1 + \overline c_1} = 0.57\%$ and $\langle x \rangle_{c_1 + \overline c_1} = 0.96\%$ the cross section would be enhanced by a factor larger than 2 (red solid line) or 3 (blue dashed line) at $p_T=20$ GeV. It is also interesting to note that the phenomenological models for a sea-like IC ($n=5,6$) lead to a significant enhancement of the cross section at small $p_T \sim m_c$ which would be probed at AFTER@LHC as well. \begin{figure*}[thp!] \centering \includegraphics[width=0.48\textwidth]{./Figures/AIC-fig1-crop} \includegraphics[width=0.45\textwidth]{./Figures/AIC-fig2-crop} \caption{ (Color online) NLO predictions for inclusive $D^\star$ meson production at AFTER@LHC vs the transverse momentum of the $D^\star$ meson. (Left) Differential cross section on an absolute scale without intrinsic charm. (Right) Ratio with respect to the central prediction of the left plot. Shown are results using the IC parametrizations from Ref.\ \cite{Pumplin:2007wg} for $n=1$ (red, solid line), 2 (violet, dotted line), 3 (blue, dashed line), 4 (green, long dashed line), 5 (cyan, dot-dashed line), 6 (orange, double-dot-dashed line).
In both figures, the black dotted lines have been obtained by varying the renormalization scale around the central choice ($\mu_R = m_T$) to $\mu_R= m_T/2$ (upper line) and $\mu_R = 2 m_T$ (lower line). } \label{fig:D+X} \end{figure*} \subsection{Production of a photon in association with a charm quark} Another process with a wide range of phenomenological applications in $pp$, $pA$, and $AA$ collisions \cite{Stavreva:2009vi,Stavreva:2010mw,Stavreva:2012aa} which is very sensitive to the heavy quark PDF is the associated production of a photon with a heavy quark. A dedicated study of this process at the LHC operating at $\sqrt{s}=8$ TeV (LHC8) was performed in Refs.\ \cite{Bednyakov:2013zta,Bednyakov:2014pqa} where it was demonstrated that the existence of IC in the proton can be visible at large transverse momenta of the photons and heavy quark jets at rapidities $1.5 < |y_\gamma|<2.4, |y_c|<2.4$. Indeed, for the BHPS model the cross section can be enhanced by a factor of 2-3 for $p_T^\gamma > 200$ GeV (see Fig.\ 5 in \cite{Bednyakov:2014pqa}). This comes with the penalty that the cross section falls rapidly with increasing transverse momentum so that this measurement will be limited by statistics. Again, as for open heavy flavor production, the lower cms energy together with the high luminosity makes a fixed target experiment like AFTER@LHC the ideal place to discover IC using $\gamma+c$ production. This can be seen in Fig.\ \ref{fig:gamma+c}, where the differential cross section is enhanced by a factor 5 at $p_T^\gamma=20$ GeV (right panel) with a not too small cross section (left panel). \begin{figure*}[thp!] \centering \includegraphics[width=0.48\textwidth]{./Figures/after1} \includegraphics[width=0.45\textwidth]{./Figures/after2} \caption{ (Color online) NLO predictions for the production of a prompt photon in association with a charm quark jet in $pp$ collisions at AFTER@LHC vs the transverse momentum of the photon. 
Shown are results for a BHPS and a sea-like intrinsic charm using the CTEQ6.6c PDFs. For comparison, the predictions without an IC using the CTEQ6.6M PDFs are shown as well together with the uncertainty band obtained by varying the central factorization scale $\mu_F=p_T^\gamma$ by a factor of 2 up and down (blue, dotted curves). The right panel depicts the ratio of the curves in the left panel with respect to the central prediction without intrinsic charm. } \label{fig:gamma+c} \end{figure*} \subsection{Vector boson production} Dulat {\it et al.} \cite{Dulat:2013hea} studied the sensitivity of $W^\pm$ and $Z^0$ production to the presence of IC. Vector boson production at the LHC is an interesting testing ground for IC because it occurs at relatively large $x$ for colliders and $Z^0 \rightarrow l^+ l^-$ is a rather clean final state. They performed an NNLO calculation of $W$ and $Z$ production including IC based on their global fits at $\sqrt{s} = 8$ and 14 TeV. They also studied the ratio $d\sigma_{W^+ + W^-}(y)/d\sigma_{Z^0}(y)$ relative to the result with no IC. Neither of these calculations showed an effect larger than the uncertainties due to the CT10 sets themselves. However, when the $Z^0$ $p_T$ distribution with IC was compared to that without, they saw a factor of two enhancement at $p_T \sim 500$ GeV for $\sqrt{s} = 8$ TeV in the range $|\eta| < 2.1$. The corresponding enhancement at 14 TeV was smaller at the same $p_T$ because the $x$ value reached is reduced at the higher energy \cite{Dulat:2013hea}. We show a simple test case here for $W$ and $Z$ production to NLO at $\sqrt{s} = 7$ TeV. We use only the BHPS IC parameterization for the five-particle Fock state, shown in Eq.~(\ref{eq:IC}). We assume a 1\% normalization and no $Q^2$ evolution to maximize the possible effect at forward rapidity. The $p_T$-integrated rapidity distribution is shown in Fig.~\ref{fig:RV_vecbos}, as is the ratio of the result with IC to that without as a function of rapidity. \begin{figure*}[thp!]
\includegraphics[width=0.45\textwidth]{./Figures/wmwpz_7TeV} \includegraphics[width=0.45\textwidth]{./Figures/wmwpz_rat} \caption{(Color online) The $W^+$ (black), $W^-$ (blue) and $Z^0$ (red) rapidity distributions (left). The solid curves are the results without IC while the dashed curves include 1\% BHPS IC. The ratios of the dashed curves to the solid curves, showing the enhancement of the rapidity distributions due to IC for $W^+$ (solid black), $W^-$ (blue dashed) and $Z^0$ (red dot-dashed), are shown in the right plot. } \label{fig:RV_vecbos} \end{figure*} The rapidity distributions without IC are given by the solid curves while the dashed curves are the calculations with the BHPS IC contribution to the charm parton density. With BHPS IC, one expects enhancement only at forward rapidity. The enhancement from IC appears for $|y| > 2.5$. Note that if the sea-like IC were used instead, the enhancement would be small but finite over all rapidity. The $W^+$ cross section is largest and most forward peaked, because of the $u \overline d$ contribution. The contribution from the $c \overline d$ part is a very small addition since the $u$ valence contribution is large and peaks at large $x$, making the $y$ distribution larger at $|y| \sim 2$ than at $y = 0$. Indeed, it gives the smallest IC contribution. The $W^-$ distribution should have the largest possible contribution from IC because both the $d \overline u$ and $d \overline c$ peak at low $x$ and because the $d$ valence distribution peaks at lower $x$ so that the $W^-$ rapidity distribution has a maximum at $y=0$. At $|y| \sim 4$, the IC enhancement is $\sim 40$\%. Finally, the $Z^0$ distribution, with a plateau over $|y| < 1.5$, also has a very small IC contribution because the charm enhancement only comes through $c \overline c$. Such IC enhancements are only visible outside the midrapidity coverage of the CMS and ATLAS detectors.
However, LHCb or ALICE cover this forward rapidity range with muons and could detect forward $Z^0$. They could also look at the lepton rapidity asymmetry, $(W^+ - W^-)/(W^+ + W^-)$, at forward rapidity. The statistical accuracy of the measurement would need to be high to distinguish an IC enhancement from the no IC result, especially since the 1\% BHPS IC is likely an upper limit on this enhancement. Note that the higher energy of LHC Run 2 will reduce the potential enhancement even though it would increase the rates. \section{Conclusions} \label{sec:conclusions} The existence of non-perturbative intrinsic charm and bottom components is a fundamental prediction of QCD. In this article, we have reviewed the current status of our understanding of this intrinsic heavy quark content of the nucleon which yet remains to be confirmed experimentally. In particular, after introducing theoretical models predicting the intrinsic heavy quark distributions we have turned to a summary of the available information on intrinsic charm coming from global analyses of parton distribution functions. There are no global analyses of intrinsic bottom available and we have described how IB can be modeled in order to explore its impact on collider observables keeping in mind that bottom quark initiated subprocesses play an important role in certain electroweak observables and in models for physics beyond the Standard Model. We then have turned to a discussion of collider processes where IC could be discovered. Generally, the effects of IC are larger at colliders with a lower center-of-mass energy and for hard processes with moderate factorization scales. Therefore, a high-luminosity fixed target experiment like AFTER@LHC operating at a center-of-mass energy $\sqrt{s}=115$ GeV would be ideally suited to discover or constrain IC. \section*{Acknowledgments} We are grateful to T.~Stavreva for providing Fig.~\ref{fig:gamma+c}. The work of S.~J.~B. 
was supported by the Department of Energy Contract No.~DE-AC02-76SF00515. The work of RV was performed under the auspices of the U.S.\ Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. \bibliographystyle{apsrev4-1}
\section{Introduction} \label{sec:intro} With the increasing popularity of social networks, many interesting and difficult problems arise, such as friend recommendation and information propagation. In this paper, we study the problem of social trust prediction, which aims to estimate the positive or negative relationships among users based on the existing trust information associated with them. This problem plays an important role in social networks, as the system can block invitations from someone a user does not trust, or recommend new friends who enjoy a high reputation. Naturally, the social trust problem can be formulated within the matrix completion framework~\cite{liben2007link} \cite{billsus1998learning}. That is, the $(i, j)$-th entry of the observed data matrix $Z \in \mathbb{R}^{n\times n}$ is a 1-bit code: $Z_{ij}= 1$ indicates that the $i$-th user trusts the $j$-th user. Here, $n$ denotes the number of users. However, only a small fraction of the entries is observed; the unobserved entries are recorded as zero. Our goal is to estimate the missing entries according to the 1-bit measurements in $Z$. Note that the problem is ill-posed if no assumption is imposed on the structure of the data. To solve the problem, a number of methods have been proposed. Generally, existing social trust prediction methods fall into three categories. The first category is based on similarity measures or structural context similarity~\cite{newman2001clustering} \cite{chowdhury2010introduction} \cite{katz1953new} \cite{jeh2002simrank}, motivated by the intuition that an individual tends to trust their neighbors, or those who trust similar people. The second is based on low-rank matrix completion~\cite{billsus1998learning} \cite{cai2010singular} \cite{huang2013social}, which assumes that the underlying data matrix is low-rank or can be approximated by a low-rank matrix.
The third models the problem as a binary classification task and utilizes techniques such as logistic regression \cite{leskovec2010predicting}. \textbf{Challenges.} However, there are two issues in social trust prediction which are not well characterized by the algorithms in previous works. First, the value of an observed entry is either 1 or $-1$, which is analogous to the binary classification problem; here, however, we are handling much more complex matrix data. Fortunately,~\cite{srebro2004mmmf} presented a maximum margin matrix factorization framework that unifies the binary problem for the vector and matrix cases. The key idea in their work is a low-norm matrix factorization, which will also be utilized in this paper. Second, the locations of the entries are sampled non-uniformly, which creates a gap between theory and practice for many matrix completion algorithms. To tackle this challenge, we suggest using the max-norm as a convex surrogate for the rank function, which is shown to be superior to the well-known nuclear norm when addressing non-uniformly sampled data~\cite{srebro2010non-uniform}. \textbf{Our contributions} are twofold: 1) To the best of our knowledge, we are the first to address the social trust prediction problem by utilizing a max-norm constrained formulation. 2) Although a max-norm constrained problem can be solved by SDP solvers to arbitrary accuracy, we utilize a projected gradient algorithm that scales to large datasets. We empirically show the improvement of our formulation on non-uniform 1-bit benchmarks compared to state-of-the-art solvers. \section{Related Work} \label{relat} Social interaction has been investigated intensively in recent decades. Social interactions indicate friendship, support, enmity, or disapproval, as shown in Figure \ref{trusg}. Online users rely on trustworthiness information to filter information, establish collaborations, or build social bonds.
Social networks rely on trust information to make recommendations, attract users from other circles, or influence public opinion. Thus, the exploration of social trust has a wide range of applications, and has emerged as an important topic in social network research. A number of methods have been proposed. \begin{figure}[!t] \centering \includegraphics[width=0.5\textwidth]{trust.eps} \vspace{-3em} \caption{Illustration of the data matrix of social trust. Each column of the data matrix is a rating sample. Each entry is a tag that a user assigns to another user. The symbol ``+'' denotes the ``trust'' relationship, ``-'' denotes ``distrust'' and ``?'' means unknown relationship (that we aim to predict). Most of the relationships are unknown, so the data is sparse. Typically, each individual has their own preferences and friendship network, making the data non-uniform. Note also that the data matrix may not be symmetric.} \label{trusg} \vspace{-1em} \end{figure} One kind of method is based on similarity measurement. Specifically, Jaccard's coefficient is commonly used to measure the probability that two items have a relationship. Inspired by this metric, Jeh and Widom~\cite{jeh2002simrank} proposed a domain-independent similarity measurement, SimRank. \cite{newman2001clustering} directly defined a score based on common neighbors. Some methods are based on relational data modeling, structural proximity measures and stochastic relational models \cite{getoor2005link} \cite{liben2007link} \cite{yu2009large}. The above-mentioned methods are mainly derived from solutions for link prediction. Link prediction is oriented toward network-level prediction, whereas social trust prediction focuses on the person level.
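As a concrete instance of the similarity-based approach mentioned above, the Jaccard coefficient of two users is the fraction of common neighbors among all their neighbors. A minimal sketch, where the toy trust graph is invented purely for illustration:

```python
def jaccard(neighbors, u, v):
    """Jaccard coefficient: |N(u) & N(v)| / |N(u) | N(v)|."""
    nu, nv = neighbors[u], neighbors[v]
    union = nu | nv
    return len(nu & nv) / len(union) if union else 0.0

# Hypothetical trust graph: user -> set of trusted users
neighbors = {
    'a': {'b', 'c', 'd'},
    'b': {'c', 'd', 'e'},
    'e': {'f'},
}
print(jaccard(neighbors, 'a', 'b'))  # 2 shared out of 4 total -> 0.5
print(jaccard(neighbors, 'a', 'e'))  # no overlap -> 0.0
```

A high score suggests the two users move in the same trust circle, which motivates predicting a positive relationship between them.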
Another class of methods is derived from collaborative filtering, such as clustering techniques \cite{sarwar2001item}, model-based methods \cite{hofmann1999latent}, and matrix factorization models \cite{srebro2003weighted} \cite{mnih2007probabilistic}. However, the trust data matrix has structural properties different from the user-item matrix, such as transitivity. Meanwhile, social trust data in reality are extremely sparse. For instance, Facebook has hundreds of millions of users, but most of them have fewer than 1,000 friends. Besides, people with similar personalities tend to behave similarly. To sum up, the data matrix of social trust has both sparse and low-rank structure. Thus, the social trust prediction problem is especially suitable for the matrix completion model. That is the focus of our paper. The problem of matrix completion is to recover a low-rank matrix from a subset of entries \cite{candes2009exact}, which is given by: \begin{equation} \label{eqRank} \begin{split} \min_X~ & \text{rank}(X) \\ s.t. ~~&P_{\Omega}(X)= P_{\Omega}(Z) \end{split} \end{equation} where $Z\in \mathbb{R}^{p\times n}$ is the data matrix, $X$ is the recovered matrix, and $\Omega$ is the index set of observed entries. The optimization problem~\eqref{eqRank} is not only NP-hard, but known algorithms for solving it exactly require time doubly exponential in the problem dimension~\cite{recht2010guaranteed}. To solve the above problem, one alternative is to use the nuclear norm as a relaxation of the rank function: \begin{equation} \label{traceN} \begin{split} \min_X~ & ||X||_{\ast} \\ s.t.~~&P_{\Omega}(X)= P_{\Omega}(Z) \end{split} \end{equation} where $||X||_{\ast}$ denotes the sum of singular values of matrix $X$. \cite{cai2010singular} developed a first-order procedure to solve the convex problem (\ref{traceN}), namely singular value thresholding (SVT). \cite{jain2010guaranteed} solved the rank minimization problem via the singular value projection (SVP) algorithm.
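To illustrate how a first-order method like SVT operates, the following is a minimal numpy sketch of the iteration: soft-threshold the singular values of a proxy matrix, then take a dual ascent step on the observed entries. The step size and threshold here are illustrative choices, not the tuned values of \cite{cai2010singular}:

```python
import numpy as np

def svt(Z, mask, tau=5.0, delta=1.2, iters=300):
    """Singular value thresholding for P_Omega(X) = P_Omega(Z).
    mask is a boolean array marking the observed entries."""
    Y = np.zeros_like(Z)
    X = Y
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        X = (U * np.maximum(s - tau, 0.0)) @ Vt  # shrink singular values
        Y = Y + delta * mask * (Z - X)           # dual ascent on Omega
    return X

rng = np.random.default_rng(0)
# rank-2 ground truth, ~60% of the entries observed
M = rng.standard_normal((30, 2)) @ rng.standard_normal((2, 30))
mask = rng.random((30, 30)) < 0.6
X = svt(M * mask, mask)
err = np.linalg.norm((X - M) * mask) / np.linalg.norm(M * mask)
print(err)  # relative error on the observed entries
```

The shrinkage step is what encourages a low-rank iterate, while the dual update steadily enforces agreement with the observed entries.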
\cite{keshavan2010matrix} solved the problem by first trimming rows and columns with too few entries, then computing the truncated SVD of the trimmed matrix. Under certain conditions, accurate recovery was shown from on the order of $nd \log n$ samples ($n$ is the number of samples, $d$ is the rank of the recovered matrix). With the rapid development of matrix completion, more efficient methods have been proposed \cite{candes2010power}\cite{gross2011recovering}\cite{wang2012stability}\cite{huang2013social}. However, all the methods mentioned above use the nuclear norm as a surrogate for the rank, whose exact recovery can be guaranteed only when the data are sampled uniformly, which is not practical in real-world applications. On the other hand, a recent empirical study shows promising results for non-uniform data if one uses the max-norm~\cite{srebro2004mmmf} as a surrogate~\cite{srebro2010non-uniform}. Notably, for some specific problems, such as collaborative filtering,~\cite{srebro2005rank} proved that the generalization error bound for the max-norm is better than that for the nuclear norm. More recently,~\cite{shen2014online} reported encouraging results on the subspace recovery task (which is closely related to matrix completion). Since social trust data are non-uniformly sampled, we believe that a max-norm regularized formulation can handle this challenge better than the nuclear norm. Our formulation is also inspired by a recent theoretical study of matrix completion with 1-bit measurements~\cite{cai2013max}, which established a minimax lower bound under a general sampling model and derived the optimal convergence rate in terms of the Frobenius norm loss. Furthermore, there are several practical algorithms for solving max-norm regularized or max-norm constrained problems; see~\cite{lee2010practical} and~\cite{shen2014online} for examples.
\subsection{Overview} After reviewing related work in Section~\ref{relat}, we introduce the notation and formulate the problem in Section~\ref{sec:notation}. We then give an algorithm for the max-norm constrained 1-bit matrix completion (MMC) problem in Section~\ref{sec:alg}. We also provide an equivalent SDP formulation of MMC, which can be solved accurately at the expense of efficiency. We report an empirical study on two benchmark datasets in Section~\ref{sec:exp}. Section~\ref{sec:conclusion} concludes the paper and discusses possible future work. \section{Notations and Problem Setup} \label{sec:notation} In this section, we introduce the notation used in this paper. Capital letters such as $M$ denote matrices and lowercase bold letters such as $\mathbf{v}$ denote vectors. The $i$-th row and $j$-th column of a matrix $M$ are denoted by $\mathbf{m}(i)$ and $\mathbf{m}_j$ respectively, and the $(i, j)$-th entry is denoted by $M_{ij}$. For a vector $\mathbf{v}$, we use $v_i$ to denote its $i$-th element. We denote the $\ell_2$ norm of a vector $\mathbf{v}$ by $\twonorm{\mathbf{v}}$. For a matrix $M \in \mathbb{R}^{p\times n}$, we denote the Frobenius norm by $\fronorm{M}$ and $\twoinfnorm{M}$ denotes the maximum $\ell_2$ row norm of $M$, {\em i.e.,} \begin{equation*} \twoinfnorm{M} := \max_{i=1}^{p} \twonorm{\mathbf{m}(i)}. \end{equation*} We further define the {\em max-norm} of $M$ \cite{linial2007complexity}, \begin{equation} \label{eq:max def} \maxnorm{M} = \min_{U, V, M=UV^\top} \max \{ \twoinfnorm{U}^2, \twoinfnorm{V}^2 \}, \end{equation} where the minimum is taken over all possible factorizations. \textbf{Intuition on max-norm.} At first sight, the max-norm is hard to interpret. We briefly explain why it is a tighter approximation to the rank function than the nuclear norm.
Again, we write the nuclear norm of $M$ in factorization form~\cite{recht2010guaranteed}: \begin{equation*} \nuclearnorm{M} := \min_{U, V, M=UV^\top} \frac{1}{2} \( \fronorm{U}^2 + \fronorm{V}^2 \). \end{equation*} Note that the squared Frobenius norm is the sum of the squared $\ell_2$ row norms. Thus, a nuclear norm regularizer constrains the average of the $\ell_2$ row norms, while the max-norm constrains their maximum. Given the observed data $Z\in \mathbb{R}^{p\times n}$, we are interested in approximating $Z$ with a low-rank matrix $X$, which can be formulated as \begin{equation*} \begin{split} \text{minimize}\ & \frac{1}{2} \fronorm{\mathcal{P}_{\Omega}(Z-X)}^2,\\ \textrm{s.t.}\ & \text{rank}(X) \leq d, \end{split} \end{equation*} where $\Omega$ is the index set of observed entries and $d$ is some expected rank. $\mathcal{P}_{\Omega}$ is the projection operator defined entrywise by $(\mathcal{P}_{\Omega}(M))_{ij} = M_{ij}$ if $(i ,j)\in \Omega$ and zero otherwise. However, it is usually intractable to optimize the above program, as the rank function is non-convex and discontinuous~\cite{recht2010guaranteed}. One common approach is to use the nuclear norm as a convex surrogate to the rank function. However, it is well known that the nuclear norm cannot handle non-uniform data well. Motivated by the recent progress on the max-norm~\cite{srebro2010non-uniform,cai2013max,shen2014online}, we use the max-norm as an alternative convex relaxation, which gives the following formulation: \begin{equation} \label{eq:main prob} \begin{split} \min_X\ & \frac{1}{2}\fronorm{\mathcal{P}_{\Omega}(Z-X)}^2,\\ \textrm{s.t.}\ & \maxnorm{X} \leq \lambda^2, \end{split} \end{equation} where $\lambda$ is a tunable parameter. \section{Algorithm} \label{sec:alg} The max-norm constraint is convex and, moreover, Problem~\eqref{eq:main prob} can be cast as an SDP.
Formally, we have the following lemma: \begin{lemma}[\cite{srebro2004mmmf}] \label{lem:max} For any matrix $X \in \mathbb{R}^{p\times n}$ and $\lambda \in \mathbb{R}$, $\maxnorm{X} \leq \lambda$ if and only if there exist $A \in \mathbb{R}^{p\times p}$ and $B \in \mathbb{R}^{n\times n}$, such that $\begin{bsmallmatrix} A & X\\ X^\top & B \end{bsmallmatrix}$ is positive semidefinite and each diagonal element of $A$ and $B$ is upper bounded by $\lambda$. \end{lemma} With Lemma~\ref{lem:max} at hand, one can formulate Problem~\eqref{eq:main prob} as an SDP: \begin{equation} \label{eq:sdp prob} \begin{split} \min_{X,A,B}\ & \frac{1}{2}\fronorm{\mathcal{P}_{\Omega}(Z-X)}^2,\\ \textrm{s.t.}\ & A_{ii} \leq \lambda^2,\ B_{jj} \leq \lambda^2,\ \forall\ i \in [p],\ j \in [n],\\ & \begin{bmatrix} A & X\\ X^\top & B \end{bmatrix} \succeq 0. \end{split} \end{equation} This program can be solved by any SDP solver to a sufficiently accurate solution. However, SDP solvers do not scale to large matrices. Thus, in this paper, we apply a projected gradient method, due to~\cite{lee2010practical}, to solve Problem~\eqref{eq:main prob}. A key technique is a reformulation based on the max-norm definition~\eqref{eq:max def}. Assume that the rank of the optimal solution $X^*$ produced by the SDP~\eqref{eq:sdp prob} is at most $d$. Then we can safely factorize $X = UV^\top$, with $U \in \mathbb{R}^{p\times d}$ and $V \in \mathbb{R}^{n\times d}$. Combining the factorization and the definition, we obtain the following equivalent program: \begin{equation} \label{eq:uv prob} \begin{split} \min_{U,V}\ & \frac{1}{2}\fronorm{\mathcal{P}_{\Omega}(Z-UV^\top)}^2,\\ \textrm{s.t.}\ & \twoinfnorm{U} \leq \lambda,\ \twoinfnorm{V} \leq \lambda. \end{split} \end{equation} Note that the gradient of the objective function with respect to $U$ and $V$ is easy to compute.
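To make the scheme concrete before stating the gradient and projection steps formally, the following is a minimal numpy sketch of such a projected gradient method for Problem~\eqref{eq:uv prob} on synthetic 1-bit data (a simple backtracking line search stands in for the Armijo rule, and all sizes and names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
p, n, d, lam = 40, 50, 3, 1.2
Z = np.sign(rng.standard_normal((p, d)) @ rng.standard_normal((d, n)))
mask = rng.random((p, n)) < 0.5                 # index set Omega of observed entries

def project(M, lam):
    """If the maximum l2 row norm of M exceeds lam, rescale the whole matrix."""
    r = np.sqrt((M ** 2).sum(axis=1)).max()
    return M * (lam / r) if r > lam else M

def objective(U, V):
    R = mask * (Z - U @ V.T)                    # residual restricted to Omega
    return 0.5 * (R ** 2).sum()

def step_once(U, V, step=1.0):
    """One projected gradient step with backtracking on the step size."""
    G = mask * (U @ V.T - Z)
    gU, gV = G @ V, G.T @ U                     # gradients w.r.t. U and V
    f0 = objective(U, V)
    while step > 1e-10:
        Un, Vn = project(U - step * gU, lam), project(V - step * gV, lam)
        if objective(Un, Vn) < f0:
            return Un, Vn
        step /= 2
    return U, V

U = project(rng.standard_normal((p, d)), lam)
V = project(rng.standard_normal((n, d)), lam)
f_init = objective(U, V)
for _ in range(100):
    U, V = step_once(U, V)
assert objective(U, V) < f_init                 # the objective decreases
```

Each iteration touches only the observed entries and two thin factors, which is why this scheme scales to much larger matrices than an SDP solver.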
That is, \begin{equation} \label{eq:gradient} \begin{split} \nabla_U^{} f(Z, U, V) &= \mathcal{P}_{\Omega}\( UV^\top - Z \)V,\\ \nabla_V^{} f(Z, U, V) &= \mathcal{P}_{\Omega}\( UV^\top - Z \)^\top U. \end{split} \end{equation} Here, for simplicity we define \begin{equation*} f(Z, U, V) = \frac{1}{2}\fronorm{\mathcal{P}_{\Omega}(Z-UV^\top)}^2. \end{equation*} The inequality constraints can be handled by a projection step. That is, given a new iterate $(U_t, V_t)$ at the $t$-th iteration, we check whether it violates the constraints. If not, we proceed to the next iteration. Otherwise, we scale the rows of $U$ and/or $V$ by $\frac{\lambda}{\twoinfnorm{U}}$ and/or $\frac{\lambda}{\twoinfnorm{V}}$ respectively. In this way, we have the projection operator: \begin{equation} \Pi(M) = \begin{cases*} \frac{\lambda}{\twoinfnorm{M}} M, \ &\text{if}\ $\twoinfnorm{M} > \lambda$,\\ M,\ &\text{otherwise}. \end{cases*} \end{equation} If we further pick the step size $\alpha_t$ via the Armijo rule~\cite{armijo1966minimization}, it can be shown that the sequence $(U_t, V_t)$ converges to a stationary point~\cite{bertsekas1999nonlinear}. The algorithm is summarized in Algorithm~\ref{alg:all}. \begin{algorithm}[tb] \caption{Max-norm Constrained 1-Bit Matrix Completion (MMC)} \label{alg:all} \begin{algorithmic}[1] \REQUIRE $Z \in \mathbb{R}^{p\times n}$ (observed samples), parameter $\lambda$, initial solution $(U_0, V_0)$, maximum number of iterations $\tau$. \ENSURE Optimal solution $(U^*, V^*)$. \FOR{$t=1$ to $\tau$} \STATE Compute the gradient by Eq.~\eqref{eq:gradient}: \begin{align*} U'_t &= \nabla_U^{} f(Z, U, V_{t-1}) \mid_{U=U_{t-1}},\\ V'_t &= \nabla_V^{} f(Z, U_{t-1}, V) \mid_{V=V_{t-1}}. \end{align*} \STATE Compute the step size $\alpha_t$ according to the Armijo rule. \STATE Compute the new iterate: \begin{align*} U_t &= \Pi(U_{t-1} - \alpha_t U'_t),\\ V_t &= \Pi(V_{t-1} - \alpha_t V'_t).
\end{align*} \ENDFOR \end{algorithmic} \end{algorithm} The benefits of applying the factorization to the max-norm are twofold: 1) the memory cost is significantly reduced from $O(pn)$ for the SDP to $O(d(p+n))$; 2) it enables the projected gradient algorithm, which is computationally efficient on large matrices (see Section~\ref{sec:exp}). However, note that Problem~\eqref{eq:uv prob} is non-convex. Fortunately,~\cite{burer2005local} proved that as long as we pick a sufficiently large value for $d$, any local minimum of Eq.~\eqref{eq:uv prob} is a global optimum. In Section~\ref{sec:exp}, we report the influence of $d$ on the performance. In Algorithm~\ref{alg:all}, the stopping criterion is a maximum number of iterations. One may instead stop upon reaching a local minimum, as discussed in~\cite{cai2013max}. \subsection{Heuristic on $\lambda$} The parameter $\lambda$ is the only tunable one in our algorithm. For our problem, note that the data consist of 1-bit measurements, {\em i.e.}, $ \left\vert Z_{ij} \right\vert = 1$ for $(i, j) \in \Omega$. Also note that $Z_{ij} \approx \mathbf{u}(i) \mathbf{v}(j)^\top$ and, under the constraints of~\eqref{eq:uv prob}, $ \left\vert \mathbf{u}(i) \mathbf{v}(j)^\top \right\vert \leq \twonorm{\mathbf{u}(i)} \twonorm{\mathbf{v}(j)} \leq \lambda^2$. Hence we need $\lambda \geq 1$. However, if we choose a large $\lambda$, the estimate $\left\vert X_{ij} \right\vert$ may deviate from $1$. We find that $\lambda = 1.2$ leads to satisfactory performance. \section{Experiments} \label{sec:exp} In this section, we empirically evaluate the matrix completion performance of our method. We first introduce the datasets. In the experimental settings, we present the baseline methods and evaluation metrics. We then report encouraging results on two benchmark datasets, and also examine the influence of the rank $d$. \subsection{Datasets} We conduct the experiments on two benchmark datasets: Epinions and Slashdot.
In these two datasets, the users are connected by explicit positive (trust) or negative (distrust) links ({\em i.e.,} the 1-bit measurements in $Z$). The first dataset, Epinions, contains 119,217 nodes (users) and 841,000 edges (links), 85.0\% of which are positive. The Slashdot dataset contains 82,144 users and 549,202 links, and 77.4\% of the edges are labeled as positive. Table \ref{dataSet} gives a summary of the subsets used in our experiments. It is clear that the distribution of links is not uniform, since each user has his/her individual preference and own friendship network. Following~\cite{huang2013social}, we select 2,000 users with the highest degrees from each dataset to form the observation matrix $Z$. \begin{table}[h] \centering \caption{Description of the two datasets} \label{dataSet} \centering \begin{tabular}{|l| r| r|} \hline Dataset & Epinions &Slashdot\\ \hline $\#$ of Users &2,000 &2,000 \\ \hline $\#$ of Trust &171,731 &68,932 \\ \hline $\#$ of Distrust &18,916 &20,032 \\ \hline \end{tabular} \end{table} \subsection{Experimental Settings} \textbf{Baselines.} We choose four state-of-the-art methods as baselines, including SVP \cite{jain2010guaranteed}, SVT \cite{cai2010singular}, OPTSpace \cite{keshavan2010matrix} and RRMC \cite{huang2013social}. Since SVT and RRMC need a specified rank, we tune the rank for these methods and choose the best performance as the final result. \textbf{Evaluation Metric.} Let $T$ be the index set of all observed entries. We use two evaluation metrics, mean absolute error (MAE) and root mean square error (RMSE), computed as follows: \begin{equation*} \begin{split} MAE = & \sum _{(i,j)\in T \backslash \Omega} \left\vert X_{ij}-Z_{ij}\right\vert /(|T|-|\Omega|),\\ RMSE = & \sqrt{\sum _{(i,j)\in T \backslash \Omega} (X_{ij}-Z_{ij})^2/(|T|-|\Omega|)}, \end{split} \end{equation*} where $|T|$ denotes the cardinality of $T$.
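In code, with the index sets represented as boolean masks (all names illustrative), the two metrics can be computed as follows; note that MAE averages absolute errors:

```python
import numpy as np

def mae_rmse(X, Z, T_mask, Omega_mask):
    """MAE and RMSE over the held-out observed entries T \\ Omega."""
    held = T_mask & ~Omega_mask
    err = X[held] - Z[held]
    return np.abs(err).mean(), np.sqrt((err ** 2).mean())

# toy example: two held-out entries with errors 0.5 and -0.8
X = np.array([[1.0, -0.5], [0.2, 1.0]])
Z = np.array([[1.0, -1.0], [1.0, 1.0]])
T_mask = np.ones_like(X, dtype=bool)
Omega_mask = np.array([[True, False], [False, True]])
mae, rmse = mae_rmse(X, Z, T_mask, Omega_mask)
assert abs(mae - 0.65) < 1e-12            # (0.5 + 0.8) / 2
```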
\begin{table*}[t] \centering \caption{MAE Results on Epinions Dataset } \label{maeEpin} \centering \begin{tabular}{|c| c| c| c| c| c|} \hline \multirow{2}*{Observed entries (\%)} &\multicolumn{5}{c|} {Methods}\\ \cline{2-6}&SVT &OPTSpace &SVP &RRMC &MMC\\ \hline 10 &0.359$\pm$0.004 &0.289$\pm$0.019 &0.450$\pm$0.008 &0.576$\pm$0.001 & \textbf{0.254}$\pm$0.003 \\ \hline 20 &0.394$\pm$0.022 &0.236$\pm$0.005 &0.294$\pm$0.002 &0.518$\pm$0.002 & \textbf{0.212}$\pm$0.003 \\ \hline 30 &0.360$\pm$0.057 &0.219$\pm$0.009 &0.248$\pm$0.001 &0.460$\pm$0.002 & \textbf{0.201}$\pm$0.002 \\ \hline 40 &0.410$\pm$0.099 &0.205$\pm$0.008 &0.224$\pm$0.001 &0.418$\pm$0.002 & \textbf{0.193}$\pm$0.001\\ \hline 50 &0.471$\pm$0.129 &0.197$\pm$0.007 &0.210$\pm$0.001 &0.386$\pm$0.001 & \textbf{0.190}$\pm$0.002 \\ \hline 60 &0.476$\pm$0.146 &\textbf{0.197}$\pm$0.003 &0.199$\pm$0.001 &0.362$\pm$0.002& 0.206$\pm$0.003 \\ \hline \end{tabular} \end{table*} \begin{table*}[t] \centering \caption{RMSE Results on Epinions Dataset } \label{rmseEpin} \centering \begin{tabular}{|c| c| c| c| c| c|} \hline \multirow{2}*{Observed entries (\%)} &\multicolumn{5}{c|} {Methods}\\ \cline{2-6}&SVT &OPTSpace &SVP &RRMC &MMC\\ \hline 10 &0.513$\pm$0.010 &0.530$\pm$0.021 &0.610$\pm$0.010 &0.650$\pm$0.001 & \textbf{0.466}$\pm$0.004 \\ \hline 20 &0.559$\pm$0.031 &0.456$\pm$0.005 &0.459$\pm$0.002 &0.606$\pm$0.002 & \textbf{0.406}$\pm$0.004 \\ \hline 30 &0.532$\pm$0.082 &0.422$\pm$0.011 &0.415$\pm$0.002 &0.563$\pm$0.002 & \textbf{0.383}$\pm$0.002 \\ \hline 40 &0.620$\pm$0.171 &0.406$\pm$0.015 &0.394$\pm$0.002 &0.533$\pm$0.002 & \textbf{0.371}$\pm$0.002\\ \hline 50 &0.719$\pm$0.225 &0.398$\pm$0.016 &0.381$\pm$0.001 &0.509$\pm$0.001 & \textbf{0.364}$\pm$0.001 \\ \hline 60 &0.728$\pm$0.288 &0.403$\pm$0.009 &0.371$\pm$0.002 & 0.491$\pm$0.002 &\textbf{0.365}$\pm$0.002\\ \hline \end{tabular} \end{table*} \begin{table*}[t] \centering \caption{MAE Results on Slashdot Dataset } \label{maeSlash} \centering \begin{tabular}{|c| c| c| 
c| c| c|} \hline \multirow{2}*{Observed entries (\%)} &\multicolumn{5}{c|} {Methods}\\ \cline{2-6}&SVT &OPTSpace &SVP &RRMC &MMC\\ \hline 10 &0.679$\pm$0.008 &0.554$\pm$0.017 &0.755$\pm$0.005 &0.715$\pm$0.001 &\textbf{0.546}$\pm$0.008 \\ \hline 20 &0.562$\pm$0.004 &0.458$\pm$0.008 &0.582$\pm$0.007 &0.704$\pm$0.001 & \textbf{0.437}$\pm$0.008 \\ \hline 30 &0.513$\pm$0.030 &0.427$\pm$0.009 &0.501$\pm$0.003 &0.686$\pm$0.003 & \textbf{0.395}$\pm$0.006 \\ \hline 40 &0.506$\pm$0.041 &0.395$\pm$0.009 &0.460$\pm$0.002 &0.647$\pm$0.006 & \textbf{0.380}$\pm$0.009\\ \hline 50 &0.495$\pm$0.060 &0.376$\pm$0.004 &0.432$\pm$0.002 &0.609$\pm$0.002 & \textbf{0.366}$\pm$0.003 \\ \hline 60 &0.520$\pm$0.065 &0.362$\pm$0.011 &0.413$\pm$0.002 &0.585$\pm$0.002 & \textbf{0.350}$\pm$0.002\\ \hline \end{tabular} \end{table*} \begin{table*}[t] \centering \caption{RMSE Results on Slashdot Dataset } \label{rmseSlash} \centering \begin{tabular}{|c| c| c| c| c| c|} \hline \multirow{2}*{Observed entries (\%)} &\multicolumn{5}{c|} {Methods}\\ \cline{2-6}&SVT &OPTSpace &SVP &RRMC &MMC\\ \hline 10 &0.788$\pm$0.008 &0.826$\pm$0.021 &0.873$\pm$0.004 &0.829$\pm$0.001 & \textbf{0.774}$\pm$0.008 \\ \hline 20 &0.718$\pm$0.020 &0.704$\pm$0.013 &0.746$\pm$0.007 &0.821$\pm$0.001 & \textbf{0.679}$\pm$0.009 \\ \hline 30 &0.680$\pm$0.043 &0.652$\pm$0.006 &0.680$\pm$0.003 &0.807$\pm$0.002 & \textbf{0.633}$\pm$0.008 \\ \hline 40 &0.670$\pm$0.056 &0.620$\pm$0.009 &0.647$\pm$0.003 &0.778$\pm$0.005 & \textbf{0.615}$\pm$0.011\\ \hline 50 &0.642$\pm$0.055 &0.596$\pm$0.009 &0.624$\pm$0.003 &0.749$\pm$0.002 & \textbf{0.581}$\pm$0.006 \\ \hline 60 &0.699$\pm$0.084 &0.577$\pm$0.011 &0.609$\pm$0.002 &0.730$\pm$0.002 & \textbf{0.566}$\pm$0.002\\ \hline \end{tabular} \end{table*} \begin{table*}[!t] \centering \caption{The trade-off between accuracy and efficiency.} \label{time} \centering \begin{tabular}{|l| p{0.6cm}| p{0.8cm}| p{0.6cm}|p{0.6cm}| p{0.8cm}| p{0.6cm}| p{0.6cm}| p{0.8cm}| p{0.6cm}| p{0.6cm}| p{0.8cm}| p{0.6cm}| 
p{0.6cm}| p{0.8cm}| p{0.6cm}|} \hline \multirow{2}*{Dataset} &\multicolumn{3}{c|} {SVT} &\multicolumn{3}{c|} {OPTSpace} &\multicolumn{3}{c|} {SVP} &\multicolumn{3}{c|} {RRMC} &\multicolumn{3}{c|} {MMC}\\ \cline{2-16}&MAE &RMSE &Time &MAE &RMSE &Time &MAE &RMSE &Time &MAE &RMSE &Time &MAE &RMSE &Time\\ \hline Epinions &0.466 &0.618 &26.76 &0.288 &0.531 &15.26 &0.450 &0.610 &\textbf{0.84} &0.576 &0.651 &43.11 &\textbf{0.262} &\textbf{0.481} &1.92\\ \hline Slashdot &0.618 &0.728 &50.73 &0.458 &0.705 &9.23 &0.582 &0.746 &\textbf{0.83} &0.715 &0.830 &44.12&\textbf{0.437} &\textbf{0.679} &2.41 \\ \hline \end{tabular} \end{table*} \textbf{Training and Testing.} We randomly split each dataset into training and testing sets. In particular, the fraction of observed entries $\Omega$ used for training ranges from 10\% to 60\%, in steps of 10\%. For each split, we run all the algorithms for 20 trials, with the training data in each trial randomly sampled. We then report the mean and standard deviation of MAE and RMSE over the 20 trials. \subsection{Experimental Results} We report detailed results in Tables~\ref{maeEpin} to \ref{rmseSlash}. From the results in Tables \ref{maeEpin} and \ref{rmseEpin}, we observe that MMC outperforms the other methods on the Epinions dataset in terms of both evaluation metrics most of the time. In particular, when only 10\% of the entries are observed (which indicates a hard task), MMC obtains an RMSE of 0.466, much better than OPTSpace (0.530), SVP (0.610) and RRMC (0.650). The exception is the case with 60\% observed entries, where OPTSpace obtains the smallest MAE (0.197), but our algorithm is competitive at 0.206. In a nutshell, the gap between MMC and the baselines becomes larger as the fraction of observed entries decreases. Similarly, our method achieves the lowest MAE and RMSE on the Slashdot dataset (see Tables \ref{maeSlash} and \ref{rmseSlash}).
For instance, with 30\% observed entries, MMC obtains an MAE below 0.4, much better than the comparison methods SVT (0.513), OPTSpace (0.427), SVP (0.501) and RRMC (0.686). In terms of RMSE, in the case of 20\% observed entries, the RMSE values of the other methods are all above 0.7, while our method reaches 0.679. In sum, our method is superior to the comparison methods on two real-life datasets in terms of MAE and RMSE most of the time. \begin{figure}[] \centering {\includegraphics[width=0.45\textwidth]{epinMae.eps}} {\includegraphics[width=0.45\textwidth]{epinRmse.eps}} \caption{MAE and RMSE for different ranks $d$ on the Epinions dataset.} \label{figrank} \vspace{-1em} \end{figure} Having studied the effectiveness of our method, we now examine its computational efficiency in Table~\ref{time}, which is important for practical applications. To test the time cost of the methods, we report the average running time on the Epinions dataset with 10\% observed entries and on Slashdot with 20\% observed entries. To illustrate the trade-off between accuracy and efficiency, we also report the MAE and RMSE. As we can see, SVP is the most efficient method, with a running time of 0.84 seconds on Epinions versus 1.92 seconds for ours. On Slashdot, SVP also achieves the best efficiency. However, our method enjoys a significant improvement in MAE and RMSE over all baselines, and our algorithm is an order of magnitude faster than the other three baselines. This implies that MMC achieves a good trade-off between accuracy and efficiency. \subsection{The Influence of $d$} The non-convex reformulation~\eqref{eq:uv prob} requires an explicit rank estimate $d$ for the true matrix. In this section, we investigate the influence of $d$ on the Epinions dataset as an example. The rank $d$ is chosen from $\{1, 5, 50, 100, 300, 500\}$ and the results are plotted in Figure~\ref{figrank}.
We observe that the rank has little influence on the performance. This is possibly because the actual data has a low-rank structure (close to rank one), and, from~\cite{burer2005local}, we know that if $d$ is larger than the actual rank, any local minimum of Eq.~\eqref{eq:uv prob} is also a global optimum. \section{Conclusion and Future Work} \label{sec:conclusion} In this paper, we formulated social trust prediction in the matrix completion framework. In particular, due to the special structure of the social trust problem, {\em i.e.,} the measurements are 1-bit and the observed entries are non-uniformly sampled, we presented a max-norm constrained 1-bit matrix completion (MMC) algorithm. Since SDP solvers do not scale to large matrices, we utilized a non-convex reformulation of the max-norm, which enables an efficient projected gradient descent algorithm. We empirically examined our algorithm on two benchmark datasets. Compared to other state-of-the-art matrix completion formulations, MMC consistently performed best, which agrees with recently developed theory on the max-norm. We also studied the trade-off between accuracy and efficiency and observed that MMC achieved superior accuracy while keeping comparable computational efficiency. The max-norm has been studied for several years in many applications, such as collaborative filtering, clustering, and subspace recovery. It has been shown, both empirically and theoretically, to be superior to the popular nuclear norm. This work investigates the power of the max-norm for the social trust problem and demonstrates encouraging results. It is interesting and promising to apply the max-norm as a convex surrogate in other practical problems such as face recognition and subspace clustering. \clearpage \newpage {\small \bibliographystyle{named}
\section{Introduction} \label{sec:intro} Throughout this work we let $W$ be a non-negative, integer-valued random variable with expectation $\lambda>0$. We focus our attention here on those $W$ which satisfy a certain negative dependence assumption, which we explicitly state in (\ref{eq:order1}) below as a stochastic ordering between $W+1$ and the size-biased version of $W$. Random variables satisfying this stochastic ordering occur naturally in many applications. For example, if we may write $W$ as a sum of negatively related Bernoulli random variables, the assumption (\ref{eq:order1}) is satisfied. Examples of such sums appear in various urn models and occupancy problems, for example. Several explicit examples of random variables satisfying our negative dependence assumption are discussed in Section \ref{sec:order}. We are motivated by the work of Daly et al. \cite{dlu12}, who explore links between Stein's method for probability approximation and stochastic orderings. In their work, as here, these stochastic orderings often reflect the dependence structure of the underlying random variables. In particular, \cite{dlu12} shows that the stochastic ordering assumption we make here implies a straightforward upper bound on the total variation distance between $W$ and a Poisson random variable. In this work (and in particular in Section \ref{sec:order} below), we explore further consequences of our stochastic ordering assumption. In particular, we will see that our negative dependence assumption leads naturally to bounds on the entropy of $W$, concentration inequalities for $W$ and some further Poisson approximation results which complement and enhance those of \cite{dlu12}. The bounds we derive on entropy generalise entropy maximisation results of Johnson \cite{j07} and Yu \cite{y09}. See also \cite{jkm13}. Such results are useful, for example, in understanding probabilistic limit theorems in an information theoretic context. 
Our proofs will make use of the $s$-convex stochastic orders defined by Lef\`evre and Utev \cite{lu96}, which generalise the usual stochastic and convex orderings. We will also need a lemma of Johnson \cite{j07} which links the operations of size-biasing and thinning. This is stated as Lemma \ref{lem1} later in this section and will be a key tool in what follows. Further consequences of this lemma will be explored in Section \ref{sec:mp}, where we consider how these thinning and size-biasing results may be applied to Poisson approximation both with and without making any stochastic ordering assumptions. In particular, we will explore Poisson approximation for a mixed Poisson random variable using these techniques. The results and applications we consider in Sections \ref{sec:order} and \ref{sec:mp} are closely related to the Poisson distribution. This is natural, since Lemma \ref{lem1} is itself closely related to the Poisson distribution. We will also explore what can be said in relation to the binomial distribution. This is done in Section \ref{sec:bin}. We seek the analogues of many of our other results in this case. For example, under a somewhat different assumption on the dependence structure of our random variable $W$ to that used in Section \ref{sec:order}, we find binomial approximation results and some further concentration inequalities and bounds on entropy. We use the remainder of this section to introduce the notation and ideas common to all the work that follows. We also state the lemma, due to Johnson \cite{j07}, which forms the key to many of the proofs that follow. For any $\alpha\in[0,1]$, we define the thinning operator $T_\alpha$ by letting $T_\alpha W=\sum_{i=1}^W\eta_i$, where $\eta_1,\eta_2,\ldots$ are iid Bernoulli random variables (independent of $W$) with mean $\alpha$. Throughout this note, we will let $Z_\mu\sim\mbox{Po}(\mu)$ have a Poisson distribution with mean $\mu$. 
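The thinning operator is straightforward to simulate; the following sketch checks $\mathbb{E}[T_\alpha W]=\alpha\mathbb{E}[W]$ by Monte Carlo (the Poisson choice of $W$, sample size and parameters are illustrative; any non-negative integer-valued $W$ works):

```python
import numpy as np

rng = np.random.default_rng(0)

def thin(w, alpha, rng):
    """T_alpha W: each of the W 'points' survives independently with probability alpha."""
    return rng.binomial(w, alpha)

alpha, lam = 0.3, 4.0
W = rng.poisson(lam, size=200_000)
TW = thin(W, alpha, rng)

# E[T_alpha W] = alpha * E[W]; for Poisson W, T_alpha W is again Poisson with mean alpha*lam
assert abs(TW.mean() - alpha * lam) < 0.03
assert abs(TW.var() - alpha * lam) < 0.05
```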
The main object we will study in the work that follows is the operator $U_\alpha$, given by \begin{equation}\label{eq:udefn} U_\alpha W = T_\alpha W +Z_{(1-\alpha)\lambda}\,, \end{equation} where $Z_{(1-\alpha)\lambda}$ is independent of all else. In what follows, for notational convenience we will write $W_\alpha$ for a random variable equal in distribution to $U_\alpha W$ for $\alpha\in[0,1]$. We note that $W_1$ is equal in distribution to $W$, and that $W_0\sim\mbox{Po}(\lambda)$. It is easy to see that for any $\alpha\in[0,1]$ we have $\mathbb{E}[W_\alpha]=\mathbb{E}[W]=\lambda$. We also note that for any $\alpha,\beta\in[0,1]$, $U_\beta(U_\alpha W)$ is equal in distribution to $U_{\alpha\beta} W$. Finally, it is useful to note that $U_\alpha$ acts trivially on Poisson distributions. That is, $U_\alpha Z_\lambda$ is equal in distribution to $Z_\lambda$ for any $\lambda\geq0$ and $\alpha\in[0,1]$. Further properties of the operators $U_\alpha$, and their link with the M/M/$\infty$ queue, are discussed in \cite{j07}. In what follows, we will also need to employ size biasing. For any non-negative, integer-valued random variable $W$ with mean $\lambda>0$, we let $W^\star$ denote a random variable with the $W$-size-biased distribution, with mass function given by \begin{equation}\label{eq:sbdef2} \mathbb{P}(W^\star=j)=\frac{j\mathbb{P}(W=j)}{\lambda}\,, \end{equation} for any $j\in\mathbb{Z}^+=\{0,1,\ldots\}$. Equivalently, we may define $W^\star$ by letting \begin{equation}\label{eq:sbdef} \mathbb{E}[Wg(W)]=\lambda\mathbb{E}[g(W^\star)]\,, \end{equation} for all functions $g:\mathbb{Z}^+\mapsto\mathbb{R}$ for which the expectation exists. In a context similar to that considered here, size biasing appears throughout Stein's method for Poisson approximation: we refer the interested reader to \cite{bhj92}, \cite{dlu12}, and references therein. Note that the work we present here is completely distinct from Stein's technique, however.
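For a distribution with finite support, the size-biased mass function (\ref{eq:sbdef2}) and the identity (\ref{eq:sbdef}) can be checked directly; a short sketch with an arbitrary pmf and test function (both illustrative):

```python
import numpy as np

pmf = np.array([0.2, 0.3, 0.1, 0.4])   # P(W = j) for j = 0, 1, 2, 3
j = np.arange(len(pmf))
lam = (j * pmf).sum()                  # lambda = E[W]
pmf_star = j * pmf / lam               # size-biased mass function, eq. (sbdef2)
assert abs(pmf_star.sum() - 1.0) < 1e-12

g = lambda x: x ** 2 + 1               # any test function g
lhs = (j * g(j) * pmf).sum()           # E[W g(W)]
rhs = lam * (g(j) * pmf_star).sum()    # lambda * E[g(W*)]
assert abs(lhs - rhs) < 1e-12          # identity (sbdef) holds
```

Note that $W^\star$ never takes the value $0$, since the mass at $j=0$ is multiplied by $j$.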
We define the forward difference operator $\Delta$ and its inverse by writing $\Delta f(j)=f(j+1)-f(j)$ and $\Delta^{-1}f(j)=-\sum_{i=j}^\infty f(i)$ for $f:\mathbb{Z}^+\mapsto\mathbb{R}$. Letting $\Delta^0f(j)=f(j)$, we may then define recursively $\Delta^nf(j)=\Delta(\Delta^{n-1}f(j))$ and $\Delta^{-n}f(j)=\Delta^{-1}(\Delta^{-n+1}f(j))$ for any $n\geq1$. We are now in a position to be able to state the following lemma, which appears as Corollary 4.2 of \cite{j07}. \begin{lmm}\label{lem1} With $W_\alpha$ as above and $j\in\mathbb{Z}^+$, $$ \frac{\partial}{\partial\alpha}\mathbb{P}(W_\alpha=j) = \frac{\lambda}{\alpha}\Delta\left[\mathbb{P}(W_\alpha+1=j)-\mathbb{P}(W_\alpha^\star=j)\right]\,. $$ \end{lmm} Lemma \ref{lem1} relates the operations of thinning and size biasing, and will be used in establishing stochastic ordering and Poisson approximation results in Sections \ref{sec:order} and \ref{sec:mp}. A result analogous to Lemma \ref{lem1} will also be needed for the results established in the binomial case and presented in Section \ref{sec:bin}. \section{Negative dependence and convex orderings} \label{sec:order} In this section we consider the relationship between negative dependence and stochastic ordering. We will make use of the $s$-convex orderings, defined by Lef\`evre and Utev \cite{lu96} for any integer $s\geq1$. Letting $X$ and $Y$ be non-negative integer-valued random variables, we write $X\leq_{s-cx}Y$ if $\mathbb{E}f(X)\leq\mathbb{E}f(Y)$ for all $f\in\mathcal{F}_s$, where $$ \mathcal{F}_s = \left\{f:\mathbb{Z}^+\mapsto\mathbb{R} \,|\, \Delta^if(j)\geq0\mbox{ for all }j\in\mathbb{Z}^+\mbox{ and }i=1,\ldots,s\right\}\,. $$ Note that the case $s=1$ corresponds to the usual stochastic ordering (often denoted by $X\leq_{st}Y$ in what follows) and the case $s=2$ is the increasing convex ordering, written $X\leq_{icx}Y$. 
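Membership of a function in $\mathcal{F}_s$ can be checked numerically on a truncated grid using the forward difference operator $\Delta$ defined above; a small sketch with $f(j)=j^2$ and $s=2$ (the function and grid are illustrative):

```python
import numpy as np

def delta(f):
    """Forward difference on a finite grid: (Delta f)(j) = f(j+1) - f(j)."""
    return f[1:] - f[:-1]

# f(j) = j^2 is increasing and convex on Z^+, so its first two forward
# differences are nonnegative, i.e. f lies in F_2 (restricted to this grid)
j = np.arange(0, 12)
f = j.astype(float) ** 2
d1, d2 = delta(f), delta(delta(f))
assert np.all(d1 >= 0) and np.all(d2 >= 0)
```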
For future use, we recall also the standard result that if $\mathbb{E}X=\mathbb{E}Y$ and $X\leq_{icx}Y$ then $X\leq_{cx}Y$, where this denotes the usual convex ordering of such random variables. The interested reader is referred to \cite{ss07} for an introduction to the subject of stochastic orderings. Daly et al. \cite{dlu12} give bounds on the Poisson approximation of $W$ in total variation distance under the assumption that \begin{equation}\label{eq:order1} W^\star\leq_{s-cx}W+1\,, \end{equation} for some $s\in\mathbb{N}=\{1,2,\ldots\}$, where $W^\star$ is defined by (\ref{eq:sbdef2}). The main result of this section (Theorem \ref{thm:order}) is that the ordering assumption (\ref{eq:order1}) implies an ordering between $W$ and a Poisson random variable of the same mean. This yields as an immediate corollary some bounds on Poisson approximation for $W$ and a concentration inequality for $W$. From Theorem \ref{thm:order}, we may also derive an upper bound on the entropy of $W$, and hence generalise results of \cite{j07} and \cite{y09}. Before proceeding further, we note that the stochastic ordering (\ref{eq:order1}) with $s=1$ is closely related to well-known, often applied concepts of negative dependence. For example, if $W=X_1+\cdots+X_n$ for some (dependent) Bernoulli random variables $X_1,\ldots,X_n$ such that \begin{equation}\label{eq:tnd} \mbox{Cov}(f(X_i),g(W-X_i))\leq0\,, \end{equation} for each $i$ and all increasing functions $f,g:\mathbb{Z}^+\mapsto\mathbb{R}$ then $W+1\geq_{st}W^\star$. See \cite{pp02} and \cite{dlu12}, where the property (\ref{eq:tnd}) is referred to as total negative dependence.
Recall that Bernoulli random variables $X_1,\ldots,X_n$ are said to be negatively related if \begin{equation}\label{eq:nr} \mathbb{E}\left[\phi(X_1,\ldots,X_{i-1},X_{i+1},\ldots,X_n)|X_i=1\right]\leq\mathbb{E}\left[\phi(X_1,\ldots,X_{i-1},X_{i+1},\ldots,X_n)\right]\,, \end{equation} for each $i$ and all increasing functions $\phi:\{0,1\}^{n-1}\mapsto\mathbb{R}$. Papadatos and Papathanasiou \cite{pp02} showed that if $X_1,\ldots,X_n$ are negatively related then (\ref{eq:tnd}) holds, and hence the stochastic ordering (\ref{eq:order1}) holds with $s=1$. There are thus many examples and applications which fit into this framework. We give some illustrative examples below. In each of these examples the random variable $W$ may be written as a sum of negatively related Bernoulli variables, and therefore satisfies $W+1\geq_{st}W^\star$. The negative relation property may be established by a straightforward and natural coupling argument in each case. \renewcommand{\labelenumi}{(\roman{enumi})} \begin{enumerate} \item If $W=X_1+\cdots+X_n$, where $X_1,\ldots,X_n$ are independent Bernoulli random variables then clearly (\ref{eq:nr}) holds. \item If $W$ has a hypergeometric distribution then Barbour et al. \cite[Section 6.1]{bhj92} show that $W$ may be written as a sum of negatively related Bernoulli random variables. \item More generally, if we distribute $m$ balls uniformly into $n$ urns and let $W$ count the number of urns which contain at least $c$ balls, Papadatos and Papathanasiou \cite[Section 4]{pp02} show that $W$ may be written as a sum of negatively related Bernoulli random variables. \item Suppose we have an urn which initially contains balls of $n$ different colours. We proceed by P\'olya sampling: on each of $m$ draws we choose a ball uniformly from the urn, note its colour and return it to the urn along with an additional ball of the same colour. Let $X_i$ be the indicator that no ball of colour $i$ was seen during these $m$ draws. 
Then $X_1,\ldots,X_n$ are negatively related: see \cite[Section 6.3]{bhj92}. Here $W=X_1+\cdots+X_n$ counts the total number of colours not seen during the $m$ draws. \item Consider the following matrix occupancy problem. Suppose we have an $r\times n$ matrix and in row $k$ we place $s_k$ 1s, their positions being chosen by uniform sampling without replacement. All remaining entries of the matrix are set to 0. Let $T_i$ count the number of 1s in column $i$ and $X_i=I(T_i\leq m)$, the indicator that column $i$ contains at most $m$ nonzero entries. Then $W=X_1+\cdots+X_n$ counts the number of such columns. Barbour et al. \cite[Section 6.4]{bhj92} show that $X_1,\ldots,X_n$ are negatively related. \item Distribute $n$ points uniformly on the circumference of a circle. Let $S_1,\ldots,S_n$ be the arc-length distances between adjacent points and $X_i=I(S_i<a)$, the indicator that $S_i$ falls below some threshold $a$. Then Barbour et al. \cite[Section 7.1]{bhj92} show that $X_1,\ldots,X_n$ are negatively related. Their sum $W$ counts the number of small spacings on our circle. \item Let $(\sigma_1,\ldots,\sigma_n)$ be a permutation of $\{1,\ldots,n\}$ drawn uniformly from the group of such permutations. Let $X_i=I(\sigma_i\leq a_i)$ for some given $a_1,\ldots,a_n$, and $W=X_1+\cdots+X_n$. Barbour et al. \cite[Section 4.1]{bhj92} show that $X_1,\ldots,X_n$ are negatively related. \end{enumerate} In each of these examples we may apply the results of this section. For further discussion of these examples, and many others, we refer the reader to \cite{bhj92,pp02,ab13}, and references therein. We now state the main result of this section, Theorem \ref{thm:order}. In Theorem \ref{thm:order2} we give a slightly stronger result for the case $s=1$. The proofs of these theorems are deferred until Section \ref{subsec:orderproof}, before which we consider some applications and corollaries. Note that throughout what follows we let $\binom{a}{b}=0$ if $b>a$.
\begin{thrm}\label{thm:order} Let $W$ be a non-negative, integer-valued random variable with $\mathbb{E}[W]=\lambda>0$ and $$ \mathbb{E}\binom{W}{k}\leq\mathbb{E}\binom{Z_\lambda}{k}\,,\hspace{20pt} k=3,\ldots,s\,, $$ for some $s\in\mathbb{N}$, where $Z_\lambda\sim\mbox{Po}(\lambda)$. Let $W^\star$ be defined by (\ref{eq:sbdef2}). If $W^\star\leq_{s-cx}W+1$ then $W\leq_{(s+1)-cx}Z_\lambda$. \end{thrm} \begin{thrm}\label{thm:order2} Let $W$ be a non-negative, integer-valued random variable with $\mathbb{E}[W]=\lambda>0$. Let $W^\star$ be defined by (\ref{eq:sbdef2}). If $W^\star\leq_{st}W+1$ then $W_\alpha \leq_{cx} W_\beta$ for $\alpha\geq\beta$. In particular, $W\leq_{cx}Z_\lambda$, where $Z_\lambda\sim\mbox{Po}(\lambda)$. \end{thrm} \subsection{Applications to bounds on entropy} \label{subsec:entropy} We use this section to give some applications of our Theorem \ref{thm:order2} to upper bounds for entropy. The bounds we establish generalise results of \cite{j07} and \cite{y09}. See also \cite{jkm13}. We define the entropy $H(W)$ of a non-negative, integer-valued random variable $W$ in the usual way, although for convenience we take natural logarithms. $$ H(W)=-\sum_{i=0}^\infty\mathbb{P}(W=i)\log(\mathbb{P}(W=i))\,. $$ For the random variables we consider here, results are stated which compare their entropy to that of a Poisson random variable with the same mean. Although no closed-form expression exists for $H(Z_\lambda)$, there are several bounds on this quantity available in the literature. For example, there is the well-known bound $$ H(Z_\lambda)\leq \frac{1}{2}\log\left(2\pi e\left(\lambda+\frac{1}{12} \right)\right)\,. $$ In the results that follow, we will also need the notion of log-concavity for a non-negative, integer-valued random variable. Recall that such a random variable $W$ is log-concave if its support is an interval in $\mathbb{Z}^+$, and its mass function forms a log-concave sequence. 
That is, $$ \mathbb{P}(W=i)^2\geq\mathbb{P}(W=i-1)\mathbb{P}(W=i+1)\,, $$ for all integers $i\geq1$. \begin{crllr}\label{cor:entropy1} Let $W$ be a non-negative, integer-valued random variable with $\mathbb{E}[W]=\lambda>0$. Let $Z_\lambda\sim\mbox{Po}(\lambda)$. If $W+1\geq_{st}W^\star$ then \begin{equation}\label{eq:ent1} H(W) \leq H(Z_\lambda)\,. \end{equation} \end{crllr} \begin{proof} Since $W\leq_{cx}Z_\lambda$ (by Theorem \ref{thm:order2}) and $Z_\lambda$ is a log-concave random variable, the result follows from Lemma 1 of \cite{y09}. \end{proof} Corollary \ref{cor:entropy1} shows that $Z_\lambda$ maximises the entropy within our class of $W$ with expectation $\lambda$ and such that $W+1\geq_{st}W^\star$. We again note that the conclusion of Corollary \ref{cor:entropy1} holds if $W$ may be written as a sum of totally negatively dependent (or negatively related) Bernoulli random variables, as in our examples above. Such maximum entropy results are of importance in understanding probabilistic limit theorems in an information theoretic context. For further discussion of this, we refer the reader to \cite{j07} and references therein. Corollary \ref{cor:entropy1} generalises Theorem 2.5 of \cite{j07}, which states that (\ref{eq:ent1}) holds under the assumption that $W$ is ultra log-concave (of degree $\infty$), denoted ULC($\infty$) in what follows. Recall that $W$ is ULC($\infty$) if $$ (j+1)!^2\mathbb{P}(W=j+1)^2 \geq j!(j+2)!\mathbb{P}(W=j)\mathbb{P}(W=j+2),\hspace{15pt}j\geq0\,, $$ or, equivalently, if $$ \frac{(j+1)\mathbb{P}(W=j+1)}{\mathbb{P}(W=j)}\,, $$ is decreasing in $j$. We note that this is equivalent to $W+1\geq_{lr}W^\star$, where `$\geq_{lr}$' denotes the likelihood ratio ordering. Since this is stronger than stochastic ordering \cite[Theorem 1.C.1]{ss07}, our Corollary \ref{cor:entropy1} strengthens Theorem 2.5 of \cite{j07}. Similarly, Corollary \ref{cor:entropy2} below generalises Theorem 3 of \cite{y09}. See also \cite{jkm13}.
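As a numerical illustration of Corollary \ref{cor:entropy1}, take $W\sim\mbox{Bin}(n,p)$, a sum of independent indicators (so that $W+1\geq_{st}W^\star$). The sketch below (helper names and parameter values ours) computes both entropies by truncated summation in nats, matching the natural logarithms used above:

```python
import math

def entropy(pmf):
    # entropy in nats, matching the natural logarithms used in the text
    return -sum(p * math.log(p) for p in pmf if p > 0)

def binom_pmf(n, p):
    return [math.comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)]

def poisson_pmf(lam, N=200):
    out, q = [], math.exp(-lam)
    for j in range(N):
        out.append(q)
        q *= lam / (j + 1)
    return out

# H(W) <= H(Z_lambda) with lambda = np, as Corollary cor:entropy1 asserts
for n, p in [(10, 0.3), (40, 0.5), (25, 0.9)]:
    assert entropy(binom_pmf(n, p)) <= entropy(poisson_pmf(n * p))
```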
\begin{crllr}\label{cor:entropy2} Let $W$ be a non-negative, integer-valued random variable such that $\mathbb{E}[W]=\lambda>0$ and $W+1\geq_{st}W^\star$. Let $Z_\lambda\sim\mbox{Po}(\lambda)$ and $X_1,X_2,\ldots,$ be iid non-negative, integer-valued random variables. Let $$ \widehat{W}=\sum_{i=1}^WX_i\,,\hspace{20pt}\mbox{ and }\hspace{20pt}\widehat{Z_\lambda}=\sum_{i=1}^{Z_\lambda}X_i\,. $$ If $\widehat{Z_\lambda}$ is log-concave, then $H(\widehat{W})\leq H(\widehat{Z_\lambda})$. \end{crllr} \begin{proof} Combine our Theorem \ref{thm:order2} with Theorem 1 of \cite{y09}. \end{proof} For discussion of the log-concavity assumption used in this result (including some sufficient conditions for $\widehat{Z_\lambda}$ to be log-concave), we refer the reader to Section 3 of \cite{y09} and Section 5 of \cite{jkm13}. In particular, \cite[Theorem 4]{y09} shows that if $X_1$ is log-concave and $$ \lambda\mathbb{P}(X_1=1)^2\geq2\mathbb{P}(X_1=2)\,, $$ then $\widehat{Z_\lambda}$ is log-concave. Johnson \cite{j07} goes further than establishing that the Poisson distribution maximises entropy within the class of ULC($\infty$) random variables of mean $\lambda$. In his Theorem 5.1 he shows that for such $W$ the entropy of $W_\alpha$ is a decreasing and concave function of $\alpha$. Using our stochastic ordering arguments we may also generalise this result, and show it applies to $W$ satisfying $W+1\geq_{st}W^\star$. This is done in Theorem \ref{thm:entropy}. \begin{thrm}\label{thm:entropy} Let $W$ be a non-negative, integer-valued random variable satisfying $W+1\geq_{st}W^\star$, where $W^\star$ is defined by (\ref{eq:sbdef2}). Then \begin{equation}\label{eq:entthm} \frac{\partial}{\partial\alpha}H(W_\alpha)\leq0\hspace{20pt}\mbox{ and }\hspace{20pt}\frac{\partial^2}{\partial\alpha^2}H(W_\alpha)\leq0\,, \end{equation} with equality if and only if $W$ has a Poisson distribution. 
\end{thrm} \begin{proof} Our proof uses many of the same components as the proof of Theorem 5.1 of \cite{j07}, but replacing the arguments based on ultra log-concavity with stochastic ordering results. Following \cite{j07} we decompose the entropy as $$ H(W_\alpha) = \Lambda(W_\alpha)-D(W_\alpha\lVert Z_\lambda)\,, $$ where $Z_\lambda\sim\mbox{Po}(\lambda)$, \begin{eqnarray*} \Lambda(W_\alpha)&=&-\sum_{j=0}^\infty\mathbb{P}(W_\alpha=j)\log\left(\mathbb{P}(Z_\lambda=j)\right)\,,\\ D(W_\alpha\lVert Z_\lambda)&=&\sum_{j=0}^\infty\mathbb{P}(W_\alpha=j)\log\left(\frac{\mathbb{P}(W_\alpha=j)}{\mathbb{P}(Z_\lambda=j)}\right)\,. \end{eqnarray*} Note that $D$ here is the relative entropy. Lemmas 5.2 and 5.5 of \cite{j07} give us immediately that $$ \frac{\partial}{\partial\alpha}D(W_\alpha\lVert Z_\lambda)\geq0\hspace{20pt}\mbox{ and }\hspace{20pt}\frac{\partial^2}{\partial\alpha^2}D(W_\alpha\lVert Z_\lambda)\geq0\,, $$ since for $W$ such that $W+1\geq_{st}W^\star$ we have $\mbox{Var}(W)\leq\mathbb{E}[W]$. To prove (\ref{eq:entthm}), it remains only to show that $\Lambda(W_\alpha)$ is a decreasing and concave function of $\alpha$. By equation (15) of \cite{j07} we have that \begin{equation}\label{eq:l1} \frac{\partial}{\partial\alpha}\Lambda(W_\alpha)=\frac{\lambda}{\alpha}\left\{\mathbb{E}\log(W_\alpha^\star)-\mathbb{E}\log(W_\alpha+1)\right\}\,. \end{equation} We will see in Section \ref{subsec:orderproof} that any $W$ such that $W+1\geq_{st}W^\star$ satisfies the ordering $W_\alpha+1\geq_{st}W_\alpha^\star$ for each $\alpha\in[0,1]$. Since $\log(\cdot)$ is an increasing function, it immediately follows from (\ref{eq:l1}) that $\Lambda(W_\alpha)$ is a decreasing function of $\alpha$.
Similarly, from Lemma 5.3 of \cite{j07} we have that $$ \frac{\partial^2}{\partial\alpha^2}\Lambda(W_\alpha)=\frac{\lambda^2}{\alpha^2}\left\{\mathbb{E}f(W_\alpha^\star)-\mathbb{E}f(W_\alpha+1)\right\}\,, $$ where $$ f(j)=\frac{j-1}{\lambda}\log\left(\frac{j}{j+1}\right)-\log\left(\frac{j+1}{j}\right)\,. $$ Since $f(\cdot)$ is an increasing function, we see that $\Lambda(W_\alpha)$ is a concave function of $\alpha$, completing the proof of (\ref{eq:entthm}). The fact that equality holds in (\ref{eq:entthm}) if and only if $W$ has a Poisson distribution is shown in the same way as the corresponding statement in Theorem 5.1 of \cite{j07}. \end{proof} We have already discussed several examples in which the results of this section may be directly applied. We conclude with an example where we may use our results even without the negative dependence assumption (\ref{eq:order1}): the lightbulb process. This model was introduced by Rao et al. \cite{rrz07}, and is motivated by the pharmaceutical problem of a dermal patch designed to target $n$ receptors. Each receptor is in one of two states. On each day $r=1,\ldots,n$, the patch causes $r$ uniformly selected receptors to switch state. This process has also been studied, for example, by Goldstein and Zhang \cite{gz11}, and Goldstein and Xia \cite{gx12}. See also references therein. It is more often described in terms of lightbulbs being switched on and off, with $r$ of the $n$ lightbulbs chosen uniformly to have their state switched at day $r$, for $r=1,\ldots,n$. For concreteness, we assume that all $n$ bulbs are switched off at the start of the process. The random variable of interest is $W=W(n)$, the number of bulbs switched on after day $n$. We consider here the problem of bounding the entropy of $W$. Goldstein and Zhang \cite{gz11} show that (at least for $n$ even) $W^\star\leq_{st}W+2$, but this is not enough to apply our results directly. 
Instead, we use the fact, shown by Goldstein and Xia \cite{gx12}, that $W$ is asymptotically distributed as a clubbed binomial distribution. If we let $X\sim\mbox{Bin}(n-1,1/2)$ have a binomial distribution, then we define the clubbed binomial random variable $Y=Y_m$ by writing \begin{equation*} \mathbb{P}(Y_m=j) = \left\{ \begin{array}{ll} \mathbb{P}(X=j-1)+\mathbb{P}(X=j) & \mbox{$m$ and $j$ have the same parity,} \\ 0 & \mbox{otherwise.} \\ \end{array} \right. \end{equation*} That is, the clubbed binomial is formed by combining the mass of the binomial distribution at adjacent integers, so that it is supported on the lattice of non-negative integers with the same parity as $m$. We note that the support of these clubbed binomial distributions is appropriate to the problem at hand since, as shown by Rao et al. \cite{rrz07}, if $n\equiv 0$ (mod 4) or $n\equiv 3$ (mod 4) then $W$ is supported on the set of even integers at most $n$. Otherwise, the support of $W$ is the set of odd integers at most $n$. In what follows, we always choose $m$ in the definition of $Y$ appropriately for the $n$ under consideration. We begin with the straightforward observation that $H(Y)\leq H(X)$. This follows immediately from the definition of $Y$, since $Y$ is a deterministic function of $X$. Since our binomial distribution $X$ satisfies $X^\star\leq_{st}X+1$, it follows from Corollary \ref{cor:entropy1} that \begin{equation}\label{eq:light1} H(Y)\leq H(Z_{(n-1)/2})\,, \end{equation} where $Z_\lambda\sim\mbox{Po}(\lambda)$ as usual. Hence, using (\ref{eq:light1}), we have that $$ H(W) \leq H(Z_{(n-1)/2}) + \left| H(W)-H(Y) \right|\,. $$ This last term may be bounded using Theorem 17.3.3 of \cite{ct}, which states that if $W$ and $Y$ are random variables supported on a subset of $\mathbb{Z}^+$ of size $k$ and $$ \sum_{j\in\mathbb{Z}^+}\left| \mathbb{P}(W=j)-\mathbb{P}(Y=j) \right| \leq \beta \leq \frac{1}{2}\,, $$ then $$ \left| H(W)-H(Y) \right| \leq -\beta\log\left(\frac{\beta}{k}\right)\,.
$$ We may apply this result here with the choice \begin{equation}\label{eq:light2} \beta = 5.47\sqrt{n}\exp\left(\frac{-(n+1)}{3}\right)\,, \end{equation} by Theorem 3.1 of \cite{gx12}, noting that $\beta\leq1/2$ for $n\geq10$. Since both $W$ and $Y$ are supported on either the even or odd integers up to $n$, we may take $k=(n/2)+1$. Hence we obtain the following. \begin{crllr} Let $W=W(n)$ be the number of bulbs switched on at the terminal time of the lightbulb process. Then, with $n\geq10$ and $\beta$ given by (\ref{eq:light2}), $$ H(W) \leq H(Z_{(n-1)/2}) - \beta\log\left(\frac{2\beta}{n+2}\right)\,. $$ \end{crllr} \subsection{Applications to Poisson approximation}\label{subsec:approx} For further applications of our Theorems \ref{thm:order} and \ref{thm:order2} we turn to some Poisson approximation results. For use here and in Section \ref{sec:mp}, we define the probability metrics we will use. In this framework, we are inspired by the recent work of R\"ollin and Ross \cite{rr12}. For $1\leq p<\infty$ and $f:\mathbb{Z}^+\mapsto\mathbb{R}$ we let $$ \lVert f \rVert_p = \left(\sum_{j=0}^\infty |f(j)|^p\right)^{1/p}\,, $$ and we let $\lVert f \rVert_\infty=\sup_j|f(j)|$. For distribution functions $F$ and $G$, we then define the distances $$ d_{n,p}(F,G) = \lVert\Delta^nF-\Delta^nG\rVert_p\,, $$ for $1\leq p\leq\infty$ and $n\in\mathbb{Z}$. Many commonly-used probability metrics fit into this framework. For example, \begin{itemize} \item the total variation distance: $d_{TV}(F,G)=\frac{1}{2}d_{1,1}(F,G)$. \item the Kolmogorov distance: $d_K(F,G)=d_{0,\infty}(F,G)$. \item the Wasserstein distance: $d_W(F,G)=d_{0,1}(F,G)$. \item the stop-loss distance: $d_{SL}(F,G)=d_{-1,\infty}(F,G)$. \end{itemize} Note also that $d_{1,\infty}$ is a metric useful in proving local limit theorems. 
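These distances are simple to compute for integer-valued distributions. The sketch below (function names, truncation and the convention $F(j)=\mathbb{P}(X\leq j)$ are ours) evaluates the Kolmogorov, Wasserstein and total variation distances between two truncated Poisson distributions and checks the elementary relations $d_K\leq d_{TV}$ and $d_K\leq d_W$:

```python
import math

# Numerical sketch of the distances d_{n,p} (names and conventions ours);
# distribution functions are truncated at a common length N
def poisson_pmf(lam, N):
    out, q = [], math.exp(-lam)
    for j in range(N):
        out.append(q)
        q *= lam / (j + 1)
    return out

def cdf(pmf):
    out, c = [], 0.0
    for p in pmf:
        c += p
        out.append(c)
    return out

def d_np(pmf1, pmf2, n, p):
    # d_{n,p} for n >= 0, taking F(j) = P(X <= j) and forward differences
    f = [a - b for a, b in zip(cdf(pmf1), cdf(pmf2))]
    for _ in range(n):
        f = [f[j + 1] - f[j] for j in range(len(f) - 1)]
    if p == math.inf:
        return max(abs(x) for x in f)
    return sum(abs(x) ** p for x in f) ** (1.0 / p)

N = 80
P, Q = poisson_pmf(2.0, N), poisson_pmf(2.5, N)
d_tv = 0.5 * sum(abs(a - b) for a, b in zip(P, Q))  # total variation
d_K = d_np(P, Q, 0, math.inf)                       # Kolmogorov
d_W = d_np(P, Q, 0, 1)                              # Wasserstein
assert d_K <= d_tv + 1e-12 and d_K <= d_W + 1e-12
```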
To provide an illustration of the type of Poisson approximation result which may be obtained in our framework, in this section we will consider approximation in the metrics $d_{-k,\infty}$ for $k\geq-1$. The results of Section \ref{sec:mp} below will use some of the other probability metrics we have defined. In the work of this section, we are motivated by the techniques and results of \cite{dlu02}. \begin{crllr}\label{cor:order} Let $W$ be as in Theorem \ref{thm:order}. If $W$ has distribution function $F$ and $Z_\lambda$ has distribution function $G_\lambda$ then $$ d_{-k,\infty}(F,G_\lambda) \leq 2^{(s-k-1)_+}\mathbb{E}\left[\binom{Z_\lambda+s+1}{s+1}-\binom{W+s+1}{s+1}\right]\,, $$ for $k=-1,\ldots,s+1$. \end{crllr} \begin{proof} The result follows from Theorem \ref{thm:order} and an argument analogous to that for Corollary 3.14 of \cite{dlu02}. \end{proof} If we take $s=1$ in Corollary \ref{cor:order} we obtain that, for any $W$ with $W+1\geq_{st}W^\star$, \begin{equation}\label{eq:hyp1} d_{-k,\infty}(F,G_\lambda) \leq 2^{(-k)_+-1}\left(\lambda-\mbox{Var}(W)\right)\,, \end{equation} for $k\in\{-1,0,1,2\}$ (and hence including bounds on the stop-loss, Kolmogorov and local limit distances). This applies in particular if $W$ may be written as a sum of totally negatively dependent Bernoulli random variables, as in the examples discussed previously. We conclude this section with a short example to illustrate this result. \begin{example}\label{eg:h1} \emph{Suppose we distribute $m$ balls uniformly among $N>1$ urns, where each urn has the capacity for up to one ball. Let $W$ count the number of the first $n$ urns that are occupied. Then $W$ has a hypergeometric distribution with mean $\lambda=mn/N$ and variance $$ \frac{mn}{N}\left(\frac{N-n}{N-1}\right)\left(1-\frac{m}{N}\right)\,. $$ As noted earlier, $W$ may be written as a sum of negatively related Bernoulli random variables and so satisfies $W+1\geq_{st}W^\star$. 
The bound (\ref{eq:hyp1}) then gives $$ d_{-k,\infty}(F,G_\lambda)\leq2^{(-k)_+-1}\frac{mn}{N}\left(\frac{(m+n)N-mn-N}{N(N-1)}\right)\,, $$ for $k\in\{-1,0,1,2\}$, where $F$ is the distribution function of $W$ and $G_\lambda$ the distribution function of a Poisson random variable with mean $\lambda$.} \emph{We note that upper bounds of a better order may be available. For example, let $k=0$ (so that we consider the Kolmogorov distance) and suppose that $m$ and $n$ are both of order $O(N)$. Then our upper bound is also of order $O(N)$, but an upper bound of better order $O(1)$ is available from Theorem 6.A of Barbour et al. \cite{bhj92}. However, our results have the advantage of dealing simultaneously with a range of probability metrics.} \end{example} \subsection{Applications to concentration inequalities} In this section we note that the convex ordering of Theorem \ref{thm:order2} implies a concentration inequality for $W$. \begin{crllr}\label{cor:conc} Let $W$ be a non-negative, integer-valued random variable such that $\mathbb{E}W=\lambda$ and $W+1\geq_{st}W^\star$. Let $t>0$. Then \begin{eqnarray*} \mathbb{P}(W\geq\lambda+t) &\leq& e^t\left(1+\frac{t}{\lambda}\right)^{-(t+\lambda)}\,,\\ \mathbb{P}(W\leq\lambda-t) &\leq& e^{-t}\left(1-\frac{t}{\lambda}\right)^{t-\lambda}\,, \end{eqnarray*} where the latter bound applies if $t<\lambda$. \end{crllr} \begin{proof} To prove the first inequality, let $\theta>0$ and note that (by a standard argument using Markov's inequality) $$ \mathbb{P}(W-\lambda\geq t)\leq \exp\left\{-\theta(t+\lambda)\right\}\mathbb{E}e^{\theta W}\,. $$ Now, for $\theta>0$, the function $e^{\theta x}$ is convex in $x$ and hence we apply Theorem \ref{thm:order2} to note that $$ \mathbb{E}e^{\theta W} \leq \exp\left\{\lambda\left(e^\theta-1\right)\right\}\,. $$ We then minimise the resulting bound over $\theta$, choosing $\theta=\log(1+t/\lambda)$. The proof of the second inequality is similar.
\end{proof} These inequalities have also been found in recent work by Arratia and Baxendale \cite[Theorem 4.2]{ab13}, who show they perform well compared to other such concentration inequalities which are available. \subsection{Proofs of Theorems \ref{thm:order} and \ref{thm:order2}} \label{subsec:orderproof} We now give the proofs of Theorems \ref{thm:order} and \ref{thm:order2}. We begin with some properties of the $s$-convex orderings. Our Lemmas \ref{lem:order1}--\ref{lem:order3} will make use of results established by Denuit and Lef\`evre \cite{dl97} and Denuit et al. \cite{dlu99}. In particular, we will need closure of the $s$-convex orderings under operations such as convolution and taking mixtures. \begin{lmm}\label{lem:order1} Let $X$ and $Y$ be non-negative, integer-valued random variables. If $X\leq_{s-cx}Y$ for some $s\in\mathbb{N}$ then $T_\alpha X\leq_{s-cx}T_\alpha Y$ for all $\alpha\in[0,1]$. \end{lmm} \begin{proof} To see this, use Property 4.6 of \cite{dlu99} and a proof analogous to that of Theorem 8.A.13 of \cite{ss07}. \end{proof} \begin{lmm}\label{lem:order2} Let $W$ be a non-negative, integer-valued random variable with positive mean. If we have $W^\star\leq_{s-cx}W+1$ for some $s\in\mathbb{N}$ then $(T_\alpha W)^\star\leq_{s-cx}T_\alpha W+1$ for all $\alpha\in[0,1]$. \end{lmm} \begin{proof} Using Lemma \ref{lem:order1} and the closure of the $s$-convex orders under convolution \cite[Proposition 3.7]{dl97}, we have that $$ T_\alpha W+1 \geq_{s-cx} T_\alpha(W^\star-1)+1 = T_\alpha(VW)+1\,, $$ where the operator $V$ is defined by $VW=W^\star-1$. Since the operators $T_\alpha$ and $V$ commute (as can be easily checked) we obtain $$ T_\alpha W+1 \geq_{s-cx} V(T_\alpha W)+1 = (T_\alpha W)^\star\,, $$ as required. \end{proof} \begin{lmm}\label{lem:order3} Let $X_1$ and $X_2$ be independent non-negative, integer-valued random variables with positive mean.
If $X_1^\star\leq_{s-cx}X_1+1$ and $X_2^\star\leq_{s-cx}X_2+1$ for some $s\in\mathbb{N}$ then $(X_1+X_2)^\star\leq_{s-cx}X_1+X_2+1$. \end{lmm} \begin{proof} We firstly note that $(X_1+X_2)^\star=X_1+X_2-X_I+X_I^\star$, where the random index $I\in\{1,2\}$ is chosen independently of all else and such that $$ \mathbb{P}(I=1)=1-\mathbb{P}(I=2)=\frac{\mathbb{E}X_1}{\mathbb{E}X_1+\mathbb{E}X_2}\,. $$ See \cite[Corollary 2.1]{cgs12}, for example. Conditioning on the event that $I=1$, we have $$ (X_1+X_2)^\star = X_1^\star+X_2 \leq_{s-cx}X_1+X_2+1\,, $$ by assumption and using Proposition 3.7 of \cite{dl97}. An analogous argument holds if we condition instead on the event that $I=2$. To complete the proof we remove the conditioning using Proposition 3.7 of \cite{dl97}. \end{proof} We are now in a position to give the proof of Theorem \ref{thm:order}. Noting that Poisson random variables trivially satisfy the ordering (\ref{eq:order1}) for all $s\in\mathbb{N}$, Lemmas \ref{lem:order2} and \ref{lem:order3} may be combined to give us that for $W$ satisfying the assumptions of our theorem, \begin{equation}\label{eq:order2} W_\alpha^\star\leq_{s-cx}W_\alpha+1\,, \end{equation} for all $\alpha\in[0,1]$. Now, following Lef\`evre and Utev \cite{lu96}, we let $h_0(X,j)=\mathbb{P}(X=j)$ for any non-negative, integer-valued random variable $X$ and $j\in\mathbb{Z}^+$. We define $h_k(X,j)$ for $k\geq1$ by letting \begin{equation}\label{eq:hdef} h_k(X,j)=-\Delta^{-1}h_{k-1}(X,j)=\sum_{i=j}^\infty h_{k-1}(X,i)=\mathbb{E}\binom{X-j+k-1}{k-1}\,. \end{equation} By Proposition 2.5 of \cite{lu96}, to prove that $W\leq_{(s+1)-cx}Z_\lambda$, we need to show that \begin{equation}\label{eq:lu1} \mathbb{E}\binom{W}{k}\leq\mathbb{E}\binom{Z_\lambda}{k}\,,\hspace{20pt}k=1,\ldots,s\,, \end{equation} and that \begin{equation}\label{eq:lu2} h_{s+1}(W,j)\leq h_{s+1}(Z_\lambda,j)\,,\hspace{20pt}j\geq s+1\,. 
\end{equation} Beginning with (\ref{eq:lu1}), the inequality with $k=1$ is trivial, since $\mathbb{E}Z_\lambda=\lambda$. In the case $k=2$, it is straightforward to show, using (\ref{eq:sbdef}), that if $\mathbb{E}W^\star\leq\mathbb{E}W+1$ (which holds by the assumption that $W^\star\leq_{s-cx}W+1$) then $\mathbb{E}\binom{W}{2}\leq\mathbb{E}\binom{Z_\lambda}{2}$. The remaining cases, $k=3,\ldots,s$ are covered explicitly in the statement of Theorem \ref{thm:order}. It remains only to establish (\ref{eq:lu2}). Lemma \ref{lem1} gives us that $$ \frac{\partial}{\partial\alpha}h_0(W_\alpha,j) = \frac{\lambda}{\alpha}\Delta\left[h_0(W_\alpha+1,j)-h_0(W_\alpha^\star,j)\right]\,. $$ Applying $\Delta^{-(s+1)}$ to each side of this equation (and interchanging summation and differentiation) we obtain $$ -\frac{\partial}{\partial\alpha}h_{s+1}(W_\alpha,j) = \frac{\lambda}{\alpha}\left[h_s(W_\alpha+1,j)-h_s(W_\alpha^\star,j)\right]\,. $$ By the stochastic ordering (\ref{eq:order2}) and Proposition 2.5 of \cite{lu96}, $h_s(W_\alpha+1,j)\geq h_s(W_\alpha^\star,j)$ for all $\alpha$ and $j$. Hence, letting $j\in\mathbb{Z}^+$, \begin{eqnarray*} 0&\leq&\int_0^1\frac{\lambda}{\alpha}\left[h_s(W_\alpha+1,j)-h_s(W_\alpha^\star,j)\right]\,d\alpha\\ &=& -\int_0^1\frac{\partial}{\partial\alpha}h_{s+1}(W_\alpha,j)\,d\alpha\\&=&h_{s+1}(Z_\lambda,j)-h_{s+1}(W,j)\,, \end{eqnarray*} as required, since $W_1$ is equal in distribution to $W$ and $W_0\sim\mbox{Po}(\lambda)$. This establishes our Theorem \ref{thm:order}. The proof of Theorem \ref{thm:order2} is exactly as for Theorem \ref{thm:order} above (with $s=1$), except for a change in the limits of integration. \subsection{Remarks on some related results} We conclude Section \ref{sec:order} by noting some results related to Theorem \ref{thm:order2}. Before stating these, we need a definition. 
We recall that random variables $\{X_i:i\in\Gamma\}$ are negatively associated if $$ \mbox{Cov}\left(f(X_i,i\in\Gamma_1),g(X_i,i\in\Gamma_2)\right) \leq0\,, $$ for all increasing functions $f$ and $g$ and all $\Gamma_1,\Gamma_2\subseteq\Gamma$ with $\Gamma_1\cap\Gamma_2=\emptyset$. Negative association is closely related to other concepts of negative dependence we have used. For example, note that negatively associated indicator random variables are negatively related, and hence sums of negatively associated indicator variables satisfy our stochastic ordering assumption (\ref{eq:order1}) with $s=1$. Shao \cite{s00} shows that if $X_1,\ldots,X_n$ are negatively associated and if the random variables $X^\dagger_1,\ldots,X^\dagger_n$ are independent with each of the $X^\dagger_i$ having the same marginal distribution as $X_i$ then \begin{equation}\label{eq:shao} X_1+\cdots+X_n\leq_{cx}X^\dagger_1+\cdots+X^\dagger_n\,. \end{equation} In the case where $X_1,\ldots,X_n$ are indicator random variables, the stochastic comparison (\ref{eq:shao}) with the sum of independent random variables is a stronger result than our Theorem \ref{thm:order2}, in which the comparison is with a Poisson variable. We note, though, that our results apply in a more general negative dependence setting, and that we obtain results for the more general $s$-convex orderings (as in our Theorem \ref{thm:order}). Results analogous to (\ref{eq:shao}) are also available in a positive dependence setting. Recall that random variables $\{X_i:i\in\Gamma\}$ are associated if $$ \mbox{Cov}\left(f(X_i,i\in\Gamma),g(X_i,i\in\Gamma)\right) \geq0\,, $$ for all increasing functions $f$ and $g$. Denuit et al. \cite{ddr01} show that if $X_1,\ldots,X_n$ are associated then $$ X_1+\cdots+X_n\geq_{cx}X^\dagger_1+\cdots+X^\dagger_n\,. $$ In the course of this work, we have been unable to find results analogous to our Theorem \ref{thm:order2}, comparing such sums with a Poisson random variable, in a positive dependence setting, such as for sums of associated random variables.
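A classical special case of (\ref{eq:shao}) can be checked numerically: the indicators arising in sampling without replacement are negatively associated, so a hypergeometric sum is dominated in the convex order by its with-replacement (binomial) counterpart. A small sketch (parameter values ours), comparing moment generating functions:

```python
import math

# Sketch (parameter values ours): sampling without replacement gives a
# hypergeometric W; replacing the indicators by independent copies gives
# a binomial W^dagger with the same mean, and W <=_cx W^dagger as in (eq:shao)
N, K, m = 20, 8, 6
hyp = [math.comb(K, k) * math.comb(N - K, m - k) / math.comb(N, m)
       for k in range(m + 1)]
p = K / N
binom = [math.comb(m, k) * p**k * (1 - p)**(m - k) for k in range(m + 1)]
mean_h = sum(k * q for k, q in enumerate(hyp))
mean_b = sum(k * q for k, q in enumerate(binom))
assert abs(mean_h - mean_b) < 1e-12
# E exp(theta W) <= E exp(theta W^dagger) for every theta, since exp is convex
for theta in (-1.0, -0.5, 0.5, 1.0, 2.0):
    mgf_h = sum(q * math.exp(theta * k) for k, q in enumerate(hyp))
    mgf_b = sum(q * math.exp(theta * k) for k, q in enumerate(binom))
    assert mgf_h <= mgf_b + 1e-12
```

The equal means reflect that the convex ordering compares variability only; the MGF comparison is one family of convex test functions, not a proof of the full ordering.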
\section{Further results in Poisson approximation} \label{sec:mp} In Section \ref{subsec:approx} we saw how our negative dependence assumption leads to bounds in Poisson approximation for our random variable $W$. These bounds were established using the convex ordering given by Theorem \ref{thm:order}, which was itself proved using Lemma \ref{lem1}. We use this section to give another application of thinning and size biasing (via our Lemma \ref{lem1}) to Poisson approximation. We state a bound in Lemma \ref{lem2} below which will be applied (in Section \ref{subsec:wass}) to give some general results in Poisson approximation which do not need any assumptions of stochastic ordering. We will note, however, the refinements and simplifications available in these results if we introduce the same stochastic ordering assumptions which we used in Section \ref{sec:order}. In Section \ref{subsec:mp} we will apply Lemma \ref{lem2} to the problem of Poisson approximation of the mixed Poisson distribution. Our results will be stated in terms of the distances $d_{n,p}$ defined in Section \ref{subsec:approx}. \begin{lmm} \label{lem2} Let $W$ be a non-negative, integer-valued random variable with distribution function $F$ and $\mathbb{E}[W]=\lambda>0$. Let $G_\lambda$ be the distribution function of $Z_\lambda\sim\mbox{Po}(\lambda)$. Then for $1\leq p\leq\infty$ and $n\in\mathbb{Z}$ $$ d_{n,p}(F,G_\lambda) \leq \lambda\int_0^1\frac{1}{\alpha}d_{n+1,p}(F_\alpha^{(1)},F_\alpha^\star)\,d\alpha\,, $$ where $F_\alpha^{(1)}$ is the distribution function of $W_\alpha+1$ and $F_\alpha^\star$ is the distribution function of $W_\alpha^\star$. \end{lmm} \begin{proof} Let $F_\alpha$ be the distribution function of $W_\alpha$. 
We use the definition of $d_{n,p}$ and note that $F=F_1$ and $G_\lambda=F_0$ to obtain \begin{eqnarray*} d_{n,p}(F,G_\lambda) &=& \left\lVert\Delta^n\int_0^1\frac{\partial}{\partial\alpha}F_\alpha\,d\alpha \right\rVert_p\\ &\leq& \int_0^1 \left\lVert\Delta^n\frac{\partial}{\partial\alpha}F_\alpha\right\rVert_p\,d\alpha\\ &=&\lambda\int_0^1\frac{1}{\alpha}\left\lVert\Delta^{n+1}F_\alpha^{(1)}-\Delta^{n+1}F_\alpha^\star\right\rVert_p\,d\alpha\,, \end{eqnarray*} where the inequality follows from Minkowski's integral inequality \cite[Appendix A]{x} and the final line uses Lemma \ref{lem1}. \end{proof} \subsection{Poisson approximation using thinning and size biasing}\label{subsec:wass} The main result of this section is Theorem \ref{thm:pois} below. This contains some Poisson approximation results derived from Lemma \ref{lem2} and also shows how these results may be combined with the same stochastic ordering assumption employed in Section \ref{sec:order}. To ease the notational burden on this section we will write $d_{n,p}(X,Y)$ to mean $d_{n,p}(F,G)$ if $X$ and $Y$ are random variables with distribution functions $F$ and $G$, respectively. \begin{thrm}\label{thm:pois} Let $W$ be a non-negative, integer-valued random variable with $\mathbb{E}[W]=\lambda>0$ and let $W^\star$ be defined by (\ref{eq:sbdef2}). Let $Z_\lambda\sim\mbox{Po}(\lambda)$. \begin{enumerate} \renewcommand{\theenumi}{\alph{enumi}} \renewcommand{\labelenumi}{(\alph{enumi})} \item\label{th32_1} For $s\in\mathbb{Z}^+$ \begin{eqnarray} \label{eq:31}d_{-s,1}(W,Z_\lambda)&\leq&\frac{\lambda}{1+s}d_{1-s,1}(W,W^\star-1)\,,\\ \label{eq:32}d_{-s,\infty}(W,Z_\lambda)&\leq&\frac{\lambda}{s}d_{1-s,\infty}(W,W^\star-1)\,, \end{eqnarray} where this last inequality applies if $s\not=0$. 
\item\label{th32_2} If, in addition, $W+1\geq_{s-cx}W^\star$ then \begin{eqnarray} \label{eq:33}d_{-s,1}(W,Z_\lambda)&\leq&\frac{1}{1+s}\mathbb{E}\left[\lambda\binom{W+s}{s}-W\binom{W+s-1}{s}\right]\,,\\ \nonumber d_{-k,\infty}(W,Z_\lambda)&\leq&\frac{2^{(s-k-1)_+}}{k}\mathbb{E}\left[\lambda\binom{W+s}{s}-W\binom{W+s-1}{s}\right]\,,\\\label{eq:34} \end{eqnarray} for $k=1,\ldots,s+1$. \end{enumerate} \end{thrm} \begin{proof} We begin by using Corollary 2.1 of \cite{cgs12}, and the fact that $Z_{\lambda}^\star$ is equal in distribution to $Z_\lambda+1$ for all $\lambda$, to note that \begin{equation}\label{eq:sb} W_\alpha^\star=(T_\alpha W+Z_{(1-\alpha)\lambda})^\star=I_\alpha(T_\alpha W)^\star+(1-I_\alpha)(T_\alpha W+1)+Z_{(1-\alpha)\lambda}\,, \end{equation} where $I_\alpha$ is independent of all else and $\mathbb{P}(I_\alpha=1)=\alpha=1-\mathbb{P}(I_\alpha=0)$. Using the functions $h_s(X,j)$ defined by (\ref{eq:hdef}) for any non-negative random variable $X$ and $j\in\mathbb{Z}^+$, we have that for $s\in\mathbb{Z}^+$, $$ d_{-s,p}(W_\alpha+1,W_\alpha^\star)=\lVert h_{s+1}(W_\alpha+1,\cdot)-h_{s+1}(W_\alpha^\star,\cdot)\rVert_p\,. $$ With $W_\alpha^\star$ given by (\ref{eq:sb}), we can condition on $I_\alpha$ and $Z_{(1-\alpha)\lambda}$ to get that $$ d_{-s,p}(W_\alpha+1,W_\alpha^\star)\leq\alpha d_{-s,p}(T_\alpha W,V(T_\alpha W))\,, $$ where the operator $V$ is such that $VX=X^\star-1$ for any non-negative random variable $X$. Since the operators $V$ and $T_\alpha$ commute for any $0\leq\alpha\leq1$, we have that \begin{equation}\label{eq:dist} d_{-s,p}(W_\alpha+1,W_\alpha^\star)\leq\alpha d_{-s,p}(T_\alpha W,T_\alpha (W^\star-1))\,. 
\end{equation} Recalling that $T_\alpha W=\sum_{j=1}^W\eta_j$, where $\eta_1,\eta_2,\ldots$ are iid Bernoulli variables with mean $\alpha$, in the case that $p\in\{1,\infty\}$ we may bound this latter distance using an argument analogous to that of Proposition 4.2 of Denuit and Van Bellegem \cite{dv01} to get that \begin{eqnarray} \label{eq:31_1}d_{-s,1}(T_\alpha W,T_\alpha (W^\star-1))&\leq&\alpha^{s+1}d_{-s,1}(W,W^\star-1) \,,\\ \label{eq:31_2}d_{-s,\infty}(T_\alpha W,T_\alpha (W^\star-1))&\leq&\alpha^{s}d_{-s,\infty}(W,W^\star-1) \,, \end{eqnarray} We may now complete the proof of the first part of the theorem. Combining Lemma \ref{lem2} with (\ref{eq:dist}) and (\ref{eq:31_1}) we have that $$ d_{-s,1}(W,Z_\lambda)\leq\lambda d_{1-s,1}(W,W^\star-1)\int_0^1\alpha^s\,d\alpha=\frac{\lambda}{1+s}d_{1-s,1}(W,W^\star-1)\,. $$ Similarly, using (\ref{eq:31_2}) in place of (\ref{eq:31_1}), we have that if $s\not=0$ $$ d_{-s,\infty}(W,Z_\lambda)\leq\lambda d_{1-s,\infty}(W,W^\star-1)\int_0^1\alpha^{s-1}\,d\alpha=\frac{\lambda}{s}d_{1-s,\infty}(W,W^\star-1)\,. $$ This completes the proof of part (\ref{th32_1}). For part (\ref{th32_2}), we note that if $W+1\geq_{s-cx}W^\star$ then $$ d_{1-s,1}(W,W^\star-1)=\mathbb{E}\left[\binom{W+s}{s}-\binom{W^\star-1+s}{s}\right]\,. $$ Combining this with (\ref{eq:31}) and (\ref{eq:sbdef}) gives us (\ref{eq:33}). Now let $k\in\{1,\ldots,s+1\}$. Corollary 3.14 of \cite{dlu02} gives us that if $W+1\geq_{s-cx}W^\star$ then $$ d_{1-k,\infty}(W,W^\star-1)\leq2^{(s-k-1)_+}\mathbb{E}\left[\binom{W+s}{s}-\binom{W^\star-1+s}{s}\right]\,. $$ We obtain (\ref{eq:34}) when we combine this with (\ref{eq:32}) and (\ref{eq:sbdef}). \end{proof} To illustrate this result, we consider two examples. \begin{example}\label{eg:h2} \emph{Firstly, we return to the setting of Example \ref{eg:h1} and let $W$ have a hypergeometric distribution (with notation as in Example \ref{eg:h1}). 
Then, letting $$ \epsilon=\frac{mn}{N}\left(\frac{(m+n)N-mn-N}{N(N-1)}\right)\,, $$ and recalling that $W+1\geq_{st}W^\star$ in this case, Theorem \ref{thm:pois}(\ref{th32_2}) gives $d_{-1,1}(W,Z_\lambda)\leq\epsilon/2$, $d_{-2,\infty}(W,Z_\lambda)\leq\epsilon/2$ and $d_{-1,\infty}(W,Z_\lambda)\leq\epsilon$, where this latter metric is the stop-loss distance.} \end{example} \begin{example}\label{eg:p1} \emph{We consider now the P\'olya distribution, which has found applications in the study of epidemics, genetics and communications. Suppose we have an urn containing $N$ balls, of which $r$ are red and $N-r$ are black. At each step, we draw a ball, note its colour and return it to the urn together with $c\geq1$ additional balls of the same colour. Repeat this for a total of $m$ draws, and let $W$ count the number of red balls chosen in these $m$ draws. Then $W$ has a P\'olya distribution with mean $\lambda=mr/N$ and variance given by $$ \sigma^2=\frac{mr(N+cm)(N-r)}{N^2(N+c)}\,. $$ We use Theorem \ref{thm:pois}(\ref{th32_1}) to give a bound on the Wasserstein distance $d_W(W,Z_\lambda)=d_{0,1}(W,Z_\lambda)$. From that result, we have that $$ d_W(W,Z_\lambda)\leq2\lambda d_{TV}(W,W^\star-1)\leq2\lambda\left\{d_{TV}(W,W^\star)+d_{TV}(W,W+1)\right\}\,, $$ where the final bound is the triangle inequality. From inequalities (5) and (20) of \cite{d11}, respectively, we have that $d_{TV}(W,W^\star)\leq\sigma/2\lambda$ and $$ d_{TV}(W,W+1)\leq\frac{1}{2\lambda(N-r)\sqrt{N+c}}\left\{\sqrt{mr(N-r)(N+cm)}+m\sqrt{cr(N-r)}\right\}\,. $$ Combining these inequalities, we have the following explicit bound: \begin{multline}\label{eq:polya} d_W(W,Z_\lambda)\leq\sqrt{\frac{mr(N+cm)(N-r)}{N^2(N+c)}}\\+\frac{1}{(N-r)\sqrt{N+c}}\left\{\sqrt{mr(N-r)(N+cm)}+m\sqrt{cr(N-r)}\right\}\,. 
\end{multline} Some further discussion of the P\'olya distribution, and the bound (\ref{eq:polya}), is given in Example \ref{eg:p2} below.} \end{example} We note that the results of Theorem \ref{thm:pois} are not the only way in which our stochastic ordering assumption can be used to get a Poisson approximation result based on Lemma \ref{lem2}. For example, consider the Wasserstein distance $d_W=d_{0,1}$ and total variation distance $d_{TV}=\frac{1}{2}d_{1,1}$. An argument analogous to that used to obtain (\ref{eq:dist}) gives us that $$ d_{TV}(W_\alpha+1,W_\alpha^\star)\leq\alpha d_{TV}(T_\alpha W,T_\alpha(W^\star-1))\,. $$ Combining this with Lemma \ref{lem2} (in the case $n=0$) we have that $$ d_W(W,Z_\lambda)\leq2\lambda\int_0^1d_{TV}(T_\alpha W,T_\alpha(W^\star-1))\,d\alpha\,. $$ If we assume that $W+1\geq_{st}W^\star$, we may use Theorem 7 of \cite{rp03} to obtain $$ d_W(W,Z_\lambda)\leq2\lambda\int_0^1\alpha\mathbb{E}\left[W+1-W^\star\right]\,d\alpha=\lambda-\mbox{Var}(W)\,. $$ In this case, however, better bounds are available by combining Proposition 2 of \cite{dlu12} with Theorem 1.1 of \cite{bx06}. We thus obtain $$ d_W(\mathcal{L}(W),\mathcal{L}(Z_\lambda))\leq\left(1\wedge\frac{1.15}{\sqrt{\lambda}}\right)\left(\lambda-\mbox{Var}(W)\right)\,. $$ \subsection{Poisson approximation for mixed Poisson distributions}\label{subsec:mp} In this section we apply Lemma \ref{lem2} to the case where $W\sim\mbox{Po}(\xi)$ has a mixed Poisson distribution with positive mixture distribution $\xi$ and $\mathbb{E}[\xi]=\lambda$. We begin by showing that in this case, $W_\alpha$ also has a mixed Poisson distribution. Note that we will not make any assumptions of stochastic or convex ordering in this section. \begin{lmm}\label{lem3} If $W\sim\mbox{Po}(\xi)$ then $W_\alpha\sim\mbox{Po}(\alpha\xi+(1-\alpha)\lambda)$ for all $\alpha\in[0,1]$. 
\end{lmm} \begin{proof} Elementary computations show that for $j\in\mathbb{Z}^+$ $$ \mathbb{P}(T_\alpha W=j) = \sum_{i=0}^\infty\binom{i}{j}\alpha^j(1-\alpha)^{i-j}\mathbb{P}(W=i) = \frac{1}{j!}\mathbb{E}\left[e^{-\alpha\xi}(\alpha\xi)^j\right]\,, $$ so that $T_\alpha W\sim\mbox{Po}(\alpha\xi)$. Since $W_\alpha$ is the convolution of $T_\alpha W$ and an independent Poisson random variable, the result follows. \end{proof} Now, let us write $\xi_{(\alpha)}=\alpha\xi+(1-\alpha)\lambda$ and $$ g_\alpha(j) = \frac{\exp\{-{\xi_{(\alpha)}}\}\xi_{(\alpha)}^{j-1}}{(j-1)!}\,. $$ Since $$ \mathbb{P}(W_\alpha+1=j)-\mathbb{P}(W_\alpha^\star=j)=\mathbb{E}\left[\left(1-\frac{\xi_{(\alpha)}}{\lambda}\right)g_\alpha(j)\right]\,, $$ Lemmas \ref{lem2} and \ref{lem3} give us that \begin{eqnarray*} d_{n,p}(F,G_\lambda) &\leq& \int_0^1\frac{1}{\alpha}\left\lVert\Delta^n\mathbb{E}\left[(\xi_{(\alpha)}-\lambda)g_\alpha\right]\right\rVert_p\,d\alpha\\ &\leq& \int_0^1\frac{1}{\alpha}\mathbb{E}\left[|\xi_{(\alpha)}-\lambda|\left\lVert\Delta^ng_{\alpha}\right\rVert_p\right]\,d\alpha\,, \end{eqnarray*} where we have again used Minkowski's integral inequality. For the remainder of this section we focus only on the case $n\geq0$ and $p=1$. In this case, straightforward calculations using Lemma 3.4 of \cite{rr12} give us that $$ \left\lVert\Delta^ng_{\alpha}\right\rVert_1 \leq \xi_{(\alpha)}^{-n/2}\,, $$ and hence \begin{equation}\label{eq:mp1} d_{n,1}(F,G_\lambda) \leq \int_0^1\mathbb{E}\left[|\xi-\lambda|\left(\alpha\xi+(1-\alpha)\lambda\right)^{-n/2}\right]\,d\alpha\,. \end{equation} If we assume that the expectation in (\ref{eq:mp1}) exists for all $\alpha\in[0,1]$ then we may interchange the order of integration to obtain the following result. \begin{thrm}\label{thm:mp} Let $W\sim\mbox{Po}(\xi)$ for some positive random variable $\xi$ with $\mathbb{E}[\xi]=\lambda$. Let $F$ be the distribution function of $W$ and $G_\lambda$ be the distribution function of $Z_\lambda\sim\mbox{Po}(\lambda)$. 
Suppose that $$ \mathbb{E}\left[|\xi-\lambda|\left(\alpha\xi+(1-\alpha)\lambda\right)^{-n/2}\right]<\infty\,, $$ for some $n\in\mathbb{Z}^+$ and all $\alpha\in[0,1]$. Then if $n\not=2$, $$ d_{n,1}(F,G_\lambda) \leq \left|\frac{2}{n-2}\right|\mathbb{E}\left|\xi^{(2-n)/2}-\lambda^{(2-n)/2}\right|\,, $$ while if $n=2$, $d_{2,1}(F,G_\lambda)\leq\mathbb{E}\left|\log\left(\frac{\xi}{\lambda}\right)\right|$. \end{thrm} We illustrate this result by returning to the setting of Example \ref{eg:p1}, the P\'olya distribution. \begin{example}\label{eg:p2} \emph{Let $W$ have a P\'olya distribution, as described in Example \ref{eg:p1}. We show how Theorem \ref{thm:mp} may be used to give a bound on the Wasserstein distance $d_W(W,Z_\lambda)=d_{0,1}(W,Z_\lambda)$ between $W$ and a Poisson distribution of the same mean. To do this, we follow \cite{aa08} and construct $W$ as the mixed binomial distribution $\mbox{Bin}(m,\xi)$, where $\xi$ has a beta distribution with density function $$ g(t)=B(\alpha,\beta)^{-1}t^{\alpha-1}(1-t)^{\beta-1}\,, $$ for $t\in(0,1)$, where $B(\cdot,\cdot)$ is the beta function, $\alpha=r/c$ and $\beta=(N-r)/c$.} \emph{Letting $Y\sim\mbox{Po}(m\xi)$ have a mixed Poisson distribution, we may condition on $\xi$ to obtain the bound $d_W(W,Y)\leq1.15\sqrt{m}\mathbb{E}\left[\xi^{3/2}\right]$ from equation (1.8) of \cite{bx06}. Our Theorem \ref{thm:mp} gives $d_W(Y,Z_{\lambda})\leq m\mathbb{E}|\xi-p|$, where $p=\mathbb{E}\xi=r/N$. The triangle inequality and H\"older's inequality then give \begin{eqnarray} \nonumber d_W(W,Z_\lambda)&\leq&1.15\sqrt{m}\mathbb{E}\left[\xi^{3/2}\right]+m\mathbb{E}|\xi-p|\\ \nonumber&\leq&1.15\sqrt{m}\left(\mathbb{E}\xi^2\right)^{3/4}+m\sqrt{\mbox{Var}(\xi)}\\ \label{eq:polya2}&=&1.15\sqrt{m}\left(\frac{r(r+c)}{N(N+c)}\right)^{3/4}+m\sqrt{\frac{cr(N-r)}{N^2(N+c)}}\,. \end{eqnarray} Asymptotically, this bound behaves similarly to that derived in Example \ref{eg:p1} above. 
For example, if $m$ is of order $O(N)$ and $c$ and $r$ are both of order $O(1)$, then each of the bounds (\ref{eq:polya}) and (\ref{eq:polya2}) is of order $O(1)$. However, numerical studies suggest that in practice (\ref{eq:polya2}) performs better than (\ref{eq:polya}).} \end{example} In the case of the total variation distance $d_{TV}=\frac{1}{2}d_{1,1}$, Theorem \ref{thm:mp} gives the following. \begin{crllr}\label{cor:mp} Let $F$ and $G_\lambda$ be as in Theorem \ref{thm:mp}. For any $\epsilon\in[0,1/2]$ $$ d_{TV}(F,G_\lambda) \leq \frac{\left(\mathbb{E}|\xi-\lambda|\right)^{1/2+\epsilon}}{\lambda^\epsilon}\,. $$ \end{crllr} \begin{proof} From Theorem \ref{thm:mp} we have that $d_{TV}(F,G_\lambda)\leq\mathbb{E}|\sqrt{\xi}-\sqrt{\lambda}|$. This expectation may be bounded using Lemma 1 of \cite{r03} and H\"older's inequality to give the required result. \end{proof} We note, however, that bounds superior to that given by Corollary \ref{cor:mp} may be available elsewhere. For example, consider the case where $W$ has a negative binomial distribution. That is, assume that $\xi$ has a gamma distribution with density function $$ g(t) = \frac{1}{\Gamma(\beta)}\left(\frac{1-q}{q}\right)^\beta t^{\beta-1}\exp\left\{-t\left(\frac{1-q}{q}\right)\right\}\,, $$ for $t>0$, for some $\beta\in(0,\infty)$ and $q\in(0,1)$. Note that $\lambda=\beta q(1-q)^{-1}$, $\mbox{Var}(\xi)=\beta q^2(1-q)^{-2}$ and $$ \mathbb{E}|\xi-\lambda| = \frac{2q\beta^\beta e^{-\beta}}{(1-q)\Gamma(\beta)}\leq\frac{q}{1-q}\sqrt{\frac{2\beta}{\pi}}\,, $$ where this inequality uses a slight generalisation of Proposition A.2.9 of \cite{bhj92}, whose proof is straightforward. Thus, evaluating the bound of Corollary \ref{cor:mp}, and in particular with the choices $\epsilon=0$ and $\epsilon=1/2$, we obtain that in the negative binomial case \begin{equation}\label{eq:nb1} d_{TV}(F,G_\lambda)\leq\sqrt{\frac{q}{1-q}}\min\left\{\sqrt{\frac{2}{\pi}},\left(\frac{2\beta}{\pi}\right)^{1/4}\right\}\,.
\end{equation} For comparison, Roos \cite{r03} obtains the bound \begin{equation}\label{eq:nb2} d_{TV}(F,G_\lambda)\leq\beta\left(\frac{q}{1-q}\right)^2\min\left\{\frac{3(1-q)}{4e\beta q},1\right\}\,, \end{equation} and shows that it is superior to many others available in the literature. Note that, regardless of the value of $\beta$, the bound (\ref{eq:nb1}) is of order $O(\sqrt{q})$ while (\ref{eq:nb2}) has order at least as good as $O(q)$. \section{The binomial case}\label{sec:bin} The results that we have stated in previous sections (based on Lemma \ref{lem1}) are closely related to the Poisson distribution, since Lemma \ref{lem1} is itself closely related to the Poisson distribution. In this section we turn our attention to results in the binomial case. We consider results analogous to those in Sections \ref{sec:order} and \ref{sec:mp}. In doing this, we will use a Markov chain constructed by Yu \cite{y08} and used in proving an upper bound on entropy. We begin with some useful definitions. Throughout this section let $W$ be a non-negative, integer-valued random variable supported on $\{0,1,\ldots,n\}$, for some integer $n>0$, and with mean $\lambda=nr>0$. We will let $Z\sim\mbox{Bin}(n,r)$, a binomial random variable with the same support and mean as $W$. We recall that a random variable $X$ supported on $\{0,1,\ldots,n\}$ is ultra log-concave of degree $n$, denoted ULC($n$) in the sequel, if $$ \frac{\mathbb{P}(X=i+1)^2}{\binom{n}{i+1}^2}\geq\frac{\mathbb{P}(X=i)}{\binom{n}{i}}\frac{\mathbb{P}(X=i+2)}{\binom{n}{i+2}}\,, $$ for $0\leq i\leq n-2$. We refer the reader to \cite{p00} for further discussion of this property. We note here that the ULC($n$) property is intended to capture negative dependence, in a similar way to the ULC($\infty$) property and the other negative dependence assumptions we have discussed in Section \ref{sec:order}. For those $W$ which are ULC($n$), Yu \cite[Theorem 1]{y08} proves that $H(W)\leq H(Z)$. 
This is an analogue of Theorem 2.5 of \cite{j07}, which we generalised in our Corollary \ref{cor:entropy1}. The proof of Yu's result employs a Markov chain $\{X_t:t\in\mathbb{Z}^+\}$, whose construction we now outline. Further details and discussion are provided by \cite{y08}. We let $X_0$ have the same distribution as $W$. The random variable $X_{t}$ (for $t\geq1$) is given by \begin{equation}\label{eq:xdef} X_{t}=H_n(X_{t-1}+\eta_{t-1})\,, \end{equation} where $\eta_0,\eta_1,\ldots$ are iid Bernoulli random variables with mean $r$ and the operator $H_n$ is such that for a non-negative, integer-valued random variable $X$ supported on $\{0,1,\ldots,n\}$ $$ \mathbb{P}(H_nX=i)=\frac{(n-i)}{n}\mathbb{P}(X=i)+\frac{(i+1)}{n}\mathbb{P}(X=i+1)\,, $$ for $0\leq i\leq n-1$. The operator $H_n$ is referred to as hypergeometric thinning, since, conditional on $X$, $H_nX$ has a hypergeometric distribution. This is the analogue of the (binomial) thinning operator $T_\alpha$ defined in Section \ref{sec:intro}. Recall that, conditional on $X$, $T_\alpha X$ has a binomial distribution. In proving his entropy bound, Yu \cite{y08} uses the random variables $\{X_t:t\in\mathbb{Z}^+\}$ in a role analogous to that of the random variables $\{W_\alpha:0\leq\alpha\leq1\}$ in the corresponding bound for the Poisson case \cite[Theorem 2.5]{j07}. We use the remainder of this section to examine how the techniques we have developed in our previous work may be carried over into this binomial setting. We begin with the analogue of Lemma \ref{lem1}. Writing $p_t(i)=\mathbb{P}(X_t=i)$, Yu \cite{y08} shows that for $t\geq0$ \begin{equation}\label{eq:mc} p_{t+1}(i)=\frac{(n+1-i)(sp_t(i)+rp_t(i-1))+(i+1)(sp_t(i+1)+rp_t(i))}{n+1}\,, \end{equation} where $s=1-r$. We note that $X_t$ is supported on $\{0,1,\ldots,n\}$ and has expectation $nr$ for each $t\in\mathbb{Z}^+$.
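The recursion (\ref{eq:mc}) is easy to test numerically. The short Python sketch below (with the illustrative, arbitrary choices $n=4$, $r=1/2$ and $X_0$ a point mass at $nr=2$) iterates (\ref{eq:mc}) and checks that each $p_t$ remains a probability mass function with mean $nr$, and that it approaches the $\mbox{Bin}(n,r)$ mass function.

```python
import math

def mc_step(p, r):
    """One step of the chain, eq. (eq:mc), with p_t(i) = 0 outside {0,...,n}."""
    n = len(p) - 1
    s = 1.0 - r
    def pt(i):
        return p[i] if 0 <= i <= n else 0.0
    return [((n + 1 - i) * (s * pt(i) + r * pt(i - 1))
             + (i + 1) * (s * pt(i + 1) + r * pt(i))) / (n + 1)
            for i in range(n + 1)]

n, r = 4, 0.5                          # illustrative choices
p = [0.0, 0.0, 1.0, 0.0, 0.0]          # X_0 = W deterministic at 2 = nr (trivially ULC(4))
for _ in range(300):
    p = mc_step(p, r)
    assert abs(sum(p) - 1.0) < 1e-10                                  # remains a pmf
    assert abs(sum(i * q for i, q in enumerate(p)) - n * r) < 1e-9    # mean nr preserved

binom = [math.comb(n, i) * r**i * (1 - r)**(n - i) for i in range(n + 1)]
assert max(abs(a - b) for a, b in zip(p, binom)) < 1e-8   # approaches Bin(n, r)
```
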
The key property of this Markov chain is that as $t\rightarrow\infty$, $X_t$ converges in distribution to the binomial distribution $\mbox{Bin}(n,r)$. Now, given a random variable $W$ supported on $\{0,1,\ldots,n\}$, define the random variable $W^+$ by $$ \mathbb{P}(W^+=j)=\frac{n+1-j}{n(1-r)}\mathbb{P}(W+1=j)\,, $$ for $1\leq j\leq n$. Straightforward manipulations of (\ref{eq:mc}) then allow us to see the following result, analogous to our Lemma \ref{lem1} for the Poisson case. \begin{lmm}\label{lem:bin1} Let $W$ be a random variable supported on $\{0,1,\ldots,n\}$ with mean $nr>0$. Then for $t\in\mathbb{Z}^+$ and $0\leq j\leq n$ $$ \mathbb{P}(X_t=j)-\mathbb{P}(X_{t+1}=j) = \frac{nr(1-r)}{n+1}\Delta\left[\mathbb{P}(X_t^+=j)-\mathbb{P}(X_t^\star=j)\right]\,. $$ \end{lmm} \subsection{Convex ordering and ULC($n$)} We use the next part of this section to explore stochastic ordering properties similar to those considered previously in Section \ref{sec:order}. We will make use of ultra log-concavity, and will assume that $W$ is ULC($n$). For such $W$ we have that $W^+\geq_{st}W^\star$ and that $X_t$ is ULC($n$) for all $t\in\mathbb{Z}^+$. See \cite[Lemma 3]{y08}. Combining these facts we immediately see that if $W$ is ULC($n$) then $X_t^+\geq_{st}X_t^\star$ for all $t\in\mathbb{Z}^+$. We may then derive the following result, which plays the role of Theorem \ref{thm:order2} in the binomial case. \begin{thrm}\label{thm:binorder} Let $W$ be ULC($n$) with support $\{0,1,\ldots,n\}$ and mean $nr>0$. Let $Z\sim\mbox{Bin}(n,r)$ and $X_t$ be given by (\ref{eq:xdef}). Then $X_t\leq_{cx}X_u$ for all $t\leq u$. In particular, $W\leq_{cx}Z$. \end{thrm} \begin{proof} We use the ideas and notation of the proof of Theorem \ref{thm:order}. As in the proof of that result, Proposition 2.5 of \cite{lu96} gives us that we need only show that $h_2(X_t,j)\leq h_2(X_{t+1},j)$ for each $t\in\mathbb{Z}^+$ and $0\leq j\leq n$.
The first statement in the theorem follows easily from this, and the final statement by taking $t=0$ and $u\rightarrow\infty$ in the first. As noted before, for $W$ a ULC($n$) random variable, we have that $X_t^+\geq_{st}X_t^\star$ for each $t\in\mathbb{Z}^+$. Hence $h_1(X_t^+,j)\geq h_1(X_t^\star,j)$ for all $t\in\mathbb{Z}^+$ and $0\leq j\leq n$. Now, by Lemma \ref{lem:bin1} we have that $$ h_0(X_t,j)-h_0(X_{t+1},j)=\frac{nr(1-r)}{n+1}\Delta\left[h_0(X_t^+,j)-h_0(X_t^\star,j)\right]\,, $$ for each $t\in\mathbb{Z}^+$ and $0\leq j\leq n$. Applying $\Delta^{-2}$ to this, we have that $$ \frac{nr(1-r)}{n+1}\left[h_1(X_t^+,j)-h_1(X_t^\star,j)\right]=h_2(X_{t+1},j)-h_2(X_t,j)\geq0, $$ as required. \end{proof} From Theorem \ref{thm:binorder}, we may immediately recover the main result of Yu \cite{y08}, his Theorem 1, which we state in Corollary \ref{cor:binentropy} below. \begin{crllr}\label{cor:binentropy} Let $W$ be ULC($n$) with support $\{0,1,\ldots,n\}$ and mean $nr>0$. Let $Z\sim\mbox{Bin}(n,r)$. Then $$ H(W) \leq H(Z)\,. $$ \end{crllr} \begin{proof} Since $W\leq_{cx}Z$ (by Theorem \ref{thm:binorder}) and $Z$ is a log-concave random variable, this follows immediately from Lemma 1 of \cite{y09}. \end{proof} We also have the following, the analogue of Corollary \ref{cor:entropy2}. \begin{crllr}\label{cor:binentropy2} Let $W$ be ULC($n$) with support $\{0,1,\ldots,n\}$ and mean $nr>0$. Let $Z\sim\mbox{Bin}(n,r)$ and $Y_1,Y_2,\ldots,$ be iid non-negative, integer-valued random variables. Let $$ \widehat{W}=\sum_{i=1}^WY_i\,,\hspace{20pt}\mbox{ and }\hspace{20pt}\widehat{Z}=\sum_{i=1}^{Z}Y_i\,. $$ If $\widehat{Z}$ is log-concave, then $H(\widehat{W})\leq H(\widehat{Z})$. \end{crllr} \begin{proof} Combine our Theorem \ref{thm:binorder} with Theorem 1 of \cite{y09}. \end{proof} Note that Corollary \ref{cor:binentropy2} generalises Theorem 2 of \cite{y09}, since a sum of $n$ independent Bernoulli random variables is ULC($n$). 
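The observation that a sum of independent Bernoulli random variables is ULC($n$) is also easy to check directly. The Python sketch below (the Bernoulli means are an arbitrary illustrative choice) builds such a pmf by convolution, verifies the ULC($n$) inequality, and confirms the entropy comparison $H(W)\leq H(Z)$ of Corollary \ref{cor:binentropy} for the binomial $Z$ with matching support and mean.

```python
import math

def poisson_binomial(ps):
    """pmf of a sum of independent Bernoulli(p_i) variables, by convolution."""
    pmf = [1.0]
    for p in ps:
        pmf = [(1 - p) * (pmf[i] if i < len(pmf) else 0.0)
               + p * (pmf[i - 1] if i >= 1 else 0.0)
               for i in range(len(pmf) + 1)]
    return pmf

def entropy(pmf):
    return -sum(q * math.log(q) for q in pmf if q > 0)

ps = [0.1, 0.3, 0.5, 0.7]            # illustrative Bernoulli means
n, r = len(ps), sum(ps) / len(ps)    # Z ~ Bin(n, r) has the same support and mean
W = poisson_binomial(ps)
Z = [math.comb(n, i) * r**i * (1 - r)**(n - i) for i in range(n + 1)]

# ULC(n): P(W=i+1)^2 / C(n,i+1)^2 >= [P(W=i)/C(n,i)] [P(W=i+2)/C(n,i+2)]
for i in range(n - 1):
    lhs = (W[i + 1] / math.comb(n, i + 1)) ** 2
    rhs = (W[i] / math.comb(n, i)) * (W[i + 2] / math.comb(n, i + 2))
    assert lhs >= rhs

assert entropy(W) <= entropy(Z)      # H(W) <= H(Z), as in the corollary above
```
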
We conclude this subsection by observing that we may also obtain concentration inequalities and binomial approximation results as corollaries of our Theorem \ref{thm:binorder}, as in the Poisson case of Section \ref{sec:order}. The proofs of these results are analogous to their Poisson counterparts in Section \ref{sec:order}. \begin{crllr} Let $W$ be ULC($n$) with support $\{0,1,\ldots,n\}$ and mean $\lambda=nr>0$. Let $t>0$. \begin{eqnarray*} \mathbb{P}(W\geq\lambda+t) &\leq& \left[\frac{(1-r)(\lambda+t)}{(1-r)\lambda-rt}\right]^{-(t+\lambda)}\left[1-r+\frac{r(1-r)(\lambda+t)}{(1-r)\lambda-rt}\right]^n\,,\\ \mathbb{P}(W\leq\lambda-t) &\leq& \left[\frac{(1-r)(\lambda-t)}{(1-r)\lambda+rt}\right]^{t-\lambda}\left[1-r+\frac{r(1-r)(\lambda-t)}{(1-r)\lambda+rt}\right]^n\,,\\ \end{eqnarray*} where the last inequality applies if $t<\lambda$. \end{crllr} \begin{crllr} Let $W$ be ULC($n$) with support $\{0,1,\ldots,n\}$ and mean $nr>0$. Let $Z\sim\mbox{Bin}(n,r)$. Then if $W$ has distribution function $F$ and $Z$ has distribution function $G$, $$ d_{-k,\infty}(F,G)\leq2^{(-k)_+-1}\left\{nr(1-r)-\mbox{Var}(W)\right\}\,, $$ for $k\in\{-1,0,1,2\}$. \end{crllr} \subsection{Other results in the binomial case} In Section \ref{sec:mp} we used our Lemma \ref{lem1} directly to provide a Poisson approximation result, Lemma \ref{lem2}. Similarly, we have the following. \begin{prpstn}\label{lem:bin2} Let $W$ be a random variable supported on $\{0,1,\ldots,n\}$ with mean $nr>0$. Let $F_t$ be the distribution function of $X_t$, for $t\in\mathbb{Z}^+$. Then for $1\leq p\leq\infty$ and $n\in\mathbb{Z}$ $$ d_{n,p}(F_0,F_t)\leq\frac{nr(1-r)}{n+1}\sum_{u=0}^{t-1}d_{n+1,p}(F_u^+,F_u^\star)\,, $$ where $F_u^+$ is the distribution function of $X_u^+$ and $F_u^\star$ is the distribution function of $X_u^\star$. 
\end{prpstn} \begin{proof} From the definition of $d_{n,p}$ we have that \begin{eqnarray*} d_{n,p}(F_0,F_t)&=&\left\lVert\Delta^{n}\sum_{u=0}^{t-1}[F_u-F_{u+1}]\right\rVert_p\\ &=&\frac{nr(1-r)}{n+1}\left\lVert\Delta^{n+1}\sum_{u=0}^{t-1}[F_u^+-F_u^\star]\right\rVert_p\\ &\leq&\frac{nr(1-r)}{n+1}\sum_{u=0}^{t-1}\left\lVert\Delta^{n+1}F_u^+-\Delta^{n+1}F_u^\star\right\rVert_p\,, \end{eqnarray*} where the second line uses Lemma \ref{lem:bin1} and the inequality uses Minkowski's integral inequality. \end{proof} It is worth noting, however, that we do not have a result analogous to Lemma \ref{lem3} here. That is, suppose that $X_0=W\sim\mbox{Bin}(n,\xi)$ for some random variable $\xi$ supported on $[0,1]$, so that $$ \mathbb{P}(W=i)=\binom{n}{i}\mathbb{E}[\xi^i(1-\xi)^{n-i}]\,. $$ Then $X_1$ does not, in general, have a mixed binomial distribution. In the Poisson case, the preservation of Poisson mixtures under the operators $U_\alpha$ ($0\leq\alpha\leq1$), as given by Lemma \ref{lem3}, allowed us to easily and explicitly find a bound on the distance between a mixed Poisson random variable and a Poisson random variable with the same mean. However, no such property holds in the binomial case we are considering here. \subsection*{Acknowledgements} The author gratefully acknowledges useful and interesting discussions with Oliver Johnson and Sergey Utev. Thanks are also due to the Heilbronn Institute for Mathematical Research at the University of Bristol, where part of this work was completed, and to an anonymous referee, whose suggestions improved the quality and presentation of the work.
\section{Introduction} \labell{intro} In the past decade, the BCFW recursion relation \cite{Britto:2004ap, Britto:2005fq} has been an efficient on-shell method to calculate tree-level scattering amplitudes. Pedagogical reviews on this topic can be found in \cite{Feng:2011np, Elvang:2013cua}. Still, it encounters certain difficulties when there exists no `good' deformation such as those found in \cite{ArkaniHamed:2008yf, Cheung:2008dn}, {\it i.e.,} the real amplitude does not vanish in the large $z$ limit, where $z$ is the deformation parameter. The recursion relation then fails to capture a residual part called the boundary term, which corresponds to the residue at infinity of the deformed amplitude. Many related studies have been carried out, including: introducing auxiliary fields to eliminate boundary terms \cite{Benincasa:2007xk, Boels:2010mj}, analyzing Feynman diagrams to isolate boundary terms \cite{Feng:2009ei, Feng:2010ku, Feng:2011twa}, expressing boundary terms as roots of amplitudes \cite{Benincasa:2011kn, Benincasa:2011pg, Feng:2011jxa}, collecting factorization limits to interpolate boundary terms \cite{Zhou:2014yaa} and using other deformations for better large $z$ behavior \cite{Cheung:2015cba}. Recently, a new algorithm, named the multi-step BCFW recursion relations \cite{Feng:2014pia}, was established to tackle this problem universally. Its major idea of using auxiliary deformations can be traced back to \cite{Berger:2006ci}, while the latter aims for one-loop amplitudes. This approach considerably widens the category of quantum field theories with solvable tree amplitudes by using BCFW deformations only \cite{Jin:2014qya}. However, some common puzzles encountered in practice still lack a formal study. One core question is: How to reach the correct answer within finite, definite steps, if an amplitude is solvable by the algorithm?
In this paper, we will first explore multi-step BCFW recursion relations by investigating the algebra of BCFW deformation generators and the commutativity of constant extractions. Next, we will seek a universal approach to reach the answer and ensure that it is correct. This safety promise relies on very little knowledge of a particular QFT, besides mass dimension and helicities, hence the algorithm is expected to be able to solve for all massless tree amplitudes, with certain limitations, as addressed below. It is well known that on-shell methods heavily rely on factorization properties of amplitudes, and the latter is a reflection of locality and unitarity. These properties are mathematically implemented on poles of amplitudes and their residues. For amplitudes that admit polynomials, no on-shell methods so far can fix this ambiguity. One can list all possible forms of polynomials as a basis, but determining the coefficients will unfavorably call for more traditional means such as Feynman rules. In this work, we will clarify the applicable range of multi-step BCFW recursion relations and explore all possible forms of polynomials and their generalized cousins called pseudo polynomials and saturated fractions. The latter two objects can be fixed by other types of deformations, and having them fully identified is in fact useful. The paper is organized as follows. In section \ref{sec2}, we review the multi-step BCFW recursion relations and explore the commutativity of constant extractions. In section \ref{sec3}, we propose a systematic process to calculate amplitudes after finite, definite steps, and clarify its applicable range and limitation. In section \ref{sec4}, we apply it to the (massless) Standard Model plus gravity to demonstrate its workability. \section{Multi-step BCFW Recursion Relations} \labell{sec2} In this section, we briefly review the multi-step BCFW recursion relations, in the novel language of extraction operators.
After that, the commutativity of constant extractions will be explored. \subsection{Extraction operators} For a general BCFW deformation $\<a_i|b_i]$ (only two legs are shifted for each $i$), namely \begin{equation} \lambda_{a_i}\to\lambda_{a_i}-z_i\lambda_{b_i},~\tilde{\lambda}_{b_i}\to\tilde{\lambda}_{b_i}+z_i\tilde{\lambda}_{a_i}, \end{equation} let's define two operations on an amplitude-like rational function $R(\lambda_i,\tilde{\lambda}_i)$ via\footnote{$P_i$ and $C_i$ used here are identical to $\mathcal{P}^{\underline{i}}$ and $\mathcal{C}^{\underline{i}}$ in the appendix of \cite{Feng:2014pia}.} \begin{equation} P_i[R]\equiv-\sum_{\textrm{finite}}\oint\frac{dz_i}{z_i} R(\lambda_{a_i}-z_i\lambda_{b_i},\tilde{\lambda}_{b_i}+z_i\tilde{\lambda}_{a_i}),~ C_i[R]\equiv\oint_\infty\frac{dz_i}{z_i}R(\lambda_{a_i}-z_i\lambda_{b_i},\tilde{\lambda}_{b_i}+z_i\tilde{\lambda}_{a_i}), \end{equation} where $P_i$ and $C_i$ are the pole and constant `extraction operators', which capture the residues at finite locations \textit{except} zero, and the residue at infinity, respectively. For a real amplitude $A$, $P_i$ can capture its physical poles only. But a general $R$, such as $P_iA\equiv P_i[A]$ or $C_iA\equiv C_i[A]$, may also contain spurious poles, which is well known. Therefore the detectable poles (those which have dependence on $z_i$, as defined in \cite{Feng:2014pia}) at finite locations can be either physical or spurious. By definition $P_i+C_i=I$, where $I$ is the identity operator. When we calculate an amplitude, starting from the 0th step, the amplitude is unknown, and so is the $C_0$ operation. However, the $P_0$ operation represents exactly the BCFW recursion relation, hence we actually reconstruct this part by employing factorization properties, rather than manipulating the unknown amplitude. Conventionally, $C_0$ is called the boundary term with respect to $P_0$, which will be dissected into many parts to be determined.
The dissection means that, by expanding $I$ repeatedly for $(n+1)$ times, we have \begin{equation} I=P_n+C_nP_{n-1}+\ldots+C_nC_{n-1}\cdots C_2P_1+C_nC_{n-1}\cdots C_2C_1P_0+C_nC_{n-1}\cdots C_2C_1C_0, \end{equation} where $I$ always acts on $A$ implicitly. If the final boundary term $C_nC_{n-1}\cdots C_2C_1C_0$ vanishes, we have \begin{equation} I=P_n+C_nP_{n-1}+\ldots+C_nC_{n-1}\cdots C_2P_1+C_nC_{n-1}\cdots C_2C_1P_0. \labell{eq-1} \end{equation} This identity formally represents the `multi-step BCFW recursion relations'. Importantly, the workability of this multi-step approach relies on the existence of a sequence of deformations numbered by $0,1,\ldots,n$ for which $C_nC_{n-1}\cdots C_2C_1C_0=0$. The latter is the key condition we will mainly focus on. The operators above have a general algebraic property, namely the projectivity: \begin{equation} C_iC_i=C_i. \end{equation} To prove this, we first explicitly expand the deformed $R$ as\footnote{In practice, one can use the `Apart' function in Mathematica to separate the pole and regular terms with respect to $z$.} \begin{equation} R(z_i)=\sum_k\frac{b_{0k}+b_{1k}z_i}{\(a_{0k}+a_{1k}z_i+a_{2k}z_i^2\)^{d_k}} +c_0+\sum_lc_lz_i^l, \labell{eq-10} \end{equation} with $d_k\geq1$. In the expansion, when $a_{2k}$ vanishes, $b_{1k}$ must also vanish, otherwise a linear recombination of the numerator can further lower $d_k$ by one\footnote{The $z^2$ term in the denominator can only originate from a spurious pole $\<i|K|j]$, where $K$ contains at least two external momenta other than $i,j$, when it is deformed by $\<i|j]$. All other physical poles can at most contribute terms linear in $z$ under one BCFW deformation.}.
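To make the extraction operations concrete, the following sympy sketch applies them to an artificial one-variable rational function of the form (\ref{eq-10}) (the particular $R(z)$ is a made-up toy example; sympy's \texttt{div} and \texttt{residue} play the role of the `Apart' manipulation mentioned in the footnote). It extracts $C[R]=c_0$ and $P[R]$, and checks that $P+C$ returns the undeformed value, {\it i.e.,} $P+C=I$ on this $R$.

```python
import sympy as sp

z = sp.symbols('z')
# Toy "deformed amplitude" (an artificial example): one finite pole,
# a constant c0 = 3, and a growing term, as in the general expansion.
R = 1/(z - 2) + 3 + 5*z

# P[R]: minus the sum of residues of R(z)/z at finite poles other than z = 0.
P = -sp.residue(R/z, z, 2)

# C[R]: the constant c0, i.e. the z-independent piece of the polynomial part
# of R (the residue at infinity of R(z)/z, up to the sign convention).
Rc = sp.cancel(R)
polypart, _ = sp.div(sp.numer(Rc), sp.denom(Rc), z)
C = polypart.subs(z, 0)

assert P == sp.Rational(-1, 2) and C == 3
# P + C = I: together they reproduce the undeformed value R(z = 0).
assert sp.simplify(P + C - R.subs(z, 0)) == 0
# Projectivity C C = C is visible here: C[R] is z-independent, so
# applying the same extraction again returns it unchanged.
```
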
Now observe that performing the same deformation twice is equivalent to replacing $z_i$ by $(z_i+z'_i)$, as \begin{equation} R(z_i,z'_i)=R(z_i+z'_i) =\sum_k\frac{b_{0k}+b_{1k}(z_i+z'_i)}{\(a_{0k}+a_{1k}(z_i+z'_i)+a_{2k}(z_i+z'_i)^2\)^{d_k}} +c_0+\sum_lc_l(z_i+z'_i)^l, \end{equation} then \begin{equation} \oint_\infty\frac{dz'_i}{z'_i}\oint_\infty\frac{dz_i}{z_i}R(z_i,z'_i) =\oint_\infty\frac{dz'_i}{z'_i}\(c_0+\sum_lc_l{z'_i}^l\)=c_0, \end{equation} hence $C_iC_iR=C_iR=c_0$. By using $P_i=I-C_i$ it is trivial to find that \begin{equation} P_iP_i=P_i,~C_iP_i=P_iC_i=0. \end{equation} Besides projectivity, a more intricate property is the commutativity: \begin{equation} C_iC_j=C_jC_i, \end{equation} which demands a certain condition, as will be investigated shortly. If it holds, again with $P_i=I-C_i$ one can find that \begin{equation} P_iP_j=P_jP_i,~C_iP_j=P_jC_i. \labell{eq-2} \end{equation} When all $C$'s are chosen to commute with each other in the expansion \eqref{eq-1} for a particular amplitude, each term is `orthogonal' to the others. This orthogonality has a nice meaning: each term contains non-overlapping pole terms; consequently, one can capture all pole terms step by step \textit{without} checking whether the previous parts are disturbed by new operations. While commutativity may considerably simplify the calculation, it is obviously not necessary for \eqref{eq-1} to work. One last digression: in practical calculations, it is convenient to use $P_i+C_i=I$ to switch between $P_i$ and $C_i$, depending on which operation is easier. To check the equivalence between two visually different expressions, in appendix \ref{app1} we introduce a simple trick to solve all constraints and get a set of independent kinematic variables. This trick can uniquely fix the form of an expression no matter by which means it is obtained (it is better to use this trick with a computer algebra program).
\subsection{Deformation generator algebra} Now we begin to explore the commutativity of $C$'s, which can be decomposed into the commutativity at integrand level and at integral level. The former is encoded in two successive deformations, and the latter is encoded in two successive contour integrals, which will use Laurent expansion in $w=1/z$. Before this, we need to first study the BCFW deformation generators and their algebra. Let's define the BCFW deformation generator with respect to $\<i|j]$ as \begin{equation} D_{\<i|j]}\equiv-\lambda_j^\alpha\frac{\partial}{\partial\lambda_i^\alpha} +\tilde{\lambda}_i^{\dot{\alpha}}\frac{\partial}{\partial\tilde{\lambda}_j^{\dot{\alpha}}}, \end{equation} then the familiar BCFW deformation becomes \begin{equation} \exp\(zD_{\<i|j]}\)R(\lambda_i,\tilde{\lambda}_j)=R(\lambda_i-z\lambda_j,\tilde{\lambda}_j+z\tilde{\lambda}_i). \end{equation} Although by default the spinorial partial derivatives treat all spinors as independent, we must also impose the momentum conservation constraint on real amplitudes. Without doubt, this constraint will affect the independence of spinorial partial derivatives, but it will \textit{not} affect the commutator algebra of $D_{\<i|j]}$. Below we will provide a simple argument. Note that any $D_{\<i|j]}$ automatically annihilates the sum of all external momenta, {\it i.e.,} \begin{equation} D_{\<i|j]}\sum p=D_{\<i|j]}(\lambda_i\tilde{\lambda}_i+\lambda_j\tilde{\lambda}_j)=0, \end{equation} so we claim that momentum conservation is a \textit{trivial} constraint. To get some intuition, one can consider a spherical surface, for which any rotation generator, say $L_{xy}$, annihilates the constraint \begin{equation} x^2+y^2+z^2=r^2. 
\end{equation} To parameterize one of the spherical symmetries explicitly, we can define an angle $\theta_{xy}$ via \begin{equation} L_{xy}=x\frac{\partial}{\partial y}-y\frac{\partial}{\partial x} \equiv\frac{\partial}{\partial\theta_{xy}}. \end{equation} While $x,y$ are no longer independent on the sphere, $\theta_{xy}$ can be arbitrary, as this degree of freedom moves a given point around on a subset of the spherical surface. From this viewpoint, the commutator algebra of $L_{xy},L_{yz},L_{zx}$ is obviously unaltered. More profoundly, it is these rotation generators that fully generate the spherical surface. Given a particular point in $\mathbb{R}^3$, rotation generators move it around to sweep over the entire surface at a fixed distance from the origin. This picture can be exactly generalized to the case of BCFW deformation generators. We can define an `angle' in a complex spinorial sense for each deformation, via \begin{equation} D_{\<i|j]}\equiv\frac{\partial}{\partial\theta_{\<i|j]}}, \end{equation} then $\theta_{\<i|j]}$ parameterizes one of the symmetries that preserve momentum conservation, and hence momentum conservation will not alter the commutator algebra of $D_{\<i|j]}$ at all. Instead, the constraint surface is fully generated by the $2\,C^2_n=n(n-1)$ BCFW deformation generators. Given a particular point in $\mathbb{C}^{4n}$, namely the complex spinorial space $(\lambda_i,\tilde{\lambda}_i)$, BCFW deformation generators move it around to sweep over the entire codimension-4 surface with a fixed sum of external momenta. And physically, this sum is zero. Since the commutator algebra is unaltered, we are free to treat all spinors as independent to derive the commutation relations. 
Suppose the 0th step of deformation is $\<i|j]$; then the 1st step can be one of the four types named below: \begin{equation} \begin{aligned} \<k|l]&=\textrm{independent},\\ \<i|l]\textrm{ or }\<k|j]&=\textrm{straight descendent},\\ \<l|i]\textrm{ or }\<j|k]&=\textrm{skew descendent},\\ \<j|i]&=\textrm{cross descendent}. \end{aligned} \end{equation} The generators of the first two types commute with that of $\<i|j]$, {\it i.e.,} \begin{equation} \[D_{\<i|j]},D_{\<k|l]}\]=0,~\[D_{\<i|j]},D_{\<i|l]}\]=\[D_{\<i|j]},D_{\<k|j]}\]=0. \end{equation} For the last two types, \begin{equation} \begin{aligned} \[D_{\<i|j]},D_{\<j|k]}\]=D_{\<i|k]},~\[D_{\<i|j]},D_{\<j|i]}\]=2h_i-2h_j,\\ 2h_i=-\lambda_i^\alpha\frac{\partial}{\partial\lambda_i^\alpha} +\tilde{\lambda}_i^{\dot{\alpha}}\frac{\partial}{\partial\tilde{\lambda}_i^{\dot{\alpha}}},~ \[2h_i,D_{\<i|j]}\]=\[D_{\<i|j]},2h_j\]=D_{\<i|j]}, \end{aligned} \end{equation} where $h_i$ is the helicity operator with respect to the $i$-th particle, for a function covariant under the little group (an amplitude does have this scaling property). For the skew descendent case, using the Baker-Campbell-Hausdorff formula \begin{equation} \exp X\exp Y=\exp(X+Y)\exp\(\frac{1}{2}[X,Y]\),~\textrm{for }[X,[X,Y]]=[Y,[X,Y]]=0, \end{equation} and due to the commutativity of a straight descendent pair, we have \begin{equation} \[\exp\(z_0D_{\<i|j]}\),\exp\(z_1D_{\<j|k]}\)\] =\exp\(z_0D_{\<i|j]}+z_1D_{\<j|k]}\)2\sinh\(\frac{1}{2}z_0z_1D_{\<i|k]}\). \end{equation} Hence skew descendent deformations $\<i|j]$ and $\<j|k]$ commute if $D_{\<i|k]}$ annihilates the amplitude; however, this condition is too stringent and often trivializes the deformations being used. Therefore, in general, skew descendent deformations do not commute. 
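The commutation relations above can be verified with sympy by acting on a generic polynomial in the spinor components. This is evidence on polynomial test functions rather than an operator-level proof, and all names below are our own illustrative choices:

```python
import sympy as sp

n = 3  # three toy particles, labels 0, 1, 2
lam  = [[sp.Symbol(f'l{i}{a}')  for a in range(2)] for i in range(n)]
lamt = [[sp.Symbol(f'lt{i}{a}') for a in range(2)] for i in range(n)]

def D(f, i, j):
    # BCFW deformation generator D_<i|j] acting on a function f
    return sum(-lam[j][a]*sp.diff(f, lam[i][a])
               + lamt[i][a]*sp.diff(f, lamt[j][a]) for a in range(2))

def h2(f, i):
    # 2 h_i: the doubled helicity operator of particle i
    return sum(-lam[i][a]*sp.diff(f, lam[i][a])
               + lamt[i][a]*sp.diff(f, lamt[i][a]) for a in range(2))

# a generic (arbitrarily chosen) polynomial in all spinor components
f = sp.prod([lam[i][a] + 3*lamt[i][a] + i + a + 2
             for i in range(n) for a in range(2)])

# note the operator ordering: [A,B]f = A(B f) - B(A f)
# skew descendent pair: [D_<0|1], D_<1|2]] = D_<0|2]
skew = D(D(f, 1, 2), 0, 1) - D(D(f, 0, 1), 1, 2) - D(f, 0, 2)
assert sp.expand(skew) == 0

# cross descendent pair: [D_<0|1], D_<1|0]] = 2h_0 - 2h_1
cross = D(D(f, 1, 0), 0, 1) - D(D(f, 0, 1), 1, 0) - (h2(f, 0) - h2(f, 1))
assert sp.expand(cross) == 0
```

Independent and straight descendent pairs can be checked the same way; their commutators vanish identically on any test function.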
\subsection{Commutativity at integrand and integral levels} By applying the BCFW deformation generators, we perform one constant extraction on a rational function $R(\lambda_i,\tilde{\lambda}_i)$ as (different from contours around finite locations, the contour around infinity is clockwise) \begin{equation} CR=\oint_\infty\frac{dz}{z}R(z)=\oint_0\frac{dw}{w}R\(\frac{1}{w}\), \end{equation} where the change of variable is $w\equiv1/z$, so the infinity for $z$ is the zero for $w$. However, the residue at this zero is \textit{not} a naive one. In terms of $w$, \eqref{eq-10} reads \begin{equation} R\(\frac{1}{w}\)=\sum_k\frac{(b_{1k}+b_{0k}w)w^{2d_k-1}}{\(a_{2k}+a_{1k}w+a_{0k}w^2\)^{d_k}} +c_0+\sum_l\frac{c_l}{w^l}, \end{equation} and a naive substitution of $w=0$ would cause a divergence in the third term above. On the other hand, it is clear that after the expansion, the third term actually has no simple pole at $w=0$, since there is already one $w$ in the denominator of the integrand. To remove this divergent term, before the contour integration we must Laurent expand $R(1/w)$ around $w=0$, {\it i.e.,} we need to first factor out a divergent factor $1/w^\gamma$ with $\gamma\geq1$, leaving a fraction finite at $w=0$, and then Taylor expand it around $w=0$. A simple example is \begin{equation} \frac{b_0+b_1z+b_2z^2}{a_0+a_1z}=\frac{1}{w}\frac{b_2+b_1w+b_0w^2}{a_1+a_0w} =\frac{b_2}{a_1w}\(1+\frac{b_1}{b_2}w+\frac{b_0}{b_2}w^2\)\(1+\frac{a_0}{a_1}w\)^{-1}, \end{equation} then we can Taylor expand the finite fraction around $w=0$, and the contour integral will only pick up the constant part of this expression. Similarly, performing two successive constant extractions gives \begin{equation} C_1C_0R=\oint_0\frac{dw_1}{w_1}\oint_0\frac{dw_0}{w_0}R\(\frac{1}{w_1},\frac{1}{w_0}\), \end{equation} with \begin{equation} R\(\frac{1}{w_1},\frac{1}{w_0}\)=\exp\(\frac{1}{w_1}D_1\)\exp\(\frac{1}{w_0}D_0\)R. 
\end{equation} For independent and straight descendent cases, $[D_1,D_0]=0$, so the order of deformations is irrelevant, and hence the commutativity of these two types holds at \textit{integrand} level\footnote{It is possible that two constant extractions commute, even if they do not commute at integrand level, but we will not consider this trivial case here.}. However, before performing the integral, double Laurent expanding a fraction is a bit tricky. First, to properly factor out the overall factor in terms of $w_0$ and $w_1$, we need to ensure that in \begin{equation} R\(\frac{1}{w_1},\frac{1}{w_0}\)=\frac{1}{w_0^{\gamma_0}w_1^{\gamma_1}}\frac{P(w_0,w_1)}{\prod_iQ_i(w_0,w_1)}, \end{equation} both $P(w_0,w_1)$ and $Q_i(w_0,w_1)$ are irreducible polynomials, {\it i.e.,} there is no common factor $w_0^{\delta_0}w_1^{\delta_1}$ in $P$ or in any $Q_i$. Then, we find that the expansions of a fraction in opposite orders are different when any of the $Q_i$'s does not contain a constant term. For example, take $g(w_1,w_0)=1/(w_1+w_0)$ and expand it around $w_0=0$ (it is impossible to further expand around $w_1=0$); we have \begin{equation} g(w_1,w_0)=\frac{1}{w_1}\(1-\frac{w_0}{w_1}+\frac{w_0^2}{w_1^2}-\ldots\), \end{equation} and for the reverse order, \begin{equation} g(w_1,w_0)=\frac{1}{w_0}\(1-\frac{w_1}{w_0}+\frac{w_1^2}{w_0^2}-\ldots\), \end{equation} hence they are clearly different. In general, if any of the $Q_i$'s happens to satisfy $Q_i(0,0)=0$, the double expansion depends on the order. Conversely, if all $Q_i$'s obey $Q_i(0,0)\neq0$, the order of double expansion is irrelevant. Since the contour integral picks up only the constant part of the expansion, the commutativity of the denominator expansion is equivalent to the commutativity at \textit{integral} level. 
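The order (in)dependence of the double Laurent expansion can be illustrated directly in sympy; the helper below is our own sketch:

```python
import sympy as sp

w0, w1 = sp.symbols('w0 w1')

def double_expand(expr, first, second, order=3):
    # Laurent expand around first = 0, then around second = 0
    s = sp.series(expr, first, 0, order).removeO()
    return sp.expand(sp.series(s, second, 0, order).removeO())

g = 1/(w0 + w1)       # Q(0,0) = 0: the two expansion orders disagree
h = 1/(1 + w0 + w1)   # Q(0,0) = 1: the two expansion orders agree

assert sp.simplify(double_expand(g, w0, w1) - double_expand(g, w1, w0)) != 0
assert sp.simplify(double_expand(h, w0, w1) - double_expand(h, w1, w0)) == 0
```

For $g$, the first order of expansion produces only poles in $w_1$ and the reverse order only poles in $w_0$, exactly the mismatch described above.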
In practice, since it is clear that a detectable pole of merely one of the two successive deformations always contains a constant term after factoring out a proper factor, we should only focus on the overlap of the two sets of detectable poles. But it is impractical to trace each term at each step to see whether a constant term exists. Combining everything above, one reaches the conclusion: the commutativity of $C$ operators holds for independent and straight descendent deformations, provided that the condition $Q_i(0,0)\neq0$ is satisfied. These two types may be considered as `good', as they may enjoy the orthogonal property \eqref{eq-2}. However, the good types alone are not sufficient to capture all physical pole terms, as will be explained at the end of appendix \ref{app2}. Hence we will not proceed further in this direction, but return to seeking the condition for $C_n\cdots C_1C_0=0$, such that the $(n+1)$ steps can fully capture the amplitude. Nevertheless, this investigation gives a crucial hint for the subsequent analysis: in the expansion, the coefficient of $z$ after a deformation becomes a new pole of the corresponding boundary term, with order one or higher. To be concrete, consider the example below \begin{equation} \frac{1}{a}\to\frac{1}{a+bz}=\frac{w}{(b+aw)}=\frac{w}{b}\(1+\frac{a}{b}w\)^{-1} =\frac{w}{b}\(1-\frac{a}{b}w+\frac{a^2}{b^2}w^2-\ldots\), \labell{eq-4} \end{equation} where $b$ is the only source of poles afterwards. \section{Systematic Algorithm of Finite, Definite Steps} \labell{sec3} In this section, we propose a systematic process to capture the amplitude after finite, definite steps of BCFW constant extractions. The condition for correctly completing the calculation is simply $C_n\cdots C_1C_0=0$. To achieve this, the form of all poles in the final boundary term is obtained after a sequence of constant extractions called `pole concentration'. 
This sequence is designed to cover \textit{all} situations, so there is no extra restriction such as color order; how to optimize it case by case is set aside temporarily. Given the final form of poles, the information of mass dimension and helicities alone is sufficient to judge whether the final boundary term vanishes. \subsection{Pole concentration} Now we use pole concentration to capture all poles, regardless of whether they are physical or spurious, by applying the logic of \eqref{eq-4}. Also, each time we perform a BCFW constant extraction on the amplitude, at least one of its physical poles will be filtered out, and consequently each corresponding boundary term will contain at least one pole mutated from the original physical poles. For example, consider the denominator $\<12\>\<23\>$ (the numerator is neglected for our purpose); under the constant extraction $\<1|3]$, \begin{equation} \frac{1}{\<12\>\<23\>}\to\frac{1}{(\<12\>-z\<32\>)\<23\>}\Rightarrow\frac{1}{\<23\>^2}, \end{equation} where the pole $\<12\>$ has been replaced by $\<32\>$. Crucially, under the next constant extraction, the pole $\<23\>^2$ is either unchanged or replaced by another pole of the same order. This means once two poles are \textit{stacked}, they are stacked forever. The same logic also works for anti-holomorphic poles. For a multi-particle pole, we first need to turn it into a product of holomorphic and anti-holomorphic poles, with a proper choice of deformation. As one example, under the constant extraction $\<1|4]$, \begin{equation} \frac{1}{P_{123}^2}\to\frac{1}{P_{123}^2+z\<4|2+3|1]}\Rightarrow\frac{1}{\<4|2+3|1]}, \end{equation} next, under the constant extraction $\<2|5]$, \begin{equation} \frac{1}{\<4|2+3|1]}\to\frac{1}{\<4|2+3|1]-z\<45\>[21]}\Rightarrow\frac{1}{\<45\>[21]}, \end{equation} then we are again left with two-particle poles. 
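The mutation logic of \eqref{eq-4} underlying the examples above, namely that the coefficient of $z$ becomes the new pole, can be confirmed term by term with sympy:

```python
import sympy as sp

w, a, b = sp.symbols('w a b')

# eq-4: 1/(a + b z) with z = 1/w equals w/(b + a w); Laurent expand around w = 0
series = sp.series(w/(b + a*w), w, 0, 4).removeO()
expected = (w/b)*(1 - (a/b)*w + (a/b)**2*w**2)

# after the deformation, b (the z-coefficient) is the only source of poles
assert sp.expand(series - sp.expand(expected)) == 0
```

Every term of the expansion carries inverse powers of $b$ only, which is exactly why the old pole $a$ disappears from the boundary term while $b$ takes its place.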
In general, one can first turn a multi-particle pole $P^2$ into $\<i_1|P|j_1]$, where $P$ includes either $i_1$ or $j_1$; note that $p_{i_1}$ or $p_{j_1}$ in $P$ is already filtered out by $\<i_1|$ or $|j_1]$. Next, one can split $\<i_1|P|j_1]$ by using $\<k|j_2]$ or $\<i_2|l]$, where $P$ includes $j_2$ or $i_2$ but not $k$ or $l$. In this way the pole is turned into a product of one holomorphic and one anti-holomorphic pole. Then for two-particle poles, once they are stacked, they must mutate as a whole afterwards. After finitely many steps, all poles can be encapsulated in only one holomorphic and one anti-holomorphic pole, with orders larger than one in general. In appendix \ref{app2}, one sequence of BCFW constant extractions is presented to turn all poles of the final boundary term into a common denominator, with the expression given by\footnote{The choice of sequence is not unique, and how to optimize it to shorten the steps is a very valuable future problem.} \begin{equation} \frac{(\textrm{polynomial})}{\<i_1i_2\>^m[i_3i_4]^{\overline{m}}}\times(\textrm{remaining factor}), \labell{eq-5} \end{equation} where $i_1,i_2,i_3,i_4$ are four different arbitrary particle labels. The remaining factor is a rational function, which is dimensionless and helicity-neutral, see \eqref{eq-17} for example. Note that we \textit{have not} reduced the denominator against the numerator. At first glance, the reason to get this final denominator is that we can use one more deformation, say $\<i_1|i_4]$, to get the maximal large $z$ suppression, since all poles after concentration are vulnerable to it. But in fact, there is a less obvious argument for eliminating the final boundary term without introducing one more step, as will be given later. Here, $m$ or $\overline{m}$ gets contributions from physical holomorphic or anti-holomorphic poles, and both get contributions from physical multi-particle poles. 
In general, $m$ and $\overline{m}$ need not be equal, since not all possible poles are physical for a particular amplitude. To see the range of $m,\overline{m}$, we will analyze all possible physical poles for various $n$'s. When $n=4$, only half of all two-particle poles can appear in the amplitude, since they are doubly duplicated by momentum conservation. When $n=5$, there are only two-particle poles, as three-particle poles are equivalent to them by momentum conservation. When $n\geq6$, multi-particle poles arise. Their particle numbers range from 3 to $(n-3)$, to avoid duplications of two-particle poles by momentum conservation. To further avoid duplications among themselves, one can fix the pole momentum by demanding one pivot particle to be always included, and then the number of multi-particle poles is reduced by half. According to the counting above, the maxima of $m,\overline{m}$ are \begin{equation} m_{\max}=\overline{m}_{\max}=C_n^2+\frac{1}{2}(C_n^3+\ldots+C_n^{n-3})=2^{n-1}-(n+1), \end{equation} which nicely covers the special cases of $n=4$ and $n=5$. However, there is a little subtlety in \eqref{eq-5}: for a given amplitude, while $m,\overline{m}$ can be easily read off by analyzing all of its non-vanishing factorization limits, the final boundary term in general contains \textit{not only} the poles $\<i_1i_2\>^m[i_3i_4]^{\overline{m}}$, but also the same poles of higher orders from the dimensionless, helicity-neutral remaining factor. This also occurs at each intermediate step for each corresponding intermediate boundary term. 
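The pole counting above can be cross-checked numerically; the factor of 2 below merely avoids fractions:

```python
from math import comb

# C(n,2) + (1/2) * sum_{k=3}^{n-3} C(n,k)  ==  2**(n-1) - (n+1), for n >= 5
for n in range(5, 30):
    lhs_doubled = 2*comb(n, 2) + sum(comb(n, k) for k in range(3, n - 2))
    assert lhs_doubled == 2*(2**(n - 1) - (n + 1))

# special case n = 4: only half of the C(4,2) = 6 two-particle poles survive
assert comb(4, 2)//2 == 2**3 - (4 + 1)
```

The identity follows from $\sum_k C_n^k=2^n$ together with the binomial symmetry $C_n^k=C_n^{n-k}$.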
A simple example is the MHV amplitude $A(1^-,2^-,3^+,4^+)$, given by \begin{equation} A=\frac{\<12\>^3}{\<23\>\<34\>\<41\>}, \end{equation} and the deformation $\<1|3]$ turns it into (recall that $z=1/w$) \begin{equation} A\to\frac{(\<12\>-z\<32\>)^3}{\<23\>\<34\>(\<41\>-z\<43\>)} =\frac{1}{w^2\<23\>\<34\>}\frac{(-\<32\>+\<12\>w)^3}{-\<43\>+\<41\>w} =\frac{1}{w^2\<23\>\<34\>}\frac{\<32\>^3}{\<43\>} \(1-\frac{\<12\>}{\<32\>}w\)^3\(1-\frac{\<41\>}{\<43\>}w\)^{-1}, \end{equation} note that the pole $\<41\>$ is turned into $\<43\>$, but its order can be larger than one. Explicitly, the corresponding boundary term is \begin{equation} \begin{aligned} C_{\<1|3]}A&=\frac{1}{\<23\>\<34\>\<43\>}\(\frac{\<41\>^2\<32\>^3}{\<43\>^2} -3\frac{\<41\>\<12\>\<32\>^2}{\<43\>}+3\<12\>^2\<32\>\)\\ &=-\frac{\<12\>^2\<32\>}{\<23\>\<34\>^2}\(\frac{\<41\>^2\<32\>^2}{\<43\>^2\<12\>^2} -3\frac{\<41\>\<32\>}{\<43\>\<12\>}+3\), \labell{eq-17} \end{aligned} \end{equation} where the term in parentheses is the remaining factor of \eqref{eq-5}. The advantage of packing many pole terms into a dimensionless, helicity-neutral factor is that, if we can show this representative factor cannot exist, the full expression including terms with higher-order poles must also be forbidden. One digressive comment: so far, we have found BCFW deformations to be the only type that admits a feasible pole concentration. As a counterexample, there is no straightforward pole concentration for Risager deformations \cite{Risager:2005vk}. We will not further explain this claim here, but it is not hard to confirm. This is another specialty of BCFW deformations, in addition to the fact that they automatically preserve (or generate) the momentum conservation constraint. \subsection{Kinematic mass dimension} To prepare for the later analysis, we will study the general information of an amplitude: its mass dimension and helicities, with which the applicable range of multi-step BCFW recursion relations can be clarified. 
First, for QFTs in 4 dimensions, the mass dimension of an $n$-particle amplitude is $(4-n)$. We can use the LSZ reduction formula to prove this. Schematically, an $n$-particle amplitude $A$ is defined via \begin{equation} \prod^n\(\int d^4x\,e^{ipx}\varepsilon\Delta\)\<\Phi_1\ldots\Phi_n\>=\delta^4\(\sum p\)A, \end{equation} where $\<\Phi_1\ldots\Phi_n\>$ is the $n$-point function, and $\varepsilon$ and $\Delta$ are the wave-function and kinematic operator for each field $\Phi$. For a bosonic field, the mass dimensions of $\varepsilon$, $\Delta$ and $\Phi$ are 0, 2 and 1 respectively; for a fermionic field, they are $1/2$, 1 and $3/2$ respectively. Hence the mass dimension of \begin{equation} \int d^4x\,e^{ipx}\varepsilon\Delta\Phi \end{equation} is $-1$. There are $n$ such pieces; together with the momentum conserving delta function, the mass dimension of $A$ is clearly $(4-n)$. One special bosonic field is the graviton. By the perturbative definition $g_{\mu\nu}=\eta_{\mu\nu}+h_{\mu\nu}$, it should be dimensionless. To treat it as an ordinary bosonic field, we need to redefine it via \begin{equation} g_{\mu\nu}=\eta_{\mu\nu}+\kappa\,h_{\mu\nu}, \end{equation} such that $h_{\mu\nu}$ carries mass dimension 1 and $\kappa$ carries $-1$. Choosing $\kappa$ to be $\sqrt{8\pi G}$, the free field part of the Einstein-Hilbert Lagrangian takes a form analogous to those of ordinary bosonic fields. Consequently, $\kappa$ becomes the coupling constant of gravity. One can also rediscover $\kappa$ via the on-shell method. For gravity, three-particle amplitudes including at least one graviton are \begin{equation} \begin{aligned} &A(1^{-h},2^{+h},3^{-2})=\kappa\,\<12\>^{-2}\<23\>^{-2h+2}\<31\>^{+2h+2},\\ &A(1^{-h},2^{+h},3^{+2})=\kappa\,[12]^{-2}[23]^{+2h+2}[31]^{-2h+2}, \end{aligned} \end{equation} where $h=0,1/2,1,2$, for all `realistic' theories. 
No matter which value $h$ takes, $\kappa$ always carries mass dimension $-1$, since the mass dimension of three-particle amplitudes is 1. In general, we can reverse the logic above and define the `kinematic mass dimension' of an $n$-particle amplitude as \begin{equation} D=4-n-\sum_i(D_c)_i, \labell{eq-7} \end{equation} where $(D_c)_i$ is the mass dimension of the coupling constant for each vertex. From now on, we will focus on the kinematic part of an amplitude, which is a function of spinorial products. For the Standard Model\footnote{Strictly speaking, we mean all Standard-Model-type interactions of massless particles, and only dimensionless couplings are involved.}, all coupling constants are dimensionless, so $D=4-n$, which is non-positive for $n\geq4$. For gravitational interactions, $D_c=-1$, and we will show that $D=2$. When $D<0$, there is at least one irreducible denominator of the amplitude, which means a pole to be detected by BCFW deformations. Conversely, when $D\geq0$, the amplitude may admit some terms invulnerable to BCFW deformations, which include polynomials and `pseudo polynomials'. The classification of these objects can be found in appendix \ref{app3}. \subsection{The master formula} By using mass dimension and helicities, we now derive the master formula for the subsequent discussions. After pole concentration, the final boundary term schematically reads\footnote{In general, the final boundary term's numerator is a polynomial, but we only focus on one term, as the identical analysis applies to all terms. The remaining factor of \eqref{eq-5} is dropped, since it does not contribute to the master formula.} \begin{equation} \frac{1}{\<12\>^m[34]^{\overline{m}}}\prod_{i=1}^n\<i|^{\alpha_i}\prod_{i=1}^n[i|^{\beta_i}, \labell{eq-13} \end{equation} where we have temporarily taken $i_{1,2,3,4}=1,2,3,4$. 
The reason to use un-contracted spinors is that it is more compact for capturing the helicity information, and it saves Schouten identity manipulations, as one can freely recombine the spinors to get the desired spinorial products. Of course, the cost is that one needs to rule out all illegitimate combinations. This treatment is similar to the methods used in \cite{Cohen:2010mi,McGady:2013sga}. Now the helicity configuration enforces \begin{equation} \begin{aligned} -2h_1+m&=\alpha_1-\beta_1,\\ -2h_2+m&=\alpha_2-\beta_2,\\ 2h_3+\overline{m}&=\beta_3-\alpha_3,\\ 2h_4+\overline{m}&=\beta_4-\alpha_4,\\ 2h_i&=\beta_i-\alpha_i,~(i=5,\ldots,n)\\ \end{aligned} \end{equation} where $m,\overline{m}$ are known for a particular amplitude. Note that there are $2n$ variables, with only $n$ helicity constraints. We will fully exploit the $n$ remaining degrees of freedom to derive the master formula. The kinematic mass dimension of \eqref{eq-13} is \begin{equation} D'=-(m+\overline{m})+\frac{1}{2}\(\sum_{i=1}^n\alpha_i+\sum_{i=1}^n\beta_i\), \end{equation} where obviously, $\sum\alpha$ and $\sum\beta$ must both be even to form spinorial products. Also, we have $m+\overline{m}\geq1$ with $m,\overline{m}\geq0$, and $\alpha,\beta\geq0$. For a legitimate final boundary term, $D'$ equals $D$ defined in \eqref{eq-7}. When $D\neq D'$ under all circumstances, the correct dimension and helicities cannot be satisfied simultaneously, and the final boundary term is eliminated. One direct way to achieve this inconsistency is to show that $D'_{\min}$ is larger than $D$. First we need to figure out this minimum by eliminating one variable for each particle, as there are two variables $\alpha_i$ and $\beta_i$ to choose from. 
For $i=5,\ldots,n$, \begin{equation} \frac{1}{2}(\alpha_i+\beta_i)=-h_i+\beta_i=h_i+\alpha_i, \end{equation} when $h_i$ is negative, $(-h_i+\beta_i)$ is guaranteed to be positive; similarly, when $h_i$ is non-negative, $(h_i+\alpha_i)$ is guaranteed to be non-negative. To manifest the non-negativity of $D'$, our choice is \begin{equation} \frac{1}{2}(\alpha_i+\beta_i)=|h_i|+\min(\alpha_i,\beta_i). \end{equation} Extending this logic to all particles yields \begin{equation} D'=-(m+\overline{m})+\sum_{i=1,2}\(\left| h_i-\frac{m}{2}\right|+\min(\alpha_i,\beta_i)\) +\sum_{i=3,4}\(\left| h_i+\frac{\overline{m}}{2}\right|+\min(\alpha_i,\beta_i)\) +\sum_{i=5}^n(|h_i|+\min(\alpha_i,\beta_i)), \labell{eq-6} \end{equation} which is the \textit{master formula}, and explicitly, \begin{equation} \sum_{i=5}^n(|h_i|+\min(\alpha_i,\beta_i))=\sum_{h<0}(-h_i+\beta_i)+\sum_{h\geq0}(h_i+\alpha_i), \end{equation} which separates the sum into two parts according to the helicities. The final boundary term \eqref{eq-13} now reads ($p_i=|i\>[i|$ is a helicity-neutral momentum with additional mass dimension 1) \begin{equation} \frac{1}{\<12\>^m[34]^{\overline{m}}}\prod_{i=1,2}\([i|^{2h_i-m}p_i^{\alpha_i}\Big/\<i|^{-2h_i+m}p_i^{\beta_i}\) \prod_{i=3,4}\([i|^{2h_i+\overline{m}}p_i^{\alpha_i}\Big/\<i|^{-2h_i-\overline{m}}p_i^{\beta_i}\) \prod_{h<0}\<i|^{-2h_i}p_i^{\beta_i}\prod_{h\geq0}[i|^{2h_i}p_i^{\alpha_i}, \end{equation} where / means that one of the two candidate expressions is chosen to manifest the non-negativity of $D'$, as this choice also manifests the `extra neutral momenta', in addition to the `net spinors' that carry the helicity information. While the latter content is mandatory, the former is optional, since it is brought in to fill the extra capacity of mass dimension. There is no unique way of picking these extra $\alpha$'s and $\beta$'s, as long as the total dimension is correct. For $i=5,\ldots,n$, $|h_i|$ is trivially non-negative. 
For $h_1,h_2,h_3,h_4$, careful analysis is needed as $m,\overline{m}$ are involved. Rewrite \eqref{eq-6} as \begin{equation} D'=-(m+\overline{m})+T_{1234}+\sum_{i=1}^n|h_i|+\sum_{i=1}^n\min(\alpha_i,\beta_i), \labell{eq-8} \end{equation} where \begin{equation} T_{1234}\equiv \sum_{i=1,2}\(\left| h_i-\frac{m}{2}\right|-|h_i|\)+\sum_{i=3,4}\(\left| h_i+\frac{\overline{m}}{2}\right|-|h_i|\), \end{equation} since $m,\overline{m}$ and $\sum|h|$ are fixed, we only need to manipulate $T_{1234}$. It's easy to check that \begin{equation} \begin{aligned} \left| h_i-\frac{m}{2}\right|-|h_i|&=\left\{\begin{array}{ll} m/2, & h_i<0\\ m/2-2h_i, & 0\leq h_i<m/2\\ -m/2, & m/2\leq h_i \end{array}\right.\\ \left| h_i+\frac{\overline{m}}{2}\right|-|h_i|&=\left\{\begin{array}{ll} -\overline{m}/2, & h_i<-\overline{m}/2\\ \overline{m}/2+2h_i, & -\overline{m}/2\leq h_i<0\\ \overline{m}/2, & 0\leq h_i \end{array}\right. \end{aligned} \end{equation} to maximize $T_{1234}$, one must take $h_1,h_2$ to be two of the smallest helicities, and $h_3,h_4$ to be two of the largest helicities in the process\footnote{Now we should strictly use $i_1,i_2,i_3,i_4$ instead of $1,2,3,4$ to admit a possible relabeling, when we implement the desired arrangement of pole concentration.}. On the other hand, even if one chooses $h_1,h_2,h_3,h_4$ arbitrarily among all $h_i$'s, $D'_{\min}$ is \textit{no less than zero} after a similar relabeling. To see this, let's rewrite \eqref{eq-6} as \begin{equation} D'=-(m+\overline{m})+T'_{1234}+\sum_{i=5}^n|h_i|+\sum_{i=1}^n\min(\alpha_i,\beta_i), \end{equation} and focus on the quantity \begin{equation} T'_{1234}\equiv\sum_{i=1,2}\left| h_i-\frac{m}{2}\right|+\sum_{i=3,4}\left| h_i+\frac{\overline{m}}{2}\right|, \end{equation} by this definition $T'_{1234}$ has a simple geometric meaning: It is the sum of the distances of four line segments stretching from $h_1,h_2$ to line $h=m/2$, and from $h_3,h_4$ to $h=-\overline{m}/2$, as shown in Figure \ref{fig-0}. 
It is easy to find that its minimum is $(m+\overline{m})$, attained when the four points are on one horizontal line and the line is within the region between $h=m/2$ and $h=-\overline{m}/2$. When this horizontal line moves outside the region, $T'_{1234}$ increases by $4\times(\textrm{distance above or below the region})$. When this line is not horizontal, one can see that $T'_{1234}$ is always larger than $(m+\overline{m})$ after a relabeling such that $h_1,h_2\leq h_3,h_4$. Since $T'_{1234}$ is no less than $(m+\overline{m})$, $D'_{\min}$ is always non-negative. \begin{figure} \begin{center} \includegraphics[width=0.6\textwidth]{0} \caption{Sum of the distances of four line segments. It is obviously no less than $(m+\overline{m})$.} \labell{fig-0} \end{center} \end{figure} Also note that \begin{equation} \begin{aligned} -\frac{m}{2}+\left| h_i-\frac{m}{2}\right|-|h_i|&=0\Big/(-2h_i)\Big/(-m),\\ -\frac{\overline{m}}{2}+\left| h_i+\frac{\overline{m}}{2}\right|-|h_i|&=(-\overline{m})\Big/(2h_i)\Big/0, \end{aligned} \end{equation} which always take integer values. According to \eqref{eq-8}, if $\sum|h|$ is fractional, $D'$ must be fractional. \subsection{Inconsistency elimination} We are now prepared to show that all massless tree amplitudes, except those admitting (pseudo) polynomials of the given mass dimension and helicities, shall be fully determined by multi-step BCFW recursion relations. In the following analysis, we pretend to know nothing of QFT in the Lagrangian paradigm besides mass dimension and helicities. These are the only data needed to construct three-particle amplitudes, and recursions extend them to all higher-point ones. First note that, after pole concentration, there is no need to find any further deformation to kill the final boundary term, because another choice would not change its pole form, so one can always rearrange the entire sequence of pole concentration to reach the desired relabeling. 
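The geometric claim that $T'_{1234}\geq m+\overline{m}$, and hence $D'_{\min}\geq0$, can be spot-checked numerically over random half-integer helicity configurations. The sketch below is our own; following the relabeling described above, $h_1,h_2$ are taken as the two smallest and $h_3,h_4$ as the two largest helicities, and all $\min(\alpha_i,\beta_i)$ are set to zero to probe the minimum:

```python
import random
from fractions import Fraction

random.seed(0)
for _ in range(2000):
    n = random.randint(4, 8)
    # random helicities in multiples of 1/2, sorted ascending
    hs = sorted(Fraction(random.randint(-4, 4), 2) for _ in range(n))
    m, mbar = random.randint(0, 6), random.randint(0, 6)
    if m + mbar == 0:
        continue  # at least one pole is required
    h1, h2 = hs[0], hs[1]    # two smallest helicities
    h3, h4 = hs[-2], hs[-1]  # two largest helicities
    rest = hs[2:-2]
    Dmin = (-(m + mbar)
            + abs(h1 - Fraction(m, 2)) + abs(h2 - Fraction(m, 2))
            + abs(h3 + Fraction(mbar, 2)) + abs(h4 + Fraction(mbar, 2))
            + sum(abs(h) for h in rest))
    assert Dmin >= 0  # D'_min is always non-negative
```

Analytically, pairing $h_1$ with $h_3$ and $h_2$ with $h_4$ and using $|a|+|b|\geq a+b$ with $h_3\geq h_1$, $h_4\geq h_2$ reproduces the bound $T'_{1234}\geq m+\overline{m}$.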
Hence this elimination must be the last step if arranged properly, and the direct tool to use is `inconsistency elimination'. Within its own framework, if inconsistency elimination can exclude the final boundary term, the new algorithm should be a completely independent approach for calculating a particular amplitude. Otherwise, the terms that survive it need to be identified, similar to (pseudo) polynomials. In fact, we do discover a new type of object called the `saturated fraction' in this way. From the master formula \eqref{eq-6}, it is already known that $D'_{\min}\geq0$. Therefore if $D<0$, $D<D'_{\min}$ always holds. This tells us the nontrivial cases are those with $D\geq0$. When $D\geq D'_{\min}$, we have the inconsistency criteria below to further eliminate the final boundary term:\\ (1) Fractional Dimension (FD): If $D'=\textrm{fractional}$. This arises when $\sum|h|=\textrm{fractional}$, but Lorentz invariance demands the dimension of \eqref{eq-13} to be an integer. This also implies that fermions must appear in pairs to be consistent.\\ (2) Pair Mismatch (PM): If $\sum\alpha=\textrm{odd}$ and $\sum\beta=\textrm{odd}$, when FD is excluded already. In this case, \eqref{eq-13} cannot be written as a fraction in terms of Lorentz invariant spinorial products, even though the dimension of \eqref{eq-13} is an integer.\\ (3) Spinor Excess (SE): If there exists an $i$ such that $\alpha_i>\sum_{j\neq i}\alpha_j$ or $\beta_i>\sum_{j\neq i}\beta_j$. In this case, spinorial contraction will force \eqref{eq-13} to vanish, even though $\sum\alpha=\textrm{even}$ and $\sum\beta=\textrm{even}$.\\ Altogether, there are four layers of inconsistency criteria: (0) $D<D'_{\min}$. (1) If $D\geq D'_{\min}$, consider FD. (2) If $D\geq D'_{\min}$, and FD is excluded, consider PM. (3) If $D\geq D'_{\min}$, and both FD and PM are excluded, consider SE.\\ It is obvious that these inconsistency criteria only mention general properties of field theories. 
Hence inconsistency elimination is theory-independent, while in practice knowing some theory-dependent properties would help simplify discussions case by case. If the final boundary term can survive all four criteria, it must admit `saturated fractions' (SF). Note that we have already set aside (pseudo) polynomials, because they can be identified \textit{without} using the master formula \eqref{eq-6}. Altogether, there are three types of objects invulnerable to BCFW deformations: (1) Polynomials, such as $\<12\>$. (2) Pseudo polynomials of $n=4$, such as $[34]/\<12\>$. When $\<12\>\to0$, we must also have $[34]\to0$, since we are using a BCFW deformation. Then the ratio $[34]/\<12\>$ is finite, like a polynomial. (3) Saturated fractions of $n\geq5$, such as $[34][56]/\<12\>$ of $n=6$. When $\<12\>\to0$, the fraction becomes divergent, so it is different from pseudo polynomials. But somewhat similarly, any BCFW deformation of $n=6$ fails to make such a fraction vanish in the large $z$ limit. Among these three objects, a polynomial is completely inert to BCFW constant extractions; in fact it is invulnerable to any type of deformation in on-shell methods. A pseudo polynomial is also completely inert to BCFW constant extractions, though this requires momentum conservation. A saturated fraction is form-inert to BCFW constant extractions\footnote{Here we mean a pure saturated fraction. A mixed saturated fraction is a pure saturated fraction times a polynomial. Its transformation under a constant extraction will be demonstrated in appendix \ref{app3}.}, but with particle labels rearranged. The latter two objects are vulnerable to other types of deformations, such as Risager deformations \cite{Risager:2005vk}. The detailed exploration of all three types is presented in appendix \ref{app3}. So far, we have witnessed how the systematic process of multi-step BCFW recursion relations can be arranged to determine a particular amplitude. 
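As an illustration, the four layers of inconsistency criteria can be organized into a small decision routine; the function name, interface, and return labels below are our own, not notation from the text:

```python
from fractions import Fraction

def eliminate(D, Dprime_min, Dprime, alphas, betas):
    """Apply the four inconsistency layers to a candidate final boundary
    term; return the first criterion that fires, or None if it survives."""
    if D < Dprime_min:                           # layer 0: dimension too small
        return 'dim'
    if Fraction(Dprime).denominator != 1:        # layer 1: Fractional Dimension
        return 'FD'
    if sum(alphas) % 2 and sum(betas) % 2:       # layer 2: Pair Mismatch
        return 'PM'
    if any(2*a > sum(alphas) for a in alphas) \
       or any(2*b > sum(betas) for b in betas):  # layer 3: Spinor Excess
        return 'SE'
    return None  # survives: may admit a saturated fraction

assert eliminate(-1, 0, 0, [0], [0]) == 'dim'
assert eliminate(2, 0, Fraction(5, 2), [1, 1], [1, 1]) == 'FD'
assert eliminate(2, 0, 2, [1, 1, 1], [1, 1, 1]) == 'PM'   # both sums odd
assert eliminate(2, 0, 2, [4, 1, 1], [2, 2, 2]) == 'SE'   # alpha_1 > alpha_2 + alpha_3
assert eliminate(2, 0, 2, [2, 2, 2], [2, 2, 2]) is None
```

The layered ordering matters: PM is only meaningful once FD is excluded, and SE once both FD and PM are excluded, exactly as in the list above.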
In summary, there are four steps: (1) Analyze all non-vanishing factorization limits to determine the amplitude's common denominator, which is a product of all physical poles. This stage can be done almost purely diagrammatically. (2) Figure out the amplitude's kinematic mass dimension, then combine this with its helicity configuration to identify all possible (pseudo) polynomials. If none of them arises, we assume the amplitude can be fully determined and proceed to the next step. (3) Choose four particle labels (two of the smallest helicities and two of the largest ones) to determine the denominator's form of the final boundary term, and arrange a sequence of BCFW constant extractions to carry out pole concentration. This sequence must be able to capture all physical poles, such that each of them contributes to the final denominator via powers $m,\overline{m}$. Subject to this, the sequence should be as concise as possible. Such an optimization is a very valuable future problem. (4) Use all four inconsistency criteria layer by layer to eliminate the final boundary term. If elimination fails, identify all possible saturated fractions. Then discuss whether these saturated fractions are legitimate; if not, clarify the argument to rule them out, as described at the end of appendix \ref{app3}. This delicate treatment to remove all dependence on spurious poles is another valuable future problem. \section{Applications in Standard Model plus Gravity} \labell{sec4} Knowing the general guide of multi-step BCFW recursion relations, naturally we would like to see how it applies to specific theories. As familiar examples, we consider realistic theories, {\it i.e.,} the Standard Model plus gravity\footnote{Any massless theory can be analyzed analogously. However, as the amplitude's kinematic mass dimension goes up, more types of (pseudo) polynomials and saturated fractions may arise, and one needs to identify them carefully.}.
For simplicity, we will assume that all particles are massless. For the reader's convenience, we rewrite the master formula below \begin{equation} D'=-(m+\overline{m})+\sum_{i=1,2}\left| h_i-\frac{m}{2}\right|+\sum_{i=3,4}\left| h_i+\frac{\overline{m}}{2}\right| +\sum_{i=5}^n|h_i|+\sum_{i=1}^n\min(\alpha_i,\beta_i), \labell{eq-11} \end{equation} and recall that one should take $h_1,h_2$ to be two of the smallest helicities, and $h_3,h_4$ to be two of the largest helicities in the process. \subsection{Two separated sectors} Let's first consider the Standard Model and gravity separately. For the Standard Model, the previous section gives $D=4-n$. On the other hand, since $D'_{\min}$ is no less than zero, when $D\leq-1$ the final boundary term must be eliminated. This directly tells us that all Standard Model amplitudes of $n\geq5$ are solvable, leaving the amplitudes of $n=4$. The $n=4$ case admits (pseudo) polynomials 1, $(\<34\>/[12])^{\pm1}$ and $(\<34\>/[12])^{\pm2}$, but the last one will be excluded: as proved in the next subsection, any amplitude's helicity configuration in gauge theory must be between MHV and anti-MHV. For pure gravity, assuming that one of the Feynman diagrams of an $n$-particle amplitude contains $v_m$ $m$-point vertices and $p$ internal propagators, it is clear that \begin{equation} \sum_mm\,v_m-2p=n,~\sum_mv_m=p+1,~\Longrightarrow~\sum_m(m-2)v_m=n-2, \end{equation} and each $m$-point vertex brings in $(m-2)$ $\kappa$'s, hence from \eqref{eq-7} we have \begin{equation} D=4-n-(-1)\sum_m(m-2)v_m=4-n+(n-2)=2. \end{equation} Now we compare $D$ with $D'$. When $n\geq6$, $D'_{\min}=4>2$, so these amplitudes are completely solvable. When $n=5$, from \eqref{eq-11}, and in the most conservative situation $-\overline{m}/2\leq-2<2\leq m/2$, the all-plus helicity configuration (similar for all-minus) admits the saturated fraction \begin{equation} \frac{[34]^2[35]^2[45]^2}{\<12\>^4}.
\labell{eq-16} \end{equation} When $n=4$, the all-plus helicity configuration admits pseudo polynomials such as \begin{equation} \frac{[34]^4}{\<12\>^4}P^2_{xy},~\frac{[34]^5}{\<12\>^5}\<13\>\<24\>, \labell{eq-18} \end{equation} where $x,y$ are unspecified, while the all-but-one-minus case gives $D'_{\min}=4>2$ already. However, similar to gauge theory, any amplitude's helicity configuration in pure gravity must also be between MHV and anti-MHV, and for the MHV configuration \eqref{eq-11} gives $D'_{\min}=8>2$. Therefore pure gravity is completely solvable. In general, for the Standard Model plus gravity we have $D\leq2$. Note that an amplitude which contains gravitational vertices only always obeys $D=2$, regardless of how many or what kinds of external legs it has, as later shown by \eqref{eq-12}. This special feature implies that one can arbitrarily attach more particles to a known amplitude without changing its mass dimension, via gravitational interactions. \subsection{MHV configuration of gauge theory and gravity} For the usual gauge (Yang-Mills) theory and gravity, based on the knowledge of multi-step BCFW recursion relations, one can prove that the helicity configuration of any non-vanishing amplitude must be between MHV and anti-MHV, {\it i.e.,} there is no all-plus or all-but-one-minus configuration. From Lorentz invariance and little group scaling, the three-particle amplitudes for gauge theory and gravity are known to be $(s=1,2)$ \begin{equation} A(1^{-s}\,2^{-s}\,3^{+s})=g_{s,--+}\(\frac{\<12\>^3}{\<23\>\<31\>}\)^s,~ A(1^{+s}\,2^{+s}\,3^{-s})=g_{s,++-}\(\frac{[12]^3}{[23][31]}\)^s, \end{equation} while the $F^3$-type $(s=1)$ or the $R^3$-type $(s=2)$ three-particle amplitudes are \begin{equation} A(1^{-s}\,2^{-s}\,3^{-s})=g_{s,---}\(\<12\>\<23\>\<31\>\)^s,~ A(1^{+s}\,2^{+s}\,3^{+s})=g_{s,+++}\([12][23][31]\)^s.
\end{equation} For gauge theory the coupling constant is dimensionless, and for gravity the coupling constant carries mass dimension $-1$, while the mass dimensions of $g_{s,---}$ and $g_{s,+++}$ are $-2$ for $s=1$ and $-5$ for $s=2$, so the all-plus and all-minus three-particle amplitudes are excluded. To exclude the all-plus and all-but-one-minus amplitudes, we need to show that these configurations have vanishing contributions from either (BCFW detectable) factorization limits, or (BCFW undetectable) invulnerable objects, which include (pseudo) polynomials and saturated fractions. Recall the key identity of multi-step BCFW recursion relations \begin{equation} I=P_n+C_nP_{n-1}+\ldots+C_nC_{n-1}\cdots C_2P_1+C_nC_{n-1}\cdots C_2C_1P_0+C_nC_{n-1}\cdots C_2C_1C_0, \end{equation} where the terms with $P_i$'s and the term $C_n\cdots C_1C_0$ represent these two parts of contributions respectively, so when they both vanish, the corresponding amplitude must vanish. For factorization limits, an inductive observation is: an all-plus amplitude can only factorize into one lower-point all-plus amplitude and one all-but-one-minus amplitude, while an all-but-one-minus amplitude can only factorize into two lower-point all-but-one-minus amplitudes, or one lower-point all-plus amplitude and one MHV amplitude. For the all-plus case, we will finally recurse down to $A(+++)$, so its factorization limit is zero. This also excludes the second possibility of factorization for the all-but-one-minus amplitude; for the latter we will then finally recurse down to $A(+++-)$, which can superficially factorize further into two $A(++-)$'s. However, when we use a BCFW deformation to calculate this part, its contribution is zero. This is due to the fact that both of its sub-amplitudes are anti-holomorphic functions, while a non-vanishing BCFW construction requires one to be holomorphic and the other anti-holomorphic.
Therefore (BCFW detectable) factorization limits of the all-plus and all-but-one-minus configurations are zero, and we then consider (pseudo) polynomials and saturated fractions. Note that the previous discussion has not separated gauge theory and gravity yet, hence it holds for both. For gauge theory, the only invulnerable object is the pseudo polynomial of $n=4$ (as the reader may check in appendix \ref{app3}), given by \begin{equation} \frac{[34]^2}{\<12\>^2}, \end{equation} which is related to $A(++++)$. However, when one uses the holomorphic factorization limit (the only type of effective deformation to detect the pole $\<12\>^2$; the Risager deformation is a familiar example) to send $\lambda_1\to\lambda_2$, this pole is spurious since its order is two, which excludes the pseudo polynomial above. For gravity, the only invulnerable objects are the saturated fraction \eqref{eq-16} related to $A(+++++)$, and the pseudo polynomial \eqref{eq-18} related to $A(++++)$. Since both of them contain spurious poles, the argument above can also exclude them. This completes the proof. We would like to emphasize that here we have used general information only, namely mass dimension, helicities and factorization limits, without any further aid such as supersymmetry. \subsection{Simplified diagrammatic rules} To simplify the general discussion of the Standard Model plus gravity, we introduce the diagrammatic rules called `stretch and shrink'. The first example is the gauge interaction, as shown in Figure \ref{fig-1}. \begin{figure} \begin{center} \includegraphics[width=0.4\textwidth]{1} \caption{4-point gauge vertex is `equivalent' to two connected 3-point vertices. Wavy lines represent gauge bosons.} \labell{fig-1} \end{center} \end{figure} Fixing four external gauge bosons, this 4-point vertex can be `stretched' into two connected 3-point vertices, without changing the vertex's mass dimension.
This tricky equivalence holds at the level of mass dimension and helicities, which are the only information required for inconsistency elimination. In other words, we have chosen a representative sub-diagram to encode the same information of mass dimension and helicities, and to reduce the number of types of equivalent sub-diagrams in the analysis. Following this logic, all higher-point vertices in the Standard Model plus gravity can be stretched into a number of connected 3-point vertices, except the special $\phi^4$ vertex. This simplified rule is notably advantageous for gravitational interactions. \begin{figure} \begin{center} \includegraphics[width=0.5\textwidth]{2a} \caption{Stretch or shift rule of a gravitational vertex. Bold lines represent Standard Model particles, and zigzag lines represent gravitons.} \labell{fig-2a} \end{center} \end{figure} As shown in Figure \ref{fig-2a}, gravitons can be shifted from any place to any place in a sub-diagram, without changing its mass dimension. Physically, this is because gravity is universal: gravitons can be emitted from any part of a system. Mathematically, this is because an $m$-point gravitational vertex carries the coupling constant $\kappa^{m-2}$, where $\kappa$ carries mass dimension $-1$. This vertex can contain gravitons only or have Standard Model lines attached to it. Therefore, the $m$-point vertex can be stretched into $(m-2)$ connected 3-point vertices, with the exception of the $\phi^4$ vertex. For convenience we define the `gravitational component', as shown in Figure \ref{fig-2b}. All vertices within this component are gravitational, while its external legs can be either Standard Model particles or gravitons, or both. There is one special graviton which will attach to another component. A trivial case is when there is no vertex at all, so this special graviton becomes the entire component. \begin{figure} \begin{center} \includegraphics[width=0.3\textwidth]{2b} \caption{Gravitational component and its trivial case.
The unspecified lines between two external lines can be of either type of these two specified lines `on the boundary'. In this case, they can be either Standard Model particles or gravitons, or a mixture of both.} \labell{fig-2b} \end{center} \end{figure} Gravitational components also obey the simplified rules, and for convenience they are usually shrunk into one component, as shown in Figure \ref{fig-2c}. This pack-up can reduce many sub-diagrams of gravitational components to one sub-diagram. One is free to attach (or detach) a gravitational component to (or from) another component, since this operation will not change its mass dimension. \begin{figure} \begin{center} \includegraphics[width=1.0\textwidth]{2c} \caption{Gravitational components can be packed into one component.} \labell{fig-2c} \end{center} \end{figure} Summarizing the simplified diagrammatic rules, we are now left with the representative vertices only, as shown in Figure \ref{fig-3}. \begin{figure} \begin{center} \includegraphics[width=0.8\textwidth]{3} \caption{All representative vertices in Standard Model plus gravity. Solid lines represent fermions, and dashed lines represent scalars.} \labell{fig-3} \end{center} \end{figure} \subsection{Amplitudes and (pseudo) polynomials of $D=0,1,2$} In the Standard Model plus gravity, the nontrivial cases are $D=0,1,2$. By using the representative vertices in Figure \ref{fig-3}, the following discussion is considerably shortened. $\mathbf{D=0}$ \textbf{case}: First we consider $D=0$; all possible amplitudes are listed diagrammatically in Figure \ref{fig-4a}. Similar to gravitational components, the Standard Model components presented here only contain Standard Model vertices. Also note that the Standard Model components attached by a single graviton and by a nontrivial gravitational component are listed \textit{separately} for clarity. For $D'_{\min}=0$, (pseudo) polynomials arise when all helicities are the same.
Now the last three diagrams in Figure \ref{fig-4a} are excluded, since a three-point Standard Model vertex can never have three identical helicities. The second diagram is also excluded since a graviton has helicity $\pm2$. \begin{figure} \begin{center} \includegraphics[width=0.75\textwidth]{4a} \caption{Amplitudes of $D=0$.} \labell{fig-4a} \end{center} \end{figure} Therefore, (pseudo) polynomials come from the first and third diagrams, as listed in Figure \ref{fig-4b}. The first three diagrams in Figure \ref{fig-4b} admit the polynomial 1, while the fourth diagram admits the pseudo polynomial $([34]/\<12\>)^{\pm1}$, as the Yukawa interaction permits the two fermions of its vertex to have the same helicities. The fourth diagram cannot have a gravitational component attached to it while maintaining the four identical helicities, since fermions and gauge bosons must appear \textit{in pairs of opposite helicities} when coupling to gravitons. Finally, since four gauge bosons must take the MHV configuration, $([34]/\<12\>)^{\pm2}$ is excluded. \begin{figure} \begin{center} \includegraphics[width=1\textwidth]{4b} \caption{(Pseudo) polynomials of $D=0$.} \labell{fig-4b} \end{center} \end{figure} All amplitudes of $D=0$ diagrams other than those in Figure \ref{fig-4b} are solvable. We find it convenient to attach a gravitational component to a known diagram to simplify the discussion. This one-line attachment is equivalent to maximally separating gravitational and Standard Model vertices into two sub-diagrams, when building the simplest representative diagram. $\mathbf{D=1}$ \textbf{case}: Continuing in the same fashion for $D=1$, all possible amplitudes are listed in Figure \ref{fig-5a}. Here, the 3-point Standard Model vertex can be one of the following four types: (a) 3-gauge interaction $(\pm1,+1,-1)$; (b) gauge-fermion-fermion interaction $(\pm1,+1/2,-1/2)$; (c) gauge-scalar-scalar interaction $(\pm1,0,0)$; (d) scalar-fermion-fermion (Yukawa) interaction $(0,+1/2,-1/2)$.
\begin{figure} \begin{center} \includegraphics[width=0.6\textwidth]{5a} \caption{Amplitudes of $D=1$.} \labell{fig-5a} \end{center} \end{figure} For the left diagram in Figure \ref{fig-5a}, when vertices of type (a), (b), (c) and (d) are attached by a graviton, its four helicities are $(\pm1,+1,-1,\pm2)$, $(\pm1,+1/2,-1/2,\pm2)$, $(\pm1,0,0,\pm2)$ and $(0,+1/2,-1/2,\pm2)$ respectively\footnote{Here $\pm1$ and $\pm2$ are independent; they do not necessarily have the same sign.}. Plugging these data into \eqref{eq-11}, the corresponding $D'_{\min}$'s are 3, 3, 3 and 2, which exclude all four cases. In a word, the left diagram is excluded simply due to the single graviton. For the right diagram in Figure \ref{fig-5a}, when vertices of type (a), (b) and (c) are attached by a gravitational component, in the most conservative situation this component only contains external scalars, since higher-spin particles must appear in pairs of opposite helicities, which will not decrease $D'_{\min}$. Its helicities are $(\pm1,+1,-1,0,0,\ldots)$, $(\pm1,+1/2,-1/2,0,0,\ldots)$ and $(\pm1,0,0,0,0,\ldots)$ respectively, where $\ldots$ denotes more scalars besides the minimal five. Applying \eqref{eq-11}, the corresponding $D'_{\min}$'s are 3, 2 and 1, which exclude the first two cases. However, the third case is also excluded even though its $D'_{\min}$ is allowed. The argument is that no spinorial product can be formed by only $|1\>^2$ or $|1]^2$, which is known as the Spinor Excess of inconsistency elimination. The only polynomial comes from the vertex of type (d), as given in Figure \ref{fig-5b}. This polynomial is $\<12\>$ or $[12]$. \begin{figure} \begin{center} \includegraphics[width=0.3\textwidth]{5b} \caption{Polynomials of $D=1$.} \labell{fig-5b} \end{center} \end{figure} $\mathbf{D=2}$ \textbf{case}: The last case is $D=2$. The possible amplitude is given in Figure \ref{fig-6a}, and the corresponding polynomials are listed in Figure \ref{fig-6b}.
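Several of the $D'_{\min}$ values used in this section can be cross-checked by brute-forcing the minimization in \eqref{eq-11}. The Python sketch below is ours and encodes two assumptions the text leaves implicit: $m,\overline{m}$ range over non-negative integers, and the spinor-count term $\sum_i\min(\alpha_i,\beta_i)$ is set to zero.

```python
from itertools import product

def dprime_min(helicities, m_max=20):
    """Brute-force D'_min from the master formula, assuming m, mbar run
    over non-negative integers and min(alpha_i, beta_i) = 0.
    h1, h2 are two of the smallest helicities; h3, h4 two of the largest."""
    h = sorted(helicities)
    h1, h2, h3, h4 = h[0], h[1], h[-1], h[-2]
    rest = h[2:-2]  # particles 5..n
    return min(
        -(m + mbar)
        + abs(h1 - m / 2) + abs(h2 - m / 2)
        + abs(h3 + mbar / 2) + abs(h4 + mbar / 2)
        + sum(abs(x) for x in rest)
        for m, mbar in product(range(m_max + 1), repeat=2)
    )

# Values quoted in this section:
assert dprime_min([2, 2, 2, 2, 2]) == 2  # all-plus gravity, n = 5
assert dprime_min([2] * 6) == 4          # all-plus gravity, n = 6
assert dprime_min([-2, -2, 2, 2]) == 8   # MHV gravity, n = 4
assert dprime_min([1, 1, -1, 2]) == 3    # type (a) vertex plus a +2 graviton
```

Under the same assumptions, the $-2$-graviton choice for a type (a) vertex gives a larger value, so the quoted $D'_{\min}=3$ corresponds to the most conservative sign choice.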
The first one is $P^2_{xy}$, where $x,y$ are two unspecified scalars. The second one is $\<1x\>[x2]$, with one pair of fermions of opposite helicities. The third one is $\<12\>[34]$, with two pairs of fermions of opposite helicities. One may consider a fourth one, with one pair of gauge bosons of opposite helicities, which is allowed since its $D'_{\min}$ is 2. But this case is also excluded, as no spinorial product can be formed by only $|1\>^2|2]^2$ or $|1]^2|2\>^2$. \begin{figure} \begin{center} \includegraphics[width=0.25\textwidth]{6a} \caption{Amplitudes of $D=2$.} \labell{fig-6a} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[width=0.75\textwidth]{6b} \caption{Polynomials of $D=2$.} \labell{fig-6b} \end{center} \end{figure} Note that these $D=2$ polynomials can be of either $n=4$, or $n\geq5$ for which all unspecified particles are scalars. For $n=4$, there are dimensionless pseudo polynomials of the form $([34]/\<12\>)^x$, which can be an additional factor of the polynomials above. This factor will lead to a global shift of all four helicities. But incidentally, there is no extra legitimate pseudo polynomial. Last but not least, we need to check that the analysis above has covered all possible diagrams, by using the compact formula \begin{equation} D=2-\sum_{i=1}^S(s_i-2), \labell{eq-12} \end{equation} where $D$ is the kinematic mass dimension of a Standard Model `skeleton', as shown in Figure \ref{fig-7}. This skeleton amplitude is made of $S$ Standard Model components connected by internal gravitons, and $s_i\geq3$ is the number of external legs of each component. When $s_i=2$, there is no Standard Model vertex and the component reduces to a Standard Model line, which is excluded by the skeleton's definition. \begin{figure} \begin{center} \includegraphics[width=0.4\textwidth]{7} \caption{Standard Model skeleton.
In this case, the unspecified lines can be Standard Model particles only.} \labell{fig-7} \end{center} \end{figure} The proof of this formula is simple. For $S$ connected components, there are $(S-1)$ internal gravitons. Each internal graviton has two attachment points, which bring in two $\kappa$'s. Hence from \eqref{eq-7}, we have \begin{equation} D=4-\sum_{i=1}^Ss_i-(-2)(S-1), \end{equation} which is identical to \eqref{eq-12} after a simple rearrangement. Having this skeleton, to build a general amplitude, more gravitational components (either nontrivial components or single gravitons) can be attached to it. By applying the simplified diagrammatic rules, they can be packed into one single gravitational component. Now let's consider $D=0$: we have either $S=1$, $s_1=4$, or $S=2$, $s_1=s_2=3$. These two cases correspond to the first and fourth diagrams in Figure \ref{fig-4a}. When attached by gravitational components, they become the remaining four diagrams. For $D=1$, we only have $S=1$, $s_1=3$. This diagram cannot stand alone, since there is no physical massless 3-particle amplitude. Hence it must be accompanied by gravitational components, which gives the two diagrams in Figure \ref{fig-5a}. For $D=2$, the amplitude contains only gravitational vertices, which corresponds to the diagram in Figure \ref{fig-6a}. Therefore all possible diagrams have been covered. \section{Discussions} \labell{discuss} In this work, we are mainly concerned with the workability of multi-step BCFW recursion relations. The key techniques of this approach are pole concentration and inconsistency elimination. Its applicable range has been clarified, and we have found three types of objects invulnerable to BCFW deformations: polynomials, pseudo polynomials and saturated fractions. While the last two objects can be determined by other types of deformations, how to deal with the first one is probably beyond usual on-shell methods, and it may lead to important generalizations of the present approach.
Moreover, when saturated fractions arise, we need to discuss whether they are legitimate; if not, finding an argument to rule them out is a valuable topic. Again, we would like to emphasize that this systematic algorithm uses general properties of field theories only, such as Lorentz invariance, locality and unitarity. The major pieces of information we use are mass dimension and helicities. Having ensured its workability, we try to further improve the efficiency of multi-step BCFW recursion relations by taking two sophisticated aspects into account, as listed below: (a) Knowing the (final or intermediate) boundary term's schematic form (in terms of un-contracted spinors), it is also very natural to seek a deformation which makes the boundary term vanish in the large $z$ limit, rather than employing inconsistency elimination. In practice, one can consider both ways at each step to shorten the sequence of pole concentration. More profoundly, inconsistency elimination is only an argument \textit{afterwards}, as any boundary term that vanishes must be killed by a \textit{good} deformation with respect to that step. (b) In practice, it is often evident that there is no need to reach the \textit{final} denominator \eqref{eq-5}. Merely a particular intermediate form is sufficient to complete the calculation correctly. This is due to the fact that pole concentration is for \textit{eliminating as many neutral momenta as possible} in the denominator, and hence increasing the `net spinors' in the numerator while fixing the helicities. Let's illustrate these two aspects through three simple examples. The first one is the MHV amplitude $A(1^-,2^+,3^-,4^+,\ldots,n^+)$, where $1^-$ and $3^-$ are non-adjacent.
Assuming all lower-point MHV amplitudes are known, non-vanishing factorization limits give the common denominator \begin{equation} \frac{1}{\<12\>\<23\>\ldots\<n-1,n\>\<n1\>}. \end{equation} On the other hand, the kinematic dimension of this amplitude is $(4-n)$, as the gauge coupling constant is dimensionless. The correct helicities require its numerator to be $|1\>^4|3\>^4$ schematically, which uniquely fixes the amplitude as \begin{equation} \frac{\<13\>^4}{\<12\>\<23\>\ldots\<n-1,n\>\<n1\>}. \end{equation} From this example, we see that the schematic form is a simple but powerful tool. The second example is the amplitude $A(1^{-1},2^{+1},3^{-1},4^{+1},5^{-2})$ in Einstein-Maxwell theory. Non-vanishing factorization limits give all physical poles as $[12][32][14][34][15][25][35][45]$. Since its kinematic dimension is 2, the correct helicities require its schematic form to be \begin{equation} \frac{[1|~[2|^5[3|~[4|^5\times\prod^4|\bullet\>[\bullet|}{[12][32][14][34][15][25][35][45]}, \end{equation} where the $|\bullet\>[\bullet|$'s are unspecified neutral momenta. At this stage, one can already check that \begin{equation} \<1|5]~~~\<2|5]~~~\<3|5]~~~\<4|5] \end{equation} are \textit{good} deformations since each of them induces $z^3$ in the denominator, while the numerator can at most contain $z^2$ to avoid Spinor Excess, {\it i.e.,} $\prod^4|\bullet\>$ in the numerator can at most contain two identical spinors to form non-vanishing spinorial products, which restricts $\prod^4[\bullet|$ in the same way. For this case, finding a deformation with maximal large $z$ suppression is clearly more straightforward than using inconsistency elimination after a number of constant extractions. The third example is even more interesting. Consider the amplitude $A(1^{-1},2^{+1},3^{-1},4^{+1},5^{-1},6^{+1})$, again in Einstein-Maxwell theory.
Factorization limits, mass dimension and helicities together fix its schematic form to be \begin{equation} \frac{\<1|^2[2|^2\<3|^2[4|^2\<5|^2[6|^2\times\prod^{32}|\bullet\>[\bullet|}{P_{12}^2P_{14}^2P_{16}^2 P_{32}^2P_{34}^2P_{36}^2P_{52}^2P_{54}^2P_{56}^2\times P_{132}^2P_{134}^2P_{136}^2 P_{152}^2P_{154}^2P_{156}^2P_{352}^2P_{354}^2P_{356}^2}, \end{equation} and we will show that two successive deformations, namely \begin{equation} \<3|1]\to\<5|1] \end{equation} can already capture the full amplitude. First, after the constant extraction $\<3|1]$, a similar trick of pole concentration gives its schematic form \begin{equation} \frac{\<1|^{14}[2|^2[3|^{10}[4|^2\<5|^2[6|^2\times\prod^{22}|\bullet\>[\bullet|} {P_{52}^2P_{54}^2P_{56}^2P_{132}^2P_{134}^2P_{136}^2\<12\>^2\<14\>^2\<16\>^2[32]^2[34]^2[36]^2 \<1|5+2|3]^2\<1|5+4|3]^2\<1|5+6|3]^2}, \end{equation} and the constant extraction $\<5|1]$ turns it into \begin{equation} \frac{\<1|^{20}[2|^2[3|^{10}[4|^2[5|^4[6|^2\times\prod^{18}|\bullet\>[\bullet|} {\<12\>^3\<14\>^3\<16\>^3[32]^2[34]^2[36]^2[52][54][56]\<1|3+2|5]\<1|3+4|5]\<1|3+6|5] \<1|5+2|3]^2\<1|5+4|3]^2\<1|5+6|3]^2}, \end{equation} then there is no need to proceed further, because in the numerator Spinor Excess already arises, as $\prod^{18}|\bullet\>$ can never saturate $\<1|^{20}$ to form non-vanishing spinorial products. In this way, two steps already give the correct answer, while a blind pole concentration in general requires $4(6-3)=12$ steps. Therefore, it is not always necessary to reach the final denominator, when eliminating part of the neutral momenta in the denominator forces the numerator to contain sufficient net spinors to trigger Spinor Excess. But this is not the end of the story. When proceeding with the calculation \begin{equation} I=P_{\<5|1]}+C_{\<5|1]}P_{\<3|1]}+C_{\<5|1]}C_{\<3|1]}, \end{equation} while it has just been shown that $C_{\<5|1]}C_{\<3|1]}=0$, incidentally we also find $C_{\<5|1]}P_{\<3|1]}=0$.
This means $\<5|1]$ is a \textit{good} deformation and hence one step is enough. Since particles $1^{-1}$, $3^{-1}$ and $5^{-1}$ are symmetric in the helicity configuration (there is no color order), $\<3|1]$ is also a good deformation. In general, there is a \textit{last good deformation} corollary: After the $n$-th step, when $I=(\textrm{known terms})_n+C_n\cdots C_0$ is reached, we can further expand it by one more step as \begin{equation} I=P_{n+1}+C_{n+1}(\textrm{known terms})_n+C_{n+1}C_n\cdots C_0. \end{equation} Assume the $(n+1)$-th step is the last step, for which $C_{n+1}C_n\cdots C_0=0$; if \begin{equation} C_{n+1}(\textrm{known terms})_n=0, \end{equation} then the $(n+1)$-th step is a good deformation. This corollary is powerful in practical calculations, since unnecessary steps can be saved if we incidentally encounter the condition above. Returning to the main line, these two aspects (a) and (b) will be demonstrated more systematically, with more examples, in our future work, together with a possible joint use of the last good deformation corollary. The major goal is to improve the efficiency provided that workability is ensured. The exit of this maze is now found, and how to shorten the correct route is a complicated yet fascinating problem. Finally, we would like to highlight the power of simple analysis by mass dimension and helicities, as this cheap information possibly lies at the core of the future study of efficiency. \section*{Acknowledgement} The authors would like to thank Qingjun Jin and Rijun Huang for valuable discussions. JR is grateful to Qingjun Jin for correcting the errors in the early manuscript. This work is supported by Qiu-Shi funding and Chinese NSF funding under contracts No.11031005, No.11135006 and No.11125523.
\section{Introduction} \label{Intro} Over the last few decades, galaxy surveys such as the Two-degree-Field Galaxy Redshift Survey \citep[2dFGRS;][]{Colless:2001}, the Sloan Digital Sky Survey \citep[SDSS; e.g.][]{Tegmark:2004}, the Two-Micron All-Sky Survey \citep[2MASS;][]{Huchra:2005} and the 6dFGS \citep{Jones:2004} have revealed that galaxies gather in an intricate network, the so-called cosmic web \citep*[CW, after][]{Bond:1996}, made of filaments, walls and nodes, which surround vast empty regions, the voids \citep{Zeldovich:1970,Shandarin:1989}. These structures can be found on scales from a few to hundreds of megaparsecs and include huge flat structures like the Great Wall \citep{Geller:1989} and the SDSS Great Wall \citep{Gott:2005}, the largest known structure in the local Universe, with a size larger than $400 h^{-1}$ Mpc, as well as enormous empty regions like the Bo\"{o}tes void \citep{Kirshner:1981,Kirshner:1987}. These results have been complemented by mappings of the dark matter (DM) spatial distribution through weak lensing observations like the Hubble Space Telescope Cosmic Evolution Survey \citep[COSMOS;][]{Massey:2007} and recent results from the Canada--France--Hawaii Telescope Lensing Survey \citep[CFHTLenS;][]{VanWaerbeke:2013}. Summing up, analyses of the current large scale distribution of galaxies and mass show that both are hierarchically organised into a highly interconnected network, displaying a wealth of structures and substructures over a huge range of densities and scales. This web can be understood as the main feature of the anisotropic nature of gravitational collapse \citep{Peebles:1980}, as well as of its intrinsic hierarchical character, and in fact it is the main dynamical engine responsible for structure formation in the Universe \citep{Sheth:2004,ShethVdWeygaert:2004,Shen:2006}, including galaxy scales \citep{DT:2011}. 
According to the standard model of cosmology, the large-scale structures observed in the Universe today are seeded by infinitesimal primordial density and velocity perturbations. The physical processes underlying their dynamical development up to the emergence of the CW can be explained by theories and models of gravitational instability, later on corroborated by a profusion of cosmological simulations, the first of them being purely $N$-body simulations \citep[see e.g.,][]{Yepes:1992,Jenkins:1998,Pogosyan:1998,Colberg:2005,Springel:2005,Dolag:2006}, while recent ones include baryons and stellar physics too \citep[see e.g.,][]{DT:2011,Metuki:2014}. Indeed, the advanced non-linear stages of gravitational instability are described by the Adhesion Model (AM; see \citealt{Gurbatov:1984}; \citealt{Gurbatov:1989}; \citealt{Shandarin:1989}; \citealt{Gurbatov:1991}; \citealt{Vergassola:1994}; and \citealt{Gurbatov:2012} for a recent review), an extension of the popular non-linear Zeldovich Approximation \citep[hereafter ZA; see][]{Zeldovich:1970}. In comoving coordinates the ZA can be expressed as a mapping from the Lagrangian space (the space of initial conditions $\vec{q}$) into the Eulerian space (real space), described as a translation by a generalised irrotational velocity-like vector (the displacement field $\vec{s}(\vec{q})$) times the linear density growth factor $D_{+}(t)$, where the displacement can be written as a scalar potential gradient $\vec{s}(\vec{q}) = - \vec{\nabla}_q \Psi (\vec{q})$. This approximation allows us to predict where singularities (locations with infinite density) will appear as cosmic evolution proceeds (i.e., the $\vec{q}$ points where the map has a vanishing determinant of the Jacobian matrix) and how they evolve into a sequence of caustics in real space. 
In this way, the ZA correctly but roughly describes the emergence of multistream flow regions, caustics and the structural skeleton of the CW \citep*{Doroshkevich:1973,Buchert:1989,Buchert:1992,Shandarin:1989,Coles:1993,Melotta:1994,Melottb:1994,Melott:1995,Sahni:1995,Yoshisato:1998,Yoshisato:2006}. It is well known, however, that the ZA is not applicable once a substantial fraction of the mass elements is contained in multistream regions, because it predicts that caustics thicken and vanish due to multistreaming soon after their formation. One way of overcoming this issue is to introduce a small diffusion term in Zeldovich's momentum equation, in such a way that it has an effect only when and where particle crossings are about to take place. This can be accomplished by introducing a non-zero viscosity, $\nu$, and then taking the limit $\nu \rightarrow 0$: this is the AM, whose main advantage is that the momentum equation looks like Burgers' equation \citep{Burgers:1974} in the same limit, and hence its analytical solutions are known. A physically motivated derivation of the AM can be found in \citet{Buchert:1998,Buchert:1999,Buchert:2005}. The AM implies that, at a given scale, walls, filaments and nodes (i.e., the cosmic web elements) are successively formed, and then they vanish due to mass piling up around nodes, towards which mass elements travel through walls and filaments\footnote{Recently confirmed in detail through CW element identification in large volume $N$-body simulations by \citet{Cautun:2014}.}. Meanwhile, the same web elements emerge at larger and larger scales, and are erased at these scales after some time. Therefore, the AM conveniently describes both the anisotropic nature of gravitational collapse and the hierarchical nature of the process.
In addition, the AM indicates that the advanced stages of non-linear evolution act as a kind of smoothing procedure on different scales, by wiping mass accumulations off walls and filaments, first at small scales and later on at successively larger ones, to the advantage of nodes. Another implication of the AM is that node centres (protohaloes at high $z$) lie on the former filaments at any $z$. A very interesting achievement of the AM is that the first successful reduction of the cosmic large scale structure to a geometrical skeleton was done in this approximation \citep{Gurbatov:1989,Kofman:1990,Gurbatov:2012}, see also \citet{Hidding:2014}. Later on \citet{Novikov:2006,Sousbiea:2008,Sousbieb:2008,Sousbie:2009,Sousbie:2011,SousbiePichon:2011,AragonCalvoa:2010} and \citet{AragonCalvob:2010} also discussed the skeleton or spine of large-scale structures from purely topological constructions in a given density field. Recently, a growing interest to identify and analyse elements of the CW in $N$-body simulations, as well as in galaxy catalogues, has led to the development of different mathematical tools \citep{Stoica:2005,AragonCalvoa:2007,AragonCalvob:2007,AragonCalvob:2010,Hahna:2007,Hahnb:2007,Platen:2007,Stoica:2007,ForeroRomero:2009,Wu:2009,AragonCalvoa:2010,Bonda:2010,Bondb:2010,Genovese:2010,Gonzalez:2010,Jones:2010,Stoica:2010,Hoffman:2012,Cautun:2013,Tempel:2014}. These methods and algorithms are motivated by the study of the influence of large scale structures on galaxy formation \citep{Altay:2006,AragonCalvob:2007,Hahna:2007,Hahnb:2007,Paz:2008,Hahn:2009,Zhang:2009,Godlowski:2011,Codis:2012,Libeskind:2012,Libeskind:2013,AragonCalvo:2014,Metuki:2014}. In a recent paper, \citet{Cautun:2014} have investigated the evolution of the CW from cosmological simulations, focusing on the global evolution of their morphological components and their halo content. 
From a dynamical point of view, \citet{Hidding:2014} go a step further by establishing the link between the skeleton or spine of the CW, as described by the previous methods, and the development of the density field. In fact, they describe for the first time the details of caustic emergence as cosmic evolution proceeds. Their main result is to show that all dynamical processes related to caustics happen at locations placed near a set of critical lines in Lagrangian space, that, when projected onto the Eulerian space, imply an increasing degree of connectedness among initially disjoint mass accumulations in walls or filaments, until a percolated structure forms, i.e., the spine or skeleton of the large scale mass distribution. These authors compare their results with two dimensional $N$-body simulations. Note that, due to the complexity of the problem, they first work in two dimensional spaces, where caustic emergence and percolation are described. Nevertheless, they expect no important qualitative differences when three-dimensional spaces are considered instead. As we can see, in recent years different methods to quantify the cosmic web structure, classify its elements and study its emergence and evolution have been developed and applied. However, a detailed analysis of the {\it local} development of the density field around galaxy-hosting haloes is still missing. This is of major importance because of its close connection to the problem of galaxy formation, in which case the effects of including gas processes need to be considered too. It is worth noting that neither the ZA nor the AM include gas effects in their description of CW dynamics. This analysis should first answer the simplest questions related to {\it local} shape deformation and spine emergence and the orientation of its main directions or symmetry axes around galaxy-to-be objects.
Besides the very nature of these {\it local} processes, there are other interesting, simple, not-yet-elucidated related issues: for instance, the characterisation of the times when deformation stops and orientation gets frozen, whether or not this local web evolution is mass dependent (i.e., on the mass of the halo-to-be), and whether different components (DM, hot gas, cold baryons) evolve in a similar way or there is a component segregation. We do not have at our disposal an analytical tool to perform such analyses; in consequence, we need to resort to numerical simulations. In order to answer these questions, in this paper we investigate the impact of the local features of the Hubble flow imprinted on the deformation of initially spherical Lagrangian volumes (LVs) and the spine emergence, from high to low redshift. As known from previous studies, the local Hubble flow is neither homogeneous nor isotropic; on the contrary, it contains shear terms (and small-scale vorticity at its most advanced stages) that distort cosmological structures. We use cosmological hydrodynamical simulations to study the deformations of a sample of LVs through their reduced inertia tensor at different redshifts, which allows us to describe in a quantitative way the LV shape deformation and evolution, along with that of their symmetry axes. We analyse every component separately, that is, we compute the reduced inertia tensor for DM, cold and hot baryons. This paper is organised as follows. In $\S$\ref{sec:methods}, we outline the simulation method and the algorithms used to study the deformations of LVs. A brief summary on the ZA, the CW emergence in 2D and the AM is given in $\S$\ref{UnderEvol}, where some of their implications, useful in this paper, are also addressed.
Some relevant details of the highly non-linear stages of gravitational instability, beyond the ZA or the AM, are summarised in $\S$\ref{FurtherEvol}, to help understand how our results about the LV evolution can be explained in the light of these models. In $\S$\ref{EigenEvol}, the LV evolution is investigated in terms of the reduced inertia tensor eigenvectors, delaying the analysis in terms of its eigenvalues to the next section, $\S$\ref{sec:results}, focused on the mass and component effects and on the shape evolution of the selected LVs. In $\S$\ref{sec:Percola} we study the freezing-out of eigendirections and shapes, presenting the distribution of the corresponding freezing-out times and looking for mass effects. Possible scale effects on the previous results are discussed in $\S$\ref{subsec:scaleeffects}. Finally, we present our summary, conclusions and discussion in $\S$\ref{sec:conclusions}. \section[]{Simulations and Methods} \label{sec:methods} \subsection{Simulations} \label{sec:simul} The simulations analysed here have been run under the GALFOBS I and II projects. The GALFOBS (Galaxy Formation at Different Epochs and in Different Environments: Comparison with Observational Data) project aims to study the generic statistical properties of galaxies in various environments and at different cosmological epochs. This project was part of the DEISA Extreme Computing Initiative (DECI)\footnote{The DEISA Extreme Computing Initiative was launched in May 2005 by the DEISA Consortium, as a way to enhance its impact on science and technology.}. GALFOBS I was run at LRZ (Leibniz-Rechenzentrum) Munich, as a European project. Its continuation, GALFOBS II, was run at the Barcelona Supercomputing Centre, Spain. All the runs were performed using P-DEVA, the parallelised version of the DEVA code \citep{Serna:2003}. DEVA is a hybrid AP$^3$M Lagrangian code, implemented with a multistep algorithm and smoothed particle hydrodynamics (SPH).
The SPH version included in P-DEVA ensures energy and entropy conservation and, at the same time, guarantees a good description of the forces and angular momentum conservation. This advantage implies a gain in accuracy, at the price of an additional computational cost. Star formation (SF) is implemented through a Kennicutt--Schmidt-like law with a given density threshold, $\rho_*$, and star formation efficiency $c_{*}$ \citep{MartinezSerrano:2008}. The simulations have been carried out in the same periodic box of 80 Mpc side length, using $512^3$ baryonic and $512^3$ DM particles. Due to computational cost, these simulations only include the hydrodynamical calculation in a sub-box of 40 Mpc side. The evolution of matter follows the $\Lambda$ cold dark matter ($\Lambda$CDM) model, with parameters $\Omega_{\rm m}=0.295$, $\Omega_{\rm b} =0.0476$, $\Omega_{\Lambda}=0.705$, $h=0.694$, an initial power-law index $n=1$, and $\sigma_{8}=0.852$, taken from cosmic microwave background anisotropy data\footnote{http://lambda.gsfc.nasa.gov/product/map/dr3/params/lcdm\_sz\_lens\_run\_wmap5\_bao\_snall\_lyapost.cfm} \citep{Dunkley:2009}. The star formation parameters used were a density threshold $\rho_{*}=4.79\times10^{-25} \mathrm{g}~ \mathrm{cm}^{-3}$ and a star formation efficiency $c_{*}=0.3$. The mass resolutions are $m_{\rm bar}=2.42\times10^{7} M_{\odot}$ and $m_{\rm DM}=1.26\times10^{8} M_{\odot}$, and the spatial resolution in hydrodynamical forces is $1.1$ kpc. More detailed information about these simulations can be found in \citet{Onorbe:2011}. It is noteworthy that no explicit feedback has been implemented in these simulations, only SF regulation through the values of the SF parameters. Nevertheless, the issues that will be discussed in this paper involve considerably larger characteristic scales than the ones related to stellar feedback.
Therefore, it is unlikely that the details of the star formation rate, and those of stellar feedback in particular, could substantially alter the conclusions of this paper. \subsection{Methods} \label{subsec:methods} We first describe how the LV sample around simulated galaxies has been built up. The first step is halo selection at $z_{\rm low} = 0.05$ by using the SKID algorithm\footnote{http://www-hpcc.astro.washington.edu/tools/skid.html} \citep{Weinberg:1997}. This multi-step algorithm first determines the smoothed density field; then it moves particles upward along the gradient of this density field using a heuristic equation of motion that forces them to collect at local density maxima. Afterwards, it defines the approximate group to be the set of particles identified with an FOF algorithm with a linking length, $b$. Finally, particles not gravitationally bound to the groups identified in the previous step are removed. Specifically, we have selected a sample of 206 galaxy haloes from two runs of the GALFOBS simulations at $z_{\rm low}$, not involved in violent events at the halo scale at $z_{\rm low}$. Their virial radii $r_{\rm vir, low}$ and masses $M_{\rm vir, low}$ at this redshift go from those of dwarf galaxies to those of galaxy groups (see the corresponding histograms in the first row of Fig.~\ref{fig:histmassrad}). The virial radius ($r_{\rm vir}$) is defined as the radius of the sphere enclosing an overdensity given by \citet{Bryan:1998}. \begin{figure} \includegraphics[width=8.4cm]{sroblesfig1} \caption{Upper panels show the radius and mass distribution of the galaxy haloes at $z_{\rm low}$ in our sample. Lower panels depict the same information for the selected LVs. } \label{fig:histmassrad} \end{figure} Next, for each halo at $z_{\rm low}$ we have traced back all the particles inside the sphere defined by its respective $r_{\rm vir, low}$ to $z_{\rm high} = 10$. Using the position of these particles at $z_{\rm high}$ we have calculated a new centre $\vec{r}_c$.
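The grouping step of this procedure can be illustrated with a naive friends-of-friends sketch. This is an $O(N^2)$ toy with hypothetical names, not the SKID implementation itself, which also involves the density-gradient particle motion and the final unbinding step:

```python
import numpy as np

def fof_groups(pos, b):
    """Naive friends-of-friends: particles closer than the linking length b
    end up in the same group (union-find); returns one label per particle."""
    n = len(pos)
    parent = list(range(n))

    def find(i):                       # root of i, with path compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in range(n):
        # link particle i to every later particle within distance b
        d = np.linalg.norm(pos[i + 1:] - pos[i], axis=1)
        for j in np.nonzero(d < b)[0]:
            parent[find(i)] = find(i + 1 + j)

    return np.array([find(i) for i in range(n)])

# two clumps and an isolated particle
pos = np.array([[0.0, 0.0], [0.1, 0.0], [0.2, 0.1],
                [5.0, 5.0], [5.1, 5.0],
                [10.0, 0.0]])
labels = fof_groups(pos, b=0.3)        # three groups: {0,1,2}, {3,4}, {5}
```

Production FOF finders replace the pairwise distance scan with a spatial tree or grid, but the grouping logic is the same.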
Then, we have selected at $z_{\rm high}$ all the particles enclosed by a sphere of radius $R_{\rm high} = K\times r_{\rm vir, low}$, with $K = 10$, around their respective centres $\vec{r}_c$ (see first row of Fig.~\ref{fig:lagvol}), and we have identified each of the DM and baryonic particles within these spherical volumes. These particles sample the mass elements whose deformations, stretchings, foldings, collapse and stickings we are to trace along cosmic evolution. They follow geodesic trajectories until they possibly get stuck and begin the formation of, or are accreted onto, a CW structure element. For this reason, we have termed them Lagrangian Volumes (LVs). It is worth noting at this point that we are following the evolution of individual LVs, each of them made of a fixed number of particles as they evolve. We do not trace the possible incorporation of off-LV mass elements that could happen along evolution as a consequence of mergers, infalls or other processes. Note also that, due to the very complex evolution of the LVs, their borders are not well defined at $z < z_{\rm high}$. Finally, a technical point to take into account is that the LVs should lie inside the hydrodynamical zoomed box. The choice $K=10$ is motivated as a compromise between low $K$ values, ensuring a higher number of LVs in the sample, and high $K$ values, ensuring that LVs are large enough to meaningfully sample the CW emergence around forming galaxies. The possible effects that different $K$ values could have on our results will be discussed in Section \ref{subsec:scaleeffects}, where we conclude that $K=10$ is the best choice among the three possibilities analysed. Afterwards, we have followed the dynamical evolution of these particles across different redshifts until they reach $z_{\rm low}$, i.e., we have followed the evolution (stretchings, deformations, foldings, collapse, stickings) of a set of 206 LVs from $z_{\rm high}$ until $z_{\rm low}$.
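The LV construction just described can be sketched as follows. Function and variable names are illustrative; the actual pipeline works on full simulation snapshots, and here the centre is simply the mean position of the traced-back halo particles:

```python
import numpy as np

def select_lagrangian_volume(ids_in_halo, pos_high, r_vir_low, K=10.0):
    """Return the IDs of all particles within R_high = K * r_vir_low of the
    centre (at z_high) of the particles that end up inside r_vir at z_low.

    ids_in_halo : IDs of the particles inside r_vir,low at z_low
    pos_high    : dict mapping particle ID -> position at z_high
    """
    # trace the halo particles back to z_high and compute their centre
    traced = np.array([pos_high[i] for i in ids_in_halo])
    r_c = traced.mean(axis=0)

    # keep every particle inside the sphere of radius K * r_vir,low around r_c
    all_ids = sorted(pos_high)
    all_pos = np.array([pos_high[i] for i in all_ids])
    inside = np.linalg.norm(all_pos - r_c, axis=1) <= K * r_vir_low
    return {i for i, keep in zip(all_ids, inside) if keep}

# toy data: particles 0 and 1 form the halo; their centre at z_high is (1, 0, 0)
pos_high = {0: (0.0, 0.0, 0.0), 1: (2.0, 0.0, 0.0), 2: (1.5, 0.0, 0.0),
            3: (3.0, 0.0, 0.0), 4: (1.0, 0.9, 0.0)}
lv_ids = select_lagrangian_volume([0, 1], pos_high, r_vir_low=0.1)
```

With $R_{\rm high}=1$ in these toy units, the selection keeps particles 0, 1, 2 and 4 and discards particle 3, which lies outside the sphere.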
By construction, the mass of each of these sets of particles is constant across evolution, and its distribution is given in Fig.~\ref{fig:histmassrad}, second row, where we also show the distribution of their initial sizes at $z_{\rm high}$. The choice of initially {\it spherically} distributed sets of particles aims to unveil the anisotropic nature of the local cosmological evolution, illustrated in Fig.~\ref{fig:lagvol}, where two examples of LVs at $z=10$ and their corresponding final shapes and orientations at $z_{\rm low}$ are displayed. The masses of these LVs are $8.7 \times 10^{12}M_\odot$ (left-hand panels) and $4.4 \times 10^{12}M_\odot$ (right-hand panels), respectively. In this figure we note that, in both cases, a massive galaxy appears at $z_{\rm low}$ in the central region of the LV. It turns out that, by construction, these galaxies are just those identified in the first step of the LV sample building-up, see above. We also notice that the LVs have evolved into a highly irregular mass organisation, including very dense subregions as well as other much less dense and even rarefied ones. Also, some changes of orientation of the emerging spines are visible, mainly in the lighter LV. In addition, the initial cold gaseous configuration at $z=10$ has been transformed into a system where stars (in blue) appear at the densest subregions of the LVs. Hot gas (in red) particles are also present and constitute an important fraction of the LV mass (see $\S$\ref{FurtherEvol} for an explanation about its origin). We also observe that the overall LV shape on the right-hand side of Fig.~\ref{fig:lagvol} is highly elongated at $z_{\rm low}$ and has a prolate-like or filamentary appearance, visually spanning $\sim 9$ Mpc in length by 2 Mpc in width, while that on the left-hand side of Fig.~\ref{fig:lagvol} still keeps a more wall-like structure. These shape transformations illustrate the highly anisotropic character of evolution under gravity.
In this respect, it is worth mentioning that anisotropy is a generic property of gravitational collapse for non-isolated systems, as it was pointed out in early works by \citet{Lin:1965,Icke:1973} and \citet{White:1979}. \begin{figure*} \begin{center}$ \begin{array}{cc} \includegraphics[width=8.8cm]{sroblesfig2a} & \includegraphics[width=8.8cm]{sroblesfig2b} \end{array}$ \end{center} \caption{Left: shape evolution of a wall-like LV from $z=10$ to $z_{\rm low}=0.05$. Different columns are three projections of the same LV, with fixed axes taken oriented along the direction of the principal axes at $z_{\rm low}$. Magenta points represent DM, green cold gas, red hot gas ($T \ge 3 \times10^4$ K) and blue stars. First row shows the initially spherical LV at $z=10$, where DM and cold gas are represented in the same plot. Second, third, fourth and fifth group of panels illustrate the LV shape deformation across redshifts $z=3, 1, 0.5$ and $0.05$, where DM and baryonic components are split in different rows. Right: the same for a filament-like LV. The masses of the LVs are $8.7 \times 10^{12}M_\odot$ and $4.4 \times 10^{12}M_\odot$, respectively. } \label{fig:lagvol} \end{figure*} As we mentioned in $\S$\ref{Intro}, the deformation, stretching, folding, multistreaming and collapse of mass elements by cosmological evolution is predicted and described by the ZA, while the AM adds a viscosity term making multistream regions get stuck into dense configurations. In the following, we will introduce the mathematical methods we use to quantify the local LV transformations illustrated in Fig.~\ref{fig:lagvol}.
To this end, we have calculated, at different redshifts, the reduced inertia tensor of each LV relative to its centre of mass \begin{equation} I_{ij}^{\rm r} =\sum_{n}m_n\frac{(\delta_{ij}r_{n}^2 - r_{i,n}r_{j,n})}{r_{n}^2}, \hspace{0.5cm} n=1, ..., N \label{reducedI} \end{equation} where $r_{n}$ is the distance of the $n$-th LV particle to the LV centre of mass and $N$ is the total number of such particles. We have used this tensor instead of the usual one \citep{Porciani:2002a} to minimise the effect of substructure in the outer part of the LV \citep{Gerhard:1983,Bailin:2005}. In addition, the reduced inertia tensor is invariant under LV mass rearrangements in radial directions relative to the LV centre of mass. This property makes the $I_{ij}^{\rm r}$ tensor particularly suited to describe anisotropic mass deformations as those predicted by the ZA and the AM and observed in Fig.~\ref{fig:lagvol}. In order to measure the LV shape evolution, first, we have calculated the principal axes of the inertia ellipsoid, $a$, $b$, and $c$, derived from the eigenvalues ($\lambda_i$, with $\lambda_1 \leq \lambda_2 \leq \lambda_3$) of the $I_{ij}^{\rm r}$ tensor, so that $a\geq b\geq c$ (see \citealt{GonzalezGarcia:2005}), \begin{eqnarray} a = \sqrt{\frac{5(\lambda_2 - \lambda_1 + \lambda_3)}{2M}}, \qquad b = \sqrt{\frac{5(\lambda_3 - \lambda_2 + \lambda_1)}{2M}}, \\ \nonumber c = \sqrt{\frac{5(\lambda_1 - \lambda_3 + \lambda_2)}{2M}}, \end{eqnarray} where $M$ is the total mass of a given LV\footnote{Note that $\lambda_1 + \lambda_2 + \lambda_3 = 2M$ and this implies $a^2+b^2+c^2=5$.}. We denote the directions of the principal axes of inertia by $\hat{e}_i$, $i=1,2,3$, where $\hat{e}_1$ corresponds to the major axis, $\hat{e}_2$ to the intermediate one and $\hat{e}_3$ to the minor axis.
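These computations, together with the shape parameters $T$, $e$ and $p$ defined next in the text, can be sketched as follows (a minimal illustration with hypothetical names, not the production pipeline; note that $a^2+b^2+c^2=5$ and the invariance under radial rearrangements hold by construction):

```python
import numpy as np

def reduced_inertia_shape(pos, mass):
    """Axes a >= b >= c and shape parameters (T, e, p) from the reduced
    inertia tensor defined above; pos is relative to the centre of mass.

    Since tr I^r = 2M, a^2 + b^2 + c^2 = 5 automatically, and the result
    depends only on particle directions, not on their radial distances.
    """
    r2 = np.sum(pos ** 2, axis=1)
    w = mass / r2
    # sum_n w_n (r_n^2 delta_ij - x_i x_j) = M delta_ij - sum_n w_n x_i x_j
    I = np.eye(3) * mass.sum() - (pos * w[:, None]).T @ pos
    lam = np.sort(np.linalg.eigvalsh(I))       # lambda_1 <= lambda_2 <= lambda_3
    M = mass.sum()
    a = np.sqrt(5 * (lam[1] - lam[0] + lam[2]) / (2 * M))
    b = np.sqrt(5 * (lam[2] - lam[1] + lam[0]) / (2 * M))
    c = np.sqrt(5 * (lam[0] - lam[2] + lam[1]) / (2 * M))
    s = a ** 2 + b ** 2 + c ** 2
    T = (1 - (b / a) ** 2) / (1 - (c / a) ** 2)    # triaxiality (needs a != c)
    e = (a ** 2 - c ** 2) / s                      # ellipticity
    p = (a ** 2 + c ** 2 - 2 * b ** 2) / s         # prolateness
    return (a, b, c), (T, e, p)

# toy LV: weight 4 along x, weight 1 along y and z -> prolate configuration
pos = np.array([[3.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 2.0]])
mass = np.array([4.0, 1.0, 1.0])
axes, (T, e, p) = reduced_inertia_shape(pos, mass)
```

For this toy configuration $T=1$ and $e=p=0.5$, i.e., a prolate object, and rescaling any particle's distance to the centre leaves the result unchanged.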
Afterwards, to quantify the deformation of these LVs, we have computed the triaxiality parameter, $T$ \citep{Franx:1991}, defined as \begin{equation} T = \frac{(1-b^2/a^2)} {(1-c^2/a^2)}, \end{equation} where $T=0$ corresponds to an oblate spheroid and $T=1$ to a prolate one. An object with axis ratio $c/a>0.9$ has a nearly spheroidal shape, while one with $c/a < 0.9$ and $T<0.3$ has an oblate triaxial shape. On the other hand, an object with $c/a < 0.9$ and $T>0.7$ has a prolate triaxial shape \citep{GonzalesGarcia:2009}. We have also calculated other parameters that measure shape deformation, such as the ellipticity, $e$, \begin{equation} e=\frac{a^2-c^2}{a^2+b^2+c^2} , \end{equation} which quantifies the deviation from sphericity, and the prolateness, $p$, \begin{equation} p=\frac{a^2+c^2-2b^2}{a^2+b^2+c^2}, \end{equation} which compares prolateness with oblateness \citep{Bardeen:1986,Porciani:2002b,Springel:2004}. In this case, a sphere has $e=p=0$, a circular disc has $e=0.5$, $p=-0.5$ and a thin filament has $e=p=1$. Nearly spherical objects have $e<0.2$ and $|p|<0.2$. To sum up, we have performed the computation of the reduced inertia tensor, the principal axes of inertia, the eigendirections and the parameters $T, e$ and $p$ for each of the selected LVs. Furthermore, we have repeated the same calculation for each component separately, viz. DM, cold and hot baryons. We consider as hot gas those particles shock-heated above $3\times10^4$ K. \section{Evolution Under the ZA or the AM} \label{UnderEvol} The advanced non-linear stages of gravitational instability are described by the {\it adhesion model} \citep{Gurbatov:1984,Gurbatov:1989,Shandarin:1989,Gurbatov:1991,Vergassola:1994}, an extension of Zeldovich's (1970) popular non-linear approximation. In this Section, we briefly revisit them as well as some of their implications, useful to understand the results that will be analysed in the next sections.
\subsection{The Zeldovich Approximation} In comoving coordinates, Zeldovich's approximation is given by the so-called {\it Lagrangian map}: \begin{equation} x_i(\vec q,t) = q_i + D_{+}(t) s_i(\vec q), \label{ZAppro} \end{equation} where $q_i$ and $x_i, i = 1,2,3$ are comoving Lagrangian and Eulerian coordinates of fluid elements or particles sampling them, respectively (i.e., initial positions at time $t_{in}$ and positions at later times $t$); $D_{+}(t)$ is the linear density growth factor. As already mentioned, it turns out that $s_i(\vec q)$ can be expressed as the gradient of the displacement potential $\Psi(\vec{q})$. The behaviour of $D_{+}(t)$ depends on the cosmological epoch. For the flat concordance cosmological model (see $\S$~\ref{sec:simul}), at high enough $z$, when the Universe evolution is suitably described by the Einstein-de Sitter model, $D_{+}(t) = (3/5) (t/t_i)^{2/3}$. Later on, when $\frac{d^{2}a}{dt^{2}} \simeq 0$ and the effects of the cosmological constant emerge ($z_{\Lambda} \simeq 0.684$ or $t_{\Lambda}/t_{\rm U} = 0.554$ for the cosmological model used in the simulations analysed here), $D_{+}(t)$ is an exponential function of time. Finally, when the cosmological constant dominates, we have: \begin{equation} D_{+}(a(t)) \propto \mathfrak{B}_x(5/6, 2/3) \left( \frac{\Omega_0}{\Omega_{\Lambda}} \right)^{1/3}\left[ 1 + \frac{\Omega_{\rm M}}{a^3 \Omega_{\Lambda}} \right]^{1/2}, \label{CurrentDmas} \end{equation} where $\mathfrak{B}_x$ is the incomplete $\beta$ function, $ \Omega_0 = 1-\Omega_{\Lambda}$, $\Omega_{\rm M}$ is the non-relativistic contribution to $ \Omega_0$, and \begin{equation} x \equiv \frac{a^3 \Omega_{\Lambda}}{\Omega_0 + a^3 \Omega_{\Lambda}}, \label{xDef} \end{equation} describing a frozen perturbation in the limit $t \rightarrow \infty$. 
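For reference, $D_{+}(a)$ can also be evaluated numerically from the standard integral expression $D_{+}(a) \propto H(a)\int_{0}^{a} {\rm d}a'/[a'H(a')]^{3}$, equivalent (up to normalisation) to the expressions above for a flat model. A minimal sketch, normalised so that $D_{+} \rightarrow a$ at early times and with the cosmological parameters of $\S$\ref{sec:simul} as defaults (function and parameter names are our own):

```python
import numpy as np

def growth_factor(a, om_m=0.295, om_l=0.705, n=4096):
    """Linear growth factor D_+(a) for a flat model, normalised so that
    D_+ -> a deep in the matter-dominated era."""
    def E(x):                                # H(a) / H_0 for a flat model
        return np.sqrt(om_m / x ** 3 + om_l)
    ap = np.linspace(1e-8, a, n)
    f = 1.0 / (ap * E(ap)) ** 3
    integral = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(ap))  # trapezoidal rule
    return 2.5 * om_m * E(a) * integral

D_now = growth_factor(1.0)   # < 1: growth is suppressed once Lambda dominates
```

For an Einstein--de Sitter model ($\Omega_{\rm m}=1$) this reduces to $D_{+}(a)=a$, while for the parameters used here the late-time growth is suppressed, consistent with the frozen-perturbation limit mentioned above.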
Due to mass conservation, equation \ref{ZAppro} implies for the local density evolution: \begin{equation} \rho(\vec{r},t) = \frac{\rho_b(t)}{[1-D_{+}(t)\alpha(\vec{q})][1-D_{+}(t)\beta(\vec{q})][1-D_{+}(t)\gamma(\vec{q})]}, \label{DenZAppro} \end{equation} where $\vec{r} = a(t) \vec{x}$ is the physical coordinate, $\rho_b(t)$ the background density, and $\gamma(\vec{q}) < \beta(\vec{q}) < \alpha(\vec{q})$ are the eigenvalues of the local deformation tensor, $d_{i, j}(\vec{q}) = - \left(\frac{\partial s_i}{\partial q_j}\right)_{\vec{q}}$. Equation \ref{DenZAppro} describes caustic formation in the ZA. Indeed, a caustic first appears when and where $D_{+}(t)\alpha(\vec{q}) = 1$ (i.e., a wall-like one), see details in $\S$ \ref{CWEmer}. Mathematically, caustics at time $t$ can be considered as singularities in the {\it Lagrangian map} (see equation \ref{ZAppro} and more details in the next subsection). \subsection{The CW Emergence in 2D} \label{CWEmer} The emergence of the cosmic skeleton as cosmic evolution proceeds in the frame of the ZA is presented by \citet{Hidding:2014}. Due to the high complexity of the formalism involved, the authors restrict themselves to the two-dimensional equivalent of the ZA, providing us with the concepts, principles, language and processes needed as a first step towards a complete dynamical analysis of the CW emergence in the full three-dimensional space. In this subsection we give a brief summary of some of their results, useful to interpret some of our findings. In 2D, the complexity of the cosmic structure can be understood to a large extent from the properties of the $\alpha(\vec{q})$ landscape field, where $\alpha(\vec{q})$ is the largest eigenvalue of the deformation tensor $d_{i,j}(\vec{q}), i,j=1,2$. The role of the second eigenvalue $\beta(\vec{q})$ is much less relevant, except around the places where the haloes are to form. 
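The density evolution of equation \ref{DenZAppro} (or its 2D version) is easy to follow numerically for a single mass element; a minimal sketch showing the density blowing up as $D_{+}(t)\alpha(\vec{q}) \rightarrow 1$, i.e., caustic formation:

```python
def za_density(eigenvalues, D):
    """Density in units of rho_b for one mass element in the ZA, given the
    local deformation-tensor eigenvalues; valid while D * alpha < 1."""
    rho = 1.0
    for lam in eigenvalues:
        rho /= (1.0 - D * lam)
    return rho

# a mass element with gamma < beta < alpha = 0.5 forms a caustic at D_+ = 2
rho_early = za_density((0.5, 0.2, -0.3), D=0.2)   # mildly overdense
rho_late = za_density((0.5, 0.2, -0.3), D=1.9)    # near caustic formation
```

The chosen eigenvalues are illustrative; note how the contraction along the $\alpha$ eigendirection dominates the density growth even while the element still expands along the $\gamma$ direction.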
Of particular relevance are the $A_3$ lines in Lagrangian space, because they are the progenitors of the cosmic skeleton in Eulerian space. Geometrically they can be defined as the locus of the points where the gradient of the $\alpha$ (or $\beta$) eigenvalue is normal to its corresponding eigenvector $\vec{e}_{\alpha}$ (or $\vec{e}_{\beta}$). Alternatively, they can also be defined as the locus of the points where $\vec{e}_{\alpha}$ (or $\vec{e}_{\beta}$) is tangential to the contour level of the $\alpha(\vec{q})$ (or $\beta(\vec{q})$) landscape field. The locations where collapse first occurs are around the maxima of the $\alpha(\vec{q})$ field in Lagrangian space. These are the so-called $A_{3}^{+}$ singularities, after Arnold's singularity classification \citep{Arnold:1983}. They are placed on the $A_3$ lines. Subsequently, the evolution under the ZA drives a gradual progression of Lagrangian collapsing regions, consisting, at a given time $t$, of those points such that $\alpha(\vec{q}) = 1/D_{+}(t)$ or $\beta(\vec{q}) = 1/D_{+}(t)$, according to the 2D version of equation \ref{DenZAppro}. These isocontour lines are the so-called $A_{2}^{\alpha}(t)$ and $A_{2}^{\beta}(t)$ lines, and within them matter is multistreaming in Eulerian space, i.e., matter forms a fold caustic or pancake. The height of the $\alpha(\vec{q})$ landscape field portrays the collapse time for a local mass element. Indeed, at a given time $t$, points where the $A_{2}^{\alpha}(t)$ and the $A_{3}^{\alpha}$ lines meet correspond to points in Eulerian space where a cusp singularity can be found (i.e., the tip of a caustic). The $A_{2}^{\alpha}(t)$ lines descend on the $\alpha(\vec{q})$ landscape field as time elapses, and in this way more and more mass elements get involved in the pancake. The pancake grows in Eulerian space, where the two cusp singularities at its tips move away from each other. A similar description can be made for the $\beta(\vec{q})$ eigenvalue.
Note that the height of either the $A_{2}^{\alpha}(t)$ or the $A_{2}^{\beta}(t)$ lines depends only on the $D_{+}(t)$ function, and not on the eigenvalue landscape fields. Therefore, the higher the $\alpha(\vec{q})$ landscape field, the earlier the corresponding pancake in the Eulerian space is formed. The same argument holds for the $\beta(\vec{q})$ eigenvalue. Along the $A_3$ lines there are other types of extrema. First, we have the $A_{3}^{-}$ singularities or saddle points, after Arnold's classification. They are in-between two $A_{3}^{+}$ singularities and are local minima along the $A_3$ lines. They depict the places where two pancakes emerging from each of the $A_{3}^{+}$ points get connected, when the corresponding $A_{2}$ lines meet the $A_{3}^{-}$ singularities in their descent. This represents a first percolation event, and a first step towards the emergence of the CW spine. For the aforementioned reasons, the higher the $\alpha(\vec{q})$ landscape field, the earlier the percolation events will occur. The second type comprises the local maxima points $\vec{q}_4$, where the corresponding eigenvector is tangent to the $A_3$ lines, i.e., the so-called $A_4$ singularities, or swallow tails according to \citet{Arnold:1983}. An $A_4$ singularity at $\vec{q}_4$ exists only at a unique instant $t_4$, when $\alpha(\vec{q}_4) = 1/D_{+}(t_4)$. At this moment, the $A_{2}^{\alpha}(t_4)$ line passes through $A_4$, transforming the cusp singularity at the end of the Eulerian pancake into a swallow tail singularity. After that, there are three intersections of the $A_{2}(t)$ line with two $A_{3}$ lines, giving three connected cusp singularities in Eulerian space. Therefore, the $A_{4}$ singularities are the connection points where disjoint pieces of $A_{3}$ lines get connected in Eulerian space. Then, we get another percolation process. Once again, as explained above, the higher the $\alpha(\vec{q})$ landscape field, the earlier the percolation events will take place.
This short summary illustrates some aspects of the effect that the height of the $\alpha(\vec{q})$ landscape field has on the time when simple percolation events occur in 2D, or, in a more general scope, when the CW spine emerges. The conclusion is simple: the higher the eigenvalue landscape, the earlier the percolation events take place. A similar effect can be expected in 3D, provided that the description of the events connecting disjoint caustics in Eulerian space is not dramatically changed with respect to that in 2D. Pancake formation in Eulerian space entails an anisotropic mass rearrangement, as matter flows normal to the $\alpha$ (or $\beta$) pancake. These flows consist of mass elements within the $A_{2}^{\alpha}(t)$ (or $A_{2}^{\beta}(t)$) lines in Lagrangian space, and therefore they ideally do not stop while the $A_2$ lines keep on descending on the landscape. Similar ideas apply to other kinds of caustic formation, implying shape transformations after the skeleton emergence. Note that matter flows are predominantly anisotropic, except for the places where the haloes are to form, i.e. where flows become more isotropic. \subsection{The Adhesion Model} \label{AdMod} As is well known, Zeldovich's approximation is not applicable beyond particle crossing, because it predicts that caustics thicken and vanish due to multistreaming soon after their formation. However, $N$-body simulations of large-scale structure formation indicate that long-lasting pancakes are indeed formed, near which particles stick, i.e., multistreaming does not take place. The adhesion model was formulated to incorporate this feature into Zeldovich's approximation, by introducing a small diffusion term in Zeldovich's momentum equation, in such a way that it has an effect only when and where particle crossings are about to take place. This can be accomplished by introducing a non-zero viscosity, $\nu$, and then taking the limit $\nu \rightarrow 0$.
This is the phenomenological derivation of the adhesion model. Physically motivated derivations can be found in \citet{Buchert:1998}, \citet{Buchert:1999} and others included in the review by \citet{Buchert:2005}. As in the Zeldovich approximation, in the adhesion model the initial velocity field can be expressed as the gradient of a scalar potential field, $\Phi_0(\vec q)$, describing the spatial structure of the initial perturbation. It can be shown that the solutions for the velocity field behave just as those of Burgers' equation \citep{Burgers:1948,Burgers:1974} in the limit $\nu \rightarrow 0$, whose analytical solutions are known. The most significant characteristic of the solutions of Burgers' equation is that they unavoidably develop singularities, i.e., locations where, at a given time, the velocity field becomes discontinuous and certain particles coalesce into {\it long-lasting} very dense configurations with different geometries, i.e., caustics as in the ZA. The ideas explained in $\S$~\ref{CWEmer} also apply here, but the main difference is that matter gets stuck forming very dense subvolumes (singularities) in Eulerian space, instead of forming multistreaming regions. In this way, a singularity occurs at the time $t$ when a non-zero $d$-dimensional elemental volume $V$ around a point $\vec{q}$ in the initial configuration is mapped to a $d'$-dimensional elemental volume around a point $\vec{x}(\vec{q},t)$ in Eulerian space with $d'<d$. In a three-dimensional space, these singularities can be walls (with dimension $d'=2$), filaments ($d'=1$) and nodes ($d'=0$). The AM implies that, locally, walls are the first singularities that appear, as denser small surfaces (the so-called pancakes). Later on, filaments form and grow until singularity percolation and spine emergence \citep{Gurbatov:1989,Kofman:1990,Gurbatov:2012}.
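The $\nu \rightarrow 0$ limit admits a well-known variational formulation that is easy to sketch: the Lagrangian antecedent $q^*$ of an Eulerian point $x$ at growth factor $D$ minimises $\Phi_0(q) + (x-q)^2/2D$, and jumps of $q^*(x)$ mark the sticking locations. The snippet below is a one-dimensional toy illustration (the potential, grid and time are hypothetical choices, not the paper's):

```python
import numpy as np

# 1D adhesion model in the nu -> 0 limit: for each Eulerian x at "time" D,
# the Lagrangian antecedent q*(x) minimises G(q) = Phi0(q) + (x - q)^2/(2D).
# Jumps of q*(x) mark shocks, i.e., locations where mass has stuck.
q = np.linspace(-np.pi, np.pi, 4001)
Phi0 = np.cos(q)                 # hypothetical initial potential
D = 2.0                          # growth factor, past first crossing

x_grid = np.linspace(-np.pi, np.pi, 801)
q_star = np.array([q[np.argmin(Phi0 + (x - q) ** 2 / (2.0 * D))]
                   for x in x_grid])

# q*(x) is non-decreasing (no multistreaming); a jump much larger than the
# typical increment reveals a shock. Here a single shock sits near x = 0.
dq = np.diff(q_star)
shocks = x_grid[:-1][dq > 0.5]
print(shocks)
```

The mass interval of Lagrangian coordinates skipped by the jump is exactly the mass that has coalesced into the shock, which is the 1D analogue of the pancakes and nodes described above.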
The singularity pattern implies the emergence of anisotropic mass flows towards the newly formed singularities. Locally, emerging walls are the first to attract flows from voids, then they host flows towards filaments, and, finally, filaments are the paths of mass towards nodes. In this way, at a given scale, walls and filaments tend to vanish as the mass piles up at nodes. In addition, cells associated with the deepest minima of $-\Phi_0(\vec q)$ swallow up some of their neighbouring cells related to less deep minima, involving their constituent elements (i.e., walls, filaments and nodes), and causing their merging, as in the ZA. This is observed in simulations as contractive flow deformations that erase substructure at small scales, as mentioned above, while the CW is still forming at larger scales. It is worth noting that Burgers' equation solutions ensure the existence of {\it regular points or mass elements} at any time $t$, defined as those that have not yet been trapped in a caustic at $t$. Because of that, these regular mass elements are among the least dense in the density distribution. Note, however, that due to the complex structure of the flow, singular (i.e., already trapped in a caustic) and regular (i.e., not yet trapped) mass elements need not be spatially segregated, and in fact, they are ideally mixed at any scale. \subsection{Further implications} \label{ZAImpli} According to the ZA, we have \begin{equation} \nabla_{\vec{q}} \cdot \vec{s} \equiv \alpha(\vec{q}) + \beta(\vec{q}) + \gamma(\vec{q})= \frac{5 \delta \rho}{3 \rho}(t_{in}). \label{LambdaPeak} \end{equation} As suggested by the 2D analysis made in $\S$~\ref{CWEmer}, the height of the $\alpha(\vec{q})$ landscape field in 3D portrays the collapse time for local mass elements (with $\alpha(\vec{q})$ the largest $d_{i,j}$ eigenvalue at $\vec{q}$), as well as the time when different percolation events mark the emergence of the CW spine.
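The eigenvalue sum in equation~(\ref{LambdaPeak}) is just the trace of the deformation matrix, which can be checked on a grid: since $d_{i,j} = -\partial s_i/\partial q_j$ is symmetric for a potential displacement, $\alpha+\beta+\gamma = \mathrm{tr}(d) = -\nabla_{\vec q}\cdot\vec{s}$, up to the overall sign convention adopted for $\vec s$. The following sketch uses a hypothetical toy potential, not the simulation's initial conditions:

```python
import numpy as np

# Toy 3D check: build s = -grad(Phi0) on a grid, form the symmetric
# deformation matrix d_ij = -(d s_i / d q_j), and verify that the sum of
# its eigenvalues (alpha >= beta >= gamma) equals its trace, -div(s).
n, L = 24, 2 * np.pi
ax = np.linspace(0.0, L, n, endpoint=False)
qx, qy, qz = np.meshgrid(ax, ax, ax, indexing="ij")
Phi0 = np.cos(qx) + 0.5 * np.cos(qy) * np.cos(qz)   # hypothetical potential
h = L / n

s = [-np.gradient(Phi0, h, axis=k) for k in range(3)]
d = np.empty((3, 3) + Phi0.shape)
for i in range(3):
    for j in range(3):
        d[i, j] = -np.gradient(s[i], h, axis=j)

eig = np.linalg.eigvalsh(np.moveaxis(d, (0, 1), (-2, -1)))  # ascending
alpha, beta, gamma = eig[..., 2], eig[..., 1], eig[..., 0]
div_s = sum(np.gradient(s[k], h, axis=k) for k in range(3))
print(np.max(np.abs(alpha + beta + gamma + div_s)))  # ~ machine precision
```

The field `alpha` computed this way is the discrete analogue of the $\alpha(\vec{q})$ landscape whose height governs the percolation times discussed above.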
Equation~\ref{LambdaPeak} indicates that the eigenvalue landscape fields are closely related to the fluctuation field (FF) $\frac{\delta \rho}{\rho}$ at $t_{in}$. It is well known that the number density of the FF peaks above a given threshold is considerably enhanced by the presence of a (positive) background field \citep{Bardeen:1986}, or, equivalently, when a large-scale varying field is added to $\frac{\delta \rho}{\rho}$. Equation~\ref{LambdaPeak} tells us that such a background would increase the height of the landscape fields, thereby speeding up the percolation events responsible for the CW emergence. Note that denser LVs, when compared to less dense ones, can be considered as the result of adding a large-scale varying field to the latter. Consequently, we expect the CW elements to appear and percolate earlier within denser LVs than within less dense ones. These considerations apply to the evolution of the $I_{ij}^{\rm r}$ eigenvectors, $\hat{e}_i(z)$, and to their possible dependence on mass. Regarding shape evolution, as already emphasised, mass anisotropically flows towards new singularities. These anisotropic mass rearrangements make the $I_{ij}^{\rm r}$ eigenvalues evolve. Thus, evolution gradually becomes extinct as anisotropic flows tend to vanish. At small scales, the CW structure is swallowed up and removed by contractive deformations, see the previous subsection. From a global point of view, the CW dynamic evolution somehow stops and the structure becomes frozen as $\frac{d D_{+}(t)}{dt} \rightarrow 0$, that is, after the $\Lambda$ term dominates the expansion at $z_{\Lambda}$, see equation~\ref{CurrentDmas}. Therefore, matter flows are expected to become on average less and less relevant after $z_{\Lambda}$, as time elapses. In addition, it is expected that locally the first to vanish are the flows associated with $\alpha(\vec{q})$, the largest eigenvalue of the local deformation matrix $d_{i, j}(\vec{q})$ (i.e.
the flows towards walls), and the last to disappear are those flows related to $\gamma(\vec{q})$, the smallest deformation matrix $d_{i, j}(\vec{q})$ eigenvalue (i.e., the flows towards nodes). Disentangling how these theoretical local predictions affect the global shape evolution of LVs demands numerical simulations. We will address these issues in the next sections. \section{Evolution Beyond the ZA or the AM} \label{FurtherEvol} Some concepts, not directly described by the ZA or the AM, need to be clarified in order to correctly explain Fig.~\ref{fig:lagvol} at a qualitative level, as well as some results to be discussed in forthcoming sections. \subsection{Caustic dressing} The phenomenological Adhesion Model says nothing about the internal density or velocity structure of the locations where mass gets adhered. To gain some insight from theory, we recall that in his derivation of a generalised adhesion-like model, \citet{Dominguez:2000} found corrections to the momentum equation of the ZA that regularise (i.e., dress) its wall singularities. These then become long-lasting structures where more mass gets stuck, but within non-zero volumes supported by velocity dispersion coming from the energy transfer from ordered to disordered motions \citep[see also][for a discussion of these effects in terms of the viscosity, phenomenologically introduced in the AM]{Gurbatov:1989}. The analyses of $N$-body simulations strongly suggest that any kind of flow singularity gets dressed \citep[i.e., not only at pancakes, as has been analytically proven by][]{Dominguez:2000}. \subsection{Gas in the cosmic web} \label{GasCW} When gas is added, the energy transfer from ordered to disordered motions around singular structures includes the transformation of velocity dispersion into internal gas energy (heating) and pressure. Then, energy is lost through gas cooling, mainly at the densest pieces of the CW, making them even denser.
However, as already said in $\S$\ref{AdMod}, singular (i.e., dense) and regular (i.e., not yet involved in singularities, low density) mass elements are mixed at any scale. Therefore, low-density gas is heated too, and, in addition, pressurised. The consequences of these processes cannot be deciphered from theory, but previous analyses of cosmological hydrodynamical simulations in terms of the CW \citep[see, for example,][]{DT:2011} suggest that dressing acts on any kind of flow singularity, i.e., also on filaments and nodes. Moreover, these authors conclude that, at (node-like) halo collapse, cooling of low-density gas is so slow that most gravitationally heated gas is kept hot until $z =0$. In any case, because hot gas is pressurised, no anisotropic mass inflows towards singularities can be expected within the hot gas component; on the contrary, possible anisotropic, pressure-induced hot gas outflows are expected from them. These expectations will be explored in the following sections. On the other hand, at the densest gas locations, cold gas is transformed into stars with an efficiency $\epsilon$ when the density is higher than a threshold. In this way, the hot gas component and the stars, observed in Fig.~\ref{fig:lagvol}, arise. \subsection{A visual impression of LV evolution} Fig.~\ref{fig:lagvol} gives us a first visual impression of the evolution of the initially spherical LVs. The considerations above facilitate a qualitative interpretation of what these figures show. Indeed, the gradual emergence of a local skeleton stands out in both of them, including web-element mergings and some rotations too. Finally, at $z_{\rm low}$, we see an elongated structure, either in the DM, cold or hot baryonic components, where different spherical configurations appear, with a stellar component at the centre of most of them\footnote{We note that there is a component effect, namely different components (i.e., DM, cold and hot baryons) evolve dissimilarly.}.
A high fraction of the hot gas component (but not its whole mass) is related to these spheres. This complicated structure comes from wall and filament formation, according to the AM, and their dressing and eventual fragmentation into clumps. Clumps are, in their turn, dressed. Note also that, at each $z$, a fraction of the matter is not yet involved in singularities. Therefore, evolution leads to: (i) a DM component sharing both a diffuse and a dressed singularity configuration, with the particularity that the LV diffuse component present at redshift $z$ has not yet been involved in any singularity at $z$, (ii) a complex cold gas component, also sharing a diffuse as well as a dressed singularity configuration, but with a more concentrated distribution than that of the DM, because gas can lose energy by radiation and (iii) a complex hot gas distribution. As explained in $\S$~\ref{GasCW}, diffuse gas is gravitationally heated at collapse events, but, as will be shown in $\S$~\ref{sec:CompEff}, it is not involved in important anisotropic mass rearrangements. To advance further, we need a quantitative analysis of LV evolution. This is the subject of the next sections. \section{Anisotropic Evolution: Eigenvectors of the mass distribution} \label{EigenEvol} According to the AM, mass elements are anisotropically deformed and a fraction of them pass through one or several singularities in sticking regions. For each mass element placed at a Lagrangian point $\vec{q}$, accretion at high $z$ preferentially occurs along the eigenvector corresponding to the largest eigenvalue of the symmetric deformation matrix at $\vec{q}$, $d_{i, j}(\vec{q}) = - \left(\frac{\partial s_i}{\partial q_j}\right)_{\vec{q}}$.
\begin{figure} \includegraphics[width=8cm]{sroblesfig3} \caption{Evolution across redshifts of the $A_i$ distribution, where $A_i$ is the angle formed by the eigenvectors $ \hat{e}_i^{\rm tot}(z)$ and $\hat{e}_i^{\rm tot}(z_{\rm low})$, with $i=1,2,3$, and where `$\rm tot$' stands for the eigenvectors of the $I_{ij}^{\rm r}$, calculated with all the LV components. } \label{fig:direc-evol} \end{figure} Taking the LV as a whole, the $I_{ij}^{\rm r}$ eigenvector $\hat{e}_3^{\rm tot}(z)$, which corresponds to its largest eigenvalue, $\lambda_3(z)$, at a given redshift $z$, defines the direction along which the overall LV elongation has been maximum until this $z$. Similarly, $\hat{e}_1^{\rm tot}(z)$ corresponds to the direction of overall minimum stretching of the LV up to a given $z$. It is very interesting to analyse whether or not there exists a change in such directions as cosmic evolution proceeds. In Fig.~\ref{fig:direc-evol}, we show the histograms for the quantities $A_i(z)$, the angle formed by the eigenvectors $ \hat{e}_i^{\rm tot}(z)$ and $\hat{e}_i^{\rm tot}(z_{\rm low})$, with $i=1,2,3$, where `$\rm tot$' stands for the eigenvectors of the $I_{ij}^{\rm r}$ tensor corresponding to the total mass of the LV, at redshifts $z=10,5,3,1,0.7,0.5,0.25,0.1$. That is, we measure the deviations of the eigendirections at a given $z$ with respect to the final ones\footnote{Note that only two out of the three $A_i$ angles are independent, in such a way that if, for instance, $A_1=0$, then $A_2=A_3$.}. We see that on average these directions are frozen at $z_{\rm froz} \sim 0.5$, in such a way that only a few LVs change the eigenvectors of their total mass distribution at $z \le z_{\rm froz}$, while at $z \ge z_{\rm froz}$ more and more LVs do so. This behaviour is illustrated by Fig.~\ref{fig:angAi}, where the evolution of the $A_i(t)$ for a typical LV case is plotted.
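A minimal sketch of this measurement (with a synthetic particle set standing in for an LV; all numbers are illustrative, not the paper's) builds the reduced inertia tensor at two epochs and measures the angles between matching eigendirections, taking the $\pm\hat{e}_i$ sign ambiguity into account:

```python
import numpy as np

# Reduced inertia tensor I^r_ij = sum_p x_i x_j / r^2, and the angles A_i
# between matching eigendirections at two epochs (sign-insensitive).
def eigendirections(pos):
    r2 = np.sum(pos ** 2, axis=1)
    I_r = (pos[:, :, None] * pos[:, None, :] / r2[:, None, None]).sum(axis=0)
    w, v = np.linalg.eigh(I_r)          # ascending eigenvalues
    return v                            # columns: e_1 (min) ... e_3 (max)

rng = np.random.default_rng(1)
pos_z = rng.normal(size=(5000, 3)) * np.array([1.0, 1.5, 3.0])  # toy "LV"
theta = np.radians(20.0)                                        # rigid rotation
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])
pos_low = pos_z @ R.T

v_z, v_low = eigendirections(pos_z), eigendirections(pos_low)
cosA = np.clip(np.abs(np.sum(v_z * v_low, axis=0)), 0.0, 1.0)
A = np.degrees(np.arccos(cosA))
print(A)    # ~[20, 20, 0]: e_3 (here the z-axis) is left unchanged
```

Because the toy rotation is about the elongation axis, only $A_1$ and $A_2$ pick up the $20$\textdegree\ misalignment, mirroring the footnoted fact that only two of the three angles are independent.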
We observe that the $A_i(z)$ angles smoothly and gradually vanish before $t/t_{\rm U} = 1$, this behaviour being common to all the LVs. This is particularly interesting because, as we will see in Figs~\ref{fig:prinaxes} and \ref{fig:axisratios}, the evolution of the $I_{ij}^{\rm r}$ eigenvalues (or, equivalently, that of its principal axes of inertia $a, b, c$) also declines before $t/t_{\rm U} = 1$. \begin{figure} \includegraphics[width=8.4cm]{sroblesfig4} \caption{An example of the $A_i(t)$ evolution, where $A_i$ is the angle formed by the eigenvectors $ \hat{e}_i^{\rm tot}(z)$ and $\hat{e}_i^{\rm tot}(z_{\rm low})$, with $i=1, 2, 3$, and $t$ is given in terms of the age of the Universe ($t_{\rm U}$). } \label{fig:angAi} \end{figure} It is also important to investigate whether there exists a component effect in the freezing-out of the eigendirections. With this purpose, we have compared the directions of the principal axes of inertia that arise from the whole mass distribution with those derived from each component at different redshifts (see Fig.~\ref{fig:histangei}). We have found that the latter are mainly parallel to $\hat{e}_i^{\rm tot}$ in the DM and cold baryon cases. Concerning hot gas, the distribution of the angles, $\theta_i$, formed by $\hat{e}_i^{\rm tot}$ and $\hat{e}_i^{\rm hot~bar}$, the eigenvectors of the hot gaseous component, starts out nearly uniform and, as time elapses, a peak around $0$\textdegree~arises, as we can observe in Fig.~\ref{fig:histangei} for the $\hat{e}_1$ case. \begin{figure} \includegraphics[width=8.7cm]{sroblesfig5} \caption{Distributions of the angles formed, at several redshifts, by the direction of the $ \hat{e}_1^{\rm tot}(z)$ axis of inertia that arises from the overall matter distribution with the same axis calculated with the different components. } \label{fig:histangei} \end{figure} This means that DM dynamical evolution determines the preferred directions of LV stretching, and cold gas particles closely follow them.
Hot gas particles (in this case, as explained in $\S$\ref{FurtherEvol}, gaseous particles not trapped in singularities and heated by gravitational collapse), on the contrary, do not follow DM evolution at high redshifts, but they trace at any $z$ the locations where mass sticking events have taken place. Indeed, as explained in $\S$\ref{FurtherEvol}, gas gravitational heating is due to the transformation of the ordered flow energy into internal energy at CW element formation. \section[]{Anisotropic evolution: Shapes} \label{sec:results} Before we focus on the statistical analysis of our results, we present the shape evolution of some selected LVs in order to show how they acquire their filamentary or wall shape. Then, we analyse the shape evolution of all the objects in our sample, by considering component as well as mass effects. To that end, LVs are grouped according to their mass, $M$, into three bins: massive ($M\geq5\times10^{12} M_\odot$), intermediate mass ($5\times10^{11} M_\odot\leq M<5\times10^{12} M_\odot$) and low-mass LVs ($M<5\times10^{11} M_\odot$). \subsection{Two particular examples of shape evolution} \label{Shape-examples} In Fig.~\ref{fig:prinaxes}, we exemplify the evolution of the principal axes of the inertia ellipsoid for the LVs of Fig.~\ref{fig:lagvol}. The upper plot (LV on the left-hand side of Fig.~\ref{fig:lagvol}) illustrates an LV that has two axes that expand across time, i.e., it has a flat structure. The lower plot corresponds to the LV on the right-hand side of Figure~\ref{fig:lagvol} and portrays the case in which the major axis grows while the other two axes are compressed, consequently giving a prolate shape. This result can also be inferred from Fig.~\ref{fig:axisratios}, where we can see the evolution of the axis ratios $b/a$ and $c/a$ for the same LVs of Fig.~\ref{fig:prinaxes}.
In the lower plot of Fig.~\ref{fig:axisratios}, we observe that the two minor axes end up close to each other in length, therefore the LV has a filamentary structure. In the upper plot, in contrast, the minor axis is significantly shorter than the other two, hence the LV has an oblate shape. A remarkable result is the continuity of the $a(t), b(t)$ and $c(t)$ functions for all the LVs, with no mutual exchange of their respective eigendirections across evolution, i.e., the local skeleton is continuously built up, consistent with \citet{Hidding:2014}. \begin{figure} \includegraphics[width=8.5cm]{sroblesfig6} \caption{Evolution of the principal axes of inertia for two LVs. Top, LV on the left-hand side of Fig.~\ref{fig:lagvol}, with a wall-like structure. Bottom, LV on the right-hand side of Fig.~\ref{fig:lagvol}, which acquires a filamentary shape. } \label{fig:prinaxes} \end{figure} \begin{figure} \includegraphics[width=8.5cm]{sroblesfig7} \caption{Axis ratio evolution of the Lagrangian volumes of Fig.~\ref{fig:prinaxes}. The upper plot shows the evolution towards an oblate shape and the lower plot shows an LV that acquires a prolate shape. } \label{fig:axisratios} \end{figure} \subsection{Generic trends of shape evolution} \label{Shape-Evol} In this subsection, the generic trends of shape evolution are examined at a qualitative level. In Fig.~\ref{fig:axisratiosevol}, where the axis ratios are plotted, we can note that the selected LVs are gathered in the nearly spherical zone ($c/a\geq 0.8$) by construction, except for the hot gaseous component. As time elapses, LVs are deformed, and their evolution is shown as they move down inside the triangle described by the axes $b/a$, $c/a$ and the $T=1$ line (orange). Accordingly, at $z=0.05$ they tend to be spread over the triangle. Note that intermediate mass and low-mass objects evolve faster than the massive ones.
At $z_{\rm low}$, DM is preferentially located in the $T>0.3$ and $c/a<0.4$ region, therefore we end up with more prolate systems than oblate objects. This assertion is valid for the total, DM and cold baryon axis ratio evolution. In contrast, hot gas does not seem to show a remarkable evolution effect, as it appears populating roughly the same regions of the aforementioned triangle at redshifts $10, 5$ and $3$, and later on, excluding either the oblate area on the right or the prolate one at the bottom left corner of the triangle. \begin{figure*} \includegraphics[width=16.1cm]{sroblesfig8} \caption{Axis ratio evolution of all the selected LVs, where coloured circles indicate different mass ranges. Massive LVs with $M\geq5\times10^{12} M_\odot$ are represented in red, LVs with intermediate mass, $5\times10^{11} M_\odot\leq M<5\times10^{12} M_\odot$, in cyan and low-mass LVs, $M<5\times10^{11} M_\odot$, in blue. The orange line corresponds to $T=1$, i.e., to a prolate spheroidal shape. Objects with $c/a<0.9$ and $T>0.7$ (magenta line) have a prolate triaxial shape and LVs with $c/a<0.9$ and $T<0.3$ (green line) are oblate triaxial ellipsoids. We show the axis ratios obtained with the total number of particles, the axis ratios of DM particles, and the axis ratios found for cold and hot baryons. } \label{fig:axisratiosevol} \end{figure*} The shape evolution of the LV mass distribution is also shown in Fig.~\ref{fig:prolatellip}, where shape distortions are represented in the prolateness-ellipticity plane. In this case, LVs move inside the triangle bounded by the lines $e=p$ (prolate spheroids), $p=-e$ (oblate spheroids) and $p=3e-1$ (flat objects). We observe the same pattern as in Fig.~\ref{fig:axisratiosevol} for the total, DM and cold baryon components.
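The boundary lines of this triangle, together with the $T=1$ convention for prolate spheroids, can be checked against the standard shape-parameter definitions (assumed here, as the paper does not spell them out: with inertia eigenvalues $\lambda_1 \ge \lambda_2 \ge \lambda_3$ and $\Sigma = \lambda_1+\lambda_2+\lambda_3$, $e=(\lambda_1-\lambda_3)/2\Sigma$, $p=(\lambda_1-2\lambda_2+\lambda_3)/2\Sigma$, and $T=(a^2-b^2)/(a^2-c^2)$ for axes $a \ge b \ge c$):

```python
# Shape parameters from inertia eigenvalues l1 >= l2 >= l3 (assumed standard
# definitions) and triaxiality from principal axes a >= b >= c.
def shape_params(l1, l2, l3):
    s = l1 + l2 + l3
    return (l1 - l3) / (2 * s), (l1 - 2 * l2 + l3) / (2 * s)  # (e, p)

def triaxiality(a, b, c):
    return (a ** 2 - b ** 2) / (a ** 2 - c ** 2)

e_pro, p_pro = shape_params(4.0, 1.0, 1.0)    # prolate spheroid: l2 = l3
e_obl, p_obl = shape_params(4.0, 4.0, 1.0)    # oblate spheroid:  l1 = l2
e_fla, p_fla = shape_params(4.0, 1.0, 0.0)    # flat object:      l3 = 0

# Boundary lines of the triangle, and T = 1 for b = c (prolate spheroid):
print(p_pro - e_pro, p_obl + e_obl, p_fla - (3 * e_fla - 1),
      triaxiality(3.0, 1.0, 1.0))
```

With these definitions the three limiting cases fall exactly on the quoted lines $e=p$, $p=-e$ and $p=3e-1$, which is why the evolving LVs stay inside the triangle they bound.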
In other words, initially spherical systems, concentrated on one corner of the triangle, evolve across redshifts filling up the triangle, so that, at $z=0.05$, we end up with a high percentage of prolate triaxial objects, $\sim 83\%$ for the total inertia ellipsoid. We have also found that $\sim 91\%$ of the selected LVs have extreme total ellipticities ($e>0.5$), while only $8\%$ have moderate ones. A significant percentage of the analysed objects are extremely prolate, $\sim 31\%$, that is, they have a thin filament-like shape. At $z=0.05$, we can find systems close to the flat limit, especially in the case of cold baryons. As in the previous figure, hot gas does not present a remarkable evolution effect after $z=1$. At higher $z$, however, the hot gas in some LVs shows needle-like as well as flat shapes (see panels corresponding to $z=5$ and, to a lesser extent, $z=3$), but these shapes no longer appear at lower $z$. Figs~\ref{fig:axisratiosevol} and~\ref{fig:prolatellip} nicely show generic trends of shape evolution. More elaborate, quantitative analyses of component and mass effects are given in the next sub-sections. \begin{figure*} \includegraphics[width=16.1cm]{sroblesfig9} \caption{Prolateness-ellipticity plane for the reduced inertia tensor of the selected LVs for redshifts $10, 5, 3, 1$ and $0.05$. Massive LVs with $M\geq5\times10^{12} M_\odot$ are represented in red, LVs with intermediate mass, $5\times10^{11} M_\odot\leq M<5\times10^{12} M_\odot$, in cyan and low-mass LVs, $M<5\times10^{11} M_\odot$, in blue. The orange lines correspond to the limiting shapes, $e=p$ (prolate spheroids), $p=-e$ (oblate spheroids) and $p=3e-1$ (flat objects).
} \label{fig:prolatellip} \end{figure*} \subsection{Component effects} \label{sec:CompEff} In order to quantitatively determine if there is a component effect on the LV shape evolution (i.e., whether DM, hot and cold baryons behave dissimilarly), we represent the cumulative distribution function (CDF) of the $e, p$ and $T$ parameters in Figs.~\ref{fig:cumhistecomp} and \ref{fig:cumhistpTcomp}. Each row in Fig.~\ref{fig:cumhistecomp} shows the cumulative probability of the $e$ parameter calculated for DM, cold baryons, hot gas and the total components at a given redshift. The first column depicts the result obtained for all the LVs and the other columns display our findings split according to the binning in LV mass. As we can observe, the DM and cold baryonic components move from low ellipticities or high sphericities at high redshifts towards higher ellipticities at $z_{\rm low}$. As a result, these components acquire a filament-like structure (see Fig.~\ref{fig:cumhistecomp}). Note that cold baryons and DM exhibit approximately the same behaviour as time elapses. At $z_{\rm low}$, cold baryons are slightly more prolate than the DM component, especially in the case of low-mass LVs. On the other hand, the hot gaseous component experiences almost no evolution effect, as can be noted from the ellipticity CDFs in Fig.~\ref{fig:cumhistecomp}, whether or not we group the LVs according to their mass. Hot gas has had $\bar{e}\sim 0.57$ since $z=2$, and does not present any preference for either a spherical or a filamentary structure. \begin{figure*} \includegraphics[width=13cm]{sroblesfig10} \caption{Cumulative distribution function of the ellipticity parameter, $e$, portraying component effects and their evolution in different mass bins. Each column shows the distribution binned according to the LV mass. Plots in the first column are calculated for the total number of LVs. Rows represent different redshifts.
The code colour used in each plot is as follows: results obtained with the total reduced inertia tensor are presented in blue, DM results in magenta, cold baryons in green and hot gas in red.} \label{fig:cumhistecomp} \end{figure*} Similar conclusions can be extracted from the DM, cold baryon and hot gas prolateness CDFs (see first row of Fig.~\ref{fig:cumhistpTcomp}). In this case, hot gas has had a $\bar{p}$ ranging from $0.25$ to $0.34$ since $z=2$. An important difference with respect to the ellipticity CDFs is that at $z_{\rm low}$, hot gas cumulative probabilities show a small deviation from the cold baryon CDFs, which is larger in the low-mass bin, while in the $e$ case these components exhibit a large deviation from each other. \begin{figure*} \includegraphics[width=13cm]{sroblesfig11} \caption{Upper panels, CDF of the prolateness parameter, $p$, at $z_{\rm low}$. Lower panels, CDF of the triaxiality parameter, $T$, at $z_{\rm low}$. Each column shows the distribution binned according to the LV mass. Plots in the first column are calculated for the total number of LVs. The code colour is as in Fig.~\ref{fig:cumhistecomp}.} \label{fig:cumhistpTcomp} \end{figure*} Triaxiality CDFs show a tendency of cold baryons to have a prolate shape independently of the mass binning at $z=3$. We observe the same displacement of the DM and cold baryon CDFs across redshifts previously noted from the ellipticity and prolateness cumulative probabilities. Concerning hot gas, it has had a $\bar{T}$ in the range $0.69 - 0.76$ since $z=2$, showing almost no changes thereafter. This displacement causes the difference between the DM, cold and hot baryon CDFs to appear greatly diminished at $z=1$. This fact can also be noticed from the ellipticity and prolateness CDFs. It is noteworthy that the cold baryon triaxiality cumulative probability of the massive LV bin is delayed with respect to the DM CDF at $z=1$.
This difference is kept at $z=0.05$ (see lower panels of Fig.~\ref{fig:cumhistpTcomp}); this is also true for the prolateness case. On the contrary, at $z_{\rm low}$ the DM CDF appears delayed with respect to that of cold baryons for the low-mass bin. \subsection{Mass effects} To study the impact of the LV mass on its shape deformation, we plot in Figs~\ref{fig:cumhistemass} and~\ref{fig:cumhistpTmass} the CDFs split by the component considered in the reduced inertia tensor calculation. From left to right, the columns show results obtained with all the particles, with only DM particles, with cold baryons and, finally, with hot gas. Rows in Fig.~\ref{fig:cumhistemass} show cumulative probabilities at different redshifts. Each panel presents the CDFs calculated according to the binning in LV mass: massive-object CDFs are shown in magenta, intermediate-mass results in cyan and low-mass CDFs in blue. \begin{figure*} \includegraphics[width=13cm]{sroblesfig12} \caption{Cumulative distribution function of the ellipticity parameter, $e$, illustrating mass effects and their evolution according to the LV components. Each column displays the distribution binned according to the components taken into account to calculate the reduced inertia tensor, namely, the total number of particles, DM, cold baryons and hot gas. Rows represent different redshifts. In each plot, massive LVs ($M\geq5\times10^{12} M_\odot$) are shown in magenta, LVs with an intermediate mass ($5\times10^{11} M_\odot\leq M<5\times10^{12} M_\odot$) in cyan and low-mass LVs ($M<5\times10^{11} M_\odot$) in blue.} \label{fig:cumhistemass} \end{figure*} In the first place, we discuss the ellipticity CDFs in Fig.~\ref{fig:cumhistemass}. As we can observe, the mass effects are not very relevant and, moreover, they barely evolve. The most important mass effects appear in cold baryons at any $z$.
Indeed, the massive and low-mass LV samples at $z=3$ and $1$ have been determined to be drawn from different populations with the two-sample Kolmogorov--Smirnov test at $90\%$ CI, while the massive and intermediate-mass LV samples at $95\%$ CI at $z=3, 1$ and $0.05$. In general, massive LVs tend to be more spherical across redshifts, and they have a narrower $e$ distribution than less massive ones. \begin{figure*} \includegraphics[width=13cm]{sroblesfig13} \caption{Upper panels, CDF of the prolateness parameter, $p$. Lower panels, CDF of the triaxiality parameter, $T$. From left to right the columns show the distribution binned according to the components taken into account to calculate the reduced inertia tensor, i.e., the total number of particles, DM, cold baryons and hot gas. The code colour in each plot is as in Fig.~\ref{fig:cumhistemass}.} \label{fig:cumhistpTmass} \end{figure*} In the prolateness case, the mass effects grow with time, except in the hot gaseous component. Hot gas, independently of the mass binning, is less spherical than the other components at $z=3$. At $z=1$, massive LVs are more spherical than the less massive ones for both DM and cold baryons. The mass effect is less pronounced in the case of hot gas. At $z_{\rm low}$ the tendency described above is maintained (see upper panels in Fig.~\ref{fig:cumhistpTmass}). The $p$ distribution in massive LVs is narrower than those in the other mass bins and it becomes wider faster in the low-mass bin. Regarding triaxiality CDFs, again mass effects grow with evolution, mainly in the DM component (see lower panels in Fig.~\ref{fig:cumhistpTmass}). We can also note that in both the total and the DM cases, there are almost no systems with $T<0.6$; specifically, there is a lack of oblate massive objects relative to the other mass groups. We have tested the difference between the massive and the low-mass bins with the two-sample Kolmogorov--Smirnov test at a $90\%$ CI.
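The two-sample Kolmogorov--Smirnov comparison used above can be sketched as follows (the samples are synthetic placeholders, not the measured $e$, $p$ or $T$ values):

```python
import numpy as np
from scipy.stats import ks_2samp

# Two-sample KS test between two mass bins (toy, normally drawn samples).
rng = np.random.default_rng(0)
e_massive = rng.normal(0.55, 0.05, size=60)   # hypothetical massive-bin e
e_lowmass = rng.normal(0.65, 0.10, size=90)   # hypothetical low-mass-bin e

stat, pvalue = ks_2samp(e_massive, e_lowmass)
# At 90% confidence, "drawn from the same population" is rejected
# when pvalue < 0.10 (at 95%, when pvalue < 0.05).
print(stat, pvalue, pvalue < 0.10)
```

The statistic is the maximum distance between the two empirical CDFs, which is why the visibly displaced CDFs in the figures translate directly into significant test results.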
This mass effect is less significant in the baryon case. Indeed, cold baryons do not present a significant mass effect; only less massive LVs tend to be more oblate than the more massive bins at $z=3$ and $1$. Summing up, except for the hot gas component, more massive LVs tend to evolve slightly more slowly from their initial spherical shape than less massive ones. This can be interpreted in terms of the CW dynamics as follows: more massive objects would appear more frequently in nodes of the CW, versus less massive objects being present in filaments and walls. Therefore, the relative importance of anisotropic mass rearrangements versus radial ones is lower in massive than in less massive LVs. Concerning the hot gas component, no relevant evolution has been detected, particularly after $z \sim 3$, indicating that neither the possible anisotropic flows towards singularities, nor the possible pressure-induced anisotropic outflows, have caused measurable LV mass rearrangements in the LV sample thereafter. \section{Freezing-out of eigendirections and shapes} \label{sec:Percola} \subsection{Freezing-out times} \begin{figure} \includegraphics[width=8.4cm]{sroblesfig14} \caption{Histograms for $t_{\delta A}^{\rm max}$, $t_{\delta A}^{\rm min}$, $t_{f}^{\rm max}$ and $t_{f}^{\rm min}$, defined with $\cos (\delta A_i) = 0.9$ and $f=0.1$. } \label{fig:Htmax-tmin} \end{figure} In the previous sections, we have become aware that the $A_i(z)$ angles, $i=1,2,3$, evolve with time and tend to $0$\textdegree\ before $z_{\rm low}$. We recall that $A_i(z)$ is the angle formed by the eigenvectors $ \hat{e}_i^{\rm tot}(z)$ and $\hat{e}_i^{\rm tot}(z_{\rm low})$, where `$\rm tot$' stands for the eigenvectors of the $I_{ij}^{\rm r}$ tensor corresponding to the total mass of the LV. Also, the evolution of the LV inertia ellipsoid declines in the same limit, see Figs~\ref{fig:angAi} and~\ref{fig:prinaxes}.
In this section, we focus on the times when these eigendirections and inertia axes become frozen. We have calculated these freezing times in order to study and compare both processes and to look for possible mass effects. This is interesting in order to elucidate how and when the local CW around galaxies-to-be becomes frozen at the scales analysed in this paper, while it still feeds the protogalaxies at smaller scales. Having the $A_i(z)$ angles $\sim0$\textdegree \ over a redshift interval ending at $z_{\rm low}$ means that the LV deformations become fixed in their eigendirections before $z_{\rm low}$, or, in other words, that mass rearrangements are thereafter organised in terms of frozen symmetry axes making the inertia tensor diagonal, i.e., in terms of a skeleton-like structure. This motivates the search for the moment when a given LV gets its structure frozen. This is not a straightforward issue, however, because this situation is reached gradually: all we can do is to resort to thresholds. In the following, we use time instead of $z$ in order to make our results clearer. Given a threshold angle $\delta A_i$, we define $t_{\delta A_i}$ as the time (Universe age at the event in units of the current Universe age $t_{\rm U}$) such that $A_i(t) \le \delta A_i$ for $t \ge t_{\delta A_i}$ (i.e., the Universe age when the $i$th eigendirection of the inertia tensor becomes fixed within an angle $\delta A_i$). Then, we define $t_{\delta A}^{\rm max}$ and $t_{\delta A}^{\rm min}$ as the maximum and minimum values of $t_{\delta A_i}$, $i=1,2,3$, for each LV. That is, $t_{\delta A}^{\rm max}$ for a given LV is the fractional time when the directions of its {\it three} eigenvectors become frozen, or, symbolically, $A_i(t) \le \delta A_i$ for $t \ge t_{\delta A}^{\rm max}$ in every direction\footnote{Note that the second and the third eigendirections become frozen at the same time.}. The minimum, $t_{\delta A}^{\rm min}$, satisfies the same condition for just one direction.
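The freezing-time definition above can be sketched for eigendirections sampled at discrete snapshots. This is an illustrative sketch, not the authors' code; `times` are assumed to be Universe ages in units of $t_{\rm U}$ and `angles_deg` the $A_i(t)$ values in degrees at those snapshots.

```python
import math

# Illustrative sketch (not the authors' code) of t_{dA_i}: the earliest
# sampled time after which the angle A_i(t) stays below the threshold.

def freezing_time(times, angles_deg, delta_deg):
    """Earliest sampled time t such that the angle stays <= delta_deg at
    all later snapshots; None if the angle never settles."""
    t_freeze = None
    # Walk backwards from t_low; stop at the first violation.
    for t, a in zip(reversed(times), reversed(angles_deg)):
        if a > delta_deg:
            break
        t_freeze = t
    return t_freeze

def t_delta_max_min(times, angle_series, cos_delta=0.9):
    """t_dA^max and t_dA^min over the three eigendirections of one LV."""
    delta_deg = math.degrees(math.acos(cos_delta))  # ~25.8 deg for 0.9
    ts = [freezing_time(times, a, delta_deg) for a in angle_series]
    return max(ts), min(ts)
```

The backward scan makes the "for all $t \ge t_{\delta A_i}$" part of the definition explicit: a late excursion above the threshold pushes the freezing time later.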
Fig.~\ref{fig:Htmax-tmin} (upper plots) shows the distributions of $t_{\delta A}^{\rm max}$ and $t_{\delta A}^{\rm min}$ for our sample of 206 LVs, with $\delta A_i$ such that $\cos (\delta A_i) = 0.9$. A very interesting point is to explore LV shape transformations relative to the freeze-out times for the inertia eigendirections. An illustration can be found in Figs~\ref{fig:angAi} and \ref{fig:prinaxes}. Comparing both figures, we see that the principal axes change only slightly after skeleton emergence for the particular LVs considered there, using a $10\%$ threshold (see below). The differences are larger for other LVs, and, indeed, it is worth analysing this issue in more detail. Therefore, to be more quantitative, we define $t_{f, a}$ as the fractional time when the inertia axis $a$ becomes frozen within a threshold $f_a$, which is a fixed fraction of the $a(t)$ value, i.e. $\Delta a(t) \le f_a $ for $t \ge t_{f, a}$, where $\Delta a(t) \equiv \frac{\mid a(t) - a(t_{\rm low})\mid}{a(t_{\rm low})} $. Similarly, we define $t_{f, b}$ and $t_{f, c}$, and then $t_{f}^{\rm max}$ and $t_{f}^{\rm min}$. The former is the time when the three inertia axes become frozen, while the latter is the time when just one axis gets frozen\footnote{Again, once the value of one principal axis becomes fixed, the freezing times for the other two axes are the same.}. To gain insight into the statistical behaviour of these times, the histograms for $t_{f}^{\rm max}$ and $t_{f}^{\rm min}$ are shown in Fig.~\ref{fig:Htmax-tmin} (lower plots) for $f = 0.1$. In this figure, right-hand (left-hand) panels correspond to the times when one (three) of the eigenvectors or principal inertia axes become fixed within $10\%$ of their final values.
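The axis freezing times $t_{f,a}$, $t_{f,b}$, $t_{f,c}$ defined above admit an analogous sketch. Again this is illustrative code under our own naming, assuming the three axis histories $a \ge b \ge c$ are sampled at the same snapshots as the angles.

```python
# Illustrative sketch (not the authors' code) of the axis freezing
# times: Delta a(t) = |a(t) - a(t_low)| / a(t_low) must stay below the
# threshold f from t_f onwards.

def axis_freezing_time(times, axis_vals, f=0.1):
    """Earliest sampled time after which the axis stays within a
    fraction f of its final value a(t_low); None if it never settles."""
    a_final = axis_vals[-1]
    t_freeze = None
    for t, a in zip(reversed(times), reversed(axis_vals)):
        if abs(a - a_final) / a_final > f:
            break
        t_freeze = t
    return t_freeze

def t_f_max_min(times, axis_series, f=0.1):
    """t_f^max (all three axes frozen) and t_f^min (first axis frozen)
    for one LV, given the histories of its three inertia axes."""
    ts = [axis_freezing_time(times, vals, f) for vals in axis_series]
    return max(ts), min(ts)
```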
\begin{figure} \includegraphics[width=8.4cm]{sroblesfig15} \caption{CDFs for the same quantities as in the previous figure, showing possible mass effects.} \label{fig:CumHist_mass_effect} \end{figure} An interesting result is that the time range for $t_{f}^{\rm max}$ is narrow and late. The range of $t_{\delta A}^{\rm max}$ is much wider, which means that a high fraction of LVs have their three eigendirections fixed already at high $z$, before the evolution of their inertia axes ends. During this early time interval, LVs change their shape with frozen symmetry axes, i.e., through anisotropic matter inflows onto CW elements. Another result is the $t_{f}^{\rm min}$ accumulation in the first bin of the evolution time: these are the systems having a principal axis of inertia that stays within $10\%$ of its final value along the whole evolution. They are less prolate than other systems. An even higher fraction of LVs have one of their eigendirections fixed in the first $5\%$ of the evolution time (see Fig.~\ref{fig:Htmax-tmin}.b). A high fraction of systems thus have one frozen eigendirection while none of their principal inertia axes is fixed yet. However, at the end of the evolution this effect vanishes (compare Figs \ref{fig:Htmax-tmin}.b and \ref{fig:Htmax-tmin}.d). Finally, let us mention that LVs also spend an important fraction of their lives with one but not three fixed eigendirections (within the thresholds used to draw these figures; compare Figs \ref{fig:Htmax-tmin}.a and \ref{fig:Htmax-tmin}.b), or with one but not three frozen inertia axes (compare Figs \ref{fig:Htmax-tmin}.c and \ref{fig:Htmax-tmin}.d). \subsection{Mass effects} \label{MEffFreez} Next, we look for mass effects in the distributions of $t_{\delta A}^{\rm max}$ and $t_{\delta A}^{\rm min}$, as well as in those of $t_{f}^{\rm max}$ and $t_{f}^{\rm min}$. This is more clearly visualised in terms of cumulative histograms.
In Fig.~\ref{fig:CumHist_mass_effect}, we plot the CDFs for $t_{\delta A}^{\rm max}$ and $t_{\delta A}^{\rm min}$ (i.e., LV eigendirections relative to their final values, first row) and for $t_{f}^{\rm max}$ and $t_{f}^{\rm min}$ (principal inertia axes, second row), respectively, where no binning has been used. To analyse possible mass effects, results for the three mass groups are shown in each panel. The cumulative histograms in the four panels of this figure are in one-to-one correspondence with the histograms in Fig.~\ref{fig:Htmax-tmin}. The first outstanding result is that the time range for $t_{f}^{\rm max}$ is roughly the same (narrow and late), irrespective of the mass range used (Fig.~\ref{fig:CumHist_mass_effect}.c). This behaviour can be understood as a consequence of $\frac{d D_{+}(t)}{d t} \rightarrow 0$ at late times, a global effect causing anisotropic flows to vanish; see $\S$\ref{ZAImpli} for more details. Nevertheless, there exists a mass effect in $t_{\delta A}^{\rm max}$ (Fig.~\ref{fig:CumHist_mass_effect}.a), with the least massive LVs showing a delay in the spine emergence, or in getting their three eigendirections frozen, with respect to more massive ones, the differences being more marked at early times. This is somewhat expected from the previous discussion, in $\S$\ref{ZAImpli}, on the effects of the eigenvalue landscape heights on the timing of spine emergence. Fig.~\ref{fig:CumHist_mass_effect}.b exhibits strong mass effects too. Indeed, at early times the most massive systems get one of their three eigendirections frozen sooner than less massive ones. In fact, $\sim95\%$ of the massive LV subsample has one of its eigendirections fixed at $t/t_{\rm U} \simeq 0.1$. This mass segregation can be understood in the light of the considerations made in $\S$\ref{ZAImpli}, where we concluded that the first CW elements tend to appear and percolate earlier within massive LVs than within less massive ones.
On the other hand, the freezing-out times for the principal axes of inertia (panel \ref{fig:CumHist_mass_effect}.d) display a remarkable mass effect, although only at early times. Later on, irrespective of their mass, no LV gets its first principal axis of inertia fixed later than $t/t_{\rm U} \simeq 0.55$. This upper bound on $t_{f}^{\rm min}$ might be a consequence of both the $\frac{d D_{+}(t)}{dt} \rightarrow 0$ behaviour after the $\Lambda$ term dominates the Universe expansion, and the fact that flows towards walls are the first to vanish at a local level. The mass effect consists in massive systems having their $t_{f}^{\rm min}$ delayed at early times relative to less massive ones (consistently with what was found in $\S$ \ref{Shape-Evol}), the difference vanishing at $z \sim 1$. Finally, to look for correlations, the $t_{f}^{\rm max}$ and $t_{f}^{\rm min}$ values for our sample of LVs are plotted versus their respective $t_{\delta A}^{\rm max}$ and $t_{\delta A}^{\rm min}$ in Fig.~\ref{fig:tmax-tminPlot}, for $f=0.1$ and $\cos(\delta A_i) = 0.9$. No outstanding correlation exists in any case, but we see that, indeed, most systems have their eigendirections fixed before their principal axes become frozen. Summing up, we observe that, on average, the eigendirections (either one or all three) of massive LVs become fixed at earlier stages than those of less massive LVs. Nevertheless, no relevant mass effects are found for the principal inertia axis freezing times. In addition, eigendirections in general become fixed before the mass flows onto the corresponding CW elements stop, the time delay being particularly long for the first eigendirection relative to the first principal axis in massive systems. Thus, the first eigendirection in massive systems gets fixed quite a while before the accretion onto it stops.
\begin{figure} \includegraphics[width=8.4cm]{sroblesfig16} \caption{Scatter plots of $t_{f}^{\rm max}$ versus $t_{\delta A}^{\rm max}$ (left) and $t_{f}^{\rm min}$ versus $t_{\delta A}^{\rm min}$ (right). } \label{fig:tmax-tminPlot} \end{figure} \section{Discussion: Possible Scale Effects} \label{subsec:scaleeffects} In Section \ref{subsec:methods}, when describing how to build up the LV sample, a value of $R_{\rm high} = K\times r_{\rm vir, low}$ with $K = 10$ was chosen to define the LVs at $z_{\rm high}$. As explained there, this choice was motivated as a compromise between low $K$ values, ensuring a higher number of LVs in the sample, and high $K$ values, ensuring LVs with a high enough number of particles to be meaningful. However, $K = 10$ is by no means the unique value that satisfies these constraints. Therefore, it is important to test the possible effects of changing this value under the same constraints. To this aim, we have repeated all the calculations using $K = 7.5$ and $15$. The LV build-up (see Section 2.2) has been repeated with the same SKID-identified haloes at $z_{\rm low}$ as the first step. Nonetheless, when $K = 15$ is used, some of the LVs no longer satisfy the condition of having all their particles inside the hydrodynamically zoomed volume. These particular LVs have been removed from the initial sample of 206 LVs, in such a way that we are finally left with 159 LVs for $K = 15$. This problem does not exist when using $K = 7.5$; however, to probe the scale effects, we need samples that contain the same $z_{\rm low}$ SKID-identified haloes as a starting point at the three scales. Therefore, only these 159 well-behaved LVs (a subset of the initial $K = 10$ sample) have been used to analyse the scale effects. The first relevant outcome is that there is no substantial difference when the results obtained with the subsample of 159 LVs and with the sample used throughout this paper (206 LVs) for $K = 10$ are compared.
In the following subsections, we compare the results obtained with each of the three samples of 159 LVs, dubbed according to their $K$ values as $K_{7.5}$, $K_{10}$ and $K_{15}$. \subsection{Effects on eigenvector orientation evolution} Concerning the evolution across redshifts of the $I_{ij}^{\rm r}$ eigendirections relative to their final values at $z_{\rm low}$ (Fig.~\ref{fig:direc-evol}), no relevant differences have been found between the histograms obtained with the $K_{15}$ and $K_{10}$ samples at the same redshifts. Fig.~\ref{fig:histangAi_scale} illustrates this behaviour, showing that the $A_1$ angle distributions for $K_{15}$ are similar to those found with $K_{10}$ at different redshifts; see $\S$\ref{sec:EscFree} for more details. In addition, no scale effects appear in the angles formed by the eigenvectors $\hat{e}_i^{\rm tot}(z)$, $i=1,2,3$, arising from the overall matter distribution, with the same eigenvectors calculated with the different components (i.e., those angles whose distribution for the sample of 206 objects is given in Fig.~\ref{fig:histangei}). \begin{figure} \centering \includegraphics[width=7.5cm]{sroblesfig17} \caption{Histograms of the $A_1$ distribution at different redshifts for the $K_{10}$ and $K_{15}$ samples (left- and right-hand columns, respectively). } \label{fig:histangAi_scale} \end{figure} \subsection{Effects on shape evolution} To gain further insight, the 159 LV subsample has been split according to the LV masses. In order to ensure that we are comparing the same mass bins at the three scales, we have mapped the LVs belonging to the three mass ranges defined for the $K_{10}$ sample to the corresponding LVs of the $K_{15}$ and $K_{7.5}$ scales. Important results concerning shape evolution are as follows.
\begin{enumerate} \item No relevant differences in the evolution patterns have been found for the least massive LV group ($M<5\times10^{11} M_\odot$ in the $K_{10}$ sample) when followed in the $K_{15}$, $K_{10}$ and $K_{7.5}$ samples (see Fig.~\ref{fig:shape_scale}, blue lines). That is, these LVs are hardly sensitive to the $K$ scale in their evolution. The scale effects are only slight between the $K_{15}$ and $K_{10}$ samples when no mass splitting of the LV sample is performed (see Fig.~\ref{fig:shape_scale}, black lines). \item LVs in the massive group are sensitive to the $K$ scale, with the $K_{7.5}$ sample showing particular differences. Fig.~\ref{fig:shape_scale} is an example of such behaviour, likely due to walls, whose formation is better sampled with $K_{15}$. Also, walls are more frequent in massive LVs. See $\S$\ref{sec:EscFree} for more details. \item In any case, the qualitative results reached in $\S$ \ref{sec:results} about component effects in shape deformations are stable when comparing the $K_{15}$ and $K_{10}$ samples. \end{enumerate} \begin{figure} \includegraphics[width=8.4cm]{sroblesfig18} \caption{CDFs of the ellipticity at $z_{\rm low}$ portraying mass effects obtained with the three different scales, $K_{7.5}$, $K_{10}$ and $K_{15}$. } \label{fig:shape_scale} \end{figure} \begin{figure} \includegraphics[width=8.4cm]{sroblesfig19} \caption{Histograms for $t_{\delta A}^{\rm max}$, $t_{\delta A}^{\rm min}$, $t_{f}^{\rm max}$ and $t_{f}^{\rm min}$, defined with $\cos (\delta A_i) = 0.9$ and $f=0.1$. Columns show the results obtained for the three samples, $K_{7.5}$, $K_{10}$ and $K_{15}$. } \label{fig:freezingout_scale} \end{figure} \begin{figure} \includegraphics[width=8.4cm]{sroblesfig20} \caption{CDFs of $t_{\delta A}^{\rm max}$, $t_{\delta A}^{\rm min}$, $t_{f}^{\rm max}$ and $t_{f}^{\rm min}$ at different $K$ scales.
} \label{fig:freezingout_masseff_scale} \end{figure} \subsection{Effects on freezing-out times} \label{sec:EscFree} Fig.~\ref{fig:freezingout_scale} shows the histograms of the $t_{\delta A}^{\rm max}$, $t_{f}^{\rm max}$, $t_{\delta A}^{\rm min}$ and $t_{f}^{\rm min}$ times for the samples at different $K$ scales. It is clear from this figure that, while the results for the $K_{15}$ and $K_{10}$ samples are roughly consistent with each other, those for the $K_{7.5}$ sample differ. The only exception is the $t_{f}^{\rm max}$ time distribution (second row), whose pattern is the same at any scale, namely rather late and peaked. Recall that $t_{f}^{\rm max}$ is the time when the three inertia axes are fixed to within $10\%$ of their final values, i.e., the time when all anisotropic fluxes stop. This behaviour can be understood as a consequence of $\frac{d D_{+}(t)}{d t} \rightarrow 0$ at late times, which is a global effect. A key point for understanding some aspects of the behaviour in Fig.~\ref{fig:freezingout_scale} is the fact that the $K_{7.5}$ scale is too short to suitably sample the whole process of wall formation within some LVs. As a consequence, since the first flows to vanish are those towards walls (see $\S$~\ref{ZAImpli}), the $t_{f}^{\rm min}$ time (when the first inertia axis is fixed to within $10\%$ of its final value) is delayed at high $z$ in the $K_{7.5}$ sample, as observed in Fig.~\ref{fig:freezingout_scale}, fourth row. A remarkable result is that, irrespective of the $K$ scale, no LV has its first inertia axis frozen later than $t/t_{\rm U} \simeq 0.55$. This result reinforces our interpretation, given in $\S$ \ref{MEffFreez}, that this effect is, at least partially, a consequence of the $\frac{d D_{+}(t)}{dt} \rightarrow 0$ tendency at later times.
The process of wall formation could also be the reason for the similarities and differences found in the distributions of the $t_{\delta A}^{\rm min}$ times (when the first eigenvector direction becomes fixed within the angular threshold). The panels in the third row of Fig.~\ref{fig:freezingout_scale} show that these distributions are always peaked towards very early times, meaning that the $\hat{e}_3$ eigenvector of the $I^{r}_{ij}$ tensor of some LVs freezes its direction very early, following wall formation. In addition, we see that, as we move from $K_{15}$ to $K_{10}$ to $K_{7.5}$, a delay appears, less relevant between the $K_{15}$ and $K_{10}$ samples. Again, this can be interpreted in terms of the inadequacy of the shorter scale to properly capture the characteristics of wall formation in some LVs. Finally, we address the scale effects on $t_{\delta A}^{\rm max}$ (first row of Fig.~\ref{fig:freezingout_scale}). These are the times when the LV orientations become frozen within the $\cos(\delta A_i)=0.9$ threshold, i.e., the times marking the skeleton emergence locally within each LV. While this distribution is rather peaked at early times for both the $K_{15}$ and the $K_{10}$ samples, it flattens as we go to $K_{7.5}$. Once again, the poor wall formation sampling in most $K_{7.5}$ LVs is likely to be the cause of this difference. It is worth noting that the qualitative features found in $\S$ \ref{sec:Percola} are stable under the change in $K$. For instance, mass effects can be analysed from Fig.~\ref{fig:freezingout_masseff_scale}, where we show the $t_{\delta A}^{\rm max}$, $t_{f}^{\rm max}$, $t_{\delta A}^{\rm min}$ and $t_{f}^{\rm min}$ mass-binned CDFs (first, second, third and fourth rows, respectively) at different scales (columns).
Then, we can note that, regardless of the $K$ value, the $t_{\delta A}^{\rm max}$ and $t_{\delta A}^{\rm min}$ distributions show qualitatively similar mass effects, with the most massive LV group fixing either one or all three of their eigendirections earlier than LVs in the intermediate or less massive groups (as expected). Moreover, the $t_{f}^{\rm max}$ distribution does not show relevant mass effects whatever the scale considered. Finally, irrespective of the scale, the $t_{f}^{\rm min}$ distributions do not show relevant mass effects after $t/t_{\rm U} \simeq 0.4$, as expected from the previous analyses. At early times, some mass segregation is found and, furthermore, it qualitatively depends on the scale. This is the only exception to the stability under the change in $K$. These results could reflect the difficulty of catching the end of the mass flows in only one direction when the contribution of wall formation is combined with mass effects. Summing up, the differences in the freezing-out times are not very relevant when using the $K_{15}$ or the $K_{10}$ samples. Their distributions show similar patterns, in particular when mass effects are considered. \section{Summary and conclusions} \label{sec:conclusions} In this paper, we have presented a detailed analysis of the local evolution of 206 Lagrangian Volumes (LVs) selected at high redshift around proto-galaxies. These galaxies have been identified at $z_{\rm low} =0.05$ in a large-volume hydrodynamical simulation run in a $\Lambda$CDM cosmological context, and they span a mass range of $1 - 1500 \times 10^{10} M_\odot$. We follow the dynamical evolution of the density field inside these initially spherical LVs from $z_{\rm high}=10$ down to $z_{\rm low}=0.05$, witnessing mass rearrangements within them that lead to the emergence of a highly anisotropic, complex, hierarchical organisation, i.e., the {\it local} cosmic web (CW).
Indeed, at $z_{\rm low}$ LVs acquire overall anisotropic shapes as a consequence of mass inflows onto singularities along cosmic evolution, in such a way that some relevant aspects of these mass arrangements can be described in terms of the evolution of the reduced inertia tensor $I_{ij}^r$, as given by its principal directions and inertia axes, $ a \ge b \ge c$. Our analysis focuses on the evolution of the principal axes of inertia and their corresponding eigendirections, paying particular attention to the times when the evolution of these two structural elements declines. In addition, mass and component effects (either DM, cold or hot baryons) along this process have also been investigated. In broad terms, we have found that local LV evolution follows the predictions of the Zeldovich Approximation \citep[ZA, ][]{Zeldovich:1970} and the Adhesion Model \citep[AM, ][]{Gurbatov:1984,Gurbatov:1989,Shandarin:1989,Gurbatov:1991,Vergassola:1994} when both caustic dressing \citep{Dominguez:2000} and mutual gas versus CW effects \citep[see Section 3 and][]{DT:2011,Metuki:2014} are taken into account. Evolution also entails baryon transformation into stars inside the densest regions of the web and gravitational gas heating following collapse. More specifically, our main results are as follows. Dark matter dynamically dominates the LV shape deformations over the baryonic component, as expected from hierarchical structure formation. Deformations transform most of the initially spherical LVs into prolate shapes, i.e., filamentary structures, in good agreement with previous findings \citep{AragonCalvoa:2010,Cautun:2014}. Cold baryons follow the DM behaviour in general, but with some departures from it, departures that grow as evolution proceeds.
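The shape diagnostics underlying this summary can be sketched as follows. This is an illustrative sketch only: unit particle weights are assumed (the paper's exact weighting and normalisation of $I_{ij}^r$ are not reproduced here), and the triaxiality convention $T = (a^2-b^2)/(a^2-c^2)$ is a common one, not necessarily the paper's.

```python
# Illustrative sketch of reduced-inertia-tensor shape diagnostics.
# Assumptions not taken from the paper: unit particle weights and the
# common triaxiality convention T = (a^2 - b^2) / (a^2 - c^2).

def reduced_inertia_tensor(positions):
    """I_ij^r = sum_n x_i x_j / r^2 over particles at `positions`
    (coordinates relative to the LV centre). With unit weights the
    trace equals the number of non-central particles."""
    I = [[0.0] * 3 for _ in range(3)]
    for x in positions:
        r2 = x[0] ** 2 + x[1] ** 2 + x[2] ** 2
        if r2 == 0.0:
            continue  # skip a particle sitting exactly at the centre
        for i in range(3):
            for j in range(3):
                I[i][j] += x[i] * x[j] / r2
    return I

def triaxiality(a, b, c):
    """T from the sorted axes a >= b >= c: T -> 1 for prolate (needle)
    shapes, T -> 0 for oblate (pancake) ones."""
    return (a * a - b * b) / (a * a - c * c)
```

The axes $a \ge b \ge c$ would then be proportional to the square roots of the sorted eigenvalues of $I_{ij}^r$, obtained with any symmetric $3\times3$ eigensolver.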
Accordingly, the number of LVs having their cold-baryonic principal axes in directions that differ from the ones calculated with their DM content is negligible at $z_{\rm high}$, and it remains low along the evolution, although it increases with time ($\sim 25\%$ at $z_{\rm low}$). By contrast, the hot gas eigendirections have a flatter distribution at $z_{\rm high}$, and then they tend to converge to those calculated with DM. However, only $\sim$half of them reach such convergence at $z_{\rm low}$. This tendency towards convergence is due to the fact that the hot gaseous component traces the locations where sticking events, in particular filament and node formation, have taken place. The mass fraction involved in these processes increases with evolution, and consequently we expect a tendency of the hot gas to become aligned with the total eigendirections. In terms of shape evolution, a clear component effect has been found regarding the way the evolution occurs. In fact, hot gas shapes do not exhibit important evolution because, as said above, gravitationally heated gas marks out the places where sticking events have taken place, and because, in addition, no evidence for important anisotropic mass rearrangements in this component has been found in this paper. The only remarkable effect is that the needle-like or flat shapes shown by hot gas in some LVs around $z=5$ are transformed at lower redshifts. As mentioned before, DM and cold-baryon shapes do evolve, with cold baryons achieving an even more pronounced filamentary structure than DM as a consequence of dissipation. Additionally, some mass effects have also been found in the generic evolution of shapes, with lower mass LVs evolving towards more pronounced filamentary structures on average, and earlier, than the more massive ones. A remarkable result of our analyses is that the evolution of LV deformations declines.
This means that both the LV eigendirections and their principal axes of inertia ($a, b$ and $c$ values) become roughly constant before $z_{\rm low}$. This is a smooth effect that can only be defined in terms of thresholds. Taking $10\%$ of the final values, the shape (i.e., $a, b$ and $c$ values) freezing-out time distribution has a narrow peak ($\sim 0.2$ on each side) around $t/t_{\rm U}=0.8$. This happens later than the freezing-out times for the three LV eigendirections, whose distribution peaks around $t/t_{\rm U}=0.1$ and is then flat until $t/t_{\rm U} \sim 0.8$, when it decays. By plotting the individual freezing times for shapes and eigendirections, respectively (see Fig.~\ref{fig:tmax-tminPlot}.a), we note that most LVs first fix their three symmetry axes (like a skeleton), and only later are their shapes fixed. This result is in good agreement with the findings of \citet{vanHaarlem:1993,vandeWeygaert:2008,Cautun:2014} and \citet{Hidding:2014}. Moreover, the ZA and the AM predict that walls, filaments and nodes undergo mass flows from underdense regions to denser environments, flows that continue after skeleton emergence. As a general consideration, it has been found that mass rearrangements at the scales taken into account have always been highly anisotropic. Therefore, the mass streaming towards walls and filaments has been extremely anisotropic, and, to a lesser extent, towards nodes as well. In particular, galaxy systems form in environments that have a rigid spine at scales of a few Mpc, from which a high fraction of the mass elements that feed protogalaxies are collected. Due to anisotropic mass accretion, it turns out that, in general, the direction of just {\it one} of the LV eigenvectors, or the value of {\it one} of their axes, gets frozen while the other two continue changing.
Again, for each LV there is a time delay between the moment when the first of its eigendirections gets fixed (happening within the first $20\%$ of the Universe age) and the moment when the value of one of its principal axes becomes constant (peaking around $t/t_{\rm U}=0.35$). Therefore, we again find a situation where first the flow direction is fixed (as a first piece in the skeleton emergence) while the mass flows persist. Even more interesting, because of its possible astrophysical implications (see the discussion below), is our finding that more massive LVs fix their skeleton earlier than less massive ones, whether considering just one or all three eigendirections. These results are not surprising, since the dynamical processes involved in the spine emergence are faster around massive potential wells. Concerning the decline of shape transformations, there are no relevant mass effects as far as complete shape freezing-out is concerned. When just one axis value is taken into account, however, an early delay of more massive LVs compared to less massive ones clearly stands out, a delay that vanishes at half the Universe age. When building up the LV sample at $z_{\rm high}$, a value of $R_{\rm high} = K\times r_{\rm vir, low}$ with $K = 10$ was used to define the LVs at this redshift. This choice was motivated as a compromise between low $K$ values, ensuring a higher number of LVs in the sample, and high $K$ values, ensuring that LVs are large enough to meaningfully sample the CW emergence around forming galaxies. As this $K = 10$ value is not the only one satisfying these constraints, the complete analysis has been repeated using $K = 7.5$ and $15$ instead. We have found that, when using the $K = 15$ or the $K = 10$ samples, no relevant differences appear in the LV eigenvector orientations, shape deformations or freezing-out times. Therefore, using $K = 10$ is in a sense the best choice.
It is important to remark that no explicit feedback has been implemented in the simulations analysed here, apart from SF regulation through the values of the SFR parameters. We note that the issues discussed in this paper involve considerably larger characteristic scales than the ones related to stellar feedback. Hence, it is unlikely that the details of the star formation rate, and those of stellar feedback in particular, could substantially alter the conclusions of this paper, at least at a qualitative level. Concerning the inner halo scale, we recall that, to properly explore the impact of SNe feedback on filamentary patterns, high enough resolution to resolve SN remnants in the Taylor--Sedov phase is needed. Such simulations are available (the NUT simulations, at sub-parsec scale), but only up to $z=9$ \citep{Powell:2011}. Therefore, we still have to wait to properly understand how SNe feedback can possibly affect the CW emergence and dynamics. However, the findings so far, at high $z$, suggest that the filamentary patterns are essentially untouched by SNe feedback \citep{Powell:2013}. \subsection{Astrophysical Implications} The results summarised so far could have important implications for our understanding of galaxy mass assembly, raising several interesting issues. According to our results, it takes longer for less massive systems to fix their spine, possibly making it easier for these systems to acquire angular momentum through filament transverse motions relative to the galaxy haloes. In fact, recent studies on galaxy formation \citep{Kimm:2011,Pichon:2011,Tillson:2012,Dubois:2014} in the CW context underline the role that filament motions in the protogalaxy environment could have had in endowing filaments, and eventually the adult galaxy, with angular momentum. If real, this effect could contribute to the mass--morphology correlation \citep[see for instance][]{Kauffmann:2003}.
Our results also point towards (major) merger events having a high probability of occurring within filaments. This is an important issue, though beyond the scope of this paper. In fact, if confirmed, this could restrict the allowed merger orbital parameter values \citep[see for example,][]{Lotz:2010,Barnes:2011}, as most mergers would have these parameters constrained within the filament. Another issue concerns the use of close pairs in merger rate calculations from observational data, under the hypothesis that these systems are bound and about to merge \citep[see, for instance][]{Patton:2000,Bell:2006,Kartaltepe:2007,Patton:2008,Robaina:2010,Tasca:2014,LopezSanjuan:2014}. In this respect, some interesting efforts have been made to correct the statistics of pairs that are close in angular distance for chance superposition effects along the line of sight \citep[see e.g.,][]{Kitzbichler:2008,Patton:2008}, whose results are used by other authors in this field. Our results reinforce the need for these analyses, in the sense that a detailed determination of these corrections, including their dependence on the galaxy properties, merger parameters and environment, could be crucial for a more elaborate understanding of the relationship between close-pair statistics and merger rates. Finally, we very briefly address the question of the warm-hot gas distribution at intermediate scales. Our results point to the web structure being marked out by hot gas from high redshifts. Indeed, at scales of $4-8$ Mpc and at $z_{\rm low}$, hot gas traces the CW elements.
Note that there is observational evidence of warm-hot gas at large scales in a filament joining the Abell clusters A222 and A223 \citep{werner:2008}, where the DM component has also been detected \citep{dietrich:2012}; more recently, preliminary evidence of hot gas in cluster pairs from the redMaPPer catalogue \citep{Rykoff:2014} has been found along the sightline of a QSO by \citet{Tejos:2014b} (see also his presentation at The Zeldovich Universe: Genesis and Growth of the Cosmic Web, 2014, IAU Symposium). Our results concern smaller scale structures, and they indicate that hot gas traces the CW from the moment when gas is heated at high redshift. Indeed, hot gas maps out the sites where the most violent dynamical events have occurred, such as filament and, more particularly, node formation. Confirming warm-hot gas in filaments at different scales is a major challenge for the advance of our understanding of galaxy formation \citep[see for example][for details]{Kaastra:2013}. \section*{Acknowledgements} We thank Arturo Serna for allowing us to use the results of simulations. We thankfully acknowledge D. Vicente and J. Naranjo for the assistance and technical expertise provided at the Barcelona Supercomputing Centre, as well as the computer resources provided by BSC/RES (Spain). We thank the DEISA Extreme Computing Initiative (DECI) for the CPU time allocated to the GALFOBS project. The Centro de Computaci\'on Cient\'ifica (UAM, Spain) has also provided computing facilities. This investigation was partially supported by the MICINN and MINECO (Spain) through the grants AYA2009-12792-C03-02 and AYA2012-31101 from the PNAyA, as well as by the regional Madrid V PRICIT programme through the ASTROMADRID network (CAM S2009/ESP-1496) and the `Supercomputaci\'on y e-Ciencia' Consolider-Ingenio CSD2007-0050 project. SR thanks the MICINN and MINECO (Spain) for financial support through an FPU fellowship. \bibliographystyle{mn2e}
\section{Introduction} Previous power spectrum analysis of the amplitude of the polarisation intensity vector, $|\bmath{P}|= \sqrt{Q^2+U^2}$, where $Q$ and $U$ are the Stokes parameters, in the Galactic plane has shown evidence of large-scale structures in the Galactic magnetic field \citep{2003A&A...403.1045H, 2014ApJ...787...34S}. The power-law behaviour of these spectra is expected to be related to the energy transfer from larger to smaller scales in the turbulent fluctuations of the magnetic field. Power-law variations as a function of Galactic latitude have also been measured \citep{2003A&A...403.1045H}, as well as localised variations in regions associated with HII regions or supernova remnants \citep{2014ApJ...787...34S}. Unfortunately, the power spectrum of $|\bmath{P}|$ alone is not sensitive to fluctuations of the polarisation angle, $\theta=(1/2)\arctan(U/Q)$, which also show evidence of large-scale variations in the Galactic plane. According to \citet{2010A&A...520A..80L}, large-scale variations of $\theta$ are probably associated with the large-scale features of the magnetic field aligned with the spiral structure of the Galaxy. On the other hand, fluctuations of $\theta$ on smaller scales could be explained by a turbulent Faraday screen in front of a uniform polarised background. However, such perfect conditions are almost never satisfied in the interstellar medium (ISM) and most fluctuations seen in Stokes $Q$ and $U$ are probably due to a combination of Faraday rotation and intervening polarised emission along the line of sight in the Galactic plane. For these reasons, the interpretation of the direction and the amplitude of $\bmath{P}$ when considered separately is very difficult. \citet{2011Natur.478..214G} proposed calculation of the amplitude of the gradient of $\bmath{P}$, $|\nabla \bmath{P}|$, as a new technique to measure variations of the vector $\bmath{P}$ in the $Q$--$U$ plane. 
It is defined as \begin{equation} |\nabla \bmath{P}| = \sqrt{ \left(\frac{\partial Q}{\partial x}\right)^2 + \left(\frac{\partial U}{\partial x}\right)^2 + \left(\frac{\partial Q}{\partial y}\right)^2 + \left(\frac{\partial U}{\partial y}\right)^2 }, \label{eq:gradient_P} \end{equation} \noindent This quantity can trace changes in both the direction and the amplitude of the vector $\bmath{P}$. Acting as an edge detector in a map, the gradient of $\bmath{P}$ highlights areas of sharp change in the magnetic field and/or the free-electron density, which are most likely due to turbulent fluctuations or shock fronts in the ISM. Since $|\nabla\bmath{P}|$ is only sensitive to the smallest scales, it is not significantly affected by the loss of large-scale structure in interferometric data. On the other hand, one disadvantage of using the gradient is that it may enhance noise present in the data and the distribution of its amplitude depends on the telescope resolution \citep{2012ApJ...749..145B}. Variations of the emission probability distribution function (PDF) width as a function of angular resolution and angular scale were measured in thermal dust emission and infrared emission, as well as in $|\nabla \bmath{P}|$ structures \citep{2010MNRAS.406.1350F, 2012ApJ...749..145B, 2014MNRAS.440.2726R}. In general, this may be explained by small-scale high-extinction cores or structures that are not present on larger scales and do not create the large skewness typical of the lognormal distribution usually measured in star formation regions. The PDF of gas, dust column density and $|\nabla \bmath{P}|$ is also expected to reflect the signature of physical processes occurring in the medium, e.g. turbulence, gravitational collapse or shocks \citep{2011MNRAS.416.1436B, 2012ApJ...749..145B, 2013ApJ...766L..17S}. 
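As an illustration, equation (\ref{eq:gradient_P}) can be evaluated on gridded Stokes maps with central finite differences. The following is a minimal NumPy sketch (the function name and implementation are our own, not part of any survey pipeline):

```python
import numpy as np

def polarisation_gradient(Q, U):
    """|grad P| of equation (eq:gradient_P): the quadrature sum of the
    spatial derivatives of Stokes Q and U, via central finite differences."""
    dQdy, dQdx = np.gradient(Q)  # np.gradient returns row (y) then column (x) derivatives
    dUdy, dUdx = np.gradient(U)
    return np.sqrt(dQdx**2 + dUdx**2 + dQdy**2 + dUdy**2)
```

For a linear ramp $Q = x$ with $U = 0$ the returned map is 1 everywhere, as expected from the definition.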
All those physical processes are scale dependent: shocks usually produce fine-scale structures, gravitational collapse depends on the local density of the gas and turbulence has the ability to transfer energy from large to smaller scales. Since the free-electron density and the magnetic field can also be affected by shocks and turbulence in the ISM, fluctuations in the polarisation intensity traced by $|\nabla\bmath{P}|$ should also be present on a broad range of scales. Previous studies using $|\nabla\bmath{P}|$ maps \citep{2011Natur.478..214G, 2012ApJ...749..145B, 2014A&A...566A...5I} concentrated their analysis on small-scale fluctuations, primarily for two reasons: (1) the gradient of a map only samples the smallest scales and (2) many radio observations are made interferometrically and miss information on large-scale structures since they do not completely sample the Fourier plane. In this paper, we propose a method to generalise such $|\nabla\bmath{P}|$ analysis to multiple scales using data where single-dish measurements are present. The calculation of the gradient as an edge detector in two-dimensional images has found multiple applications in different fields. \citet{Canny1986} has shown, with image analysis methods for computer vision, that there is an uncertainty principle related to the detection and the localisation of a noisy step edge. In the presence of low signal-to-noise data, the precision of the localisation of edges must be traded off by applying the gradient to a Gaussian-smoothed image. \citet{Canny1986} also shows that the first derivative of Gaussians of different width can be used directly as a multiscale edge detector. In a similar vein, \citet{Mallat1992} generalised the method using wavelet transforms in a singularity detection algorithm applied to one and two-dimensional signals. Later, this generalised method, called the wavelet transform modulus maxima (WTMM), was used by \citet{Arneodo2000} as a multifractal analysis tool. 
In this work, we propose a similar technique based on wavelet analysis to calculate $|\nabla \bmath{P}|$ as a function of scale for data where single-dish measurements are present (a description of these data is presented in Section \ref{sec:observation}). A description of the resolution effect on the calculation of $|\nabla \bmath{P}|$ is presented in Section \ref{sec:resolution}; the wavelet formalism is presented in Section \ref{sec:DoG}; the formalism is tested on simulations in Section \ref{sec:test}; application to real data and discussion are presented in Sections \ref{sec:results} and \ref{sec:discussion} and conclusions in Section \ref{sec:conclusion}. \section{Observations}\label{sec:observation} The analysis in this work is applied to a field of the Canadian Galactic Plane Survey (CGPS; \citealt{2003AJ....125.3145T}) at 1420 MHz on polarised data including Stokes parameters $Q$ and $U$. For this survey, interferometric data were observed with the Synthesis Telescope at the Dominion Radio Astrophysical Observatory (DRAO; \citealt{2000A&AS..145..509L}). All observations were then completed with lower spatial frequencies from the Effelsberg 100--m telescope and the DRAO 26--m Telescope \citep{2003AJ....125.3145T, 2010A&A...520A..80L}. The chosen field is a combination of four mosaics from the survey available at the Canadian Astronomy Data Centre (CADC)\footnote{http://www1.cadc-ccda.hia-iha.nrc-cnrc.gc.ca}. It covers $\sim 8$\degree\, in Galactic longitude and $\sim 7$\degree\, in Galactic latitude. The field is centred at $l=82.65$\degree\, and $b=0.98$\degree. The amplitude of the polarisation intensity vector, $|\bmath{P}|$, is shown in Fig. \ref{fig:MM12_MN12_P}. CGPS data have an angular resolution of $\sim1 \csc \delta$ arcmin (where $\delta$ is the declination) and a pixel size of 18 arcsec. 
\begin{figure} \centering \includegraphics[]{figures/fig1.pdf} \caption{The amplitude of the polarisation intensity vector, $|\bmath{P}|$, calculated from Stokes $Q$ and $U$ maps of the CGPS data.\label{fig:MM12_MN12_P}} \end{figure} \section{Resolution effect on $|\nabla \bmath{P}|$} \label{sec:resolution} One advantage of the calculation of $|\nabla \bmath{P}|$ is that the results are not significantly affected by missing large-scale structures in interferometric data not completed with single-dish measurements. However, by sampling only the smallest scales, the gradient may enhance the noise in the data \citep{2012ApJ...749..145B}. Given the angular resolution of the CGPS maps and their pixel size, the synthesised beam is over-sampled by a factor of $\sim3.3$ \citep{2003AJ....125.3145T}, which means that, for these maps, the gradient is sensitive to variations smaller than the synthesised beam. To visualise the resolution effect on the first derivative of a signal, the derivative of a one-dimensional function and its smoothed counterpart are shown in Fig. \ref{fig:1d_deriv}. The signal represents one row of pixels at $l=-0.72$\degree\, from the CGPS Stokes $Q$ image. To create a smoothed version of the signal, a Gaussian filter with a standard deviation of $2^5$ pixels is convolved with the original signal. The smoothed counterpart is shown superposed on the original signal in the top panel of Fig. \ref{fig:1d_deriv}. In the second panel, we show that in spite of obvious variations on larger scales in the one-dimensional signal, the first derivative of the original signal is much more sensitive to the smallest-scale variations. On the other hand, the first derivative of the smoothed signal highlights variations on larger scales which are independent of variations seen at smaller scales. 
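The Gaussian smoothing used here can be sketched in a few lines of NumPy (the helper name and the reflective padding are our choices; the CGPS analysis itself does not prescribe them):

```python
import numpy as np

def gaussian_smooth_1d(signal, sigma):
    """Convolve a 1-D signal with a normalised Gaussian of standard
    deviation `sigma` pixels, using reflective padding at the ends."""
    radius = int(4 * sigma)
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x**2 / (2.0 * sigma**2))
    kernel /= kernel.sum()                       # unit area, so the mean is preserved
    padded = np.pad(signal, radius, mode="reflect")
    return np.convolve(padded, kernel, mode="valid")
```

Differentiating the smoothed signal (e.g. with `np.diff`) then highlights the large-scale variations, as in the bottom panel of Fig. \ref{fig:1d_deriv}.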
\begin{figure} \centering \includegraphics[]{figures/fig2.pdf} \caption{A one-dimensional signal representing one row of pixels located at $l=-0.72$\degree\, from the CGPS Stokes $Q$ image. From the top to the bottom are shown the original signal with a superposed smoothed version with a Gaussian filter having a standard deviation of $2^5$ pixels (red line), the first derivative of the original signal and the first derivative of the smoothed version.\label{fig:1d_deriv}} \end{figure} Figure \ref{fig:deltaP_smoothed} shows the spatial gradient of the linearly polarized emission, $|\nabla \bmath{P}|$, for the CGPS field. In the left panel, filamentary structures normally identified from the calculation of $|\nabla \bmath{P}|$ can hardly be distinguished from variations at small scales associated with noise. However, the ``honeycomb'' noise variation pattern caused by the survey mapmaking becomes clearly visible. The right panel shows the gradient calculated from Q and U maps smoothed using a Gaussian beam with a standard deviation of 4 pixels. Since the first derivative is now only sensitive to fluctuations larger than the synthesised beam, filamentary structures, similar to those initially presented by \citet{2011Natur.478..214G}, are clearly visible. \begin{figure*} \centering \includegraphics[]{figures/fig3.pdf} \caption{The spatial gradient of linearly polarized emission, $|\nabla \bmath{P}|$, at the original resolution (left) and for the smoothed Stokes Q and U maps (right). Smoothed maps are produced by convolution with a Gaussian filter having a standard deviation of $2^2$ pixels.} \label{fig:deltaP_smoothed} \end{figure*} The effect of the map resolution on PDF moments of $|\nabla \bmath{P}|$ values was previously observed by \citet{2012ApJ...749..145B}. 
They used a Gaussian smoothing with FWHMs from 3 to 9 pixels on $Q$ and $U$ maps and measured significant decreases in all of the first four moments of $|\nabla \bmath{P}|$ for simulations of turbulence with supersonic Mach numbers. Figures \ref{fig:1d_deriv} and \ref{fig:deltaP_smoothed} illustrate that smoothing maps of $Q$ and $U$ not only changes the distribution of $|\nabla \bmath{P}|$ values but can also significantly change its structures. In the next section, we propose a multiscale analysis technique in order to visualise and quantify changes in the distribution of $|\nabla \bmath{P}|$ maps as a function of scale. \section{DoG wavelet analysis} \label{sec:DoG} \subsection{Formalism} \label{subsec:formalism} The convolution of a Gaussian beam with maps of $Q$ and $U$ before the calculation of their gradient can give access to variations and sharp changes of $|\nabla \bmath{P}|$ at different angular resolutions. By applying the technique on multiple angular scales, i.e. by gradually changing the Gaussian beam width convolved with maps $Q$ and $U$, it is possible to extend the analysis of $|\nabla \bmath{P}|$ images in the spatial frequency domain. Following the work of \citet{Canny1986} and \citet{Mallat1992}, wavelet transforms can be used as a basis for developing a multiscale edge detector analysis. The wavelet transform of a signal consists of the convolution of the signal with a set of functions, called daughter wavelets, each of which is a scaled version of a mother wavelet. 
One class of wavelet functions, called the Derivative of Gaussian (DoG), is defined by \begin{equation} \psi(\bmath{x}) = (-1)^m \frac{\textrm{d} ^m}{\textrm{d}|\bmath{x}|^m} \phi(\bmath{x}), \label{eq:mother_DoG} \end{equation} \noindent where \begin{equation} \begin{array}{rl} \phi(\bmath{x})& = \frac{1}{2\pi} e^{\frac{-|\bmath{x}|^2}{2}}\\ & = \frac{1}{2\pi } e^{\frac{-(x^2+y^2)}{2}}.\\ \end{array} \label{eq:Gaussian} \end{equation} \noindent The second order ($m=2$) DoG wavelet represents the widely used ``Mexican Hat'' continuous wavelet. Even values of $m$ create symmetric functions which are appropriate for most general applications of wavelet transforms. Odd values of $m$ create asymmetric functions which are useful for revealing directional trends in data. They can also be used as edge detectors for structures present in an image. For the purpose of this analysis, the order of the mother wavelet will take the value of 1 or 3. In this section, polarised data are considered as two-dimensional functions $f(\bmath{x})$, where $\bmath{x}$ is the vector position in the $x$--$y$ plane. The continuous wavelet transform of $f(\bmath{x})$ with the DoG wavelet can be expressed as \begin{equation} \tilde{f}(l,\bmath{x}) = \begin{cases} \tilde{f}_1 = l^{-2} \int \psi_1[l^{-1}(\bmath{x'}-\bmath{x})]f(\bmath{x'}) d^2\bmath{x'}\\ \tilde{f}_2 = l^{-2} \int \psi_2[l^{-1}(\bmath{x'}-\bmath{x})]f(\bmath{x'}) d^2\bmath{x'}, \end{cases} \label{eq:DoG_transforms} \end{equation} \begin{figure*} \centering \includegraphics[]{figures/fig4.pdf} \caption{Analysing wavelets (a) $-\psi_1$ and (b) $-\psi_2$. (The negative sign is for a better visualisation of the functions.)} \label{fig:DoG} \end{figure*} \noindent where $\psi_1(x,y) = \partial^m \phi(x,y)/ \partial x^m$ and $\psi_2(x,y) = \partial^m \phi(x,y)/ \partial y^m$ (a three-dimensional representation of these functions for $m=1$ is shown in Fig. \ref{fig:DoG}). 
All convolutions can be computed in the Fourier domain, which increases the speed of calculation. The function $\tilde{f}$ represents the wavelet transform of $f$ and $l$, the scaling factor of the wavelet. For $m=1$, it is interesting to note that equation (\ref{eq:DoG_transforms}) is equivalent to the calculation of the gradient of $f(\bmath{x})$ smoothed by dilated versions of a Gaussian beam: \begin{equation} \tilde{f}(l,\bmath{x}) = \nabla \{\phi(l^{-1}\bmath{x}) \otimes f(\bmath{x}) \}, \label{eq:grad_Gauss} \end{equation} \noindent where $\otimes$ is the convolution operation. From this point of view, the wavelet transform gives us a useful mathematical formalism on which a multiscaled version of $|\nabla \bmath{P}|$ can be defined. According to the statements above, $|\nabla \tilde{P}(l,\bmath{x})|$ can now be defined as: \begin{equation} |\nabla \tilde{P}(l,\bmath{x})| = \sqrt{ |\tilde{Q}(l,\bmath{x})|^2 + |\tilde{U}(l,\bmath{x})|^2}, \label{eq:scaled_gradient_P} \end{equation} \noindent where, referring to equation (\ref{eq:DoG_transforms}), \begin{equation} \begin{array}{rl} |\tilde{Q}(l,\bmath{x})| & = \sqrt{ |\tilde{Q}_{1}(l,\bmath{x})|^2 + |\tilde{Q}_{2}(l,\bmath{x})|^2},\\ \\ |\tilde{U}(l,\bmath{x})| & = \sqrt{ |\tilde{U}_{1}(l,\bmath{x})|^2 + |\tilde{U}_{2}(l,\bmath{x})|^2}.\\ \end{array} \label{eq:Stokes_amplitude} \end{equation} Since the work of \citet{Arneodo2000}, the mathematical formalism described by equations (\ref{eq:mother_DoG}) to (\ref{eq:grad_Gauss}) has usually been associated with the WTMM method. In order to extend the $|\nabla \bmath{P}|$ analysis to multiple angular scales, here we apply some components of this method to the CGPS polarization maps, in conjunction with a number of complementary methods inspired by wavelet analysis techniques, such as the $\Delta$-variance. However, a complete multifractal analysis, the original motivation for WTMM methods, is beyond the scope of this paper. 
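For $m=1$, equation (\ref{eq:grad_Gauss}) suggests a direct implementation of $|\nabla \tilde{P}(l,\bmath{x})|$: smooth $Q$ and $U$ with a Gaussian of scale $l$ (here applied in the Fourier domain) and take the gradient of the result. A NumPy sketch, with names and conventions of our own choosing, omitting the normalisation factors discussed in the text:

```python
import numpy as np

def multiscale_gradient_P(Q, U, l):
    """|grad P~(l, x)| for the m = 1 DoG wavelet: the gradient of Q and U
    after smoothing with a Gaussian of standard deviation l pixels,
    with the convolution performed in the Fourier domain."""
    ny, nx = Q.shape
    ky = np.fft.fftfreq(ny)[:, None]
    kx = np.fft.fftfreq(nx)[None, :]
    # Fourier transform of a unit-area Gaussian of width l pixels
    g_hat = np.exp(-2.0 * np.pi**2 * l**2 * (kx**2 + ky**2))
    total = 0.0
    for f in (Q, U):
        smoothed = np.fft.ifft2(np.fft.fft2(f) * g_hat).real
        d_dy, d_dx = np.gradient(smoothed)
        total = total + d_dx**2 + d_dy**2
    return np.sqrt(total)
```

The FFT-based convolution assumes periodic boundaries; in practice the maps would first be zero-padded and apodised, as described in Section \ref{sec:test}.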
\subsection{Maxima chains}\label{subsec:maxima_chains} Visually, the most interesting regions in maps of $|\nabla \bmath{P}|$ are areas showing the sharpest changes of the polarisation vector $\bmath{P}$. According to the WTMM method, one easy way to highlight these regions is by calculating the ``maxima chains'' of modulus values of the gradient. In addition to the magnitude of the polarisation gradient, one can also calculate the argument or the direction of $\nabla \bmath{P}$, at each position in a map \citep{2011Natur.478..214G}: \begin{equation} \begin{array}{l} \arg(\nabla \bmath{P})\equiv \\ \tan^{-1}\left (C_{\textrm{sign}} \sqrt{ \left ( \frac{\partial Q}{\partial y} \right )^2 + \left ( \frac{\partial U}{\partial y} \right )^2 } \bigg / \sqrt{ \left ( \frac{\partial Q}{\partial x} \right )^2 + \left ( \frac{\partial U}{\partial x} \right )^2 } \right ),\\ \end{array} \label{eq:argument} \end{equation} \noindent where, \begin{equation} C_{\textrm{sign}}=\textrm{sign} \left ( \frac{\partial Q}{\partial x} \frac{\partial Q}{\partial y} + \frac{\partial U}{\partial x} \frac{\partial U}{\partial y} \right ). \label{eq:sign} \end{equation} \noindent Following this definition, modulus maxima are positions where $|\nabla \bmath{P}|$ is locally maximum in the direction of $\arg(\nabla \bmath{P})$. Thus, for every pixel on all scales, the argument of $\nabla \bmath{P}$ is calculated and the associated magnitude $|\nabla \bmath{P}|$ is compared with adjacent pixels having a similar $\arg(\nabla \bmath{P})$ value: orientations are divided into only six different directions to take into account the pixelisation effect. After an iterative process for the entire map, maxima positions should lie on connected ``maxima chains''. Those chains allow us to visualise locations where strong fluctuations in the electron density distribution and/or magnetic field strength occur. 
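Equations (\ref{eq:argument}) and (\ref{eq:sign}) translate almost directly into code; the sketch below uses finite differences and `arctan2` (an implementation choice of ours) so that the sign information carried by $C_{\textrm{sign}}$ is preserved:

```python
import numpy as np

def gradient_argument(Q, U):
    """arg(grad P): the direction of the sharpest change of P in the
    image plane, with the overall sign set by C_sign."""
    dQdy, dQdx = np.gradient(Q)
    dUdy, dUdx = np.gradient(U)
    c_sign = np.sign(dQdx * dQdy + dUdx * dUdy)
    numerator = c_sign * np.sqrt(dQdy**2 + dUdy**2)
    denominator = np.sqrt(dQdx**2 + dUdx**2)
    return np.arctan2(numerator, denominator)
```

For a diagonal ramp $Q = x + y$ with $U = 0$, every pixel returns $\pi/4$, as expected.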
Chains are also useful to visualise coherent structures that are ``connected'' through multiple scales. \subsection{Wavelet power spectrum}\label{sec:wav_pow} Similarly to the Fourier transform, the wavelet transform conserves the total energy of the original signal. This property can be defined following the generalisation of the Plancherel identity for the continuous wavelet transform \citep{1992AnRFM..24..395F, 2011AJ....141...41D}: \begin{equation} \int |f(\bmath{x})|^2 d^2\bmath{x} = C_{\psi}^{-1}\int\int \frac{|\tilde{f}(l,\bmath{x})|^2}{l^2} dl d^2\bmath{x}, \label{eq:Plancherel} \end{equation} \noindent where \begin{equation} C_{\psi}=\int \frac{|\hat{\psi}(\bmath{k})|^2}{|\bmath{k}|^2}d^2\bmath{k} < \infty. \label{eq:admissibility} \end{equation} \noindent Equation (\ref{eq:admissibility}) is also called the admissibility condition of the wavelet, where $\hat{\psi}(\bmath{k})$ is the Fourier transform of the mother wavelet $\psi(\bmath{x})$ and $\bmath{k}$ is the wavenumber vector. This condition is satisfied for every order $m$ of equation (\ref{eq:mother_DoG}). From equation (\ref{eq:Plancherel}), the energy conservation can also be defined as a function of spatial scale only: \begin{equation} E(l)= \int \frac{|\tilde{f}(l,\bmath{x})|^2}{l^2} d^2\bmath{x}. \label{eq:wavelet_energy} \end{equation} \noindent This relation shows that wavelet coefficients can be compared to Fourier coefficients and, as for the calculation of the Fourier power spectrum, wavelet coefficients can be used to measure the energy transfer from large to smaller scales. It is important to note that the normalisation factor $l^{-2}$ in equation (\ref{eq:DoG_transforms}) is only required to ensure the validity of equation (\ref{eq:grad_Gauss}) \citep{Arneodo2000}. In order to calculate the wavelet power spectrum of $|\nabla \tilde{P}(l,\bmath{x})|$, the regular normalisation of a wavelet transform, $l^{-1}$, is used. 
The wavelet energy spectrum defined by equation (\ref{eq:wavelet_energy}) can also be expressed in terms of the Fourier energy spectrum, $E(\bmath{k})=|\hat{f}(\bmath{k})|^2$ \citep{1992AnRFM..24..395F}: \begin{equation} E(l)= \int E(\bmath{k}) |\hat{\psi}(l\bmath{k})|^2 d^2\bmath{k}. \label{eq:Fourier_wavelet} \end{equation} \noindent This relation means that at a particular scale, the global wavelet energy corresponds to the integral of the Fourier energy spectrum of the analysed function weighted with the energy spectrum of the wavelet at that scale. In order to produce a wavelet power spectrum similar to the classical Fourier power spectrum, which takes into account the finite size of map $f(\bmath{x})$ and the discrete number of pixels, we use the relation: \begin{equation} S_P(l)= \frac{1}{N_xN_y} \sum_{\bmath{x}} |\nabla \tilde{P}(l,\bmath{x})|^2. \label{eq:wavelet_power} \end{equation} \noindent The notation $S_P(l)$ is used here instead of the usual $P(l)$ for the power spectrum, in order to avoid possible confusion with the polarisation intensity $P$. \subsection{Equivalence with Fourier wavelength}\label{sec:Fourier_equiv} If one wants to compare the wavelet scale $l$ with the wavenumber $k$ in the Fourier domain, the equivalence of the scaling factor $l$ in the frequency domain has to be defined. The wavelet analysis described in the previous sections is equivalent to the calculation of the gradient of an image smoothed by Gaussian filters of different widths (see equation (\ref{eq:grad_Gauss})). Following this statement, the scale $l$ defined in the previous sections is related to the standard deviation of the Gaussian filter. For the following analysis, the wavelet scaling factor will be defined as $l_{\textrm{F}} = l\cdot(2\pi)^{-1}$, so that the scale $l$ can be directly compared to the wavenumber $k=1/l$ in the Fourier domain. The $m$th derivative of the Gaussian makes the function oscillate around zero. 
The wavelength of these oscillations and their rapidly decaying amplitude are the two properties which allow the wavelet function to be localised in the frequency domain. The width of the function in the frequency domain acts as a bandpass filter. According to \citet{2005CG.....31..846K}, one easy way to define the relationship between the scale of the wavelet function and the frequency content of the signal is to determine the wavenumber at which the wavelet function is maximum in the frequency domain. He determined that, in the case of the DoG wavelet, the equivalence between the wavelet scale and the wavelet scaling factor is $k=\sqrt{m}/l_{\textrm{F}}$, so that the scaling factor becomes $l_{\textrm{F}} = \sqrt{m}\cdot l\cdot(2\pi)^{-1}$. In other words, for this analysis, the wavelet scaling factor is not chosen to correspond to the standard deviation of the initial Gaussian, but it is chosen to correspond instead to the Fourier wavenumber $k$ that is sampled by the bandpass filter in the Fourier domain. This definition allows a better comparison between the wavelet power spectrum and the classical Fourier power spectrum. \section{Tests on simulated data}\label{sec:test} The formalism presented in the previous section shares similarities with the $\Delta$-variance introduced by \citet{1998A&A...336..697S}. This method has been successfully applied in several studies in order to characterise structures at multiple scales induced by turbulence in molecular clouds \citep{2001A&A...366..636B, 2008A&A...485..917O, 2008A&A...485..719O, 2014A&A...568A..98A}. The $\Delta$-variance is defined as a measure of the amount of structure at a given scale $l$ in a map. Its definition is similar to the energy spectrum defined in equations (\ref{eq:wavelet_energy}) and (\ref{eq:Fourier_wavelet}), except that the convolved filter $\psi(\bmath{x})$ is isotropic. 
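The peak-wavenumber equivalence quoted from \citet{2005CG.....31..846K} is easy to check numerically. For a unit-width Gaussian, the $m$-th order DoG has a Fourier amplitude proportional to $k^m e^{-k^2/2}$ (a standard result we assume here), which is maximised at $k=\sqrt{m}$:

```python
import numpy as np

# |psi_hat(k)| ~ k^m * exp(-k^2 / 2) for the m-th order DoG wavelet built
# from a unit-width Gaussian; its maximum sits at k = sqrt(m).
k = np.linspace(0.01, 5.0, 100000)
for m in (1, 2, 3):
    k_peak = k[np.argmax(k**m * np.exp(-k**2 / 2.0))]
    assert abs(k_peak - np.sqrt(m)) < 1e-3
```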
As mentioned by \citet{2008A&A...485..917O}, the main advantage of the $\Delta$-variance method comes from its smooth filter shape which ensures a robust angular average of the signal and a lower sensitivity to singular variations and finite map size effects. Similarly to the work of \citet{2001A&A...366..636B} which tested the influence of telescope beam and finite map sizes on the $\Delta$-variance, this section tests those effects using the anisotropic DoG wavelet. The robustness of the wavelet power spectrum calculation is tested on two Gaussian random field (Grf) simulations of $1024\times1024$ pixels for both Stokes $Q$ and $U$ maps. Those images are produced by applying a power law as a function of scale to the squared amplitude of a random-phase map. Similarities between Grfs and interstellar structures were pointed out by \citet{1998A&A...336..697S} in their study of fractal properties of molecular clouds. A Grf simulation with a power law of $\gamma=2.5$ representing Stokes $Q$ maps is displayed in Fig. \ref{fig:Qfbm2p5}. The original Grf is displayed in Fig. \ref{fig:Qfbm2p5}(a), Fig. \ref{fig:Qfbm2p5}(b) shows the same field convolved with a Gaussian filter having a standard deviation of 2 pixels and Fig. \ref{fig:Qfbm2p5}(c) shows the original field with added random noise having a $\sigma_{\textrm{rms}}$ of 0.5. The original field has a mean pixel value of zero and a standard deviation of 1.0. The Fourier power spectra of $|\bmath{P}|$ for the three fields are shown in Fig. \ref{fig:fbm_power_spec_wavelet} (a). They are calculated on $2048\times2048$ extended maps with zero-padding and an apodised interface\footnote{The apodisation consists of multiplication by a taper function, in this case the negative slope of a cosine, which smooths the sharp edges in an image.} between the extension and the image on 5 per cent of the border of the original image. 
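The Grf maps described above can be produced by imposing a power-law amplitude on a random-phase field in Fourier space; the following is our own minimal sketch, not the authors' code:

```python
import numpy as np

def gaussian_random_field(n, gamma, seed=0):
    """An n x n Gaussian random field with power spectrum P(k) ~ k^(-gamma),
    normalised to zero mean and unit standard deviation."""
    rng = np.random.default_rng(seed)
    phase = np.exp(2j * np.pi * rng.random((n, n)))       # random phases
    ky, kx = np.meshgrid(np.fft.fftfreq(n), np.fft.fftfreq(n), indexing="ij")
    k = np.hypot(kx, ky)
    k[0, 0] = np.inf                                      # suppress the mean (k = 0) mode
    amplitude = k ** (-gamma / 2.0)                       # so that |F(k)|^2 ~ k^-gamma
    field = np.fft.ifft2(amplitude * phase).real
    return (field - field.mean()) / field.std()
```

Two independent realisations (different seeds) then stand in for the Stokes $Q$ and $U$ test maps.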
To avoid spurious power at smaller scales caused by edges of the image, the mean pixel value of images must be subtracted before the apodisation. In Fig. \ref{fig:fbm_power_spec_wavelet} (a), the spectra are produced by averaging the squared amplitude of complex Fourier coefficients over different annuli in the $u$--$v$ plane. Figure \ref{fig:fbm_power_spec_wavelet} (a) shows that, at small scales, the telescope beam and noise induce a significant departure from the power law. \begin{figure*} \centering \includegraphics[]{figures/fig5.pdf} \caption{The Grf simulation of $1024\times1024$ pixels of Stokes $Q$ map with $\gamma=2.5$: (a) the original Grf with a mean pixel value of zero and a standard deviation of 1.0, (b) the same field convolved with a Gaussian filter with a standard deviation of 2 pixels and (c) the original field with random noise, with an rms of 0.5, added.} \label{fig:Qfbm2p5} \end{figure*} \begin{figure*} \centering \includegraphics[]{figures/fig6.pdf} \caption{Power spectra analysis of Grfs simulated Stokes $Q$ and $U$ maps: (a) The Fourier power spectrum of $|\bmath{P}|$ of the original image (diamond), the image convolved with a Gaussian beam (star) and the image with added noise (plus). (b) The wavelet power spectrum of the three same images using the first order wavelet. (c) The wavelet power spectrum of the three same images using the third order wavelet. (d) shows the values of the fitted power laws to the wavelet power spectra (for $4 < l < 50$ arcmin), for five different power law indices of the original Grfs. 
Diamonds represent power laws measured with the first order DoG wavelet ($m=1$) and triangles represent power laws measured with the third order DoG wavelet ($m=3$).} \label{fig:fbm_power_spec_wavelet} \end{figure*} \begin{figure} \centering \includegraphics[]{figures/fig7.pdf} \caption{The positive part of the first and the third derivative of a Gaussian centred at zero, respectively $m=1$ and 3 in equation (\ref{eq:mother_DoG}).\label{fig:DoG_1d}} \end{figure} The wavelet power spectra of $|\nabla \tilde{\bmath{P}}|$ calculated following equations (\ref{eq:scaled_gradient_P}) and (\ref{eq:wavelet_power}) are displayed in Fig. \ref{fig:fbm_power_spec_wavelet} (b) and (c). The calculation is done on the same extended maps as for the Fourier power spectra. Equidistant values of $l$ in logarithmic scale are chosen from 4 to 1024 pixels with an interval of $2^{0.25}$ pixels. The same pixel resolution as the CGPS data has been assigned to the Grfs. Wavelet power spectra in Figs \ref{fig:fbm_power_spec_wavelet} (b) and (c) are respectively associated with the first and the third derivative of a Gaussian, i.e. $m=1$ and 3 in equation (\ref{eq:mother_DoG}). As noticed by \citet{Arneodo2000} and \citet{2006ApJS..165..512K}, the robustness of the results can be tested using the DoG wavelet by repeating the analysis with a wavelet of a higher order. The first and the third derivative of a Gaussian have respectively one and three vanishing moments. A wavelet with more vanishing moments can represent more complex functions. For example, the wavelet transform of a polynomial function of degree $n$ will be equal to zero if a wavelet has vanishing moments up to the order $m \ge n$ \citep{2010tah..book.....P}. 
Consequently, repeating the analysis with a wavelet of a higher order can confirm that the scaling behaviour of the wavelet transform is not dominated by the order of the analysing wavelet and can also highlight the effect of polynomial distributions changing the self-similar geometry of the data. For both orders of the DoG wavelet, a power law of $\gamma=2.5$ is fitted for $4 < l < 50$ arcmin on the wavelet power spectra of the original Grfs. The wavelet with more vanishing moments is significantly more sensitive to the beam smoothing effect. The third order wavelet is also more affected by the noise, but less than by the beam convolution. It is important to note that the noise wavelet power spectrum with the first and the third order DoG wavelet has a flat power law, i.e. $\gamma=0$ instead of $\gamma=2$ as with the $\Delta$-variance. This difference comes from the normalisation choice discussed previously in section \ref{sec:wav_pow}. The third order wavelet is also less affected by the edge effect at larger scales than the first order wavelet. As shown in Fig. \ref{fig:DoG_1d}, since wavelet functions decay as $\bmath{x}^{-n}$, where $n$ is the order of the wavelet, the third order wavelet has a better localisation in the spatial domain than the first order. For this reason, the third order wavelet is less affected by the zero-padding which decreases the power of large-scale structures. Figure \ref{fig:fbm_power_spec_wavelet} (d) shows the values of the fitted power laws to the wavelet power spectra between $4 < l < 50$ arcmin, for five different power law indices of the original Grfs. For the first order wavelet analysis, an underestimation of the spectral index is measured for $\gamma > 3$. This effect was also noticed by \citet{2001A&A...366..636B} for the $\Delta$-variance analysis and was attributed to the fact that edge effects are significant for maps covering only a fraction of the spatially extended emission. 
This statement is true for steep spectral indices, where large-scale structures dominate. However, an overestimation of the spectral index is measured for $\gamma \ge 3$ using the third order wavelet, even if edge effects are less important for this wavelet. In that case the overestimation of the spectral index can be attributed to the lower resolution of this wavelet in the spectral domain. As for the $\Delta$-variance, our wavelet power spectrum can satisfactorily recover the power law index of the fractal simulations for $2.0 \le \gamma \le 3.0$. According to their statistical properties, Grfs have the same power law index in every direction. Following this property, the calculation of the power spectrum of $|\nabla\bmath{P}|$ using equations (\ref{eq:scaled_gradient_P}), (\ref{eq:Stokes_amplitude}) and (\ref{eq:wavelet_power}) can recover the power law index of individual Stokes $Q$ and $U$ simulated maps. Because of the normalisation choice discussed previously in section \ref{sec:wav_pow}, the slope of the wavelet power spectrum of $|\nabla\tilde{\bmath{P}}|$ is equal to the power law of the Grfs. This is similar to the $\Delta$-variance where the slope $\alpha$ is related to the power law $\gamma$ following the relation $\alpha=|\gamma|-2$. Real Stokes $Q$ and $U$ data are spatially correlated but are not assumed to have exactly the same power law index. Intervening polarised emission and Faraday rotation along the line-of-sight should induce spatial correlation between $Q$ and $U$ maps and should also modify the measured power law of the wavelet power spectrum of $|\nabla \tilde{P}(l,\bmath{x})|$. Consequently, the wavelet power spectrum of $|\nabla \tilde{P}(l,\bmath{x})|$ is a unique measure of the variations of the polarisation vector $\bmath{P}$ in the $Q$--$U$ plane as a function of the angular scale and should not be directly compared with the Fourier power spectrum of $|\bmath{P}|$ or of Stokes $Q$ and $U$ maps alone. 
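The tests above can be reproduced with a compact implementation of $S_P(l)$ (equation (\ref{eq:wavelet_power})) for the $m=1$ wavelet; this NumPy sketch smooths in the Fourier domain and omits the normalisation subtleties discussed in Section \ref{sec:wav_pow}:

```python
import numpy as np

def wavelet_power_spectrum(Q, U, scales):
    """S_P(l): the mean of |grad P~(l, x)|^2 over the map, evaluated for a
    list of scales l (in pixels), with the m = 1 smoothing done via FFT."""
    ny, nx = Q.shape
    ky = np.fft.fftfreq(ny)[:, None]
    kx = np.fft.fftfreq(nx)[None, :]
    k2 = kx**2 + ky**2
    spectrum = []
    for l in scales:
        g_hat = np.exp(-2.0 * np.pi**2 * l**2 * k2)  # Gaussian of width l pixels
        grad2 = 0.0
        for f in (Q, U):
            smoothed = np.fft.ifft2(np.fft.fft2(f) * g_hat).real
            d_dy, d_dx = np.gradient(smoothed)
            grad2 = grad2 + d_dx**2 + d_dy**2
        spectrum.append(grad2.mean())  # (N_x N_y)^{-1} * sum over pixels
    return np.array(spectrum)
```

For white-noise $Q$ and $U$ maps the returned values decrease monotonically with $l$, since smoothing removes progressively more of the small-scale power to which the gradient is sensitive.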
\section{Application to CGPS data}\label{sec:results} The wavelet analysis technique described in Section \ref{sec:DoG} was applied to the CGPS field shown in Fig. \ref{fig:MM12_MN12_P} for both Stokes parameters $Q$ and $U$. Each map was extended to the closest power of 2 (in this case $2048 \times 2048$) with zero-padding pixels and apodised over 5 per cent of the border of the original image. Angular scales $l$ are chosen following the same rules as for the Grf simulations. \begin{figure*} \centering \includegraphics[]{figures/fig8.pdf} \caption{From left to right: in ``cubehelix'' colour scale \citep{2011BASI...39..289G} are the $|\nabla \tilde{P}(l,\bmath{x})|$ values for the same field as in Fig. \ref{fig:MM12_MN12_P} at four different scales $l$= 9.6, 45.7, 153.6 and 434.4 arcmin. White lines represent maxima chains (see section \ref{subsec:maxima_chains}) corresponding to the scale.} \label{fig:chains_scales} \end{figure*} The amplitude of the gradient of $\bmath{P}$ for the CGPS field at four different scales, overplotted with maxima chains, is displayed in Fig. \ref{fig:chains_scales}. For clarity, maxima chains of the smallest scale, $l=9.6$ arcmin, are only displayed for pixel values above $0.15$ K ($\approx 3\sigma_{\textrm{rms}}$). Each wavelet transform in Fig. \ref{fig:chains_scales} shows very different filamentary structures. The complex network of filaments at smaller scales is replaced by a more extended network of filaments on larger scales. The general pattern of the lower scale is sometimes preserved and sometimes not. In particular, some features described as ``double jump'' profiles by \citet{2012ApJ...749..145B} (see green boxes in left and right upper panels of Fig. \ref{fig:chains_scales}) appear only for a small range of scales. Such features are associated with the derivative of a delta function that can result from interactions of strong shocks.
On the other hand, other subsets of the network persist over multiple scales and create a subset of ``coherent'' structures across the field. An example of a ``coherent'' subset network is displayed in Fig. \ref{fig:coherent_network}. The subset is selected using an iterative algorithm called the scale-wise Coherent Vorticity Extraction (CVE) (see \citet{2012PhyD..241..186N} and \citet{2014MNRAS.440.2726R} for a detailed description). As a function of scale $l$, this algorithm converges to an optimal threshold value to separate outliers, i.e. non-Gaussianities\footnote{By construction, following eq. \ref{eq:gradient_P}, the distribution of $|\nabla\tilde{P}(l,\bmath{x})|$ cannot follow a perfect Gaussian distribution; however, the terms Gaussian and non-Gaussian are used to describe respectively the symmetrical part and the tail of the distribution.}, from randomly distributed wavelet coefficients of $|\nabla\tilde{P}(l,\bmath{x})|$. Figure \ref{fig:coherent_network} shows maxima chains for which the maximum value along the chain is part of the separated outliers. \begin{figure*} \centering \includegraphics[]{figures/fig9.pdf} \caption{The superposition of maxima chains from scale $l=$22.8 to 258.3 arcmin over the map of $|\nabla \tilde{\bmath{P}}|$ at $l=$ 22.8 arcmin. They represent a subset of maxima chains for which the maximum value along the chain is part of the outliers separated with the scale-wise CVE algorithm.} \label{fig:coherent_network} \end{figure*} The wavelet power spectrum calculated according to equation (\ref{eq:wavelet_power}) is shown in black diamonds in Fig. \ref{fig:power_spec} and \ref{fig:power_spec_Gauss}. Wavelet coefficients $|\nabla\tilde{P}(l,\bmath{x})|$ are calculated using the third order wavelet in order to highlight effects produced by noise and edges on the map (see section \ref{sec:test}).
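The scale-wise separation can be illustrated with a simplified iterative thresholding scheme in the spirit of the CVE algorithm; the convergence rule used below (a threshold equal to $q$ times the standard deviation of the coefficients currently classified as Gaussian) is an assumption of this sketch and does not reproduce the exact criterion of the scale-wise CVE.

```python
import numpy as np

def separate_outliers(coeffs, q=3.0, max_iter=100):
    """Iteratively split the coefficients at one scale l into a 'Gaussian'
    core and outliers.  The threshold is recomputed from the currently
    retained core until it stabilises; True marks the Gaussian part."""
    dev = np.abs(coeffs - np.median(coeffs))
    mask = np.ones(coeffs.shape, dtype=bool)
    thresh = np.inf
    for _ in range(max_iter):
        new_thresh = q * coeffs[mask].std()
        if np.isfinite(thresh) and np.isclose(new_thresh, thresh):
            break
        thresh = new_thresh
        mask = dev <= thresh
    return mask
```

Because the outliers are excluded from the standard deviation after the first pass, the threshold converges to a fixed point set by the symmetrical core of the distribution rather than by its tail.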
The flattening and the power drop caused by the Gaussian apodisation function applied to the DRAO Synthesis Telescope data \citep{2010A&A...520A..80L, 2014ApJ...787...34S} clearly appears in the power spectrum between $1 \lesssim l \lesssim 6$ arcmin. A flattening of the spectrum is also produced by the finite size of the map around 600 arcmin. The power spectrum shows a power law behaviour between $10 \lesssim l \lesssim 80$ arcmin followed by a small drop of power between $80 \lesssim l \lesssim 300$ arcmin. This drop corresponds well to the overlap in the $u$--$v$ plane between data from the Effelsberg 100--m telescope and the DRAO 26--m telescope, which is between baselines of 3 to 15 m \citep{2010A&A...520A..80L}. The corresponding angular sizes, 48 to 240 arcmin, are indicated by dotted lines in Fig. \ref{fig:power_spec}. The 26--m data were initially undersampled and gaps where no 26--m data were available were filled with smoothed Effelsberg data. This undersampling, and the process applied to correct the data, could have produced an underestimation of the power over that range of scales. The average calibration ratio between the two datasets is $0.96 \pm 0.01$ (26--m/100--m). This small factor could explain the drop in power seen in the wavelet power spectrum of the CGPS field. For this reason, a power law is fitted only between 10 to 60 arcmin following the relation $S_P(l)=S_0 \cdot l^{\gamma}$, where the fitted values of parameters $S_0$ and $\gamma$ are $(4.69\pm0.03)\times 10^{-4}$ and $2.15\pm0.01$ respectively. The wavelet power spectrum has also been calculated using only the Gaussian coefficients of $|\nabla\tilde{P}(l,\bmath{x})|$ separated by the scale-wise CVE. This Gaussian power spectrum is represented by the red stars in Fig. \ref{fig:power_spec_Gauss}. Both distributions, for all coefficients and for the separated Gaussian part, are plotted with the black lines in Fig. \ref{fig:coeff_distributions}. 
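The fit of $S_P(l)=S_0 \cdot l^{\gamma}$ over a restricted range of scales amounts to a linear least-squares fit in log--log space; a minimal sketch (the function name and the use of \texttt{numpy.polyfit} are our choices) is:

```python
import numpy as np

def fit_power_law(l, S, lmin=10.0, lmax=60.0):
    """Least-squares fit of S(l) = S0 * l**gamma in log-log space,
    restricted to the scale range [lmin, lmax] (arcmin)."""
    sel = (l >= lmin) & (l <= lmax)
    gamma, logS0 = np.polyfit(np.log(l[sel]), np.log(S[sel]), 1)
    return np.exp(logS0), gamma
```

Restricting the fit range excludes the scales contaminated by the beam, the noise and the drop of power discussed above.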
The distributions are normalised following the definition: \begin{equation} I(l,\bmath{x})=\frac{|\nabla\tilde{P}(l,\bmath{x})|}{\langle |\nabla\tilde{P}(l,\bmath{x})| \rangle_{\bmath{x}}}, \label{eq:intermittence} \end{equation} \noindent where $\langle\rangle_{\bmath{x}}$ is the average operator over all $\bmath{x}$. The normalised distributions for all coefficients between scales of 6.8 to 91.3 arcmin on the top panel of Fig. \ref{fig:coeff_distributions} (black lines) are lognormal and consequently, the average value of the coefficients does not accurately characterise the distribution. The separated Gaussian part shows a peak centred on $I(l,\bmath{x})\approx1$, which means that the average value of coefficients is more representative of the general tendency of the distribution. The fitted parameters for the Gaussian power spectrum are $(6.4\pm0.5)\times 10^{-5}$ and $2.52\pm0.02$ for $S_0$ and $\gamma$ respectively for scales between 20 to 60 arcmin. Scales $108.6 \lesssim l < 614.4$ arcmin do not respect the self-similarity of small-scale distributions. However, a clear separation between two different behaviours as a function of scale is hard to establish and the transition from lognormal to Gaussian distributions might also be continuous. \begin{figure} \centering \includegraphics[]{figures/fig10.pdf} \caption{The wavelet power spectrum of $|\nabla\tilde{P}(l,\bmath{x})|$ for the CGPS field for all coefficients (black diamonds).} \label{fig:power_spec} \end{figure} \begin{figure} \centering \includegraphics[]{figures/fig11.pdf} \caption{The wavelet power spectrum of $|\nabla\tilde{P}(l,\bmath{x})|$ for the CGPS field for all coefficients (black diamonds) and for the Gaussian part of the distribution (red stars). 
The solid line represents the Fourier power spectrum of $|\mathbf{P}|$.} \label{fig:power_spec_Gauss} \end{figure} \begin{figure} \centering \includegraphics[]{figures/fig12.pdf} \caption{Normalised distributions of wavelet coefficients of $|\nabla\tilde{P}(l,\bmath{x})|$ for the CGPS field for all coefficients (top panel). The black lines represent scales between 6.8 and 91.3 arcmin and the blue lines represent scales between 108.6 and 614.4 arcmin. The lower panel shows the Gaussian part of the distribution for scales between 6.8 and 91.3 arcmin.} \label{fig:coeff_distributions} \end{figure} \section{Discussion}\label{sec:discussion} The new method proposed in this paper allows us to extend to multiple scales the study of structures produced by the calculation of $|\nabla\bmath{P}|$. As shown in Fig. \ref{fig:chains_scales}, filamentary structures in $|\nabla\tilde{P}(l,\bmath{x})|$ are highly dependent on the angular scale or on the instrumental resolution with which the polarised signal is observed. Furthermore, the wavelet power spectrum analysis of $|\nabla\tilde{P}(l,\bmath{x})|$ allows us to identify scales where the signal to noise ratio or the beam signature becomes important and causes a flattening of the power law behaviour. Consequently, studies of the gradient of $\bmath{P}$ applied only to the smallest spatial scales, with little or no smoothing of the original data, should be aware that a significant amount of the structure seen at lower intensity in $|\nabla\bmath{P}|$ images may be associated with noise. Turbulence is expected to be one of the major processes responsible for fluctuations on multiple scales in the ISM. The power law measured across a large range of scales ($10 \lesssim l \lesssim 60$ arcmin) could be associated with the presence of turbulence in the magnetic field. The self-similarity of wavelet coefficient distributions plotted in Fig.
\ref{fig:coeff_distributions} is another indication that turbulence can play a major role in fluctuations seen in the magnetic field and/or the electron density over the same range of scales. However, as demonstrated with magnetohydrodynamic (MHD) simulations by \citet{2011Natur.478..214G} and \citet{2012ApJ...749..145B}, various types of turbulent environments, e.g. subsonic or supersonic turbulence with different Mach numbers, can create various types of structures in $|\nabla\bmath{P}|$ maps. As expected for a large field localised in the Galactic plane, the filament network of $|\nabla\bmath{P}|$ displays a large range of intensities and different types of discontinuity associated with different types of fluctuation in the magnetic field and/or the electron density. We see in Fig. \ref{fig:coeff_distributions} that even if the wavelet coefficient distributions are self-similar for a given range of scales, the power is not randomly distributed. The tails of the lognormal distributions shown in the upper panel are associated with coefficients that contribute more to the average power at a given scale than randomly distributed coefficients. This excess has a significant impact on the measured power law. The Gaussian part of the distributions for the self-similar range of scales possesses a steeper power law ($\gamma \approx 2.5$) than the original distribution ($\gamma \approx 2.1$). The spatial distribution of non-Gaussianities displayed for a small range of scales in Fig. \ref{fig:coherent_network} also shows that these structures tend to create a coherent subset of filaments correlated across several scales. These coherent structures can have different origins. \citet{2012ApJ...749..145B} showed that moments of $|\nabla\bmath{P}|$, i.e. mean, variance, skewness and kurtosis, are correlated with the Mach number of MHD simulations. Higher Mach numbers create more asymmetric distributions which have tails at high intensity.
Higher intensity structures in $|\nabla\bmath{P}|$ associated with those tails are caused by sharp changes of the polarisation vector $\bmath{P}$ that can be produced by compressive shocks in a supersonic turbulent medium. In dense regions, the magnetic field lines are frozen into the ionised gas and compressive shocks will amplify the magnetic field intensity. Under these conditions, the magnetic field intensity is correlated with the electron density and creates higher intensity structures seen in $|\nabla\bmath{P}|$ \citep{2012ApJ...749..145B}. However, subsonic turbulence induces no compressive motion and in that case fluctuations traced by $|\nabla\bmath{P}|$ are dominated by random fluctuations in the gradient of the magnetic field. It is interesting to note in Fig. \ref{fig:power_spec_Gauss} that the power spectrum associated with the Gaussian part of the wavelet coefficient distribution corresponds well to the Fourier power spectrum of $|\mathbf{P}|$ for $l \lesssim 100$ arcmin. The selection of the Gaussian part of the distribution is dependent on a parameter in the scale-wise CVE algorithm which controls how restrictive the definition of an outlier is \citep{2012PhyD..241..186N}. Nonetheless, the choice of this parameter is based on the symmetry of the separated ``random'' distributions only. By comparing the total wavelet power spectrum of $|\nabla\tilde{P}|$ with the classical Fourier power spectrum of $|\mathbf{P}|$, we see that the excess of power measured at intermediate scales might be associated with non-Gaussianities identified from the wavelet analysis. The amplitude of $\bmath{P}$ and the amplitude of the gradient of $\bmath{P}$ are two different tracers and they are not expected to produce the same power spectrum. The amplitude of the gradient of $\bmath{P}$ traces changes in the polarisation vector more strongly than the amplitude of $\bmath{P}$ alone. Consequently, the excess of power measured at intermediate scales, partly displayed in Fig. 
\ref{fig:coherent_network}, might be related to the expected correlation between the magnetic field intensity and the electron density produced by compressive shocks. This measured excess of power suggests that the power spectrum of $|\nabla\tilde{P}|$ may trace fluctuations in the electron density as well as fluctuations caused by the Faraday rotation of polarised emission coming from a source localised behind the compressed magnetic field lines, whereas the power spectrum of $|\bmath{P}|$ traces only fluctuations of the electron density. The power law index of the Gaussian power spectrum is shallower than that expected from a 3D Kolmogorov-like power spectrum ($\gamma=3.66$) for a subsonic incompressible turbulent medium; however, it is close to the index measured by \citet{2002A&A...387...82G} for the polarised intensity $\bmath{P}$ at 2.4 GHz from the Southern Galactic plane ($\gamma= 2.37 \pm 0.21$). In the case of the non-Gaussian subset, many other physical processes can produce sharp changes in polarised data, such as outflows from massive stars and supernovae. We noted in section \ref{sec:results} that features described as ``double jumps'' by \citet{2012ApJ...749..145B} are visible at different scales in the CGPS field. One of the two features framed by a green rectangle (the upper right panel at $l=45.7$ arcmin) corresponds to the location of the supernova remnant SNR G84.2-0.8, which is clearly seen in the total intensity map. The ``double jump'' feature, which is itself part of the non-Gaussian subset, may be related to shocks produced by the supernova. On scales of $109 \lesssim l \lesssim 614$ arcmin, we can see in Fig. \ref{fig:coeff_distributions} (blue lines) that distributions become more symmetrical on larger scales.
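The normalisation of equation (\ref{eq:intermittence}) and the scale-by-scale asymmetry of the coefficient distributions can be quantified as sketched below; using the sample skewness as the asymmetry measure is our choice for illustration (lognormal-like distributions give a strongly positive skew, more symmetrical ones a smaller value).

```python
import numpy as np
from scipy.stats import skew

def normalised_coeffs(grad_P):
    """I(l, x) = |grad P~|(l, x) / <|grad P~|>_x for one scale l."""
    return grad_P / grad_P.mean()

def asymmetry_per_scale(coeffs_by_scale):
    """Sample skewness of the normalised distribution at each scale."""
    return [skew(normalised_coeffs(c).ravel()) for c in coeffs_by_scale]
```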
Since the distributions are normalised, it is reasonable to consider that the undersampling correction applied to the 26--m Telescope data, which dominate on scales larger than $\sim 80$ arcmin, does not strongly influence or bias the distributions. The fitted power law for the Gaussian part of the distributions at lower scales could also be consistent with scales dominated by the DRAO 26--m data, which could mean that non-Gaussian coefficients tend to appear at smaller scales. According to \citet{2012ApJ...749..145B}, the random distribution of Gaussian coefficients could be induced by subsonic turbulent fluctuations in electron density and magnetic field. On the other hand, non-Gaussian coefficients may be associated with other physical processes, such as supersonic turbulence, gravitational collapse, massive star outflows and supernovae. However, a clear separation between lognormal distributions at smaller scales and Gaussian distributions at larger scales is hard to confirm. The lack of non-Gaussian contributions at larger scales could also be associated with the smaller number of statistically independent coefficients. A similar analysis performed over an extended range of scales could confirm whether a real separation exists between distributions at smaller and larger scales. \section{Conclusion}\label{sec:conclusion} We extend the calculation of $|\nabla\bmath{P}|$ to multiple scales using a wavelet analysis formalism. The new technique shares similarities with the $\Delta$-variance and the WTMM techniques used to characterise respectively the turbulence in molecular clouds and the multifractal nature of a surface or a medium. This approach can overcome the limitation of previous analyses which were only sensitive to the smallest scales. We show that fluctuations traced by $|\nabla\bmath{P}|$ exist at larger scales on data completed with lower spatial frequencies.
Using the wavelet formalism, it is possible to measure the power spectrum of $|\nabla\tilde{P}(l,\bmath{x})|$ and evaluate the scaling behaviour of variations of the polarisation vector $\bmath{P}$ in the $Q$--$U$ plane. The scaling behaviour follows a power law with $\gamma \approx 2.1$. We measure a small drop in the spectrum between $80 \lesssim l \lesssim 300$ arcmin. This drop corresponds well to the overlap in the $u$--$v$ plane between the Effelsberg 100--m telescope and the DRAO 26--m telescope data. The undersampling present in the 26--m telescope data has been identified as a source of unknown error in the data and could explain the measured drop of power. The wavelet analysis of $|\nabla\bmath{P}|$ also allows us to analyse the distribution of fluctuations in $\bmath{P}$ as a function of angular scale. Distributions show higher skewness at smaller scales than at larger scales. Separation of outliers contributing to the tails of the distributions allows us to measure the power spectrum for the symmetrical part of the distribution. This power spectrum possesses a steeper power law with $\gamma \approx 2.5$. The spatial distribution of some outliers shows that they are part of correlated structures across angular scales, which trace the sharpest changes in the polarisation vector $\bmath{P}$ in the field. Such higher intensity structures could be associated with compressive shocks of a supersonic turbulent medium. Future analysis applied over an extended range of scales and at higher Galactic latitude will provide a useful extension to the analysis presented here. Such analysis could confirm the appearance of a distinct type of fluctuation distribution at smaller scales as well as reveal the presence of high intensity structures at higher Galactic latitude or at larger angular scales. \bibliographystyle{apj}
\section{Introduction} At high energy, gluon-gluon fusion becomes the dominant mechanism of heavy $c \bar c$ or $b \bar b$ pair production. The cross section for single pair production can be calculated either in the collinear next-to-leading order approach or in the $k_t$-factorization approach. The $Q \bar Q$ and Higgs production are golden reactions for applications of the $k_t$-factorization approach \cite{CCH91,CE91,BE01,Teryaev,Baranov00,Zotov:2003cb,BLZ,Luszczak:2005cq,Shuvaev,LMS09,MSS2011,Pasechnik:2006du,Lipatov:2014mja,Szczurek:2014mwa}. In the $k_t$-factorization approach the basic ingredients are so-called unintegrated gluon distribution functions (UGDFs) and off-shell matrix elements. Different models of UGDFs have been proposed in the literature. The Kimber-Martin-Ryskin (KMR) \cite{KMR} UGDF is believed to include the dominant higher-order corrections. The off-shell matrix elements for $g g \to Q \bar Q$ were calculated already long ago \cite{CCH91,CE91,BE01}. The $k_{t}$-factorization formalism was applied recently in the context of experimental data measured at the LHC \cite{JKLZ,Saleev:2012np,Maciula:2013wg,Nefedov:2014qea,Karpishkov:2014epa} and a relatively good description was obtained when using the KMR UGDF. In the case of Higgs boson production both $2 \to 1$ and $2 \to 2$ subprocesses have to be taken into account \cite{Szczurek:2014mwa}. In Ref.~\cite{Baranov:2008rt} a $2 \to 3$ $g g \to c \bar c \gamma$ subprocess was taken into account when calculating cross sections for the $p p \to c \bar c \gamma X$ reaction. Recently the $k_t$-factorization approach was also applied to three-jet \cite{vanHameren:2013fla} and $Zb\bar{b}$ \cite{Zotov2015} production. A convenient formalism for the automation of the calculation of tree-level scattering amplitudes with off-shell gluons for arbitrary processes was recently introduced in Ref.~\cite{vanHameren:2012if}.
Off-shell gluons are replaced by eikonal quark-antiquark pairs, and the amplitude can be calculated with the help of standard local Feynman rules, including the eikonal gluon-quark-antiquark vertex and the eikonal quark-antiquark propagator. The well-known successful recursive methods to calculate tree-level amplitudes can directly be applied, including the ``on-shell'' recursion, or Britto-Cachazo-Feng-Witten recursion, as shown in Ref.~\cite{vanHameren:2014iua}. The heuristic introduction of the formalism in Ref.~\cite{vanHameren:2012if} has been given solid ground in Ref.~\cite{Kotko:2014aba}. Most of the effort so far has been devoted to dijet production \cite{vanHameren:2014lna,vanHameren:2014ala}. The $p p \to c \bar{c} c \bar{c} X$ reaction is interesting by itself. It was shown by us recently that this reaction is a golden reaction to study double-parton scattering (DPS) processes \cite{Luszczak:2011zp,Maciula:2013kd}. The LHCb collaboration confirmed the theoretical predictions and obtained a large cross section for production of two mesons, both containing $c$ quarks or both containing $\bar c$ antiquarks \cite{Aaij:2012dz}. The single-parton scattering (SPS) contribution was discussed in Refs.~\cite{Schafer:2012tf} and \cite{vanHameren:2014ava}. In the first case \cite{Schafer:2012tf} a high-energy approximation was used, neglecting some Feynman diagrams that are unimportant at high energies. Last year we calculated the lowest-order SPS cross section(s) including a complete set of Feynman diagrams \cite{vanHameren:2014ava} in the collinear-factorization approach. The final result was only slightly different from that obtained in the high-energy approximation. In the present letter we wish to go one step further and try to calculate the SPS cross sections for the $p p \to c \bar c c \bar c X$ reaction consistently in the $k_t$-factorization approach. Doing so we may hope that a sizeable part of higher-order corrections will be included.
On the technical side this will be the first calculation within the $k_t$-factorization approach based on a $2 \to 4$ subprocess with two off-shell initial-state partons (gluons). The result is also important in the context of studying DPS as the considered SPS mechanism constitutes an irreducible background, and its estimation is therefore of prior importance if deeper conclusions concerning DPS are to be drawn from measurements at the LHC. \section{Formalism} \begin{figure}[!h] \begin{minipage}{0.35\textwidth} \centerline{\includegraphics[width=1.0\textwidth]{SPS-double-charm-kT-v2.eps}} \end{minipage} \caption{ \small A diagrammatic representation of the considered mechanism of $c \bar c c \bar c$ final-state production via single-parton scattering within the $k_{t}$-factorization approach. } \label{fig:mechanism} \end{figure} Within the $k_t$-factorization approach the SPS cross section for the $p p \to c \bar c c \bar c X$ reaction, sketched in Fig.~\ref{fig:mechanism}, can be written as \begin{equation} d \sigma_{p p \to c \bar c c \bar c} = \int d x_1 \frac{d^2 k_{1t}}{\pi} d x_2 \frac{d^2 k_{2t}}{\pi} {\cal F}(x_1,k_{1t}^2,\mu^2) {\cal F}(x_2,k_{2t}^2,\mu^2) d {\hat \sigma}_{gg \to c \bar c c \bar c} \; . \label{cs_formula} \end{equation} In the formula above ${\cal F}(x,k_t^2,\mu^2)$ are unintegrated gluon distributions that depend on the longitudinal momentum fraction $x$, the transverse momentum squared $k_t^2$ of the gluons entering the hard process, and in general also on a (factorization) scale of the hard process $\mu^2$.
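To make the structure of Eq.~(\ref{cs_formula}) concrete, a toy Monte Carlo evaluation of the double convolution is sketched below; the gluon density and the constant partonic cross section are purely illustrative assumptions (in particular the toy density is not the KMR UGDF), and the azimuthal integrations are taken as already carried out.

```python
import numpy as np

def toy_ugdf(x, kt2):
    """Hypothetical unintegrated gluon density: a simple (1 - x) * exp(-kt2)
    shape, chosen only so that the toy integral has a closed form."""
    return (1.0 - x) * np.exp(-kt2)

def sigma_sps(n=400_000, kt2_max=10.0, sigma_hat=1.0, seed=1):
    """Monte Carlo estimate of
    sigma = int dx1 dx2 dk1t^2 dk2t^2 F(x1, k1t^2) F(x2, k2t^2) sigma_hat,
    the skeleton of the kt-factorization formula with a constant
    partonic cross section, sampled uniformly over the phase space."""
    rng = np.random.default_rng(seed)
    x1, x2 = rng.random(n), rng.random(n)
    k1, k2 = kt2_max * rng.random(n), kt2_max * rng.random(n)
    w = toy_ugdf(x1, k1) * toy_ugdf(x2, k2) * sigma_hat
    return kt2_max**2 * w.mean()   # phase-space volume times the mean weight
```

For this toy choice the exact result is $\left[\frac{1}{2}\left(1-e^{-k^2_{t,\mathrm{max}}}\right)\right]^2 \approx 0.25$, which the flat-sampling estimate reproduces at the per-cent level.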
The elementary cross section in Eq.~(\ref{cs_formula}) can be written somewhat formally as: \begin{eqnarray} d {\hat \sigma} &=& \frac{d^3 p_1}{2 E_1 (2 \pi)^3} \frac{d^3 p_2}{2 E_2 (2 \pi)^3} \frac{d^3 p_3}{2 E_3 (2 \pi)^3} \frac{d^3 p_4}{2 E_4 (2 \pi)^3} (2 \pi)^4 \delta^{4}(p_1 + p_2 + p_3 + p_4 - k_1 - k_2) \nonumber \\ &&\times\frac{1}{\mathrm{flux}} \overline{|{\cal M}_{g^* g^* \to c \bar c c \bar c}(k_{1},k_{2})|^2} \; , \label{elementary_cs} \end{eqnarray} where only the dependence of the matrix element on the four-vectors of the gluons $k_1$ and $k_2$ is made explicit. In general all four-momenta associated with partonic legs enter. The matrix element takes into account that both gluons entering the hard process are off-shell with virtualities $k_1^2 = -k_{1t}^2$ and $k_2^2 = -k_{2t}^2$. The matrix element squared is rather complicated and an explicit formula will not be given here. \begin{figure} \begin{center} \epsfig{figure=kinematics.eps,width=0.9\linewidth} \caption{\label{kinematics} Momenta of the off-shell gluons, represented as double lines on the left hand side, and the eikonal quark-antiquark pairs. The amplitude is independent of a simultaneous momentum shift $k_{1\mathrm{bgn}}+q$, $k_{1\mathrm{end}}+q$ as long as $p_A\!\cdot\!q=0$. The same holds for the other eikonal line with $p_B$.} \end{center} \end{figure} \begin{figure} \begin{center} \epsfig{figure=gaugeterms.eps,width=\linewidth} \caption{\label{gaugeterms}Some terms in the expansion of the amplitude in terms of the eikonal propagators. The eikonal quarks are denoted by lines without arrows. The double lines on the left hand side represent the off-shell gluons.
This expansion does not represent the organization of the calculation, and only gives an impression of which graphs are included.} \end{center} \end{figure} As mentioned in the introduction, the scattering amplitudes with off-shell initial state gluons are constructed using the formalism of Ref.~\cite{vanHameren:2012if}, in which off-shell gluons are represented by eikonal quark-antiquark pairs in order to arrive at gauge invariant amplitudes. Figure~\ref{gaugeterms} gives an idea of what kind of graphs are included. The Feynman rules related to the eikonal quark-antiquark-gluon vertex and eikonal propagator are \begin{equation} \raisebox{-3.5ex}{\epsfig{figure=eikVertex.eps,width=14.8ex}} \;=\; -\mathrm{i}p_A^\mu T^b_{i,j} \quad\quad,\quad\quad \raisebox{-1.5ex}{\epsfig{figure=eikPropagator.eps,width=8.5ex}} \;=\; \frac{\mathrm{i}}{p_A\!\cdot\!K} \quad\quad, \end{equation} where $p_A$ is the longitudinal momentum associated with the off-shell gluon. The external eikonal quark-antiquark pairs carry fundamental color indices, say $i,j$. It was noted in Ref.~\cite{Bury:2015dla} that the amplitude is traceless with respect to these indices, so an adjoint color index can be assigned to the off-shell gluon by contracting the amplitude with $\sqrt{2}T^a_{ij}$. The squared amplitude summed over colors gives the same result. Denoting by $\mathcal{M}^{a}$ the amplitude with the color of one off-shell gluon highlighted explicitly we have \begin{equation} \sum_{a}\left|\mathcal{M}^{a}\right|^2 = \sum_{a}\left|\sqrt{2}\sum_{i,j}\mathcal{M}_{ij}T^a_{ij}\right|^2 = \sum_{i,j,k,l}\mathcal{M}_{ij}\mathcal{M}_{kl}^*\left(\delta_{ik}\delta_{lj}-\frac{1}{N_{c}}\delta_{ij}\delta_{kl}\right) =\sum_{i,j}\left|\mathcal{M}_{ij}\right|^2 ~. \end{equation} The first term on the right hand side of Fig.~\ref{gaugeterms} contains the ``actual off-shell gluons'' as virtual gluons, representing complete propagators.
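The color identity above can be checked numerically with the SU(3) generators $T^a=\lambda^a/2$ and a random matrix $\mathcal{M}_{ij}$ made traceless in its fundamental indices; the sketch below is only a verification of the algebra, not part of the amplitude calculation.

```python
import numpy as np

# Gell-Mann matrices; the SU(3) generators are T^a = lambda^a / 2.
lam = [
    [[0, 1, 0], [1, 0, 0], [0, 0, 0]],
    [[0, -1j, 0], [1j, 0, 0], [0, 0, 0]],
    [[1, 0, 0], [0, -1, 0], [0, 0, 0]],
    [[0, 0, 1], [0, 0, 0], [1, 0, 0]],
    [[0, 0, -1j], [0, 0, 0], [1j, 0, 0]],
    [[0, 0, 0], [0, 0, 1], [0, 1, 0]],
    [[0, 0, 0], [0, 0, -1j], [0, 1j, 0]],
    (np.diag([1.0, 1.0, -2.0]) / np.sqrt(3)).tolist(),
]
T = np.array(lam, dtype=complex) / 2.0

# Random stand-in amplitude M_ij, made traceless as noted in the text.
rng = np.random.default_rng(0)
M = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
M -= np.trace(M) / 3.0 * np.eye(3)

Ma = np.sqrt(2.0) * np.einsum('ij,aij->a', M, T)  # M^a = sqrt(2) M_ij T^a_ij
lhs = np.sum(np.abs(Ma)**2)   # sum over the adjoint color index
rhs = np.sum(np.abs(M)**2)    # sum over the fundamental color indices
```

Both sums agree to machine precision, as guaranteed by the Fierz identity $\sum_a T^a_{ij}T^a_{kl}=\frac{1}{2}(\delta_{il}\delta_{jk}-\frac{1}{N_c}\delta_{ij}\delta_{kl})$ together with the tracelessness of $\mathcal{M}_{ij}$.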
The term with the complete gluon propagators would diverge if $k_{1t}^2\to0$ and/or $k_{2t}^2\to0$, so the whole amplitude has to be multiplied by $\sqrt{k_{1t}^2k_{2t}^2}$ to reproduce the correct collinear limit. The calculation has been performed with the help of A Very Handy LIBrary~\cite{Bury:2015dla}. In this Fortran library, scattering amplitudes are calculated numerically as a function of the external four-momenta via Dyson-Schwinger recursion~\cite{Caravaglios:1995cd}. It is a recursion of off-shell currents, which automatically factorizes the calculation of the sum of all Feynman graphs such that the multiplications represented by vertices are executed only once for each vertex, while such a vertex may occur in several graphs, for identical kinematics. This recursion is sketched in Fig.~\ref{DysonSchwinger} and Fig.~\ref{DSexample}. The auxiliary eikonal quarks and anti-quarks are treated as external particles, so eventually eight-point amplitudes are calculated. \begin{figure} \begin{center} \epsfig{figure=DysonSchwinger.eps,width=0.8\linewidth} \caption{\label{DysonSchwinger}Dyson-Schwinger recursion for off-shell currents. The thick lines represent off-shell (virtual) particles and the thin lines represent on-shell external particles. The sum is over all partitions of these external particles over the different blobs and all flavors for the virtual particles that are allowed according to the Feynman rules.} \end{center} \end{figure} \begin{figure} \begin{center} \epsfig{figure=DSexample.eps,width=\linewidth} \caption{\label{DSexample}An explicit example of one Dyson-Schwinger recursive step for a certain off-shell current.} \end{center} \end{figure} AVHLIB allows for various choices of the representation of the external helicities and colors.
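The way the recursion reuses sub-currents can be illustrated with a toy model: for a scalar theory with only three-point vertices and unit couplings and propagators, the memoized recursion over subsets of external particles below returns the number of tree graphs contributing to an off-shell current, reproducing the known count $(2n-3)!!$ for $n$ external legs; this toy is ours and is unrelated to the actual AVHLIB implementation.

```python
from functools import lru_cache

def current(particles):
    """Toy off-shell current: with unit vertices and propagators the
    recursion simply counts the tree graphs built from 3-point vertices."""
    return _J(frozenset(particles))

@lru_cache(maxsize=None)
def _J(s):
    if len(s) == 1:
        return 1                       # a single on-shell external leg
    items = sorted(s)
    first, rest = items[0], items[1:]  # fix one element: unordered partitions
    total = 0
    for bits in range(2 ** len(rest)):
        s1 = frozenset({first} | {rest[i] for i in range(len(rest)) if bits >> i & 1})
        s2 = s - s1
        if s2:                         # each partition feeds one 3-point vertex
            total += _J(s1) * _J(s2)
    return total
```

Thanks to the memoization each sub-current is evaluated only once, even though it appears in many graphs, which is precisely the factorization the Dyson-Schwinger recursion provides for the full amplitudes.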
These choices include both the color-ordered representation~\cite{Kanaki:2000ms,Maltoni:2002mq}, with exact summation over color, and the color-dressed representation~\cite{Papadopoulos:2005ky,Duhr:2006iq}, with Monte Carlo summation for large multiplicities. Helicity configurations can be summed exactly, or, again for large multiplicities, treated in a Monte Carlo approach, both discrete and with continuous random polarizations~\cite{Draggiotis:1998gr}. The library includes a full Monte Carlo program with an adaptive phase space generator~\cite{vanHameren:2007pt,vanHameren:2010gg} that deals with the integration variables related to both the initial-state momenta and the final-state momenta. The program can also conveniently generate a file of unweighted events, and this approach was used for the analysis presented in this paper. In the present calculation we use $\mu_f^2 = (\sum_{i=1}^4 m_{i,t})^2$ as the factorization scale and $m_c$ = 1.5 GeV in both the $k_t$-factorization and the reference collinear-factorization calculations. Uncertainties related to the choice of the parameters were discussed e.g. in Ref.~\cite{vanHameren:2014ava} and will not be considered here. Here we wish to concentrate on the relative effect and modifications with respect to the results of the collinear-factorization calculations presented already in the literature \cite{Schafer:2012tf,vanHameren:2014ava}. \section{First Results} In this section we wish to compare the new results of the $k_t$-factorization approach to those obtained by us in Ref.~\cite{vanHameren:2014ava} in the collinear-factorization approach. In Fig.~\ref{fig:dsig_dpt_dy} we show standard single particle distributions in charm quark/antiquark transverse momentum (left panel) and its rapidity (right panel). We predict an enhancement of the cross section at large transverse momenta of $c$ or $\bar{c}$ compared to the collinear-factorization approach.
The rapidity distributions in both approaches are rather similar (see the right panel of the figure). \begin{figure}[!h] \begin{minipage}{0.47\textwidth} \centerline{\includegraphics[width=1.0\textwidth]{dsig_dpt_KMRvsPM.eps}} \end{minipage} \hspace{0.5cm} \begin{minipage}{0.47\textwidth} \centerline{\includegraphics[width=1.0\textwidth]{dsig_dy_KMRvsPM.eps}} \end{minipage} \caption{ \small Distributions in $c$ quark ($\bar c$ antiquark) transverse momentum (left panel) and rapidity (right panel). The $k_t$-factorization result (solid line) is compared with the collinear-factorization result (dashed line). } \label{fig:dsig_dpt_dy} \end{figure} Distributions in rapidity of the $c c$ (or $\bar c \bar c$) and $c \bar c$ pairs, defined as $Y_{cc} = (y_c + y_c)/2$ and $Y_{c \bar c} = (y_c + y_{\bar c})/2$ respectively, are shown in Fig.~\ref{fig:dsig_dysum}. The distributions are much narrower than those for a single quark/antiquark, which reflects the fact that the two different $c$ quarks (or two different $\bar c$ antiquarks) typically have different rapidities. The discussed distributions in $y_c$ and $Y_{cc}$ would be identical only if $y_{c,1} = y_{c,2}$ (strong rapidity correlations). This will become clearer when inspecting the rapidity differences in the next plots. \begin{figure}[!h] \begin{minipage}{0.47\textwidth} \centerline{\includegraphics[width=1.0\textwidth]{dsig_dycc_KMRvsPM.eps}} \end{minipage} \hspace{0.5cm} \begin{minipage}{0.47\textwidth} \centerline{\includegraphics[width=1.0\textwidth]{dsig_dyccbar_KMRvsPM.eps}} \end{minipage} \caption{ \small Distributions in rapidity $Y_{cc} = (y_c + y_c)/2$ (left panel) and $Y_{c \bar c} = (y_c + y_{\bar c})/2$ (right panel). The meaning of the curves is the same as in Fig.~\ref{fig:dsig_dpt_dy}. } \label{fig:dsig_dysum} \end{figure} Similar distributions but for the rapidity distance between two $c$ quarks (or two $\bar c$ antiquarks) and between $c$ and $\bar c$ are shown in Fig.~\ref{fig:dsig_dydiff}.
On average the rapidity distance between $c$ and $c$ is larger than that between $c$ and $\bar c$. This can be understood easily in the high-energy approximation discussed in Ref.~\cite{Schafer:2012tf} by inspecting the contributing diagrams. Some enhancement at small rapidity separations can be observed in the $k_t$-factorization approach compared to the collinear approach. \begin{figure}[!h] \begin{minipage}{0.47\textwidth} \centerline{\includegraphics[width=1.0\textwidth]{dsig_dydiff_cc_KMRvsPM.eps}} \end{minipage} \hspace{0.5cm} \begin{minipage}{0.47\textwidth} \centerline{\includegraphics[width=1.0\textwidth]{dsig_dydiff_ccbar_KMRvsPM.eps}} \end{minipage} \caption{ \small Distributions in the rapidity differences $\Delta Y_{cc} = y_{c_1} - y_{c_2}$ (left panel) and $\Delta Y_{c \bar c} = y_c - y_{\bar c}$ (right panel). The meaning of the curves is the same as in Fig.~\ref{fig:dsig_dpt_dy}. } \label{fig:dsig_dydiff} \end{figure} The distributions in rapidity distance are strongly correlated with the $M_{cc}$ and $M_{c \bar c}$ distributions shown in Fig.~\ref{fig:dsig_dMcc}. These distributions are, however, difficult to measure, as it is mesons rather than quarks or antiquarks that are observed. \begin{figure}[!h] \begin{minipage}{0.47\textwidth} \centerline{\includegraphics[width=1.0\textwidth]{dsig_dMcc_KMRvsPM.eps}} \end{minipage} \hspace{0.5cm} \begin{minipage}{0.47\textwidth} \centerline{\includegraphics[width=1.0\textwidth]{dsig_dMccbar_KMRvsPM.eps}} \end{minipage} \caption{ \small Invariant mass distributions in $M_{cc}$ (left panel) and $M_{c \bar c}$ (right panel). The meaning of the curves is the same as in Fig.~\ref{fig:dsig_dpt_dy}. } \label{fig:dsig_dMcc} \end{figure} Quite interesting are the azimuthal angle correlations between $c$ and $c$ or between $c$ and $\bar c$. The corresponding distributions are shown in Fig.~\ref{fig:dsig_dphi}. We note a much stronger decorrelation of the two $c$ quarks, or of $c$ and $\bar c$, in the $k_t$-factorization approach compared to the collinear approach.
This is due to the explicit account of gluon virtualities (transverse momenta). We will return to this point when discussing azimuthal correlations between mesons at the end of this section. \begin{figure}[!h] \begin{minipage}{0.47\textwidth} \centerline{\includegraphics[width=1.0\textwidth]{dsig_dPhicc_KMRvsPM.eps}} \end{minipage} \hspace{0.5cm} \begin{minipage}{0.47\textwidth} \centerline{\includegraphics[width=1.0\textwidth]{dsig_dPhiccbar_KMRvsPM.eps}} \end{minipage} \caption{ \small Azimuthal angle correlations between two $c$ quarks (left panel) and between $c$ and $\bar c$ (right panel). The meaning of the curves is the same as in Fig.~\ref{fig:dsig_dpt_dy}. } \label{fig:dsig_dphi} \end{figure} Next we wish to visualize the regions of the transverse momenta of the initial gluons that give a sizeable contribution to the SPS $p p \to c \bar c c \bar c X$ cross section. In Fig.~\ref{fig:dsig_dk1tdk2t} we show a two-dimensional distribution in the initial-gluon transverse momenta. The dependence on $k_{1t}$ and $k_{2t}$ shown in the figure is determined by the UGDF used in the calculation as well as by the dependence of the matrix element on $k_{1t}$ and $k_{2t}$. Other models of unintegrated gluon distributions would give different dependencies. Clearly we get large contributions from regions far from the collinear case ($k_{1t}$ = 0 and $k_{2t}$ = 0). This of course has consequences for the other observables discussed above, through the dependence of the matrix element on the gluon transverse momenta, $\overline{|{\cal M}(k_{1t},k_{2t})|^2}$, and its correlation with other kinematical variables. \begin{figure} \includegraphics[width=8cm]{map_q1tq2t_vH_v2.eps} \caption{ \small Two-dimensional distribution in the transverse momenta of the initial gluons in the $p p \to c \bar c c \bar c$ SPS process at $\sqrt{s}$ = 7 TeV.
} \label{fig:dsig_dk1tdk2t} \end{figure} We will not discuss in the present letter the correlations between the gluon virtualities (or their transverse momenta) and other kinematical variables related to the charm quarks and antiquarks. \begin{figure}[!h] \begin{minipage}{0.47\textwidth} \centerline{\includegraphics[width=1.0\textwidth]{dsig_dminv_lhcb_D0D0_kT.eps}} \end{minipage} \hspace{0.5cm} \begin{minipage}{0.47\textwidth} \centerline{\includegraphics[width=1.0\textwidth]{dsig_dphid_lhcb_D0D0_kT.eps}} \end{minipage} \caption{ \small Distributions in the $D^0D^0$ invariant mass (left) and in the azimuthal angle between both $D^0$'s (right) within the LHCb acceptance. The DPS contribution (dashed line) is compared with the SPS one calculated within the $k_t$-factorization approach (dashed-dotted line). The SPS result from our previous studies \cite{vanHameren:2014ava}, calculated in the LO collinear-factorization approach, is also shown here (dotted line). } \label{fig:LHCb_D0D0} \end{figure} So far we have considered the production of $c \bar c c \bar c$ quarks/antiquarks. As discussed in our previous paper \cite{vanHameren:2014ava}, such a final state may lead to the production of two $D$ mesons both containing $c$ quarks, or both containing $\bar c$ antiquarks, which is not possible e.g. for the $c \bar c$ final state. As explained in Ref.~\cite{vanHameren:2014ava}, the DPS gives cross sections very similar to those measured by the LHCb collaboration \cite{Aaij:2012dz}. How important the SPS contribution discussed in this paper is, calculated here in the $k_t$-factorization approach, is shown in Fig.~\ref{fig:LHCb_D0D0}. For comparison we also show the SPS results calculated in the collinear-factorization approach \cite{vanHameren:2014ava}. The two approaches give somewhat different shapes of the correlation observables, despite the fact that the integrated cross sections are rather similar, as discussed already at the parton level.
Our results, so far the most advanced in the literature as far as the SPS contribution is concerned, are not able to explain the discrepancy between the DPS contribution and the LHCb experimental data. Whether the discrepancies are due to simplifications in the treatment of DPS requires further studies, including for example spin and flavour correlations. Some work in this direction has already started \cite{Mulders2015}. \section{Conclusions} In the present paper we have made a first calculation of the cross section for $p p \to c \bar c c \bar c X$ in the $k_t$-factorization approach, i.e.\ focusing on the single parton scattering process. This is the first $2 \to 4$ process to which $k_t$-factorization has been applied. In this calculation we have used the Kimber-Martin-Ryskin unintegrated gluon distribution(s), which effectively takes into account the dominant higher-order corrections. The off-shell matrix element was calculated using a new technique developed recently in Krak\'ow. The results of the $k_t$-factorization approach were compared with the results of the collinear-factorization approach. In general, the $k_t$-factorization results are only slightly bigger than those of the collinear approach. An exception is the transverse momentum distribution above 10 GeV, where a sizeable enhancement has been observed. The inclusion of gluon virtualities leads to a decorrelation in the azimuthal angle between $c$ and $c$ or $c$ and $\bar c$. Since the cross section is in general very similar to that of the collinear-factorization approach, we conclude that the $c \bar c c \bar c$ final state at the LHC energies is dominantly produced by double parton scattering, as discussed in our recent papers, and the SPS contribution, although interesting by itself, is rather small.
A comparison of the predictions of double-parton scattering and the recent LHCb data for azimuthal angle correlations between $D^0$ and $D^0$ (or $\bar{D}^0$ and ${\bar D}^0$) mesons strongly suggests that the assumption of two fully independent partonic scatterings in DPS ($g g \to c \bar c \otimes g g \to c \bar c$) may be too crude or even invalid. Some possible reasons were discussed in Ref.~\cite{Mulders2015}. The effect found there is, however, too small to explain the rather large effect observed by the LHCb collaboration. This remains a challenge for future theoretical studies and should be confirmed by the LHCb collaboration at $\sqrt{s} = 13$ and $14$ TeV. \vspace{1cm} {\bf Acknowledgments} This study was partially supported by the Centre for Innovation and Transfer of Natural Sciences and Engineering Knowledge in Rzesz{\'o}w. Graphs were drawn with the help of Jaxodraw.
\section{Introduction} Arithmetic circuits are the most natural model of computation for a wide variety of algebraic problems, such as matrix multiplication, computing fast Fourier transforms, etc. The problem of proving lower bounds for arithmetic circuits is one of the most fundamental and interesting problems in complexity theory. Proving superpolynomial lower bounds for general arithmetic circuits would resolve the $\VP$ versus $\VNP$ conjecture~\cite{Valiant79}, the algebraic analog of the ${\mathbb{P}}$ vs $\NP$ conjecture. This is one of the holy grails of complexity theory and has received a lot of attention, since it is a more structured and potentially easier question to understand and analyse than the ${\mathbb{P}}$ vs $\NP$ problem. The intimately related problem of polynomial identity testing (PIT) is the problem of testing whether a polynomial, given as an arithmetic circuit, is identically zero. In the setting where the algorithm cannot look inside the circuit, but only has access to evaluations of the circuit, the problem is referred to as blackbox PIT. There is a very simple randomized algorithm for this problem: simply evaluate the polynomial at a random point from a large enough domain. With very high probability, a nonzero polynomial will have a nonzero evaluation~\cite{Schwartz80, Zippel79}. It is a very important and fundamental question to derandomize the above algorithm.
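The randomized blackbox test just described can be sketched as follows; the encoding of the input polynomial as a callable is of course only an illustration of blackbox access.

```python
import random

def randomized_pit(poly, num_vars, trials=20, domain_size=10**9):
    """Blackbox PIT via random evaluation: by the Schwartz-Zippel lemma,
    a nonzero polynomial of total degree d vanishes at a uniformly random
    point of S^n with probability at most d/|S|, so a single nonzero
    evaluation certifies that poly is not identically zero."""
    for _ in range(trials):
        point = [random.randrange(domain_size) for _ in range(num_vars)]
        if poly(*point) != 0:
            return False  # certainly nonzero
    return True  # identically zero, except with tiny error probability

# (x + y)^2 - x^2 - 2xy - y^2 is identically zero; xy - 1 is not
zero_poly = lambda x, y: (x + y) ** 2 - x * x - 2 * x * y - y * y
nonzero_poly = lambda x, y: x * y - 1
```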
In a seminal work, Kabanets and Impagliazzo~\cite{KI04} showed that the problem of proving lower bounds for arithmetic circuits and the problem of derandomizing identity testing are essentially equivalent! These two problems have occupied a central position in complexity theory, and despite much attention, our understanding of general arithmetic circuits is still very limited. Thus there has been a great deal of effort in understanding the complexity of restricted classes of arithmetic circuits, in an attempt to obtain a better understanding of the general problem. Low depth arithmetic circuits in particular are one such well studied class. \paragraph{Lower bounds for homogeneous low depth arithmetic circuits.} The last few years have seen a tremendous amount of exciting progress on the problems of ``depth reduction" of general arithmetic circuits to low depth arithmetic circuits, and of proving lower bounds for low depth arithmetic circuits. Using depth reduction techniques~\cite{VSBR83, AV08, koiran, Tavenas13}, it was shown that $N^{\omega(\sqrt n)}$ lower bounds (for polynomials in $N$ variables and of degree $n$) for just homogeneous depth 4 arithmetic circuits of bottom fan-in $\sqrt n$ would suffice to separate $\VP$ from $\VNP$ and imply superpolynomial lower bounds for general arithmetic circuits. At the same time there was a very exciting line of work proving $N^{\Omega(\sqrt n)}$ lower bounds for the same model of arithmetic circuits (and in fact for even the more general class of homogeneous depth 4 arithmetic circuits with no restriction on bottom fan-in)~\cite{GKKS12, FLMS13, KSS13, KS-formula, KLSS14, KS-full}. \paragraph{Lower bounds for non-homogeneous low depth arithmetic circuits.} Despite all this remarkable progress, and some very strong lower bounds for homogeneous low depth arithmetic circuits, much less is understood in the non-homogeneous world.
Only mild lower bounds are known when we drop the condition of homogeneity, even for very simple classes of low depth arithmetic circuits. For depth 3 circuits over fields of characteristic 0, only quadratic lower bounds are known~\cite{SW01, Shp01}, and there has been no progress on this question in more than a decade now. In a beautiful depth reduction result over fields of characteristic 0, Gupta et al.~\cite{GKKS13} showed that $N^{\omega(\sqrt n)}$ lower bounds (for polynomials in $N$ variables and of degree $n$) for the class of non-homogeneous {\it depth 3} circuits would already separate $\VP$ from $\VNP$. It was recently observed by Kayal and Saha~\cite{KayalSaha14}~\footnote{They attribute the observation to Ramprasad Saptharishi.} that in fact it suffices to prove such lower bounds for depth 3 circuits with bottom fan-in $\sqrt n$. Until recently (in particular, until the work of~\cite{KayalSaha14}), the best known lower bounds for depth 3 circuits even with bottom fan-in 2 were still just quadratic. In a very nice recent result, Kayal and Saha~\cite{KayalSaha14} showed an exponential lower bound for depth 3 circuits over fields of characteristic 0 whose bottom fan-in is at most $N^{\mu}$, where $N$ is the number of variables and $0 \leq \mu < 1$ is an arbitrary constant. More precisely, they prove the following. \begin{thm}[Kayal-Saha~\cite{KayalSaha14}]~\label{thm: KSaha} Let ${\mathbb{F}}$ be a field of characteristic zero. Then, for every constant $0 \leq \mu < 1$ there is a family $\{P_N\}$ of degree $n$ polynomials in $N = n^{O_{\mu}(1)}$ variables over ${\mathbb{F}}$ in $\VNP$ such that any depth three circuit of bottom fan-in at most $N^{\mu}$ computing $P_N$ has top fan-in at least $N^{\Omega_{\mu}{(\sqrt{n})}}$. \end{thm} \paragraph{Our Model:} In this work, we consider the model of sums of products of polynomials in few variables.
More formally, we consider representations of polynomials $P$ (of degree $n$ in $N = n^{O(1)}$ variables) in the form \begin{equation}~\label{def:model intro} P = \sum_{i=1}^T \prod_{j = 1}^d Q_{ij} \end{equation} where each $Q_{ij}$ is an arbitrary polynomial (of arbitrarily high degree) in at most $s$ variables. We call this the model of $\Sigma\Pi\left(\Sigma\Pi\right)^{[s]}$ circuits. Observe that the model is more general than that considered in~\cite{KayalSaha14}. The model in~\cite{KayalSaha14} corresponds to sums of products of {\it linear forms} in few variables. In our case, the $Q_{ij}$ no longer have to be linear forms, but can be general polynomials of arbitrarily high degree. Prior to this work, even for the case when $s = 2$, there were no nontrivial lower bounds known for this model. $\Sigma\Pi\left(\Sigma\Pi\right)^{[s]}$ circuits for $s \geq 2$ can also be seen as a generalization of the model of sums of products of univariate polynomials (which corresponds to $\Sigma\Pi\left(\Sigma\Pi\right)^{[s]}$ circuits with $s=1$), which has been very well studied in the arithmetic circuit complexity literature. Lower bounds for $\Sigma\Pi\left(\Sigma\Pi\right)^{[1]}$ circuits follow from the works of Nisan~\cite{Nisan91} and Saxena~\cite{S07}. Over the last few years, there have been some very nice results giving quasipolynomial time blackbox identity testers for $\Sigma\Pi\left(\Sigma\Pi\right)^{[1]}$ circuits~\cite{ForbesS13, FS13, ASS13}. $\Sigma\Pi\left(\Sigma\Pi\right)^{[s]}$ circuits can also be seen as a generalization of the widely studied model of diagonal circuits, since polynomials computable by diagonal circuits can be represented as $\Sigma\Pi\left(\Sigma\Pi\right)^{[1]}$ circuits without much blow up in the size of the representation~\cite{S07}.
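To make the model concrete, here is one possible (purely illustrative) encoding of a $\Sigma\Pi\left(\Sigma\Pi\right)^{[s]}$ circuit: each $Q_{ij}$ is a function together with the indices of the at most $s$ variables it reads, and the circuit is a list of product terms.

```python
def evaluate_circuit(terms, point):
    # Evaluate sum_i prod_j Q_ij(point), where each Q_ij is a pair
    # (variable_indices, function_on_those_variables).
    total = 0
    for term in terms:
        prod = 1
        for var_idxs, q in term:
            prod *= q(*(point[i] for i in var_idxs))
        total += prod
    return total

def parameter_s(terms):
    # The parameter s: the largest number of variables any Q_ij depends on.
    return max(len(var_idxs) for term in terms for var_idxs, _ in term)

# x0*x1 + x2^3, written with s = 1 (a sum of products of univariates)
terms = [
    [((0,), lambda x: x), ((1,), lambda x: x)],
    [((2,), lambda x: x ** 3)],
]
```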
Although $\Sigma\Pi\left(\Sigma\Pi\right)^{[1]}$ circuits seem fairly well understood from the point of view of lower bounds and derandomization of polynomial identity testing, if one considers the model of sums of products of bivariate polynomials ($\Sigma\Pi\left(\Sigma\Pi\right)^{[2]}$ circuits), then our understanding changes completely. Although only seemingly a mild generalization of $\Sigma\Pi\left(\Sigma\Pi\right)^{[1]}$ circuits, the known proof techniques for lower bounds for $\Sigma\Pi\left(\Sigma\Pi\right)^{[1]}$ circuits (which were proved using the {\it evaluation dimension} techniques of~\cite{Nisan91, Raz06}) seem to completely break down in this setting. Thus, studying this model seems like an interesting next step towards understanding non-homogeneous small-depth algebraic computation. As far as we are aware, there are also (not surprisingly) no nontrivial PIT results for the model. We are now ready to state our results. \subsection{Our results} \paragraph{Lower bounds : } We show an exponential lower bound for the model of $\Sigma\Pi\left(\Sigma\Pi\right)^{[s]}$ circuits when $s$ is at most $N^{\mu}$ for any constant $0 \leq \mu < 1$ ($N$ is the number of variables). More precisely, we show the following. \begin{thm}~\label{thm:mainthm intro} Let ${\mathbb{F}}$ be a field of characteristic zero and $\mu$ be any constant such that $0 \leq \mu < 1$.
There exists a family $\{P_N\}$ of polynomials over ${\mathbb{F}}$ in $\VNP$, where $P_N$ is of degree $n$ in $N = n^{O_{\mu}(1)}$ variables, such that for any representation of $P_N$ of the form $$P_N = \sum_{i = 1}^T\prod_{j = 1}^{d} Q_{ij}$$ where each $Q_{ij}$ is a polynomial in at most $s =N^{\mu}$ variables, it must be true that $$T\cdot d \geq n^{\Omega_{\mu}(\sqrt{n})}$$ \end{thm} Given the depth reduction results of~\cite{GKKS13} and the observation mentioned earlier from~\cite{KayalSaha14}, it is known that any asymptotic improvement in the exponent of the lower bound (even for $s = O(\sqrt{n})$) would imply that $\VNP$ is different from $\VP$. As discussed in the introduction, even though this model seems a natural generalization of the model of sums of products of univariate polynomials, our lower bound technique is very different from those used in proving lower bounds for sums of products of univariates. Our lower bound proof is based on ideas developed in the course of investigating homogeneous depth four arithmetic circuits~\cite{KLSS14, KS-full}. \paragraph{Blackbox PIT : } We also consider the problem of PIT for the model of $\Sigma\Pi\left(\Sigma\Pi\right)^{[s]}$ circuits. For general sums of products of even bivariate polynomials, this question seems quite difficult, and as of now we are not even able to obtain subexponential time PIT. However, as a consequence of our lower bounds, and by suitably adapting the hardness-randomness tradeoffs for arithmetic circuits developed in~\cite{KI04} and~\cite{DSY09}, we are able to obtain PIT results in the setting where the top fan-in of the circuit is bounded, and when we have the promise that the circuit computes a polynomial of low individual degree.
Our understanding of blackbox PIT for depth four circuits is very limited, and the results known are in very restricted settings. Saraf and Volkovich~\cite{SarafV11} gave blackbox PIT algorithms for multilinear depth 4 circuits with bounded top fan-in. To the best of our knowledge, the idea in~\cite{SarafV11} does not extend to the case of non-multilinear depth 4 circuits, even when the individual degree of each of the variables is at most $2$. Recently, Oliveira et al.~\cite{OSV14} gave a subexponential time blackbox PIT for all depth four multilinear circuits\footnote{The running time increases with the size of the circuit, and in particular, it is subexponential time for polynomial sized depth four multilinear circuits.}. In the non-multilinear setting, Agrawal et al.~\cite{ASSS12} gave PIT algorithms for constant depth formulas in which the number of {\it occurrences} of each variable is bounded. Without going into the technical details, we remark that the notion of {\it bounded occur} is a generalization of the well studied notion of bounded reads. The most closely related results to those in this paper that we are aware of are the recent papers of Gupta~\cite{Gupta14} and Mukhopadhyay~\cite{Mukhopadhyay15}, which give blackbox PIT results for sums of products of low degree polynomials, where the top sum fan-in is bounded and the circuits satisfy certain algebraic-geometric restrictions. So, the question of getting PIT results for general depth four circuits (even with bounded top and bottom fan-in) remains wide open. For instance, we still do not know any nontrivial PIT results for a sum of constantly many products of degree 2 polynomials. Though we still do not know how to deal with this question, when we replace the polynomials of low degree with polynomials in few variables (but of arbitrarily large degree), we are able to obtain quasipolynomial PIT results.
There is one added caveat, however: the final polynomial computed needs to be of low individual degree (as seems necessary for PIT results obtained from the known hardness-randomness tradeoffs for bounded depth circuits~\cite{DSY09}). We now formally state the theorem. \begin{thm}~\label{thm:mainthm2 intro} Let $c$ and $\mu$ be arbitrary constants such that $c> 0$ and $0 \leq \mu < 1/2$, and let ${\mathbb{F}}$ be a field of characteristic zero. Let ${\cal C}$ be the set of polynomials $P$ in $N$ variables and individual degree at most $k$ over ${\mathbb{F}}$, with the property that $P$ can be expressed as $$P = \sum_{i = 1}^T \prod_{j = 1}^d Q_{ij}$$ such that \begin{enumerate} \item $T < \log^c N$ \item $k < \log ^c N$ \item $d < N^c$ \item each $Q_{ij}$ depends on at most $N^{\mu}$ variables \end{enumerate} Then, there exists a constant $\epsilon < 1$, dependent only on $c$ and $\mu$, such that there is a hitting set of size $\exp(N^{\epsilon})$ for ${\cal C}$, which can be constructed in time $\exp(N^{\epsilon})$. \end{thm} Moreover, from our proof it also follows that if each of the polynomials $Q_{ij}$ depends on only $\log^{O(1)} N$ variables, then both the size of the hitting set and the time to construct it are upper bounded by a quasipolynomial function of $N$. \paragraph{Organisation of the paper:} We provide an overview of the proofs in Section~\ref{sec:overview}. We describe some definitions and preliminaries in Section~\ref{sec:prelims}. We present the proof of the lower bound in Section~\ref{sec:lower bound}. We describe the application to blackbox PIT in Section~\ref{sec:pit} and conclude with some open problems in Section~\ref{sec:open ques}.
\section{Proof overview}~\label{sec:overview} In this section, we provide an overview of the main ideas in the proofs of Theorem~\ref{thm:mainthm intro} and Theorem~\ref{thm:mainthm2 intro}. \subsection{Overview of proof of Theorem~\ref{thm:mainthm intro}}~\label{sec:overview lower bounds} We restate Theorem~\ref{thm:mainthm intro} for the sake of clarity. \\ {\bf Theorem~\ref{thm:mainthm intro}}~\label{thm:mainthm intro2} {\it Let ${\mathbb{F}}$ be a field of characteristic zero and $\mu$ be any constant such that $0 \leq \mu < 1$. There exists a family $\{P_N\}$ of polynomials over ${\mathbb{F}}$ in $\VNP$, where $P_N$ is of degree $n$ in $N = n^{O_{\mu}(1)}$ variables, such that for any representation of $P_N$ of the form $$P_N = \sum_{i = 1}^T\prod_{j = 1}^{d} Q_{ij}$$ where each $Q_{ij}$ is a polynomial in at most $N^{\mu}$ variables, it must be true that $$T\cdot d \geq n^{\Omega_{\mu}(\sqrt{n})}$$ } The key difference between proving the above lower bound and the lower bounds for homogeneous depth four circuits is that the formal degree of the circuit in the above case could be much larger than the degree of the polynomial, which is $n$. In fact, even the fan-in of the product gates at level 2, that is, $d$, could be much larger than $n$. Therefore, a straightforward application of homogeneous depth four circuit lower bounds does not seem to work. Our proof is in two steps and at a high level follows the strategy of the lower bound for non-homogeneous depth three circuits with bounded bottom fan-in by Kayal and Saha~\cite{KayalSaha14}, with some key differences. \begin{itemize} \item In the first step, we obtain another representation of $P_N$, as $$P_N = \sum_{i = 1}^{Td2^{O(\sqrt{n})}}\prod_{j = 1}^{n} Q_{ij}'$$ where every monomial in each of the $Q_{ij}'$ has {\it support}\footnote{A monomial is said to have support $s$ if it depends on at most $s$ distinct variables.} at most $s$, although each $Q_{ij}'$ could now depend on all the variables.
The key property that we have gained from this transformation is that the fan-in of the product gates at level two is now bounded by $n$, which is the degree of $P_N$. However, we have no bound on the degree of the $Q_{ij}'$. Moreover, we have blown up the top fan-in a bit, but we will be able to tolerate this loss if $s$ is small. \item In the second step, the strategy can be seen in two stages. If $\mu$ were very small, say $0.001$, then we could have taken advantage of the fact that in the representation obtained in the first step above, the product fan-in is at most $n$ and the support of every monomial in each of the $Q_{ij}'$ is small, to prove an upper bound on the dimension of the space of projected shifted partial derivatives of the above representation. Comparing this dimension with that of our hard polynomial gives us our lower bound. For larger values of $\mu$, we use random restrictions to ensure that all the monomials of {\it large support} in $Q_{ij}'$ are set to zero. At the end of such a procedure, we are back to the low-support case. This step of the proof is closely along the lines of the proof of the homogeneous depth four arithmetic circuit lower bounds in~\cite{KLSS14, KS-full}, although in the present case the formal degree of the circuit could be as large as $n^2$, which is much larger than the degree of the polynomial $P_N$. For such large formal degrees, in general we do not even know lower bounds for non-homogeneous depth three circuits. \end{itemize} We would like to point out that the first step of the proof above is similar to the homogenization step in the proof of lower bounds for general depth three circuits with bounded bottom fan-in by Kayal and Saha~\cite{KayalSaha14}. The key difference is that while the circuit they obtain at the end of this step is a strictly homogeneous circuit of formal degree $n$, we are unable to get a similar structure.
The complication stems from the fact that when the $Q_{ij}$ are not affine forms, they could contain monomials of varying degrees. In this case, it seems difficult to obtain a strict homogenization with a small blow up in size. We get around this deficiency by a more subtle analysis in the second step, where we show a lower bound for a circuit which has a formal degree much larger than the degree of the polynomial being computed, but has some added structure. This step critically uses the fact that the product fan-in at level two of these circuits is at most $n$, and that the support of every monomial in each of the $Q_{ij}'$ is small. \subsection{Overview of proof of Theorem~\ref{thm:mainthm2 intro}} We first restate Theorem~\ref{thm:mainthm2 intro}. \\ {\bf Theorem~\ref{thm:mainthm2 intro}}~\label{thm:mainthm2 intro 2}{\it Let $c$ and $\mu$ be arbitrary constants such that $c> 0$ and $0 \leq \mu < 1/2$, and let ${\mathbb{F}}$ be a field of characteristic zero. Let ${\cal C}$ be the set of polynomials $P$ in $N$ variables and individual degree at most $k$ over ${\mathbb{F}}$, with the property that $P$ can be expressed as $$P = \sum_{i = 1}^T \prod_{j = 1}^d Q_{ij}$$ such that \begin{enumerate} \item $T < \log^c N$ \item $k < \log ^c N$ \item $d < N^c$ \item each $Q_{ij}$ depends on at most $N^{\mu}$ variables \end{enumerate} Then, there exists a constant $\epsilon < 1$, dependent only on $c$ and $\mu$, such that there is a hitting set of size $\exp(N^{\epsilon})$ for ${\cal C}$, which can be constructed in time $\exp(N^{\epsilon})$. } \vspace{2mm} The construction of the hitting set is based on the well-known idea of using hard functions for derandomization. Our goal is to reduce the number of variables from $N$ to at most $N^{\delta}$ for some constant $\delta < 1$, while maintaining the zeroness/nonzeroness of the polynomial being tested~\cite{KI04, DSY09}.
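Once the variable count is small, the argument can be finished by brute force: a grid $S^n$ with $|S| = k+1$ points per variable hits every nonzero polynomial of individual degree at most $k$ (a standard consequence of the combinatorial Nullstellensatz). A minimal sketch of this finishing step, with the polynomial given as a blackbox callable:

```python
from itertools import product

def grid_hitting_set(num_vars, individual_degree):
    # All points of {0, 1, ..., k}^n: size (k + 1)^n, guaranteed to contain
    # a nonzero point of every nonzero polynomial whose degree in each
    # variable is at most k.
    return product(range(individual_degree + 1), repeat=num_vars)

def is_identically_zero(poly, num_vars, individual_degree):
    # Deterministic blackbox PIT by exhaustive evaluation over the grid.
    return all(poly(*p) == 0 for p in grid_hitting_set(num_vars, individual_degree))
```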
Once we have done this, we take a brute-force hitting set of size $\text{(Degree + 1)}^{\text{Number of variables}}$, as given by Lemma~\ref{lem: comb nulls}. To reduce the number of variables, we use the framework introduced by Kabanets and Impagliazzo~\cite{KI04}. The key technical step of the proof is to show that for a non-zero polynomial $P$ as defined above, if there exists a polynomial $f \in {\mathbb{F}}[X_1, X_2, \ldots, X_{i-1}, X_{i+1}, X_{i+2}, \ldots, X_{N}]$ such that $X_i-f$ divides $P$, then $f$ can also be expressed as a sum of products of polynomials in few variables of reasonably small size. This step crucially uses a statement about the complexity of roots of polynomials computed by low depth circuits from~\cite{DSY09}. Therefore, if $f$ is a polynomial which does not have a small representation as a sum of products of polynomials in {\it few} variables, then $X_i - f$ does not divide $P$. This observation guarantees that the construction of hitting sets from hard polynomials given by~\cite{KI04} works for this class of circuits. \section{Notation and Preliminaries}~\label{sec:prelims} We now introduce some notation and preliminary notions that we use in the rest of the paper. \paragraph{Computational model : } In this work, we consider the model of sums of products of polynomials in few variables. More formally, we consider representations of polynomials $P$ (of degree $n$ in $N = n^{O(1)}$ variables) in the form \begin{equation}~\label{def:model} P = \sum_{i=1}^T \alpha_i\cdot \prod_{j = 1}^d Q_{ij} \end{equation} where each $Q_{ij}$ is an arbitrary polynomial (of arbitrarily high degree) in at most $s$ variables and each $\alpha_i$ is a field constant. We call this the model of $\Sigma\Pi\left(\Sigma\Pi\right)^{[s]}$ circuits. We use the quantity $Td$ as a measure of the size of a $\Sigma\Pi\left(\Sigma\Pi\right)^{[s]}$ circuit. Without loss of generality, we can assume that the degree zero term in each of the $Q_{ij}$ is either zero or one.
If it is a non-zero constant other than $1$, we can extract it out and absorb it in $\alpha_i$. For each of the product gates, the fan-in could be different, but we can assume without loss of generality that all the product fan-ins are equal to $d$. Observe that $d$ could be much larger than the degree of the polynomial $P$. Throughout this paper, we will be working over a field of characteristic zero. \paragraph{Some basic notations : } \begin{enumerate} \item For an integer $i$, we denote the set $\{1, 2, \ldots, i\}$ by $[i]$. \item By $\overline{X}$, we mean the set of variables $\{X_1, X_2, \ldots, X_N\}$. \item For a polynomial $P$ and a positive integer $i$, we represent by $\mathsf{Hom}^i[P]$, the homogeneous component of $P$ of degree equal to $i$. By $\mathsf{Hom}^{\leq i}[P]$ and $\mathsf{Hom}^{\geq i}[P]$, we represent the components of $P$ of degree at most $i$ and at least $i$, respectively. \item The support of a monomial $\alpha$ is the set of variables which appear with a non-zero exponent in $\alpha$. We denote the size of the support of $\alpha$ by $\text{Supp}(\alpha)$. \item Throughout the paper, we say that a function $f(N)$ is subexponential in $N$ if there exists a positive real number $\epsilon$, such that $\epsilon < 1$ and for all $N$ sufficiently large, $f(N) < \exp(N^{\epsilon})$. \item We say that a function $f(N)$ is quasipolynomial in $N$ if there exists a positive absolute constant $c$, such that for all $N$ sufficiently large, $f(N) < \exp(\log^c N)$. \item In this paper, we only consider layered arithmetic circuits and we will be counting levels from top to bottom, starting with the output gates being at level one. \item By a $\Sigma\Pi\Sigma\wedge$ circuit, we refer to a depth four circuit in which all the product gates at the lowest level are replaced by powering ($\wedge$) gates.
Similarly, by a $\Sigma\Pi\Sigma\wedge\Sigma\Pi$ circuit, we mean a depth six circuit all of whose product gates at level four from the top are powering gates. \end{enumerate} \paragraph{Hitting set : } Let ${\cal C}$ be a set of polynomials in $N$ variables over a field ${\mathbb{F}}$. Then, a set ${\cal H} \subseteq {\mathbb{F}}^{N}$ is said to be a {\it hitting set} for the class ${\cal C}$, if for every polynomial $P \in {\cal C}$ such that $P$ is not the identically zero polynomial, there exists a $p \in {\cal H}$ such that $P(p) \neq 0$. \paragraph{Elementary symmetric polynomials : } For variables $\overline{X} = \{X_1, X_2, \ldots, X_N\}$ and any integer $0 \leq l \leq N$, the elementary symmetric polynomial of degree $l$ on variables $\overline{X}$ is defined as $$\mathsf{ESYM}_l(\overline{X}) = \sum_{S \subseteq [N], |S| = l} \prod_{j \in S} X_j$$ \paragraph{Projected shifted partial derivatives :} A key idea behind the recent progress on lower bounds is the notion of {\it shifted partial derivatives} introduced in~\cite{Kayal12}.
In this paper, we use a variant of the measure, called {\it projected shifted partial derivatives}, introduced in~\cite{KLSS14} and subsequently used in~\cite{KS-full}. Although we never explicitly do any calculations with the measure in this paper, we provide a brief introduction to it below since the bounds are based on it. For a polynomial $P$ and a monomial $\gamma$, ${\partial_{\gamma} (P)}$ is the partial derivative of $P$ with respect to $\gamma$. For every polynomial $P$ and a set of monomials ${\cal M}$, $\partial_{\cal M} (P)$ is the set of partial derivatives of $P$ with respect to monomials in ${\cal M}$. The space of $({\cal M}, m)\mhyphen$projected shifted partial derivatives of a polynomial $P$ is defined below. \begin{define}[$({\cal M}, m)\mhyphen$projected shifted partial derivatives]\label{def:shiftedderivative} For an $N$ variate polynomial $P \in {\field{F}}[X_1, X_2, \ldots, X_{N}]$, a set of monomials ${\cal M}$ and a non-negative integer $m \geq 0$, the space of $({\cal M}, m)$-projected shifted partial derivatives of $P$ is defined as \begin{align} \langle \partial_{\cal M} (P)\rangle_{m} \stackrel{def}{=} \field{F}\mhyphen span\{\sigma(\prod_{i\in S}{X_i}\cdot g) : g \in \partial_{\cal M} (P), S\subseteq [N], |S| = m\} \end{align} \end{define} Here, $\sigma(P)$ of a polynomial $P$ is the projection of $P$ on the multilinear monomials in its support. The measure of complexity of a polynomial that we use in this paper is the dimension of the projected shifted partial derivative space of $P$ with respect to some set of monomials ${\cal M}$ and a parameter $m$. Formally, $$\Phi_{{\cal M}, m} (P) = \mathsf{Dim}( \langle \partial_{\cal M} (P)\rangle_{m})$$ From the definitions, it is straightforward to see that the measure is subadditive. \begin{lem}[Sub-additivity]~\label{subadditive} Let $P$ and $Q$ be any two multivariate polynomials in ${\mathbb{F}}[X_1, X_2, \ldots, X_{N}]$. Let ${\cal M}$ be any set of monomials and $m$ be any positive integer.
Then, for all scalars $\alpha$ and $\beta$, $$\Phi_{{\cal M}, m} (\alpha\cdot P + \beta\cdot Q) \leq \Phi_{{\cal M}, m} (P) + \Phi_{{\cal M}, m} (Q)$$ \end{lem} \paragraph{Approximations : } We will refer to the following lemma to approximate expressions during our calculations. \begin{lem}[\cite{GKKS12}]~\label{lem:approx} Let $a(n), f(n), g(n) : {\mathbb{Z}}_{>0}\rightarrow {\mathbb{Z}}_{>0}$ be integer-valued functions such that $(f+g) = o(a)$. Then, $$\log \frac{(a+f)!}{(a-g)!} = (f+g)\log a \pm O\left( \frac{(f+g)^2}{a}\right)$$ \end{lem} In the proofs in this paper, we use Lemma~\ref{lem:approx} only in situations where $(f+g)^2$ is $O(a)$. In this case, the error term is bounded by an absolute constant. So, up to constant factors, $\frac{(a+f)!}{(a-g)!} = a^{(f+g)}$. We use the symbol $\approx$ to indicate equality up to constant factors. \paragraph{Complexity of coefficients and homogeneous components :} We now summarise two simple lemmas which are useful for our proof. The first lemma shows that, given a circuit $C$ for a polynomial $P \in {\mathbb{F}}[X_1, X_2, \ldots, X_{N}, Y]$ of degree at most $d$, for every $0 \leq i \leq d$, the coefficient of $Y^i$ in $P$ (when viewing $P$ as a polynomial in ${\mathbb{F}}[X_1, X_2, \ldots, X_{N}][Y]$) can also be computed by a circuit of size not much larger than the size of $C$. \begin{lem}~\label{lem:extracting coefficients} Let $P \in {\mathbb{F}}[X_1, X_2, \ldots, X_{N}, Y]$ be a polynomial of degree at most $d$ in $Y$ over a field ${\mathbb{F}}$ of characteristic zero, such that $P$ is computable by an arithmetic circuit $C$ of size $|C|$. Let $$P = \sum_{i = 0}^d Q_i(X_1, X_2, \ldots, X_{N})\cdot Y^i$$ for polynomials $Q_i(X_1, X_2, \ldots, X_{N}) \in {\mathbb{F}}[X_1, X_2, \ldots, X_{N}]$. Then, for every $i$ such that $0 \leq i \leq d$, the polynomial $Q_i$ can be computed by an arithmetic circuit $C'$ of size at most $|C|\cdot (d+1)$.
Moreover, if the output gate of $C$ is a $+$ gate, then the depth of $C'$ is equal to the depth of $C$. Else, the depth of $C'$ is at most $1$ more than the depth of $C$. \end{lem} \begin{proof} We can view $P$ as a univariate polynomial of degree at most $d$ in $Y$ with the coefficients coming from ${\mathbb{F}}(\overline{X})$. From the classical Lagrange interpolation, we know that the coefficient of $Y^i$ in $P$ can be written as an ${\mathbb{F}}(\overline{X})$ linear combination of the evaluations of $P$ at $d+1$ distinct values of $Y$ taken from ${\mathbb{F}}(\overline{X})$. In fact, more strongly, we can evaluate $P$ at $d+1$ values of $Y$ all chosen from ${\mathbb{F}}$ itself, in which case the constants in the linear combination are also from ${\mathbb{F}}$. So, $Q_i$ can be computed by a circuit obtained from taking $d+1$ circuits each obtained from $P$ by substituting $Y$ by a scalar in ${\mathbb{F}}$, and taking their linear combination. Let this circuit be $C'$. Clearly the size of $C'$ is at most $(d+1)$ times the size of $C$. If the output gate of $C$ was an addition gate, then the outer addition for the linear combination can be absorbed into it, and the depth remains the same. Else, the depth increases by one. \end{proof} The second lemma stated below essentially says that the circuit complexity of homogeneous components of a polynomial is not much larger than the circuit complexity of the polynomial itself. \begin{lem}~\label{lem:interpolation} Let $P$ be a polynomial of degree at most $d$ in $N$ variables over a field ${\mathbb{F}}$ of characteristic zero, such that $P$ is computable by an arithmetic circuit $C$ of size $|C|$. Then, for every $i$ such that $0 \leq i \leq d$, the homogeneous component of degree $i$ of $P$ can be computed by an arithmetic circuit $C'$ of size at most $|C|\cdot (d+1)$. Moreover, if the output gate of $C$ is a $+$ gate, then the depth of $C'$ is equal to the depth of $C$. 
Else, the depth of $C'$ is at most $1$ more than the depth of $C$. \end{lem} \begin{proof} Let $P'(t)$ be the polynomial obtained from $P$ by replacing every variable $X$ in $P$ by $X\cdot t$ for a new variable $t$. We can view $P'$ as a univariate polynomial of degree at most $d$ in $t$ with the coefficients coming from ${\mathbb{F}}(\overline{X})$. Observe that for every $i$ such that $0 \leq i \leq d$, the homogeneous component of $P$ of degree equal to $i$ is equal to the coefficient of $t^i$ in $P'$. The proof now follows from Lemma~\ref{lem:extracting coefficients}. \end{proof} \section{Proof of the lower bound}~\label{sec:lower bound} In this section, we give the proof of Theorem~\ref{thm:mainthm intro}. We prove the lower bound for a variant of the well-known family of Nisan-Wigderson polynomials defined by Kayal and Saha~\cite{KayalSaha14}. \subsection{Target polynomials for the lower bound} We now define the family of polynomials of degree $n$ in $N$ variables for which we prove the lower bounds. The family is a variant of the Nisan-Wigderson polynomials which were introduced by Kayal et al.\ in~\cite{KSS13} in the context of lower bounds for homogeneous depth four circuits. The particular variant we use in the paper is due to Kayal and Saha~\cite{KayalSaha14}. The tradeoff between the number of variables $N$ and the degree $n$ will be governed by the parameter $\mu$, where $0 \leq \mu < 1$. First, we need some parameters, which we define below. \begin{enumerate} \item $\delta = (1-\mu)/2$ is a positive real number such that $\mu + \delta < 1$. \item $\gamma = \frac{2(\mu + \delta) + 1}{1-\mu-\delta}$. \item $N$ is chosen such that $N/n$ is a prime number between $n^{1 + \gamma}$ and $2n^{1+\gamma}$. Such a prime number always exists from the Bertrand-Chebychev theorem. Without loss of generality, we pick the smallest one.
\item $\rho = (\mu + \delta)\frac{\log N}{\log n}$ \item $D = \frac{\gamma + \rho}{2(1 + \gamma)} \cdot n$ , where $D-1$ is the degree of the underlying univariate polynomials in the definition of $NW_{n,{\mu}}$. \end{enumerate} Let $\psi$ be the prime number equalling $N/n$. We are now ready to restate the definition of $NW_{n,{\mu}}$ from~\cite{KayalSaha14}. \begin{define}[Nisan-Wigderson Polynomials~\cite{KayalSaha14}]~\label{defn:NW} Let $\mu$ be a real number such that $0 \leq \mu < 1$. For a given $\mu$ and $n$, let $N$, $D$, $\psi$ be as defined above. For the set of $N$ variables $\{X_{ij} : i\in [n], j \in [\psi]\} $, we define the degree $n$ homogeneous polynomial $NW_{n,{\mu}}$ as $$NW_{n,{\mu}} = \sum_{\substack{f(z) \in {\mathbb{F}}_{\psi}[z] \\ deg(f) \leq D-1}} \prod_{i \in [n]} X_{if(i)}$$ \end{define} From the definition, we can observe the following properties of $NW_{n,{\mu}}$. \begin{enumerate} \item The number of monomials in $NW_{n,{\mu}}$ is exactly ${\psi}^{D} = n^{O(D)}$. \item Each of the monomials in $NW_{n,{\mu}}$ is multilinear. \item Each monomial corresponds to evaluations of a univariate polynomial of degree at most $D-1$ at all points of ${\mathbb{F}}_{\psi}$. Thus, any two distinct monomials agree in at most $D-1$ variables in their support. \end{enumerate} We will also need the following lemma in our proof. \begin{lem}~\label{lem: NW eval} Let $\mu$ be a non-negative real number less than $1$. Given $q \in {\mathbb{F}}^N$, $\mu$, $n$, we can evaluate the polynomial $NW_{n,{\mu}}$ at $q$ in time $N^{O(n)}$. \end{lem} \begin{proof} Given $n$ and $\mu$, we first find $D$, $\psi$ as given by the choice of parameters. 
Once we have $D$, we iterate through every monomial $\alpha$ of degree $n$ in the $\overline{X}$ variables which is supported on all the rows of the variable matrix and check if it is in the polynomial $NW_{n,{\mu}}$ by trying to find a univariate polynomial $f(z) \in {\mathbb{F}}_{\psi}[z]$ such that the degree of $f$ is at most $D-1$ and $\prod_{i \in [n]} X_{if(i)} = \alpha$. The interpolation takes only $\text{Poly}(n)$ time, and the total number of monomials to try is at most $N^n$. So, we get the lemma. \end{proof} We now proceed with the proof as outlined in Section~\ref{sec:overview lower bounds}. \subsection{Reducing the product fan-in at level two} Let $P$ be a homogeneous polynomial in $N$ variables of degree $n$ which has a $\Sigma\Pi\left(\Sigma\Pi\right)^{[s]}$ circuit of top fan-in $T$ and product fan-in $d$ at the second level. In other words, there exist polynomials $\{Q_{ij} : i \in [T], j \in [d]\}$ in at most $s$ variables each, such that \begin{equation}~\label{def:model2} P = \sum_{i=1}^T \alpha_i\cdot\prod_{j = 1}^d Q_{ij} \end{equation} Recall that without loss of generality, we can assume that the constant term in each of the $Q_{ij}$ is either $0$ or $1$. We have the following lemma. \begin{lem}~\label{lem:homog} Let ${\mathbb{F}}$ be a field of characteristic zero. Let $P$ be a homogeneous polynomial of degree $n$ in $N$ variables over ${\mathbb{F}}$ as defined above. For each $i$, $1\leq i \leq T$, define the set $$S_i = \{j : 1 \leq j \leq d \text{ and } \mathsf{Hom}^{0}[Q_{ij}] = 1\}$$ Then, \begin{equation} P = \sum_{i = 1}^T \alpha_i \cdot \mathsf{Hom}^n\left[\prod_{j \notin S_i} Q_{ij} \times \sum_{l = 0}^{n} \mathsf{ESYM}_l(\{\mathsf{Hom}^{\geq 1}[Q_{ij}] : j \in S_i\})\right] \end{equation} \end{lem} \begin{proof} To prove the lemma, we will try to extract out the homogeneous part of degree $n$ of each product gate $\prod_{j = 1}^d Q_{ij}$. Together with the fact that the polynomial $P$ is homogeneous of degree $n$, we get the lemma. Every $Q_{ij}$ with a non-zero constant term can be written as $\mathsf{Hom}^{\geq 1}[Q_{ij}] + 1$, since the constant term in each $Q_{ij}$ is either $0$ or $1$.
Now, \begin{equation}~\label{eqn:1} \prod_{j = 1}^d Q_{ij} = \prod_{j \notin S_i} Q_{ij} \times \prod_{j \in S_i} (\mathsf{Hom}^{\geq 1}[Q_{ij}] + 1) \end{equation} Decomposing the product $\prod_{j \in S_i} (\mathsf{Hom}^{\geq 1}[Q_{ij}] + 1)$ further, we have \begin{equation}~\label{eqn:2} \prod_{j \in S_i} (\mathsf{Hom}^{\geq 1}[Q_{ij}] + 1) = \sum_{l = 0}^{|S_i|} \sum_{U \subseteq S_i : |U| = l} \prod_{j \in U} \mathsf{Hom}^{\geq 1}[Q_{ij}] \end{equation} Now, observe that the degree of every monomial in $\prod_{j \in U} \mathsf{Hom}^{\geq 1}[Q_{ij}]$ is at least as large as the size of $U$. So, for every subset $U$ of size larger than $n$, $\prod_{j \in U} \mathsf{Hom}^{\geq 1}[Q_{ij}]$ is a polynomial of degree strictly larger than $n$. Also, for any fixed $l$, the expression $ \sum_{U \subseteq S_i : |U| = l} \prod_{j \in U} \mathsf{Hom}^{\geq 1}[Q_{ij}]$ is precisely the elementary symmetric polynomial of degree $l$ in the set of variables $\{\mathsf{Hom}^{\geq 1}[Q_{ij}] : j \in S_i\}$. Therefore, \begin{equation}~\label{eqn:3} \mathsf{Hom}^{\leq n}\left[\prod_{j \in S_i} (\mathsf{Hom}^{\geq 1}[Q_{ij}] + 1)\right] = \mathsf{Hom}^{\leq n}\left[\sum_{l = 0}^{n} \mathsf{ESYM}_l(\{\mathsf{Hom}^{\geq 1}[Q_{ij}] : j \in S_i \})\right] \end{equation} Hence, \begin{equation}~\label{eqn:4} \mathsf{Hom}^{n}\left[\prod_{j = 1}^d Q_{ij}\right] = \mathsf{Hom}^{n}\left[\prod_{j \notin S_i} Q_{ij} \times \sum_{l = 0}^{n} \mathsf{ESYM}_l(\{\mathsf{Hom}^{\geq 1}[Q_{ij}] : j \in S_i \})\right] \end{equation} Summing up for all $i$, we get the lemma. \end{proof} The lemma above has in some sense helped us locate the monomials of degree $n$ in the circuit, which otherwise has a much higher formal degree.
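The decomposition in Lemma~\ref{lem:homog} can be sanity-checked on a tiny instance. The sketch below uses plain-Python polynomial arithmetic over three variables; the polynomials $Q_1, Q_2, Q_3$ and all function names are our own toy choices, not part of the construction. It verifies that $\mathsf{Hom}^n$ of the full product equals $\mathsf{Hom}^n$ of the $\mathsf{ESYM}$-based expression:

```python
from itertools import combinations

# Polynomials in x, y, z represented as dicts {(ex, ey, ez): coefficient}.
ONE = {(0, 0, 0): 1}

def pmul(a, b):
    out = {}
    for ma, ca in a.items():
        for mb, cb in b.items():
            m = tuple(i + j for i, j in zip(ma, mb))
            out[m] = out.get(m, 0) + ca * cb
    return {m: c for m, c in out.items() if c}

def padd(a, b):
    out = dict(a)
    for m, c in b.items():
        out[m] = out.get(m, 0) + c
    return {m: c for m, c in out.items() if c}

def pprod(polys):
    r = dict(ONE)
    for q in polys:
        r = pmul(r, q)
    return r

def hom(p, n):
    """Degree-n homogeneous component Hom^n[p]."""
    return {m: c for m, c in p.items() if sum(m) == n}

def esym(polys, l):
    """ESYM_l with the given polynomials plugged in for the variables."""
    total = {}
    for comb in combinations(polys, l):
        total = padd(total, pprod(comb))
    return total

# Q1 = 1 + x + y*z and Q2 = 1 + z^2 have constant term 1 (they form S_i);
# Q3 = x + y has constant term 0.
Q1 = {(0, 0, 0): 1, (1, 0, 0): 1, (0, 1, 1): 1}
Q2 = {(0, 0, 0): 1, (0, 0, 2): 1}
Q3 = {(1, 0, 0): 1, (0, 1, 0): 1}

n = 3
# Hom^{>=1}[Q] for Q in S_i: drop the constant term.
S = [{m: c for m, c in q.items() if m != (0, 0, 0)} for q in (Q1, Q2)]

lhs = hom(pprod([Q1, Q2, Q3]), n)          # Hom^n of the product gate
acc = {}
for l in range(n + 1):
    acc = padd(acc, esym(S, l))            # sum of ESYM_0, ..., ESYM_n
rhs = hom(pmul(Q3, acc), n)                # Hom^n of the ESYM expression
assert lhs == rhs
```

The check passes because $(1+A)(1+B) = \sum_l \mathsf{ESYM}_l(\{A, B\})$ for $A = \mathsf{Hom}^{\geq 1}[Q_1]$ and $B = \mathsf{Hom}^{\geq 1}[Q_2]$, exactly as in the proof above.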
We now combine the above lemma with the well-known fact that the elementary symmetric polynomial of degree $l$ in $k$ variables can be computed by homogeneous $\Sigma\Pi\Sigma\wedge$ circuits of size at most $k2^{O(\sqrt{l})}$ to obtain a $\Sigma\Pi\Sigma\wedge\Sigma\Pi$ circuit $C'$ such that the fan-in of the product gates at level two is at most $n$. We use the following theorem (Theorem 5.2) by Shpilka and Wigderson~\cite{SW01}. \begin{thm}[Shpilka-Wigderson~\cite{SW01}]~\label{thm : SW} For every set of variables $\{Y_1, Y_2, \ldots, Y_m\}$ and a positive integer $l$, $\mathsf{ESYM}_l(\{Y_1, Y_2, \ldots, Y_m\})$ can be computed by a homogeneous $\Sigma\Pi\Sigma\wedge$ circuit of size $m2^{O(\sqrt{l})}$. \end{thm} We now prove the following lemma. \begin{lem}~\label{lem:depth6} Let ${\mathbb{F}}$ be a field of characteristic zero. Let $P$ be a polynomial of degree $n$ in $N$ variables over ${\mathbb{F}}$ which is computable by a $\Sigma\Pi\left(\Sigma\Pi\right)^{[s]}$ circuit $C$ with top fan-in $T$ and product fan-in $d$ at level two. So, $P$ can be represented as $$P = \sum_{i=1}^T \alpha_i\cdot\prod_{j = 1}^d Q_{ij}$$ Then, $P$ can be represented as the homogeneous component of degree $n$ of a polynomial computed by a $\Sigma\Pi\Sigma\wedge\Sigma\Pi$ circuit $C''$ with the following properties : \begin{enumerate} \item The inputs to the $\wedge$ gates are the polynomials $\{\mathsf{Hom}^{\geq 1}[Q_{ij}] : 1 \leq i \leq T, 1 \leq j \leq d\}$ \item The fan-in of the $\times$ gates at the second level from the top is at most $n$ \item The top fan-in of $C''$ is at most $Tdn2^{O(\sqrt{n})}$.
\end{enumerate} \end{lem} \begin{proof} From Lemma~\ref{lem:homog}, we know that for the set $S_i$ defined as $$S_i = \{j : 1 \leq j \leq d \text{ and } \mathsf{Hom}^{0}[Q_{ij}] = 1\}$$ the polynomial $P$ can be written as $$P = \sum_{i = 1}^T \alpha_i \cdot \mathsf{Hom}^n\left[\prod_{j \notin S_i} Q_{ij} \times \sum_{l = 0}^n \mathsf{ESYM}_l(\{\mathsf{Hom}^{\geq 1}[Q_{ij}] : j \in S_i\})\right]$$ which is the same as $$P = \mathsf{Hom}^n\left[ \sum_{i = 1}^T \alpha_i \cdot \prod_{j \notin S_i} Q_{ij} \times \sum_{l = 0}^n \mathsf{ESYM}_l(\{\mathsf{Hom}^{\geq 1}[Q_{ij}] : j \in S_i\})\right] $$ Observe that the polynomial $\prod_{j \notin S_i} Q_{ij}$ has degree at least $d-|S_i|$. We remark that if $d-|S_i|$ is larger than $n$, then such product gates do not contribute anything to the degree $n$ component of the polynomial and hence can be discarded without loss of generality; hence we may assume $n-(d-|S_i|) \geq 0$. So, we can restrict the inner sum from $l = 0$ to $l = n-(d-|S_i|)$, and still preserve the degree $n$ part of the polynomial, which is what we are interested in. From Theorem~\ref{thm : SW}, we know that for every $0 \leq l \leq n$, we can compute the polynomial $\mathsf{ESYM}_l(\{\mathsf{Hom}^{\geq 1}[Q_{ij}] : j \in S_i\})$ by a $\Sigma\Pi\Sigma\wedge$ circuit of top fan-in at most $d \times 2^{O(\sqrt{l})} $ which takes as input the polynomials $\{\mathsf{Hom}^{\geq 1}[Q_{ij}] : j \in S_i\}$. From the homogeneity of the circuits given by Theorem~\ref{thm : SW}, it follows that the product gates at level two of these circuits have fan-in at most the degree of the polynomial they compute, which is at most $n-(d-|S_i|)$.
So, it follows that the polynomial $$\tilde{P} = \left( \sum_{i = 1}^T \alpha_i \cdot \prod_{j \notin S_i} Q_{ij} \times \sum_{l = 0}^{n-(d-|S_i|)} \mathsf{ESYM}_l(\{\mathsf{Hom}^{\geq 1}[Q_{ij}] : j \in S_i\})\right)$$ can be computed by a $\Sigma\Pi\Sigma\wedge\Sigma\Pi$ circuit, with top fan-in at most $Tdn\cdot 2^{O(\sqrt{n})}$, which satisfies the conditions in the lemma. \end{proof} Finally, given the circuit $C''$ constructed above, we can construct a circuit which computes the polynomial $P$ as given by Lemma~\ref{lem:interpolation}. For this, observe that the monomials of degree strictly larger than $n$ in any of the $Q_{ij}$ do not contribute to the degree $n$ part of $\tilde{P}$. So, we can drop them, while still preserving the degree $n$ part of $\tilde{P}$. Therefore, the degree of $\tilde{P}$ can be upper bounded by $n^2d$. We can recover the degree $n$ part of $\tilde{P}$ by interpolation, which blows up the top fan-in by a factor of at most $n^2d$. In this process, the fan-in of the product gates at level two remains unchanged. Strictly speaking, the inputs to the powering gates $\wedge$ at level four may no longer be the polynomials $\mathsf{Hom}^{\geq 1}[Q_{ij}]$, since in the process of interpolation, we replaced every variable $X_i$ by $X_i\cdot t$ in $\tilde{P}$ and looked at the resulting polynomial $\tilde{P'}$ as a univariate polynomial in $t$ over the function field ${\mathbb{F}}(\overline{X})$.
We then evaluated $\tilde{P'}$ at sufficiently many values of $t \in {\mathbb{F}}$ and then took their ${\mathbb{F}}$ linear combination. So, each of the polynomials $\mathsf{Hom}^{\geq 1}[Q_{ij}]$ gives rise to many other polynomials, one each for different values of $t$. We will call them the {\it siblings} of $\mathsf{Hom}^{\geq 1}[Q_{ij}]$. The key observation for our proof is that the set of variables in the siblings of $\mathsf{Hom}^{\geq 1}[Q_{ij}]$ is the same as the set of variables in $\mathsf{Hom}^{\geq 1}[Q_{ij}]$. From the lemma and the discussion above, we have the following corollary. \begin{cor}~\label{lem:depth6-cor} Let ${\mathbb{F}}$ be a field of characteristic zero. Let $P$ be a polynomial of degree $n$ in $N$ variables over ${\mathbb{F}}$ which is computable by a $\Sigma\Pi\left(\Sigma\Pi\right)^{[s]}$ circuit $C$ with top fan-in $T$ and product fan-in $d$ at level two. So, $P$ can be represented as $$P = \sum_{i=1}^T \alpha_i\cdot\prod_{j = 1}^d Q_{ij}$$ Then, $P$ can be computed by a $\Sigma\Pi\Sigma\wedge\Sigma\Pi$ circuit $C''$ with the following properties : \begin{enumerate} \item The inputs to the $\wedge$ gates are the siblings of the polynomials $\{\mathsf{Hom}^{\geq 1}[Q_{ij}] : 1 \leq i \leq T, 1 \leq j \leq d\}$ \item The fan-in of the $\times$ gates at the second level from the top is at most $n$ \item The top fan-in of $C''$ is at most $Td^2n^32^{O(\sqrt{n})}$. \end{enumerate} \end{cor} \subsection{Random Restrictions}~\label{sec: random res} From the definition, it follows that the total number of variables in $NW_{n,\mu}$ is $N$. Let the set of all these variables be $\cal V$. We now define our random restriction procedure by defining a distribution $\cal D$ over subsets $V \subset \cal V$. The random restriction procedure will sample $V \gets \cal D$ and then keep only those variables ``alive'' that come from $V$ and set the rest to zero.
We will denote the restriction of the polynomial obtained by such a restriction as $NW_{n, \mu}|_V$. Observe that a random restriction also results in a distribution over all circuits computing the polynomial $NW_{n, \mu}$. We denote by $C|_V$ the restriction of a circuit $C$ obtained by setting every input gate in $C$ which is labelled by a variable outside $V$ to $0$. \vspace{2mm} \noindent {\bf The distribution ${\cal D}_p$: } Each variable in $\cal V$ is independently kept alive with a probability $p$. We will choose the value of $p$ based on the parameter $\mu$. \subsection{Analysing the circuit under random restrictions} Let $C$ be a $\Sigma\Pi\left(\Sigma\Pi\right)^{[N^{{\mu}}]}$ circuit computing the polynomial $NW_{n,{\mu}}$. Let the top fan-in of $C$ be $T$ and the product fan-in at the second level be $d$. So, we have the following expression. $$NW_{n,{\mu}} = \sum_{i=1}^T \alpha_i\cdot\prod_{j = 1}^d Q_{ij}$$ where each $Q_{ij}$ depends on at most $N^{\mu}$ variables. Recall that from the choice of parameters $\delta = (1-\mu)/2$. Let $s$ be a parameter, which we later set such that $s = \Theta(\sqrt{n})$. If $T\cdot d \geq N^{\frac{\delta}{4} s}$, then we already have the desired lower bound of $n^{\Omega(\sqrt{n})}$ on the size of $C$ and we are done. Therefore, for the rest of this discussion, we will assume that $T\cdot d \leq N^{\frac{\delta}{4}s}$. We now apply the transformation to $C$ given by Corollary~\ref{lem:depth6-cor} to obtain a $\Sigma\Pi\Sigma\wedge\Sigma\Pi$ circuit $C''$, which has the following properties: \begin{enumerate} \item The inputs to the $\wedge$ gates are the siblings of polynomials $\{\mathsf{Hom}^{\geq 1}[Q_{ij}] : 1 \leq i \leq T, 1 \leq j \leq d\}$ \item The fan-in of the $\times$ gates at the second level from the top is at most $n$ \item The top fan-in of $C''$ is at most $Td^2n^32^{O(\sqrt{n})}$. \end{enumerate} We now analyse the effect of the random restrictions on the circuit $C''$. 
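Before doing so, the behaviour of ${\cal D}_p$ on a single monomial can be illustrated with a small simulation. The parameters and names below are toy choices of ours, unrelated to the actual setting of $p$ and $s$; the point is simply that a fixed monomial of support $s$ survives a restriction exactly when all $s$ of its variables stay alive, which happens with probability $p^s$:

```python
import random

def sample_restriction(variables, p, rng):
    """D_p: keep each variable alive independently with probability p."""
    return {v for v in variables if rng.random() < p}

rng = random.Random(0)
variables = range(100)
p, s = 0.1, 3
monomial_support = set(range(s))   # a fixed monomial of support s

trials = 100000
hits = sum(monomial_support <= sample_restriction(variables, p, rng)
           for _ in range(trials))
empirical = hits / trials          # concentrates around p**s = 0.001
assert abs(empirical - p**s) < 5e-4
```

The analysis below applies this survival probability $p^s$ simultaneously to a whole collection of small-support monomials via a bound on the expected number of survivors.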
We will choose a parameter $p = N^{-\mu-\delta}$ and keep every variable alive with probability $p$. The circuit $C''$ can be represented as $$C'' = \sum_{u}\prod_{v} D_{uv}$$ Here, each $D_{uv}$ is a sum of powers of the siblings of $\mathsf{Hom}^{\geq 1}[Q_{ij}]$. Our goal is to argue that under random restrictions, all the monomials in each of the $D_{uv}$ are of small support (support at most $s$). For any polynomial $P$ in $N^{\mu}$ variables and any integers $t, t_0$ such that $t_0 < t$, observe that $P^t$ can be written as $$P^t = P_0 + \sum_{\alpha}\alpha\cdot P_{\alpha}$$ where $P_0$ is the part of $P^t$ consisting of monomials of support strictly less than $t_0$. The inner sum is over all multilinear monomials $\alpha$ of support equal to $t_0$. Such a decomposition may not be unique, but for this application, it would suffice to work with any one such decomposition. The number of such monomials $\alpha$ is at most ${N^{\mu} \choose t_{0}}$. The probability that one such monomial survives the random restriction procedure is equal to $p^{t_0}$. So, the expected number of such multilinear monomials $\alpha$ surviving the random restriction procedure is at most ${N^{\mu} \choose t_{0}}\cdot p^{t_0}$. The crucial observation is that if no such monomials survive, then only the monomials in $P_0$ survive, all of which have support at most $t_0-1$. Now, observe that each of the $D_{uv}$ is a sum of powers of the siblings of polynomials in the set $\{\mathsf{Hom}^{\geq 1}[Q_{ij}] : 1 \leq i \leq T, 1 \leq j \leq d\}$. Define ${\cal B}$ to be the set of all multilinear monomials of support equal to $s$, supported entirely on variables in any of the polynomials ${Q_{ij}}$ for some $1 \leq i \leq T, 1 \leq j \leq d$. From the discussion in the paragraph above, the following observation follows. \begin{obs}~\label{obs: random rest 1} Let the polynomials $D_{uv}$, $Q_{ij}$ and the set ${\cal B}$ be as defined above.
Then, \begin{itemize} \item $|{\cal B}| \leq T\cdot d\cdot {N^{\mu} \choose s}$ \item If none of the monomials in ${\cal B}$ survives under a random restriction, then each of the polynomials $D_{uv}'$ obtained as a restriction of $D_{uv}$ has all monomials of support at most $s$. \end{itemize} \end{obs} \begin{proof} The bound on the size of ${\cal B}$ follows immediately from the fact that each of the $Q_{ij}$ depends on at most $N^{\mu}$ variables. For the second item, observe that each of the $D_{uv}$ is a sum of powers of siblings of the $\mathsf{Hom}^{\geq 1}[Q_{ij}]$ and all the siblings are supported on the same set of variables. If all the monomials in the set ${\cal B}$ are set to zero, then the surviving monomials in any power of any of the siblings of $\mathsf{Hom}^{\geq 1}[Q_{ij}]$ have support at most $s$. \end{proof} We now estimate the probability that at least one of the monomials in the set ${\cal B}$ survives the random restriction procedure. We have the following lemma. \begin{lem}~\label{lem: rand res 2} Let $\delta$ be a positive real number such that $\delta = (1-\mu)/2$ and let $p = N^{-\mu-\delta}$. Then $$\Pr_{V\leftarrow {\cal D}_p}\left[|{\cal B}_{V}| \geq 1\right] \leq N^{-3/4 \cdot\delta \cdot s}$$ \end{lem} \begin{proof} We know that $$|{\cal B}| \leq T \cdot d \cdot {N^{\mu} \choose s}$$ and the probability that any fixed monomial in ${\cal B}$ survives the random restriction procedure is at most $p^{s}$. So $${\mathbb E}_{V\leftarrow {\cal D}_p}[|{\cal B}_{V}|] \leq T \cdot d \cdot {N^{\mu} \choose s} \cdot p^s $$ Now, observing that the value of $T\cdot d$ is at most $N^{\frac{\delta}{4}s}$, that ${N^{\mu} \choose s} \leq N^{\mu s}$ and that $p = N^{-\mu-\delta}$, the expected value is at most $$ N^{\frac{\delta}{4}s} {N^{\mu} \choose s} \cdot N^{-(\mu+\delta)s} \leq N^{\frac{\delta}{4}s} \cdot N^{\mu s} \cdot N^{-(\mu+\delta)s} = N^{-3/4 \cdot\delta \cdot s}$$ The lemma then follows by Markov's inequality. \end{proof} As a corollary of Lemma~\ref{lem: rand res 2} and Observation~\ref{obs: random rest 1}, we get the following lemma. 
\begin{lem}~\label{lem: rand res main} Let $\delta$ be a positive real number such that $ \delta = (1-\mu)/2$ and let $p = N^{-\mu-\delta}$. Then with probability at least $1- N^{-3/4 \cdot\delta \cdot s}$ over random restrictions $V \leftarrow {\cal D}_p$, the polynomial computed by the circuit $C''|_{V}$ can be written as $\sum_{u = 1}^{T'} \prod_{v = 1}^n D_{uv}'$, where each of the monomials in each of the polynomials $D_{uv}'$ has support at most $s$. \end{lem} \subsection{Upper bound on the complexity of C} In order to upper bound the dimension of the projected shifted partial derivatives (under random restrictions) of the $\Sigma\Pi\left(\Sigma\Pi\right)^{[N^{{\mu}}]}$ circuit $C$, it suffices, by Corollary~\ref{lem:depth6-cor}, to upper bound the dimension of the space of projected shifted partial derivatives of the $\Sigma\Pi\Sigma\wedge\Sigma\Pi$ circuit $C''$ given by the corollary. In some sense, $C''$ is more structured than $C$, and this lets us prove a better upper bound. Recall that we are under the assumption that for the circuit $C$, the product of the top fan-in and the product fan-in at level two is at most $N^{\frac{\delta}{4} \cdot s}$, else we are already done. From Lemma~\ref{lem: rand res main}, we know that with high probability, under random restrictions, we are left with a circuit of the form $\sum_{u = 1}^{T'} \prod_{v = 1}^n D_{uv}'$ where each of the monomials in each of the polynomials $D_{uv}'$ has support at most $s$. The upper bound on the complexity of the projected shifted partial derivatives of $\sum_{u = 1}^{T'} \prod_{v = 1}^n D_{uv}'$ then follows from the upper bound for homogeneous depth four circuits of bounded bottom support proved in~\cite{KLSS14, KS-full}. We restate the bound from~\cite{KS-full}. 
\begin{lem}~\label{lem:lowsupbound1} Let $C$ be a depth 4 circuit with the fan-in of the product gates at level two bounded by $n$ and the bottom support bounded by $s$, computing a polynomial in $N$ variables. Let ${\cal M}$ be a set of monomials of degree equal to $r$ and let $m$ be a positive integer. Then, $$\Phi_{{\cal M}, m}(C) \leq \text{Top fan-in}(C){n + r \choose r}{N \choose m+ rs}$$ for any choice of $m, r, s, N$ satisfying $m+rs \leq N/2$. \end{lem} The upper bound for $\Sigma\Pi\left(\Sigma\Pi\right)^{[N^{{\mu}}]}$ circuits follows easily from the above lemma after random restrictions, and we formalize this in the lemma below. \begin{lem}~\label{lem:complexity ub} Let $\mu$ be a real number such that $0 \leq \mu < 1$. Let $\delta = (1-\mu)/2$, let $p = N^{-\mu-\delta}$ and let ${\mathbb{F}}$ be a field of characteristic zero. Let $P$ be a polynomial of degree $n$ in $N$ variables over ${\mathbb{F}}$ which is computed by a $\Sigma\Pi\left(\Sigma\Pi\right)^{[N^{{\mu}}]}$ circuit $C$ of top fan-in $T$ and product fan-in at level two at most $d$, i.e., $P$ can be represented as $$P = \sum_{i=1}^T \alpha_i\cdot\prod_{j = 1}^d Q_{ij}$$ where the $\alpha_i$ are field constants. Let $m$ and $r$ be positive integers satisfying $m+rs \leq N/2$ and let ${\cal M}$ be any subset of multilinear monomials of degree equal to $r$. If $Td \leq N^{\frac{s\cdot \delta}{4}}$, then with probability at least $1- N^{-3/4 \cdot\delta \cdot s}$ over random restrictions $V \leftarrow {\cal D}_p$, $$\Phi_{{\cal M}, m} (C|_V) \leq Td^2n^3 \cdot rs \cdot 2^{O(\sqrt{n})}\cdot {N \choose m+rs} \cdot {n + r \choose r} $$ \end{lem} \begin{proof} The lemma follows immediately from Corollary~\ref{lem:depth6-cor}, Lemma~\ref{lem: rand res main} and Lemma~\ref{lem:lowsupbound1}. 
\end{proof} \subsection{Nisan-Wigderson polynomial under random restrictions} To complete the proof of Theorem~\ref{thm:mainthm intro}, we need a lower bound on the dimension of the space of projected shifted partial derivatives of the polynomial $NW_{n,{\mu}}$, under random restrictions. To this end, we will use the lower bound proved by Kayal and Saha~\cite{KayalSaha14}. We first enumerate our choice of parameters. Recall that $\delta = (1-\mu)/2$ is a positive real number. \begin{enumerate} \item $\gamma = \frac{2(\mu + \delta) + 1}{1-\mu-\delta}$ \item $N$ is such that $N/n$ is set equal to the smallest prime number between $n^{1 + \gamma}$ and $2n^{1+\gamma}$. \item $\rho = (\mu + \delta)\frac{\log N}{\log n}$ \item $D = \frac{\gamma + \rho}{2(1 + \gamma)} \cdot n$, where $D-1$ is the degree of the underlying univariate polynomials in the definition of $NW_{n,{\mu}}$. \item $r$ and $s$, which are respectively the order of the derivatives and the bound on the bottom support of the circuit after random restrictions, are chosen such that $r = \epsilon_1\cdot \sqrt{n}$ and $s = \epsilon_2\cdot \sqrt{n}$. Here, $\epsilon_1$ and $\epsilon_2$ are small enough positive real numbers satisfying $\epsilon_1\cdot\epsilon_2 = 0.001$. \item $m = \frac{N}{2}(1-r\frac{\ln n}{n})$ is the degree of the shifts. \item $p = N^{-(\mu + \delta)}$ is the probability with which each variable is independently kept alive. \item ${\cal M}$ is the set of all multilinear monomials of degree $r$. We take partial derivatives with respect to monomials in this set. \end{enumerate} We are now ready to state the lower bound on the dimension of projected shifted partial derivatives as in~\cite{KayalSaha14}. \begin{lem}[Kayal-Saha~\cite{KayalSaha14}]~\label{lem: KS main} Let $NW_{n,{\mu}}$ be the Nisan-Wigderson polynomial as defined in Definition~\ref{defn:NW}. Let ${\mathbb{F}}$ be any field of characteristic zero. 
Then, for the choice of parameters defined above $$\Phi_{{\cal M}, m}(NW_{n,{\mu}}|_V) \geq \frac{1}{n^{O(1)}}\text{min}\left(\frac{p^r}{4^r} \cdot {N \choose r} \cdot {N \choose m}, {N \choose m + n - r}\right) $$ with probability at least $1 - \frac{1}{n^{\Theta(1)}}$ over random restrictions $V \leftarrow {\cal D}_p$. \end{lem} \subsection{Wrapping up the proof of Theorem~\ref{thm:mainthm intro}} From Lemma~\ref{lem: KS main} and Lemma~\ref{lem: rand res main}, we know that with a non-zero probability over the random restrictions $V$ from the distribution ${\cal D}_p$, the following two conditions hold. \begin{enumerate} \item $$\Phi_{{\cal M}, m}(NW_{n,{\mu}}|_V) \geq \frac{1}{n^{O(1)}}\text{min}\left(\frac{p^r}{4^r} \cdot {N \choose r} \cdot {N \choose m}, {N \choose m + n - r}\right) $$ \item $$\Phi_{{\cal M}, m} (C|_V) \leq Td^2n^3 \cdot rs \cdot 2^{O(\sqrt{n})}\cdot {N \choose m+rs} \cdot {n + r \choose r}$$ \end{enumerate} If $C$ computed the polynomial $NW_{n,{\mu}}$, then $$Td^2n^3 \cdot rs \geq \frac{{\frac{1}{n^{O(1)}}\text{min}\left(\frac{p^r}{4^r} \cdot {N \choose r} \cdot {N \choose m}, {N \choose m + n - r}\right)}}{{2^{O(\sqrt{n})}\cdot {N \choose m+rs} \cdot {n + r \choose r}}} $$ From the calculations in Appendix~\ref{sec:calc}, it follows that for our choice of parameters, this ratio is at least $\exp(\sqrt{n}\log n)$. So, we have the following theorem. \begin{thm}~\label{thm:mainthm} Let $\mu$ be an absolute constant such that $0 \leq \mu < 1$ and let ${\mathbb{F}}$ be a field of characteristic zero. For $1 \leq i \leq T$ and $1 \leq j \leq d$, if there exist polynomials $Q_{ij}$, each depending on at most $N^{\mu}$ variables, such that $$NW_{n,{\mu}} = \sum_{i = 1}^T\prod_{j = 1}^{d} Q_{ij},$$ then $$T\cdot d \geq n^{\Omega_{\mu}(\sqrt{n})}$$ \end{thm} As a remark, we mention here that the lower bound above also holds for any translation $NW_{n,{\mu}}(\overline{X} + \overline{a})$ of the polynomial $NW_{n,{\mu}}(\overline{X})$. 
This is because the highest degree term of $NW_{n,{\mu}}(\overline{X} + \overline{a})$ equals the polynomial $NW_{n,{\mu}}(\overline{X})$, and from Lemma~\ref{lem:interpolation}, the homogeneous components of a polynomial computable by small sized $\Sigma\Pi\left(\Sigma\Pi\right)^{[s]}$ circuits also have small sized $\Sigma\Pi\left(\Sigma\Pi\right)^{[s]}$ circuits. We leave the details to the interested reader. \section{Application to polynomial identity testing}~\label{sec:pit} In this section, we prove Theorem~\ref{thm:mainthm2 intro}. We are interested in identity testing for $\Sigma\Pi\left(\Sigma\Pi\right)^{[s]}$ circuits, i.e., for polynomials in $N$ variables $\{X_1, X_2, \ldots, X_N\}$ which can be expressed in the form $$P = \sum_{i = 1}^T \prod_{j = 1}^d Q_{ij}$$ such that \begin{enumerate} \item The individual degree in $P$ of every variable is at most $k$. \item Each $Q_{ij}$ depends on at most $s$ variables. \end{enumerate} For this application, we will think of $k$ and $T$ as being polynomial in $\log N$, and of $s$ as being $N^{1/2-\epsilon}$ for a positive constant $\epsilon$. Observe that the bound on the individual degree lets us upper bound the total degree of the polynomial by $Nk$. \iffalse \subsection{Overview of the proof}~\label{sec:pit overview} At a high level, our goal is to reduce the number of variables, while preserving the zeroness/nonzeroness of the polynomial. We will show that we can do this while not blowing up the degree of the polynomial by too much. Once we have reduced the number of variables to $N'$, we will apply a brute force hitting set of size $\text{(Degree + 1)}^{\text{(Number of variables)}}$ as given by Lemma~\ref{lem: comb nulls}. In order to reduce the number of variables, we use the well known idea of trading hardness for randomness for arithmetic circuits given by Kabanets and Impagliazzo~\cite{KI04} and a version of it given for low depth circuits by Dvir, Shpilka and Yehudayoff~\cite{DSY09}. 
\fi We describe the construction of the hitting set in Section~\ref{sec:hitting set} and prove its correctness in Section~\ref{sec:hitting set correct}. We go over some preliminaries that we need in our proof in the next section. \subsection{Some preliminaries} In the following lemma, we prove some properties of the model of $\Sigma\Pi\left(\Sigma\Pi\right)^{[s]}$ circuits, which will be useful in the proof of the identity testing result. \begin{lem}~\label{lem: model props} Let ${\mathbb{F}}$ be a field of characteristic zero. Let $P$ be a non-zero polynomial in $N$ variables and of individual degree at most $k$ over ${\mathbb{F}}$, which is computed by a $\Sigma\Pi\left(\Sigma\Pi\right)^{[s]}$ circuit $C$ of top fan-in $T$ and product fan-in $d$ at level two, i.e., $P$ can be expressed as $$P = \sum_{i = 1}^T \prod_{j = 1}^d Q_{ij}$$ such that for each $i \in [T]$ and $j \in [d]$, $Q_{ij}$ depends on at most $s$ variables. Then, the following are true. \begin{enumerate} \item For every variable $y$ and integer $1 \leq j \leq k$, $\frac{{\partial}^j P}{{\partial y^j}}$ can be computed by a circuit of the form $$\frac{\partial^j P}{\partial y^j} = \sum_{i = 1}^{T'} \prod_{j = 1}^d Q_{ij}'$$ where $T' \leq T\cdot (k+1)^2$ and each of the polynomials $Q_{ij}'$ depends on at most $s$ variables. \item For any $a \in {\mathbb{F}}^N$, $P(\overline{X} + \overline{a})$ can be computed by a circuit of the form $$P(\overline{X} + \overline{a}) = \sum_{i = 1}^{T} \prod_{j = 1}^d Q_{ij}''$$ where each of the polynomials $Q_{ij}''$ depends on at most $s$ variables. \end{enumerate} \end{lem} \begin{proof} The proof of the second item is immediate from the definitions. The only thing that changes due to a translation is the number of monomials in the $Q_{ij}$. The number of variables that each $Q_{ij}$ depends on remains unchanged, and so do the fan-ins of the top sum gate and of the product gates at level two. We now prove the first item. 
Let the set of variables in $P$ be $\overline{X} = \overline{X'} \cup \{y\}$ where $X'$ is of size $N-1$. Since the individual degree of $P$ is at most $k$, we can write $P = \sum_{i = 0}^k C_i(\overline{X'})\cdot y^i$. Here, the $C_i(\overline{X'})$ are polynomials only in the $X'$ variables, and $C_i$ is the coefficient of $y^i$ when viewing $P$ as an element of ${\mathbb{F}}[\overline{X'}][y]$. Now, for every $0 \leq i \leq k$, we can compute each of the $C_i$ by a $\Sigma\Pi\left(\Sigma\Pi\right)^{[s]}$ circuit with top fan-in at most $T\cdot(k+1)$ by interpolation, as given by Lemma~\ref{lem:extracting coefficients}. All the partial derivatives of $P$ with respect to $y$ are linear combinations of terms of the form $C_{j_1}\cdot y^{j_2}$, and so the result follows. \end{proof} We will also need the following simple fact about polynomials. \begin{lem}~\label{lem:non zero derivative} Let ${\mathbb{F}}$ be a field of characteristic zero. Let $R \in {\mathbb{F}}[y]$ be a non-zero polynomial of degree at most $t$ over the field ${\mathbb{F}}$. Then, for every $a \in {\mathbb{F}}$ such that $R(a) = 0$, there exists a $j$ such that $0 \leq j \leq t-1$ and $\frac{\partial^j R}{\partial y^j}(a) = 0$ and $\frac{\partial^{j+1} R}{\partial y^{j+1}}(a) \neq 0$. \end{lem} \begin{proof} Let $t' \leq t$ be the degree of $R$ in $y$, so that the coefficient $C_{t'}$ of $y^{t'}$ in $R$ is non-zero. Then $\frac{\partial^{t'} R}{\partial y^{t'}} = t'!\cdot C_{t'}$ is a non-zero constant, since ${\mathbb{F}}$ has characteristic zero. As $R(a) = 0$ while $\frac{\partial^{t'} R}{\partial y^{t'}}(a) \neq 0$, there is a largest index $j$ with $0 \leq j \leq t'-1$ such that $\frac{\partial^{j} R}{\partial y^{j}}(a) = 0$. By the maximality of $j$, $\frac{\partial^{j+1} R}{\partial y^{j+1}}(a) \neq 0$, and the lemma follows. \end{proof} We will crucially use the following result of Dvir, Shpilka and Yehudayoff~\cite{DSY09} in the analysis of the hitting set constructed in this paper. \begin{lem}[Dvir, Shpilka, Yehudayoff~\cite{DSY09}]~\label{lem:DSY main} For a field ${\mathbb{F}}$, let $P \in {\mathbb{F}}[X_1, X_2, \ldots, X_N, Y ]$ be a non-zero polynomial of degree at most $k$ in $Y$. 
Let $f \in {\mathbb{F}}[X_1, X_2, \ldots, X_N]$ be a polynomial such that $P(X_1, X_2, \ldots, X_N, f) = 0$ and $\frac{\partial P}{\partial Y} (0, 0, \ldots, 0, f(0, 0, \ldots, 0))\neq 0$. Let $$P = \sum_{i = 0}^k C_i(X_1, X_2, \ldots, X_N)\cdot Y^i$$ Then, for every $t \geq 0$, there exists a polynomial $R_t \in {\mathbb{F}}[Z_1, Z_2, \ldots, Z_{k+1} ]$ of degree at most $t$ such that $$\mathsf{Hom}^{\leq t}[f(X_1, X_2, \ldots, X_N)] = \mathsf{Hom}^{\leq t}[R_t(C_0, C_1, \ldots, C_k)] $$ \end{lem} A key technical idea in the proof will be the notion of Nisan-Wigderson designs introduced in~\cite{NW94}. We will use the following lemma. \begin{lem}[Nisan-Wigderson~\cite{NW94}]~\label{lem: designs} For every $a, b \in {\mathbb{N}}$, $b < 2^a$, there exists a family of sets $S_1, S_2, \ldots, S_b \subseteq \{1, 2, \ldots, l\}$ such that \begin{enumerate} \item $l \in O(a^2/\log b)$ \item for all $i$, $|S_i| = a$ \item for all $i \neq j$, $|S_i \cap S_j| \leq \log b$ \end{enumerate} Moreover, such a set family can be constructed in time polynomial in $b$ and $2^l$. \end{lem} We will also crucially use the following lemma of Alon~\cite{AlonCN} in our proof. \begin{lem}[Combinatorial Nullstellensatz~\cite{AlonCN}]~\label{lem: comb nulls} Let $P$ be a non-zero polynomial of individual degree at most $d$ in $N$ variables over a large enough field ${\mathbb{F}}$. Let $S$ be an arbitrary subset of ${\mathbb{F}}$ of size $d+1$. Then, there exists a point $p$ in $S^{N}$ such that $P(p) \neq 0$. \end{lem} \subsection{Blackbox PIT for $\Sigma\Pi\left(\Sigma\Pi\right)^{[s]}$ circuits}~\label{sec:hitting set} In this section, we prove the following theorem. \begin{thm}~\label{thm:mainthm2} Let $c$ and $\mu$ be arbitrary constants such that $c> 0$ and $0 \leq \mu < 1/2$, and let ${\mathbb{F}}$ be a field of characteristic zero. 
Let ${\cal C}$ be the set of polynomials $P$ in $N$ variables and individual degree at most $k$ over ${\mathbb{F}}$, with the property that $P$ can be expressed as $$P = \sum_{i = 1}^T \prod_{j = 1}^d Q_{ij}$$ such that \begin{enumerate} \item $T < \log^c N$ \item $k < \log ^c N$ \item $d < N^c$ \item each $Q_{ij}$ depends on at most $N^{\mu}$ variables \end{enumerate} Then, there exists a constant $\epsilon < 1$, dependent only on $c$ and $\mu$, such that there is a hitting set of size $\exp(N^{\epsilon})$ for ${\cal C}$ which can be constructed in time $\exp(N^{\epsilon})$. \end{thm} From our proof, it also follows that if each of the polynomials $Q_{ij}$ depends on only $\log^{O(1)} N$ variables, then both the size of the hitting set and the time to construct it are upper bounded by a quasipolynomial function of $N$. In the rest of the section, we prove Theorem~\ref{thm:mainthm2}. We start by describing the construction of the hitting set $\cal H$. \subsubsection{Construction of hitting sets for $\Sigma\Pi\left(\Sigma\Pi\right)^{[N^{{\mu}}]}$ circuits for $0 \leq \mu < 1/2$} Given $\mu$ such that $0 \leq \mu < 1/2$, we pick the parameter $\mu'$ such that $0 < \mu' < 1$ and $\frac{2\mu}{\mu'}$ is a positive constant strictly smaller than $1$. We construct a family of Nisan-Wigderson designs as described in Lemma~\ref{lem: designs} with the following parameters: \begin{enumerate} \item $b$, the number of sets, is set equal to $N$. \item $a$, the size of each of the sets $S_i$, is set equal to $N^{\frac{\mu}{\mu'}}\log^{\frac{1}{\mu'}} N$. \item $l$, the size of the universe, is chosen large enough to satisfy the hypothesis of Lemma~\ref{lem: designs}. From Lemma~\ref{lem: designs}, it follows that we can pick $l$ which is not too large ($l \in O(a^2/\log b)$). For the above chosen values of $a$ and $b$, there is a choice of $l$ such that $l$ is at most $N^{\frac{2\mu}{\mu'}}\log^{\frac{2}{\mu'}-1} N$. 
\end{enumerate} Recall that our goal is to construct a hitting set for $\Sigma\Pi\left(\Sigma\Pi\right)^{[N^{{\mu}}]}$ circuits. Observe that the choice of parameters $l, a, b$ satisfies the hypothesis of Lemma~\ref{lem: designs}. So, we get a collection of $N$ subsets $S_1, S_2, \ldots, S_N$ of $\{1, 2, 3, \ldots, l\}$ satisfying \begin{enumerate} \item for all $1\leq i \leq N$, $|S_i| = a$ \item for all $1 \leq i < j \leq N$, $|S_i \cap S_j| \leq \log N$ \end{enumerate} Moreover, these sets can be constructed in time polynomial in $b$ and $2^l$. We identify the set $\{1, 2, 3, \ldots, l\}$ with the set of new variables $\overline{Y} = \{Y_1, Y_2, \ldots, Y_l\}$. Before we proceed further, we need some notation. We will pick $\delta = (1-\mu')/2$, which is a positive constant. Given $a$, $\mu'$ and $\delta$, we define $\gamma = \frac{2(\mu' + \delta) + 1}{1-(\mu' + \delta)}$. Then, we define $q$ to be the smallest prime number between $({a/2})^{\frac{1+\gamma}{2+\gamma}}$ and $2\cdot ({a/2})^{\frac{1+\gamma}{2+\gamma}}$. Also, we set $a'$ to be equal to $({a/2})^{\frac{1}{2+\gamma}}$. Observe that $a/2 \leq a'q \leq a$. For each $i$ such that $1 \leq i \leq N$, let ${S_i}'$ be an arbitrary subset of $S_i$ of size equal to $a'q$. For brevity, we rename the sets $S_i'$ as $S_i$~\footnote{We have replaced the family $\{S_1, S_2, \ldots, S_N\}$ by the set family $\{S_1', S_2', \ldots, S_N'\}$ such that for each $i \in [N]$, $S_i' \subseteq S_i$. Observe that the design based properties of the original system continue to hold. The only thing that changes is that the size of $S_i'$ could be smaller than the size of $S_i$, by at most a factor of $2$. }. Let $\rho = (\mu' + \delta)\frac{\log a'q}{\log a'}$ and $D = \frac{\gamma + \rho}{2(1 + \gamma)} \cdot a'$. Often, for ease of notation, we will identify the subset $S_i$ of $\{1, 2, \ldots, l\}$ with the set of variables $\{Y_j : j \in S_i\}$. 
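A quick numeric sanity check of these parameter relations can be sketched as follows. The concrete values of $N$, $\mu$ and $\mu'$ below are our own sample choices (not fixed by the construction), and we take $\log$ to be base $2$.

```python
import math

def smallest_prime_between(lo, hi):
    """Return the smallest prime p with lo <= p <= hi (trial division; fine at these sizes)."""
    n = max(2, math.ceil(lo))
    while n <= hi:
        if all(n % d for d in range(2, math.isqrt(n) + 1)):
            return n
        n += 1
    raise ValueError("no prime in range")

# sample choices: mu' in (0, 1) with 2*mu/mu' < 1 (our illustrative values)
N, mu, mu_p = 10**6, 0.4, 0.9

a = N ** (mu / mu_p) * math.log2(N) ** (1 / mu_p)     # set size in the design
delta = (1 - mu_p) / 2
gamma = (2 * (mu_p + delta) + 1) / (1 - (mu_p + delta))

x = (a / 2) ** ((1 + gamma) / (2 + gamma))
q = smallest_prime_between(x, 2 * x)                  # Bertrand's postulate guarantees existence
a_p = (a / 2) ** (1 / (2 + gamma))                    # the quantity a' from the text

# the relation a/2 <= a'q <= a claimed above (up to floating point rounding)
assert a / 2 <= a_p * q * (1 + 1e-9)
assert a_p * q <= a * (1 + 1e-9)
```

Since $a' \cdot (a/2)^{(1+\gamma)/(2+\gamma)} = a/2$ exactly and $q$ lies between that quantity and twice it, the two assertions are just the inequality $a/2 \leq a'q \leq a$ from the text.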
We will think of the variables $\{Y_j : j \in S_i\}$ as being arranged in an $a'\times q$ matrix $V(i)$, with the variables placed in the matrix in some order. For every $i\in \{1, 2, 3, \ldots, N\}$, we define $NW_{{a'}, { \mu'}}(S_i)$ as $$NW_{{a'}, { \mu'}}(S_i) = \sum_{\substack{f(z) \in {\mathbb{F}}_{q}[z] \\ deg(f) \leq D-1}} \prod_{j \in [a']} V(i)_{jf(j)}$$ For a point $p = (p_1, p_2, \ldots, p_l) \in {\mathbb{F}}^l$, we denote by $NW_{{a'}, { \mu'}}(S_i)|p$ the evaluation of $NW_{{a'}, { \mu'}}(S_i)$ when the variable $Y_j$ is set to $p_j$. Let $G$ be an arbitrary subset of ${\mathbb{F}}$ of size $Nka' + 1$. We define the hitting set ${\cal H}$ as follows. \begin{define}[Definition of the hitting set ${\cal H}$]~\label{def:hitting set} $${\cal H} = \left\{ (NW_{{a'}, { \mu'}}(S_1)|p, NW_{{a'}, { \mu'}}(S_2)|p, \ldots, NW_{{a'}, { \mu'}}(S_N)|p) : p \in G^{l} \right\} $$ \end{define} We now proceed to prove the correctness of the construction. We first prove the following lemma, which shows that ${\cal H}$ is explicit and has the correct size as per Theorem~\ref{thm:mainthm2}. \begin{lem}~\label{lem: hitting set size} The set ${\cal H}$ as defined in Definition~\ref{def:hitting set} has size at most $(Nka' + 1)^l$ and all its elements can be enumerated in time $a^{a'}\cdot (Nka' + 1)^l\cdot N^{O(1)}$. \end{lem} \begin{proof} The size of the set ${\cal H}$ is equal to $|G|^l = (Nka' + 1)^l$. The set ${\cal H}$ can be enumerated by enumerating the points $p$ in $G^l$ in some natural order (say, lexicographic order) and evaluating the tuple $ (NW_{{a'}, { \mu'}}(S_1)|p, NW_{{a'}, { \mu'}}(S_2)|p, \ldots, NW_{{a'}, { \mu'}}(S_N)|p)$ at each of these points. For every point $p$ and subset $S_i$, the polynomial $NW_{{a'}, { \mu'}}(S_i)$ can be evaluated in time at most $a^{a'}\cdot \text{poly}(N)$ by Lemma~\ref{lem: NW eval}. So, the second part of the lemma follows. 
\end{proof} Observe that for our choice of parameters, the above bounds on the size and on the time of enumeration are bounded by a function which is subexponential in $N$. We now show that for every non-zero polynomial $P$ in the class ${\cal C}$, as defined in the statement of Theorem~\ref{thm:mainthm2}, there exists a point $p \in {\cal H}$ such that $P(p)$ is non-zero. We show this in Lemma~\ref{lem: hitting set correctness} below. That will complete the proof of Theorem~\ref{thm:mainthm2}. \subsection{Correctness of the construction}~\label{sec:hitting set correct} For the rest of this section, we denote $N^{\mu}$ by $s$. \begin{lem}~\label{lem: hitting set correctness} Let $P$ be a non-zero polynomial in the set $\cal C$ as defined in the statement of Theorem~\ref{thm:mainthm2}, and let ${\cal H}$ be the set defined in Definition~\ref{def:hitting set}. Then, there is a point $p$ in the set ${\cal H}$ such that $P(p) \neq 0$. \end{lem} \begin{proof} We define $$P_i(\overline{X}, \overline{Y}) := P(NW_{{a'}, { \mu'}}(S_1), NW_{{a'}, { \mu'}}(S_2), \ldots, NW_{{a'}, { \mu'}}(S_i), X_{i+1}, X_{i+2}, \ldots, X_N)$$ to be the polynomial obtained from $P$ by substituting the variables $X_j$ by $NW_{{a'}, { \mu'}}(S_j)$, for every $1 \leq j \leq i$. From the construction of our hitting set, it follows that it suffices to argue that the polynomial $P_{N}(\overline{X}, \overline{Y})$ is non-zero. If this is the case, then the lemma follows from Lemma~\ref{lem: comb nulls}, since the individual degree of any variable in $P_N(\overline{X}, \overline{Y})$ is at most $Nka'$. We proceed via contradiction: suppose that $P_N(\overline{X}, \overline{Y})$ is identically zero. Since $P = P_0(\overline{X}, \overline{Y})$ is non-zero to start with, by a hybrid argument it follows that there is an index $i$ such that $P_i(\overline{X}, \overline{Y})$ is non-zero while $P_{i+1}(\overline{X}, \overline{Y})$ is identically zero. 
Observe that $P_i$ is a polynomial in the variables $\overline{Y}$ and $X_{i+1}, X_{i+2}, \ldots, X_N$. In going from $P_{i}$ to $P_{i+1}$, we substituted the variable $X_{i+1}$ by the polynomial $NW_{{a'}, { \mu'}}(S_{i+1})$. Since $P_{i}(\overline{X}, \overline{Y})$ is non-zero by the assumption above, there exists a substitution $\overline{c}$ of all variables apart from $\{Y_j : j \in S_{i+1}\}$ and $X_{i+1}$ which keeps the polynomial non-zero. Let the polynomial resulting after this substitution be $P_i'$. From the definitions, it follows that $$P_i' = P(NW_{{a'}, { \mu'}}(S_1)|{\overline{c}}, NW_{{a'}, { \mu'}}(S_2)|{\overline{c}}, \ldots, NW_{{a'}, { \mu'}}(S_i)|{\overline{c}}, X_{i+1}, X_{i+2}|{\overline{c}}, \ldots, X_N|{\overline{c}}) $$ Observe that each of the polynomials $NW_{{a'}, { \mu'}}(S_j)|{\overline{c}}$ depends only on the variables in the set $S_j \cap S_{i+1}$. From the properties of Nisan-Wigderson designs and the choice of parameters, the size of this intersection is at most $\log N$. From the definition of $P_i$ and the choice of $\overline{c}$, $P_i'$ is not identically zero. We will think of $P_i'$ as a polynomial in $X_{i+1}$ with the coefficients being polynomials in the variables in the set $\{Y_j : j \in S_{i+1}\}$. Now, we know that the polynomial $P_{i+1}'$ obtained by substituting $X_{i+1}$ by $NW_{{a'}, { \mu'}}(S_{i+1})$ is identically zero. Hence, it must be the case that $X_{i+1} - NW_{{a'}, { \mu'}}(S_{i+1})$ is a factor of $P_i'$. To proceed further, we need the following claim. \begin{claim}~\label{clm: p1} $P_i'$ as defined above can be represented as $$P_i' = \sum_{r = 1}^T \prod_{j = 1}^d Q_{rj}'$$ such that each of the polynomials $Q_{rj}'$ depends on at most $s\log N$ variables. \end{claim} \begin{proof} Recall that $P$ can be represented as $$P = \sum_{i = 1}^T \prod_{j = 1}^d Q_{ij}$$ where each $Q_{ij}$ is a polynomial in at most $s = N^{\mu}$ variables. 
In going from $P$ to $P_i'$, we have substituted each of the variables outside the set $\{Y_j : j \in S_{i+1}\} \cup \{X_{i+1}\}$ by either a constant or by the polynomial $NW_{{a'}, { \mu'}}(S_{j})|\overline{c}$ (which is a polynomial in at most $|S_j \cap S_{i+1}| \leq \log N$ variables) for some $j$. In either case, after substitution, the polynomial $Q_{rj}'$ obtained from $Q_{rj}$ depends on at most $s\log N$ variables, since $Q_{rj}$ depended on at most $s$ variables. This completes the proof of the claim. \end{proof} Moreover, since the individual degree of the variables in $P$ is at most $k$, the individual degree of $X_{i+1}$ in $P_i'$ is at most $k$. The goal now is to invoke Lemma~\ref{lem:DSY main}, which would imply that $NW_{{a'}, { \mu'}}(S_{i+1})$ also has a small circuit as a sum of products of polynomials in {\it few} variables; together with the lower bound from Theorem~\ref{thm:mainthm}, this would lead to a contradiction. We essentially follow this outline. Formally, we use the following claim to complete the proof of Lemma~\ref{lem: hitting set correctness}. We defer the proof of the claim to the end. 
\begin{claim}~\label{clm:dsy app} If $(X_{i+1} - NW_{{a'}, { \mu'}}(S_{i+1}) )$ divides $P_i'$, then $NW_{{a'}, { \mu'}}(S_{i+1})$ can be written as $$ NW_{{a'}, { \mu'}}(S_{i+1}) = \sum_{r = 1}^{I'} \prod_{j = 1}^{d'} \Gamma_{rj} $$ where \begin{enumerate} \item $I' \leq (da'^2 + 1)\cdot {{k+a' + 1} \choose k + 1} \times {{T\cdot (k+1)^3 + a'}\choose a'}^{k+1}$ \item $d' \leq d\cdot a'$ \item Each $\Gamma_{rj}$ depends on at most $s\log N$ variables \end{enumerate} \iffalse $$ NW_{{a'}, { \mu'}}(S_{i+1}) = \sum_{r = 1}^{T'} \prod_{j = 1}^{d'} \Gamma_{rj}$$ such that \begin{enumerate} \item $T' \leq a'\cdot {{k+a'} \choose k} \times {{T\cdot (k+1)^3 + a'}\choose a'}^k$ \item $d' \leq d\cdot a'$ \item Each of the polynomials $\Gamma_{rj}$ depends on at most $s\log N$ variables \end{enumerate} \fi \end{claim} \iffalse \begin{proof} From Claim~\ref{clm: p1}, we know that $$P_i' = \sum_{r = 1}^T \prod_{j = 1}^d Q_{rj}'$$ such that each $Q_{rj}'$ depends on at most $s\log N$ variables. Since $P_i'$ is not identically zero and $NW_{{a'}, { \mu'}}(S_{i+1})$ is a root of $P_i'$, it follows from Lemma~\ref{lem:non zero derivative} that there is an integer $\lambda$ such that $0 \leq \lambda \leq k-1$ and, $$\frac{\partial^{\lambda} P_i'}{\partial X_{i+1}^{\lambda}}(NW_{{a'}, { \mu'}}(S_{i+1})) = 0$$ and $$\frac{\partial^{\lambda+1} P_i'}{\partial X_{i+1}^{\lambda+1}}(NW_{{a'}, { \mu'}}(S_{i+1})) \neq 0$$ From Lemma~\ref{lem: model props} it follows that $\tilde{P_i'} = \frac{\partial^{\lambda} P_i'}{\partial X_{i+1}^{\lambda}}$ can also be expressed as $$\tilde{P_i'} = \sum_{r = 1}^{T'} \prod_{j = 1}^d \tilde{Q}_{ij}$$ where $T' \leq T\cdot (k+1)^2$ and each of the $\tilde{Q}_{rj}$ depends on at most $s\log N$ variables. \iffalse Let $\tilde{P_i'} = \sum_{j = 0}^k C_j(\overline{Y})\cdot X_{i+1}^j$. 
\fi
\fi From our choice of parameters, recall that $$a = N^{\mu/{\mu'}}\cdot \log^{1/{\mu'}} N$$ and $$s = N^{\mu}.$$ Therefore, $s\log N \leq N^{\mu}\cdot \log N \leq a^{\mu'}$. To complete the proof, we observe that by Theorem~\ref{thm:mainthm}, we must have $$I'd' \geq (a')^{\Omega(\sqrt{a'})}.$$ However, for our choice of parameters, \begin{enumerate} \item $I' \leq (da'^2+1)\cdot {{k+a'+1} \choose {k+1}} \times {{T\cdot (k+1)^3 + a'}\choose a'}^{k+1} \leq da^{O(Tk^4)} \leq d{a'}^{O(Tk^4)}$ (since $a$ and $a'$ are polynomially related), \item $d' \leq da'$. \end{enumerate} This implies that $I'd' \leq d^2a^{O(Tk^4)}$. From our choice of parameters, $s\log N < a^{\mu'}$ and $Tk^4 + 2\log d = o(\sqrt{a'})$, which contradicts the lower bound $I'd' \geq (a')^{\Omega(\sqrt{a'})}$. This completes the proof of Lemma~\ref{lem: hitting set correctness} assuming Claim~\ref{clm:dsy app}. \end{proof} We now give a proof of Claim~\ref{clm:dsy app}. \begin{proof}[Proof of Claim~\ref{clm:dsy app}] From Claim~\ref{clm: p1}, we know that $$P_i' = \sum_{r = 1}^T \prod_{j = 1}^d Q_{rj}'$$ such that each $Q_{rj}'$ depends on at most $s\log N$ variables.
Since $P_i'$ is not identically zero and $NW_{{a'}, { \mu'}}(S_{i+1})$ is a root of $P_i'$, it follows from Lemma~\ref{lem:non zero derivative} that there is an integer $\lambda$ with $0 \leq \lambda \leq k-1$ such that $$\frac{\partial^{\lambda} P_i'}{\partial X_{i+1}^{\lambda}}(NW_{{a'}, { \mu'}}(S_{i+1})) = 0$$ and $$\frac{\partial^{\lambda+1} P_i'}{\partial X_{i+1}^{\lambda+1}}(NW_{{a'}, { \mu'}}(S_{i+1})) \neq 0.$$ From Lemma~\ref{lem: model props} it follows that $\tilde{P_i'} = \frac{\partial^{\lambda} P_i'}{\partial X_{i+1}^{\lambda}}$ can also be expressed as $$\tilde{P_i'} = \sum_{r = 1}^{T'} \prod_{j = 1}^d \tilde{Q}_{rj}$$ where $T' \leq T\cdot (k+1)^2$ and each of the $\tilde{Q}_{rj}$ depends on at most $s\log N$ variables. Observe that $\tilde{P_i'}$ vanishes when $NW_{{a'}, { \mu'}}(S_{i+1})$ is substituted for $X_{i+1}$, while its derivative with respect to $X_{i+1}$ does not vanish identically at $X_{i+1} = NW_{{a'}, { \mu'}}(S_{i+1})$. So, in particular, there is a substitution of the $Y$ variables at which the derivative $\frac{\partial{\tilde{P_i'}}}{\partial{X_{i+1}}}$ is nonzero. Since the class of $\Sigma\Pi\left(\Sigma\Pi\right)^{[s]}$ circuits is closed under translations of variables (item 2 in Lemma~\ref{lem: model props}), we can assume without loss of generality that the derivative is nonzero when all the variables in $\overline{Y}$ are set to zero.
Also observe that by this variable translation, we have actually obtained a polynomial $NW'_{{a'}, { \mu'}}(S_{i+1})$ from $NW_{{a'}, { \mu'}}(S_{i+1})$. Moreover, the degree of $NW'_{{a'}, { \mu'}}(S_{i+1})$ is equal to $a'$ and the homogeneous component of degree $a'$ of $NW'_{{a'}, { \mu'}}(S_{i+1})$ is equal to $NW_{{a'}, { \mu'}}(S_{i+1})$. Denote by $\tilde{P_i''}$ the polynomial obtained from $\tilde{P_i'}$ after this variable translation. At this point, the hypothesis of Lemma~\ref{lem:DSY main} is satisfied by $\tilde{P_i''}$. Let $\tilde{P_i''} = \sum_{j = 0}^k C_j(\overline{Y})\cdot X_{i+1}^j$. Here, $C_j(\overline{Y})$ is a polynomial only in the $Y$ variables and is the coefficient of $X_{i+1}^j$ when viewing $\tilde{P_i''}$ as an element of ${\mathbb{F}}[\overline{Y}][X_{i+1}]$. From Lemma~\ref{lem:extracting coefficients}, we know that each of the polynomials $C_j$ can be expressed as a polynomial of the form $$C_j = \sum_{r= 1}^{T_j} \prod_{l = 1}^d Q_{rl}''$$ where $T_j \leq T'\cdot(k+1) \leq T\cdot (k+1)^3$ and each $Q_{rl}''$ depends on at most $s\log N$ variables. Hence, by Lemma~\ref{lem:DSY main}, for every $t \geq 0$, there exists a polynomial $R_t \in {\mathbb{F}}[Z_1, Z_2, \ldots, Z_{k+1} ]$ of degree at most $t$ such that $$\mathsf{Hom}^{\leq t}[NW'_{{a'}, { \mu'}}(S_{i+1})] = \mathsf{Hom}^{\leq t}[R_t(C_0, C_1, \ldots, C_k)].$$ The goal now is to obtain a representation of $NW_{{a'}, { \mu'}}(S_{i+1})$ as a sum of products of polynomials in few variables and show that this contradicts the lower bound in Theorem~\ref{thm:mainthm}. $NW'_{{a'}, { \mu'}}(S_{i+1})$ is a polynomial of degree at most $a'$.
So, there is a polynomial $R_{a'}$ of degree at most $a'$ in $k + 1$ variables such that $$NW'_{{a'}, { \mu'}}(S_{i+1}) = \mathsf{Hom}^{\leq {a'}}[R_{a'}(C_0, C_1, \ldots, C_k)].$$ From the discussion on the relation between $NW'_{{a'}, { \mu'}}(S_{i+1})$ and $NW_{{a'}, { \mu'}}(S_{i+1})$, we also know that $$NW_{{a'}, { \mu'}}(S_{i+1}) = \mathsf{Hom}^{a'}[NW'_{{a'}, { \mu'}}(S_{i+1})] = \mathsf{Hom}^{a'}[R_{a'}(C_0, C_1, \ldots, C_k)].$$ Since $R_{a'}$ is a polynomial in $k+1$ variables of degree $a'$, the number of monomials in $R_{a'}$ is at most ${{a' + k + 1} \choose {k+1}}$. Therefore, we can represent $R_{a'}(C_0, C_1, \ldots, C_k)$ as a sum of products of the $C_j$'s, with the sum fan-in at most ${{a' + k + 1} \choose {k+1}}$ and the product fan-in at most $a'$. Moreover, each of the product gates in this representation takes the polynomials $C_j$'s as inputs. We know that each $C_j$ can be written as $$C_j = \sum_{r= 1}^{T_j} \prod_{l = 1}^d Q_{rl}''$$ where each $Q_{rl}''$ is a polynomial in at most $s\log N$ variables, and the top sum fan-in $T_j$ is at most $T\cdot (k+1)^3$. For any $t$, the polynomial $C_j^t$ has a similar representation with the top sum fan-in at most ${{T\cdot (k+1)^3 + t}\choose t}$. Therefore, any product of fan-in at most $a'$ in the $C_j$'s can be written as a sum of products of polynomials in at most $s\log N$ variables, with top fan-in at most $${{T\cdot (k+1)^3 + a'}\choose a'}^{k+1}$$ since each $C_j$ is raised to a power of at most $a'$ and there are $k+1$ such $C_j$'s.
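As a quick, machine-checkable sanity check of the monomial count used above (the snippet and its function name are ours, not part of the argument): the number of monomials in $k+1$ variables of total degree at most $a'$ is exactly $\binom{a'+k+1}{k+1}$.

```python
from itertools import product
from math import comb

def count_monomials(num_vars: int, max_deg: int) -> int:
    """Count monomials x_1^{e_1} ... x_v^{e_v} with e_1 + ... + e_v <= max_deg."""
    return sum(1 for exps in product(range(max_deg + 1), repeat=num_vars)
               if sum(exps) <= max_deg)

# Bound used in the text: a polynomial in k+1 variables of degree a' has at
# most C(a' + k + 1, k + 1) monomials -- and the count is exactly that.
for k in range(4):
    for a_prime in range(6):
        assert count_monomials(k + 1, a_prime) == comb(a_prime + k + 1, k + 1)
```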
Therefore, $R_{a'}(C_0, C_1, \ldots, C_k)$ can be written as $$R_{a'}(C_0, C_1, \ldots, C_k) = \sum_{r = 1}^I \prod_{j = 1}^{d'} \Gamma'_{rj}$$ such that \begin{enumerate} \item $I \leq {{k+a' + 1} \choose k+1} \times {{T\cdot (k+1)^3 + a'}\choose a'}^{k+1}$ \item $d' \leq d\cdot a'$ \item Each $\Gamma'_{rj}$ depends on at most $s\log N$ variables \end{enumerate} We would now like to extract the homogeneous part of degree $a'$ of $R_{a'}(C_0, C_1, \ldots, C_k)$, which we know is equal to $NW_{{a'}, { \mu'}}(S_{i+1})$. We do this by a standard application of Lemma~\ref{lem:interpolation}. Since we are interested only in the homogeneous part of degree $a'$, we can assume without loss of generality that each of the polynomials $\Gamma'_{rj}$ is of degree at most $a'$ (we can discard all monomials of degree larger than $a'$ in each of the $\Gamma'_{rj}$, since they do not contribute to the homogeneous component of degree $a'$ of $R_{a'}(C_0, C_1, \ldots, C_k)$). Hence, the degree of $R_{a'}(C_0, C_1, \ldots, C_k)$ is upper bounded by $da'\cdot a'$. So, from Lemma~\ref{lem:interpolation}, we can extract the homogeneous component of degree $a'$ of $R_{a'}(C_0, C_1, \ldots, C_k)$ by blowing up the top fan-in by a factor of at most $da'^2 + 1$. Hence, $NW_{{a'}, { \mu'}}(S_{i+1})$ can be expressed as $$ NW_{{a'}, { \mu'}}(S_{i+1}) = \sum_{r = 1}^{I'} \prod_{j = 1}^{d'} \Gamma_{rj} $$ where \begin{enumerate} \item $I' \leq (da'^2 + 1)\cdot {{k+a' + 1} \choose k + 1} \times {{T\cdot (k+1)^3 + a'}\choose a'}^{k+1}$ \item $d' \leq d\cdot a'$ \item Each $\Gamma_{rj}$ depends on at most $s\log N$ variables \end{enumerate} \end{proof} We remark that if the value of $s$ were $\log^{O(1)} N$ to start with, the same proof as above goes through with $l$ and $a$ being set to polynomials of sufficiently high degree in $\log N$. The size of the hitting set and the time to construct it in this case are upper bounded by a quasipolynomial function in $N$.
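The interpolation step invoked above rests on the standard fact that, for a polynomial $P$ of degree $D$, $P(\lambda \overline{x}) = \sum_{i=0}^{D} \mathsf{Hom}^i[P](\overline{x})\,\lambda^i$, so each homogeneous component is a fixed linear combination of $D+1$ evaluations of $P$ at scaled inputs; this is where a degree-dependent blow-up of the top fan-in comes from. A minimal Python sketch of this extraction (ours, with hypothetical function names; exact rational arithmetic via \texttt{fractions}):

```python
from fractions import Fraction

def hom_component(P, deg, target, point):
    """Evaluate Hom^target[P] at `point`, where P is a black-box polynomial of
    total degree <= deg, by interpolating P(lam * point) as a polynomial in lam."""
    lams = [Fraction(i) for i in range(1, deg + 2)]        # deg+1 distinct points
    vals = [P([l * x for x in point]) for l in lams]
    coeff = Fraction(0)
    for j, lj in enumerate(lams):
        # Lagrange basis polynomial ell_j(lam); extract its lam^target coefficient.
        basis = [Fraction(1)]
        denom = Fraction(1)
        for mth, lm in enumerate(lams):
            if mth == j:
                continue
            denom *= (lj - lm)
            new = [Fraction(0)] * (len(basis) + 1)         # multiply by (lam - lm)
            for idx, c in enumerate(basis):
                new[idx + 1] += c
                new[idx] -= c * lm
            basis = new
        coeff += vals[j] * basis[target] / denom
    return coeff

# Toy check: P(x, y) = 3 + x*y + x^2*y has homogeneous parts of degrees 0, 2, 3.
P = lambda v: 3 + v[0] * v[1] + v[0] ** 2 * v[1]
pt = [Fraction(2), Fraction(5)]
assert hom_component(P, 3, 2, pt) == 2 * 5      # Hom^2 = x*y
assert hom_component(P, 3, 3, pt) == 4 * 5      # Hom^3 = x^2*y
assert hom_component(P, 3, 0, pt) == 3          # Hom^0 = 3
```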
\section{Open problems}\label{sec:open ques} We conclude with some open problems. \begin{enumerate} \item An intriguing open question is to obtain PIT for $\Sigma\Pi\left(\Sigma\Pi\right)^{[s]}$ circuits without the restriction on the individual degree. The strategy in this paper relies on hardness-randomness tradeoffs for bounded depth circuits~\cite{DSY09}. The tradeoffs in~\cite{DSY09} crucially use the fact that the individual degree is bounded. \item Another related question would be to obtain any non-trivial PIT (even subexponential-time) for the sum of constantly many products of degree-two polynomials. \item It would also be interesting to understand whether one could obtain any non-trivial PIT for slightly non-multilinear depth-four circuits (say, individual degree at most 2) with bounded top fan-in. A natural strategy for this question would be to reduce it to the case of $\Sigma\Pi\left(\Sigma\Pi\right)^{[s]}$ circuits by either expanding out the polynomials $Q_{ij}$ which depend on too many variables or by using a partial-derivative-like trick, as in~\cite{OSV14}. The immediate challenge is that the top fan-in seems to increase under either of these tricks, and the calculations in this paper then do not seem to go through. \end{enumerate} \section*{Acknowledgements} We would like to thank Rafael Oliveira for many helpful discussions regarding hardness-randomness tradeoffs for bounded depth arithmetic circuits at the early stages of this work. \bibliographystyle{alpha}
\section{Computing Approximate Correlated Equilibria with Near-Optimal Welfare} \label{sect:blackwell} In this section, we develop an algorithmic framework for computing an $\varepsilon$-CE with welfare additively $\varepsilon$ close to the optimal. The ideas presented in the section can be easily modified to find an $\varepsilon$-CCE with welfare additively $\varepsilon$ close to the optimal CCE.\footnote{In order to find an approximate CCE with near-optimal welfare we can define a different regret vector than the one under consideration in this section, whose components are equal to the regret terms that appear in the definition of a CCE. Note that the regret vector for the CCE case is $(nm+1)$-dimensional.} Our framework is based on a novel extension of Blackwell's condition, which is used in the analysis of no-regret algorithms (see, e.g.,~\cite{young2004strategic}). The idea here is to define, for each action profile $a \in A$, a vector $r(a)$ whose components list the regret of each player at action profile $a$. Specifically, for each player $p \in [n]$ and action $j \in [m]$ there is a component in $r(a)$ that is equal to $u_p(j,a_{-p}) - u_p(a)$; note that this quantity is the regret of player $p$ at action profile $a$ with respect to deviation $j$. The regret vector $r(a)$ has an additional component that is equal to the difference between the optimal welfare and the welfare of action profile $a$. Intuitively, the components of $r(a)$ are defined to ensure that $x^*$ is an optimal CE if and only if the following component-wise inequalities hold: $\mathbb{E}_{a \sim x^*} [ r(a) ] \leq 0$. Moreover, to find the desired approximate CE it suffices to determine a distribution $x \in \Delta(A)$ that satisfies $\mathbb{E}_{a \sim x} [ r(a) ] \leq \varepsilon$. Using an extension of Blackwell's condition (see inequality (\ref{ineq:bwcond})), we develop an algorithm for finding such a distribution $x$.
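For concreteness, the following Python sketch (ours, not part of the formal development) builds the regret vector $r(a)$ for a toy two-player coordination game; the optimal CE welfare $w^*$ is supplied as a parameter rather than computed, since computing it is exactly the hard part of the problem.

```python
def regret_vector(n, m, u, a, w_star):
    """Regret vector r(a) as defined in the text.

    u(p, profile): utility of player p at the joint action profile (a tuple);
    w_star: the optimal CE welfare, supplied here as a parameter."""
    r = {}
    for p in range(n):
        for i in range(m):
            for j in range(m):
                if i == j:
                    continue
                if a[p] == i:
                    dev = a[:p] + (j,) + a[p + 1:]
                    r[(p, i, j)] = u(p, dev) - u(p, a)
                else:
                    r[(p, i, j)] = 0.0
    r["welfare"] = w_star - sum(u(p, a) for p in range(n))
    return r

# Toy 2-player, 2-action coordination game: utility 1 iff the actions match.
u = lambda p, prof: 1.0 if prof[0] == prof[1] else 0.0
r = regret_vector(2, 2, u, (0, 0), w_star=2.0)
# (0, 0) is a pure Nash (hence correlated) equilibrium with optimal welfare,
# so every component of r(a) is non-positive.
assert all(v <= 0 for v in r.values())
```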
In particular, via a gradient-descent-like argument, we show that action profiles satisfying the extended Blackwell condition can be used to determine the desired approximate CE $x$; see the proof of Theorem~\ref{thm:graddes} for details. It turns out that finding an action profile that satisfies the extended Blackwell condition corresponds to computing an additive approximation of a modified-welfare maximization problem (see Definition~\ref{def:mod-wel}). This overall gives us an algorithmic framework that reduces the problem of determining an approximate CE with near-optimal welfare to the problem of additively approximating a modified-welfare maximization problem. We instantiate this framework in the context of \emph{aggregative games} in the next section. Formally, we begin by defining a $d = nm(m-1) + 1$ dimensional regret vector $r(a)$ for each action profile $a \in A$. The first $nm(m-1)$ components of $r(a)$ are indexed by triples $(p,i,j)$ for player $p \in [n]$ and distinct actions $i,j \in [m]$. The $(p,i,j)$th component of $r(a)$ is equal to $u_p(j,a_{-p}) - u_p(a)$ if $a_p = i$, and is zero otherwise. That is, the $(p,i,j)$th component is the regret that player $p$ experiences at action profile $a$ by not playing action $j$. The last ($d$th) component of $r(a)$ is equal to $w^* - w(a)$. Here $w^*$ denotes the optimal welfare over the set of correlated equilibria, i.e., $w^* := \max \{ w(x) \mid x \textrm{ is a correlated equilibrium}\}$. Write $x^*$ to denote the welfare-optimal CE, i.e., $w^* = w(x^*)$. Note that for $x^*$ we have that $\mathbb{E}_{a \sim x^*} [ r(a) ] \leq 0$ holds component-wise. Now, a useful observation is that for any scaling vector $y \in \mathbb{R}_{+}^d$ with nonnegative components, we have $\mathbb{E}_{a \sim x^*} [ y^T r(a) ] \leq 0$.
Via the probabilistic method, we get that for any $y \in \mathbb{R}_{+}^d$ there exists an action profile $a^*$ such that \begin{align} \label{ineq:bwcond} y^T r(a^*) \leq 0 \end{align} Inequality (\ref{ineq:bwcond}) can be thought of as an extension of Blackwell's condition. This inequality leads us to the objective of maximizing a modified welfare function that is defined as follows. \begin{definition}[Modified Welfare] \label{def:mod-wel} Given scaling vector $y \in \mathbb{R}_{+}^d$ (where the first $nm(m-1)$ components of $y$ are indexed by $(p,i,j)$ for player $p \in [n]$ and distinct actions $i,j \in [m]$ and we refer to the last component of $y$ as $y_d$), we define modified utilities $\tilde{u}^y_p$ and modified welfare $\tilde{w}^y$ as follows: \begin{align} \tilde{u}^y_p(a) & := y_d u_p(a) + \sum_{j \in A_p \setminus \{a_p\}} y_{(p, a_p, j)} (u_p(a) - u_p(j, a_{-p})) \\ \tilde{w}^y(a) & := \sum_p \tilde{u}^y_p(a). \end{align} \end{definition} For ease of presentation, when $y$ is clear from context we will drop it from the superscript of $\tilde{u}_p^y$ and $\tilde{w}^y$. \begin{definition}[Modified-Welfare Maximization Problem] \label{def:wel-max} Given a multi-player game and vector $y \in \mathbb{R}_{+}^d$, the modified-welfare maximization problem (MWMP) is to compute an action profile $a$ of the game that maximizes the modified welfare $\tilde{w}^y$, i.e., the objective is to obtain $\argmax_{a \in A} \tilde{w}^y(a)$. \end{definition} Note that for any vector $y \in \mathbb{R}_{+}^d$ and any action profile $a$ we have $y^T r(a) = y_d w^* - \tilde{w}^y(a)$. As argued above, for any vector $y$ with non-negative components there exists an action profile $a^*$ that satisfies (\ref{ineq:bwcond}). In particular, $a^*$ satisfies $\tilde{w}^y(a^*) \geq y_d w^*$. Therefore, given a vector $y$, we can compute an action profile that satisfies (\ref{ineq:bwcond}) by solving an instance of MWMP specified via $y$.
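The identity $y^T r(a) = y_d w^* - \tilde{w}^y(a)$ noted above can be verified numerically; observe that $w^*$ cancels in the identity, so any placeholder value works. The following snippet (ours) checks it on a random game:

```python
import random
from itertools import product

n, m = 2, 3
random.seed(0)
U = {prof: [random.random() for _ in range(n)]
     for prof in product(range(m), repeat=n)}       # random utilities in [0, 1]
u = lambda p, prof: U[prof][p]
w = lambda prof: sum(U[prof])
w_star = 1.7   # placeholder; the identity holds for ANY value substituted for w*

def check_identity(a, y, y_d):
    """Check y^T r(a) == y_d * w_star - (modified welfare of a) for one profile."""
    lhs = y_d * (w_star - w(a))          # welfare component of y^T r(a)
    mod_w = y_d * w(a)                   # build the modified welfare
    for p in range(n):
        for j in range(m):
            if j == a[p]:
                continue
            dev = a[:p] + (j,) + a[p + 1:]
            lhs += y[(p, a[p], j)] * (u(p, dev) - u(p, a))
            mod_w += y[(p, a[p], j)] * (u(p, a) - u(p, dev))
    return abs(lhs - (y_d * w_star - mod_w)) < 1e-9

y = {(p, i, j): random.random()
     for p in range(n) for i in range(m) for j in range(m) if i != j}
assert all(check_identity(a, y, y_d=0.5) for a in product(range(m), repeat=n))
```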
Moreover, an $\alpha$-\emph{additive} approximation of MWMP is guaranteed to produce an action profile that satisfies $y^T r(a) \leq \alpha$. Below we show that (additively) approximating this welfare maximization problem is sufficient to obtain an approximate CE with near-optimal welfare. The hardness result established earlier (see Theorem~\ref{thm:aop}) implies that MWMP cannot be efficiently approximated in general succinct games. However, it is possible for us to approximate MWMP in specific classes of games; in particular, the next subsection details an efficient algorithm to approximate MWMP in \emph{aggregative games}. Specifically, given a game and vector $y \in \mathbb{R}_+^d$, write $\mathcal{M}(y)$ to denote an $O\left( \frac{\varepsilon^4}{ n^4m^8} \right)$-additive approximation for MWMP with respect to the specified $y$. Here, $\varepsilon$ is an approximation parameter. Note that an additive approximation $a = \mathcal{M}(y)$ satisfies $y^T r(a) \leq O \left( \frac{\varepsilon^4}{ n^4m^8} \right)$. Our algorithm, $\mathcal{A}$, for computing an approximate CE is given below. $\mathcal{A}$ requires access to an additive approximation $\mathcal{M}(y)$ for polynomially many $y$'s. Note that the $y$'s considered during $\mathcal{A}$'s execution satisfy $ y \in [0,n]^d$.
\FOR{ $t=1$ to $N$ } \STATE Set $y = \bar{r}_{t-1} - \Pi_\mathcal{N} (\bar{r}_{t-1} )$. \COMMENT{Note that the components of $y$ are nonnegative and their magnitude is no more than $n$} \STATE Set $a^t = \mathcal{M}(y)$. \COMMENT{Note that $a^t$ satisfies $y^T r(a^t) \leq O \left( \frac{\varepsilon^4}{ n^4m^8} \right) = O\left(\frac{1}{N^2}\right) $} \label{step:inprod} \STATE Set $\bar{r}_t = \frac{t}{t+1} \bar{r}_{t-1} + \frac{1}{t+1} r(a^t) $. \label{step:avgrg} \ENDFOR \STATE Return the empirical distribution over the multiset $\{a^0, a^1, a^2, \ldots, a^N\}$. \end{algorithmic} \end{algorithm} \begin{theorem} \label{thm:graddes} For a given $n$-player $m$-action game, algorithm $\mathcal{A}$ computes an $\varepsilon$-correlated equilibrium with welfare at least $w^* -\varepsilon$. Here $w^*$ denotes the optimal welfare over the set of correlated equilibria of the given game. Moreover, if in the given game additive approximations $\mathcal{M}(y)$ for $y \in [0,n]^d$ can be computed in polynomial (in $n$, $m$, and $1/\varepsilon$) time, then $\mathcal{A}$ runs in polynomial time as well. \end{theorem} \begin{proof} First we establish the stated running-time bound for Algorithm $\mathcal{A}$. Note that $\mathcal{A}$ iterates $N = O\left(\frac{n^2m^4}{\varepsilon^2}\right) $ times. Therefore, if additive approximations $\mathcal{M}(y)$ can be computed in polynomial time, then $\mathcal{A}$ runs in polynomial time as well. Next we establish that $\mathcal{A}$ computes an approximate correlated equilibrium with high welfare. Write $x$ to denote the distribution returned by $\mathcal{A}$, i.e., $x$ is the empirical distribution over the multiset of action profiles $\{a^0, a^1, a^2, \ldots, a^N\}$. We will show that $x$ satisfies $\mathbb{E}_{a \sim x} [ r(a) ] \leq \varepsilon$, component-wise. This inequality and the definition of regret vector $r(a)$ imply that $x$ is an $\varepsilon$-correlated equilibrium with welfare at least $w^* - \varepsilon$. 
Note that the initialization $\bar{r}_0 = r(a^0)$ and Step (\ref{step:avgrg}) of algorithm $\mathcal{A}$ ensure that $\bar{r}_N = \frac{1}{N+1}\sum_{t=0}^N r(a^t) = \mathbb{E}_{a \sim x} [ r(a) ] $. Here, the second equality follows from the fact that $x$ is the empirical distribution over the action profiles $a^0, a^1, a^2, \ldots, a^N$. Write $d(r, \mathcal{N})$ to denote the Euclidean distance between vector $r$ and the negative orthant $\mathcal{N}$. The proof proceeds by showing that $d(\bar{r}_N, \mathcal{N})$ is no more than $\varepsilon$. This implies that component-wise $\bar{r}_N$ is no more than $\varepsilon$, and hence we get the desired claim $\mathbb{E}_{a \sim x} [ r(a) ] \leq \varepsilon$. Recall that $\bar{r}_{t-1}$ denotes the average regret vector considered in the $(t-1)$th iteration of the algorithm and $\Pi_\mathcal{N}( \bar{r}_{t-1})$ denotes the Euclidean projection of this vector onto the negative orthant. The vector $\Pi_\mathcal{N}( \bar{r}_{t-1})$ is found by replacing the positive components of $\bar{r}_{t-1}$ by $0$, i.e., the $i$th component of the projection $(\Pi_\mathcal{N}( \bar{r}_{t-1}))_i $ is equal to $\min\{ 0, (\bar{r}_{t-1})_i \}$. We bound the Euclidean distance of $\bar{r}_{t}$ from the negative orthant as follows: \begin{align} d^2(\bar{r}_{t}, \mathcal{N}) & \leq d^2(\bar{r}_{t} , \Pi_\mathcal{N} (\bar{r}_{t-1})) \nonumber \\ & = \left\| \frac{t}{t+1}\bar{r}_{t-1} + \frac{1}{t+1} r(a^t) - \Pi_\mathcal{N} (\bar{r}_{t-1})\right\|_2^2 \nonumber \\ & = \left(\frac{t}{t+1}\right)^2 \| \bar{r}_{t-1} - \Pi_\mathcal{N} (\bar{r}_{t-1}) \|_2^2 + \left(\frac{1}{t+1}\right)^2 \| r(a^t) - \Pi_\mathcal{N} (\bar{r}_{t-1}) \|_2^2 \nonumber \\ & \qquad + \frac{2t}{(t+1)^2} \left(\bar{r}_{t-1} - \Pi_\mathcal{N} (\bar{r}_{t-1}) \right)^T \left(r(a^t) - \Pi_\mathcal{N} (\bar{r}_{t-1}) \right) \label{eq:interim} \end{align} Next we bound the terms on the right-hand side of equality (\ref{eq:interim}).
The fact that the utilities of the players are between $0$ and $1$ implies that for any action profile $a$ the regret vector satisfies $\| r(a) \|_2^2 \leq 2 n^2m^4$. Also, $\| \bar{r}_{t-1} \|_2^2 \leq 2n^2m^4$, since $\bar{r}_{t-1}$ is an average of regret vectors. Therefore, using the triangle inequality, we get the following bound for the second term in (\ref{eq:interim}): $\left(\frac{1}{t+1}\right)^2 \| r(a^t) - \Pi_\mathcal{N} (\bar{r}_{t-1}) \|_2^2 \leq \left(\frac{2 nm^2}{t+1}\right)^2 $. Step (\ref{step:inprod}) ensures that $\left(\bar{r}_{t-1} - \Pi_\mathcal{N} (\bar{r}_{t-1}) \right)^T r(a^t)$ is no more than $O \left( \frac{1}{N^2} \right)$. In addition, note that the nonzero components of vector $\bar{r}_{t-1} - \Pi_\mathcal{N} (\bar{r}_{t-1}) $ are the positive components of vector $\bar{r}_{t-1}$, and on the other hand the nonzero components of vector $ \Pi_\mathcal{N} (\bar{r}_{t-1}) $ are the negative components of $\bar{r}_{t-1}$. Therefore, we have $\left(\bar{r}_{t-1} - \Pi_\mathcal{N} (\bar{r}_{t-1}) \right)^T \Pi_\mathcal{N} (\bar{r}_{t-1}) = 0 $. Overall, using $t \leq N$, we get the following bound on the third term in (\ref{eq:interim}): \begin{align*} \frac{2t}{(t+1)^2} \left(\bar{r}_{t-1} - \Pi_\mathcal{N} (\bar{r}_{t-1}) \right)^T \left(r(a^t) - \Pi_\mathcal{N} (\bar{r}_{t-1}) \right) & \leq \frac{2t}{(t+1)^2} \cdot O \left( \frac{1}{N^2} \right) \\ & \leq \frac{1}{(t+1)^2} \end{align*} Using the bounds mentioned above and multiplying equation (\ref{eq:interim}) by $(t+1)^2$ we get \begin{align*} (t+1)^2 d^2(\bar{r}_{t}, \mathcal{N}) & \leq t^2 d^2 (\bar{r}_{t-1}, \mathcal{N}) + O(n^2 m^4). \end{align*} This leads to a telescoping sum for $1 \leq t \leq N$ that overall gives us \begin{align} \label{ineq:bnd} N^2 d^2(\bar{r}_{N}, \mathcal{N}) & \leq d^2 ( \bar{r}_1, \mathcal{N}) + O(n^2 m^4 N). \end{align} Note that $\| \bar{r}_1 \|_2^2 \leq O(n^2 m^4) $, therefore $d^2 ( \bar{r}_1, \mathcal{N}) \leq O(n^2 m^4) $.
Hence, inequality (\ref{ineq:bnd}) gives $N^2 d^2(\bar{r}_{N}, \mathcal{N}) \leq O(n^2 m^4 N) $. In other words, $ d (\bar{r}_{N}, \mathcal{N}) \leq O(n m^2 / \sqrt{N})$. By our choice of $N = O\left(\frac{n^2m^4}{\varepsilon^2}\right)$ (with a suitably large hidden constant), we get that the Euclidean distance between $\bar{r}_N$ and the negative orthant is at most $\varepsilon$, i.e., $d (\bar{r}_{N}, \mathcal{N}) \leq \varepsilon$. As discussed above, the last inequality implies that $\mathbb{E}_{a \sim x} [ r(a) ] \leq \varepsilon$, where $x$ is the distribution returned by the algorithm $\mathcal{A}$. Overall, following the argument outlined above, we get the desired claim. \end{proof} \noindent {\bf Remark:} We can adapt this algorithmic framework to the egalitarian objective or Pareto efficiency, instead of welfare maximization. For the egalitarian objective, for each action profile, instead of the regret vector $r(a)$, we can consider the $(nm(m-1) + n)$-dimensional vector $\rho(a)$. The first $nm(m-1)$ components of $\rho(a)$ and $r(a)$ are the same. But, the last $n$ components of $\rho(a)$ are set equal to $w' - u_p(a)$ for each $p \in [n]$. Here $w'$ is the optimal value of the egalitarian objective, $w' := \max_{x \in \textrm{CE}} \min_p u_p(x)$. Working with $\rho(a)$ and the corresponding modified-welfare function, we can obtain an $\varepsilon$-CE $x$ that satisfies $\min_p u_p(x) \geq w' - \varepsilon$. To find an approximate correlated equilibrium that is nearly Pareto efficient, we pick a specific player $q$ and replace the last component of the regret vector $r(a)$ by $w'' - u_q(a)$. Here $w'' := \max_{x \in \textrm{CE}} u_q(x)$. In this case, we can consider the relevant modified-welfare function and overall obtain an $\varepsilon$-CE that satisfies $u_q(x) \geq w'' - \varepsilon$. Since there does not exist a CE wherein the utility of $q$ is $\varepsilon$ more than $u_q(x)$, we get that $x$ is $\varepsilon$-Pareto efficient.
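To illustrate the dynamics of algorithm $\mathcal{A}$, the following self-contained Python sketch (ours; it uses a brute-force \emph{exact} MWMP oracle, i.e., additive error $0$) runs the loop on a toy two-player coordination game and checks that the average regret vector approaches the negative orthant, as the analysis predicts:

```python
from itertools import product

n, m = 2, 2
profiles = list(product(range(m), repeat=n))
u = lambda p, a: 1.0 if a[0] == a[1] else 0.0        # coordination game
w = lambda a: sum(u(p, a) for p in range(n))
w_star = 2.0        # welfare of the CE (0, 0); also the maximum welfare here

def regret(a):
    r = {}
    for p, i, j in product(range(n), range(m), range(m)):
        if i != j:
            dev = a[:p] + (j,) + a[p + 1:]
            r[(p, i, j)] = (u(p, dev) - u(p, a)) if a[p] == i else 0.0
    r["wel"] = w_star - w(a)
    return r

def mwmp_oracle(y):
    # Brute-force exact oracle: minimizing y^T r(a) over profiles is the same
    # as maximizing the modified welfare.
    return min(profiles, key=lambda a: sum(y[k] * v for k, v in regret(a).items()))

rbar = regret((0, 1))                                 # arbitrary starting profile
N = 2000
for t in range(1, N + 1):
    y = {k: max(v, 0.0) for k, v in rbar.items()}     # rbar - Pi_N(rbar)
    ra = regret(mwmp_oracle(y))
    rbar = {k: (t * rbar[k] + ra[k]) / (t + 1) for k in rbar}

# l_inf distance from the average regret vector to the negative orthant
dist = max(max(v, 0.0) for v in rbar.values())
assert dist < 0.1
```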
\subsection{Aggregative Games} \label{sect:agg-game} This section presents a polynomial-time additive-approximation algorithm for MWMP in aggregative games. An $n$-player $m$-action aggregative game with action profiles $A$ is specified by an aggregator function $S: A \rightarrow [-W, W]^k$ and utility-defining functions $v_p: A_p \times [-W,W]^k \rightarrow [0,1]$. The function $S$ serves as a sufficient statistic for the utilities of the players; specifically, the utility of player $p$ at action profile $a$ (i.e., $u_p(a)$) is equal to $v_p(a_p, S(a))$. Note that here the utility depends on the action of the player, $a_p$, and the aggregated vector, $S(a)$. In aggregative games, the function $S$ is additively separable; in particular, there exist vectors $f_p(a_p) \in [-W', W']^k$ for each player $p \in [n]$ and action $a_p \in A_p$ such that for any action profile $a \in A$ we have the following component-wise equality: $S(a) = \sum_{p} f_p(a_p)$. Here, the dimension $k$ is assumed to be a fixed constant and $W$ and $W'$ are polynomially bounded in $n$ and $m$. Along the lines of prior work (see~\cite{babichenko2013best, cummings2014privacy}), we consider the setting in which the influence of the aggregator on the utilities is bounded: $|v_p(a_p, s) - v_p(a_p, s') | \leq \| s - s' \|_{\infty} $ for all players $p \in [n]$, actions $a_p \in A_p$, and vectors $s, s' \in [-W, W]^k$. Without this bounded-influence property, the assumption that $k$ is a fixed constant would be moot. Recall that the modified utilities $\tilde{u}_p^y(a)$ are defined in terms of the utilities $u_p(a)$; see Definition~\ref{def:mod-wel}. In this section we will only consider vectors $y$ whose components are linearly bounded, i.e., $y \in [0,n]^d$; in order to apply Theorem~\ref{thm:graddes} it suffices to consider such linearly-bounded vectors.
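A minimal Python sketch of an aggregative game (ours; $k = 1$, scalar contributions $f_p(a_p) = a_p$, and a $v_p$ whose slope in the aggregate is well below $1$, so the bounded-influence condition holds):

```python
from itertools import product

# n players each pick a contribution level in {0, ..., m-1}; the aggregate is
# the (k = 1 dimensional) sum of the per-action contributions f_p(a_p) = a_p.
n, m = 3, 3
f = lambda p, ap: ap
S = lambda a: sum(f(p, a[p]) for p in range(n))

# v_p depends only on (own action, aggregate); its slope in s is 1/12 <= 1, so
# |v_p(ap, s) - v_p(ap, s')| <= |s - s'|, and its values stay inside [0, 1].
v = lambda p, ap, s: 0.2 + s / (2 * n * (m - 1)) - 0.1 * ap / (m - 1)
u = lambda p, a: v(p, a[p], S(a))

for a in product(range(m), repeat=n):
    assert 0.0 <= u(0, a) <= 1.0
# Profiles with the same aggregate and the same own action give the same utility:
assert u(0, (1, 0, 2)) == u(0, (1, 2, 0))
```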
We begin by discretizing the aggregating vectors $f_p(a_p)$ such that their components are multiples of a parameter $\delta$, which will be set appropriately. That is, for all $p \in [n]$ and $a_p \in A_p$, the components of vector $f_p(a_p)$ are rounded to the nearest multiple of $\delta$. Note that component-wise the vectors $f_p(a_p)$ are polynomially bounded; hence, a polynomially small $\delta$ ensures that, even after discretization, for all action profiles $a$ the aggregated value $S(a)$ remains within $ O \left( \frac{ \varepsilon^4 }{\mathrm{poly}(n,m)} \right)$---under the $\ell_\infty$ norm---of the original (undiscretized) value. The bounded influence assumption, $|v_p(a_p, s) - v_p(a_p, s') | \leq \| s - s' \|_{\infty} $, and the fact that $y \in [0,n]^d$ ensure that the discretization process does not change the modified welfare $\tilde{w}^y(a)$ by more than $ O \left( \frac{ \varepsilon^4 }{\mathrm{poly}(n,m)} \right)$, for any action profile $a$. Overall, this implies that (with a polynomially small $\delta$) if we compute an action profile $a'$ that maximizes $\tilde{w}^y$ with the discretized aggregator function then $a'$ is an $ O\left( \frac{ \varepsilon^4 }{n^4m^8} \right)$-additive approximation for MWMP with the original (undiscretized) aggregator function. Throughout the remainder of the section we will work with the discretized aggregator. Now, all the discretized vectors $f_p(a_p)$ are contained in $\left\{0, \pm \delta, \pm 2 \delta, \pm 3 \delta, \ldots, \pm \left\lceil \frac{W'}{\delta} \right\rceil \delta \right\}^k$. Write $\mathcal{G}$ to denote the $k$-dimensional grid defined as follows: $\mathcal{G} := \{ \sum_{q= 1}^p f_q(a_q) \mid \textrm{ for all } p \in [n] \textrm{ and each } a_q \in A_q \}$. Since $\delta$ is polynomially small we have $|\mathcal{G}| = \left(\frac{nm}{\varepsilon}\right)^{O(k)}$. Also, for all action profiles $a \in A$, the discretized aggregator function value $S(a)$ is contained in $\mathcal{G}$.
We develop a dynamic program that works over $\mathcal{G}$ and computes an action profile that maximizes the modified welfare $\tilde{w}^y$. As discussed above, this gives us an $ O\left( \frac{ \varepsilon^4 }{n^4m^8} \right)$-additive approximation for MWMP. The main result of this section is that an additive approximation for MWMP can be computed efficiently when the scaling vector $y$ is contained in $[0,n]^d$. \begin{theorem} \label{thm:agggame} Given an $n$-player $m$-action aggregative game and a scaling vector $y \in [0,n]^{(nm(m-1) +1)}$, there exists a polynomial-time algorithm that computes an $ O\left( \frac{ \varepsilon^4 }{n^4m^8} \right)$-additive approximation for the MWMP instance specified via $y$. \end{theorem} \begin{proof} Throughout the proof we work with the modified welfare function $\tilde{w}^y$ that is specified by the discretized aggregator function. In particular, via a dynamic program we will compute an action profile that maximizes $\tilde{w}^y$. As mentioned above, this gives the desired additive-approximation guarantee. For each vector $s \in \mathcal{G}$ we will maximize the modified-welfare function $\tilde{w}^y$ over the set of action profiles $A(s) := \{ a \in A \mid S(a) = s\}$ in polynomial time. Write $a^*$ to denote an action profile that maximizes $\tilde{w}^y$. Discretization ensures that $S(a^*) \in \mathcal{G}$. Also, the fact that the cardinality of $\mathcal{G}$ is polynomially bounded implies that efficiently optimizing over $A(s)$ for each $s \in \mathcal{G}$ gives an action profile that maximizes $\tilde{w}^y$ in polynomial time. The remainder of the proof details an algorithm that, given vector $s \in \mathcal{G}$, solves the following optimization problem in polynomial time: $\argmax_{a \in A(s)} \tilde{w}^y(a) $. First, we define a modified utility function $\tilde{v}_p(a_p, s)$ in terms of the given functions $v_p(a_p,s)$.
Recall that the vector $y$ is $d=nm(m-1) + 1$ dimensional and its first $nm(m-1)$ components are indexed by triples $(p,i,j)$ with $p \in [n]$ and distinct $i,j \in [m]$. Using vector $y$ as a parameter we define \begin{align*} \tilde{v}_p^y(a_p, s) & := y_d \ v_p(a_p,s) + \sum_{j \in A_p} y_{(p, a_p, j) } \cdot \left[ v_p(a_p,s) - v_p\left(j, s - f_p (a_p) + f_p(j)\right) \right]. \end{align*} A key observation is that for any action profile $a$, if $s = S(a)$ then $\tilde{u}^y_p(a) = \tilde{v}^y_p(a_p,s)$ and, hence, $\tilde{w}^y(a) = \sum_{p=1}^n \tilde{v}^y_p (a_p, s)$. Therefore, by the definition of $A(s)$, for all $a \in A(s)$ the following equality holds: $\tilde{w}^y(a) = \sum_{p=1}^n \tilde{v}^y_p (a_p, s)$. Hence, $\argmax_{a \in A(s)} \sum_{p=1}^n \tilde{v}^y_p (a_p, s)$ is equal to $\argmax_{a \in A(s)} \tilde{w}^y(a)$. We solve $\argmax_{a \in A(s)} \sum_{p=1}^n \tilde{v}^y_p (a_p, s)$ via a dynamic program that fills a matrix $M(p, s')$ indexed by $p \in [n]$ and $s' \in \mathcal{G}$. In particular, $M(p, s')$ is set equal to $ \max_{a_1, a_2, \ldots, a_p } \{ \sum_{q=1}^p \tilde{v}^y_q (a_q, s) \mid \sum_{q=1}^p f_q(a_q) = s' \}$. Here, the entry $M(n, s)$ is equal to the target optimal value $\max_{a \in A(s)} \tilde{w}^y(a)$. We can initialize $M(1, s')$ by going over actions in $A_1$; specifically, $M(1, s') = \max_{a_1 }\{ \tilde{v}^y_1(a_1, s) \mid f_1(a_1) = s'\}$. In general, we use the recurrence relation $M(p, s') = \max_{a_p, s''} \{ \tilde{v}^y_p(a_p, s) + M(p-1, s'') \mid f_p(a_p) + s'' = s' \}$ to complete the matrix. A direct inductive argument proves the correctness of this dynamic program. Since $|\mathcal{G}|$ is polynomially bounded, the size of the matrix is also polynomially bounded. Overall, the dynamic program runs in polynomial time and the stated claim follows. \end{proof} Theorem~\ref{thm:agggame} shows that the additive approximation required in Theorem~\ref{thm:graddes} can be computed in polynomial time. 
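The dynamic program in the proof of Theorem~\ref{thm:agggame} can be sketched as follows (a Python sketch under simplifying assumptions: aggregating vectors are tuples, and the names \texttt{actions}, \texttt{f}, and \texttt{vtil}, standing for $A_p$, $f_p$, and $\tilde{v}^y_p$, are ours). The table is kept as a dictionary over reachable grid points, of which there are at most $|\mathcal{G}|$:

```python
def max_modified_welfare(actions, f, vtil, s):
    """Maximize sum_p vtil(p, a_p, s) over profiles a with
    sum_p f(p, a_p) == s, via the table M(p, s') from the proof.

    actions[p]    -- list of player p's actions
    f(p, a)       -- discretized aggregating vector of action a (a tuple)
    vtil(p, a, s) -- modified utility of player p (illustrative names)
    """
    add = lambda u, v: tuple(x + y for x, y in zip(u, v))
    # Initialization: M(1, s') by going over the first player's actions.
    M = {}
    for a in actions[0]:
        key = f(0, a)
        M[key] = max(M.get(key, float("-inf")), vtil(0, a, s))
    # Recurrence: M(p, s') = max over a_p, s'' with f(p, a_p) + s'' = s'.
    for p in range(1, len(actions)):
        M_next = {}
        for s2, val in M.items():
            for a in actions[p]:
                key = add(s2, f(p, a))
                cand = val + vtil(p, a, s)
                M_next[key] = max(M_next.get(key, float("-inf")), cand)
        M = M_next
    return M.get(s, float("-inf"))
```

For instance, with two players, actions $\{0,1\}$, $f_p(a_p) = (a_p)$, and $\tilde{v}^y_p(a_p, s) = a_p$, the maximum over $A((1,))$ has value $1$.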
Therefore, the two theorems together imply that in aggregative games an approximate correlated equilibrium with near-optimal welfare can be computed in polynomial time. \begin{corollary} In $n$-player $m$-action aggregative games an $\varepsilon$-correlated equilibrium with welfare $w^* - \varepsilon$ can be computed in time polynomial in $n$, $m$, and $1/\varepsilon$. Here $w^*$ denotes the optimal welfare over the set of correlated equilibria of the given game. \end{corollary} \section{Hardness Results} In this section we show that, given a succinct game, it is $\rm{NP}$-hard to compute a CCE with welfare strictly better than that of the lowest-welfare CCE. In particular, we develop a reduction that shows that the following decision problem is $\rm{NP}$-hard. \begin{definition}[$\mathrm{NT}$ \hspace*{-4pt}] Let $\Gamma$ be an $n$-player $m$-action game with a succinct representation. $\mathrm{NT}$ is defined to be the problem of determining whether $\Gamma$ admits a coarse correlated equilibrium $x$ such that $w(x) > w(x')$. Here $x'$ denotes the worst CCE of $\Gamma$, in terms of social welfare $w$. \end{definition} The hardness of $\mathrm{NT}$ implies that, under standard complexity-theoretic assumptions, any nontrivial approximation of the optimization problem (\ref{opt-cce}) is impossible. Specifically, let $x^*$ denote an optimal CCE of a game (i.e., $x^*$ is an optimal solution of the optimization problem (\ref{opt-cce})) and $x'$ be a CCE with minimum possible welfare. Write $\beta:= w(x') / w(x^*)$, i.e., the ratio of the welfare of the worst CCE to that of the best CCE. In games in which a CCE can be computed efficiently, an efficient $\beta$-approximate solution of (\ref{opt-cce}) is direct; we can simply return an arbitrary CCE. The hardness of $\mathrm{NT}$ implies that no approximation factor better than $\beta$ can be achieved in general games. A proof of the $\rm{NP}$-hardness of $\mathrm{NT}$ is detailed below. 
\begin{align} \max \ \ & \ \ \sum_{p=1}^n u_p(x) \nonumber \\ \textrm{subject to} \ \ & \ \ x \textrm{ is a CCE} \label{opt-cce} \end{align} \begin{theorem} \label{thm:op} $\mathrm{NT}$ is $\rm{NP}$-hard in succinct multiplayer games. \end{theorem} \begin{proof} We start with a succinct game $G$ from a class of games in which computing a welfare-maximizing action profile is $\rm{NP}$-hard. Multiple examples of such classes of games are given in~\cite{PR}. We reduce the problem of determining an optimal (welfare maximizing) action profile in $G$ to solving $\mathrm{NT}$ in a modified succinct game $G'$. When $G$ is an $n$-player $m$-action succinct game, we construct a modified game $G'$ by providing an additional action, $b_p$, to each player $p \in [n]$. $G'$ is therefore an $n$-player $(m+1)$-action game. Let $A$ denote the set of action profiles of game $G$; similarly, let $A'$ be the action profiles of $G'$. Let $u_p: A \rightarrow [0,1]$ and $u'_p: A' \rightarrow [0,1] $ denote the utility of a player $p$ in $G$ and $G'$, respectively. Along these lines, let $w(\cdot)$ and $w'(\cdot)$ represent the welfare of action profiles in $G$ and $G'$, respectively. Note that for every action profile $a' \in A' \setminus A$ there exists at least one player $p$ who is playing the augmented action $b_p$, i.e., $a'_p = b_p$. Specifically, we start with the following \rm{NP}-hard problem: given succinct game $G$ and parameter $\textrm{OPT}$, determine if there exists an action profile $a \in A$ such that $w(a) \geq \textrm{OPT}$.\footnote{Note that here we are considering an \rm{NP}-hard {\em decision} problem and, hence, parameter $\textrm{OPT}$ is part of the input.} The utilities $u_p$ (and hence also $w$) are given as succinct input. Using them we define $u'_p$ as follows: \begin{enumerate} \item For every action profile $a \in A$, $u'_p(a) := w(a)/n$. In other words, on action profiles that belong to the original game we construct an identical-interest game. 
\item For every action profile $a' \in A' \setminus A$ such that in $a'$ there is \emph{exactly} one player $p$ who is playing the augmented action $b_p$ (i.e., $a'_p = b_p$ for exactly one player $p$ and $a'_q \neq b_q$ for all $q \neq p$), set $u'_p(a') := \textrm{OPT}/n $ and $u'_q(a') := 0$ for all $q \neq p$. \item For action profiles $a' \in A' \setminus A$ in which more than one player is playing the augmented action $b_p$, we set \[u'_p(a') = \left\{ \begin{array}{ll} \varepsilon/n & \quad \textrm{ if } a'_p = b_p \\ 0 & \quad \textrm{ otherwise } \end{array} \right. \] Here we select $\varepsilon$ to satisfy $\textrm{OPT} > \varepsilon \geq \textrm{OPT}/n$. \end{enumerate} Note that $G'$ is a succinct game. Specifically, if game $G$ is succinct then, by definition, we have a polynomial-size specification $z$ for $G$. In addition, there exists an algorithm $U$ that takes as input $z$, $p \in [n]$, and $a \in A$, and computes the utility of player $p$ at any action profile $a$, i.e., $u_p(a)$, in polynomial time. Now, to obtain a succinct representation for $G'$, we can use $z$ and $U$ (as a subroutine) to compute the utility $u'_p$ of any player $p$ at any action profile in polynomial time. Say $b$ denotes the action profile wherein each player is playing the augmented action, $b:=(b_1, b_2, \ldots, b_n)$. The definition of $u'_p$ implies that $w'(b) = \sum_{p=1}^n u'_p(b) = \sum_{p=1}^n \varepsilon/n = \varepsilon $. We will prove that there exists an action profile $a \in A$ (i.e., an action profile in game $G$) with $w(a) \geq \textrm{OPT}$ \emph{iff} there exists a CCE $x$ in $G'$ that satisfies $w'(x) > w'(b)$. This shows that determining whether there exists a CCE $x$ such that $w'(x) > w'(b)$ is $\rm{NP}$-hard. To complete the hardness proof for $\mathrm{NT}$ we will show that the action profile $b$ is a pure Nash equilibrium (and, therefore, a CCE), and that no other CCE in $G'$ has welfare $w'$ less than that of $b$. 
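For concreteness, the three-case construction of $u'_p$ can be sketched in code (a minimal Python sketch; the function name \texttt{reduced\_utility}, the sentinel \texttt{B} marking the augmented action $b_p$, and the toy game below are ours):

```python
def reduced_utility(p, profile, u, OPT, eps, n, B="b"):
    """Utility u'_p of player p in the constructed game G'.

    profile -- tuple of n actions; the augmented action b_q is the sentinel B
    u(q, a) -- utility u_q(a) of the original game G (only queried on A)
    Requires OPT > eps >= OPT / n.
    """
    k = sum(1 for q in range(n) if profile[q] == B)
    if k == 0:                    # profile lies in A: identical interest
        return sum(u(q, profile) for q in range(n)) / n      # w(a)/n
    if k == 1:                    # exactly one augmented player
        return OPT / n if profile[p] == B else 0.0
    return eps / n if profile[p] == B else 0.0   # two or more augmented
```

On a toy two-player game with all original utilities equal to $1$ (so $\textrm{OPT} = 2$) and $\varepsilon = 1.5$, the all-augmented profile $b$ gives each player $\varepsilon/n = 0.75$.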
Suppose $a$ is an optimal action profile in game $G$, i.e., $a \in A $ and $w(a) \geq \textrm{OPT}$. Then $a$ is in fact a pure Nash equilibrium in $G'$. This follows from the fact that $u'_p(a) = w(a)/n \geq \textrm{OPT}/n$ ($G'$ is identical interest on $a \in A$); hence (i) for any possible deviation $\hat{a}_p \neq b_p $ for player $p$ we have $u'_p(a) = w(a)/n \geq w(\hat{a}_p, a_{-p})/n = u'_p(\hat{a}_p, a_{-p})$, where the inequality holds since $a$ is an optimal action profile in $G$; (ii) for the deviation $b_p$, note that $u'_p(a) \geq \textrm{OPT}/n = u'_p(b_p, a_{-p})$. Therefore, no player can benefit (increase $u'_p$) by unilaterally deviating from $a$, thereby proving that $a$ is a pure Nash equilibrium in $G'$. Overall, we get that if there exists an action profile $a \in A$ with $w(a) \geq \textrm{OPT}$ then there exists a CCE $x$ (in particular, an optimal action profile $a$ itself) in $G'$ that satisfies $w'(x) > w'(b)$; recall that $w'(a) \geq \textrm{OPT} > \varepsilon = w'(b)$. It remains to show that if there exists a CCE $x$ such that $w'(x) > w'(b)$ then there exists an action profile $a \in A$ with $w(a) \geq \textrm{OPT}$. We will consider the set of action profiles in the support of $x$ that are also contained in $A$, i.e., $\textrm{Supp}(x) \cap A$. A useful observation is that for all $a' \in A' \setminus A$ the welfare $w'$ satisfies $w'(a') \leq w'(b)$ (recall that $\varepsilon \geq \textrm{OPT}/n$). This implies that $\textrm{Supp}(x) \cap A \neq \emptyset$; otherwise, we would have $w'(x) \leq w'(b)$. Write $\pi>0$ to denote the probability mass of $x$ on the set $\textrm{Supp}(x) \cap A$; specifically, $\pi := \sum_{a \in A} x(a)$. Since $x$ is a CCE, deviating to $b_p$ cannot increase any player $p$'s expected utility: \begin{align*} \mathbb{E}_{a \sim x} [ u'_p(a) ] & \geq \mathbb{E}_{a \sim x} [u'_p(b_p, a_{-p})]. 
\end{align*} We can rewrite the above inequality as follows: $\mathbb{E}_{a \sim x} [ u'_p(a) - u'_p(b_p, a_{-p})] \geq 0$. Next we expand in terms of conditional expectations: \begin{align} \mathbb{E}_{a \sim x} \left[ u'_p(a) - u'_p(b_p, a_{-p}) \mid a \in A' \setminus A \right] \cdot (1-\pi) \ + \ \mathbb{E}_{a \sim x} \left[u'_p(a) - u'_p(b_p, a_{-p}) \mid a \in A \right] \cdot \pi & \geq 0. \label{ineq:cexp} \end{align} Note that for any action profile $a' \in A' \setminus A$ and each player $p$ we have \begin{align} u'_p(a') - u'_p(b_p, a'_{-p}) \leq 0.\end{align} Indeed, either $a'_p = b_p$, in which case $u'_p(a') - u'_p(b_p, a'_{-p}) = 0$; otherwise, $a'_p \neq b_p$ and then $u'_p(a') = 0 < u'_p(b_p, a'_{-p}) $. This implies that the term $\mathbb{E}_{a \sim x} \left[ u'_p(a) - u'_p(b_p, a_{-p}) \mid a \in A' \setminus A \right] $ in inequality (\ref{ineq:cexp}) is non-positive for every player $p$. Therefore, the second term in (\ref{ineq:cexp}), $\mathbb{E}_{a \sim x} \left[u'_p(a) - u'_p(b_p, a_{-p}) \mid a \in A \right]$, must be non-negative for every player $p$. Summing the second term over all players we get \begin{align} \mathbb{E}_{a \sim x} \left[ \sum_p \left( u'_p(a) - u'_p(b_p, a_{-p}) \right)\mid a \in A \right] \cdot \pi & \geq 0. \label{ineq:fexp} \end{align} Recall that $\pi > 0$, i.e., there exists an action profile $a \in A$ such that $x(a) > 0$. Therefore, inequality (\ref{ineq:fexp}) and the probabilistic method imply that there exists an action profile $a \in A $ such that $\sum_p \left( u'_p(a) - u'_p(b_p, a_{-p}) \right) \geq 0$. Since $a \in A$, we have $u'_p(b_p, a_{-p}) = \textrm{OPT}/n$ for all $p$. Hence, $\sum_p u'_p(a) \geq \sum_p \textrm{OPT}/n = \textrm{OPT}$. Thus, the existence of a CCE $x$ in $G'$ such that $w'(x) > w'(b)$ implies that there exists an action profile $a \in A$ with $w(a) \geq \textrm{OPT}$. 
To complete the proof, we need to show that $b$ is a pure Nash equilibrium and that no other CCE in $G'$ has welfare $w'$ less than that of $b$. The first part of this claim is direct. To prove the second part, suppose by way of contradiction that there existed a CCE $x'$ in $G'$ such that $w'(x') < w'(b)$. Then there would exist a player $p$ such that \begin{align} \mathbb{E}_{a' \sim x'} [ u'_p(a') ] & < w'(b)/n \nonumber \\ & = \varepsilon/n. \label{ineq:switch} \end{align} But note that for any $a'_{-p} \in A'_{-p}$ we have $u'_p(b_p, a'_{-p}) \geq \varepsilon/n$. This observation along with inequality (\ref{ineq:switch}) implies that $p$ would strictly benefit by unilaterally deviating to $b_p$. Therefore, $x'$ cannot be a CCE. This completes the proof. \end{proof} \noindent {\bf Remark:} The proof of Theorem~\ref{thm:op} can be directly adapted to establish hardness for CE as well. In particular, the fact that any CE $x$ satisfies the inequalities that define a CCE (see Definitions~\ref{def:ce} and~\ref{def:cce}) can be used in the previous proof to show that it is \rm{NP}-hard to determine a CE with welfare strictly better than that of the worst possible CE. In addition, we show below that the reduction given in the proof of Theorem~\ref{thm:op} establishes a hardness result for the egalitarian objective as well. \begin{theorem} \label{thm:eo} In an $n$-player, $m$-action succinct game it is {\rm NP}-hard to determine if there exists a coarse correlated equilibrium $x$ that satisfies $\min_p u'_p(x) > \min_p u'_p(x')$, where $u'_p$ denotes the utility of player $p$ in the given game and $x'$ is the worst equilibrium with respect to the egalitarian objective, i.e., $x' \in \argmin_{x'' \in \textrm{CCE }} \{ \min_p u'_p(x'') \}$. \end{theorem} \begin{proof}[Sketch] Here we use the same notation as in the proof of Theorem~\ref{thm:op}. 
Also, as in the previous proof, we obtain a reduction from the following \rm{NP}-hard problem: given succinct game $G$, determine if there exists an action profile $a \in A$ such that $w(a) \geq \textrm{OPT}$. Note that the action profile $b$ in the constructed game $G'$ is the worst equilibrium with respect to the egalitarian objective, i.e., $ b \in \argmin_{x'' \in \textrm{CCE }} \{ \min_p u'_p(x'') \}$. We can establish this fact by contradiction. In particular, if there existed a CCE $x'$ such that $\min_p u'_p(x') < \min_p u'_p(b) = \varepsilon/ n$, then the player $p$ that obtains the minimum utility under $x'$ could benefit by unilaterally deviating to $b_p$, contradicting the assumption that $x'$ is a CCE. To prove this theorem we show that the original game $G$ has an action profile $a$ with $w(a) \geq \textrm{OPT}$ iff there exists a CCE $x$ such that $\min_p u'_p(x) > \min_p u'_p(b)$. The forward direction follows from the fact that an optimal action profile $a$ with welfare at least $\textrm{OPT}$ is a pure Nash equilibrium in $G'$. To establish the reverse direction we note that $u'_p(b) = \varepsilon/n$ for all $p$. Hence if a CCE $x$ satisfies $\min_p u'_p(x) > \min_p u'_p(b)$, then its welfare $w'(x)$ is strictly greater than $\varepsilon$. In other words, $w'(x) > \varepsilon = w'(b)$. But, as shown in the previous proof, this strict inequality suffices to establish the existence of an action profile for which $w(a) \geq \textrm{OPT}$. Hence, we get the desired claim. \end{proof} \noindent {\bf Remark:} The reduction detailed above also proves that there does not exist a polynomial-time algorithm that computes a Pareto-efficient CCE, unless \rm{P=NP}. We can establish this result by noting that a polynomial time algorithm, say $\mathcal{A}$, that computes any Pareto-efficient CCE can be used to determine whether there exists an action profile $a$ that satisfies $w(a) \geq \textrm{OPT}$; as before, this suffices to prove the hardness result. 
If $\mathcal{A}$ returns $b$ as a Pareto-efficient equilibrium then we know that there does not exist an action profile $a$ such that $w(a) \geq \textrm{OPT}$, since such an action profile would Pareto dominate $b$ in $G'$: $u'_p(a) > u'_p(b) $ for all $p$. Also, note that if $\mathcal{A}$ returns a Pareto-efficient CCE $x$ such that $u'_p(x) = u'_p(b)$ for all $p$, then again we get that $b$ is Pareto-efficient. So this case is subsumed in the first one. Recall that every CCE $x$ of $G'$ satisfies $u'_p(x) \geq u'_p(b)$. Therefore, the final case entails $\mathcal{A}$ returning a CCE $x$ such that for some $p$ we have $u'_p(x) > u'_p(b) $. Hence, we get that $w'(x) > w'(b)$, which again implies the existence of an action profile $a$ with welfare $w(a) \geq \textrm{OPT}$. \\ \noindent {\bf Remark:} Theorems~\ref{thm:op} and~\ref{thm:eo} hold for potential games. This follows from the fact that the reduction used in the proof of these theorems in fact gives us a potential game. Specifically, a potential function $\phi$ for the constructed game $G'$ is as follows: \begin{enumerate} \item $\phi(a) := w(a)/n$ for all $a \in A$. \item For all action profiles $a \in A' \setminus A$ (i.e., in $a$ at least one player is playing its augmented action $b_p$), we set \[ \phi(a) := \frac{\textrm{OPT}}{n} + \frac{(k-1)\varepsilon}{n}. \] Here $k$ is the number of players playing their corresponding augmented action in action profile $a$, $k = \left| \{ q \mid a_q = b_q \} \right|$. \end{enumerate} A case analysis shows that $\phi$ is a potential function for $G'$. In particular, we will show that the following equality holds for each player $p$ and all pairs of action profiles $(a_p, a_{-p})$ and $(a'_p, a_{-p})$: \begin{align} u'_p(a_p, a_{-p}) - u'_p(a'_p,a_{-p}) & = \phi(a_p, a_{-p}) - \phi(a'_p,a_{-p}) \label{eq:pot} \end{align} \begin{itemize} \item[] Case \rm{I}: Both $(a_p, a_{-p})$ and $(a'_p, a_{-p})$ are action profiles in $A$. 
Here we have $u'_p(a_p, a_{-p}) = w(a_p, a_{-p})/n = \phi(a_p, a_{-p}) $ and $u'_p(a'_p, a_{-p}) = w(a'_p, a_{-p})/n = \phi(a'_p, a_{-p})$. Hence, in this case (\ref{eq:pot}) holds. \\ \item[] Case \rm{II}: Action profile $(a_p, a_{-p}) \in A$ and $(a'_p, a_{-p}) \notin A$ (i.e., $a'_p = b_p$). Again, following the definitions of the utility $u'_p$ and the potential function $\phi$, we get equality (\ref{eq:pot}): $u'_p(a_p, a_{-p}) = w(a_p, a_{-p})/n = \phi(a_p, a_{-p}) $ along with $u'_p(a'_p, a_{-p}) = \textrm{OPT}/n = \phi(a'_p, a_{-p})$. The symmetric case of $(a_p, a_{-p}) \notin A$ and $(a'_p, a_{-p}) \in A$ is addressed similarly. \\ \item[] Case \rm{III}: Both action profiles $(a_p, a_{-p})$ and $(a'_p, a_{-p})$ are not in $A$. If neither $a_p$ nor $a'_p$ is equal to $b_p$, the utility $u'_p$ is zero under both action profiles. Also, the number of players playing their respective augmented actions $b_q$ is the same in $(a_p, a_{-p})$ and $(a'_p, a_{-p})$; hence $\phi(a_p, a_{-p}) = \phi(a'_p, a_{-p})$. This yields equality (\ref{eq:pot}). Now we consider the setting in which exactly one of $a_p$ or $a'_p$ is equal to $b_p$; say $a_p = b_p$ (the other possibility, $a'_p = b_p$, holds by symmetry). Here, $u'_p(a_p, a_{-p}) = \varepsilon/n$ and $u'_p(a'_p, a_{-p}) = 0$. Say $k \in [n]$ is the number of players playing their corresponding augmented action in action profile $(a_p, a_{-p})$; then $\phi(a_p, a_{-p}) =\textrm{OPT}/n + (k-1) \varepsilon/n $ and $\phi(a'_p, a_{-p}) = \textrm{OPT}/n + (k-2) \varepsilon/n$. Therefore, again, (\ref{eq:pot}) holds. \end{itemize} \subsection{Approximate Coarse Correlated Equilibrium} This section establishes the hardness of computing an {\em approximate} CCE that has high social welfare. Specifically, we consider the problem of computing a $\frac{1}{2n^3}$-CCE with welfare at least $(1 + \frac{1}{n})$ times the welfare of the worst CCE. 
Note that there exist regret-based dynamics (cf.~\cite{young2004strategic}) that converge to the set of $\varepsilon$-CCE in time polynomial in $1/\varepsilon$. Therefore, in polynomial time we can compute \emph{a} $\frac{1}{2n^3}$-CCE. But, as the following theorem shows, it is unlikely that we can efficiently find a $\frac{1}{2n^3}$-CCE with any nontrivial welfare guarantee. Note that in an $n$-player $m$-action game a $\frac{1}{2n^3m}$-CE is guaranteed to be a $\frac{1}{2n^3}$-CCE (see Definitions~\ref{def:eps-CE} and~\ref{def:approxCCE}). Using this fact, one can directly use the proof given in this section to show that, under standard complexity-theoretic assumptions, there does not exist a polynomial-time algorithm that determines a $\frac{1}{2n^3m}$-CE with any nontrivial welfare guarantee in succinct multiplayer games. It is worth pointing out that in multiplayer games we can always find {\em a} $\frac{1}{2n^3m}$-CE in polynomial time (cf.~\cite{young2004strategic}). \begin{definition}[$\mathrm{ANT}$] Let $\Gamma$ be an $n$-player $m$-action succinct game. $\mathrm{ANT}$ \ is defined to be the problem of determining whether there exists a $\frac{1}{2n^3}$-CCE $x$ in $\Gamma$ such that $w(x) \geq (1 + \frac{1}{n}) w(x')$, where $x'$ denotes the worst CCE of $\Gamma$, in terms of social welfare $w$. \end{definition} \begin{theorem} \label{thm:aop} In succinct multiplayer games, $\mathrm{ANT}$ \ is $\rm{NP}$-hard under randomized reductions: if $\mathrm{ANT}$ \ admits a polynomial-time algorithm then {\rm NP} admits a polynomial-time randomized algorithm. \end{theorem} \begin{proof} We will extend the construction presented in the proof of Theorem~\ref{thm:op}. We start with a game $G$ from a class of games in which it is {\rm NP}-hard to compute an action profile with welfare within an additive one of the optimal. 
That is, in $G$ it is \rm{NP}-hard to compute an action profile $a$ such that $w(a) \geq \max_{a' \in A} w(a') - 1$; note that this is a fairly modest hardness of approximation requirement. Write $\textrm{OPT} = \max_{a \in A} w(a)$. Below we develop a polynomial-time randomized algorithm that uses an algorithm for $\mathrm{ANT}$ \ to compute an action profile $a$ that satisfies $w(a) \geq \textrm{OPT} -1$. This establishes the stated claim. To find the desired action profile $a$, we need a parameter $\tau$ that satisfies $\tau \in [\textrm{OPT} -1 , \textrm{OPT}]$. Since the utilities in $G$ are normalized between $0$ and $1$, we have $\textrm{OPT} \leq n$. Therefore, one of the values in $\{0,1,\ldots, n-1\}$ will give $\tau \in [\textrm{OPT} -1 , \textrm{OPT}]$, and we can simply search exhaustively. Applying the same transformations as in the proof of Theorem~\ref{thm:op}, we obtain the succinct game $G'$. While setting utilities in $G'$ we use $\varepsilon = (\tau +1)/n$, where parameter $\tau \in [\textrm{OPT}- 1, \textrm{OPT}]$. Therefore, we have $ \frac{\textrm{OPT}}{n} \leq \varepsilon \leq \frac{\textrm{OPT} + 1}{n}$. We assume that $\textrm{OPT} \geq 1$, else finding an action profile $a$ such that $w(a) \geq \textrm{OPT} -1$ is trivial. Also, we can assume that $n \geq 4$; recall that for a constant number of players, an optimal CCE can be computed in polynomial time. The following inequality holds under these assumptions: $\textrm{OPT} \geq \left( 1 + \frac{1}{n} \right) \frac{\textrm{OPT} + 1}{n} $. As before, the action profile $b$ is a pure Nash equilibrium, and in fact is a CCE with minimum social welfare. First, note that an optimal action profile $a^* \in \argmax_{a \in A} w(a)$ of $G$ is a pure Nash equilibrium (hence, a $\frac{1}{2n^3}$-CCE) in $G'$. Also, we have $w'(a^*) = w(a^*) = \textrm{OPT}$. 
The bound $w'(a^*) \geq \left(1 + \frac{1}{n}\right) w'(b)$ follows from the following chain of inequalities: $\textrm{OPT} \geq \left(1 + \frac{1}{n} \right) \frac{\textrm{OPT} +1}{n} \geq \left(1 + \frac{1}{n}\right) \varepsilon = \left(1 + \frac{1}{n}\right) w'(b)$. Thus we get that there exists a $\frac{1}{2n^3}$-CCE with welfare at least $(1+1/n) w'(b)$. This overall ensures that a polynomial-time algorithm for $\mathrm{ANT}$ \ is guaranteed to return a solution. Next we show that any such returned solution $x$ can be used to compute an action profile $a$ that satisfies $w(a) \geq \textrm{OPT} -1$. The fact that $w'(a) \leq w'(b) $ for all $a \in A' \setminus A$ and the inequality $w'(x) \geq (1 + \frac{1}{n}) w'(b)$ imply that $\sum_{a \in A} w'(a) x(a) \geq \frac{1}{n} w'(b)$. Recall that $w'(b) = \varepsilon \geq \frac{\textrm{OPT}}{n}$. Therefore, $\sum_{a \in A} w'(a) x(a) \geq \frac{1}{n^2} \textrm{OPT}$. Since $w(a) = w'(a)$ for all $a \in A$, we have $\max_{a \in A} \ w'(a) = \textrm{OPT}$. Therefore, $\pi := \sum_{a \in A} x(a) \geq \frac{1}{n^2}$. Given that $x$ is a $\frac{1}{2n^3}$-approximate CCE, analogous to inequality (\ref{ineq:fexp}), here we have \begin{align} \mathbb{E}_{a \sim x} \left[ \sum_p \left( u'_p(a) - u'_p(b_p, a_{-p}) \right)\mid a \in A \right] \cdot \pi & \geq - \frac{1}{2n^2}. \label{ineq:fexpa} \end{align} Since $\pi \geq \frac{1}{n^2}$, inequality (\ref{ineq:fexpa}) implies $\mathbb{E}_{a \sim x} \left[ \sum_p \left( u'_p(a) - u'_p(b_p, a_{-p}) \right)\mid a \in A \right] \geq - 1/2$. For all $a \in A$ and $p \in [n]$, we have $u'_p(b_p, a_{-p}) = \textrm{OPT}/n$. Therefore, we get the following bound on the conditional expectation: $\mathbb{E}_{a \sim x} \left[ \sum_p u'_p(a) \mid a \in A \right] \geq \textrm{OPT} - 1/2$. Also, for all action profiles $a \in A$ we have $\sum_p u_p'(a) = w'(a) \leq \textrm{OPT} \leq n$. 
This implies that in the conditional distribution $\Pr_x( a \mid a \in A)$ the probability mass on action profiles that satisfy $w'(a) \geq \textrm{OPT} -1$ is at least $\frac{1}{2n}$. Therefore, with high probability, we can obtain an action profile that satisfies $w'(a) \geq \textrm{OPT} -1$ by drawing polynomially many independent and identically distributed (i.i.d.) samples from the conditional distribution $\Pr_x( a \mid a \in A)$. Since $\pi = \sum_{a \in A} x(a) \geq \frac{1}{n^2}$, we can obtain polynomially many i.i.d.\ samples from the conditional distribution by drawing polynomially many i.i.d.\ samples from $x$. This overall gives us a polynomial-time randomized algorithm to find an action profile that satisfies $w(a) = w'(a) \geq \textrm{OPT} -1$. Hence, the stated claim follows. \end{proof} \noindent In this section we considered approximate CCE with a specific approximation factor, i.e., we established hardness for $\frac{1}{2n^3}$-CCE. This was for ease of presentation, and in fact hardness of a parameterized version of $\mathrm{ANT}$ \ can be obtained along the lines of the given proof. In particular, we can show that for any $\delta \in \left[ \frac{1}{\textrm{poly}(n)}, 1 \right]$ it is computationally hard to compute a $\frac{\delta}{2n^2}$-CCE with welfare greater than $\left(1 + \delta \right) w(x')$, where, again, $x'$ denotes the worst CCE. \section{Introduction} Equilibria are central solution concepts in game theory, and questions related to the complexity of equilibrium computation have formed a major thread of research in algorithmic game theory. Arguably the most important equilibrium concepts are the Nash equilibrium~\cite{nash1951non}, correlated equilibrium~\cite{aumann1974subjectivity}, and coarse correlated equilibrium~\cite{hannan1957approximation}. 
These solution concepts denote distributions over players' action profiles at which no player can benefit by unilateral deviation, and hence represent stable choices of distributions over player actions. Specifically, a Nash equilibrium is defined to be a product of independent distributions (one for each player); correlated and coarse correlated equilibria are general (joint) probability distributions (see Section~\ref{sect:notation} for formal definitions). While computation of Nash equilibria has in recent years been shown to be computationally hard, even in games with two players~\cite{chen2009settling}, the news for correlated equilibria (CE) and coarse correlated equilibria (CCE) has been more positive. Even in games with many players, there exist a number of natural dynamics that quickly converge to these solution concepts (see, e.g.,~\cite{littlestone1994weighted, foster1998asymptotic, hart2000simple, blum2007external}). In particular, these dynamics induce efficient computation of approximate\footnote{A probability distribution over the players' action profiles is said to be an $\varepsilon$-approximate equilibrium if no player can increase her utility, in expectation, by more than $\varepsilon$ via a unilateral deviation.} CE and CCE in multiplayer games; by contrast, computation of approximate Nash equilibria is computationally hard in multiplayer games~\cite{R}. In fact, {\em exact} CE and CCE are efficiently computable in many classes of multiplayer games~\cite{PR,JLB}. Another significant thread of research in algorithmic game theory has been the study of the {\em quality} of equilibria, often as measured by the social welfare of the equilibrium or its ratio to the social welfare of the socially optimal outcome (cf.\ the extensive literature on the price of anarchy (PoA)~\cite{AGTbook}). Given that we know it is possible to efficiently compute CE and CCE, it is natural to ask: {\em how good} are the equilibria we can efficiently compute? 
For example, do existing efficient dynamics find the best such equilibria, or at least ones that approximately optimize the social welfare? Since the gap between the worst and the best equilibria (CE or CCE), in terms of social welfare, can be large in natural games (see, e.g.,~\cite{lee2013improved, bilo2013price}), it is interesting to understand whether there exist efficient dynamics or algorithms that avoid---at least to some extent---the bad outcomes. More generally, one can pose the question of efficiently finding CE and CCE \emph{that optimize an objective} (such as the sum of players' utilities, i.e., the social welfare). In their notable work, Papadimitriou and Roughgarden~\cite{PR} show that determining a \emph{socially optimal} CE is \rm{NP}-hard in a number of succinct multiplayer games. This result intuitively follows from the fact that determining an action profile with maximum welfare---i.e., solving the problem of welfare optimization even without equilibrium constraints---is \rm{NP}-hard in general. The hardness result of~\cite{PR} leaves open the question of computing \emph{near-optimal} CE/CCE, i.e., whether there exist efficient algorithms that compute CE/CCE with welfare at least, say, $\alpha$ times the optimal, for a nontrivial approximation ratio $\alpha \leq 1$. This question forms the basis of the present work. \emph{Technical Aside (succinct games):} We note that in general multiplayer games the size of the normal-form representation, $N$, is exponentially large in the number of players; one can compute a CE/CCE that optimizes a linear objective by solving a linear program of size polynomial in $N$, and hence the computational complexity of equilibrium computation is not interesting for general games. 
However, most games of interest---such as graphical games, polymatrix games, congestion games, local effect games, network design games, anonymous games, and scheduling games---admit a succinct representation (wherein the above-mentioned linear program can be exponentially large in the size of the representation), and hence it is such succinctly representable games that we (and previous works) study.\footnote{Note that the optimization problem does not become simpler if, instead of a succinct game, one is given access to a game via a black box which, when given an action profile $a$ as a query, returns the utilities of all the players at $a$.} \paragraph{Results} In this paper we establish that, unless $\rm{P=NP}$, there does not exist any efficient algorithm that computes a CCE with welfare better than the \emph{worst possible} CCE, in succinct multiplayer games (Theorem~\ref{thm:op}). We also establish similar hardness results for computing equilibria under the egalitarian objective or Pareto-optimality. Analogous hardness results hold for CE. We note that a classical interpretation of a CE is in terms of a mediator who has access to the players' payoff functions and who draws outcomes from a correlated equilibrium's joint distribution over player actions and privately recommends the corresponding actions to each player. The equilibrium conditions ensure that no player can benefit in expectation by unilaterally deviating from the recommended actions. Therefore, the problem we study here is exactly the computational complexity of the problem that a mediator faces if she wishes to maximize social welfare. We also extend the hardness result to approximate CE and CCE (Theorem~\ref{thm:aop}). Therefore, while one can efficiently compute an approximate CE/CCE in succinct multiplayer games, one cannot provide any nontrivial welfare guarantees for the resulting equilibrium (unless $\rm{P=NP}$). 
In addition, we show that this hardness result also holds specifically for potential games (generally considered to be a very tractable class of games), and persists even in settings where the gap between the best and worst equilibrium is large. We note that in these results, the hardness is not simply borrowed from welfare maximization;\footnote{Welfare maximization refers to the optimization problem of finding an action profile (not necessarily an equilibrium) with maximum possible welfare.} even if the underlying game admits a nontrivial multiplicative approximation for welfare maximization, the problem of determining a CCE with welfare arbitrarily better than the worst CCE remains hard. Another relevant observation is that there always exists an optimal CE/CCE with support size polynomial in the number of players and the number of actions per player.\footnote{This follows from the fact that CE/CCE are defined by a polynomial number of linear constraints. That is, the set of CE/CCE form a polytope that is defined by a polynomial number of linear inequalities. An optimal CE/CCE is an extreme point of this polytope, and hence its support size is polynomially bounded.} Therefore, the fact that in multiplayer games there might exist CE/CCE with exponentially large support size does not, in and of itself, account for the complexity of this problem. We complement these hardness results by developing an algorithmic framework for computing an $\varepsilon$-approximate CE with welfare that is additively $\varepsilon$ close to the optimal. This framework establishes a sufficient condition under which the above-mentioned complexity barriers can be circumvented. In particular, we show that if in a given game we can efficiently obtain an \emph{additive} approximation for a \emph{modified-welfare maximization problem}, then we can efficiently compute an approximate CE with high welfare. 
The modified welfare under consideration can be thought of as a Lagrangian corresponding to the equilibrium constraints (see Definition~\ref{def:mod-wel}), and the modified-welfare maximization problem entails finding an action profile that maximizes this modified welfare. Note that even if welfare (specified by the given utilities) is nonnegative, modified welfare can be negative for certain action profiles. This notably differentiates welfare maximization and modified-welfare maximization, and provides an idea of the technical challenges that one faces when approximating the modified-welfare maximization problem. (Recall that typical multiplicative-approximation techniques cannot handle negative quantities.) Hence, in a given game, the problem of (nontrivially) approximating the modified-welfare maximization problem can be hard, even if the game admits a nontrivial multiplicative-approximation for welfare maximization. Further, we instantiate this algorithmic framework to compute high-welfare approximate CE in \emph{aggregative games}. These are games wherein the utility of each player is a function of her own action and an aggregate (a constant-dimensional summary vector) of all players' actions; see Section~\ref{sect:agg-game} for a formal definition. Aggregative games encompass settings like Cournot oligopolies, Bertrand competitions, weighted congestion games, and anonymous games~\cite{jensen2010aggregative, acemoglu2013aggregate, babichenko2013best, cummings2014privacy}. We develop an efficient additive-approximation algorithm for the modified-welfare maximization problem in aggregative games. Therefore, via the above-mentioned framework, we show how to efficiently compute a high-welfare approximate CE in aggregative games. 
\\ \paragraph{Related Work} Papadimitriou and Roughgarden~\cite{PR} showed that the problem of computing an exactly optimal CE is \rm{NP}-hard for many relevant classes of multiplayer games, including congestion games, graphical games, polymatrix games, local effect games, and scheduling games. Specific instances in which the hardness result of~\cite{PR} can be completely circumvented, i.e., settings where an \emph{exactly optimal} CE can be efficiently computed, were identified by Jiang and Leyton-Brown~\cite{JLB}. The results in~\cite{PR} and~\cite{JLB} leave open the question of efficiently computing a CE with \emph{near-optimal} welfare, i.e., the question of \emph{approximating} the optimization problem under consideration. The complexity of this approximation is the focus of our work. Our main result is negative. In order to prove a positive result for the specific case of computing near-optimal approximate CE in \emph{aggregative games}, we consider a \emph{modified-welfare maximization problem} (MWMP); see Section~\ref{sect:blackwell} for a formal definition. Jiang and Leyton-Brown~\cite{JLB} consider classes of games in which the MWMP can be solved optimally, and use the ellipsoid method to find an optimal CE. In our setting, exactly solving the MWMP is not computationally feasible (and hence the framework of~\cite{JLB} cannot be applied to aggregative games\footnote{Knapsack reduces to the problem of welfare maximization in aggregative games.}), but we show that an additive approximation of the MWMP suffices to find a near-optimal approximate CE. This entails developing a new algorithm that does not rely on the ellipsoid method. There is prior work~\cite{charikar2008online,kleinberg2009multiplicative,balcan2013circumventing} on dynamics that quickly converge to high-welfare CCE in isolated, specific classes of games, such as fair cost sharing games; our results show that it is unlikely that such results can be significantly generalized. 
Marden et al.~\cite{marden2012achieving} develop dynamics that {\em eventually} converge to Pareto-optimal CCE (see also~\cite{song2011optimal}); these works do not establish polynomial rate of convergence for the proposed dynamics. \section{Notation} \label{sect:notation} In this paper we consider games with $n$ players and $m$ actions per player. We use $A_p$ to denote the set of actions available to the $p$th player and $A$ to denote the set of action profiles, $A := \prod_p A_p$. We write $u_p : A \rightarrow [0,1]$ for the (normalized) utility of player $p$, and $w: A \rightarrow \mathbb{R}$ is the welfare of an action profile, $w(a) := \sum_{p=1}^n u_p(a)$.\footnote{When there are multiple games under consideration within a single proof, we annotate the $w$ to indicate to which game it pertains.} For an action profile $a \in A$, let $a_{-p}$ denote the profile of actions chosen by players other than $p$. With $A_{-p} :=\prod_{q\neq p} A_q$, we have $a_{-p} \in A_{-p}$. As is typical in the literature, we say that a game is succinct if it has an efficient representation. Formally, an $n$-player $m$-action game is said to be \emph{succinct} if the player utilities are completely specified via a polynomial-sized string from an input set $I$. Specifically, for a succinct game, there exists a polynomial (in $n$ and $m$) time algorithm $U$ that, given a representation $z \in I$ along with a player $p$ and action profile $a$, returns the utility $u_p(a) = U(z, p, a)$. The game is denoted by $\Gamma(z)$. Many important classes of multi-player games are succinct, e.g., symmetric games, anonymous games, local effect games, congestion games, polymatrix games, graphical games, and network design. This paper is focused on succinct games, since this lets us formally treat settings in which the input, i.e., the utilities in the game, can be efficiently represented. 
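To make the oracle $U$ concrete, here is a minimal sketch for a polymatrix game, one of the succinct classes listed above; the representation $z$ and the helper names are our own illustration, not notation from this paper. The point is that evaluating $u_p(a)$ takes time polynomial in $n$ and $m$, even though the game has $m^n$ action profiles.

```python
# Sketch of a succinct game: a polymatrix game, where z lists the pairwise
# payoff matrices and U(z, p, a) sums player p's payoffs against each
# opponent. Names here are illustrative, not from the paper.
def U(z, p, a):
    """Utility oracle: z[(p, q)][i][j] is p's payoff when p plays i and q plays j."""
    return sum(z[(p, q)][a[p]][a[q]] for q in range(len(a)) if q != p)

# A 3-player, 2-action example: each ordered pair plays a small coordination game.
n, m = 3, 2
pair_payoff = [[0.5, 0.0], [0.0, 0.5]]     # one payoff matrix per ordered pair
z = {(p, q): pair_payoff for p in range(n) for q in range(n) if p != q}

a = (0, 0, 1)                               # an action profile
utilities = [U(z, p, a) for p in range(n)]  # evaluated in time poly(n, m)
```

The representation $z$ has size $O(n^2 m^2)$, while the normal form would need $n \cdot m^n$ entries.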
Note that the hardness question becomes moot if we consider the normal form representation of an $n$-player $m$-action game as our ``input,'' since in this case the input itself is exponentially large in $n$ and $m$. Our hardness results imply the intractability of determining high-welfare CCE in games wherein the underlying utilities are specified through a black box. We denote the set of probability distributions over a set $B$ by $\Delta(B)$. Given a distribution $x$ over the action profiles $A$, i.e., $x \in \Delta(A)$, we use $u_p(x)$ for the expected utility of player $p$ under distribution $x$. Similarly, we write $w(x)$ to denote the expected welfare under $x$. \begin{definition}[Correlated Equilibrium] \label{def:ce} A probability distribution $x \in \Delta(A)$ is said to be a correlated equilibrium if for every player $p$ and all actions $i,j \in A_p$ we have \begin{align*} \sum_{a_{-p} \in A_{-p}} [u_p(j,a_{-p}) - u_p(i,a_{-p}) ] x(i,a_{-p}) \leq 0, \end{align*} where $(i,a_{-p})$ denotes an action profile in which player $p$ plays action $i$ and the other players play $a_{-p}$. \end{definition} \begin{definition}[Coarse Correlated Equilibrium] \label{def:cce} A probability distribution $x \in \Delta(A)$ is said to be a coarse correlated equilibrium if for every player $p$ and every action $j \in A_p$ we have \begin{align*} \sum_{a \in A} [u_p(j,a_{-p}) - u_p(a) ] x(a) \leq 0, \end{align*} where $(j,a_{-p})$ denotes an action profile in which player $p$ plays action $j$ and the other players play $a_{-p}$. \end{definition} Along these lines, the definition of an approximate correlated equilibrium is as follows: \begin{definition}[$\varepsilon$-Correlated Equilibrium] \label{def:eps-CE} A probability distribution $x \in \Delta(A)$ is said to be an $\varepsilon$-correlated equilibrium if for every player $p$ and all actions $i,j \in A_p$ we have \begin{align*} \sum_{a_{-p} \in A_{-p}} [u_p(j,a_{-p}) - u_p(i,a_{-p}) ] x(i,a_{-p}) \leq \varepsilon. 
\end{align*} \end{definition} \begin{comment} The following definition of approximate correlated equilibrium is based on internal-regret functions $f:A_p \rightarrow A_p$. Such functions are also called switching rules. \begin{definition}[Strong $\varepsilon$-Correlated Equilibrium] \label{def:approxCE} Write $R_f^p(a):=u_p(f(a_p), a_{-p}) - u_p(a)$ to denote the regret of player $p$ for not implementing switching rule $f$ at action profile $a$. A distribution $x \in \Delta(A)$ is a strong $\varepsilon$-correlated equilibrium ($\varepsilon$-CE) if $\mathbb{E}_{a\sim x}[R_f^p(a)] \leq \varepsilon$ for every player $p$ and every mapping $f : A_p \rightarrow A_p$. \end{definition} \end{comment} Finally, we define the $\varepsilon$-coarse correlated equilibrium. \begin{definition}[$\varepsilon$-Coarse Correlated Equilibrium] \label{def:approxCCE} A probability distribution $x \in \Delta(A)$ is said to be an $\varepsilon$-coarse correlated equilibrium if for every player $p$ and every action $i \in A_p$ we have \begin{align*} \sum_{a \in A} [u_p(i,a_{-p}) - u_p(a) ] x(a) \leq \varepsilon. \end{align*} \end{definition} \section*{Acknowledgements} The authors thank Nikhil Bansal for helpful discussions. This work was supported by NSF grants CNS-0846025, CCF-1101470, and CNS-1254169, along with a Microsoft research faculty fellowship, a Google faculty research award, and a Linde/SISL postdoctoral fellowship. Katrina Ligett gratefully acknowledges the support of the Charles Lee Powell Foundation. \bibliographystyle{plain}
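Since the equilibrium notions above are finite systems of linear inequalities in $x$, membership can be verified directly by enumeration when the game is tiny. A minimal sketch for the $\varepsilon$-CCE conditions (the function names and the example game are our own; brute-force enumeration is of course only sensible for very small games):

```python
from itertools import product

def is_eps_cce(utils, action_sets, x, eps):
    """Check the epsilon-CCE inequalities: for every player p and deviation j,
    sum_a [u_p(j, a_{-p}) - u_p(a)] x(a) <= eps.
    utils(p, a) returns player p's utility at profile a; x maps profiles to probs."""
    n = len(action_sets)
    for p in range(n):
        for j in action_sets[p]:
            regret = sum(prob * (utils(p, a[:p] + (j,) + a[p+1:]) - utils(p, a))
                         for a, prob in x.items())
            if regret > eps:
                return False
    return True

# Matching-pennies-style 2-player game: the uniform distribution is an exact CCE.
def utils(p, a):
    agree = 1.0 if a[0] == a[1] else 0.0
    return agree if p == 0 else 1.0 - agree

actions = [(0, 1), (0, 1)]
uniform = {a: 0.25 for a in product(*actions)}
assert is_eps_cce(utils, actions, uniform, eps=0.0)
```

A point mass on a single profile fails the check, since the mismatched player can deviate profitably.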
\section{Introduction} \label{S:Intro} The concept of stationarity is crucial in the statistical theory of time series analysis, especially for the development of asymptotic theory. However, the assumption of stationarity is often not realistic in applications. For example, a time series can display significant changes through time, in which case stationarity is a questionable assumption. One of the most important consequences is that attempts to develop asymptotic results are, generally speaking, groundless, since the future of the process does not necessarily carry any information about its present. In addition, there is no natural generalization of stationarity to non-stationarity, since non-stationary processes might exhibit trend and/or periodicity and other types of non-standard behaviour. \citet{priestley1965evolutionary} considered non-stationary processes whose characteristics change slowly over time and developed the theory of evolutionary spectra (see \citet{priestley1981spectral,priestley1988non}). However, such an approach makes it difficult to obtain asymptotic results, which are needed for developing estimation theory. In order to apply standard asymptotic theory for non-stationary processes, Dahlhaus, in a series of contributions, introduced an appropriate theoretical framework, based on the concept of local stationarity (see, for example, \citet{dahlhaus1996kullback,dahlhaus1997fitting,dahlhaus2000}). The definition of local stationarity is based on the existence of a time varying spectral representation (\citet{dahlhaus1996kullback}). \citet{dahlhaus2012locally} gives an excellent and detailed overview of the theory of locally stationary processes. A comparison between the methodologies developed by Dahlhaus and Priestley is discussed in \citet{dahlhaus1996asymptotic}. 
Some other works related to locally stationary time series include those by \citet{granger1964spectral}, \citet{tjostheim1976spectral}, \citet{martin1981line}, \citet{melard1989contributions}, \citet{neumann1997wavelet}, \citet{nason2000wavelet}, \citet{ombao2002slex}, \citet{sakiyama2004discriminant} and \citet{davis2006structural}, among others. The main goal of this contribution is to utilize the idea of local stationarity, in the sense of the above mentioned papers, for studying the spectral behaviour of time series, based on the system of Walsh functions. These functions led to the development of Walsh-Fourier (square wave) analysis, just like the sinusoidal functions led to Fourier (trigonometric) analysis. The motivation behind Walsh-Fourier analysis was the need to approximate stationary time series that display square waveforms with abrupt switches (e.g. in communications and engineering); see \citet{Stankovicetal(2005)}, for instance. We introduce the concept of local stationarity based on the orthogonal system of Walsh functions, to account for such phenomena that, in addition, exhibit non-stationary behaviour. We study important general classes of time series, similar in concept to the time varying ARMA (tvARMA) processes; see \citet{dahlhaus2012locally}. We anticipate that our theory and methods will be applicable to non-stationary data observed in diverse applications such as pattern recognition for binary images and linear system theory, among others (see \citet{Stankovicetal(2005)} for more). The Walsh functions were introduced by \citet{walsh1923closed}. They take only two values, $+1$ and $-1$, and have similar properties to the trigonometric functions (although they are not periodic). The introduction of Walsh functions has been followed by a series of papers, related to their mathematical properties and generalizations (\citet{fine1949walsh}), which provided the theoretical framework for various applications, see e.g. 
\citet{beauchamp1984walsh}, \citet{stoffer1991walsh} and \citet{abbasi2012fpga}, among others. \citet{stoffer1991walsh} gives an excellent account of the history of Walsh functions and a comparison between Walsh and Fourier analysis. The statistical analysis of stationary time series via Walsh functions has been based on real and dyadic time. Dyadic time is based on the concept of dyadic addition (see Subsection \ref{S:Preli:SS:DAD}). For time points $m,n$, the real time sum $m+n$ is replaced by the dyadic sum $m\oplus n.$ \citet{morettin1981walsh} reviewed work on Walsh spectral analysis in both time scales. Walsh-Fourier analysis of real time stationary processes has been studied by \citet{kohnI1980spectral,kohnII1980spectral}, \citet{morettin1983note} and \citet{stoffer1985central,stoffer1987walsh,stoffer1990multivariate}, among others. Dyadic time stationarity is defined by analogy with real time stationarity, as in Subsection \ref{S:Preli:SS:DS} (see also \citet{nagai1977dyadic}). Further references related to the Walsh-Fourier analysis of dyadic stationary processes are \citet{morettin1974walsh,morettin1978estimation,morettin1981walsh}, \citet{nagai1980finite}, \citet{nagai1987walsh} and \citet{taniguchi1989statistical}. In particular, \citet{morettin1974walsh,morettin1978estimation} studied the finite Walsh transform, considered the Walsh periodogram as an estimator of the Walsh spectrum and studied its theoretical properties. \citet{nagai1977dyadic} proved that a dyadic stationary process always has a unique spectral representation in terms of the system of Walsh functions and studied the dyadic linear process (see also \citet{morettin1974walsh}). \citet{nagai1980finite} also studied dyadic autoregressive and moving average processes and their relation. In this article, we introduce the concept of local dyadic stationarity and discuss the advantages and perspectives of this approach in the framework of Walsh functions. 
In Section $2$ of this article, we recall some definitions and review some fundamental results for dyadic stationary processes. In Section $3$, we introduce the concept of local dyadic stationarity and study the time varying dyadic moving average process. In Section $4$, we define the general class of time varying dyadic autoregressive moving average processes and show that they exhibit local dyadic stationarity. The article concludes with several remarks concerning further research on this topic. \section{Preliminaries} \label{S:Preli} \subsection{Dyadic addition} \label{S:Preli:SS:DAD} We recall the definition of dyadic addition and of a dyadic process following \citet{kohnI1980spectral}. Consider $m$ and $n$ to be non-negative integers that have the following dyadic expansions \begin{equation} \label{S:Preli:SS:DAD:E:dyadic expansions} \nonumber m=\sum_{k=0}^{f} m_k 2^k, \; n=\sum_{k=0}^{f} n_k 2^k, \quad \textrm{where} \; m_k, n_k \in \{0,1\}. \end{equation} Then, the dyadic sum $m \oplus n$ is defined as \begin{equation} \label{S:Preli:SS:DAD:E:dyadic addition} \nonumber m \oplus n= \sum_{k=0}^{f} |m_k - n_k| 2^k. \end{equation} Consider now $x$ and $y$ to be real numbers that belong to the interval $I\equiv[0,1).$ We write \begin{equation} \label{S:Preli:SS:DAD:E:inverse dyadic expansions} \nonumber x=\sum_{k=1}^{\infty} x_k 2^{-k}, \; y=\sum_{k=1}^{\infty} y_k 2^{-k}, \quad \textrm{where} \; x_k, y_k \in \{0,1\}. \end{equation} In general, each of the above representations is not unique. We follow the convention that if $x$, say, admits both a finite and an infinite representation, i.e. if there exists $k_0$ with $x_k=0$ for all $k>k_0$, then we choose this finite representation. With this convention, the dyadic sum $x \oplus y$ is defined as \begin{equation} \label{S:Preli:SS:DAD:E:dyadic sum} \nonumber x \oplus y= \sum_{k=1}^{\infty} |x_k - y_k| 2^{-k}. 
\end{equation} Recall that the $k$-th Rademacher function is $\phi_k(x)=(-1)^{x_{k+1}}, \, \forall x \in I, \, \forall \, k \geq 0.$ Then the system of Walsh functions, $\{W(n,x),\; n=0,1,2,\ldots,\; x\in I\},$ is defined as follows. If $n=0,$ set $W(0,x)=1, \; \forall x\in I.$ For $n>0$, let $n=2^{n_{1}}+2^{n_{2}}+\ldots+2^{n_{\nu}},$ where $0\leq n_1 < n_2 < \ldots < n_\nu.$ Then \begin{equation} \nonumber W(n,x) = \left \{\begin{array}{cc} 1, & n=0, \\ \phi_{n_{1}}(x)\phi_{n_{2}}(x)\cdots \phi_{n_{\nu}}(x), & n>0, \\ \end{array} \right. \quad \forall x \in I. \end{equation} We mention briefly some characteristic properties of the Walsh functions. \\ (i) The system of Walsh functions is orthonormal in $I$, that is \begin{equation} \label{S:Preli:SS:DAD:E:orthonormality of WF} \nonumber \int_0^1 W(n,x)W(m,x)dx=\left \{\begin{array}{cc} 1 & \textrm{for} \quad n=m, \\ 0 & \textrm{for} \quad n\neq m, \\ \end{array}\right. \end{equation} and constitutes a complete set. If $f(x), \, x \in I,$ is a square integrable function, then it can be expanded in a Walsh-Fourier series, i.e. $$ f(x)=\sum_{n=0}^{\infty}c_n W(n,x),$$ with $c_n= \int_0^1 f(x)W(n,x)dx.$ \\ (ii) $\forall n,m \in \mathbb{N}, x,y \in I, \; W(n,x)W(m,x)=W(n\oplus m,x)\;$ and $\;W(n,x)W(n,y)=W(n,x\oplus y).$ The above properties of Walsh functions motivate the study of stationary time series in terms of this basis. 
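In code, the dyadic sum of two non-negative integers is simply the bitwise XOR of their binary digits, and $W(n,x)$ is a product of Rademacher factors over the set bits of $n$. A minimal sketch (function names are our own) that also verifies property (ii) in the first argument:

```python
def dyadic_add(m, n):
    """Dyadic sum of non-negative integers: |m_k - n_k| digitwise is bitwise XOR."""
    return m ^ n

def walsh(n, x, bits=32):
    """Walsh function W(n, x) for x in [0, 1), built from the Rademacher
    functions phi_k(x) = (-1)^{x_{k+1}} over the set bits of n."""
    # binary expansion x = sum_{k>=1} x_k 2^{-k}; digits[k] holds x_{k+1}
    digits, y = [], x
    for _ in range(bits):
        y *= 2
        d = int(y)
        digits.append(d)
        y -= d
    val, k = 1, 0
    while n:
        if n & 1:                       # bit n_k of n is set
            val *= (-1) ** digits[k]    # multiply by phi_k(x)
        n >>= 1
        k += 1
    return val

# Property (ii): W(n, x) W(m, x) = W(n XOR m, x), since repeated Rademacher
# factors square to 1.
x = 0.3
assert walsh(6, x) * walsh(3, x) == walsh(dyadic_add(6, 3), x)
```

For example, $W(1,x)=\phi_0(x)$ is $+1$ on $[0,1/2)$ and $-1$ on $[1/2,1)$, which the sketch reproduces.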
It is well known that a second order stationary process $\{X_t, \, t\in\mathbb{N}\}$ is represented as \begin{equation} \label{S:Preli:SS:DAD:E:spectral representation in the Frequency Domain} \nonumber X_t=\int_{-\pi}^{\pi} e^{i\lambda t} dZ(\lambda), \end{equation} with $\{Z(\lambda), \; \lambda \in (-\infty,\infty) \}$ an orthogonal-increment process such that \begin{equation} \label{S:Preli:SS:DAD:E:Orthogonal-increment process} \nonumber E\big(dZ(\lambda) d \overline{Z(\mu)}\big)= \eta(\lambda-\mu) dF(\lambda) d\mu \end{equation} \noindent where $\eta(\cdot)$ is the Dirac function periodically extended to $\mathbb{R}$ with period $2 \pi$ and $F(\cdot)$ is the spectral distribution function (see \citet{brillinger(1974)}, for instance). The Walsh functions can be used instead to represent $X_t$ under the concept of dyadic stationarity. There are differences though between real and dyadic stationary processes; see \citet{morettin1981walsh} for further discussion. The concept of dyadic stationarity is explained briefly next. \subsection{Dyadic stationarity} \label{S:Preli:SS:DS} We call a stochastic process $\{X_t, \, t\in\mathbb{N}\}$ dyadic stationary if it has constant mean, finite second moment and its covariance function \begin{equation} \label{S:Preli:SS:DS:E:covariance function} \nonumber R(n,m)=\textrm{cov}(X_n,X_m)=E\big[(X_n-E[X_n])(X_m-E[X_m])\big],\quad n,m \in \mathbb{N}, \end{equation} is invariant under dyadic addition, i.e. it depends only on $n\oplus m.$ Hence, we write for notational convenience $R(\tau)=R(n,n\oplus \tau).$ In the following assume that $E[X_t]=0,\; E[X_t^2]=1,\; \forall t \in \mathbb{N}.$ We recall some important results about dyadic stationary processes. A dyadic stationary process $X_t$ has a dyadic spectral representation given by (\citet[p. 
193]{morettin1974walsh}) \begin{equation} \label{S:Preli:SS:DS:E:Nagai spectral representation} \nonumber X_t=\int_0^1 W(t,x)dZ_X(x),\quad t\in\mathbb{N}, \end{equation} where $\{Z_X(x), \; x\in I\}$ is a real random process with orthogonal increments, such that $$E[dZ_X(x_1) dZ_X(x_2)]=\eta(x_1 \oplus x_2) dG_X(x_1)dx_{2},$$ where $\eta(\cdot)$ is the Dirac function periodically extended to $\mathbb{R}$ with period 1. The function $G_X(\cdot)$, defined on $I$, is a unique distribution function, which is called the dyadic spectral distribution of the process $X_t.$ In addition \begin{equation} \label{S:Preli:SS:DS:E:Nagai covariance representation} \nonumber R(\tau)=\int_0^1 W(\tau,x)dG_X(x). \end{equation} If $G_X(\cdot)$ is absolutely continuous, then $dG_X(x)=g_X(x) dx,$ where $g_X(x)$ is called the dyadic spectral density of $X_t.$ \begin{Example} \rm \label{S:Preli:SS:DS:Ex:white noise} A simple example of a dyadic stationary process is a sequence $\{\varepsilon_t, \, t\in\mathbb{N}\}$ of independent random variables with $E(\varepsilon_t)=0 \; \textrm{and} \; E(\varepsilon_t^2)=\sigma^2 , \; \forall \;t\in\mathbb{N}$. It is straightforward to show that its covariance function is \begin{equation} \label{S:Preli:SS:DS:E:covariance of white noise} \nonumber E\left(\varepsilon_n \varepsilon_{n\oplus\tau} \right)= R(\tau) =\left\{ \begin{array}{ll} \sigma^2, & \textrm{if}\;\tau=0, \\ 0, & \textrm{if}\;\tau\neq0. \end{array} \right. \end{equation} \end{Example} \noindent Since the sequence $\varepsilon_t$ is dyadic stationary, it has a dyadic spectral representation of the form \begin{equation} \label{S:Preli:SS:DS:E:spectral representation of iid} \nonumber \varepsilon_t=\int_0^1 W(t,x) dZ_\varepsilon(x),\quad t\in\mathbb{N}, \end{equation} with $$E[(dZ_\varepsilon(x))^2] = dG_\varepsilon(x) = \sigma^2 dx,\quad x\in I.$$ This example illustrates the analogy of dyadic and real time stationary processes. 
Indeed, it is well known that a white noise real time process possesses a flat spectrum; the same is true under dyadic stationarity. A stochastic process $\{X_t, \, t\in\mathbb{N}\}$ is a linear dyadic process if it can be represented as (\citet{morettin1974walsh}) \begin{equation} \label{S:Preli:SS:DS:E:infinite DMA} X_t=\sum_{k=0}^{\infty}a_k \varepsilon_{t\oplus k}, \end{equation} where $\varepsilon_t$ is the sequence of i.i.d. variables, as in Example \ref{S:Preli:SS:DS:Ex:white noise}, and $\{a_k, \, k\in\mathbb{N}\}$ are real numbers, which satisfy $\sum_{k=0}^{\infty}a_k^2<\infty.$ This definition is similar in spirit to the definition of the general linear process (\citet[p. 415]{priestley1981spectral}). It can be shown that a linear dyadic process of the form \eqref{S:Preli:SS:DS:E:infinite DMA} is dyadic stationary, because its covariance depends on $n$ and $m$ only through $\tau=n\oplus m$, namely \begin{equation} \label{S:Preli:SS:DS:E:covariance of linear dyadic process} R(\tau)=\int_0^1 W(\tau,x) \left(\sigma \sum_{k=0}^{\infty}a_k W(k,x)\right)^2 dx. \end{equation} In addition, it has an absolutely continuous dyadic spectral distribution function and its dyadic spectral density function has the form \begin{equation} \label{S:Preli:SS:DS:E:spectral density function of infinite DMA} g(x)=\sigma^2 \left(\sum_{k=0}^{\infty}a_k W(k,x)\right)^2 = \sigma^2 A^2(x), \end{equation} where $A(x)=\sum_{k=0}^{\infty}a_k W(k,x).$ In this case, note that $G(x)= \sigma^{2} \int_{0}^{x} A^{2}(y) dy$, for $x \in I$. Again, we note the analogy between real time and dyadic stationarity; the above formula is identical to the formula obtained for the real time linear process model (\citet{priestley1981spectral}). Furthermore, if $a_q\neq0$ and $a_k=0,\; \forall \, k>q$ in \eqref{S:Preli:SS:DS:E:infinite DMA}, then $X_t$ is said to be a dyadic moving average process of order $q$, abbreviated as DMA$(q)$. In general, the process $X_t$ defined by \eqref{S:Preli:SS:DS:E:infinite DMA} is called a DMA$(\infty)$ process. 
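Expanding the square in \eqref{S:Preli:SS:DS:E:covariance of linear dyadic process} and using the orthonormality and multiplicative property of the Walsh system gives the closed form $R(\tau)=\sigma^2\sum_{k}a_k a_{k\oplus\tau}$ (with $a_k=0$ for $k>q$), which indeed depends on $n$ and $m$ only through $n\oplus m$. A quick Monte Carlo sketch (our own construction; $T$ is a power of two so that $t\oplus k<T$) confirms this for a DMA$(2)$ process:

```python
import random

def dma_covariance(a, sigma=1.0, T=2**15, max_tau=4, seed=7):
    """Simulate the DMA(q) process X_t = sum_k a_k eps_{t XOR k} and return,
    for each tau, the empirical dyadic covariance (1/T) sum_t X_t X_{t XOR tau}
    together with the closed form sigma^2 sum_k a_k a_{k XOR tau}."""
    rng = random.Random(seed)
    eps = [rng.gauss(0.0, sigma) for _ in range(T)]
    q = len(a) - 1
    X = [sum(a[k] * eps[t ^ k] for k in range(q + 1)) for t in range(T)]
    out = {}
    for tau in range(max_tau):
        empirical = sum(X[t] * X[t ^ tau] for t in range(T)) / T
        closed = sigma**2 * sum(a[k] * a[k ^ tau]
                                for k in range(q + 1) if (k ^ tau) <= q)
        out[tau] = (empirical, closed)
    return out

res = dma_covariance(a=[1.0, 0.5, 0.25])   # a DMA(2) process
```

The empirical and closed-form values agree up to Monte Carlo error of order $T^{-1/2}$.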
\section{Local Dyadic Stationarity} \label{S:LDS} Consider now \eqref{S:Preli:SS:DS:E:spectral density function of infinite DMA}, and suppose, for example, that the function $A(\cdot)$ depends upon time, i.e. it has the form $A_{t,T}(\cdot),$ where $T$ denotes the sample size. $X_t$ is now re-expressed as a triangular array $X_{t,T}.$ We rescale $A_{t,T}(\cdot)$ from the axis of the first $T$ non-negative integers ($t=1,2,\ldots,T$) to the unit interval $I$. The reason for this rescaling will be clear later on. The rescaled form of $A_{t,T}(\cdot)$ is denoted by $A\left( t/T,\cdot \right)$. We give a general definition regarding local dyadic stationarity for a process $X_{t,T}$, in the spirit of Dahlhaus (e.g.\ \citet{dahlhaus1996kullback,dahlhaus1997fitting}). \begin{Definition} \label{S:LDS:D:LDS} A sequence of stochastic processes $\{X_{t,T}, \;t=1,2,\ldots,T\}$ is called locally dyadic stationary with transfer function $A_{t,T}(\cdot)$ and trend function $\mu(\cdot),$ where $A_{t,T}(\cdot)$ and $\mu(\cdot)$ are deterministic functions, if there exists a representation \begin{equation} \label{S:LDS:E:Definition, Spectral representation} X_{t,T}=\mu \left(\frac{t}{T}\right)+\int_0^1 W(t,x) A_{t,T}(x) d U(x), \end{equation} where the following hold:\\ (i) $U(x)$ is a real-valued stochastic process on $I$ and \begin{equation} \label{S:LDS:E:cumulants} \nonumber \textrm{cum}\{dU(x_1),\ldots,dU(x_k)\} = \eta(x_1 \oplus x_2 \oplus\ldots \oplus x_k) g_k(x_1,\ldots,x_{k-1}) dx_1 \cdots dx_k, \end{equation} where $\textrm{cum}\{\ldots\}$ denotes the $k$-th order cumulant, $g_1 \equiv 0, \; g_2(x_1) \equiv 1, \; |g_k(x_1,\ldots,x_{k-1})|$ are bounded for all $k$ and $\eta(\cdot)$ denotes the Dirac delta function periodically extended to $\mathbb{R}$ with period 1. 
\\ (ii) There exists a constant $K$ and a function $A:[0,1]\times\mathbb{R}\rightarrow \mathbb{R}$ such that \begin{equation} \label{S:LDS:E:Definition, assumption (ii)} \underset{t,x}{\sup} \left|A_{t,T}(x)-A \left(\frac{t}{T},x\right)\right| \leq \frac{K}{T},\quad \forall \;T. \end{equation} The functions $A(u,x)$ and $\mu(u)$ are assumed to be continuous with respect to $u=t/T$. \end{Definition} The above definition is analogous to the definition given by \cite{dahlhaus1996kullback}. The first condition states that $U(x)$ has moments of order $k$; the functions $g_k(\cdot)$ are the $(k-1)$-th order polyspectra of $U(x)$, following \citet{brillinger1965introduction}. The second assumption requires that the transfer function $A_{t,T}(\cdot)$ is approximated locally by a function $A(t/T,\cdot),$ which is the transfer function of a dyadic stationary process. Note that the continuity of $A(u,x)$ and $\mu(u)$ in $u$ is required for the process $X_{t,T}$ to exhibit locally dyadic stationary behaviour. Furthermore, without loss of generality, we assume that $g_{2}(x) \equiv 1$ because the transfer function can always be rescaled such that the process $\{ U(x), x \in I\}$ is white noise. Indeed, the boundedness assumption on $g_{2}(\cdot)$ implies \eqref{S:LDS:E:Definition, assumption (ii)} again for the rescaled transfer function. \begin{Example} \rm Suppose $Y_t$ is a dyadic stationary process with dyadic spectral representation $$Y_t = \int_0^1 W(t,x) A(x) d Z(x),$$ where $E|Z^k(x)| < \infty, \, \forall \, k >0.$ Define $X_{t,T}$ by $$X_{t,T} = \mu\left(\frac{t}{T}\right) + \sigma\left(\frac{t}{T}\right)Y_t,$$ where $\mu(\cdot),\sigma(\cdot)$ are continuous functions from $I$ to $\mathbb{R}$. Then $$X_{t,T} = \mu\left(\frac{t}{T}\right) + \int_0^1 W(t,x) A_{t,T}(x) d Z(x),$$ where $A_{t,T}(x)=A(t/T,x)=\sigma(t/T)A(x)$ and the assumptions $(i)$ and $(ii)$ are satisfied. Hence $X_{t,T}$ is a locally dyadic stationary process. 
\end{Example} Consider now the process $\{X_{t,T}, \, t=1,2,\ldots,T\}$ given by \begin{equation} \label{S:LDS:E:rescaled infinite tvDMA} X_{t,T}=\sum_{k=0}^{\infty}a_{k,t,T} \varepsilon_{t\oplus k}, \end{equation} where $\varepsilon_t$ is an i.i.d. sequence and $\{a_{k,t,T}, \, k\in\mathbb{N}\}$ is a time-dependent sequence of real numbers such that $\sum_{k=0}^{\infty}a_{k,t,T}^2<\infty$ for all $t$. We call this process a time varying dyadic moving average process of infinite order (tvDMA$(\infty)$). If we set in \eqref{S:LDS:E:rescaled infinite tvDMA} $a_{q,t,T}\neq0$ and $a_{k,t,T}=0,\, \forall \, k>q, \, \forall t,$ then we call $X_{t,T}$ a time varying dyadic moving average process of order $q$ (tvDMA$(q)$). We now rescale the parameter curves $a_{k,t,T}$ to the unit interval $I$, assuming that there exist functions $a_{k}: I\rightarrow \mathbb{R}$ that satisfy $a_{k,t,T} \approx a_{k}(t/T).$ We further assume that the $a_{k}(\cdot)$ satisfy some regularity conditions (see Remark \ref{S:LDS:R:difference of the a functions}). The reasons for the rescaling are described in detail, e.g. in \citet[Sec.2]{dahlhaus2012locally}. Briefly, suppose that we choose $a_{k,t,T}$ to be polynomials in $t$. Then, as $t\rightarrow\infty$, $a_{k,t,T}\rightarrow\infty$ as well, which violates the condition $\sum_{k=0}^{\infty}a_{k,t,T}^2<\infty.$ In addition, rescaling enables us to impose smoothness conditions through the continuity of the functions $a_{k}(\cdot),$ ensuring that the process exhibits locally dyadic stationary behaviour. Indeed, the number of observations within the neighbourhood of a fixed point $u_0 \in I$ increases as $T\rightarrow\infty$, enabling us to develop and apply, locally for $X_{t,T}$, asymptotic results for dyadic stationary processes. 
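The effect of smooth time variation can be seen in a small simulation. The sketch below (our own construction, not from the paper) simulates a tvDMA$(1)$ process with $a_{0}\equiv 1$, $\sigma=1$ and the illustrative coefficient curve $a_1(u)=u$, and estimates the local dyadic covariance at lag $\tau=1$ on dyadic blocks around two rescaled times $u_0$; by Walsh orthonormality, locally $R(u_0,1)=a_0 a_1(u_0)+a_1(u_0)a_0=2u_0$.

```python
import random

def tv_dma_local_cov(T=2**16, seed=3):
    """Simulate the tvDMA(1) process X_{t,T} = eps_t + a1(t/T) eps_{t XOR 1},
    with the illustrative coefficient curve a1(u) = u, and estimate the local
    dyadic covariance at lag tau = 1 on a block of times around each u0."""
    rng = random.Random(seed)
    eps = [rng.gauss(0.0, 1.0) for _ in range(T)]

    def a1(u):
        return u

    X = [eps[t] + a1(t / T) * eps[t ^ 1] for t in range(T)]
    out = {}
    for u0 in (0.25, 0.75):
        w = 2**12                               # block length around u0
        t0 = (int(u0 * T) - w // 2) // 2 * 2    # even start: t XOR 1 stays inside
        empirical = sum(X[t] * X[t ^ 1] for t in range(t0, t0 + w)) / w
        out[u0] = (empirical, 2 * a1(u0))       # locally, R(u0, 1) = 2 a1(u0)
    return out

res = tv_dma_local_cov()
```

The estimated local covariance near $u_0=0.25$ is close to $0.5$ and near $u_0=0.75$ close to $1.5$, illustrating that the process is only locally, not globally, dyadic stationary.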
Suppose that the process $X_{t,T}$ defined by \eqref{S:LDS:E:rescaled infinite tvDMA} is written as \begin{equation} \label{S:LDS:E:rescaled infinite tvDMA_2nd} \nonumber X_{t,T}=\sum_{k=0}^{\infty}a_k\left(\frac{t}{T}\right) \varepsilon_{t\oplus k}. \end{equation} We assume that $a_k(u)=a_k(0)$ for $u<0$ and $a_k(u)=a_k(1)$ for $u>1$ and that the functions $a_{k}(\cdot)$ satisfy some standard smoothness conditions; see \citet{dahlhaus1997fitting}. Consider now a fixed point $u_0=t_0/T$ and its neighbourhood $[u_0\pm\epsilon].$ If the length of this segment is sufficiently small, the process $X_{t,T}$ can be approximated by the process $\tilde{X}_t(u_0),$ which is defined as \begin{equation} \label{S:LDS:E:DS infinite tvDMA} \nonumber \tilde{X}_t(u_0)=\sum_{k=0}^{\infty}a_k(u_0) \varepsilon_{t\oplus k}, \end{equation} where the $a_k(u_0)$ are constants, with $u_0$ indicating their dependence on the fixed point $u_0$ (see also \citet{dahlhaus2012locally}). $\tilde{X}_t(u_0)$ is dyadic stationary. 
Indeed, we can write \begin{eqnarray} \label{S:LDS:E:spectral representation of infinite tvDMA at a fixed point} \tilde{X}_t(u_0)&=& \int_0^1 W(t,x)\left( \sum_{k=0}^{\infty}a_k(u_0) W(k,x)\right) d Z_\varepsilon(x)=\int_0^1 W(t,x) A(u_0,x) d Z_\varepsilon(x), \end{eqnarray} where $A(u_0,x)=\sum_{k=0}^{\infty}a_k(u_0) W(k,x).$ From equations \eqref{S:Preli:SS:DS:E:covariance of linear dyadic process} and \eqref{S:Preli:SS:DS:E:spectral density function of infinite DMA}, $\tilde{X}_t(u_0)$ has covariance function \begin{equation} \nonumber \label{S:LDS:E:local covariance of infinite tvDMA} R(u_0, \tau)=\int_0^1 W(\tau,x) \left(\sigma \sum_{k=0}^{\infty}a_k(u_0) W(k,x)\right)^2 dx, \end{equation} and a unique dyadic spectral density function given by \begin{equation} \label{S:LDS:E:local spectral density function of rescaled infinite tvDMA} \nonumber g\left(u_0,x\right)= \left(\sigma \sum_{k=0}^{\infty}a_k \left(u_0\right) W(k,x)\right)^2=\sigma^2 A^2\left(u_0,x\right), \end{equation} where $u_0$ indicates the dependence on the fixed point. We can show that for $\{ u=t/T:|t/T-u_0|\leq \epsilon \},$ it holds that $|X_{t,T}-\tilde{X}_t(u_0)|= O_P(1/T)$; see Corollary \ref{S:LDS:C:LDS for tvDMA infty}. Hence $X_{t,T}$ has locally the same covariance and dyadic spectral density function as $\tilde{X}_t(u_0)$ and thus exhibits locally dyadic stationary behaviour. Note that the tvDMA$(\infty)$ process $X_{t,T}$ in \eqref{S:LDS:E:rescaled infinite tvDMA} is locally dyadic stationary according to Definition \ref{S:LDS:D:LDS}, since it has a time varying spectral representation as in \eqref{S:LDS:E:Definition, Spectral representation}.
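The Walsh functions $W(n,x)$ appearing in these representations are easy to evaluate pointwise, and the arguments in this section repeatedly use their dyadic multiplicativity $W(m,x)W(n,x)=W(m\oplus n,x)$. The sketch below assumes the Paley ordering of the Walsh system (the paper does not fix an enumeration), under which this identity holds for every $x$:

```python
def walsh(n, x):
    """Paley-ordered Walsh function W(n, x) in {-1, +1}, for x in [0, 1).

    W(n, x) is the product, over the set bits k of n, of the Rademacher
    function r_k(x) = (-1)^{(k+1)-th binary digit of x}.
    """
    sign, k = 1, 0
    while n >> k:
        if (n >> k) & 1 and int(x * 2 ** (k + 1)) & 1:
            sign = -sign
        k += 1
    return sign
```

In particular `walsh(m, x) * walsh(n, x) == walsh(m ^ n, x)` for all indices, which is the property used in the spectral manipulations below.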
Indeed, we have \begin{eqnarray} \label{S:LDS:E:spectral representation of infinite tvDMA} \nonumber X_{t,T}&=&\sum_{k=0}^{\infty} \left(a_{k,t,T} \int_0^1 W(t\oplus k,x)d Z_\varepsilon(x) \right) =\int_0^1 W(t,x) A_{t,T}(x) d Z_\varepsilon(x) , \end{eqnarray} where $A_{t,T}(x)=\sum_{k=0}^{\infty}a_{k,t,T}W(k,x)$ is the time varying transfer function. We show in Theorem \ref{S:LDS:T:approximation by DS process} that, in general, a locally dyadic stationary process is approximated by a dyadic stationary process within a given interval. \begin{theorem} \label{S:LDS:T:approximation by DS process} Suppose that $\{X_{t,T}, \;t=1,2,\ldots,T\}$ is a sequence of stochastic processes which satisfy a representation of the form \eqref{S:LDS:E:Definition, Spectral representation} where $A_{t,T}(x)$ is the time varying transfer function (set $\mu \left(t/T\right)=0$). Suppose that $\{\tilde{X}_{t}(u_0), \;t=1,2,\ldots,T\}$ is a dyadic stationary process with \begin{equation} \label{S:LDS:E:spectral representation of infinite tvDMA at a fixed point_Theorem} \tilde{X}_{t}(u_0)=\int_0^1 W(t,x) A(u_0,x) d U(x), \end{equation} where $A(u_0,x)$ depends on the fixed point $u_0 \in I$. Then within an interval $(u_0\pm \epsilon)$ and under the assumptions of Definition \ref{S:LDS:D:LDS} it holds that \begin{equation} \label{S:LDS:E:approximation by DS process} \nonumber |X_{t,T}-\tilde{X}_t(u_0)|= O_P(1/T). 
\end{equation} \end{theorem} \begin{proof} From equations \eqref{S:LDS:E:Definition, Spectral representation} and \eqref{S:LDS:E:spectral representation of infinite tvDMA at a fixed point_Theorem} we have that \begin{eqnarray} \label{S:LDS:E:approximation by DS process 2} \nonumber |X_{t,T}-\tilde{X}_t(u_0)| &=& \left| \int_0^1 W(t,x) A_{t,T}(x) d U(x) - \int_0^1 W(t,x) A(u_0,x) d U(x) \right| \\ \nonumber &\leq& \int_0^1 \left| W(t,x) \right| \cdot \left| A_{t,T}(x)- A(u_0,x) \right| d U(x) \\ &=& \int_0^1 \left| A_{t,T}(x)- A(u_0,x) \right| d U(x), \end{eqnarray} since $W(t,x) \in \{-1,1\}.$ In addition \begin{eqnarray} \label{S:LDS:E:transfer functions inequality 1} \nonumber \left| A_{t,T}(x)- A(u_0,x) \right| &\leq& \left| A_{t,T}(x)- A\left( \frac{t}{T},x \right) \right| + \left| A\left( \frac{t}{T},x \right) - A(u_0,x) \right| \\ &\leq& \frac{K}{T} + \left| A\left( u,x \right) - A(u_0,x) \right|, \end{eqnarray} from \eqref{S:LDS:E:Definition, assumption (ii)} in assumption (ii) of Definition \ref{S:LDS:D:LDS}. However, the same assumption states that $A(u,x)$ is continuous. Therefore, since $\{ u=t/T:|t/T-u_0|\leq \epsilon \}$ and for any $\epsilon^\prime>0$ we can choose $\epsilon>0$ to be such that $|A\left( u,x \right) - A(u_0,x)|<\epsilon^\prime,$ \eqref{S:LDS:E:transfer functions inequality 1} becomes \begin{equation} \label{S:LDS:E:transfer functions inequality 2} \left| A_{t,T}(x)- A(u_0,x) \right| \leq \frac{K^\ast}{T}, \end{equation} for some positive constant $K^\ast.$ Finally, from \eqref{S:LDS:E:approximation by DS process 2} and \eqref{S:LDS:E:transfer functions inequality 2}, we obtain that \begin{equation} \label{S:LDS:E:approximation by DS process with the limit} \nonumber E|X_{t,T}-\tilde{X}_t(u_0)| = O(1/T), \end{equation} and hence we have the desired result. 
\end{proof} \begin{figure} \begin{minipage}[h]{1\linewidth} \centering \begin{tabular}{@{}cc@{}} \includegraphics[width=.5\textwidth]{Example1Fnew3} & \includegraphics[width=.5\textwidth]{Example1Wn3} \end{tabular} \caption{The time varying spectral density function for the tvMA($1$) process (left) and the time varying dyadic spectral density function for the tvDMA($1$) process (right).} \label{S:LDS:F:tvMA(1) Vs tvDMA(1)} \end{minipage} \end{figure} \begin{corollary} \label{S:LDS:C:LDS for tvDMA infty} Theorem \ref{S:LDS:T:approximation by DS process} holds for the tvDMA$(\infty)$ process $X_{t,T},$ defined by \eqref{S:LDS:E:rescaled infinite tvDMA}, since this process satisfies the assumptions of Theorem \ref{S:LDS:T:approximation by DS process}, for $\tilde{X}_t(u_0)$ given by \eqref{S:LDS:E:spectral representation of infinite tvDMA at a fixed point}. \end{corollary} \begin{remark} Theorem \ref{S:LDS:T:approximation by DS process} implies that a locally dyadic stationary process can be approximated by dyadic stationary processes within different (possibly overlapping) intervals in $I$. Thus its behaviour can be described via the behaviour of those dyadic stationary processes. \end{remark} \begin{remark} \label{S:LDS:R:difference of the a functions} Equation \eqref{S:LDS:E:Definition, assumption (ii)} imposes a similar condition on $\underset{t,x}{\sup}\left|a_{k,t,T}-a_k \left(t/T\right)\right|$, and the above discussion still holds. \end{remark} \begin{Example} \rm Consider, for example, the infinite time varying MA (tvMA$(\infty)$) representation $X_{t,T}=\sum_{k=0}^{\infty}a_{k,t,T} \varepsilon_{t- k}$ in real time.
Then its time varying spectral density function is given by $$f\left(u,\lambda\right)=(\sigma^2/2\pi) \left|\sum_{k=0}^{\infty}a_k(u)\exp(-i\lambda k)\right|^2.$$ Respectively, the time varying dyadic spectral density function of the tvDMA$(\infty)$ process is given by $$g\left(u,x\right)= \left(\sigma \sum_{k=0}^{\infty}a_k(u) W(k,x)\right)^2.$$ We compare the behaviour of the functions $g(u,x)$ and $f(u,\lambda)$ for the same order of the respective processes and for the same specification of the time varying coefficients $a_k(u)$ (we set $\sigma^2=1$). Figure \ref{S:LDS:F:tvMA(1) Vs tvDMA(1)} shows the spectral density functions of a tvMA$(1)$ and a tvDMA$(1)$ process. We set $a_0(u)=-1.8\cos(1.5-\cos(4\pi u))$ and $a_1(u)=0.81.$ Figure \ref{S:LDS:F:tvMA(2) Vs tvDMA(2)} shows the spectral density functions of a tvMA$(2)$ and a tvDMA$(2)$ process. In this case we set $a_0(u)=1.2\cos(2\pi u), \; a_1(u)=2\cos(1.5-\cos(8\pi u))$ and $a_2(u)=u.$ Both figures reveal the differences between real and dyadic stationarity. The square waveform of the Walsh functions allows more oscillatory behaviour of the dyadic spectral density function. \end{Example} \begin{figure} \begin{minipage}[h]{1\linewidth} \centering \begin{tabular}{@{}cc@{}} \includegraphics[width=.5\textwidth]{Example4Fnew3} & \includegraphics[width=.5\textwidth]{Example4Wn3} \end{tabular} \caption{The time varying spectral density function for the tvMA($2$) process (left) and the time varying dyadic spectral density function for the tvDMA($2$) process (right).} \label{S:LDS:F:tvMA(2) Vs tvDMA(2)} \end{minipage} \end{figure} \section{tvDARMA processes} \label{S:tvDARMA} It is well known that autoregressive, moving average, and ARMA models can be regarded as special cases of the general linear process. \citet{nagai1980finite} shows that a dyadic autoregressive process of finite order can always be inverted into a dyadic moving average process of finite order, and vice versa.
We obtain similar results, but within a time varying framework. We define the time varying dyadic autoregressive moving average (tvDARMA) process as follows. \begin{Definition} A stochastic process $\{X_t,t=1,2,\ldots,T\}$ is called tvDARMA$(p,r)$ if it is locally dyadic stationary and can be represented by \begin{equation} \label{S:tvDARMA:E:tvDARMA rescaled} \sum_{k=0}^p b_{k,t,T}X_{t\oplus k,T} = \sum_{n=0}^r a_{n,t,T} \, \varepsilon_{t\oplus n}, \end{equation} where $p,r \in \mathbb{Z}^{+}$ with $p=2^m-1,\;r=2^f-1,$ the sequences of parameters $\{b_{k,t,T}\}_{k=0,1,\ldots,p},$ $\{a_{n,t,T}\}_{n=0,1,\ldots,r}$ are real numbers with at least two non-zero parameters $b_{k_0,t,T},a_{n_0,t,T} \,$ for $2^{m-1}\leq k_0 \leq 2^m-1$ and $2^{f-1}\leq n_0 \leq 2^f-1.$ In addition, $\{\varepsilon_t,t=1,2,\ldots,T\}$ is an i.i.d. sequence with $E(\varepsilon_t)=0$ and $E(\varepsilon_t^2)=\sigma^2$. \end{Definition} Assume that $b_{0,t,T}=a_{0,t,T}=1.$ If we set $p=0$ in \eqref{S:tvDARMA:E:tvDARMA rescaled}, then the tvDMA process arises as in \eqref{S:LDS:E:rescaled infinite tvDMA}, but with finite order $r$. If instead we set $r=0$ in \eqref{S:tvDARMA:E:tvDARMA rescaled}, then it becomes \begin{equation} \label{S:tvDARMA:E:tvDARp rescaled} \sum_{k=0}^p b_{k,t,T} X_{t\oplus k, T}=\varepsilon_t. \end{equation} We call $X_{t,T}$ in \eqref{S:tvDARMA:E:tvDARp rescaled} a time varying dyadic autoregressive process of order $p$ (tvDAR$(p)$). We show that a tvDAR process and, more generally, a tvDARMA process can be approximated by a tvDMA process. Following \cite{nagai1987walsh}, who study multivariate dyadic stationary processes, set \begin{equation} \label{S:tvDARMA:E:Phi} \phi_{t,T}(x)=\sum_{j=0}^p b_{j,t,T} W(j,x),\;x\in I, \end{equation} where $p=2^m-1, m\in \mathbb{N}$ and $\{b_{j,t,T}\}_{j=0,1,\ldots,p}$ are real numbers.
Denote by $\Sigma_{t,T}$ the $(p+1)\times (p+1)$ matrix, which is given by \begin{equation} \label{S:tvDARMA:M:Sigma matrix} \nonumber \Sigma_{t,T}=\left( \begin{array}{cccc} b_{0\oplus 0,t,T} & b_{0\oplus 1,t,T} & \cdots & b_{0\oplus p,t,T} \\ b_{1\oplus 0,t,T} & b_{1\oplus 1,t,T} & \cdots & b_{1\oplus p,t,T} \\ \vdots & \vdots & \ddots & \vdots \\ b_{p\oplus 0,t,T} & b_{p\oplus 1,t,T} & \cdots & b_{p\oplus p,t,T} \\ \end{array} \right). \end{equation} \begin{lemma} \label{S:tvDARMA:L:Determinant} The following equation holds \begin{equation} \label{S:tvDARMA:E:Determinant} \nonumber \det[\Sigma_{t,T}]=\prod_{j=0}^p \phi_{t,T}(x_j), \end{equation} where $x_j = j/(p+1), \; j=0,1,\ldots,p.$ Therefore the function $\phi_{t,T}(x)\neq 0$ if and only if $\det[\Sigma_{t,T}]\neq 0.$ \end{lemma} \begin{lemma} \label{S:tvDARMA:L:Invertible} Assume that $\phi_{t,T}(x) \neq 0$ in \eqref{S:tvDARMA:E:Phi}. Then there exists a function $\eta_{t,T}(x)$, which is defined by $$\eta_{t,T}(x)=\sum_{m=0}^p d_{m,t,T} W(m,x),\quad \{d_{m,t,T}\}_{m=0,1,\ldots,p} \in \mathbb{R},\; x\in I,$$ and satisfies $\phi_{t,T}(x)\eta_{t,T}(x)=1.$ The coefficients $d_{m,t,T}$ are uniquely determined by $$\Sigma_{t,T}\left( \begin{array}{cccc} d_{0,t,T} & d_{1,t,T} & \cdots & d_{p,t,T} \\ \end{array} \right)^\prime = \left( \begin{array}{cccc} 1 & 0 & \cdots & 0 \\ \end{array} \right)^\prime.$$ \end{lemma} Define $S_{t,T}$ to be the $(r+1)\times (r+1)$ matrix \begin{equation} \label{S:tvDARMA:M:S matrix} \nonumber S_{t,T}=\left( \begin{array}{cccc} a_{0\oplus 0,t,T} & a_{0\oplus 1,t,T} & \cdots & a_{0\oplus r,t,T} \\ a_{1\oplus 0,t,T} & a_{1\oplus 1,t,T} & \cdots & a_{1\oplus r,t,T} \\ \vdots & \vdots & \ddots & \vdots \\ a_{r\oplus 0,t,T} & a_{r\oplus 1,t,T} & \cdots & a_{r\oplus r,t,T} \\ \end{array} \right). \end{equation} The following theorem states that a tvDARMA process can be approximated locally by a tvDMA and a tvDAR process. 
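The determinant identity of Lemma \ref{S:tvDARMA:L:Determinant} is easy to check numerically on a small example. In the sketch below the coefficients $b_j$ are purely illustrative (with $m=2$, so $p=3$), and the Walsh functions are evaluated in the Paley ordering, an assumption on our part:

```python
import numpy as np

def walsh(n, x):
    """Paley-ordered Walsh function W(n, x) in {-1, +1}."""
    sign, k = 1, 0
    while n >> k:
        if (n >> k) & 1 and int(x * 2 ** (k + 1)) & 1:
            sign = -sign
        k += 1
    return sign

m = 2
p = 2**m - 1
b = [1.0, -0.4, 0.25, 0.1]   # illustrative coefficients b_{j,t,T}
# Sigma has entries b_{i XOR j}
Sigma = np.array([[b[i ^ j] for j in range(p + 1)] for i in range(p + 1)])
xs = [j / (p + 1) for j in range(p + 1)]
phi = [sum(b[j] * walsh(j, x) for j in range(p + 1)) for x in xs]
det_Sigma = np.linalg.det(Sigma)
prod_phi = float(np.prod(phi))   # det(Sigma) should equal prod_j phi(x_j)
```

The two quantities agree to floating-point precision, as Lemma \ref{S:tvDARMA:L:Determinant} asserts.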
\begin{theorem} \label{S:tvDARMA:T:tvDARMA representation} Suppose that $\{X_{t,T},t=1,2,\ldots,T\}$ is a tvDARMA$(p,r)$ as in \eqref{S:tvDARMA:E:tvDARMA rescaled}. Set $\mu=\max(p,r)$. Then the following hold: \\ (i) If $\det[\Sigma_{t,T}]\neq 0,$ then $X_{t,T}$ can be approximated locally by a tvDMA$(\mu)$ process. \\ (ii) If $\det[S_{t,T}]\neq 0,$ then $X_{t,T}$ can be approximated locally by a tvDAR$(\mu)$ process. \end{theorem} \begin{corollary} \label{S:tvDARMA:C:tvDAR represented as tv DMA} Suppose that $\{X_{t,T},t=1,2,\ldots,T\}$ is a tvDAR$(p)$ as in \eqref{S:tvDARMA:E:tvDARp rescaled}. If $\det[\Sigma_{t,T}]\neq 0,$ $X_{t,T}$ can be approximated locally by a tvDMA$(p)$ process. \end{corollary} Proofs of these results are given in the appendix. \section{Conclusions} We anticipate that the above results will be useful for future applied work. In this direction, the concepts of Walsh spectrum and Walsh transform will be studied. The Walsh spectrum for a real-valued dyadic stationary process $X_t$ is defined by \begin{equation} \label{S:LDS:SS:WT:E:defintion of WS} f(x)=\sum_{\tau=0}^{\infty} R(\tau)W(\tau,x),\quad 0\leq x<\infty, \end{equation} where the covariance function $R(\cdot)$ satisfies $\sum_{\tau=0}^{\infty} |R(\tau)|<\infty$ (see e.g. \citet{morettin1974walsh,morettin1978estimation,morettin1981walsh}). Inverting \eqref{S:LDS:SS:WT:E:defintion of WS}, the covariance is given by $$R(\tau)=\int_0^1 W(\tau,x)f(x)dx.$$ The finite Walsh transform is given by \begin{equation} \label{S:LDS:SS:WT:E:defintion of WT} \nonumber d^{\:(N)}(x)=\sum_{n=0}^{N-1} X_n W(n,x), \quad x\in I. \end{equation} To estimate the Walsh spectrum, \citet{morettin1981walsh} defined the Walsh periodogram by $$I^{(N)}(x)=N^{-1} [d^{\:(N)}(x)]^2,$$ and showed that $I^{(N)}(x)$ is an asymptotically unbiased, but inconsistent, estimator of $f(x).$ He also considered the smoothed Walsh periodogram and other classes of estimates.
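As a concrete illustration, the finite Walsh transform and the Walsh periodogram can be evaluated by brute force in $O(N^2)$ operations; the Paley ordering of the Walsh functions is our assumption here:

```python
def walsh(n, x):
    """Paley-ordered Walsh function W(n, x) in {-1, +1}."""
    sign, k = 1, 0
    while n >> k:
        if (n >> k) & 1 and int(x * 2 ** (k + 1)) & 1:
            sign = -sign
        k += 1
    return sign

def walsh_periodogram(X, x):
    """Morettin's Walsh periodogram I^(N)(x) = N^{-1} [d^(N)(x)]^2,
    where d^(N)(x) = sum_{n=0}^{N-1} X_n W(n, x)."""
    N = len(X)
    d = sum(X[n] * walsh(n, x) for n in range(N))
    return d * d / N
```

For a constant series all the mass sits at $x=0$: the periodogram equals $N$ there and vanishes at points where the Walsh sums cancel. When $N$ is a power of two and $x$ runs over the dyadic grid $j/N$, the same quantities can be obtained faster with a fast Walsh-Hadamard transform.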
Estimating the Walsh spectrum requires dyadic stationarity. Therefore, in the case of local dyadic stationarity, it is reasonable to divide the rescaled interval $I$ into subintervals and to estimate the Walsh spectrum within each subinterval, where local dyadic stationarity holds. The number of observations within each subinterval $\{ u=t/T:|t/T-u_0|\leq \epsilon \}$ increases as $T$ tends to infinity, and the above asymptotic results still hold. A similar method is applied by \citet{dahlhaus1998segment} for real-time stationary processes. \citet{kohnI1980spectral,kohnII1980spectral} studied the system of Walsh functions for real time stationary processes. He defined the $j$th logical autocovariance, $\tau(j)$, and the corresponding Walsh-Fourier spectral density function $F(x)$. This notation replaces the previous notation used for $R(\cdot)$ and $f(\cdot)$ above. He considered the finite Walsh-Fourier transform and studied its asymptotic properties. A class of estimators for $F(x)$ was obtained, the average Walsh periodogram being a member of this class. The concept of local stationarity can also be applied in the real time setting, and we conjecture that similar results can be obtained in that case as well. \section*{Acknowledgements} We cordially thank Prof. I. Nikiforov and four anonymous reviewers for several constructive comments that considerably improved an earlier version of the manuscript. \newpage \section*{Appendix} \label{TM:S:Appendix} \renewcommand{\theequation}{A-\arabic{equation}} \setcounter{equation}{0} \renewcommand{\thelemma}{A-\arabic{lemma}} \paragraph{Proof of Lemma \ref{S:tvDARMA:L:Determinant}} Recall that $p=2^m-1$. The Walsh-ordered Hadamard matrix $H_W(m)$ is a $(2^m \times 2^m)$ matrix with elements of the form $W(n,x_j),\; x_j=j/(p+1),\;j,n=0,1,\ldots, p$; see also \citet{stoffer1991walsh}.
Then the following relations hold: \begin{eqnarray} \label{TM:S:Appendix:E:lemma1 proof} \nonumber \Sigma_{t,T} H_W(m) &=& \left( \begin{array}{cccc} b_{0\oplus 0,t,T} & b_{0\oplus 1,t,T} & \cdots & b_{0\oplus p,t,T} \\ b_{1\oplus 0,t,T} & b_{1\oplus 1,t,T} & \cdots & b_{1\oplus p,t,T} \\ \vdots & \vdots & \ddots & \vdots \\ b_{p\oplus 0,t,T} & b_{p\oplus 1,t,T} & \cdots & b_{p\oplus p,t,T} \\ \end{array} \right) \cdot \left( \begin{array}{cccc} W(0,x_0) & W(0,x_1) & \cdots & W(0,x_p) \\ W(1,x_0) & W(1,x_1) & \cdots & W(1,x_p) \\ \vdots & \vdots & \ddots & \vdots \\ W(p,x_0) & W(p,x_1) & \cdots & W(p,x_p) \\ \end{array} \right) \\ \nonumber &=& \left\{ \sum_{l=0}^p b_{(i-1)\oplus l,t,T} W(l,x_{j-1}) \right\}_{(i,j)} = \bigg\{ \phi_{t,T}(x_{j-1}) W(i-1,x_{j-1}) \bigg\}_{(i,j)} \\ &=& H_W(m) \cdot \textrm{diag}[\phi_{t,T}(x_0), \phi_{t,T}(x_1), \ldots, \phi_{t,T}(x_p)]. \end{eqnarray} But, since $\det [H_W(m)] = (p+1)^{(p+1)/2} \neq 0$, we get from \eqref{TM:S:Appendix:E:lemma1 proof} that $$\det [\Sigma_{t,T}] = \det\big[ \textrm{diag}(\phi_{t,T}(x_0), \phi_{t,T}(x_1), \ldots, \phi_{t,T}(x_p)) \big] = \prod _{j=0}^p \phi_{t,T}(x_j).$$ For the second argument of Lemma \ref{S:tvDARMA:L:Determinant}, note that $\phi_{t,T}(x)=\sum_{j=0}^p b_{j,t,T} W(j,x),\, x\in I$ and every Walsh function $W(n,x),\, n=0,1,\ldots,p$ remains invariant for $x \in [x_j,x_{j+1}), \, x_j=j/(p+1),\;j,n=0,1,\ldots, p$ and equal to $W(n,x_j)$. Therefore $\phi_{t,T}(x)=\phi_{t,T}(x_j), \; \forall x \in [x_j,x_{j+1}).$ \paragraph{Proof of Lemma \ref{S:tvDARMA:L:Invertible}} \begin{eqnarray} \label{TM:S:Appendix:E:lemma2 proof 1} \nonumber \phi_{t,T}(x)\eta_{t,T}(x) &=& \left(\sum_{l=0}^p b_{l,t,T} W(l,x)\right) \left(\sum_{m=0}^p d_{m,t,T} W(m,x)\right) = \sum_{l=0}^p \sum_{m=0}^p b_{l,t,T} \, d_{m,t,T} W(l\oplus m,x) \\ &=& \sum_{h=0}^p \Bigg [\sum_{j=0}^p b_{j\oplus h,t,T} \, d_{j,t,T} \Bigg] W(h,x). 
\end{eqnarray} In order for $\phi_{t,T}(x)\eta_{t,T}(x)=1$ to hold in $I$, we have from equation \eqref{TM:S:Appendix:E:lemma2 proof 1} that $$\sum_{h=0}^p \Bigg [\sum_{j=0}^p b_{j\oplus h,t,T} \, d_{j,t,T} \Bigg] W(h,x_l)=1, \quad l=0,1,\ldots,p,$$ which is equivalently written in matrix notation as \begin{equation} \label{TM:S:Appendix:E:lemma2 proof 2} H^\prime_W(m) \left( \begin{array}{c} \sum_{j=0}^p b_{j,t,T} \, d_{j,t,T} \\ \sum_{j=0}^p b_{j\oplus 1,t,T} \, d_{j,t,T} \\ \vdots \\ \sum_{j=0}^p b_{j\oplus p,t,T} \, d_{j,t,T} \end{array} \right) = \left( \begin{array}{c} 1 \\ 1 \\ \vdots \\ 1 \end{array} \right). \end{equation} But $H_W(m)H^\prime_W(m)=2^m I_{2^m}.$ Hence, since from assumption we have that $\det\left[\Sigma_{t,T}\right]\neq 0$, equation \eqref{TM:S:Appendix:E:lemma2 proof 2} gives that \begin{eqnarray} \nonumber 2^m I_{2^m} \left( \begin{array}{c} \sum_{j=0}^p b_{j,t,T} \, d_{j,t,T} \\ \sum_{j=0}^p b_{j\oplus 1,t,T} \, d_{j,t,T} \\ \vdots \\ \sum_{j=0}^p b_{j\oplus p,t,T} \, d_{j,t,T} \end{array} \right) &=& H_W(m) \left( \begin{array}{c} 1 \\ 1 \\ \vdots \\ 1 \end{array} \right) = 2^m \left( \begin{array}{c} 1 \\ 0 \\ \vdots \\ 0 \end{array} \right) \Longrightarrow \left( \begin{array}{c} d_{0,t,T} \\ d_{1,t,T} \\ \vdots \\ d_{p,t,T} \end{array} \right) = \Sigma_{t,T}^{-1} \left( \begin{array}{c} 1 \\ 0 \\ \vdots \\ 0 \end{array} \right). 
\end{eqnarray} \paragraph{Proof of Theorem \ref{S:tvDARMA:T:tvDARMA representation}} Since $X_{t,T}$ is locally dyadic stationary it has a Walsh spectral representation $$X_{t,T}=\int_0^1 W(t,x) A_{t,T}(x) dU(x),$$ while $\varepsilon_t$ is dyadic stationary and represented by $$\varepsilon_t=\int_0^1 W(t,x) dZ_\varepsilon(x).$$ Then the LHS and RHS of equation \eqref{S:tvDARMA:E:tvDARMA rescaled} can be written as \begin{eqnarray} \label{TM:S:Appendix:E:LHS} \textrm{LHS} &=& \sum_{k=0}^p b_{k,t,T} X_{t\oplus k,T} = \int_0^1 W(t,x) \left(\sum_{k=0}^p b_{k,t,T} A_{t\oplus k,T}(x)W(k,x) \right) dU(x), \end{eqnarray} and \begin{eqnarray} \label{TM:S:Appendix:E:RHS} \nonumber \textrm{RHS} &=& \sum_{n=0}^r a_{n,t,T} \varepsilon_{t\oplus n} = \int_0^1 W(t,x) \left(\sum_{n=0}^r a_{n,t,T}W(n,x) \right) dZ_\varepsilon(x). \end{eqnarray} \noindent Set \begin{equation} \label{TM:S:Appendix:E:LHS.prime} \textrm{LHS}^{\prime}=\int_0^1 W(t,x) \left(\sum_{k=0}^p b_{k,t,T} A_{t,T}(x)W(k,x) \right) dU(x). \end{equation} \noindent Then \begin{eqnarray} \label{TM:S:Appendix:E:LHS.LHS.prime} \nonumber |\textrm{LHS}-\textrm{LHS}^{\prime}| &=& \int_0^1 |W(t,x)| \left(\sum_{k=0}^p |b_{k,t,T}| \cdot |A_{t\oplus k,T}(x)-A_{t,T}(x)| \cdot |W(k,x)| \right) dU(x) \\ &=& \int_0^1 \left(\sum_{k=0}^p |b_{k,t,T}| \cdot |A_{t\oplus k,T}(x)-A_{t,T}(x)| \right) dU(x). \end{eqnarray} \noindent Consider the interval $A=\{ \left| (t\oplus k/T)-t/T \right| \leq \varepsilon \}$. 
Then, for every $\varepsilon$, we may take $T$ large enough that $(t\oplus k)/T \in A$ for all $k=0,1,\ldots, p.$ From the continuity of the function $A(t/T,\cdot)$ and assumption \eqref{S:LDS:E:Definition, assumption (ii)} we have that \begin{eqnarray} \label{TM:S:Appendix:E:1st dif} \nonumber |A_{t\oplus k,T}(x)-A_{t,T}(x)| &=& \left|A_{t\oplus k,T}(x)-A\left(\frac{t\oplus k}{T},x\right)+A\left(\frac{t\oplus k}{T},x\right)-A_{t,T}(x)\right| \\ \nonumber &\leq& \left| A_{t\oplus k,T}(x)-A\left(\frac{t\oplus k}{T},x\right) \right| + \left| A\left(\frac{t\oplus k}{T},x\right)-A_{t,T}(x) \right| \\ \nonumber &\leq& \frac{K}{T}+ \left| A\left(\frac{t\oplus k}{T},x\right) - A\left(\frac{t}{T},x\right)\right| + \left|A\left(\frac{t}{T},x\right) - A_{t,T}(x) \right| \\ &\leq& \frac{2K}{T}+\varepsilon^{\prime}=\varepsilon^{\prime\prime}. \end{eqnarray} From \eqref{TM:S:Appendix:E:LHS.LHS.prime} and \eqref{TM:S:Appendix:E:1st dif}, we have that \begin{eqnarray} \label{TM:S:Appendix:E:LHS-LHS.prime} \nonumber |\textrm{LHS}-\textrm{LHS}^{\prime}| &\leq& \varepsilon^{\prime\prime} \sum_{k=0}^p |b_{k,t,T}| \int_0^1dU(x) \leq \varepsilon^{\prime\prime\prime}, \end{eqnarray} since $\sum_{k=0}^p |b_{k,t,T}|<M^{\prime}$ and $\int_0^1dU(x)<M^{\prime\prime}$, with $M^{\prime}$ and $M^{\prime\prime}$ real constants.
Set $\phi_{1,t,T}(x)=\sum_{k=0}^p b_{k,t,T}W(k,x)$ and $\phi_{2,t,T}(x)=\sum_{n=0}^r a_{n,t,T}W(n,x).$ Assume that $\textrm{LHS}^{\prime}=\textrm{RHS}.$ Then, since the system of Walsh functions is complete and equation \eqref{S:tvDARMA:E:tvDARMA rescaled} holds for $t=1,2,\ldots,T$, we have that $\phi_{1,t,T}(x) A_{t,T}(x) dU(x) = \phi_{2,t,T}(x) dZ_\varepsilon(x).$ \\ (i) Since $\det[\Sigma_{t,T}]\neq 0,$ from Lemma \ref{S:tvDARMA:L:Determinant} we have that $\phi_{1,t,T}(x)\neq 0$ and from Lemma \ref{S:tvDARMA:L:Invertible} there exists a function $\eta_{1,t,T}(x)=\sum_{k=0}^p g_{k,t,T} W(k,x)$ such that \begin{eqnarray} \nonumber A_{t,T}(x) dU(x) &=& \eta_{1,t,T}(x) \phi_{2,t,T}(x)dZ_\varepsilon(x) \\ &=&\Bigg\{ \begin{array}{cc} \sum_{j=0}^\mu \Big( \sum_{l=0}^p g_{l,t,T}\:a_{l\oplus j,t,T} \Big) W(j,x) dZ_\varepsilon(x),& p\leq r, \\ \nonumber \sum_{j=0}^\mu \Big( \sum_{l=0}^r g_{l\oplus j,t,T}\:a_{l,t,T} \Big) W(j,x) dZ_\varepsilon(x),& p>r. \end{array} \end{eqnarray} Hence, $$\nonumber A_{t,T}(x)dU(x) = \sum_{j=0}^\mu K_{j,t,T}W(j,x) dZ_\varepsilon(x),$$ where \begin{eqnarray} \label{S:tvDARMA:E:definition of Ks} \nonumber K_{j,t,T} &=& \Bigg\{ \begin{array}{cc} \sum_{s=0}^p g_{s,t,T}\:a_{s\oplus j,t,T},& p\leq r, \\ \sum_{s=0}^r g_{s\oplus j,t,T}\:a_{s,t,T},& p>r, \end{array} \quad j=0,1,\ldots,p. \end{eqnarray} Therefore, $$X_{t,T} = \sum_{j=0}^\mu K_{j,t,T} \int_0^1 W(t\oplus j,x) dZ_\varepsilon(x) = \sum_{j=0}^\mu K_{j,t,T} \varepsilon_{t\oplus j}.$$ \\ (ii) Similarly with (i). \paragraph{Proof of Corollary \ref{S:tvDARMA:C:tvDAR represented as tv DMA}} Suppose that $X_{t,T}=\int_0^1 A_{t,T}(x)W(t,x) dU(x).$ The LHS of \eqref{S:tvDARMA:E:tvDARp rescaled} is given by \eqref{TM:S:Appendix:E:LHS} and the $\textrm{LHS}^{\prime}$ by \eqref{TM:S:Appendix:E:LHS.prime}. 
From Lemma \ref{S:tvDARMA:L:Invertible}, since by assumption $\det[\Sigma_{t,T}]\neq 0,$ there exists $\eta_{t,T}(x)=\sum_{m=0}^p d_{m,t,T} W(m,x),\; d_{m,t,T}\in \mathbb{R},$ such that $\phi_{1,t,T}(x) \eta_{t,T}(x)=1.$ Therefore, since the system of Walsh functions is complete, we have that $$A_{t,T}(x)dU(x)= \eta_{t,T}(x) dZ_\varepsilon(x)= \sum_{m=0}^p d_{m,t,T} W(m,x) dZ_\varepsilon(x).$$ Hence, $X_{t,T}=\sum_{m=0}^p d_{m,t,T} \int_0^1 W(t\oplus m,x) dZ_\varepsilon(x) = \sum_{m=0}^p d_{m,t,T} \varepsilon_{t\oplus m}.$ \newpage
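The inversion in Lemma \ref{S:tvDARMA:L:Invertible} amounts to solving the linear system $\Sigma_{t,T}\,d=e_1$. A numerical sketch (illustrative coefficients, Paley-ordered Walsh functions) confirming that the resulting $\eta_{t,T}$ satisfies $\phi_{t,T}(x_j)\,\eta_{t,T}(x_j)=1$ on the dyadic grid:

```python
import numpy as np

def walsh(n, x):
    """Paley-ordered Walsh function W(n, x) in {-1, +1}."""
    sign, k = 1, 0
    while n >> k:
        if (n >> k) & 1 and int(x * 2 ** (k + 1)) & 1:
            sign = -sign
        k += 1
    return sign

m = 2
p = 2**m - 1
b = [1.0, -0.4, 0.25, 0.1]        # illustrative coefficients b_{j,t,T}
Sigma = np.array([[b[i ^ j] for j in range(p + 1)] for i in range(p + 1)])
e1 = np.zeros(p + 1); e1[0] = 1.0
d = np.linalg.solve(Sigma, e1)    # coefficients d_{m,t,T} of eta_{t,T}
xs = [j / (p + 1) for j in range(p + 1)]
phi = [sum(b[j] * walsh(j, x) for j in range(p + 1)) for x in xs]
eta = [sum(d[j] * walsh(j, x) for j in range(p + 1)) for x in xs]
products = [f * e for f, e in zip(phi, eta)]   # all equal to 1
```

Since both $\phi_{t,T}$ and $\eta_{t,T}$ are constant on each dyadic subinterval $[x_j,x_{j+1})$, checking the product on the grid checks it on all of $I$.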
\section{Introduction} \setcounter{equation}{0} The Coulomb wave function, which bears the name of the famous French physicist Charles Augustin de Coulomb (best known for his law describing the electrostatic interaction between electrically charged particles), is a solution of the Coulomb wave equation (or radial Schr\"odinger equation in the Coulomb potential) and is used to describe the behavior of charged particles in a Coulomb potential. There is an extensive literature concerning the computation of Coulomb wave function values; however, the zeros and other analytical properties have not been studied in detail. For more details we refer to the papers \cite{ikebe,miyazaki} and to the references therein. We mention that an important study of the Coulomb wave function was recently made by \v{S}tampach and \v{S}\v{t}ov\'\i\v{c}ek \cite{stampach}. In this paper we present some new results on the Coulomb wave function, which may be useful for people working in special functions and mathematical physics. Our paper belongs to the rich literature on Tur\'an type inequalities for orthogonal polynomials and special functions, named after the Hungarian mathematician Paul Tur\'an, and can be interpreted as a generalization of some of the results on Bessel functions of the first kind obtained by Sz\'asz \cite{szasz1,szasz2}. The paper is organized as follows: the next section is divided into four subsections and contains some Tur\'an, Mitrinovi\'c-Adamovi\'c and Wilker type inequalities for the regular Coulomb wave function. The key tool in the proofs is a Mittag-Leffler expansion for the regular Coulomb wave function, which may be of independent interest. We also deduce some complete monotonicity results for the Coulomb zeta functions, which are defined by using the real zeros of the Coulomb wave functions.
By using the Hadamard factorization of the Coulomb wave functions we also present some interlacing properties of their zeros. \section{Properties of the regular Coulomb wave functions} \setcounter{equation}{0} In this section our aim is to present the main results of this paper about the regular Coulomb wave function, together with their proofs. The section is divided into four subsections. \subsection{Tur\'an type inequalities for regular Coulomb wave functions} In order to obtain the main results of this subsection we use a Mittag-Leffler expansion for the regular Coulomb wave function together with the recurrence relations, and a result of Ross \cite{ross}. As we will see below, the second main result of this subsection is a natural extension of a well-known result of Sz\'asz \cite{szasz1,szasz2} for Bessel functions of the first kind. The next result, which may be of independent interest, is an immediate consequence of a result of Wimp \cite{wimp} concerning confluent hypergeometric functions and was recently rediscovered by \v{S}tampach and \v{S}\v{t}ov\'\i\v{c}ek \cite{stampach} using a different method. In both papers \cite{stampach,wimp} a new class of orthogonal polynomials associated with regular Coulomb wave functions is introduced. These polynomials play a role analogous to that of the Lommel polynomials in the theory of Bessel functions of the first kind. However, it is worth mentioning that Wimp's approach \cite{wimp} is based on inversion of Stieltjes transforms, while \v{S}tampach and \v{S}\v{t}ov\'\i\v{c}ek \cite{stampach} used the eigenvalues of some Jacobi matrices.
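For numerical experiments with the results below, concrete values of the regular solution can be generated, up to the normalization constant $C_L(\eta)$, from the classical power series $w(\rho)=\rho^{L+1}\sum_{k\geq0}A_k\rho^k$ with $A_0=1$, $A_1=\eta/(L+1)$ and $k(k+2L+1)A_k=2\eta A_{k-1}-A_{k-2}$; see \cite[Ch. 14]{abra}. A short sketch (the parameter values are arbitrary illustrations) verifying that the truncated series satisfies the Coulomb differential equation $w''(\rho)+\left[1-2\eta/\rho-L(L+1)/\rho^2\right]w(\rho)=0$:

```python
def coulomb_series(L, eta, rho, terms=60):
    """Truncated regular series solution w(rho) = rho^{L+1} sum_k A_k rho^k
    of the Coulomb equation, together with its second derivative."""
    A = [1.0, eta / (L + 1)]
    for k in range(2, terms):
        A.append((2 * eta * A[k - 1] - A[k - 2]) / (k * (k + 2 * L + 1)))
    w = sum(a * rho ** (L + 1 + k) for k, a in enumerate(A))
    # differentiate the series term by term, twice
    w2 = sum(a * (L + 1 + k) * (L + k) * rho ** (L + k - 1)
             for k, a in enumerate(A))
    return w, w2

# arbitrary illustrative parameters
L, eta, rho = 1.0, 0.5, 1.2
w, w2 = coulomb_series(L, eta, rho)
residual = w2 + (1.0 - 2.0 * eta / rho - L * (L + 1.0) / rho**2) * w
```

The residual vanishes to machine precision, confirming that the recurrence for the $A_k$ encodes the differential equation.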
\begin{lemma} Let $\rho,\eta\in\mathbb{R}$ and let $L>-3/2,$ $L\neq-1$ if $\eta\neq0$ and $L>-3/2$ if $\eta=0.$ Then the following Mittag-Leffler expansion is valid: \begin{equation}\label{Comitt} \frac{F_{L+1}(\eta,\rho)}{F_{L}(\eta,\rho)}=\frac{L+1}{\sqrt{(L+1)^2+\eta^2}}\sum_{n\geq 1}\left[\frac{\rho}{x_{L,\eta,n}(x_{L,\eta,n}-\rho)}+\frac{\rho}{y_{L,\eta,n}(y_{L,\eta,n}-\rho)}\right], \end{equation} where $x_{L,\eta,n}$ and $y_{L,\eta,n}$ are the $n$th positive and negative zeros of the Coulomb wave function $F_L(\eta,\rho).$ \end{lemma} \begin{proof} Let ${}_1F_1$ denote the Kummer confluent hypergeometric function. It is known that $$F_{L}(\eta,\rho)=C_{L}(\eta)\rho^{L+1}e^{-\mathrm{i}\rho}{}_1F_1(L+1-\mathrm{i}\eta,2L+2;2\mathrm{i}\rho),$$ where $$C_L(\eta)=\frac{2^Le^{-\frac{\pi\eta}{2}}\left|\Gamma(L+1+\mathrm{i}\eta)\right|}{\Gamma(2L+2)}.$$ By using the following result of Wimp \cite[p. 892]{wimp} for $c=2L+2,$ $\kappa=\eta$ and $z=1/\rho$ $$\frac{{}_1F_1\left(\frac{c}{2}+1-\mathrm{i}\kappa,c+2;\frac{2\mathrm{i}}{z}\right)}{{}_1F_1\left(\frac{c}{2}-\mathrm{i}\kappa,c;\frac{2\mathrm{i}}{z}\right)}= \frac{c^2(c+1)}{c^2+4\kappa^2}\sum_{k\in\mathbb{Z}\setminus\{0\}}z_k^{-2}\frac{z}{z-z_k^{-1}},$$ where $\kappa,z\in\mathbb{R},$ $c>-1$ and $z_k,$ $k\in\mathbb{Z}\setminus\{0\},$ are the zeros of the function ${}_1F_1(c/2-\mathrm{i}\kappa,c;2\mathrm{i}z),$ it follows that $$\frac{F_{L+1}(\eta,\rho)}{F_{L}(\eta,\rho)}=\frac{C_{L+1}(\eta)}{C_L(\eta)}\frac{(L+1)^2(2L+3)}{(L+1)^2+\eta^2}\sum_{n\geq 1}\left[\frac{\rho}{x_{L,\eta,n}(x_{L,\eta,n}-\rho)}+\frac{\rho}{y_{L,\eta,n}(y_{L,\eta,n}-\rho)}\right], $$ which by means of the relation \cite[p. 538]{abra} $L(2L+1)C_L(\eta)=\sqrt{L^2+\eta^2}C_{L-1}(\eta)$ yields \eqref{Comitt}. We note that in the above formula of Wimp \cite[p.
892]{wimp}, instead of the correct expression $K=c^2(c+1)/(c^2+4\kappa^2)$, the expression $K=c^2(c+1)/(c^2/4+\kappa^2)$ was used, and instead of the correct argument $2\mathrm{i}/z$, the argument $\mathrm{i}/z$ appeared. This can be verified by using the fact that when $\eta=0$ the Coulomb wave function reduces to a Bessel function of the first kind: combining the Mittag-Leffler expansion for Bessel functions of the first kind with the first Rayleigh sum of zeros of Bessel functions, we would otherwise obtain a contradiction. Another way to obtain \eqref{Comitt} is to consider the Hadamard infinite product expansion \cite[eq. 76]{stampach} \begin{equation}\label{infprod}F_{L}(\eta,\rho)=C_L(\eta)\rho^{L+1}e^{\frac{\eta\rho}{L+1}} \prod_{n\geq1}\left(1-\frac{\rho}{\rho_{L,\eta,n}}\right)e^{\frac{\rho}{\rho_{L,\eta,n}}},\end{equation} where $\rho_{L,\eta,n}$ is the $n$th zero of the Coulomb wave function. Logarithmic differentiation yields $$\frac{F_{L}'(\eta,\rho)}{F_L(\eta,\rho)}=\frac{L+1}{\rho}+\frac{\eta}{L+1}-\sum_{n\geq 1}\frac{\rho}{\rho_{L,\eta,n}(\rho_{L,\eta,n}-\rho)},$$ which in view of the recurrence relation \cite[p. 539]{abra} \begin{equation}\label{reccou2}(L+1)F_L'(\eta,\rho)=\left[\frac{(L+1)^2}{\rho}+\eta\right] F_{L}(\eta,\rho)-\sqrt{(L+1)^2+\eta^2}F_{L+1}(\eta,\rho)\end{equation} yields $$\frac{F_{L+1}(\eta,\rho)}{F_{L}(\eta,\rho)}=\frac{L+1}{\sqrt{(L+1)^2+\eta^2}}\sum_{n\geq 1}\frac{\rho}{\rho_{L,\eta,n}(\rho_{L,\eta,n}-\rho)}.$$ Now, taking into account that the zeros $\rho_{L,\eta,n}$ can be separated into positive and negative zeros, the proof of \eqref{Comitt} is done.
\end{proof} It is worth mentioning that if $\eta=0,$ then \eqref{Comitt} reduces to the following well-known Mittag-Leffler expansion $$\frac{F_{L+1}(0,\rho)}{F_{L}(0,\rho)}=\frac{J_{L+3/2}(\rho)}{J_{L+1/2}(\rho)}=\sum_{n\geq1}\frac{2\rho}{j_{L+1/2,n}^2-\rho^2},$$ where $L>-3/2,$ $J_{L}$ stands for the Bessel function of the first kind of order $L$ and $j_{L,n}$ is the $n$th positive zero of the Bessel function $J_L.$ Here we used that for each natural $n$ we have $x_{L,0,n}=-y_{L,0,n}=j_{L+1/2,n},$ that is, the corresponding negative and positive zeros of the Bessel function of the first kind are symmetric with respect to the origin. Now we are ready to present the first set of results concerning the Tur\'an type inequalities for the regular Coulomb wave function. Three kinds of Tur\'anians are considered, and the results are mainly based on the Mittag-Leffler expansion \eqref{Comitt}. Our first main result is the following theorem. \begin{theorem} The following assertions are true: \begin{enumerate} \item[\bf a.] If $L,\eta>0,$ $0<\rho<L(L+1)/\eta,$ $\rho<x_{L,\eta,1}$ or $-3/2<L<-1,$ $\eta>0,$ $0<\rho<L(L+1)/\eta,$ $\rho<x_{L,\eta,1}$ or $\eta\leq0,$ $L\geq0$ and $0<\rho<x_{L,\eta,1},$ then \begin{equation}\label{coturan1} F_{L}^2(\eta,\rho)-F_{L-1}(\eta,\rho)F_{L+1}(\eta,\rho)\geq0. \end{equation} \item[\bf b.] If $L,\eta>0,$ $L(L+1)/\eta\leq\rho<x_{L-1,\eta,1}$ or $-3/2<L<-1,$ $\eta>0,$ $L(L+1)/\eta\leq\rho<x_{L-1,\eta,1}$ or $-1<L<0,$ $\eta<0,$ $L(L+1)/\eta\leq\rho<x_{L-1,\eta,1},$ then $$\frac{\sqrt{L^2+\eta^2}}{L}F_{L}^2(\eta,\rho)- \frac{\sqrt{(L+1)^2+\eta^2}}{L+1}F_{L-1}(\eta,\rho)F_{L+1}(\eta,\rho)\geq0.$$ \item[\bf c.]
If $L>-1,$ $\eta\in\mathbb{R},$ $\rho^2\leq (L^3+1)/(L^2+\eta^2),$ ${\eta}/({L(L+1)})- {1}/{\rho}>0$ and $0<\rho<x_{L-1,\eta,1},$ then $$F_{L}^2(\eta,\rho)- \frac{\sqrt{L^2+\eta^2}\sqrt{(L+1)^2+\eta^2}}{L(L+1)}F_{L-1}(\eta,\rho) F_{L+1}(\eta,\rho)\geq0.$$ \end{enumerate} \end{theorem} \begin{proof} {\bf a.} By using the recurrence relation \cite[p. 539]{abra} \begin{equation}\label{reccou1}LF_L'(\eta,\rho)=\sqrt{L^2+\eta^2}F_{L-1}(\eta,\rho)- \left(\frac{L^2}{\rho}+\eta\right)F_{L}(\eta,\rho)\end{equation} and \eqref{reccou2}, we obtain $$\frac{{}_1\Delta_{L,\eta}(\rho)}{F_{L}^2(\eta,\rho)}=a_{L,\eta}(\rho)- b_{L,\eta}(\rho)\frac{F_{L}'(\eta,\rho)}{F_{L}(\eta,\rho)}+ c_{L,\eta}\left[\frac{F_{L}'(\eta,\rho)}{F_{L}(\eta,\rho)}\right]^2,$$ where $$a_{L,\eta}(\rho)=1-\frac{\left(\frac{L^2}{\rho}+\eta\right)\left[\frac{(L+1)^2}{\rho}+\eta\right]} {\sqrt{L^2+\eta^2}\sqrt{(L+1)^2+\eta^2}},\ b_{L,\eta}(\rho)=\frac{\frac{L(L+1)}{\rho}-\eta}{\sqrt{L^2+\eta^2}\sqrt{(L+1)^2+\eta^2}},$$ $$c_{L,\eta}=\frac{L(L+1)}{\sqrt{L^2+\eta^2}\sqrt{(L+1)^2+\eta^2}}$$ and ${}_1\Delta_{L,\eta}(\rho)$ stands for the Tur\'an expression, defined by $${}_1\Delta_{L,\eta}(\rho)=F_{L}^2(\eta,\rho)-F_{L-1}(\eta,\rho)F_{L+1}(\eta,\rho).$$ Now, taking into account that the Coulomb wave function is a particular solution of the Coulomb differential equation \cite[p. 
538]{abra} \begin{equation}\label{eqcou}w''(\rho)+\left[1-\frac{2\eta}{\rho}- \frac{L(L+1)}{\rho^2}\right]w(\rho)=0,\end{equation} we get \begin{equation}\label{eqdiffercou}\left[\frac{F_{L}'(\eta,\rho)}{F_{L}(\eta,\rho)}\right]^2= \frac{L(L+1)}{\rho^2}+\frac{2\eta}{\rho}-1- \left[\frac{F_{L}'(\eta,\rho)}{F_{L}(\eta,\rho)}\right]',\end{equation} which in turn implies that $$\frac{{}_1\Delta_{L,\eta}(\rho)}{F_{L}^2(\eta,\rho)}=d_{L,\eta}(\rho)-b_{L,\eta}(\rho) \frac{F_{L}'(\eta,\rho)}{F_{L}(\eta,\rho)}- c_{L,\eta}\left[\frac{F_{L}'(\eta,\rho)}{F_{L}(\eta,\rho)}\right]',$$ where $$d_{L,\eta}(\rho)=1-\frac{L(L+1)+\frac{\eta}{\rho}+\eta^2} {\sqrt{L^2+\eta^2}\sqrt{(L+1)^2+\eta^2}}.$$ Moreover, by using the recurrence relation \eqref{reccou2} and the Mittag-Leffler expansion \eqref{Comitt}, it follows $$\frac{F_{L}'(\eta,\rho)}{F_{L}(\eta,\rho)}=\frac{L+1}{\rho}+\frac{\eta}{L+1}-\sum_{n\geq1} \left[\frac{\rho}{x_{L,\eta,n}(x_{L,\eta,n}-\rho)}+ \frac{\rho}{y_{L,\eta,n}(y_{L,\eta,n}-\rho)}\right]$$ and \begin{equation}\label{logder}\left[\frac{F_{L}'(\eta,\rho)}{F_{L}(\eta,\rho)}\right]' =-\frac{L+1}{\rho^2}-\sum_{n\geq1} \left[\frac{1}{(x_{L,\eta,n}-\rho)^2}+\frac{1}{(y_{L,\eta,n}-\rho)^2}\right].\end{equation} Consequently we have \begin{align*}\frac{{}_1\Delta_{L,\eta}(\rho)}{F_{L}^2(\eta,\rho)}=e_{L,\eta}+b_{L,\eta}&(\rho)\sum_{n\geq1} \left[\frac{\rho}{x_{L,\eta,n}(x_{L,\eta,n}-\rho)}+ \frac{\rho}{y_{L,\eta,n}(y_{L,\eta,n}-\rho)}\right] \\&+c_{L,\eta}\sum_{n\geq1} \left[\frac{1}{(x_{L,\eta,n}-\rho)^2}+\frac{1}{(y_{L,\eta,n}-\rho)^2}\right],\end{align*} where $$e_{L,\eta}=1-\frac{L\sqrt{(L+1)^2+\eta^2}}{(L+1)\sqrt{L^2+\eta^2}}.$$ Note that for all $L\geq0$ or $-3/2<L<-1$ and $\eta\in\mathbb{R}$ we have $c_{L,\eta}\geq0$ and $e_{L,\eta}\geq0.$ Thus ${}_1\Delta_{L,\eta}(\rho)$ is positive if $L,\eta>0,$ $0<\rho<L(L+1)/\eta,$ $\rho<x_{L,\eta,1}$ or if $-3/2<L<-1,$ $\eta>0,$ $0<\rho<L(L+1)/\eta,$ $\rho<x_{L,\eta,1}$ or if $\eta\leq0,$ $L\geq0$ and $0<\rho<x_{L,\eta,1}.$ 
{\bf b.} By using the recurrence relations \eqref{reccou1} and \eqref{reccou2} we obtain $$F_{L+1}'(\eta,\rho)F_{L}(\eta,\rho)-F_{L}'(\eta,\rho)F_{L+1}(\eta,\rho)= {}_2\Delta_{L+1,\eta}(\rho)-\left[\frac{\eta}{(L+1)(L+2)}- \frac{1}{\rho}\right]F_{L}(\eta,\rho)F_{L+1}(\eta,\rho),$$ where $${}_2\Delta_{L,\eta}(\rho)=\frac{\sqrt{L^2+\eta^2}}{L}F_{L}^2(\eta,\rho)- \frac{\sqrt{(L+1)^2+\eta^2}}{L+1}F_{L-1}(\eta,\rho)F_{L+1}(\eta,\rho).$$ On the other hand, according to \cite[Lemma 2.4]{miyazaki} we have $$\rho^2\frac{\sqrt{(L+1)^2+\eta^2}}{L+1}\left[F_{L+1}'(\eta,\rho)F_{L}(\eta,\rho)- F_{L}'(\eta,\rho)F_{L+1}(\eta,\rho)\right]=\sum_{n\geq1}(2L+2n+1)F_{L+n}^2(\eta,\rho). $$ From this we obtain that $$\frac{{}_2\Delta_{L,\eta}(\rho)}{F_{L-1}^2(\eta,\rho)}\geq\left[\frac{\eta}{L(L+1)}- \frac{1}{\rho}\right]\frac{F_{L}(\eta,\rho)}{F_{L-1}(\eta,\rho)}$$ and by using the Mittag-Leffler expansion \eqref{Comitt}, the right-hand side of the above inequality is positive if $L,\eta>0$ and $L(L+1)/\eta\leq\rho<x_{L-1,\eta,1}$ or if $-3/2<L<-1,$ $\eta>0$ and $L(L+1)/\eta\leq\rho<x_{L-1,\eta,1}$ or if $-1<L<0,$ $\eta<0,$ $L(L+1)/\eta\leq\rho<x_{L-1,\eta,1}.$ {\bf c.} Observe that \eqref{logder} implies that for all $\eta,\rho\in\mathbb{R},$ $\rho\neq0$ and $L\geq-1$ we have $$D_{L,\eta}(\rho)=F_{L}''(\eta,\rho)F_{L}(\eta,\rho)-F_{L}'(\eta,\rho)F_{L}'(\eta,\rho)\leq0.$$ Now, by using the recurrence relations \eqref{reccou1} and \eqref{reccou2} and also the fact that $F_L(\eta,\rho)$ satisfies the Coulomb differential equation \eqref{eqcou}, we obtain $$D_{L,\eta}(\rho)=f_{L,\eta}(\rho)F_{L}^2(\eta,\rho)+ \frac{1}{c_{L,\eta}}F_{L-1}(\eta,\rho)F_{L+1}(\eta,\rho)+ \left[\frac{\eta}{L(L+1)}-\frac{1}{\rho}\right]F_{L-1}(\eta,\rho)F_{L}(\eta,\rho),$$ where $$f_{L,\eta}(\rho)=\frac{L}{\rho^2}-1-\frac{\eta^2}{L^2}.$$ If $L>-1,$ $\eta\in\mathbb{R}$ and $(L^3+1)/(L^2+\eta^2)\geq\rho^2,$ then we have that $f_{L,\eta}(\rho)\geq-1$ and consequently we have $$0\geq D_{L,\eta}(\rho)\geq 
-{}_3\Delta_{L,\eta}(\rho)+ \left[\frac{\eta}{L(L+1)}-\frac{1}{\rho}\right]F_{L-1}(\eta,\rho)F_{L}(\eta,\rho),$$ where $${}_3\Delta_{L,\eta}(\rho)=F_{L}^2(\eta,\rho)- \frac{\sqrt{L^2+\eta^2}\sqrt{(L+1)^2+\eta^2}}{L(L+1)}F_{L-1}(\eta,\rho)F_{L+1}(\eta,\rho).$$ But the above inequality is equivalent to $$\frac{{}_3\Delta_{L,\eta}(\rho)}{F_{L-1}^2(\eta,\rho)}\geq\left[\frac{\eta}{L(L+1)}- \frac{1}{\rho}\right]\frac{F_{L}(\eta,\rho)}{F_{L-1}(\eta,\rho)}$$ and by using again the Mittag-Leffler expansion \eqref{Comitt}, the right-hand side of the above inequality is positive if ${\eta}/({L(L+1)})- {1}/{\rho}>0$ and $0<\rho<x_{L-1,\eta,1}.$ With this the proof is complete. \end{proof} Now, let us consider the notations $$B_{L,\eta}(\rho)=\frac{L\sqrt{(L+1)^2+\eta^2}}{(2L+1)\left[\frac{L(L+1)}{\rho}+\eta\right]}\ \ \ \mbox{and}\ \ \ C_{L,\eta}(\rho)=\frac{(L+1)\sqrt{L^2+\eta^2}}{(2L+1)\left[\frac{L(L+1)}{\rho}+\eta\right]}.$$ In what follows we show that if $L\geq0,$ $\eta\leq0,$ then the restriction $\rho<x_{L,\eta,1}$ in the Tur\'an type inequality \eqref{coturan1} can be removed. Moreover, we show that in this case the inequality \eqref{coturan1} can be improved. 
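The Tur\'an type inequality \eqref{coturan1} lends itself to a quick numerical spot-check. The sketch below (an illustration, not part of the proof) evaluates the Tur\'anian with mpmath's \texttt{coulombf} at parameters satisfying the part {\bf a} hypotheses; the choices $L=1,$ $\eta=1/2$ (so that $L(L+1)/\eta=4$) and the sampled $\rho$ values are ours.

```python
# Numerical spot-check (not a proof) of the Turan type inequality (coturan1):
#   F_L^2(eta, rho) - F_{L-1}(eta, rho) * F_{L+1}(eta, rho) >= 0
# under the part (a) hypotheses L, eta > 0, 0 < rho < L(L+1)/eta, rho < x_{L,eta,1}.
from mpmath import mp, coulombf

mp.dps = 30                       # 30 significant digits
L, eta = 1, 0.5                   # sample parameters: L(L+1)/eta = 4

turanians = []
for rho in (0.5, 1.5, 2.5, 3.5):  # all below L(L+1)/eta and below x_{L,eta,1}
    F  = coulombf(L, eta, rho)
    Fm = coulombf(L - 1, eta, rho)
    Fp = coulombf(L + 1, eta, rho)
    turanians.append(F**2 - Fm * Fp)

print(all(t >= 0 for t in turanians))   # expected: True
```

Of course, such sampling only corroborates the inequality at isolated points; the theorem covers the full stated parameter ranges.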
\begin{theorem}\label{thturan} If $n\in\{0,1,\dots\}$ and $L\geq-3/2,$ $L\neq-1,$ $\rho>0,$ $\eta\in\mathbb{R},$ $\eta\neq0$ or $L>-3/2,$ $\rho>0$ and $\eta=0,$ then \begin{align}\label{identcou} &F_{L+n}^2(\eta,\rho)-F_{L+n-1}(\eta,\rho)F_{L+n+1}(\eta,\rho)=-\frac{\Theta C_{L+n,\eta}(\rho)}{C_{L+n,\eta}(\rho)}F_{L+n}^2(\eta,\rho)\\&-\sum_{i=1}^{\infty}\frac{B_{L+n+1,\eta}(\rho)B_{L+n+2,\eta}(\rho)\dots B_{L+n+i+1,\eta}(\rho)}{C_{L+n,\eta}(\rho)C_{L+n+1,\eta}(\rho)\dots C_{L+n+i,\eta}(\rho)}\Theta(B_{L+n+i-1,\eta}(\rho)C_{L+n+i,\eta}(\rho))F_{L+n+i}^2(\eta,\rho)\nonumber, \end{align} where $\Theta$ is the forward difference operator defined by $\Theta A_n=A_{n+1}-A_n.$ In particular, for all $L\geq 0,$ $\eta\leq0$ and $\rho>0$ the following sharp Tur\'an type inequality is valid \begin{equation}\label{coturan4} F_{L}^2(\eta,\rho)-F_{L-1}(\eta,\rho)F_{L+1}(\eta,\rho)\geq \left[1-\frac{L(2L+1)\sqrt{(L+1)^2+\eta^2}}{(L+1)(2L+3)\sqrt{L^2+\eta^2}}\right]F_{L}^2(\eta,\rho).\end{equation} \end{theorem} It is important to mention here that when $\eta=0$ the Tur\'an type inequalities \eqref{coturan1} and \eqref{coturan4} reduce to known results of Sz\'asz \cite{szasz1}. More precisely, since \cite[p. 542]{abra} $F_L(0,\rho)=\sqrt{\frac{\pi\rho}{2}}J_{L+1/2}(\rho),$ the Tur\'an type inequalities \eqref{coturan1} and \eqref{coturan4} for $\eta=0$ and $L+1/2=\nu$ become $$J_{\nu}^2(\rho)-J_{\nu-1}(\rho)J_{\nu+1}(\rho)\geq 0,$$ $$J_{\nu}^2(\rho)-J_{\nu-1}(\rho)J_{\nu+1}(\rho)\geq \frac{1}{\nu+1}J_{\nu}^2(\rho),$$ where $\nu\geq1/2$ and $\rho>0.$ For more details on Tur\'an type inequalities for Bessel functions and other generalizations we refer to the papers \cite{BP,bustoz,JB,karlin,rao,patrick,skov,szasz2,thiru} and the references therein. The proof of the above theorem is based on the next result of Ross \cite[Theorem 3]{ross}. 
\begin{lemma}\label{leross} Let $I$ be an interval and let $\{y_n\}_{n\geq0}$ be a sequence of functions of a real variable $x$, which is uniformly bounded in $n$ for each $x\in I.$ If these functions satisfy $$y_n(x)=B_ny_{n+1}(x)+C_ny_{n-1}(x),$$ where $B_n$ and $C_n$ are functions of $x,$ $x\in I,$ with the property that $C_n(x)\neq0,$ $B_n(x)\to0$ and $\prod_{i=1}^n|B_i(x)/C_i(x)|$ converges as $n\to\infty$ for all $x\in I,$ then \begin{equation}\label{eqross} y_n^2(x)-y_{n-1}(x)y_{n+1}(x)=-\frac{\Theta C_n}{C_n}y_n^2(x)-\sum_{i=1}^{\infty}\frac{B_{n+1}B_{n+2}\dots B_{n+i+1}}{C_nC_{n+1}\dots C_{n+i}}\Theta(B_{n+i-1}C_{n+i})y_{n+i}^2(x). \end{equation} \end{lemma} For the reader's convenience we note here that in formula (i) of \cite[p. 28]{ross} the expression $B_ny_n$ should be written as $B_{n+1}y_n,$ and in the main formula of \cite[Theorem 3]{ross} the expression $B_{n+i-1}$ should be written as $B_{n+i+1},$ just like in \eqref{eqross}. \begin{proof}[Proof of Theorem \ref{thturan}] In order to deduce the infinite sum representation of the Tur\'anian of the Coulomb wave functions in Theorem \ref{thturan} we shall use Lemma \ref{leross}. According to the recurrence relation \cite[p. 
539]{abra} $$B_{L,\eta}(\rho)F_{L+1}(\eta,\rho)=F_{L}(\eta,\rho)-C_{L,\eta}(\rho)F_{L-1}(\eta,\rho)$$ we have $$F_{L+n}(\eta,\rho)=B_{L+n,\eta}(\rho)F_{L+n+1}(\eta,\rho)+C_{L+n,\eta}(\rho)F_{L+n-1}(\eta,\rho).$$ Observe that when $n\in\{0,1,\dots\}$ for $L>-3/2,$ $L\neq-1,$ $\rho\in\mathbb{R}$ and $\eta\in\mathbb{R},$ $\eta\neq0,$ or $L>-3/2,$ $\rho\in\mathbb{R}$ and $\eta=0$ we have $C_{L+n,\eta}(\rho)\neq0$ and $B_{L+n,\eta}(\rho)\to0$ as $n\to\infty.$ Moreover, the product $$\prod_{i=1}^n\frac{B_{L+i,\eta}(\rho)}{C_{L+i,\eta}(\rho)}= \prod_{i=1}^n\frac{(L+i)\sqrt{(L+i+1)^2+\eta^2}}{(L+i+1)\sqrt{(L+i)^2+\eta^2}}= \sqrt{\frac{1+\frac{\eta^2}{(L+n+1)^2}}{1+\frac{\eta^2}{(L+1)^2}}}$$ converges as $n\to\infty$ for all $L>-3/2,$ $L\neq-1,$ $\rho\in\mathbb{R}$ and $\eta\in\mathbb{R}.$ We just need to check the uniform boundedness of the Coulomb wave function with respect to $L+n.$ For this we use the asymptotic relation $F_{L}(\eta,\rho)\sim C_L(\eta)\rho^{L+1}$ as $L\to\infty.$ Note that according to \cite[p. 538]{abra} and \cite[p. 43]{nishi} for $L$ positive integer we have $$C_L(\eta)=\frac{2^Le^{-\frac{\pi\eta}{2}}\left|\Gamma(L+1+\mathrm{i}\eta)\right|}{\Gamma(2L+2)}= \left\{\begin{array}{lc}\frac{2^L}{(2L+1)!}\sqrt{\frac{2\pi\prod_{k=0}^L(k^2+\eta^2)}{\eta(e^{2\pi\eta}-1)}},& \mbox{if}\ \ \ \eta\neq0\\\frac{2^LL!}{(2L+1)!},& \mbox{if}\ \ \ \eta=0\end{array}\right..$$ Thus, by using the infinite product representation of the hyperbolic sine function \cite[p. 85]{abra} we get that for fixed $\eta\in\mathbb{R}$ and $\rho>0$ $$C_L(\eta)\rho^{L+1}\to C_{L}(0)\rho^{L+1}\sqrt{\frac{2\sinh(\pi\eta)}{e^{2\pi\eta}-1}}= \frac{\sqrt{\pi}\rho^{L+1}}{2^{L-1}\Gamma\left(L+\frac{3}{2}\right)} e^{-\frac{\pi\eta}{2}}\to0\ \ \ \mbox{as}\ \ \ L\to \infty,$$ and consequently $$C_{L+n}(\eta)\rho^{L+n+1}\to0\ \ \ \mbox{as}\ \ \ n\to \infty.$$ Thus, applying \eqref{eqross}, the proof of \eqref{identcou} is complete. Now, let us focus on the Tur\'an type inequality \eqref{coturan4}. 
If we choose $n=0$ in \eqref{identcou}, then we obtain $$ {}_1\Delta_{L,\eta}(\rho)=\left(1-\frac{C_{L+1,\eta}(\rho)}{C_{L,\eta}(\rho)}\right)F_{L}^2(\eta,\rho)- \sum_{i=1}^{\infty}\frac{B_{L+1,\eta}(\rho)\dots B_{L+i+1,\eta}(\rho)}{C_{L,\eta}(\rho)\dots C_{L+i,\eta}(\rho)}\Theta(B_{L+i-1,\eta}(\rho)C_{L+i,\eta}(\rho))F_{L+i}^2(\eta,\rho). $$ In what follows we show that \begin{equation}\label{ineqross}\Theta(B_{L+i-1,\eta}(\rho)C_{L+i,\eta}(\rho))= B_{L+i,\eta}(\rho)C_{L+i+1,\eta}(\rho)-B_{L+i-1,\eta}(\rho)C_{L+i,\eta}(\rho)\leq0\end{equation} for all $L\geq0,$ $\eta\leq0,$ $\rho>0$ and $i\in\{1,2,\dots\}.$ Observe that the above inequality can be written as $$\frac{(L+i)(L+i+2)\left((L+i+1)^2+\eta^2\right)}{(2L+2i+3)\left((L+i+1)(L+i+2)+\rho\eta\right)} \leq\frac{(L+i-1)(L+i+1)\left((L+i)^2+\eta^2\right)}{(2L+2i-1)\left((L+i-1)(L+i)+\rho\eta\right)},$$ which by using the notation $\omega=L+i,$ can be rewritten as $$\frac{\omega_1(\omega_2+\eta^2)}{\omega_3(\omega_4+\rho\eta)}\leq\frac{\omega_5(\omega_6+\eta^2)}{\omega_7(\omega_8+\rho\eta)}$$ where $\omega_1=\omega(\omega+2),$ $\omega_2=(\omega+1)^2,$ $\omega_3=2\omega+3,$ $\omega_4=(\omega+1)(\omega+2),$ $\omega_5=(\omega-1)(\omega+1),$ $\omega_6=\omega^2,$ $\omega_7=2\omega-1$ and $\omega_8=(\omega-1)\omega.$ Thus, in order to show \eqref{ineqross} we need to verify the inequality $$(\omega_3\omega_5-\omega_1\omega_7)\rho\eta^3+(\omega_3\omega_4\omega_5-\omega_1\omega_7\omega_8)\eta^2+ (\omega_3\omega_5\omega_6-\omega_1\omega_2\omega_7)\rho\eta+\omega_3\omega_4\omega_5\omega_6-\omega_1\omega_2\omega_7\omega_8\geq0,$$ where $L\geq0,$ $\eta\leq0,$ $\rho>0$ and $i\in\{1,2,\dots\}.$ Computations show that for all $L\geq0$ and $i\in\{1,2,\dots\}$ we have $$\left\{\begin{array}{l}\omega_3\omega_5-\omega_1\omega_7=-3<0\\ \omega_3\omega_4\omega_5-\omega_1\omega_7\omega_8=(\omega-1)(\omega+2)(8\omega^2+8\omega+3)\geq0\\ \omega_3\omega_5\omega_6-\omega_1\omega_2\omega_7=-2\omega(\omega+1)(2\omega^2+2\omega-1)<0\\ 
\omega_3\omega_4\omega_5\omega_6-\omega_1\omega_2\omega_7\omega_8=4(\omega-1)\omega^2(\omega+1)^2(\omega+2)\geq0\end{array}\right.,$$ which in turn implies the validity of inequality \eqref{ineqross}. Now, by using the inequality \eqref{ineqross} we obtain $${}_1\Delta_{L,\eta}(\rho)\geq\left(1-\frac{C_{L+1,\eta}(\rho)}{C_{L,\eta}(\rho)}\right)F_{L}^2(\eta,\rho),$$ where $L\geq0,$ $\eta\leq0$ and $\rho>0.$ On the other hand for $\eta\leq0$ and $L\geq0$ the function $$\rho\mapsto 1-\frac{C_{L+1,\eta}(\rho)}{C_{L,\eta}(\rho)}=1-\frac{(L+2)(2L+1)\sqrt{(L+1)^2+\eta^2}}{(L+1)(2L+3)\sqrt{L^2+\eta^2}} \frac{L(L+1)+\rho\eta}{(L+1)(L+2)+\rho\eta}$$ is increasing on $(0,\infty)$ and consequently for all $L\geq0,$ $\eta\leq0$ and $\rho>0$ we have $$1-\frac{C_{L+1,\eta}(\rho)}{C_{L,\eta}(\rho)}\geq\lim_{\rho\to0}\left[1-\frac{C_{L+1,\eta}(\rho)}{C_{L,\eta}(\rho)}\right] =1-\frac{L(2L+1)\sqrt{(L+1)^2+\eta^2}}{(L+1)(2L+3)\sqrt{L^2+\eta^2}}$$ and this together with the above Tur\'an type inequality gives \eqref{coturan4}. Finally, let us consider the sharpness of \eqref{coturan4}. By using the relation \cite[p. 538]{abra} $$L(2L+1)C_L(\eta)=\sqrt{L^2+\eta^2}C_{L-1}(\eta),$$ we obtain $$\lim_{\rho\to0}\frac{{}_1\Delta_{L,\eta}(\rho)}{F_{L}^2(\eta,\rho)}=1-\frac{C_{L-1}(\eta)C_{L+1}(\eta)}{C_{L}^2(\eta)}= 1-\frac{L(2L+1)\sqrt{(L+1)^2+\eta^2}}{(L+1)(2L+3)\sqrt{L^2+\eta^2}}$$ and this shows that the above constant (depending on $L$ and $\eta$) is best possible in \eqref{coturan4}. \end{proof} \subsection{Mitrinovi\'c-Adamovi\'c and Wilker type inequalities for Coulomb wave functions} Now, we present an immediate consequence of the Tur\'an type inequality \eqref{coturan4}. For this consider the power series representation of the Coulomb wave function, namely \cite[p. 
538]{abra} $$F_{L}(\eta,\rho)=C_{L}(\eta)\sum_{n\geq0}a_{L,n}\rho^{n+L+1},$$ where $$a_{L,0}=1,\ a_{L,1}=\frac{\eta}{L+1}\ \ \ \mbox{and}\ \ \ a_{L,n}=\frac{2\eta a_{L,n-1}-a_{L,n-2}}{n(n+2L+1)},\ \ n\in\{2,3,\dots\}.$$ Observe that the Tur\'an type inequality \eqref{coturan4} is equivalent to \begin{equation}\label{coturan5} \mathcal{F}_L^2(\eta,\rho)-\mathcal{F}_{L-1}(\eta,\rho)\mathcal{F}_{L+1}(\eta,\rho)\geq0, \end{equation} where $L,\rho>0,$ $\eta\leq0$ and $\mathcal{F}_L(\eta,\rho)$ stands for the normalized regular Coulomb wave function, defined by $$\mathcal{F}_L(\eta,\rho)=C_L^{-1}(\eta)\rho^{-L-1}{F}_L(\eta,\rho)=\sum_{n\geq0}a_{L,n}\rho^n.$$ \begin{theorem}\label{thlazar} If $\eta\leq0,$ $L>-1$ and $0<\rho<x_{L,\eta,1},$ then the following Mitrinovi\'c-Adamovi\'c and Wilker type inequalities are valid \begin{equation}\label{lazarineq} \left[\mathcal{F}_L(\eta,\rho)\right]^{L+\frac{3}{2}}<\left[\mathcal{F}_{L+1}(\eta,\rho)\right]^{L+\frac{5}{2}}, \end{equation} \begin{equation}\label{wilkerineq} \left[\mathcal{F}_{L+1}(\eta,\rho)\right]^{\frac{1}{L+\frac{3}{2}}}+\frac{\mathcal{F}_{L+1}(\eta,\rho)}{\mathcal{F}_L(\eta,\rho)}\geq2. 
\end{equation} \end{theorem} We note that if we choose $\eta=0$ and $L+1/2=\nu$ in Theorem \ref{thlazar}, then we recover the following Mitrinovi\'c-Adamovi\'c and Wilker type inequalities \cite[Theorem 3]{bariczexpo} $$\mathcal{J}_{\nu}^{\nu+1}(\rho)\leq\mathcal{J}_{\nu+1}^{\nu+2}(\rho)\ \ \ \mbox{and}\ \ \ \left[\mathcal{J}_{\nu+1}(\rho)\right]^{\frac{1}{\nu+1}}+\frac{\mathcal{J}_{\nu+1}(\rho)}{\mathcal{J}_{\nu}(\rho)}\geq2,$$ where $\nu>-1/2$ and $0<\rho<j_{\nu,1}.$ Here $x_{L,0,n}=j_{\nu,n}$ stands for the $n$th positive zero of the Bessel function $J_{\nu},$ and $\mathcal{J}_{\nu}$ stands for the normalized Bessel function, defined by $\mathcal{F}_L(0,\rho)=\mathcal{J}_{\nu}(\rho)=2^{\nu}\Gamma(\nu+1)\rho^{-\nu}J_{\nu}(\rho).$ It is important to note here that the above inequalities are valid for all $\nu>-1,$ and the case $\nu=-1/2$ corresponds to the original Mitrinovi\'c-Adamovi\'c and Wilker inequalities for the sine and cosine functions. See \cite{bariczexpo,basa,wu} for more details on Mitrinovi\'c-Adamovi\'c and Wilker inequalities. 
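The inequalities of Theorem \ref{thlazar} can also be sampled numerically. The sketch below (our illustration, independent of the proof) builds the normalized function $\mathcal{F}_L(\eta,\rho)=C_L^{-1}(\eta)\rho^{-L-1}F_L(\eta,\rho)$ from mpmath's \texttt{coulombf} and the formula $C_L(\eta)=2^Le^{-\pi\eta/2}|\Gamma(L+1+\mathrm{i}\eta)|/\Gamma(2L+2)$; the parameter values $L=1/2,$ $\eta=-1/2$ and the sampled $\rho$ are ours and assumed to lie below the first positive zero.

```python
# Numerical spot-check of the Mitrinovic-Adamovic type inequality (lazarineq)
# and the Wilker type inequality (wilkerineq) at sample parameters with
# eta <= 0, L > -1 and rho below the first positive zero x_{L,eta,1}.
from mpmath import mp, mpf, coulombf, gamma, exp, fabs, pi

mp.dps = 30

def C(L, eta):
    # normalization constant C_L(eta) of the regular Coulomb wave function
    return 2**L * exp(-pi * eta / 2) * fabs(gamma(L + 1 + 1j * eta)) / gamma(2 * L + 2)

def Fn(L, eta, rho):
    # normalized regular Coulomb wave function F_L(eta,rho) / (C_L(eta) rho^{L+1})
    return coulombf(L, eta, rho) / (C(L, eta) * rho**(L + 1))

L, eta = mpf('0.5'), mpf('-0.5')
for rho in (mpf('0.5'), mpf('1'), mpf('2')):
    f, g = Fn(L, eta, rho), Fn(L + 1, eta, rho)
    assert f**(L + mpf('1.5')) < g**(L + mpf('2.5'))    # (lazarineq)
    assert g**(1 / (L + mpf('1.5'))) + g / f >= 2       # (wilkerineq)
print("both inequalities hold at the sampled points")
```
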
\begin{proof}[Proof of Theorem \ref{thlazar}] Consider the function $\varphi_L(\eta,\rho),$ defined by $$\varphi_L(\eta,\rho)=\left(L+\frac{5}{2}\right)\log\left[\mathcal{F}_{L+1}(\eta,\rho)\right]- \left(L+\frac{3}{2}\right)\log\left[\mathcal{F}_{L}(\eta,\rho)\right].$$ Observe that the above function is well defined since for each $\eta\leq0,$ $L>-1$ and $0<\rho<x_{L,\eta,1}$ we have $$\mathcal{F}_{L}(\eta,\rho)>0\ \ \ \mbox{and}\ \ \ \mathcal{F}_{L+1}(\eta,\rho)>0.$$ Now, by using the recurrence relation \eqref{reccou2} we obtain \begin{equation}\label{differcou} \mathcal{F}_{L}'(\eta,\rho)=\frac{\eta}{L+1}\mathcal{F}_{L}(\eta,\rho)-\frac{(L+1)^2+\eta^2}{(L+1)^2(2L+3)}\rho\mathcal{F}_{L+1}(\eta,\rho), \end{equation} and consequently $$2\varphi_{L}'(\eta,\rho)={\eta}\left(\frac{2L+5}{L+2}-\frac{2L+3}{L+1}\right)+ \frac{1}{\mathcal{F}_{L+1}^2(\eta,\rho)}\frac{\rho\mathcal{F}_{L+1}(\eta,\rho)}{\mathcal{F}_{L}(\eta,\rho)}\Phi_L(\eta,\rho),$$ where according to \eqref{coturan5} \begin{align*}\Phi_L(\eta,\rho)&=\frac{(L+1)^2+\eta^2}{(L+1)^2}\mathcal{F}_{L+1}^2(\eta,\rho)-\frac{(L+2)^2+\eta^2}{(L+2)^2} \mathcal{F}_{L}(\eta,\rho)\mathcal{F}_{L+2}(\eta,\rho)\\ &\geq \frac{(L+2)^2+\eta^2}{(L+2)^2}\left[\mathcal{F}_{L+1}^2(\eta,\rho)- \mathcal{F}_{L}(\eta,\rho)\mathcal{F}_{L+2}(\eta,\rho)\right]\geq0.\end{align*} On the other hand, by using the Mittag-Leffler expansion \eqref{Comitt} we obtain $$\frac{\rho\mathcal{F}_{L+1}(\eta,\rho)}{\mathcal{F}_{L}(\eta,\rho)}=\frac{(L+1)^2(2L+3)}{(L+1)^2+\eta^2}\sum_{n\geq1} \left[\frac{\rho}{x_{L,\eta,n}(x_{L,\eta,n}-\rho)}+ \frac{\rho}{y_{L,\eta,n}(y_{L,\eta,n}-\rho)}\right]>0,$$ where $\eta\leq0,$ $L>-1$ and $0<\rho<x_{L,\eta,1}.$ These imply that for those values of $\eta,\rho,L$ we have $\varphi_{L}'(\eta,\rho)\geq0$ and thus $$\varphi_L(\eta,\rho)\geq\varphi_L(\eta,0)=0,$$ which completes the proof of \eqref{lazarineq}. 
Finally, the Wilker type inequality \eqref{wilkerineq} follows immediately from the inequality \eqref{lazarineq} and the arithmetic-geometric mean inequality for the values $\left[\mathcal{F}_{L+1}(\eta,\rho)\right]^{1/(L+3/2)}$ and $\mathcal{F}_{L+1}(\eta,\rho)/\mathcal{F}_L(\eta,\rho),$ that is, $$ \left[\mathcal{F}_{L+1}(\eta,\rho)\right]^{\frac{1}{L+\frac{3}{2}}}+\frac{\mathcal{F}_{L+1}(\eta,\rho)}{\mathcal{F}_L(\eta,\rho)}\geq 2\sqrt{\left[\mathcal{F}_{L+1}(\eta,\rho)\right]^{\frac{1}{L+\frac{3}{2}}}\cdot\frac{\mathcal{F}_{L+1}(\eta,\rho)}{\mathcal{F}_L(\eta,\rho)}}\geq2.$$ \end{proof} \subsection{Some properties of Coulomb zeta functions} This subsection is devoted to the study of some functions involving the positive and negative zeros of Coulomb wave functions. We give some basic properties, like recurrence relations, monotonicity properties and we study the higher order derivatives of these functions. We note that some of the results were already obtained in \cite{stampach}, but here we use a different approach. 
For $s>1$ and $L,\eta\in\mathbb{R}$ let us consider the functions $X_{s,\eta}(L),$ $Y_{s,\eta}(L)$ and $\zeta_{s,\eta}(L),$ which we call the Coulomb zeta functions, defined by $$X_{s,\eta}(L)=\sum_{n\geq1}\frac{1}{x_{L,\eta,n}^{s}},\ \ Y_{s,\eta}(L)=\sum_{n\geq1}\frac{1}{y_{L,\eta,n}^{s}} \ \ \ \mbox{and}\ \ \ \zeta_{s,\eta}(L)=X_{s,\eta}(L)+Y_{s,\eta}(L).$$ By using the Mittag-Leffler expansion \eqref{Comitt} we obtain for all $0<\rho<\min\{x_{L,\eta,1},-y_{L,\eta,1}\}$ the generating function for the Coulomb zeta functions as follows \begin{align*} \frac{\rho F_{L+1}(\eta,\rho)}{F_L(\eta,\rho)}&=\frac{L+1}{\sqrt{(L+1)^2+\eta^2}}\sum_{n\geq1} \left[\frac{\left(\displaystyle\frac{\rho}{x_{L,\eta,n}}\right)^2}{1-\displaystyle\frac{\rho}{x_{L,\eta,n}}}+ \frac{\left(\displaystyle\frac{\rho}{y_{L,\eta,n}}\right)^2}{1-\displaystyle\frac{\rho}{y_{L,\eta,n}}}\right]\\ &=\frac{L+1}{\sqrt{(L+1)^2+\eta^2}}\sum_{n\geq1}\left[\sum_{m\geq0}\left(\displaystyle\frac{\rho}{x_{L,\eta,n}}\right)^{m+2}+ \sum_{m\geq0}\left(\frac{\rho}{y_{L,\eta,n}}\right)^{m+2}\right]\\ &=\frac{L+1}{\sqrt{(L+1)^2+\eta^2}}\sum_{m\geq0}\left[X_{m+2,\eta}(L)+Y_{m+2,\eta}(L)\right]\rho^{m+2}, \end{align*} that is, we have \begin{equation}\label{generator} \frac{F_{L+1}(\eta,\rho)}{\rho F_L(\eta,\rho)}=\frac{L+1}{\sqrt{(L+1)^2+\eta^2}}\sum_{m\geq0}\zeta_{m+2,\eta}(L)\rho^{m}. \end{equation} Let us suppose that $\eta=0.$ Then $x_{L,0,n}=-y_{L,0,n}=j_{L+1/2,n}$ for all $n\in\{1,2,\dots\}$ and the formula \eqref{generator} reduces to $$\frac{\rho J_{L+3/2}(\rho)}{J_{L+1/2}(\rho)}=\sum_{m\geq0}\zeta_{m+2,0}(L)\rho^{m+2}= 2\sum_{k\geq1}\left[\sum_{n\geq1}\frac{1}{j_{L+1/2,n}^{2k}}\right]\rho^{2k}.$$ Now, denoting $L+1/2$ by $\nu,$ for $|\rho|<j_{\nu,1}$ we obtain Kishore's formula \cite[p. 
528]{kishore} $$\frac{\rho J_{\nu+1}(\rho)}{2J_{\nu}(\rho)}=\sum_{k\geq 1}\sigma_{2k}(\nu)\rho^{2k},$$ where $$\sigma_{2k}(\nu)=X_{2k,0}(\nu-1/2)=\sum_{n\geq1}\frac{1}{j_{\nu,n}^{2k}}$$ is the so-called Rayleigh function. Observe that $$\lim_{\rho\to0}\left[\frac{F_{L+1}(\eta,\rho)}{\rho F_L(\eta,\rho)}\right]=\frac{C_{L+1}(\eta)}{C_L(\eta)}=\frac{\sqrt{(L+1)^2+\eta^2}}{(L+1)(2L+3)},$$ and consequently if $\rho\to0$ in \eqref{generator}, then we obtain $$\zeta_{2,\eta}(L)=\frac{(L+1)^2+\eta^2}{(L+1)^2(2L+3)}.$$ It is also worth mentioning that if we use \eqref{generator} and the power series representation of the Coulomb wave function, then we obtain \begin{align*}C_{L+1}(\eta)&\left(a_{L+1,0}+a_{L+1,1}\rho+{\dots}+a_{L+1,n}\rho^n+{\dots}\right)= \frac{L+1}{\sqrt{(L+1)^2+\eta^2}}C_L(\eta)\\&\times\left(a_{L,0}+a_{L,1}\rho+{\dots}+a_{L,n}\rho^n+{\dots}\right) \left(\zeta_{2,\eta}(L)+\zeta_{3,\eta}(L)\rho+{\dots}+\zeta_{n+2,\eta}(L)\rho^n+{\dots}\right), \end{align*} and identifying the coefficients of $\rho^n$ on both sides we arrive at the recurrence relation $$\zeta_{2,\eta}(L)a_{L+1,n}=\sum_{k=0}^na_{L,k}\zeta_{n-k+2,\eta}(L), \ \ \ n\in\{0,1,\dots\}.$$ By using the above relation for $n=1$ we obtain $$\zeta_{3,\eta}(L)=-\eta\frac{(L+1)^2+\eta^2}{(L+1)^3(L+2)(2L+3)},$$ and the other values $\zeta_{m,\eta}(L),$ $m\in\{4,5,\dots\},$ can be computed similarly. Moreover, by using the relations \eqref{reccou2} and \eqref{generator} we obtain $$\frac{F_L'(\eta,\rho)}{F_L(\eta,\rho)}=\frac{L+1}{\rho}+\frac{\eta}{L+1}-\sum_{m\geq0}\zeta_{m+2,\eta}(L)\rho^{m+1}$$ and substituting this into \eqref{eqdiffercou} and identifying the coefficients of $\rho^m$ on both sides, we obtain \begin{equation}\label{recurzeta}(m+2L+3)\zeta_{m+2,\eta}(L)+\frac{2\eta}{L+1}\zeta_{m+1,\eta}(L)=\sum_{k=2}^m\zeta_{k,\eta}(L)\zeta_{m-k+2,\eta}(L), \ \ \ m\in\{2,3,\dots\}.\end{equation} Observe that the above result implies that the Coulomb zeta functions are actually rational functions of $L.$ We mention 
that the above results were also obtained by \v{S}tampach and \v{S}\v{t}ov\'\i\v{c}ek \cite{stampach}; however, they used a different approach. Now, we are ready to prove the following new result by using \eqref{recurzeta}. \begin{theorem}\label{thcouzeta} If $\eta\leq0$ and $m\in\{2,3,\dots\},$ then the Coulomb zeta function $L\mapsto \zeta_{m,\eta}(L),$ as well as the functions $L\mapsto (m+2L+3)\zeta_{m+2,\eta}(L)+{2\eta}\zeta_{m+1,\eta}(L)/(L+1),$ $L\mapsto \zeta_{m,\eta}(L)/\zeta_{2,\eta}(L)$ and $L\mapsto (2L+3)^{m-1}\zeta_{m,\eta}(L)$ are completely monotonic on $(-1,\infty).$ \end{theorem} Let $\eta=0.$ Then $x_{L,0,n}=-y_{L,0,n}=j_{L+1/2,n}$ for all $n\in\{1,2,\dots\}$ and $$\zeta_{s,0}(L)=\sum_{n\geq1}\frac{(-1)^s+1}{(-1)^s}\frac{1}{j_{L+1/2,n}^s}.$$ Observe that for all $s>1$ we have $\zeta_{2s,0}(L)=2X_{2s,0}(L)$ and $\zeta_{2s-1,0}(L)=0.$ Now, taking $m=2r$ in \eqref{recurzeta} we obtain $$(2r+2L+3)\zeta_{2r+2,0}(L)=\sum_{k=1}^{r}\zeta_{2k,0}(L)\zeta_{2r-2k+2,0}(L),$$ and if we let $L+1/2=\nu$ and $r+1=q,$ then the above relation becomes $$(\nu+q)\sigma_{2q}(\nu)=\sum_{k=1}^{q-1}\sigma_{2k}(\nu)\sigma_{2q-2k}(\nu),$$ which is the result of Kishore \cite[p. 532]{kishore}. We also note here that in particular when $\eta=0$ the results of Theorem \ref{thcouzeta} reduce to the main results of Obi \cite[p. 
466]{obico} concerning the complete monotonicity of the functions $\nu\mapsto \sigma_{2q}(\nu),$ $\nu\mapsto (\nu+1)^q\sigma_{2q}(\nu)$ and $\nu\mapsto(\nu+q)\sigma_{2q}(\nu)$ on $(-1/2,\infty),$ where $q\in\{1,2,\dots\}.$ \begin{proof}[Proof of Theorem \ref{thcouzeta}] Since the sum and product of completely monotonic functions are also completely monotonic, we have that for $\eta\leq0$ the functions $L\mapsto\zeta_{2,\eta}(L)$ and $L\mapsto \zeta_{3,\eta}(L)$ are completely monotonic on $(-1,\infty).$ On the other hand, from \eqref{recurzeta} we have $$\zeta_{m+2,\eta}(L)=-\frac{2\eta}{(L+1)(m+2L+3)}\zeta_{m+1,\eta}(L)+\frac{1}{m+2L+3}\sum_{k=2}^m\zeta_{k,\eta}(L)\zeta_{m-k+2,\eta}(L), \ \ \ m\in\{2,3,\dots\}.$$ Thus, if we suppose that $L\mapsto\zeta_{s,\eta}(L)$ is completely monotonic on $(-1,\infty)$ for each $s\in\{2,3,\dots,m+1\},$ then by induction we get that $L\mapsto\zeta_{m+2,\eta}(L)$ is also completely monotonic on $(-1,\infty).$ Similarly, the functions $L\mapsto (2L+3)\zeta_{2,\eta}(L)$ and $L\mapsto (2L+3)^2\zeta_{3,\eta}(L)$ are clearly completely monotonic on $(-1,\infty)$ for all $\eta\leq0.$ Supposing that $L\mapsto(2L+3)^{s-1}\zeta_{s,\eta}(L)$ is completely monotonic on $(-1,\infty)$ for each $s\in\{2,3,\dots,m+1\},$ the relation \begin{align*}(2L+3)^{m+1}&\zeta_{m+2,\eta}(L)=-\frac{2\eta(2L+3)^{m}}{(L+1)(m+2L+3)}\zeta_{m+1,\eta}(L)\\&+ \frac{1}{m+2L+3}\sum_{k=2}^m\left[(2L+3)^{k-1}\zeta_{k,\eta}(L)\right]\left[(2L+3)^{m-k+1}\zeta_{m-k+2,\eta}(L)\right], \ \ \ m\in\{2,3,\dots\},\end{align*} and mathematical induction imply that $L\mapsto(2L+3)^{m+1}\zeta_{m+2,\eta}(L)$ is also completely monotonic on $(-1,\infty).$ Observe that for $\eta\leq0$ the functions $$L\mapsto \frac{\zeta_{3,\eta}(L)}{\zeta_{2,\eta}(L)}=-\frac{\eta}{(L+1)(L+2)},$$ $$L\mapsto \frac{\zeta_{4,\eta}(L)}{\zeta_{2,\eta}(L)}=\frac{(L+2)(L+1)^2+(5L+8)\eta^2}{(L+1)^2(L+2)(2L+3)(2L+5)}$$ are completely monotonic on $(-1,\infty).$ If $L\mapsto 
\zeta_{s,\eta}(L)/\zeta_{2,\eta}(L)$ is completely monotonic on $(-1,\infty)$ for $s\in\{2,3,\dots,m+1\},$ then in view of $$\frac{\zeta_{m+2,\eta}(L)}{\zeta_{2,\eta}(L)}=-\frac{2\eta}{(L+1)(m+2L+3)}\frac{\zeta_{m+1,\eta}(L)}{\zeta_{2,\eta}(L)}+ \frac{1}{m+2L+3}\sum_{k=2}^m\frac{\zeta_{k,\eta}(L)}{\zeta_{2,\eta}(L)}\zeta_{m-k+2,\eta}(L), \ \ \ m\in\{2,3,\dots\}$$ and the fact that $L\mapsto\zeta_{s,\eta}(L)$ is completely monotonic on $(-1,\infty)$ for all $s\in\{2,3,\dots,m\},$ mathematical induction shows that $L\mapsto \zeta_{m+2,\eta}(L)/\zeta_{2,\eta}(L)$ is also completely monotonic on $(-1,\infty).$ Finally, the first part of this theorem together with \eqref{recurzeta} implies that the function $$L\mapsto (m+2L+3)\zeta_{m+2,\eta}(L)+{2\eta}\zeta_{m+1,\eta}(L)/(L+1)$$ is also completely monotonic on $(-1,\infty)$ for all $m\in\{2,3,\dots\}$ and $\eta\leq0.$ \end{proof} \subsection{Interlacing properties of the zeros of Coulomb wave functions} The first part of the next result extends a result of Miyazaki et al. \cite[Remark 4.3]{miyazaki}, which states that if $\rho>0,$ $\eta\in\mathbb{R}$ and $L\in\{1,2,\dots\},$ then there is one and only one zero of $\rho\mapsto F_L'(\eta,\rho)$ between two consecutive zeros of $\rho\mapsto F_L(\eta,\rho).$ \begin{theorem} If $L>-1/2$ and $\eta\in\mathbb{R},$ then the zeros of $\rho\mapsto F_L(\eta,\rho)$ and $\rho\mapsto F_L'(\eta,\rho)$ are interlacing. Moreover, if $L>-1$ and $\eta\in\mathbb{R},$ then the zeros of $\rho\mapsto F_L(\eta,\rho)$ and $\rho\mapsto \rho F_L'(\eta,\rho)-(L+1)F_L(\eta,\rho)$ are interlacing. 
\end{theorem} \begin{proof} In view of \eqref{logder}, for $L>-1$ the function $\rho\mapsto F_L'(\eta,\rho)/F_L(\eta,\rho)$ is decreasing on the interval $(x_{L,\eta,k},x_{L,\eta,k+1}),$ where $k\in\{1,2,\dots\}.$ Moreover, the expression $F_L'(\eta,\rho)/F_L(\eta,\rho)$ tends to $-\infty$ as $\rho\nearrow x_{L,\eta,k+1}$ and tends to $\infty$ as $\rho\searrow x_{L,\eta,k}.$ Since \cite[Remark 17]{stampach} for $L>-1/2$ and $\eta\in\mathbb{R}$ the zeros of $\rho\mapsto F_L'(\eta,\rho)$ are real and simple, it follows that $\rho\mapsto F_L'(\eta,\rho)/F_L(\eta,\rho)$ intersects the horizontal axis once and only once, and the abscissa of the intersection point is actually the $k$th positive zero of $\rho\mapsto F_L'(\eta,\rho).$ The argument for the interlacing of the negative zeros is similar, and thus we omit the details. Hadamard's theorem states that an entire function of finite order $\tau$ may be represented in the form $$f(z)=z^me^{P_q(z)}\prod_{n\geq1}G\left(\frac{z}{a_n},p\right),$$ where $a_1,a_2,\dots$ are the nonzero roots of $f(z),$ $p\leq\tau,$ $P_q(z)$ is a polynomial in $z$ of degree $q\leq\tau,$ $m$ is the multiplicity of the root at the origin, and $G(u,p)=(1-u)e^{u+\frac{u^2}{2}+{\dots}+\frac{u^p}{p}}$ for $p>0.$ Combining this with \eqref{infprod} it follows that the growth order $\tau_C$ of the normalized entire Coulomb wave function $\rho\mapsto \mathcal{F}_{L}(\eta,\rho)$ satisfies $1\leq\tau_C<2.$ It is known that the genus of an entire function of order $\tau$ is $[\tau]$ when $\tau$ is not an integer, but the genus of an entire function of natural order $\tau$ can be either $\tau$ or $\tau-1.$ Thus, the normalized entire Coulomb wave function $\rho\mapsto \mathcal{F}_{L}(\eta,\rho)$ is of genus $0$ or $1.$ On the other hand, Laguerre's theorem on separation of zeros states that, if $z\mapsto f(z)$ is an entire function, not a constant, which is real for real $z$ and has only real zeros, and is of genus $0$ or $1,$ then the zeros of $f'$ are also 
real and are separated by the zeros of $f.$ According to \cite[Proposition 13]{stampach}, when $L>-1$ and $\eta\in\mathbb{R}$ the zeros of $\rho\mapsto \mathcal{F}_L(\eta,\rho)$ are all real. Thus, appealing to Laguerre's separation theorem we conclude that when $L>-1$ and $\eta\in\mathbb{R}$ the zeros of $\rho\mapsto \rho F_L'(\eta,\rho)-(L+1)F_L(\eta,\rho)$ are all real and interlace with the zeros of $\rho\mapsto F_L(\eta,\rho).$ \end{proof}
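The closed forms $\zeta_{2,\eta}(L)$ and $\zeta_{3,\eta}(L)$ obtained earlier from the generating function \eqref{generator} can be spot-checked numerically: for small $\rho$ one has $\frac{\sqrt{(L+1)^2+\eta^2}}{L+1}\cdot\frac{F_{L+1}(\eta,\rho)}{\rho F_L(\eta,\rho)}=\zeta_{2,\eta}(L)+\zeta_{3,\eta}(L)\rho+O(\rho^2).$ The sketch below is our illustration; it uses mpmath's \texttt{coulombf}, and the parameter values are arbitrary samples.

```python
# Numerical spot-check of the closed forms
#   zeta_2 = ((L+1)^2 + eta^2) / ((L+1)^2 (2L+3)),
#   zeta_3 = -eta ((L+1)^2 + eta^2) / ((L+1)^3 (L+2) (2L+3))
# via the small-rho expansion of the generating function (generator).
from mpmath import mp, mpf, sqrt, coulombf

mp.dps = 40
L, eta = mpf('0.5'), mpf('-1')
rho = mpf('1e-8')

ratio = sqrt((L + 1)**2 + eta**2) / (L + 1) \
    * coulombf(L + 1, eta, rho) / (rho * coulombf(L, eta, rho))

zeta2 = ((L + 1)**2 + eta**2) / ((L + 1)**2 * (2*L + 3))
zeta3 = -eta * ((L + 1)**2 + eta**2) / ((L + 1)**3 * (L + 2) * (2*L + 3))

print(abs(ratio - zeta2) < mpf('1e-6'))                  # leading coefficient matches
print(abs((ratio - zeta2) / rho - zeta3) < mpf('1e-4'))  # next coefficient matches
```
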
\section{Introduction} We consider the following setup. A source, Alice, is connected to a destination, Bob, over a packet network that can be represented as an arbitrary directed acyclic graph. Alice wants to send a message to Bob that must remain secret from a passive eavesdropper, Eve, who wiretaps an unknown subset of $k$ edges in the network. Each edge $i$ that connects node $u$ to node $v$ corresponds to a packet erasure channel with erasure probability $\delta_i$; when eavesdropping on this edge, Eve also receives the packet transmissions of node $u$, with erasure probability $\delta_{iE}$, independently from node $v$. Moreover, we assume that all legitimate nodes in the network, as well as Eve, causally learn whether $v$ has successfully received the packets that $u$ transmits; however, Eve does not report which packets she successfully receives. We propose the first, as far as we know, linear programming (LP) formulation that explicitly selects paths in the network to maximize the secure message transmission rate. It is well known that the (non-secure) capacity of a network can be described by an LP, which allows a natural flow-based interpretation of network traffic. Our work leverages this formulation to implement secure message transmission through a two-phase construction. In the first {\em key-creation} phase, Alice establishes a secret key with Bob; in the second {\em message-sending} phase, she uses the established secret key to encode and securely send a message. Accordingly, our LP selects two sets of paths (that share the network resources): key-creation paths, which Alice uses to share random packets with Bob (so as to create a secret key), and message-sending paths, which Alice uses to send the encrypted message. We term this the end-to-end encryption algorithm (Algo 1). We discuss several extensions of Algo 1, notably Algo 2, which, apart from the end-to-end key, also creates and utilizes link-by-link keys for secure message transmission. 
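The flow-based LP interpretation mentioned above can be made concrete on a toy example. The following sketch is our illustration (not one of the paper's algorithms): it computes the non-secure $s$--$d$ capacity of a small lossless network as a max flow via augmenting paths, which coincides with the optimum of the standard flow LP; the topology and capacities are hypothetical.

```python
# Max flow on a toy lossless DAG via augmenting paths (Edmonds-Karp style);
# for lossless unit-capacity edges this equals the non-secure LP capacity.
from collections import defaultdict

def max_flow(cap, s, t):
    """cap[u][v] is the capacity of edge (u, v); reverse arcs carry capacity 0."""
    flow = defaultdict(lambda: defaultdict(float))
    total = 0.0
    while True:
        # BFS for an augmenting path in the residual graph
        parent, queue = {s: None}, [s]
        while queue and t not in parent:
            u = queue.pop(0)
            for v, c in cap[u].items():
                if v not in parent and c - flow[u][v] > 1e-12:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return total
        # extract the path and push the bottleneck along it
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        b = min(cap[u][v] - flow[u][v] for u, v in path)
        for u, v in path:
            flow[u][v] += b
            flow[v][u] -= b
        total += b

# Diamond network s->a->d, s->b->d with unit-capacity edges: max flow is 2.
cap = defaultdict(dict)
for u, v, c in [('s', 'a', 1), ('s', 'b', 1), ('a', 'd', 1), ('b', 'd', 1)]:
    cap[u][v] = c
    cap[v].setdefault(u, 0)  # reverse arc for the residual graph
print(max_flow(cap, 's', 'd'))  # -> 2.0
```

The secure LPs of this paper layer key-creation and message-sending path selection on top of exactly this kind of flow structure.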
The LPs we propose are not optimal but are, we believe, still interesting. An example where the LPs are suboptimal is the triangle network, whose capacity was characterized in \cite{triangle14}. However, there are also a number of examples where the LPs do achieve the known capacity (such as the two-parallel-edges network and the line network); they outperform the best known alternative in the literature in all the cases that we have tested; and they enable new observations. For instance, over erasure networks, there are cases where the key-sharing and message-sending paths use different edges (while over lossless networks, using the same sets of paths is optimal). Another attractive attribute of the proposed LPs is their generality: the LPs take as input the erasure probabilities $\delta_i$ and $\delta_{iE}$ at every channel edge $i$, which can be arbitrary. For instance, with $\delta_i=\delta_{iE}=0$ we recover the lossless network case, and the LPs achieve the secure network coding rate (which is the optimal scheme for lossless channels). Moreover, similarly to the max-flow LP, our LPs can be extended to incorporate multiple sources, multiple receivers, edges with costs, etc. The paper is organized as follows. Section~\ref{sec:related} briefly reviews related work; Section~\ref{sec:not} introduces our notation; Section~\ref{sec:algo} presents the algorithms; and Section~\ref{sec:eval} presents evaluation results. \section{Related Work} \label{sec:related} Finding the highest achievable rate of secure communication for an arbitrary network setting is an open research problem. In the special case when the network consists of {\em error-free}, unit-capacity channels, secure network coding is optimal \cite{cai2011}. For the same problem when the channels are not unit capacity (but still error-free), complexity results suggest the hardness of calculating the secret message capacity \cite{cui2010,cui2013}.
When the network edges are erasure channels {\em all with the same parameters} and with channel state feedback, and the paths Alice uses to communicate with Bob are decided in advance, an achievable secure communication scheme is proposed in \cite{Allerton2013}. In contrast, this work provides schemes for arbitrary erasure channel parameters, and explicitly selects the best paths in the network so as to maximize the achievable rates. For a number of small networks (single channel, V-network, triangle network, line network) with erasures and state feedback, capacity characterizations and linear programming formulations were derived in \cite{triangle14,JDFPA10,ITW11,isit12,itw13,athan2014}. Our approach in this work is different: instead of schemes tailored to specific topologies, we design schemes that are general and extend to arbitrary network topologies. A preliminary version of the LP formulations (a precursor of the algorithm we call Algo 1) for this problem was presented as an invited poster in a workshop \cite{GlobalSip}. \newpage \section{\label{sec:not}System Model and Notation} A source $s$ (Alice) wants to send a message $W$ securely to a destination $d$ (Bob), over a directed acyclic graph $G=(\mathcal{V},\; \mathcal{E})$, where each edge $g$ that connects node $u$ to node $v$ represents an orthogonal discrete memoryless broadcast erasure channel with two receivers: node $v$ and potentially a passive eavesdropper (Eve). We denote by $X_{gi}$ the input to channel $g$ at time slot $i=1,\ldots,n$, and by $Y_{gi}$ and $Z_{gi}$ the corresponding outputs at node $v$ and Eve, respectively. We assume that $X_{gi}$ is a length-$L$ vector over a finite field $\mathbb{F}_{q}$ (in the paper we use the convention that $L\log(q)=1$). We use $\oslash$ as the symbol of an erasure.
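As a quick sanity check of the broadcast erasure statistics formalized below (each packet is erased at node $v$ with probability $\delta_g$ and, independently, at Eve with probability $\delta_{gE}$), one such edge can be simulated in a few lines. The simulation and the parameter values are our own illustration, not part of the paper's model:

```python
import random

def simulate_edge(n, delta_g, delta_gE, seed=0):
    """Simulate n packet transmissions over one broadcast erasure edge.

    Each packet is erased at the legitimate receiver v with probability
    delta_g and, independently, at the eavesdropper Eve with probability
    delta_gE (conditional independence given the input).  Returns the
    empirical fractions of packets received by v and by Eve.
    """
    rng = random.Random(seed)
    v_got = eve_got = 0
    for _ in range(n):
        if rng.random() > delta_g:    # v receives unless erased
            v_got += 1
        if rng.random() > delta_gE:   # Eve receives, independently of v
            eve_got += 1
    return v_got / n, eve_got / n

# Empirically close to 1 - delta_g and 1 - delta_gE respectively.
frac_v, frac_e = simulate_edge(200000, delta_g=0.2, delta_gE=0.5)
```

These empirical fractions are the quantities the LP constraints below rely on (e.g., a node receiving a $1-\delta_g$ fraction of the transmissions on an edge).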
The broadcast channel is conditionally independent, namely \[ \Pr\{Y_{g}^{n},Z_{g}^{n}\,|\,X_{g}^{n}\}=\underset{i=1}{\overset{n}{\prod}}\Pr\{Y_{gi}|X_{gi}\}\Pr\{Z_{gi}|X_{gi}\}, \] \begin{align*} \mbox {with} \quad\quad\quad\quad \Pr\{Y_{gi}|X_{gi}\} & =\begin{cases} 1-\delta_{g}, & Y_{gi}=X_{gi}\\ \delta_{g}, & Y_{gi}=\oslash, \end{cases} \end{align*} \begin{eqnarray*}\mbox{and}\quad\quad \Pr\{Z_{gi}|X_{gi}\} & =\begin{cases} 1-\delta_{gE}, & Z_{gi}=X_{gi}\\ \delta_{gE}, & Z_{gi}=\oslash. \end{cases} \end{eqnarray*} We assume that the source has unlimited private randomness, and that all other network nodes have no private randomness. We also assume public state feedback, that is, each legitimate node sends an ACK (or NACK) so that all other nodes (including Eve) learn whether the packet transmission was successful. We use the notation $S^{i-1}$ for the state of all the channels before the transmission of the $i^{th}$ symbols, and the notation $I_{u}$ and $O_{u}$ for the sets of incoming and outgoing edges of node~$u$, respectively. We require security in the strong information-theoretic sense, defined next in the same way as in \cite{Allerton2013,ITW11}. We use $X_{Ai}$, for a set $A$, to denote the vector $(X_{gi})_{g\in A}$, and $X_{A}^{i}$ to denote the vector $(X_{A1},X_{A2},\dots,X_{Ai})$. \\ {\bf Definition.} We say that $R_{SM}$ is an achievable secret message rate if for any $\epsilon>0$ and sufficiently large $n$ the following conditions hold for some functions $f_{gi,n}(\cdot),W_{B,n}(\cdot)$.\\ For every $u\in \mathcal{V}-\{s\}$ and for every $g\in O_{u}$: \begin{align} X_{gi}=f_{gi,n}(Y_{I_{u}}^{i-1},S^{i-1}),\label{eq:def1_1} \end{align} \begin{align}\mbox{and for every $g\in O_{s}$:} \quad\quad X_{gi}=f_{gi,n}(W,U_{0},S^{i-1}),\label{eq:def1_1-1} \end{align} where $U_{0}$ is the unlimited random source of Alice and where the message $W$ is uniformly distributed over $\{1,2,\ldots,2^{n(R_{SM}-\epsilon)}\}$. Bob is able to recover $W$ with high probability, \begin{align} \hat{W}=W_{B,n}(Y_{I_{d}}^{n}),\label{eq:def1_2}\\ \Pr\{\hat{W}\neq W\}<\epsilon.\label{eq:def1_3} \end{align} Eve gains negligible useful information by observing any set $V\subseteq\mathcal{E}$ of wiretapped edges: \begin{align} I(W;Z_{V}^{n}S^{n})<\epsilon,\:\forall \;\;V\subseteq\mathcal{E}. \end{align} The supremum of all achievable secret message rates is the secret message capacity of the network, denoted by $C_{SM}$. \section{End-to-End Encryption Algorithm} \label{sec:algo} \paragraph*{Broad Approach} The algorithm selects two (possibly different) sets of paths, one set for key-creation and the other for message-sending. The source uses the first (key-creation) set of paths to send random packets to the destination; intermediate nodes forward the random packets they receive from their incoming edges to their outgoing edges using two techniques, ARQ and MDS expansion, as we will describe later in this section. The source and the destination create an end-to-end secret key, based on their shared random packets and an estimate of how many of these Eve has eavesdropped.
The algorithm also selects a second set of paths, over which the source sends an information message to the destination, encrypted with the source-destination end-to-end key. Intermediate nodes simply forward the encrypted packets using ARQ. The goal of the program is to maximize the rate at which the message can be sent securely to the destination, by optimizing over two things: 1) which paths are selected for key-creation and message-sending, and 2) how the random packets are forwarded by the intermediate nodes. \subsection{Scheme Description and Algo 1 LP} We start from the case where Eve observes any (one) edge of the network. All the LP variables express rates of packets, either message-packets or random-packets. \paragraph*{Key-creation constraints} The source generates uniformly random packets, to be sent to the destination. The intermediate nodes collect the random packets they receive from all their incoming edges, partition them into subsets, and send a subset to each of their outgoing edges using two techniques, Automatic Repeat Request (ARQ) and Maximum-Distance-Separable (MDS) code expansion. To capture this, for each edge (channel) $g$, that connects say vertex $u$ to vertex $v$, the LP uses three variables $s_g$, $k_g$ and $r_g$. Node $u$ sends $k_g$ packets to node $v$ by first multiplying these packets with an MDS code of size $\frac{k_{g}}{1-\delta_{g}\delta_{gE}} \times k_g$ to create $\frac{k_{g}}{1-\delta_{g}\delta_{gE}}$ linear combinations, and then transmitting each linear combination exactly once (we discuss later why we expand with these parameters). Out of these packets, $v$ receives $k_{g}\frac{1-\delta_{g}}{1-\delta_{g}\delta_{gE}}$. Moreover, $u$ also sends to node $v$ $r_{g}$ packets using ARQ; $v$ receives all of these packets. In total, node $v$ receives random packets at rate $s_g$, with \begin{align} s_{g}=r_{g}+k_{g}\frac{1-\delta_{g}}{1-\delta_{g}\delta_{gE}}.
\end{align} If node $u$ has incoming edges $I_{u}$ and outgoing edges $O_{u}$, we have that \begin{align} \sum_{i\in I_{u}}s_{i}=\sum_{j\in O_{u}}(k_{j}+r_{j}). \end{align} This constraint requires that the random packets node $u$ sends are equal to the random packets it receives; that is, intermediate network nodes do not discard or generate random packets. \paragraph*{Message-sending constraints} The source encrypts the message using an end-to-end key (we will describe how later), and forwards it to the destination; each intermediate node uses ARQ to forward the encrypted message packets it receives. The LP uses a variable $m_g$ to capture the encrypted message packets that node $u$ sends to node $v$ through the edge $g$ that connects them; note that to do so, node $u$ makes $\frac{m_{g}}{1-\delta_{g}}$ transmissions. We require message flow conservation, i.e., \begin{align} \sum_{i\in I_{u}}m_{i} & =\sum_{j\in O_{u}}m_{j}. \end{align} \paragraph*{Time-sharing (edge capacity) constraints} Random and encrypted packets need to potentially share the network edges (channels); we thus require for every edge of the network that \begin{align} \frac{r_{g}}{1-\delta_{g}}+\frac{k_{g}}{1-\delta_{g}\delta_{gE}}+\frac{m_{g}}{1-\delta_{g}}\leq 1. \end{align} \paragraph*{Security constraints} If Eve is located on edge $g$, she will overhear a fraction \begin{align*} m_{g}\frac{1-\delta_{gE}}{1-\delta_{g}\delta_{gE}} \end{align*} of the encrypted message flow $m_g$ through that edge. A necessary condition for our scheme to be secure is that this amount of message flow is smaller than the amount of random packets that Alice and Bob have and Eve does not, i.e., the secret common random packets (this condition is also sufficient for security, as we discuss later on).
Alice and Bob share $\left(\sum_{j\in I_{D}}s_{j}\right)$ random packets; thus if, from these $\left(\sum_{j\in I_{D}}s_{j}\right)$ packets, Eve has overheard, say, $E_g$ (by observing the random packet flow through edge $g$), then the security constraint would be: \begin{align*} m_{g}\frac{1-\delta_{gE}}{1-\delta_{g}\delta_{gE}} & \leq\left(\sum_{j\in I_{D}}s_{j}\right)- E_g. \end{align*} \paragraph*{Conservatively estimating Eve's knowledge $E_g$} Consider again edge $g$ that connects vertex $u$ to vertex $v$. A conservative way to estimate Eve's knowledge is to set \[ E_g=r_{g}\frac{1-\delta_{gE}}{1-\delta_{g}\delta_{gE}} + k_{g}\frac{(1-\delta_{gE})(1-\delta_{g})}{1-\delta_{g}\delta_{gE}},\] that is, to count the random packets that both node $v$ and Eve receive. This estimate is conservative because we assume that all the randomness node $v$ receives eventually reaches the destination (Bob), which is not necessarily true. Indeed, when nodes forward packets using the MDS expansion, we ``lose'' part of the randomness (from the $k_g$ random packets, node $v$ only receives $k_{g}\frac{(1-\delta_{g})}{1-\delta_{g}\delta_{gE}}$). Algo 1 uses this approximation.
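For the simplest instance of these constraints, a single Alice-Bob edge that Eve wiretaps, the resulting LP can be assembled and solved with an off-the-shelf solver. The following sketch transcribes the single-edge constraints stated above; the variable ordering, the numeric channel parameters, and the use of scipy are our own choices:

```python
from scipy.optimize import linprog

# Single source-destination edge wiretapped by Eve.
# Variables x = [m, k, r, s]: message rate, MDS-expanded random rate,
# ARQ random rate, delivered random rate (names follow the text).
delta, delta_E = 0.2, 0.5
D = 1 - delta * delta_E          # shorthand for 1 - delta_g * delta_gE

# maximize R = m  <=>  minimize -m
c = [-1.0, 0.0, 0.0, 0.0]

# Key-creation accounting:  s = r + k (1 - delta) / D
A_eq = [[0.0, -(1 - delta) / D, -1.0, 1.0]]
b_eq = [0.0]

A_ub = [
    # time sharing:  r/(1-delta) + k/D + m/(1-delta) <= 1
    [1 / (1 - delta), 1 / D, 1 / (1 - delta), 0.0],
    # security with the conservative E_g:
    #   m (1-delta_E)/D <= s - r (1-delta_E)/D - k (1-delta_E)(1-delta)/D
    [(1 - delta_E) / D, (1 - delta_E) * (1 - delta) / D,
     (1 - delta_E) / D, -1.0],
]
b_ub = [1.0, 0.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * 4)
R = res.x[0]   # achievable secure message rate for this toy edge
```

For these parameters the solver puts all the random-packet budget on MDS expansion (consistent with the discussion below that MDS is the more efficient of the two forwarding techniques on a single edge).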
\renewcommand{\thealgorithm}{} {\small \begin{algorithm} \floatname{algorithm}{Algo 1} \begin{algorithmic} \renewcommand{\algorithmicrequire}{\textbf{Input:}} \REQUIRE Set of erasure probabilities $\delta_g$ and $\delta_{gE}$ \renewcommand{\algorithmicrequire}{\textbf{Output:} Secure message rate and achievability scheme parameters} \REQUIRE \begin{align} \max R,\text{s.t.:}\nonumber\\ R & =\sum_{i\in I_{D}}m_{i}\nonumber\\ \forall u\in \mathcal{V}-\{s,d\}:\nonumber\\ \sum_{i\in I_{u}}m_{i} & =\sum_{j\in O_{u}}m_{j}\nonumber\\ \sum_{i\in I_{u}}s_{i} & =\sum_{j\in O_{u}}(k_{j}+r_{j})\nonumber\\ \forall g\in \mathcal{E}:\nonumber\\ s_{g} & =r_{g}+k_{g}\frac{1-\delta_{g}}{1-\delta_{g}\delta_{gE}}\nonumber\\ 1 & \geq\frac{r_{g}}{1-\delta_{g}}+\frac{k_{g}}{1-\delta_{g}\delta_{gE}}+\frac{m_{g}}{1-\delta_{g}}\nonumber\\ m_{g}\frac{1-\delta_{gE}}{1-\delta_{g}\delta_{gE}} & \leq\left(\sum_{j\in I_{D}}s_{j}\right)-r_{g}\frac{1-\delta_{gE}}{1-\delta_{g}\delta_{gE}}\nonumber\\ & \;\:-k_{g}\frac{(1-\delta_{gE})(1-\delta_{g})}{1-\delta_{g}\delta_{gE}}\nonumber\\ \forall\; i:\; \;m_i, s_i, k_i, r_i &\geq 0. \nonumber \end{align} \end{algorithmic} \caption{LP with end-to-end encryption and $E_g$ approximation} \label{algo1} \end{algorithm} } \paragraph*{Message encryption at the source} The LP identifies the rate $R$ at which we can send an encrypted message, and the message rates $m_g$ that flow through each edge. We encrypt the message using a one-time pad approach and a key of rate $R$, which we create by multiplying the $ \sum s_i$ packets that Bob receives with an $ R \times \sum s_i$ MDS matrix. \subsection{Discussion} \paragraph*{Why use MDS expansion at intermediate nodes} When the network consists of a single edge, the optimal key-generation scheme has Alice generate uniformly random packets and send these to Bob \cite{ITW11}; this has the advantage that the packets that Eve (but not Bob) receives give Eve no information about the packets Bob receives.
Using MDS at intermediate nodes mimics this effect more efficiently: the observation is that, if Alice sends uniformly random packets, there exist some packets that neither Bob nor Eve receive; in a sense, these packets do not serve any purpose. To avoid this, Alice can simply expand the $k$ packets to $\frac{k}{1-\delta\delta_E}$ packets. MDS combining has the property that Eve cannot learn anything about the packets that Bob receives from the packets that only she (and not Bob) has collected. This observation and the corresponding proof were provided in \cite{athan2014}. The LP selects what fraction of the packets to send using MDS, and what fraction to send using ARQ, separately for each edge. ARQ has the advantage that it preserves all random packets, and the disadvantage that Eve learns more about the packets that Bob collects. \paragraph*{Why ARQ for message sending} ARQ is a capacity-achieving strategy over erasure channels, as is erasure coding. However, when we are interested in secure message sending, if we were to take the message, encrypt it with a one-time pad, and then use erasure correcting coding to transmit it to Bob, we would get worse performance than if we sent the encrypted message with ARQ. This is because, with erasure coding, {\em every} packet Eve receives gives her {\em new} information about the information message; with ARQ, however, she may receive repeated packets that bring her no new information. \paragraph*{Exact calculation of $E_g$} One method is similar to the standard path-LP formulation of the (non-secure) max-flow LP, i.e., the LP that assigns rates to each of the paths that connect a source to a destination. To calculate $E_g$, we associate with each path $p$ a random packet flow $s_p$ that captures the random packets delivered through that path from Alice to Bob.
We can then calculate how many of the packets Bob receives are delivered through paths that include edge $g$, and remove the fraction that Eve overhears. This approach has a compact LP form and is illustrated in Algo 2. Although this formulation has exponential complexity, it is also possible to exactly calculate $E_g$ in polynomial time (see Appendix). For this, we need to assume that network nodes perform an additional operation: every node in the network uniformly at random mixes its incoming random packets before forwarding them towards Bob; we thus ensure that ``all packets are treated equally''. We then reduce the problem to calculating what fraction of the random packets that go through a given node reach Bob. \subsection{Analysis} \paragraph*{Why the scheme is secure} This follows directly by applying Theorems 10 and 11 of \cite{athan2014} as well as Lemma 4 of \cite{JDFPA10}. For completeness we include a proof in the Appendix. \paragraph*{Reduction to secure network coding} By setting $\delta_i=\delta_{iE}=0$ for every edge of the network, the solution of the Algo 1 LP gives the same result as secure network coding. Indeed, if we assume that the mincut equals $h$, selecting $h$ edge-disjoint paths, and using $h-1$ of them to send the encrypted message and one to send random packets for key generation, is a feasible solution. From \cite{cai2011} it is also an optimal solution for this network. \paragraph*{Suboptimality} The achievability algorithm we presented is suboptimal, not only because it uses an estimate for $E_g$, but also because it only creates an end-to-end key; we know from the work in \cite{triangle14} that, to achieve the capacity in some cases, even when the intermediate nodes do not have private randomness, we need to create and exploit the common randomness they acquire by receiving the same source-generated random packets, leading to an exponential complexity problem \cite{cui2010,cui2013}.
\paragraph*{Optimality in some cases} In some cases where the secure message capacity is known, we can prove that Algo 1 (or Algo 2, which we describe later) is optimal. For illustration, we provide in the Appendix a proof that Algo 1 is optimal when Alice is connected to Bob through two parallel channels. Algo 2 achieves the capacity of the line network, as again shown in the Appendix. \subsection{Extensions}\label{sec:ext} Given the framework of Algo 1, we can directly extend it in a number of directions, as is also the case for the max-flow LP, albeit sometimes at an additional complexity cost. For instance, we can extend it to address the following:\\ $1.$ Link-by-link key creation (see for example Algo 2).\\ $2.$ Multicasting to more than one receiver (following a similar approach to the network coding LP in \cite{Allerton2013,Li2006}).\\ $3.$ Eve wiretapping more than one edge (if Eve eavesdrops on $k$ edges, $E_g$ would be the amount of random packets Eve has collected by eavesdropping on edge $g$ plus $k-1$ arbitrary other edges. We provide such an LP in the Appendix for illustration.)\\ $4.$ Multiple non-collocated sources transmitting messages to the same receiver (in this case, we can combine random packets across sources to create link-by-link keys; see Appendix).\\ $5.$ Having costs associated with edges (similarly to \cite{athan2014}).
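The path bookkeeping that the exact calculation of $E_g$ requires — enumerating the Alice-Bob paths and, for each edge $g$, the subset of paths that avoid $g$ — can be sketched directly; the toy diamond network below is our own example:

```python
def sd_paths(adj, s, d):
    """Enumerate all s-d paths in a DAG given as adjacency lists.

    Each path is returned as a tuple of directed edges (u, v).
    """
    paths = []
    def dfs(u, edges):
        if u == d:
            paths.append(tuple(edges))
            return
        for v in adj.get(u, []):
            dfs(v, edges + [(u, v)])
    dfs(s, [])
    return paths

# Toy diamond network: s -> a -> d and s -> b -> d.
adj = {'s': ['a', 'b'], 'a': ['d'], 'b': ['d']}
P_prime = sd_paths(adj, 's', 'd')          # all Alice-Bob paths

# Paths avoiding a given edge g (the set written P'_{-g} in the text):
g = ('a', 'd')
P_minus_g = [p for p in P_prime if g not in p]
```

Assigning a flow value $s_p$ to each enumerated path then gives exactly the per-path variables that the path-based LP below (Algo 2) optimizes over.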
\renewcommand{\thealgorithm}{} {\small \begin{algorithm} \floatname{algorithm}{Algo 2} \begin{algorithmic} \renewcommand{\algorithmicrequire}{\textbf{Input:}} \REQUIRE Set of erasure probabilities $\delta_g$ and $\delta_{gE}$ \renewcommand{\algorithmicrequire}{\textbf{Output:} Secure message rate and achievability scheme parameters} \REQUIRE \begin{align} \max R,\text{s.t.:}\nonumber\\ R & =\sum_{i\in I_{D}}m_{i}\nonumber\\ \forall u\in \mathcal{V}-\{s,d\}:\nonumber\\ \sum_{i\in I_{u}}m_{i} & =\sum_{j\in O_{u}}m_{j}\nonumber\\ \sum_{i\in I_{u}}s_{i} & \geq\sum_{j\in O_{u}}(k_{j}+r_{j})\nonumber\\ \forall g\in \mathcal{E}:\nonumber\\ s_{g} & =r_{g}+k_{g}\frac{1-\delta_{g}}{1-\delta_{g}\delta_{gE}}\nonumber\\ 1 & \geq\frac{r_{g}}{1-\delta_{g}}+\frac{k_{g}}{1-\delta_{g}\delta_{gE}}+\frac{m_{g}}{1-\delta_{g}}\label{eq:noc}\\ s_{g} & =\sum_{p\in P:g\in p}s_{p} \label{eq:sum}\\ m_{g}\frac{1-\delta_{gE}}{1-\delta_{g}\delta_{gE}} & \leq(k_{g}+r_{g})\frac{\delta_{gE}(1-\delta_{g})}{1-\delta_{g}\delta_{gE}}+\sum_{p\in P_{-g}^{'}}s_{p} \nonumber\\ \forall\; i,p:\; \;m_i, s_i, s_p, k_i, r_i &\geq 0. \nonumber \end{align} \end{algorithmic} \caption{LP with end-to-end and link-by-link encryption, and with $E_g$ exact calculation} \label{algo2} \end{algorithm} } \paragraph*{Algo 2 description} In this algorithm the message is encrypted both with an end-to-end key, and a link-by-link key (that is applied and peeled off at every edge). The source again selects two (possibly different) sets of paths, one set for random-packet-sending and the other for message-sending. An end-to-end key is created from these random packets. The source encrypts all the packets with this end-to-end key and transmits them appropriately through the message-sending paths. Furthermore, node $u$ (connected to node $v$ through edge $g$) may also apply an additional link-by-link key, that node $v$ will remove before further forwarding and potentially re-encrypting the message. 
Note that $u$ may send to node $v$ more random packets than node $v$ can forward to Bob, as these extra packets are still useful to create a larger link-by-link key for edge $g$. Algo 2 uses all the $s_{g}$ random packets to create the link-by-link key. These packets can no longer contribute to the end-to-end key that will also protect the message $m_{g}$ through edge $g$, and need to be accounted for. Algo 2 exactly calculates how many of the $s_{g}$ packets reach Bob, through a path flow-decomposition approach. We denote with $P$ the set of all paths in the network that begin from the source, with $P'$ all the Alice-Bob paths, and with $P_{-g}^{'}$ all Alice-Bob paths that do not utilize edge $g$. The LP assigns a value to each random-packet path flow $s_{p}$, so that \[ s_{g}=\sum_{p\in P:g\in p}s_{p}. \] For the calculation of the key for edge $g$: the link-by-link key is calculated as the random packets that pass through edge $g$ and are not heard by Eve, \[ (k_{g}+r_{g})\frac{\delta_{gE}(1-\delta_{g})}{1-\delta_{g}\delta_{gE}}. \] The end-to-end key is calculated as the random packets that were transmitted to the destination without passing through edge $g$, \[ \sum_{p\in P_{-g}^{'}}s_{p}. \] Since we are protecting against an Eve at edge $g$, we are sure that all these packets are secure. Thus the security condition becomes \[ m_{g}\frac{1-\delta_{gE}}{1-\delta_{g}\delta_{gE}}\leq(k_{g}+r_{g})\frac{\delta_{gE}(1-\delta_{g})}{1-\delta_{g}\delta_{gE}}+\sum_{p\in P_{-g}^{'}}s_{p}. \] The LP can choose among many different path-flow decompositions that achieve the same $s_{g}$ for all edges $g$; the optimal choice is the one that maximizes the secure message-sending rate. \begin{figure*}[t!] \centering \subfigure[Message-sending and key-creation paths can be different: the upper path is used only for message flow, the lower path is shared. We depict the optimal values Algo 1 has selected.
] {\includegraphics[width=0.8\columnwidth]{TwoPath} \label{fig1} } \subfigure[Line network with $N+1$ nodes.]{ \includegraphics[width=0.8\columnwidth]{Line} \label{fig:Line} } \caption{Network configurations.} \label{fig:2rel_networks} \end{figure*} \begin{figure*}[t!] \centering \subfigure[Two-hop line network with $\delta_{2E}=\delta_{1E}=\delta_{E}$, $\delta_{1}=0.2$, $\delta_{2}=0.8$.] {\includegraphics[width=0.65\columnwidth]{TwoParallelN2} \label{fig3} } \subfigure[Multiple parallel channels with $\delta_{iE}=0.8$, $\delta_{i}=0.6$ for $i$ odd, and $\delta_{iE}=0.9$, $\delta_{i}=0.6$ for $i$ even.] { \includegraphics[width=0.65\columnwidth]{ManyParallel} \label{fig4} } \subfigure[Two-hop line network with $\delta_{1E}=0.5$, $\delta_{2E}=1$, $\delta_{2}=0.6$.] {\includegraphics[width=0.65\columnwidth]{TwoHopAssymetric} \label{fig5} } \caption{Evaluation results (Matlab).} \label{fig:eval_networks} \end{figure*} \newpage \section{Evaluation}\label{sec:eval} We used numerical evaluations (through Matlab) to solve the LPs over specific configurations where the capacity is known. We verified that: \\ $\bullet$ \emph{Selecting paths helps.} In several instances, the optimal message-sending and key-creation sets of paths did not share all edges. Such an example is provided in Fig.~\ref{fig1}.\\ $\bullet$ \emph{Generating keys using MDS helps.} Fig.~\ref{fig3} shows the performance we get over a two-hop line network (Fig.~\ref{fig:Line} with $N=2$) when: 1) we allow the LP in Algo~1 to only use ARQ for the random packet propagation to the destination, and 2) we use both ARQ and MDS for the same purpose. The benefits of using MDS in this case are clear.
Note that over the line network secure network coding achieves zero rate.\\ $\bullet$ \emph{Algo 1 is suboptimal.} Fig.~\ref{fig5} compares the performance of Algo 1 with the capacity of the two-hop line network \cite{athan2014}; when Eve only wiretaps the first channel, and the first channel is better than the second, the optimal strategy uses a link-by-link key; Algo 1 cannot do this. Algo 2, which can do so, achieves the capacity.\\ $\bullet$ \emph{Using link-by-link keys can help.} See the previous point. \\ $\bullet$ \emph{We achieve benefits over secure network coding.} We compare Algo 1 against using channel coding followed by secure network coding. Fig.~\ref{fig4} considers a configuration where Alice is connected to Bob through multiple parallel channels; this is a ``worst-case'' configuration in terms of expected benefits, as the main opportunity to create keys comes from the number of paths (and not erasures), which secure network coding also leverages. The constant benefits Algo 1 offers are exactly due to exploiting the erasures over the edge that Eve wiretaps. \bibliographystyle{IEEEtran}
\section{Introduction} \label{sec:intro} Type II supernovae (SNe) originate from massive stars with $ M_{ZAMS} > 8 \mbox{M$_{\odot}$} $ \citep{2013RvMP...85..245B} which have retained substantial hydrogen in the envelope at the time of explosion. They belong to a subclass of core-collapse SNe (CCSNe), which collapse under their own gravity at the end of the nuclear burning phase, having insufficient thermal energy to withstand the collapse. The most common subtype among hydrogen-rich supernovae is type IIP. At the time of shock breakout almost the entire mass of hydrogen is ionized. Type IIP SNe have an extended hydrogen envelope, which recombines slowly over a prolonged duration, sustaining the plateau phase. During this phase the SN light curve shows almost constant brightness, lasting for 80-100 days. At the end of the plateau phase the SN experiences a sudden drop in luminosity, settling onto the slowly declining radioactive tail, also known as the nebular phase, which is mainly powered by gamma rays released from the decay of \mbox{$^{56}$Co}\ to \mbox{$^{56}$Fe}, which in turn depends upon the amount of \mbox{$^{56}$Ni}\ synthesized at the time of explosion. {The plateau slope of a type II SN light curve primarily depends on the amount of hydrogen present in the ejecta. If the hydrogen content is high, as in type IIP, the initial energy deposited by the shock and by the decay of freshly produced \mbox{$^{56}$Ni}\ is released slowly over a longer period of time. On the other hand, if the hydrogen content is relatively low, the light curve declines fast but with a higher peak luminosity. Thus if the hydrogen content is low enough, one would expect a linear decline in the light curve, classifying the event as type IIL. By the historical classification, type IIL \citep{1979A&A....72..287B} shows a linear decline in the light curve over 100 days until it reaches the radioactive tail phase.
\cite{2012ApJ...756L..30A} claimed to find type IIP and IIL to be two distinct groups of events, which may further indicate distinct classes of progenitors. However, recent studies by \cite{2014ApJ...786...67A} and \cite{2015ApJ...799..208S} on large samples of type II SNe do not favor any such bi-modality in the diversity; rather, they found a continuum in light curve slopes as well as in other physical parameters. The continuous distribution of plateau slopes in type II events is instead governed by the variable amount of hydrogen mass left in the envelope at the time of explosion. Based on a sample of 11 type IIL events, \cite{2014MNRAS.445..554F} proposed that any event showing a decline of 0.5 mag in the \textit{V}-band light curve in the first 50 days can be classified as type IIL. In light of these recent developments, a large number of SNe classified earlier as type IIP may now fall under the IIL class. Thus many of the past studies on samples of type IIP SNe, to which we shall refer in this work, may include both IIP and IIL events.} Extensive studies have been done to relate observable parameters and progenitor properties of IIP SNe \citep[e.g.,][]{1985SvAL...11..145L,2003ApJ...582..905H}. Stellar evolutionary models suggest that these SNe may originate from stars with zero-age-main-sequence masses of 9-25\mbox{M$_{\odot}$}\ \citep[e.g.,][]{2003ApJ...591..288H}. However, progenitors directly recovered for a number of nearby IIP SNe, using pre-SN \textit{HST} archival images, are found to be RSG stars within $ 8-17 \mbox{M$_{\odot}$}$ \citep{2009ARA&A..47...63S}. A recent X-ray study also infers an upper mass limit of $ <19\mbox{M$_{\odot}$} $ for type IIP progenitors \citep{2014MNRAS.440.1917D}, in close agreement with that obtained from the direct detection of progenitors.
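As a toy illustration of the 0.5 mag in 50 days classification criterion discussed above, one can interpolate a sampled \textit{V}-band light curve and compare the early decline against the threshold; the example light curves and the linear interpolation are our own assumptions, not data from this work:

```python
def classify_II(days, v_mags):
    """Classify a type II SN as 'IIL' if the V-band light curve declines
    by at least 0.5 mag in the first 50 days (the Faran et al. style
    criterion quoted in the text), else 'IIP'.  Epochs are interpolated
    linearly between observed points."""
    def mag_at(t):
        for i in range(len(days) - 1):
            if days[i] <= t <= days[i + 1]:
                w = (t - days[i]) / (days[i + 1] - days[i])
                return v_mags[i] + w * (v_mags[i + 1] - v_mags[i])
        raise ValueError("epoch outside observed range")
    return 'IIL' if mag_at(50.0) - mag_at(0.0) >= 0.5 else 'IIP'

# Hypothetical light curves (magnitudes increase as the SN fades):
plateau = classify_II([0, 25, 50, 80], [14.0, 14.1, 14.2, 14.3])  # ~flat
linear  = classify_II([0, 25, 50, 80], [14.0, 14.4, 14.8, 15.2])  # fast decline
```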
The geometry of the explosion and the presence of a pre-existing circumstellar medium (CSM), often associated with progenitor mass loss during the late stellar evolutionary phases, can significantly alter the observables, even for events originating from similar progenitors. There are a number of recent studies of type II SNe, like 2007od \citep{2011MNRAS.417..261I}, 2009bw \citep{2012MNRAS.422.1122I} and 2013by \citep{2015arXiv150106491V}, which show signatures of such CSM interaction during various phases of evolution. SN 2013ej\ is one of the youngest detected type II SNe, discovered soon after its explosion. The earliest detection was reported on July 24.125 UTC, 2013 by C. Feliciano in \textit{Bright Supernovae\footnote{http://www.rochesterastronomy.org/supernova.html}}, with a subsequent independent detection on July 24.83 UTC by \cite{2013ATel.5466....1L} at a \textit{V}-band magnitude of $ \sim $14.0. The last non-detection was reported on July 23.54 UTC, 2013 by the All Sky Automated Survey for Supernovae \citep{2013ATel.5237....1S} at a \textit{V}-band detection limit of $ > 16.7 $ mag. Therefore, we adopt an explosion epoch (0d) of July 23.8 UTC (JD $ =2456497.3\pm0.3 $), chosen between the last non-detection and the first detection of SN 2013ej. Being one of the nearest and brightest events, it provides us with an excellent opportunity to study the origin and evolution of a type II SN. Some of the basic properties of SN 2013ej\ and its host galaxy are listed in Table~\ref{tab:host}. \cite{2014MNRAS.438L.101V} presented early observations of SN 2013ej\ and, using the temperature evolution during the first week, estimated a progenitor radius of 400-600 \mbox{R$_{\odot}$}. \cite{2014MNRAS.439L..56F} used high-resolution archival images from \textit{HST} to examine the location of SN 2013ej\ and identified the progenitor candidate to be a supergiant of mass $ 8-15.5\mbox{M$_{\odot}$} $.
\cite{2013ATel.5275....1L} reported unusually high polarization from spectropolarimetric observations of the week-old SN, implying substantial asymmetry in the scattering atmosphere of the ejecta. {X-ray emission has also been detected by \textit{Swift} XRT \citep{2013ATel.5243....1M}, which may indicate that SN 2013ej\ has experienced CSM interaction.} In this work we present photometric and spectroscopic observations of SN 2013ej, and carry out qualitative as well as quantitative analysis of the various observables through modelling and comparison with other archetypal SNe. The paper is organized as follows. In section \ref{sec:obs} we describe the photometric and spectroscopic observations and data reduction. The estimation of line-of-sight extinction is discussed in section \ref{sec:ext}. In section \ref{sec:lc} we analyze the light curves and compare absolute magnitude light curves and color curves. We also derive bolometric luminosities and estimate the nickel mass from the tail luminosity. Optical spectra are analyzed in section \ref{sec:sp}, where we model and discuss the evolution of various spectral features and compare velocity profiles with those of other type II SNe. In section \ref{modelling}, we model the bolometric light curve of SN 2013ej\ and estimate progenitor and explosion parameters. Finally, in section \ref{sec:sum}, we summarize the results of this work. \input{./host.tex} \section{Observation and data reduction} \label{sec:obs} \subsection{Photometry} \label{sec:obs.phot} Broadband photometric observations in \textit{UBVRI} filters have been carried out with the 2.0m IIA Himalayan Chandra Telescope (HCT) at Hanle and the ARIES 1.0m Sampurnanand (ST) and 1.3m Devasthal Fast Optical (DFOT) telescopes at Nainital. Additionally, SN 2013ej\ has also been observed with the \textit{Swift} Ultraviolet/Optical Telescope (UVOT) in all six bands. Photometric data reduction follows the same procedure as described in \cite{2013MNRAS.433.1871B}.
Images are cleaned and processed using standard procedures of the IRAF software. DAOPHOT routines have been used to perform PSF photometry and extract differential light curves. To standardize the SN field, three Landolt standard fields (PG~0231, PG~2231 and SA~92) were observed on October 27, 2013 with the 1.0-m ST on a good photometric night with good seeing (typical FWHM $ \sim$2\arcsec.1 in \textit{V} band). For the atmospheric extinction measurement, PG~2231 and PG~0231 were observed at different airmasses. The SN field was also observed in between the standard observations. The derived standardization coefficients are represented in the following transformation equations, \begin{figure} \centering \includegraphics[width=8.4cm]{./id.eps} \caption{SN 2013ej\ in NGC 0628. The $BR$-band composite image taken with the 104-cm Sampurnanand telescope, covering an area of about 13\arcmin$\times$13\arcmin, is shown. The eight local field standards and the SN are marked in the image.} \label{fig:snid} \end{figure} \input{./photstar.tex} \input{./photsn.tex} \begin{eqnarray*} u &=& U + (7.800\pm0.005) - (0.067\pm0.009) \cdot (U-B)\\ b &=& B + (5.269\pm0.007) - (0.060\pm0.009) \cdot (B-V)\\ v &=& V + (4.677\pm0.004) - (0.056\pm0.005) \cdot (B-V)\\ r &=& R + (4.405\pm0.005) - (0.038\pm0.010) \cdot (V-R)\\ i &=& I + (4.821\pm0.006) - (0.048\pm0.006) \cdot (V-I) \end{eqnarray*} \noindent where $u$, $b$, $v$, $r$ and $i$ are instrumental magnitudes corrected for time, aperture and airmass; $U$, $B$, $V$, $R$ and $I$ are standard magnitudes. The standard deviation of the difference between the calibrated and the standard magnitudes of the observed Landolt stars is found to be $\sim$ 0.03 mag in $U$, $\sim$ 0.02 mag in $BR$ and $\sim$ 0.01 mag in $VI$. The transformation coefficients were then used to calibrate eight local standard stars in the field of SN 2013ej, which are verified to be non-variable and have brightness similar to the SN.
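The transformation equations above give instrumental magnitudes in terms of standard ones; recovering standard magnitudes therefore requires solving for the standard colors. A minimal sketch (illustrative only, not the paper's pipeline) for the $B$ and $V$ equations, using fixed-point iteration starting from the zero-point-only solution:

```python
def instrumental_to_standard(b, v, n_iter=10):
    """Invert b = B + 5.269 - 0.060(B-V) and v = V + 4.677 - 0.056(B-V).

    Because the color terms involve the *standard* color (B-V), the
    system is solved iteratively; the small color coefficients make
    the iteration converge very quickly.
    """
    B, V = b - 5.269, v - 4.677          # first guess: ignore color terms
    for _ in range(n_iter):
        color = B - V
        B = b - 5.269 + 0.060 * color
        V = v - 4.677 + 0.056 * color
    return B, V

# Round-trip check with hypothetical standard magnitudes B=15.5, V=15.0:
b = 15.5 + 5.269 - 0.060 * 0.5           # synthetic instrumental magnitudes
v = 15.0 + 4.677 - 0.056 * 0.5
B, V = instrumental_to_standard(b, v)
```

The instrumental magnitudes here are assumed to be already corrected for time, aperture and airmass, as stated in the text.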
These stars are identified in Fig.~\ref{fig:snid} and their calibrated \textit{UBVRI} magnitudes are listed in Table~\ref{tab:photstar}. These eight local standards were further used to standardize the instrumental light curve of the SN. One of these stars (star B) is common with those used in the study by \cite{2014JAVSO.tmp..275R}, and its \textit{BVRI} magnitudes are found to lie within 0.03 mag of our calibrated magnitudes. {Our calibrated magnitudes for SN 2013ej\ are also found to be consistent, within errors, with those presented in earlier studies of the event \citep{2014MNRAS.438L.101V,2014JAVSO.tmp..275R}.} The standard photometric magnitudes of SN 2013ej\ are listed in Table~\ref{tab:photsn}. This supernova was also observed with the Ultra-Violet/Optical Telescope \citep[UVOT;][]{2005SSRv..120...95R} in six bands (viz. uvw2, uvm2, uvw1, uvu, uvb, uvv) on the \textit{Swift} spacecraft \citep{2004ApJ...611.1005G}. The UV photometry was obtained from the Swift Optical/Ultraviolet Supernova Archive\footnote{http://swift.gsfc.nasa.gov/docs/swift/sne/swift\_sn.html} (SOUSA; \citealp{2014Ap&SS.354...89B}). The reduction is based on that of \citet{2009AJ....137.4517B}, including subtraction of the host galaxy count rates, and uses the revised UV zeropoints and time-dependent sensitivity from \citet{2011AIPC.1358..373B}. The UVOT photometry is listed in Table~\ref{tab:photsn}. The first month of UVOT photometry was previously presented by \cite{2014MNRAS.438L.101V}. \subsection{Spectroscopy} \label{sec:obs.spec} Spectroscopic observations were carried out at 10 phases between 12 and 125d. Of these, nine epochs of low-resolution spectra were obtained with the Himalaya Faint Object Spectrograph and Camera (HFOSC) mounted on the 2.0m HCT.
Spectroscopy with HCT/HFOSC was done using a slit width of 1.92 arcsec and grisms with resolution $ \lambda/\Delta\lambda = 1330$ (Gr7) and $2190$ (Gr8), covering wavelength ranges of $0.38 - 0.64$ $\mu m$ and $0.58 - 0.84$ $\mu m$ respectively. One high-resolution spectrum was obtained with the ARC Echelle Spectrograph (ARCES) mounted on the 3.5m ARC telescope at Apache Point Observatory (APO). ARCES is a high-resolution cross-dispersed echelle spectrograph; the spectrum is recorded in 107 echelle orders covering a wavelength range of $\lambda\sim$ 0.32-1.00\mbox{$\mu{\rm m}$}\ at a resolution of $R\sim31500$ \citep{2003SPIE.4841.1145W}. A summary of the spectroscopic observations is given in Table~\ref{tab:speclog}. \input{./speclog.tex} Spectroscopic data reduction was done in the \texttt{IRAF}\ environment. Standard reduction procedures were followed for bias subtraction and flat fielding. Cosmic-ray rejection was done using the Laplacian kernel detection algorithm L.A.Cosmic \citep{2001PASP..113.1420V}. One-dimensional low-resolution spectra were extracted using the \textsc{apall} task. Wavelength calibration was done using the \textsc{identify} task applied to FeNe and FeAr (for HCT) arc spectra taken during the observations. The wavelength calibration was cross-checked against the [\ion{O}{I}] $ \lambda5577 $ sky line in the sky spectrum and found to lie within 0.3 to 4.5 \AA\ of the actual value. Spectra were flux calibrated using the \textsc{standard, sensfunc} and \textsc{calibrate} tasks in \texttt{IRAF}. For flux calibration, spectrophotometric standards observed on the same nights as the SN spectra were used. All spectra were then tied to an absolute flux scale using the observed \textit{UBVRI} photometric fluxes of the SN.
To perform the tying, each spectrum is multiplied by a wavelength-dependent polynomial, which is convolved with the \textit{UBVRI} filter responses; the polynomial is then tuned to match the convolved flux with the observations. The one-dimensional calibrated spectra were corrected for the heliocentric velocity of the host galaxy (658 \mbox{$\rm{\,km\,s^{-1}}$}; Table~\ref{tab:host}) using the \textsc{dopcor} task. \section{Distance and Extinction} \label{sec:ext} We adopt a distance of $ 9.57\pm0.70$ Mpc, which is the mean of four different distance estimates for NGC 0628, viz., 9.91 Mpc from the Standard Candle Method (\textsc{scm}) applied to SN 2003gd by \cite{2010ApJ...715..833O}; 10.19 Mpc from the Tully-Fisher method (\texttt{HyperLeda}\footnote{http://leda.univ-lyon1.fr/}); 9.59 Mpc from the brightest-supergiant method by \cite{2005MNRAS.359..906H}; and 8.59 Mpc from the planetary nebula luminosity function \citep{2008ApJ...683..630H}. Although several distance estimates exist in the literature for each of these methods, we selected only the most recent ones. \cite{2014JAVSO.tmp..275R} estimated a distance of $ 9.1\pm0.4 $ Mpc by applying the Expanding Photosphere Method (\textsc{EPM}) to SN 2013ej, which is consistent with the value we adopt. One of the most reliable and widely accepted methods for estimating the line-of-sight reddening of SNe uses the \Nai~D absorption feature. The equivalent width (EW) of the \Nai~D doublet (\mbox{$\lambda\lambda$}~5890, 5896) is found to be correlated with the reddening estimated from the tail color curves of type Ia SNe \citep{1990A&A...237...79B,2003fthp.conf..200T}. However, \cite{2011MNRAS.415L..81P} suggested that although the \Nai~D EW is weakly correlated with \mbox{$E(B-V)$}, EWs estimated from low-resolution spectra are a poor estimator of \mbox{$E(B-V)$}.
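The adopted distance of $9.57\pm0.70$ Mpc is recovered directly from the four estimates quoted above; a minimal sketch (illustrative, not the paper's code), taking the sample standard deviation as the uncertainty:

```python
import statistics

# The four recent distance estimates to NGC 0628 quoted in the text:
# SCM (SN 2003gd), Tully-Fisher, brightest supergiant, PNLF.
distances_mpc = [9.91, 10.19, 9.59, 8.59]

d_mean = statistics.mean(distances_mpc)   # adopted distance, ~9.57 Mpc
d_err = statistics.stdev(distances_mpc)   # sample std. dev., ~0.70 Mpc
```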
\cite{2012MNRAS.426.1465P} used a larger sample of data and presented a more precise and rather different functional form of the correlation than that derived earlier. Our high-resolution echelle spectrum at 79.5d provides an excellent opportunity to investigate the line-of-sight extinction. \begin{figure} \centering \includegraphics[width=\linewidth]{./NaID_echelle.eps} \caption{Echelle spectrum at 79.5d showing the \Nai~D doublet for the Milky Way, while no feature for NGC 0628\ is detected.} \label{fig:naid.ech} \end{figure} The resolved \Nai~D doublet of the Milky Way is clearly visible in the high-resolution spectrum (recorded at 79.5d), as shown in Fig.~\ref{fig:naid.ech}, whereas no \Nai~D feature of NGC 0628\ is detected at the expected position redshifted relative to the Milky Way. This indicates that the reddening due to the host is negligible and that only Galactic reddening contributes to the total line-of-sight extinction. A similar conclusion was also reached by \cite{2014MNRAS.438L.101V} from their high-resolution spectrum obtained at 31d. Thus, we adopt a total $ \mbox{$E(B-V)$}=0.060\pm0.001 $ mag, which is entirely due to Galactic reddening \citep{2011ApJ...737..103S}; assuming a total-to-selective extinction ratio in the V band of $ R_V=3.1 $, this translates into $ A_V=0.185\pm0.004 $ mag. \section{Light curve} \label{sec:lc} \subsection{Light curve evolution and comparison} \label{sec:lc.app} The optical light curves of SN 2013ej\ in \textit{UBVRI} and the six UVOT bands are shown in Fig.~\ref{fig:lc.app}. \textit{UBVRI} photometric observations were made at 38 phases during 12 to 209d (from the plateau to the nebular phase). The plateau phase is sparsely covered, while denser follow-up began after 68d. The plateau phase lasted until $ \sim85 $d with average decline rates of 6.60, 3.57, 1.74, 1.07 and 0.74 mag (100 d)$ ^{-1} $ in the \textit{UBVRI} bands respectively.
From 95d, the light curve declines very fast until 115d, after which it settles into a relatively slowly declining nebular phase. During this phase the decline rates in the \textit{UBVRI} bands are 0.98, 1.22, 1.53, 1.42 and 1.55 mag (100 d)$ ^{-1} $ respectively. \begin{figure} \centering \includegraphics[width=\linewidth]{./lc_all_color.eps} \caption{The photometric light curves in Johnson-Cousins \textit{UBVRI} and \textit{Swift}~UVOT bands. The light curves are vertically shifted for clarity. The lines joining the data points are for visualization purposes only.} \label{fig:lc.app} \end{figure} SN 2013ej\ has also been observed by \textit{Swift}~UVOT at 35 phases during 7 to 139d. The UVOT \textit{UV}-band light curves decline steeply during the first 30d at rates of 0.182, 0.213 and 0.262 mag d$ ^{-1} $ in the \textit{uvw1, uvw2} and \textit{uvm2} bands respectively, thereafter settling into a slowly declining phase until the end of the plateau. SN 2013ej\ experiences a steeper plateau decline than that observed for SN 1999em \citep{2002AAS...201.2303L}, SN 1999gi \citep{2002AJ....124.2490L}, SN 2012aw \citep{2013MNRAS.433.1871B} and SN 2013ab \citep{2015arXiv150400838B}. For example, the plateau of SN 2012aw declines at rates of 5.60, 1.74 and 0.55 mag (100 d)$ ^{-1} $ in the $ UBV $ bands; similarly, for SN 2013ab the decline rates in \textit{UBVRI} are 7.60, 2.72, 0.92, 0.59 and 0.30 mag (100 d)$ ^{-1} $, and 0.169, 0.236 and 0.257 mag d$ ^{-1} $ in the UVOT \textit{uvw1, uvw2} and \textit{uvm2} bands (during the first 30d). {The absolute \textit{V}-band ($ M_V $) light curve of SN 2013ej\ is plotted in Fig.~\ref{fig:lc.abs} and compared with those of other well-studied type II SNe (after correcting for extinction and distance). In Table~\ref{tab:slopendrop} we list the plateau slopes of all the compared type II events.
The comparison shows that the decline rate of SN 2013ej\ during this phase (1.74 mag (100 d)$ ^{-1} $) is the highest among most of the compared SNe, except for the three type IIL SNe 1980K, 2000dc and 2013by, of which SN 1980K is among the first observed prototypical type IIL events. The early plateau ($ <40 $d) light curve of SN 2013ej\ is identical to that of SN 2009bw. However, unlike most other IIP SNe, e.g. 2009bw and 2013ab, which become flatter during the late plateau, SN 2013ej\ continues to decline at an almost steady rate until the end of the plateau ($ \sim $ 85d). The mid-plateau magnitude of SN 2013ej\ is $ M_V=-14.7 $ mag, which places it in the class of normal-luminosity type II events. SN 2013ej\ is comparable with the fast-declining and short-plateau SNe in the sample of \cite{2014ApJ...786...67A}. Following the plateau phase, the $ V $-band light curve drops very fast to reach the slowly declining nebular phase (1.53 mag (100 d)$ ^{-1} $), which is powered by the radioactive decay of $ ^{56} $Co to $ ^{56} $Fe. The drop in $ M_V $ during the plateau-nebular transition is $ \sim 2.4$ mag, which is on the higher side among the compared events. The closest comparisons are SNe 2009bw and 2012A, which exhibit drops of $ \sim $2.4 mag and $ \sim $2.5 mag respectively.} This also indicates a low amount of \mbox{$^{56}$Ni}\ mass synthesized during the explosion, which we shall further discuss in the next section. \begin{figure*} \centering \includegraphics[width=16cm]{./Mv_comparison.eps} \caption{ The M$_{V}$ light curve of SN 2013ej\ is compared with those of other type II SNe. The exponential decline of the tail light curve following the radioactive decay law for \mbox{$^{56}$Co}$ \rightarrow $\mbox{$^{56}$Fe}\ is shown with a dashed line. On the bottom left side, pairs of dotted lines in gray and green represent the slope ranges for the type IIP and IIL SN templates given by \citet{2014MNRAS.445..554F}.
The adopted explosion time in JD-2400000, distance in Mpc, \mbox{$E(B-V)$}\ in mag and the reference for the observed V-band magnitude, respectively, are: SN 1980K -- 44540.5, 5.5, 0.30; \citet{1982A&A...116...35B}, NED database; SN 1987A -- 46849.8, 0.05, 0.16; \citet{1990AJ.....99.1146H}; SN 1999em -- 51475.6, 11.7, 0.10; \citet{2002PASP..114...35L,2003MNRAS.338..939E}; SN 1999gi -- 51522.3, 13.0, 0.21; \citet{2002AJ....124.2490L}; SN 2000dc -- 51762.4, 49.0, 0.07; \citet{2014MNRAS.445..554F}, NED database; SN 2003hn -- 52866.5, 17.0, 0.19; \citet{2009AJ....137...34K,2014ApJ...786...67A}; SN 2004et -- 53270.5, 5.4, 0.41; \citet{2006MNRAS.372.1315S}; SN 2005cs -- 53549.0, 7.8, 0.11; \citet{2009MNRAS.394.2266P}; SN 2009N -- 54848.1, 21.6, 0.13; \citet{2014MNRAS.438..368T}; SN 2009bw -- 54916.5, 20.2, 0.31; \citet{2012MNRAS.422.1122I}; SN 2012A -- 55933.5, 9.8, 0.04; \citet{2013MNRAS.434.1636T}; SN 2012aw -- 56002.6, 9.9, 0.07; \citet{2013MNRAS.433.1871B}; SN 2013ab -- 56340.0, 24.0, 0.04; \citet{2015arXiv150400838B}; SN 2013by -- 56404.0, 14.8, 0.19; \citet{2015arXiv150106491V}.} \label{fig:lc.abs} \end{figure*} \input{./slopendrop.tex} The \textit{Swift}~UVOT absolute magnitude light curves of SN 2013ej\ are shown in Fig.~\ref{fig:uv.abs} and compared with those of other well-observed type II SNe. The sample is selected such that the SNe have at least a month of observations. Most SNe are not followed for more than a month by \textit{Swift}, mainly because of large distances or high extinction values. However, both these factors work in favor of SN 2013ej\, making it possible to have about four months of observations. Moreover, since the SN is located in the outskirts of a spiral arm of NGC 0628, the background flux contamination is also negligible. The comparison shows that the UV light curves of SN 2013ej\ are identical to those of SN 2012aw.
SN 2013ej\ also shows a UV plateau trend similar to that observed in SN 2012aw \citep{2013ApJ...764L..13B}, which, although expected, is rarely detected for IIP/L SNe. \begin{figure} \centering \includegraphics[width=\linewidth]{./UVOT_comp.eps} \caption{Comparison of the \textit{Swift}~UVOT UV absolute light curves of SN 2013ej\ with those of other well-observed type II SNe from UVOT. For the compared SNe, the references for UVOT data, extinction and distance are: SN 2005cs -- \citet{2009AJ....137.4517B,2009MNRAS.394.2266P}; SN 2006at -- \citet{2009AJ....137.4517B}, distance 65 Mpc, $ \mbox{$E(B-V)$}=0.031 $ mag \citep[only Galactic reddening;][]{2011ApJ...737..103S}; SN 2006bp -- \citet{2008ApJ...675..644D}; SN 2012aw -- \citet{2013ApJ...764L..13B,2013MNRAS.433.1871B}; SN 2013ab -- \citet{2015arXiv150400838B}; SN 2013by -- \citet{2014Ap&SS.354...89B,2015arXiv150106491V}. Some late data points for SN 2013ab with large errors have been omitted from the plot.} \label{fig:uv.abs} \end{figure} Broadband colors provide important information on the temporal evolution of the SN envelope. In Fig.~\ref{fig:cc.abs}, we plot the intrinsic colors \textit{U-B, B-V, V-R} and \textit{V-I} of SN 2013ej\ and compare their evolution with those of the type II-pec SN 1987A and the type IIP SNe 1999em, 2004et, 2012aw and 2013ab. All the colors show the generic signature of fast-cooling ejecta until the end of the plateau ($ \sim110 $d). With the start of the nebular phase the ejecta continue to cool at a much slower rate in the \textit{V-I} and \textit{V-R} colors, whereas \textit{U-B} and \textit{B-V} show a bluer trend. This is because, as the SN enters the nebular phase, the ejecta become depleted of free electrons, making the envelope optically thin and thus unable to thermalize the photons from the radioactive decay of $ ^{56} $Co to $ ^{56} $Fe.
\begin{figure} \centering \includegraphics[width=8.75cm]{./color_comp.eps} \caption{The intrinsic color evolution of SN 2013ej\ is compared with that of other well-studied type II SNe 1987A, 1999em, 2004et, 2012aw and 2013ab. The references for the data are the same as in Fig.~\ref{fig:lc.abs}.} \label{fig:cc.abs} \end{figure} \subsection{Bolometric light-curve} \label{sec:lc.bol} We compute the pseudo-bolometric luminosities following the method described in \cite{2013MNRAS.433.1871B}, which includes SED integration over the semi-deconvolved photometric fluxes after correcting for extinction and distance. Supernova bolometric luminosities during early phases ($ \le30 $d) are dominated by ultraviolet fluxes, while after the mid-plateau ($ \sim50 $d) the UV contribution becomes insignificant compared to the optical \citep[e.g., as seen in SNe 2012aw, 2013ab;][]{2013MNRAS.433.1871B,2015arXiv150400838B}. Similarly, during late phases ($ >100 $d) the NIR becomes dominant over the optical fluxes. However, during most of the light-curve evolution, the optical fluxes still provide a significant contribution. We compute pseudo-bolometric luminosities in the wavelength range of the \textit{U} to \textit{I} bands (3335-8750\AA). We also computed a UV-optical pseudo-bolometric light curve starting from the \textit{uvw2} band (wavelength range 1606-8750\AA). The UV contribution enhances the luminosity significantly during early phases, whereas it is almost negligible after the mid-plateau. In Fig.~\ref{fig:lc.bol}, we plot the pseudo-bolometric light curve of SN 2013ej\ and compare it with those of other SNe computed using the same technique. We also include the UV-optical bolometric light curves of SNe 2012aw and 2013ab along with that of SN 2013ej\ for comparison. Although the UV-optical light curve is initially brighter than the optical one, the two coincide completely by the end of the plateau phase (85d).
It is evident from the comparison that SN 2013ej\ experienced a steeper decline during the plateau phase, but with a much shorter duration. This is consistent with the anti-correlation observed between plateau slope and duration for type II SNe \citep{1993A&A...273..106B,2014ApJ...786...67A}. The UV-optical bolometric light curve decreases by 0.83 dex during the plateau phase (from 12 to 85d), followed by an even faster drop of 0.76 dex in a short span of 21 days (from 90 to 111d). Thereafter, the SN settles into a slowly declining nebular phase. The tail luminosities are significantly lower than those of other normal-luminosity IIP events; e.g., the luminosities of SN 2013ej\ are lower by $ \sim0.5 $ dex (at 200d) than those of the type II SNe 1987A, 1999em, 2004et and 2012aw, but higher than those of subluminous events like SN 2005cs. Another noticeable dissimilarity of the tail light curve is its high decline rate. The tail luminosity of SN 2013ej\ declines at a rate of 0.55 dex (100 d)$ ^{-1} $, which is much higher than that expected from the radioactive decay of \mbox{$^{56}$Co}\ to \mbox{$^{56}$Fe}. This is possibly because of inefficient gamma-ray trapping in the ejecta, and thus incomplete thermalization of the photons. We shall further explore this in \S\ref{modelling} in the context of modeling the light curve. \begin{figure} \includegraphics[width=8.75cm]{./bol_lc_multi.eps} \caption{The \textit{UBVRI} bolometric light-curve of {SN 2013ej} is compared with those of other well-studied supernovae. Light curves with added UVOT UV contributions are also shown for SNe 2013ej, 2013ab and 2012aw (labeled as UVO). The adopted values of distances, reddening and explosion time are the same as in Fig.~\ref{fig:lc.abs}.
The exponential decline of the tail light curve following the radioactive decay law is shown with a dashed line.} \label{fig:lc.bol} \end{figure} \subsection{Mass of nickel} \label{sec:lc.nick} Radioactive \mbox{$^{56}$Ni}\ is produced during the explosive nucleosynthesis of silicon and oxygen at the time of shock breakout in CCSNe. The nebular-phase light curve is mainly powered by the radioactive decays of \mbox{$^{56}$Ni}\ to \mbox{$^{56}$Co}\ and of \mbox{$^{56}$Co}\ to \mbox{$^{56}$Fe}, with half-life times of 6.1d and 77.1d respectively, emitting $\gamma$-rays and positrons. Thus the tail luminosity is proportional to the amount of radioactive \mbox{$^{56}$Ni}\ synthesized at the time of explosion. We determine the mass of \mbox{$^{56}$Ni}\ using the following two methods. For SN 1987A, one of the best-studied and best-observed events, the mass of \mbox{$^{56}$Ni}\ produced in the explosion has been estimated quite accurately to be $ 0.075\pm0.005 $ \mbox{M$_{\odot}$}\ \citep{1996snih.book.....A}. By comparing the tail luminosities of SN 2013ej\ and SN 1987A at similar phases, it is possible to estimate the \mbox{$^{56}$Ni}\ mass of SN 2013ej. In principle, true bolometric luminosities (including UV, optical and IR) should be used for this purpose; these are available for SN 1987A, whereas for SN 2013ej\ we have only UV and optical observations. Thus, for uniformity of comparison, we used the \textit{UBVRI} bolometric luminosities of both SNe, computed using the same method and wavelength range. We estimate the tail \textit{UBVRI} luminosity at 175d, by making a linear fit over 155 to 195d, to be $ 2.90\pm0.43\times 10^{40} $ erg s$^{-1}$. Likewise, the SN 1987A luminosity at a similar phase is estimated to be $ 9.60\pm0.06\times 10^{40} $ erg s$^{-1}$. Thus, the ratio of the SN 2013ej\ to SN 1987A luminosity is $0.302\pm0.044$, which corresponds to a \mbox{$^{56}$Ni}\ mass of $ 0.023\pm0.003 \mbox{M$_{\odot}$}$ for SN 2013ej.
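The SN 1987A scaling above is a one-line calculation; a minimal sketch (illustrative only) with the luminosities quoted in the text:

```python
import math

# UBVRI tail luminosities (erg/s) quoted in the text at ~175d,
# and the adopted 56Ni mass of SN 1987A (Msun).
L_13ej, dL_13ej = 2.90e40, 0.43e40
L_87A, dL_87A = 9.60e40, 0.06e40
M_NI_87A = 0.075

# Luminosity ratio with its propagated fractional error.
ratio = L_13ej / L_87A
d_ratio = ratio * math.hypot(dL_13ej / L_13ej, dL_87A / L_87A)

# Scale by the SN 1987A nickel mass (its own 0.005 Msun uncertainty
# is neglected here, so d_m_ni is a lower bound on the error).
m_ni = ratio * M_NI_87A
d_m_ni = d_ratio * M_NI_87A
```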
Assuming that the $\gamma$-photons emitted in the radioactive decay of \mbox{$^{56}$Co}\ thermalize the ejecta, the \mbox{$^{56}$Ni}\ mass can be independently estimated from the tail luminosity as described by \cite{2003ApJ...582..905H}, \begin{eqnarray*} M_{\rm Ni} = 7.866\times10^{-44} \times L_{t} \exp\left[ \frac{(t_{t}-t_{0})/(1+z)-6.1}{111.26}\right]\mbox{M$_{\odot}$} \end{eqnarray*} where $t_{0}$ is the explosion time, 6.1d is the half-life of \mbox{$^{56}$Ni}\ and 111.26d is the e-folding time of the \mbox{$^{56}$Co}\ decay. We compute the tail luminosity $L_{t}$ at 6 epochs within 153 to 185d from the $ V $-band data corrected for distance, extinction and a bolometric correction factor of $0.26 \pm 0.06$ mag during the nebular phase \citep{2003ApJ...582..905H}. The weighted mean value of $L_{\rm t}$ is found to be $5.45\pm0.35\times10^{40}\,$\mbox{$\rm{\,erg\,s^{-1}}$}, corresponding to a mean phase of 170d. This tail luminosity corresponds to $M_{\rm Ni} =0.019\pm0.002$ \mbox{M$_{\odot}$}. We take the weighted mean of the values estimated from the above two methods and adopt a \mbox{$^{56}$Ni}\ mass of $ 0.020\pm0.002 \mbox{M$_{\odot}$}$ for SN 2013ej. {\cite{2003ApJ...582..905H} found a strong correlation between the \mbox{$^{56}$Ni}\ mass and the mid-plateau (50d) $ V $-band absolute magnitude for type II SNe, and this correlation was further confirmed by \cite{2014MNRAS.439.2873S}, specifically for low-luminosity events. Fig.~\ref{fig:nicomp} shows the correlation of mid-plateau M$ _V $ versus \mbox{$^{56}$Ni}\ mass for 34 events, including SN 2013ej. The SN lies within the scatter of the relation, but towards the lower end of the \mbox{$^{56}$Ni}\ mass range, away from where most of the events cluster (top right).} \begin{figure} \centering \includegraphics[width=0.9\linewidth]{./Ni_comp.eps} \caption{The plot of absolute $ V $ band magnitude at 50 day versus \mbox{$^{56}$Ni}\ mass for 34 type II SNe. Data taken from \citet{2003ApJ...582..905H} and \citet{2014MNRAS.439.2873S}.
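Evaluating the Hamuy relation above with the numbers quoted in the text reproduces the second \mbox{$^{56}$Ni}\ estimate; a minimal sketch (illustrative only, with the host redshift approximated non-relativistically from the 658 \mbox{$\rm{\,km\,s^{-1}}$}\ heliocentric velocity given earlier):

```python
import math

C_KMS = 299792.458  # speed of light in km/s


def nickel_mass(L_t, t, t0=0.0, z=658.0 / C_KMS):
    """56Ni mass (Msun) from tail luminosity L_t (erg/s) at time t (days
    since explosion t0), following the Hamuy (2003) relation above.
    z ~ v/c is an assumption based on the host heliocentric velocity."""
    return 7.866e-44 * L_t * math.exp(((t - t0) / (1.0 + z) - 6.1) / 111.26)


# Weighted-mean tail luminosity 5.45e40 erg/s at a mean phase of 170d:
m_ni = nickel_mass(5.45e40, 170.0)  # ~0.019 Msun
```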
The position of SN 2013ej\ in the correlation is shown with a filled red circle.} \label{fig:nicomp} \end{figure} \section{Optical spectra} \label{sec:sp} \subsection{Key spectral features} \label{sec:sp.key} The spectroscopic evolution of SN 2013ej\ is presented in Fig.~\ref{fig:sp.all}. Preliminary identifications of spectral features have been made following previously studied type IIP SNe \citep[e.g.,][]{2002PASP..114...35L,2013MNRAS.433.1871B}. The spectrum at 12d shows broad \ha, \hb\ and \Hei\ features on top of a hot blue continuum. The 35d spectrum shows a relatively flat continuum with well-developed features of \ha, \hb\ and \Feii, along with blends of the heavier species \Tiii\ and \Baii. The \Hei\ line is no longer detectable; instead, \Nai~D features start to appear at a similar location. The spectra from 35 to 80d represent the cooler photospheric phase, in which the photosphere penetrates deeper layers rich in heavier elements like \Feii\ and \Scii. During these phases we see the emergence and development of various other heavy atomic lines and their blends, such as \Tiii, \Baii, \Nai~D and \Caii. Fig.~\ref{fig:sp.lit} shows the comparison of three plateau-phase spectra, viz. 12, 35 and 68d, with those of other well-studied type IIP SNe at similar epochs. The comparison shows that the spectra of SN 2013ej\ are broadly similar to the others in terms of the observable line features and their evolution. A notable feature in the early spectrum (12d) is the dip on the bluer wing of the \ha\ profile near 6170 \AA, which can be attributed to \Siii. {\cite{2013ATel.5275....1L} also identified this feature in the $ \sim 9$d spectrum of SN 2013ej; however, owing to the unlikelihood of such a strong \Siii\ feature at such early epochs, the possibility of a non-standard red supergiant envelope or CSM interaction was suggested.} Such dips are also detectable in the 35 and 42d spectra, which we identify as \Siii\ in the \textsc{synow}\ modeling.
\begin{figure*} \centering \includegraphics[width=14cm]{./spec_all_color.eps} \caption{The redshift-corrected spectra of SN 2013ej\ are plotted for 10 phases during 12d to 125d. The prominent P-Cygni profiles of hydrogen (\ha, \hb, \hg) and helium (\Hei\ \ld5876) are marked. The telluric absorption features of O$ _2 $ are marked with the $ \oplus $ symbol. {Portions of the spectra at the extreme blue and red ends have low SNR. Individual spectra with overall low SNR have been binned for better visualization.}} \label{fig:sp.all} \end{figure*} \begin{figure} \centering \includegraphics[width=\linewidth]{./spec_compare.eps} \caption{Comparison of the early (12d) and plateau (35d, 68d) phase spectra of {SN 2013ej} with those of other well-studied type IIP SNe 1999em \citep{2002PASP..114...35L}, 1999gi \citep{2002AJ....124.2490L}, 2004et \citep{2006MNRAS.372.1315S,2010MNRAS.404..981M}, 2012aw \citep{2013MNRAS.433.1871B} and 2013ab \citep{2015arXiv150400838B}. All comparison spectra are corrected for extinction and redshift (the adopted values are the same as in Fig.~\ref{fig:lc.abs}).} \label{fig:sp.lit} \end{figure} The spectra at 96 and 97d represent the plateau-nebular transition phase. Thereafter, the spectra at 109 and 125d represent the nebular phase, in which the ejecta have become optically thin. {These spectra show the emergence of emission features from the forbidden lines of \Oia\ \mbox{$\lambda\lambda$}~6300, 6364 and \Caiia\ \mbox{$\lambda\lambda$}~7291, 7324\AA, as well as the previously evolved permitted lines of \Hi\ and the \Nai\ \ld5893 doublet (see Fig.~\ref{fig:sp.neb}).} \begin{figure} \centering \includegraphics[width=8.5cm]{./spec_nebular.eps} \caption{The nebular-phase spectrum of SN 2013ej\ at 125d. Prominent emission and absorption features are marked and labeled.} \label{fig:sp.neb} \end{figure} \cite{2014ApJ...786L..15G} found correlations between the \ha\ absorption-to-emission strength ratio and light-curve parameters, i.e. the plateau slope and the duration of the optically thick phase.
Following their selection criterion for the phase of the SN spectrum, i.e. ten days after the start of recombination, we selected the 42d spectrum as the closest available phase. The \ha\ absorption-to-emission equivalent-width ratio of SN 2013ej\ is found to be $ 0.23\pm0.02 $, the optically thick phase lasts $ \sim85 $d and the \textit{B}-band late-plateau (40 to 85d) slope is $ \sim0.27 $ mag (100 d)$ ^{-1} $. The correlation with the duration of the optically thick phase is found to follow that presented by \cite{2014ApJ...786L..15G}. The correlation with the plateau slope also holds true, but here SN 2013ej\ lies at the borderline of the scatter relation. However, it may be noted that the \ha\ profiles are possibly contaminated by high-velocity features, as we describe in the next sections, which may result in a deviation from the correlation. \subsection{\textsc{SYNOW} modelling of spectra} \label{sec:synow} The spectra of SN 2013ej\ have been modeled with \textsc{synow}\footnote{https://c3.lbl.gov/es/\#id22} \citep{1997ApJ...481L..89F,1999MNRAS.304...67F,2002ApJ...566.1005B} for line identification and velocity estimation. \textsc{synow}\ is a highly parametrized spectrum-synthesis code which employs the Sobolev approximation to simplify the radiative transfer equations, assuming a spherically symmetric supernova expanding homologously. The strength of the \textsc{synow}\ code is its capability to simultaneously reproduce the P-Cygni profiles in synthetic spectra for a given set of atomic species and ionization states. The applicability of \textsc{synow}\ is well tested in various core-collapse SN studies \citep[e.g.][]{2012MNRAS.422.1178I,2013MNRAS.433.1871B,2013ApJ...767...71M,2014ApJ...782...98B, 2014MNRAS.438..368T,2014ApJ...781...69M} for velocity estimation and analysis of spectral lines. To model the spectra we tried various optical depth profiles (viz.
Gaussian, exponential and power law), with no significant difference among them; however, we find the exponential profile ($\tau\propto \exp[-v/v_e]$), where $v_{e}$, the e-folding velocity, is a fitted parameter, marginally better suited to match the absorption troughs of the observed spectra. While modeling the spectra, \Hi\ lines are always treated as detached. This implies that the velocity of the hydrogen layer is significantly higher than that of the photospheric layer, close to which most of the heavier atomic lines form, as assumed in the \textsc{synow}\ code. As a consequence, the highly detached \ha\ lines in the synthetic spectrum have flat-topped emissions with blue-shifted absorption counterparts. The SN 2013ej\ spectra are dereddened, and an approximate blackbody temperature is supplied to the model to match the spectral continuum. For the early spectrum (12d), the local thermodynamic equilibrium (LTE) assumption holds and \textsc{synow}\ fits the continuum well, whereas at later epochs it fails to do so. The set of atomic species incorporated to generate the synthetic model spectrum comprises \Hi, \Hei, \Feii, \Tiii, \Scii, \Caii, \Baii, \Nai\ and \Siii. The photospheric velocity $ v_{\rm ph} $ is optimized to simultaneously fit the \Feii\ (\mbox{$\lambda\lambda$}~4924, 5018, 5169) P-Cygni profiles, and \Hi\ lines are treated as detached. The optical depths and the profile parameter, the e-folding velocity, are varied for individual species to fit the respective line profiles. In Fig.~\ref{fig:sp.synph} we show the model fit of the 71d spectrum. Most of the observable spectral features are reproduced well and are identified in the figure. \begin{figure*} \centering \includegraphics[width=0.8\linewidth]{./plot_single.eps} \caption{\textsc{synow}\ modelling of the SN 2013ej\ spectrum at 71d. The model spectrum is shown with a thick solid line (blue), while the observed one is shown with a thin solid line (red).
Observed fluxes are corrected for extinction.} \label{fig:sp.synph} \end{figure*} Similarly, all spectra during 12 to 97d are modeled with \textsc{synow}. The model fits for the \Feii\ (\mbox{$\lambda\lambda$}~4924, 5018, 5169), \hb\ and \ha\ spectral sections are shown in Fig.~\ref{fig:sp.synall}. The atomic species which are important to model these features are \Hi, \Feii, \Baii, \Tiii, \Scii\ and \Nai. In addition to these, \Siii\ is also used to model the dips in the blue wing of the \ha\ P-Cygni profile during 12 to 42d. While modeling the \ha\ and \hb\ profiles, \textsc{synow}\ was unable to properly fit the broad and extended P-Cygni absorption troughs with a single regular component. In order to fit these extended troughs, we invoke a high-velocity (HV) component of \Hi. Although no separate dip is seen, possibly due to the low spectral resolution and the overlapping of broad P-Cygni profiles, the HV component can well reproduce the observed features in the synthetic model spectrum. The implication and interpretation of these HV components are further discussed in \S\ref{sec:sp.vel}. The \textsc{synow}-derived velocities for the \Feii, \ha, \hb\ lines and the corresponding HV components are listed in Table~\ref{tab:synow}. The nebular spectra during 109 to 125d have not been modeled, primarily due to limitations of the LTE assumption of \textsc{synow}, and also because nebular-phase spectra are dominated by emission lines rather than P-Cygni profiles. \input{./synow.tex} \begin{figure} \centering \includegraphics[height=7.75cm]{./synow_iron_combi.eps} \includegraphics[height=7.75cm]{./synow_ha_combi.eps} \caption{\textsc{synow}\ modelling of SN 2013ej\ spectra at 8 phases during 12d to 97d for the \hb, \Feii\ multiplet (left) and \ha\ (right) profiles. Model spectra are shown with thick solid lines (blue), while the observed ones are shown with thin solid lines (red). In the model, \Hi\ lines are treated as detached to fit the absorption troughs.
Along with \Feii\ and \Hi, other ions (\Scii, \Baii, \ion{Si}{ii}, \Nai\ and \Tiii) are also incorporated in the model to fit some weaker features, especially at later phases. In addition, high-velocity \Hi\ lines are also incorporated (42d onwards) to fit the extended \ha\ and \hb\ absorption troughs. The 97d spectrum does not cover the \hb\ and \Feii\ wavelength region, hence it is not shown here. } \label{fig:sp.synall} \end{figure} \subsection{Evolution of spectral lines} \label{sec:sp.line} Investigation of the spectral evolution sheds light on various important aspects of the SN, like the interaction of the ejecta with circumstellar material, the geometrical distribution of the expanding shell of ejecta and the formation of dust at late times. SN spectra are dominated by P-Cygni profiles, which are direct indicators of expansion velocities and evolve with the velocity of the photosphere. As the ejecta expands and the opacity decreases, allowing photons to escape from deeper layers rich in heavier elements, we are able to see the emergence and growth of various spectral lines. To illustrate the evolution of the \ha\ line, in Fig.~\ref{fig:sp.line} a partial region of each spectrum is plotted in the velocity domain relative to the rest wavelength of \ha. At 12d a broad P-Cygni profile (FWHM $ \sim9500 $ \mbox{$\rm{\,km\,s^{-1}}$}) is visible, which becomes narrower with time as the expansion slows down. The blue-shifted absorption troughs are direct estimators of the expansion velocity of the associated line-forming layer. The emission peaks are found to be blue-shifted (by $ \sim3200 $ \mbox{$\rm{\,km\,s^{-1}}$} at 12d), which progressively decreases along with the expansion velocity, almost settling to zero velocity as the SN starts to enter the nebular phase (97d).
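The velocity-domain representation above uses the standard non-relativistic Doppler relation $v = c(\lambda-\lambda_0)/\lambda_0$; a minimal sketch (the function name and the sample wavelength are illustrative, not taken from this work):

```python
# Non-relativistic Doppler conversion used to plot spectra in velocity space
# relative to a rest wavelength (here H-alpha at 6563 Angstrom).
C_KMS = 299792.458  # speed of light in km/s

def doppler_velocity(wavelength, rest_wavelength=6563.0):
    """Line-of-sight velocity in km/s; negative values are blue-shifted."""
    return C_KMS * (wavelength - rest_wavelength) / rest_wavelength
```

An emission peak observed near 6493~\AA\ thus maps to a blue-shift of roughly $-3200$ \mbox{$\rm{\,km\,s^{-1}}$}, comparable to the 12d peak shift quoted above.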
Such blue-shifted emission peaks, especially during early phases, are generic features observable in SN spectra, e.g., SNe 1987A \citep{1987A&A...182L..29H}, 1998A \citep{2005MNRAS.360..950P}, 1999em \citep{2003MNRAS.338..939E}, 2004et \citep{2006MNRAS.372.1315S}, 2012aw \citep{2013MNRAS.433.1871B}, 2013ab \citep{2015arXiv150400838B}. These features are tied to the density structure of the ejecta, which in turn controls the amount of occultation of the receding part of the ejecta, resulting in a biasing of the emission peak \citep{2014MNRAS.441..671A}; this is not limited to \ha\ but applies to all spectral lines. However, such a blue-shift is clearly detected for \ha, whereas for most other lines the emission profiles are weak and the peaks are contaminated by adjacent P-Cygni profiles. Detailed SN spectral synthesis codes like \textsc{cmfgen}\ \citep{2005ASPC..332..415D} are capable of reproducing such blue-shifted emission peaks. \begin{figure} \centering \includegraphics[width=0.7\linewidth]{./spec_all_line_6563.eps} \caption{Evolution of the \ha\ line profile at 10 phases during 12d to 125d. A zero-velocity line is plotted with a dashed line corresponding to the rest wavelength of \ha\ \ld6563.} \label{fig:sp.line} \end{figure} As inferred from Fig.~\ref{fig:sp.lit}, the spectral evolution of SN 2013ej\ is almost identical to that of other typical IIP SNe. However, the comparison of the 35 and 68d spectra indicates that the \Feii\ lines are somewhat underdeveloped as compared to other SNe at similar phases. As seen in the 68d comparison, the \Feii\ (\mbox{$\lambda\lambda$}~4924, 5018, 5169) absorption dips are significantly weaker than those seen in other SNe. Another prominent and unusual feature is seen in the nebular spectra at 109d and 125d, on top of the \ha\ emission, and is marked as feature A in Fig.~\ref{fig:sp.line}. This unusual dip results in an apparent blue-shift of the emission peak, which is in fact larger than that seen in the last plateau spectrum at 97d.
Such evolution is unexpected and runs against the general trend of emission-peak evolution in SNe. The low resolution of these spectra prohibits us from investigating this feature in detail. {This feature can be split into two emission components, one redshifted by 1200 \mbox{$\rm{\,km\,s^{-1}}$}\ and another blueshifted by 1300 \mbox{$\rm{\,km\,s^{-1}}$}\ (see \S\ref{app:nebular_ha} for further explanation) with respect to the \ha\ rest position. Such an asymmetric or double-peaked \ha\ nebular emission has been observed in a number of SNe, e.g. SN 1999em \citep{2002PASP..114...35L} and SN 2004dj \citep{2005AstL...31..792C}. \cite{2002PASP..114...35L} identified such a dip or notch in the \ha\ emission profile only during the nebular phase of SN 1999em, which they suggested was due to possible ejecta-CSM interaction or asymmetry in the line-emitting region. In SN 2004dj, the asymmetry in the nebular \ha\ spectra identified by \cite{2005AstL...31..792C} has been explained by a bipolar distribution of \mbox{$^{56}$Ni}\ within a spherical hydrogen envelope \citep{2006AstL...32..739C}.} \subsection{Ejecta velocity} \label{sec:sp.vel} Progenitor stars prior to explosion develop stratified layers of different elements, generally arranged in an elemental sequence: hydrogen is abundant in the outermost shell, whereas heavier metals like iron predominate in deeper layers. However, at the time of shock breakout significant mixing of layers may occur. Spectral lines originating from different layers of the ejecta attain different characteristic velocities. Thus the study of velocity evolution provides important clues to the explosion geometry and the characteristics of the various layers. The evolution of the photospheric layer is of special interest, as it is directly connected to the kinematics and other related properties. The photosphere represents the layer of the SN atmosphere where the optical depth attains a value of $\sim2/3$ \citep{2005A&A...437..667D}.
Due to the complex mixing of layers and the continuous recession of the recombination front, no single spectral line can represent the true photospheric layer. During the plateau phase, \Feii\ or \Scii\ lines are the best estimators of the photospheric velocity ($v_{\rm ph}$). In early phases, when \Feii\ lines are not strongly detectable, the best proxy for $v_{\rm ph}$ is \Hei, or \hb\ \citep{2012MNRAS.419.2783T} at even earlier phases. Line velocities can either be estimated by directly locating the P-Cygni absorption troughs, as done using the \textsc{splot} task of \texttt{IRAF}, or by modeling the line profiles with velocity as one of the inputs, as we do in \textsc{synow}. In Fig.~\ref{fig:sp.velall}, we plot the line velocities of \ha, \hb, \Feii\ (\mbox{$\lambda\lambda$}~4924, 5018, 5169) and \Scii\ (\mbox{$\lambda\lambda$}~4670, 6247), using the absorption-minima method. It is evident that the \Feii\ and \Scii\ line velocities are very close to each other, as these lines form at deeper layers, whereas the \ha\ and \hb\ line velocities are consistently higher at all phases as they form at larger radii. The \textsc{synow}-estimated photospheric velocities are also plotted for comparison, and are very close to the \Feii\ and \Scii\ velocities estimated from the absorption-minima method. Here the \textsc{synow}-derived photospheric velocities are estimated by modelling the \Hei\ line for the 12d spectrum and the \Feii\ lines for the rest of the spectra. Velocities for various lines estimated using \textsc{synow}\ are tabulated in Table~\ref{tab:synow}. \begin{figure} \includegraphics[width=8.5cm]{./vel_profile.eps} \caption{Velocity evolution of the \ha, \hb, \Hei, \Scii\ and \Feii\ lines. The velocities are estimated using the blueshift of the absorption minima.
The expansion velocity of the photosphere ($v_{\rm phm}$) estimated from \textsc{synow}\ modeling of the \Hei\ line at 12d and the \Feii\ lines at later phases (see Table~\ref{tab:synow}) is also overplotted for comparison.} \label{fig:sp.velall} \end{figure} Fig.~\ref{fig:sp.velph} shows the comparison of the photospheric velocity of SN 2013ej\ with the other well-studied type II SNe 1987A, 1999em, 1999gi, 2004et, 2005cs, 2012aw and 2013ab. For the purpose of comparison the absorption-trough velocities have been used, taking the mean of the \Feii\ triplet, or the \Hei\ lines at early phases where \Feii\ lines are not detectable. The velocity profile of SN 2013ej\ is very similar to those of the normal IIP SNe 1999em, 1999gi, 2004et, 2012aw and 2013ab; on the other hand, the velocities of SNe 2005cs and 1987A are significantly lower. The velocity profile of SN 2013ej\ is almost identical to those of SNe 2004et, 2012aw and 2013ab, whereas it is consistently higher than those of SNe 1999gi and 1999em by $ \sim800-900 $ \mbox{$\rm{\,km\,s^{-1}}$}. For the comparison of \Hi\ (\ha\ and \hb) velocities, we have chosen all those events which are at least photometrically and spectroscopically similar to SN 2013ej. The comparison reveals that the H velocities during later phases (60-100 d) are consistently higher than in all comparable events. SNe 2012aw and 2013ab have photospheric velocities identical to SN 2013ej, but their H velocities are significantly lower; e.g., at 80d the \ha\ velocity of SN 2013ej\ is higher by 1500 \mbox{$\rm{\,km\,s^{-1}}$}\ and \hb\ by 2400 \mbox{$\rm{\,km\,s^{-1}}$}. Likewise, the H velocities for SNe 1999em and 1999gi are even lower at similar phases. Although the SN 2004et \Hi\ velocities are somewhat on the higher end, they are still significantly less than those of SN 2013ej.
It is also to be noted that at 12d the SN 2013ej\ \Hi\ velocities are consistent with and similar to those of other normal SNe, but as the SN evolves these velocities decline relatively slowly, ultimately turning into a higher velocity profile after $ \sim40 $d. \begin{figure} \centering \includegraphics[width=0.7\linewidth]{./HVLV_comp.eps} \caption{For the 68d spectrum, the \hb\ profile is fitted using \textsc{synow}\ with various velocity components. (a) The fit with only a single high-velocity component to match the blue wing of the absorption dip, (b) with a single low-velocity component to match the red wing, (c) with a single velocity to fit only the trough, (d) with two velocity components to fit the entire absorption profile.} \label{fig:synow.hvlv} \end{figure} \subsection{High velocity components of \Hi\ and CSM interaction} \label{sec:hvcsm} As discussed in \S\ref{sec:synow}, the broad and extended \ha\ or \hb\ absorption profiles are not properly reproduced using a single \Hi\ velocity component in \textsc{synow}; those profiles can only be fitted by incorporating a high-velocity (HV) component along with the regular one. {Fig.~\ref{fig:synow.hvlv} shows the comparison of \textsc{synow}\ fits for the 68d \hb\ profile with various single velocity components as well as with two velocity components combined. A single velocity component at 5600 \mbox{$\rm{\,km\,s^{-1}}$}\ can match the blue wing well and the trough partially, whereas it does not match the red side at all. Similarly, a single velocity component at 4000 \mbox{$\rm{\,km\,s^{-1}}$}\ can partially match the red slope of the trough, but covers neither the trough nor the extended blue wing. By matching only the trough position, the model fits for a single velocity of 5300 \mbox{$\rm{\,km\,s^{-1}}$}, which fits neither the blue nor the red wing.
Even though the `detachment' of \Hi\ from the photosphere in the \textsc{synow}\ model makes the fit of the red wing worse by steepening it further, it is still conclusive that none of these single velocity components can properly reproduce the absorption profile. Only by including two velocity components together in the model could the entire \hb\ profile be reproduced. Such a scenario starts to appear in the 42d spectrum and becomes stronger as the line evolves until 97d. The \ha\ troughs are also reproduced in a similar fashion.} {However, it may be noted that such an extended \Hi\ feature may also be explained as a possible outcome of a different (complex and extended) density profile which \textsc{synow}\ cannot reproduce.} The comparison of the \ha\ and \hb\ velocities with other normal IIP SNe (see Fig.~\ref{fig:sp.velph}), estimated by directly locating the P-Cygni absorption troughs, shows that the SN 2013ej\ velocities are significantly higher and decline relatively slowly (especially during later phases; 60-100d) as compared to those seen in typical IIP SNe, e.g., 1999em, 1999gi, 2012aw or 2013ab. On the other hand, the photospheric velocity comparison with other IIP SNe does not show any such anomaly. We suggest this is the effect of blending with the \Hi\ HV components in \ha\ and \hb, which we could separate out while modeling these broad features with \textsc{synow}\ using two velocity components. The regular \ha\ and \hb\ velocities estimated from \textsc{synow}\ decline at a normal rate consistent with that seen in other SNe (see Fig.\ref{fig:sp.velph}), whereas the HV components remain at velocities higher by $ 1000 - 2000 $ \mbox{$\rm{\,km\,s^{-1}}$}, declining at a relatively slower rate. It is also interesting to note that the velocity difference between the regular and HV components for \ha\ and \hb\ is similar at the same epochs.
\cite{2007ApJ...662.1136C} identified similar HV absorption features close to the \ha\ and \hb\ troughs in SNe 1999em and 2004dj, which remained constant with time. The presence of such HV features has also been detected in SN 2009bw \citep{2012MNRAS.422.1122I} and SN 2012aw \citep{2013MNRAS.433.1871B}, which is suggestive of interaction of the SN ejecta with pre-existing CSM. Similar to SN 2013ej, HV signatures have been detected throughout the plateau-phase evolution of SN 2009bw, while in SN 2012aw such features were only detected at the late plateau phase (55 to 104d). Although we found HV components in SN 2013ej\ by modeling the extended P-Cygni troughs, we are unable to visually detect two such individual velocity components, possibly because of our signal-to-noise-limited spectra and the weaker strength of the HV components. \cite{2007ApJ...662.1136C} argued that SN ejecta can interact with the cooler dense shell of CSM material, which might have originated from pre-supernova mass loss in the form of stellar winds. Their analysis showed that such interaction can lead to the detection of HV absorption features on the bluer wings of the Balmer lines due to enhanced excitation of the outer layers of the unshocked ejecta. We therefore suggest weak or moderate ejecta-CSM interaction in SN 2013ej. {X-ray emission from SN 2013ej\ has also been reported by \cite{2013ATel.5243....1M}, who measured a 0.3-10 keV count rate of 2.7$ \pm $0.5 cps, translating into a flux of $ \sim1.1\times10^{-13} $ erg~s$ ^{-1} $cm$ ^{-2} $ (assuming a simple power-law spectral model with photon index $ \Gamma=2 $). Such X-ray emission may also indicate ejecta-CSM interaction in SN 2013ej.} \begin{figure} \centering \includegraphics[width=8.5cm]{./vel_comparison.eps} \caption{The photospheric velocity ($v_{\rm ph}$) evolution of {SN 2013ej} (top) is compared with other well-studied type II SNe.
The $v_{\rm ph}$ plotted here are the absorption-trough velocities (average of the \Feii\ lines at late phases and \Hei\ at early phases). Similar comparisons of P-Cygni absorption velocities, but for \ha\ and \hb, are shown in the middle and bottom panels, respectively. The regular velocity components for \ha\ and \hb\ estimated from \textsc{synow}\ (without HV components; see Table~\ref{tab:synow}) are also plotted for comparison.} \label{fig:sp.velph} \end{figure} \section{Status of SN 2013ej\ in type II diversity} \subsection{Factors favoring SN 2013ej\ as type IIL} {Having characterized the event both photometrically and spectroscopically, we may now revisit the aspects which favor SN 2013ej\ as a type IIL event. The SN was originally classified as type IIP \citep{2013ATel.5228....1V} based on spectroscopic similarity to SN 1999gi. Because the same underlying physical mechanisms govern both type IIP and IIL SNe, early spectra may not clearly distinguish these subclasses of type II SNe. The distinguishing factor between IIP and IIL is nominal and mainly depends upon light-curve characteristics. SN 2013ej\ shows a decline of 1.74 \mbox{mag (100 d)$ ^{-1} $}\ (see Table~\ref{tab:slopendrop}), or $ \sim0.87 $ mag in 50 days, which definitely falls within the criteria for type IIL SNe proposed by \cite{2014MNRAS.445..554F}. In Fig.~\ref{fig:lc.abs}, the spread of template light curves for types IIP and IIL \citep{2014MNRAS.445..554F} is shown along with the M$ _V $ light curves of the SNe sample. It is evident that under this scheme of classification SN 2013ej\ is not a type IIP; rather, it is marginally within the range of the type IIL template light curves.
This is also justified by the basic idea behind these classifications, that a type IIP must show a `plateau' of almost constant brightness for some time ($ \sim90$d), which is not the case for SN 2013ej.} {Due to the very fact that SN type II light curves and physical properties exhibit a continuous distribution rather than a bi-modality \citep{2014ApJ...786...67A}, SN 2013ej\ shows intermediate characteristics in the SN type II diversity.} {One distinguishing spectroscopic property \cite{2014MNRAS.445..554F} found for type IIL SNe is the overall higher photospheric (\Feii~\ld5169) velocity and flatter \Hi\ (\hb\ and \ha) velocity profiles as compared to their type IIP counterparts. Although the \Feii\ velocities are on the higher end as compared to typical IIP SNe velocities, we do not find this a remarkable enough deviation to distinguish SN 2013ej\ from the IIP sample. However, we do see an anomaly in the \ha, \hb\ absorption-minima velocity profiles, as they start off with velocities consistent with those of type IIP but decline relatively slowly (see \S\ref{sec:sp.vel} for more description of this feature), ultimately surpassing the faster-declining IIP velocity profiles after 50 days. This characteristic feature of the \Hi\ velocities of SN 2013ej\ is typical for most IIL SNe, as found by \cite{2014MNRAS.445..554F}.} \subsection{CSM interaction and type IIL} {\cite{2014MNRAS.445..554F} proposed a possible explanation for the flatter velocity profiles in IIL SNe: the lack of hydrogen in the deeper and more slowly expanding layers of the ejecta, resulting in higher \Hi\ absorption velocities arising mostly from the outer layers. However, for SN 2013ej\ we suggest that the flattening of the \ha\ and \hb\ velocity profiles is due to the contamination by the HV component of \Hi\ (see \S\ref{sec:hvcsm}). Indication of CSM interaction in SN 2013ej\ may also be inferred from the X-ray detection by \cite{2013ATel.5243....1M}.
\cite{2015arXiv150106491V} found SN 2013by, a type IIL SN, to be moderately interacting with CSM. This led them to question the prevalence of CSM interaction among IIL SNe in general. Type IIL SNe originate from progenitors similar to those of IIPs, but have lost a significant fraction of their hydrogen before explosion during the pre-SN evolution. Hence it may not be unusual to detect HV \Hi\ signatures in the \ha, \hb\ absorption profiles as a consequence of ejecta-CSM interaction. A moderate or weak interaction may produce a HV component blending with the \ha, \hb\ profiles, which may result in a shift of the absorption minima rather than a prominent secondary HV dip. Such a scenario may well explain the relatively higher and flatter \Hi\ velocity profiles of most type IIL SNe as compared to their IIP counterparts, found by \cite{2014MNRAS.445..554F} on the basis of direct velocity estimates from absorption minima.} {Another example of CSM interaction in a type IIL is SN 2008fq, which shows strong interaction signatures like a type IIn \citep{2013A&A...555A..10T}, but also shows a steep IIL-like decline during the first 60 days \citep{2014MNRAS.445..554F}. {Supernova PTF11iqb \citep{2015MNRAS.449.1876S} is also a type IIn SN, having prominent CSM interaction signatures, but with an IIL-like steeper light curve. Initial spectra of this SN showed IIn characteristics; however, late-plateau spectra revealed features similar to type IIL. PTF11iqb originated from a progenitor identical to that of a type IIP/L, instead of an LBV as expected for a typical IIn.} However, because type IIL events are rarely detected and decline fast in magnitude, we do not have sufficient information to investigate CSM interaction in all such objects.
Thus, the question remains open whether all or most IIL SNe interact with CSM and whether the flatter \Hi\ absorption-minima velocity profiles are a consequence of interaction.} \section{Light curve modelling}\label{modelling} To determine the explosion parameters of SN 2013ej, the observed light curve is modeled following the semi-analytical approach originally developed by \cite{1980ApJ...237..541A} and further refined in \cite{1989ApJ...340..396A}. A more appropriate and accurate approach would be detailed hydrodynamical modeling \citep[e.g.][]{1977ApJS...33..515F,2007A&A...461..233U,2011ApJ...729...61B,2011ApJ...741...41P} to determine the explosion properties; however, the application of simple semi-analytical models \citep{1980ApJ...237..541A,1982ApJ...253..785A,1989ApJ...340..396A,1993ApJ...414..712P,2003MNRAS.338..711Z,2012ApJ...746..121C} can be useful to get preliminary yet reliable estimates of the parameters without running resource-intensive and time-consuming hydrodynamical codes. \cite{2014A&A...571A..77N} also followed the original semi-analytical formulation presented by \cite{1989ApJ...340..396A} and modeled a few well-studied type II SNe. The results were compared with hydrodynamical models from the literature and found to be in good agreement. The model light curve is computed by solving the energy balance of the spherically symmetric supernova envelope, which is assumed to be in homologous expansion with a spatially uniform density profile. The temperature evolution is given as \citep{1980ApJ...237..541A}, \[ T(x,t)^4=T_0^4\psi(x)\phi(t)\left(\frac{R_0}{R(t)}\right)^4 ,\] where $ x $ is the dimensionless co-moving radius relative to the mass of the envelope and $ \psi(x) $ is the radial component of the temperature profile, which falls off with radius as $ \sin(\pi x)/(\pi x) $. Here we incorporate the effect of recombination, as the shock-heated and ionized envelope expands, cools down and recombines at temperature $ T_{rec} $.
We define $ x_i $ as the co-moving radius of the recombination front; the opacity ($ \kappa $) changes very sharply at this layer, such that $ \kappa \approx 0 $ for the ejecta above $ x_i $. Following the treatment of \cite{1989ApJ...340..396A}, the temporal component of the temperature, $ \phi(t) $, can be expressed as \citep{2014A&A...571A..77N}, \[ \frac{d\phi(t)}{dz}= \frac{R(t)}{R_0 x_i^3}\left[p_1\zeta(t)-p_2\phi(t)x_i-2 x_i^2 \phi(t) \frac{R_0}{R(t)}\frac{dx_i}{dz}\right] ,\] where $ \zeta(t) $ is the total radioactive energy input from the decay chain of a unit mass of \mbox{$^{56}$Ni}, normalized to the energy production rate of \mbox{$^{56}$Ni}. The rest of the parameters in the equation have their usual meanings and can be found in the aforementioned papers. From this ordinary differential equation we obtain the solution for $ \phi(t) $ using the Runge-Kutta method. The treatment adopted to determine $ x_i $ is similar to that of \cite{2014A&A...571A..77N}, where we numerically determine the radius $ x_i $ (to an accuracy of $ 10^{-12} $) at which the temperature of the layer reaches $ T_{rec} $. Once we obtain the solutions for $ \phi(t) $ and $ x_i $, the total bolometric luminosity is calculated as the sum of the radioactive heating and the rate of energy released due to recombination, \[ L(t)=x_i\frac{\phi(t)E_{th}(0)}{\tau_d}\left(1-e^{-A_g/t^2}\right)+4\pi r_i^2 Q\rho(x_i,t)R(t)\frac{dx_i}{dt} , \] where $ dx_i/dt $ is the inward velocity of the co-moving recombination front and the term $ [1-\exp(-A_g/t^2)] $ accounts for the gamma-ray leakage from the ejecta. The factor $ A_g $ characterizes the effectiveness of gamma-ray trapping {\citep[see e.g., ][]{1997ApJ...491..375C,2012ApJ...746..121C}}: a large $ A_g $ means full trapping of gamma rays. This factor is particularly important to model the SN 2013ej\ tail light curve. In this relation we also modified the second term to correctly account for the amount of envelope mass being recombined.
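The equation for $\phi(t)$ is integrated numerically with a Runge-Kutta scheme; the following is a hedged sketch of the fourth-order step involved, not the authors' code: the simplified right-hand side drops the recombination-front terms, and $p_1$, $p_2$ and the initial condition are arbitrary placeholder values.

```python
import math

def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step for dy/dt = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Illustrative simplification of the phi(t) equation: radioactive heating
# minus diffusive cooling, with the recombination-front terms dropped.
T_NI, T_CO = 8.8, 111.3  # 56Ni and 56Co e-folding times (days)

def zeta(t):
    """Radioactive input normalised to the 56Ni rate at t = 0 (Ni -> Co -> Fe)."""
    return math.exp(-t / T_NI) + 0.23 * (math.exp(-t / T_CO) - math.exp(-t / T_NI))

def dphi_dt(t, phi, p1=1.0, p2=0.05):  # p1, p2: placeholder coefficients
    return p1 * zeta(t) - p2 * phi

phi, t, h = 1.0, 0.0, 0.5  # placeholder initial condition and step (days)
while t < 100.0:
    phi = rk4_step(dphi_dt, t, phi, h)
    t += h
```

In the actual model the right-hand side additionally involves $x_i(t)$ and $R(t)$, so the recombination radius must be re-solved at each step alongside the integration.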
\begin{figure} \centering \hspace{-0.5cm} \includegraphics[width=1.05\linewidth]{./model.eps} \caption{Model fit (solid line) on the observed bolometric light curve (open circles) of SN 2013ej. The green solid line follows only the radioactive decay law, where the recombination front has completely disappeared.} \label{fig:model} \end{figure} To model SN light curves it is essential to obtain the true bolometric luminosity from observations. Since our data are limited to the optical and UV bands, we adopt the prescription for color-dependent bolometric corrections by \cite{2009ApJ...701..200B} to obtain the bolometric light curve of SN 2013ej. Figure~\ref{fig:model} shows the model fit to the observed bolometric light curve of the SN. We estimate an ejecta mass of 12 \mbox{M$_{\odot}$}, a progenitor radius of 450 \mbox{R$_{\odot}$}\ and an explosion energy (kinetic + thermal) of 2.3 foe ($ 10^{51} $ erg). The uncertainty in the mass and radius is about 25\%. We find that the plateau duration is strongly correlated with the explosion energy (especially kinetic), and also with $ \kappa $ and $ T_{rec} $. Thus, depending upon these parameters, our model is consistent with a wide range of explosion energies, with 2.3 foe towards the lower end and energies up to 4.5 foe at the higher end. Assuming the mass of the compact remnant to be 1.5-2.0 \mbox{M$_{\odot}$}, the total progenitor mass adds up to 14 \mbox{M$_{\odot}$}. The mass of radioactive \mbox{$^{56}$Ni}\ estimated from the model is 0.018 \mbox{M$_{\odot}$}, which primarily governs the tail light curve of the SN. As discussed in \S\ref{sec:lc.bol}, the slope of the tail light curve observed for SN 2013ej\ is significantly steeper than in other typical IIP SNe and also than that expected from the radioactive decay of \mbox{$^{56}$Co}\ to \mbox{$^{56}$Fe}.
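The effect of the leakage factor $[1-\exp(-A_g/t^2)]$ on the tail slope can be illustrated numerically; a rough sketch for illustration only (the $^{56}$Co e-folding time of 111.3~d is the standard value, $A_g = 3\times10^4$~d$^2$ is the value adopted in this work, and the two comparison epochs are arbitrary):

```python
import math

T_CO = 111.3  # 56Co e-folding time in days
A_G = 3.0e4   # gamma-ray trapping effectiveness in day^2 (adopted value)

def tail_luminosity(t, a_g=None):
    """Relative tail luminosity; a_g=None corresponds to full gamma-ray trapping."""
    deposition = 1.0 if a_g is None else 1.0 - math.exp(-a_g / t**2)
    return math.exp(-t / T_CO) * deposition

def decline_rate(t1, t2, a_g=None):
    """Tail decline rate in mag per 100 days between epochs t1 and t2 (days)."""
    dm = -2.5 * math.log10(tail_luminosity(t2, a_g) / tail_luminosity(t1, a_g))
    return 100.0 * dm / (t2 - t1)

full = decline_rate(150.0, 250.0)        # ~0.98 mag (100 d)^-1, the 56Co rate
leaky = decline_rate(150.0, 250.0, A_G)  # noticeably steeper with leakage
```

With full trapping the tail declines at the canonical $^{56}$Co rate of $\sim$0.98 mag (100 d)$^{-1}$, while the finite-$A_g$ deposition factor steepens it substantially, consistent with the qualitative behaviour discussed here.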
The light curve powered by full gamma-ray trapping from the radioactive decay chain $ \mbox{$^{56}$Ni} \rightarrow \mbox{$^{56}$Co} \rightarrow \mbox{$^{56}$Fe} $ results in a slower decline and does not explain the steeper tail observed in SN 2013ej. In the model we decreased the gamma-ray trapping effectiveness parameter $ A_g $ to $ 3\times10^4 $ day$ ^2 $, which matches the steeper radioactive tail. The gamma-ray optical depth can be related to this parameter as $ \tau_g\sim A_g/t^2 $. This implies that the gamma-ray leakage in SN 2013ej\ is significantly higher than in other typical type IIP SNe. \cite{2014MNRAS.438L.101V}, using early temperatures ($ <5 $ days) of SN 2013ej, provided a preliminary estimate of the progenitor radius of $ 400-600 $ \mbox{R$_{\odot}$}, which is in good agreement with our result. Our progenitor mass estimate is also consistent with that reported by \cite{2014MNRAS.439L..56F} from direct observational identification of the progenitor in \textit{HST} archival images, which is $ 8 - 15.5 $ \mbox{M$_{\odot}$}. \section {Summary} \label{sec:sum} We present photometric and spectroscopic observations of SN 2013ej. Despite the low-cadence optical photometric follow-up during the photospheric phase, we are able to cover most of the important phases and features of the light curve. Our high-resolution spectrum at 80d shows the presence of the \Nai~D (\mbox{$\lambda\lambda$}~5890, 5896) doublet of the Milky Way, while no imprint of the host galaxy NGC 0628 is seen. This indicates that SN 2013ej\ suffers minimal or no reddening due to its host galaxy. The optical light curves are similar to those of type IIL SNe, with a relatively short plateau duration of 85d and steep decline rates of 6.60, 3.57, 1.74, 1.07 and 0.74 mag (100 d)$ ^{-1} $ in the \textit{UBVRI} bands, respectively. The comparison of absolute \textit{V}-band light curves shows that SN 2013ej\ has a higher decline rate than all type IIP SNe in the sample, similar to the type IIL SNe 1980K, 2000dc and 2013by.
The drop in luminosity during the plateau-nebular transition, 2.4 mag in the \textit{V} band, is also higher than in most type II SNe in our sample. The UVOT UV light curves show a steep decline during the first 30 days at rates of 0.182, 0.213 and 0.262 mag d$ ^{-1} $ in the \textit{uvw1, uvw2} and \textit{uvm2} bands, respectively. The absolute UV light curves are identical to those of SN 2012aw and also show a similar UV-plateau trend as observed in SN 2012aw. Owing to the large drop in luminosity during the plateau-nebular transition, the light curve settles to a significantly less luminous tail phase as compared to other normal IIP SNe. The mass of radioactive \mbox{$^{56}$Ni}\ estimated from the tail bolometric luminosity is $ 0.020\pm0.002 $ \mbox{M$_{\odot}$}, which is between the values for normal IIP SNe (e.g., 1999em, 2004et, 2012aw) and subluminous events like SN 2005cs. The spectroscopic features and their evolution are similar to those of normal type II events. Detailed \textsc{synow}\ modelling has been performed to identify spectral features and to estimate velocities for the \ha, \hb, \Feii\ (\mbox{$\lambda\lambda$}~4924, 5018, 5169) and \Scii\ (\mbox{$\lambda\lambda$}~4670, 6247) lines. The photospheric velocity profile of SN 2013ej, represented by the \Feii\ lines and the \Hei\ line at 12d, is almost identical to those of SNe 2004et, 2012aw and 2013ab. The \ha, \hb\ velocities estimated by directly locating the absorption troughs are significantly higher and decline more slowly than in other normal IIP events. However, such \Hi\ velocity profiles are typical for type IIL SNe. {The P-Cygni absorption troughs of \ha\ and \hb\ are found to be broad and extended, which a single \Hi\ component in the \textsc{synow}\ model could not fit properly. However, these extended features are fitted well with \textsc{synow}\ by incorporating a high-velocity \Hi\ component. These HV components can be traced throughout the photospheric phase, which may indicate possible ejecta-CSM interaction.
Our inference is also supported by the detection of X-ray emission from SN 2013ej\ \citep{2013ATel.5243....1M}, indicating possible CSM interaction, and the unusually high polarization reported by \cite{2013ATel.5275....1L} may further indicate asymmetry in the environment or ejecta of the SN. Such CSM interaction and its signatures in the \ha, \hb\ profiles have also been reported for SNe 2009bw \citep{2012MNRAS.422.1122I} and 2012aw \citep{2013MNRAS.433.1871B}.} Nebular-phase spectra during 109 to 125d are dominated by characteristic emission lines; however, the \ha\ line shows an unusual notch, which may be explained by the superposition of HV emission on the regular \ha\ profile. Although the origin of this feature is not fully explained, it may indicate a bipolar distribution of \mbox{$^{56}$Ni}\ in the core. We modeled the bolometric light curve of SN 2013ej\ and estimated a progenitor mass of $ \sim14 $ \mbox{M$_{\odot}$}, a radius of $ \sim450 $ \mbox{R$_{\odot}$}\ and an explosion energy of $ \sim2.3$ foe. These progenitor property estimates are consistent with those given by \cite{2014MNRAS.439L..56F} and \cite{2014MNRAS.438L.101V} for the mass and radius, respectively. The tail bolometric light curve of SN 2013ej\ is found to be significantly steeper than that expected from the decay chain of radioactive \mbox{$^{56}$Ni}. Thus, in the model we decreased the effectiveness of gamma-ray trapping, which could explain the steeper slope of the tail light curve. \acknowledgments We are thankful to the observing staff and technical assistants of the ARIES 1.0-m and 1.3-m telescopes, and we also express our thanks to the 2-m HCT telescope staff for their kind cooperation in the observation of SN 2013ej. We also express our thanks to Mr. Shashank Shekhar for his sincere efforts and cooperation during observations at the ARIES 1.3-m telescope. The authors gratefully acknowledge the services of the NASA ADS and NED databases, which were used to access data and references in this paper.
SOUSA is supported by NASA's Astrophysics Data Analysis Program through grant NNX13AF35G. VVD's work on Type IIP SNe is supported by NASA through Chandra Award Number GO2-13092B issued by the Chandra X-ray Observatory Center, which is operated by the Smithsonian Astrophysical Observatory for and on behalf of NASA under contract NAS8-03060. We also thank the anonymous referee for detailed and insightful comments, which helped significantly improve the manuscript.
\section{Main Result} Let $S_n$ be a martingale with respect to a filtration $\{{\cal F}_n\}_{n=0}^\infty$ and let $x_n=S_n - S_{n-1}$ be the martingale difference. Under regularity conditions on the growth of $|x_n|$, various versions of the law of the iterated logarithm (LIL) have been given in the literature. In particular, the Erd\H{o}s--Feller--Kolmogorov--Petrowsky law of the iterated logarithm (EFKP-LIL \cite[Chapter 5.2]{Revesz2013Random}) is an important extension of LIL. \begin{comment} Let $x_n=\pm 1$, \ $n=1,2,\dots$, \ be independent symmetric Bernoulli random variables with $\mathrm{P}(x_n=-1)=\mathrm{P}(x_n=1)=1/2$. Let $S_n = x_1 + x_2 + \dots + x_n$. Concerning the behavior of $S_n$, the celebrated Erd\H{o}s--Feller--Kolmogorov--Petrowsky law of the iterated logarithm (EFKP-LIL \cite[Chapter 5.2]{Revesz2013Random}) states the following: \begin{equation} \label{eq:efkp-lil} \mathrm{P}(S_n \ge \sqrt{n}\psi(n) \ \ i.o.)= 0 \ \text{or}\ 1 \quad \text{according as} \quad \int_1^\infty \psi(\lambda) e^{-\psi(\lambda)^2/2} \frac{d\lambda}{\lambda}\ < \infty \ \text{or} \ =\infty , \end{equation} where $\psi$ is a positive non-decreasing continuous function. The set of functions $\psi$ such that $\mathrm{P}(S_n \ge \sqrt{n}\psi(n) \ \ i.o.)= 0$ is called the {\em upper class} and the set of functions $\psi$ such that $\mathrm{P}(S_n \ge \sqrt{n}\psi(n) \ \ i.o.)= 1$ is called the {\em lower class} \cite[pp.33-34]{Revesz2013Random}. The first one who showed this result seems to be Kolmogorov, which has been stated in L\'evy's book \cite{Levy1937Theorie} without a proof and \end{comment} Erd\H{o}s \cite{Erdos1942Law} proved EFKP-LIL for symmetric Bernoulli random variables. EFKP-LIL has been generalized by Feller to bounded independent random variables \cite{Feller1943General} and to the i.i.d.\ case \cite{Feller1946Law} (see also Bai \cite{Bai1989Theorem}).
Further, EFKP-LIL has been generalized for martingales by Strassen \cite{Strassen1967Almost}, Jain, Jogdeo and Stout \cite{JainJogdeoStout1975Upper}, Philipp and Stout \cite{PhilippStout1986Invariance}, Einmahl and Mason \cite{EinmahlMason1990Some} and Berkes, H{\"o}rmann and Weber \cite{BerkesHormannWeber2010Upper}. In particular, Einmahl and Mason \cite{EinmahlMason1990Some} proved a martingale analogue of Feller's result in \cite{Feller1943General}, just as Stout \cite{Stout1970Martingale} obtained a martingale analogue of Kolmogorov's result in \cite{Kolmogoroff1929Uber}. For self-normalized processes, EFKP-LIL was derived by \cite{GriffinKuelbs1991Some,CsorgoSzyszkowiczWang2003Darling} in the i.i.d.\ case. However, EFKP-LIL has not been derived in the self-normalized martingale case, even though de la Pe{\~n}a, Klass and Lai \cite{PenaKlassLai2004Self} obtained the usual LIL. The purpose of this paper is to prove EFKP-LIL for self-normalized martingales. For a positive non-decreasing continuous function $\psi(\lambda)$ let \begin{align} \label{eq:Ipsi} I(\psi):=\int_1^\infty \psi(\lambda) e^{-\psi(\lambda)^2/2} \frac{d\lambda}{\lambda}. \end{align} We state our main theorem. \begin{theorem} \label{th:m-self-normalized-efkp-lil} Let $S_n,\,n=1,2,\ldots,$ be a martingale with $S_0 = 0$ and $x_n = S_n -S_{n-1}$ be a martingale difference with respect to a filtration $\{\mathcal{F}_{n}\}_{n=0}^{\infty}$ such that \begin{align*} |x_n| \le c_n\,\,a.s. \end{align*} for some $\mathcal{F}_{n-1}$-measurable random variable $c_n$. Let \[ A_n^2 := \sum_{i=1}^n x_i^2 \ \ge 0 \] and let $\psi$ be a positive non-decreasing continuous function. If $I(\psi)< \infty$, then \begin{equation} {\mathrm{P} }\left( S_n < A_n \psi(A_n^2) \ a.a. \mid \lim A_n = \infty, \limsup c_n \frac{\psi(A_n^2)^3}{A_n} < \infty \right) = 1. \label{eq:measure-validity} \end{equation} If $I(\psi)= \infty$, then \begin{equation} {\mathrm{P} }\left( S_n \ge A_n \psi(A_n^2) \ i.o.
\mid \lim A_n = \infty, \limsup c_n \frac{\psi(A_n^2)^3}{A_n} < \infty \right) = 1. \label{eq:measure-sharpness} \end{equation} \end{theorem} This theorem is a self-normalized version of the result in Einmahl and Mason \cite{EinmahlMason1990Some} and a generalization of the result in de la Pe{\~n}a, Klass and Lai \cite{PenaKlassLai2004Self}. The order of growth $A_n/(\psi(A_n^2))^3$ for $c_n$ is currently the best known order for EFKP-LIL even in the independent case (\cite{BerkesHormannWeber2010Upper}). We call \eqref{eq:measure-validity} the {\em validity} and \eqref{eq:measure-sharpness} the {\em sharpness} of EFKP-LIL. In \eqref{eq:measure-validity} and \eqref{eq:measure-sharpness}, we are not assuming that the conditioning events happen with probability one. We can state \eqref{eq:measure-validity} equivalently as \begin{equation} {\mathrm{P} }\left( \lim A_n = \infty, \limsup c_n \frac{\psi(A_n^2)^3}{A_n} < \infty, S_n \ge A_n \psi(A_n^2) \ i.o. \right) = 0. \label{eq:measure-validity1} \end{equation} For our proof we adopt the framework of game-theoretic probability by Shafer and Vovk \cite{ShaferVovk2001Probability}. In the game-theoretic approach, for proving \eqref{eq:measure-validity}, we explicitly construct a non-negative martingale diverging to infinity on the event in \eqref{eq:measure-validity1}. We use the following notation throughout the paper: \begin{align*} \ln_k n := \underbrace{\ln \ln \dots \ln}_{k\ \text{times}} n. \end{align*} We also fix a small positive $\delta$ for the rest of this paper, e.g., $\delta=0.01$. For our proof, as is often seen in the upper-lower class theory (cf.\ Feller \cite[Lemma 1]{Feller1946Law}), we can restrict our attention to $\psi$ such that \begin{align} \label{eq:uc0} \lc(n)\le\psi(n)\le\uc(n)\mbox{ for all sufficiently large }n, \end{align} where \[ \lc(n):=\sqrt{2 \ln_2 n + 3 \ln_3 n}, \quad \uc(n):=\sqrt{2 \ln_2 n + 4 \ln_3 n}. \] Here $L$ means the lower class and $U$ means the upper class.
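The dichotomy in the integral test \eqref{eq:Ipsi} can also be observed numerically. The following sketch (a rough illustration only, not used in the proofs) evaluates the discretized sum $\sum_k \psi(k)e^{-\psi(k)^2/2}/k$ for the simpler family $\psi_c(k)=\sqrt{c\ln_2 k}$, for which $I(\psi_c)<\infty$ exactly when $c>2$; the partial sums keep growing in the divergent case $c=1$ but flatten out in the convergent case $c=3$.

```python
import math

def partial_sum(c, N, start=16):
    """Partial sum of psi(k) * exp(-psi(k)^2 / 2) / k for psi(k) = sqrt(c * ln ln k)."""
    s = 0.0
    for k in range(start, N):
        p = math.sqrt(c * math.log(math.log(k)))
        s += p * math.exp(-p * p / 2) / k
    return s

# Growth of the partial sums between N = 10^3 and N = 10^5:
grow_div = partial_sum(1, 10**5) - partial_sum(1, 10**3)   # divergent case, c = 1
grow_conv = partial_sum(3, 10**5) - partial_sum(3, 10**3)  # convergent case, c = 3
```

On this range the divergent case keeps gaining mass while the convergent case has an almost negligible tail; of course, distinguishing $\lc$ from $\uc$ themselves this way is hopeless, since they differ only in a $\ln_3$ term.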
It can be verified that $I(\uc)<\infty$ and $I(\lc)=\infty$. The rest of this paper is organized as follows. In Section \ref{sec:gtp} we give a game-theoretic statement corresponding to our main theorem. In Section \ref{sec:validity} we give a proof of the validity and in Section \ref{sec:sharpness} we give a proof of the sharpness. \section{Preliminaries on Game-Theoretic Probability} \label{sec:gtp} In order to state a game-theoretic version of Theorem \ref{th:m-self-normalized-efkp-lil}, consider the following simplified predictably unbounded forecasting game (SPUFG, Section 5.1 of \cite{ShaferVovk2001Probability}) with the initial capital $\alpha>0$. \begin{quote} {\sc Simplified Predictably Unbounded Forecasting Game}\\ \textbf{Players}: Forecaster, Skeptic, Reality\\ \textbf{Protocol}:\\ \indent $\cps_0:=\alpha$.\\ \indent FOR $n=1,2,\ldots$:\\ \indent\indent Forecaster announces $c_n \ge 0$.\\ \indent\indent Skeptic announces $M_n\in\mathbb{R}$.\\ \indent\indent Reality announces $x_n\in[-c_n,c_n]$.\\ \indent\indent $\cps_n:=\cps_{n-1}+M_n x_n$.\\ \textbf{Collateral Duties}: Skeptic must keep $\cps_n$ non-negative. Reality must keep $\cps_n$ from tending to infinity. \end{quote} Usually $\alpha$ is taken to be 1, but in Section \ref{sec:sharpness} we use $\alpha\neq 1$ for notational simplicity. We prove the following theorem, which implies Theorem \ref{th:m-self-normalized-efkp-lil} by Chapter 8 of \cite{ShaferVovk2001Probability}. \begin{theorem} \label{th:self-normalized-efkp-lil} Consider SPUFG. Let $\psi$ be a positive non-decreasing continuous function. If $I(\psi)<\infty$, Skeptic can force \begin{align} A_n^2 \rightarrow \infty \,\, \text{and}\,\, \limsup c_n \frac{\psi(A_n^2)^3}{A_n} <\infty \ \Rightarrow \ S_n < A_n\psi(A_n^2) \ \ a.a. 
\label{eq:validity1} \end{align} and if $I(\psi)=\infty$, Skeptic can force \begin{align} A_n^2 \rightarrow \infty \,\, \text{and} \,\, &\limsup c_n \frac{\psi(A_n^2)^3}{A_n} <\infty \ \Rightarrow \ S_n \ge A_n\psi(A_n^2) \ \ i.o. \label{eq:sharpness1} \end{align} \end{theorem} We use the same line of argument as in \cite{MiyabeTakemura2013Law} and Chapter 5 of Shafer and Vovk \cite{ShaferVovk2001Probability}. We employ a Bayesian mixture of constant-proportion betting strategies. Here we give basic properties of constant-proportion betting strategies. A constant-proportion betting strategy with betting proportion $\br>0$ sets \begin{align*} M_n = \br \cps_{n-1}. \end{align*} However, $\cps_n$ can become negative if $\br x_n< -1$. For simplicity we apply the strategy (``keep the account open'') as long as $\br c_n \le \delta$ and set $M_n=0$ once $\br c_n > \delta$ happens (``freeze the account''). Define a stopping time \begin{align} \label{eq:stop} \sigma_{\br} := \min\{n \mid \br c_n >\delta\}. \end{align} Note the monotonicity of $\sigma_{\br}$, i.e., $\sigma_{\br'}\ge \sigma_{\br}$ if $\br' \le \br$. We denote the capital process of the constant-proportion betting strategy with this stopping time by $\cps^\br_n$. With initial capital $\cps^\br_0 = \alpha$, the value of $\cps^\br_n$ is written as \begin{align*} \label{eq:sncps} \cps^\br_n = \alpha \prod_{i=1}^{\min(n,\sigma_{\br}-1)} (1+\br x_i).
\end{align*} By \[ t - \frac{t^2}{2} - |t|^3 \le \ln(1+t) \le t - \frac{t^2}{2} + |t|^3 \] for $|t| \le \delta$, taking the logarithm of $\prod_{i=1}^n (1+\br x_i)$, for $n< \sigma_\br$, we have \begin{equation*} \br S_n - \frac{\br^2 A_n^2}{2} - \br^3 A_n^2 \bar c_n \le \ln \left(\cps_n^\br /\alpha\right) \le \br S_n - \frac{\br^2 A_n^2}{2} + \br^3 A_n^2 \bar c_n \end{equation*} and \begin{equation} \label{eq:cp-bound} e^{- \br^3 A_n^2 \bar c_n} e^{\br S_n - \br^2 A_n^2/2} \le \cps^\br_n / \alpha \le e^{\br^3 A_n^2 \bar c_n } e^{\br S_n - \br^2 A_n^2/2}, \end{equation} where \[ \bar c_n := \max_{1\le i \le n} c_i. \] We also set up some notation for expressing the condition in \eqref{eq:validity1} and \eqref{eq:sharpness1}. An infinite sequence of Forecaster's and Reality's announcements $\omega = (c_1,x_1,c_2,x_2,\ldots)$ is called a \textit{path} and the set of paths $\Omega=\{\omega\}$ is called the sample space. Define a subset $\Omega_{<\infty}$ of $\Omega$ as \begin{equation*} \label{eq:sample-space-0} \Omega_{<\infty} := \left\{ \omega \mid A_n^2 \rightarrow \infty, \limsup_n c_n \frac{\psi(A_n^2)^3}{A_n} < \infty \right\}. \end{equation*} For an arbitrary path $\omega \in \Omega_{<\infty}$ we have \begin{align} \label{eq:sn-forecaster-v} \exists C(\omega) < \infty,\exists n_1(\omega),\forall n>n_1(\omega),\, c_n<C(\omega)\frac{A_n}{\psi(A_n^2)^3}, \ \psi(A_n^2)\ge 1. \end{align} The last inequality holds by the lower bound in \eqref{eq:uc0}. \section{Validity} \label{sec:validity} We prove the validity in \eqref{eq:validity1} of Theorem \ref{th:self-normalized-efkp-lil}. In this section we let $\alpha=1$. We discretize the integral in \eqref{eq:Ipsi} as \begin{align} \label{eq:val-suml-cond} \sum_{k=1}^\infty \frac{ \psi(k)}{k} e^{-\psi(k)^2/2} < \infty.
\end{align} Since $xe^{-x^2/2}$ is decreasing for $x\ge1$, the function $\lambda\mapsto\frac{\psi(\lambda)}{\lambda}e^{-\psi(\lambda)^2/2}$ is decreasing for $\lambda$ such that $\psi(\lambda)\ge 1$ and convergences of the integral in \eqref{eq:Ipsi} and the sum in \eqref{eq:val-suml-cond} are equivalent. The convergence of the infinite series in \eqref{eq:val-suml-cond} implies the existence of a non-decreasing sequence of positive reals $a_k$ diverging to infinity ($a_k\uparrow \infty$), such that the series multiplied term by term by $a_k$ is still convergent: \begin{align*} Z:=\sum_{k=1}^\infty a_k\frac{ \psi(k)}{k} e^{-\psi(k)^2/2} < \infty. \end{align*} This is easily seen by dividing the infinite series into blocks of sums less than or equal to $1/2^k$ and multiplying the $k$-th block by $k$ (see also \cite[Lemma 4.15]{MiyabeTakemura2012Convergence}). For $k\ge 1$ let \[ p_k := \frac{1}{Z}a_k \frac{\psi(k)}{k} e^{-\psi(k)^2/2} \] and consider the capital process of a countable mixture of constant-proportion strategies \begin{align} \label{eq:validity-br} {\cal K}_n := \sum_{k=1}^\infty p_k \cps_n^{\br_k}, \quad\mbox{ where }\quad \br_k := \frac{\psi(k)}{\sqrt{k}}. \end{align} Note that ${\cal K}_n$ is never negative. By the upper bound in \eqref{eq:uc0}, as $k\rightarrow\infty$ we have \begin{equation} \label{eq:gamma-k-zero} \br_k \le \frac{\uc(k)}{\sqrt{k}} = \sqrt{\frac{2 \ln_2 k + 4 \ln_3 k}{k}} \rightarrow 0. \end{equation} We show that $\limsup_n {\cal K}_n=\infty$ if a path $\omega\in \Omega_{<\infty}$ satisfies $S_n \ge A_n\psi(A_n^2)\,\,i.o.$ \ We bound $Z {\cal K}_n$ as \begin{equation} \label{eq:zk-1} Z{\cal K}_n \ge \sum_{k=\lfloor A^2_n-A^2_n/\psi(A^2_n)\rfloor}^{\lfloor A^2_n \rfloor} p_k {\cal K}_n^{\br_k} . 
\end{equation} At this point we check that all accounts on the right-hand side of \eqref{eq:zk-1} are open for sufficiently large $n$ and the lower bound in \eqref{eq:cp-bound} can be applied to each term of \eqref{eq:zk-1} for $\omega\in \Omega_{<\infty}$. We have the following two lemmas. \begin{lemma} \label{lem:bar_c_n_bound} Let $\omega\in \Omega_{<\infty}$. Let $C=C(\omega)$ in \eqref{eq:sn-forecaster-v}. For sufficiently large $n$ \begin{equation} \label{eq:bar_c_n_bound} \bar c_n = \max_{1\le i \le n} c_i < (1+\delta) C \frac{A_n}{\psi(A_n^2)^3}. \end{equation} \end{lemma} \begin{proof} Note that the first $n_1(\omega)$ $c$'s i.e., $c_1, \dots, c_{n_1(\omega)}$, do not matter since $\lim_{n\rightarrow\infty} A_n/\psi(A_n^2)^3=\infty$. For $l > n_1(\omega)$, by \eqref{eq:sn-forecaster-v} we have \[ c_l \le C \frac{A_l}{\psi(A_l^2)^3} \le C A_l. \] Hence $c_l$ such that $A_l \le A_n/{\psi(A_n^2)^3}$ do not matter in $\bar c_n$. For $c_l$ such that $A_l > A_n/{\psi(A_n^2)^3}$ we have \[ c_l \le C \frac{A_l}{\psi\big(A_n^2/\psi(A_n^2)^6\big)^3} \le C \frac{A_n}{\psi\big(A_n^2/\psi(A_n^2)^6\big)^3} = C \frac{A_n}{\psi(A_n^2)^3} \frac{\psi(A_n^2)^3}{\psi\big(A_n^2/\psi(A_n^2)^6\big)^3}. \] But by \eqref{eq:uc0}, both $\psi(A_n^2)$ and $\psi\big(A_n^2/\psi(A_n^2)^6\big)$ are of the order $\sqrt{2 \ln_2 A_n^2}(1+o(1))$ and $\psi(A_n^2)/\psi\big(A_n^2/\psi(A_n^2)^6\big) \rightarrow 1$ as $n\rightarrow\infty$. Hence \eqref{eq:bar_c_n_bound} holds. \end{proof} \begin{lemma} \label{lem:keep-open} Let $\omega\in \Omega_{<\infty}$. For sufficiently large $n$, $\sigma_{\br_k} > n$ for all $k=\lfloor A^2_n-A^2_n/\psi(A^2_n)\rfloor, \dots, \lfloor A^2_n \rfloor$. \end{lemma} \begin{proof} By the monotonicity of $\psi$, we have $\br_k \le \psi(A_n^2)/\sqrt{\lfloor A^2_n-A^2_n/\psi(A^2_n)\rfloor}$ for $k=\lfloor A^2_n-A^2_n/\psi(A^2_n)\rfloor, \dots, \lfloor A^2_n \rfloor$. 
Then by the monotonicity of $\sigma_\br$, it suffices to show \[ \frac{\psi(A_n^2)}{\sqrt{\lfloor A^2_n-A^2_n/\psi(A^2_n)\rfloor}} \bar c_n \le \delta \] for sufficiently large $n$. By \eqref{eq:bar_c_n_bound}, the left-hand side is bounded from above by \[ \frac{\psi(A_n^2)}{\sqrt{\lfloor A^2_n-A^2_n/\psi(A^2_n)\rfloor}} \times (1+\delta) C \frac{A_n}{\psi(A_n^2)^3} = (1+\delta) C \frac{A_n}{\sqrt{\lfloor A^2_n-A^2_n/\psi(A^2_n)\rfloor}} \frac{1}{\psi(A_n^2)^2}. \] But this converges to 0 as $n\rightarrow\infty$. \end{proof} By Lemma \ref{lem:keep-open} and the lower bound in \eqref{eq:cp-bound}, for sufficiently large $n$, we have \begin{align*} \cps^{\br_{k}}_n \ge e^{-\br_{k}^3 A_n^2 \bar c_n} e^{\br_k S_n - \br_{k}^2 A_n^2/2},\quad k=\lfloor A_n^2-A_n^2/\psi(A_n^2)\rfloor,\dots, \lfloor A_n^2\rfloor \end{align*} and $Z {\cal K}_n$ can be evaluated from below as \allowdisplaybreaks \begin{align*} Z {\cal K}_n &\ge Z\sum_{k=\lfloor A^2_n-A^2_n/\psi(A^2_n)\rfloor}^{\lfloor A^2_n \rfloor} p_k \exp( \br_k S_n - \frac{\br_k^2A_n^2}{2} - \br_k^3 A_n^2 \bar c_n )\\ &= \sum_{k=\lfloor A^2_n-A^2_n/\psi(A^2_n)\rfloor}^{\lfloor A^2_n \rfloor} a_k \frac{\psi(k)}{k} \exp(-\frac{\psi(k)^2}{2} + \br_k S_n - \frac{\br_k^2A_n^2}{2} - \br_k^3 A_n^2 \bar c_n ) \end{align*} Now we assume that $S_n \ge A_n \psi(A_n^2)\ i.o.$ for the path $\omega\in \Omega_{<\infty}$. 
Then for sufficiently large $n$ such that $S_n \ge A_n \psi(A_n^2)$, $\psi(A_n^2)/(\psi(A_n^2) -1) \le 1+ \delta$ and $A_n/\left(\lfloor A_n^2-A_n^2/\psi(A_n^2)\rfloor \right)^{1/2} \le 1+ \delta$, we evaluate the exponent appearing in \eqref{eq:cp-bound} as \begin{align*} -\frac{\psi(k)^2}{2}+ \br_k S_n - \frac{\br_k^2 A_n^2}{2} &\ge -\frac{\psi(k)^2}{2}+A_n\psi(A_n^2)\frac{\psi(k)}{\sqrt{k}}-\frac{\psi(k)^2}{k}\frac{A_n^2}{2}\\ &= \psi(k)\left(-\frac{1}{2}\left(1+\frac{A_n^2}{k}\right)\psi(k)+\sqrt{\frac{A_n^2}{k}}\psi(A_n^2)\right)\\ &\ge -\frac{\psi(A_n^2)^2}{2}\left(\sqrt{\frac{A_n^2}{k}}-1\right)^2\ge-\frac{\psi(A_n^2)^2}{2}\left(\frac{A_n^2}{k}-1\right)^2 \\ &\ge -\frac{1}{2}\left(\frac{\psi(A_n^2)}{\psi(A_n^2)-1}\right)^2\ge-\frac{1}{2}-2\delta \end{align*} and by Lemma \ref{lem:bar_c_n_bound} \begin{align} \br_k^3 A_n^2 \bar c_n &\le \frac{\psi(A_n^2)^3}{\left(\lfloor A_n^2-A_n^2/\psi(A_n^2)\rfloor \right)^{3/2}} A_n^2 (1+\delta) C\frac{A_n }{\psi(A_n^2)^3} \nonumber \\ &\le (1+\delta)C \left(\frac{A_n}{\left(\lfloor A_n^2-A_n^2/\psi(A_n^2)\rfloor \right)^{1/2}} \right) ^3 \nonumber \\ &\le C(1+ \delta)^4. \label{eq:validity-deltaconst} \end{align} For sufficiently large $n$, we have \begin{align*} \psi(A_n^2)\le\uc(A_n^2)<\uc(2k)=\sqrt{2\ln_2 2k+4\ln_3 2k} <2\sqrt{2\ln_2 k+3\ln_3 k}=2\lc(k)\le2\psi(k).
\end{align*} Thus by \eqref{eq:validity-deltaconst}, \begin{align*} Z {\cal K}_n &\ge \sum_{k=\left\lfloor A_n^2-A_n^2/\psi(A_n^2) \right\rfloor}^{\lfloor A^2_n \rfloor } a_k \frac{\psi(k)}{k}\exp\left(-\frac{1}{2}-2\delta - C(1+\delta)^4 \right)\\ &\ge a_{\left\lfloor A_n^2-A_n^2/\psi(A_n^2) \right\rfloor} \frac{\psi(A_n^2)}{2A^2_n} \sum_{k=\lfloor A_n^2-A_n^2/\psi(A_n^2) \rfloor}^{\lfloor A^2_n \rfloor} \exp \left(-\frac{1}{2}-2\delta - C(1+\delta)^4 \right)\\ &\ge a_{\left\lfloor A_n^2-A_n^2/\psi(A_n^2) \right\rfloor} \frac{\psi(A_n^2)}{2A^2_n} \left(\frac{A^2_n}{\psi(A_n^2)}-1\right)\exp\left(-\frac{1}{2}-2\delta - C(1+\delta)^4 \right)\\ &= a_{\left\lfloor A_n^2-A_n^2/\psi(A_n^2) \right\rfloor} \left(\frac{1}{2}-\frac{\psi(A_n^2)}{2A^2_n}\right)\exp \left(-\frac{1}{2}-2\delta - C(1+\delta)^4 \right). \end{align*} Since $a_{\left\lfloor A_n^2-A_n^2/\psi(A_n^2) \right\rfloor}\rightarrow\infty$ as $n \rightarrow \infty$, we have shown \[ \omega\in \Omega_{<\infty}, \ S_n \ge A_n\psi(A_n^2) \ i.o.\ \Rightarrow \ \limsup_{n\rightarrow\infty} {\cal K}_n = \infty. \] \section{Sharpness} \label{sec:sharpness} We prove the sharpness in \eqref{eq:sharpness1} of Theorem \ref{th:self-normalized-efkp-lil}. As in Section 4.2 of \cite{ShaferVovk2001Probability} and in \cite{MiyabeTakemura2012Convergence}, in order to prove the sharpness, it suffices to show the following proposition. \begin{proposition} \label{th:self-normalized-efkp-lil-dash} Consider SPUFG. Let $\psi$ be a positive non-decreasing continuous function. If $I(\psi)=\infty$, then for each $C>0$, Skeptic can force \begin{align} \label{eq:sn-forecaster-s} A_n^2 \rightarrow \infty, \limsup_n c_n \frac{\psi(A_n^2)^3}{A_n} \le C \ \Rightarrow \ S_n \ge A_n\psi(A_n^2) \ \ i.o. \end{align} \end{proposition} Once we prove this proposition, we can take the mixture over $C=1,2,\dots$. 
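Both the validity argument above and the sharpness argument below rest on the two-sided estimate \eqref{eq:cp-bound} for a constant-proportion account. As a quick numerical sanity check (a simulation sketch with assumed inputs, not part of the proof), the bound can be verified on a random path with $c_n\equiv 1$ and $\br c_n\le\delta$, so that the account never freezes:

```python
import math
import random

random.seed(1)
delta = 0.01
br = 0.005                 # betting proportion; br * c_n <= delta keeps the account open
xs = [random.uniform(-1.0, 1.0) for _ in range(1000)]   # simulated moves of Reality, c_n = 1

K = math.prod(1 + br * x for x in xs)    # capital K_n^br with initial capital alpha = 1
S = sum(xs)                              # S_n
A2 = sum(x * x for x in xs)              # A_n^2
cbar = 1.0                               # bar c_n = max_i c_i

mid = math.exp(br * S - br ** 2 * A2 / 2)
lo_bound = math.exp(-br ** 3 * A2 * cbar) * mid
up_bound = math.exp(br ** 3 * A2 * cbar) * mid
assert lo_bound <= K <= up_bound         # the two-sided bound holds on this path
```

The margin $e^{\pm\br^3 A_n^2\bar c_n}$ dominates the actual third-order error $\sum_i|\br x_i|^3/3$, which is why the check passes with room to spare.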
Then the sharpness follows, because for each $\omega\in \Omega_{<\infty}$, there exists $C(\omega)$ satisfying \eqref{eq:sn-forecaster-v}. We denote \begin{align*} \Omega_C &:= \left\{ \omega\in \Omega \mid A_n^2 \rightarrow \infty, \limsup_n c_n \frac{\psi(A_n^2)^3}{A_n} < (1-\delta)C\right\},\\ \Omega_0 &:= \left\{ \omega\in \Omega \mid \lim_{n \rightarrow \infty}A_n^2 < \infty\right\},\\ \Omega_{=\infty} &:= \left\{ \omega\in \Omega \mid A_n^2 \rightarrow \infty, \limsup_n c_n \frac{\psi(A_n^2)^3}{A_n} = \infty \right\}. \end{align*} We divide our proof of Proposition \ref{th:self-normalized-efkp-lil-dash} into several subsections. For notational simplicity we use the initial capital $\alpha=1-2/e=(e-2)/e$ in this section. In Sections \ref{subsec:uniform-mixture} and \ref{subsec:bss} we only consider $\br$ and $n$ with $n < \sigma_\br$. As in Lemma \ref{lem:keep-open} for the validity, this condition will be satisfied for sufficiently small $\br$ and relevant $n$. \subsection{Uniform mixture of constant-proportion betting strategies} \label{subsec:uniform-mixture} We consider a continuous uniform mixture of constant-proportion strategies with the betting proportion $u\br$, $2/e\le u\le 1$. This is a Bayesian strategy; a similar one has been considered in \cite{KumonTakemuraTakeuchi2008Capital}. Define \begin{align*} \mcps^\br_n :=\int_{2/e}^1 \prod_{i=1}^{\min(n,\sigma_{\br}-1)} (1+u\br x_i)du, \qquad \mcps^\br_0=\alpha=1-2/e. \end{align*} At round $n < \sigma_\br$ this strategy bets $ M_n = \int_{2/e}^1 u\br \prod_{i=1}^{n-1} (1+u\br x_i) du . $ Then by induction on $n<\sigma_\br$ the capital process is indeed written as \begin{align*} \mcps^\br_n &= \mcps^\br_{n-1} + M_n x_n =\int_{2/e}^1 \prod_{i=1}^{n-1} (1+u\br x_i) du + x_n\int_{2/e}^1 u\br\prod_{i=1}^{n-1} (1+u\br x_i) du \\ &= \int_{2/e}^1 \prod_{i=1}^n (1+u\br x_i)du.
\end{align*} Applying \eqref{eq:cp-bound}, we have \begin{align*} e^{ -\br^3 A_n^2 \bar c_n } \int_{2/e}^1 e^{u\br S_n - u^2\br^2 A_n^2/2}du \le \mcps^\br_n \le e^{ \br^3 A_n^2 \bar c_n }\int_{2/e}^1 e^{u\br S_n - u^2\br^2 A_n^2/2} du, \end{align*} for $n < \sigma_{\br}$. We further bound the integral in the following lemma. \begin{lemma} \label{lem:mcps-upper} For $n < \sigma_{\br}$, \begin{numcases} {\mcps^\br_n\le } e^{\br^3 A_n^2 \bar c_n} e^{2\br(S_n/e - \br A_n^2/e^2)} & \text{if}\quad $S_n\le 2\br A_n^2/e$, \label{eq:mcps-positive1-sn}\\ e^{\br^3 A_n^2 \bar c_n}\min \left\{ e^{S_n^2/(2A_n^2)} \frac{\sqrt{2\pi}}{\br A_n}, e^{\br S_n/2} \right\} & \text{if}\quad $2\br A_n^2/e<S_n< \br A_n^2$, \label{eq:mcps-positive2-sn} \\ \label{eq:mcps-positive3-sn} e^{\br^3 A_n^2 \bar c_n}\min\left\{e^{S_n^2/(2A_n^2)}\frac{\sqrt{2\pi}}{\br A_n},e^{\br S_n-\br^2 A_n^2/2}\right\}& \text{if}\quad $S_n \ge \br A_n^2$. \end{numcases} \end{lemma} \begin{proof} Completing the square we have \begin{equation*} - \frac{1}{2}u^2\br^2 A_n^2 + u\br S_n = -\frac{\br^2 A_n^2}{2} \left(u-\frac{S_n}{\br A_n^2}\right)^2 + \frac{S_n^2}{2A_n^2}. \end{equation*} Hence by the change of variables \[ v = \br A_n \left( u-\frac{S_n}{\br A_n^2}\right), \qquad du = \frac{dv}{\br A_n}, \] we obtain \begin{align*} \int_{2/e}^1 e^{u \br S_n - u^2\br^2 A_n^2/2} du &= e^{S_n^2/(2A_n^2)} \int_{2/e}^1 \exp \left(-\frac{\br^2 A_n^2}{2} \left(u-\frac{S_n}{\br A_n^2}\right)^2\right) du\nonumber \\ &= e^{S_n^2/(2A_n^2)}\frac{1}{\br A_n} \int_{2 \br A_n/e-S_n/ A_n }^{\br A_n - S_n/A_n} e^{-v^2/2} dv . \end{align*} Then for all cases we can bound $\mcps^\br_n$ from above as \begin{equation} \label{eq:Q-bound-up-1} \mcps^\br_n \le e^{\br^3 A_n^2 \bar c_n + S_n^2/(2A_n^2)} \frac{\sqrt{2\pi}}{\br A_n}. \end{equation} Without the change of variables, we can also bound the integral $\int_{2/e}^1 g(u)du$, $g(u):=e^{u \br S_n - u^2\br^2 A_n^2/2}$, directly as \[ \int_{2/e}^1 g(u)du \le \max_{2/e\le u\le 1} g(u).
\] Note that \begin{equation} g(2/e)=e^{2 \br (S_n/e - \br A_n^2 /e^2)}, \quad g(1)=e^{ \br S_n-\br^2 A_n^2/2}. \label{eq:g-lower-upper} \end{equation} We now consider the following three cases. \begin{description} \item[Case 1] $ S_n \le 2 \br A_n^2/e$. \ In this case $S_n/(\br A_n^2) \le 2/e$ and by the unimodality of $g(u)$ we have $\max_{2/e\le u \le 1}g(u)= g(2/e)$. Hence \eqref{eq:mcps-positive1-sn} follows from \eqref{eq:g-lower-upper}. \item[Case 2] $2 \br A_n^2/e < S_n < \br A_n^2$. \ In this case $\max_{2/e\le u \le 1} g(u)=g(S_n/(\br A_n^2))=e^{S_n^2/(2A_n^2)}$ and $\mcps^\br_n \le e^{\br^3 A_n^2 \bar c_n} e^{S_n^2/(2A_n^2)}$. Furthermore in this case $S_n^2 < \br A_n^2 S_n$ implies $S_n^2/(2A_n^2) < \br S_n/2$ and we also have \begin{equation} \label{eq:qn1-case2a} \mcps^\br_n \le e^{\br^3 A_n^2 \bar c_n} e^{\br S_n/2}. \end{equation} By \eqref{eq:Q-bound-up-1} and \eqref{eq:qn1-case2a}, we have \eqref{eq:mcps-positive2-sn}. \item[Case 3] $S_n \ge \br A_n^2$. \ Then $S_n/(\br A_n^2)\ge 1$ and $\max_{2/e\le u \le 1} g(u)=g(1)$. Hence \begin{equation} \label{eq:qn1-case1} \mcps^\br_n \le e^{\br^3 A_n^2 \bar c_n} e^{ \br S_n-\br^2 A_n^2/2}. \end{equation} By \eqref{eq:Q-bound-up-1} and \eqref{eq:qn1-case1}, we have \eqref{eq:mcps-positive3-sn}. \end{description} \end{proof} \subsection{Buying a process and selling a process} \label{subsec:bss} Next we consider the following capital process. \begin{equation} \bss_n^{\br} := 2 \mcps_n^\br - \cps_n^{\br e}. \label{eq:bssn-def} \end{equation} This capital process consists of buying two units of $\mcps_n^\br$ and selling one unit of $\cps_n^{\br e}$. This combination of selling and buying is essential in the game-theoretic proof of LIL in Chapter 5 of \cite{ShaferVovk2001Probability} and \cite{MiyabeTakemura2013Law}.
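The processes $\cps^\br_n$ and $\mcps^\br_n$ are easy to simulate, and the induction of Section \ref{subsec:uniform-mixture} can be checked numerically. The following sketch (an illustration under a simple midpoint-rule discretization of the mixing integral, not part of the paper) runs the uniform mixture through the betting recursion $\mcps^\br_n=\mcps^\br_{n-1}+M_nx_n$, compares it with the direct product formula, and then forms the combination $\bss_n^\br=2\mcps_n^\br-\cps_n^{\br e}$:

```python
import math
import random

random.seed(2)
br = 0.003                # base betting proportion (small enough that no account freezes)
xs = [random.choice([-1.0, 1.0]) for _ in range(300)]   # simulated moves of Reality, c_n = 1

# Midpoint-rule discretization of the mixing measure du on [2/e, 1]
m = 400
lo = 2 / math.e
us = [lo + (1 - lo) * (j + 0.5) / m for j in range(m)]
w = (1 - lo) / m

# Betting recursion Q_n = Q_{n-1} + M_n x_n with
# M_n = sum_j w * u_j * br * prod_{i<n} (1 + u_j * br * x_i).
prods = [1.0] * m         # running product for each mixture component u_j
Q = 1 - lo                # Q_0 = alpha = 1 - 2/e
for x in xs:
    M = w * sum(u * br * p for u, p in zip(us, prods))   # Skeptic's total stake
    Q += M * x
    prods = [p * (1 + u * br * x) for u, p in zip(us, prods)]

Q_direct = w * sum(prods)            # direct (discretized) integral formula
assert abs(Q - Q_direct) < 1e-10     # the recursion reproduces the mixture

# Buy two units of the mixture, sell one constant-proportion account at rate br*e
K = math.prod(1 + br * math.e * x for x in xs)
T = 2 * Q - K
```

Since both computations apply the same quadrature weights, they agree up to rounding, mirroring the induction that shows $\mcps^\br_n$ is the capital process of a legitimate strategy.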
However, unlike Chapter 5 of \cite{ShaferVovk2001Probability} and \cite{MiyabeTakemura2013Law}, where a combination of {\em three} capital processes is used, we only combine {\em two} capital processes. We want to bound $\bss_n^{\br}$ from above. \begin{lemma}\label{lem:bss-bound} Let \begin{align} \label{eq:case-i-negative-condition-1} C_1 &:= 2 e^{\br^3 A_n^2 \bar c_n} \exp \left(\frac{(2e-1)((1+e^3)\br^3 A_n^2 \bar c_n + \ln 2)}{(e-1)^2}\right). \end{align} Then for $n < \sigma_{\br e}$, \begin{numcases} {\bss_n^\br \le } C_1 & \text{if}\quad $S_n\le \br A_n^2/e$, \label{eq:bss-positive1-sn}\\ 2 e^{\br^3 A_n^2 \bar c_n}\min \left\{ e^{S_n^2/(2A_n^2)} \frac{\sqrt{2\pi}}{\br A_n}, e^{\br S_n} \right\} & \text{if}\quad $\br A_n^2/e<S_n< e\br A_n^2$, \label{eq:bss-positive2-sn} \\ \label{eq:bss-positive3-sn} C_1 & \text{if}\quad $S_n \ge e \br A_n^2$. \end{numcases} \end{lemma} \begin{remark} \label{rem:2} In this lemma, $C_1$ depends on $\bar c_n$, $\br$ and $A_n$ through $\br^3 A_n^2 \bar c_n$. However, from Section \ref{subsec:Futher discrete mixture of processes} on, we evaluate $\br^3 A_n^2 \bar c_n$ from above by a constant. Hence, $C_1$ can also be taken to be a constant (cf.\ \eqref{eq:C_1-const}) not depending on $\br$ and $A_n$. Also note that the interval for $S_n$ in \eqref{eq:bss-positive2-sn} is larger than the interval in \eqref{eq:mcps-positive2-sn}. \end{remark} \begin{proof} We bound $\bss_n^\br =2\mcps_n^\br - \cps_n^{\br e}$ from above in the following three cases: \[ \text{(i)}\ S_n \le \br A_n^2 /e, \quad \text{(ii)}\ \br A_n^2/e < S_n < e \br A_n^2, \quad \text{(iii)}\ S_n \ge e \br A_n^2. \] \begin{description} \item[Case (i)]\ In this case $S_n/e - \br A_n^2 /e^2\le 0$. Hence \eqref{eq:bss-positive1-sn} follows from \eqref{eq:mcps-positive1-sn} and $\bss_n^\br \le 2 \mcps_n^\br$. \item[Case (ii)] We again use $\bss^\br_n\le 2\mcps^\br_n$.
If $\br A_n^2/e < S_n \le 2 \br A_n^2/e$, then \[ \frac{S_n}{e} - \frac{\br A_n^2 }{e^2}\le \frac{\br A_n^2}{e^2} \le \frac{S_n}{e} \] and $\mcps_n^\br\le e^{\br^3 A_n^2 \bar c_n}e^{2\br S_n/e} \le e^{\br^3 A_n^2 \bar c_n}e^{\br S_n}$ from \eqref{eq:mcps-positive1-sn}. Otherwise \eqref{eq:bss-positive2-sn} follows from \eqref{eq:mcps-positive2-sn} and \eqref{eq:mcps-positive3-sn}. \item[Case (iii)] \ Since $S_n \ge e A_n^2 \br > A_n^2 \br$, by \eqref{eq:qn1-case1} we have $\mcps^\br_n \le e^{\br^3 A_n^2 \bar c_n} e^{ \br S_n-\br^2A_n^2/2}$ and \begin{align*} \bss_n^\br &\le 2\mcps_n^\br - \cps_n^{\br e} \le 2 e^{\br^3 A_n^2 \bar c_n} e^{ \br S_n-\br^2A_n^2/2} - e^{-\br^3 e^3 A_n^2 \bar c_n}e^{\br e S_n - \br^2 e^2 A_n^2/2}\\ &= 2e^{\br^3 A_n^2 \bar c_n} e^{ \br S_n-\br^2A_n^2/2} \left( 1 - \frac{1}{2} e^{-(1+e^3)\br^3A_n^2 \bar c_n}e^{\br (e-1)S_n - (e^2-1)\br^2A_n^2/2}\right). \end{align*} Hence if the right-hand side is non-positive we have $\bss_n^\br \le 0$: \begin{align} &S_n \ge e A_n^2 \br \ \ \text{and}\ -(1+e^3)\br^3 A_n^2 \bar c_n - \ln 2 + \br(e-1)S_n - \frac{1}{2}(e^2-1)\br^2A_n^2 \ge 0 \nonumber\\ & \qquad \qquad \Rightarrow \ \ \bss_n^\br \le 0. \label{eq:case-iii-1} \end{align} Otherwise, write $B_n:=(1+e^3) \br^3 A_n^2 \bar c_n + \ln 2$ and consider the case \begin{align*} \br(e-1)S_n - \frac{1}{2}(e^2-1) \br^2 A_n^2 \le B_n. \end{align*} Dividing this by $e-1$ and also considering $S_n \ge e A_n^2\br$, we have \begin{align} \label{eq:case-iii-1a} \br S_n - \frac{1}{2}(e+1) \br^2A_n^2 &\le \frac{B_n}{e-1},\\ -S_n +eA_n^2 \br &\le 0. \label{eq:case-iii-1b} \end{align} $\br \times \eqref{eq:case-iii-1b} + \eqref{eq:case-iii-1a}$ gives \begin{align*} \frac{1}{2}(e-1) \br^2 A_n^2 \le \frac{B_n}{e-1} \quad \text{or} \quad \frac{1}{2} \br^2 A_n^2 \le \frac{B_n}{(e-1)^2}. 
\end{align*} Then by \eqref{eq:case-iii-1a} \begin{align*} \br S_n - \frac{1}{2}\br^2 A_n^2 \le \frac{B_n}{e-1} + \frac{e}{2} \br^2 A_n^2 \le \frac{B_n}{e-1} + \frac{eB_n}{(e-1)^2}=\frac{(2e-1)B_n}{(e-1)^2}. \end{align*} Hence just using $\bss_n^\br\le 2\mcps_n^\br$ and \eqref{eq:qn1-case1} in this case, we obtain \begin{align} \label{eq:case-iii-conclusion} \bss_n^\br \le 2 e^{\br^3 A_n^2 \bar c_n} \exp\left(\frac{(2e-1)((1+e^3) \br^3A_n^2 \bar c_n + \ln 2 )}{(e-1)^2}\right) = C_1. \end{align} This also covers \eqref{eq:case-iii-1} and we have \eqref{eq:case-iii-conclusion} for the whole case (iii). \end{description} \end{proof} \subsection{Change of time scale and dividing the rounds into cycles} \label{subsec:Change of time scale} For proving the sharpness we consider the change of time scale from $\lambda$ to $k$: \[ \lambda= e^{5k \ln k} = k^{5k}. \] By taking the derivative of $\ln \lambda= 5 k \ln k$, we have $ d\lambda/\lambda = 5(\ln k+1)dk. $ Since $\ln k$ is dominant in $(\ln k+1)$, the integrability condition is written as \begin{equation*} \int_1^\infty \psi(\lambda) e^{-\psi(\lambda)^2/2} \frac{d\lambda}{\lambda} = \infty \ \Leftrightarrow \ \int_1^\infty (\ln k) \psi(e^{5k\ln k}) e^{-\psi(e^{5k\ln k})^2/2} dk = \infty. \end{equation*} Let $f(x):=\psi(e^{5x\ln x}) e^{-\psi(e^{5x\ln x})^2/2}$. Since $xe^{-x^2/2}$ is decreasing for $x\ge 1$, the function $f(x)$ is decreasing for $x$ such that $\psi(e^{5x\ln x})\ge 1$. Thus, for sufficiently large $k$ and $x$ such that $k\le x\le k+1$, we have \[ \frac{1}{2}\ln (k+1) f(k+1) \le \ln k f(x+1)\le \ln x f(x) \le \ln (k+1) f(x)\le 2\ln k f(k). \] Hence, we have \begin{equation*} \int_1^\infty (\ln k) \psi(e^{5k\ln k}) e^{-\psi(e^{5k\ln k})^2/2} dk = \infty \ \ \Leftrightarrow \ \ \sum_{k=1}^\infty (\ln k) \psi(e^{5k\ln k}) e^{-\psi(e^{5k\ln k})^2/2} = \infty . 
\end{equation*} Then, it suffices to show \eqref{eq:sn-forecaster-s} if $\sum_{k=1}^\infty (\ln k) \psi(e^{5k\ln k}) e^{-\psi(e^{5k\ln k})^2/2} = \infty.$ As in Chapter 5 of \cite{ShaferVovk2001Probability} and \cite{MiyabeTakemura2013Law}, we divide the time axis into ``cycles''. However, unlike in Chapter 5 of \cite{ShaferVovk2001Probability} and \cite{MiyabeTakemura2013Law}, our cycles are based on stopping times. Let \begin{equation} \label{eq:nk-sharpness} n_k := k^{5k}, \quad k=1,2,\dots, \end{equation} and define a family of stopping times \begin{equation} \label{eq:stopping-time-cycle} \tau_k:= \min \left\{n \mid A^2_n \ge n_k \right\}. \end{equation} We define the $k$-th cycle by $[\tau_k, \tau_{k+1}]$, $k\ge 1$. Note that $\tau_k$ is finite for all $k$ if and only if $A_n^2 \rightarrow\infty$. The betting strategy for the $k$-th cycle is based on the following betting proportion: \begin{equation} \label{eq:br_k} \br_k := \frac{\psi(n_{k+1})}{\sqrt{n_{k+1}} }k^2. \end{equation} Note that $\br_k$ in \eqref{eq:br_k} is slightly different from \eqref{eq:validity-br}. For the rest of this section, we check the growth of various quantities along the cycles. Let $\omega\in \Omega_C$. For sufficiently large $n$, \begin{equation} \label{eq:xn2bound0} |x_n|\le c_n \le C \frac{A_n}{\psi(A_n^2)^3}. \end{equation} Furthermore $A_n^2 = A_{n-1}^2 + x_n^2$. This allows us to bound $x_n^2$ and $A_n^2$ in terms of $A^2_{n-1}$. Squaring \eqref{eq:xn2bound0} and solving for $x_n^2$ with $A_n^2=A_{n-1}^2+x_n^2$, we have \begin{equation} \label{eq:xn2bound} x_n^2 \le C^2 \frac{A_{n-1}^2}{\psi(A_n^2)^6-C^2} \end{equation} and \begin{equation} \label{eq:An2bound} A_n^2 = A_{n-1}^2 + x_n^2 \le A_{n-1}^2 \left( 1+ \frac{C^2}{\psi(A_n^2)^6-C^2}\right) = A_{n-1}^2 \frac{\psi(A_n^2)^6}{\psi(A_n^2)^6-C^2}. \end{equation} Since $\psi(A_n^2)^6/(\psi(A_n^2)^6-C^2) \rightarrow 1$ as $n\rightarrow \infty$, we have \begin{equation*} \lim_{n\rightarrow\infty} \frac{A_n^2}{A_{n-1}^2} = 1.
\end{equation*} Note that $A_{\tau_k-1}^2 < n_k \le A_{\tau_k}^2$ by the definition of $\tau_k$. Hence for $\omega\in \Omega_C$ we also have \begin{equation} \label{eq:stop-error} \lim_{k\rightarrow\infty}\frac{A_{\tau_k}^2}{n_k} = 1. \end{equation} The limits in the following lemma will be useful for our argument. \begin{lemma} \label{lem:cycle-growth} For $\omega\in \Omega_C$ \begin{equation} \lim_{k \rightarrow \infty} \frac{\uc(n_k)}{\psi(n_{k+1})} =1, \quad \lim_{k \rightarrow \infty} \frac{k^5A_{\tau_k}^2}{n_{k+1}} = e^{-5},\quad \lim_{k \rightarrow \infty} \br_k A_{\tau_k}\psi(n_{k+1})= 0. \quad \end{equation} \end{lemma} \begin{proof} All of $\uc(n_k)$, $\uc(n_{k+1})$, $\lc(n_k)$, $\lc(n_{k+1}), \psi(n_{k+1}), \psi(n_{k+1} /k^4)$ are of the order \begin{equation} \label{eq:order-of-nk} \sqrt{2 \ln \ln e^{5k\ln k}}(1+o(1))=\sqrt{2\ln k}(1+o(1)) \end{equation} as $k\rightarrow\infty$ and the first equality holds by \eqref{eq:uc0}. The second equality holds by \eqref{eq:stop-error} and \begin{align*} \lim_{k\rightarrow\infty} \frac{k^5 n_k}{n_{k+1}}= \lim_{k\rightarrow\infty} \frac{k^{5(k+1)}}{(k+1)^{5(k+1)}} =\lim_{k\rightarrow\infty} \left(1-\frac{1}{k+1}\right)^{5(k+1)} = e^{-5}. \end{align*} Then $A^2_{\tau_k}/n_{k+1}=(1+o(1))n_k/n_{k+1}=O(k^{-5})$ and the third equality holds by \begin{align*} \br_k A_{\tau_k}\psi(n_{k+1}) \le \psi(n_{k+1})^2k^2 ((1+\delta)n_k/n_{k+1})^{1/2} \rightarrow 0 \qquad(k\rightarrow\infty). \end{align*} \end{proof} \subsection{Stopping times for aborting and sequential freezing for each cycle} In \eqref{eq:bssd} of the next section we will introduce another capital process $\bssd_n^{\br_k,k}$, which will be employed in each cycle. Here we introduce some stopping times for aborting the cycle and for sequential freezing of accounts in $\bssd_n^{\br_k,k}$. We say that we {\em abort} the $k$-th cycle, when we freeze all accounts in the $k$-th cycle and wait for the $(k+1)$-st cycle. There are two cases for aborting the $k$-th cycle. 
The first case is when some $c_n$ is too large for $\omega\in \Omega_{C}$. Define \begin{equation} \label{eq:sigma-kC} \sigma_{k,C} := \min \left \{n \ge \tau_k \mid c_n \psi(A^2_{\tau_{k}})^3 > (1+\delta)C A_{n-1}\right \}. \end{equation} We will abort the $k$-th cycle if $\sigma_{k,C} < \tau_{k+1}$. Note that for $\omega \in \Omega_C$, there exists $k_1(\omega)$ such that \begin{equation} \label{eq:simga-kC-infty} \sigma_{k,C}=\infty,\ \ \text{for} \ \ k \ge k_1(\omega). \end{equation} The other case is when $S_n$ is too large. Define \begin{align} \label{eq:nu_k} \nu_k := \min\{n \ge \tau_k \mid A_{n} \psi(A^2_{n}) < S_{n} \}. \end{align} If $\nu_k < \tau_{k+1}$, then Skeptic is happy to abort the $k$-th cycle, because he wants to force $S_n \ge A_{n} \psi(A^2_{n})\ i.o.$ The above two stopping times will be used in the final construction of a dynamic strategy in Section \ref{subsec:skepticforcesharpness}. For each cycle, we define another family of stopping times indexed by $w=1,\ldots, \lceil \ln k\rceil$, by \begin{align} \label{eq:tau-k-w} \tau_{k,w}:= \min \left\{n \mid A^2_n \ge e^{2(w+2)} \frac{n_{k+1}}{k^4}\right\}, \end{align} for sequential freezing of accounts of $\bssd_n^{\br_k,k}$ in \eqref{eq:bssd}. We have $\tau_k \le \tau_{k,w}$ for $k \ge 1$ and $w\ge 1$, because \[ \frac{n_{k+1}}{k^4} = \frac{(k+1)^{5(k+1)}}{k^4} > k^{5k}=n_k. \] \begin{lemma} \label{lemm:all-accounts-stop} Let $\omega \in \Omega_C$. Then $\tau_{k,\lceil \ln k \rceil} \le \tau_{k+1}$ for sufficiently large $k$.
\end{lemma} \begin{proof} By $ A^2_{\tau_{k,w}-1} \le e^{2(w+2)}n_{k+1} / k^4$ and by \eqref{eq:xn2bound}, for sufficiently large $k$ we have \begin{align*} x_{\tau_{k,w}}^2 \le (1+\delta)C^2 \frac{A^2_{\tau_{k,w}-1} }{\psi(A^2_{\tau_{k}})^6} \le \frac{(1+\delta) C^2 }{\psi(A^2_{\tau_{k}})^6} \times \frac{e^{2(w+2)} n_{k+1}}{k^4} \end{align*} and \begin{align} \label{eq:A_n-stop-above2} A_{\tau_{k,w}}^2 \le A^2_{\tau_{k,w}-1} + x^2_{\tau_{k,w}} &\le (1+\delta) e^{2(w+2)} \frac{n_{k+1}}{k^4}. \end{align} Then, using $\lceil \ln k \rceil \le \ln k + 1$, \begin{align*} A^2_{\tau_{k, \lceil \ln k \rceil } } \le (1+\delta) e^{2(\lceil \ln k \rceil+2)} \frac{n_{k+1}}{k^4} \le (1+\delta) e^{2(\ln k+3)} \frac{n_{k+1}}{k^4} = (1+\delta) e^6 \frac{n_{k+1}}{k^2} \le n_{k+1} \le A^2_{\tau_{k+1}}. \end{align*} \end{proof} We also compare $\tau_{k,w}$ to $\sigma_{\br_k e^{-w+1}}$ defined in \eqref{eq:stop}. This is needed for applying the bounds derived in previous sections to $\bssd_n^{\br_k,k}$ in the next section. \begin{lemma} \label{lemm:sigma_br-sigma_c} Let $\omega\in \Omega_C$. Then $\tau_{k,w}\le \sigma_{\br_k e^{-w+1}}$ for sufficiently large $k$. \end{lemma} \begin{proof} By \eqref{eq:A_n-stop-above2} and by Lemma \ref{lem:bar_c_n_bound}, for sufficiently large $k$ \begin{align*} \br_{k}e^{-w+1} \bar c_{\tau_{k,w}} \le \frac{\psi(n_{k+1})}{\sqrt{n_{k+1}}}k^2 e^{-w+1}\times (1+\delta)^2 C \frac{e^{w+2} \sqrt{n_{k+1}} }{k^2 \psi(A^2_{\tau_k})^3} \le (1+\delta)^2 C e^3 \frac{\psi(n_{k+1})}{\psi(A^2_{\tau_k})^3} \le \delta, \end{align*} because $\psi(n_{k+1})/\psi(A^2_{\tau_k})^3 \rightarrow 0$ as $k\rightarrow\infty$ by \eqref{eq:order-of-nk}. \end{proof} \subsection{Further discrete mixture of processes for each cycle with sequential freezing} \label{subsec:Futher discrete mixture of processes} We introduce another discrete mixture of capital processes for the $k$-th cycle.
Define \begin{align} \label{eq:bssd} \bssd_n^{\br_k,k}:= \frac{1}{\lceil \ln k \rceil}\sum_{w=1}^{\lceil \ln k \rceil} \bss_{\min(n,\tau_{k,w})}^{\br_k e^{-w}} = \frac{1}{\lceil \ln k \rceil}\sum_{w=1}^{\lceil \ln k \rceil} (2\mcps_{\min(n,\tau_{k,w})}^{\br_k e^{-w}} - \cps_{\min(n,\tau_{k,w})}^{\br_k e^{-w+1}}). \end{align} Note that the $w$-th account in the sum of $\bssd_n^{\br_k,k}$ is frozen at the stopping time $\tau_{k,w}$. This is needed since the bound for $c_n$ grows even during the $k$-th cycle. In order to bound $\bssd_n^{\br_k,k}$, we first bound $C_1$ in \eqref{eq:case-i-negative-condition-1} for each $w$ in the sum of \eqref{eq:bssd} by a constant independent of $n$. Note that we only need to consider $n\le \tau_{k,w}$ for the $w$-th account. \begin{lemma} \label{lem:C_1-const} Let $\omega \in \Omega_C$. Then $(\br_k e^{-w})^3 A_n^2 \bar c_n$ and hence $C_1$ are bounded from above by \begin{align} \label{eq:remainder-const} (\br_k e^{-w})^3 A_n^2 \bar c_n &\le (1+\delta)^5 C e^6,\\ \label{eq:C_1-const} C_1 &\le 2 e^{(1+\delta)^5Ce^6} \exp\left(\frac{(2e-1)((1+\delta)^5Ce^6(1+e^3)+ \ln 2)}{(e-1)^2}\right)=:\bar C_1, \end{align} for sufficiently large $k$. \end{lemma} \begin{proof} By \eqref{eq:order-of-nk}, for sufficiently large $k$ \begin{align} \label{eq:psi-ratio} \frac{\psi(n_{k+1})}{\psi(A^2_{\tau_{k,w}})} \le \frac{\psi(n_{k+1})}{\psi(n_k)} \le 1+\delta. \end{align} Thus \begin{align*} \br_k^3 e^{-3w} A_{\min(n,\tau_{k,w})}^2 \bar c_{\min(n,\tau_{k,w})} &\le \br_k^3 e^{-3w} \times A_{\tau_{k,w}}^2 \times \bar c_{\min(n,\tau_{k,w})} \\ &\le \frac{\psi(n_{k+1})^3}{n_{k+1}^{3/2}}k^6 e^{-3w} \times A^2_{\tau_{k,w}} \times (1+\delta)C \frac{A_{\tau_{k,w}}}{\psi(A^2_{\tau_k})^3} \\ &\le (1+\delta)C \frac{\psi(n_{k+1})^3}{\psi(A^2_{\tau_k})^3} k^6e^{-3w} \frac{A^3_{\tau_{k,w}}}{n^{3/2}_{k+1}} \le (1+\delta)^5Ce^{6}. \end{align*} \end{proof} \begin{lemma} \label{lem:sn-bssd-bound} Let $\omega \in \Omega_C$.
For sufficiently large $k$, \begin{align} \label{eq:sn-bssd-bound} \bssd_n^{\br_k,k} \le \bar C_1 + \frac{2}{\lceil \ln k \rceil} e^{(1+\delta)^5 C e^6} \max_{\br\in [\br_k/k,\br_k]} \left(\min \{ e^{S_n^2/(2n)} \frac{\sqrt{2\pi}}{\br A_n}, e^{\br S_n} \}\right), \quad n \in [\tau_k,\tau_{k+1}], \end{align} where $\bar C_1$ is given by the right-hand side of \eqref{eq:C_1-const}. \end{lemma} \begin{proof} We have $|\br_k e^{-w} \bar c_{\min(n,\tau_{k,w})}| \le |\br_k e^{-w+1} \bar c_{\min(n,\tau_{k,w})}| \le \delta $ by Lemma \ref{lemm:sigma_br-sigma_c}. Then we can complete the proof of \eqref{eq:sn-bssd-bound} by Lemma \ref{lem:bss-bound} and Lemma \ref{lemm:sigma_br-sigma_c} because the length of the interval \begin{align*} \left \{ w \mid \frac{S_n}{ne} < \br e^{-w} < \frac{S_n e}{n}\right \} \end{align*} is equal to $2$. \end{proof} As in Chapter 5 of Shafer and Vovk \cite{ShaferVovk2001Probability}, we use $\bssd_n^{\br_k,k}$ in the following form. \begin{align} \label{eq:dynp} \dynp_{n}^{\br_k,D} :=\alpha + \frac{1}{D} \lceil \ln k \rceil \psi(n_{k+1}) e^{-\psi(n_{k+1})^2/2} (\alpha-\bssd_{n-\tau_k}^{\br_k,k}),\quad \alpha = 1-\frac{2}{e}, \,\, D=\frac{24\sqrt{2\pi}e^{(1+\delta)^5e^6C}+4\bar C_1}{\alpha}. \end{align} Here we give a specific value of $D$ for definiteness, but from the proof below it will be clear that any sufficiently large $D$ can be used. Since the strategy for $\bssd_{n-\tau_k}^{\br_k,k}$ is applied only to $x_n$'s in the cycle, $\alpha=\dynp_{\tau_k}^{\br_k,D}=\bssd_{0}^{\br_k,k}$. Concerning $\dynp_{n}^{\br_k,D}$ we prove the following two propositions. \begin{proposition} \label{prop:dynp} Let $\omega \in \Omega_C$. Suppose that \begin{align} \label{eq:within-range} -A_n \uc(A_n^2) \le S_n \le A_n \psi(A_n^2), \qquad \forall n\in [\tau_k,\tau_{k+1}], \end{align} and $\tau_{k+1} < \sigma_{k,C}$.
Then for sufficiently large $k$ \begin{align} \label{eq:nkn-lower-bound} \dynp_n^{\br_k,D} \ge \frac{\alpha}{2}, \qquad \forall n\in [\tau_k,\tau_{k+1}], \end{align} and \begin{align} \label{eq:nk+1} \dynp_{\tau_{k+1}}^{\br_k,D} \ge \alpha \left( 1 + \frac{1-\delta}{D} \lceil \ln k \rceil \psi(n_{k+1}) e^{-\psi(n_{k+1})^2/2} \right). \end{align} \end{proposition} \begin{proof} In our proof we denote $t=n-\tau_k$, $S_t = S_n - S_{\tau_k}$ and $A_t^2 = A_n^2-A^2_{\tau_k}$ for $n>\tau_k$. For proving \eqref{eq:nkn-lower-bound}, we use \eqref{eq:sn-bssd-bound} for $S_t$. We bound $\bssd_{t}^{\br_k,k}$ from above. By the term $\dfrac{2}{\lceil \ln k \rceil}$ on the right-hand side of \eqref{eq:sn-bssd-bound}, it suffices to show \begin{align*} &S_t \le A_{\tau_k} \uc(A^2_{\tau_k}) + \sqrt{A^2_{\tau_k} + A_t^2} \psi(A^2_{\tau_k}+ A_t^2) \\ & \quad \Rightarrow \ \psi(n_{k+1}) e^{-\psi(n_{k+1})^2/2} 2e^{(1+\delta)^5e^6C} \min \{ e^{S_t^2/(2A_t^2)} \frac{\sqrt{2\pi}}{\br A_t}, e^{\br S_t} \}\le \frac{D\alpha}{4}, \,\forall \br \in [\br_k/k, \br_k],\,\forall t\in [0,\tau_{k+1}-\tau_k] \end{align*} for sufficiently large $k$. Let \begin{align} \label{eq:c1-const} c_1 = \frac{9}{(1+2\delta)^2} \quad \text{so that} \quad \frac{1}{2} - \frac{1}{\sqrt{c_1}} - \delta > 0. \end{align} We distinguish two cases: \[ \text{(a)} \ A_t^2\le \frac{\psi(n_{k+1})^2}{c_1 \br^2}, \quad \text{(b)}\ \frac{\psi(n_{k+1})^2}{c_1 \br^2} < A_t^2 \le A^2_{\tau_{k+1}}-A^2_{\tau_k}. \] For case (a), $A_{\tau_k} \uc(A_{\tau_k}^2) \le (1+\delta) A_{\tau_k}\psi(n_{k+1})$ by the first limit in Lemma \ref{lem:cycle-growth} for sufficiently large $k$. Also $\psi(A^2_{\tau_k}+A_t^2)\le \psi(n_{k+1})$. Hence in this case \begin{align*} \br S_t \le \left((1+\delta) \br A_{\tau_k} + \sqrt{\br^2 A^2_{\tau_k} +\psi(n_{k+1})^2/c_1 } \right)\psi(n_{k+1}).
\end{align*} Then for $\br \le \br_k$ by the third limit in Lemma \ref{lem:cycle-growth} \begin{align} \label{eq:another-large-k} \br S_t \le \left((1+\delta)\br_k A_{\tau_k}+ \sqrt{\br_k^2 A^2_{\tau_k} + \psi(n_{k+1})^2/c_1}\right)\psi(n_{k+1}) \le \psi(n_{k+1})^2 \left(\frac{1}{\sqrt{c_1}}+\delta\right) \end{align} for sufficiently large $k$. Since \begin{align*} \psi(n_{k+1}) e^{-\psi(n_{k+1})^2/2} 2 e^{(1+\delta)^5e^6C} e^{\br S_t} \le \psi(n_{k+1}) \exp\left(-\psi(n_{k+1})^2\big(\frac{1}{2} - \frac{1}{\sqrt{c_1}}-\delta\big)\right)2 e^{(1+\delta)^5e^6C} \rightarrow 0\quad (k \rightarrow \infty), \end{align*} we have $\dynp_n^{\br_k,D}\ge \alpha / 2$ uniformly in $\br\in [\br_k/k,\br_k]$. For case (b), $\psi(n_{k+1})/\sqrt{c_1} < \br A_t$ and $S_t \le \left( (1+\delta)A_{\tau_k} + \sqrt{A^2_{\tau_k} + A_t^2} \right) \psi(n_{k+1})$. Hence \begin{align} &\psi(n_{k+1}) e^{-\psi(n_{k+1})^2/2} \times 2 e^{(1+\delta)^5e^6C}e^{S_t^2/(2A_t^2)} \frac{\sqrt{2\pi}}{\br A_t} \nonumber \\ &\le \psi(n_{k+1}) e^{-\psi(n_{k+1})^2/2} \times \frac{2e^{(1+\delta)^5e^6C}\sqrt{2\pi}\sqrt{c_1}}{\psi(n_{k+1})} \exp\left(\frac{\left((1+\delta)A_{\tau_k}+ \sqrt{A^2_{\tau_k} + A_t^2}\right)^2}{2A_t^2}\psi(n_{k+1})^2\right) \nonumber \\ &= 2e^{(1+\delta)^5e^6C}\sqrt{2\pi}\sqrt{c_1} \exp\left( \frac{(1+(1+\delta)^2)A_{\tau_k}^2 + 2(1+\delta) A_{\tau_k}\sqrt{A^2_{\tau_k}+A_t^2}}{2A_t^2} \psi(n_{k+1})^2\right). \label{eq:case-b-constant-bound} \end{align} For $\br\le \br_k$, \[ \frac{\psi(n_{k+1})^2}{c_1 \br^2} < A_t^2 \ \Rightarrow \ \frac{A^2_{\tau_k}}{A_t^2} \psi(n_{k+1})^2 < c_1 \br^2 A^2_{\tau_k} \le c_1 \br_k^2 A^2_{\tau_k} = c_1 \frac{A^2_{\tau_k}}{n_{k+1}}k^4\psi(n_{k+1})^2 =O(k^{-1}\ln k). \] Hence $\psi(n_{k+1})^2 A^2_{\tau_k}/A_t^2 \rightarrow 0$ as $k\rightarrow \infty$. Similarly $\psi(n_{k+1})^2 A_{\tau_k} / A_t \rightarrow 0$ as $k\rightarrow\infty$, because $\psi(n_{k+1})^2 A_{\tau_k} / A_t=O(k^{-1/2}(\ln k)^{3/2})$.
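These order claims are easy to check numerically. The following is a minimal sketch in Python (illustrative only, not part of the proof); it assumes the classical boundary function $\psi(\lambda)=\sqrt{2\ln\ln\lambda}$ and the approximation $A^2_{\tau_k}\approx n_k$ from \eqref{eq:stop-error}, and it works in log space since $n_k=k^{5k}$ overflows floating point.

```python
import math

# Numeric sanity check (illustrative, outside the formal proof) of the order
# claim psi(n_{k+1})^2 A_{tau_k}^2 / A_t^2 = O(k^{-1} ln k), via the bound
# (A_{tau_k}^2 / n_{k+1}) k^4 psi(n_{k+1})^2, assuming the classical boundary
# psi(lambda) = sqrt(2 ln ln lambda) and A_{tau_k}^2 ~ n_k (eq:stop-error).
# n_k = k^{5k} is handled in log space to avoid overflow.

def order_term(k: int) -> float:
    log_nk_ratio = 5 * k * math.log(k) - 5 * (k + 1) * math.log(k + 1)  # ln(n_k / n_{k+1})
    psi_sq = 2 * math.log(5 * (k + 1) * math.log(k + 1))  # psi(n_{k+1})^2 = 2 ln ln n_{k+1}
    return math.exp(log_nk_ratio) * k**4 * psi_sq

for k in (100, 1000, 10000):
    print(k, order_term(k), order_term(k) * k / math.log(k))

# order_term(k) decays to 0 while order_term(k) * k / ln k stays bounded,
# consistent with the stated O(k^{-1} ln k) order.
assert order_term(10000) < order_term(1000) < order_term(100)
assert order_term(10000) * 10000 / math.log(10000) < 1.0
```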
Therefore the right-hand side of \eqref{eq:case-b-constant-bound} is bounded from above by $2e^{(1+\delta)^5e^6C}\sqrt{2\pi}\sqrt{c_1}(1+\delta)$ for sufficiently large $k$ and \begin{align*} \psi(n_{k+1}) e^{-\psi(n_{k+1})^2/2} \times 2 e^{(1+\delta)^5e^6C}e^{S_t^2/(2A_t^2)} \frac{\sqrt{2\pi}}{\br A_t} \le \frac{D\alpha}{4}, \end{align*} with the choice of $D$ in \eqref{eq:dynp} and $c_1$ in \eqref{eq:c1-const}. This proves \eqref{eq:nkn-lower-bound}. Now we prove \eqref{eq:nk+1}. We focus on the $w$-th account when $n \ge \tau_{k,w}$. Recall that in this proof we have been denoting $A_t^2 = A_n^2 - A_{\tau_k}^2$. Similarly, we write $A^2_{\tau_{k,w}}$ for $A^2_{\tau_{k,w}}-A^2_{\tau_k}$. Thus \begin{align} \label{eq:An-dy-above} e^{2(w+2)} \frac{n_{k+1}}{k^4} -A^2_{\tau_k}\le A^2_{\tau_{k,w}}. \end{align} We will show that $\limsup_{k\rightarrow\infty} \bssd_{\tau_{k+1}-\tau_k}^{\br_k,k} \le 0$, if \begin{align} \label{eq:sn-dy-above} S_{\tau_{k,w}} \le A_{\tau_k} \psi(A^2_{\tau_k}) + A_{\tau_{k,w}} \psi(A^2_{\tau_{k,w}}) \le \psi(n_{k+1}) \left\{A_{\tau_k} + A_{\tau_{k,w}}\right\} \le 2 \psi(n_{k+1}) A_{\tau_{k,w}}. \end{align} We evaluate \[ \mcps_{\tau_{k,w}}^{\br_ke^{-w},k}:=\int_{2/e}^1 \exp \left(u\br_k e^{-w} S_{\tau_{k,w}}-u^2 \br_k^2 e^{-2w}A^2_{\tau_{k,w}}/2\right)du \] from above. Because $u\br_k e^{-w} S_{\tau_{k,w}}-u^2 \br_k^2 e^{-2w} A^2_{\tau_{k,w}}/2$ is maximized at $u= S_{\tau_{k,w}}/(\br_k e^{-w} A^2_{\tau_{k,w}})$ and \begin{align*} \frac{S_{\tau_{k,w}}}{\br_k e^{-w} A^2_{\tau_{k,w}}} \le \frac{2 \psi(n_{k+1})A_{\tau_{k,w}}} { (\psi(n_{k+1}) k^2/\sqrt{n_{k+1}}) e^{-w} A^2_{\tau_{k,w}}} \le \frac{2\sqrt{n_{k+1}}}{k^2 e^{-w} A_{\tau_{k,w}}} \le \frac{2}{e^2}\le \frac{2}{e}, \end{align*} the integrand in $\mcps_{\tau_{k,w}}^{\br_ke^{-w},k}$ is maximized at $2/e$ and we have \begin{align*} \mcps_{\tau_{k,w}}^{\br_ke^{-w},k} &\le \exp \left(\frac{2}{e}\br_k e^{-w} S_{\tau_{k,w}}-\frac{2 \br_k^2 e^{-2w}A^2_{\tau_{k,w}}}{e^2}\right).
\end{align*} By \eqref{eq:An-dy-above} and \eqref{eq:sn-dy-above}, for sufficiently large $k$, \begin{align*} \frac{2}{e}\br_k e^{-w} S_{\tau_{k,w}}-\frac{2 \br_k^2 e^{-2w}A^2_{\tau_{k,w}}}{e^2} &\le \frac{4 \br_k \psi(n_{k+1})A_{\tau_{k,w}}}{ e^{w+1}} - \frac{2\br_k^2 A^2_{\tau_{k,w}}}{e^{2(w+1)}}\nonumber \\ &= \frac{\psi(n_{k+1})^2k^2A_{\tau_{k,w}}}{\sqrt{n_{k+1}}e^w} \left( \frac{4}{e} - \frac{2k^2A_{\tau_{k,w}}}{e^2\sqrt{n_{k+1}}e^w}\right)\nonumber \\ &\le \frac{\psi(n_{k+1})^2k^2A_{\tau_{k,w}}}{\sqrt{n_{k+1}}e^w} \left( \frac{4}{e} - \frac{2}{e^2}\sqrt{e^4-\frac{(1+\delta)k^4n_k}{n_{k+1}e^{2w}}}\right) \\ &\le -\psi(n_{k+1})^2 \frac{k^2}{\sqrt{n_{k+1}}e^w} \times \frac{\sqrt{n_{k+1}}e^{w+2}}{k^2} \times \frac{1}{2} \qquad \nonumber \\ &= - \frac{e^2\psi(n_{k+1})^2}{2}. \end{align*} The last inequality holds because $\lim_{k\rightarrow\infty}k^4 n_k/n_{k+1} =0$ and $4/e - 2 < -1/2$. Hence $\mcps_{\tau_{k,w}}^{\br_k e^{-w},k} \rightarrow 0$ uniformly in $1\le w \le \lceil \ln k \rceil$. This implies $\limsup_{k\rightarrow\infty} \bssd_{\tau_{k+1}-\tau_k}^{\br_k,k} \le 0$. \end{proof} \begin{proposition} \label{prop:dynp2} Let $\omega \in \Omega_C$. Suppose that $\nu_k \le \min(\tau_{k+1}, \sigma_{k,C})$ and \[ -A_{n} \uc(A_{n}^2) \le S_{n}, \,\, \forall n\in [\tau_k,\nu_k]. \] Then for sufficiently large $k$ \begin{align*} \dynp_{\nu_k}^{\br_k,D} \ge \frac{\alpha}{2} . \end{align*} \end{proposition} \begin{proof} As in the proof of the previous lemma, we denote $t= n-\tau_k$, $S_t = S_n - S_{\tau_k}$ and $A_t^2 = A_n^2-A^2_{\tau_k}$. We distinguish two cases: \[ \text{(a)} \ A_{\nu_k}^2\le \frac{\psi(n_{k+1})^2}{c_1 \br^2}, \quad \text{(b)}\ \frac{\psi(n_{k+1})^2}{c_1 \br^2} < A_{\nu_k}^2 \le A^2_{\tau_{k+1}}-A^2_{\tau_k}. 
\] For case (a), for sufficiently large $k$ and for any $\br \le \br_k$, as in \eqref{eq:another-large-k}, \begin{align*} \br S_{\nu_k} &\le \br \left(S_{\nu_k-1}+c_{\nu_k} \right) \le \br \left(\left((1+\delta) A_{\tau_k} + \sqrt{A^2_{\tau_k} + A^2_{\nu_k -1 }} \right)\psi(n_{k+1})+(1+\delta)C\frac{\sqrt{A^2_{\tau_k}+A^2_{\nu_k-1}}}{\psi(A^2_{\tau_k})^3} \right)\\ &\le \psi(n_{k+1})^2\left(\frac{1}{\sqrt{c_1}}+\delta\right) \end{align*} and \begin{align*} \psi(n_{k+1}) e^{-\psi(n_{k+1})^2/2} 2 e^{(1+\delta)^5 e^6C} e^{\br S_{\nu_k}} \rightarrow 0\quad (k \rightarrow \infty). \end{align*} Hence $\dynp_{\nu_k}^{\br_k,D} \ge \alpha/2$ uniformly in $\br\in [\br_k/k,\br_k]$. For case (b), $S_{\nu_k}$ can be evaluated as \begin{align*} S_{\nu_k} &\le S_{\nu_k-1}+c_{\nu_k} \le S_{\nu_k-1}+(1+\delta)C\frac{\sqrt{A_{\tau_k}^2 + A_{\nu_k-1}^2}}{\psi(A^2_{\tau_k})^3}\\ \nonumber &\le \left((1+\delta)A_{\tau_k} + \sqrt{A^2_{\tau_k} + A_{\nu_k}^2}\right) \psi(n_{k+1}) +(1+\delta)C\frac{\sqrt{A_{\tau_k}^2 + A_{\nu_k}^2}}{\psi(A^2_{\tau_k})^3}\\ \nonumber & \le \left((1+\delta)A_{\tau_k} + \sqrt{A^2_{\tau_k} + A_{\nu_k}^2}\left(1+\frac{(1+\delta)C}{\psi(A^2_{\tau_k})^3\psi(n_{k+1})} \right) \right) \psi(n_{k+1}) \end{align*} by \eqref{eq:psi-ratio}. Put \[ q_k^2 := \frac{A^2_{\tau_k}}{ A_{\nu_k}^2} \le \frac{c_1\br_k^2}{\psi(n_{k+1})^2}, \qquad s_k: = \frac{(1+\delta)C}{\psi(A^2_{\tau_k})^3\psi(n_{k+1})}, \] so that $\lim_k q_k \psi(n_{k+1})^2=0$ and $\lim_k s_k\psi(n_{k+1})^2=0$. Then for sufficiently large $k$ \begin{align*} \frac{S_{\nu_k}^2}{2A_{\nu_k}^2} &\le \left(( 1+\delta)^2 \frac{q_k^2}{2} + (1+\delta)(1+s_k) q_{k} \sqrt{1+q^2_{k} } + (1+ s_k)^2\left(\frac{1}{2} + \frac{q^2_{k} }{2}\right) \right) \psi(n_{k+1})^2\nonumber \\ &\le \frac{\psi(n_{k+1})^2}{2} + \delta . 
\end{align*} Then \begin{align*} \psi(n_{k+1}) e^{-\psi(n_{k+1})^2/2} \times 2 e^{(1+\delta)^5 e^6C}e^{S_{\nu_k}^2/(2A_{\nu_k}^2)} \frac{\sqrt{2\pi}}{\br A_{\nu_k}} \le 2 e^{(1+\delta)^5 e^6C+ \delta} \sqrt{2\pi c_1} e^\delta \le \frac{D\alpha }{4}. \end{align*} \end{proof} \subsection{Dynamic strategy forcing the sharpness} \label{subsec:skepticforcesharpness} Finally, we prove Proposition \ref{th:self-normalized-efkp-lil-dash}. We assume that by the validity result, Skeptic already employs a strategy forcing $S_n \ge -A_n\uc(A_n^2)\ a.a.$ for $\omega\in\Omega_{C}$. In addition to this strategy, based on Proposition \ref{prop:dynp}, consider the following strategy. \begin{quote} Start with initial capital $\cps_0=\alpha$.\\ Set $k=1$.\\ Do the followings repeatedly:\\ \indent 1. Apply the strategy in Proposition \ref{prop:dynp} for $n\in [\tau_k, \tau_{k+1}]$. \\ \indent \quad If $\tau_{k+1} < \min(\sigma_{k,C} , \nu_k)$, then go to 2. Otherwise go to 3.\\ \indent 2. Let $k=k+1$. Go to 1.\\ \indent 3. Wait until $\exists k'$ such that $-\sqrt{\tau_{k'}} \uc(\tau_{k'}) \le S_{\tau_{k'}} \le \sqrt{\tau_{k'}}\psi(\tau_{k'})$. Set $k=k'$ and go to 1. \end{quote} By this strategy Skeptic keeps his capital non-negative for every path $\omega$. For $\omega\in \Omega_0$, $\tau_k=\infty$ for some $k$ and Skeptic stays in Step 1 forever. For $\omega\in \Omega_{=\infty}$, Step 3 is performed infinite number of times, but the overshoot of $|x_n|$ in Step 3 does not make Skeptic bankrupt by Proposition \ref{prop:dynp2}. Now consider $\omega\in \Omega_{C}$. Since Skeptic already employs a strategy forcing $S_n \ge -A_n \uc(A^2_n)\ a.a.$, the lower bound in \eqref{eq:within-range} violated only finite number of times. By $\omega \in \Omega_{C}$, $n \ge \sigma_{k,C}$ is happens only finite number of times. Hence if $S_n \le A_n\psi(A^2_n) \ a.a.$, then Step 3 is performed only finite number of times and there exists $k_0$ such that only Step 2 is repeated for all $k\ge k_0$. 
Now for each iteration of Step 2, Skeptic multiplies his capital at least by \begin{align*} 1 + \frac{1-\delta}{D} \lceil \ln k \rceil \psi(n_{k+1}) e^{-\psi(n_{k+1})^2/2}. \end{align*} Then \begin{align*} \frac{1-\delta}{D} \sum_{k=k_0}^\infty \lceil \ln k \rceil \psi(n_{k+1}) e^{-\psi(n_{k+1})^2/2} \le \prod_{k=k_0}^\infty \left( 1 + \frac{1-\delta}{D} \lceil \ln k \rceil\psi(n_{k+1}) e^{-\psi(n_{k+1})^2/2}\right). \end{align*} Since the left-hand side diverges to infinity by the assumed divergence of $\sum_{k} (\ln k) \psi(e^{5k\ln k}) e^{-\psi(e^{5k\ln k})^2/2}$, the above strategy forces the sharpness. \bibliographystyle{abbrv}